Trees, bialgebras and intrinsic numerical algorithms
NASA Technical Reports Server (NTRS)
Crouch, Peter; Grossman, Robert; Larson, Richard
1990-01-01
Preliminary work on intrinsic numerical integrators evolving on groups is described. Fix a finite-dimensional Lie group G; let g denote its Lie algebra, and let Y_1, ..., Y_N denote a basis of g. A class of numerical algorithms is presented that approximate solutions to differential equations evolving on G of the form x'(t) = F(x(t)), x(0) = p is an element of G. The algorithms depend upon constants c_i and c_ij, for i = 1, ..., k and j < i. The algorithms have the property that if the algorithm starts on the group, then it remains on the group. In addition, if G is the abelian group R^N, the algorithm reduces to the classical Runge-Kutta algorithm. The Cayley algebra generated by labeled, ordered trees is used to generate the equations that the coefficients c_i and c_ij must satisfy in order for the algorithm to yield an rth-order numerical integrator, and to analyze the resulting algorithms.
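A sketch of the classical explicit Runge-Kutta scheme to which the group algorithm reduces when G is the abelian group R^N: the stage couplings a[i][j] play the role of the abstract's c_ij (j < i) and the weights b[i] the role of the c_i. The scalar case is shown for brevity; the tableau values are the standard classical fourth-order ones, not taken from the paper.

```python
import math

# One explicit Runge-Kutta step for dx/dt = F(x), driven by a strictly
# lower-triangular stage-coupling table a and weight vector b.
def rk_step(F, x, h, a, b):
    stages = []
    for i in range(len(b)):
        xi = x + h * sum(a[i][j] * stages[j] for j in range(i))
        stages.append(F(xi))
    return x + h * sum(bi * ki for bi, ki in zip(b, stages))

# Classical 4th-order tableau.
A4 = [[0.0, 0.0, 0.0, 0.0],
      [0.5, 0.0, 0.0, 0.0],
      [0.0, 0.5, 0.0, 0.0],
      [0.0, 0.0, 1.0, 0.0]]
B4 = [1.0 / 6.0, 1.0 / 3.0, 1.0 / 3.0, 1.0 / 6.0]

def integrate(F, x0, h, steps):
    x = x0
    for _ in range(steps):
        x = rk_step(F, x, h, A4, B4)
    return x

x_end = integrate(lambda x: -x, 1.0, 0.1, 10)   # dx/dt = -x, so x(1) = e**-1
```

On R^N the "stay on the group" property is automatic; the interest of the paper's construction is that the same coefficient data defines an integrator on a general Lie group.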
NASA Astrophysics Data System (ADS)
LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia-Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan-Wen; Millis, Andrew J.; Prokof'ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo-Xiao; Zhu, Zhenyue; Gull, Emanuel; Simons Collaboration on the Many-Electron Problem
2015-10-01
Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.
Numerical Algorithms Based on Biorthogonal Wavelets
NASA Technical Reports Server (NTRS)
Ponenti, Pj.; Liandrat, J.
1996-01-01
Wavelet bases are used to generate spaces of approximation for the resolution of bidimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multiresolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided, as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.
Stochastic Formal Correctness of Numerical Algorithms
NASA Technical Reports Server (NTRS)
Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick
2009-01-01
We provide a framework to bound the probability that accumulated errors were never above a given threshold in numerical algorithms. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Lévy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit in our framework and cover the common practices of systems that evolve for a long time. We compute the number of bits that remain continuously significant in the first two applications, with a probability of failure around one out of a billion, where worst-case analysis considers that no significant bit remains. We use PVS, as such formal tools force explicit statement of all hypotheses and prevent incorrect uses of theorems.
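The flavor of bound involved can be sketched with Chebyshev's inequality (a Markov-type bound). The error model below is an illustrative assumption, not the paper's formal development: n independent, zero-mean per-step rounding errors, each at most one ulp u in magnitude, hence per-step variance at most u**2 / 3.

```python
# Upper bound on P(|accumulated error| >= threshold) after n steps, under
# the independent zero-mean per-step error model described above.
def error_exceedance_bound(n, u, threshold):
    var_total = n * u * u / 3.0          # variances add under independence
    return min(1.0, var_total / threshold ** 2)

# Example: 10**9 operations in double precision (u = 2**-53); probability
# that the total error exceeds 2**-20.
p = error_exceedance_bound(10 ** 9, 2.0 ** -53, 2.0 ** -20)
```

The point of the comparison with worst-case analysis is visible here: the worst-case accumulated error after 10^9 steps can exceed the threshold, yet the probabilistic bound is vanishingly small.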
Numerical taxonomy on data: Experimental results
Cohen, J.; Farach, M.
1997-12-01
The numerical taxonomy problems associated with most of the optimization criteria described above are NP-hard [3, 5, 1, 4]. The first positive result for numerical taxonomy was presented in prior work, whose authors showed that if e is the distance to the closest tree metric under the L∞ norm, i.e., e = min_T L∞(T - D), then it is possible to construct a tree T such that L∞(T - D) ≤ 3e; that is, they gave a 3-approximation algorithm for this problem. We will refer to this algorithm as the Single Pivot (SP) heuristic.
A Numerical Instability in an ADI Algorithm for Gyrokinetics
E.A. Belli; G.W. Hammett
2004-12-17
We explore the implementation of an Alternating Direction Implicit (ADI) algorithm for a gyrokinetic plasma problem and its resulting numerical stability properties. This algorithm, which uses a standard ADI scheme to divide the field solve from the particle distribution function advance, has previously been found to work well for certain plasma kinetic problems involving one spatial and two velocity dimensions, including collisions and an electric field. However, for the gyrokinetic problem we find a severe stability restriction on the time step. Furthermore, we find that this numerical instability limitation also affects some other algorithms, such as a partially implicit Adams-Bashforth algorithm, where the parallel motion operator v_∥ ∂/∂z is treated implicitly and the field terms are treated with an Adams-Bashforth explicit scheme. Fully explicit algorithms applied to all terms can be better at long wavelengths than these ADI or partially implicit algorithms.
Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration
Masalma, Yahya; Jiao, Yu
2010-10-01
We implemented a scalable parallel quasi-Monte Carlo algorithm for high-dimensional numerical integration over tera-scale data sets. The implemented algorithm uses Sobol quasi-random sequences to generate samples; the Sobol sequence was chosen to avoid clustering effects in the generated samples and to produce low-discrepancy samples that cover the entire integration domain. The performance of the algorithm was tested, and the results demonstrate the scalability and accuracy of the implementation. The algorithm could be used in applications where a huge data volume is generated and numerical integration is required. We suggest using a hybrid MPI/OpenMP programming model to improve performance; if the mixed model is used, attention should be paid to scalability and accuracy.
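The low-discrepancy idea can be sketched in a few lines. A Halton sequence stands in for the Sobol sequence here purely because it fits in a short example; both families avoid the clustering of pseudo-random points and cover the integration domain evenly.

```python
# i-th element (i >= 1) of the base-b van der Corput sequence in [0, 1);
# Halton points combine van der Corput sequences with coprime bases.
def van_der_corput(i, base):
    x, denom = 0.0, 1.0
    while i:
        denom *= base
        i, rem = divmod(i, base)
        x += rem / denom
    return x

def halton_point(i, bases):
    return [van_der_corput(i, b) for b in bases]

# Average f over n Halton points in the unit cube [0,1)^dim.
def qmc_integrate(f, dim, n):
    bases = [2, 3, 5, 7, 11, 13][:dim]
    return sum(f(halton_point(i, bases)) for i in range(1, n + 1)) / n

# Example: the integral of x*y over the unit square is exactly 1/4.
estimate = qmc_integrate(lambda p: p[0] * p[1], 2, 2000)
```

Parallelizing this is what the report addresses: disjoint index ranges of the sequence can be assigned to different MPI ranks, since element i is computed directly from i.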
A hybrid artificial bee colony algorithm for numerical function optimization
NASA Astrophysics Data System (ADS)
Alqattan, Zakaria N.; Abdullah, Rosni
2015-02-01
The Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it competitive with other search algorithms in the area of optimization, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, the ABC's local search process and its bee-movement (solution improvement) equation still have some weaknesses: the ABC is good at avoiding entrapment in local optima, but it spends much of its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a hybrid particle-movement ABC algorithm, called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to test the HPABC algorithm experimentally. The results illustrate that HPABC outperforms ABC in most of the experiments (75% better in accuracy and over 3 times faster).
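A sketch of the standard ABC solution-improvement step plus a PSO-style pull toward the best-known solution, in the spirit of HPABC. The exact HPABC update equation is the paper's; the weight w and the blend below are illustrative assumptions, demonstrated on the sphere benchmark.

```python
import random

def sphere(x):
    return sum(v * v for v in x)

# Perturb one random dimension toward a partner solution, with an added
# (hypothetical) PSO-like bias toward the global best.
def abc_update(x, partner, best, w=0.5):
    j = random.randrange(len(x))
    phi = random.uniform(-1.0, 1.0)
    cand = list(x)
    cand[j] = (x[j] + phi * (x[j] - partner[j])            # standard ABC term
               + w * random.random() * (best[j] - x[j]))   # PSO-like pull
    return cand

def optimize(dim=5, pop=20, iters=200, seed=1):
    random.seed(seed)
    colony = [[random.uniform(-5.0, 5.0) for _ in range(dim)]
              for _ in range(pop)]
    best = min(colony, key=sphere)
    for _ in range(iters):
        for i, x in enumerate(colony):
            cand = abc_update(x, colony[random.randrange(pop)], best)
            if sphere(cand) < sphere(x):                   # greedy selection
                colony[i] = cand
        best = min(min(colony, key=sphere), best, key=sphere)
    return sphere(best)

best_fitness = optimize()
```

The greedy selection and one-dimension-at-a-time perturbation are standard ABC; only the pull toward `best` is the PSO-flavored addition.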
Results from Numerical General Relativity
NASA Technical Reports Server (NTRS)
Baker, John G.
2011-01-01
For several years numerical simulations have been revealing the details of general relativity's predictions for the dynamical interactions of merging black holes. I will review what has been learned of the rich phenomenology of these mergers and the resulting gravitational wave signatures. These wave forms provide a potentially observable record of the powerful astronomical events, a central target of gravitational wave astronomy. Asymmetric radiation can produce a thrust on the system which may accelerate the single black hole resulting from the merger to high relative velocity.
An efficient cuckoo search algorithm for numerical function optimization
NASA Astrophysics Data System (ADS)
Ong, Pauline; Zainuddin, Zarita
2013-04-01
The cuckoo search algorithm, which reproduces the breeding strategy of the best-known brood-parasitic bird, the cuckoo, has demonstrated its superiority in obtaining global solutions to numerical optimization problems. However, the fixed-step approach in its exploration and exploitation behavior may slow down the search considerably. In this regard, an improved cuckoo search algorithm with adaptive step-size adjustment is introduced, and its feasibility is validated on a variety of benchmarks. The obtained results show that the proposed scheme outperforms the standard cuckoo search algorithm in terms of convergence characteristics while preserving the appealing features of the original method.
A novel bee swarm optimization algorithm for numerical function optimization
NASA Astrophysics Data System (ADS)
Akbari, Reza; Mohammadi, Alireza; Ziarati, Koorush
2010-10-01
Optimization algorithms inspired by the intelligent behavior of honey bees are among the most recently introduced population-based techniques. In this paper, a novel algorithm called bee swarm optimization (BSO), together with two extensions that improve its performance, is presented. BSO is a population-based optimization technique inspired by the foraging behavior of honey bees. The proposed approach provides different patterns which are used by the bees to adjust their flying trajectories. As the first extension, the BSO algorithm introduces approaches such as a repulsion factor and penalizing fitness (RP) to mitigate the stagnation problem. Second, to maintain an efficient balance between exploration and exploitation, time-varying weights (TVW) are introduced into the BSO algorithm. The proposed algorithm (BSO) and its two extensions (BSO-RP and BSO-RPTVW) are compared with existing algorithms based on the intelligent behavior of honey bees on a set of well-known numerical test functions. The experimental results show that the BSO algorithms are effective and robust: they produce excellent results and outperform the other algorithms investigated in this study.
Adaptive Numerical Algorithms in Space Weather Modeling
NASA Technical Reports Server (NTRS)
Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav
2010-01-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit time stepping.
Adaptive numerical algorithms in space weather modeling
NASA Astrophysics Data System (ADS)
Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav
2012-02-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit time stepping.
Determining the Numerical Stability of Quantum Chemistry Algorithms.
Knizia, Gerald; Li, Wenbin; Simon, Sven; Werner, Hans-Joachim
2011-08-01
We present a simple, broadly applicable method for determining the numerical properties of quantum chemistry algorithms. The method deliberately introduces random numerical noise into computations, of the same order of magnitude as the floating point precision. Accordingly, repeated runs of an algorithm give slightly different results, which can be analyzed statistically to obtain precise estimates of its numerical stability. This noise is produced by automatic code injection into regular compiler output, so that no substantial programming effort is required, only a recompilation of the affected program sections. The method is applied to investigate: (i) the numerical stability of the three-center Obara-Saika integral evaluation scheme for high angular momenta, (ii) whether coupled cluster perturbative triples can be evaluated with single-precision arithmetic, (iii) how to implement the density fitting approximation in Møller-Plesset perturbation theory (MP2) most accurately, and (iv) which parts of density-fitted MP2 can be safely evaluated with single-precision arithmetic. In the integral case, we find a numerical instability in an equation that is used in almost all integral programs. Based on the results of (ii) and (iv), we conjecture that single-precision arithmetic can be applied whenever a calculation is done in an orthogonal basis set and excessively long linear sums are avoided. PMID:26606614
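A minimal re-creation of the noise-injection idea in plain Python (the paper injects the noise at the compiler level; here we simply perturb the inputs by about one part in 2**52). Repeating the computation and reading off the relative spread of the results exposes numerical instability, demonstrated on the classic cancellation-prone quadratic root formula.

```python
import math, random, statistics

EPS = 2.0 ** -52

def jitter(x):
    # random relative perturbation at the level of machine epsilon
    return x * (1.0 + random.uniform(-EPS, EPS))

def relative_spread(f, args, trials=200, seed=0):
    random.seed(seed)
    vals = [f(*[jitter(a) for a in args]) for _ in range(trials)]
    return statistics.stdev(vals) / abs(statistics.mean(vals))

def root_naive(b, c):
    """Smaller root of x**2 - b*x + c = 0; cancels badly when b*b >> c."""
    return (b - math.sqrt(b * b - 4.0 * c)) / 2.0

def root_stable(b, c):
    """Algebraically identical rearrangement, free of the cancellation."""
    return 2.0 * c / (b + math.sqrt(b * b - 4.0 * c))

noisy_naive = relative_spread(root_naive, (1e8, 1.0))
noisy_stable = relative_spread(root_stable, (1e8, 1.0))
```

The unstable variant loses essentially all significant digits under the injected noise, while the stable variant's spread stays at the rounding level, which is exactly the statistical signature the method looks for.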
An algorithm for the numerical solution of linear differential games
Polovinkin, E S; Ivanov, G E; Balashov, M V; Konstantinov, R V; Khorev, A V
2001-10-31
A numerical algorithm for the construction of stable Krasovskii bridges, Pontryagin alternating sets, and piecewise program strategies solving two-person linear differential (pursuit or evasion) games on a fixed time interval is developed on the basis of a general theory. The aim of the first player (the pursuer) is to hit a prescribed target (terminal) set with the phase vector of the control system at the prescribed time; the aim of the second player (the evader) is the opposite. A description of the numerical algorithms used in the solution of differential games of the type under consideration is presented, together with estimates of the errors resulting from the approximation of the game sets by polyhedra.
Research on numerical algorithms for large space structures
NASA Technical Reports Server (NTRS)
Denman, E. D.
1982-01-01
Numerical algorithms for large space structures were investigated, with particular emphasis on decoupling methods for analysis and design. Numerous aspects of the analysis of large systems, ranging from algebraic theory and lambda matrices to identification algorithms, were considered. A general treatment of the algebraic theory of lambda matrices is presented, and the theory is applied to second-order lambda matrices.
Numerical Algorithm for Delta of Asian Option
Zhang, Boxiang; Yu, Yang; Wang, Weiguo
2015-01-01
We study the numerical solution of the Greeks of Asian options. In particular, we derive a closed-form solution for Δ of the geometric Asian option and use this analytical form as a control to numerically calculate Δ of the arithmetic Asian option, which is known to have no explicit closed-form solution. We implement our proposed numerical method and compare its standard error with that of other classical variance reduction methods. Our method provides an efficient solution to the hedging strategy with Asian options. PMID:26266271
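The control-variate idea the paper applies to Δ can be sketched on the option price instead, for brevity: the geometric average of the simulated path is highly correlated with the arithmetic-average payoff and has a known closed-form mean (it is lognormal), so it can serve as a control. All parameter values below are illustrative, not the paper's.

```python
import math, random

def asian_mc(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0, m=12,
             n=4000, seed=7):
    """Arithmetic Asian call price: plain MC vs geometric-average control."""
    random.seed(seed)
    dt = t / m
    disc = math.exp(-r * t)
    # closed-form mean of the discrete geometric average under GBM
    mu_g = (r - 0.5 * sigma ** 2) * t * (m + 1) / (2 * m)
    s2_g = sigma ** 2 * t * (m + 1) * (2 * m + 1) / (6 * m ** 2)
    eg = s0 * math.exp(mu_g + 0.5 * s2_g)

    ys, gs = [], []
    for _ in range(n):
        log_s, sum_s, sum_log = math.log(s0), 0.0, 0.0
        for _ in range(m):
            log_s += ((r - 0.5 * sigma ** 2) * dt
                      + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0))
            sum_s += math.exp(log_s)
            sum_log += log_s
        ys.append(disc * max(sum_s / m - k, 0.0))   # arithmetic payoff
        gs.append(math.exp(sum_log / m))            # geometric average G

    my = sum(ys) / n
    mg = sum(gs) / n
    cov = sum((y - my) * (g - mg) for y, g in zip(ys, gs)) / (n - 1)
    var_g = sum((g - mg) ** 2 for g in gs) / (n - 1)
    theta = cov / var_g                              # regression coefficient
    adj = [y - theta * (g - eg) for y, g in zip(ys, gs)]
    ma = sum(adj) / n
    var_plain = sum((y - my) ** 2 for y in ys) / (n - 1)
    var_cv = sum((a - ma) ** 2 for a in adj) / (n - 1)
    return my, ma, var_plain, var_cv

price_plain, price_cv, var_plain, var_cv = asian_mc()
```

Both estimators target the same expectation; the control-variate version simply subtracts the correlated, zero-mean quantity theta * (G - E[G]), shrinking the sample variance.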
Linsen, Sarah; Torbeyns, Joke; Verschaffel, Lieven; Reynvoet, Bert; De Smedt, Bert
2016-03-01
There are two well-known computation methods for solving multi-digit subtraction items, namely mental and algorithmic computation. It has been contended that mental and algorithmic computation differentially rely on numerical magnitude processing, an assumption that has already been examined in children, but not yet in adults. Therefore, in this study, we examined how numerical magnitude processing was associated with mental and algorithmic computation, and whether this association was different for mental versus algorithmic computation. We also investigated whether the association between numerical magnitude processing and mental and algorithmic computation differed for measures of symbolic versus nonsymbolic numerical magnitude processing. Results showed that symbolic, and not nonsymbolic, numerical magnitude processing was associated with mental computation, but not with algorithmic computation. Additional analyses showed, however, that the size of this association with symbolic numerical magnitude processing was not significantly different for mental and algorithmic computation. We also tried to further clarify the association between numerical magnitude processing and complex calculation by including relevant arithmetical subskills, i.e., arithmetic facts, that are needed for complex calculation and are also known to depend on numerical magnitude processing. Results showed that the associations between symbolic numerical magnitude processing and mental and algorithmic computation were fully explained by individual differences in elementary arithmetic fact knowledge. PMID:26914586
A Polynomial Time, Numerically Stable Integer Relation Algorithm
NASA Technical Reports Server (NTRS)
Ferguson, Helaman R. P.; Bailey, Daivd H.; Kutler, Paul (Technical Monitor)
1998-01-01
Let x = (x_1, x_2, ..., x_n) be a vector of real numbers. x is said to possess an integer relation if there exist integers a_i, not all zero, such that a_1x_1 + a_2x_2 + ... + a_nx_n = 0. Beginning in 1977, several algorithms (with proofs) have been discovered to recover the a_i given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically: it often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
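What an integer-relation algorithm recovers can be illustrated by naive exhaustive search. PSLQ reaches the same goal in polynomially many iterations via a stable matrix reduction; brute force like this is exponential in n and only viable for tiny instances with small coefficients.

```python
import math
from itertools import product

def find_relation(x, bound=10, tol=1e-9):
    """Integers a (not all zero, |a_i| <= bound) with a . x ~ 0, if any."""
    best = None
    for a in product(range(-bound, bound + 1), repeat=len(x)):
        if any(a) and abs(sum(c * v for c, v in zip(a, x))) < tol:
            # prefer the relation with the smallest maximum coefficient
            if best is None or max(map(abs, a)) < max(map(abs, best)):
                best = a
    return best

# x_2 = 1 + 2*sqrt(2), so 1*x_0 + 2*x_1 - 1*x_2 = 0 is a relation.
xs = (1.0, math.sqrt(2.0), 1.0 + 2.0 * math.sqrt(2.0))
relation = find_relation(xs)
```

The numerical-precision issue the abstract describes shows up even here: the tolerance must sit well above the rounding error in the dot product but well below the smallest nonzero value of |a . x| over the search box.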
New Results in Astrodynamics Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.
1998-01-01
Genetic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented, followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.
Research on numerical algorithms for large space structures
NASA Technical Reports Server (NTRS)
Denman, E. D.
1981-01-01
Numerical algorithms for the analysis and design of large space structures are investigated. The sign algorithm and its application to the decoupling of differential equations are presented. The generalized sign algorithm is given and its application to several problems discussed. The Laplace transforms of matrix functions and the diagonalization procedure for a finite element equation are discussed. The diagonalization of matrix polynomials is considered. The quadrature method and Laplace transforms are discussed, and the identification of linear systems by the quadrature method is investigated.
A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1976-01-01
The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary-approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, the accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
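The idea behind the U-D approach: the covariance P is carried in factored form P = U D Uᵀ, with U unit upper triangular and D diagonal, and the filter recursions update the well-scaled factors rather than P itself. Below is only the plain factorization (not Bierman's measurement-update recursion), in pure Python as a sketch.

```python
def udut(P):
    """Factor a symmetric positive-definite matrix P as U D U^T."""
    n = len(P)
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    D = [0.0] * n
    for j in range(n - 1, -1, -1):       # work from the last column back
        D[j] = P[j][j] - sum(U[j][k] ** 2 * D[k] for k in range(j + 1, n))
        for i in range(j):
            U[i][j] = (P[i][j]
                       - sum(U[i][k] * U[j][k] * D[k]
                             for k in range(j + 1, n))) / D[j]
    return U, D

P = [[4.0, 2.0, 1.0],
     [2.0, 3.0, 0.5],
     [1.0, 0.5, 2.0]]
U, D = udut(P)
```

Because D stores the variances and U is unit triangular, the factored update never forms the small differences of large numbers that destabilize the conventional covariance update, which is the property the case study exploits.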
Wake Vortex Algorithm Scoring Results
NASA Technical Reports Server (NTRS)
Robins, R. E.; Delisi, D. P.; Hinton, David (Technical Monitor)
2002-01-01
This report compares the performance of two models of trailing vortex evolution for which interaction with the ground is not a significant factor. One model uses eddy dissipation rate (EDR) and the other uses the kinetic energy of turbulence fluctuations (TKE) to represent the effect of turbulence. In other respects, the models are nearly identical. The models are evaluated by comparing their predictions of circulation decay, vertical descent, and lateral transport to observations for over four hundred cases from Memphis and Dallas/Fort Worth International Airports. These observations were obtained during deployments in support of NASA's Aircraft Vortex Spacing System (AVOSS). The results of the comparisons show that the EDR model usually performs slightly better than the TKE model.
Algorithm-Based Fault Tolerance for Numerical Subroutines
NASA Technical Reports Server (NTRS)
Tumon, Michael; Granat, Robert; Lou, John
2007-01-01
A software library implements a new methodology for detecting faults in numerical subroutines, thus enabling application programs that contain the subroutines to recover transparently from single-event upsets. The software library in question is fault-detecting middleware that is wrapped around the numerical subroutines. Conventional serial versions (based on LAPACK and FFTW) and a parallel version (based on ScaLAPACK) exist. The source code of the application program that contains the numerical subroutines is not modified, and the middleware is transparent to the user. The methodology used is a type of algorithm-based fault tolerance (ABFT). In ABFT, a checksum is computed before a computation and compared with the checksum of the computational result; an error is declared if the difference between the checksums exceeds some threshold. Novel normalization methods are used in the checksum comparison to ensure correct fault detections independent of algorithm inputs. In tests of this software reported in the peer-reviewed literature, this library was shown to enable detection of 99.9 percent of significant faults while generating no false alarms.
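The ABFT checksum mechanism can be sketched on a matrix-vector product: a checksum row (the column sums of A) yields a linear invariant that must hold after the computation. The fault injector below is only for demonstration, and the library's normalization of the comparison threshold is not reproduced here.

```python
def abft_matvec(A, x, corrupt=None, tol=1e-9):
    """y = A x with a column-checksum fault check; returns (y, fault_flag)."""
    check_row = [sum(col) for col in zip(*A)]   # column sums of A
    y = [sum(a * xi for a, xi in zip(row, x)) for row in A]
    if corrupt is not None:
        y[corrupt] += 1.0                       # simulated single-event upset
    y_check = sum(c * xi for c, xi in zip(check_row, x))
    # invariant: sum(y) must equal the checksum row applied to x
    fault = abs(sum(y) - y_check) > tol * max(1.0, abs(y_check))
    return y, fault

A = [[1.0, 2.0], [3.0, 4.0]]
y_ok, fault_ok = abft_matvec(A, [1.0, 1.0])               # clean run
y_bad, fault_bad = abft_matvec(A, [1.0, 1.0], corrupt=0)  # upset detected
```

Choosing the threshold is the delicate part in practice: it must absorb legitimate rounding error without masking genuine upsets, which is what the library's normalization methods address.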
Understanding disordered systems through numerical simulation and algorithm development
NASA Astrophysics Data System (ADS)
Sweeney, Sean Michael
Disordered systems arise in many physical contexts. Not all matter is uniform, and impurities or heterogeneities can be modeled by fixed random disorder. Numerous complex networks also possess fixed disorder, leading to applications in transportation systems, telecommunications, social networks, and epidemic modeling, to name a few. Due to their random nature and power law critical behavior, disordered systems are difficult to study analytically. Numerical simulation can help overcome this hurdle by allowing for the rapid computation of system states. In order to get precise statistics and extrapolate to the thermodynamic limit, large systems must be studied over many realizations. Thus, innovative algorithm development is essential in order to reduce the memory and running-time requirements of simulations. This thesis presents a review of disordered systems, as well as a thorough study of two particular systems through numerical simulation, algorithm development and optimization, and careful statistical analysis of scaling properties. Chapter 1 provides a thorough overview of disordered systems, the history of their study in the physics community, and the development of techniques used to study them. Topics of quenched disorder, phase transitions, the renormalization group, criticality, and scale invariance are discussed. Several prominent models of disordered systems are also explained. Lastly, analysis techniques used in studying disordered systems are covered. In Chapter 2, minimal spanning trees on critical percolation clusters are studied, motivated in part by an analytic perturbation expansion by Jackson and Read that I check against numerical calculations. This system has a direct mapping to the ground state of the strongly disordered spin glass. We compute the path length fractal dimension of these trees in dimensions d = {2, 3, 4, 5} and find our results to be compatible with the analytic results suggested by Jackson and Read. In Chapter 3, the random bond Ising model is studied.
Multiresolution representation and numerical algorithms: A brief review
NASA Technical Reports Server (NTRS)
Harten, Amiram
1994-01-01
In this paper we review recent developments in techniques to represent data in terms of its local scale components. These techniques enable us to obtain data compression by eliminating scale-coefficients which are sufficiently small. This capability for data compression can be used to reduce the cost of many numerical solution algorithms by either applying it to the numerical solution operator in order to get an approximate sparse representation, or by applying it to the numerical solution itself in order to reduce the number of quantities that need to be computed.
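The compression strategy described above — transform the data into local scale components, then drop coefficients below a threshold — can be sketched with a one-dimensional Haar transform (a minimal illustration, not Harten's multiresolution scheme; the test signal and threshold are arbitrary choices):

```python
import math

def haar(x):
    # recursive Haar analysis: returns [coarse averages] + [detail coefficients]
    if len(x) == 1:
        return x
    avg = [(x[2*i] + x[2*i + 1]) / 2 for i in range(len(x) // 2)]
    dif = [(x[2*i] - x[2*i + 1]) / 2 for i in range(len(x) // 2)]
    return haar(avg) + dif

def ihaar(c):
    # inverse transform: rebuild each pair from its average and detail
    if len(c) == 1:
        return c
    half = len(c) // 2
    avg, dif = ihaar(c[:half]), c[half:]
    out = []
    for a, d in zip(avg, dif):
        out += [a + d, a - d]
    return out

signal = [math.sin(2 * math.pi * i / 64) for i in range(64)]
coeffs = haar(signal)
tol = 0.01
# data compression: zero out sufficiently small scale coefficients
compressed = coeffs[:1] + [c if abs(c) > tol else 0.0 for c in coeffs[1:]]
dropped = sum(1 for c in compressed if c == 0.0)
error = max(abs(a - b) for a, b in zip(signal, ihaar(compressed)))
```

Each reconstructed sample accumulates at most one detail coefficient per level (6 levels here), so the reconstruction error stays below 6·tol even though a number of coefficients have been discarded.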
Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes
NASA Technical Reports Server (NTRS)
Abrams, D.; Williams, C.
1999-01-01
We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speedup over the fastest known classical deterministic algorithms and a quadratic speedup over classical Monte Carlo methods.
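For context, the classical Monte Carlo baseline that the quantum approach improves on quadratically can be sketched in a few lines (a generic illustration with an arbitrary test integrand, not the algorithm of the paper):

```python
import random

random.seed(0)
N = 20000
# classical Monte Carlo estimate of the integral of x^2 over [0, 1]
# (exact value 1/3); the statistical error shrinks like 1/sqrt(N),
# which is the scaling the quantum algorithm improves quadratically
estimate = sum(random.random() ** 2 for _ in range(N)) / N
```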
Experimentally constructing finite difference algorithms in numerical relativity
NASA Astrophysics Data System (ADS)
Anderson, Matthew; Neilsen, David; Matzner, Richard
2002-04-01
Computational studies of gravitational waves require numerical algorithms with long-term stability (necessary for convergence). However, constructing stable finite difference algorithms (FDAs) for the ADM formulation of the Einstein equations, especially in multiple dimensions, has proven difficult. Most FDAs are constructed using rules of thumb gained from experience with simple model equations. To search for FDAs with improved stability, we adopt a brute-force approach, where we systematically test thousands of numerical schemes. We sort the spatial derivatives of the Einstein equations into groups, and parameterize each group by finite difference type (centered or upwind) and order. Furthermore, terms proportional to the constraints are added to the evolution equations with additional parameters. A spherically symmetric, excised Schwarzschild black hole (one dimension) and linearized waves in multiple dimensions are used as model systems to evaluate the different numerical schemes.
Numerical stability analysis of the pseudo-spectral analytical time-domain PIC algorithm
Godfrey, Brendan B.; Vay, Jean-Luc; Haber, Irving
2014-02-01
The pseudo-spectral analytical time-domain (PSATD) particle-in-cell (PIC) algorithm solves the vacuum Maxwell's equations exactly, has no Courant time-step limit (as conventionally defined), and offers substantial flexibility in plasma and particle beam simulations. It is, however, not free of the usual numerical instabilities, including the numerical Cherenkov instability, when applied to relativistic beam simulations. This paper derives and solves the numerical dispersion relation for the PSATD algorithm and compares the results with corresponding behavior of the more conventional pseudo-spectral time-domain (PSTD) and finite difference time-domain (FDTD) algorithms. In general, PSATD offers superior stability properties over a reasonable range of time steps. More importantly, one version of the PSATD algorithm, when combined with digital filtering, is almost completely free of the numerical Cherenkov instability for time steps (scaled to the speed of light) comparable to or smaller than the axial cell size.
A numerical algorithm for magnetohydrodynamics of ablated materials.
Lu, Tianshi; Du, Jian; Samulyak, Roman
2008-07-01
A numerical algorithm for the simulation of magnetohydrodynamics in partially ionized ablated material is described. For the hydro part, the hyperbolic conservation laws with electromagnetic terms are solved using techniques developed for free surface flows; for the electromagnetic part, the electrostatic approximation is applied and an elliptic equation for the electric potential is solved. The algorithm has been implemented within a front-tracking framework, which explicitly tracks geometrically complex evolving interfaces. An elliptic solver based on the embedded boundary method was implemented for both two- and three-dimensional simulations. A surface model for the interface between the solid target and the ablated vapor has also been developed, as has a numerical model for the equation of state that accounts for atomic processes in the ablated material. The code has been applied to simulations of pellet ablation in a magnetically confined plasma and of laser-ablated plasma plume expansion in magnetic fields. PMID:19051925
Algorithms for the Fractional Calculus: A Selection of Numerical Methods
NASA Technical Reports Server (NTRS)
Diethelm, K.; Ford, N. J.; Freed, A. D.; Luchko, Yu.
2003-01-01
Many recently developed models in areas like viscoelasticity, electrochemistry, diffusion processes, etc. are formulated in terms of derivatives (and integrals) of fractional (non-integer) order. In this paper we present a collection of numerical algorithms for the solution of the various problems arising in this context. We believe that this will give the engineer the necessary tools required to work with fractional models in an efficient way.
A Numerical Algorithm for the Solution of a Phase-Field Model of Polycrystalline Materials
Dorr, M R; Fattebert, J; Wickett, M E; Belak, J F; Turchi, P A
2008-12-04
We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in polycrystalline materials. The PFM system of equations includes a local order parameter, a quaternion representation of local orientation and a species composition parameter. The algorithm is based on the implicit integration of a semidiscretization of the PFM system using a backward difference formula (BDF) temporal discretization combined with a Newton-Krylov algorithm to solve the nonlinear system at each time step. The BDF algorithm is combined with a coordinate projection method to maintain quaternion unit length, which is related to an important solution invariant. A key element of the Newton-Krylov algorithm is the selection of a preconditioner to accelerate the convergence of the Generalized Minimum Residual algorithm used to solve the Jacobian linear system in each Newton step. Results are presented for the application of the algorithm to 2D and 3D examples.
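The coordinate projection used above to keep the quaternion at unit length can be sketched in isolation (a toy explicit-Euler integration of a rigid rotation stands in for the BDF/Newton-Krylov step; all parameter values are arbitrary):

```python
import math

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z)
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def project(q):
    # coordinate projection back onto the unit-quaternion manifold
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

omega = (0.0, 0.3, 0.2, 0.1)        # angular velocity as a pure quaternion
q, dt = (1.0, 0.0, 0.0, 0.0), 0.01
for _ in range(1000):
    dq = qmul(q, omega)             # dq/dt = (1/2) q * omega
    q = tuple(qi + 0.5 * dt * di for qi, di in zip(q, dq))
    q = project(q)                  # without this step, |q| drifts from 1
norm = math.sqrt(sum(c * c for c in q))
```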
Copps, Kevin D.; Carnes, Brian R.
2008-04-01
We examine algorithms for the finite element approximation of thermal contact models. We focus on the implementation of thermal contact algorithms in SIERRA Mechanics. Following the mathematical formulation of models for tied contact and resistance contact, we present three numerical algorithms: (1) the multi-point constraint (MPC) algorithm, (2) a resistance algorithm, and (3) a new generalized algorithm. We compare and contrast both the correctness and performance of the algorithms in three test problems. We tabulate the convergence rates of global norms of the temperature solution on sequentially refined meshes. We present the results of a parameter study of the effect of contact search tolerances. We outline best practices in using the software for predictive simulations, and suggest future improvements to the implementation.
Numerical simulations of catastrophic disruption: Recent results
NASA Technical Reports Server (NTRS)
Benz, W.; Asphaug, E.; Ryan, E. V.
1994-01-01
Numerical simulations have been used to study high velocity two-body impacts. In this paper, a two-dimensional Largrangian finite difference hydro-code and a three-dimensional smooth particle hydro-code (SPH) are described and initial results reported. These codes can be, and have been, used to make specific predictions about particular objects in our solar system. But more significantly, they allow us to explore a broad range of collisional events. Certain parameters (size, time) can be studied only over a very restricted range within the laboratory; other parameters (initial spin, low gravity, exotic structure or composition) are difficult to study at all experimentally. The outcomes of numerical simulations lead to a more general and accurate understanding of impacts in their many forms.
Convergence Results on Iteration Algorithms to Linear Systems
Wang, Zhuande; Yang, Chuansheng; Yuan, Yubo
2014-01-01
In order to solve large scale linear systems, backward and Jacobi iteration algorithms are employed; convergence is the most important issue. In this paper, a unified backward iterative matrix is proposed, from which several well-known iterative algorithms can be deduced. The most important result is that the convergence properties have been proved. Firstly, the spectral radius of the Jacobi iterative matrix is positive and that of the backward iterative matrix is strictly positive (larger than a positive constant). Secondly, the two iterations have the same convergence behavior (they converge or diverge simultaneously). Finally, numerical experiments show that the proposed algorithms are correct and retain the merits of backward methods. PMID:24991640
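The role of the spectral radius can be illustrated with a plain Jacobi iteration on a small diagonally dominant system (a generic sketch, not the paper's unified backward iteration):

```python
# Jacobi iteration on a diagonally dominant system; the iteration converges
# because the spectral radius of the Jacobi matrix D^{-1}(L+U) is below 1
# (here bounded by the max off-diagonal row sum ratio, 0.6)
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
x = [0.0, 0.0, 0.0]
for _ in range(100):
    x = [(b[i] - sum(A[i][j] * x[j] for j in range(3) if j != i)) / A[i][i]
         for i in range(3)]
residual = max(abs(sum(A[i][j] * x[j] for j in range(3)) - b[i]) for i in range(3))
```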
Predictive Lateral Logic for Numerical Entry Guidance Algorithms
NASA Technical Reports Server (NTRS)
Smith, Kelly M.
2016-01-01
Recent entry guidance algorithm development has tended to focus on numerical integration of trajectories onboard in order to evaluate candidate bank profiles. Such methods enjoy benefits such as flexibility to varying mission profiles and improved robustness to large dispersions. A common element across many of these modern entry guidance algorithms is a reliance upon the concept of Apollo heritage lateral error (or azimuth error) deadbands in which the number of bank reversals to be performed is non-deterministic. This paper presents a closed-loop bank reversal method that operates with a fixed number of bank reversals defined prior to flight. However, this number of bank reversals can be modified at any point, including in flight, based on contingencies such as fuel leaks where propellant usage must be minimized.
Numerical optimization algorithm for rotationally invariant multi-orbital slave-boson method
NASA Astrophysics Data System (ADS)
Quan, Ya-Min; Wang, Qing-wei; Liu, Da-Yong; Yu, Xiang-Long; Zou, Liang-Jian
2015-06-01
We develop a generalized numerical optimization algorithm for the rotationally invariant multi-orbital slave boson approach, which is applicable to arbitrary boundary constraints of high-dimensional objective functions by combining several classical optimization techniques. After constructing the calculation architecture of the rotationally invariant multi-orbital slave boson model, we apply this optimization algorithm to find the stable ground state and magnetic configuration of two-orbital Hubbard models. The numerical results are consistent with available solutions, confirming the correctness and accuracy of our present algorithm. Furthermore, we utilize it to explore the effects of the transverse Hund's coupling terms on the metal-insulator transition, the orbital selective Mott phase and magnetism. These results show the rapid convergence and robust, stable character of our algorithm in searching for the optimized solution of strongly correlated electron systems.
Path Integrals and Exotic Options:. Methods and Numerical Results
NASA Astrophysics Data System (ADS)
Bormetti, G.; Montagna, G.; Moreni, N.; Nicrosini, O.
2005-09-01
In the framework of the Black-Scholes-Merton model of financial derivatives, a path integral approach to option pricing is presented. A general formula to price path dependent options on multidimensional and correlated underlying assets is obtained and implemented by means of various flexible and efficient algorithms. As an example, we detail the case of Asian call options. The numerical results are compared with those obtained with other procedures used in quantitative finance and found to be in good agreement. In particular, when pricing at the money (ATM) and out of the money (OTM) options, the path integral approach exhibits competitive performance.
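The path-dependent payoff in question can also be priced by plain Monte Carlo over geometric Brownian motion paths, the kind of standard procedure such path integral results are checked against (a generic sketch with arbitrary market parameters, not the authors' algorithm):

```python
import math
import random

random.seed(0)
# Black-Scholes parameters (arbitrary example values): spot, strike,
# risk-free rate, volatility, maturity
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
steps, paths = 50, 5000
dt = T / steps
payoffs = []
for _ in range(paths):
    S, total = S0, 0.0
    for _ in range(steps):
        z = random.gauss(0.0, 1.0)
        # exact GBM step under the risk-neutral measure
        S *= math.exp((r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
        total += S
    # arithmetic-average Asian call payoff
    payoffs.append(max(total / steps - K, 0.0))
price = math.exp(-r * T) * sum(payoffs) / paths
```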
The Aquarius Salinity Retrieval Algorithm: Early Results
NASA Technical Reports Server (NTRS)
Meissner, Thomas; Wentz, Frank J.; Lagerloef, Gary; LeVine, David
2012-01-01
The Aquarius L-band radiometer/scatterometer system is designed to provide monthly salinity maps at 150 km spatial scale to a 0.2 psu accuracy. The sensor was launched on June 10, 2011, aboard the Argentine CONAE SAC-D spacecraft. The L-band radiometers and the scatterometer have been taking science data observations since August 25, 2011. The first part of this presentation gives an overview of the Aquarius salinity retrieval algorithm. The instrument calibration converts Aquarius radiometer counts into antenna temperatures (TA). The salinity retrieval algorithm converts those TA into brightness temperatures (TB) at a flat ocean surface. As a first step, contributions arising from the intrusion of solar, lunar and galactic radiation are subtracted. The antenna pattern correction (APC) removes the effects of cross-polarization contamination and spillover. The Aquarius radiometer measures the 3rd Stokes parameter in addition to vertical (v) and horizontal (h) polarizations, which allows for an easy removal of ionospheric Faraday rotation. The atmospheric absorption at L-band is almost entirely due to O2, which can be calculated based on auxiliary input fields from numerical weather prediction models and then successively removed from the TB. The final step in the TA to TB conversion is the correction for the roughness of the sea surface due to wind. This is based on the radar backscatter measurements by the scatterometer. The TB of the flat ocean surface can now be matched to a salinity value using a surface emission model that is based on a model for the dielectric constant of sea water and an auxiliary field for the sea surface temperature. In the current processing (as of writing this abstract) only v-pol TB are used for this last process and NCEP winds are used for the roughness correction. Before the salinity algorithm can be operationally implemented and its accuracy assessed by comparing versus in situ measurements, an extensive calibration and validation
NASA Astrophysics Data System (ADS)
Simpson, Matthew J.; Landman, Kerry A.; Newgreen, Donald F.
2006-08-01
A numerical algorithm to simulate chemotactic and/or diffusive migration on a one-dimensional growing domain is developed. The domain growth can be spatially nonuniform and the growth-derived advection term must be discretised. The hyperbolic terms in the conservation equations associated with chemotactic migration and domain growth are accurately discretised using an explicit central scheme. Generality of the algorithm is maintained using an operator split technique to simulate diffusive migration implicitly. The resulting algorithm is applicable for any combination of diffusive and/or chemotactic migration on a growing domain with a general growth-induced velocity field. The accuracy of the algorithm is demonstrated by testing the results against some simple analytical solutions and in an inter-code comparison. The new algorithm demonstrates that the form of nonuniform growth plays a critical role in determining whether a population of migratory cells is able to overcome the domain growth and fully colonise the domain.
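The operator-split structure described above — an explicit step for the hyperbolic (advective) terms followed by an implicit step for diffusion — can be sketched on a fixed domain (a minimal illustration with arbitrary parameters; domain growth and chemotaxis are omitted):

```python
import math

def thomas(a, b, c, d):
    # Thomas algorithm for a tridiagonal system:
    # a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n, dx, dt, vel, D = 51, 0.02, 0.01, 0.8, 0.001
c, r = vel * dt / dx, D * dt / dx**2        # c = 0.4 (CFL), r = 0.025
u = [math.exp(-((i * dx - 0.3) / 0.05) ** 2) for i in range(n)]
u[0] = u[-1] = 0.0
for _ in range(30):
    # explicit first-order upwind advection step (vel > 0)
    u = [0.0] + [u[i] - c * (u[i] - u[i - 1]) for i in range(1, n)]
    u[-1] = 0.0
    # implicit diffusion step on the interior, homogeneous Dirichlet BCs
    m = n - 2
    interior = thomas([-r] * m, [1 + 2 * r] * m, [-r] * m, u[1:-1])
    u = [0.0] + interior + [0.0]
```

The implicit half keeps the split scheme stable for the diffusive terms regardless of the diffusion number, while the explicit upwind half respects the CFL condition; the bump advects to the right and spreads.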
Wang, Peng; Zhu, Zhouquan; Huang, Shuai
2013-01-01
This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions. PMID:24385879
Simulation results for the Viterbi decoding algorithm
NASA Technical Reports Server (NTRS)
Batson, B. H.; Moorehead, R. W.; Taqvi, S. Z. H.
1972-01-01
Concepts involved in determining the performance of coded digital communications systems are introduced. The basic concepts of convolutional encoding and decoding are summarized, and hardware implementations of sequential and maximum likelihood decoders are described briefly. Results of parametric studies of the Viterbi decoding algorithm are summarized. Bit error probability is chosen as the measure of performance and is calculated, by using digital computer simulations, for various encoder and decoder parameters. Results are presented for code rates of one-half and one-third, for constraint lengths of 4 to 8, for both hard-decision and soft-decision bit detectors, and for several important systematic and nonsystematic codes. The effect of decoder block length on bit error rate also is considered, so that a more complete estimate of the relationship between performance and decoder complexity can be made.
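A hard-decision Viterbi decoder of the kind studied can be sketched for the rate-1/2, constraint-length-3 code with generators (7, 5) octal (a minimal illustration; the paper's simulations cover constraint lengths 4 to 8 and soft decisions as well):

```python
def encode(bits):
    # rate-1/2, K=3 convolutional encoder, generators g1=111, g2=101 (octal 7, 5)
    s = 0                                   # state = (b_{k-1}, b_{k-2})
    out = []
    for b in bits:
        s1, s0 = (s >> 1) & 1, s & 1
        out += [b ^ s1 ^ s0, b ^ s0]
        s = ((b << 1) | s1) & 3
    return out

def viterbi(received):
    # maximum likelihood (minimum Hamming distance) decoding over the trellis
    INF = float("inf")
    metric = [0.0] + [INF] * 3              # encoder starts in state 0
    paths = [[] for _ in range(4)]
    for k in range(0, len(received), 2):
        r = received[k:k + 2]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            s1, s0 = (s >> 1) & 1, s & 1
            for b in (0, 1):
                c = [b ^ s1 ^ s0, b ^ s0]
                ns = ((b << 1) | s1) & 3
                m = metric[s] + (c[0] != r[0]) + (c[1] != r[1])
                if m < new_metric[ns]:     # keep the survivor path into ns
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(4), key=lambda s: metric[s])
    return paths[best]

message = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(message + [0, 0])            # two flush bits return encoder to state 0
coded[5] ^= 1                               # inject a single channel bit error
decoded = viterbi(coded)[:len(message)]
```

Since this code has free distance 5, the single injected error is corrected and the message is recovered exactly.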
Burtsev, S.; Camassa, R.; Timofeyev, I.
1998-11-20
The authors implement two different algorithms for computing numerically the direct Zakharov-Shabat eigenvalue problem on the infinite line. The first algorithm replaces the potential in the eigenvalue problem by a piecewise-constant approximation, which allows one to solve analytically the corresponding ordinary differential equation. The resulting algorithm is of second order in the step size. The second algorithm uses the fourth-order Runge-Kutta method. They test and compare the performance of these two algorithms on three exactly solvable potentials. They find that even though the Runge-Kutta method is of higher order, this extra accuracy can be lost because of the additional dependence of its numerical error on the eigenvalue. This limits the usefulness of the Runge-Kutta algorithm to a region inside the unit circle around the origin in the complex plane of the eigenvalues. For the computation of the continuous spectrum density, this limitation is particularly severe, as revealed by the spectral decomposition of the L^2-norm of a solution to the nonlinear Schrödinger equation. They show that no such limitations exist for the piecewise-constant algorithm. In particular, this scheme converges uniformly for both continuous and discrete spectrum components.
Computational Fluid Dynamics. [numerical methods and algorithm development
NASA Technical Reports Server (NTRS)
1992-01-01
This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling are presented, along with examples of results obtained with the most recent algorithm developments.
A simple algorithm for analyzing uncertainty of accident reconstruction results.
Zou, Tiefang; Hu, Lin; Li, Pingfan; Wu, Hequan
2015-12-01
In order to analyze the uncertainty in accident reconstruction, the uncertainty analysis problem is turned into an extreme value problem, based on extreme value theory and convex model theory. To calculate the range of the dependent variable, extreme values in the interior of the definition domain and on its boundary are computed independently; the upper and lower bounds of the dependent variable are then given by these extreme values. Based on this idea, and through the analysis of five numerical cases, a simple algorithm for calculating the range of an accident reconstruction result is given; the proposed algorithm yields appropriate results in these cases. Finally, a real-world vehicle-motorcycle accident is considered: the range of the reconstructed vehicle velocity was calculated by employing PC-Crash, the response surface methodology and the newly proposed algorithm, giving a range of [66.1-67.3] km/h. This research provides another choice for uncertainty analysis in accident reconstruction. PMID:26386339
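The idea of bounding a reconstruction result by extreme values over the input domain can be illustrated with the textbook skid-to-stop speed formula v = sqrt(2 μ g d) (a hypothetical example with made-up intervals; the paper's cases and the PC-Crash model are more involved):

```python
import math
from itertools import product

g = 9.81
mu_range = (0.6, 0.8)       # friction coefficient interval (assumed)
d_range = (20.0, 25.0)      # skid distance interval in metres (assumed)

def speed(mu, d):
    # skid-to-stop speed estimate in m/s
    return math.sqrt(2 * mu * g * d)

# v is monotone in both inputs, so its extremes over the box lie on the
# boundary, here at the corners; in general, interior stationary points
# must be checked alongside the boundary, as the abstract describes
corners = [speed(mu, d) for mu, d in product(mu_range, d_range)]
v_min, v_max = min(corners), max(corners)
```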
NASA Astrophysics Data System (ADS)
Aleksandrova, A. G.; Galushina, T. Yu.
2015-12-01
The paper describes a software package developed for the numerical simulation of the breakups of natural and artificial objects, and the algorithms on which it is based. The new software "Numerical model of breakups" includes models of spacecraft (SC) breakup resulting from explosion and from collision, as well as two models of the explosion of an asteroid.
NASA Technical Reports Server (NTRS)
Nacozy, P. E.
1984-01-01
The equations of motion are developed for a perfectly flexible, inelastic tether with a satellite at its extremity. The tether is attached to a space vehicle in orbit, and is allowed to possess electrical conductivity. A numerical solution algorithm that provides the motion of the tether and satellite system is presented. The resulting differential equations can be solved by various existing standard numerical integration computer programs, and they admit approximations that can lead to analytical, approximate general solutions. The differential equations also allow more dynamical insight into the motion.
Saturn's North Polar Hexagon Numerical Modeling Results
NASA Astrophysics Data System (ADS)
Morales-Juberias, R.; Sayanagi, K. M.; Dowling, T. E.
2008-12-01
In 1980, Voyager images revealed the presence of a circumpolar wave at 78 degrees planetographic latitude in the northern hemisphere of Saturn. It was notable for having a dominant planetary wavenumber-six zonal mode, and for being stationary with respect to Saturn's Kilometric Radiation rotation rate measured by Voyager. The center of this hexagonal feature was coincident with the center of a sharp eastward jet with a peak speed of 100 m/s, and it had a meridional width of about 4 degrees. This hexagonal feature was confirmed in 1991 through ground-based observations, and it was observed again in 2006 with the Cassini VIMS instrument. The latest observations highlight the longevity of the hexagon and suggest that it extends at least several bars deep into the atmosphere. We use the Explicit Planetary Isentropic Code (EPIC) to perform high-resolution numerical simulations of this unique feature. We show that a wavenumber-six instability mode arises naturally from initially barotropic jets when seeded with weak random turbulence. We also discuss the dependence of the wave activity on the background vertical stability, zonal wind, planetary rotation rate and adjacent vortices. Computational resources were provided by the New Mexico Computing Applications Center and New Mexico Institute of Mining and Technology and the Comparative Planetology Laboratory at the University of Louisville.
Analysis of V-cycle multigrid algorithms for forms defined by numerical quadrature
Bramble, J.H.; Goldstein, C.I.; Pasciak, J.E.
1994-05-01
The authors describe and analyze certain V-cycle multigrid algorithms with forms defined by numerical quadrature applied to the approximation of symmetric second-order elliptic boundary value problems. This approach can be used for the efficient solution of finite element systems resulting from numerical quadrature as well as systems arising from finite difference discretizations. The results are based on a regularity free theory and hence apply to meshes with local grid refinement as well as the quasi-uniform case. It is shown that uniform (independent of the number of levels) convergence rates often hold for appropriately defined V-cycle algorithms with as few as one smoothing step per grid. These results hold even in applications without full elliptic regularity, e.g., a domain in R^2 with a crack.
PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release
NASA Astrophysics Data System (ADS)
Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.
2016-09-01
The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.
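For constant conditions, the modal solution on which PolyPole-1 builds is the classical eigenfunction expansion for diffusion out of a sphere with zero surface concentration; the fractional release then takes the familiar series form (a sketch of that textbook solution, not of the PolyPole-1 polynomial corrective terms):

```python
import math

def fractional_release(tau, n_modes=200):
    # tau = D*t/a^2 (dimensionless time); initially uniform gas
    # concentration in the grain, zero concentration at the surface:
    # f(tau) = 1 - (6/pi^2) * sum_n exp(-n^2 pi^2 tau) / n^2
    s = sum(math.exp(-(n * math.pi) ** 2 * tau) / n ** 2
            for n in range(1, n_modes + 1))
    return 1.0 - (6.0 / math.pi ** 2) * s

# release grows monotonically from 0 toward complete release
release = [fractional_release(tau) for tau in (0.0, 0.001, 0.01, 0.1, 1.0)]
```

Truncating the mode sum leaves a small residual at tau = 0 (here below 1%), which is why a time-varying-conditions algorithm built on this expansion must control the number of retained modes.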
A fast algorithm for numerical solutions to Fortet's equation
NASA Astrophysics Data System (ADS)
Brumen, Gorazd
2008-10-01
A fast algorithm for computation of default times of multiple firms in a structural model is presented. The algorithm uses a multivariate extension of Fortet's equation and the structure of Toeplitz matrices to significantly improve the computation time. In a financial market consisting of M firms (M not ≫ 1) and N discretization points in every dimension, the algorithm uses O(n log n · M · M! · N^(M(M-1)/2)) operations, where n is the number of discretization points in the time domain. The algorithm is applied to firm survival probability computation and zero coupon bond pricing.
NASA Astrophysics Data System (ADS)
Alfonso, Lester; Zamora, Jose; Cruz, Pedro
2015-04-01
The stochastic approach to coagulation considers the coalescence process going on in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results were obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial conditions is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with an excellent correspondence between the analytical and numerical solutions. In order to increase the speedup of the algorithm, software parallelization techniques with the OpenMP standard were used, along with an implementation that takes advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.
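The finite-volume stochastic picture can be illustrated with a direct Gillespie-type simulation for a constant kernel, which samples realizations of the same process whose probability distribution the master equation evolves (a minimal sketch with arbitrary parameters, not the authors' master-equation solver):

```python
import random

random.seed(1)
K = 1.0                      # constant coagulation kernel (arbitrary units)
masses = [1.0] * 50          # monodisperse initial condition, 50 particles
t = 0.0
while len(masses) > 5:
    n = len(masses)
    total_rate = K * n * (n - 1) / 2     # every pair coalesces at rate K
    t += random.expovariate(total_rate)  # exponential waiting time
    i, j = random.sample(range(n), 2)    # pick a uniformly random pair
    m = masses.pop(max(i, j)) + masses.pop(min(i, j))
    masses.append(m)                     # merge the pair
total_mass = sum(masses)
```

Every coalescence event conserves mass exactly, which provides a simple check on any numerical solution of the master equation.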
Numerical advection algorithms and their role in atmospheric transport and chemistry models
NASA Technical Reports Server (NTRS)
Rood, Richard B.
1987-01-01
During the last 35 years, well over 100 algorithms for modeling advection processes have been described and tested. This review summarizes the development and improvements that have taken place. The nature of the errors caused by numerical approximation to the advection equation are highlighted. Then the particular devices that have been proposed to remedy these errors are discussed. The extensive literature comparing transport algorithms is reviewed. Although there is no clear cut 'best' algorithm, several conclusions can be made. Spectral and pseudospectral techniques consistently provide the highest degree of accuracy, but expense and difficulties assuring positive mixing ratios are serious drawbacks. Schemes which consider fluid slabs bounded by grid points (volume schemes), rather than the simple specification of constituent values at the grid points, provide accurate positive definite results.
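The positivity issue mentioned above is easy to exhibit: on a step profile, first-order upwind keeps the solution non-negative, while a higher-order scheme such as Lax-Wendroff undershoots (a generic demonstration with arbitrary parameters, not drawn from the review's comparisons):

```python
n = 100
c = 0.5                                   # Courant number
step0 = [1.0 if i < 30 else 0.0 for i in range(n)]

def upwind(u):
    # first-order upwind, periodic (u[-1] wraps around); positive definite
    return [u[i] - c * (u[i] - u[i - 1]) for i in range(n)]

def lax_wendroff(u):
    # second-order Lax-Wendroff, periodic; dispersive near discontinuities
    return [u[i] - 0.5 * c * (u[(i + 1) % n] - u[i - 1])
            + 0.5 * c * c * (u[(i + 1) % n] - 2 * u[i] + u[i - 1])
            for i in range(n)]

uw, lw = step0[:], step0[:]
for _ in range(40):
    uw, lw = upwind(uw), lax_wendroff(lw)
```

Both schemes conserve the total mixing ratio on the periodic grid, but only the upwind solution remains non-negative; the higher-order scheme trades positivity for reduced numerical diffusion.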
Numerical analysis of EPR spectra. 7. The simplex algorithm
NASA Astrophysics Data System (ADS)
Beckwith, Athelstan L. J.; Brumby, Steven
The Simplex algorithm is well suited to the least-squares analysis of highly complex EPR spectra. The application of the algorithm to the analysis of the spectra of benzo[a]pyrenyl-6-oxy, chloro(methoxycarbonyl)methyl, and cyano(methoxy)methyl free radicals is described.
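A minimal Nelder-Mead simplex of the kind referred to can be sketched on a small least-squares problem (a generic two-parameter example, not the EPR spectrum analysis itself):

```python
def nelder_mead(f, x0, step=0.5, iters=300):
    # minimal Nelder-Mead simplex: reflection, expansion, contraction, shrink
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(iters):
        simplex.sort(key=f)
        best, second, worst = simplex[0], simplex[-2], simplex[-1]
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [2 * centroid[i] - worst[i] for i in range(n)]
        if f(refl) < f(best):
            expa = [centroid[i] + 2 * (centroid[i] - worst[i]) for i in range(n)]
            simplex[-1] = expa if f(expa) < f(refl) else refl
        elif f(refl) < f(second):
            simplex[-1] = refl
        else:
            contr = [centroid[i] + 0.5 * (worst[i] - centroid[i]) for i in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:   # shrink every vertex toward the best point
                simplex = [best] + [[best[i] + 0.5 * (p[i] - best[i])
                                     for i in range(n)] for p in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0]

# least-squares fit of y = a*x + b to synthetic data with a = 2, b = 1
xs = list(range(10))
ys = [2 * x + 1 for x in xs]
sse = lambda p: sum((y - p[0] * x - p[1]) ** 2 for x, y in zip(xs, ys))
a, b = nelder_mead(sse, [0.0, 0.0])
```

The method uses only function values, no derivatives, which is what makes it attractive when the objective is a full simulated spectrum compared point by point against a measured one.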
NASA Technical Reports Server (NTRS)
Daso, E. O.
1986-01-01
An implicit approximate factorization algorithm is employed to quantify the parametric effects of Courant number and artificial smoothing on numerical solutions of the unsteady 3-D Euler equations for a windmilling propeller (low speed) flow field. The results show that propeller global or performance characteristics vary strongly with Courant number and artificial dissipation parameters, though the variation is much less severe at high Courant numbers. Candidate sets of Courant number and dissipation parameters could result in parameter-dependent solutions. Parameter-independent numerical solutions can be obtained if low values of the dissipation parameter-time step ratio are used in the computations. Furthermore, it is realized that too much artificial damping can degrade numerical stability. Finally, it is demonstrated that highly resolved meshes may, in some cases, delay convergence, thereby suggesting some optimum cell size for a given flow solution. It is suspected that improper boundary treatment may account for the cell size constraint.
Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu
2015-01-01
Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted “useful” data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247
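For reference, the dense operations that such an optimization prunes are those of the textbook predict/update cycle, shown here for a constant-velocity model with a scalar position measurement, where the gain computation needs no matrix inversion (a generic sketch, not the SINS/GPS filter of the paper):

```python
import random

random.seed(0)
dt, q, r = 0.1, 1e-4, 0.25      # time step, process noise (assumed diagonal Q), R
x = [0.0, 0.0]                  # state estimate [position, velocity]
P = [[1.0, 0.0], [0.0, 1.0]]    # estimate covariance
true_pos, true_vel = 0.0, 2.0
for _ in range(200):
    true_pos += true_vel * dt
    z = true_pos + random.gauss(0.0, 0.5)      # noisy position measurement
    # predict: x = F x, P = F P F^T + Q, with F = [[1, dt], [0, 1]]
    x = [x[0] + dt * x[1], x[1]]
    p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q
    P = [[p00, p01], [p10, p11]]
    # update with scalar measurement z = H x + v, H = [1, 0]:
    # the innovation covariance S is a scalar, so the gain is a simple ratio
    S = P[0][0] + r
    K = [P[0][0] / S, P[1][0] / S]
    y = z - x[0]
    x = [x[0] + K[0] * y, x[1] + K[1] * y]
    P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
         [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
```

Writing the recursions out element by element, as above, makes the structural zeros visible; exploiting them offline is exactly the kind of saving the abstract describes.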
A semi-numerical algorithm for instability of compressible multilayered structures
NASA Astrophysics Data System (ADS)
Tang, Shan; Yang, Yang; Peng, Xiang He; Liu, Wing Kam; Huang, Xiao Xu; Elkhodary, Khalil
2015-07-01
A computational method is proposed for the analysis and prediction of instability (wrinkling or necking) of multilayered compressible plates and sheets made of metals or polymers under plane strain conditions. In previous works, a simplifying assumption (or physical argument) frequently made is that the materials are incompressible. To account for the compressibility of metals and polymers (a lower Poisson's ratio corresponds to a more compressible material), we propose a combined semi-numerical algorithm and finite element method for instability analysis. The proposed algorithm is verified by comparing its predictions with published results in the literature for thin films on polymer/metal substrates and for polymer/metal systems. The new combined method is then used to predict the effects of compressibility on instability behavior. The results suggest a potential utility for compressibility in the design of multilayered structures.
Numerical Optimization Algorithms and Software for Systems Biology
Saunders, Michael
2013-02-02
The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.
Variationally consistent discretization schemes and numerical algorithms for contact problems
NASA Astrophysics Data System (ADS)
Wohlmuth, Barbara
We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently, by means of a nonlinear complementarity function, as a system of equations. Although this system is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual-based and equilibrated a posteriori error estimators, can be designed based on the interpretation of the dual variable as a Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential-algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of
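The primal-dual active set strategy described above can be sketched on the simplest model problem of this class, a 1-D obstacle problem. This is a minimal illustration of the semi-smooth Newton idea, not the paper's contact solver; the load f and obstacle psi are made-up test data.

```python
import numpy as np

def obstacle_pdas(n=50, c=100.0, max_it=50):
    """Primal-dual active set iteration for a 1-D obstacle problem:
    -u'' = f on (0,1), u >= psi, u(0) = u(1) = 0.

    The active set is read off the nonlinear complementarity
    function lam - max(0, lam + c*(psi - u)); on it we enforce
    u = psi, off it the multiplier lam vanishes.
    """
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    # Finite-difference stiffness matrix of -u''
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    f = np.full(n, -10.0)              # load pushing u onto the obstacle
    psi = np.full(n, -0.1)             # flat obstacle below zero
    u = np.zeros(n)
    lam = np.zeros(n)                  # multiplier = contact force
    for _ in range(max_it):
        active = lam + c * (psi - u) > 0.0
        idx = np.flatnonzero(active)
        M, rhs = A.copy(), f.copy()
        M[idx, :] = 0.0
        M[idx, idx] = 1.0              # enforce u = psi on active set
        rhs[idx] = psi[idx]
        u = np.linalg.solve(M, rhs)
        lam = A @ u - f                # residual plays the multiplier
        if np.array_equal(active, lam + c * (psi - u) > 0.0):
            break                      # active set settled: converged
    return x, u, lam, psi

x, u, lam, psi = obstacle_pdas()
```

At convergence the solution satisfies the discrete complementarity conditions exactly, which is the hallmark of the active set approach.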
Numerical Results of 3-D Modeling of Moon Accumulation
NASA Astrophysics Data System (ADS)
Khachay, Yurie; Anfilogov, Vsevolod; Antipin, Alexandr
2014-05-01
Until recently, the favored model of the Moon's origin has been the mega-impact model, in which the formation of the Earth and its satellite was the consequence of the Earth's collision with a body of Mercury's mass. All dynamical models of the Earth's accumulation, together with estimates from the Pb-Pb system, lead to the conclusion that the duration of the planet's accumulation was about one billion years. Isotopic results from the W-Hf system, however, testify to a very early (5-10 million years) separation of the geochemical reservoirs of the core and mantle. In [1,2] it is shown that accounting for the energy released by the decay of short-lived radioactive elements, above all Al-26, is sufficient to heat even small bodies with dimensions of about (50-100) km up to the melting temperature of iron, so that a principally new differentiation mechanism can be realized. The melted, mainly iron, inner parts of the pre-planetary bodies can merge, while the cold silicate fragments return to the supply zone and additionally shift the composition of the Moon-forming material toward silicates. Only after the increase of the Earth's gravitational radius can the growing region of the future Earth's core retain the silicate envelope fragments as well [3]. For understanding the further evolution of the Earth-Moon system, it is important to trace the origin and evolution of heterogeneities that arise during the accumulation stage. In this paper we model the changes of temperature, pressure, and matter flow velocity in a block of a 3-D spherical body with a growing radius. The boundary problem is solved by the finite-difference method for a system of equations that describes the accumulation process: the Safronov equation, the momentum-balance (Navier-Stokes) equation, the equation for above-lithostatic pressure, and the heat-conduction equation, in velocity-pressure variables using the Boussinesq approximation. The numerical algorithm of the problem solution in velocity
Shock focusing flow field simulated by a high-resolution numerical algorithm
NASA Astrophysics Data System (ADS)
Jung, Y. G.; Chang, K. S.
2012-11-01
A shock-focusing concave reflector is a simple and effective tool for obtaining a high-pressure pulse wave near the physical focal point. In the past, many optical images were obtained through experimental studies. However, measurement of the field variables is difficult because the phenomenon is of short duration and the magnitude of the shock waves varies from pulse to pulse due to poor reproducibility. Using a wave propagation algorithm and the Cartesian embedded boundary method, we have successfully obtained numerical schlieren images that resemble the experimental results. From the numerical results, various field variables, such as pressure, density and vorticity, become available for the better understanding and design of shock-focusing devices.
Godfrey, Brendan B.; Vay, Jean-Luc
2013-09-01
Rapidly growing numerical instabilities routinely occur in multidimensional particle-in-cell computer simulations of plasma-based particle accelerators, astrophysical phenomena, and relativistic charged particle beams. Reducing instability growth to acceptable levels has necessitated higher-resolution grids, high-order field solvers, current filtering, etc., except for certain ratios of the time step to the axial cell size, for which numerical growth rates and saturation levels are reduced substantially. This paper derives and solves the cold-beam dispersion relation for numerical instabilities in multidimensional, relativistic, electromagnetic particle-in-cell programs employing either the standard or the Cole-Karkkainen finite difference field solver on a staggered mesh and the common Esirkepov current-gathering algorithm. Good overall agreement is achieved with previously reported results of the WARP code. In particular, the existence of select time steps for which instabilities are minimized is explained. Additionally, an alternative field interpolation algorithm is proposed for which instabilities are almost completely eliminated for a particular time step in ultra-relativistic simulations.
NASA Technical Reports Server (NTRS)
Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.
1988-01-01
A detailed description is presented of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software package for communication satellite systems planning. This software provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC-88) on the use of the geostationary orbit (GEO) and the planning of space services utilizing it. The features of the NASARC software package are described, and detailed information is given about the function of each of the four NASARC program modules. The results of a sample world scenario are presented and discussed.
A bibliography on parallel and vector numerical algorithms
NASA Technical Reports Server (NTRS)
Ortega, James M.; Voigt, Robert G.; Romine, Charles H.
1988-01-01
This is a bibliography on numerical methods. It also includes a number of other references on machine architecture, programming language, and other topics of interest to scientific computing. Certain conference proceedings and anthologies which have been published in book form are also listed.
A bibliography on parallel and vector numerical algorithms
NASA Technical Reports Server (NTRS)
Ortega, J. M.; Voigt, R. G.
1987-01-01
This is a bibliography of numerical methods. It also includes a number of other references on machine architecture, programming language, and other topics of interest to scientific computing. Certain conference proceedings and anthologies which have been published in book form are listed also.
Numerical Laplace Transform Inversion Employing the Gaver-Stehfest Algorithm.
ERIC Educational Resources Information Center
Jacquot, Raymond G.; And Others
1985-01-01
Presents a technique for the numerical inversion of Laplace Transforms and several examples employing this technique. Limitations of the method in terms of available computer word length and the effects of these limitations on approximate inverse functions are also discussed. (JN)
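The Gaver-Stehfest algorithm is compact enough to state in full. A sketch, assuming the standard Stehfest weight formula; the word-length limitation noted in the abstract is visible here, since the weights alternate in sign and grow rapidly, so in double precision accuracy peaks around N = 12-16 and then degrades.

```python
import math

def stehfest_weights(N=12):
    """Gaver-Stehfest weights V_k (N must be even)."""
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j**half * math.factorial(2 * j)
                  / (math.factorial(half - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (half + k) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s):
    f(t) ~ (ln 2 / t) * sum_k V_k F(k ln 2 / t)."""
    ln2 = math.log(2.0)
    V = stehfest_weights(N)
    return ln2 / t * sum(V[k] * F((k + 1) * ln2 / t) for k in range(N))

# F(s) = 1/(s + 1)  <=>  f(t) = exp(-t);  F(s) = 1/s  <=>  f(t) = 1
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
one = stehfest_invert(lambda s: 1.0 / s, t=1.0)
```

The method works well for smooth, non-oscillatory originals; oscillatory f(t) is where it breaks down.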
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.
2010-01-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications. PMID:20577570
A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization
Zhu, Wenyong; Liu, Zijuan; Duan, Qingyan; Cao, Long
2016-01-01
This paper proposes a novel quantum-behaved bat algorithm with the direction of the mean best position (QMBA). In QMBA, the position of each bat is updated mainly by the current optimal solution in the early stage of the search, while in the late stage it also depends on the mean best position, which enhances the convergence speed of the algorithm. During the search, quantum behavior is introduced into the bats, which helps them jump out of local optima and better adapt to complex environments. Meanwhile, QMBA makes good use of the statistical information of the best positions the bats have experienced to generate better-quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases the diversity of the population and improves the accuracy of the solutions. Twenty-four benchmark test functions are tested and compared with other variant bat algorithms for numerical optimization; the simulation results show that this approach is simple and efficient and can achieve more accurate solutions. PMID:27293424
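The mean-best-position idea can be illustrated with a toy quantum-behaved swarm. This is a loose sketch of the concept, not the authors' exact QMBA update rules: each bat jumps around an attractor between its personal best and the global best, with a "quantum" step whose scale is the distance to the mean of all personal bests. All constants (swarm size, contraction factor beta, search range) are illustrative.

```python
import numpy as np

def qmba_sketch(n_bats=20, dim=2, iters=200, beta=0.75, seed=0):
    """Toy quantum-behaved swarm pulled toward the mean best
    position, minimizing the sphere benchmark function."""
    rng = np.random.default_rng(seed)
    sphere = lambda p: np.sum(p * p, axis=-1)       # benchmark objective
    pos = rng.uniform(-5.0, 5.0, (n_bats, dim))
    pbest = pos.copy()
    pfit = sphere(pbest)
    for _ in range(iters):
        gbest = pbest[np.argmin(pfit)]
        mbest = pbest.mean(axis=0)                  # mean best position
        phi = rng.random((n_bats, 1))
        attractor = phi * pbest + (1.0 - phi) * gbest
        u = rng.random((n_bats, dim))
        step = beta * np.abs(mbest - pos) * np.log(1.0 / u)
        sign = np.where(rng.random((n_bats, dim)) < 0.5, 1.0, -1.0)
        pos = attractor + sign * step               # quantum jump
        fit = sphere(pos)
        better = fit < pfit
        pbest[better] = pos[better]
        pfit[better] = fit[better]
    return float(pfit.min())

best = qmba_sketch()
```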
A Numerical Algorithm for Complex Biological Flow in Irregular Microdevice Geometries
Nonaka, A; Miller, G H; Marshall, T; Liepmann, D; Gulati, S; Trebotich, D; Colella, P
2003-12-15
We present a numerical algorithm to simulate non-Newtonian flow in complex microdevice components. The model consists of continuum viscoelastic incompressible flow in irregular microscale geometries. Our numerical approach is the projection method of Bell, Colella and Glaz (BCG) to impose the incompressibility constraint coupled with the polymeric stress splitting discretization of Trebotich, Colella and Miller (TCM). In this approach we exploit the hyperbolic structure of the equations of motion to achieve higher resolution in the presence of strong gradients and to gain an order of magnitude in the timestep. We also extend BCG and TCM to an embedded boundary method to treat irregular domain geometries which exist in microdevices. Our method allows for particle representation in a continuum fluid. We present preliminary results for incompressible viscous flow with comparison to flow of DNA and simulants in microchannels and other components used in chem/bio microdevices.
NASA Astrophysics Data System (ADS)
Acebrón, Juan A.; Rodríguez-Rozas, Ángel
2013-10-01
An efficient numerical method based on a probabilistic representation of the Vlasov-Poisson system of equations in Fourier space has been derived. This has been done theoretically for arbitrary-dimensional problems, and specialized to one-dimensional problems for numerical purposes. The representation has been validated theoretically in the linear regime by comparing the solution obtained with the classical results of linear Landau damping. The numerical strategy requires generating suitable random trees combined with a Padé approximant for accurately approximating a given divergent series. Such series are obtained by summing the partial contributions to the solution coming from trees with an arbitrary number of branches. These contributions, coming in general from multi-dimensional definite integrals, are efficiently computed by a quasi-Monte Carlo method. It is shown how the accuracy of the method can be effectively increased by considering more terms of the series. The new representation was used successfully to develop a probabilistic domain decomposition method suited for massively parallel computers, which improves on the scalability found in classical methods. Finally, a few numerical examples based on classical phenomena such as nonlinear Landau damping and the two-stream instability are given, illustrating the remarkable performance of the algorithm when comparing the results with those obtained using a classical method.
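The Padé resummation step can be sketched concretely. The snippet below builds an [m/n] Padé approximant from series coefficients by solving the standard Toeplitz system for the denominator; as a stand-in for the paper's divergent tree-sum series, it uses the exp(x) Taylor series.

```python
import numpy as np

def pade_eval(c, m, n, x):
    """Evaluate the [m/n] Pade approximant of a power series at x.

    c holds the series coefficients c[0..m+n]. Denominator
    coefficients b_1..b_n satisfy sum_j b_j c_{k-j} = -c_k for
    k = m+1..m+n (with b_0 = 1); the numerator follows by convolution.
    """
    c = np.asarray(c, dtype=float)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(1, n + 1):
            if m + 1 + i - j >= 0:
                A[i, j - 1] = c[m + 1 + i - j]
    b = np.linalg.solve(A, -c[m + 1:m + n + 1])
    den = np.concatenate(([1.0], b))
    num = np.array([sum(den[j] * c[k - j] for j in range(min(k, n) + 1))
                    for k in range(m + 1)])
    xp = x ** np.arange(max(m, n) + 1)
    return (num @ xp[:m + 1]) / (den @ xp[:n + 1])

# [2/2] approximant of exp from 5 Taylor coefficients, evaluated at 1
coeffs = [1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24]
val = pade_eval(coeffs, 2, 2, 1.0)
```

For the [2/2] case this reproduces the classical (1 + x/2 + x²/12)/(1 - x/2 + x²/12) approximant of exp; the same construction applies to partial sums of a divergent series, where the rational form can converge even though the series does not.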
Numerical results for the WFNDEC 2012 eddy current benchmark problem
NASA Astrophysics Data System (ADS)
Theodoulidis, T. P.; Martinos, J.; Poulakis, N.
2013-01-01
We present numerical results for the World Federation of NDE Centers (WFNDEC) 2012 eddy current benchmark problem obtained with a commercial FEM package (Comsol Multiphysics). The measurements of the benchmark problem consist of coil impedance values acquired when an inspection probe coil is moved inside an Inconel tube along an axial through-wall notch. The simulation runs smoothly with minimal user interference (default settings used for mesh and solver), and the agreement between numerical and experimental results is excellent for all five inspection frequencies. Comments are made on the pros and cons of FEM, and some good-practice rules for using such numerical tools are presented.
Stochastic algorithms for the analysis of numerical flame simulations
Bell, John B.; Day, Marcus S.; Grcar, Joseph F.; Lijewski, Michael J.
2004-04-26
Recent progress in simulation methodologies and high-performance parallel computers has made it possible to perform detailed simulations of multidimensional reacting flow phenomena using comprehensive kinetics mechanisms. As simulations become larger and more complex, it becomes increasingly difficult to extract useful information from the numerical solution, particularly regarding the interactions of the chemical reaction and diffusion processes. In this paper we present a new diagnostic tool for the analysis of numerical simulations of reacting flow. Our approach is based on recasting an Eulerian flow solution in a Lagrangian frame. Unlike a conventional Lagrangian viewpoint that follows the evolution of a volume of the fluid, we instead follow specific chemical elements, e.g., carbon, nitrogen, etc., as they move through the system. From this perspective an "atom" is part of some molecule of a species that is transported through the domain by advection and diffusion. Reactions cause the atom to shift from one chemical host species to another, and the subsequent transport of the atom is given by the movement of the new species. We represent these processes using a stochastic particle formulation that treats advection deterministically and models diffusion and chemistry as stochastic processes. In this paper, we discuss the numerical issues in detail and demonstrate that an ensemble of stochastic trajectories can accurately capture key features of the continuum solution. The capabilities of this diagnostic are then demonstrated by applications to study the modulation of carbon chemistry during a vortex-flame interaction, and the role of cyano chemistry in NOx production for a steady diffusion flame.
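The advect-deterministically, diffuse-stochastically splitting can be sketched in a few lines of Euler-Maruyama time stepping. This is a minimal illustration with made-up constant velocity and diffusivity; the chemistry part (random hops between host species) is omitted.

```python
import numpy as np

def stochastic_trajectories(n=50000, steps=100, dt=0.01,
                            v=1.0, D=0.5, seed=1):
    """Ensemble of 1-D particle paths: deterministic advection plus
    a Brownian kick whose variance matches Fickian diffusion."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for _ in range(steps):
        x += v * dt                                          # advection
        x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n)  # diffusion
    return x

# With constant v and D the ensemble must match the continuum
# solution: mean v*t and variance 2*D*t at t = steps*dt = 1.
x = stochastic_trajectories()
```

Agreement of the ensemble moments with the continuum advection-diffusion solution is the basic consistency check before adding the stochastic chemistry hops.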
Thrombosis modeling in intracranial aneurysms: a lattice Boltzmann numerical algorithm
NASA Astrophysics Data System (ADS)
Ouared, R.; Chopard, B.; Stahl, B.; Rüfenacht, D. A.; Yilmaz, H.; Courbebaisse, G.
2008-07-01
The lattice Boltzmann numerical method is applied to model blood flow (plasma and platelets) and clotting in intracranial aneurysms at a mesoscopic level. The dynamics of blood clotting (thrombosis) is governed by mechanical variations of shear stress near the wall that influence platelet-wall interactions. Thrombosis starts and grows below a shear-rate threshold, and stops above it. Within this assumption, it is possible to account qualitatively well for partial, full or no occlusion of the aneurysm, and to explain why spontaneous thrombosis is more likely to occur in giant aneurysms than in small or medium-sized aneurysms.
NASA Astrophysics Data System (ADS)
Dong, Suchuan
2015-11-01
This talk focuses on simulating the motion of a mixture of N (N>=2) immiscible incompressible fluids with given densities, dynamic viscosities and pairwise surface tensions. We present an N-phase formulation within the phase field framework that is thermodynamically consistent, in the sense that the formulation satisfies the conservations of mass/momentum, the second law of thermodynamics and Galilean invariance. We also present an efficient algorithm for numerically simulating the N-phase system. The algorithm has overcome the issues caused by the variable coefficient matrices associated with the variable mixture density/viscosity and the couplings among the (N-1) phase field variables and the flow variables. We compare simulation results with the Langmuir-de Gennes theory to demonstrate that the presented method produces physically accurate results for multiple fluid phases. Numerical experiments will be presented for several problems involving multiple fluid phases, large density contrasts and large viscosity contrasts to demonstrate the capabilities of the method for studying the interactions among multiple types of fluid interfaces. Support from NSF and ONR is gratefully acknowledged.
A method for data handling numerical results in parallel OpenFOAM simulations
NASA Astrophysics Data System (ADS)
Anton, Alin; Muntean, Sebastian
2015-12-01
Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 GB of floating-point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.
Sheet Hydroforming Process Numerical Model Improvement Through Experimental Results Analysis
NASA Astrophysics Data System (ADS)
Gabriele, Papadia; Antonio, Del Prete; Alfredo, Anglani
2010-06-01
The increasing application of numerical simulation in the metal forming field has helped engineers solve problems one after another in manufacturing qualified formed products while reducing the required time [1]. Accurate simulation results are fundamental for tooling and product design. The wide application of numerical simulation is encouraging the development of highly accurate simulation procedures to meet industrial requirements. Many factors can influence the final simulation results, and many studies have been carried out on materials [2], yield criteria [3], plastic deformation [4,5], process parameters [6] and their optimization. In order to develop a reliable hydromechanical deep drawing (HDD) numerical model, the authors have worked out specific activities based on the evaluation of the effective stiffness of the blankholder structure [7]. In this paper, after an appropriate tuning phase of the blankholder force distribution, experimental activity has been taken into account to improve the accuracy of the numerical model. In the first phase, the effective capability of the blankholder structure to transfer the load applied by the hydraulic actuators to the blank was explored. This phase ended with the definition of an appropriate subdivision of the blankholder active surface that takes into account the effective pressure map obtained for the given load configuration. In the second phase, the numerical results obtained with the developed subdivision were compared with the experimental data of the studied model. The numerical model was then improved, finding the best solution for the blankholder force distribution.
Numerical Algorithms for Precise and Efficient Orbit Propagation and Positioning
NASA Astrophysics Data System (ADS)
Bradley, Ben K.
Motivated by the growing space catalog and the demands for precise orbit determination with shorter latency for science and reconnaissance missions, this research improves the computational performance of orbit propagation through more efficient and precise numerical integration and frame transformation implementations. Propagation of satellite orbits is required for astrodynamics applications including mission design, orbit determination in support of operations and payload data analysis, and conjunction assessment. Each of these applications has somewhat different requirements in terms of accuracy, precision, latency, and computational load. This dissertation develops procedures to achieve various levels of accuracy while minimizing computational cost for diverse orbit determination applications. This is done by addressing two aspects of orbit determination: (1) numerical integration used for orbit propagation and (2) precise frame transformations necessary for force model evaluation and station coordinate rotations. This dissertation describes a recently developed method for numerical integration, dubbed Bandlimited Collocation Implicit Runge-Kutta (BLC-IRK), and compares its efficiency in propagating orbits to existing techniques commonly used in astrodynamics. The BLC-IRK scheme uses generalized Gaussian quadratures for bandlimited functions. It requires significantly fewer force function evaluations than explicit Runge-Kutta schemes and approaches the efficiency of the 8th-order Gauss-Jackson multistep method. Converting between the Geocentric Celestial Reference System (GCRS) and International Terrestrial Reference System (ITRS) is necessary for many applications in astrodynamics, such as orbit propagation, orbit determination, and analyzing geoscience data from satellite missions. This dissertation provides simplifications to the Celestial Intermediate Origin (CIO) transformation scheme and Earth orientation parameter (EOP) storage for use in positioning and
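The explicit Runge-Kutta baseline against which collocation schemes such as BLC-IRK are measured can be sketched on the two-body problem in canonical units. This is an illustrative sketch only; the step size and orbit are toy values, and the force-evaluation count (four per step for RK4) is the quantity the dissertation's scheme reduces.

```python
import numpy as np

def two_body_rhs(s, mu=1.0):
    """Keplerian two-body acceleration in canonical units (mu = 1)."""
    r, v = s[:3], s[3:]
    return np.concatenate((v, -mu * r / np.linalg.norm(r) ** 3))

def rk4_propagate(s, dt, steps):
    """Classical explicit RK4: four force evaluations per step."""
    for _ in range(steps):
        k1 = two_body_rhs(s)
        k2 = two_body_rhs(s + 0.5 * dt * k1)
        k3 = two_body_rhs(s + 0.5 * dt * k2)
        k4 = two_body_rhs(s + dt * k3)
        s = s + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return s

# Circular orbit propagated for one period T = 2*pi
s0 = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])
s1 = rk4_propagate(s0, dt=2.0 * np.pi / 2000, steps=2000)
```

Returning to the starting state and conserving the orbit energy v²/2 - mu/r over one period are the standard sanity checks for any propagator.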
A stable and efficient numerical algorithm for unconfined aquifer analysis.
Keating, Elizabeth; Zyvoloski, George
2009-01-01
The nonlinearity of equations governing flow in unconfined aquifers poses challenges for numerical models, particularly in field-scale applications. Existing methods are often unstable, do not converge, or require extremely fine grids and small time steps. Standard modeling procedures such as automated model calibration and Monte Carlo uncertainty analysis typically require thousands of model runs. Stable and efficient model performance is essential to these analyses. We propose a new method that offers improvements in stability and efficiency and is relatively tolerant of coarse grids. It applies a strategy similar to that in the MODFLOW code to the solution of Richards' equation with a grid-dependent pressure/saturation relationship. The method imposes a contrast between horizontal and vertical permeability in gridblocks containing the water table, does not require "dry" cells to convert to inactive cells, and allows recharge to flow through relatively dry cells to the water table. We establish the accuracy of the method by comparison to an analytical solution for radial flow to a well in an unconfined aquifer with delayed yield. Using a suite of test problems, we demonstrate the efficiencies gained in speed and accuracy over two-phase simulations, and improved stability when compared to MODFLOW. The advantages for applications to transient unconfined aquifer analysis are clearly demonstrated by our examples. We also demonstrate applicability to mixed vadose zone/saturated zone applications, including transport, and find that the method shows great promise for these types of problems as well. PMID:19341374
A stable and efficient numerical algorithm for unconfined aquifer analysis
Keating, Elizabeth; Zyvoloski, George
2008-01-01
The non-linearity of equations governing flow in unconfined aquifers poses challenges for numerical models, particularly in field-scale applications. Existing methods are often unstable, do not converge, or require extremely fine grids and small time steps. Standard modeling procedures such as automated model calibration and Monte Carlo uncertainty analysis typically require thousands of forward model runs. Stable and efficient model performance is essential to these analyses. We propose a new method that offers improvements in stability and efficiency, and is relatively tolerant of coarse grids. It applies a strategy similar to that in the MODFLOW code to the solution of Richards' equation with a grid-dependent pressure/saturation relationship. The method imposes a contrast between horizontal and vertical permeability in gridblocks containing the water table. We establish the accuracy of the method by comparison to an analytical solution for radial flow to a well in an unconfined aquifer with delayed yield. Using a suite of test problems, we demonstrate the efficiencies gained in speed and accuracy over two-phase simulations, and improved stability when compared to MODFLOW. The advantages for applications to transient unconfined aquifer analysis are clearly demonstrated by our examples. We also demonstrate applicability to mixed vadose zone/saturated zone applications, including transport, and find that the method shows great promise for these types of problem as well.
Comparison of Fully Numerical Predictor-Corrector and Apollo Skip Entry Guidance Algorithms
NASA Astrophysics Data System (ADS)
Brunner, Christopher W.; Lu, Ping
2012-09-01
The dramatic increase in computational power since the Apollo program has enabled the development of numerical predictor-corrector (NPC) entry guidance algorithms that allow accurate on-board determination of a vehicle's trajectory. These algorithms are sufficiently mature to be flown. Compared with reference-trajectory-following algorithms, they are highly adaptive, especially in the face of extreme dispersions and off-nominal situations. The performance and reliability of entry guidance are critical to mission success. This paper compares the performance of a recently developed fully numerical predictor-corrector entry guidance (FNPEG) algorithm with that of the Apollo skip entry guidance. Through extensive dispersion testing, it is clearly demonstrated that the Apollo skip entry guidance algorithm would be inadequate in meeting the landing precision requirement for missions with medium (4000-7000 km) and long (>7000 km) downrange capability requirements under moderate dispersions, chiefly due to poor modeling of atmospheric drag. In the presence of large dispersions, a significant number of failures occur even for short-range missions due to the deviation from planned reference trajectories. The FNPEG algorithm, on the other hand, is able to ensure high landing precision in all cases tested. All factors considered, a strong case is made for adopting fully numerical algorithms for future skip entry missions.
A numerical comparison of discrete Kalman filtering algorithms - An orbit determination case study
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1976-01-01
An improved Kalman filter algorithm based on a modified Givens matrix triangularization technique is proposed for solving a nonstationary discrete-time linear filtering problem. The proposed U-D covariance factorization filter uses an orthogonal transformation technique; measurement and time updating of the U-D factors involve separate applications of Gentleman's fast square-root-free Givens rotations. The numerical stability and accuracy of the algorithm are compared with those of the conventional and stabilized Kalman filters and the Potter-Schmidt square-root filter by applying these techniques to a realistic planetary navigation problem (orbit determination for the Saturn approach phase of the Mariner Jupiter-Saturn Mission, 1977). The new algorithm is shown to combine the numerical precision of square-root filtering with the efficiency of the original Kalman algorithm.
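The U-D factorization that this class of filter propagates can be illustrated directly. The following sketch (plain NumPy, not the authors' code) factors a symmetric positive-definite covariance matrix as P = U·diag(D)·Uᵀ with U unit upper triangular, which is the decomposition the U-D filter maintains in place of P itself:

```python
import numpy as np

def ud_factorize(P):
    """Factor a symmetric positive-definite P as P = U @ diag(D) @ U.T,
    with U unit upper triangular and D the diagonal factor."""
    n = P.shape[0]
    U = np.eye(n)
    D = np.zeros(n)
    # work from the last column back to the first
    for j in range(n - 1, -1, -1):
        D[j] = P[j, j] - sum(D[k] * U[j, k] ** 2 for k in range(j + 1, n))
        for i in range(j):
            s = sum(D[k] * U[i, k] * U[j, k] for k in range(j + 1, n))
            U[i, j] = (P[i, j] - s) / D[j]
    return U, D

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
P = A @ A.T + 4.0 * np.eye(4)        # a symmetric positive-definite test matrix
U, D = ud_factorize(P)
assert np.allclose(U @ np.diag(D) @ U.T, P)
```

Because D stays positive whenever P is positive definite, propagating U and D avoids the loss of symmetry and positive definiteness that plagues the conventional covariance update.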
NASA Astrophysics Data System (ADS)
Barnes, T.
In this article we review numerical studies of the quantum Heisenberg antiferromagnet on a square lattice, which is a model of the magnetic properties of the undoped “precursor insulators” of the high temperature superconductors. We begin with a brief pedagogical introduction and then discuss zero and nonzero temperature properties and compare the numerical results to analytical calculations and to experiment where appropriate. We also review the various algorithms used to obtain these results, and discuss algorithm developments and improvements in computer technology which would be most useful for future numerical work in this area. Finally we list several outstanding problems which may merit further investigation.
Extremal polynomials and methods of optimization of numerical algorithms
Lebedev, V I
2004-10-31
Chebyshëv-Markov-Bernstein-Szegö polynomials C_n(x) extremal on [-1,1] with weight functions w(x) = (1+x)^α (1-x)^β / √(S_l(x)), where α, β = 0, 1/2 and S_l(x) = ∏_{k=1}^{m} (1 - c_k T_{l_k}(x)) > 0, are considered. A universal formula for their representation in trigonometric form is presented. Optimal distributions of the nodes of the weighted interpolation and explicit quadrature formulae of Gauss, Markov, Lobatto, and Rado types are obtained for integrals with weight p(x) = w²(x)(1-x²)^{-1/2}. The parameters of optimal Chebyshëv iterative methods reducing the error optimally by comparison with the initial error defined in another norm are determined. For each stage of the Fedorenko-Bakhvalov method iteration parameters are determined which take account of the results of the previous calculations. Chebyshëv filters with weight are constructed. Iterative methods of the solution of equations containing compact operators are studied.
Extremal polynomials and methods of optimization of numerical algorithms
NASA Astrophysics Data System (ADS)
Lebedev, V. I.
2004-10-01
Chebyshëv-Markov-Bernstein-Szegö polynomials C_n(x) extremal on \\lbrack -1,1 \\rbrack with weight functions w(x)=(1+x)^\\alpha(1- x)^\\beta/\\sqrt{S_l(x)} where \\alpha,\\beta=0,\\frac12 and S_l(x)=\\prod_{k=1}^m(1-c_kT_{l_k}(x))>0 are considered. A universal formula for their representation in trigonometric form is presented. Optimal distributions of the nodes of the weighted interpolation and explicit quadrature formulae of Gauss, Markov, Lobatto, and Rado types are obtained for integrals with weight p(x)=w^2(x)(1-x^2)^{-1/2}. The parameters of optimal Chebyshëv iterative methods reducing the error optimally by comparison with the initial error defined in another norm are determined. For each stage of the Fedorenko-Bakhvalov method iteration parameters are determined which take account of the results of the previous calculations. Chebyshëv filters with weight are constructed. Iterative methods of the solution of equations containing compact operators are studied.
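For the simplest of these weights (α = β = 0, S_l ≡ 1), the optimal nodes and weights reduce to the classical Gauss-Chebyshev rule for the weight (1-x²)^{-1/2}. A minimal NumPy illustration, not taken from the paper:

```python
import numpy as np

# Gauss-Chebyshev nodes and weights for the weight 1/sqrt(1 - x^2) on [-1, 1]
x, w = np.polynomial.chebyshev.chebgauss(8)

# the 8-point rule integrates x^2 / sqrt(1 - x^2) exactly; the integral is pi/2
assert abs(np.sum(w * x**2) - np.pi / 2) < 1e-12

# the weights sum to the integral of the weight function itself, which is pi
assert abs(w.sum() - np.pi) < 1e-12
```

The nodes are the Chebyshev points cos((2k+1)π/2n) and all weights equal π/n, which is the sense in which the distribution of nodes is "optimal" for this weight.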
On the impact of communication complexity in the design of parallel numerical algorithms
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1984-01-01
This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
A Parallel Compact Multi-Dimensional Numerical Algorithm with Aeroacoustics Applications
NASA Technical Reports Server (NTRS)
Povitsky, Alex; Morris, Philip J.
1999-01-01
In this study we propose a novel method to parallelize high-order compact numerical algorithms for the solution of three-dimensional PDEs (Partial Differential Equations) in a space-time domain. For this numerical integration most of the computer time is spent in the computation of spatial derivatives at each stage of the Runge-Kutta temporal update. The most efficient direct method to compute spatial derivatives on a serial computer is a version of Gaussian elimination for narrow linear banded systems known as the Thomas algorithm. In a straightforward pipelined implementation of the Thomas algorithm, processors are idle due to its forward and backward recurrences. To utilize processors during this time, we propose to use them for either non-local data-independent computations, solving lines in the next spatial direction, or local data-dependent computations by the Runge-Kutta method. To achieve this goal, control of processor communication and computations by a static schedule is adopted. Thus, our parallel code is driven by a communication and computation schedule instead of the usual "creative programming" approach. The parallelization speed-up obtained with the novel algorithm is about twice that of the standard pipelined algorithm and close to that of the explicit DRP algorithm.
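The forward and backward recurrences that force the pipelining discussed above are visible in a generic serial Thomas algorithm; a sketch (not the authors' pipelined code), assuming the usual three-diagonal storage:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system.
    a: sub-diagonal (n-1), b: diagonal (n), c: super-diagonal (n-1), d: rhs (n)."""
    n = len(b)
    cp = np.zeros(n - 1)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):            # forward elimination: the forward recurrence
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution: the backward recurrence
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n = 6
a = -np.ones(n - 1); b = 4.0 * np.ones(n); c = -np.ones(n - 1)
d = np.arange(1.0, n + 1)
A = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)
x = thomas(a, b, c, d)
assert np.allclose(A @ x, d)
```

Each loop iteration depends on the previous one, so a single line offers no parallelism; this is precisely why idle processors must be filled with work from other lines or from the Runge-Kutta update.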
Dong, S.
2015-02-15
We present a family of physical formulations, and a numerical algorithm, based on a class of general order parameters for simulating the motion of a mixture of N (N⩾2) immiscible incompressible fluids with given densities, dynamic viscosities, and pairwise surface tensions. The N-phase formulations stem from a phase field model we developed in a recent work based on the conservation of mass and momentum and the second law of thermodynamics. The introduction of general order parameters leads to an extremely strongly-coupled system of (N−1) phase field equations. On the other hand, the general form enables one to compute the N-phase mixing energy density coefficients in an explicit fashion in terms of the pairwise surface tensions. We show that the increased complexity in the form of the phase field equations associated with general order parameters in actuality does not cause essential computational difficulties. Our numerical algorithm reformulates the (N−1) strongly-coupled phase field equations for general order parameters into 2(N−1) Helmholtz-type equations that are completely decoupled from one another. This leads to a computational complexity comparable to that of the simplified phase field equations associated with a certain special choice of the order parameters. We demonstrate the capabilities of the method developed herein using several test problems involving multiple fluid phases and large contrasts in densities and viscosities among the multitude of fluids. In particular, by comparing simulation results with the Langmuir–de Gennes theory of floating liquid lenses we show that the method using general order parameters produces physically accurate results for multiple fluid phases.
A new free-surface stabilization algorithm for geodynamical modelling: Theory and numerical tests
NASA Astrophysics Data System (ADS)
Andrés-Martínez, Miguel; Morgan, Jason P.; Pérez-Gussinyé, Marta; Rüpke, Lars
2015-09-01
The surface of the solid Earth is effectively stress free in its subaerial portions, and hydrostatic beneath the oceans. Unfortunately, this type of boundary condition is difficult to treat computationally, and for computational convenience, numerical models have often used simpler approximations that do not involve a normal stress-loaded, shear-stress free top surface that is free to move. Viscous flow models with a computational free surface typically confront stability problems when the time step is bigger than the viscous relaxation time. The small time step required for stability (< 2 kyr) makes this type of model computationally intensive, so there remains a need to develop strategies that mitigate the stability problem by making larger (at least ∼10 kyr) time steps stable and accurate. Here we present a new free-surface stabilization algorithm for finite element codes which solves the stability problem by adding to the Stokes formulation an intrinsic penalization term equivalent to a portion of the future load at the surface nodes. Our algorithm is straightforward to implement and can be used with either Eulerian or Lagrangian grids. It includes α and β parameters to control, respectively, the vertical and the horizontal slope-dependent penalization terms, and uses Uzawa-like iterations to solve the resulting system at a cost comparable to that of a formulation without a stress-free surface. Four tests were carried out in order to study the accuracy and the stability of the algorithm: (1) a decaying first-order sinusoidal topography test, (2) a decaying high-order sinusoidal topography test, (3) a Rayleigh-Taylor instability test, and (4) a steep-slope test. For these tests, we investigate which α and β parameters give the best results in terms of both accuracy and stability. We also compare the accuracy and the stability of our algorithm with a similar implicit approach recently developed by Kaus et al. (2010). We find that our algorithm is slightly more accurate
Evaluation of registration, compression and classification algorithms. Volume 1: Results
NASA Technical Reports Server (NTRS)
Jayroe, R.; Atkinson, R.; Callas, L.; Hodges, J.; Gaggini, B.; Peterson, J.
1979-01-01
The registration, compression, and classification algorithms were selected on the basis that such a group would include most of the different and commonly used approaches. The results of the investigation indicate clearcut, cost effective choices for registering, compressing, and classifying multispectral imagery.
Numerical Results of Earth's Core Accumulation 3-D Modelling
NASA Astrophysics Data System (ADS)
Khachay, Yurie; Anfilogov, Vsevolod
2013-04-01
For a long time the most favored model was the mega-impact model, in which the early formation of the Earth's core and mantle was the consequence of a collision between the forming protoplanet and a body of Mercurial mass. But all dynamical models of the Earth's accumulation, together with estimates from the Pb-Pb system, lead to the conclusion that the accumulation of the planet lasted about 1 billion years, whereas isotopic results from the W-Hf system testify to a very early (5-10 million years) separation of the geochemical reservoirs of the core and mantle. In [1,3] it is shown that the energy dissipated by the decay of short-lived radioactive elements, first of all ²⁶Al, is sufficient to heat even small bodies with dimensions of about (50-100) km up to the iron melting temperature, so that a principally new differentiation mechanism can be realized. The inner, mainly iron, parts of the melted pre-planetary bodies can merge, while the cold silicate fragments return to the supply zone. Only after its gravitational radius has increased can the growing region of the future core also retain the silicate envelope fragments. All existing dynamical accumulation models are constructed using a spherically symmetric model; hence, for understanding the further evolution of the planet it is important to trace the origin and evolution of the heterogeneities that arise during the accumulation stage. In this paper we model the distributions of temperature, pressure, and flow velocity of matter in a block of a 3D spherical body with a growing radius. The boundary problem is solved by the finite-difference method for the system of equations describing the accumulation process: the Safronov equation, the momentum-balance (Navier-Stokes) equation, the equation for above-lithostatic pressure, and the heat-conduction equation, in velocity-pressure variables using the Boussinesq approximation. The numerical algorithm of the problem solution in
Some theoretical and numerical results for delayed neural field equations
NASA Astrophysics Data System (ADS)
Faye, Grégory; Faugeras, Olivier
2010-05-01
In this paper we study neural field models with delays which define a useful framework for modeling macroscopic parts of the cortex involving several populations of neurons. Nonlinear delayed integro-differential equations describe the spatio-temporal behavior of these fields. Using methods from the theory of delay differential equations, we show the existence and uniqueness of a solution of these equations. A Lyapunov analysis gives us sufficient conditions for the solutions to be asymptotically stable. We also present a fairly detailed study of the numerical computation of these solutions. This is, to our knowledge, the first time that a serious analysis of the problem of the existence and uniqueness of a solution of these equations has been performed. Another original contribution of ours is the definition of a Lyapunov functional and the result of stability it implies. We illustrate our numerical schemes on a variety of examples that are relevant to modeling in neuroscience.
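As a toy illustration of numerically integrating a delayed equation (a scalar stand-in for the integro-differential fields studied here, not the authors' scheme), a fixed-step Euler method with a constant-history buffer might look like the following; the function name and parameters are illustrative only:

```python
import math

def euler_dde(f, x0, tau, t_end, dt):
    """Fixed-step Euler for x'(t) = f(x(t), x(t - tau)),
    with constant history x(t) = x0 for t <= 0."""
    n_delay = int(round(tau / dt))
    xs = [x0] * (n_delay + 1)        # history buffer covering one delay interval
    for _ in range(int(round(t_end / dt))):
        x_cur = xs[-1]
        x_del = xs[-1 - n_delay]     # delayed state x(t - tau)
        xs.append(x_cur + dt * f(x_cur, x_del))
    return xs[-1]

# sanity check: with tau = 0 this reduces to x' = -x, whose solution is exp(-t)
x1 = euler_dde(lambda x, xd: -xd, 1.0, tau=0.0, t_end=1.0, dt=1e-3)
assert abs(x1 - math.exp(-1.0)) < 1e-3
```

The essential point carried over from the paper's setting is that the state of a delayed system is the whole history segment, not a single value, which is why a buffer of past values must be stored and indexed.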
Integrating Numerical Groundwater Modeling Results With Geographic Information Systems
NASA Astrophysics Data System (ADS)
Witkowski, M. S.; Robinson, B. A.; Linger, S. P.
2001-12-01
Many different types of data are used to create numerical models of flow and transport of groundwater in the vadose zone. Results from water balance studies, infiltration models, hydrologic properties, and digital elevation models (DEMs) are examples of such data. Because input data come in a variety of formats, for consistency the data need to be assembled in a coherent fashion on a single platform. Through the use of a geographic information system (GIS), all data sources can effectively be integrated on one platform to store, retrieve, query, and display data. In our vadose zone modeling studies in support of Los Alamos National Laboratory's Environmental Restoration Project, we employ a GIS comprising a RAID storage device, an Oracle database, ESRI's spatial database engine (SDE), ArcView GIS, and custom GIS tools for three-dimensional (3D) analysis. We store traditional GIS data, such as contours, historical building footprints, and study area locations, as points, lines, and polygons with attributes. Numerical flow and transport model results from the Finite Element Heat and Mass Transfer Code (FEHM) are stored as points with attributes such as fluid saturation, pressure, or contaminant concentration at a given location. We overlay traditional types of GIS data with numerical model results, thereby allowing us to better build conceptual models and perform spatial analyses. We have also developed specialized analysis tools to assist in the data and model analysis process. This approach provides an integrated framework for performing tasks such as comparing the model to data and understanding the relationship of model predictions to existing contaminant source locations and water supply wells. Our process of integrating GIS and numerical modeling results allows us to answer a wide variety of questions about our conceptual model design: - Which set of locations should be identified as contaminant sources based on known historical building operations
NASA Astrophysics Data System (ADS)
Kim, J.; Sonnenthal, E. L.; Rutqvist, J.
2011-12-01
Rigorous modeling of the coupling between fluid flow, heat flow, and geomechanics (thermo-poro-mechanics) in fractured porous media is one of the important and difficult topics in geothermal reservoir simulation, because the physics are highly nonlinear and strongly coupled. Coupled fluid/heat flow and geomechanics are investigated using the multiple interacting continua (MINC) method as applied to naturally fractured media. In this study, we generalize the constitutive relations for the isothermal elastic dual-porosity model proposed by Berryman (2002) to those for the non-isothermal elastic/elastoplastic multiple-porosity model, and derive the coupling coefficients of coupled fluid/heat flow and geomechanics and the constraints on those coefficients. When the off-diagonal terms of the total compressibility matrix for the flow problem are zero, the upscaled drained bulk modulus for geomechanics becomes the harmonic average of the drained bulk moduli of the multiple continua. In this case, the drained elastic/elastoplastic moduli for mechanics are determined by a combination of the drained moduli and volume fractions of the multiple porosity materials. We also determine a relation between the local strains of all multiple porosity materials in a gridblock and the global strain of the gridblock, from which we can track local and global elastic/plastic variables. For elastoplasticity, the return mapping is performed for all multiple porosity materials in the gridblock. For numerical implementation, we employ and extend the fixed-stress sequential method of the single-porosity model to coupled fluid/heat flow and geomechanics in multiple-porosity systems, because it provides numerical stability and high accuracy. This sequential scheme can be easily implemented by using a porosity function and its corresponding porosity correction, making use of existing robust flow and geomechanics simulators. We implemented the proposed model and numerical algorithm in the reactive transport simulator
The Effect of Pansharpening Algorithms on the Resulting Orthoimagery
NASA Astrophysics Data System (ADS)
Agrafiotis, P.; Georgopoulos, A.; Karantzalos, K.
2016-06-01
This paper evaluates the geometric effects of pansharpening algorithms on automatically generated DSMs, and thus on the resulting orthoimagery, through a quantitative assessment of the accuracy of the end products. The main motivation is the fact that, for automatically generated digital surface models, an image correlation step is employed to extract correspondences between the overlapping images. Their accuracy and reliability is therefore strictly related to image quality, while pansharpening may result in lower image quality, which may affect the DSM generation and the resulting orthoimage accuracy. To this end, an iterative methodology was applied in order to combine the process described by Agrafiotis and Georgopoulos (2015) with different pansharpening algorithms and check the accuracy of orthoimagery resulting from pansharpened data. Results are thoroughly examined and statistically analysed. The overall evaluation indicated that the pansharpening process did not affect the geometric accuracy of the resulting DSM with a 10 m interval, nor the resulting orthoimagery. Although some residuals were observed in the orthoimages, their magnitude cannot adversely affect the accuracy of the final orthoimagery.
A numerical algorithm for the explicit calculation of SU(N) and SL(N,C) Clebsch-Gordan coefficients
Alex, Arne; Delft, Jan von; Kalus, Matthias; Huckleberry, Alan
2011-02-15
We present an algorithm for the explicit numerical calculation of SU(N) and SL(N,C) Clebsch-Gordan coefficients, based on the Gelfand-Tsetlin pattern calculus. Our algorithm is well suited for numerical implementation; we include a computer code in an appendix. Our exposition presumes only familiarity with the representation theory of SU(2).
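The SU(2) base case, with which the authors presume familiarity, can be cross-checked against an existing computer-algebra implementation. A sketch using SymPy's Clebsch-Gordan support (an independent library, not the code from the paper's appendix):

```python
from sympy import Rational
from sympy.physics.quantum.cg import CG

half = Rational(1, 2)

# <j1 m1; j2 m2 | j3 m3> for two spin-1/2 particles coupling to the
# triplet state |1, 0>:  <1/2, 1/2; 1/2, -1/2 | 1, 0> = 1/sqrt(2)
coeff = CG(half, half, half, -half, 1, 0).doit()
assert abs(float(coeff) - 2 ** -0.5) < 1e-12
```

For SU(N) with N > 2 no such ready-made routine is standard, which is the gap the Gelfand-Tsetlin-based algorithm of the paper fills.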
Slump Flows inside Pipes: Numerical Results and Comparison with Experiments
NASA Astrophysics Data System (ADS)
Malekmohammadi, S.; Naccache, M. F.; Frigaard, I. A.; Martinez, D. M.
2008-07-01
In this work an analysis of the buoyancy-driven slumping flow inside a pipe is presented. This flow usually occurs when an oil well is sealed by a plug cementing process, in which a cement plug is placed inside a pipe filled with a lower-density fluid, displacing it towards the upper cylinder wall. Both the cement and the surrounding fluid exhibit non-Newtonian behavior: the cement is viscoplastic, and the surrounding fluid is shear thinning. A numerical analysis was performed to evaluate the effects of some governing parameters on the slump length development. The conservation equations of mass and momentum were solved via a finite volume technique, using Fluent software (Ansys Inc.). The Volume of Fluid surface-tracking method was used to obtain the interface between the fluids and the slump length as a function of time. Results were obtained for different fluid density differences, fluid rheologies, and pipe inclinations, and the effects of these parameters on the interface shape and on the slump-length-versus-time curve were analyzed. Moreover, the numerical results were compared to experimental ones; some differences are observed, possibly due to chemical effects at the interface.
Synthetic jet parameter identification and numerical results validation
NASA Astrophysics Data System (ADS)
Sabbatini, Danilo; Rimasauskiene, Ruta; Matejka, Milan; Kurowski, Marcin; Wandowski, Tomasz; Malinowski, Paweł; Doerffer, Piotr
2012-06-01
The design of a synthetic jet requires careful identification of the components' parameters in order to perform accurate numerical simulations. This identification must be done by means of a series of measurements that, due to the small dimensions of the components, must be non-contact techniques. The activities described in this paper have been performed in the frame of the STA-DY-WI-CO project, whose purpose is to design a synthetic jet and demonstrate its effectiveness and efficiency in a real application. To measure the energy saving due to the synthetic jet's effect on separation, the increased performance of the profile must be compared to the energy absorbed by the actuator and the weight of the system. In the design phase a series of actuators has been considered, as well as a series of cavity layouts, in order to obtain the most effective, efficient and durable package. The modal characteristics of the piezoelectric component were assessed by means of tests performed with a 3D scanning laser vibrometer, measuring the frequency response to voltage excitation. Once the effects of the parameters were analyzed and the components and layout chosen, the system was dimensioned by means of numerical simulations. The outcome of the simulation is the effect of the synthetic jet, in an assumed flow, for the selected profile. The numerical results on the separated flow field with a recirculating area were validated by means of tests performed in an Eiffel-type wind tunnel. The last test performed on the synthetic jet aimed to understand the acoustic impact; noise measurements were performed to complete the analysis.
François, Marianne M.
2015-05-28
A review of recent advances made in numerical methods and algorithms within the volume tracking framework is presented. The volume tracking method, also known as the volume-of-fluid method has become an established numerical approach to model and simulate interfacial flows. Its advantage is its strict mass conservation. However, because the interface is not explicitly tracked but captured via the material volume fraction on a fixed mesh, accurate estimation of the interface position, its geometric properties and modeling of interfacial physics in the volume tracking framework remain difficult. Several improvements have been made over the last decade to address these challenges. In this study, the multimaterial interface reconstruction method via power diagram, curvature estimation via heights and mean values and the balanced-force algorithm for surface tension are highlighted.
BLUM,T.
1999-09-14
The RIKEN BNL Research Center hosted its 19th workshop April 27 through May 1, 1999. The topic was Numerical Algorithms at Non-Zero Chemical Potential. QCD at a non-zero chemical potential (non-zero density) poses a long-standing unsolved challenge for lattice gauge theory. Indeed, it is the primary unresolved issue in the fundamental formulation of lattice gauge theory. The chemical potential renders conventional lattice actions complex, practically excluding the usual Monte Carlo techniques, which rely on a positive definite measure for the partition function. This "sign" problem appears in a wide range of physical systems, ranging from strongly coupled electronic systems to QCD. The lack of a viable numerical technique at non-zero density is particularly acute since new exotic "color superconducting" phases of quark matter have recently been predicted in model calculations. A first-principles confirmation of the phase diagram is desirable since experimental verification is not expected soon. At the workshop several proposals for new algorithms were made: cluster algorithms, direct simulation of Grassmann variables, and a bosonization of the fermion determinant. All generated considerable discussion and seem worthy of continued investigation. Several interesting results using conventional algorithms were also presented: condensates in four-fermion models, SU(2) gauge theory in fundamental and adjoint representations, and lessons learned from strong coupling, non-zero temperature, and heavy quarks applied to non-zero-density simulations.
Zhu, Xinjun; Chen, Zhanqing; Tang, Chen; Mi, Qinghua; Yan, Xiusheng
2013-03-20
In this paper, we are concerned with denoising of experimentally obtained electronic speckle pattern interferometry (ESPI) speckle fringe patterns with poor quality. We extend the application of two existing oriented partial differential equation (PDE) filters, the second-order single oriented PDE filter and the double oriented PDE filter, to two experimentally obtained ESPI speckle fringe patterns with very poor quality, and compare them with other efficient filtering methods, including the adaptive weighted filter, the improved nonlinear complex diffusion PDE, and the windowed Fourier transform method. All five filters have been shown to be efficient denoising methods in previously published comparative analyses. The experimental results demonstrate that the two oriented PDE models are applicable to low-quality ESPI speckle fringe patterns. Then, to address the main shortcoming of the two oriented PDE models, we develop numerically fast algorithms based on a Gauss-Seidel strategy for both models. The proposed numerical algorithms are capable of accelerating the convergence greatly, and perform significantly better in terms of computational efficiency. Our numerically fast algorithms extend automatically to some other PDE filtering models. PMID:23518722
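The Gauss-Seidel strategy these fast algorithms rely on can be sketched generically (an illustrative linear-system version, not the authors' PDE filter): each unknown is updated in place using the freshest available values of its neighbors, rather than waiting for a full sweep as in the Jacobi iteration.

```python
import numpy as np

def gauss_seidel(A, b, x0, sweeps):
    """In-place Gauss-Seidel sweeps for A x = b; each unknown is updated
    using already-updated values from the current sweep."""
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

# diagonally dominant system, for which Gauss-Seidel is guaranteed to converge
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = gauss_seidel(A, b, np.zeros(3), sweeps=50)
assert np.allclose(A @ x, b)
```

In a discretized PDE filter, the same idea is applied pixel by pixel, which is what accelerates convergence relative to a fully explicit update.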
Particle-In-Cell Multi-Algorithm Numerical Test-Bed
NASA Astrophysics Data System (ADS)
Meyers, M. D.; Yu, P.; Tableman, A.; Decyk, V. K.; Mori, W. B.
2015-11-01
We describe a numerical test-bed that allows for the direct comparison of different numerical simulation schemes using only a single code. It is built from the UPIC Framework, which is a set of codes and modules for constructing parallel PIC codes. In this test-bed code, Maxwell's equations are solved in Fourier space in two dimensions. One can readily examine the numerical properties of a real space finite difference scheme by including its operators' Fourier space representations in the Maxwell solver. The fields can be defined at the same location in a simulation cell or can be offset appropriately by half-cells, as in the Yee finite difference time domain scheme. This allows for the accurate comparison of numerical properties (dispersion relations, numerical stability, etc.) across finite difference schemes, or against the original spectral scheme. We have also included different options for the charge and current deposits, including a strict charge conserving current deposit. The test-bed also includes options for studying the analytic time domain scheme, which eliminates numerical dispersion errors in vacuum. We will show examples from the test-bed that illustrate how the properties of some numerical instabilities vary between different PIC algorithms. Work supported by the NSF grant ACI 1339893 and DOE grant DE-SC0008491.
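The kind of dispersion comparison the test-bed automates can be illustrated with the textbook dispersion relation of the 1-D Yee scheme (a hand-rolled sketch, not part of the UPIC Framework; function and parameter names are ours):

```python
import numpy as np

def yee_omega(k, dx, dt, c=1.0):
    """Numerical frequency of the 1-D Yee (FDTD) scheme, from its
    dispersion relation  sin(w dt/2) / (c dt) = sin(k dx/2) / dx."""
    s = (c * dt / dx) * np.sin(k * dx / 2.0)
    return (2.0 / dt) * np.arcsin(s)

dx = 1.0
k = 0.5 * np.pi / dx                      # a well-resolved mode

# at the 1-D "magic" time step c dt = dx, the scheme is dispersion-free: w = c k
assert np.isclose(yee_omega(k, dx, dx), k)

# at smaller time steps the numerical wave runs slow: w < c k
assert yee_omega(k, dx, 0.5 * dx) < k
```

Plugging a different stencil's Fourier-space operators into the same relation is exactly how such a test-bed exposes the varying dispersion and stability properties of different PIC field solvers.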
Numerical algorithms for steady and unsteady incompressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Hafez, Mohammed; Dacles, Jennifer
1989-01-01
The numerical analysis of the incompressible Navier-Stokes equations is becoming an important tool in the understanding of fluid flow problems encountered in research as well as in industry. With the advent of supercomputers, more realistic problems can be studied with a wider choice of numerical algorithms. An alternative formulation is presented for viscous incompressible flows. The incompressible Navier-Stokes equations are cast in a velocity/vorticity formulation, which consists of solving Poisson equations for the velocity components together with the vorticity transport equation. Two numerical algorithms for steady two-dimensional laminar flows are presented. The first method is based on the actual partial differential equations and uses a finite-difference approximation of the governing equations on a staggered grid. The second method uses a finite element discretization, with the vorticity transport equation approximated using a Galerkin approximation and the Poisson equations using a least-squares method. The equations are solved efficiently using Newton's method and a banded direct matrix solver (LINPACK). The method is extended to steady three-dimensional laminar flows and applied to a cubic driven cavity using finite difference schemes and a staggered grid arrangement on a Cartesian mesh. The equations are solved iteratively using a plane zebra relaxation scheme. Currently, a two-dimensional, unsteady algorithm is being developed using a generalized coordinate system. The equations are discretized using a finite-volume approach. This work will then be extended to three-dimensional flows.
Analysis of Numerical Simulation Results of LIPS-200 Lifetime Experiments
NASA Astrophysics Data System (ADS)
Chen, Juanjuan; Zhang, Tianping; Geng, Hai; Jia, Yanhui; Meng, Wei; Wu, Xianming; Sun, Anbang
2016-06-01
Accelerator grid structural and electron backstreaming failures are the most important factors affecting an ion thruster's lifetime. During the thruster's operation, Charge Exchange Xenon (CEX) ions are generated from collisions between the plasma and neutral atoms. These CEX ions frequently strike the accelerator grid's barrel and wall, causing failures of the grid system. In order to validate whether the 20 cm Lanzhou Ion Propulsion System (LIPS-200) satisfies the application requirement of China's communication satellite platform for North-South Station Keeping (NSSK), this study analyzed the measured depth of the pit/groove on the accelerator grid's wall and the variation of the aperture diameter, and estimated the operating lifetime of the ion thruster. Differently from the previous method, in this paper the experimental results after 5500 h of accumulated operation of the LIPS-200 ion thruster are presented first. Then, based on these results, theoretical analysis and numerical calculations were performed to predict the on-orbit lifetime of LIPS-200. The results obtained allow a more accurate calculation of the reliability and analysis of the failure modes of the ion thruster. The predicted lifetime of LIPS-200 was about 13218.1 h, which comfortably satisfies the required lifetime of 11000 h.
Numerical Study of Three-Dimensional Flows Using Unfactored Upwind-Relaxation Sweeping Algorithm
NASA Astrophysics Data System (ADS)
Zha, G.-C.; Bilgen, E.
1996-05-01
The linear stability analysis of the unfactored upwind relaxation-sweeping (URS) algorithm for 3D flow field calculations has been carried out, and it is shown that the URS algorithm is unconditionally stable. The algorithm is independent of the selection of the global sweeping direction. However, choosing the direction with a relatively low variable gradient as the global sweeping direction results in a higher degree of stability. Three-dimensional compressible Euler equations are solved by using the implicit URS algorithm to study internal flows of a non-axisymmetric nozzle with a circular-to-rectangular transition duct and complex shock wave structures for a 3D channel flow. The efficiency and robustness of the URS algorithm have been demonstrated.
NASA Astrophysics Data System (ADS)
Degtyarev, Alexander; Khramushin, Vasily
2016-02-01
The paper deals with the computer implementation of direct computational experiments in fluid mechanics, constructed on the basis of the approach developed by the authors. The proposed approach allows the use of explicit numerical schemes, which is an important condition for increasing the efficiency of the developed algorithms through numerical procedures with natural parallelism. The paper examines the main objects and operations that allow one to manage computational experiments and monitor the status of the computation process. Special attention is given to a) realization of tensor representations of numerical schemes for direct simulation; b) realization of the representation of large particles of a continuous medium motion in two coordinate systems (global and mobile); c) computing operations in the projections of coordinate systems, and direct and inverse transformations between these systems. Particular attention is paid to the use of hardware and software of modern computer systems.
Adaptively resizing populations: Algorithm, analysis, and first results
NASA Technical Reports Server (NTRS)
Smith, Robert E.; Smuda, Ellen
1993-01-01
Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of this advice involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested. This algorithm is based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically, and simulated with expected value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GAs.
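A variance-based sizing rule in the spirit described above can be sketched as follows. The function, its constants, and the clamping bounds are all illustrative assumptions, not the paper's algorithm: the idea is simply that the population should be large enough for the observed fitness gap between competing schemata to exceed a few standard errors of the schema-fitness noise.

```python
def population_size(fitness_var, fitness_gap, z=2.0, lo=10, hi=1000):
    """Hypothetical variance-based sizing rule (illustrative only):
    choose the smallest population for which the fitness gap between
    competing schemata exceeds z standard errors of the schema-fitness
    noise, clamped to [lo, hi]."""
    if fitness_gap <= 0:
        return hi                       # no detectable signal: max out
    needed = int(z * z * fitness_var / (fitness_gap * fitness_gap)) + 1
    return max(lo, min(hi, needed))
```

High schema-fitness variance relative to the fitness gap calls for a larger population; low variance lets the GA shrink the population and save evaluations.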
Numerical algorithms for computations of feedback laws arising in control of flexible systems
NASA Technical Reports Server (NTRS)
Lasiecka, Irena
1989-01-01
Several continuous models describing flexible structures with boundary or point control/observation will be examined. Issues related to the computation of feedback laws are examined (particularly stabilizing feedbacks), with sensors and actuators located either on the boundary or at specific point locations of the structure. One of the main difficulties is the great sensitivity of the system (hyperbolic systems with unbounded control actions) with respect to perturbations caused either by uncertainty of the model or by the errors introduced in implementing numerical algorithms. Thus, special care must be taken in the choice of appropriate numerical schemes which eventually lead to implementable finite dimensional solutions. Finite dimensional algorithms are constructed on the basis of an a priori analysis of the properties of the original, continuous (infinite dimensional) systems, with the following criteria in mind: (1) convergence and stability of the algorithms and (2) robustness (reasonable insensitivity with respect to the unknown parameters of the systems). Examples with mixed finite element methods and spectral methods are provided.
Upper cervical injuries: Clinical results using a new treatment algorithm
Joaquim, Andrei F.; Ghizoni, Enrico; Tedeschi, Helder; Yacoub, Alexandre R. D.; Brodke, Darrel S.; Vaccaro, Alexander R.; Patel, Alpesh A.
2015-01-01
Introduction: Upper cervical injuries (UCI) have a wide range of radiological and clinical presentations due to the unique complex bony, ligamentous and vascular anatomy. We recently proposed a rational approach in an attempt to unify prior classification systems and guide treatment. In this paper, we evaluate the clinical results of our algorithm for UCI treatment. Materials and Methods: A prospective cohort series of patients with UCI was performed. The primary outcome was the American Spinal Injury Association (ASIA) Impairment Scale (AIS). Surgical treatment was proposed based on our protocol: ligamentous injuries (abnormal misalignment, perched or locked facets, increased atlanto-dens interval) were treated surgically. Bone fractures without ligamentous injuries were treated with a rigid cervical orthosis, with the exception of fractures in the dens base with risk factors for non-union. Results: Twenty-three patients treated initially conservatively had some follow-up (mean of 171 days, range from 60 to 436 days). All of them were neurologically intact. None of the patients developed a new neurological deficit. Fifteen patients were initially surgically treated (mean of 140 days of follow-up, ranging from 60 to 270 days). In the surgical group, preoperatively, 11 (73.3%) patients were AIS E, 2 (13.3%) AIS C and 2 (13.3%) AIS D. At the final follow-up, the AIS scores were: 13 (86.6%) AIS E and 2 (13.3%) AIS D. None of the patients had neurological worsening during the follow-up. Conclusions: This prospective cohort suggests that our UCI treatment algorithm can be safely used. Further prospective studies with longer follow-up are necessary to further establish its clinical validity and safety. PMID:25788816
Comparative Study of Algorithms for the Numerical Simulation of Lattice QCD
Luz, Fernando H. P.; Mendes, Tereza
2010-11-12
Large-scale numerical simulations are the prime method for a nonperturbative study of QCD from first principles. Although the lattice simulation of the pure-gauge (or quenched-QCD) case may be performed very efficiently on parallel machines, there are several additional difficulties in the simulation of the full-QCD case, i.e. when dynamical quark effects are taken into account. We discuss the main aspects of full-QCD simulations, describing the most common algorithms. We present a comparative analysis of performance for two versions of the hybrid Monte Carlo method (the so-called R and RHMC algorithms), as provided in the MILC software package. We consider two degenerate flavors of light quarks in the staggered formulation, having in mind the case of finite-temperature QCD.
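Both the R and RHMC algorithms build on the basic hybrid (Hamiltonian) Monte Carlo update: a leapfrog integration of fictitious molecular dynamics followed by a Metropolis accept/reject step. A minimal sketch on a toy Gaussian target, not the MILC lattice code:

```python
import numpy as np

def leapfrog(q, p, grad_U, eps, L):
    """Leapfrog integration of Hamiltonian dynamics for L steps of size eps."""
    p = p - 0.5 * eps * grad_U(q)
    for _ in range(L - 1):
        q = q + eps * p
        p = p - eps * grad_U(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)
    return q, -p                  # negate momentum for reversibility

def hmc_step(q, U, grad_U, eps=0.1, L=20, rng=np.random):
    """One hybrid Monte Carlo update for target density exp(-U(q))."""
    p = rng.standard_normal(q.shape)              # refresh momenta
    H0 = U(q) + 0.5 * np.dot(p, p)
    q_new, p_new = leapfrog(q, p, grad_U, eps, L)
    H1 = U(q_new) + 0.5 * np.dot(p_new, p_new)
    if rng.random() < np.exp(min(0.0, H0 - H1)):  # Metropolis accept/reject
        return q_new
    return q
```

The accept/reject step corrects exactly for the finite-step-size error of the integrator, which is what makes the method exact in the sense relevant to the algorithm comparisons above.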
Interaction between subducting plates: results from numerical and analogue modeling
NASA Astrophysics Data System (ADS)
Kiraly, Agnes; Capitanio, Fabio A.; Funiciello, Francesca; Faccenna, Claudio
2016-04-01
The tectonic setting of the Alpine-Mediterranean area was achieved during the late Cenozoic subduction, collision and suturing of several oceanic fragments and continental blocks. In this stage, processes such as interactions among subducting slabs, slab migrations and the related mantle flow played a relevant role in the resulting tectonics. Here, we use numerical models to first address the mantle flow characteristics in 3D. During the subduction of a single plate, the strength of the return flow strongly depends on the slab pull force, that is, on the plate's buoyancy; however, the physical properties of the slab, such as density, viscosity or width, do not largely affect the morphology of the toroidal cell. Instead, dramatic effects on the geometry and the dynamics of the toroidal cell result in models where the thickness of the mantle is varied. The vertical component of the vorticity vector is used to define the characteristic size of the toroidal cell, which is ~1.2-1.3 times the mantle depth. The latter defines the range of viscous stress propagation through the mantle and the consequent interactions with other slabs. We thus further investigate a setup where two separate lithospheric plates subduct in opposite senses, developing opposite polarities and convergent slab retreat, and model different initial sideways distances between the plates. The stress profiles in time illustrate that the plates interact when the slabs are at the characteristic distance and the two slabs' toroidal cells merge. The results are increased stress and delayed slab migration. Analogue models of double-sided subduction show a similar maximum distance and allow testing the additional role of stress propagated through the plates. We use a silicon plate subducting on its two opposite margins, which is either homogeneous or comprises oceanic and continental lithospheres differing in buoyancy. The modeling results show that the double-sided subduction is strongly affected by changes in plate
New Concepts in Breast Cancer Emerge from Analyzing Clinical Data Using Numerical Algorithms
Retsky, Michael
2009-01-01
A small international group has recently challenged fundamental concepts in breast cancer. As a guiding principle in therapy, it has long been assumed that breast cancer growth is continuous. However, this group suggests tumor growth commonly includes extended periods of quasi-stable dormancy. Furthermore, surgery to remove the primary tumor often awakens distant dormant micrometastases. Accordingly, over half of all relapses in breast cancer are accelerated in this manner. This paper describes how a numerical algorithm was used to come to these conclusions. Based on these findings, a dormancy preservation therapy is proposed. PMID:19440287
Numerical Asymptotic Solutions Of Differential Equations
NASA Technical Reports Server (NTRS)
Thurston, Gaylen A.
1992-01-01
Numerical algorithms derived and compared with classical analytical methods. In method, expansions replaced with integrals evaluated numerically. Resulting numerical solutions retain linear independence, main advantage of asymptotic solutions.
NASA Technical Reports Server (NTRS)
Shia, Run-Lie; Ha, Yuk Lung; Wen, Jun-Shan; Yung, Yuk L.
1990-01-01
Extensive testing of the advective scheme proposed by Prather (1986) has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. The original scheme is generalized to include higher-order moments. In addition, it is shown how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. By comparison with analytic solutions, it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.
NASA Technical Reports Server (NTRS)
Pline, Alexander D.; Wernet, Mark P.; Hsieh, Kwang-Chung
1991-01-01
The Surface Tension Driven Convection Experiment (STDCE) is a Space Transportation System flight experiment to study both transient and steady thermocapillary fluid flows aboard the United States Microgravity Laboratory-1 (USML-1) Spacelab mission planned for June, 1992. One of the components of data collected during the experiment is a video record of the flow field. This qualitative data is then quantified using an all electric, two dimensional Particle Image Velocimetry (PIV) technique called Particle Displacement Tracking (PDT), which uses a simple space domain particle tracking algorithm. Results using the ground based STDCE hardware, with a radiant flux heating mode, and the PDT system are compared to numerical solutions obtained by solving the axisymmetric Navier Stokes equations with a deformable free surface. The PDT technique is successful in producing a velocity vector field and corresponding stream function from the raw video data which satisfactorily represents the physical flow. A numerical program is used to compute the velocity field and corresponding stream function under identical conditions. Both the PDT system and numerical results were compared to a streak photograph, used as a benchmark, with good correlation.
Busted Butte: Achieving the Objectives and Numerical Modeling Results
W.E. Soll; M. Kearney; P. Stauffer; P. Tseng; H.J. Turin; Z. Lu
2002-10-07
The Unsaturated Zone Transport Test (UZTT) at Busted Butte is a mesoscale field/laboratory/modeling investigation designed to address uncertainties associated with flow and transport in the UZ site-process models for Yucca Mountain. The UZTT test facility is located approximately 8 km southeast of the potential Yucca Mountain repository area. The UZTT was designed in two phases, to address five specific objectives in the UZ: the effect of heterogeneities, flow and transport (F&T) behavior at permeability contrast boundaries, migration of colloids, transport models of sorbing tracers, and scaling issues in moving from laboratory scale to field scale. Phase 1A was designed to assess the influence of permeability contrast boundaries in the hydrologic Calico Hills. Visualization of fluorescein movement, mineback rock analyses, and comparison with numerical models demonstrated that F&T are capillary dominated, with permeability contrast boundaries distorting the capillary flow. Phase 1B was designed to assess the influence of fractures on F&T and colloid movement. The injector in Phase 1B was located at a fracture, while the collector, 30 cm below, was placed at what was assumed to be the same fracture. Numerical simulations of nonreactive (Br) and reactive (Li) tracers show the experimental data are best explained by a combination of molecular diffusion and advective flux. For Phase 2, a numerical model with homogeneous unit descriptions was able to qualitatively capture the general characteristics of the system. Numerical simulations and field observations revealed a capillary dominated flow field. Although the tracers showed heterogeneity in the test block, simulation using heterogeneous fields did not significantly improve the data fit over homogeneous field simulations. In terms of scaling, simulations of field tracer data indicate a hydraulic conductivity two orders of magnitude higher than measured in the laboratory. Simulations of Li, a weakly sorbing tracer
NASA Astrophysics Data System (ADS)
Zhang, Lisha
We present fast and robust numerical algorithms for 3-D scattering from perfectly electrical conducting (PEC) and dielectric random rough surfaces in microwave remote sensing. The Coifman wavelets or Coiflets are employed to implement Galerkin's procedure in the method of moments (MoM). Due to the high-precision one-point quadrature, the Coiflets yield fast evaluations of most of the off-diagonal entries, reducing the matrix fill effort from O(N²) to O(N). The orthogonality and Riesz basis of the Coiflets generate a well-conditioned impedance matrix, with rapid convergence for the conjugate gradient solver. The resulting impedance matrix is further sparsified by the matrix-formed standard fast wavelet transform (SFWT). By properly selecting multiresolution levels of the total transformation matrix, the solution precision can be enhanced without noticeably sacrificing matrix sparsity or memory consumption. The unified fast scattering algorithm for dielectric random rough surfaces asymptotically reduces to the PEC case when the loss tangent grows extremely large. Numerical results demonstrate that the reduced PEC model does not suffer from ill-posed problems. Compared with previous publications and laboratory measurements, good agreement is observed.
NASA Astrophysics Data System (ADS)
Wang, Jiong; Steinmann, Paul
2016-05-01
This is part II of this series of papers. The aim of the current paper was to solve the governing PDE system derived in part I numerically, such that the procedure of variant reorientation in a magnetic shape memory alloy (MSMA) sample can be simulated. The sample to be considered in this paper has a 3D cuboid shape and is subject to typical magnetic and mechanical loading conditions. To investigate the demagnetization effect on the sample's response, the surrounding space of the sample is taken into account. By considering the different properties of the independent variables, an iterative numerical algorithm is proposed to solve the governing system. The related mathematical formulas and some techniques facilitating the numerical calculations are introduced. Based on the results of numerical simulations, the distributions of some important physical quantities (e.g., magnetization, demagnetization field, and mechanical stress) in the sample can be determined. Furthermore, the properties of configurational force on the twin interfaces are investigated. By virtue of the twin interface movement criteria derived in part I, the whole procedure of magnetic field- or stress-induced variant reorientations in the MSMA sample can be properly simulated.
Fast numerical algorithms for fitting multiresolution hybrid shape models to brain MRI.
Vemuri, B C; Guo, Y; Lai, S H; Leonard, C M
1997-09-01
In this paper, we present new and fast numerical algorithms for shape recovery from brain MRI using multiresolution hybrid shape models. In this modeling framework, shapes are represented by a core rigid shape characterized by a superquadric function and a superimposed displacement function which is characterized by a membrane spline discretized using the finite-element method. Fitting the model to brain MRI data is cast as an energy minimization problem which is solved numerically. We present three new computational methods for model fitting to data. These methods involve novel mathematical derivations that lead to efficient numerical solutions of the model fitting problem. The first method involves using the nonlinear conjugate gradient technique with a diagonal Hessian preconditioner. The second method involves the nonlinear conjugate gradient in the outer loop for solving global parameters of the model and a preconditioned conjugate gradient scheme for solving the local parameters of the model. The third method involves the nonlinear conjugate gradient in the outer loop for solving the global parameters and a combination of the Schur complement formula and the alternating direction-implicit method for solving the local parameters of the model. We demonstrate the efficiency of our model fitting methods via experiments on several MR brain scans. PMID:9873915
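The first of the three methods, nonlinear conjugate gradient with a diagonal Hessian preconditioner, can be sketched generically as follows (Polak-Ribière+ update and a crude backtracking line search; the paper's energy functional and line-search details are not reproduced):

```python
import numpy as np

def pncg(grad, x0, diag_hess, tol=1e-8, max_iter=500):
    """Nonlinear conjugate gradient (Polak-Ribiere+) with a diagonal
    Hessian preconditioner. A generic sketch, not the paper's
    model-fitting code."""
    x = x0.copy()
    g = grad(x)
    z = g / diag_hess(x)                # preconditioned gradient
    d = -z
    for _ in range(max_iter):
        # backtrack until the gradient norm does not grow
        t = 1.0
        while np.linalg.norm(grad(x + t * d)) > np.linalg.norm(g) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        z_new = g_new / diag_hess(x_new)
        beta = max(0.0, np.dot(g_new - g, z_new) / np.dot(g, z))  # PR+ update
        d = -z_new + beta * d
        x, g, z = x_new, g_new, z_new
    return x
```

The diagonal preconditioner rescales each component of the gradient by its local curvature, which is cheap to apply and can substantially reduce the number of iterations when the Hessian is poorly scaled.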
Numerical calculations of high-altitude differential charging: Preliminary results
NASA Technical Reports Server (NTRS)
Laframboise, J. G.; Godard, R.; Prokopenko, S. M. L.
1979-01-01
A two dimensional simulation program was constructed in order to obtain theoretical predictions of floating potential distributions on geostationary spacecraft. The geometry was infinite-cylindrical with angle dependence. Effects of finite spacecraft length on sheath potential profiles can be included in an approximate way. The program can treat either steady-state conditions or slowly time-varying situations involving external time scales much larger than particle transit times. Approximate, locally dependent expressions were used to provide space-charge density profiles, but numerical orbit-following was used to calculate surface currents. Ambient velocity distributions were assumed to be isotropic, beam-like, or some superposition of these.
Numerical Simulation of Turbulent MHD Flows Using an Iterative PNS Algorithm
NASA Technical Reports Server (NTRS)
Kato, Hiromasa; Tannehill, John C.; Mehta, Unmeel B.
2003-01-01
A new parabolized Navier-Stokes (PNS) algorithm has been developed to efficiently compute magnetohydrodynamic (MHD) flows in the low magnetic Reynolds number regime. In this regime, the electrical conductivity is low and the induced magnetic field is negligible compared to the applied magnetic field. The MHD effects are modeled by introducing source terms into the PNS equation which can then be solved in a very efficient manner. To account for upstream (elliptic) effects, the flowfields are computed using multiple streamwise sweeps with an iterated PNS algorithm. Turbulence has been included by modifying the Baldwin-Lomax turbulence model to account for MHD effects. The new algorithm has been used to compute both laminar and turbulent, supersonic, MHD flows over flat plates and supersonic viscous flows in a rectangular MHD accelerator. The present results are in excellent agreement with previous complete Navier-Stokes calculations.
NASA Astrophysics Data System (ADS)
Harries, Tim J.
2015-04-01
We present a set of new numerical methods that are relevant to calculating radiation pressure terms in hydrodynamics calculations, with a particular focus on massive star formation. The radiation force is determined from a Monte Carlo estimator and enables a complete treatment of the detailed microphysics, including polychromatic radiation and anisotropic scattering, in both the free-streaming and optically thick limits. Since the new method is computationally demanding we have developed two new methods that speed up the algorithm. The first is a photon packet splitting algorithm that enables efficient treatment of the Monte Carlo process in very optically thick regions. The second is a parallelization method that distributes the Monte Carlo workload over many instances of the hydrodynamic domain, resulting in excellent scaling of the radiation step. We also describe the implementation of a sink particle method that enables us to follow the accretion on to, and the growth of, the protostars. We detail the results of extensive testing and benchmarking of the new algorithms.
Spurious frequencies as a result of numerical boundary treatments
NASA Technical Reports Server (NTRS)
Abarbanel, Saul; Gottlieb, David
1990-01-01
The stability theory for finite difference Initial Boundary-Value approximations to systems of hyperbolic partial differential equations states that the exclusion of eigenvalues and generalized eigenvalues is a sufficient condition for stability. The theory, however, does not discuss the nature of numerical approximations in the presence of such eigenvalues. In fact, as was shown previously, for the problem of vortex shedding by a 2-D cylinder in subsonic flow, stating boundary conditions in terms of the primitive (non-characteristic) variables may lead to such eigenvalues, causing perturbations that decay slowly in space and remain periodic in time. Characteristic formulation of the boundary conditions avoided this problem. A more systematic study of the behavior of the (linearized) one-dimensional gas dynamic equations under various sets of oscillation-inducing legal boundary conditions is reported.
NASA Astrophysics Data System (ADS)
Li, Yiming
2007-12-01
This symposium is an open forum for discussion on the current trends and future directions of physical modeling, mathematical theory, and numerical algorithm in electrical and electronic engineering. The goal is for computational scientists and engineers, computer scientists, applied mathematicians, physicists, and researchers to present their recent advances and exchange experience. We welcome contributions from researchers of academia and industry. All papers to be presented in this symposium have carefully been reviewed and selected. They include semiconductor devices, circuit theory, statistical signal processing, design optimization, network design, intelligent transportation system, and wireless communication. Welcome to this interdisciplinary symposium in International Conference of Computational Methods in Sciences and Engineering (ICCMSE 2007). Look forward to seeing you in Corfu, Greece!
NASA Astrophysics Data System (ADS)
Angeli, D.; Stalio, E.; Corticelli, M. A.; Barozzi, G. S.
2015-11-01
A parallel algorithm is presented for the Direct Numerical Simulation of buoyancy-induced flows in open or partially confined periodic domains containing immersed cylindrical bodies of arbitrary cross-section. The governing equations are discretized by means of the Finite Volume method on Cartesian grids. A semi-implicit scheme is employed for the diffusive terms, which are treated implicitly on the periodic plane and explicitly along the homogeneous direction, while all convective terms are explicit, via the second-order Adams-Bashforth scheme. The contemporary solution of velocity and pressure fields is achieved by means of a projection method. The numerical resolution of the set of linear equations resulting from discretization is carried out by means of efficient and highly parallel direct solvers. Verification and validation of the numerical procedure are reported in the paper for the case of flow around an array of heated cylindrical rods arranged in a square lattice. Grid independence is assessed in laminar flow conditions, and DNS results in turbulent conditions are presented for two different grids and compared to available literature data, thus confirming the favorable qualities of the method.
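The projection step that couples velocity and pressure can be sketched on a doubly periodic grid, where the pressure Poisson equation diagonalizes in Fourier space. This is a minimal illustration (it assumes smooth, band-limited fields and does no Nyquist handling); the paper's solver uses a Finite Volume discretization and parallel direct solvers rather than FFTs.

```python
import numpy as np

def project(u, v, dx):
    """Project (u, v) onto the divergence-free space on a doubly periodic
    grid: solve lap(phi) = div(u, v) spectrally, then subtract grad(phi)."""
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing='ij')
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                          # mean mode needs no correction
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    divh = 1j * kx * uh + 1j * ky * vh      # divergence in spectral space
    phih = -divh / k2                       # solve lap(phi) = div
    uh -= 1j * kx * phih                    # u <- u - d(phi)/dx
    vh -= 1j * ky * phih                    # v <- v - d(phi)/dy
    return np.fft.ifft2(uh).real, np.fft.ifft2(vh).real
```

After an explicit convective/diffusive update produces an intermediate velocity field, one projection of this kind restores the divergence-free constraint and implicitly defines the pressure.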
Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC), version 4.0: User's manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1988-01-01
The information in the NASARC (Version 4.0) Technical Manual (NASA-TM-101453) and NASARC (Version 4.0) User's Manual (NASA-TM-101454) relates to the state of Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbit. Array dimensions within the software were structured to fit within the currently available 12-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.
Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC (version 4.0) technical manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1988-01-01
The information contained in the NASARC (Version 4.0) Technical Manual and NASARC (Version 4.0) User's Manual relates to the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbits. Array dimensions within the software were structured to fit within the currently available 12 megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.
Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC, Version 2.0: User's Manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1987-01-01
The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and the NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through October 16, 1987. The technical manual describes the NASARC concept and the algorithms which are used to implement it. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions have been incorporated in the Version 2.0 software over prior versions. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit into the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time reducing computer run time.
Numerical arc segmentation algorithm for a radio conference-NASARC (version 2.0) technical manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1987-01-01
The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of NASARC software development through October 16, 1987. The Technical Manual describes the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operating instructions. Significant revisions have been incorporated in the Version 2.0 software. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit within the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time effecting an overall reduction in computer run time.
NASA Astrophysics Data System (ADS)
Kitaura, F. S.; Enßlin, T. A.
2008-09-01
We address the inverse problem of cosmic large-scale structure reconstruction from a Bayesian perspective. For a linear data model, a number of known and novel reconstruction schemes, which differ in terms of the underlying signal prior, data likelihood and numerical inverse extra-regularization schemes, are derived and classified. The Bayesian methodology presented in this paper tries to unify and extend the following methods: Wiener filtering, Tikhonov regularization, ridge regression, maximum entropy and inverse regularization techniques. The inverse techniques considered here are the asymptotic regularization, the Jacobi, Steepest Descent, Newton-Raphson, Landweber-Fridman and both linear and non-linear Krylov methods based on Fletcher-Reeves, Polak-Ribière and Hestenes-Stiefel conjugate gradients. The structures of the highest-performing current algorithms are presented, based on an operator scheme, which permits one to exploit the power of fast Fourier transforms. Using such an implementation of the generalized Wiener filter in the novel ARGO software package, the different numerical schemes are benchmarked with one-, two- and three-dimensional problems including structured white and Poissonian noise, data windowing and blurring effects. A novel numerical Krylov scheme is shown to be superior in terms of performance and fidelity. These fast inverse methods ultimately will enable the application of sampling techniques to explore complex joint posterior distributions. We outline how the space of the dark matter density field, the peculiar velocity field and the power spectrum can jointly be investigated by a Gibbs-sampling process. Such a method can be applied to correct for redshift distortions of the observed galaxies and for time-reversal reconstructions of the initial density field.
NASA Astrophysics Data System (ADS)
Ersoy, Ozlem; Dag, Idris
2015-12-01
The solutions of the reaction-diffusion system are obtained by a collocation method based on exponential B-splines. The reaction-diffusion system thus turns into an iterative banded algebraic matrix equation, whose solution is carried out by way of the Thomas algorithm. The present method is tested on both linear and nonlinear problems. The results are documented and compared with some earlier studies using the L∞ and relative error norms, respectively.
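The Thomas algorithm referred to above is the standard forward-elimination/back-substitution pass for tridiagonal systems, the simplest banded case; a minimal sketch (the 3x3 test system is an illustrative assumption):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: sub-diagonal a (length n-1),
    main diagonal b (length n), super-diagonal c (length n-1),
    right-hand side d (length n). Returns the solution list x."""
    n = len(b)
    cp = [0.0] * n   # modified super-diagonal
    dp = [0.0] * n   # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Small test system [[2,-1,0],[-1,2,-1],[0,-1,2]] x = [1,0,1]:
x = thomas([-1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0], [1.0, 0.0, 1.0])
# x is [1.0, 1.0, 1.0]
```

The O(n) cost of this sweep, versus O(n³) for dense elimination, is what makes banded collocation systems attractive in time-stepping schemes.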
NASA Astrophysics Data System (ADS)
Miao, J. C.; Zhu, P.; Shi, G. L.; Chen, G. L.
2008-01-01
Numerical stability is an important issue for any integration procedure. Since the sub-cycling algorithm was first presented by Belytschko et al. (Comput Methods Appl Mech Eng 17/18:259-275, 1979), various integration procedures of this kind have been developed over the following 20 years and their stability has been widely studied. However, how to apply sub-cycling to flexible multi-body dynamics (FMD) had remained uninvestigated until now. A particular sub-cycling algorithm for FMD based on the central difference method was introduced in detail in Part I (Miao et al. in Comput Mech, doi:10.1007/s00466-007-0183-9) of this paper. Adopting an integral approximation operator method, the stability analysis of the presented algorithm is transformed in this paper into a generalized eigenvalue problem, which is then solved and discussed. Numerical examples are performed to further verify the availability and efficiency of the algorithm.
NASA Astrophysics Data System (ADS)
Carrano, Charles S.; Rino, Charles L.
2016-06-01
We extend the power law phase screen theory for ionospheric scintillation to account for the case where the refractive index irregularities follow a two-component inverse power law spectrum. The two-component model includes, as special cases, an unmodified power law and a modified power law with spectral break that may assume the role of an outer scale, intermediate break scale, or inner scale. As such, it provides a framework for investigating the effects of a spectral break on the scintillation statistics. Using this spectral model, we solve the fourth moment equation governing intensity variations following propagation through two-dimensional field-aligned irregularities in the ionosphere. A specific normalization is invoked that exploits self-similar properties of the structure to achieve a universal scaling, such that different combinations of perturbation strength, propagation distance, and frequency produce the same results. The numerical algorithm is validated using new theoretical predictions for the behavior of the scintillation index and intensity correlation length under strong scatter conditions. A series of numerical experiments are conducted to investigate the morphologies of the intensity spectrum, scintillation index, and intensity correlation length as functions of the spectral indices and strength of scatter; retrieve phase screen parameters from intensity scintillation observations; explore the relative contributions to the scintillation due to large- and small-scale ionospheric structures; and quantify the conditions under which a general spectral break will influence the scintillation statistics.
Scanning of wind turbine upwind conditions: numerical algorithm and first applications
NASA Astrophysics Data System (ADS)
Calaf, Marc; Cortina, Gerard; Sharma, Varun; Parlange, Marc B.
2014-11-01
Wind turbines still obtain in-situ meteorological information by means of traditional wind vane and cup anemometers installed at the turbine's nacelle, right behind the blades. This has two important drawbacks: (1) turbine misalignment with the mean wind direction is common and leads to energy losses; (2) near-blade monitoring leaves no time to readjust the wind turbine to incoming turbulence gusts. A solution is to install wind Lidar devices on the turbine's nacelle. This technique is currently under development as an alternative to traditional in-situ wind anemometry because it can measure the wind vector at substantial distances upwind. However, at what upwind distance should it interrogate the atmosphere? A new flexible wind turbine algorithm for large eddy simulations of wind farms that allows answering this question is presented. The new wind turbine algorithm corrects the turbines' yaw misalignment with the changing wind in a timely manner. The upwind scanning flexibility of the algorithm also allows tracking of the wind vector and turbulent kinetic energy as they approach the wind turbine's rotor blades. Results illustrate the spatiotemporal evolution of the wind vector and the turbulent kinetic energy as the incoming flow approaches the wind turbine under different atmospheric stability conditions. Results also show that the available atmospheric wind power is larger during daytime periods, at the cost of an increased variance.
Sediment Pathways Across Trench Slopes: Results From Numerical Modeling
NASA Astrophysics Data System (ADS)
Cormier, M. H.; Seeber, L.; McHugh, C. M.; Fujiwara, T.; Kanamatsu, T.; King, J. W.
2015-12-01
Until the 2011 Mw9.0 Tohoku earthquake, the role of earthquakes as agents of sediment dispersal and deposition at erosional trenches was largely under-appreciated. A series of cruises carried out after the 2011 event has revealed a variety of unsuspected sediment transport mechanisms, such as tsunami-triggered sheet turbidites, suggesting that great earthquakes may in fact be important agents for dispersing sediments across trench slopes. To complement these observational data, we have modeled the pathways of sediments across the trench slope based on bathymetric grids. Our approach assumes that transport direction is controlled by slope azimuth only, and ignores obstacles smaller than 0.6-1 km; these constraints are meant to approximate the behavior of turbidites. Results indicate that (1) most pathways issued from the upper slope terminate near the top of the small frontal wedge, and thus do not reach the trench axis; and (2) conversely, sediments transported to the trench axis are likely derived from the small frontal wedge or from the subducting Pacific plate. These results are consistent with the stratigraphy imaged in seismic profiles, which reveals that the slope apron does not extend as far as the frontal wedge, and that the thickness of sediments at the trench axis is similar to that of the incoming Pacific plate. We further applied this modeling technique to the Cascadia, Nankai, Middle-America, and Sumatra trenches. Where well-defined canyons carve the trench slopes, sediments from the upper slope may routinely reach the trench axis (e.g., off Costa Rica and Cascadia). Conversely, slope basins that are isolated from the canyon drainage systems must accumulate mainly locally derived sediments. Therefore, their turbiditic infill may be diagnostic of seismic activity only, and not of storm or flood activity. If correct, this would make isolated slope basins ideal targets for paleoseismological investigation.
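The slope-azimuth-only transport rule can be caricatured as steepest descent over a gridded bathymetry; a toy sketch assuming 8-connected cells and a synthetic planar slope, and omitting the paper's 0.6-1 km obstacle-smoothing step:

```python
import numpy as np

def trace_path(depth, start):
    """Follow the steepest-descent direction over a gridded surface
    until a local minimum is reached (toy analogue of azimuth-driven
    turbidite routing)."""
    i, j = start
    path = [(i, j)]
    nrows, ncols = depth.shape
    while True:
        best, best_val = None, depth[i, j]
        for di in (-1, 0, 1):          # scan the 8 neighbors
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di or dj) and 0 <= ni < nrows and 0 <= nj < ncols:
                    if depth[ni, nj] < best_val:
                        best, best_val = (ni, nj), depth[ni, nj]
        if best is None:               # local minimum: pathway terminates
            return path
        i, j = best
        path.append((i, j))

# Synthetic surface deepening toward row 0 (a hypothetical trench axis).
z = np.add.outer(np.arange(5.0), np.zeros(5))
path = trace_path(z, (4, 2))
# The pathway descends monotonically and ends on row 0.
```

A real analysis would also record which start cells fail to reach the axis, which is how the frontal-wedge termination result above would be diagnosed.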
Biphasic indentation of articular cartilage--II. A numerical algorithm and an experimental study.
Mow, V C; Gibbs, M C; Lai, W M; Zhu, W B; Athanasiou, K A
1989-01-01
Part I (Mak et al., 1987, J. Biomechanics 20, 703-714) presented the theoretical solutions for the biphasic indentation of articular cartilage under creep and stress-relaxation conditions. In this study, using the creep solution, we developed an efficient numerical algorithm to compute all three material coefficients of cartilage in situ on the joint surface from the indentation creep experiment. With this method we determined the average values of the aggregate modulus, Poisson's ratio, and permeability for young bovine femoral condylar cartilage in situ to be HA = 0.90 MPa, vs = 0.39 and k = 0.44 x 10(-15) m4/Ns, respectively, and those for patellar groove cartilage to be HA = 0.47 MPa, vs = 0.24, and k = 1.42 x 10(-15) m4/Ns. One surprising finding from this study is that the in situ Poisson's ratio of cartilage (0.13-0.45) may be much less than those determined from measurements performed on excised osteochondral plugs (0.40-0.49) reported in the literature. We also found the permeability of patellar groove cartilage to be several times higher than that of femoral condyle cartilage. These findings may have important implications for understanding the functional behavior of cartilage in situ and for the methods used to determine the elastic moduli of cartilage using indentation experiments. PMID:2613721
NASA Technical Reports Server (NTRS)
Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.
1988-01-01
The Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC) on the Use of the Geostationary Satellite Orbit and the Planning of Space Services Utilizing It. Through careful selection of the predetermined arc (PDA) for each administration, flexibility can be increased in terms of choice of system technical characteristics and specific orbit location while reducing the need for coordination among administrations. The NASARC software determines pairwise compatibility between all possible service areas at discrete arc locations. NASARC then exhaustively enumerates groups of administrations whose satellites can be closely located in orbit, and finds the arc segment over which each such compatible group exists. From the set of all possible compatible groupings, groups and their associated arc segments are selected using a heuristic procedure such that a PDA is identified for each administration. Various aspects of the NASARC concept and how the software accomplishes specific features of allotment planning are discussed.
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low accuracy function and gradient values are frequently much less expensive to obtain than high accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm remains convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.
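Trust region methods hinge on the ratio of actual to predicted reduction, which is exactly the quantity corrupted by low-accuracy function values; a minimal sketch of the standard radius-update rule (the 0.25/0.75 thresholds and shrink/expand factors are conventional textbook choices, not taken from this report):

```python
def trust_region_update(rho, delta, step_norm, eta1=0.25, eta2=0.75):
    """Standard trust-region radius update. rho is the ratio of actual
    to predicted reduction; with inexact evaluations, rho itself carries
    the function/gradient error, which is why accuracy control matters."""
    if rho < eta1:
        return 0.25 * delta        # poor model agreement: shrink the region
    if rho > eta2 and abs(step_norm - delta) < 1e-12:
        return 2.0 * delta         # good agreement on the boundary: expand
    return delta                   # otherwise keep the current radius
```

An adaptive-accuracy strategy in the spirit of the abstract would request just enough evaluation accuracy that the sign and rough magnitude of rho remain trustworthy.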
ERIC Educational Resources Information Center
Gonzalez-Vega, Laureano
1999-01-01
Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM)
NASA Astrophysics Data System (ADS)
Korneev, Boris; Levchenko, Vadim
2016-02-01
Interaction between a shock wave and an inhomogeneity in a fluid exhibits complicated behavior, including vortex and turbulence generation, mixing, and shock wave scattering and reflection. In the present paper we deal with the numerical simulation of this process. The Euler equations of unsteady inviscid compressible three-dimensional flow are used within the four-equation model of multicomponent flow. These equations are discretized using the RKDG numerical method and implemented with the help of the DiamondTorre algorithm, yielding an efficient GPGPU solver with outstanding computing properties. With it we carry out several sets of numerical experiments on the shock-bubble interaction problem. Bubble deformation and mixture formation are observed.
NASA Technical Reports Server (NTRS)
Spratlin, Kenneth Milton
1987-01-01
An adaptive numeric predictor-corrector guidance algorithm is developed for atmospheric entry vehicles which utilize lift to achieve maximum footprint capability. Applicability of the guidance design to vehicles with a wide range of performance capabilities is desired, so as to reduce the need for algorithm redesign with each new vehicle. Adaptability is desired to minimize mission-specific analysis and planning. The guidance algorithm motivation and design are presented. Performance is assessed for application of the algorithm to the NASA Entry Research Vehicle (ERV). The dispersions that the guidance must be designed to handle are presented. The achievable operational footprint for expected worst-case dispersions is presented. The algorithm performs excellently for the expected dispersions and captures most of the achievable footprint.
Pathmanathan, P; Bernabeu, M O; Niederer, S A; Gavaghan, D J; Kay, D
2012-08-01
A recent verification study compared 11 large-scale cardiac electrophysiology solvers on an unambiguously defined common problem. An unexpected amount of variation was observed between the codes, including significant error in conduction velocity in the majority of the codes at certain spatial resolutions. In particular, the results of the six finite element codes varied considerably despite each using the same order of interpolation. In this present study, we compare various algorithms for cardiac electrophysiological simulation, which allows us to fully explain the differences between the solvers. We identify the use of mass lumping as the fundamental cause of the largest variations, specifically the combination of the commonly used techniques of mass lumping and operator splitting, which results in a slightly different form of mass lumping to that supported by theory and leads to increased numerical error. Other variations are explained through the manner in which the ionic current is interpolated. We also investigate the effect of different forms of mass lumping in various types of simulation. PMID:25099569
An efficient algorithm for numerical computations of continuous densities of states
NASA Astrophysics Data System (ADS)
Langfeld, K.; Lucini, B.; Pellegrini, R.; Rago, A.
2016-06-01
In Wang-Landau type algorithms, Monte-Carlo updates are performed with respect to the density of states, which is iteratively refined during simulations. The partition function and thermodynamic observables are then obtained by standard integration. In this work, our recently introduced method in this class (the LLR approach) is analysed and further developed. Our approach is a histogram-free method particularly suited for systems with continuous degrees of freedom giving rise to a continuum density of states, as is commonly found in lattice gauge theories and in some statistical mechanics systems. We show that the method possesses an exponential error suppression that allows us to estimate the density of states over several orders of magnitude with nearly constant relative precision. We explain how ergodicity issues can be avoided and how expectation values of arbitrary observables can be obtained within this framework. We then demonstrate the method using compact U(1) lattice gauge theory as a showcase. A thorough study of the dependence of the results on the algorithm parameters is performed and compared with the analytically expected behaviour. We obtain high-precision values for the critical coupling for the phase transition and for the peak value of the specific heat for lattice sizes ranging from 8^4 to 20^4. Our results perfectly agree with the reference values reported in the literature, which cover lattice sizes up to 18^4. Robust results for the 20^4 volume are obtained for the first time. This latter investigation, which, due to strong metastabilities developed at the pseudo-critical coupling of the system, has so far been out of reach even on supercomputers with importance sampling approaches, has been performed to high accuracy with modest computational resources. This shows the potential of the method for studies of first-order phase transitions. Other situations where the method is expected to be superior to importance sampling techniques are pointed out.
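For context on the Wang-Landau class of algorithms (not the LLR method itself), here is a minimal flat-histogram sketch that estimates the density of states g(E) for a non-interacting n-spin chain, where E is the number of up-spins and the exact answer is the binomial coefficient C(n, E); the toy system, sweep length, and flatness threshold are illustrative assumptions:

```python
import math
import random

def wang_landau(n=10, f_final=1e-3, flat=0.8):
    """Minimal Wang-Landau estimate of ln g(E) for E = number of
    up-spins in a non-interacting n-spin chain. Updates are accepted
    with probability min(1, g(E)/g(E_new)); ln g is refined by lnf,
    which is halved whenever the visit histogram is sufficiently flat."""
    random.seed(1)
    state = [0] * n
    E = 0
    lng = [0.0] * (n + 1)   # running estimate of ln g(E)
    hist = [0] * (n + 1)    # visit histogram for the flatness check
    lnf = 1.0
    while lnf > f_final:
        for _ in range(10000):
            i = random.randrange(n)
            E_new = E + (1 - 2 * state[i])          # flip spin i
            if math.log(random.random() + 1e-300) < lng[E] - lng[E_new]:
                state[i] ^= 1
                E = E_new
            lng[E] += lnf
            hist[E] += 1
        if min(hist) > flat * (sum(hist) / len(hist)):
            hist = [0] * (n + 1)
            lnf /= 2.0       # refine the modification factor
    return lng

lng = wang_landau()
# lng[5] - lng[0] should approximate ln C(10, 5) = ln 252.
```

The LLR approach of the abstract replaces this visit histogram with a stochastic root-finding update for the log-derivative of the density of states, which is what makes it histogram-free and suitable for continuous spectra.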
NASA Technical Reports Server (NTRS)
Bui, Trong T.; Mankbadi, Reda R.
1995-01-01
Numerical simulation of a very small amplitude acoustic wave interacting with a shock wave in a quasi-1D convergent-divergent nozzle is performed using an unstructured finite volume algorithm with a piece-wise linear, least square reconstruction, Roe flux difference splitting, and second-order MacCormack time marching. First, the spatial accuracy of the algorithm is evaluated for steady flows with and without the normal shock by running the simulation with a sequence of successively finer meshes. Then the accuracy of the Roe flux difference splitting near the sonic transition point is examined for different reconstruction schemes. Finally, the unsteady numerical solutions with the acoustic perturbation are presented and compared with linear theory results.
NASA Astrophysics Data System (ADS)
Lakshminarayana, B.; Ho, Y.; Basson, A.
1993-07-01
The objective of this research is to simulate steady and unsteady viscous flows, including rotor/stator interaction and tip clearance effects, in turbomachinery. The numerical formulation for steady flow developed here includes an efficient grid generation scheme, particularly suited to computational grids for the analysis of turbulent turbomachinery flows and tip clearance flows, and a semi-implicit, pressure-based computational fluid dynamics scheme that directly includes artificial dissipation and is applicable to both viscous and inviscid flows. The values of this artificial dissipation are optimized to achieve accuracy and convergence in the solution. The numerical model is used to investigate the structure of tip clearance flows in a turbine nozzle. The structure of the leakage flow is captured accurately, including blade-to-blade variation of all three velocity components, pitch and yaw angles, losses, and blade static pressures in the tip clearance region. The simulation also includes evaluation of quantities such as leakage mass flow, vortex strength, losses, dominant leakage flow regions, and the spanwise extent affected by the leakage flow. It is demonstrated, through optimization of grid size and artificial dissipation, that the tip clearance flow field can be captured accurately. The above numerical formulation was modified to incorporate time-accurate solutions. An inner loop iteration scheme is used at each time step to account for the non-linear effects. The computation of unsteady flow through a flat plate cascade subjected to a transverse gust reveals that the choice of grid spacing and the amount of artificial dissipation are critical for accurate prediction of unsteady phenomena. The rotor-stator interaction problem is simulated by starting the computation upstream of the stator, with the upstream rotor wake specified from experimental data. The results show that the stator potential effects have an appreciable influence on the upstream rotor wake.
Numerical prediction of freezing fronts in cryosurgery: comparison with experimental results.
Fortin, André; Belhamadia, Youssef
2005-08-01
Recent developments in scientific computing now allow one to consider realistic applications of numerical modelling in medicine. In this work, a numerical method is presented for the simulation of the phase change occurring in cryosurgery applications. The ultimate goal of these simulations is to accurately predict the freezing front position and the thermal history inside the ice ball, which is essential to determine whether cancerous cells have been completely destroyed. A semi-phase-field formulation including blood flow considerations is employed for the simulations. Numerical results are enhanced by the introduction of an anisotropic remeshing strategy. The numerical procedure is validated by comparing the predictions of the model with experimental results. PMID:16298846
NASA Astrophysics Data System (ADS)
Li, Cong; Lei, Jianshe
2014-10-01
In this paper, we focus on the influences of various parameters of the niching genetic algorithm inversion procedure on the results, such as the choice of objective function, the number of models in each subpopulation, and the critical separation radius. The frequency-wavenumber integration (F-K) method is applied to synthesize three-component waveform data with noise at various epicentral distances and azimuths. Our results show that using a zeroth-lag cross-correlation objective function yields the model with faster convergence and higher precision than other objective functions. The number of models in each subpopulation has a great influence on the rate of convergence and the computation time, suggesting that it should be determined through tests in practical problems. The critical separation radius should be chosen carefully because it directly affects the multiple extreme values in the inversion. We also compare the inverted results from full-band waveform data and surface-wave frequency-band (0.02-0.1 Hz) data, and find that the latter are relatively poorer but still of high precision, suggesting that surface-wave frequency-band data can also be used to invert for the crustal structure.
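A zeroth-lag cross-correlation objective of the kind favored above can be sketched as a normalized inner product between observed and synthetic traces, so that 1 - cc serves as the misfit the genetic algorithm minimizes; the test sinusoid is an illustrative assumption:

```python
import numpy as np

def zero_lag_cc(obs, syn):
    """Normalized zeroth-lag cross-correlation between an observed and
    a synthetic waveform. Returns 1.0 for a perfect shape match and is
    insensitive to overall amplitude scaling."""
    obs = np.asarray(obs, dtype=float)
    syn = np.asarray(syn, dtype=float)
    return float(np.dot(obs, syn) /
                 (np.linalg.norm(obs) * np.linalg.norm(syn)))

# Illustrative check: scaling does not change cc, but a time shift does.
t = np.linspace(0.0, 1.0, 200)
wave = np.sin(2 * np.pi * 5.0 * t)
cc_scaled = zero_lag_cc(wave, 0.5 * wave)      # equals 1.0
cc_shifted = zero_lag_cc(wave, np.roll(wave, 10))  # noticeably below 1.0
```

The amplitude insensitivity is one reason a waveform-shape objective can converge faster than, say, a pointwise L2 misfit when synthetic amplitudes are poorly calibrated.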
Experimental Results in the Comparison of Search Algorithms Used with Room Temperature Detectors
Guss, P.; Yuan, D.; Cutler, M.; Beller, D.
2010-11-01
Analysis of time-sequence data was run for several higher-resolution scintillation detectors using a variety of search algorithms, and results were obtained predicting the relative performance of these detectors, which included a slightly superior performance by CeBr{sub 3}. Analysis of several search algorithms shows that inclusion of the RSPRT methodology can improve sensitivity.
NASA Technical Reports Server (NTRS)
Smutek, C.; Bontoux, P.; Roux, B.; Schiroky, G. H.; Hurford, A. C.
1985-01-01
The results of a three-dimensional numerical simulation of Boussinesq free convection in a horizontal differentially heated cylinder are presented. The computation was based on a Samarskii-Andreyev scheme (described by Leong, 1981) and a false-transient advancement in time, with vorticity, velocity, and temperature as dependent variables. Solutions for velocity and temperature distributions were obtained for Rayleigh numbers (based on the radius) Ra = 74-18,700, thus covering the core- and boundary-layer-driven regimes. Numerical solutions are compared with asymptotic analytical solutions and experimental data. The numerical results well represent the complex three-dimensional flows found experimentally.
Manzini, Gianmarco; Cangiani, Andrea; Sutton, Oliver
2014-10-02
This document presents the results of a set of preliminary numerical experiments using several possible conforming virtual element approximations of the convection-reaction-diffusion equation with variable coefficients.
Structure of the Gabor matrix and efficient numerical algorithms for discrete Gabor expansions
NASA Astrophysics Data System (ADS)
Qiu, Sigang; Feichtinger, Hans G.
1994-09-01
The standard way to obtain suitable coefficients for the (non-orthogonal) Gabor expansion of a general signal, for a given Gabor atom g and a pair of lattice constants in the (discrete) time/frequency plane, requires computing the dual Gabor window function g̃ first. In this paper, we present an explicit description of the sparsity and the block and banded structure of the Gabor frame matrix G. On this basis, efficient algorithms are developed for computing g̃ by solving the linear equation g̃ * G = g with the conjugate-gradient method. Using the dual Gabor wavelet, a fast Gabor reconstruction algorithm with very low computational complexity is proposed.
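A minimal sketch of the idea above, assuming a Gaussian atom, a short signal length, and an oversampled lattice (all parameter choices ours): build the Gabor frame operator and recover the dual window with a plain conjugate-gradient solve.

```python
import numpy as np

def gabor_frame_operator(g, a, n_mods):
    """Frame operator S = sum_{m,n} g_{m,n} g_{m,n}^H of a discrete Gabor
    system with time step `a` and `n_mods` modulation channels."""
    L = len(g)
    k = np.arange(L)
    S = np.zeros((L, L), dtype=complex)
    for n in range(0, L, a):
        shifted = np.roll(g, n)
        for m in range(n_mods):
            atom = shifted * np.exp(2j * np.pi * m * k / n_mods)
            S += np.outer(atom, atom.conj())
    return S

def cg_solve(S, g, tol=1e-10, maxiter=500):
    """Plain conjugate gradients for the Hermitian positive-definite S."""
    x = np.zeros_like(g)
    r = g - S @ x
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(maxiter):
        Sp = S @ p
        alpha = rs / np.vdot(p, Sp).real
        x = x + alpha * p
        r = r - alpha * Sp
        rs_new = np.vdot(r, r).real
        if rs_new ** 0.5 < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

L = 48
g = np.exp(-0.5 * ((np.arange(L) - L // 2) / 4.0) ** 2)  # Gaussian atom
S = gabor_frame_operator(g, a=4, n_mods=16)              # redundancy 4: a frame
g_dual = cg_solve(S, g.astype(complex))
```

Here the canonical dual window is obtained by solving S g̃ = g with the frame operator S, a standard formulation closely related to the g̃ * G = g equation of the abstract; the paper's efficiency gains come from exploiting the sparsity and block/banded structure of G rather than forming a dense operator as this toy does.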
A numerical algorithm suggested by problems of transport in periodic media - The matrix case.
NASA Technical Reports Server (NTRS)
Allen, R. C., Jr.; Burgmeier, J. W.; Mundorff, P.; Wing, G. M.
1972-01-01
Extension of Allen and Wing's (1970) previous work on problems of transport in periodic media to the matrix case. A method in the form of a complete set of equations is presented that may be used without any further analytical work by investigators interested in computing solutions to problems of the type the method is designed to handle. All the formulas have been checked out numerically, and their effectiveness is demonstrated by numerical examples.
NASA Astrophysics Data System (ADS)
Bor, E.; Turduev, M.; Kurt, H.
2016-08-01
Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction.
NASA Astrophysics Data System (ADS)
Wilkie, George J.; Dorland, William
2016-05-01
The δf particle-in-cell algorithm has been a useful tool in studying the physics of plasmas, particularly turbulent magnetized plasmas in the context of gyrokinetics. The reduction in noise due to not having to resolve the full distribution function gives it an efficiency advantage over the standard ("full-f") particle-in-cell method. Despite its successes, the algorithm behaves strangely in some circumstances. In this work, we document a fully resolved numerical instability that occurs in the simplest of multiple-species test cases: the electrostatic ΩH mode. There is also a poorly understood numerical instability that occurs when the particle number is under-resolved, and which may require a prohibitively large number of particles to stabilize. Both instabilities are independent of the time-stepping scheme, and we conclude that they would persist even if the time advancement were exact. The exact analytic form of the algorithm is presented, and several schemes for mitigating these instabilities are also presented.
Real-space, mean-field algorithm to numerically calculate long-range interactions
NASA Astrophysics Data System (ADS)
Cadilhe, A.; Costa, B. V.
2016-02-01
Long-range interactions are notoriously difficult to treat in statistical mechanics models. Some approaches introduce a cutoff in the interactions or make use of reaction-field methods; however, such treatments are of limited use, in particular close to phase transitions. The use of open boundary conditions allows the long-range interactions to be summed over the entire system, but this demands a sum over all degrees of freedom, which makes a numerical treatment prohibitive. Techniques like the Ewald summation or the fast multipole expansion account for the exact interactions but are still limited to a few thousand particles. In this paper we introduce a novel mean-field approach to treat long-range interactions. The method is based on dividing the system into cells. In the inner cell, which contains the particle in sight, the 'local' interactions are computed exactly; for each of the remaining cells, the 'far' contribution is computed as the average interaction between the particle in sight and the particles inside that cell. With this approach, the large- and small-cell limits are exact. At a fixed cell size, the method also becomes exact in the limit of large lattices. We have applied the procedure to the two-dimensional anisotropic dipolar Heisenberg model. A detailed comparison between our method, the exact calculation, and the cutoff-radius approximation was carried out. Our results show that the cutoff-cell approach outperforms any cutoff-radius approach, as it maintains the long-range memory present in these interactions, contrary to the cutoff-radius approximation. In addition, we calculated the critical temperature and the critical behavior of the specific heat of the anisotropic Heisenberg model using our method. The results are in excellent agreement with extensive Monte Carlo simulations using Ewald summation.
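The cell-averaging idea can be sketched in one dimension with a 1/r³ coupling (the model, cell size, and function names are all illustrative assumptions): sites in the cell of the tagged particle interact exactly, while each far cell contributes through its total spin placed at the cell centre.

```python
import numpy as np

def field_exact(spins, pos, i):
    """Exact long-range field (1/r^3 coupling) at site i from all other sites."""
    return sum(spins[j] / abs(pos[j] - pos[i]) ** 3
               for j in range(len(spins)) if j != i)

def field_cutoff_cell(spins, pos, i, cell=8):
    """Cutoff-cell estimate: the cell containing site i is summed exactly;
    every other cell contributes its total spin placed at the cell centre."""
    h = 0.0
    for c in range(len(spins) // cell):
        lo, hi = c * cell, (c + 1) * cell
        if c == i // cell:
            h += sum(spins[j] / abs(pos[j] - pos[i]) ** 3
                     for j in range(lo, hi) if j != i)
        else:
            centre = pos[lo:hi].mean()
            h += spins[lo:hi].sum() / abs(centre - pos[i]) ** 3
    return h

pos = np.arange(64, dtype=float)   # 1D chain of 64 sites, 8 cells of 8 sites
spins = np.ones(64)
h_exact = field_exact(spins, pos, 5)
h_cell = field_cutoff_cell(spins, pos, 5)
```

Note the two exact limits the abstract mentions: with one cell spanning the whole chain everything is summed exactly, and with one site per cell each "cell average" is just that site's exact contribution.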
Algorithm for calculating turbine cooling flow and the resulting decrease in turbine efficiency
NASA Technical Reports Server (NTRS)
Gauntner, J. W.
1980-01-01
An algorithm is presented for calculating both the quantity of compressor bleed flow required to cool the turbine and the decrease in turbine efficiency caused by the injection of cooling air into the gas stream. The algorithm, which is intended for an axial-flow, air-cooled turbine, is in the form of a subroutine suitable for incorporation in a properly written thermodynamic cycle code. Ten different cooling configurations are available for each row of cooled airfoils in the turbine. Results from the algorithm are substantiated by comparison with flows predicted by major engine manufacturers for given bulk metal temperatures and given cooling configurations. A list of definitions for the terms in the subroutine is presented.
Chen, Deng-kai; Gu, Rong; Gu, Yu-feng; Yu, Sui-huai
2016-01-01
Consumers' Kansei needs reflect their perception of a product and typically comprise a large number of adjectives. Reducing the dimensional complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaged in design work. Accordingly, this study employs a numerical design structure matrix (NDSM), built by parameterizing a conventional DSM, and integrates genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign a link weight to every pair of Kansei adjectives as the cell values when constructing the NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using the example of an electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.
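A toy version of the NDSM clustering step (the adjectives, link weights, penalty term, and the random-restart search standing in for the paper's genetic algorithm are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8-adjective NDSM: symmetric link weights on a four-point
# scale (0-3); the strongly linked pairs below are illustrative only.
n = 8
W = np.zeros((n, n), dtype=int)
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (5, 6), (6, 7), (5, 7)]:
    W[i, j] = W[j, i] = 3

def fitness(labels, W, penalty=0.2):
    """Clustering objective a GA could maximize: total intra-cluster link
    weight, minus a size penalty that discourages one giant cluster."""
    same = labels[:, None] == labels[None, :]
    intra = W[same].sum() / 2.0
    sizes = np.bincount(labels)
    return intra - penalty * float(np.sum(sizes ** 2))

# Random-restart search stands in here for the genetic algorithm.
best, best_f = None, -np.inf
for _ in range(2000):
    labels = rng.integers(0, 2, size=n)
    f = fitness(labels, W)
    if f > best_f:
        best, best_f = labels, f
```

With these weights the search recovers the two tightly linked adjective groups {0, 1, 2} and {5, 6, 7}; a real GA would replace the random restarts with selection, crossover, and mutation over the label vectors.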
Shuttle Entry Air Data System (SEADS) - Optimization of preflight algorithms based on flight results
NASA Technical Reports Server (NTRS)
Wolf, H.; Henry, M. W.; Siemers, Paul M., III
1988-01-01
The SEADS pressure model algorithm results were tested against other sources of air data, in particular the Shuttle Best Estimated Trajectory (BET). The algorithm basis was also tested through a comparison of the flight-measured pressure distribution vs the wind tunnel database. It is concluded that the successful flight of SEADS and the subsequent analysis of the data show good agreement between BET and SEADS air data.
Comparison of results of experimental research with numerical calculations of a model one-sided seal
NASA Astrophysics Data System (ADS)
Joachimiak, Damian; Krzyślak, Piotr
2015-06-01
This paper presents the results of experimental and numerical research on a model segment of a labyrinth seal at different levels of wear. The analysis covers the extent of leakage and the distribution of static pressure in the seal chambers and in the planes upstream and downstream of the segment. The measurement data have been compared with the results of numerical calculations obtained using commercial software. Based on the flow conditions occurring in the computational domain, the size of the mesh, characterized by the parameter y+, has been analyzed and the selection of the turbulence model has been described. The numerical calculations were based on the measurable thermodynamic parameters in the seal segments of steam turbines. The work contains a comparison of the mass flow and the distribution of static pressure in the seal chambers obtained from measurement and calculated numerically for a model seal segment at different levels of wear.
Kisselev, V B; Roberti, L; Perona, G
1995-12-20
The recently developed finite-element method for solution of the radiative transfer equation has been extended to compute the full azimuthal dependence of the radiance in a vertically inhomogeneous plane-parallel medium. The physical processes that are included in the algorithm are multiple scattering and bottom boundary bidirectional reflectivity. The incident radiation is a parallel flux on the top boundary that is characteristic for illumination of the atmosphere by the Sun in the UV, visible, and near-infrared regions of the electromagnetic spectrum. The theoretical basis is presented together with a number of applications to realistic atmospheres. The method is shown to be accurate even with a low number of grid points for most of the considered situations. The FORTRAN code for this algorithm is developed and is available for applications. PMID:21068966
Parallel technology for numerical modeling of fluid dynamics problems by high-accuracy algorithms
NASA Astrophysics Data System (ADS)
Gorobets, A. V.
2015-04-01
A parallel computation technology for modeling fluid dynamics problems by finite-volume and finite-difference methods of high accuracy is presented. The development of an algorithm, the design of a software implementation, and the creation of parallel programs for computations on large-scale computing systems are considered. The presented parallel technology is based on a multilevel parallel model combining various types of parallelism: shared and distributed memory, and multiple and single instruction streams applied to multiple data streams (MIMD and SIMD).
2014-01-01
Background: Eukaryotic transcriptional regulation is known to be highly connected through networks of cooperative transcription factors (TFs). Measuring the cooperativity of TFs is helpful for understanding the biological relevance of these TFs in regulating genes. Recent advances in computational techniques have led to various predictions of cooperative TF pairs in yeast. As each algorithm integrated different data resources and was developed based on a different rationale, each possessed its own merit and claimed to outperform the others. However, the claims were prone to subjectivity because each algorithm was compared with only a few other algorithms, using a small set of performance indices. This motivated us to propose a series of indices to objectively evaluate the prediction performance of existing algorithms and, based on the proposed performance indices, to conduct a comprehensive performance evaluation. Results: We collected 14 sets of predicted cooperative TF pairs (PCTFPs) in yeast from 14 existing algorithms in the literature. Using the eight performance indices we adopted or proposed, the cooperativity of each PCTFP was measured, and for each performance index a ranking score according to the mean cooperativity of the set was given to each set of PCTFPs under evaluation. The ranking scores of a set of PCTFPs vary with different performance indices, implying that an algorithm used to predict cooperative TF pairs has strengths in some respects and weaknesses in others. We finally made a comprehensive ranking of these 14 sets. The results showed that Wang J's study obtained the best performance evaluation on the prediction of cooperative TF pairs in yeast. Conclusions: In this study, we adopted or proposed eight performance indices to make a comprehensive performance evaluation of the prediction results of 14 existing cooperative TF identification algorithms. Most importantly, these proposed indices can be easily applied to
Height of burst explosions: a comparative study of numerical and experimental results
NASA Astrophysics Data System (ADS)
Omang, M.; Christensen, S. O.; Børve, S.; Trulsen, J.
2009-06-01
In the current work, we use the Constant Volume model and the numerical method, Regularized Smoothed Particle Hydrodynamics (RSPH) to study propagation and reflection of blast waves from detonations of the high explosives C-4 and TNT. The results from simulations of free-field TNT explosions are compared to previously published data, and good agreement is found. Measurements from height of burst tests performed by the Norwegian Defence Estates Agency are used to compare against numerical simulations. The results for shock time of arrival and the pressure levels are well represented by the numerical results. The results are also found to be in good agreement with results from a commercially available code. The effect of allowing different ratios of specific heat capacities in the explosive products are studied. We also evaluate the effect of changing the charge shape and height of burst on the triple point trajectory.
Chaotic scattering in an open vase-shaped cavity: Topological, numerical, and experimental results
NASA Astrophysics Data System (ADS)
Novick, Jaison Allen
point to each "detector point". We then construct the wave function directly from these classical trajectories using the two-dimensional WKB approximation. The wave function is then Fourier transformed using a fast Fourier transform algorithm, resulting in a spectrum in which each peak corresponds to an interpolated trajectory. Our predictions are based on an imagined experiment that uses microwave propagation within an electromagnetic waveguide. Such an experiment exploits the fact that, under suitable conditions, both Maxwell's equations and the Schrödinger equation can be reduced to the Helmholtz equation. Therefore our predictions, while compared to the electromagnetic experiment, contain information about the quantum system. Identifying peaks in the transmission spectrum with chaotic trajectories will allow for an additional experimental verification of the intermediate recursive structure. Finally, we summarize our results and discuss possible extensions of this project.
AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)
A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...
Kalpathy-Cramer, Jayashree; Zhao, Binsheng; Goldgof, Dmitry; Gu, Yuhua; Wang, Xingwei; Yang, Hao; Tan, Yongqiang; Gillies, Robert; Napel, Sandy
2016-08-01
Tumor volume estimation, as well as accurate and reproducible border segmentation in medical images, is important in the diagnosis, staging, and assessment of response to cancer therapy. The goal of this study was to demonstrate the feasibility of a multi-institutional effort to assess the repeatability and reproducibility of nodule borders and volume estimate bias of computerized segmentation algorithms in CT images of lung cancer, and to provide results from such a study. The dataset used for this evaluation consisted of 52 tumors in 41 CT volumes (40 patient datasets and 1 dataset containing scans of 12 phantom nodules of known volume) from five collections available in The Cancer Imaging Archive. Three academic institutions developing lung nodule segmentation algorithms submitted results for three repeat runs for each of the nodules. We compared the performance of lung nodule segmentation algorithms by assessing several measurements of spatial overlap and volume measurement. Nodule sizes varied from 29 μl to 66 ml and demonstrated a diversity of shapes. Agreement in spatial overlap of segmentations was significantly higher for multiple runs of the same algorithm than between segmentations generated by different algorithms (p < 0.05) and was significantly higher on the phantom dataset compared to the other datasets (p < 0.05). Algorithms differed significantly in the bias of the measured volumes of the phantom nodules (p < 0.05), underscoring the need for assessing performance on clinical data in addition to phantoms. Algorithms that most accurately estimated nodule volumes were not the most repeatable, emphasizing the need to evaluate both their accuracy and precision. There were considerable differences between algorithms, especially in a subset of heterogeneous nodules, underscoring the recommendation that the same software be used at all time points in longitudinal studies. PMID:26847203
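One common measure of the spatial-overlap agreement discussed above is the Dice coefficient; a minimal sketch (the function name and toy masks are ours, and the study may have used additional overlap measures):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentation masks
    (1.0 = identical masks, 0.0 = no overlap)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

seg_run1 = np.array([[1, 1, 0], [0, 1, 0]])   # toy "nodule" masks from
seg_run2 = np.array([[1, 0, 0], [0, 1, 1]])   # two segmentation runs
```

Repeatability compares runs of the same algorithm with such a score; reproducibility compares runs of different algorithms on the same nodule.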
NASA Astrophysics Data System (ADS)
Wojcik, J.; Powalowski, T.; Trawinski, Z.
2008-02-01
The aim of this paper is to compare the results of mathematical modeling with experimental results for ultrasonic wave scattering in an inhomogeneous dissipative medium. The research was carried out on an artery model (a pipe made of latex) with an internal diameter of 5 mm and a wall thickness of 1.25 mm. A numerical solver was created to calculate the fields of ultrasonic beams and the scattered fields under different boundary conditions and for different angles and transversal displacements of the ultrasonic beams with respect to the position of the arterial wall. The investigations employed the VED ultrasonic apparatus. Good agreement between the numerical calculations and the experimental results was obtained.
An Effective Hybrid Firefly Algorithm with Harmony Search for Global Numerical Optimization
Guo, Lihong; Wang, Gai-Ge; Wang, Heqi; Wang, Dinan
2013-01-01
A hybrid metaheuristic approach hybridizing harmony search (HS) and the firefly algorithm (FA), namely HS/FA, is proposed for function optimization. In HS/FA, the exploration of HS and the exploitation of FA are fully exerted, so HS/FA has a faster convergence speed than either HS or FA. In addition, a top-fireflies scheme is introduced to reduce running time, and HS is used to mutate fireflies during the update. The HS/FA method is verified on various benchmarks. The experiments show that HS/FA performs better than the standard FA and eight other optimization methods. PMID:24348137
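A much simplified sketch of a hybrid of this kind, not the authors' exact HS/FA and with all parameter values ours: firefly attraction moves combined with harmony-search-style memory consideration and pitch adjustment, minimizing a sphere test function.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    return float(np.sum(x ** 2))

def hs_fa_sketch(f, dim=5, n_fireflies=20, iters=200,
                 beta0=1.0, gamma=0.01, alpha=0.2, hmcr=0.9, par=0.3):
    """Toy hybrid: firefly attraction moves plus harmony-search-style
    memory consideration and pitch adjustment after each move."""
    X = rng.uniform(-5, 5, size=(n_fireflies, dim))
    F = np.array([f(x) for x in X])
    for _ in range(iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if F[j] < F[i]:  # move firefly i toward the brighter firefly j
                    beta = beta0 * np.exp(-gamma * np.sum((X[i] - X[j]) ** 2))
                    X[i] += beta * (X[j] - X[i]) + alpha * rng.uniform(-0.5, 0.5, dim)
                    for d in range(dim):  # HS-style mutation of each dimension
                        if rng.random() < hmcr:
                            X[i, d] = X[rng.integers(n_fireflies), d]
                            if rng.random() < par:
                                X[i, d] += 0.1 * rng.uniform(-1, 1)
                    F[i] = f(X[i])
        alpha *= 0.98  # shrink the random walk as the swarm converges
    best = int(np.argmin(F))
    return X[best], float(F[best])

x_best, f_best = hs_fa_sketch(sphere)
```

The paper's top-fireflies scheme (restricting attraction to the best few fireflies) and its exact mutation rules are omitted here; this sketch only shows how the two metaheuristics can interleave.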
NASA Technical Reports Server (NTRS)
Gunzburger, M. D.; Nicolaides, R. A.
1986-01-01
Substructuring methods are in common use in mechanics problems where typically the associated linear systems of algebraic equations are positive definite. Here these methods are extended to problems which lead to nonpositive definite, nonsymmetric matrices. The extension is based on an algorithm which carries out the block Gauss elimination procedure without the need for interchanges even when a pivot matrix is singular. Examples are provided wherein the method is used in connection with finite element solutions of the stationary Stokes equations and the Helmholtz equation, and dual methods for second-order elliptic equations.
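As a minimal illustration of block elimination on a Stokes-like saddle-point system, where the zero (2,2) block would defeat naive pointwise pivoting (this sketch assumes a nonsingular leading block and omits the paper's handling of singular pivot matrices):

```python
import numpy as np

def block_gauss_solve(A11, A12, A21, A22, b1, b2):
    """Block Gauss elimination for [[A11, A12], [A21, A22]] [x1; x2] = [b1; b2]:
    eliminate x1 with the nonsingular pivot A11, then solve the Schur
    complement system S x2 = b2 - A21 A11^{-1} b1 with S = A22 - A21 A11^{-1} A12."""
    A11_inv_A12 = np.linalg.solve(A11, A12)
    A11_inv_b1 = np.linalg.solve(A11, b1)
    S = A22 - A21 @ A11_inv_A12
    x2 = np.linalg.solve(S, b2 - A21 @ A11_inv_b1)
    x1 = A11_inv_b1 - A11_inv_A12 @ x2
    return x1, x2

# Stokes-like saddle-point system: the (2,2) block is zero, so the overall
# matrix is indefinite, yet block elimination goes through.
rng = np.random.default_rng(4)
M = rng.normal(size=(4, 4))
A11 = M @ M.T + 4 * np.eye(4)          # symmetric positive definite "A" block
A12 = rng.normal(size=(4, 2))          # discrete gradient-like coupling
A21 = A12.T
A22 = np.zeros((2, 2))
b1, b2 = rng.normal(size=4), rng.normal(size=2)
x1, x2 = block_gauss_solve(A11, A12, A21, A22, b1, b2)
```

The paper's contribution goes further, carrying out the elimination without interchanges even when a pivot matrix is singular; the sketch above shows only the well-posed base case.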
NASA Astrophysics Data System (ADS)
Hu, P.; Shi, D. Y.; Ying, L.; Shen, G. Z.; Chang, Y.; Liu, W. Q.
2013-05-01
A thermal-mechanical-transformation coupled theoretical model for hot stamping and the rheological behavior of high-strength steel at elevated temperatures were obtained through non-isothermal and isothermal tensile tests, respectively. The static explicit finite element equations for hot stamping were derived from the thermal-mechanical-transformation coupled constitutive laws and nonlinear large-deformation analysis. Based on these equations, the hot stamping module of KMAS (King Mesh Analysis System) was developed for the numerical simulation of sheet metal forming at elevated temperatures. A hot stamping simulation of a typical B-pillar conducted with the KMAS software was then compared to experiment in three respects: temperature distribution, thickness distribution, and martensite fraction. The good agreement between the numerical simulation and the experiment confirms that the multi-field coupled constitutive laws and the KMAS software can predict the hot stamping process accurately.
Numerical results on the transcendence of constants involving pi, e, and Euler's constant
NASA Technical Reports Server (NTRS)
Bailey, David H.
1988-01-01
The existence of simple polynomial equations (integer relations) for the constants e/pi, e + pi, log pi, gamma (Euler's constant), e exp gamma, gamma/e, gamma/pi, and log gamma is investigated by means of numerical computations. The recursive form of the Ferguson-Forcade algorithm (Ferguson and Forcade, 1979; Ferguson, 1986 and 1987) is implemented on the Cray-2 supercomputer at NASA Ames, applying multiprecision techniques similar to those described by Bailey (1988) except that FFTs are used instead of dual-prime-modulus transforms for multiplication. It is shown that none of the constants has an integer relation of degree eight or less with coefficients of Euclidean norm 10^9 or less.
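An integer relation for x of degree n is a nonzero integer vector (a_0, ..., a_n) with a_0 + a_1 x + ... + a_n x^n = 0. A brute-force toy stand-in for the Ferguson-Forcade machinery, restricted to tiny degrees and coefficient ranges (the tolerance and bounds are our choices):

```python
import itertools
import math

def find_integer_relation(x, degree=2, max_coeff=5, tol=1e-9):
    """Exhaustive stand-in for an integer-relation search: return integers
    (a_0, ..., a_degree), not all zero, with |sum_k a_k x^k| < tol, or None."""
    powers = [x ** k for k in range(degree + 1)]
    for coeffs in itertools.product(range(-max_coeff, max_coeff + 1),
                                    repeat=degree + 1):
        if any(coeffs) and abs(sum(a * p for a, p in zip(coeffs, powers))) < tol:
            return coeffs
    return None
```

For sqrt(2) this finds a multiple of x² - 2, while for e/pi it finds nothing at this scale; the real algorithm reaches degree 8 and coefficients up to 10^9, far beyond what exhaustive search can cover.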
NASA Technical Reports Server (NTRS)
Morrell, F. R.; Motyka, P. R.; Bailey, M. L.
1990-01-01
Flight test results for two sensor fault-tolerant algorithms developed for a redundant strapdown inertial measurement unit are presented. The inertial measurement unit (IMU) consists of four two-degrees-of-freedom gyros and accelerometers mounted on the faces of a semi-octahedron. Fault tolerance is provided by edge vector test and generalized likelihood test algorithms, each of which can provide dual fail-operational capability for the IMU. To detect the wide range of failure magnitudes in inertial sensors, which provide flight-crucial information for flight control and navigation, failure detection and isolation are developed in terms of a multilevel structure. Threshold compensation techniques, developed to enhance the sensitivity of the failure detection process to navigation-level failures, are presented. Four flight tests were conducted in a commercial transport-type environment to compare and determine the performance of the failure detection and isolation methods. Dual flight processors enabled concurrent tests for the algorithms. Failure signals such as hard-over, null, or bias shift were added to the sensor outputs as simple or multiple failures during the flights. Both algorithms provided timely detection and isolation of flight control level failures. The generalized likelihood test algorithm provided more timely detection of low-level sensor failures, but it produced one false isolation. Both algorithms demonstrated the capability to provide dual fail-operational performance for the skewed array of inertial sensors.
Image restoration by the method of convex projections: part 2 applications and numerical results.
Sezan, M I; Stark, H
1982-01-01
The image restoration theory discussed in a previous paper by Youla and Webb [1] is applied to a simulated image, and the results are compared with the well-known Gerchberg-Papoulis algorithm. The results show that the method of image restoration by projection onto convex sets, by providing a convenient technique for utilizing a priori information, performs significantly better than the Gerchberg-Papoulis method. PMID:18238262
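The Gerchberg-Papoulis benchmark is itself an alternating-projection scheme; a minimal band-limited-extrapolation sketch in one dimension (the signal length, band limit, gap size, and iteration count are our choices):

```python
import numpy as np

N, k_max = 256, 2                  # signal length; band limit |k| <= 2
rng = np.random.default_rng(2)

# Random real band-limited "true" signal.
spec = np.zeros(N, dtype=complex)
spec[1:k_max + 1] = rng.normal(size=k_max) + 1j * rng.normal(size=k_max)
spec[-k_max:] = np.conj(spec[1:k_max + 1][::-1])
x_true = np.fft.ifft(spec).real

known = np.ones(N, dtype=bool)
known[-16:] = False                # the last 16 samples are unobserved

def project_band(x):
    """Projection onto the convex set of band-limited signals."""
    X = np.fft.fft(x)
    X[k_max + 1:N - k_max] = 0.0
    return np.fft.ifft(X).real

def project_data(x):
    """Projection onto the convex set of signals matching the observed samples."""
    y = x.copy()
    y[known] = x_true[known]
    return y

x = np.zeros(N)
err0 = np.linalg.norm(x_true[~known])   # initial error on the unobserved tail
for _ in range(500):
    x = project_data(project_band(x))
err = np.linalg.norm(x - x_true)
```

Both constraint sets here are convex, so this is also the simplest instance of the POCS framework; the paper's point is that POCS admits many more a priori constraints than the band-limit and known-samples pair used by Gerchberg-Papoulis.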
Stamnes, K; Tsay, S C; Wiscombe, W; Jayaweera, K
1988-06-15
We summarize an advanced, thoroughly documented, and quite general-purpose discrete ordinate algorithm for time-independent transfer calculations in vertically inhomogeneous, nonisothermal, plane-parallel media. Atmospheric applications ranging from the UV to the radar region of the electromagnetic spectrum are possible. The physical processes included are thermal emission, scattering, absorption, and bidirectional reflection and emission at the lower boundary. The medium may be forced at the top boundary by parallel or diffuse radiation, and by internal and boundary thermal sources as well. We provide a brief account of the theoretical basis as well as a discussion of the numerical implementation of the theory. The recent advances made by ourselves and our collaborators, in both formulation and numerical solution, are all incorporated in the algorithm. Prominent among these advances is the complete conquest of two ill-conditioning problems which afflicted all previous discrete ordinate implementations: (1) the computation of eigenvalues and eigenvectors and (2) the inversion of the matrix determining the constants of integration. Copies of the FORTRAN program on microcomputer diskettes are available for interested users. PMID:20531783
NASA Astrophysics Data System (ADS)
Kuraz, Michal
2016-06-01
This paper presents a pseudo-deterministic catchment runoff model based on the Richards equation [1], the governing equation for subsurface flow. The subsurface flow in a catchment is described here by two-dimensional variably saturated flow (unsaturated and saturated). The governing equation is the Richards equation with a slight modification of the time derivative term, as considered e.g. by Neuman [2]. The nonlinear nature of this problem appears in the unsaturated zone only; however, the delineation of the saturated zone boundary is a nonlinear, computationally expensive issue. The simple one-dimensional Boussinesq equation was used here as a rough estimator of the saturated zone boundary. With this estimate, the dd-adaptivity algorithm (see Kuraz et al. [4, 5, 6]) can always start with an optimal subdomain split, so it is now possible to avoid solving huge systems of linear equations at the initial iteration level of our Richards equation based runoff model.
Benetazzo, Flavia; Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro
2014-09-01
Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning if it is due to the respiratory act or due to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, the respiratory rate measurements coming from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, with five different persons seated in front of the camera. The first test aimed to choose a suitable sampling frequency. The second test was conducted to compare the performance of the proposed system with respect to the gold standard in ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm performance under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and it is a viable alternative to monitor the respiratory activity of a person without using invasive sensors. PMID:26609383
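The rate-estimation step can be sketched by spectral peak-picking on a chest-distance signal (the breathing band, sampling rate, and synthetic signal below are our assumptions, not the paper's exact algorithm):

```python
import numpy as np

def respiratory_rate_bpm(distance, fs):
    """Estimate breaths per minute from a chest-to-camera distance signal by
    locating the dominant spectral peak in a 0.1-0.7 Hz breathing band."""
    d = distance - np.mean(distance)
    freqs = np.fft.rfftfreq(len(d), d=1.0 / fs)
    mag = np.abs(np.fft.rfft(d))
    band = (freqs >= 0.1) & (freqs <= 0.7)
    f_breath = freqs[band][np.argmax(mag[band])]
    return 60.0 * f_breath

# Synthetic 60 s recording at 30 Hz: 0.25 Hz breathing (15 breaths/min)
# with a ~1 cm chest excursion plus measurement noise.
fs = 30.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(3)
signal = 0.01 * np.sin(2 * np.pi * 0.25 * t) + 0.001 * rng.normal(size=t.size)
```

The depth-sensing front end of the paper reduces, in effect, to producing such a distance time series; restricting the search to a physiological band is what rejects the slow body movements the abstract mentions.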
Akbari, Hamed; Bilello, Michel; Da, Xiao; Davatzikos, Christos
2015-01-01
Evaluating algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies have evaluated image registration algorithms on specific tasks or specific databases (e.g., only skull-stripped images, only single-site images, etc.). Consequently, the choice of registration algorithm appears task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging studies create the need for, and raise the question of, whether some registration algorithms can 1) apply generally to various tasks/databases posing various challenges; 2) perform consistently well; and, while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms for their generality, accuracy and robustness. We fixed their parameters at the values suggested by the algorithm developers in the literature. We tested them on 7 databases/tasks, which present one or more of 4 commonly encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracy was measured against (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools and public databases (whenever possible), and we fully disclose the parameter settings. We present the evaluation results and discuss the performances in light of the algorithms' similarity metrics, transformation models and optimization strategies. We also discuss future directions for algorithm development and evaluation. PMID:24951685
NASA Astrophysics Data System (ADS)
Bouallegue, Kais; Chaari, Abdessattar
In this study, we propose a numerical strategy for generating paths of arbitrary shape for trajectory scheduling of a car-like mobile robot, where the planned motions are continuous sequences in the robot's configuration space. These paths are programmed to produce various types of closed or open trajectories. We are interested in controlling the robot's motion from an initial position to a final position while optimizing the energy consumed in its alternating circular motion on both sides of the segment joining these two points. We present a new method based on a numerical approach derived from the kinematic equations of the robot. This numerical, adaptive and dynamic control technique is implemented on a DSP21065L of the SHARC family. The algorithm steers the robot from an initial departure position to a final arrival position in the absence of obstacles.
A critical evaluation of numerical algorithms and flow physics in complex supersonic flows
NASA Astrophysics Data System (ADS)
Aradag, Selin
In this research, two complex supersonic flows are selected for Navier-Stokes-based CFD simulation. The first test case is "Supersonic Flow over an Open Rectangular Cavity". Open cavity flow fields are remarkably complicated, with internal and external regions coupled via self-sustained shear layer oscillations. Supersonic flow past a cavity has numerous applications in store carriage and release. Internal carriage of stores, which can be modeled using a cavity configuration, is used on supersonic aircraft to reduce radar cross section, aerodynamic drag and aerodynamic heating. Supersonic, turbulent, three-dimensional unsteady flow past an open rectangular cavity is simulated to understand the physics and three-dimensional nature of the cavity flow oscillations. The influence of numerical parameters such as the numerical flux scheme, computation time and flux limiter on the computed flow is determined. Two-dimensional simulations are also performed for comparison. The second test case is "The Computational Design of Boeing/AFOSR Mach 6 Wind Tunnel". Due to huge differences between geometrical scales, this problem is both challenging and computationally intensive. Most experimental data obtained from conventional ground testing facilities are believed to be unreliable due to high levels of noise associated with acoustic fluctuations from the turbulent boundary layers on the wind tunnel walls. Quiet testing facilities are therefore very important for hypersonic flow research. The Boeing/AFOSR Mach 6 Wind Tunnel at Purdue University has been designed as a quiet tunnel whose noise level is an order of magnitude lower than that of conventional wind tunnels. However, quiet flow is achieved in the Purdue Mach 6 tunnel only at low Reynolds numbers. Early transition of the nozzle wall boundary layer has been identified as the cause of the test section noise. Separation bubbles on the bleed lip and associated
Numerical modeling of on-orbit propellant motion resulting from an impulsive acceleration
NASA Technical Reports Server (NTRS)
Aydelott, John C.; Mjolsness, Raymond C.; Torrey, Martin D.; Hochstein, John I.
1987-01-01
In-space docking and separation maneuvers of spacecraft that have large fluid mass fractions may cause undesirable spacecraft motion in response to the impulsive-acceleration-induced fluid motion. An example of this potential low gravity fluid management problem arose during the development of the shuttle/Centaur vehicle. Experimentally verified numerical modeling techniques were developed to establish the propellant dynamics, and subsequent vehicle motion, associated with the separation of the Centaur vehicle from the shuttle orbiter cargo bay. Although the shuttle/Centaur development activity was suspended, the numerical modeling techniques are available to predict on-orbit liquid motion resulting from impulsive accelerations for other missions and spacecraft.
Algorithms for detecting antibodies to HIV-1: results from a rural Ugandan cohort.
Nunn, A J; Biryahwaho, B; Downing, R G; van der Groen, G; Ojwiya, A; Mulder, D W
1993-08-01
Although the Western blot test is widely used to confirm HIV-1 serostatus, concerns over its additional cost have prompted review of the need for supplementary testing and the evaluation of alternative test algorithms. Serostatus tends to be confirmed with this additional test especially when tested individuals will be informed of their serostatus or when results will be used for research purposes. The confirmation procedure has been adopted as a means of securing suitably high levels of specificity and sensitivity. With the goal of exploring potential alternatives to Western blot confirmation, the authors describe the use of parallel testing with a competitive and an indirect enzyme immunoassay, with and without supplementary Western blots. Sera were obtained from 7895 people in the rural population survey and tested with an algorithm based on the Recombigen HIV-1 EIA and Wellcozyme HIV-1 Recombinant; alternative algorithms were assessed on negative or confirmed positive sera. None of the 227 sera classified as negative by the 2 assays were positive by Western blot. Of the 192 identified as positive by both assays, 4 were found to be seronegative by Western blot. The possibility of technical error does, however, exist for 3 of these latter cases. One of the alternative algorithms assessed classified all borderline or discordant assay results as negative, with 100% specificity and 98.4% sensitivity, at only one-third the cost of the conventional algorithm. These results suggest that high specificity and sensitivity may be obtained without using Western blot, and at a considerable reduction in cost. PMID:8397940
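The best-performing alternative algorithm described above (positive only when both assays agree; borderline or discordant results classified negative) is simple enough to sketch directly. The labels below are hypothetical stand-ins for the assay read-outs:

```python
def classify(competitive, indirect):
    """Parallel two-EIA algorithm: a serum is reported positive only when
    both the competitive and the indirect assay are positive; borderline
    or discordant results are classified negative, as in the alternative
    algorithm described in the abstract."""
    if competitive == 'pos' and indirect == 'pos':
        return 'positive'
    return 'negative'

print(classify('pos', 'pos'))         # → positive
print(classify('pos', 'neg'))         # discordant → negative
print(classify('borderline', 'pos'))  # borderline → negative
```

The cost saving comes from the structure itself: no serum ever proceeds to a supplementary Western blot.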
NASA Astrophysics Data System (ADS)
García, Hermes A.; Guerrero-Bolaño, Francisco J.; Obregón-Neira, Nelson
2010-05-01
Due to both mathematical tractability and the efficient use of computational resources, it is very common in numerical modeling for hydro-engineering to find that linearization techniques have been applied to the nonlinear partial differential equations arising in environmental flow studies. Sometimes this simplification is accompanied by outright omission of nonlinear terms, which further diminishes the performance of the implemented approach. This is the case, for example, in contaminant transport modeling in streams. Even today, QUAL2k, one of the most commonly used traditional water quality models, preserves its original algorithm, which omits nonlinear terms through linearization, in spite of continuous algorithmic development and growth in computer power. For that reason, the main objective of this research was to develop a flexible tool for non-linear water quality modeling. The solution implemented here was based on two genetic algorithms used in a nested way to find two different solution sets: the first set comprises the concentrations of the physical-chemical variables used in the modeling approach (16 variables) that satisfy the non-linear equation system. The second set is the typical solution of the inverse problem: the parameter and constant values for the model when applied to a particular stream. Of the sixteen (16) variables, thirteen (13) were modeled using non-linear coupled equation systems and three (3) were modeled independently. The model required fifty (50) parameters. The nested genetic algorithm used for the numerical solution of the non-linear equation system proved to be a flexible tool for handling the intrinsic non-linearity that emerges from the interactions among the multiple variables involved in water quality studies. However because there is a strong data limitation in
Image Artifacts Resulting from Gamma-Ray Tracking Algorithms Used with Compton Imagers
Seifert, Carolyn E.; He, Zhong
2005-10-01
For Compton imaging it is necessary to determine the sequence of gamma-ray interactions in a single detector or array of detectors. This can be done by time-of-flight measurements if the interactions are sufficiently far apart. However, in small detectors the time between interactions can be too small to measure, and other means of gamma-ray sequencing must be used. In this work, several popular sequencing algorithms are reviewed for sequences with two observed events and three or more observed events in the detector. These algorithms can result in poor imaging resolution and introduce artifacts in the backprojection images. The effects of gamma-ray tracking algorithms on Compton imaging are explored in the context of the 4π Compton imager built by the University of Michigan.
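As a minimal sketch of one common two-interaction sequencing heuristic (not necessarily one of the specific algorithms reviewed in the paper), a candidate ordering can be tested against the Compton scattering formula, assuming the photon deposits its full energy:

```python
ME_C2 = 511.0  # electron rest energy in keV

def compton_cos_theta(e_first, e_total):
    """Scattering-angle cosine implied by the Compton formula when e_first
    is deposited in the first interaction of a fully absorbed photon."""
    e_scattered = e_total - e_first
    return 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_total)

def sequence_two(e_a, e_b):
    """Order two energy deposits: keep the ordering whose implied angle is
    physical (|cos theta| <= 1); if both or neither qualify, fall back to
    larger-deposit-first, one simple heuristic among those in the
    literature."""
    total = e_a + e_b
    valid = [(first, total - first) for first in (e_a, e_b)
             if abs(compton_cos_theta(first, total)) <= 1.0]
    if len(valid) == 1:
        return valid[0]
    return (max(e_a, e_b), min(e_a, e_b))

# 200 keV photon split into 150 + 50 keV: only 50-keV-first is physical
print(sequence_two(150.0, 50.0))  # → (50.0, 150.0)
```

When both orderings are kinematically allowed the heuristic must guess, and a wrong guess places the backprojection cone at the wrong vertex, which is one source of the image artifacts the paper examines.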
The design and results of an algorithm for intelligent ground vehicles
NASA Astrophysics Data System (ADS)
Duncan, Matthew; Milam, Justin; Tote, Caleb; Riggins, Robert N.
2010-01-01
This paper addresses the design, design method, test platform, and test results of an algorithm used in autonomous navigation for intelligent vehicles. The Bluefield State College (BSC) team created this algorithm for its 2009 Intelligent Ground Vehicle Competition (IGVC) robot called Anassa V. The BSC robotics team is comprised of undergraduate computer science, engineering technology, marketing students, and one robotics faculty advisor. The team has participated in IGVC since the year 2000. A major part of the design process that the BSC team uses each year for IGVC is a fully documented "Post-IGVC Analysis." Over the nine years since 2000, the lessons the students learned from these analyses have resulted in an ever-improving, highly successful autonomous algorithm. The algorithm employed in Anassa V is a culmination of past successes and new ideas, resulting in Anassa V earning several excellent IGVC 2009 performance awards, including third place overall. The paper will discuss all aspects of the design of this autonomous robotic system, beginning with the design process and ending with test results for both simulation and real environments.
Numerical Studies of Magnetohydrodynamic Activity Resulting from Inductive Transients Final Report
Sovinec, Carl R.
2005-08-29
This report describes results from numerical studies of transients in magnetically confined plasmas. The work has been performed by University of Wisconsin graduate students James Reynolds and Giovanni Cone and by the Principal Investigator through support from contract DE-FG02-02ER54687, a Junior Faculty in Plasma Science award from the DOE Office of Science. Results from the computations have added significantly to our knowledge of magnetized plasma relaxation in the reversed-field pinch (RFP) and spheromak. In particular, they have distinguished relaxation activity expected in sustained configurations from transient effects that can persist over a significant fraction of the plasma discharge. We have also developed the numerical capability for studying electrostatic current injection in the spherical torus (ST). These configurations are being investigated as plasma confinement schemes in the international effort to achieve controlled thermonuclear fusion for environmentally benign energy production. Our numerical computations have been performed with the NIMROD code (http://nimrodteam.org) using local computing resources and massively parallel computing hardware at the National Energy Research Scientific Computing Center. Direct comparisons of simulation results for the spheromak with laboratory measurements verify the effectiveness of our numerical approach. The comparisons have been published in refereed journal articles by this group and by collaborators at Lawrence Livermore National Laboratory (see Section 4). In addition to the technical products, this grant has supported the graduate education of the two participating students for three years.
Trescott, Peter C.; Pinder, George Francis; Larson, S.P.
1976-01-01
The model will simulate ground-water flow in an artesian aquifer, a water-table aquifer, or a combined artesian and water-table aquifer. The aquifer may be heterogeneous and anisotropic and have irregular boundaries. The source term in the flow equation may include well discharge, constant recharge, leakage from confining beds in which the effects of storage are considered, and evapotranspiration as a linear function of depth to water. The theoretical development includes presentation of the appropriate flow equations and derivation of the finite-difference approximations (written for a variable grid). The documentation emphasizes the numerical techniques that can be used for solving the simultaneous equations and describes the results of numerical experiments using these techniques. Of the three numerical techniques available in the model, the strongly implicit procedure, in general, requires less computer time and has fewer numerical difficulties than do the iterative alternating direction implicit procedure and line successive overrelaxation (which includes a two-dimensional correction procedure to accelerate convergence). The documentation includes a flow chart, program listing, an example simulation, and sections on designing an aquifer model and requirements for data input. It illustrates how model results can be presented on the line printer and pen plotters with a program that utilizes the graphical display software available from the Geological Survey Computer Center Division. In addition the model includes options for reading input data from a disk and writing intermediate results on a disk.
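A minimal sketch of the iterative solution idea, using plain successive over-relaxation rather than the model's strongly implicit procedure (the grid size, boundary values and homogeneous transmissivity are illustrative assumptions, not the model's input format):

```python
def solve_heads(n=20, omega=1.7, iters=2000):
    """Steady 2D confined-aquifer heads on a square grid by successive
    over-relaxation (a simpler cousin of the LSOR/SIP solvers in the
    model): fixed heads of 10 on the left boundary and 0 on the right,
    no-flow top and bottom, homogeneous isotropic transmissivity."""
    h = [[5.0] * n for _ in range(n)]
    for row in h:
        row[0], row[-1] = 10.0, 0.0
    for _ in range(iters):
        for i in range(n):
            for j in range(1, n - 1):
                # mirror across the top/bottom edges to enforce no-flow
                up = h[i - 1][j] if i > 0 else h[i + 1][j]
                dn = h[i + 1][j] if i < n - 1 else h[i - 1][j]
                new = 0.25 * (up + dn + h[i][j - 1] + h[i][j + 1])
                h[i][j] += omega * (new - h[i][j])
    return h

heads = solve_heads()
# with these boundaries the flow is 1D, so heads vary linearly across j
print(abs(heads[10][10] - 10.0 * (1 - 10 / 19)) < 1e-6)  # → True
```

The strongly implicit procedure favored in the documentation typically converges in far fewer sweeps than SOR on the same problem, which is why it generally requires less computer time.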
Gurkiewicz, Meron; Korngreen, Alon
2007-01-01
The activity of trans-membrane proteins such as ion channels is the essence of neuronal transmission. The currently most accurate method for determining ion channel kinetic mechanisms is single-channel recording and analysis. Yet the limitations and complexities in interpreting single-channel recordings discourage many physiologists from using them. Here we show that a genetic search algorithm in combination with a gradient descent algorithm can be used to fit whole-cell voltage-clamp data to kinetic models with a high degree of accuracy. Previously, ion channel stimulation traces were analyzed one at a time and the results of these analyses combined to produce a picture of channel kinetics. Here the entire set of traces from all stimulation protocols is analyzed simultaneously. The algorithm was initially tested on simulated current traces produced by several Hodgkin-Huxley-like and Markov chain models of voltage-gated potassium and sodium channels. Currents were also produced by simulating levels of noise expected from actual patch recordings. Finally, the algorithm was used to find the kinetic parameters of several voltage-gated sodium and potassium channel models by matching its results to data recorded from layer 5 pyramidal neurons of the rat cortex in the nucleated outside-out patch configuration. The minimization scheme gives electrophysiologists a tool for reproducing and simulating voltage-gated ion channel kinetics at the cellular level. PMID:17784781
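A toy version of the two-stage fit, assuming a deliberately simplified one-parameter "channel" model (the model, population settings and learning rate are all illustrative, not the paper's):

```python
import random
random.seed(1)

def model_current(g_max, v):
    """Toy steady-state current with one unknown conductance; a stand-in
    for the Hodgkin-Huxley-style kinetic models fitted in the paper."""
    return g_max * (v + 60.0)

TARGET_G = 0.12  # "true" conductance used to synthesize the data
data = [(v, model_current(TARGET_G, v)) for v in range(-40, 41, 10)]

def cost(g):
    return sum((i - model_current(g, v)) ** 2 for v, i in data)

# stage 1: crude genetic search over the conductance
pop = [random.uniform(0.0, 1.0) for _ in range(30)]
for _ in range(40):
    pop.sort(key=cost)
    parents = pop[:10]
    pop = parents + [max(0.0, random.choice(parents) + random.gauss(0, 0.05))
                     for _ in range(20)]
best = min(pop, key=cost)

# stage 2: gradient descent refinement from the GA's best candidate
g, lr, h = best, 1e-6, 1e-6
for _ in range(500):
    grad = (cost(g + h) - cost(g - h)) / (2 * h)
    g -= lr * grad
print(abs(g - TARGET_G) < 1e-3)  # → True
```

The division of labor mirrors the paper's scheme: the genetic stage explores the parameter space globally, and the gradient stage polishes the best candidate locally.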
Improving the trust in results of numerical simulations and scientific data analytics
Cappello, Franck; Constantinescu, Emil; Hovland, Paul; Peterka, Tom; Phillips, Carolyn; Snir, Marc; Wild, Stefan
2015-04-30
This white paper investigates several key aspects of the trust that a user can give to the results of numerical simulations and scientific data analytics. In this document, the notion of trust is related to the integrity of numerical simulations and data analytics applications. This white paper complements the DOE ASCR report on Cybersecurity for Scientific Computing Integrity by (1) exploring the sources of trust loss; (2) reviewing the definitions of trust in several areas; (3) providing numerous cases of result alteration, some of them leading to catastrophic failures; (4) examining the current notion of trust in numerical simulation and scientific data analytics; (5) providing a gap analysis; and (6) suggesting two important research directions and their respective research topics. To simplify the presentation without loss of generality, we consider that trust in results can be lost (or the results’ integrity impaired) because of any form of corruption happening during the execution of the numerical simulation or the data analytics application. In general, the sources of such corruption are threefold: errors, bugs, and attacks. Current applications are already using techniques to deal with different types of corruption. However, not all potential corruptions are covered by these techniques. We firmly believe that the current level of trust that a user has in the results is at least partially founded on ignorance of this issue or the hope that no undetected corruptions will occur during the execution. This white paper explores the notion of trust and suggests recommendations for developing a more scientifically grounded notion of trust in numerical simulation and scientific data analytics. We first formulate the problem and show that it goes beyond previous questions regarding the quality of results such as V&V, uncertainty quantification, and data assimilation. We then explore the complexity of this difficult problem, and we sketch complementary general
NASA Astrophysics Data System (ADS)
Pearce, J. D.; Esler, J. G.
2010-10-01
A pseudo-spectral algorithm is presented for the solution of the rotating Green-Naghdi shallow water equations in two spatial dimensions. The equations are first written in vorticity-divergence form, in order to exploit the fact that time-derivatives then appear implicitly in the divergence equation only. A nonlinear equation must then be solved at each time-step in order to determine the divergence tendency. The nonlinear equation is solved by means of a simultaneous iteration in spectral space to determine each Fourier component. The key to the rapid convergence of the iteration is the use of a good initial guess for the divergence tendency, which is obtained from polynomial extrapolation of the solution obtained at previous time-levels. The algorithm is therefore best suited to be used with a standard multi-step time-stepping scheme (e.g. leap-frog). Two test cases are presented to validate the algorithm for initial value problems on a square periodic domain. The first test is to verify cnoidal wave speeds in one dimension against analytical results. The second test is to ensure that the Miles-Salmon potential vorticity is advected as a parcel-wise conserved tracer throughout the nonlinear evolution of a perturbed jet subject to shear instability. The algorithm is demonstrated to perform well in each test. The resulting numerical model is expected to be of use in identifying paradigmatic behavior in mesoscale flows in the atmosphere and ocean in which both vortical, nonlinear and dispersive effects are important.
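The guess-then-iterate structure can be sketched in scalar form (the ODE, the Newton stand-in for the spectral-space simultaneous iteration, and the step sizes are all illustrative assumptions):

```python
def extrapolate(history):
    """Initial guess for the next time level by quadratic (Lagrange)
    extrapolation of up to three previous solutions, equal time steps."""
    if len(history) >= 3:
        a, b, c = history[-3:]
        return a - 3.0 * b + 3.0 * c
    return history[-1]

def solve_step(residual, guess, tol=1e-12, max_iter=50):
    """Newton iteration with a forward-difference derivative; a scalar
    stand-in for the paper's simultaneous iteration in spectral space."""
    x = guess
    for _ in range(max_iter):
        r = residual(x)
        if abs(r) < tol:
            break
        dr = (residual(x + 1e-7) - r) / 1e-7
        x -= r / dr
    return x

# backward-Euler steps for dy/dt = -y^3 (illustrative nonlinear problem)
dt, y = 0.1, 1.0
history = [y]
for _ in range(20):
    y_prev = y
    y = solve_step(lambda x: x - y_prev + dt * x ** 3, extrapolate(history))
    history.append(y)
print(all(b < a for a, b in zip(history, history[1:])))  # → True
```

The extrapolated guess is usually so close to the new solution that only a few iterations are needed per step, which is the source of the rapid convergence claimed above.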
NASA Astrophysics Data System (ADS)
Tang, Yu-Hang; Karniadakis, George Em
2014-11-01
We present a scalable dissipative particle dynamics simulation code, fully implemented on Graphics Processing Units (GPUs) using a hybrid CUDA/MPI programming model, which achieves 10-30 times speedup on a single GPU over 16 CPU cores and almost linear weak scaling across a thousand nodes. A unified framework is developed within which the efficient generation of the neighbor list and the maintenance of particle data locality are addressed. Our algorithm generates strictly ordered neighbor lists in parallel, while the construction is deterministic and makes no use of atomic operations or sorting. Such a neighbor list leads to optimal data loading efficiency when combined with a two-level particle reordering scheme. A faster in situ generation scheme for Gaussian random numbers is proposed using precomputed binary signatures. We designed custom transcendental functions that are fast and accurate for evaluating the pairwise interaction. The correctness and accuracy of the code are verified through a set of test cases simulating Poiseuille flow and spontaneous vesicle formation. Computer benchmarks demonstrate the speedup of our implementation over the CPU implementation as well as strong and weak scalability. A large-scale simulation of spontaneous vesicle formation consisting of 128 million particles was conducted to further illustrate the practicality of our code in real-world applications. Catalogue identifier: AETN_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AETN_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 1 602 716 No. of bytes in distributed program, including test data, etc.: 26 489 166 Distribution format: tar.gz Programming language: C/C++, CUDA C/C++, MPI. Computer: Any computer having nVidia GPGPUs with compute capability 3.0. Operating system: Linux. Has the code been
Results from the New IGS Time Scale Algorithm (version 2.0)
NASA Astrophysics Data System (ADS)
Senior, K.; Ray, J.
2009-12-01
Since 2004 the IGS Rapid and Final clock products have been aligned to a highly stable time scale derived from a weighted ensemble of clocks in the IGS network. The time scale is driven mostly by hydrogen maser ground clocks, though the GPS satellite clocks also carry non-negligible weight, resulting in a time scale with a one-day frequency stability of about 1E-15. However, because of the relatively simple weighting scheme used in the time scale algorithm, and because the scale is aligned to UTC by steering it to GPS Time, the resulting stability beyond several days suffers. The authors present results from a new 2.0 version of the IGS time scale, highlighting the improvements to the algorithm and new modeling considerations, as well as the improved time scale stability.
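A much-simplified sketch of inverse-variance clock weighting at a single epoch (the numbers and the weighting rule are illustrative; the actual IGS algorithm is considerably more elaborate, with detrending, steering and deweighting of misbehaving clocks):

```python
def ensemble_phase(phases, adevs):
    """One epoch of a weighted ensemble time scale: each clock's phase
    deviation (seconds) is weighted inversely to its squared Allan
    deviation, a drastic simplification of the IGS weighting scheme."""
    weights = [1.0 / a ** 2 for a in adevs]
    return sum(w * p for w, p in zip(weights, phases)) / sum(weights)

# two stable masers and one noisier clock (hypothetical numbers)
ts = ensemble_phase([1.0e-9, -0.5e-9, 20.0e-9], [1e-15, 1e-15, 1e-13])
print(2.4e-10 < ts < 2.6e-10)  # → True
```

Because the masers carry almost all of the weight, the noisier clock's large offset barely moves the ensemble, which is why maser-dominated weighting yields the short-term stability quoted above.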
Dragna, Didier; Blanc-Benon, Philippe; Poisson, Franck
2014-03-01
Results from outdoor acoustic measurements performed in a railway site near Reims in France in May 2010 are compared to those obtained from a finite-difference time-domain solver of the linearized Euler equations. During the experiments, the ground profile and the different ground surface impedances were determined. Meteorological measurements were also performed to deduce mean vertical profiles of wind and temperature. An alarm pistol was used as a source of impulse signals and three microphones were located along a propagation path. The various measured parameters are introduced as input data into the numerical solver. In the frequency domain, the numerical results are in good agreement with the measurements up to a frequency of 2 kHz. In the time domain, apart from a time shift, the predicted waveforms match the measured waveforms closely. PMID:24606253
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Forecasting Energy Market Contracts by Ambit Processes: Empirical Study and Numerical Results
Di Persio, Luca; Marchesan, Michele
2014-01-01
In the present paper we exploit the theory of ambit processes to develop a model able to effectively forecast prices of forward contracts written on the Italian energy market. Both short-term and medium-term scenarios are considered, and proper calibration procedures as well as related numerical results are provided, showing a high degree of accuracy in the obtained approximations when compared with the empirical time series of interest. PMID:27437500
NASA Astrophysics Data System (ADS)
Kitaygorsky, J.; Amburgey, C.; Elliott, J. R.; Fisher, R.; Perala, R. A.
A broadband (100 MHz-1.2 GHz) plane wave electric field source was used to evaluate electric field penetration inside a simplified Boeing 707 aircraft model with a finite-difference time-domain (FDTD) method using EMA3D. The role of absorption losses inside the simplified aircraft was investigated. It was found that, in this frequency range, none of the cavities inside the Boeing 707 model are truly reverberant when frequency stirring is applied, and a purely statistical electromagnetics approach cannot be used to predict or analyze the field penetration or shielding effectiveness (SE). Our goal was therefore to understand the nature of losses in such a quasi-statistical environment by adding various numbers of absorbing objects inside the simplified aircraft and evaluating the SE, decay-time constant τ, and quality factor Q. We then compare our numerical results with experimental results obtained by D. Mark Johnson et al. on a decommissioned Boeing 707 aircraft.
Some numerical simulation results of swirling flow in d.c. plasma torch
NASA Astrophysics Data System (ADS)
Felipini, C. L.; Pimenta, M. M.
2015-03-01
We present and discuss some results of numerical simulation of swirling flow in a d.c. plasma torch, obtained with a two-dimensional magnetohydrodynamic (MHD) model developed to simulate the phenomena related to the interaction between the swirling flow and the electric arc in a non-transferred arc plasma torch. The model was implemented in a computer code based on the Finite Volume Method (FVM) to enable numerical solution of the governing equations. Cases with different operating conditions (gas flow rate; swirl number) were simulated. The results compare well with the literature over most of the computational domain. The numerical simulations enabled study of the flow behaviour in the plasma torch and of the effects of different swirl numbers on the temperature and axial velocity of the plasma flow. The results demonstrate that the developed model is suitable both for gaining a better understanding of the phenomena involved and for the development and optimization of plasma torches.
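For context, the swirl number referenced above is conventionally defined as the ratio of the axial flux of angular momentum to the axial flux of axial momentum (standard definition; the paper's exact convention may differ):

```latex
S = \frac{\displaystyle \int_0^R \rho \, u \, w \, r^2 \, dr}
         {R \displaystyle \int_0^R \rho \, u^2 \, r \, dr},
```

where $u$ and $w$ are the axial and tangential velocity components, $\rho$ the density, and $R$ a characteristic (e.g. nozzle) radius.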
Angus, Simon D.; Piotrowska, Monika Joanna
2014-01-01
Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for a relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high-fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model, a constrained, non-linear search for better-performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, the GA identified candidate protocols which conferred an average of 9.4% (maximum benefit 16.5%) and 7.1% (13.3%) improvement (reduction) in tumour cell count relative to the two benchmarks, respectively. Noticing that the top-performing protocols converged on temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17–18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost
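The calibrated spheroid model is not reproduced here, but the sensitivity of outcome to fractionation choices can be illustrated with the standard linear-quadratic (LQ) cell-survival model (the parameter values below are typical textbook numbers, not the paper's, and the LQ model ignores the timing effects the paper studies):

```python
import math

def surviving_fraction(n, d, alpha=0.3, beta=0.03):
    """LQ-model surviving fraction after n fractions of dose d (Gy):
    S = exp(-n * (alpha*d + beta*d^2)), with alpha/beta = 10 Gy,
    an illustrative value often quoted for tumor tissue."""
    return math.exp(-n * (alpha * d + beta * d * d))

# same 60 Gy total dose, two fractionation schemes: under this naive model
# the 6 Gy fractions leave fewer surviving cells than the 2 Gy fractions
print(surviving_fraction(30, 2) > surviving_fraction(10, 6))  # → True
```

Even this crude model shows that protocols with identical total dose can differ by orders of magnitude in predicted cell kill, which is why the combinatorial protocol space searched by the GA is worth exploring.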
NASA Astrophysics Data System (ADS)
Bhrawy, A. H.; Doha, E. H.; Baleanu, D.; Ezz-Eldien, S. S.
2015-07-01
In this paper, an efficient and accurate spectral numerical method is presented for solving second- and fourth-order fractional diffusion-wave equations and fractional wave equations with damping. The proposed method is based on the Jacobi tau spectral procedure together with the Jacobi operational matrix for fractional integrals, described in the Riemann-Liouville sense. The main idea behind this approach is to reduce such problems to systems of algebraic equations in the unknown expansion coefficients of the sought-for spectral approximations. The validity and effectiveness of the method are demonstrated by solving five numerical examples. The examples are presented in tables and graphs to make comparison with results obtained by other methods, and with the exact solutions, easier.
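The operational-matrix approach rests on the fact that the Riemann-Liouville integral acts simply on polynomial basis functions. That building-block identity can be checked numerically (the quadrature resolution and tolerance below are illustrative choices):

```python
import math

def rl_integral_monomial(alpha, k, t):
    """Riemann-Liouville fractional integral of t**k via the identity
    I^alpha t^k = Gamma(k+1)/Gamma(k+1+alpha) * t**(k+alpha), the
    building block of operational matrices for fractional integration."""
    return math.gamma(k + 1) / math.gamma(k + 1 + alpha) * t ** (k + alpha)

def rl_integral_quadrature(alpha, f, t, n=100000):
    """Direct midpoint quadrature of
    (1/Gamma(alpha)) * int_0^t (t - s)**(alpha - 1) f(s) ds;
    the midpoint rule sidesteps the integrable endpoint singularity."""
    h = t / n
    acc = sum((t - (i + 0.5) * h) ** (alpha - 1) * f((i + 0.5) * h)
              for i in range(n))
    return acc * h / math.gamma(alpha)

exact = rl_integral_monomial(0.5, 2, 1.0)
approx = rl_integral_quadrature(0.5, lambda s: s * s, 1.0)
print(abs(exact - approx) < 5e-3)  # → True
```

Because each basis polynomial maps to another polynomial scaled by a Gamma-function ratio, the fractional integral becomes a constant matrix acting on expansion coefficients, which is what reduces the differential problem to an algebraic system.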
Wave interpretation of numerical results for the vibration in thin conical shells
NASA Astrophysics Data System (ADS)
Ni, Guangjian; Elliott, Stephen J.
2014-05-01
The dynamic behaviour of thin conical shells can be analysed using a number of numerical methods. Although the overall vibration response of shells has been thoroughly studied using such methods, their physical insight is limited. The purpose of this paper is to interpret some of these numerical results in terms of waves, using the wave finite element, WFE, method. The forced response of a thin conical shell at different frequencies is first calculated using the dynamic stiffness matrix method. Then, a wave finite element analysis is used to calculate the wave properties of the shell, in terms of wave type and wavenumber, as a function of position along it. By decomposing the overall results from the dynamic stiffness matrix analysis, the responses of the shell can then be interpreted in terms of wave propagation. A simplified theoretical analysis of the waves in the thin conical shell is also presented in terms of the spatially-varying ring frequency, which provides a straightforward interpretation of the wave approach. The WFE method provides a way to study the types of wave that travel in thin conical shell structures and to decompose the response of the numerical models into the components due to each of these waves. In this way the insight provided by the wave approach allows us to analyse the significance of different waves in the overall response and study how they interact, in particular illustrating the conversion of one wave type into another along the length of the conical shell.
Recent Analytical and Numerical Results for The Navier-Stokes-Voigt Model and Related Models
NASA Astrophysics Data System (ADS)
Larios, Adam; Titi, Edriss; Petersen, Mark; Wingate, Beth
2010-11-01
The equations which govern the motions of fluids are notoriously difficult to handle both mathematically and computationally. Recently, a new approach to these equations, known as the Voigt-regularization, has been investigated as both a numerical and analytical regularization for the 3D Navier-Stokes equations, the Euler equations, and related fluid models. This inviscid regularization is related to the alpha-models of turbulent flow; however, it overcomes many of the problems present in those models. I will discuss recent work on the Voigt-regularization, as well as a new criterion for the finite-time blow-up of the Euler equations based on their Voigt-regularization. Time permitting, I will discuss some numerical results, as well as applications of this technique to the Magnetohydrodynamic (MHD) equations and various equations of ocean dynamics.
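For reference, the Voigt regularization mentioned here modifies the momentum equation by the term −α²Δu_t, with α > 0 a regularization length scale; this is a standard statement of the model, not specific to the talk:

```latex
% Navier-Stokes-Voigt equations (alpha > 0 is a length-scale parameter;
% alpha = 0 formally recovers Navier-Stokes, and nu = 0 gives the
% inviscid Euler-Voigt regularization):
-\alpha^{2}\Delta u_{t} + u_{t} + (u\cdot\nabla)u - \nu\Delta u + \nabla p = f,
\qquad \nabla\cdot u = 0 .
```

The added term is what makes the regularization inviscid: it alters the time-derivative operator rather than adding dissipation, which is why the model avoids some of the difficulties of the alpha-models.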
Temperature Fields in Soft Tissue during LPUS Treatment: Numerical Prediction and Experiment Results
Kujawska, Tamara; Wojcik, Janusz; Nowicki, Andrzej
2010-03-09
Recent research has shown that beneficial therapeutic effects can be induced in soft tissues by low-power ultrasound (LPUS). For example, increased cell immunity to stress (among others, thermal stress) can be obtained through the enhanced heat shock protein (Hsp) expression induced by low-intensity ultrasound. The possibility of controlling ultrasound-stimulated Hsp expression enhancement in soft tissues in vivo is a potential new therapeutic approach to neurodegenerative diseases, exploiting the known ability of cells to increase their immunity to stress through enhanced Hsp expression. Controlling the Hsp expression enhancement by adjusting the level of exposure to ultrasound energy would make it possible to evaluate and optimize the efficiency of ultrasound-mediated treatment. Ultrasonic regimes are controlled by adjusting the intensity, frequency, duration, duty cycle and exposure time of the pulsed ultrasound waves. Our objective was to develop a numerical model capable of predicting, in space and time, the temperature fields induced by a circular focused transducer generating tone bursts in multilayer nonlinear attenuating media, and to compare the numerically calculated results with experimental data in vitro. The acoustic pressure field in multilayer biological media was calculated using our original numerical solver. For prediction of temperature fields, the Pennes bio-heat transfer equation was employed. Temperature field measurements in vitro were carried out in a fresh rat liver using a transducer of 15 mm diameter, 25 mm focal length and 2 MHz central frequency, generating tone bursts with the spatial-peak temporal-average acoustic intensity varied between 0.325 and 1.95 W/cm^2, duration varied from 20 to 500 cycles at the same 20% duty cycle, and exposure time varied up to 20 minutes. The measurement data were compared with numerical simulation results obtained under experimental boundary conditions. Good agreement between
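The Pennes bio-heat equation used for the temperature prediction balances conduction, blood perfusion and the acoustic heat source. A minimal explicit finite-difference sketch in 1D, with illustrative tissue constants (the paper's solver is more elaborate and is coupled to a nonlinear acoustic field computation):

```python
import numpy as np

def pennes_1d(T0, q, dt, dx, steps,
              k=0.5, rho_c=3.6e6, w_b_c_b=2700.0, T_art=37.0):
    """Explicit finite differences for the 1D Pennes bio-heat equation
        rho*c * dT/dt = k * d2T/dx2 + w_b*c_b*(T_art - T) + q(x),
    with boundaries clamped at arterial temperature T_art.
    q is the volumetric heat source (W/m^3); SI units, T in deg C.
    Stability requires dt <= rho_c * dx**2 / (2*k)."""
    T = np.array(T0, dtype=float)
    q = np.asarray(q, dtype=float)
    for _ in range(steps):
        lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx ** 2   # conduction
        T[1:-1] += (dt / rho_c) * (k * lap
                                   + w_b_c_b * (T_art - T[1:-1])  # perfusion
                                   + q[1:-1])                     # source
        T[0] = T[-1] = T_art
    return T
```

With a localized source the temperature rises near the focus while the perfusion term pulls it back toward the arterial temperature, which is the competition that sets the treatment-relevant temperature plateau.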
NASA Technical Reports Server (NTRS)
Carrier, Alain C.; Aubrun, Jean-Noel
1993-01-01
New frequency response measurement procedures, on-line modal tuning techniques, and off-line modal identification algorithms are developed and applied to the modal identification of the Advanced Structures/Controls Integrated Experiment (ASCIE), a generic segmented optics telescope test-bed representative of future complex space structures. The frequency response measurement procedure uses all the actuators simultaneously to excite the structure and all the sensors to measure the structural response so that all the transfer functions are measured simultaneously. Structural responses to sinusoidal excitations are measured and analyzed to calculate spectral responses. The spectral responses in turn are analyzed as the spectral data become available and, as a novel feature, the results are used to maintain high-quality measurements. Data acquisition, processing, and checking procedures are fully automated. As the acquisition of the frequency response progresses, an on-line algorithm keeps track of the actuator force distribution that maximizes the structural response, to automatically tune to a structural mode when approaching a resonant frequency. This tuning is insensitive to delays, ill-conditioning, and nonproportional damping. Experimental results show that it is useful for modal surveys even in high-modal-density regions. For thorough modeling, a constructive procedure is proposed to identify the dynamics of a complex system from its frequency response, with the minimization of a least-squares cost function as a desirable objective. This procedure relies on off-line modal separation algorithms to extract modal information and on least-squares parameter subset optimization to combine the modal results and globally fit the modal parameters to the measured data. The modal separation algorithms resolved a modal density of 5 modes/Hz in the ASCIE experiment. They promise to be useful in many challenging applications.
Evaluation of observation-driven evaporation algorithms: results of the WACMOS-ET project
NASA Astrophysics Data System (ADS)
Miralles, Diego G.; Jimenez, Carlos; Ershadi, Ali; McCabe, Matthew F.; Michel, Dominik; Hirschi, Martin; Seneviratne, Sonia I.; Jung, Martin; Wood, Eric F.; (Bob) Su, Z.; Timmermans, Joris; Chen, Xuelong; Fisher, Joshua B.; Mu, Quiaozen; Fernandez, Diego
2015-04-01
Terrestrial evaporation (ET) links the continental water, energy and carbon cycles. Understanding the magnitude and variability of ET at the global scale is an essential step towards reducing uncertainties in our projections of climatic conditions and water availability for the future. However, the requirement of global observational data of ET can neither be satisfied with our sparse global in-situ networks, nor with the existing satellite sensors (which cannot measure evaporation directly from space). This situation has led to the recent rise of several algorithms dedicated to deriving ET fields from satellite data indirectly, based on the combination of ET-drivers that can be observed from space (e.g. radiation, temperature, phenological variability, water content, etc.). These algorithms can either be based on physics (e.g. Priestley and Taylor or Penman-Monteith approaches) or be purely statistical (e.g., machine learning). However, and despite the efforts from different initiatives like GEWEX LandFlux (Jimenez et al., 2011; Mueller et al., 2013), the uncertainties inherent in the resulting global ET datasets remain largely unexplored, partly due to a lack of inter-product consistency in forcing data. In response to this need, the ESA WACMOS-ET project started in 2012 with the main objectives of (a) developing a Reference Input Data Set to derive and validate ET estimates, and (b) performing a cross-comparison, error characterization and validation exercise of a group of selected ET algorithms driven by this Reference Input Data Set and by in-situ forcing data. The algorithms tested are SEBS (Su et al., 2002), the Penman- Monteith approach from MODIS (Mu et al., 2011), the Priestley and Taylor JPL model (Fisher et al., 2008), the MPI-MTE model (Jung et al., 2010) and GLEAM (Miralles et al., 2011). In this presentation we will show the first results from the ESA WACMOS-ET project. The performance of the different algorithms at multiple spatial and temporal
Swiler, Laura Painton; Eldred, Michael Scott
2009-09-01
This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.
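The separation described above, aleatory statistics in an inner loop and epistemic bounds in an outer loop, can be sketched with plain Monte Carlo and a grid search standing in for the report's stochastic expansions and interval optimization; the model function here is hypothetical:

```python
import random

def response(x, e):
    """Hypothetical model: x is an aleatory (random) input, e an
    epistemically uncertain parameter known only to lie in an interval."""
    return e * x ** 2

def mean_response(e, n=20000, seed=0):
    """Inner loop: the aleatory statistic (a plain Monte Carlo mean here;
    the report computes it far more efficiently with stochastic
    expansions).  Fixed seed keeps the outer loop deterministic."""
    rng = random.Random(seed)
    return sum(response(rng.gauss(0.0, 1.0), e) for _ in range(n)) / n

def epistemic_bounds(e_lo, e_hi, n_grid=11):
    """Outer loop: bound the aleatory statistic over the epistemic
    interval (a grid search here; the report uses interval optimization)."""
    vals = [mean_response(e_lo + (e_hi - e_lo) * i / (n_grid - 1))
            for i in range(n_grid)]
    return min(vals), max(vals)
```

For this model E[e*x^2] = e when x ~ N(0,1), so the bounds over e in [1, 2] approach [1, 2]; the output of a mixed analysis is an interval on the statistic rather than a single number, which is the point of keeping the two uncertainty types separate.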
NASA Astrophysics Data System (ADS)
Zueco, Joaquín; López-González, Luis María
2016-04-01
We have studied decompression processes arising when pressure changes take place in blood and tissues, using a numerical technique based on an electrical analogy for the parameters involved in the problem. The particular problem analyzed is the dynamic behavior of the extravascular bubbles formed in the intercellular cavities of a hypothetical tissue undergoing decompression. Numerical solutions are given for a system of equations that simulates the gas exchanges of bubbles after decompression, with particular attention paid to the effect of bubble size, nitrogen tension, nitrogen diffusivity in the intercellular fluid and in the tissue cell layer in the radial direction, nitrogen solubility, ambient pressure and specific blood flow through the tissue on the different molar diffusion fluxes of nitrogen per unit time (through the bubble surface, between the intercellular fluid layer and blood, and between the intercellular fluid layer and the tissue cell layer). The system of nonlinear equations is solved using the Network Simulation Method, in which the electrical analogy is applied to convert these equations into a network-electrical model that is run in a circuit simulator (Pspice). In this paper, new numerical results, together with a network model improved through interdisciplinary electrical analogies, are provided.
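The electrical analogy underlying the Network Simulation Method maps gas tension to voltage, molar flux to current, diffusive resistance to a resistor, and gas storage (solubility times volume) to a capacitance, so each tissue compartment becomes an RC cell. A single-cell sketch with forward Euler standing in for Pspice (parameter values illustrative):

```python
def simulate_rc(v0, v_source, R, C, dt, steps):
    """Forward-Euler integration of one RC cell, the electrical analogue
    of gas exchange between a bubble and the surrounding tissue:
    voltage <-> gas tension, current <-> molar flux, R <-> diffusive
    resistance, C <-> storage capacity.  dv/dt = (v_source - v)/(R*C)."""
    v = float(v0)
    history = [v]
    for _ in range(steps):
        i = (v_source - v) / R   # "Ohm's law": flux driven by tension difference
        v += dt * i / C          # "capacitor": accumulation of stored gas
        history.append(v)
    return history
```

The cell relaxes exponentially toward the source tension with time constant R*C, exactly as a dissolved-gas tension equilibrates; chaining such cells radially reproduces the layered bubble-fluid-tissue exchange of the full model.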
A treatment algorithm for patients with large skull bone defects and first results.
Lethaus, Bernd; Ter Laak, Marielle Poort; Laeven, Paul; Beerens, Maikel; Koper, David; Poukens, Jules; Kessler, Peter
2011-09-01
Large skull bone defects resulting from craniotomies due to cerebral insults, trauma or tumours create functional and aesthetic disturbances for the patient. The reconstruction of large osseous defects remains challenging. A treatment algorithm is presented based on the close interaction of radiologists, computer engineers and cranio-maxillofacial surgeons. From 2004 until today, twelve consecutive patients have been operated on successfully according to this treatment plan. Titanium and polyetheretherketone (PEEK) were used to manufacture the implants. The treatment algorithm proved to be reliable. No corrections had to be made either to the skull bone or to the implant. Short operation and hospitalization periods are essential prerequisites for treatment success and justify the high expenses. PMID:21055960
Knowledge-Aided Multichannel Adaptive SAR/GMTI Processing: Algorithm and Experimental Results
NASA Astrophysics Data System (ADS)
Wu, Di; Zhu, Daiyin; Zhu, Zhaoda
2010-12-01
The multichannel synthetic aperture radar ground moving target indication (SAR/GMTI) technique is a simplified implementation of space-time adaptive processing (STAP), which has been proved feasible over the past decades. However, its detection performance is degraded in heterogeneous environments due to rapidly varying clutter characteristics. Knowledge-aided (KA) STAP provides an effective way to deal with the nonstationarity problem in real-world clutter environments. Based on KA STAP methods, this paper proposes a KA algorithm for adaptive SAR/GMTI processing in heterogeneous environments. It reduces the required sample support through its fast convergence properties and is robust to non-stationary clutter distributions relative to the traditional adaptive SAR/GMTI scheme. Experimental clutter suppression results are employed to verify the virtues of this algorithm.
Performance analysis results of a battery fuel gauge algorithm at multiple temperatures
NASA Astrophysics Data System (ADS)
Balasingam, B.; Avvari, G. V.; Pattipati, K. R.; Bar-Shalom, Y.
2015-01-01
Evaluating a battery fuel gauge (BFG) algorithm is a challenging problem due to the fact that there are no reliable mathematical models to represent the complex features of a Li-ion battery, such as hysteresis and relaxation effects, temperature effects on parameters, aging, power fade (PF), and capacity fade (CF) with respect to the chemical composition of the battery. The existing literature is largely focused on developing different BFG strategies, and BFG validation has received little attention. In this paper, using hardware-in-the-loop (HIL) data collected from three Li-ion batteries at nine different temperatures ranging from -20 °C to 40 °C, we demonstrate detailed validation results for a BFG algorithm. The validation is based on three different BFG metrics; we provide implementation details of these metrics and propose three BFG validation load profiles that satisfy varying levels of user requirements.
Bearup, Daniel; Petrovskaya, Natalia; Petrovskii, Sergei
2015-05-01
Monitoring of pest insects is an important part of integrated pest management. It aims to provide information about pest insect abundance at a given location. This includes data collection, usually using traps, and their subsequent analysis and/or interpretation. However, interpretation of trap counts (the number of insects caught over a fixed time) remains a challenging problem. First, an increase in either the population density or insect activity can result in a similar increase in the number of insects trapped (the so-called "activity-density" problem). Second, a genuine increase of the local population density can be attributed to qualitatively different ecological mechanisms such as multiplication or immigration. Identification of the true factor causing an increase in trap count is important, as different mechanisms require different control strategies. In this paper, we consider a mean-field mathematical model of insect trapping based on the diffusion equation. Although the diffusion equation is a well-studied model, its analytical solution in closed form is available only for a few special cases, whilst in the more general case the problem has to be solved numerically. We choose finite differences as the baseline numerical method and show that numerical solution of the problem, especially in the realistic 2D case, is not at all straightforward, as it requires a sufficiently accurate approximation of the diffusion fluxes. Once the numerical method is justified and tested, we apply it to the corresponding boundary problem, where different types of boundary forcing describe different scenarios of pest insect immigration, and reveal the corresponding patterns in the trap count growth. PMID:25744607
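A minimal version of the mean-field trap model can be written down directly: diffusion on an interval with an absorbing boundary at the trap, the trap count being the cumulative loss of population. A 1D explicit finite-difference sketch (the paper's harder case is 2D, where accurate flux approximation is the main difficulty; parameter values here are illustrative):

```python
import numpy as np

def trap_count_1d(L=1.0, n=101, D=1.0e-3, u0=1.0, dt=0.04, steps=2000):
    """Population density u(x, t) obeys u_t = D * u_xx on [0, L] with an
    absorbing boundary u(0) = 0 (the trap) and zero flux at x = L.
    Returns the trap count: total population lost up to t = steps*dt.
    Explicit scheme; stable while D*dt/dx**2 <= 1/2."""
    dx = L / (n - 1)
    u = np.full(n, float(u0))
    total0 = u.sum() * dx
    for _ in range(steps):
        u_new = u.copy()
        u_new[1:-1] += dt * D * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
        u_new[0] = 0.0           # absorbing boundary: the trap
        u_new[-1] = u_new[-2]    # zero-flux far boundary
        u = u_new
    return total0 - u.sum() * dx
```

The count grows monotonically in time, and raising either the diffusivity D (activity) or the initial density u0 increases it in the same way, which is precisely the "activity-density" ambiguity the abstract describes.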
O'Brien, James Edward; Sohal, Manohar Singh; Huff, George Albert
2002-08-01
A combined experimental and numerical investigation is under way to investigate heat transfer enhancement techniques that may be applicable to large-scale air-cooled condensers such as those used in geothermal power applications. The research is focused on whether air-side heat transfer can be improved through the use of fin-surface vortex generators (winglets), while maintaining low heat exchanger pressure drop. A transient heat transfer visualization and measurement technique has been employed in order to obtain detailed distributions of local heat transfer coefficients on model fin surfaces. Pressure drop measurements have also been acquired in a separate multiple-tube-row apparatus. In addition, numerical modeling techniques have been developed to allow prediction of local and average heat transfer for these low-Reynolds-number flows with and without winglets. Representative experimental and numerical results presented in this paper reveal quantitative details of local fin-surface heat transfer in the vicinity of a circular tube with a single delta winglet pair downstream of the cylinder. The winglets were triangular (delta) with a 1:2 height/length aspect ratio and a height equal to 90% of the channel height. Overall mean fin-surface Nusselt-number results indicate a significant level of heat transfer enhancement (average enhancement ratio 35%) associated with the deployment of the winglets with oval tubes. Pressure drop measurements have also been obtained for a variety of tube and winglet configurations using a single-channel flow apparatus that includes four tube rows in a staggered array. Comparisons of heat transfer and pressure drop results for the elliptical tube versus a circular tube with and without winglets are provided. Heat transfer and pressure-drop results have been obtained for flow Reynolds numbers based on channel height and mean flow velocity ranging from 700 to 6500.
2015-01-01
Background: Due to the limited number of experimental studies that mechanically characterise human atherosclerotic plaque tissue from the femoral arteries, a recent trend has emerged in the literature whereby one set of material data based on aortic plaque tissue is employed to numerically represent diseased femoral artery tissue. This study aims to generate novel vessel-appropriate material models for femoral plaque tissue and to assess the influence of using material models based on experimental data generated from aortic plaque testing to represent diseased femoral arterial tissue. Methods: Novel material models based on experimental data generated from testing of atherosclerotic femoral artery tissue are developed, and a computational analysis of the revascularisation of a quarter-model idealised diseased femoral artery from a 90% diameter stenosis to a 10% diameter stenosis is performed using these novel material models. The simulation is also performed using material models based on experimental data obtained from aortic plaque testing in order to examine the effect of employing vessel-appropriate material models versus those currently employed in the literature to represent femoral plaque tissue. Results: Simulations that employ material models based on atherosclerotic aortic tissue exhibit much higher maximum principal stresses within the plaque than simulations that employ material models based on atherosclerotic femoral tissue. Specifically, employing a material model based on calcified aortic tissue, instead of one based on heavily calcified femoral tissue, to represent diseased femoral arterial vessels results in a 487-fold increase in maximum principal stress within the plaque at a depth of 0.8 mm from the lumen. Conclusions: Large differences are induced on numerical results as a consequence of employing material models based on aortic plaque, in place of material models based on femoral plaque, to represent a diseased femoral vessel. Due to these large
Equations of state of freely jointed hard-sphere chain fluids: Numerical results
Stell, G.; Lin, C.; Kalyuzhnyi, Y.V.
1999-03-01
We continue our series of studies in which the equations of state (EOS) are derived based on the product-reactant Ornstein-Zernike approach (PROZA) and first-order thermodynamic perturbation theory (TPT1). These include two compressibility EOS, two virial EOS, and one TPT1 EOS (TPT1-D) that uses the structural information of the dimer fluid as input. In this study, we carry out the numerical implementation for these five EOS and compare their numerical results, as well as those obtained from Attard's EOS and the GF-D (generalized Flory-dimer) EOS, with computer simulation results for the corresponding chain models over a wide range of densities and chain lengths. The comparison shows that our compressibility EOS, GF-D, and TPT1-D are in quantitative agreement with simulation results, and TPT1-D is the best among the various EOS according to its average absolute deviation (AAD). On the basis of a comparison of limited data, our virial EOS appears to be superior to the predictions of Attard's approximate virial EOS and the approximate virial EOS derived by Schweizer and Curro in the context of the PRISM approach; all of them are only qualitatively accurate. The degree of accuracy predicted by our compressibility EOS is comparable to that of the GF-D EOS, and both of them overestimate the compressibility factor at low densities and underestimate it at high densities. The compressibility factor of a polydisperse homonuclear chain system is also investigated in this work via our compressibility EOS; the numerical results are identical to those of a monodisperse system with the same chain length. © 1999 American Institute of Physics.
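The ranking statistic used here, the average absolute deviation of EOS predictions from simulation values, is straightforward to state; a sketch as applied to compressibility factors (the numbers in the example are hypothetical):

```python
def aad_percent(predicted, simulated):
    """Average absolute deviation, in percent, of EOS predictions from
    simulation values: AAD = (100/N) * sum |Z_pred - Z_sim| / |Z_sim|.
    This is the figure of merit used to rank equations of state against
    computer-simulation compressibility factors."""
    pairs = list(zip(predicted, simulated))
    return 100.0 * sum(abs(p - s) / abs(s) for p, s in pairs) / len(pairs)
```

A relative (rather than absolute) deviation is the natural choice here because the compressibility factor spans orders of magnitude across the density and chain-length range compared.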
Algorithms for personalized therapy of type 2 diabetes: results of a web-based international survey
Gallo, Marco; Mannucci, Edoardo; De Cosmo, Salvatore; Gentile, Sandro; Candido, Riccardo; De Micheli, Alberto; Di Benedetto, Antonino; Esposito, Katherine; Genovese, Stefano; Medea, Gerardo; Ceriello, Antonio
2015-01-01
Objective: In recent years, increasing interest in the issue of treatment personalization for type 2 diabetes (T2DM) has emerged. This international web-based survey aimed to evaluate the opinions of physicians about the tailored therapeutic algorithms developed by the Italian Association of Diabetologists (AMD) and available online, and to gather suggestions for future developments. Another aim of this initiative was to assess whether the online advertising and the survey would increase the global visibility of the AMD algorithms. Research design and methods: The web-based survey, which comprised five questions, was available from the homepage of the web version of the journal Diabetes Care throughout the month of December 2013, and on the AMD website between December 2013 and September 2014. Participation was totally free and responders were anonymous. Results: Overall, 452 physicians (M=58.4%) participated in the survey. Diabetologists accounted for 76.8% of responders. The results of the survey show wide agreement (>90%) by participants on the utility of the algorithms proposed, even if they do not cover all possible needs of patients with T2DM for a personalized therapeutic approach. In the online survey period and in the months after its conclusion, a relevant and durable increase in the number of unique users who visited the websites was registered, compared with the period preceding the survey. Conclusions: Patients with T2DM are heterogeneous, and there is interest in accessible and easy-to-use personalized therapeutic algorithms. Responders' opinions probably reflect the peculiar organization of diabetes care in each country. PMID:26301097
Numerical computation of the effective-one-body potential q using self-force results
NASA Astrophysics Data System (ADS)
Akcay, Sarp; van de Meent, Maarten
2016-03-01
The effective-one-body theory (EOB) describes the conservative dynamics of compact binary systems in terms of an effective Hamiltonian approach. The Hamiltonian for moderately eccentric motion of two nonspinning compact objects in the extreme mass-ratio limit is given in terms of three potentials: a(v), d̄(v), q(v). By generalizing the first law of mechanics for (nonspinning) black hole binaries to eccentric orbits, A. Le Tiec [Phys. Rev. D 92, 084021 (2015)] recently obtained new expressions for d̄(v) and q(v) in terms of quantities that can be readily computed using the gravitational self-force approach. Using these expressions we present a new computation of the EOB potential q(v) by combining results from two independent numerical self-force codes. We determine q(v) for inverse binary separations in the range 1/1200 ≤ v ≲ 1/6. Our computation thus provides the first-ever strong-field results for q(v). We also obtain d̄(v) in our entire domain to a fractional accuracy of ≳10^-8. We find that our results are compatible with the known post-Newtonian expansions for d̄(v) and q(v) in the weak field, and agree with previous (less accurate) numerical results for d̄(v) in the strong field.
Fluid Instabilities in the Crab Nebula Jet: Results from Numerical Simulations
NASA Astrophysics Data System (ADS)
Mignone, A.; Striani, E.; Bodo, G.; Anjiri, M.
2014-09-01
We present an overview of high-resolution relativistic MHD numerical simulations of the Crab Nebula South-East jet. The models are based on hot and relativistic hollow outflows initially carrying a purely toroidal magnetic field. Our results indicate that weakly relativistic (γ ≈ 2) and strongly magnetized jets are prone to kink instabilities leading to a noticeable deflection of the jet. These conclusions are in good agreement with the recent X-ray (Chandra) data of the Crab Nebula South-East jet, indicating a change in the direction of propagation on a time scale of the order of a few years.
Orion Guidance and Control Ascent Abort Algorithm Design and Performance Results
NASA Technical Reports Server (NTRS)
Proud, Ryan W.; Bendle, John R.; Tedesco, Mark B.; Hart, Jeremy J.
2009-01-01
During the ascent flight phase of NASA's Constellation Program, the Ares launch vehicle propels the Orion crew vehicle to an agreed-upon insertion target. If a failure occurs at any point in time during ascent, then a system must be in place to abort the mission and return the crew to a safe landing with a high probability of success. To achieve continuous abort coverage, one of two sets of effectors is used. Either the Launch Abort System (LAS), consisting of the Attitude Control Motor (ACM) and the Abort Motor (AM), or the Service Module (SM), consisting of the SM Orion Main Engine (OME), Auxiliary (Aux) jets, and Reaction Control System (RCS) jets, is used. The LAS effectors are used for aborts from liftoff through the first 30 seconds of second-stage flight. The SM effectors are used from that point through Main Engine Cutoff (MECO). There are two distinct sets of Guidance and Control (G&C) algorithms that are designed to maximize the performance of these abort effectors. This paper will outline the necessary inputs to the G&C subsystem, the preliminary design of the G&C algorithms, the ability of the algorithms to predict what abort modes are achievable, and the resulting success of the abort system. Abort success will be measured against the Preliminary Design Review (PDR) abort performance metrics and overall performance will be reported. Finally, potential improvements to the G&C design will be discussed.
NASA Astrophysics Data System (ADS)
Fontana, A.; Marzari, F.
2016-05-01
Context. Planetesimals and planets embedded in a circumstellar disk are dynamically perturbed by the disk gravity. This perturbation causes apsidal line precession at a rate that depends on the disk density profile and on the distance of the massive body from the star. Aims: Different analytical models are exploited to compute the precession rate of the perihelion, ϖ̇. We compare them to verify their equivalence, in particular after analytical manipulations performed to derive handy formulas, and test their predictions against numerical models in some selected cases. Methods: The theoretical precession rates were computed with analytical algorithms found in the literature using the Mathematica symbolic code, while the numerical simulations were performed with the hydrodynamical code FARGO. Results: For low-mass bodies (planetesimals) the analytical approaches described in Binney & Tremaine (2008, Galactic Dynamics, p. 96), Ward (1981, Icarus, 47, 234), and Silsbee & Rafikov (2015a, ApJ, 798, 71) are equivalent under the same initial conditions for the disk in terms of mass, density profile, and inner and outer borders. They also match the numerical values computed with FARGO away from the outer border of the disk reasonably well. On the other hand, the predictions of the classical Mestel disk (Mestel 1963, MNRAS, 126, 553) for disks with p = 1 depart significantly from the numerical solution for radial distances beyond one-third of the disk extension, because the underlying assumption of the Mestel disk is that the outer disk border is at infinity. For massive bodies such as terrestrial and giant planets, the agreement of the analytical approaches is progressively poorer because of the changes in the disk structure that are induced by the planet's gravity. For giant planets the precession rate changes sign and is higher than the modulus of the theoretical value by a factor ranging from 1.5 to 1.8. In this case, the correction of the formula proposed by Ward (1981) to
Fast and robust segmentation of solar EUV images: algorithm and results for solar cycle 23
NASA Astrophysics Data System (ADS)
Barra, V.; Delouille, V.; Kretzschmar, M.; Hochedez, J.-F.
2009-10-01
Context: The study of the variability of the solar corona and the monitoring of coronal holes, quiet sun and active regions are of great importance in astrophysics as well as for space weather and space climate applications. Aims: In a previous work, we presented the spatial possibilistic clustering algorithm (SPoCA). This is a multi-channel unsupervised spatially-constrained fuzzy clustering method that automatically segments solar extreme ultraviolet (EUV) images into regions of interest. The results we reported on SoHO-EIT images taken from February 1997 to May 2005 were consistent with previous knowledge in terms of both areas and intensity estimations. However, they presented some artifacts due to the method itself. Methods: Herein, we propose a new algorithm, based on SPoCA, that removes these artifacts. We focus on two points: the definition of an optimal clustering with respect to the regions of interest, and the accurate definition of the cluster edges. We moreover propose methodological extensions to this method, and we illustrate these extensions with the automatic tracking of active regions. Results: The much improved algorithm can decompose the whole set of EIT solar images over the 23rd solar cycle into regions that can clearly be identified as quiet sun, coronal hole and active region. The variations of the parameters resulting from the segmentation, i.e. the area, mean intensity, and relative contribution to the solar irradiance, are consistent with previous results and thus validate the decomposition. Furthermore, we find indications for a small variation of the mean intensity of each region in correlation with the solar cycle. Conclusions: The method is generic enough to allow the introduction of other channels or data. New applications are now expected, e.g. related to SDO-AIA data.
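At the core of SPoCA is fuzzy clustering of pixel intensities; the spatial constraints and multi-channel support are layered on top of that scheme. A plain (unconstrained, single-channel) fuzzy c-means sketch for comparison:

```python
import numpy as np

def fuzzy_cmeans(x, k=3, m=2.0, iters=50):
    """Plain fuzzy c-means on a 1D array of intensities x.
    m > 1 is the fuzziness exponent.  Returns (centers, memberships);
    memberships has shape (k, len(x)) and its columns sum to 1."""
    centers = np.linspace(x.min(), x.max(), k)   # deterministic init
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12  # k x n distances
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=0)                    # membership update
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)      # weighted-mean center update
    return centers, u
```

Assigning each pixel to its highest-membership cluster segments the image into the k classes, which for the solar case are coronal hole, quiet sun and active region in order of increasing intensity; SPoCA's possibilistic and spatially-constrained variants address the artifacts discussed in the abstract.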
Sheng, I. C.; Kuan, C. K.; Chen, Y. T.; Yang, J. Y.; Hsiung, G. Y.; Chen, J. R.
2010-06-23
The pressure distribution is an important aspect of a UHV subsystem in either a storage ring or a front end. The design of the 3-GeV, 400-mA Taiwan Photon Source (TPS) must account for photon-induced outgassing from bending-magnet and insertion-device radiation. An algorithm to calculate the photon-stimulated desorption (PSD) due to highly energetic radiation from a synchrotron source is presented. Several results using undulator sources such as IU20 are also presented, and the pressure distribution is illustrated.
Noninvasive assessment of mitral inertness: clinical results with numerical model validation
NASA Technical Reports Server (NTRS)
Firstenberg, M. S.; Greenberg, N. L.; Smedira, N. G.; McCarthy, P. M.; Garcia, M. J.; Thomas, J. D.
2001-01-01
Inertial forces (Mdv/dt) are a significant component of transmitral flow, but cannot be measured with Doppler echo. We validated a method of estimating Mdv/dt. Ten patients had a dual-sensor transmitral (TM) catheter placed during cardiac surgery. Doppler and 2D echo were performed while acquiring LA and LV pressures. Mdv/dt was determined from the Bernoulli equation using Doppler velocities and TM gradients. Results were compared with numerical modeling. TM gradients (range: 1.04-14.24 mmHg) consisted of 74.0 +/- 11.0% inertial forces (range: 0.6-12.9 mmHg). Multivariate analysis predicted Mdv/dt = -4.171(S/D ratio) + 0.063(LA volume-max) + 5. Using this equation, a strong relationship was obtained for the clinical dataset (y=0.98x - 0.045, r=0.90) and the results of numerical modeling (y=0.96x - 0.16, r=0.84). TM gradients are mainly inertial and, as validated by modeling, can be estimated with echocardiography.
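The decomposition rests on the unsteady Bernoulli equation: the measured transmitral gradient is the sum of a convective term (1/2)ρv² and the inertial term M dv/dt, so the latter can be recovered by subtraction. A sketch with illustrative numbers (ρ = 1060 kg/m³ for blood; 1 mmHg = 133.322 Pa):

```python
def inertial_component(tm_gradient_mmhg, v_m_s, rho=1060.0):
    """Split a measured transmitral pressure gradient (mmHg) into its
    convective and inertial parts via the unsteady Bernoulli equation
        dP = 0.5*rho*v**2 + M*dv/dt.
    v_m_s is the Doppler transmitral velocity in m/s.  Note that
    0.5*rho*v**2 / 133.322 is approximately 4*v**2, the clinical
    'simplified Bernoulli' rule.  Returns the inertial part in mmHg."""
    convective_mmhg = 0.5 * rho * v_m_s ** 2 / 133.322
    return tm_gradient_mmhg - convective_mmhg
```

With a 10 mmHg gradient at 1 m/s the convective term is only about 4 mmHg, leaving the majority inertial, consistent in spirit with the roughly three-quarters inertial share reported above (the numbers in this example are illustrative, not patient data).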
NASA Astrophysics Data System (ADS)
Lahaye, Noé; Paci, Alexandre; Smith, Stefan Llewellyn
2016-04-01
We examine the instability of lenticular vortices -- or lenses -- in a stratified rotating fluid. The simplest configuration is one in which the lenses overlay a deep layer and have a free surface, and this can be studied using a two-layer rotating shallow water model. We report results from laboratory experiments and high-resolution direct numerical simulations of the destabilization of vortices with constant potential vorticity, and compare these to a linear stability analysis. The stability properties of the system are governed by two parameters: the typical upper-layer potential vorticity and the size (depth) of the vortex. Good agreement is found between analytical, numerical and experimental results for the growth rate and wavenumber of the instability. The nonlinear saturation of the instability is associated with conversion from potential to kinetic energy and weak emission of gravity waves, giving rise to the formation of coherent vortex multipoles with trapped waves. The impact of flow in the lower layer is examined. In particular, it is shown that the growth rate can be strongly affected and the instability can be suppressed for certain types of weak co-rotating flow.
Re-Computation of Numerical Results Contained in NACA Report No. 496
NASA Technical Reports Server (NTRS)
Perry, Boyd, III
2015-01-01
An extensive examination of NACA Report No. 496 (NACA 496), "General Theory of Aerodynamic Instability and the Mechanism of Flutter," by Theodore Theodorsen, is described. The examination included checking equations and solution methods and re-computing interim quantities and all numerical examples in NACA 496. The checks revealed that NACA 496 contains computational shortcuts (time- and effort-saving devices for engineers of the time) and clever artifices (employed in its solution methods), but, unfortunately, also contains numerous tripping points (aspects of NACA 496 that have the potential to cause confusion) and some errors. The re-computations were performed employing the methods and procedures described in NACA 496, but using modern computational tools. With some exceptions, the magnitudes and trends of the original results were in fair-to-very-good agreement with the re-computed results. The exceptions included what are speculated to be computational errors in the original in some instances and transcription errors in the original in others. Independent flutter calculations were performed and, in all cases, including those where the original and re-computed results differed significantly, were in excellent agreement with the re-computed results. Appendix A contains NACA 496; Appendix B contains a Matlab(Registered) program that performs the re-computation of results; Appendix C presents three alternate solution methods, with examples, for the two-degree-of-freedom solution method of NACA 496; Appendix D contains the three-degree-of-freedom solution method (outlined in NACA 496 but never implemented), with examples.
Castro, A. P. G.; Paul, C. P. L.; Detiger, S. E. L.; Smit, T. H.; van Royen, B. J.; Pimenta Claro, J. C.; Mullender, M. G.; Alves, J. L.
2014-01-01
The loaded disk culture system is an intervertebral disk (IVD)-oriented bioreactor developed by the VU Medical Center (VUmc, Amsterdam, The Netherlands), which has the capacity of maintaining up to 12 IVDs in culture for approximately 3 weeks after extraction. Using this system, eight goat IVDs were provided with the essential nutrients and submitted to compression tests without losing their biomechanical and physiological properties, for 22 days. Based on previous reports (Paul et al., 2012, 2013; Detiger et al., 2013), four of these IVDs were kept in physiological condition (control) and the other four were previously injected with chondroitinase ABC (CABC), in order to promote degenerative disk disease (DDD). The loading profile intercalated 16 h of activity loading with 8 h of loading recovery to reproduce standard circadian variations. The displacement behavior of these eight IVDs along the first 2 days of the experiment was numerically reproduced using an IVD osmo-poro-hyper-viscoelastic and fiber-reinforced finite element (FE) model. The simulations were run on a custom FE solver (Castro et al., 2014). Analysis of the experimental results showed that the effect of the CABC injection was significant in only two of the four IVDs. The four control IVDs showed no signs of degeneration, as expected. Regarding the numerical simulations, the IVD FE model was able to reproduce the generic behavior of the two groups of goat IVDs (control and injected). However, some discrepancies were still noticed in the comparison between the injected IVDs and the numerical simulations, namely in the recovery periods. This may be explained by the complexity of the pathways for DDD, associated with the multiplicity of physiological responses to each direct or indirect stimulus. Nevertheless, one could conclude that ligaments, muscles, and IVD covering membranes could be added to the FE model, in order to improve its accuracy and properly
NASA Astrophysics Data System (ADS)
Saito, Kyosuke; Tanabe, Tadao; Oyama, Yutaka
2016-04-01
We have presented a numerical analysis to describe the behavior of second harmonic generation (SHG) in the THz regime, taking into account both linear and nonlinear optical susceptibility. We employed a nonlinear finite-difference time-domain (nonlinear FDTD) method to simulate SHG output characteristics in a THz photonic crystal (PC) waveguide based on a semi-insulating gallium phosphide crystal. Unique phase-matching conditions, originating from photonic band dispersions with low group velocity, appear and shape the SHG output characteristics. This numerical study provides spectral information on the SHG output in the THz PC waveguide. THz PC waveguides are among the active nonlinear optical devices in the THz regime, and the nonlinear FDTD method is a powerful tool for designing nonlinear photonic THz devices.
Interpretation of high-dimensional numerical results for the Anderson transition
Suslov, I. M.
2014-12-15
The existence of the upper critical dimension d{sub c2} = 4 for the Anderson transition is a rigorous consequence of the Bogoliubov theorem on renormalizability of φ{sup 4} theory. For d ≥ 4 dimensions, one-parameter scaling does not hold and all existing numerical data should be reinterpreted. These data are exhausted by the results for d = 4, 5 from scaling in quasi-one-dimensional systems and the results for d = 4, 5, 6 from level statistics. All these data are compatible with the theoretical scaling dependences obtained from Vollhardt and Wolfle's self-consistent theory of localization. The widespread viewpoint that d{sub c2} = ∞ is critically discussed.
Asymptotic expansion for stellarator equilibria with a non-planar magnetic axis: Numerical results
NASA Astrophysics Data System (ADS)
Freidberg, Jeffrey; Cerfon, Antoine; Parra, Felix
2012-10-01
We have recently presented a new asymptotic expansion for stellarator equilibria that generalizes the classic Greene-Johnson expansion [1] to allow for 3D equilibria with a non-planar magnetic axis [2]. Our expansion achieves the two goals of reducing the complexity of the three-dimensional MHD equilibrium equations and of describing equilibria in modern stellarator experiments. The end result of our analysis is a set of two coupled partial differential equations for the plasma pressure and the toroidal vector potential which fully determine the stellarator equilibrium. Both equations are advection equations in which the toroidal angle plays the role of time. We show that the method of characteristics, following magnetic field lines, is a convenient way of solving these equations, avoiding the difficulties associated with the periodicity of the solution in the toroidal angle. By combining the method of characteristics with Green's function integrals for the evaluation of the magnetic field due to the plasma current, we obtain an efficient numerical solver for our expansion. Numerical equilibria thus calculated will be given.[4pt] [1] J.M. Greene and J.L. Johnson, Phys. Fluids 4, 875 (1961)[0pt] [2] A.J. Cerfon, J.P. Freidberg, and F.I. Parra, Bull. Am. Phys. Soc. 56, 16 GP9.00081 (2011)
Verification of Numerical Weather Prediction Model Results for Energy Applications in Latvia
NASA Astrophysics Data System (ADS)
Sīle, Tija; Cepite-Frisfelde, Daiga; Sennikovs, Juris; Bethers, Uldis
2014-05-01
A resolution to increase the production and consumption of renewable energy has been made by EU governments. Most of the renewable energy in Latvia is produced by Hydroelectric Power Plants (HPP), followed by bio-gas, wind power and bio-mass energy production. Wind and HPP power production is sensitive to meteorological conditions. Currently the basis of weather forecasting is Numerical Weather Prediction (NWP) models. There are numerous methodologies concerning the evaluation of quality of NWP results (Wilks 2011) and their application can be conditional on the forecast end user. The goal of this study is to evaluate the performance of Weather Research and Forecast model (Skamarock 2008) implementation over the territory of Latvia, focusing on forecasting of wind speed and quantitative precipitation forecasts. The target spatial resolution is 3 km. Observational data from Latvian Environment, Geology and Meteorology Centre are used. A number of standard verification metrics are calculated. The sensitivity to the model output interpretation (output spatial interpolation versus nearest gridpoint) is investigated. For the precipitation verification the dichotomous verification metrics are used. Sensitivity to different precipitation accumulation intervals is examined. Skamarock, William C. and Klemp, Joseph B. A time-split nonhydrostatic atmospheric model for weather research and forecasting applications. Journal of Computational Physics. 227, 2008, pp. 3465-3485. Wilks, Daniel S. Statistical Methods in the Atmospheric Sciences. Third Edition. Academic Press, 2011.
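The dichotomous (yes/no) precipitation verification mentioned above reduces to a 2×2 forecast/observation contingency table. A sketch of the standard scores derived from it, with naming that follows common convention (e.g. Wilks 2011); this is illustrative, not the study's code:

```python
def dichotomous_scores(hits, misses, false_alarms, correct_negatives):
    """Standard dichotomous verification metrics from a 2x2 contingency
    table of forecast vs. observed events. Illustrative sketch."""
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    csi = hits / (hits + misses + false_alarms)     # critical success index
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias
    total = hits + misses + false_alarms + correct_negatives
    pc = (hits + correct_negatives) / total         # proportion correct
    return {"POD": pod, "FAR": far, "CSI": csi, "bias": bias, "PC": pc}
```

For example, 80 hits, 20 misses, 40 false alarms and 860 correct negatives give POD = 0.8 and frequency bias = 1.2 (the model rains too often). Sensitivity to the accumulation interval enters through how "event" is defined when building the table.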
Chaoticity threshold in magnetized plasmas: Numerical results in the weak coupling regime
Carati, A.; Benfenati, F.; Maiocchi, A.; Galgani, L.; Zuin, M.
2014-03-15
The present paper is a numerical counterpart to the theoretical work [Carati et al., Chaos 22, 033124 (2012)]. We are concerned with the transition from order to chaos in a one-component plasma (a system of point electrons with mutual Coulomb interactions, in a uniform neutralizing background), the plasma being immersed in a uniform stationary magnetic field. In the paper [Carati et al., Chaos 22, 033124 (2012)], it was predicted that a transition should take place when the electron density is increased or the field decreased in such a way that the ratio ω{sub p}/ω{sub c} between plasma and cyclotron frequencies becomes of order 1, irrespective of the value of the so-called Coulomb coupling parameter Γ. Here, we perform numerical computations for a first principles model of N point electrons in a periodic box, with mutual Coulomb interactions, using as a probe for chaoticity the time-autocorrelation function of magnetization. We consider two values of Γ (0.04 and 0.016) in the weak coupling regime Γ ≪ 1, with N up to 512. A transition is found to occur for ω{sub p}/ω{sub c} in the range between 0.25 and 2, in fairly good agreement with the theoretical prediction. These results might be of interest for the problem of the breakdown of plasma confinement in fusion machines.
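The chaoticity probe used above, the time-autocorrelation of the magnetization, can be sketched with a generic normalized estimator (not the authors' implementation): fast decay of the correlation signals chaotic motion, slow decay signals ordered motion.

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Normalized time-autocorrelation C(k) = <x(0) x(k)> / <x^2> of a
    scalar time series (mean removed). Illustrative sketch."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # unbiased lag-k covariance estimates, normalized by C(0)
    c = np.array([np.dot(x[:n - k], x[k:]) / (n - k) for k in range(max_lag)])
    return c / c[0]
```

Applied to the magnetization history of the N-electron simulation, a curve that stays near 1 over many cyclotron periods indicates order, while a rapid drop toward zero marks the chaotic side of the ω_p/ω_c transition.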
NASA Astrophysics Data System (ADS)
Soares, Edson J.; Thompson, Roney L.; Niero, Debora C.
2015-08-01
The immiscible displacement of one viscous liquid by another in a capillary tube is experimentally and numerically analyzed in the low inertia regime with negligible buoyancy effects. The dimensionless numbers that govern the problem are the capillary number Ca and the viscosity ratio of the displaced to the displacing fluids Nμ. In general, there are two output quantities of interest. One is associated to the relation between the front velocity, Ub, and the mean velocity of the displaced fluid, U ¯ 2 . The other is the layer thickness of the displaced fluid that remains attached to the wall. We compute these quantities as mass fractions in order to make them able to be compared. In this connection, the efficiency mass fraction, me, is defined as the complement of the mass fraction of the displaced fluid that leaves the tube while the displacing fluid crosses its length. The geometric mass fraction, mg, is defined as the fraction of the volume of the layer that remains attached to the wall. Because in gas-liquid displacement, these two quantities coincide, it is not uncommon in the literature to use mg as a measure of the displacement efficiency for liquid-liquid displacements. However, as is shown in the present paper, these two quantities have opposite tendencies when we increase the viscosity of the displacing fluid, making this distinction a crucial aspect of the problem. Results from a Galerkin finite element approach are also presented in order to make a comparison. Experimental and numerical results show that while the displacement efficiency decreases, the geometrical fraction increases when the viscosity ratio decreases. This fact leads to different decisions depending on the quantity to be optimized. The quantitative agreement between the numerical and experimental results was not completely achieved, especially for intermediate values of Ca. The reasons for that are still under investigation. The experiments conducted were able to achieve a wide range
Results from CrIS/ATMS Obtained Using an AIRS "Version-6 like" Retrieval Algorithm
NASA Technical Reports Server (NTRS)
Susskind, Joel; Kouvaris, Louis; Iredell, Lena
2015-01-01
We tested and evaluated Version-6.22 AIRS and Version-6.22 CrIS products on a single day, December 4, 2013, and compared results to those derived using AIRS Version-6. AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. All AIRS and CrIS products agree reasonably well with each other. CrIS Version-6.22 T(p) and q(p) results are slightly poorer than AIRS over land, especially under very cloudy conditions. Both AIRS and CrIS Version-6.22 now run at JPL. Our short-term plans are to analyze many common months at JPL in the near future using Version-6.22, or a further improved algorithm, to assess the compatibility of AIRS and CrIS monthly mean products and their interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. JPL plans, in collaboration with the Goddard DISC, to reprocess all AIRS data using a still-to-be-finalized Version-7 retrieval algorithm, and to reprocess all recalibrated CrIS/ATMS data using Version-7 as well.
Results from CrIS/ATMS Obtained Using an AIRS "Version-6 Like" Retrieval Algorithm
NASA Technical Reports Server (NTRS)
Susskind, Joel; Kouvaris, Louis; Iredell, Lena
2015-01-01
We have tested and evaluated Version-6.22 AIRS and Version-6.22 CrIS products on a single day, December 4, 2013, and compared results to those derived using AIRS Version-6. AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. All AIRS and CrIS products agree reasonably well with each other. CrIS Version-6.22 T(p) and q(p) results are slightly poorer than AIRS under very cloudy conditions. Both AIRS and CrIS Version-6.22 now run at JPL. Our short-term plans are to analyze many common months at JPL in the near future using Version-6.22, or a further improved algorithm, to assess the compatibility of AIRS and CrIS monthly mean products and their interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. JPL plans, in collaboration with the Goddard DISC, to reprocess all AIRS data using a still-to-be-finalized Version-7 retrieval algorithm, and to reprocess all recalibrated CrIS/ATMS data using Version-7 as well.
3D-radiation hydro simulations of disk-planet interactions. I. Numerical algorithm and test cases
NASA Astrophysics Data System (ADS)
Klahr, H.; Kley, W.
2006-01-01
We study the evolution of an embedded protoplanet in a circumstellar disk using the 3D-Radiation Hydro code TRAMP, and treat the thermodynamics of the gas properly in three dimensions. The primary interest of this work lies in the demonstration and testing of the numerical method. We show to what extent numerical parameters can influence the simulations of gap opening. We study a standard reference model under various numerical approximations. Then we compare the commonly used locally isothermal approximation to the radiation hydro simulation using an equation for the internal energy. Models with different treatments of the mass accretion process are compared. Often mass accumulates in the Roche lobe of the planet, creating a hydrostatic atmosphere around the planet. The gravitational torques induced by the spiral pattern of the disk onto the planet are not strongly affected in the average magnitude, but the short time scale fluctuations are stronger in the radiation hydro models. An interesting result of this work lies in the analysis of the temperature structure around the planet. The most striking effect of treating the thermodynamics properly is the formation of a hot pressure-supported bubble around the planet with a pressure scale height of H/R ≈ 0.5 rather than a thin Keplerian circumplanetary accretion disk.
A Formal Algorithm for Verifying the Validity of Clustering Results Based on Model Checking
Huang, Shaobin; Cheng, Yuan; Lang, Dapeng; Chi, Ronghua; Liu, Guofeng
2014-01-01
The limitations in general methods to evaluate clustering will remain difficult to overcome if verifying the clustering validity continues to be based on clustering results and evaluation index values. This study focuses on a clustering process to analyze crisp clustering validity. First, we define the properties that must be satisfied by valid clustering processes and model clustering processes based on program graphs and transition systems. We then recast the analysis of clustering validity as the problem of verifying whether the model of clustering processes satisfies the specified properties with model checking. That is, we try to build a bridge between clustering and model checking. Experiments on several datasets indicate the effectiveness and suitability of our algorithms. Compared with traditional evaluation indices, our formal method can not only indicate whether the clustering results are valid but, in the case the results are invalid, can also detect the objects that have led to the invalidity. PMID:24608823
NASA Astrophysics Data System (ADS)
Williams, Arnold C.; Pachowicz, Peter W.
2004-09-01
Current mine detection research indicates that no single sensor or single look from a sensor will detect mines/minefields in a real-time manner at a performance level suitable for a forward maneuver unit. Hence, the integrated development of detectors and fusion algorithms is of primary importance. A problem in this development process has been the evaluation of these algorithms with relatively small data sets, leading to anecdotal and frequently overtrained results. These anecdotal results are often unreliable and conflicting among various sensors and algorithms. Consequently, the physical phenomena that ought to be exploited and the performance benefits of this exploitation are often ambiguous. The Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate has collected large amounts of multisensor data such that statistically significant evaluations of detection and fusion algorithms can be obtained. Even with these large data sets, care must be taken in algorithm design and data processing to achieve statistically significant performance results for combined detectors and fusion algorithms. This paper discusses statistically significant detection and combined multilook fusion results for the Ellipse Detector (ED) and the Piecewise Level Fusion Algorithm (PLFA). These statistically significant performance results are characterized by ROC curves that have been obtained through processing this multilook data for the high resolution SAR data of the Veridian X-Band radar. We discuss the implications of these results on mine detection and the importance of statistical significance, sample size, ground truth, and algorithm design in performance evaluation.
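The ROC curves used above to characterize detector and fusion performance can be computed generically from detector scores and ground-truth labels. A minimal sketch (illustrative only, not the evaluation pipeline of the paper):

```python
import numpy as np

def roc_points(scores, labels):
    """ROC curve (FPR, TPR) points from detector scores and binary
    ground-truth labels (1 = target present). Illustrative sketch."""
    order = np.argsort(scores)[::-1]      # sweep threshold from high to low
    labels = np.asarray(labels)[order]
    tps = np.cumsum(labels == 1)          # true positives at each threshold
    fps = np.cumsum(labels == 0)          # false positives at each threshold
    tpr = tps / max(tps[-1], 1)
    fpr = fps / max(fps[-1], 1)
    return fpr, tpr
```

Statistical significance then hinges on sample size: each (FPR, TPR) point carries binomial uncertainty of order 1/sqrt(N), which is why small, anecdotal data sets yield unreliable curves.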
Numerical results for near surface time domain electromagnetic exploration: a full waveform approach
NASA Astrophysics Data System (ADS)
Sun, H.; Li, K.; Li, X., Sr.; Liu, Y., Sr.; Wen, J., Sr.
2015-12-01
Time domain or transient electromagnetic (TEM) surveys, including airborne, semi-airborne and ground types, play important roles in applications such as geological surveys, groundwater/aquifer assessment [Meju et al., 2000; Cox et al., 2010], metal ore exploration [Yang and Oldenburg, 2012], prediction of water-bearing structures in tunnels [Xue et al., 2007; Sun et al., 2012], UXO exploration [Pasion et al., 2007; Gasperikova et al., 2009], etc. The common practice is to introduce a current into a transmitting (Tx) loop and to acquire the induced electromagnetic field after the current is cut off [Zhdanov and Keller, 1994]. The current waveforms differ depending on the instrument. The rectangle is the most widely used excitation current source, especially in ground TEM. Triangle and half-sine waves are commonly used in airborne and semi-airborne TEM investigation. In most instruments, only the off-time responses are acquired and used in later analysis and data inversion. Very few airborne instruments acquire the on-time and off-time responses together. Although these systems acquire the on-time data, they usually do not use them in the interpretation. This abstract shows a novel full waveform time domain electromagnetic method and our recent modeling results. The benefit comes from our new algorithm for modeling full waveform time domain electromagnetic problems. We introduced the current density into Maxwell's equations as the transmitting source. This approach allows arbitrary waveforms, such as triangle, half-sine, or trapezoidal waves, or waveforms recorded from equipment, to be used in modeling. Here, we simulate the establishment and induced diffusion of the electromagnetic field in the earth. The traditional time domain electromagnetic response with pure secondary fields can also be extracted from our modeling results. The real-time responses excited by a loop source can be calculated using the algorithm. We analyze the full time gates responses of homogeneous half space and two
NASA Astrophysics Data System (ADS)
Chiu, Ming-Hung; Lai, Chin-Fa; Tan, Chen-Tai; Lin, Yi-Zhi
2011-03-01
This paper presents a study of the lateral and axial resolutions of a transmission laser-scanning angle-deviation microscope (TADM) with different numerical aperture (NA) values. The TADM is based on geometric optics and surface plasmon resonance principles. The surface height is proportional to the phase difference between two marginal rays of the test beam, which is passed through the test medium. We used common-path heterodyne interferometry to measure the phase difference in real time, and used a personal computer to calculate and plot the surface profile. The experimental results showed that the best lateral and axial resolutions for NA = 0.41 were 0.5 μm and 3 nm, respectively, and the lateral resolution breaks through the diffraction limits.
NASA Astrophysics Data System (ADS)
Milošević, M.; Dimitrijević, D. D.; Djordjević, G. S.; Stojanović, M. D.
2016-06-01
The role tachyon fields may play in the evolution of the early universe is discussed in this paper. We consider the evolution of a flat and homogeneous universe governed by a tachyon scalar field with the DBI-type action and calculate the slow-roll parameters of inflation, the scalar spectral index (n), and the tensor-scalar ratio (r) for the given potentials. We pay special attention to the inverse power potential, first of all to V(x) ~ x^{-4}, and compare the available results obtained by analytical and numerical methods with those obtained by observation. It is shown that the computed values of the observational parameters and the observed ones are in good agreement for high values of the constant X_0. The possibility that the influence of the radion field can extend the range of acceptable values of the constant X_0 to the string-theory-motivated sector of its values is briefly considered.
Solar flare model: Comparison of the results of numerical simulations and observations
NASA Astrophysics Data System (ADS)
Podgorny, I. M.; Vashenyuk, E. V.; Podgorny, A. I.
2009-12-01
The electrodynamic flare model is based on numerical 3D simulations with the real magnetic field of an active region. An energy of ~10^32 erg necessary for a solar flare is shown to accumulate in the magnetic field of a coronal current sheet. The thermal X-ray source in the corona results from plasma heating in the current sheet upon reconnection. The hard X-ray sources are located on the solar surface at the loop footpoints. They are produced by the precipitation of electron beams accelerated in field-aligned currents. Solar cosmic rays appear upon acceleration in the electric field along a singular magnetic X-type line. The generation mechanism of the delayed cosmic-ray component is also discussed.
NASA Astrophysics Data System (ADS)
Xu, Hengyi; Heinzel, T.; Zozoulenko, I. V.
2011-09-01
We derive analytical expressions for the conductivity of bilayer graphene (BLG) using the Boltzmann approach within the Born approximation for a model of Gaussian disorders describing both short- and long-range impurity scattering. The range of validity of the Born approximation is established by comparing the analytical results to exact tight-binding numerical calculations. A comparison of the obtained density dependencies of the conductivity with experimental data shows that the BLG samples investigated experimentally so far are in the quantum scattering regime where the Fermi wavelength exceeds the effective impurity range. In this regime both short- and long-range scattering lead to the same linear density dependence of the conductivity. Our calculations imply that bilayer and single-layer graphene have the same scattering mechanisms. We also provide an upper limit for the effective, density-dependent spatial extension of the scatterers present in the experiments.
Marom, Gil; Bluestein, Danny
2016-02-01
This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful considerations should be taken in the use of Lagrangian models. Ultimately, the appropriate assumptions should be considered based on the physics of the specific case, and sensitivity analysis, similar to the ones presented here, should be employed. PMID:26679833
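A common Lagrangian blood-damage surrogate of the kind discussed above is the linear stress accumulation along a pathline, SA = Σ τ·Δt, integrated over a particle's transit through the device. A sketch under that assumption (the models evaluated in the paper are more elaborate, e.g. power-law damage functions and repeated passages):

```python
import numpy as np

def stress_accumulation(tau, t):
    """Linear stress accumulation along one pathline: the time integral of
    scalar shear stress tau(t), via the trapezoidal rule. Illustrative sketch."""
    tau = np.asarray(tau, dtype=float)  # scalar stress samples along the path
    t = np.asarray(t, dtype=float)      # corresponding times
    dt = np.diff(t)
    return float(np.sum(0.5 * (tau[:-1] + tau[1:]) * dt))
```

Because this quantity is accumulated per particle, seeding pattern, stochastic-walk perturbations, and pathline simplifications all change which stress histories are sampled, which is precisely why the implementation assumptions matter.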
NASA Astrophysics Data System (ADS)
Cotel, Aline; Junghans, Lars; Wang, Xiaoxiang
2014-11-01
In recent years, a recognition of the scope of the negative environmental impact of existing buildings has spurred academic and industrial interest in transforming existing building design practices and disciplinary knowledge. For example, buildings alone consume 72% of the electricity produced annually in the United States; this share is expected to rise to 75% by 2025 (EPA, 2009). Significant reductions in overall building energy consumption can be achieved using green building methods such as natural ventilation. An office was instrumented on campus to acquire CO2 concentrations and temperature profiles at multiple locations while a single occupant was present. Using openFOAM, numerical calculations were performed to allow for comparisons of the CO2 concentration and temperature profiles for different ventilation strategies. Ultimately, these results will be the inputs into a real time feedback control system that can adjust actuators for indoor ventilation and utilize green design strategies. Funded by UM Office of Vice President for Research.
NASA Technical Reports Server (NTRS)
Holman, Gordon
2010-01-01
Accelerated electrons play an important role in the energetics of solar flares. Understanding the process or processes that accelerate these electrons to high, nonthermal energies also depends on understanding the evolution of these electrons between the acceleration region and the region where they are observed through their hard X-ray or radio emission. Energy losses in the co-spatial electric field that drives the current-neutralizing return current can flatten the electron distribution toward low energies. This in turn flattens the corresponding bremsstrahlung hard X-ray spectrum toward low energies. The lost electron beam energy also enhances heating in the coronal part of the flare loop. Extending earlier work by Knight & Sturrock (1977), Emslie (1980), Diakonov & Somov (1988), and Litvinenko & Somov (1991), I have derived analytical and semi-analytical results for the nonthermal electron distribution function and the self-consistent electric field strength in the presence of a steady-state return current. I review these results, presented previously at the 2009 SPD Meeting in Boulder, CO, and compare them and computed X-ray spectra with numerical results obtained by Zharkova & Gordovskii (2005, 2006). The physical significance of similarities and differences in the results will be emphasized. This work is supported by NASA's Heliophysics Guest Investigator Program and the RHESSI Project.
Mars Entry Atmospheric Data System Trajectory Reconstruction Algorithms and Flight Results
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark; Shidner, Jeremy; Munk, Michelle
2013-01-01
The Mars Entry Atmospheric Data System is part of the Mars Science Laboratory Entry, Descent, and Landing Instrumentation project. These sensors are a system of seven pressure transducers linked to ports on the entry vehicle forebody to record the pressure distribution during atmospheric entry. The measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. Specifically, angle of attack, angle of sideslip, dynamic pressure, Mach number, and freestream atmospheric properties are reconstructed from the measured pressures. Such data allow the aerodynamics to be decoupled from the assumed atmospheric properties, enabling enhanced trajectory reconstruction and performance analysis as well as an aerodynamic reconstruction, which has not been possible in past Mars entry reconstructions. This paper provides details of the data processing algorithms utilized for this purpose. These include two approaches commonly used in past planetary entry trajectory reconstruction and a new approach for this application that makes use of the pressure measurements. The paper describes assessments of data quality and preprocessing, and results of the flight data reduction from atmospheric entry, which occurred on August 5, 2012.
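The pressure-to-state reconstruction described above can be illustrated with a toy modified-Newtonian pressure model fit by nonlinear least squares. The port layout, Cp model, axis convention, and all numbers below are assumptions for illustration only, not the actual MEADS geometry or flight algorithm:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical forebody port normals in body axes -- NOT the MEADS layout.
NORMALS = np.array([
    [1.00,  0.00,  0.00],
    [0.95,  0.31,  0.00],
    [0.95, -0.31,  0.00],
    [0.95,  0.00,  0.31],
    [0.95,  0.00, -0.31],
    [0.87,  0.35,  0.35],
    [0.87, -0.35, -0.35],
])
NORMALS /= np.linalg.norm(NORMALS, axis=1, keepdims=True)
CP_MAX = 1.85  # assumed modified-Newtonian stagnation pressure coefficient

def port_pressures(alpha, beta, q, p_inf):
    """Modified-Newtonian pressure at each port; angles in radians."""
    u = np.array([np.cos(alpha) * np.cos(beta),   # freestream direction,
                  np.sin(beta),                   # one common convention
                  np.sin(alpha) * np.cos(beta)])
    cp = CP_MAX * np.clip(NORMALS @ u, 0.0, None) ** 2
    return p_inf + q * cp

def estimate_state(p_meas):
    """Fit (alpha, beta, dynamic pressure, freestream pressure) to the
    measured port pressures by nonlinear least squares."""
    fit = least_squares(
        lambda x: port_pressures(*x) - p_meas,
        x0=(0.0, 0.0, 1.0e4, 500.0),
        bounds=([-0.5, -0.5, 0.0, 0.0], [0.5, 0.5, 1.0e6, 1.0e5]))
    return fit.x
```

Because the stagnation port sees a much higher Cp than the offset ports, the fit can separate dynamic pressure from static pressure, and the asymmetric ports give sensitivity to the flow angles.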
Viola, Francesco; Coe, Ryan L.; Owen, Kevin; Guenther, Drake A.; Walker, William F.
2008-01-01
Image registration and motion estimation play central roles in many fields, including RADAR, SONAR, light microscopy, and medical imaging. Because of this central significance, estimator accuracy, precision, and computational cost are of critical importance. We have previously presented a highly accurate, spline-based time delay estimator that directly determines sub-sample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous representation of a reference signal and then computes an analytical matching function between this reference and a delayed signal. The location of the minimum of this function yields estimates of the time delay. In this paper we describe the MUlti-dimensional Spline-based Estimator (MUSE), which allows accurate and precise estimation of multidimensional displacement/strain components from multidimensional data sets. We describe the mathematical formulation for two- and three-dimensional motion/strain estimation and present simulation results to assess the intrinsic bias and standard deviation of this algorithm and compare it to currently available multi-dimensional estimators. In 1000 noise-free simulations of ultrasound data we found that 2D MUSE exhibits a maximum bias of 2.6 × 10⁻⁴ samples in range and 2.2 × 10⁻³ samples in azimuth (corresponding to 4.8 and 297 nm, respectively). The maximum simulated standard deviation of estimates in both dimensions was comparable at roughly 2.8 × 10⁻³ samples (corresponding to 54 nm axially and 378 nm laterally). These results are between two and three orders of magnitude better than currently used 2D tracking methods. Simulation of performance in 3D yielded similar results to those observed in 2D. We also present experimental results obtained using 2D MUSE on data acquired by an Ultrasonix Sonix RP imaging system with an L14-5/38 linear array transducer operating at 6.6 MHz. While our validation of the algorithm was performed using ultrasound data, MUSE
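The core spline idea is straightforward to sketch in one dimension. The simplified stand-in below fits a cubic spline to the reference and minimizes the matching function numerically; MUSE itself derives the minimum analytically and operates in multiple dimensions, so this is an illustration of the principle, not the published estimator:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

def subsample_delay(ref, sig, search=(-2.0, 2.0)):
    """Estimate the sub-sample delay of `sig` relative to `ref`.

    Fits a cubic spline to the reference and minimizes the sum of
    squared differences over a continuous shift d, so that
    sig[n] ~ ref(n - d)."""
    n = np.arange(len(ref))
    cs = CubicSpline(n, ref)
    m = n[5:-5]  # interior samples, so shifted points stay in range
    cost = lambda d: np.sum((cs(m - d) - sig[m]) ** 2)
    return minimize_scalar(cost, bounds=search, method="bounded").x
```

On a noise-free sinusoid delayed by a fraction of a sample, this recovers the shift to well under a hundredth of a sample, illustrating why spline-based matching can reach sub-sample precision.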
Zlochiver, Sharon; Radai, M Michal; Abboud, Shimon; Rosenfeld, Moshe; Dong, Xiu-Zhen; Liu, Rui-Gang; You, Fu-Sheng; Xiang, Hai-Yan; Shi, Xue-Tao
2004-02-01
In electrical impedance tomography (EIT), measurements of developed surface potentials due to applied currents are used for the reconstruction of the conductivity distribution. Practical implementation of EIT systems is known to be problematic due to the high sensitivity of such systems to noise, leading to poor imaging quality. In the present study, the performance of an induced-current EIT (ICEIT) system, where eddy currents are applied using magnetic induction, was studied by comparing the voltage measurements to simulated data and by examining the imaging quality with respect to simulated reconstructions for several phantom configurations. A 3-coil, 32-electrode ICEIT system was built, and an iterative modified Newton-Raphson algorithm was developed for the solution of the inverse problem. The RMS norm between the simulated and the experimental voltages was found to be 0.08 +/- 0.05 mV (<3%). Two regularization methods were implemented and compared: the Marquardt regularization and the Laplacian regularization (a bounded second-derivative regularization). While the Laplacian regularization method was found to be preferable for simulated data, it resulted in distinctive spatial artifacts for measured data. The experimental reconstructed images were found to be indicative of the angular positioning of the conductivity perturbations, though the radial sensitivity was low, especially when using the Marquardt regularization method. PMID:15005319
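The regularized update at the heart of such an iterative scheme can be sketched generically. This is a schematic of the modified Newton-Raphson step with the two penalty choices named in the abstract, assuming the Jacobian J of the forward model is available; it is not the authors' implementation:

```python
import numpy as np

def newton_step(J, dv, lam, R):
    """One regularized (modified Newton-Raphson / Gauss-Newton) update:
        delta_sigma = (J^T J + lam * R)^-1  J^T dv
    where dv is measured-minus-computed electrode voltages and R the
    regularization matrix."""
    return np.linalg.solve(J.T @ J + lam * R, J.T @ dv)

def marquardt_reg(n):
    """Marquardt regularization: identity penalty."""
    return np.eye(n)

def laplacian_reg(n):
    """Bounded second-derivative penalty L^T L (discrete 1-D Laplacian)."""
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    return L.T @ L
```

The choice of R trades noise suppression against spatial smoothing, which is the trade-off the abstract observes between the two regularizers on measured versus simulated data.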
Lima da Silva, M.; Sauvage, E.; Brun, P.; Gagnoud, A.; Fautrelle, Y.; Riva, R.
2013-07-01
The process of vitrification in a cold crucible heated by direct induction is used in the fusion of oxides. Its distinguishing feature is the production of high-purity materials: a high level of purity of the melt is achieved because this melting technique excludes contamination of the charge by the crucible. The aim of the present paper is to analyze the hydrodynamics of the vitrification process by direct induction, focusing on the effects associated with the interaction between the mechanical stirrer and bubbling. Considering the complexity of the analyzed system and the goal of the present work, we simplified the system by not taking into account thermal and electromagnetic phenomena. Based on the concept of hydraulic similitude, we performed an experimental study and a numerical modeling of the simplified model. The results of these two studies were compared and showed good agreement. The results presented in this paper, in conjunction with previous work, contribute to a better understanding of the hydrodynamic effects resulting from the interaction between the mechanical stirrer and air bubbling in the cold crucible heated by direct induction. Further work will take into account thermal and electromagnetic phenomena in the presence of the mechanical stirrer and air bubbling. (authors)
NASA Astrophysics Data System (ADS)
Peukert, P.; Hrubý, J.
2013-04-01
The paper describes new results for an experimental heat exchanger equipped with a single corrugated capillary tube, along with basic information about the measurements and the experimental setup. Some of the results are compared with numerical simulations.
Development of region processing algorithm for HSTAMIDS: status and field test results
NASA Astrophysics Data System (ADS)
Ngan, Peter; Burke, Sean; Cresci, Roger; Wilson, Joseph N.; Gader, Paul; Ho, K. C.; Bartosz, Elizabeth; Duvoisin, Herbert
2007-04-01
The Region Processing Algorithm (RPA) has been developed by the Office of the Army Humanitarian Demining Research and Development (HD R&D) Program as part of improvements for the AN/PSS-14. The effort was a collaboration between the HD R&D Program, L-3 Communication CyTerra Corporation, the University of Florida, Duke University, and the University of Missouri. RPA has been integrated into a real-time AN/PSS-14. The modified unit was used to collect data, and its performance was tested at three Army test sites within the United States. This paper describes the status of the technology and its recent test results.
Stability of Bareiss algorithm
NASA Astrophysics Data System (ADS)
Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.
1991-12-01
In this paper, we present a numerical stability analysis of the Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare the Bareiss algorithm with the Levinson algorithm and conclude that the former has superior numerical properties.
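Both algorithms exploit the Toeplitz structure to solve the system in O(n²) operations instead of O(n³). As context for the comparison, here is Durbin's recursion, a Levinson-type scheme for the symmetric positive definite Toeplitz case with the special Yule-Walker right-hand side; Bareiss's elimination scheme differs in detail and is not reproduced here:

```python
import numpy as np
from scipy.linalg import toeplitz

def durbin(r):
    """Solve T y = -[r1, ..., rn] in O(n^2), where
    T = toeplitz([r0, ..., r_{n-1}]) is symmetric positive definite.
    Classic Durbin/Levinson recursion (see Golub & Van Loan)."""
    r = np.asarray(r, dtype=float)
    n = len(r) - 1
    y = np.zeros(n)
    y[0] = -r[1] / r[0]
    beta, alpha = r[0], y[0]
    for k in range(1, n):
        beta *= 1.0 - alpha * alpha          # beta stays > 0 for SPD T
        alpha = -(r[k + 1] + r[1:k + 1][::-1] @ y[:k]) / beta
        y[:k] += alpha * y[:k][::-1]         # fold in the reflection
        y[k] = alpha
    return y
```

The quantities alpha (reflection coefficients) staying below one in magnitude is exactly the positive-definiteness property that stability analyses of both algorithms lean on.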
A super-resolution algorithm for enhancement of flash lidar data: flight test results
NASA Astrophysics Data System (ADS)
Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert
2013-03-01
This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: the Moon, Mars, and asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high-resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m × 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed using independent measurements for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information; namely, the six-degree-of-freedom state vector of the instrument as a function of time was recovered from the super-resolution data. The comparisons show that the super-resolution method can construct high-quality DEMs and allows hazards such as rocks and craters to be identified in accordance with ALHAT requirements.
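The flight algorithm is 3-D and model-based, but the underlying idea of fusing multiple shifted low-resolution frames onto a finer grid can be sketched with a naive 2-D shift-and-add. This is an illustrative stand-in under the assumption of known sub-pixel shifts, not the flash-lidar code:

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Naive shift-and-add super-resolution: deposit each low-res pixel
    onto a `scale`-times finer grid at its known sub-pixel offset
    (dy, dx), then average the accumulated contributions."""
    h, w = frames[0].shape
    hi = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(hi)
    for f, (dy, dx) in zip(frames, shifts):
        ys = (np.arange(h)[:, None] * scale + int(round(dy * scale))) % (h * scale)
        xs = (np.arange(w)[None, :] * scale + int(round(dx * scale))) % (w * scale)
        hi[ys, xs] += f
        cnt[ys, xs] += 1
    return hi / np.maximum(cnt, 1)  # unfilled fine-grid cells stay zero
```

With frames whose shifts tile the sub-pixel grid, the fine grid fills in and resolution improves; a single unshifted frame simply lands on every `scale`-th cell.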
A Super-Resolution Algorithm for Enhancement of FLASH LIDAR Data: Flight Test Results
NASA Technical Reports Server (NTRS)
Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert
2014-01-01
This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: the Moon, Mars, and asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high-resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m × 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed using independent measurements for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information; namely, the six-degree-of-freedom state vector of the instrument as a function of time was recovered from the super-resolution data. The comparisons show that the super-resolution method can construct high-quality DEMs and allows hazards such as rocks and craters to be identified in accordance with ALHAT requirements.
NASA Astrophysics Data System (ADS)
Beniaiche, Ahmed; Ghenaiet, Adel; Carcasci, Carlo; Facchini, Bruno
2016-05-01
This paper presents a numerical validation of the aero-thermal study of a 30:1 scaled model reproducing an innovative trailing edge with one row of enlarged pedestals under stationary and rotating conditions. A CFD analysis was performed with the commercial code ANSYS Fluent, modeling isothermal air flow with the k-ω SST turbulence model under both static and rotating conditions (Ro up to 0.23). The numerical model is validated first by comparing the numerical velocity profile distributions to those obtained experimentally by the PIV technique for Re = 20,000 and Ro = 0-0.23. The second validation is based on comparing the numerical 2D HTC maps over the heated plate to TLC experimental data for a smooth surface at Reynolds numbers of 20,000 and 40,000 and Ro = 0-0.23. Two tip conditions were considered: open tip and closed tip. Results for the average Nusselt number inside the pedestal duct region are also presented. The obtained results help predict the flow field and evaluate the aero-thermal performance of the studied blade cooling system during the design step.
Liberatore, S.; Jaouen, S.; Tabakhoff, E.; Canaud, B.
2009-04-15
Magnetic Rayleigh-Taylor instability is addressed in compressible hydrostatic media. A full model is presented and compared to numerical results from a linear perturbation code. Perfect agreement between the two approaches is obtained over a wide range of parameters. Compressibility effects are examined, and substantial deviations from classical Chandrasekhar growth rates are obtained and confirmed by both the model and the numerical calculations.
Numerical modeling of protocore destabilization during planetary accretion: Methodology and results
NASA Astrophysics Data System (ADS)
Lin, Ja-Ren; Gerya, Taras V.; Tackley, Paul J.; Yuen, David A.; Golabek, Gregor J.
2009-12-01
We developed and tested an efficient 2D numerical methodology for modeling gravitational redistribution processes in a quasi-spherical planetary body based on a simple Cartesian grid. This methodology allows one to implement large viscosity contrasts and to handle properly a free surface and self-gravitation. With this novel method we investigated, in a simplified way, the evolution of gravitationally unstable global three-layer structures in the interiors of large metal-silicate planetary bodies like those suggested by previous models of cold accretion [Sasaki, S., Nakazawa, K., 1986. J. Geophys. Res. 91, 9231-9238; Karato, S., Murthy, V.R., 1997. Phys. Earth Planet. Inter. 100, 61-79; Senshu, H., Kuramoto, K., Matsui, T., 2002. J. Geophys. Res. 107 (E12), 5118. 10.1029/2001JE001819]: an innermost solid protocore (either undifferentiated or partly differentiated), an intermediate metal-rich layer (either continuous or disrupted), and an outermost silicate-rich layer. Long-wavelength (degree-one) instability of this three-layer structure may strongly contribute to core formation dynamics by triggering planetary-scale gravitational redistribution processes. We studied possible geometrical modes of the resulting planetary reshaping using scaled 2D numerical experiments for self-gravitating planetary bodies of Mercury, Mars, and Earth size. In our simplified model the viscosity of each material remains constant during the experiment and rheological effects of gravitational energy dissipation are not taken into account. However, in contrast to a previously conducted numerical study [Honda, R., Mizutani, H., Yamamoto, T., 1993. J. Geophys. Res. 98, 2075-2089], we explored a freely deformable planetary surface and a broad range of viscosity ratios between the metallic layer and the protocore (0.001-1000) as well as between the silicate layer and the protocore (0.001-1000). An important new prediction from our study is that realistic modes of planetary reshaping
Results from CrIS/ATMS Obtained Using an "AIRS Version-6 Like" Retrieval Algorithm
NASA Astrophysics Data System (ADS)
Susskind, J.
2015-12-01
A main objective of AIRS/AMSU on EOS is to provide accurate sounding products that are used to generate climate data sets. Suomi NPP carries CrIS/ATMS, which were designed as follow-ons to AIRS/AMSU. Our objective is to generate a long-term climate data set of products derived from CrIS/ATMS to serve as a continuation of the AIRS/AMSU products. The Goddard DISC has generated AIRS/AMSU retrieval products, extending from September 2002 through real time, using the AIRS Science Team Version-6 retrieval algorithm. Level-3 gridded monthly mean values of these products, generated using AIRS Version-6, form a state-of-the-art multi-year set of Climate Data Records (CDRs), which is expected to continue through 2022 and possibly beyond, as the AIRS instrument is extremely stable. The goal of this research is to develop and implement a CrIS/ATMS retrieval system to generate CDRs that are compatible with, and of comparable quality to, those generated operationally using AIRS/AMSU data. The AIRS Science Team has made considerable improvements in its retrieval methodology and is working on the development of an improved AIRS Science Team Version-7 retrieval methodology to be used to reprocess all AIRS data in the relatively near future. Research is underway by Dr. Susskind and co-workers at the NASA GSFC Sounder Research Team (SRT) towards the finalization of the AIRS Version-7 retrieval algorithm, the current version of which is called SRT AIRS Version-6.22. Dr. Susskind and co-workers have developed analogous retrieval methodology for analysis of CrIS/ATMS data, called SRT CrIS Version-6.22. Results will be presented showing that AIRS and CrIS products derived using a common, further improved retrieval algorithm agree closely with each other and are both superior to AIRS Version-6 products. The goal of the AIRS Science Team is to continue to improve both AIRS and CrIS retrieval products and then use the improved retrieval methodology for the processing of past and
One-year results of an algorithmic approach to managing failed back surgery syndrome
Avellanal, Martín; Diaz-Reganon, Gonzalo; Orts, Alejandro; Soto, Silvia
2014-01-01
BACKGROUND: Failed back surgery syndrome (FBSS) is a major clinical problem. Different etiologies with different incidence rates have been proposed. There are currently no standards regarding the management of these patients. Epiduroscopy is an endoscopic technique that may play a role in the management of FBSS. OBJECTIVE: To evaluate an algorithm for management of severe FBSS including epiduroscopy as a diagnostic and therapeutic tool. METHODS: A total of 133 patients with severe symptoms of FBSS (visual analogue scale score ≥7) and no response to pharmacological treatment and physical therapy were included. A six-step management algorithm was applied. Data, including patient demographics, pain and surgical procedure, were analyzed. In all cases, one or more objective causes of pain were established. Treatment success was defined as ≥50% long-term pain relief maintained during the first year of follow-up. Final allocation of patients was registered: good outcome with conservative treatment, surgical reintervention and palliative treatment with implantable devices. RESULTS: Of 122 patients enrolled, 59.84% underwent instrumented surgery and 40.16% a noninstrumented procedure. Most (64.75%) experienced significant pain relief with conventional pain clinic treatments; 15.57% required surgical treatment. Palliative spinal cord stimulation and spinal analgesia were applied in 9.84% and 2.46% of the cases, respectively. The most common diagnosis was epidural fibrosis, followed by disc herniation, global or lateral stenosis, and foraminal stenosis. CONCLUSIONS: A new six-step ladder approach to severe FBSS management that includes epiduroscopy was analyzed. Etiologies are accurately described and a useful role of epiduroscopy was confirmed. PMID:25222573
Kurihara, M.; Sato, A.; Funatsu, K.; Ouchi, H.; Masuda, Y.; Narita, H.; Collett, T.S.
2011-01-01
Targeting the methane hydrate (MH) bearing units C and D at the Mount Elbert prospect on the Alaska North Slope, four MDT (Modular Dynamic Formation Tester) tests were conducted in February 2007. The C2 MDT test was selected for history-matching simulation in the MH Simulator Code Comparison Study. Through history-matching simulation, the physical and chemical properties of unit C were adjusted, suggesting the most likely reservoir properties of this unit. Based on these tuned properties, numerical models replicating a "Mount Elbert C2 zone like reservoir," a "PBU L-Pad like reservoir," and a "PBU L-Pad down dip like reservoir" were constructed. The long-term production performances of wells in these reservoirs were then forecast assuming MH dissociation and production by depressurization, by a combination of depressurization and wellbore heating, and by hot water huff and puff. The predicted cumulative gas production ranges from 2.16 × 10⁶ m³/well to 8.22 × 10⁸ m³/well, depending mainly on the initial temperature of the reservoir and on the production method. This paper describes the details of the modeling and history-matching simulation. It also presents the results of examinations of the effects of reservoir properties on MH dissociation and production performance under the depressurization and thermal methods. © 2010 Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Xing, H. L.; Ding, R. W.; Yuen, D. A.
2015-08-01
Australia is surrounded by the Pacific and Indian Oceans and thus may suffer from tsunamis due to its proximity to subduction earthquakes around the boundary of the Australian Plate. Potential tsunami risks along the eastern coast, where more and more people currently live, are numerically investigated through a scenario-based method to provide an estimate of the tsunami hazard in this region. We calculated the tsunami waves generated at the New Hebrides Trench and the Puysegur Trench and further investigated the resulting tsunami hazards along the eastern coast and their sensitivities to various sea-floor frictions and earthquake parameters (i.e. the strike, dip and slip angles and the earthquake magnitude/rupture length). The results indicate that the Puysegur Trench poses a seismic threat capable of producing wave amplitudes over 1.5 m along the coasts of Tasmania, Victoria, and New South Wales, even reaching over 2.6 m near Sydney, Maria Island, and Gabo Island in a certain worst case, while the cities along the coast of Queensland are potentially less vulnerable than those on the southeastern Australian coast.
Sprenger, Lisa; Lange, Adrian; Odenbach, Stefan
2014-02-15
Ferrofluids consist of magnetic nanoparticles dispersed in a carrier liquid. This work deals with their strong thermodiffusive behaviour, characterised by the Soret coefficient, coupled with the dependence of the fluid's parameters on magnetic fields. Former experimental investigations have shown, on the one hand, that the Soret coefficient itself is magnetic-field dependent and, on the other hand, that the accuracy of the coefficient's experimental determination depends strongly on the volume concentration of the fluid. The thermally driven separation of particles and carrier liquid is carried out with a concentrated ferrofluid (φ = 0.087) in a horizontal thermodiffusion cell and is compared to former measurement data detected in the same way. The temperature gradient (1 K/mm) is applied perpendicular to the separation layer. The magnetic field is applied either parallel or perpendicular to the temperature difference. The diffusive separation is detected for three magnetic field strengths (40 kA/m, 100 kA/m, 320 kA/m). It reveals a sign change of the Soret coefficient with rising field strength for both field directions, indicating a reversal of the direction of motion of the particles. This behaviour contradicts former experimental results with a dilute magnetic fluid, in which a change in the coefficient's sign could only be detected for the parallel setup. Anisotropic behaviour is measured in the current data, the separation being more intense in the perpendicular orientation of the magnetic field: S_T∥ = −0.152 K⁻¹ and S_T⊥ = −0.257 K⁻¹ at H = 320 kA/m. The ferrofluiddynamics (FFD) theory describes the thermodiffusive processes thermodynamically, and a numerical simulation of the fluid's separation depending on the two transport parameters ξ_∥ and ξ_⊥ used within the FFD theory can be implemented. In the case of a parallel aligned magnetic field, the parameter can
NASA Astrophysics Data System (ADS)
Chan, P. W.
2009-03-01
The Hong Kong International Airport (HKIA) is situated in an area of complex terrain. Turbulent flow due to terrain disruption can occur in the vicinity of HKIA when winds from the east to southwest climb over Lantau Island, a mountainous island to the south of the airport. Low-level turbulence is an aviation hazard to aircraft flying into and out of HKIA. It is closely monitored using remote-sensing instruments, including Doppler Light Detection And Ranging (LIDAR) systems and wind profilers, in the airport area. Forecasting low-level turbulence with numerical weather prediction models would be useful for providing timely turbulence warnings to pilots. The feasibility of forecasting the eddy dissipation rate (EDR), a measure of turbulence intensity adopted in the international civil aviation community, is studied in this paper using the Regional Atmospheric Modelling System (RAMS). Super-high-resolution simulation (within the regime of large eddy simulation) is performed with a horizontal grid size down to 50 m for some typical cases of turbulent airflow at HKIA, such as spring-time easterly winds in a stable boundary layer and gale-force southeasterly winds associated with a typhoon. Sensitivity of the simulation results to the choice of turbulent kinetic energy (TKE) parameterization scheme in RAMS is also examined. RAMS simulation with the Deardorff (1980) TKE scheme is found to give the best result in comparison with actual EDR observations. It has the potential for real-time forecasting of low-level turbulence in short-term aviation applications (viz. for the next several hours).
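The link between a model's TKE field and the reported EDR can be sketched in one line. In a Deardorff-type subgrid closure the dissipation rate is estimated from the TKE and a mixing length, and EDR is its cube root; the closure constant and length scale below are assumed, model-dependent values, not those used in the paper:

```python
def edr_from_tke(tke, length_scale, c_eps=0.7):
    """EDR = epsilon**(1/3), with the dissipation rate estimated from
    subgrid TKE via a Deardorff-type closure:
        epsilon = c_eps * tke**1.5 / length_scale
    tke in m^2/s^2, length_scale in m; c_eps is an assumed constant."""
    eps = c_eps * tke ** 1.5 / length_scale
    return eps ** (1.0 / 3.0)
```

EDR (in m^(2/3)/s) is what aviation turbulence reports use, which is why cube-rooting the dissipation rate, rather than reporting epsilon itself, is the conventional output.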
A Hydrodynamic Theory for Spatially Inhomogeneous Semiconductor Lasers. 2; Numerical Results
NASA Technical Reports Server (NTRS)
Li, Jianzhong; Ning, C. Z.; Biegel, Bryan A. (Technical Monitor)
2001-01-01
We present numerical results for the diffusion coefficients (DCs) in the coupled diffusion model derived in the preceding paper for a semiconductor quantum well. These include self- and mutual-DCs in the general two-component case, as well as density- and temperature-related DCs under the single-component approximation. The results are analyzed from the viewpoint of free Fermi gas theory with many-body effects incorporated. We discuss in detail the dependence of these DCs on densities and temperatures in order to identify the different roles played by the free-carrier contributions, including carrier statistics and carrier-LO phonon scattering, and the many-body corrections, including bandgap renormalization and electron-hole (e-h) scattering. In the general two-component case, it is found that the self- and mutual-diffusion coefficients are determined mainly by the free-carrier contributions, but with significant many-body corrections near the critical density. Carrier-LO phonon scattering is dominant at low density, but e-h scattering becomes important in determining their density dependence above the critical electron density. In the single-component case, it is found that many-body effects suppress the density coefficients but enhance the temperature coefficients. The modification is of the order of 10% and reaches a maximum of over 20% for the density coefficients. Overall, temperature elevation enhances the diffusive capability, or DCs, of carriers linearly, and this enhancement grows with density. Finally, the complete dataset of various DCs as functions of carrier densities and temperatures provides the necessary ingredients for future applications of the model to various spatially inhomogeneous optoelectronic devices.
Scholl, M.A.
2000-01-01
Numerical simulations were used to examine the effects of heterogeneity in hydraulic conductivity (K) and intrinsic biodegradation rate on the accuracy of contaminant plume-scale biodegradation rates obtained from field data. The simulations were based on a steady-state BTEX contaminant plume undergoing biodegradation under sulfate-reducing conditions, with the electron acceptor in excess. Biomass was either uniform or correlated with K, to model spatially variable intrinsic biodegradation rates. A hydraulic conductivity data set from an alluvial aquifer was used to generate three sets of 10 realizations with different degrees of heterogeneity, and contaminant transport with biodegradation was simulated with BIOMOC. Biodegradation rates were calculated from the steady-state contaminant plumes using decreases in concentration with distance downgradient and a single flow-velocity estimate, as is commonly done in site characterization to support the interpretation of natural attenuation. The observed rates were found to underestimate the actual rate specified in the heterogeneous model in all cases. The discrepancy between the observed rate and the 'true' rate depended on the groundwater flow velocity estimate and increased with increasing heterogeneity in the aquifer. For a lognormal K distribution with a variance of 0.46, the estimate was no more than a factor of 1.4 slower than the true rate. For an aquifer with 20% silt/clay lenses, the rate estimate was as much as nine times slower than the true rate. Homogeneous-permeability, uniform-degradation-rate simulations were used to generate predictions of remediation time using the rates estimated from the heterogeneous models. The homogeneous models generally overestimated the extent of remediation or underestimated remediation time, owing to delayed degradation of contaminants in the low-K areas. Results suggest that aquifer characterization for natural attenuation at contaminated sites should include assessment of the presence
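The plume-scale rate estimate described above follows from a first-order decay model along the flow path; a minimal sketch with hypothetical numbers:

```python
import numpy as np

def plume_rate(c0, cx, x, v):
    """First-order rate constant from a steady-state centerline decline
        C(x) = C0 * exp(-k * x / v),
    given concentrations c0 and cx a distance x apart and a single
    flow-velocity estimate v -- the common field procedure."""
    return v / x * np.log(c0 / cx)
```

Because k scales directly with the assumed velocity v, any error in the single velocity estimate (and any bypassing of low-K zones) maps straight into the rate, which is the bias the simulations quantify.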
Preliminary results of numerical investigations at SECARB Cranfield, MS field test site
NASA Astrophysics Data System (ADS)
Choi, J.; Nicot, J.; Meckel, T. A.; Chang, K.; Hovorka, S. D.
2008-12-01
The Southeast Regional Carbon Sequestration Partnership, sponsored by DOE, has chosen the Cranfield, MS field as a test site for its Phase II experiment. It will provide information on CO2 storage in oil and gas fields, in particular on storage permanence, storage capacity, and pressure buildup, as well as on sweep efficiency. The 10,300-ft-deep reservoir produced 38 MMbbl of oil and 677 MMSCF of gas from the 1940s to the 1960s and is being retrofitted by Denbury Resources for tertiary recovery. CO2 injection started in July 2008 with a scheduled ramp-up during the following months. The Cranfield modeling team selected the northern section of the field for development of a numerical model using the multiphase-flow compositional CMG-GEM software. The model structure was determined through interpretation of logs from old and recently drilled wells and from geophysical data. PETREL was used to upscale and export permeability and porosity data to the GEM model. Preliminary sensitivity analyses determined that relative-permeability parameters and oil composition had the largest impact on CO2 behavior. The first modeling step consisted of history matching the total oil, gas, and water production from the reservoir, starting from its natural state, to determine the approximate current conditions of the reservoir. The fact that pressure recovered over the 40-year interval since the end of initial production helps constrain the boundary conditions. In a second step, the modeling focused on understanding pressure evolution and CO2 transport in the reservoir. The presentation will introduce preliminary results of the simulations and confirm/explain discrepancies with field measurements.
NASA Technical Reports Server (NTRS)
Merrill, Walter C.; Delaat, John C.; Kroszkewicz, Steven M.; Abdelwahab, Mahmood
1987-01-01
The objective of the advanced detection, isolation, and accommodation (ADIA) program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, algorithms were developed which detect, isolate, and accommodate sensor failures using analytical redundancy. Preliminary results of a full scale engine demonstration of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 turbofan engine control system are determined and compared to those obtained during a previous evaluation of this algorithm using a real-time hybrid computer simulation of the engine.
NASA Astrophysics Data System (ADS)
Gliko, A. O.; Molodenskii, S. M.
2015-01-01
are not only capable of significantly changing the magnitude of the radial displacements of the geoid but also of altering their sign. Moreover, even in a uniform Earth model, the effects of sphericity of its external surface and of self-gravitation can also provide a noticeable contribution, which determines the signs of the coefficients in the expansion of the geoid's shape in the lower-order spherical functions. In order to separate these effects, below we present the results of numerical calculations of the total effects of thermoelastic deformations for the two simplest models of a spherical Earth, without and with self-gravitation, with constant density and complex-valued shear moduli, and for the real-Earth PREM model (which describes the depth distributions of density and elastic moduli for high-frequency oscillations, disregarding the rheology of the medium) and modern models of mantle rheology. Based on the calculations, we suggest the simplest interpretation of the present-day data on the relationship between the coefficients of spherical expansion of temperature, velocities of seismic body waves, the topography of the Earth's surface and geoid, and the data on the correlation between the lower-order coefficients in the expansions of the geoid and the corresponding terms of the expansions of horizontal inhomogeneities in seismic velocities. The suggested interpretation includes estimates of the sign and magnitude of the ratios between the first coefficients of the spherical expansions of seismic velocities, topography, and geoid. The presence of this correlation and the relationship between the signs and absolute values of these coefficients suggest that both the long-period oscillations of the geoid and the long-period variations in the velocities of seismic body waves are largely caused by thermoelastic deformations.
NASA Astrophysics Data System (ADS)
Heinze, Thomas; Galvan, Boris; Miller, Stephen
2013-04-01
Fluid-rock interactions are mechanically fundamental to many earth processes, including fault zones and hydrothermal/volcanic systems, and to future green energy solutions such as enhanced geothermal systems and carbon capture and storage (CCS). Modeling these processes is challenging because of the strong coupling between rock fracture evolution and the consequent large changes in the hydraulic properties of the system. In this talk, we present results of a numerical model that includes poro-elastic plastic rheology (with hardening, softening, and damage), coupled to a non-linear diffusion model for fluid pressure propagation and two-phase fluid flow. Our plane-strain model is based on the poro-elastic plastic behavior of porous rock and is advanced with hardening, softening, and damage using the Mohr-Coulomb failure criterion. The effective stress model of Biot (1944) is used for coupling the pore pressure and the rock behavior. Frictional hardening and cohesion softening are introduced following Vermeer and de Borst (1984), with the angle of internal friction and the cohesion as functions of the principal strain rates. The scalar damage coefficient is assumed to be a linear function of the hardening parameter. Fluid injection is modeled as a two-phase mixture of water and air using the Richards equation. The theoretical model is solved using finite differences on a staggered grid. The model is benchmarked with laboratory-scale experiments in which fluid is injected from below into a critically stressed, dry sandstone (Stanchits et al. 2011). We simulate three experiments: (a) the failure of a dry specimen due to biaxial compressive loading, (b) the propagation of a low-pressure fluid front induced from the bottom in a critically stressed specimen, and (c) the failure of a critically stressed specimen due to a high-pressure fluid intrusion. Comparison of model results with the fluid injection experiments shows that the model captures most of the experimental
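A minimal sketch of the effective-stress Mohr-Coulomb check that underlies the failure criterion described above (hardening, softening, and damage omitted; all numerical values hypothetical):

```python
import math

def mohr_coulomb_fails(sigma1, sigma3, pore_pressure, cohesion,
                       friction_angle_deg, biot_alpha=1.0):
    """Mohr-Coulomb failure check on Biot effective stresses
    (compression positive): sigma' = sigma - alpha * p.
    Failure when
        sigma1' - sigma3' >= 2 c cos(phi) + (sigma1' + sigma3') sin(phi).
    Hardening/softening and damage are omitted in this sketch.
    """
    phi = math.radians(friction_angle_deg)
    s1 = sigma1 - biot_alpha * pore_pressure
    s3 = sigma3 - biot_alpha * pore_pressure
    return (s1 - s3) >= 2.0 * cohesion * math.cos(phi) + (s1 + s3) * math.sin(phi)

# Raising pore pressure shrinks the effective stresses and promotes failure
stable = mohr_coulomb_fails(90.0, 20.0, 0.0, 10.0, 30.0)     # dry: stable
injected = mohr_coulomb_fails(90.0, 20.0, 10.0, 10.0, 30.0)  # pressurized: fails
```

This illustrates why the high-pressure fluid intrusion in experiment (c) can fail a specimen that is stable when dry.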
Nonlinearities of waves propagating over a mild-slope beach: laboratory and numerical results
NASA Astrophysics Data System (ADS)
Rocha, Mariana V. L.; Michallet, Hervé; Silva, Paulo A.; Cienfuegos, Rodrigo
2014-05-01
As surface gravity waves propagate from deeper waters to the shore, their shape changes, primarily due to nonlinear wave interactions and further on due to breaking. The nonlinear effects amplify the higher harmonics and cause the oscillatory flow to transform from nearly sinusoidal in deep water, through velocity-skewed in the shoaling zone, to velocity-asymmetric in the inner-surf and swash zones. In addition to short-wave nonlinearities, the presence of long waves and wave groups also results in a supplementary wave-induced velocity and influences the short waves. Further, long waves can themselves contribute to velocity skewness and asymmetry at low frequencies, particularly for very dissipative mild-slope beach profiles, where long-wave shoaling and breaking can also occur. The Hydralab-IV GLOBEX experiments were performed in a 110-m-long flume with a 1/80 rigid-bottom slope and allowed the acquisition of high-resolution free-surface elevation and velocity data, obtained during 90-min-long simulations of random and bichromatic wave conditions, and also of a monochromatic long wave (Ruessink et al., Proc. Coastal Dynamics, 2013). The measurements are compared to numerical results obtained with the SERR-1D Boussinesq-type model, which is designed to reproduce the complex dynamics of high-frequency wave propagation, including the energy transfer mechanisms that enhance infragravity-wave generation. The evolution of skewness and asymmetry along the beach profile until the swash zone is analyzed relative to that of the wave groupiness and long-wave propagation. Some particularities of bichromatic wave groups are further investigated, such as partially-standing long-wave patterns and short-wave reformation after the first breakpoint, which is seen to influence particularly the skewness trends. Decreased spectral width (for random waves) and increased modulation (for bichromatic wave groups) are shown to enhance energy transfers between super- and sub
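The skewness and asymmetry measures analyzed above can be computed from a velocity or surface-elevation time series via third moments of the signal and of its Hilbert transform; a sketch using one common sign convention (conventions vary in the literature):

```python
import numpy as np
from scipy.signal import hilbert

def skewness_asymmetry(u):
    """Third-moment nonlinearity measures for a zero-mean oscillatory
    signal u(t):
        skewness  Sk = <u^3> / <u^2>^(3/2)
        asymmetry As = <H(u)^3> / <u^2>^(3/2)
    where H(u) is the imaginary part of the analytic signal.
    """
    u = np.asarray(u, float)
    u = u - u.mean()
    m2 = np.mean(u ** 2)
    sk = np.mean(u ** 3) / m2 ** 1.5
    asym = np.mean(np.imag(hilbert(u)) ** 3) / m2 ** 1.5
    return sk, asym

t = np.linspace(0.0, 10 * 2 * np.pi, 4096, endpoint=False)
sk0, as0 = skewness_asymmetry(np.sin(t))             # pure sinusoid: both ~0
sk2, _ = skewness_asymmetry(np.cos(t) + 0.5 * np.cos(2 * t))  # Stokes-like: skewed
```

A sinusoid yields zero for both measures, while a phase-locked second harmonic (as in shoaling waves) produces positive skewness.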
Wei, Wen-Cheng; Wu, Ching-Yang; Wu, Ching-Feng; Fu, Jui-Ying; Su, Ta-Wei; Yu, Sheng-Yueh; Kao, Tsung-Chi; Ko, Po-Jen
2015-01-01
Vascular cutdown and echo-guided puncture methods have their own limitations under certain conditions, and no algorithm was available for choosing the entry vessel. A standard algorithm was introduced to help choose the entry vessel location, based on our clinical experience and a review of the literature. The goal of this study is to analyze the treatment results of the standard algorithm used to choose the entry vessel for intravenous port implantation. During the period between March 2012 and March 2013, 507 patients who received intravenous port implantation for chemotherapy were included in this study. Choice of entry vessel was made according to the standard algorithm. All clinical characteristic factors were collected, and complication rates and incidence were further analyzed. Compared with our clinical experience in 2006, the procedure-related complication rate declined from 1.09% to 0.4%, whereas the late complication rate decreased from 19.97% to 3.55%. No pneumothorax, hematoma, catheter kinking, fracture, or pocket erosion was identified after adopting the standard algorithm. In surviving oncology patients, 98% of implanted ports provided functional vascular access fitting therapeutic needs. This standard algorithm for choosing the best entry vessel is a simple guideline that is easy to follow. The algorithm has excellent efficiency and can minimize complication rates and incidence. PMID:26287429
Dameron, O; Gibaud, B; Morandi, X
2004-06-01
The human cerebral cortex anatomy describes the brain organization at the scale of gyri and sulci. It provides landmarks for neurosurgery as well as localization support for functional data analysis and inter-subject data comparison. Existing models of the cortex anatomy either rely on image labeling but fail to represent variability and structural properties, or rely on a conceptual model but miss the inner 3D nature and relations of anatomical structures. This study was therefore conducted to propose a model of sulco-gyral anatomy for the healthy human brain. We hypothesized that both numeric knowledge (i.e., image-based) and symbolic knowledge (i.e., concept-based) have to be represented and coordinated. In addition, the representation of this knowledge should be application-independent in order to be usable in various contexts. Therefore, we devised a symbolic model describing specialization, composition and spatial organization of cortical anatomical structures. We also collected numeric knowledge such as 3D models of shape and shape variation about cortical anatomical structures. For each numeric piece of knowledge, a companion file describes the concept it refers to and the nature of the relationship. Demonstration software performs a mapping between the numeric and the symbolic aspects for browsing the knowledge base. PMID:15118839
NASA Technical Reports Server (NTRS)
Morrell, F. R.; Bailey, M. L.; Motyka, P. R.
1988-01-01
Flight test results of a vector-based fault-tolerant algorithm for a redundant strapdown inertial measurement unit are presented. Because the inertial sensors provide flight-critical information for flight control and navigation, failure detection and isolation is developed in terms of a multi-level structure. Threshold compensation techniques for gyros and accelerometers, developed to enhance the sensitivity of the failure detection process to low-level failures, are presented. Four flight tests, conducted in a commercial transport type environment, were used to determine the ability of the failure detection and isolation algorithm to detect failure signals such as hard-over, null, or bias-shift failures. The algorithm provided timely detection and correct isolation of flight-control and low-level failures. The flight tests of the vector-based algorithm demonstrated its capability to provide false-alarm-free dual fail-operational performance for the skewed array of inertial sensors.
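As a toy illustration of residual-threshold failure detection on redundant sensors (this is not the vector-based, multi-level flight algorithm itself, and the readings are hypothetical):

```python
def detect_failed_sensors(readings, threshold):
    """Flag redundant sensors (all nominally measuring the same quantity)
    whose residual from the median exceeds a threshold.  Illustrative
    only: the flight algorithm described above is vector-based, with
    multi-level logic and threshold compensation for gyros and
    accelerometers.
    """
    s = sorted(readings)
    n = len(s)
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    return [i for i, r in enumerate(readings) if abs(r - median) > threshold]

# Hypothetical hard-over failure on the sensor at index 2
failed = detect_failed_sensors([1.00, 1.02, 9.75, 0.99], threshold=0.5)
```

The threshold choice trades detection sensitivity against false alarms, which is why the compensation techniques above matter for low-level failures.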
Numerical Analysis of Large Telescopes in Terms of Induced Loads and Resulting Geometrical Stability
NASA Astrophysics Data System (ADS)
Upnere, S.; Jekabsons, N.; Joffe, R.
2013-03-01
Comprehensive numerical studies, involving structural and Computational Fluid Dynamics (CFD) analysis, have been carried out at the Engineering Research Institute "Ventspils International Radio Astronomy Center" (VIRAC) of the Ventspils University College to investigate the effects of gravitational and wind loads on the performance of the large, ground-based radio telescope RT-32. Gravitational distortions appear to be the main limiting factor for the reflector performance in everyday operation. Random loads caused by wind gusts (unavoidable at zenith) contribute to fatigue accumulation.
Chaotic structures of nonlinear magnetic fields. I - Theory. II - Numerical results
NASA Technical Reports Server (NTRS)
Lee, Nam C.; Parks, George K.
1992-01-01
A study of the evolutionary properties of nonlinear magnetic fields in flowing MHD plasmas is presented to illustrate that nonlinear magnetic fields may involve chaotic dynamics. It is shown how a suitable transformation of the coupled equations leads to Duffing's form, suggesting that the behavior of the general solution can also be chaotic. Numerical solutions of the nonlinear magnetic field equations that have been cast in the form of Duffing's equation are presented.
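A minimal RK4 integration of Duffing's equation, with illustrative parameter values not taken from the paper:

```python
import numpy as np

def duffing_rk4(x0, v0, t_end, dt, delta=0.2, alpha=-1.0, beta=1.0,
                gamma=0.3, omega=1.0):
    """Integrate Duffing's equation
        x'' + delta x' + alpha x + beta x^3 = gamma cos(omega t)
    with classical fourth-order Runge-Kutta.  Parameter values are
    illustrative defaults (a commonly studied chaotic regime), not the
    coefficients arising from the MHD reduction in the paper.
    """
    def f(t, y):
        x, v = y
        return np.array([v, gamma * np.cos(omega * t)
                         - delta * v - alpha * x - beta * x ** 3])
    y = np.array([x0, v0], float)
    ts = np.arange(0.0, t_end, dt)
    out = [y.copy()]
    for t in ts[:-1]:
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(y.copy())
    return ts, np.array(out)

ts, traj = duffing_rk4(x0=1.0, v0=0.0, t_end=50.0, dt=0.01)
```

Sweeping the forcing amplitude and initial conditions in such an integration is the standard way to expose the chaotic behavior the paper identifies.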
Coupled transport processes in semipermeable media. Part 2: Numerical method and results
NASA Astrophysics Data System (ADS)
Jacobsen, Janet S.; Carnahan, Chalon L.
1990-04-01
A numerical simulator has been developed to investigate the effects of coupled processes on heat and mass transport in semipermeable media. The governing equations on which the simulator is based were derived using the thermodynamics of irreversible processes. The equations are nonlinear and have been solved numerically using the n-dimensional Newton's method. As an example of an application, the numerical simulator has been used to investigate heat and solute transport in the vicinity of a heat source buried in a saturated clay-like medium, in part to study solute transport in bentonite packing material surrounding a nuclear waste canister. The coupled processes considered were thermal filtration, thermal osmosis, chemical osmosis and ultrafiltration. In the simulations, heat transport by coupled processes was negligible compared to heat conduction, but pressure and solute migration were affected. Solute migration was retarded relative to the uncoupled case when only chemical osmosis was considered. When both chemical osmosis and thermal osmosis were included, solute migration was enhanced.
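The n-dimensional Newton's method used by the simulator can be sketched as follows, applied here to a toy nonlinear system rather than the coupled transport equations:

```python
import numpy as np

def newton_nd(F, J, x0, tol=1e-10, max_iter=50):
    """n-dimensional Newton's method for F(x) = 0:
    solve J(x_k) dx = -F(x_k), then update x_{k+1} = x_k + dx,
    iterating until the step norm falls below tol.
    """
    x = np.asarray(x0, float).copy()
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -np.asarray(F(x), float))
        x += dx
        if np.linalg.norm(dx) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Toy nonlinear system (hypothetical): x^2 + y^2 = 4, x*y = 1
F = lambda v: [v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0]
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]])
root = newton_nd(F, J, [2.0, 0.5])
```

For the simulator's nonlinear governing equations, F would be the discretized residual and J its Jacobian, but the iteration structure is the same.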
NASA Astrophysics Data System (ADS)
Morvan, D.
2010-12-01
behaviour of forest fires, based on a multiphase formulation. This approach consists of solving the balance equations (mass, momentum, energy, chemical species, radiation intensity …) governing the coupled system formed by the vegetation and the surrounding atmosphere. The vegetation was represented as a collection of solid fuel particles, regrouped in families, each one characterized by its own set of physical variables (mass fraction of water, of dry matter, of char, temperature, volume fraction, density, surface-area-to-volume ratio …) necessary to describe the evolution of its state during the propagation of the fire. Some numerical results were then presented and compared with available experimental data. Particular attention was paid to simulating surface fires propagating through grassland and Mediterranean shrubland, for which a large experimental data base exists. We conclude the paper by presenting some recent results obtained in a more operational context, simulating the interaction between two fire fronts (head fire and backfire) under conditions similar to those encountered during a suppression-fire operation.
New Cirrus Retrieval Algorithms and Results from eMAS during SEAC4RS
NASA Astrophysics Data System (ADS)
Holz, R.; Platnick, S. E.; Meyer, K.; Wang, C.; Wind, G.; Arnold, T.; King, M. D.; Yorks, J. E.; McGill, M. J.
2014-12-01
The enhanced MODIS Airborne Simulator (eMAS) scanning imager was flown on the ER-2 during the SEAC4RS field campaign. The imager provides measurements in 38 spectral channels from the visible into the 13 μm CO2 absorption bands at approximately 25 m nadir spatial resolution at cirrus altitudes and, with a swath width of about 18 km, provided substantial context and synergy for other ER-2 cirrus observations. The eMAS is an update to the original MAS scanner, having new midwave and IR spectrometers coupled with the previous VNIR/SWIR spectrometers. In addition to the standard MODIS-like cloud retrieval algorithm (MOD06/MYD06 for MODIS Terra/Aqua, respectively) that provides cirrus optical thickness (COT) and effective particle radius (CER) from several channel combinations, three new algorithms were developed to take advantage of unique aspects of eMAS and/or other ER-2 observations. The first uses a combination of two solar reflectance channels within the 1.88 μm water vapor absorption band, each with significantly different single scattering albedo, allowing for simultaneous COT and CER retrievals. The advantage of this algorithm is that the strong water vapor absorption can significantly reduce the sensitivity to lower-level clouds and ocean/land surface properties, thus better isolating cirrus properties. A second algorithm uses a suite of infrared channels in an optimal estimation framework to simultaneously retrieve COT, CER, and cloud-top pressure/temperature. Finally, a window IR algorithm is used to retrieve COT in synergy with the ER-2 Cloud Physics Lidar (CPL) cloud top/base boundary measurements. Using a variety of quantifiable error sources, uncertainties for all eMAS retrievals will be shown along with comparisons with CPL COT retrievals.
Danilovic, D; Ohm, O J; Stroebel, J; Breivik, K; Hoff, P I; Markowitz, T
1998-05-01
We have developed an algorithmic method for automatic determination of stimulation thresholds in both cardiac chambers in patients with intact atrioventricular (AV) conduction. The algorithm utilizes ventricular sensing, may be used with any type of pacing leads, and may be downloaded via telemetry links into already implanted dual-chamber Thera pacemakers. Thresholds are determined with 0.5 V amplitude and 0.06 ms pulse-width resolution in unipolar, bipolar, or both lead configurations, with a programmable sampling interval from 2 minutes to 48 hours. Measured values are stored in the pacemaker memory for later retrieval and do not influence permanent output settings. The algorithm was intended to gather information on continuous behavior of stimulation thresholds, which is important in the formation of strategies for programming pacemaker outputs. Clinical performance of the algorithm was evaluated in eight patients who received bipolar tined steroid-eluting leads and were observed for a mean of 5.1 months. Patient safety was not compromised by the algorithm, except for the possibility of pacing during the physiologic refractory period. Methods for discrimination of incorrect data points were developed and incorrect values were discarded. Fine resolution threshold measurements collected during this study indicated that: (1) there were great differences in magnitude of threshold peaking in different patients; (2) the initial intensive threshold peaking was usually followed by another less intensive but longer-lasting wave of threshold peaking; (3) the pattern of tissue reaction in the atrium appeared different from that in the ventricle; and (4) threshold peaking in the bipolar lead configuration was greater than in the unipolar configuration. The algorithm proved to be useful in studying ambulatory thresholds. PMID:9604237
Photometric redshifts with the quasi Newton algorithm (MLPQNA): Results in the PHAT1 contest
NASA Astrophysics Data System (ADS)
Cavuoti, S.; Brescia, M.; Longo, G.; Mercurio, A.
2012-10-01
Context. Since the advent of modern multiband digital sky surveys, photometric redshifts (photo-z's) have become relevant if not crucial to many fields of observational cosmology, such as the characterization of cosmic structures and weak and strong lensing. Aims: We describe an application to an astrophysical context, namely the evaluation of photometric redshifts, of MLPQNA, a machine-learning method based on the quasi Newton algorithm. Methods: Theoretical methods for photo-z evaluation are based on the interpolation of a priori knowledge (spectroscopic redshifts or SED templates), and they represent an ideal comparison ground for neural-network-based methods. The MultiLayer Perceptron with quasi Newton learning rule (MLPQNA) described here is an effective computing implementation of neural networks, exploited for the first time to solve regression problems in the astrophysical context. It is offered to the community through the DAMEWARE (DAta Mining & Exploration Web Application REsource) infrastructure. Results: The PHAT contest (Hildebrandt et al. 2010, A&A, 523, A31) provides a standard dataset to test old and new methods for photometric redshift evaluation, together with a set of statistical indicators that allow a straightforward comparison among different methods. The MLPQNA model has been applied to the whole PHAT1 dataset of 1984 objects after an optimization of the model performed with the 515 available spectroscopic redshifts as a training set. When applied to the PHAT1 dataset, MLPQNA obtains the best bias accuracy (0.0006) and very competitive accuracies in terms of scatter (0.056) and outlier percentage (16.3%), scoring as the second most effective empirical method among those that have so far participated in the contest. MLPQNA shows better generalization capabilities than most other empirical methods, especially in the presence of underpopulated regions of the knowledge base.
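The core idea of MLPQNA, a multilayer perceptron trained with a quasi-Newton rule, can be sketched with a toy one-dimensional regression. This stands in for photo-z regression and is not the DAMEWARE implementation; L-BFGS-B is used here as the quasi-Newton optimizer:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy regression stand-in for photo-z: learn y = f(x) from noisy samples
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)

H = 10  # hidden units in a one-hidden-layer MLP

def unpack(w):
    W1 = w[:H].reshape(1, H)
    b1 = w[H:2 * H]
    W2 = w[2 * H:3 * H].reshape(H, 1)
    b2 = w[3 * H]
    return W1, b1, W2, b2

def loss(w):
    """Mean-squared error of the MLP prediction over the training set."""
    W1, b1, W2, b2 = unpack(w)
    hid = np.tanh(X @ W1 + b1)
    pred = (hid @ W2).ravel() + b2
    return np.mean((pred - y) ** 2)

w0 = 0.1 * rng.normal(size=3 * H + 1)
res = minimize(loss, w0, method="L-BFGS-B")  # quasi-Newton training step
```

Quasi-Newton methods build up curvature information from gradient differences, which is what gives MLPQNA-style training its fast convergence relative to plain gradient descent.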
Numerical model of the lowermost Mississippi River as an alluvial-bedrock reach: preliminary results
NASA Astrophysics Data System (ADS)
Viparelli, E.; Nittrouer, J. A.; Mohrig, D. C.; Parker, G.
2012-12-01
Recent field studies reveal that the river bed of the Lower Mississippi River is characterized by a transition from alluvium (upstream) to bedrock (downstream). In particular, in the downstream 250 km of the river, fields of actively migrating bedforms alternate with deep zones where a consolidated substratum is exposed. Here we present a first version of a one-dimensional numerical model able to capture the alluvial-bedrock transition in the lowermost Mississippi River, defined herein as the 500-km reach between the Old River Control Structure and the Gulf of Mexico. The flow is assumed to be steady, and the cross-section is divided into two regions, the river channel and the floodplain. The streamwise variation of channel and floodplain geometry is described with synthetic relations derived from field observations. Flow resistance in the river channel is computed with the formulation for low-slope, large sand bed rivers due to Wright and Parker, while a Chezy-type formulation is implemented on the floodplain. Sediment is modeled in terms of bed material and wash load. Suspended load is computed with the Wright-Parker formulation. This treatment allows either uniform sediment or a mixture of different grain sizes, and accounts for stratification effects. Bedload transport rates are estimated with the relation for sediment mixtures of Ashida and Michiue. Previous work documents reasonable agreement between these load relations and field measurements. Washload is routed through the system solving the equation of mass conservation of sediment in suspension in the water column. The gradual transition from the alluvial reach to the bedrock reach is modeled in terms of a "mushy" layer of specified thickness overlying the non-erodible substrate. In the case of a fully alluvial reach, the channel bed elevation is above this mushy layer, while in the case of partial alluvial cover of the substratum, the channel bed elevation is within the mushy layer. Variations in base
Parallel algorithms for unconstrained optimizations by multisplitting
He, Qing
1994-12-31
In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 hypercube with 64 nodes. Interestingly, the sequential implementation on one node shows that, if the problem is split properly, the algorithm converges much faster than without splitting.
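A sketch of the multisplitting idea under simplifying assumptions (Jacobi-style block solves on a separable test function, reusing a sequential optimizer for each block; this is not the paper's algorithm):

```python
import numpy as np
from scipy.optimize import minimize

def multisplit_minimize(f, x0, blocks, sweeps=20):
    """Illustrative multisplitting scheme: on each sweep, minimize f over
    each block of variables with the others held fixed (the block solves
    are mutually independent and could run in parallel on separate
    nodes), then combine the block updates.  Each block solve reuses an
    existing sequential optimizer (BFGS), mirroring the paper's point
    that sequential algorithms are used without internal parallelization.
    """
    x = np.asarray(x0, float).copy()
    for _ in range(sweeps):
        updates = []
        for blk in blocks:                 # parallelizable loop
            def fb(xb, blk=blk):
                z = x.copy()
                z[blk] = xb
                return f(z)
            r = minimize(fb, x[blk], method="BFGS")
            updates.append((blk, r.x))
        for blk, xb in updates:            # combine block results
            x[blk] = xb
    return x

# Toy separable quadratic: f(x) = sum((x - c)^2), minimized at c
c = np.array([1.0, -2.0, 3.0, 0.5])
f = lambda v: float(np.sum((v - c) ** 2))
xstar = multisplit_minimize(f, np.zeros(4), blocks=[[0, 1], [2, 3]])
```

On a separable problem the splitting decouples exactly, which is the extreme case of the paper's observation that a well-chosen split accelerates convergence.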
High-performance combinatorial algorithms
Pinar, Ali
2003-10-31
Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.
Ponderomotive stabilization of flute modes in mirrors: Feedback control and numerical results
NASA Technical Reports Server (NTRS)
Similon, P. L.
1987-01-01
Ponderomotive stabilization of rigid plasma flute modes is numerically investigated by use of a variational principle, for a simple geometry, without eikonal approximation. While the near field of the studied antenna can be stabilizing, the far field has a small contribution only, because of large cancellation by quasi mode-coupling terms. The field energy for stabilization is evaluated and is a nonnegligible fraction of the plasma thermal energy. A new antenna design is proposed, and feedback stabilization is investigated. Their use drastically reduces power requirements.
NASA Technical Reports Server (NTRS)
Zhai, Chengxing; Milman, Mark H.; Regehr, Martin W.; Best, Paul K.
2007-01-01
In the companion paper, [Appl. Opt. 46, 5853 (2007)] a highly accurate white light interference model was developed from just a few key parameters characterized in terms of various moments of the source and instrument transmission function. We develop and implement the end-to-end process of calibrating these moment parameters together with the differential dispersion of the instrument and applying them to the algorithms developed in the companion paper. The calibration procedure developed herein is based on first obtaining the standard monochromatic parameters at the pixel level: wavenumber, phase, intensity, and visibility parameters via a nonlinear least-squares procedure that exploits the structure of the model. The pixel level parameters are then combined to obtain the required 'global' moment and dispersion parameters. The process is applied to both simulated scenarios of astrometric observations and to data from the microarcsecond metrology testbed (MAM), an interferometer testbed that has played a prominent role in the development of this technology.
NASA Astrophysics Data System (ADS)
Reynolds, William R.; Talcott, Denise; Hilgers, John W.
2002-07-01
A new iterative algorithm (EMLS) based on the expectation maximization method is derived for extrapolating a non-negative object function from noisy, diffraction-blurred image data. The algorithm has the following desirable attributes: fast convergence for high-frequency object components, reduced sensitivity to constraint parameters, and accommodation of randomly missing data. Speed and convergence results are presented. Field test imagery was obtained with a passive millimeter-wave imaging sensor having a 30.5 cm aperture. The algorithm was implemented and tested in near real time using the field test imagery. Theoretical results and experimental results using the field test imagery will be compared using an effective-aperture measure of resolution increase. The effective-aperture measure, based on examination of the edge-spread function, will be detailed.
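The expectation-maximization approach to recovering a non-negative object from blurred data is closely related to the Richardson-Lucy iteration; a minimal 1-D sketch with a hypothetical Gaussian PSF (this is not the EMLS algorithm itself, which adds the extrapolation and missing-data features described above):

```python
import numpy as np

def em_deblur(data, psf, n_iter=200):
    """Richardson-Lucy / EM iteration for a non-negative object o given
    blurred data d = o * psf (circular convolution for simplicity):
        o <- o * correlate(d / (psf * o), psf)
    Non-negativity of the estimate is preserved at every step.
    """
    def conv(a, b):
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
    psf = np.asarray(psf, float)
    psf = psf / psf.sum()
    data = np.maximum(np.asarray(data, float), 0.0)
    psf_adj = np.roll(psf[::-1], 1)          # adjoint (correlation) kernel
    o = np.full_like(data, data.mean())
    for _ in range(n_iter):
        est = conv(o, psf)
        ratio = data / np.maximum(est, 1e-12)
        o = np.maximum(o * conv(ratio, psf_adj), 0.0)  # clip FFT round-off
    return o

# Hypothetical scene: two spikes blurred by a Gaussian PSF (sigma = 2 samples)
n = 64
x = np.arange(n)
obj = np.zeros(n); obj[20] = 1.0; obj[40] = 0.5
psf = np.roll(np.exp(-0.5 * ((x - n // 2) / 2.0) ** 2), -(n // 2))
data = np.real(np.fft.ifft(np.fft.fft(obj) * np.fft.fft(psf / psf.sum())))
rec = em_deblur(data, psf)
```

The multiplicative update conserves total flux and enforces non-negativity automatically, which is the property exploited when extrapolating beyond the diffraction limit.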
Fanselau, R.W.; Thakkar, J.G.; Hiestand, J.W.; Cassell, D.
1981-03-01
The Comparative Thermal-Hydraulic Evaluation of Steam Generators program represents an analytical investigation of the thermal-hydraulic characteristics of four PWR steam generators. The analytical tool utilized in this investigation is the CALIPSOS code, a three-dimensional flow distribution code. This report presents the steady state thermal-hydraulic characteristics on the secondary side of a Westinghouse Model 51 steam generator. Details of the CALIPSOS model with accompanying assumptions, operating parameters, and transport correlations are identified. Comprehensive graphical and numerical results are presented to facilitate the desired comparison with other steam generators analyzed by the same flow distribution code.
NASA Astrophysics Data System (ADS)
Conti, Livia; De Gregorio, Paolo; Bonaldi, Michele; Borrielli, Antonio; Crivellari, Michele; Karapetyan, Gagik; Poli, Charles; Serra, Enrico; Thakur, Ram-Krishna; Rondoni, Lamberto
2012-06-01
We study experimentally, numerically, and theoretically the elastic response of mechanical resonators along which the temperature is not uniform, as a consequence of the onset of steady-state thermal gradients. Two experimental setups and designs are employed, both using low-loss materials. In both cases, we monitor the resonance frequencies of specific modes of vibration, as they vary along with variations of temperatures and of temperature differences. In one case, we consider the first longitudinal mode of vibration of an aluminum alloy resonator; in the other case, we consider the antisymmetric torsion modes of a silicon resonator. By defining the average temperature as the volume-weighted mean of the temperatures of the respective elastic sections, we find that the elastic response of an object depends solely on it, regardless of whether a thermal gradient exists and, up to 10% imbalance, regardless of its magnitude. The numerical model employs a chain of anharmonic oscillators, with first- and second-neighbor interactions and temperature profiles satisfying Fourier's Law to a good degree. Its analysis confirms, for the most part, the experimental findings, and is explained theoretically from a statistical-mechanics perspective with a loose notion of local equilibrium.
Estimation of geopotential from satellite-to-satellite range rate data: Numerical results
NASA Technical Reports Server (NTRS)
Thobe, Glenn E.; Bose, Sam C.
1987-01-01
A technique for high-resolution geopotential field estimation by recovering the harmonic coefficients from satellite-to-satellite range rate data is presented and tested against both a controlled analytical simulation of a one-day satellite mission (maximum degree and order 8) and then against a Cowell method simulation of a 32-day mission (maximum degree and order 180). Innovations include: (1) a new frequency-domain observation equation based on kinetic energy perturbations which avoids much of the complication of the usual Keplerian element perturbation approaches; (2) a new method for computing the normalized inclination functions which unlike previous methods is both efficient and numerically stable even for large harmonic degrees and orders; (3) the application of a mass storage FFT to the entire mission range rate history; (4) the exploitation of newly discovered symmetries in the block diagonal observation matrix which reduce each block to the product of (a) a real diagonal matrix factor, (b) a real trapezoidal factor with half the number of rows as before, and (c) a complex diagonal factor; (5) a block-by-block least-squares solution of the observation equation by means of a custom-designed Givens orthogonal rotation method which is both numerically stable and tailored to the trapezoidal matrix structure for fast execution.
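The stable block-by-block least-squares solution via Givens rotations (innovation 5) can be illustrated with a generic sketch. The dense-matrix version below is hypothetical and does not reproduce the paper's tailoring to the trapezoidal block structure, which is what makes the custom method fast:

```python
import numpy as np

def givens_lstsq(A, b):
    """Least-squares solve of A x = b via Givens rotations (sketch).

    Each orthogonal plane rotation zeroes one subdiagonal entry of A.
    Rotations are numerically stable, and for structured (e.g.
    trapezoidal) matrices only the nonzero entries need to be touched.
    """
    R = A.astype(float).copy()
    y = b.astype(float).copy()
    m, n = R.shape
    for j in range(n):
        for i in range(j + 1, m):
            if R[i, j] == 0.0:
                continue
            r = np.hypot(R[j, j], R[i, j])
            c, s = R[j, j] / r, R[i, j] / r
            G = np.array([[c, s], [-s, c]])   # rotation in the (j, i) plane
            R[[j, i], j:] = G @ R[[j, i], j:]
            y[[j, i]] = G @ y[[j, i]]
    # Back-substitution on the triangular factor
    return np.linalg.solve(R[:n, :n], y[:n])
```

The result matches the normal-equations solution but avoids forming A^T A, which squares the condition number.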
Interaction of a mantle plume and a segmented mid-ocean ridge: Results from numerical modeling
NASA Astrophysics Data System (ADS)
Georgen, Jennifer E.
2014-04-01
Previous investigations have proposed that changes in lithospheric thickness across a transform fault, due to the juxtaposition of seafloor of different ages, can impede lateral dispersion of an on-ridge mantle plume. The application of this “transform damming” mechanism has been considered for several plume-ridge systems, including the Reunion hotspot and the Central Indian Ridge, the Amsterdam-St. Paul hotspot and the Southeast Indian Ridge, the Cobb hotspot and the Juan de Fuca Ridge, the Iceland hotspot and the Kolbeinsey Ridge, the Afar plume and the ridges of the Gulf of Aden, and the Marion/Crozet hotspot and the Southwest Indian Ridge. This study explores the geodynamics of the transform damming mechanism using a three-dimensional finite element numerical model. The model solves the coupled steady-state equations for conservation of mass, momentum, and energy, including thermal buoyancy and viscosity that is dependent on pressure and temperature. The plume is introduced as a circular thermal anomaly on the bottom boundary of the numerical domain. The center of the plume conduit is located directly beneath a spreading segment, at a distance of 200 km (measured in the along-axis direction) from a transform offset with length 100 km. Half-spreading rate is 0.5 cm/yr. In a series of numerical experiments, the buoyancy flux of the modeled plume is progressively increased to investigate the effects on the temperature and velocity structure of the upper mantle in the vicinity of the transform. Unlike earlier studies, which suggest that a transform always acts to decrease the along-axis extent of plume signature, these models imply that the effect of a transform on plume dispersion may be complex. Under certain ranges of plume flux modeled in this study, the region of the upper mantle undergoing along-axis flow directed away from the plume could be enhanced by the three-dimensional velocity and temperature structure associated with ridge
NASA Astrophysics Data System (ADS)
Blecka, Maria I.
2010-05-01
Passive remote spectrometric methods are important in examining the atmospheres of planets. Radiance spectra inform us about the thermodynamical parameters and the composition of atmospheres and surfaces. Spectral techniques can be useful for detecting trace aerosols, such as biological substances (if present), in planetary environments. We discuss here some aspects of the spectroscopic search for aerosols and dust in planetary atmospheres. The possibility of detecting and identifying biological aerosols with a passive infrared spectrometer in an open-air environment is discussed. We present numerically simulated spectroscopic observations of the Earth's atmosphere, based on radiative transfer theory. Laboratory measurements of the transmittance of various kinds of aerosols, pollens, and bacteria were used in the modeling.
NASA Technical Reports Server (NTRS)
Aveiro, H. C.; Hysell, D. L.; Caton, R. G.; Groves, K. M.; Klenzing, J.; Pfaff, R. F.; Stoneback, R.; Heelis, R. A.
2012-01-01
A three-dimensional numerical simulation of plasma density irregularities in the postsunset equatorial F region ionosphere leading to equatorial spread F (ESF) is described. The simulation evolves under realistic background conditions including bottomside plasma shear flow and vertical current. It also incorporates C/NOFS satellite data which partially specify the forcing. A combination of generalized Rayleigh-Taylor instability (GRT) and collisional shear instability (CSI) produces growing waveforms with key features that agree with C/NOFS satellite and ALTAIR radar observations in the Pacific sector, including features such as gross morphology and rates of development. The transient response of CSI is consistent with the observation of bottomside waves with wavelengths close to 30 km, whereas the steady state behavior of the combined instability can account for the 100+ km wavelength waves that predominate in the F region.
Razali, Azhani Mohd; Abdullah, Jaafar
2015-04-29
Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, and it is among the medical imaging modalities that have made the diagnosis and treatment of disease possible. The SPECT technique is not limited to the medical sector, however: much work has been carried out to adapt the same concept, using high-energy photon emission, to diagnose process malfunctions in critical industrial systems, such as in chemical reaction engineering research laboratories, as well as in the oil and gas, petrochemical, and petrochemical refining industries. Motivated by the vast applications of the SPECT technique, this work studies the application of SPECT to a Pebble Bed Reactor (PBR) using a numerical phantom of the pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image depends largely on the algorithm used, this work compares two image reconstruction algorithms for SPECT, namely the Expectation Maximization algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time, than the Expectation Maximization algorithm.
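As a rough illustration of the first of the two algorithms compared, a minimal maximum-likelihood expectation-maximization (MLEM) iteration for emission tomography might look as follows; the system matrix, iteration count, and interface are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM reconstruction for emission tomography (sketch).

    A : (m, n) system matrix mapping image voxels to detector bins
    y : (m,) measured counts
    Returns a nonnegative image estimate x of shape (n,).
    """
    x = np.ones(A.shape[1])           # uniform, strictly positive start
    sens = A.sum(axis=0)              # sensitivity: back-projection of ones
    for _ in range(n_iter):
        proj = A @ x                  # forward projection of current image
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

The multiplicative update preserves nonnegativity automatically, which is one reason EM-type methods are popular for count data despite their slow convergence relative to direct inversion formulas.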
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1995-01-01
Two methods for developing high-order single-step explicit algorithms on symmetric stencils with data on only one time level are presented. Examples are given for the convection and linearized Euler equations with up to eighth-order accuracy in both space and time in one space dimension, and up to sixth-order in two space dimensions. The method of characteristics is generalized to nondiagonalizable hyperbolic systems by using exact local polynomial solutions of the system, and the resulting exact propagator methods automatically incorporate the correct multidimensional wave propagation dynamics. Multivariate Taylor or Cauchy-Kovalevskaya expansions are also used to develop algorithms. Both of these methods can be applied to obtain algorithms of arbitrarily high order for hyperbolic systems in multiple space dimensions. Cross derivatives are included in the local approximations used to develop the algorithms in this paper in order to obtain high-order accuracy and improved isotropy and stability. Efficiency in meeting global error bounds is an important criterion for evaluating algorithms, and the higher-order algorithms are shown to be up to several orders of magnitude more efficient even though they are more complex. Stable high-order boundary conditions for the linearized Euler equations are developed in one space dimension and demonstrated in two space dimensions.
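The Cauchy-Kovalevskaya idea, replacing time derivatives in a Taylor expansion with space derivatives via the PDE itself, can be sketched in its simplest form: the classical second-order Lax-Wendroff step for 1D linear convection. The paper's algorithms extend this far beyond second order and to systems; the code below is only a generic illustration:

```python
import numpy as np

def lax_wendroff_step(u, c, dt, dx):
    """One single-step explicit update for u_t + c u_x = 0.

    Taylor expansion u(t+dt) = u + dt u_t + dt^2/2 u_tt, with time
    derivatives replaced by space derivatives via the PDE
    (u_t = -c u_x, u_tt = c^2 u_xx). Periodic boundaries;
    second order in both space and time.
    """
    nu = c * dt / dx                 # Courant number
    up = np.roll(u, -1)              # u_{j+1}
    um = np.roll(u, 1)               # u_{j-1}
    return u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2.0 * u + um)
```

A sine wave advected one full period on a periodic grid should return nearly to its initial state, with small dispersive and dissipative errors that shrink at second order as the grid is refined.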
Results with an Algorithmic Approach to Hybrid Repair of the Aortic Arch
Andersen, Nicholas D.; Williams, Judson B.; Hanna, Jennifer M.; Shah, Asad A.; McCann, Richard L.; Hughes, G. Chad
2013-01-01
Objective Hybrid repair of the transverse aortic arch may allow for aortic arch repair with reduced morbidity in patients who are suboptimal candidates for conventional open surgery. Here, we present our results with an algorithmic approach to hybrid arch repair, based upon the extent of aortic disease and patient comorbidities. Methods Between August 2005 and January 2012, 87 patients underwent hybrid arch repair by three principal procedures: zone 1 endograft coverage with extra-anatomic left carotid revascularization (zone 1, n=19), zone 0 endograft coverage with aortic arch debranching (zone 0, n=48), or total arch replacement with staged stented elephant trunk completion (stented elephant trunk, n=20). Results The mean patient age was 64 years and the mean expected in-hospital mortality rate was 16.3% as calculated by the EuroSCORE II. 22% (n=19) of operations were non-elective. Sternotomy, cardiopulmonary bypass, and deep hypothermic circulatory arrest were required in 78% (n=68), 45% (n=39), and 31% (n=27) of patients, respectively, to allow for total arch replacement, arch debranching, or other concomitant cardiac procedures, including ascending ± hemi-arch replacement in 17% (n=8) of patients undergoing zone 0 repair. All stented elephant trunk procedures (n=20) and 19% (n=9) of zone 0 procedures were staged, with 41% (n=12) of patients undergoing staged repair during a single hospitalization. The 30-day/in-hospital rates of stroke and permanent paraplegia/paraparesis were 4.6% (n=4) and 1.2% (n=1), respectively. Three of 27 (11.1%) patients with native ascending aorta zone 0 proximal landing zone experienced retrograde type A dissection following endograft placement. The overall in-hospital mortality rate was 5.7% (n=5); however, 30-day/in-hospital mortality increased to 14.9% (n=13) due to eight 30-day out-of-hospital deaths. Native ascending aorta zone 0 endograft placement was found to be the only univariate predictor of 30-day/in-hospital mortality
NASA Technical Reports Server (NTRS)
Wehrbein, W. M.; Leovy, C. B.
1981-01-01
A Curtis matrix is used to compute cooling by the 15-micron and 10-micron bands of carbon dioxide. Escape of radiation to space and exchange with the lower boundary are used for the 9.6-micron band of ozone. Voigt line shape, vibrational relaxation, line overlap, and the temperature dependence of line-strength distributions and transmission functions are incorporated into the Curtis matrices. The distributions of the atmospheric constituents included in the algorithm and the method used to compute the Curtis matrices are discussed, as well as cooling or heating by the 9.6-micron band of ozone. The FORTRAN programs and subroutines that were developed are described and listed.
NASA Technical Reports Server (NTRS)
Rigby, D. L.; Van Fossen, G. J.
1992-01-01
A study of the effect of spanwise variation on leading edge heat transfer is presented. Experimental and numerical results are given for a circular leading edge and for a 3:1 elliptical leading edge. It is demonstrated that increases in leading edge heat transfer due to spanwise variations in freestream momentum are comparable to those due to freestream turbulence.
Numerical study of the wind energy potential in Bulgaria - Some preliminary results
NASA Astrophysics Data System (ADS)
Jordanov, G.; Gadzhev, G.; Ganev, K.; Miloshev, N.; Syrakov, D.; Prodanova, M.
2012-10-01
The new energy-efficiency policy of the EU requires that, by 2020, 16% of Bulgaria's electricity be produced from renewable sources. Wind is one such source. The ecological benefits of all kinds of "green" energy are obvious; it is desirable, however, that the utilization of renewable energy sources be as economically effective as possible. This means that the installation of the respective devices (wind farms, solar farms, etc.) should be based on a detailed and reliable evaluation of the country's real potential. A detailed study of the country's wind energy potential (spatial distribution, temporal variation, mean and extreme values, fluctuations and statistical characteristics, and an evaluation from the point of view of industrial applicability) cannot be made on the basis of the existing routine meteorological data alone: the measuring network is not dense enough to capture all the details of the local flow systems, and hence of the spatial distribution of the country's real wind energy potential. The measurement data therefore have to be supplemented by numerical modeling. The wind field simulations were performed applying the 5th-generation PSU/NCAR meso-meteorological model MM5 for the years 2000-2007, with a spatial resolution of 3 km over Bulgaria. Some preliminary evaluations of the country's wind energy potential, based on the simulation output, are demonstrated in the paper.
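The basic quantity behind such wind-energy evaluations, the mean wind power density, can be sketched as follows. This is a generic formula, not the paper's methodology, and the default air density is an assumed standard value:

```python
def mean_wind_power_density(speeds, rho=1.225):
    """Mean wind power density (W/m^2) from a wind-speed series (m/s).

    P/A = 0.5 * rho * <v^3>, averaging the cube of each sample rather
    than cubing the mean speed, so that fluctuations contribute.
    rho is the air density (assumed sea-level standard value).
    """
    return 0.5 * rho * sum(v**3 for v in speeds) / len(speeds)
```

Because the power depends on the cube of the speed, a gusty site with the same mean speed as a steady one yields a higher power density, which is one reason dense, model-supplemented wind statistics matter for siting.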
Mazza, Fabio; Vulcano, Alfonso
2008-07-08
For a widespread application of dissipative braces to protect framed buildings against seismic loads, practical and reliable design procedures are needed. In this paper a design procedure based on the Direct Displacement-Based Design approach is adopted, assuming the elastic lateral storey-stiffness of the damped braces proportional to that of the unbraced frame. To check the effectiveness of the design procedure, presented in an associated paper, a six-storey reinforced concrete plane frame, representative of a medium-rise symmetric framed building, is considered as the primary test structure; this structure, designed in a medium-risk region, is supposed to be retrofitted as in a high-risk region by the insertion of diagonal braces equipped with hysteretic dampers. A numerical investigation is carried out to study the nonlinear static and dynamic responses of the primary and the damped braced test structures, using step-by-step procedures described in the associated paper mentioned above; the behaviour of frame members and hysteretic dampers is idealized by bilinear models. Real and artificial accelerograms, matching the EC8 response spectrum for a medium soil class, are considered for the dynamic analyses.
Accretion of rotating fluids by barytropes - Numerical results for white-dwarf models
NASA Technical Reports Server (NTRS)
Durisen, R. H.
1977-01-01
Numerical sequences of rotating axisymmetric nonmagnetic equilibrium models are constructed which represent the evolution of a barytropic star as it accretes material from a rotating medium. Two accretion geometries are considered - one approximating accretion from a rotating cloud and the other, accretion from a Keplerian disk. It is assumed that some process, such as Ekman spin-up or nonequilibrium oscillations, maintains nearly constant angular velocity along cylinders about the rotation axis. Transport of angular momentum in the cylindrically radial direction by viscosity is included. Fluid instabilities and other physical processes leading to enhancement of this transport are discussed. Particular application is made to zero-temperature white-dwarf models, using the degenerate electron equation of state. An initially nonrotating 0.566-solar-mass white dwarf is followed during the accretion of more than one solar mass of material. Applications to degenerate stellar cores, to mass-transfer binary systems containing white dwarfs, such as novae and dwarf novae, to Type I supernovae, and to galactic X-ray sources are considered.
Preliminary Results from Numerical Experiments on the Summer 1980 Heat Wave and Drought
NASA Technical Reports Server (NTRS)
Wolfson, N.; Atlas, R.; Sud, Y. C.
1985-01-01
During the summer of 1980, a prolonged heat wave and drought affected the United States. A preliminary set of experiments has been conducted to study the effect of varying boundary conditions on the GLA model simulation of the heat wave. Five 10-day numerical integrations with three different specifications of boundary conditions were carried out: a control experiment which utilized climatological boundary conditions, an SST experiment which utilized summer 1980 sea-surface temperatures in the North Pacific, but climatological values elsewhere, and a Soil Moisture experiment which utilized the values of Mintz-Serafini for the summer, 1980. The starting dates for the five forecasts were 11 June, 7 July, 21 July, 22 August, and 6 September of 1980. These dates were specifically chosen as days when a heat wave was already established in order to investigate the effect of soil moistures or North Pacific sea-surface temperatures on the model's ability to maintain the heat wave pattern. The experiments were evaluated in terms of the heat wave index for the South Plains, North Plains, Great Plains and the entire U.S. In addition a subjective comparison of map patterns has been performed.
NASA Astrophysics Data System (ADS)
Szeremley, Daniel; Mussenbrock, Thomas; Brinkmann, Ralf Peter; Zimmermanns, Marc; Rolfes, Ilona; Eremin, Denis; Ruhr-University Bochum, Theoretical Electrical Engineering Team; Ruhr-University Bochum, Institute of Microwave Systems Team
2015-09-01
Recent years have seen a growing market demand for bottles made of polyethylene terephthalate (PET). Fast and efficient sterilization processes, as well as barrier coatings to decrease gas permeation, are therefore required. A specialized microwave plasma source, referred to as the plasmaline, has been developed to allow thin films of, e.g., silicon oxide to be deposited on the inner surface of such PET bottles. The plasmaline is a coaxial waveguide combined with a gas inlet which is inserted into the empty bottle and initiates a reactive plasma. To optimize and control the different surface processes, it is essential to fully understand the microwave power coupling to the plasma and the related heating of electrons inside the bottle, and thus the electromagnetic wave propagation along the plasmaline. In this contribution, we present a detailed dispersion analysis based on a numerical approach. We study how modes of guided waves propagate under different conditions, if at all. The authors gratefully acknowledge the financial support of the German Research Foundation (DFG) within the framework of the collaborative research centre TRR87.
Recent results from numerical models of the Caribbean Sea and Gulf of Mexico: Do they all agree?
NASA Astrophysics Data System (ADS)
Sheinbaum, J.
2013-05-01
A great variety of numerical models of the Caribbean Sea and Gulf of Mexico have been developed over the years. They all reproduce the basic features of the circulation in the region but do not necessarily agree on the dynamics that explain them. We review recent results related to: 1) semiannual and interannual eddy variability in the Caribbean and its possible role in determining the extension of the western Atlantic warm pool; 2) the Loop Current and its eddy-shedding dynamics; and 3) the deep circulation in the Gulf of Mexico. Recent observations of inertial wave trapping by eddies suggest new avenues for numerical research and model comparisons.
NASA Technical Reports Server (NTRS)
Stamnes, Knut; Tsay, S.-Chee; Jayaweera, Kolf; Wiscombe, Warren
1988-01-01
The transfer of monochromatic radiation in a scattering, absorbing, and emitting plane-parallel medium with a specified bidirectional reflectivity at the lower boundary is considered. The equations and boundary conditions are summarized. The numerical implementation of the theory is discussed with attention given to the reliable and efficient computation of eigenvalues and eigenvectors. Ways of avoiding fatal overflows and ill-conditioning in the matrix inversion needed to determine the integration constants are also presented.
NASA Astrophysics Data System (ADS)
Motheau, E.; Abraham, J.
2016-05-01
A novel and efficient algorithm is presented in this paper to deal with DNS of turbulent reacting flows under the low-Mach-number assumption, with detailed chemistry and quasi-spectral accuracy. The temporal integration of the equations relies on an operator-splitting strategy, where chemical reactions are solved implicitly with a stiff solver and the convection-diffusion operators are solved with a Runge-Kutta-Chebyshev method. The spatial discretisation is performed with high-order compact schemes, and an FFT-based constant-coefficient spectral solver is employed to solve a variable-coefficient Poisson equation. The numerical implementation takes advantage of the 2DECOMP&FFT libraries developed by [1], which are based on a pencil decomposition method of the domain and are proven to be computationally very efficient. An enhanced pressure-correction method is proposed to speed up the achievement of machine-precision accuracy. It is demonstrated that second-order accuracy is reached in time, while the spatial accuracy ranges from fourth order to sixth order depending on the set of imposed boundary conditions. The software developed to implement the present algorithm is called HOLOMAC, and its numerical efficiency opens the way to DNS of reacting flows for understanding complex turbulent and chemical phenomena in flames.
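The constant-coefficient FFT Poisson solve at the heart of such pressure-correction methods can be sketched for a fully periodic 2D grid. This is a generic illustration, not the HOLOMAC implementation, which handles a variable-coefficient equation and other boundary conditions:

```python
import numpy as np

def solve_poisson_fft(f, L=2.0 * np.pi):
    """Solve u_xx + u_yy = f on a periodic square of side L via FFT.

    The constant-coefficient Laplacian diagonalizes in Fourier space,
    so u_hat = -f_hat / |k|^2 mode by mode. The k = 0 mode (the mean
    of u) is undetermined for periodic data and is set to zero.
    """
    n = f.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    f_hat = np.fft.fft2(f)
    u_hat = np.zeros_like(f_hat)
    nz = k2 > 0
    u_hat[nz] = -f_hat[nz] / k2[nz]
    return np.real(np.fft.ifft2(u_hat))
```

For smooth periodic data this solve is spectrally accurate and costs O(n^2 log n), which is why FFT-based pressure solvers scale so well inside low-Mach projection schemes.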
Results from CrIS/ATMS Obtained Using an "AIRS Version-6 Like" Retrieval Algorithm
NASA Technical Reports Server (NTRS)
Susskind, Joel; Kouvaris, Louis; Iredell, Lena
2015-01-01
A main objective of AIRS/AMSU on EOS is to provide accurate sounding products that are used to generate climate data sets. Suomi NPP carries CrIS/ATMS that were designed as follow-ons to AIRS/AMSU. Our objective is to generate a long term climate data set of products derived from CrIS/ATMS to serve as a continuation of the AIRS/AMSU products. We have modified an improved version of the operational AIRS Version-6 retrieval algorithm for use with CrIS/ATMS. CrIS/ATMS products are of very good quality, and are comparable to, and consistent with, those of AIRS.
Lee, Chia-Ching; Lin, Shang-Chih; Wu, Shu-Wei; Li, Yu-Ching; Fu, Ping-Yuen
2012-10-01
The holding power of the bone-screw interface is one of the key factors in the clinical performance of a screw design. The holding power can be measured experimentally by pullout tests. Historically, some researchers have used the finite-element method to simulate the holding power of different screws. In these studies, however, the assumed displacement of the screw withdrawal is unreasonably small (about 0.005-1.0 mm). In addition, the chosen numerical indices are quite different, including maximum stress, strain energy, and reaction force. This study systematically uses dental, traumatic, and spinal screws to experimentally measure and numerically simulate their bone-purchasing ability within synthetic bone. The testing results (pullout displacement and holding power) and numerical indices (maximum stress, total strain energy, and reaction forces) are used to calculate their correlation coefficients. The pullout displacement is divided into five regions from initial to final withdrawal. The experimental results demonstrate that the pullout displacement consistently occurs in the final region (0.6-1.6 mm) and is significantly higher than the values assumed in the literature. For all screw groups, the measured holding power within the initial region is not highly, or is even negatively, correlated with the experimental and numerical results within the final region. The simulation results show that the maximum stress only reflects loads concentrated at some local site(s) and is the least correlated with the measured holding power. Comparatively, both energy and force are more global indices for correlating with gross failure at the bone-screw interfaces. However, the energy index is not suitable for screw groups with rather tiny threads compared with the other specifications. In conclusion, the underestimated displacement leads to erroneous results in the screw-pullout simulation. Among three numerical indices the reaction
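The correlation analysis described, pairing each numerical index with the measured holding power, reduces to computing Pearson correlation coefficients; a minimal sketch (names and interface illustrative, not from the paper):

```python
import numpy as np

def pearson_r(index_values, holding_powers):
    """Pearson correlation between a numerical index (e.g. reaction
    force or strain energy per specimen) and measured holding power.

    Returns a value in [-1, 1]; near zero or negative values indicate
    the index does not track the measured pullout behavior.
    """
    x = np.asarray(index_values, dtype=float)
    y = np.asarray(holding_powers, dtype=float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd @ yd) / np.sqrt((xd @ xd) * (yd @ yd)))
```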
Hidden modes in open disordered media: analytical, numerical, and experimental results
NASA Astrophysics Data System (ADS)
Bliokh, Yury P.; Freilikher, Valentin; Shi, Z.; Genack, A. Z.; Nori, Franco
2015-11-01
We explore numerically, analytically, and experimentally the relationship between quasi-normal modes (QNMs) and transmission resonance (TR) peaks in the transmission spectrum of one-dimensional (1D) and quasi-1D open disordered systems. It is shown that for weak disorder there exist two types of eigenstates: ordinary QNMs, which are associated with a TR, and hidden QNMs, which do not exhibit peaks in transmission or within the sample. The distinctive feature of the hidden modes is that, unlike ordinary ones, their lifetimes remain constant over a wide range of disorder strength. In this range, the averaged ratio of the number of transmission peaks to the number of QNMs, N_res/N_mod, is insensitive to the type and degree of disorder and is close to the value √(2/5), which we derive analytically in the weak-scattering approximation. The physical nature of the hidden modes is illustrated in simple examples with a few scatterers. The analogy between ordinary and hidden QNMs and the segregation of superradiant states and trapped modes is discussed. When the coupling to the environment is tuned by external edge reflectors, the superradiance transition is reproduced. Hidden modes have also been found in microwave measurements in quasi-1D open disordered samples. The microwave measurements and modal analysis of transmission in the crossover to localization in quasi-1D systems give a ratio N_res/N_mod close to √(2/5). In diffusive quasi-1D samples, however, N_res/N_mod falls as the effective number of transmission eigenchannels M increases. Once N_mod is divided by M, however, the ratio N_res/N_mod is close to the ratio found in 1D.
First Results from the OMI Rotational Raman Scattering Cloud Pressure Algorithm
NASA Technical Reports Server (NTRS)
Joiner, Joanna; Vasilkov, Alexander P.
2006-01-01
We have developed an algorithm to retrieve scattering cloud pressures and other cloud properties with the Aura Ozone Monitoring Instrument (OMI). The scattering cloud pressure is retrieved using the effects of rotational Raman scattering (RRS). It is defined as the pressure of a Lambertian surface that would produce the observed amount of RRS consistent with the derived reflectivity of that surface. The independent pixel approximation is used in conjunction with the Lambertian-equivalent reflectivity model to provide an effective radiative cloud fraction and scattering pressure in the presence of broken or thin cloud. The derived cloud pressures will enable accurate retrievals of trace gas mixing ratios, including ozone, in the troposphere within and above clouds. We describe details of the algorithm that will be used for the first release of these products. We compare our scattering cloud pressures with cloud-top pressures and other cloud properties from the Aqua Moderate-Resolution Imaging Spectroradiometer (MODIS) instrument. OMI and MODIS are part of the so-called A-train satellites flying in formation within 30 min of each other. Differences between OMI and MODIS are expected because the MODIS observations in the thermal infrared are more sensitive to the cloud top whereas the backscattered photons in the ultraviolet can penetrate deeper into clouds. Radiative transfer calculations are consistent with the observed differences. The OMI cloud pressures are shown to be correlated with the cirrus reflectance. This relationship indicates that OMI can probe through thin or moderately thick cirrus to lower lying water clouds.
Spiegal, R.J.
1984-08-01
For humans exposed to electromagnetic (EM) radiation, the resulting thermophysiologic response is not well understood. Because it is unlikely that this information will be determined from quantitative experimentation, it is necessary to develop theoretical models which predict the resultant thermal response after exposure to EM fields. These calculations are difficult and involved because the human thermoregulatory system is very complex. In this paper, the important numerical models are reviewed and possibilities for future development are discussed.
222Rn transport in a fractured crystalline rock aquifer: Results from numerical simulations
Folger, P.F.; Poeter, E.; Wanty, R.B.; Day, W.; Frishman, D.
1997-01-01
Dissolved 222Rn concentrations in ground water from a small wellfield underlain by fractured Middle Proterozoic Pikes Peak Granite southwest of Denver, Colorado range from 124 to 840 kBq m-3 (3360-22700 pCi L-1). Numerical simulations of flow and transport between two wells show that differences in equivalent hydraulic aperture of transmissive fractures, assuming a simplified two-fracture system and the parallel-plate model, can account for the different 222Rn concentrations in each well under steady-state conditions. Transient flow and transport simulations show that 222Rn concentrations along the fracture profile are influenced by 222Rn concentrations in the adjoining fracture and depend on boundary conditions, proximity of the pumping well to the fracture intersection, transmissivity of the conductive fractures, and pumping rate. Non-homogeneous distribution (point sources) of 222Rn parent radionuclides, uranium and 226Ra, can strongly perturb the dissolved 222Rn concentrations in a fracture system. Without detailed information on the geometry and hydraulic properties of the connected fracture system, it may be impossible to distinguish the influence of factors controlling 222Rn distribution or to determine location of 222Rn point sources in the field in areas where ground water exhibits moderate 222Rn concentrations. Flow and transport simulations of a hypothetical multifracture system consisting of ten connected fractures, each 10 m in length with fracture apertures ranging from 0.1 to 1.0 mm, show that 222Rn concentrations at the pumping well can vary significantly over time. Assuming parallel-plate flow, transmissivities of the hypothetical system vary over four orders of magnitude because transmissivity varies with the cube of fracture aperture. The extreme hydraulic heterogeneity of the simple hypothetical system leads to widely ranging 222Rn values, even assuming homogeneous distribution of uranium and 226Ra along fracture walls. Consequently, it is
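The parallel-plate ("cubic-law") relation underlying these fracture transmissivity estimates can be sketched as follows; the fluid properties are assumed values for water near 20 °C, not parameters quoted from the study:

```python
def fracture_transmissivity(aperture, rho=998.2, g=9.81, mu=1.0e-3):
    """Parallel-plate (cubic-law) transmissivity of a single fracture.

    T = rho * g * b**3 / (12 * mu), with aperture b in metres,
    fluid density rho in kg/m^3, and dynamic viscosity mu in Pa*s.
    Transmissivity scales with the cube of the aperture.
    """
    return rho * g * aperture**3 / (12.0 * mu)
```

The cubic dependence is what makes the hypothetical system so hydraulically heterogeneous: doubling the aperture multiplies the transmissivity eightfold, so modest aperture variations dominate the flow field and hence the simulated 222Rn distribution.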
Newton, Katherine M; Peissig, Peggy L; Kho, Abel Ngo; Bielinski, Suzette J; Berg, Richard L; Choudhary, Vidhu; Basford, Melissa; Chute, Christopher G; Kullo, Iftikhar J; Li, Rongling; Pacheco, Jennifer A; Rasmussen, Luke V; Spangler, Leslie; Denny, Joshua C
2013-01-01
Background Genetic studies require precise phenotype definitions, but electronic medical record (EMR) phenotype data are recorded inconsistently and in a variety of formats. Objective To present lessons learned about validation of EMR-based phenotypes from the Electronic Medical Records and Genomics (eMERGE) studies. Materials and methods The eMERGE network created and validated 13 EMR-derived phenotype algorithms. Network sites are Group Health, Marshfield Clinic, Mayo Clinic, Northwestern University, and Vanderbilt University. Results By validating EMR-derived phenotypes we learned that: (1) multisite validation improves phenotype algorithm accuracy; (2) targets for validation should be carefully considered and defined; (3) specifying time frames for review of variables eases validation time and improves accuracy; (4) using repeated measures requires defining the relevant time period and specifying the most meaningful value to be studied; (5) patient movement in and out of the health plan (transience) can result in incomplete or fragmented data; (6) the review scope should be defined carefully; (7) particular care is required in combining EMR and research data; (8) medication data can be assessed using claims, medications dispensed, or medications prescribed; (9) algorithm development and validation work best as an iterative process; and (10) validation by content experts or structured chart review can provide accurate results. Conclusions Despite the diverse structure of the five EMRs of the eMERGE sites, we developed, validated, and successfully deployed 13 electronic phenotype algorithms. Validation is a worthwhile process that not only measures phenotype performance but also strengthens phenotype algorithm definitions and enhances their inter-institutional sharing. PMID:23531748
Results from CrIS/ATMS Obtained Using an "AIRS Version-6 Like" Retrieval Algorithm
NASA Technical Reports Server (NTRS)
Susskind, Joel; Kouvaris, Louis; Iredell, Lena; Blaisdell, John
2015-01-01
AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. Monthly mean August 2014 Version-6.22 AIRS and CrIS products agree reasonably well with OMPS, CERES, and with each other. JPL plans to process AIRS and CrIS for many months and compare interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. We are also working with JPL to develop a joint AIRS/CrIS level-1 to level-3 processing system using a still-to-be-finalized Version-7 retrieval algorithm. The NASA Goddard DISC will eventually use this system to reprocess all AIRS and recalibrated CrIS/ATMS data.
Six clustering algorithms applied to the WAIS-R: the problem of dissimilar cluster results.
Fraboni, M; Cooper, D
1989-11-01
Clusterings of the Wechsler Adult Intelligence Scale-Revised subtests were obtained from the application of six hierarchical clustering methods (N = 113). These sets of clusters were compared for similarities using the Rand index. The calculated indices suggested similarities of cluster group membership between the Complete Linkage and Centroid methods; Complete Linkage and Ward's methods; Centroid and Ward's methods; and Single Linkage and Average Linkage Between Groups methods. Cautious use of single clustering methods is implied, though the authors suggest some advantages of knowing specific similarities and differences. If between-method comparisons consistently reveal similar cluster membership, a choice could be made from those algorithms that tend to produce similar partitions, thereby enhancing cluster interpretation. PMID:2613904
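The Rand index used above to compare pairs of clusterings is simply the fraction of object pairs on which two partitions agree; a minimal sketch (the labelings are invented for illustration, not the WAIS-R results):

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of object pairs on which two clusterings agree:
    both place the pair in the same cluster, or both place it apart."""
    assert len(labels_a) == len(labels_b)
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = 0
    for i, j in pairs:
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        if same_a == same_b:
            agree += 1
    return agree / len(pairs)

# Two hypothetical partitions of four subtests; they disagree only on
# the pair (2, 3), so the index is 5/6:
a = [0, 0, 1, 1]
b = [0, 0, 1, 2]
score = rand_index(a, b)
```

An index of 1.0 means identical pair-wise cluster membership, which is the between-method agreement the authors recommend checking before interpreting any single clustering.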
Lessor, K.S.
1988-08-26
The parallel algorithm of Ariyawansa, Sorensen, and Wets for approximating the values and subgradients of the recourse function in a stochastic program with complete recourse is implemented and timing results are reported for limited experimental trials. 14 refs., 6 figs., 8 tabs.
Multi-Country Experience in Delivering a Joint Course on Software Engineering--Numerical Results
ERIC Educational Resources Information Center
Budimac, Zoran; Putnik, Zoran; Ivanovic, Mirjana; Bothe, Klaus; Zdravkova, Katerina; Jakimovski, Boro
2014-01-01
A joint course, created as a result of a project under the auspices of the "Stability Pact of South-Eastern Europe" and DAAD, has been conducted in several Balkan countries: in Novi Sad, Serbia, for the last six years in several different forms, in Skopje, FYR of Macedonia, for two years, for several types of students, and in Tirana,…
A numerically efficient finite element hydroelastic analysis. Volume 1: Theory and results
NASA Technical Reports Server (NTRS)
Coppolino, R. N.
1976-01-01
Symmetric finite element matrix formulations for compressible and incompressible hydroelasticity are developed on the basis of Toupin's complementary formulation of classical mechanics. Results of implementation of the new technique in the NASTRAN structural analysis program are presented which demonstrate accuracy and efficiency.
NASA Astrophysics Data System (ADS)
Khokhlov, A.; Domínguez, I.; Bacon, C.; Clifford, B.; Baron, E.; Hoeflich, P.; Krisciunas, K.; Suntzeff, N.; Wang, L.
2012-07-01
We describe a new astrophysical version of a cell-based adaptive mesh refinement code ALLA for reactive flow fluid dynamic simulations, including a new implementation of α-network nuclear kinetics, and present preliminary results of first three-dimensional simulations of incomplete carbon-oxygen detonation in Type Ia Supernovae.
NASA Astrophysics Data System (ADS)
Montbriand, L. E.
1991-07-01
A peak identification technique which uses the fast Fourier transform (FFT) algorithm is presented for unambiguously identifying up to three sources in signals received by the sampled aperture receiving array (SARA) of the Communications Research Center. The technique involves removing phase rotations resulting from the FFT and the data configuration and interpreting this result as the direction cosine distribution of the received signal. The locations and amplitudes of all peaks for one array arm are matched with those in a master list for a single source in order to identify actual sources. The identification of actual sources was found to be subject to the limitations of the FFT in that there was an inherent bias for the secondary and tertiary sources to appear at the side-lobe positions of the strongest source. There appears to be a limit in the ratio of the magnitude of a weaker source to that of the strongest source, below which it becomes too difficult to reliably identify true sources. For the SARA array this ratio is near -10 dB. Some of the data were also analyzed using the more complex MUSIC algorithm, which yields a narrower directional peak for the sources than the FFT. For the SARA array, using ungroomed data, the largest side and grating lobes that the MUSIC algorithm produces are some 10 dB below the largest side and grating lobes that are produced using the FFT algorithm. Consequently the source-separation problem is less than that encountered using the FFT algorithm, but is not eliminated.
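The core FFT step described above — transforming a uniform line array's snapshot and reading a source's direction cosine off the peak bin — can be illustrated with a plain DFT sketch; the array size, element spacing, and source direction are invented for illustration and are not SARA parameters.

```python
import cmath
import math

N = 32                # number of array elements (assumption)
D_OVER_LAMBDA = 0.5   # element spacing in wavelengths (assumption)
u_true = 0.25         # direction cosine of a single plane-wave source

# Snapshot of a unit-amplitude plane wave across the array:
x = [cmath.exp(2j * math.pi * D_OVER_LAMBDA * u_true * n) for n in range(N)]

def dft(seq):
    """Plain O(N^2) DFT; stands in for the FFT used operationally."""
    n = len(seq)
    return [sum(seq[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

spectrum = [abs(v) for v in dft(x)]
k_peak = spectrum.index(max(spectrum))

# Map the peak bin back to a signed direction cosine:
k_signed = k_peak if k_peak <= N // 2 else k_peak - N
u_est = k_signed / (N * D_OVER_LAMBDA)
```

With a single strong source the peak bin recovers the direction cosine exactly when it falls on a bin; the side-lobe ambiguities discussed above arise once weaker sources sit near the strongest source's side-lobe positions.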
Nourgaliev, R.; Knoll, D.; Mousseau, V.; Berry, R.
2007-04-01
The state of the art for Direct Numerical Simulation (DNS) of boiling multiphase flows is reviewed, focusing on the potential of available computational techniques and the level of current success in their application to several basic flow regimes (film, pool-nucleate, and wall-nucleate boiling: FB, PNB, and WNB, respectively). Then we discuss the multiphysics and multiscale nature of practical boiling flows in LWR reactors, which requires high-fidelity treatment of interfacial dynamics, phase change, hydrodynamics, compressibility, heat transfer, and the non-equilibrium thermodynamics and chemistry of liquid/vapor and fluid/solid-wall interfaces. Finally, we outline the framework for the Fervent code, being developed at INL for DNS of reactor-relevant boiling multiphase flows, with the purpose of gaining insight into the physics of multiphase flow regimes and generating a basis for effective-field modeling in terms of its formulation and closure laws.
NASA Technical Reports Server (NTRS)
Rigby, D. L.; Vanfossen, G. J.
1992-01-01
A study of the effect of spanwise variation in momentum on leading edge heat transfer is discussed. Numerical and experimental results are presented for both a circular leading edge and a 3:1 elliptical leading edge. Reynolds numbers in the range of 10,000 to 240,000 based on leading edge diameter are investigated. The surface of the body is held at a constant uniform temperature. Numerical and experimental results with and without spanwise variations are presented. Direct comparison of the two-dimensional results, that is, with no spanwise variations, to the analytical results of Frossling is very good. The numerical calculation, which uses the PARC3D code, solves the three-dimensional Navier-Stokes equations, assuming steady laminar flow in the leading edge region. Experimentally, increases in the spanwise-averaged heat transfer coefficient as high as 50 percent above the two-dimensional value were observed. Numerically, the heat transfer coefficient was seen to increase by as much as 25 percent. In general, under the same flow conditions, the circular leading edge produced a higher heat transfer rate than the elliptical leading edge. As a percentage of the respective two-dimensional values, the circular and elliptical leading edges showed similar sensitivity to spanwise variations in momentum. By equating the root mean square of the amplitude of the spanwise variation in momentum to the turbulence intensity, a qualitative comparison between the present work and turbulent results was possible. It is shown that increases in leading edge heat transfer due to spanwise variations in freestream momentum are comparable to those due to freestream turbulence.
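The equivalence drawn above — matching the root mean square of a sinusoidal spanwise momentum variation to a freestream turbulence intensity — amounts to a one-line calculation; the 10% amplitude below is an invented example, not a value from the study.

```python
import math

def equivalent_turbulence_intensity(amplitude_fraction):
    """RMS of a sinusoidal fractional velocity variation u'/U = A*sin(k*z),
    reported as an equivalent turbulence intensity Tu = A / sqrt(2)."""
    return amplitude_fraction / math.sqrt(2.0)

# A hypothetical 10% peak spanwise variation in freestream velocity
# compares with roughly a 7% turbulence intensity:
tu = equivalent_turbulence_intensity(0.10)
```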
Preliminary numerical modeling results - cone penetrometer (CPT) tip used as an electrode
Ramirez, A L
2006-12-19
Figure 1 shows the resistivity models considered in this study; log10 of the resistivity is shown. The graph on the upper left hand side shows a hypothetical resistivity well log measured along a well in the upper layered model; 10% Gaussian noise has been added to the well log data. The lower model is identical to the upper one except for one square area located within the second deepest layer. Figure 2 shows the electrode configurations considered. The ''reference'' case (upper frame) considers point electrodes located along the surface and along a vertical borehole. The ''CPT electrode'' case (middle frame) assumes that the CPT tip serves as an electrode that is electrically connected to the push rod; the surface electrodes are used in conjunction with the moving CPT electrode. The ''isolated CPT electrode'' case assumes that the electrode at the CPT tip is electrically isolated from the push rod. Note that the separate CPT push rods in the middle and lower frames are shown separated to clarify the figure; in reality, there is only one push rod that changes length as the probe advances. Figure 3 shows the three pole-pole measurement schemes considered; in all cases, the ''get lost'' electrodes were the leftmost and rightmost surface electrodes. The top frame shows the reference scheme where all surface and borehole electrodes can be used. The middle frame shows two possible configurations available when a CPT-mounted electrode is used. Note that only one of the four poles can be located along the borehole at any given time; electrode combinations such as the one depicted in blue (upper frame) are not possible in this case. The bottom frame shows a sample configuration where only the surface electrodes are used. Figure 4 shows the results obtained for the various measurement schemes. The white lines show the outline of the true model (shown in Figure 1, upper frame). The initial model for these inversions is based on the electrical resistivity log
Spallative nucleosynthesis in supernova remnants. II. Time-dependent numerical results
NASA Astrophysics Data System (ADS)
Parizot, Etienne; Drury, Luke
1999-06-01
We calculate the spallative production of light elements associated with the explosion of an isolated supernova in the interstellar medium, using a time-dependent model taking into account the dilution of the ejected enriched material and the adiabatic energy losses. We first derive the injection function of energetic particles (EPs) accelerated at both the forward and the reverse shock, as a function of time. Then we calculate the Be yields obtained in both cases and compare them to the value implied by the observational data for metal-poor stars in the halo of our Galaxy, using both O and Fe data. We find that none of the processes investigated here can account for the amount of Be found in these stars, which confirms the analytical results of Parizot & Drury (1999). We finally analyze the consequences of these results for Galactic chemical evolution, and suggest that a model involving superbubbles might alleviate the energetics problem in a quite natural way.
Collisional evolution in the Eos and Koronis asteroid families - Observational and numerical results
NASA Technical Reports Server (NTRS)
Binzel, Richard P.
1988-01-01
The origin and evolution of the Eos and Koronis families are addressed by an analysis of Binzel's (1987) observational results. The Maxwellian distribution of the Eos family's rotation rates implies a collisionally-evolved population; these rates are also faster than those of the Koronis family and nonfamily asteroids. While the age of the Eos family may be comparable to the solar system's, that of the Koronis family could be considerably younger. Greater shape irregularity may account for the Koronis family's higher mean lightcurve amplitude.
Wang, Zhan-Shan; Pan, Li-Bo
2014-03-01
The emission inventory of air pollutants from thermal power plants in 2010 was compiled. Based on the inventory, the air quality under prediction scenarios implementing both the 2003-version emission standard and the new emission standard was simulated using Models-3/CMAQ. The concentrations of NO2, SO2, and PM2.5 and the deposition of nitrogen and sulfur in 2015 and 2020 were predicted to investigate the regional air quality improvement from the new emission standard. The results showed that the new emission standard could effectively improve the air quality in China. Compared with the implementation results of the 2003-version emission standard, by 2015 and 2020 the area with NO2 concentrations exceeding the standard would be reduced by 53.9% and 55.2%, the area with SO2 concentrations exceeding the standard would be reduced by 40.0%, the area with nitrogen deposition above 1.0 t x km(-2) would be reduced by 75.4% and 77.9%, and the area with sulfur deposition above 1.6 t x km(-2) would be reduced by 37.1% and 34.3%, respectively. PMID:24881370
Analytical and Numerical Results for an Adhesively Bonded Joint Subjected to Pure Bending
NASA Technical Reports Server (NTRS)
Smeltzer, Stanley S., III; Lundgren, Eric
2006-01-01
A one-dimensional, semi-analytical methodology that was previously developed for evaluating adhesively bonded joints composed of anisotropic adherends and adhesives that exhibit inelastic material behavior is further verified in the present paper. A summary of the first-order differential equations and applied joint loading used to determine the adhesive response from the methodology is also presented. The method was previously verified against a variety of single-lap joint configurations from the literature that subjected the joints to cases of axial tension and pure bending. Using the same joint configuration and applied bending load presented in a study by Yang, the finite element analysis software ABAQUS was used to further verify the semi-analytical method. Linear static ABAQUS results are presented for two models, one with a coarse and one with a fine element meshing, that were used to verify convergence of the finite element analyses. Close agreement between the finite element results and the semi-analytical methodology was found for both the shear and normal stress responses of the adhesive bondline. Thus, the semi-analytical methodology was successfully verified using the ABAQUS finite element software and a single-lap joint configuration subjected to pure bending.
NASA Astrophysics Data System (ADS)
Helsdon, John H.; Farley, Richard D.
1987-05-01
A recently developed Storm Electrification Model (SEM) has been used to simulate the July 19, 1981, Cooperative Convective Precipitation Experiment (CCOPE) case study cloud. This part of the investigation examines the comparison between the model results and the observations of the actual cloud with respect to its nonelectrical aspects. A timing equivalence is established between the simulation and observations based on an explosive growth phase which was both observed and modeled. This timing equivalence is used as a basis upon which the comparisons are made. The model appears to do a good job of reproducing (in both space and time) many of the observed characteristics of the cloud. These include: (1) the general cloud appearance; (2) cloud size; (3) cloud top rise rate; (4) rapid growth phase; (5) updraft structure; (6) first graupel appearance; (7) first radar echo; (8) qualitative radar range-height indicator evolution; (9) cloud decay; and (10) the location of hydrometeors with respect to the updraft/downdraft structure. Some features that are not accurately modeled are the cloud base height, the maximum liquid water content, and the time from first formation of precipitation until it reaches the ground. While the simulation is not perfect, the faithfulness of the model results to the observations is sufficient to give us confidence that the microphysical processes active in this storm are adequately represented in the model physics. Areas where model improvement is indicated are also discussed.
Numerical predictions and experimental results of a dry bay fire environment.
Suo-Anttila, Jill Marie; Gill, Walter; Black, Amalia Rebecca
2003-11-01
The primary objective of the Safety and Survivability of Aircraft Initiative is to improve the safety and survivability of systems by using validated computational models to predict the hazard posed by a fire. To meet this need, computational model predictions and experimental data have been obtained to provide insight into the thermal environment inside an aircraft dry bay. The calculations were performed using the Vulcan fire code, and the experiments were completed using a specially designed full-scale fixture. The focus of this report is to present comparisons of the Vulcan results with experimental data for a selected test scenario and to assess the capability of the Vulcan fire field model to accurately predict dry bay fire scenarios. Also included is an assessment of the sensitivity of the fire model predictions to boundary condition distribution and grid resolution. To facilitate the comparison with experimental results, a brief description of the dry bay fire test fixture and a detailed specification of the geometry and boundary conditions are included. Overall, the Vulcan fire field model has shown the capability to predict the thermal hazard posed by a sustained pool fire within a dry bay compartment of an aircraft, although more extensive experimental data and rigorous comparison are required for model validation.
Urban Surface Network In Marseille: Network Optimization Using Numerical Simulations and Results
NASA Astrophysics Data System (ADS)
Pigeon, G.; Lemonsu, A.; Durand, P.; Masson, V.
During the ESCOMPTE program (field experiment to constrain models of atmospheric pollution and emissions transport) in Marseille between June and July 2001, an extensive set of instruments was deployed to describe the urban boundary layer over the built-up area of Marseille, notably a network of 20 temperature and humidity sensors that measured the spatial and temporal variability of these parameters. Before the experiment, the arrangement of the network was optimized to capture the maximum information about these two variabilities. We worked from results of high-resolution simulations incorporating the TEB scheme, which represents the energy budgets associated with the overall street geometry of each mesh cell. First, a qualitative analysis enabled the identification of the characteristic phenomena over the city of Marseille; there are close links between urban effects and local effects, namely marine advection and orography. Then, a quantitative analysis of the field was developed: empirical orthogonal functions (EOFs) were used to characterize the spatial and temporal structures of the field evolution. Instrumented axes were determined from all these results. Finally, we chose the locations of the instruments very carefully at the street scale, to prevent micro-climatic effects from interfering with the meso-scale effect of the city. The recording of the measurements, every 10 minutes, started on 12 June and finished on 16 July. We encountered no instrument problems, so the whole period was recorded at 10-minute intervals. The data will be analyzed in several ways. First, a temporal study will determine whether the times at which phenomena occur are linked to location within the city, with particular attention to the morning warming and the evening cooling. Then, we will look for correlations of the temperature and mixing ratio with the wind
NASA Astrophysics Data System (ADS)
Henne, Stephan; Kaufmann, Pirmin; Schraner, Martin; Brunner, Dominik
2013-04-01
allows particles to leave the limited COSMO domain. On the technical side, we added an OpenMP shared-memory parallelisation to the model, which also allows for asynchronous reading of input data. Here we present results from several model performance tests under different conditions and compare these with results from standard FLEXPART simulations using nested ECMWF input. This analysis includes an evaluation of deposition fields, a comparison of convection schemes, and a performance analysis of the parallel version. Furthermore, a series of forward-backward simulations was conducted in order to test the robustness of model results independent of the integration direction. Finally, selected examples from recent applications of the model to the transport of radioactive and conservative tracers and to in-situ measurement characterisation will be presented.
Pham, VT.; Silva, L.; Digonnet, H.; Combeaud, C.; Billon, N.; Coupez, T.
2011-05-04
The objective of this work is to model the viscoelastic behaviour of polymer from the solid state to the liquid state. With this objective, we perform experimental tensile tests and compare them with simulation results. The chosen polymer is a PMMA whose behaviour depends on its temperature. The computational simulation is based on the Navier-Stokes equations, for which we propose a mixed finite element method with a P1+/P1 interpolation using displacement (or velocity) and pressure as principal variables. The implemented technique uses a mesh composed of triangles (2D) or tetrahedra (3D). The goal of this approach is to model the viscoelastic behaviour of polymers through a fluid-structure coupling technique with a multiphase approach.
Arctic Mixed-Phase Cloud Properties from AERI Lidar Observations: Algorithm and Results from SHEBA
Turner, David D.
2005-04-01
A new approach to retrieve microphysical properties from mixed-phase Arctic clouds is presented. This mixed-phase cloud property retrieval algorithm (MIXCRA) retrieves cloud optical depth, ice fraction, and the effective radius of the water and ice particles from ground-based, high-resolution infrared radiance and lidar cloud boundary observations. The theoretical basis for this technique is that the absorption coefficient of ice is greater than that of liquid water from 10 to 13 μm, whereas liquid water is more absorbing than ice from 16 to 25 μm. MIXCRA retrievals are only valid for optically thin (visible optical depth τ < 6) single-layer clouds when the precipitable water vapor is less than 1 cm. MIXCRA was applied to the Atmospheric Emitted Radiance Interferometer (AERI) data that were collected during the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment from November 1997 to May 1998, where 63% of all of the cloudy scenes above the SHEBA site met this specification. The retrieval determined that approximately 48% of these clouds were mixed phase and that a significant number of clouds (during all 7 months) contained liquid water, even for cloud temperatures as low as 240 K. The retrieved distributions of effective radii for water and ice particles in single-phase clouds are shown to be different than the effective radii in mixed-phase clouds.
NASA Technical Reports Server (NTRS)
Swartz, W. H.; Bucsela, E. J.; Lamsal, L. N.; Celarier, E. A.; Krotkov, N. A.; Bhartia, P. K.; Strahan, S. E.; Gleason, J. F.; Herman, J.; Pickering, K.
2012-01-01
Nitrogen oxides (NOx = NO + NO2) are important atmospheric trace constituents that impact tropospheric air pollution chemistry and air quality. We have developed a new NASA algorithm for the retrieval of stratospheric and tropospheric NO2 vertical column densities using measurements from the nadir-viewing Ozone Monitoring Instrument (OMI) on NASA's Aura satellite. The new products rely on an improved approach to stratospheric NO2 column estimation and stratosphere-troposphere separation and a new monthly NO2 climatology based on the NASA Global Modeling Initiative chemistry-transport model. The retrieval does not rely on daily model profiles, minimizing the influence of a priori information. We evaluate the retrieved tropospheric NO2 columns using surface in situ (e.g., AQS/EPA), ground-based (e.g., DOAS), and airborne measurements (e.g., DISCOVER-AQ). The new, improved OMI tropospheric NO2 product is available at high spatial resolution for the years 2005-present. We believe that this product is valuable for the evaluation of chemistry-transport models, examining the spatial and temporal patterns of NOx emissions, constraining top-down NOx inventories, and for the estimation of NOx lifetimes.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
Active behavior of abdominal wall muscles: Experimental results and numerical model formulation.
Grasa, J; Sierra, M; Lauzeral, N; Muñoz, M J; Miana-Mena, F J; Calvo, B
2016-08-01
In the present study a computational finite element technique is proposed to simulate the mechanical response of muscles in the abdominal wall. This technique considers the active behavior of the tissue, taking into account both collagen and muscle fiber directions. In an attempt to obtain a computational response as close as possible to real muscles, the parameters needed to adjust the mathematical formulation were determined from in vitro experimental tests. Experiments were conducted on male New Zealand White rabbits (2047 ± 34 g), and the active properties of three different samples were characterized: Rectus Abdominis, External Oblique, and multi-layered samples formed by three muscles (External Oblique, Internal Oblique, and Transversus Abdominis). The parameters obtained for each muscle were incorporated into a finite strain formulation to simulate the active behavior of muscles, incorporating the anisotropy of the tissue. The results show the potential of the model to predict the anisotropic behavior of the tissue associated with fibers and how this influences the strain, stress, and generated force during an isometric contraction. PMID:27111629
Restricted diffusion in a model acinar labyrinth by NMR: Theoretical and numerical results
NASA Astrophysics Data System (ADS)
Grebenkov, D. S.; Guillot, G.; Sapoval, B.
2007-01-01
The branched geometrical structure of the mammalian lung is known to be crucial for rapid access of oxygen to blood. But an important pulmonary disease like emphysema results in partial destruction of the alveolar tissue and enlargement of the distal airspaces, which may reduce the total oxygen transfer. This effect has been intensively studied during the last decade by MRI of hyperpolarized gases like helium-3. The relation between geometry and signal attenuation remained obscure due to the lack of a realistic geometrical model of the acinar morphology. In this paper, we use Monte Carlo simulations of restricted diffusion in a realistic model acinus to compute the signal attenuation in a diffusion-weighted NMR experiment. We demonstrate that this technique should be sensitive to destruction of the branched structure: partial removal of the interalveolar tissue creates loops in the tree-like acinar architecture that enhance diffusive motion and the consequent signal attenuation. The role of the local geometry and related practical applications are discussed.
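The Monte Carlo idea behind such simulations — random walkers confined by tissue walls, with the narrow-pulse PGSE signal computed as E(q) = ⟨cos(q · (r(Δ) − r(0)))⟩ — can be reduced to a one-dimensional sketch with reflecting boundaries. All parameters below are invented for illustration; the paper's geometry is a full 3D acinar labyrinth, not an interval.

```python
import math
import random

def pgse_signal(q, L=1.0, D=1.0, big_delta=0.2, n_walkers=5000, n_steps=100):
    """Narrow-pulse PGSE attenuation for diffusion restricted to [0, L]
    with reflecting walls: E(q) = < cos(q * (r(Delta) - r(0))) >."""
    random.seed(1)                       # reproducible illustration
    dt = big_delta / n_steps
    sigma = math.sqrt(2.0 * D * dt)      # 1D Gaussian step scale
    total = 0.0
    for _ in range(n_walkers):
        r0 = random.uniform(0.0, L)      # uniform initial positions
        r = r0
        for _ in range(n_steps):
            r += random.gauss(0.0, sigma)
            # Reflect off the confining walls:
            while r < 0.0 or r > L:
                if r < 0.0:
                    r = -r
                if r > L:
                    r = 2.0 * L - r
        total += math.cos(q * (r - r0))
    return total / n_walkers

# Restriction keeps displacements bounded, so the attenuation saturates
# at a finite value instead of decaying indefinitely as for free diffusion:
e = pgse_signal(q=5.0)
```

Opening loops in the confining geometry (the paper's emphysema analogue) lengthens the accessible displacements and hence deepens the attenuation, which is the contrast mechanism argued for above.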
Buoyancy-driven melt segregation in the earth's moon. I - Numerical results
NASA Technical Reports Server (NTRS)
Delano, J. W.
1990-01-01
The densities of lunar mare magmas have been estimated at liquidus temperatures for pressures from 0 to 47 kbar (4.7 GPa; center of the moon) using a third-order Birch-Murnaghan equation and compositionally dependent parameters from Lange and Carmichael (1987). Results on primary magmatic compositions represented by pristine volcanic glasses suggest that the density contrast between very-high-Ti melts and their liquidus olivines may approach zero at pressures of about 25 kbar (2.5 GPa). Since this is the pressure regime of the mantle source regions for these magmas, a compositional limit of eruptability for mare liquids may exist that is similar to the highest Ti melt yet observed among the lunar samples. Although the moon may have generated magmas having greater than 16.4 wt pct TiO2, those melts would probably not have reached the lunar surface due to their high densities, and may have even sunk deeper into the moon's interior as negatively buoyant diapirs. This process may have been important for assimilative interactions in the lunar mantle. The phenomenon of melt/solid density crossover may therefore occur not only in large terrestrial-type objects but also in small objects where, despite low pressures, the range of melt compositions is extreme.
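The third-order Birch-Murnaghan equation of state used for these density estimates has a standard closed form; the sketch below evaluates it with invented, silicate-melt-like moduli, not the compositionally dependent parameters of the study.

```python
def birch_murnaghan_3rd(v_over_v0, k0, k0_prime):
    """Pressure (same units as k0) at compression V/V0 from the
    third-order Birch-Murnaghan equation of state:
    P = (3/2) K0 [x^7 - x^5] {1 + (3/4)(K0' - 4)(x^2 - 1)},
    where x = (V0/V)^(1/3)."""
    x = v_over_v0 ** (-1.0 / 3.0)
    return (1.5 * k0 * (x**7 - x**5)
            * (1.0 + 0.75 * (k0_prime - 4.0) * (x**2 - 1.0)))

# Hypothetical moduli (assumptions): K0 = 20 GPa, K0' = 5.
p_uncompressed = birch_murnaghan_3rd(1.0, 20.0, 5.0)   # 0 at V = V0
p_compressed = birch_murnaghan_3rd(0.9, 20.0, 5.0)     # a few GPa at 10% compression
```

Inverting this relation at a given pressure yields V/V0 and hence the melt density along the liquidus, which is how the melt/olivine density crossover is located.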
NASA Astrophysics Data System (ADS)
Salcedo-Castro, Julio; Bourgault, Daniel; deYoung, Brad
2011-09-01
The flow caused by the discharge of freshwater underneath a glacier into an idealized fjord is simulated with a 2D non-hydrostatic model. As the freshwater exits the subglacial opening horizontally into a fjord of uniformly denser water, it spreads along the bottom as a jet until buoyancy forces it to rise. During the initial rising phase, the plume meanders into complex flow patterns while mixing with the surrounding fluid, until it reaches the surface and spreads horizontally as a seaward-flowing surface plume of brackish water. The process induces an estuarine-like circulation. Once steady state is reached, the flow consists of an almost undiluted buoyant plume rising straight along the face of the glacier that turns into a horizontal surface layer, thickening as it flows seaward. Over the range of parameters examined, the estuarine circulation is dynamically unstable, with gradient Richardson numbers at the sheared interface below 1/4. The surface velocity and dilution factors are strongly and non-linearly related to the Froude number. It is the buoyancy flux that primarily controls the resulting circulation, with the momentum flux playing a secondary role.
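The gradient Richardson number criterion invoked above (Ri < 1/4 implies dynamic instability of a sheared, stratified interface) is a simple ratio of stratification to shear; the interface values below are invented for illustration, not outputs of the model.

```python
G = 9.81        # gravitational acceleration, m/s^2
RHO0 = 1025.0   # reference seawater density, kg/m^3 (assumption)

def gradient_richardson(drho_dz, du_dz):
    """Ri = N^2 / (du/dz)^2, with buoyancy frequency
    N^2 = -(g / rho0) * drho/dz (z positive upward)."""
    n_squared = -(G / RHO0) * drho_dz
    return n_squared / du_dz**2

# Hypothetical plume interface: density decreasing by 2 kg/m^3 per metre
# upward (drho/dz = -2.0) across a shear of 0.5 s^-1:
ri = gradient_richardson(drho_dz=-2.0, du_dz=0.5)
unstable = ri < 0.25   # shear overcomes stratification in this example
```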
The Formation of Asteroid Satellites in Catastrophic Impacts: Results from Numerical Simulations
NASA Technical Reports Server (NTRS)
Durda, D. D.; Bottke, W. F., Jr.; Enke, B. L.; Asphaug, E.; Richardson, D. C.; Leinhardt, Z. M.
2003-01-01
We have performed new simulations of the formation of asteroid satellites by collisions, using a combination of hydrodynamical and gravitational dynamical codes. This initial work shows that both small satellites and ejected, co-orbiting pairs are produced most favorably by moderate-energy collisions at more direct, rather than oblique, impact angles. Simulations so far seem to be able to produce systems qualitatively similar to known binaries. Asteroid satellites provide vital clues that can help us understand the physics of hypervelocity impacts, the dominant geologic process affecting large main belt asteroids. Moreover, models of satellite formation may provide constraints on the internal structures of asteroids beyond those possible from observations of satellite orbital properties alone. It is probable that most observed main-belt asteroid satellites are by-products of cratering and/or catastrophic disruption events. Several possible formation mechanisms related to collisions have been identified: (i) mutual capture following catastrophic disruption, (ii) rotational fission due to glancing impact and spin-up, and (iii) re-accretion in orbit of ejecta from large, non-catastrophic impacts. Here we present results from a systematic investigation directed toward mapping out the parameter space of the first and third of these three collisional mechanisms.
Kam, Seung I.; Gauglitz, Phillip A.; Rossen, William R.
2000-12-01
The goal of this study is to fit model parameters to changes in waste level in response to barometric pressure changes in underground storage tanks at the Hanford Site. This waste compressibility is a measure of the quantity of gas, typically hydrogen and other flammable gases that can pose a safety hazard, retained in the waste. A one-dimensional biconical-pore-network model for the compressibility of a bubbly slurry is presented in a companion paper. Fitting these results to actual waste level changes in the tanks implies that bubbles are long in the slurry layer and that the ratio of pore-body radius to pore-throat radius is close to one; unfortunately, capillary effects cannot be quantified unambiguously from the data without additional information on pore geometry. Therefore, determining the quantity of gas in the tanks requires more than just slurry volume data. Similar ambiguity also exists with two other simple models: a capillary-tube model with contact angle hysteresis and a spherical-pore model.
NASA Astrophysics Data System (ADS)
Pearson, A.; Pizzuto, J. E.
2015-12-01
Previous work at run-of-river (ROR) dams in northern Delaware has shown that bedload supplied to ROR impoundments can be transported over the dam when impoundments remain unfilled. Transport is facilitated by high levels of sand in the impoundment, which lower the critical shear stresses for particle entrainment, and by an inversely sloping sediment ramp connecting the impoundment bed (where the water depth is typically equal to the dam height) with the top of the dam (Pearson and Pizzuto, in press). We demonstrate with one-dimensional bed material transport modeling that bed material can move through impoundments and that equilibrium transport (i.e., a balance between supply to and export from the impoundment, with a constant bed elevation) is possible even when the bed elevation is below the top of the dam. Based on our field work and previous HEC-RAS modeling, we assess bed material transport capacity at the base of the sediment ramp (and ignore the detailed processes carrying sediment up the ramp and over the dam). The hydraulics at the base of the ramp are computed using a weir equation, providing estimates of water depth, velocity, and friction based on the discharge and the sediment grain size distribution of the impoundment. Bedload transport rates are computed using the Wilcock-Crowe equation, and changes in the impoundment's bed elevation are determined by sediment continuity. Our results indicate that impoundments pass the gravel supplied from upstream while retaining deep pools when the gravel supply rate is low, gravel grain sizes are relatively small, sand supply is high, and discharge is high. Conversely, impoundments will tend to fill their pools when the gravel supply rate is high, gravel grain sizes are relatively large, sand supply is low, and discharge is low. The rate of bedload supplied to an impoundment is the primary control on how fast equilibrium transport is reached, with discharge having almost no influence on the timing of equilibrium.
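The sediment-continuity bookkeeping described in this abstract can be sketched as follows. The weir and transport relations below are simplified placeholders (a rectangular broad-crested weir and a generic excess-shear law standing in for Wilcock-Crowe), with illustrative numbers throughout:

```python
# Sketch of the weir-hydraulics + bedload-continuity loop described above.
# These relations are deliberately simplified stand-ins, not the paper's
# HEC-RAS or Wilcock-Crowe formulations.

def weir_depth(Q, width, Cd=1.7):
    """Upstream head over a rectangular broad-crested weir (SI units)."""
    return (Q / (Cd * width)) ** (2.0 / 3.0)

def transport_rate(tau, tau_c=0.03, k=1e-3):
    """Generic excess-shear bedload relation (placeholder for Wilcock-Crowe)."""
    return k * max(tau - tau_c, 0.0) ** 1.5

def exner_step(z, qs_in, qs_out, dt, dx, porosity=0.4):
    """Sediment continuity (Exner): bed rises when supply exceeds export."""
    return z + dt * (qs_in - qs_out) / ((1.0 - porosity) * dx)
```

Equilibrium transport in the abstract's sense corresponds to `qs_in == qs_out`, for which `exner_step` leaves the bed elevation unchanged.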
NASA Astrophysics Data System (ADS)
Radhakrishnan, Sreeram
Harbor observation and prediction system (NYHOPS) which provides 48-hour forecasts of salinity and temperature profiles. Initial results indicate that the NYHOPS forecast of sound speed profiles used in conjunction with the acoustic propagation model is able to make realistic forecasts of TL in the Hudson River Estuary.
NASA Technical Reports Server (NTRS)
Schonberg, William P.; Peck, Jeffrey A.
1992-01-01
Over the last three decades, multiwall structures have been analyzed extensively, primarily through experiment, as a means of increasing the protection afforded to spacecraft structure. However, as structural configurations become more varied, the number of tests required to characterize their response increases dramatically. As an alternative, numerical modeling of high-speed impact phenomena is increasingly used to predict the response of a variety of structural systems under impact loading conditions. This paper presents the results of a preliminary numerical/experimental investigation of the hypervelocity impact response of multiwall structures. The results of experimental high-speed impact tests are compared against the predictions of the HULL hydrodynamic computer code. It is shown that the hypervelocity impact response characteristics of a specific system cannot be accurately predicted from a limited number of HULL code impact simulations. However, if a wide range of impact loading conditions is considered, then the ballistic limit curve of the system based on the entire series of numerical simulations can be used as a relatively accurate indication of actual system response.
Numerical Modeling of Anti-icing Systems and Comparison to Test Results on a NACA 0012 Airfoil
NASA Technical Reports Server (NTRS)
Al-Khalil, Kamel M.; Potapczuk, Mark G.
1993-01-01
A series of experimental tests was conducted in the NASA Lewis IRT on an electro-thermally heated NACA 0012 airfoil. Quantitative comparisons between the experimental results and those predicted by a computer simulation code were made to assess the validity of a recently developed anti-icing model. An infrared camera was utilized to scan the instantaneous temperature contours of the skin surface. Despite some experimental difficulties, good agreement between the numerical predictions and the experimental results was generally obtained for the surface temperature and the likelihood of runback water freezing. Some recommendations were given for efficient operation of a thermal anti-icing system.
Pancheliuga, V A; Pancheliuga, M S
2013-01-01
In the present work a methodological background for the histogram method of time series analysis is developed. The connection between the shapes of smoothed histograms constructed from short segments of time series of fluctuations and the fractal dimension of those segments is studied. It is shown that the fractal dimension possesses all the main properties of the histogram method. Based on this, a further development of the fractal dimension determination algorithm is proposed. This algorithm allows a more precise determination of the fractal dimension by using the "all possible combination" method. Application of the method to noise-like time series yields results that previously could be obtained only by means of the histogram method based on human expert comparisons of histogram shapes. PMID:23755565
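The fractal-dimension estimation underlying this approach can be illustrated with a generic estimator. The sketch below uses Higuchi's method as a common stand-in; it is not the paper's "all possible combination" algorithm:

```python
import math

def higuchi_fd(x, kmax=8):
    """Estimate the fractal dimension of a short time series with Higuchi's
    method (a generic estimator, used here only to illustrate the idea)."""
    N = len(x)
    ks, Ls = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                       # all starting offsets
            pts = [x[i] for i in range(m, N, k)]
            n = len(pts) - 1
            if n < 1:
                continue
            L = sum(abs(pts[i + 1] - pts[i]) for i in range(n))
            lengths.append(L * (N - 1) / (n * k * k))  # Higuchi normalization
        if lengths:
            ks.append(math.log(1.0 / k))
            Ls.append(math.log(sum(lengths) / len(lengths)))
    # least-squares slope of log L(k) versus log(1/k) is the dimension estimate
    n = len(ks)
    mk, mL = sum(ks) / n, sum(Ls) / n
    return sum((a - mk) * (b - mL) for a, b in zip(ks, Ls)) / \
        sum((a - mk) ** 2 for a in ks)
```

A smooth trend has dimension near 1, while noise-like segments approach 2, which is the property the histogram-shape comparison exploits.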
Forsström, J
1992-01-01
The ID3 algorithm for inductive learning was tested using preclassified material for patients suspected of having a thyroid illness. Classification followed a rule-based expert system for the diagnosis of thyroid function. Thus, the knowledge to be learned was limited to the rules existing in the knowledge base of that expert system. The learning capability of the ID3 algorithm was tested with unselected learning material (with some inherent missing data) and with selected learning material (no missing data). The selected learning material was a subgroup of the unselected learning material. When the number of learning cases was increased, the accuracy of the program improved. When the learning material was large enough, a further increase in the learning material did not improve the results. A better learning result was achieved with the selected learning material, which contained no missing data, than with the unselected learning material. With this material we demonstrate a weakness in the ID3 algorithm: it cannot find the available information from good example cases if poor examples are added to the data. PMID:1551737
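ID3's splitting criterion, the information gain that drives the learning described above, can be sketched as follows (a toy illustration, not the study's implementation):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """ID3's splitting criterion: the reduction in entropy obtained by
    partitioning the examples on attribute `attr` (rows are dicts)."""
    base = entropy(labels)
    n = len(rows)
    remainder = 0.0
    for v in set(r[attr] for r in rows):
        sub = [lab for r, lab in zip(rows, labels) if r[attr] == v]
        remainder += len(sub) / n * entropy(sub)
    return base - remainder
```

ID3 greedily picks the attribute with the highest gain at each node; the weakness noted in the abstract arises because noisy ("poor") examples lower the apparent gain of genuinely informative attributes.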
NASA Astrophysics Data System (ADS)
Mattioni, L.; Wittmer, J. P.; Baschnagel, J.; Barrat, J.-L.; Luijten, E.
2003-04-01
Correlations in the motion of reptating polymers in a melt are investigated by means of Monte Carlo simulations of the three-dimensional slithering-snake version of the bond-fluctuation model. Surprisingly, the slithering-snake dynamics becomes inconsistent with classical reptation predictions at high chain overlap (created either by chain length N or by the volume fraction φ of occupied lattice sites), where the relaxation times increase much faster than expected. This is due to the anomalous curvilinear diffusion in a finite time window whose upper bound τ+(N) is set by the density of chain ends φ/N. Density fluctuations created by passing chain ends allow a reference polymer to break out of the local cage of immobile obstacles created by neighboring chains. The dynamics of dense solutions of “snakes” at t ≪ τ+ is identical to that of a benchmark system where all chains but one are frozen. We demonstrate that the subdiffusive dynamical regime is caused by the slow creeping of a chain out of its correlation hole. Our results are in good qualitative agreement with the activated-reptation scheme proposed recently by Semenov and Rubinstein (Eur. Phys. J. B, 1 (1998) 87). Additionally, we briefly comment on the relevance of local relaxation pathways within a slithering-snake scheme. Our preliminary results suggest that a judicious choice of the ratio of local to slithering-snake moves is crucial to equilibrate a melt of long chains efficiently.
NASA Astrophysics Data System (ADS)
Venema, Victor; Mestre, Olivier
2010-05-01
As part of the COST Action HOME (Advances in homogenisation methods of climate series: an integrated approach) a dataset was generated that serves as a benchmark for homogenisation algorithms. Members of the Action and third parties have been invited to homogenise this dataset. The results of this exercise are analysed by the HOME Working Groups (WG) on detection (WG2) and correction (WG3) algorithms to obtain recommendations for a standard homogenisation procedure for climate data. This talk will briefly describe this benchmark dataset and present first results comparing the quality of the roughly 25 contributions. Based upon a survey among homogenisation experts, we chose to work with monthly values for temperature and precipitation. Temperature and precipitation were selected because most participants consider these elements the most relevant for their studies. Furthermore, they represent two important types of statistics (additive and multiplicative). The benchmark has three different types of datasets: real data, surrogate data and synthetic data. The real datasets allow comparing the different homogenisation methods on the most realistic type of data and inhomogeneities. Thus this part of the benchmark is important for a faithful comparison of algorithms with each other. However, as in this case the truth is not known, it is not possible to quantify the improvements due to homogenisation. Therefore, the benchmark also has two datasets with artificial data into which we inserted known inhomogeneities: surrogate and synthetic data. The aim of surrogate data is to reproduce the structure of measured data accurately enough that it can be used as a substitute for measurements. The surrogate climate networks have the spatial and temporal auto- and cross-correlation functions of real homogenised networks, as well as the exact (non-Gaussian) distribution for each station. The idealised synthetic data is based on the surrogate networks. The change is that the difference
NASA Technical Reports Server (NTRS)
Moes, Timothy R.; Whitmore, Stephen A.
1991-01-01
A technique was developed to improve the fidelity of airdata measurements during dynamic maneuvering. This technique is particularly useful for airdata measured during flight at high angular rates and high angles of attack. To support this research, flight tests using the F-18 high alpha research vehicle (HARV) were conducted at NASA Ames Research Center, Dryden Flight Research Facility. A Kalman filter was used to combine information from research airdata, linear accelerometers, angular rate gyros, and attitude gyros to determine better estimates of airdata quantities such as angle of attack, angle of sideslip, airspeed, and altitude. The state and observation equations used by the Kalman filter are briefly developed and it is shown how the state and measurement covariance matrices were determined from flight data. Flight data are used to show the results of the technique and these results are compared to an independent measurement source. This technique is applicable to both postflight and real-time processing of data.
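The predict/update cycle of a Kalman filter of the kind described above can be sketched in scalar form. The state, control, and noise values here are toy placeholders, not the flight-test airdata formulation:

```python
# Minimal linear Kalman filter predict/update cycle, blending an inertially
# propagated state (e.g. integrated accelerometer data) with a noisy direct
# measurement (e.g. research airdata). One-state toy sketch only.

def kf_step(x, P, u, z, Q, R):
    """One predict/update cycle for a scalar state.
    x: state estimate, P: its variance, u: propagation increment,
    z: measurement, Q: process noise variance, R: measurement noise variance."""
    # predict: propagate the state and grow its uncertainty
    x_pred = x + u
    P_pred = P + Q
    # update: weight the measurement by the Kalman gain
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

Because the gain K depends on the relative noise levels, the filter trusts the smooth inertial propagation during rapid maneuvers and the direct airdata measurement in steadier flight, which is the fidelity improvement the abstract describes.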
NASA Technical Reports Server (NTRS)
Vinh, Hoang; Dwyer, Harry A.; Van Dam, C. P.
1992-01-01
The applications of two CFD-based finite-difference methods to computational electromagnetics are investigated. In the first method, the time-domain Maxwell equations are solved using the explicit Lax-Wendroff scheme; in the second, the second-order wave equations satisfying Maxwell's equations are solved using the implicit Crank-Nicolson scheme. The governing equations are transformed to a generalized curvilinear coordinate system and solved on a body-conforming mesh using the scattered-field formulation. The induced surface current and the bistatic radar cross section are computed, and the results are validated for several two-dimensional test cases involving perfectly conducting scatterers submerged in transverse-magnetic plane waves.
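The explicit Lax-Wendroff scheme named above can be illustrated on the scalar advection equation u_t + c u_x = 0; this one-dimensional analogue shows the stencil, not the paper's Maxwell solver:

```python
# One Lax-Wendroff time step for u_t + c u_x = 0 on a periodic grid.
# Second-order in space and time; stable for |c*dt/dx| <= 1.

def lax_wendroff_step(u, c, dt, dx):
    nu = c * dt / dx                 # Courant number
    n = len(u)
    new = u[:]
    for i in range(n):
        um, up = u[(i - 1) % n], u[(i + 1) % n]   # periodic neighbors
        new[i] = (u[i]
                  - 0.5 * nu * (up - um)                    # centered advection
                  + 0.5 * nu * nu * (up - 2.0 * u[i] + um)) # LW correction
    return new
```

At Courant number exactly 1 the scheme translates the solution by one cell per step with no numerical diffusion, a convenient sanity check.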
NASA Astrophysics Data System (ADS)
Balashov, V. A.; Savenkov, E. B.
2015-10-01
The applicability of numerical algorithms based on a quasi-hydrodynamic system of equations for computing viscous heat-conducting compressible gas flows at Mach numbers M = 10⁻²–10⁻¹ is studied numerically. The numerical algorithm is briefly described, and the results obtained for a number of two- and three-dimensional test problems are presented and compared with earlier numerical data.
Remote sensing of gases by hyperspectral imaging: algorithms and results of field measurements
NASA Astrophysics Data System (ADS)
Sabbah, Samer; Rusch, Peter; Eichmann, Jens; Gerhard, Jörn-Hinnrich; Harig, Roland
2012-09-01
Remote gas detection and visualization provides vital information in scenarios involving chemical accidents, terrorist attacks or gas leaks. Previous work showed how imaging infrared spectroscopy can be used to assess the location, the dimensions, and the dispersion of a potentially hazardous cloud. In this work the latest developments of an infrared hyperspectral imager based on a Michelson interferometer in combination with a focal plane array detector are presented. The performance of the system is evaluated by laboratory measurements. The system was deployed in field measurements to identify industrial gas emissions. Excellent results were obtained by successfully identifying released gases from relatively long distances.
Simulation Results of the Huygens Probe Entry and Descent Trajectory Reconstruction Algorithm
NASA Technical Reports Server (NTRS)
Kazeminejad, B.; Atkinson, D. H.; Perez-Ayucar, M.
2005-01-01
Cassini/Huygens is a joint NASA/ESA mission to explore the Saturnian system. The ESA Huygens probe is scheduled to be released from the Cassini spacecraft on December 25, 2004, enter the atmosphere of Titan in January 2005, and descend to Titan's surface using a sequence of different parachutes. To correctly interpret and correlate results from the probe science experiments and to provide a reference set of data for "ground-truthing" Orbiter remote sensing measurements, it is essential that the probe entry and descent trajectory reconstruction be performed as early as possible in the postflight data analysis phase. The Huygens Descent Trajectory Working Group (DTWG), a subgroup of the Huygens Science Working Team (HSWT), is responsible for developing a methodology and performing the entry and descent trajectory reconstruction. This paper provides an outline of the trajectory reconstruction methodology, preliminary probe trajectory retrieval test results using a simulated synthetic Huygens dataset developed by the Huygens Project Scientist Team at ESA/ESTEC, and a discussion of strategies for recovery from possible instrument failure.
Deriving Arctic Cloud Microphysics at Barrow, Alaska. Algorithms, Results, and Radiative Closure
Shupe, Matthew D.; Turner, David D.; Zwink, Alexander; Thieman, Mandana M.; Mlawer, Eli J.; Shippert, Timothy
2015-07-01
Cloud phase and microphysical properties control the radiative effects of clouds in the climate system and are therefore crucial to characterize in a variety of conditions and locations. An Arctic-specific, ground-based, multi-sensor cloud retrieval system is described here and applied to two years of observations from Barrow, Alaska. Over these two years, clouds occurred 75% of the time, with cloud ice and liquid each occurring nearly 60% of the time. Liquid water occurred at least 25% of the time even in the winter, and existed up to heights of 8 km. The vertically integrated mass of liquid was typically larger than that of ice. While it is generally difficult to evaluate the overall uncertainty of a comprehensive cloud retrieval system of this type, radiative flux closure analyses were performed where flux calculations using the derived microphysical properties were compared to measurements at the surface and top-of-atmosphere. Radiative closure biases were generally smaller for cloudy scenes relative to clear skies, while the variability of flux closure results was only moderately larger than under clear skies. The best closure at the surface was obtained for liquid-containing clouds. Radiative closure results were compared to those based on a similar, yet simpler, cloud retrieval system. These comparisons demonstrated the importance of accurate cloud phase classification, and specifically the identification of liquid water, for determining radiative fluxes. Enhanced retrievals of liquid water path for thin clouds were also shown to improve radiative flux calculations.
NASA Technical Reports Server (NTRS)
Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.
2007-01-01
This report summarizes the results of delay measurement and piloted performance tests conducted to assess the effectiveness of the adaptive compensator and the state space compensator for alleviating the phase distortion of transport delay in the visual system of the VMS at the NASA Langley Research Center. Piloted simulation tests were conducted to assess the effectiveness of the two novel compensators in comparison to the McFarland predictor and to the baseline system with no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. The glideslope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper rating of the handling qualities were employed for the analyses. The overall analyses show that the adaptive predictor provides slightly poorer compensation than the McFarland compensator for short added delays (up to 48 ms) and better compensation for long added delays (up to 192 ms). The analyses also show that the state space predictor is somewhat superior to the McFarland compensator for short delays and significantly superior for long delays.
NASA Astrophysics Data System (ADS)
Wu, Yang; Kelly, Damien P.
2014-12-01
The distribution of the complex field in the focal region of a lens is a classical optical diffraction problem. Today, it remains of significant theoretical importance for understanding the properties of imaging systems. In the paraxial regime, it is possible to find analytical solutions in the neighborhood of the focus, when a plane wave is incident on a focusing lens whose finite extent is limited by a circular aperture. For example, in Born and Wolf's treatment of this problem, two different, but mathematically equivalent analytical solutions, are presented that describe the 3D field distribution using infinite sums of Un and Vn type Lommel functions. An alternative solution expresses the distribution in terms of Zernike polynomials, and was presented by Nijboer in 1947. More recently, Cao derived an alternative analytical solution by expanding the Fresnel kernel using a Taylor series expansion. In practical calculations, however, only a finite number of terms from these infinite series expansions is actually used to calculate the distribution in the focal region. In this manuscript, we compare and contrast each of these different solutions to a numerically calculated result, paying particular attention to how quickly each solution converges for a range of different spatial locations behind the focusing lens. We also examine the time taken to calculate each of the analytical solutions. The numerical solution is calculated in a polar coordinate system and is semi-analytic. The integration over the angle is solved analytically, while the radial coordinate is sampled with a sampling interval of Δρ and then numerically integrated. This produces an infinite set of replicas in the diffraction plane, that are located in circular rings centered at the optical axis and each with radii given by 2πm/Δρ, where m is the replica order. These circular replicas are shown to be fundamentally different from the replicas that arise in a Cartesian coordinate system.
NASA Astrophysics Data System (ADS)
Baharun, A. Tarmizi; Maimun, Adi; Ahmed, Yasser M.; Mobassher, M.; Nakisa, M.
2015-05-01
In this paper, the three-dimensional behavior of steady, incompressible air flow around a small-scale Wing-in-Ground-effect craft (WIG) was investigated numerically and compared with experimental results and published data. The computational simulation (CFD) adopted two turbulence models, k-ɛ and k-ω, in order to determine which model produces the smallest difference from the experimental results for the small-scale WIG tested in a wind tunnel. An unstructured mesh was used in the simulation, and drag coefficient (Cd) and lift coefficient (Cl) data were obtained with the angle of attack (AoA) of the WIG model as the parameter. Ansys ICEM was used for the meshing process, while Ansys Fluent was used for the solution. The aerodynamic forces Cl, Cd, and Cl/Cd, along with the fluid flow pattern of the small-scale WIG craft, are presented and discussed.
Meyer, H. O.
The PINTEX group studied proton-proton and proton-deuteron scattering and reactions between 100 and 500 MeV at the Indiana University Cyclotron Facility (IUCF). More than a dozen experiments made use of electron-cooled polarized proton or deuteron beams, orbiting in the 'Indiana Cooler' storage ring, and of a polarized atomic-beam target of hydrogen or deuterium in the path of the stored beam. The collaboration involved researchers from several midwestern universities, as well as a number of European institutions. The PINTEX program ended when the Indiana Cooler was shut down in August 2002. The website contains links to some of the numerical results, descriptions of experiments, and a complete list of publications resulting from PINTEX.
Minimal Sign Representation of Boolean Functions: Algorithms and Exact Results for Low Dimensions.
Sezener, Can Eren; Oztop, Erhan
2015-08-01
Boolean functions (BFs) are central in many fields of engineering and mathematics, such as cryptography, circuit design, and combinatorics. Moreover, they provide a simple framework for studying neural computation mechanisms of the brain. Many representation schemes for BFs exist to satisfy the needs of the domain they are used in. In neural computation, it is of interest to know how many input lines a neuron would need to represent a given BF. A common BF representation to study this is the so-called polynomial sign representation where -1 and 1 are associated with true and false, respectively. The polynomial is treated as a real-valued function and evaluated at the input values, and the sign of the polynomial is then taken as the function value. The number of input lines for the modeled neuron is exactly the number of terms in the polynomial. This letter investigates the minimum number of terms, that is, the minimum threshold density, that is sufficient to represent a given BF and more generally aims to find the maximum over this quantity for all BFs in a given dimension. With this work, for the first time exact results for four- and five-variable BFs are obtained, and strong bounds for six-variable BFs are derived. In addition, some connections between the sign representation framework and bent functions are derived, which are generally studied for their desirable cryptographic properties. PMID:26079754
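The sign-representation idea described above can be made concrete with a small checker over the {-1, +1} encoding. The functions and polynomial below are toy examples, not the letter's enumeration code (and the checker uses the common convention that a positive polynomial value encodes true):

```python
from itertools import product

def prod_over(x, mono):
    """Evaluate one monomial (a tuple of variable indices) at input x."""
    p = 1
    for i in mono:
        p *= x[i]
    return p

def sign_represents(poly, f, n):
    """Check that the polynomial sign-represents the n-variable BF `f`
    over {-1, +1} inputs. `poly` maps monomials to coefficients; the
    number of terms is the threshold density discussed above."""
    for x in product((-1, 1), repeat=n):
        val = sum(c * prod_over(x, mono) for mono, c in poly.items())
        if val == 0 or (val > 0) != f(x):
            return False
    return True

# Toy example: 3-input majority has a sign representation with 3 terms.
majority = lambda x: sum(x) > 0
poly = {(0,): 1, (1,): 1, (2,): 1}      # x0 + x1 + x2
```

Finding the minimum threshold density amounts to searching over such polynomials for the fewest terms for which `sign_represents` holds.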
The equation of state for stellar envelopes. II - Algorithm and selected results
NASA Technical Reports Server (NTRS)
Mihalas, Dimitri; Dappen, Werner; Hummer, D. G.
1988-01-01
A free-energy-minimization method for computing the dissociation and ionization equilibrium of a multicomponent gas is discussed. The adopted free energy includes terms representing the translational free energy of atoms, ions, and molecules; the internal free energy of particles with excited states; the free energy of a partially degenerate electron gas; and the configurational free energy from shielded Coulomb interactions among charged particles. Internal partition functions are truncated using an occupation probability formalism that accounts for perturbations of bound states by both neutral and charged perturbers. The entire theory is analytical and differentiable to all orders, so it is possible to write explicit analytical formulas for all derivatives required in a Newton-Raphson iteration; these are presented to facilitate future work. Some representative results for both Saha and free-energy-minimization equilibria are presented for a hydrogen-helium plasma with N(He)/N(H) = 0.10. These illustrate nicely the phenomena of pressure dissociation and ionization, and also demonstrate vividly the importance of choosing a reliable cutoff procedure for internal partition functions.
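The Saha baseline mentioned above can be sketched for pure hydrogen. The constants are standard CODATA values, and the bisection solver is a minimal illustration, not the paper's free-energy-minimization scheme:

```python
import math

# Saha ionization equilibrium for pure hydrogen, solved by bisection in the
# ionization fraction x = n_p / n_tot. Minimal sketch of the "Saha" baseline.

K_B = 1.380649e-23                 # Boltzmann constant, J/K
M_E = 9.1093837e-31                # electron mass, kg
H   = 6.62607015e-34               # Planck constant, J s
CHI_H = 13.598 * 1.602176634e-19   # hydrogen ionization energy, J

def saha_rhs(T):
    """n_e n_p / n_H0 for hydrogen (partition-weight factor 2*1/2 = 1)."""
    lam = (2.0 * math.pi * M_E * K_B * T / H ** 2) ** 1.5
    return lam * math.exp(-CHI_H / (K_B * T))

def ionization_fraction(n_tot, T):
    """Solve x**2 / (1 - x) = saha_rhs(T) / n_tot by bisection."""
    S = saha_rhs(T) / n_tot
    lo, hi = 0.0, 1.0 - 1e-15
    for _ in range(200):
        x = 0.5 * (lo + hi)
        if x * x / (1.0 - x) < S:
            lo = x
        else:
            hi = x
    return x
```

The free-energy-minimization equilibria in the paper reduce to this relation when the configurational and degeneracy terms are negligible, which is why the two are compared directly.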
Siddique, Waseem; El-Gabry, Lamyaa; Shevchuk, Igor V; Fransson, Torsten H
2013-01-01
High inlet temperatures in a gas turbine lead to an increase in the thermal efficiency of the gas turbine, but they also create a requirement for cooling of the turbine blades/vanes. Internal cooling of gas turbine blades/vanes with the help of two-pass channels is one of the effective methods of reducing metal temperatures. In particular, the trailing edge of a turbine vane is a critical area where effective cooling is required. The trailing edge can be modeled as a trapezoidal channel. This paper describes the numerical validation of the heat transfer and pressure drop in a trapezoidal channel with and without orthogonal ribs at the bottom surface. A new concept of a ribbed trailing edge is introduced, and a numerical study of several trailing edge cooling configurations based on the placement of ribs at different walls is presented. The baseline geometries are two-pass trapezoidal channels with and without orthogonal ribs at the bottom surface of the channel. Ribs induce secondary flow, which enhances heat transfer; therefore, for enhancement of heat transfer at the trailing edge, ribs are placed at the trailing edge surface in three different configurations: first without ribs at the bottom surface, then with ribs at the trailing edge surface in line with the ribs at the bottom surface, and finally with staggered ribs. Heat transfer and pressure drop are calculated at a Reynolds number of 9400 for all configurations. Different turbulence models are used for validation of the numerical results. For the smooth channel, the low-Re k-ɛ model, realizable k-ɛ model, RNG k-ω model, low-Re k-ω model, and SST k-ω model are compared, whereas for the ribbed channel, the low-Re k-ɛ model and SST k-ω model are compared. The results show that the low-Re k-ɛ model, which predicts the heat transfer in the outlet pass of the smooth channels with a difference of +7%, underpredicts the heat transfer by -17% in the case of the ribbed channel compared to
NASA Astrophysics Data System (ADS)
Sanz-Enguita, G.; Ortega, J.; Folcia, C. L.; Aramburu, I.; Etxebarria, J.
2016-02-01
We have studied the performance characteristics of a dye-doped cholesteric liquid crystal (CLC) laser as a function of the sample thickness. The study has been carried out both from the experimental and theoretical points of view. The theoretical model is based on the kinetic equations for the population of the excited states of the dye and for the power of light generated within the laser cavity. From the equations, the threshold pump radiation energy Eth and the slope efficiency η are numerically calculated. Eth is rather insensitive to thickness changes, except for small thicknesses. In comparison, η shows a much more pronounced variation, exhibiting a maximum that determines the sample thickness for optimum laser performance. The predictions are in good accordance with the experimental results. Approximate analytical expressions for Eth and η as a function of the physical characteristics of the CLC laser are also proposed. These expressions present an excellent agreement with the numerical calculations. Finally, we comment on the general features of CLC layer and dye that lead to the best laser performance.
NASA Astrophysics Data System (ADS)
Vitiello, Antonio; Squillace, Antonino; Prisco, Umberto
2007-02-01
Shape memory alloys (SMA) are a particular family of materials, discovered during the 1930s and only now used in technological applications, with the property of returning to an imposed shape after a deformation and heating process. The study of the mechanical behaviour of SMA through a proper constitutive model, and the possible ensuing applications, form the core of an active research field, developed over the last few years and still driven by the aim of understanding and characterizing the peculiar properties of these materials. The aim of this work is to study the behaviour of SMA under torsional loads. To predict the mechanical response of the SMA, we used a numerical algorithm based on the Boyd-Lagoudas model and compared the results with those of experimental tests. The experiments were conducted by subjecting helicoidal springs of constant cross section to a traction load. It is well known that in such springs the stress state under traction loads is almost completely a pure torsional stress field. The interest in these studies is due to the absence of data on such tests for SMA in the literature, and to the increasing number of industrial applications in which SMA are subjected to torsional loads, in particular in medicine, and especially in orthodontic drills, which usually work under torsional loads.
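The near-pure wire torsion produced by axially loading a close-coiled spring, which the experiments above exploit, can be sketched numerically. A minimal illustration; the load and geometry values and the textbook Wahl correction are assumptions, not data from the paper:

```python
import math

def spring_shear_stress(F, D, d):
    """Maximum torsional shear stress (Pa) in the wire of a close-coiled
    helical spring under axial load F: pure-torsion approximation,
    tau = 8 F D / (pi d^3)."""
    return 8.0 * F * D / (math.pi * d ** 3)

def wahl_factor(C):
    """Textbook Wahl correction for wire curvature and direct shear,
    with spring index C = D/d."""
    return (4.0 * C - 1.0) / (4.0 * C - 4.0) + 0.615 / C

# Illustrative values (not from the paper): F = 50 N axial load,
# D = 10 mm mean coil diameter, d = 1.5 mm wire diameter
F, D, d = 50.0, 0.010, 0.0015
tau = spring_shear_stress(F, D, d)           # basic pure-torsion estimate
tau_corrected = tau * wahl_factor(D / d)     # with curvature correction
```

The Wahl factor exceeds unity, so the corrected stress is always higher than the pure-torsion estimate.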
NASA Astrophysics Data System (ADS)
Mazoyer, Johan; Pueyo, Laurent; Norman, Colin; N'Diaye, Mamadou; van der Marel, Roeland P.; Soummer, Rémi
2016-03-01
The new frontier in the quest for the highest contrast levels in the focal plane of a coronagraph is now the correction of the large diffraction artifacts introduced at the science camera by apertures of increasing complexity. Indeed, the future generation of space- and ground-based coronagraphic instruments will be mounted on on-axis and/or segmented telescopes; the design of coronagraphic instruments for such observatories is currently a domain undergoing rapid progress. One approach consists of using two sequential deformable mirrors (DMs) to correct for aberrations introduced by secondary mirror structures and segmentation of the primary mirror. The coronagraph for the WFIRST-AFTA mission will be the first such instrument in space with a two-DM wavefront control system. Regardless of the control algorithm for these multiple DMs, it will have to rely on quick and accurate simulation of the propagation effects introduced by the out-of-pupil surface. In the first part of this paper, we present the analytical description of the different approximations used to simulate these propagation effects. In Appendix A, we prove analytically that in the special case of surfaces inducing a converging beam, the Fresnel method yields high fidelity for simulations of these effects. We provide numerical simulations showing this effect. In the second part, we use these tools in the framework of the active compensation of aperture discontinuities (ACAD) technique applied to pupil geometries similar to WFIRST-AFTA. We present these simulations in the context of the optical layout of the high-contrast imager for complex aperture telescopes, which will test ACAD on an optical bench. The results of this analysis show that using the ACAD method, an apodized pupil Lyot coronagraph, and the performance of our current DMs, we are able to obtain, in numerical simulations, a dark hole with a WFIRST-AFTA-like pupil. Our numerical simulation shows that we can obtain contrast better than 2×10⁻⁹ in
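In the paraxial limit, the out-of-pupil propagation that such DM control must simulate reduces to multiplication by a Fresnel transfer function in the Fourier domain. A minimal sketch, with grid size, wavelength, and propagation distance chosen purely for illustration (this is not the authors' code):

```python
import numpy as np

def fresnel_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z using the Fresnel
    (paraxial) transfer function applied in the Fourier domain."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative case: 1 mm circular aperture on a 256 x 256 grid,
# 10 micron sampling, HeNe wavelength, 10 cm propagation
n, dx, wl = 256, 10e-6, 633e-9
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
pupil = (np.hypot(X, Y) < 0.5e-3).astype(complex)
out = fresnel_propagate(pupil, wl, dx, z=0.1)
```

Because the transfer function has unit modulus, total energy is conserved, which makes a convenient sanity check for any implementation.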
NASA Astrophysics Data System (ADS)
de'Michieli Vitturi, M.; Todesco, M.; Neri, A.; Esposti Ongaro, T.; Tola, E.; Rocco, G.
2011-12-01
We present a new DVD in the INGV outreach series, aimed at illustrating our research work on pyroclastic flow modeling. Pyroclastic flows (or pyroclastic density currents) are hot, devastating clouds of gas and ash generated during explosive eruptions. Understanding their dynamics and impact is crucial for a proper hazard assessment. We employ a 3D numerical model which describes the main features of the multi-phase and multi-component process, from the generation of the flows to their propagation along complex terrains. Our numerical results can be translated into color animations, which describe the temporal evolution of flow variables such as temperature or ash concentration. The animations provide a detailed and effective description of the natural phenomenon which can be used to present this geological process to a general public and to improve hazard perception in volcanic areas. In our DVD, the computer animations are introduced and commented on by professionals and researchers who deal at various levels with the study of pyroclastic flows and their impact. Their comments are recorded as short interviews and edited into a short video (about 10 minutes) which describes the natural process, as well as the model and its applications to explosive volcanoes such as Vesuvio, Campi Flegrei, Mt. St. Helens and Soufriere Hills (Montserrat). The ensemble of different voices and faces provides a direct sense of the multi-disciplinary effort involved in the assessment of pyroclastic flow hazard. The video also introduces the people who address this complex problem, and their personal involvement beyond the scientific results. The full, uncommented animations of the pyroclastic flow propagation in the different volcanic settings are also provided on the DVD, which is meant to be a general, flexible outreach tool.
G. L. Hawkes; J. E. O'Brien; B. A. Haberman; A. J. Marquis; C. M. Baca; D. Tripepi; P. Costamagna
2008-06-01
A numerical study of the thermal and electrochemical performance of a single-tube Integrated Planar Solid Oxide Fuel Cell (IP-SOFC) has been performed. Results obtained from two finite-volume computational fluid dynamics (CFD) codes, FLUENT and SOHAB, and from a two-dimensional in-house finite-volume GENOA model are presented and compared. Each tool uses physical and geometric models of differing complexity, and comparisons are made to assess their relative merits. Several single-tube simulations were run using each code over a range of operating conditions. The results include polarization curves and distributions of local current density, composition and temperature. Comparisons of these results are discussed, along with their relationship to the respective embedded phenomenological models for activation losses, fluid flow and mass transport in porous media. In general, agreement between the codes was within 15% for overall parameters such as operating voltage and maximum temperature. The CFD results clearly show the effects of internal structure on the distributions of gas flows and related quantities within the electrochemical cells.
NASA Technical Reports Server (NTRS)
Knox, C. E.; Cannon, D. G.
1979-01-01
A flight management algorithm designed to improve the accuracy of delivering an airplane, fuel-efficiently, to a metering fix at a time designated by air traffic control is discussed. The algorithm provides a 3-D path with time control (4-D) for a test B-737 airplane to make an idle-thrust, clean-configured descent and arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance, with consideration given to gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm and the results of the flight tests are discussed.
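The linearized descent-planning arithmetic behind such an algorithm can be illustrated with a constant-rate estimate of the ground distance a descent consumes. This is a deliberately simplified sketch with invented numbers, not the flight-tested algorithm:

```python
def descent_distance_nm(h_top_ft, h_fix_ft, rod_ft_min, ground_speed_kts):
    """Ground distance (nautical miles) consumed by a constant-rate
    descent from h_top_ft to h_fix_ft at rod_ft_min feet per minute and
    the given average ground speed -- a linearized planning estimate."""
    time_min = (h_top_ft - h_fix_ft) / rod_ft_min   # descent time, minutes
    return ground_speed_kts / 60.0 * time_min        # NM = kt/60 * min

# Illustrative: idle descent from FL350 to a 10,000 ft metering fix
# at 2,500 ft/min and 350 kt average ground speed
dist = descent_distance_nm(35000.0, 10000.0, 2500.0, 350.0)
```

Inverting the same relation gives the top-of-descent point for a required fix-crossing time, which is the planning quantity a 4-D descent algorithm needs.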
NASA Astrophysics Data System (ADS)
Hand, J. W.; Li, Y.; Hajnal, J. V.
2010-02-01
Numerical simulations of specific absorption rate (SAR) and temperature changes in a 26-week pregnant woman model within typical birdcage body coils as used in 1.5 T and 3 T MRI scanners are described. Spatial distributions of SAR and the resulting spatial and temporal changes in temperature are determined using a finite difference time domain method and a finite difference bio-heat transfer solver that accounts for discrete vessels. Heat transfer from foetus to placenta via the umbilical vein and arteries as well as that across the foetal skin/amniotic fluid/uterine wall boundaries is modelled. Results suggest that for procedures compliant with IEC normal mode conditions (maternal whole-body averaged SAR_MWB ≤ 2 W kg⁻¹, continuous or time-averaged over 6 min), whole foetal SAR, local foetal SAR_10g and average foetal temperature are within international safety limits. For continuous RF exposure at SAR_MWB = 2 W kg⁻¹ over periods of 7.5 min or longer, a maximum local foetal temperature >38 °C may occur. However, assessment of the risk posed by such maximum temperatures predicted in a static model is difficult because of frequent foetal movement. Results also confirm that when SAR_MWB = 2 W kg⁻¹, some local SAR_10g values in the mother's trunk and extremities exceed recommended limits.
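Bio-heat solvers of the kind referred to above integrate variants of the Pennes equation, in which conduction, perfusion, and RF deposition compete. A minimal 1D explicit finite-difference sketch with generic soft-tissue parameters; all values are illustrative and are not those of the pregnant-woman model:

```python
import numpy as np

def pennes_step(T, dt, dx, k, rho, c, w_b, rho_b, c_b, T_art, q_sar):
    """One explicit finite-difference step of the 1D Pennes bioheat
    equation: rho*c*dT/dt = k*d2T/dx2 + w_b*rho_b*c_b*(T_art - T)
    + rho*q_sar, with insulated ends (illustrative boundary choice)."""
    lap = (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx ** 2
    lap[0] = lap[-1] = 0.0
    dT = (k * lap + w_b * rho_b * c_b * (T_art - T) + rho * q_sar) / (rho * c)
    return T + dt * dT

# Illustrative soft-tissue values (not from the paper)
n = 101
T = np.full(n, 37.0)                 # degrees C
q = np.zeros(n)
q[40:60] = 2.0                       # 2 W/kg local SAR in a central band
for _ in range(600):                 # 60 s of exposure with dt = 0.1 s
    T = pennes_step(T, 0.1, 1e-3, k=0.5, rho=1050.0, c=3600.0,
                    w_b=0.5e-3, rho_b=1060.0, c_b=3600.0,
                    T_art=37.0, q_sar=q)
```

Even this toy version shows the qualitative behaviour above: sustained W/kg-level deposition produces a slow, perfusion-limited temperature rise localized around the heated region.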
Ermolaev, B.S.; Novozhilov, B.V.; Posvyanskii, V.S.; Sulimov, A.A.
1986-03-01
The authors analyze the results of a numerical simulation of the convective burning of explosive powders in the presence of increasing pressure. The formulation of the problem reproduces a typical experimental technique: a strong closed vessel with a channel uniformly filled with the explosive investigated is fitted with devices for initiating and recording the process of explosion. It is shown that the relation between the propagation velocities of the flame and the compression waves in the powder and the rate of pressure increase in the combustion zone is such that a narrow compaction zone is formed ahead of the ignition front. Another important result is obtained by analyzing the difference between the flame velocity and the gas flow velocity in the ignition front. A model of the process is given. The results of the investigation throw light on such aspects of the convective combustion mechanism and the transition from combustion to detonation as the role of compaction of the explosive in the process of flame propagation and the role of the rate of pressure increase and dissipative heating of the gas phase in the pores ahead of the ignition front.
Stabilizing the Richardson eigenvector algorithm by controlling chaos
He, S.
1997-03-01
By viewing the operations of the Richardson purification algorithm as a discrete time dynamical process, we propose a method to overcome the instability of this eigenvector algorithm by controlling chaos. We present theoretical analysis and numerical results on the behavior and performance of the stabilized algorithm. © 1997 American Institute of Physics.
An algorithm for the automatic synchronization of Omega receivers
NASA Technical Reports Server (NTRS)
Stonestreet, W. M.; Marzetta, T. L.
1977-01-01
The Omega navigation system and the requirement for receiver synchronization are discussed. A description of the synchronization algorithm is provided. The numerical simulation and its associated assumptions are examined, and results of the simulation are presented. The suggested form of the synchronization algorithm and the suggested receiver design values are surveyed. A Fortran listing of the synchronization algorithm used in the simulation is also included.
NASA Technical Reports Server (NTRS)
Gomberg, Joan; Ellis, Michael
1994-01-01
We present results of a series of numerical experiments designed to test hypothetical mechanisms that drive deformation in the New Madrid seismic zone. Experiments are constrained by subtle topography and the distribution of seismicity in the region. We use a new boundary element algorithm that permits calculation of the three-dimensional deformation field. Surface displacement fields are calculated for the New Madrid zone under both far-field (plate tectonics scale) and locally derived driving strains. Results demonstrate that surface displacement fields cannot distinguish between either a far-field simple or pure shear strain field or one that involves a deep shear zone beneath the upper crustal faults. Thus, neither geomorphic nor geodetic studies alone are expected to reveal the ultimate driving mechanism behind the present-day deformation. We have also tested hypotheses about strain accommodation within the New Madrid contractional step-over by including linking faults, two southwest-dipping and one vertical, recently inferred from microearthquake data. Only those models with step-over faults are able to predict the observed topography. Surface displacement fields for long-term, relaxed deformation predict the distribution of uplift and subsidence in the contractional step-over remarkably well. Generation of these displacement fields appears to require slip on both the two northeast-trending vertical faults and the two dipping faults in the step-over region, with very minor displacements occurring during the interseismic period when the northeast-trending vertical faults are locked. These models suggest that the gently dipping central step-over fault is a reverse fault and that the steeper fault, extending to the southeast of the step-over, acts as a normal fault over the long term.
NASA Technical Reports Server (NTRS)
Peltier, L. J.; Biringen, S.
1993-01-01
The present numerical simulation explores a thermal-convective mechanism for oscillatory thermocapillary convection in a shallow Cartesian cavity for a Prandtl number 6.78 fluid. The computer program developed for this simulation integrates the two-dimensional, time-dependent Navier-Stokes equations and the energy equation by a time-accurate method on a stretched, staggered mesh. Flat free surfaces are assumed. The instability is shown to depend upon temporal coupling between large-scale thermal structures within the flow field and the temperature-sensitive free surface. A primary result of this study is a stability diagram presenting the critical Marangoni number separating steady from time-dependent flow states as a function of aspect ratio, for aspect ratios between 2.3 and 3.8. Within this range, a minimum critical aspect ratio near 2.3 and a minimum critical Marangoni number near 20,000 are predicted, below which steady convection is found.
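The Marangoni number that organizes such a stability diagram compares the thermocapillary driving force against viscous and thermal damping. A one-line sketch of its standard definition, evaluated with water-like illustrative properties rather than the study's parameters:

```python
def marangoni_number(dsigma_dT, dT, L, mu, alpha):
    """Marangoni number Ma = |dsigma/dT| * dT * L / (mu * alpha):
    surface-tension temperature coefficient times the imposed temperature
    difference and length scale, over dynamic viscosity times thermal
    diffusivity."""
    return abs(dsigma_dT) * dT * L / (mu * alpha)

# Illustrative water-like fluid (values not from the paper)
Ma = marangoni_number(dsigma_dT=-1.5e-4,  # N/(m K)
                      dT=5.0,             # K across the cavity
                      L=0.01,             # m
                      mu=1.0e-3,          # Pa s
                      alpha=1.4e-7)       # m^2/s
```

With these invented numbers Ma lands in the several-tens-of-thousands range, the same order as the critical value near 20,000 quoted above.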
Dvir, Hila; Zlochiver, Sharon
2015-01-01
A single isolated sinoatrial pacemaker cell presents intrinsic interbeat interval (IBI) variability that is believed to result from the stochastic characteristics of the opening and closing processes of membrane ion channels. To our knowledge, a novel mathematical framework was developed in this work to address the effect of current fluctuations on the IBIs of sinoatrial pacemaker cells. Using statistical modeling and employing the Fokker-Planck formalism, our mathematical analysis suggests that increased stochastic current fluctuation variance linearly increases the slope of phase-4 depolarization, hence the rate of activations. Single-cell and two-dimensional computerized numerical modeling of the sinoatrial node was conducted to validate the theoretical predictions using established ionic kinetics of the rabbit pacemaker and atrial cells. Our models also provide, to our knowledge, a novel complementary or alternative explanation to recent experimental observations showing a strong reduction in the mean IBI of Cx30 deficient mice in comparison to wild-types, not fully explicable by the effects of intercellular decoupling. PMID:25762340
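The qualitative claim above, that stronger current fluctuations steepen the effective phase-4 slope and shorten the mean interbeat interval, can be reproduced with a deliberately crude integrate-and-fire caricature. A reflecting floor at the resting potential stands in for the nonlinear diastolic dynamics; this is an illustrative toy, not the rabbit ionic model used in the paper:

```python
import random

def mean_ibi(sigma, n_beats=200, seed=1, dt=1e-3):
    """Mean interbeat interval (s) of a toy pacemaker: linear phase-4
    depolarization from rest toward threshold plus Gaussian current
    noise, with a reflecting floor at rest. A caricature of the effect,
    not the ionic kinetics used in the paper."""
    rng = random.Random(seed)
    v_rest, v_thresh, slope = -60.0, -40.0, 20.0   # mV, mV, mV/s
    ibis, v, t = [], v_rest, 0.0
    while len(ibis) < n_beats:
        v += slope * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)
        v = max(v, v_rest)              # noise cannot deepen diastole
        t += dt
        if v >= v_thresh:
            ibis.append(t)              # threshold reached: fire, reset
            v, t = v_rest, 0.0
    return sum(ibis) / len(ibis)

ibi_quiet = mean_ibi(0.0)    # deterministic: 20 mV / (20 mV/s) = 1 s
ibi_noisy = mean_ibi(20.0)   # stronger fluctuations shorten the IBI
```

The floor is essential: for an unbounded drift-plus-noise process the mean first-passage time is independent of the noise amplitude, so some asymmetry (here reflection, in the paper the full ionic nonlinearity) is needed for variance to shift the mean rate.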
2010-01-01
Background: The mitosporic fungus Trichoderma harzianum (Hypocrea, Ascomycota, Hypocreales, Hypocreaceae) is a ubiquitous species in the environment, with some strains commercially exploited for the biological control of plant pathogenic fungi. Although T. harzianum is asexual (or anamorphic), its sexual stage (or teleomorph) has been described as Hypocrea lixii. Since recombination would be an important issue for the efficacy of a biological control agent in the field, we investigated the phylogenetic structure of the species. Results: Using DNA sequence data from three unlinked loci for each of 93 strains collected worldwide, we detected a complex speciation process revealing overlapping reproductively isolated biological species, recent agamospecies and numerous relict lineages with unresolved phylogenetic positions. Genealogical concordance and recombination analyses confirm the existence of two genetically isolated agamospecies, including T. harzianum sensu stricto, and two hypothetical holomorphic species related to but different from H. lixii. The exact phylogenetic position of the majority of strains was not resolved and was therefore attributed to a diverse network of recombining strains conventionally called the 'pseudoharzianum matrix'. Since H. lixii and T. harzianum are evidently genetically isolated, the anamorph-teleomorph combination comprising H. lixii/T. harzianum in one holomorph must be rejected in favor of two separate species. Conclusions: Our data illustrate a complex speciation within the H. lixii-T. harzianum species group, which is based on the coexistence and interaction of organisms with different evolutionary histories and on the absence of strict genetic borders between them. PMID:20359347
NASA Astrophysics Data System (ADS)
Chirkov, V. A.; Komarov, D. K.; Stishkov, Y. K.; Vasilkov, S. A.
2015-10-01
The paper studies a particular electrode system: two flat parallel electrodes with a dielectric plate having a small circular hole between them. Its main feature is that the region of strong electric field is located far from the metal electrode surfaces, which makes it possible to preclude injection charge formation and to observe field-enhanced dissociation (the Wien effect) leading to the emergence of electrohydrodynamic (EHD) flow. The described electrode system was studied by way of both computer simulation and experiment. The latter was conducted with the help of the particle image velocimetry (PIV) technique. The numerical study used the COMSOL Multiphysics software package, which allows solving the complete set of EHD equations and obtaining the EHD flow structure. Based on the computer simulation and the comparison with the experimental results, it was concluded that the Wien effect is capable of causing intense (several centimeters per second) EHD flows in low-conducting liquids and has to be taken into account when dealing with EHD devices.
Luo Xueli; Day, Christian; Haas, Horst; Varoutis, Stylianos
2011-07-15
For the torus of the nuclear fusion project ITER (originally the International Thermonuclear Experimental Reactor, but also Latin: the way), eight high-performance large-scale customized cryopumps must be designed and manufactured to accommodate the very high pumping speeds and throughputs of the fusion exhaust gas needed to maintain the plasma under stable vacuum conditions and comply with other criteria which cannot be met by standard commercial vacuum pumps. Under an earlier research and development program, a model pump of reduced scale based on active cryosorption on charcoal-coated panels at 4.5 K was manufactured and tested systematically. The present article focuses on the simulation of the true three-dimensional complex geometry of the model pump by the newly developed ProVac3D Monte Carlo code. It is shown for gas throughputs of up to 1000 sccm (≈1.69 Pa m³/s at T = 0 °C) in the free molecular regime that the numerical simulation results are in good agreement with the pumping speeds measured. Meanwhile, the capture coefficient associated with the virtual region around the cryogenic panels and shields, which holds for higher throughputs, is calculated using this generic approach. This means that the test particle Monte Carlo simulations in free molecular flow can be used not only for the optimization of the pumping system but also for the supply of the input parameters necessary for the future direct simulation Monte Carlo in the full flow regime.
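A standard verification case for test-particle Monte Carlo codes of this kind is the Clausing transmission probability of a bare cylindrical tube in free molecular flow. A self-contained sketch with diffuse (cosine-law) wall re-emission, unrelated to the ProVac3D implementation:

```python
import math, random

def tube_transmission(L_over_R, n=20000, seed=2):
    """Test-particle Monte Carlo estimate of the Clausing transmission
    probability of a cylindrical tube in free molecular flow, with
    diffuse (cosine-law) re-emission at the wall."""
    rng = random.Random(seed)
    R, L = 1.0, float(L_over_R)

    def cosine_dir(nx, ny, nz, t1, t2):
        # Cosine-law direction about the unit normal (nx, ny, nz)
        st = math.sqrt(rng.random())
        ct = math.sqrt(1.0 - st * st)
        psi = 2.0 * math.pi * rng.random()
        return (ct * nx + st * (math.cos(psi) * t1[0] + math.sin(psi) * t2[0]),
                ct * ny + st * (math.cos(psi) * t1[1] + math.sin(psi) * t2[1]),
                ct * nz + st * (math.cos(psi) * t1[2] + math.sin(psi) * t2[2]))

    passed = 0
    for _ in range(n):
        # Launch uniformly over the entrance disc, cosine-law about the axis
        r = R * math.sqrt(rng.random())
        phi = 2.0 * math.pi * rng.random()
        x, y, z = r * math.cos(phi), r * math.sin(phi), 0.0
        dx, dy, dz = cosine_dir(0.0, 0.0, 1.0, (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
        while True:
            a = dx * dx + dy * dy
            t_wall = math.inf
            if a > 1e-12:                       # distance to the wall
                b = 2.0 * (x * dx + y * dy)
                c = x * x + y * y - R * R
                t_wall = (-b + math.sqrt(max(b * b - 4.0 * a * c, 0.0))) / (2.0 * a)
            t_end = (L - z) / dz if dz > 0 else (-z / dz if dz < 0 else math.inf)
            if t_end <= t_wall:                 # reaches an end plane first
                if dz > 0:
                    passed += 1                 # exits the far end
                break
            x, y, z = x + t_wall * dx, y + t_wall * dy, z + t_wall * dz
            nx, ny = -x / R, -y / R             # inward wall normal
            dx, dy, dz = cosine_dir(nx, ny, 0.0, (0.0, 0.0, 1.0), (-ny, nx, 0.0))
    return passed / n

p_short = tube_transmission(1.0)   # tabulated Clausing value is about 0.67
```

Longer tubes transmit fewer particles, so the estimate should also fall monotonically with L/R, which makes a cheap second check.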
A Food Chain Algorithm for Capacitated Vehicle Routing Problem with Recycling in Reverse Logistics
NASA Astrophysics Data System (ADS)
Song, Qiang; Gao, Xuexia; Santos, Emmanuel T.
2015-12-01
This paper introduces the capacitated vehicle routing problem with recycling in reverse logistics and designs a food chain algorithm for it. Some illustrative examples are selected for simulation and comparison. Numerical results show that the performance of the food chain algorithm is better than that of the genetic algorithm, particle swarm optimization, and the quantum evolutionary algorithm.
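The abstract does not specify the food chain algorithm itself, but the kind of construction heuristic such metaheuristics are benchmarked against can be sketched as a capacity-respecting nearest-neighbor routine. The instance data below are invented for illustration and assume every demand fits in one vehicle:

```python
import math

def cvrp_nearest_neighbor(depot, customers, demands, capacity):
    """Greedy route construction for the capacitated VRP: repeatedly
    extend the current route to the nearest unserved customer that still
    fits the remaining capacity, starting a new route when none does.
    Assumes each individual demand is at most the vehicle capacity."""
    unserved = set(range(len(customers)))
    routes = []
    while unserved:
        route, load, pos = [], 0.0, depot
        while True:
            feasible = [i for i in unserved if load + demands[i] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda i: math.dist(pos, customers[i]))
            route.append(nxt)
            load += demands[nxt]
            pos = customers[nxt]
            unserved.discard(nxt)
        routes.append(route)
    return routes

# Illustrative instance (not from the paper)
depot = (0.0, 0.0)
pts = [(1, 0), (2, 0), (0, 3), (0, 4), (5, 5)]
dem = [4, 3, 4, 3, 5]
routes = cvrp_nearest_neighbor(depot, pts, dem, capacity=8)
```

Metaheuristics such as the genetic algorithm or particle swarm optimization then search for route sets that beat this greedy baseline on total distance.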
Sequential and Parallel Algorithms for Spherical Interpolation
NASA Astrophysics Data System (ADS)
De Rossi, Alessandra
2007-09-01
Given a large set of scattered points on a sphere and their associated real values, we analyze sequential and parallel algorithms for the construction of a function defined on the sphere satisfying the interpolation conditions. The algorithms we implemented are based on a local interpolation method using spherical radial basis functions and the Inverse Distance Weighted method. Several numerical results show accuracy and efficiency of the algorithms.
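The Inverse Distance Weighted component mentioned above carries over to the sphere by replacing Euclidean with great-circle distance. A minimal sketch; the node layout and values are illustrative:

```python
import math

def gc_dist(p, q):
    """Great-circle distance on the unit sphere between points given as
    (lat, lon) in radians."""
    (la1, lo1), (la2, lo2) = p, q
    c = (math.sin(la1) * math.sin(la2)
         + math.cos(la1) * math.cos(la2) * math.cos(lo1 - lo2))
    return math.acos(max(-1.0, min(1.0, c)))   # clamp for rounding safety

def idw_sphere(x, nodes, values, power=2.0):
    """Inverse Distance Weighted interpolation on the sphere: weights are
    inverse powers of great-circle distance, and the interpolant is exact
    at the nodes."""
    w_sum, v_sum = 0.0, 0.0
    for p, v in zip(nodes, values):
        d = gc_dist(x, p)
        if d < 1e-12:
            return v                           # query coincides with a node
        w = d ** (-power)
        w_sum += w
        v_sum += w * v
    return v_sum / w_sum

# Illustrative: three nodes in (lat, lon) radians with values 1, 2, 3
nodes = [(0.0, 0.0), (0.0, math.pi / 2), (math.pi / 2, 0.0)]
vals = [1.0, 2.0, 3.0]
v_mid = idw_sphere((0.1, 0.1), nodes, vals)    # nearest node dominates
```

Localizing the sum to the k nearest nodes, as in the paper's local method, keeps the cost bounded for large scattered data sets and parallelizes naturally over query points.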
NASA Astrophysics Data System (ADS)
Valla, Pierre G.; van der Beek, Peter A.; Lague, Dimitri; Carcaillet, Julien
2010-05-01
Bedrock gorges are frequent features in glacial or post-glacial landscapes and allow measurements of fluvial bedrock incision in mountainous relief. Using digital elevation models, aerial photographs, topographic maps and field reconnaissance in the Pelvoux-Ecrins Massif (French Western Alps), we have identified ~30 tributary hanging valleys incised by gorges toward their confluence with the trunk streams. Longitudinal profiles of these tributaries are all convex and have abrupt knickpoints at the upper limit of oversteepened gorge reaches. From morphometric analyses, we find that mean channel gradients and widths, as well as knickpoint retreat rates, display a drainage-area dependence modulated by bedrock lithology. However, there appears to be no relation between horizontal retreat and vertical downwearing of knickpoints. Numerical modeling has been performed to test the capacity of different fluvial incision models to predict the inferred evolution of the gorges. Results from simple end-member models suggest transport-limited behavior of the bedrock gorges. Using a more sophisticated model including dynamic width adjustment and sediment-dependent incision rates, we show that bedrock gorge evolution requires significant supply of sediment from the gorge sidewalls triggered by gorge deepening, combined with pronounced inhibition of bedrock incision by sediment transport and deposition. We then use in situ-produced ¹⁰Be cosmogenic nuclides to date and quantify bedrock gorge incision into a single glacial hanging valley (Gorge du Diable). We have sampled gorge sidewalls and the active channel bed to derive both long-term and present-day incision rates. ¹⁰Be ages of sidewall profiles reveal rapid incision through the late Holocene (ca. 5 ka), implying either delayed initiation of gorge incision after final ice retreat from internal Alpine valleys at ca. 12 ka, or post-glacial surface reburial of the gorge. Both modeling results and cosmogenic dating suggest that
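The end-member incision models referred to above are conventionally written as stream-power laws. A sketch of the detachment-limited form and a transport-limited variant in which incision is throttled by the sediment-flux ratio; the functional forms are the standard ones, while all coefficients are illustrative:

```python
def stream_power_incision(K, A, S, m=0.5, n=1.0):
    """Detachment-limited end member: incision rate E = K * A^m * S^n,
    with A the drainage area and S the channel slope."""
    return K * A ** m * S ** n

def transport_limited_incision(K_t, A, S, qs_ratio, m=0.5, n=1.0):
    """Sediment-flux-modulated end member: incision scales with the
    unused transport capacity (1 - qs_ratio), where qs_ratio = Qs/Qc is
    sediment supply over transport capacity."""
    return K_t * A ** m * S ** n * (1.0 - qs_ratio)

# Illustrative numbers only: K = 1e-5 m^(1-2m)/yr, A = 1 km^2, S = 5%
E_detach = stream_power_incision(1e-5, 1e6, 0.05)
E_choked = transport_limited_incision(1e-5, 1e6, 0.05, qs_ratio=0.8)
```

The second form captures the paper's inference directly: as sidewall-derived sediment drives Qs toward capacity, incision is progressively inhibited even where stream power is high.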
Elcner, Jakub; Lizal, Frantisek; Jedelsky, Jan; Jicha, Miroslav; Chovancova, Michaela
2016-04-01
In this article, the results of numerical simulations using computational fluid dynamics (CFD) and a comparison with experiments performed with phase Doppler anemometry are presented. The simulations and experiments were conducted in a realistic model of the human airways, which comprised the throat, trachea and tracheobronchial tree up to the fourth generation. A full inspiration/expiration breathing cycle was used with tidal volumes of 0.5 and 1 L, which correspond to a sedentary regime and deep breathing, respectively. The length of the entire breathing cycle was 4 s, with inspiration and expiration each lasting 2 s. As a boundary condition for the CFD simulations, the experimentally obtained flow rate distribution in 10 terminal airways was used, with zero pressure resistance at the throat inlet. The CCM+ CFD code (Adapco) was used with an SST k-ω low-Reynolds-number RANS model. The total number of polyhedral control volumes was 2.6 million, with a time step of 0.001 s. Comparisons were made at several points in eight cross sections selected according to the experiments in the trachea and the left and right bronchi. The results agree well with experiments involving the oscillation (temporal relocation) of flow structures in the majority of the cross sections and individual local positions. Velocity field simulation in several cross sections shows a very unstable flow field, which originates in the tracheal laryngeal jet and propagates far downstream with the formation of separation zones in both left and right airways. The RANS simulation agrees with the experiments in almost all the cross sections and shows unstable local flow structures and a quantitatively acceptable solution for the time-averaged flow field. PMID:26163996
Spurious Numerical Solutions Of Differential Equations
NASA Technical Reports Server (NTRS)
Lafon, A.; Yee, H. C.
1995-01-01
Paper presents detailed study of spurious steady-state numerical solutions of differential equations that contain nonlinear source terms. Main objectives of this study are (1) to investigate how well numerical steady-state solutions of model nonlinear reaction/convection boundary-value problem mimic true steady-state solutions and (2) to relate findings of this investigation to implications for interpretation of numerical results from computational-fluid-dynamics algorithms and computer codes used to simulate reacting flows.
NASA Astrophysics Data System (ADS)
Ulmer, W.; Pyyry, J.; Kaissl, W.
2005-04-01
Based on previous publications on a triple Gaussian analytical pencil beam model and on Monte Carlo calculations using the Monte Carlo codes GEANT-Fluka (versions 95, 98, 2002) and BEAMnrc/EGSnrc, a three-dimensional (3D) superposition/convolution algorithm for photon beams (6 MV, 18 MV) is presented. Tissue heterogeneity is taken into account by electron density information from CT images. A clinical beam consists of a superposition of divergent pencil beams. A slab geometry was used as a phantom model to test computed results against measurements. An essential result is the existence of further dose build-up and build-down effects in the domain of density discontinuities. These effects have increasing magnitude for field sizes ≤5.5 cm² and densities ≤0.25 g cm⁻³, in particular with regard to field sizes considered in stereotaxy. They could be confirmed by measurements (mean standard deviation 2%). A practical impact is on the dose distribution at transitions from bone to soft tissue, lung or cavities. This work was partially presented at WC 2003, Sydney.
NASA Technical Reports Server (NTRS)
Susskind, Joel; Kouvaris, Louis; Iredell, Lena
2016-01-01
The AIRS Science Team Version 6 retrieval algorithm is currently producing high quality level-3 Climate Data Records (CDRs) from AIRS/AMSU which are critical for understanding climate processes. The AIRS Science Team is finalizing an improved Version-7 retrieval algorithm to reprocess all old and future AIRS data. AIRS CDRs should eventually cover the period September 2002 through at least 2020. CrIS/ATMS is the only scheduled follow-on to AIRS/AMSU. The objective of this research is to prepare for generation of long-term CrIS/ATMS level-3 data using a finalized retrieval algorithm that is scientifically equivalent to AIRS/AMSU Version-7.
NASA Astrophysics Data System (ADS)
Alhammoud, B.; Béranger, K.; Mortier, L.; Crépon, M.
The Eastern Mediterranean hydrology and circulation are studied by comparing the results of a high resolution primitive equation model (described in a dedicated session: Béranger et al.) with observations. The model has a horizontal grid mesh of 1/16° and 43 z-levels in the vertical. The model was initialized with the MODB5 climatology and has been forced for 11 years by the daily sea surface fluxes provided by the European Centre for Medium-range Weather Forecasts analysis in a perpetual year mode corresponding to the year March 1998-February 1999. At the end of the run, the numerical model is able to accurately reproduce the major water masses of the Eastern Mediterranean Basin (Levantine Surface Water, modified Atlantic Water, Levantine Intermediate Water, and Eastern Mediterranean Deep Water). Comparisons with the POEM observations reveal good agreement. While the initial conditions of the model are somewhat different from the POEM observations, during the last year of the simulation we found that the water mass stratification matches that of the observations quite well in the seasonal mean. During the 11 years of simulation, the model drifts slightly in the layers below the thermocline. Nevertheless, many important physical processes are reproduced. One example is that the dispersal of Adriatic Deep Water into the Levantine Basin is represented. In addition, convective activity located in the northern part of the Levantine Basin occurs in spring, as expected. The surface circulation is in agreement with in-situ and satellite observations. Some well known mesoscale features of the upper thermocline circulation are shown. Seasonal variability of transports through the Sicily, Otranto and Cretan straits is investigated as well. This work was supported by the French MERCATOR project and SHOM.
Jacobsen, S.; Birkelund, Y.
2010-01-01
Microwave breast cancer detection is based on the dielectric contrast between healthy and malignant tissue. This radar-based imaging method involves illumination of the breast with an ultra-wideband pulse. Detection of tumors within the breast is achieved by some selected focusing technique. Image formation algorithms are tailored to enhance tumor responses and reduce early-time and late-time clutter associated with skin reflections and heterogeneity of breast tissue. In this contribution, we evaluate the performance of the so-called cross-correlated back projection imaging scheme by using a scanning system in phantom experiments. Supplementary numerical modeling based on commercial software is also presented. The phantom is synthetically scanned with a broadband elliptical antenna in a mono-static configuration. The respective signals are pre-processed by a data-adaptive RLS algorithm in order to remove artifacts caused by antenna reverberations and signal clutter. Successful detection of a 7 mm diameter cylindrical tumor immersed in a low permittivity medium was achieved in all cases. Selecting the widely used delay-and-sum (DAS) beamforming algorithm as a benchmark, we show that correlation-based imaging methods improve the signal-to-clutter ratio by at least 10 dB and improve spatial resolution, reducing the full-width at half-maximum (FWHM) of the imaged peak by about 40–50%. PMID:21331362
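The DAS benchmark used above can be sketched for a mono-static scan: each image pixel coherently sums every channel at its round-trip delay. The geometry, sampling parameters, and point-scatterer data below are synthetic illustrations, not the phantom measurements:

```python
import math

def das_image(signals, antennas, pixels, c, dt):
    """Delay-and-sum beamforming for a mono-static scan: for each pixel,
    sum each channel at its round-trip delay sample and square the
    coherent sum to form the image intensity."""
    image = []
    for px in pixels:
        acc = 0.0
        for ant, sig in zip(antennas, signals):
            tau = 2.0 * math.dist(ant, px) / c     # round-trip delay (s)
            k = int(round(tau / dt))               # nearest sample index
            if 0 <= k < len(sig):
                acc += sig[k]
        image.append(acc * acc)
    return image

# Synthetic test: point scatterer at (0.05, 0.05) m, 8 scan positions
c, dt = 1.0e8, 1.0e-10          # illustrative in-medium speed, sample step
target = (0.05, 0.05)
antennas = [(0.01 * i, 0.0) for i in range(8)]
signals = []
for ant in antennas:
    sig = [0.0] * 400
    sig[int(round(2 * math.dist(ant, target) / c / dt))] = 1.0  # echo impulse
    signals.append(sig)
pixels = [target, (0.02, 0.08)]               # on-target vs off-target pixel
img = das_image(signals, antennas, pixels, c, dt)
```

At the true scatterer location all channels add in phase; elsewhere the delays disagree and the sum collapses, which is exactly the focusing that the correlation-based variants in the paper sharpen further.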
NASA Astrophysics Data System (ADS)
Abrams, Daniel S.
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
NASA Astrophysics Data System (ADS)
Takahashi, N.; Okei, K.; Nakatsuka, T.
Accuracies of numerical Fourier and Hankel transforms are examined with the Takahasi-Mori theory of error evaluation. The higher Molière terms for both spatial and projected distributions derived by these methods agree very well with those derived analytically. The methods will be valuable for solving other transport problems concerning fast charged particles.
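A zeroth-order numerical Hankel transform can be sketched by direct quadrature and checked against a known analytic pair. This uses plain trapezoidal quadrature rather than the Takahasi-Mori double-exponential machinery the abstract refers to, which is sufficient for this smooth, fast-decaying test function:

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import trapezoid

# Numerical Hankel transform of order zero by direct quadrature, verified
# against the self-reciprocal Gaussian pair f(r) = exp(-r^2/2).
r = np.linspace(0.0, 12.0, 24001)
f = np.exp(-r**2 / 2)

def hankel0(q):
    """H0[f](q) = integral_0^inf f(r) J0(q r) r dr."""
    return trapezoid(f * j0(q * r) * r, r)

q = 1.0
result = hankel0(q)
exact = np.exp(-q**2 / 2)    # analytic Hankel transform of the Gaussian
```

The double-exponential schemes analyzed in the paper matter when the integrand oscillates slowly or decays poorly, where naive quadrature like the above becomes expensive or inaccurate.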
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
NASA Astrophysics Data System (ADS)
Declair, Stefan; Stephan, Klaus; Potthast, Roland
2015-04-01
Determining the amount of weather dependent renewable energy is a demanding task for transmission system operators (TSOs). In the project EWeLiNE funded by the German government, the German Weather Service and the Fraunhofer Institute on Wind Energy and Energy System Technology strongly support the TSOs by developing innovative weather and power forecasting models and tools for grid integration of weather dependent renewable energy. The key in the energy prediction process chain is the numerical weather prediction (NWP) system. With focus on wind energy, we address model errors in the planetary boundary layer, which is characterized by strong spatial and temporal fluctuations in wind speed, in order to improve the basis of the weather dependent renewable energy prediction. Model data can be corrected by postprocessing techniques such as model output statistics and calibration using historical observational data. On the other hand, the latest observations can be used in a preprocessing technique called data assimilation (DA). In DA, the model output from a previous time step is combined with observational data such that the new initial state for the model integration (the analysis) best fits both the latest model data and the observations. Model errors can therefore be reduced even before the model integration. In this contribution, the results of an impact study are presented. A so-called OSSE (Observation Simulation System Experiment) is performed using the convection-resolving COSMO-DE model of the German Weather Service and a 4D-DA technique, a Newtonian relaxation method also called nudging. Starting from a nature run (treated as the truth), conventional observations and artificial wind observations at hub height are generated. In a control run, the basic model setup of the nature run is slightly perturbed to drag the model away from the previously generated truth, and a free forecast is computed based on the analysis using only conventional
Geist, G.A.; Howell, G.W.; Watkins, D.S.
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
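The BR algorithm itself is not available in standard numerical libraries. As context, the following sketches the plain unshifted QR iteration (the family of bulge-chasing methods the BR algorithm belongs to) on a small symmetric tridiagonal matrix, an illustrative choice rather than a matrix from the paper. Repeated similarity transforms A ← RQ drive the subdiagonal toward zero, leaving the eigenvalues on the diagonal:

```python
import numpy as np

# Unshifted QR iteration on a 4x4 symmetric tridiagonal test matrix.
# Each step T <- R @ Q is a similarity transform, so eigenvalues are preserved
# while the off-diagonal entries decay.
A = (np.diag([4.0, 3.0, 2.0, 1.0])
     + np.diag([1.0, 1.0, 1.0], 1)
     + np.diag([1.0, 1.0, 1.0], -1))
ref = np.sort(np.linalg.eigvalsh(A))   # reference eigenvalues

T = A.copy()
for _ in range(500):
    Q, R = np.linalg.qr(T)
    T = R @ Q                          # preserves tridiagonal/Hessenberg form

approx = np.sort(np.diag(T))           # converges to the eigenvalues
```

Production QR and BR implementations add shifts and chase the "bulge" implicitly instead of forming full QR factorizations, which is where the speed and storage advantages cited in the abstract come from.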
NASA Technical Reports Server (NTRS)
Knox, C. E.; Vicroy, D. D.; Simmon, D. A.
1985-01-01
A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated with considerations given for the descent Mach/airspeed schedule, gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.
NASA Technical Reports Server (NTRS)
Burt, Adam O.; Tinker, Michael L.
2014-01-01
In this paper, genetic-algorithm-based and gradient-based topology optimization are presented and applied to a real hardware design problem. Preliminary design of a planetary lander mockup structure is accomplished using these methods, which provide major weight savings by addressing structural efficiency during the design cycle. This paper presents two alternative formulations of the topology optimization problem. The first is the widely used gradient-based implementation using commercially available algorithms. The second is formulated using genetic algorithms and internally developed capabilities. These two approaches are applied to a practical design problem for hardware that has been built, tested and proven to be functional. Both formulations converged on similar solutions and therefore were proven to be equally valid implementations of the process. This paper discusses both of these formulations at a high level.
Murakami, Yoshimasa; Tsuboi, Naoya; Inden, Yasuya; Yoshida, Yukihiko; Murohara, Toyoaki; Ihara, Zenichi; Takami, Mitsuaki
2010-01-01
Aims Managed ventricular pacing (MVP) and Search AV+ are representative dual-chamber pacing algorithms for minimizing ventricular pacing (VP). This randomized, crossover study aimed to examine the difference in ability to reduce the percentage of VP (%VP) between these two algorithms. Methods and results Symptomatic bradyarrhythmia patients implanted with a pacemaker equipped with both algorithms (Adapta DR, Medtronic) were enrolled. The %VPs of the patients during two periods were compared, with one of the two algorithms operated for 1 month in each period. All patients were categorized into subgroups according to the atrioventricular block (AVB) status at baseline: no AVB (nAVB), first-degree AVB (1AVB), second-degree AVB (2AVB), episodic third-degree AVB (e3AVB), and persistent third-degree AVB (p3AVB). Data were available from 127 patients for the analysis. For all patient subgroups, except for the p3AVB category, the median %VPs were lower during the MVP operation than during Search AV+ (nAVB: 0.2 vs. 0.8%, P < 0.0001; 1AVB: 2.3 vs. 27.4%, P = 0.001; 2AVB: 16.4% vs. 91.9%, P = 0.0052; e3AVB: 37.7% vs. 92.7%, P = 0.0003). Conclusion The managed ventricular pacing algorithm, when compared with Search AV+, offers further %VP reduction in patients implanted with a dual-chamber pacemaker, except for patients diagnosed with persistent loss of atrioventricular conduction. PMID:19762332
NASA Astrophysics Data System (ADS)
van Aalsburg, Jordan; Rundle, John B.; Grant, Lisa B.; Rundle, Paul B.; Yakovlev, Gleb; Turcotte, Donald L.; Donnellan, Andrea; Tiampo, Kristy F.; Fernandez, Jose
2010-08-01
In weather forecasting, current and past observational data are routinely assimilated into numerical simulations to produce ensemble forecasts of future events in a process termed "model steering". Here we describe a similar approach that is motivated by analyses of previous forecasts of the Working Group on California Earthquake Probabilities (WGCEP). Our approach is adapted to the problem of earthquake forecasting using topologically realistic numerical simulations for the strike-slip fault system in California. By systematically comparing simulation data to observed paleoseismic data, a series of spatial probability density functions (PDFs) can be computed that describe the probable locations of future large earthquakes. We develop this approach and show examples of PDFs associated with magnitude M > 6.5 and M > 7.0 earthquakes in California.
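The core data product described above, a spatial PDF of probable future epicenters, can be sketched by binning a simulated event catalog and normalizing. The "catalog" below is synthetic points scattered about a hypothetical fault trace, not simulator output:

```python
import numpy as np

# Build a normalized spatial PDF from a synthetic earthquake catalog.
# Event locations are drawn near a notional strike-slip fault trace.
rng = np.random.default_rng(2)
along = rng.uniform(0, 100, 500)     # km along the fault (hypothetical)
across = rng.normal(0, 2, 500)       # km of off-fault scatter (hypothetical)

# density=True normalizes the histogram so it integrates to one over the area.
H, xe, ye = np.histogram2d(along, across, bins=(20, 10), density=True)

cell = np.diff(xe)[:, None] * np.diff(ye)[None, :]   # bin areas
total = (H * cell).sum()                             # should be 1.0
```

In practice one would smooth such a histogram (e.g., with a kernel density estimate) and condition on the paleoseismic record, as the paper's systematic comparison against observed data does.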
NASA Technical Reports Server (NTRS)
Lyons, Walter A.; Pielke, Roger A.; Cotton, William R.; Keen, Cecil S.; Moon, Dennis A.
1992-01-01
Sea breeze thunderstorms during quiescent synoptic conditions account for 40 percent of Florida rainfall, and are the dominant feature of April-October weather at the Kennedy Space Center (KSC). An effort is presently made to assess the feasibility of a mesoscale numerical model in improving the point-specific thunderstorm forecasting accuracy at the KSC, in the 2-12 hour time frame. Attention is given to the Applied Regional Atmospheric Modeling System.
NASA Astrophysics Data System (ADS)
Auletta, Gianluca; Ditommaso, Rocco; Iacovino, Chiara; Carlo Ponzo, Felice; Pina Limongelli, Maria
2016-04-01
Continuous monitoring based on vibrational identification methods is increasingly employed with the aim of evaluating the health of existing structures and infrastructure and the performance of safety interventions over time. In case of earthquakes, data acquired by continuous monitoring systems can be used to localize and quantify possible damage to a monitored structure using appropriate algorithms based on the variations of structural parameters. Most damage identification methods are based on the variation of a few modal and/or non-modal parameters: the former are strictly related to the structural eigenfrequencies, equivalent viscous damping factors and mode shapes; the latter are based on the variation of parameters related to the geometric characteristics of the monitored structure, whose variations can be related to damage. In this work, results retrieved from the application of a curvature-evolution-based method and an interpolation-error-based method are compared. The first method is based on the evaluation of the curvature variation (related to the fundamental mode of vibration) over time and compares the variations before, during and after the earthquake. The Interpolation Method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. A damage feature is defined in terms of the error related to the use of a spline function in interpolating the ODSs of the structure: statistically significant variations of the interpolation error between two successive inspections of the structure indicate the onset of damage. Both methods have been applied using both numerical data retrieved from nonlinear FE models and experimental tests on scaled structures carried out on the shaking table of the University of Basilicata. Acknowledgements This study was partially funded by the Italian Civil Protection Department within the project DPC
Frontiers in Numerical Relativity
NASA Astrophysics Data System (ADS)
Evans, Charles R.; Finn, Lee S.; Hobill, David W.
2011-06-01
Preface; Participants; Introduction; 1. Supercomputing and numerical relativity: a look at the past, present and future David W. Hobill and Larry L. Smarr; 2. Computational relativity in two and three dimensions Stuart L. Shapiro and Saul A. Teukolsky; 3. Slowly moving maximally charged black holes Robert C. Ferrell and Douglas M. Eardley; 4. Kepler's third law in general relativity Steven Detweiler; 5. Black hole spacetimes: testing numerical relativity David H. Bernstein, David W. Hobill and Larry L. Smarr; 6. Three dimensional initial data of numerical relativity Ken-ichi Oohara and Takashi Nakamura; 7. Initial data for collisions of black holes and other gravitational miscellany James W. York, Jr.; 8. Analytic-numerical matching for gravitational waveform extraction Andrew M. Abrahams; 9. Supernovae, gravitational radiation and the quadrupole formula L. S. Finn; 10. Gravitational radiation from perturbations of stellar core collapse models Edward Seidel and Thomas Moore; 11. General relativistic implicit radiation hydrodynamics in polar sliced space-time Paul J. Schinder; 12. General relativistic radiation hydrodynamics in spherically symmetric spacetimes A. Mezzacappa and R. A. Matzner; 13. Constraint preserving transport for magnetohydrodynamics John F. Hawley and Charles R. Evans; 14. Enforcing the momentum constraints during axisymmetric spacelike simulations Charles R. Evans; 15. Experiences with an adaptive mesh refinement algorithm in numerical relativity Matthew W. Choptuik; 16. The multigrid technique Gregory B. Cook; 17. Finite element methods in numerical relativity P. J. Mann; 18. Pseudo-spectral methods applied to gravitational collapse Silvano Bonazzola and Jean-Alain Marck; 19. Methods in 3D numerical relativity Takashi Nakamura and Ken-ichi Oohara; 20. Nonaxisymmetric rotating gravitational collapse and gravitational radiation Richard F. Stark; 21. Nonaxisymmetric neutron star collisions: initial results using smooth particle hydrodynamics
Performance Comparison Of Evolutionary Algorithms For Image Clustering
NASA Astrophysics Data System (ADS)
Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.
2014-09-01
Evolutionary computation tools are able to process real-valued numerical sets in order to extract suboptimal solutions of a designed problem. Data clustering algorithms have been intensively used for image segmentation in remote sensing applications. Despite the wide usage of evolutionary algorithms for data clustering, their clustering performances have been scarcely studied by using clustering validation indexes. In this paper, the recently proposed evolutionary algorithms (i.e., Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA) and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (i.e., k-means, fcm, som networks) have been used to cluster images, and their performances have been compared by using four clustering validation indexes. Experimental test results showed that evolutionary algorithms give more reliable cluster centers than classical clustering techniques, but their convergence time is quite long.
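As a reference point for the classical baselines named above, here is a minimal k-means (Lloyd's algorithm) sketch. The two synthetic Gaussian blobs stand in for pixel feature vectors; real use would cluster multi-band image data:

```python
import numpy as np

# Minimal k-means (Lloyd's algorithm) on two well-separated synthetic blobs.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0.0, 0.3, (100, 2)),
                    rng.normal(5.0, 0.3, (100, 2))])

def kmeans(X, k, iters=50):
    # deterministic spread-out initialization (illustrative choice)
    centers = X[np.linspace(0, len(X) - 1, k, dtype=int)].copy()
    for _ in range(iters):
        # assign each point to its nearest center, then recompute the means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(d, axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centers, labels

centers, labels = kmeans(X, 2)   # recovered centers near (0,0) and (5,5)
```

The evolutionary algorithms in the paper instead treat the set of cluster centers as a candidate solution and evolve a population of them against a clustering validation index, trading runtime for robustness to bad initializations.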
NASA Astrophysics Data System (ADS)
Sprenger, Lisa; Lange, Adrian; Odenbach, Stefan
2013-12-01
Ferrofluids are colloidal suspensions consisting of magnetic nanoparticles dispersed in a carrier liquid. Their thermodiffusive behaviour is rather strong compared to molecular binary mixtures, leading to a Soret coefficient (ST) of 0.16 K-1. Former experiments with dilute magnetic fluids have been done with thermogravitational columns or horizontal thermodiffusion cells by different research groups. Considering the horizontal thermodiffusion cell, a former analytical approach has been used to solve the phenomenological diffusion equation in one dimension assuming a constant concentration gradient over the cell's height. The current experimental work is based on the horizontal separation cell and emphasises the comparison of the concentration development in different concentrated magnetic fluids and at different temperature gradients. The ferrofluid investigated is the kerosene-based EMG905 (Ferrotec) to be compared with the APG513A (Ferrotec), both containing magnetite nanoparticles. The experiments prove that the separation process linearly depends on the temperature gradient and that a constant concentration gradient develops in the setup due to the separation. Analytical one dimensional and numerical three dimensional approaches to solve the diffusion equation are derived to be compared with the solution used so far for dilute fluids to see if formerly made assumptions also hold for higher concentrated fluids. Both, the analytical and numerical solutions, either in a phenomenological or a thermodynamic description, are able to reproduce the separation signal gained from the experiments. The Soret coefficient can then be determined to 0.184 K-1 in the analytical case and 0.29 K-1 in the numerical case. Former theoretical approaches for dilute magnetic fluids underestimate the strength of the separation in the case of a concentrated ferrofluid.
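The steady-state balance behind the separation measurement can be sketched numerically. Zero mass flux, j = -D (dc/dz + S_T c(1-c) dT/dz) = 0, gives dc/dz = -S_T c(1-c) dT/dz; for weak separation c(1-c) is nearly constant, so a constant concentration gradient develops, as the experiments confirm. The cell height, mean concentration, and temperature gradient below are illustrative assumptions; S_T is the abstract's analytical value:

```python
import numpy as np

# Steady-state thermodiffusion: integrate dc/dz = -S_T c(1-c) dT/dz
# across the cell height and compare with the weak-separation linear slope.
S_T = 0.184      # Soret coefficient from the abstract's analytical result (1/K)
dTdz = 200.0     # imposed temperature gradient (K/m), assumed
h = 0.001        # separation cell height (m), assumed
c0 = 0.05        # mean volume concentration, assumed

z = np.linspace(0, h, 1001)
dz = z[1] - z[0]
c = np.empty_like(z)
c[0] = c0
for i in range(len(z) - 1):          # explicit Euler down the cell height
    c[i + 1] = c[i] - dz * S_T * c[i] * (1 - c[i]) * dTdz

slope = (c[-1] - c[0]) / h           # nearly constant gradient
expected = -S_T * c0 * (1 - c0) * dTdz   # weak-separation approximation
```

The concentrated-fluid corrections discussed in the paper enter through the c(1-c) factor and concentration-dependent transport coefficients, which is why dilute-limit formulas underestimate the separation.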
NASA Astrophysics Data System (ADS)
Raghavan, V.; Whitney, Scott E.; Ebmeier, Ryan J.; Padhye, Nisha V.; Nelson, Michael; Viljoen, Hendrik J.; Gogos, George
2006-09-01
In this article, experimental and numerical analyses to investigate the thermal control of an innovative vortex tube based polymerase chain reaction (VT-PCR) thermocycler are described. VT-PCR is capable of rapid DNA amplification and real-time optical detection. The device rapidly cycles six 20 μl, 96 bp λ-DNA samples between the PCR stages (denaturation, annealing, and elongation) for 30 cycles in approximately 6 min. Two-dimensional numerical simulations have been carried out using computational fluid dynamics (CFD) software FLUENT v.6.2.16. Experiments and CFD simulations have been carried out to measure/predict the temperature variation between the samples and within each sample. Heat transfer rate (primarily dictated by the temperature differences between the samples and the external air heating or cooling them) governs the temperature distribution between and within the samples. Temperature variation between and within the samples during the denaturation stage has been quite uniform (maximum variation around ±0.5 and 1.6 °C, respectively). During cooling, by adjusting the cold release valves in the VT-PCR during some stage of cooling, the heat transfer rate has been controlled. Improved thermal control, which increases the efficiency of the PCR process, has been obtained both experimentally and numerically by slightly decreasing the rate of cooling. Thus, almost uniform temperature distribution between and within the samples (within 1 °C) has been attained for the annealing stage as well. It is shown that the VT-PCR is a fully functional PCR machine capable of amplifying specific DNA target sequences in less time than conventional PCR devices.
Schubert, Frank; Wiggenhauser, Herbert; Lausch, Regine
2004-04-01
In impact-echo testing of finite concrete structures, reflections of Rayleigh and body waves from lateral boundaries significantly affect time-domain signals and spectra. In the present paper we demonstrate by numerical simulations and experimental measurements at a concrete specimen that these reflections can lead to systematic errors in thickness determination. These effects depend not only on the dimensions of the specimen, but also on the location of the actual measuring point and on the duration of the detected time-domain signal. PMID:15047403
NASA Astrophysics Data System (ADS)
Malamataris, Nikolaos; Liakos, Anastasios
2015-11-01
The exact value of the Reynolds number at which separation first occurs in the flow around a circular cylinder is still a matter of research. This work connects the inception of separation with the calculation of a positive pressure gradient around the circumference of the cylinder. The hypothesis is that inception of separation occurs when the pressure gradient becomes positive around the circumference. Among the most-cited laboratory experiments on the inception of separation, only Thom measured the pressure gradient there at very low Reynolds numbers (up to Re = 3.5). For this reason, the experimental conditions of his tunnel are simulated in a new numerical experiment. The full Navier-Stokes equations in both two and three dimensions are solved with an in-house code that utilizes Galerkin finite elements. In the two-dimensional numerical experiment, inception of separation is observed at Re = 4.3, which is the lowest Reynolds number at which inception has been reported computationally. Currently, the three-dimensional experiment is under way, in order to determine whether three-dimensional separation effects arise under the conditions of Thom's experiments.
NASA Astrophysics Data System (ADS)
Plach, Andreas; Proschek, Veronika; Kirchengast, Gottfried
2014-05-01
We employ the Low Earth Orbit (LEO-LEO) microwave and infrared-laser occultation (LMIO) method to derive a full set of thermodynamic state variables from microwave signals and climate benchmark profiling of greenhouse gases (GHGs) and line-of-sight (l.o.s.) wind using infrared-laser signals. The focus lies on the upper troposphere/lower stratosphere region (UTLS - 5 km to 35 km). The GHG retrieval errors are generally smaller than 1% to 3% r.m.s., at a vertical resolution of about 1 km. In this study we focus on the infrared-laser part of LMIO, where we introduce a new, advanced wind retrieval algorithm to derive accurate l.o.s. wind profiles. The wind retrieval uses the reasonable assumption of the wind blowing along spherical shells (horizontal winds), so that the l.o.s. wind speed can be retrieved using an Abel integral transform. A 'delta-differential transmission' principle is applied to two thoroughly selected infrared-laser signals placed at the wings of the highly symmetric C18OO absorption line (nominally ±0.004 cm-1 from the line center near 4767 cm-1) plus a related 'off-line' reference signal. The delta-differential transmission obtained by differencing these signals is free of atmospheric broadband effects and is proportional to the wind-induced Doppler shift; it serves as the integrand of the Abel transform. The Doppler frequency shift calculated along with the wind retrieval is in turn also used in the GHG retrieval to correct the frequency of GHG-sensitive infrared-laser signals for the wind-induced Doppler shift, which enables improved GHG estimation. This step therefore provides the capability to correct potential wind-induced residual errors of the GHG retrieval in case of strong winds. We performed end-to-end simulations to test the performance of the new retrieval in windy air. The simulations used realistic atmospheric conditions (thermodynamic state variables and wind profiles) from an analysis field of the European Centre for
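The discrete counterpart of the Abel integral transform used here is often an "onion peeling" scheme: assume the retrieved quantity is constant within concentric shells, so each line-of-sight integral becomes a weighted sum over shells and the resulting triangular system can be solved by back-substitution. The shell radii and test profile below are made up for illustration:

```python
import numpy as np

# Onion-peeling sketch of an Abel-type inversion: chords through concentric
# shells give an upper-triangular system A f = F relating shell values f to
# line-of-sight integrals F.
r = np.linspace(1.0, 2.0, 51)          # shell boundaries (arbitrary units)
n = len(r) - 1

# Path length of the chord with impact parameter r[i] through shell j >= i.
A = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        A[i, j] = 2 * (np.sqrt(r[j + 1]**2 - r[i]**2)
                       - np.sqrt(max(r[j]**2 - r[i]**2, 0.0)))

centers = 0.5 * (r[:-1] + r[1:])
f_true = np.exp(-(centers - 1.2)**2 / 0.02)   # hypothetical shell-wise profile
F = A @ f_true                                # simulated l.o.s. integrals

f_rec = np.linalg.solve(A, F)                 # back-substitution recovers f
```

In the actual retrieval the integrand is the delta-differential transmission and the recovered quantity is the l.o.s. wind speed, but the geometric structure of the inversion is the same.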
Quarini, G L; Learmonth, I D; Gheduzzi, S
2006-07-01
Acrylic cements are commonly used to attach prosthetic components in joint replacement surgery. The cements set in short periods of time by a complex polymerization of initially liquid monomer compounds into solid structures with accompanying significant heat release. Two main problems arise from this form of fixation: the first is the potential damage caused by the temperature excursion, and the second is incomplete reaction leaving active monomer compounds, which can potentially be slowly released into the patient. This paper presents a numerical model predicting the temperature-time history in an idealized prosthetic-cement-bone system. Using polymerization kinetics equations from the literature, the degree of polymerization is predicted, which is found to be very dependent on the thermal history of the setting process. Using medical literature, predictions for the degree of thermal bone necrosis are also made. The model is used to identify the critical parameters controlling thermal and unreacted monomer distributions. PMID:16898219
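A toy version of the coupled model described above is one-dimensional heat conduction through a cement layer with an exothermic first-order cure reaction governed by Arrhenius kinetics. All material constants below are illustrative stand-ins, not the paper's values:

```python
import numpy as np

# 1-D explicit finite-difference model: heat diffusion plus an exothermic
# first-order cure reaction. All constants are illustrative assumptions.
nx, L = 51, 0.005                  # grid nodes, cement layer thickness (m)
dx = L / (nx - 1)
alpha_th = 1e-7                    # thermal diffusivity (m^2/s), assumed
dt = 0.2 * dx**2 / alpha_th        # time step within explicit stability limit
k0, Ea, R = 1e8, 6e4, 8.314        # Arrhenius cure kinetics, assumed
Q = 120.0                          # adiabatic temperature rise at full cure (K), assumed

T = np.full(nx, 310.0)             # start at body temperature (K)
a = np.zeros(nx)                   # degree of cure (0 = liquid, 1 = fully set)

for _ in range(4000):
    rate = k0 * np.exp(-Ea / (R * T)) * (1 - a)   # first-order Arrhenius cure
    a = np.clip(a + dt * rate, 0.0, 1.0)
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T = T + dt * alpha_th * lap + Q * dt * rate   # conduction + exotherm
    T[0] = T[-1] = 310.0           # bone/prosthesis walls held at body temperature
```

The strong coupling visible even in this sketch (the cure rate depends on the local temperature history, which the exotherm itself shapes) is exactly why the paper finds the degree of polymerization so sensitive to the thermal history of setting.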
NASA Technical Reports Server (NTRS)
Scalapino, D. J.; Sugar, R. L.; White, S. R.; Bickers, N. E.; Scalettar, R. T.
1989-01-01
Numerical simulations on the half-filled three-dimensional Hubbard model clearly show the onset of Neel order. Simulations of the two-dimensional electron-phonon Holstein model show the competition between the formation of a Peierls-CDW state and a superconducting state. However, the behavior of the partly filled two-dimensional Hubbard model is more difficult to determine. At half-filling, the antiferromagnetic correlations grow as T is reduced. Doping away from half-filling suppresses these correlations, and it is found that there is a weak attractive pairing interaction in the d-wave channel. However, the strength of the pair field susceptibility is weak at the temperatures and lattice sizes that have been simulated, and the nature of the low-temperature state of the nearly half-filled Hubbard model remains open.
NASA Astrophysics Data System (ADS)
Chevallier, L.
2010-11-01
Tests are presented of the 1D Accelerated Lambda Iteration method, which is widely used for solving the radiative transfer equation for a stellar atmosphere. We use our ARTY code as a reference solution and tables for these tests are provided. We model a static idealized stellar atmosphere, which is illuminated on its inner face and where internal sources are distributed with weak or strong gradients. This is an extension of published tests for a slab without incident radiation and gradients. Typical physical conditions for the continuum radiation and spectral lines are used, as well as typical values for the numerical parameters in order to reach a 1% accuracy. It is shown that the method is able to reach such an accuracy for most cases but the spatial discretization has to be refined for strong gradients and spectral lines, beyond the scope of realistic stellar atmospheres models. Discussion is provided on faster methods.
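The iteration being tested can be sketched for the two-level source function S = (1 - eps) Λ[S] + eps B, with eps the photon destruction probability. The Λ matrix below is a made-up contraction standing in for the real angle- and frequency-integrated transfer operator; none of it reproduces the ARTY setup:

```python
import numpy as np

# Accelerated Lambda Iteration (ALI) sketch with a diagonal approximate
# operator Lambda*. The dense Lambda matrix here is an illustrative contraction,
# not a real radiative transfer operator.
rng = np.random.default_rng(1)
n, eps = 40, 1e-2
off = rng.random((n, n))
np.fill_diagonal(off, 0.0)
off *= 0.35 / off.sum(axis=1, keepdims=True)
Lam = 0.6 * np.eye(n) + off            # rows sum to 0.95 -> a contraction
B = np.ones(n)

# Direct solve of (I - (1-eps) Lambda) S = eps B for reference.
S_exact = np.linalg.solve(np.eye(n) - (1 - eps) * Lam, eps * B)

# ALI: precondition each sweep with the (cheap, local) diagonal part of Lambda.
Lstar = np.diag(np.diag(Lam))
S = np.zeros(n)
for _ in range(200):
    resid = (1 - eps) * (Lam @ S) + eps * B - S
    S = S + np.linalg.solve(np.eye(n) - (1 - eps) * Lstar, resid)
```

Plain Lambda iteration stalls when eps is small because its contraction factor approaches one; the approximate-operator correction is what restores fast convergence, which is the behavior the paper's accuracy tests probe.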
Parameter incremental learning algorithm for neural networks.
Wan, Sheng; Banta, Larry E
2006-11-01
In this paper, a novel stochastic (or online) training algorithm for neural networks, named the parameter incremental learning (PIL) algorithm, is proposed and developed. The main idea of the PIL strategy is that the learning algorithm should not only adapt to the newly presented input-output training pattern by adjusting parameters, but also preserve the prior results. A general PIL algorithm for feedforward neural networks is accordingly presented as the first-order approximate solution to an optimization problem, where the performance index is the combination of proper measures of preservation and adaptation. The PIL algorithms for the multilayer perceptron (MLP) are subsequently derived. Numerical studies show that for all three benchmark problems used in this paper, the PIL algorithm for MLP is measurably superior to the standard online backpropagation (BP) algorithm and the stochastic diagonal Levenberg-Marquardt (SDLM) algorithm in terms of convergence speed and accuracy. Other appealing features of the PIL algorithm are that it is computationally as simple as the BP algorithm, and as easy to use as the BP algorithm. It can therefore be applied, with better performance, to any situation where the standard online BP algorithm is applicable. PMID:17131658
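For context, the baseline the PIL algorithm is compared against is plain stochastic (online) backpropagation, updating the weights after every pattern. The tiny MLP, learning rate, and XOR task below are illustrative choices; the PIL update rule itself is not reproduced here:

```python
import numpy as np

# Standard online backpropagation on a 2-8-1 sigmoid MLP learning XOR.
# One weight update per training pattern (stochastic/online mode).
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))
lr = 0.5

def mse():
    return float(np.mean((sig(sig(X @ W1 + b1) @ W2 + b2) - Y) ** 2))

err0 = mse()
for epoch in range(2000):
    for x, y in zip(X, Y):               # online: one pattern at a time
        h = sig(x @ W1 + b1)
        o = sig(h @ W2 + b2)
        do = (o - y) * o * (1 - o)       # output-layer delta
        dh = (do @ W2.T) * h * (1 - h)   # hidden-layer delta
        W2 -= lr * np.outer(h, do); b2 -= lr * do
        W1 -= lr * np.outer(x, dh); b1 -= lr * dh

err1 = mse()                             # training error after learning
```

The PIL modification adds a preservation term to each such update, penalizing movement away from the previous weights while still fitting the new pattern; the paper derives the resulting update as a first-order solution of that combined objective.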
NASA Technical Reports Server (NTRS)
Knox, C. E.; Cannon, D. G.
1980-01-01
A simple flight management descent algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control was developed and flight tested. This algorithm provides a three dimensional path with terminal area time constraints (four dimensional) for an airplane to make an idle thrust, clean configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm is described. The results of the flight tests flown with the Terminal Configured Vehicle airplane are presented.
NASA Technical Reports Server (NTRS)
Knox, C. E.
1983-01-01
A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.
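The core planning computation can be sketched as integrating a linearized descent-rate model to obtain the time and along-track distance of an idle descent to the metering fix. The linear coefficients, ground speed, and altitudes below are illustrative assumptions, not the tested airplane's performance data.

```python
def descent_profile(h_top, h_fix, gs_kt, a, b, steps=1000):
    """Time (min) and along-track distance (nm) for an idle descent.

    Descent rate in ft/min is modeled as hdot(h) = a + b*h, a crude
    stand-in for linearized performance data; ground speed gs_kt is
    held constant, ignoring wind and speed-schedule changes.
    """
    dh = (h_top - h_fix) / steps
    t_min, h = 0.0, h_top
    for _ in range(steps):
        h_mid = h - dh / 2.0            # midpoint altitude of this slice
        t_min += dh / (a + b * h_mid)   # minutes spent descending this slice
        h -= dh
    dist_nm = gs_kt * t_min / 60.0      # distance covered at constant ground speed
    return t_min, dist_nm

t, d = descent_profile(35000.0, 10000.0, gs_kt=380.0, a=1500.0, b=0.02)
print(round(t, 1), round(d, 1))
```

Given the time and distance, a top-of-descent point can be placed so the airplane reaches the fix at the designated time.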
NASA Technical Reports Server (NTRS)
Platnick, Steven; King, Michael D.; Wind, Galina; Amarasinghe, Nandana; Marchant, Benjamin; Arnold, G. Thomas
2012-01-01
Operational Moderate Resolution Imaging Spectroradiometer (MODIS) retrievals of cloud optical and microphysical properties (part of the archived products MOD06 and MYD06, for MODIS Terra and Aqua, respectively) are currently being reprocessed along with other MODIS Atmosphere Team products. The latest "Collection 6" processing stream, which is expected to begin production by summer 2012, includes updates to the previous cloud retrieval algorithm along with new capabilities. The 1 km retrievals, based on well-known solar reflectance techniques, include cloud optical thickness, effective particle radius, and water path, as well as thermodynamic phase derived from a combination of solar and infrared tests. Being both global and of high spatial resolution requires an algorithm that is computationally efficient and can perform over all surface types. Collection 6 additions and enhancements include: (i) absolute effective particle radius retrievals derived separately from the 1.6 and 3.7 μm bands (instead of differences relative to the standard 2.1 μm retrieval), (ii) comprehensive look-up tables for cloud reflectance and emissivity (no asymptotic theory) with a wind-speed interpolated Cox-Munk BRDF for ocean surfaces, (iii) retrievals for both liquid water and ice phases for each pixel, and a subsequent determination of the phase based, in part, on effective radius retrieval outcomes for the two phases, (iv) new ice cloud radiative models using roughened particles with a specified habit, (v) updated spatially complete global spectral surface albedo maps derived from MODIS Collection 5, (vi) enhanced pixel-level uncertainty calculations incorporating additional radiative error sources, including the MODIS L1B uncertainty index for assessing band- and scene-dependent radiometric uncertainties, and (vii) use of a new 1 km cloud top pressure/temperature algorithm (also part of MOD06) for atmospheric corrections and low cloud non-unity emissivity temperature adjustments.
New products of GOSAT/TANSO-FTS TIR CO2 and CH4 profiles: Algorithm and initial validation results
NASA Astrophysics Data System (ADS)
Saitoh, N.; Imasu, R.; Sugita, T.; Hayashida, S.; Shiomi, K.; Kawakami, S.; Machida, T.; Sawa, Y.; Matsueda, H.; Terao, Y.
2013-12-01
The Thermal and Near-infrared Sensor for Carbon Observation Fourier Transform Spectrometer (TANSO-FTS) on board the Greenhouse Gases Observing Satellite (GOSAT) simultaneously observes column abundances and profiles of CO2 and CH4 in the same field of view, from the shortwave infrared (SWIR) and thermal infrared (TIR) bands, respectively. The latest version of the GOSAT Level 1B (L1B) radiance spectra, version 160.160, is improved compared to the previous versions, but still has a bias judging from comparisons with spectral data of other coincident instruments. The bias is largest in the 14-15 micron band, which includes strong carbon dioxide absorption lines [Kataoka et al., 2013]; it probably causes a high bias in the mid-tropospheric carbon dioxide concentrations of the currently released V00.01 TIR products. In addition, the relatively low signal-to-noise ratio (SNR), less than 100, in the 7-8 micron band makes CH4 retrieval unstable. We have improved the algorithm for retrieving CO2 and CH4 profiles in order to overcome the spectral-bias and low-SNR problems. In our new algorithm, we treated surface temperature and surface emissivity as correction parameters for radiance-independent and radiance-dependent spectral biases, respectively, and retrieved them simultaneously with the gas retrieval. We used the 7-8 micron band (1140-1370 wavenumbers) for methane retrieval and the 10 and 14-15 micron bands (930-990, 1040-1090, 690-750, and 790-795 wavenumbers) for carbon dioxide retrieval. Temperature, water vapor, ozone, and nitrous oxide were retrieved simultaneously along with CO2 and CH4. CO2 profiles retrieved using our new algorithm show no clear bias in the mid-troposphere, in contrast to the previous V00.01 CO2 product. The newly retrieved CH4 profiles show better agreement with aircraft CH4 profiles than the a priori profiles.
Translation and integration of numerical atomic orbitals in linear molecules.
Heinäsmäki, Sami
2014-02-14
We present algorithms for translation and integration of atomic orbitals for LCAO calculations in linear molecules. The method applies to arbitrary radial functions given on a numerical mesh. The algorithms are based on pseudospectral differentiation matrices in two dimensions and the corresponding two-dimensional Gaussian quadratures. As a result, multicenter overlap and Coulomb integrals can be evaluated effectively. PMID:24527905
Translation and integration of numerical atomic orbitals in linear molecules
NASA Astrophysics Data System (ADS)
Heinäsmäki, Sami
2014-02-01
We present algorithms for translation and integration of atomic orbitals for LCAO calculations in linear molecules. The method applies to arbitrary radial functions given on a numerical mesh. The algorithms are based on pseudospectral differentiation matrices in two dimensions and the corresponding two-dimensional Gaussian quadratures. As a result, multicenter overlap and Coulomb integrals can be evaluated effectively.
NASA Astrophysics Data System (ADS)
Won, Jihye; Park, Kwan-Dong
2015-04-01
Real-time PPP-RTK positioning algorithms were developed for the purpose of obtaining precise coordinates of moving platforms. In this implementation, corrections for the satellite orbits and satellite clocks were taken from the IGS-RTS products, while the ionospheric delay was removed through the ionosphere-free combination and the tropospheric delay was either taken care of using the Global Pressure and Temperature (GPT) model or estimated as a stochastic parameter. To improve the convergence speed, all available GPS and GLONASS measurements were used and the Extended Kalman Filter parameters were optimized. To validate our algorithms, we collected GPS and GLONASS data from a geodetic-quality receiver installed on the roof of a moving vehicle in an open-sky environment and used IGS final products of satellite orbits and clock offsets. The horizontal positioning error fell below 10 cm within 5 minutes and remained below 10 cm even after the vehicle started moving. When the IGS-RTS products and the GPT model were used instead of the IGS precise products, the positioning accuracy of the moving vehicle was maintained at better than 20 cm once convergence was achieved, at around 6 minutes.
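A hedged sketch of "estimated as a stochastic parameter": a scalar Kalman filter with a random-walk state tracking a slowly varying delay from noisy measurements. The noise variances and the 2.4 m "truth" are arbitrary illustration values; a real PPP-RTK filter carries many states (position, clocks, ambiguities) rather than this single scalar.

```python
import random

def kalman_step(x, P, z, q, r):
    """One predict/update cycle of a scalar Kalman filter.

    x, P : state estimate and its variance; z : measurement;
    q, r : process and measurement noise variances. A random-walk
    state like this is one common way to carry a residual
    tropospheric delay as a stochastic parameter.
    """
    P = P + q                 # predict: random-walk state model
    K = P / (P + r)           # Kalman gain
    x = x + K * (z - x)       # update with the measurement innovation
    P = (1.0 - K) * P
    return x, P

random.seed(1)
truth = 2.4                   # assumed residual delay (metres), for illustration
x, P = 0.0, 100.0             # diffuse initial state
for _ in range(200):
    z = truth + random.gauss(0.0, 0.3)
    x, P = kalman_step(x, P, z, q=1e-4, r=0.09)
print(round(x, 2))
```

The small process noise `q` lets the estimate follow slow drifts while averaging down the measurement noise, which is why the parameter "converges" over the first minutes of data.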
Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models
Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou
2015-01-01
Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method uses two kinds of information: function values and gradient values. The two methods both possess some good properties: (1) βk ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; and (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409
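Property (1), the nonnegative PRP parameter, is the classic PRP+ safeguard: βk = max(0, g_{k+1}ᵀ(g_{k+1} − g_k)/‖g_k‖²). The sketch below uses an exact line search, which is available in closed form only for quadratics and stands in for the paper's line-search-free machinery.

```python
def grad(A, b, x):
    # gradient of f(x) = 0.5 * x^T A x - b^T x
    return [sum(A[i][j] * x[j] for j in range(len(x))) - b[i] for i in range(len(x))]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def prp_plus(A, b, x, iters=50):
    """PRP+ conjugate gradient on a quadratic: beta_k = max(0, PRP formula)."""
    g = grad(A, b, x)
    d = [-gi for gi in g]                      # initial steepest-descent direction
    for _ in range(iters):
        Ad = [sum(A[i][j] * d[j] for j in range(len(d))) for i in range(len(d))]
        alpha = -dot(g, d) / dot(d, Ad)        # exact line search (quadratic only)
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(A, b, x)
        if dot(g_new, g_new) < 1e-18:          # converged
            break
        beta = max(0.0, dot(g_new, [gn - go for gn, go in zip(g_new, g)]) / dot(g, g))
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = prp_plus(A, b, [0.0, 0.0])                 # solves A x = b
print([round(v, 4) for v in x])
```

On this 2x2 quadratic the method terminates in two iterations at the solution of A x = b, i.e. x = (1/11, 7/11).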
NASA Astrophysics Data System (ADS)
Losiak, Anna; Czechowski, Leszek; Velbel, Michael A.
2015-12-01
Gypsum, a mineral that requires water to form, is common on the surface of Mars. Most of it originated more than 3.5 Gyr ago, when the Red Planet was more humid than it is now. However, occurrences of gypsum dune deposits around the North Polar Residual Cap (NPRC) seem to be surprisingly young: late Amazonian in age. This shows that liquid water was present on Mars even at times when surface conditions were as cold and dry as they are at present. A recently proposed mechanism for gypsum formation involves weathering of dust within ice (e.g., Niles, P.B., Michalski, J. [2009]. Nat. Geosci. 2, 215-220.). However, none of the previous studies have determined whether this process is possible under current martian conditions. Here, we use numerical modelling of heat transfer to show that during the warmest days of the summer, solar irradiation may be sufficient to melt pure water ice located below a layer of dark dust particles (albedo ⩽ 0.13) lying on the steepest sections of the equator-facing slopes of the spiral troughs within the martian NPRC. During times of high irradiance at the north pole (every 51 ka, caused by variation of the orbital and rotational parameters of Mars; e.g., Laskar, J. et al. [2002]. Nature 419, 375-377.), this process could have taken place over larger parts of the spiral troughs. The existence of small amounts of liquid water close to the surface, even under current martian conditions, fulfils one of the main requirements necessary to explain the formation of the extensive gypsum deposits around the NPRC. It also changes our understanding of the degree of current geological activity on Mars and has important implications for estimating the astrobiological potential of Mars.
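The heat-transfer question can be illustrated with a textbook explicit finite-difference conduction model: a sun-warmed surface boundary (the dark dust layer) driving heat into cold ice below. All parameter values here are generic illustration values, not the paper's model inputs.

```python
def heat_ice(T_surf, T0, k, rho, c, depth, nx, t_end):
    """Explicit 1-D heat conduction into an ice column.

    A fixed surface temperature T_surf (e.g. a sun-heated dark dust
    layer) drives conduction into ice initially at T0; the far boundary
    is held at T0. Values are assumptions for illustration only.
    """
    alpha = k / (rho * c)                # thermal diffusivity, m^2/s
    dx = depth / (nx - 1)
    dt = 0.4 * dx * dx / alpha           # stable explicit time step (r = 0.4 < 0.5)
    T = [T0] * nx
    T[0] = T_surf
    for _ in range(int(t_end / dt)):
        Tn = T[:]
        for i in range(1, nx - 1):
            Tn[i] = T[i] + alpha * dt / dx**2 * (T[i + 1] - 2 * T[i] + T[i - 1])
        T = Tn
    return T

# generic ice properties: k ~ 2.2 W/m/K, rho ~ 917 kg/m^3, c ~ 2000 J/kg/K
T = heat_ice(T_surf=273.0, T0=200.0, k=2.2, rho=917.0, c=2000.0,
             depth=0.5, nx=51, t_end=3600.0 * 6)   # six hours of heating
print(round(T[5], 1))                              # temperature 5 cm down
```

With a diffusion length of roughly sqrt(alpha * t) ≈ 16 cm over six hours, the warm front only reaches the upper decimetres, which is why melting, if any, is confined to a thin near-surface layer.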
NASA Technical Reports Server (NTRS)
Guo, Li-Wen; Cardullo, Frank M.; Telban, Robert J.; Houck, Jacob A.; Kelly, Lon C.
2003-01-01
A study was conducted employing the Visual Motion Simulator (VMS) at the NASA Langley Research Center, Hampton, Virginia. This study compared two motion cueing algorithms, the NASA adaptive algorithm and a new optimal-control-based algorithm. The study also included the effects of transport delays and their compensation. The delay compensation algorithm employed is one developed by Richard McFarland at NASA Ames Research Center. This paper reports the analysis of experimental data collected from preliminary simulation tests. This series of tests was conducted to evaluate the protocols and the methodology of data analysis in preparation for more comprehensive tests to be conducted during the spring of 2003; therefore, only three pilots were used. Nevertheless, some useful results were obtained. The experimental conditions involved three maneuvers: a straight-in approach with a rotating wind vector, an offset approach with turbulence and gust, and a takeoff with and without an engine failure shortly after liftoff. For each of the maneuvers, the two motion conditions were combined with four delay conditions (0, 50, 100, and 200 ms), with and without compensation.
NASA Astrophysics Data System (ADS)
Wildman, R. D.; Jenkins, J. T.; Krouskop, P. E.; Talbot, J.
2006-07-01
A comparison of the predictions of a simple kinetic theory with experimental and numerical results for a vibrated granular bed consisting of nearly elastic particles of two sizes has been performed. The results show good agreement between the data sets for a range of numbers of each size of particle, and are particularly good for particle beds containing similar proportions of each species. The agreement suggests that such a model may be a good starting point for describing polydisperse systems of granular flows.
Schäfer, Dirk; Köber, Ralf; Dahmke, Andreas
2003-09-01
The successful dechlorination of mixtures of chlorinated hydrocarbons with zero-valent metals requires information concerning the kinetics of simultaneous degradation of different contaminants. This includes intraspecies competitive effects (loading of the reactive iron surface by a single contaminant) as well as interspecies competition of several contaminants for the reactive sites available. In columns packed with zero-valent iron, the degradation behaviour of trichloroethylene (TCE), cis-dichloroethylene (DCE) and mixtures of both was measured in order to investigate interspecies competition. Although a decreasing rate of dechlorination is to be expected, when several degradable substances compete for the reactive sites on the iron surface, TCE degradation is nearly unaffected by the presence of cis-DCE. In contrast, cis-DCE degradation rates decrease significantly when TCE is added. A new modelling approach is developed in order to identify and quantify the observed competitive effects. The numerical model TBC (Transport, Biochemistry and Chemistry, Schäfer et al., 1998a) is used to describe adsorption, desorption and dechlorination in a mechanistic way. Adsorption and degradation of a contaminant based on a limited number of reactive sites leads to a combined zero- and first-order degradation kinetics for high and low concentrations, respectively. The adsorption of several contaminants with different sorption parameters to a limited reactive surface causes interspecies competition. The reaction scheme and the parameters required are successfully transferred from Arnold and Roberts (2000b) to the model TBC. The degradation behaviour of the mixed contamination observed in the column experiments can be related to the adsorption properties of TCE and cis-DCE. By predicting the degradation of the single substances TCE and cis-DCE as well as mixtures of both, the calibrated model is used to investigate the effects of interspecies competition on the design of
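The combined zero- and first-order degradation kinetics with site competition described above is the classic Langmuir-Hinshelwood form and can be sketched directly; the rate and adsorption constants below are invented for illustration and are not the fitted TBC parameters.

```python
def rates(conc, k, K):
    """Competitive Langmuir-Hinshelwood degradation rates.

    r_i = k_i * K_i * C_i / (1 + sum_j K_j * C_j)
    High concentration -> near zero-order; low concentration -> near
    first-order; all species compete for the same finite pool of
    reactive sites on the iron surface.
    """
    denom = 1.0 + sum(K[s] * conc[s] for s in conc)
    return {s: k[s] * K[s] * conc[s] / denom for s in conc}

k = {"TCE": 1.0, "cisDCE": 0.8}        # illustrative maximum rates
K = {"TCE": 50.0, "cisDCE": 2.0}       # TCE assumed to adsorb far more strongly

alone = rates({"TCE": 0.0, "cisDCE": 0.1}, k, K)
mixed = rates({"TCE": 0.1, "cisDCE": 0.1}, k, K)
print(round(alone["cisDCE"], 4), round(mixed["cisDCE"], 4))
```

With a much larger adsorption constant for TCE, adding TCE sharply suppresses the cis-DCE rate while the TCE rate is barely affected, reproducing the qualitative column-experiment behaviour.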
NASA Astrophysics Data System (ADS)
Rawat, A.; Aucan, J.; Ardhuin, F.
2012-12-01
Sea level variations of the order of 1 cm at scales under 30 km are of great interest for the future Surface Water Ocean Topography (SWOT) satellite mission. That satellite should provide high-resolution maps of the sea surface height for analysis of mesoscale to sub-mesoscale currents, but this will require filtering of all gravity-wave motions in the data. Free infragravity waves (FIGWs) are generated and radiate offshore when swells and/or wind seas and their associated bound infragravity waves impact exposed coastlines. Free infragravity waves have dominant periods between 1 and 10 minutes and horizontal wavelengths of up to tens of kilometers. Given these wavelengths and amplitudes, the infragravity wave field can constitute a significant fraction of the signal measured by the future SWOT mission. In this study, we analyze data from recovered bottom pressure recorders of the Deep-ocean Assessment and Reporting of Tsunami (DART) program. This analysis includes data spanning several years between 2006 and 2010, from stations at different latitudes in the North and South Pacific, the North Atlantic, the Gulf of Mexico, and the Caribbean Sea. We present and discuss the following conclusions: (1) The amplitude of free infragravity waves can reach several centimeters, higher than the precision sought for the SWOT mission. (2) The free infragravity signal is higher in the eastern North Pacific than in the western North Pacific, possibly due to smaller incident swell and seas impacting the nearby coastlines. (3) Free infragravity waves are higher in the North Pacific than in the North Atlantic, possibly owing to different average continental shelf configurations in the two basins. (4) There is a clear seasonal cycle at the high-latitude North Atlantic and Pacific stations that is much less pronounced or absent at the tropical stations, consistent with the generation mechanism of free infragravity waves. Our numerical model
Lane, J.W., Jr.; Buursink, M.L.; Haeni, F.P.; Versteeg, R.J.
2000-01-01
The suitability of common-offset ground-penetrating radar (GPR) to detect free-phase hydrocarbons in bedrock fractures was evaluated using numerical modeling and physical experiments. The results of one- and two-dimensional numerical modeling at 100 megahertz indicate that GPR reflection amplitudes are relatively insensitive to fracture apertures ranging from 1 to 4 mm. The numerical modeling and physical experiments indicate that differences in the fluids that fill fractures significantly affect the amplitude and the polarity of electromagnetic waves reflected by subhorizontal fractures. Air-filled and hydrocarbon-filled fractures generate low-amplitude reflections that are in phase with the transmitted pulse. Water-filled fractures create reflections with greater amplitude and opposite polarity than those created by air-filled or hydrocarbon-filled fractures. The results from the numerical modeling and physical experiments demonstrate that it is possible to distinguish water-filled fracture reflections from air- or hydrocarbon-filled fracture reflections; nevertheless, subsurface heterogeneity, antenna coupling changes, and other sources of noise will likely make it difficult to observe these changes in GPR field data. This indicates that the routine application of common-offset GPR reflection methods for detection of hydrocarbon-filled fractures will be problematic. Ideal cases will require appropriately processed, high-quality GPR data, ground-truth information, and detailed knowledge of subsurface physical properties. Conversely, the sensitivity of GPR methods to changes in subsurface physical properties as demonstrated by the numerical and experimental results suggests the potential of using GPR methods as a monitoring tool. GPR methods may be suited for monitoring pumping and tracer tests, changes in site hydrologic conditions, and remediation activities.
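The amplitude and polarity behaviour follows from the normal-incidence reflection coefficient between the host rock and the fracture fill, R = (sqrt(eps1) - sqrt(eps2)) / (sqrt(eps1) + sqrt(eps2)); the relative permittivity values below are typical textbook numbers assumed for illustration, not measured site properties.

```python
import math

def refl(eps1, eps2):
    """Normal-incidence reflection coefficient between two lossless
    dielectrics with relative permittivities eps1 (host) and eps2 (fill)."""
    n1, n2 = math.sqrt(eps1), math.sqrt(eps2)
    return (n1 - n2) / (n1 + n2)

rock = 6.0                        # assumed host-rock relative permittivity
for name, eps in [("air", 1.0), ("hydrocarbon", 2.0), ("water", 80.0)]:
    print(name, round(refl(rock, eps), 2))
```

Air and hydrocarbon fills give small positive (in-phase) coefficients that are hard to tell apart, while water's high permittivity gives a large negative coefficient, i.e. a strong reflection of opposite polarity, matching the observations above.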
Sobel, E.; Lange, K.; O`Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
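The combinatorial-optimization end of the algorithm range can be sketched with generic simulated annealing; the bit-flip toy problem below stands in for moves over haplotype vectors, and every tuning constant is an arbitrary illustration value.

```python
import math, random

def anneal(cost, neighbour, state, t0=1.0, cooling=0.995, steps=2000, seed=0):
    """Generic simulated annealing: accept downhill moves always and
    uphill moves with probability exp(-delta / temperature)."""
    rng = random.Random(seed)
    best = cur = state
    t = t0
    for _ in range(steps):
        cand = neighbour(cur, rng)
        delta = cost(cand) - cost(cur)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur = cand
            if cost(cur) < cost(best):
                best = cur
        t *= cooling                       # geometric cooling schedule
    return best

# toy stand-in problem: recover a hidden bit string by single-bit flips
target = (1, 0, 1, 1, 0, 1, 0, 0)
cost = lambda s: sum(a != b for a, b in zip(s, target))
def neighbour(s, rng):
    i = rng.randrange(len(s))
    return s[:i] + (1 - s[i],) + s[i + 1:]

best = anneal(cost, neighbour, state=(0,) * 8)
print(best, cost(best))
```

For haplotyping, the state would be a vector of phase assignments and the cost a (negative log-) likelihood of the observed phenotypes, but the accept/cool loop is identical.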
Benedetti, Andrea; Platt, Robert; Atherton, Juli
2014-01-01
Background: Over time, adaptive Gaussian Hermite quadrature (QUAD) has become the preferred method for estimating generalized linear mixed models with binary outcomes. However, penalized quasi-likelihood (PQL) is still used frequently. In this work, we systematically evaluated via simulation whether matching results from PQL and QUAD indicate less bias in estimated regression coefficients and variance parameters. Methods: We performed a simulation study in which we varied the size of the data set, the probability of the outcome, the variance of the random effect, the number of clusters, and the number of subjects per cluster, among other factors. We estimated bias in the regression coefficients, odds ratios, and variance parameters as estimated via PQL and QUAD. We ascertained whether similarity of the estimated regression coefficients, odds ratios, and variance parameters predicted less bias. Results: Overall, we found that the absolute percent bias of the odds ratio estimated via PQL or QUAD increased as the PQL- and QUAD-estimated odds ratios became more discrepant, though results varied markedly depending on the characteristics of the data set. Conclusions: Given how markedly results varied depending on data set characteristics, specifying a rule above which results were indicated as biased proved impossible. This work suggests that comparing results from generalized linear mixed models estimated via PQL and QUAD is a worthwhile exercise for regression coefficients and variance components obtained via QUAD, in situations where PQL is known to give reasonable results. PMID:24416249
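The quadrature at the heart of QUAD can be sketched in a few lines for a random-intercept logistic model: the marginal likelihood of one cluster integrates the Bernoulli likelihood over the normal random effect. Note this is plain (non-adaptive) Gauss-Hermite quadrature, and the data and parameters are illustrative only.

```python
import math

# 5-point Gauss-Hermite nodes and weights (weight function exp(-x^2))
GH = [(-2.02018287, 0.01995324), (-0.95857246, 0.39361932),
      (0.0, 0.94530872), (0.95857246, 0.39361932), (2.02018287, 0.01995324)]

def cluster_loglik(y, beta0, sigma):
    """Marginal log-likelihood of one cluster's binary responses under a
    random-intercept logistic model: integrate prod_j p(y_j | beta0 + b)
    over b ~ N(0, sigma^2) via Gauss-Hermite quadrature."""
    total = 0.0
    for node, w in GH:
        b = math.sqrt(2.0) * sigma * node        # change of variables to N(0, sigma^2)
        p = 1.0 / (1.0 + math.exp(-(beta0 + b)))
        lik = 1.0
        for yj in y:
            lik *= p if yj == 1 else (1.0 - p)
        total += w * lik
    return math.log(total / math.sqrt(math.pi))

print(round(cluster_loglik([1, 0, 1, 1], beta0=0.5, sigma=1.0), 4))
```

Maximizing the sum of such cluster log-likelihoods over beta0 and sigma is what QUAD-based fitting does; adaptive QUAD additionally recenters the nodes on each cluster's posterior mode.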
NASA Astrophysics Data System (ADS)
van Poppel, Bret; Owkes, Mark; Nelson, Thomas; Lee, Zachary; Sowell, Tyler; Benson, Michael; Vasquez Guzman, Pablo; Fahrig, Rebecca; Eaton, John; Kurman, Matthew; Kweon, Chol-Bum; Bravo, Luis
2014-11-01
In this work, we present high-fidelity computational fluid dynamics (CFD) results of liquid fuel injection from a pressure-swirl atomizer and compare the simulations to experimental results obtained using both shadowgraphy and phase-averaged X-ray computed tomography (CT) scans. The CFD and experimental results focus on the dense near-nozzle region to identify the dominant mechanisms of breakup during primary atomization. Simulations are performed using the NGA code of Desjardins et al. (JCP 227 (2008)) and employ the volume-of-fluid (VOF) method proposed by Owkes and Desjardins (JCP 270 (2013)), a second-order accurate, un-split, conservative, three-dimensional VOF scheme providing second-order density fluxes and capable of robust and accurate high-density-ratio simulations. Qualitative features and quantitative statistics are assessed and compared for the simulation and experimental results, including the onset of atomization, spray cone angle, and drop size and distribution.
Modified OMP Algorithm for Exponentially Decaying Signals
Kazimierczuk, Krzysztof; Kasprzak, Paweł
2015-01-01
A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition for the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of a strictly sparse representation of the signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by the nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that an NMR spectrum consists of Lorentzian peaks, and it matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider a modification of the algorithm that constrains the allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
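The greedy selection at the heart of OMP and LPMP can be sketched with plain matching pursuit (no least-squares refit of previously selected atoms): repeatedly pick the dictionary atom most correlated with the residual and subtract its contribution. LPMP additionally replaces the dictionary atoms with Lorentzian line shapes, which this toy omits.

```python
def matching_pursuit(signal, atoms, n_iter):
    """Greedy matching pursuit over a dictionary of unit-norm atoms.

    OMP would refit all selected atoms by least squares each iteration;
    this sketch keeps only the plain greedy residual update.
    """
    r = signal[:]
    picks = []
    for _ in range(n_iter):
        # atom with the largest absolute correlation against the residual
        best = max(range(len(atoms)),
                   key=lambda j: abs(sum(ri * a for ri, a in zip(r, atoms[j]))))
        c = sum(ri * a for ri, a in zip(r, atoms[best]))   # coefficient
        r = [ri - c * a for ri, a in zip(r, atoms[best])]  # update residual
        picks.append((best, c))
    return picks, r

# toy orthonormal dictionary: two unit vectors in R^4
atoms = [[1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
signal = [3.0, 0.0, -2.0, 0.0]
picks, resid = matching_pursuit(signal, atoms, n_iter=2)
print(picks)
```

On an orthonormal dictionary the two pursuits coincide and the residual vanishes after the true support is found; the interesting NMR case is the non-orthogonal Lorentzian dictionary.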
NASA Astrophysics Data System (ADS)
Shmelkov, Yuriy; Samujlov, Eugueny
2012-04-01
The transport properties of solid-fuel combustion products were calculated and compared with known experimental data. The calculations were performed with the modified program TETRAN, developed at the G.M. Krzhizhanovsky Power Engineering Institute, and accounted for the chemical reactions and phase transformations occurring during combustion, as well as the ionization of the combustion products at high temperatures. Various Russian coals and several other solid fuels were considered. Densities, viscosities, and thermal conductivities of the gas phase of the combustion products were obtained over the temperature range 500-20000 K. The comparison shows good agreement between the calculated results and experiment.
Performance-based seismic design of steel frames utilizing colliding bodies algorithm.
Veladi, H
2014-01-01
A pushover analysis method based on the semirigid connection concept is developed, and the colliding bodies optimization algorithm is employed to find optimum seismic designs of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to those of conventional design methods to demonstrate the strengths and weaknesses of the algorithm. PMID:25202717
A Collaborative Recommend Algorithm Based on Bipartite Community
Fu, Yuchen; Liu, Quan; Cui, Zhiming
2014-01-01
The recommendation algorithm based on the bipartite network is superior to traditional methods in accuracy and diversity, which shows that considering the network topology of recommendation systems can help to improve recommendation results. However, existing algorithms mainly focus on the overall topological structure, while local characteristics can also play an important role in collaborative recommendation processing. Therefore, in view of the data characteristics and application requirements of collaborative recommendation systems, we propose a link-community partitioning algorithm based on label propagation and a collaborative recommendation algorithm based on the bipartite community. We then designed numerical experiments to verify the validity of the algorithms on benchmark and real-world databases. PMID:24955393
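Plain node-label propagation conveys the idea behind the community step: every node repeatedly adopts the most frequent label among its neighbours until labels stabilize. The paper's algorithm propagates labels over *links* of a bipartite graph, which this sketch does not reproduce; the graph and tie-breaking rules here are illustrative choices.

```python
from collections import Counter

def label_propagation(adj, rounds=20):
    """Asynchronous label propagation. Ties keep the current label when
    it is among the winners, otherwise take the first label encountered."""
    labels = {v: v for v in adj}
    for _ in range(rounds):
        changed = False
        for v in adj:                          # fixed sweep order, reproducible
            counts = Counter(labels[u] for u in adj[v])
            top = counts.most_common(1)[0][1]
            winners = [lab for lab, c in counts.items() if c == top]
            new = labels[v] if labels[v] in winners else winners[0]
            if new != labels[v]:
                labels[v] = new
                changed = True
        if not changed:                        # converged
            break
    return labels

# two 4-cliques joined by the single edge 3-4
adj = {
    0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
    4: [5, 6, 7, 3], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6],
}
labels = label_propagation(adj)
print(labels)
```

On this graph the two cliques end up with two distinct labels, splitting the network into the expected pair of communities.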
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Moes, Timothy R.; Larson, Terry J.
1990-01-01
A nonintrusive high angle-of-attack flush airdata sensing (HI-FADS) system was installed and flight-tested on the F-18 high alpha research vehicle. This paper discusses the airdata algorithm development and composite results expressed as airdata parameter estimates and describes the HI-FADS system hardware, calibration techniques, and algorithm development. An independent empirical verification was performed over a large portion of the subsonic flight envelope. Test points were obtained for Mach numbers from 0.15 to 0.94 and angles of attack from -8.0 to 55.0 deg. Angles of sideslip ranged from -15.0 to 15.0 deg, and test altitudes ranged from 18,000 to 40,000 ft. The HI-FADS system gave excellent results over the entire subsonic Mach number range up to 55 deg angle of attack. The internal pneumatic frequency response of the system is accurate to beyond 10 Hz.
NASA Technical Reports Server (NTRS)
Vardi, A.
1984-01-01
The minimax representation, min t subject to F(sub i)(x) - t less than or equal to 0 for all i, is examined. An active set strategy is designed in which the functions are classified into three sets: active, semi-active, and non-active. This technique helps prevent the zigzagging that often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. A trust-region strategy is also used, in which at each iteration there is a sphere around the current point within which the local approximation of the function is trusted. The algorithm is implemented in a computer program. Numerical results are provided.
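The reformulation can be illustrated in one dimension: for fixed x the smallest feasible t is max over i of F(sub i)(x), so minimizing t amounts to minimizing that pointwise maximum. The golden-section search below is a simple stand-in for the paper's active-set/trust-region method, which targets many variables.

```python
def minimax_value(fs, x):
    """The inner 't' of min t s.t. F_i(x) - t <= 0: for fixed x the
    smallest feasible t is simply max_i F_i(x)."""
    return max(f(x) for f in fs)

def golden_section(g, lo, hi, tol=1e-8):
    """Derivative-free 1-D minimizer for a unimodal function g on [lo, hi]."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if g(c) < g(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

fs = [lambda x: abs(x - 1.0), lambda x: abs(x + 2.0)]   # two example F_i
x_star = golden_section(lambda x: minimax_value(fs, x), -5.0, 5.0)
print(round(x_star, 4), round(minimax_value(fs, x_star), 4))
```

For these two functions the minimax point sits midway between the kinks, at x = -0.5 with value 1.5; at the optimum both constraints are active, which is exactly the situation the active-set classification tracks.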
NASA Astrophysics Data System (ADS)
Perez-Poch, Antoni
Computer simulations are becoming a promising line of research as physiological models become more sophisticated and reliable. Technological advances in state-of-the-art hardware and software nowadays allow better and more accurate simulations of complex phenomena, such as the response of the human cardiovascular system to long-term exposure to microgravity. Experimental data for long-term missions are difficult to obtain and reproduce; therefore, the predictions of computer simulations are of major importance in this field. Our approach is based on a previous model developed and implemented in our laboratory (NELME: Numerical Evaluation of Long-term Microgravity Effects). The software simulates the behaviour of the cardiovascular system and different human organs, has a modular architecture, and allows perturbations such as physical exercise or countermeasures to be introduced. The implementation is based on a complex electrical-like model of this control system, using inexpensive development frameworks, and has been tested and validated with the available experimental data. The objective of this work is to analyse and simulate long-term effects and gender differences when individuals are exposed to long-term microgravity. The risk probability of a health impairment that could jeopardize a long-term mission is also evaluated. Gender differences have been implemented for this specific work as adjustments of a number of parameters included in the model. Physiological differences between women and men have therefore been taken into account, based upon estimates from the physiology literature. A number of simulations have been carried out for long-term exposure to microgravity, with gravity varying continuously from Earth-based to zero and time of exposure as the two main variables involved in the construction of results, including responses to patterns of physical aerobic exercise and thermal stress simulating an extra
NASA Technical Reports Server (NTRS)
Durisen, R. H.
1975-01-01
Improved viscous evolutionary sequences of differentially rotating, axisymmetric, nonmagnetic, zero-temperature white-dwarf models are constructed using the relativistically corrected degenerate electron viscosity. The results support the earlier conclusion that angular momentum transport due to viscosity does not lead to overall uniform rotation in many interesting cases. Qualitatively different behaviors are obtained, depending on how the total mass M and angular momentum J compare with the M and J values for which uniformly rotating models exist. Evolutions roughly determine the region in M and J for which models with a particular initial angular momentum distribution can reach carbon-ignition densities in 10 b.y. Such models may represent Type I supernova precursors.
NASA Astrophysics Data System (ADS)
Wang, Ten-See; Dumas, Catherine
1993-07-01
A computational fluid dynamics (CFD) model has been applied to study the transient flow phenomena of the nozzle and exhaust plume of the Space Shuttle Main Engine (SSME), fired at sea level. The CFD model is a time-accurate, pressure-based, reactive flow solver. A six-species hydrogen/oxygen equilibrium chemistry is used to describe the chemical thermodynamics. An adaptive upwinding scheme is employed for the spatial discretization, and a predictor, multiple-corrector method is used for the temporal solution. Both engine start-up and shut-down processes were simulated. The elapsed time is approximately five seconds for both cases. The computed results were animated and compared with test data. The images for the animation were created with PLOT3D and FAST and then animated with ABEKAS. Hysteresis effects and the issues of free-shock separation, restricted-shock separation, and end-effects were addressed.
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Dumas, Catherine
1993-01-01
A computational fluid dynamics (CFD) model has been applied to study the transient flow phenomena of the nozzle and exhaust plume of the Space Shuttle Main Engine (SSME), fired at sea level. The CFD model is a time-accurate, pressure-based, reactive flow solver. A six-species hydrogen/oxygen equilibrium chemistry is used to describe the chemical thermodynamics. An adaptive upwinding scheme is employed for the spatial discretization, and a predictor, multiple-corrector method is used for the temporal solution. Both engine start-up and shut-down processes were simulated. The elapsed time is approximately five seconds for both cases. The computed results were animated and compared with test data. The images for the animation were created with PLOT3D and FAST and then animated with ABEKAS. Hysteresis effects and the issues of free-shock separation, restricted-shock separation, and end-effects were addressed.
Algorithmically specialized parallel computers
Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.
1985-01-01
This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.
NASA Technical Reports Server (NTRS)
Uslenghi, Piergiorgio L. E.; Laxpati, Sharad R.; Kawalko, Stephen F.
1993-01-01
The third phase of the development of computer codes for scattering by coated bodies, part of an ongoing effort in the Electromagnetics Laboratory of the Electrical Engineering and Computer Science Department at the University of Illinois at Chicago, is described. The work reported discusses the analytical and numerical results for the scattering of an obliquely incident plane wave by impedance bodies of revolution with phi variation of the surface impedance. Integral equation formulation of the problem is considered. All three types of integral equations, electric field, magnetic field, and combined field, are considered. These equations are solved numerically via the method of moments with parametric elements. Both TE and TM polarizations of the incident plane wave are considered. The surface impedance is allowed to vary both along the profile of the scatterer and in the phi direction. The computer code developed for this purpose determines the electric surface current as well as the bistatic radar cross section. The results obtained with this code were validated by comparison with available results for specific scatterers, such as the perfectly conducting sphere. Results for the cone-sphere and cone-cylinder-sphere for the case of an axially incident plane wave were validated by comparison with those obtained in the first phase of this project. Results for body-of-revolution scatterers with an abrupt change in the surface impedance along both the profile of the scatterer and the phi direction are presented.
NASA Astrophysics Data System (ADS)
Humeau, Anne; Buard, Benjamin; Mahé, Guillaume; Chapeau-Blondeau, François; Rousseau, David; Abraham, Pierre
2010-10-01
To contribute to the understanding of the complex dynamics in the cardiovascular system (CVS), the central CVS has previously been analyzed through multifractal analyses of heart rate variability (HRV) signals, which have been shown to bring useful contributions. Similar approaches for the peripheral CVS through the analysis of laser Doppler flowmetry (LDF) signals are comparatively very recent. In this direction, we propose here a study of the peripheral CVS through a multifractal analysis of LDF fluctuations, together with a comparison of the results with those obtained on simultaneously recorded HRV fluctuations. To perform these investigations concerning the biophysics of the CVS, we first have to address the problem of selecting a suitable methodology for multifractal analysis, allowing us to extract meaningful interpretations from biophysical signals. For this purpose, we test four existing methodologies of multifractal analysis and present a comparison of their applicability and interpretability when implemented both on simulated multifractal reference signals and on experimental signals from the CVS. One essential outcome of the study is that the multifractal properties observed from both the LDF fluctuations (peripheral CVS) and the HRV fluctuations (central CVS) appear very close and similar over the studied range of scales relevant to physiology.
NASA Astrophysics Data System (ADS)
Randol, Brent M.; Christian, Eric R.
2016-03-01
A parametric study is performed using the electrostatic simulations of Randol and Christian in which the number density, n, and initial thermal speed, θ, are varied. The range of parameters covers an extremely broad plasma regime, all the way from the very weak coupling of space plasmas to the very strong coupling of solid plasmas. The first result is that simulations at the same Γ_D, where Γ_D (∝ n^{1/3} θ^{-2}) is the plasma coupling parameter, but at different combinations of n and θ, behave exactly the same. As a function of Γ_D, the form of p(v), the velocity distribution function of v = |v|, the magnitude of the velocity vector, is studied. For intermediate to high Γ_D, heating is observed in p(v) that obeys conservation of energy, and a suprathermal tail is formed, with a spectral index that depends on Γ_D. For strong coupling (Γ_D ≫ 1), the form of the tail is v^{-5}, consistent with the findings of Randol and Christian. For weak coupling (Γ_D ≪ 1), no acceleration or heating occurs, as there is no free energy. The dependence on N, the number of particles in the simulation, is also explored. There is a subtle dependence in the index of the tail, such that v^{-5} appears to be the N → ∞ limit.
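The stated scaling of the coupling parameter can be made concrete with a short sketch. Assumption for illustration only: the proportionality constant in Γ_D ∝ n^{1/3} θ^{-2} is set to one, which is enough to show that distinct (n, θ) combinations can share the same Γ_D, the condition under which the paper reports identical simulation behavior.

```python
def gamma_d(n, theta):
    """Plasma coupling parameter, Gamma_D ∝ n**(1/3) / theta**2.
    The unit proportionality constant here is an illustrative assumption."""
    return n ** (1.0 / 3.0) / theta ** 2

# Two different (n, theta) combinations tuned to equal Gamma_D: by the
# paper's first result, such simulations behave exactly the same.
g_a = gamma_d(1.0e6, 2.0)   # n**(1/3) = 100, theta**2 = 4
g_b = gamma_d(6.4e7, 4.0)   # n**(1/3) = 400, theta**2 = 16
```

Doubling θ quadruples θ², so n must grow by a factor of 64 to hold Γ_D fixed, as the two example points show.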
SLAC E155 and E155x Numeric Data Results and Data Plots: Nucleon Spin Structure Functions
The nucleon spin structure functions g1 and g2 are important tools for testing models of nucleon structure and QCD. Experiments at CERN, DESY, and SLAC have measured g1 and g2 using deep inelastic scattering of polarized leptons on polarized nucleon targets. The results of these experiments have established that the quark component of the nucleon helicity is much smaller than naive quark-parton model predictions. The Bjorken sum rule has been confirmed within the uncertainties of experiment and theory. The experiment E155 at SLAC collected data in March and April of 1997. Approximately 170 million scattered electron events were recorded to tape (along with several billion inclusive hadron events). The data were collected using three independent fixed-angle magnetic spectrometers, at approximately 2.75, 5.5, and 10.5 degrees. The momentum acceptance of the 2.75 and 5.5 degree spectrometers ranged from 10 to 40 GeV, with momentum resolution of 2-4%. The 10.5 degree spectrometer, new for E155, accepted events from 7 to 20 GeV. Each spectrometer used threshold gas Cerenkov counters (for particle ID), a segmented lead-glass calorimeter (for energy measurement and particle ID), and plastic scintillator hodoscopes (for tracking and momentum measurement). The polarized targets used for E155 were ¹⁵NH₃ and ⁶LiD, for measuring the proton and deuteron spin structure functions, respectively. Experiment E155x recently concluded a successful two-month run at SLAC. The experiment was designed to measure the transverse spin structure functions of the proton and deuteron. The E155 target was also recently in use at TJNAF's Hall C (E93-026) and was returned to SLAC for E155x. E155x hopes to reduce the world data set errors on g2 by a factor of three. [Copied from http://www.slac.stanford.edu/exp/e155/e155_nickeltour.html, an information summary linked off the E155 home page at http://www.slac.stanford.edu/exp/e155/e155_home.html. The extension run, E155x, also makes
Numerical simulation of steady supersonic flow. [spatial marching
NASA Technical Reports Server (NTRS)
Schiff, L. B.; Steger, J. L.
1981-01-01
A noniterative, implicit, space-marching, finite-difference algorithm was developed for the steady thin-layer Navier-Stokes equations in conservation-law form. The numerical algorithm is applicable to steady supersonic viscous flow over bodies of arbitrary shape. In addition, the same code can be used to compute supersonic inviscid flow or three-dimensional boundary layers. Computed results from two-dimensional and three-dimensional versions of the numerical algorithm are in good agreement with those obtained from more costly time-marching techniques.
Chandrasekhar, Jaya; Baber, Usman; Mehran, Roxana; Aquino, Melissa; Sartori, Samantha; Yu, Jennifer; Kini, Annapoorna; Sharma, Samin; Skurk, Carsten; Shlofmitz, Richard A; Witzenbichler, Bernhard; Dangas, George
2016-08-01
Assessment of platelet reactivity alone for thienopyridine selection with percutaneous coronary intervention (PCI) has not been associated with improved outcomes. In TRIAGE, a prospective, multicenter, observational pilot study, we sought to evaluate the benefit of an integrated algorithm combining clinical risk and platelet function testing to select the type of thienopyridine in patients undergoing PCI. Patients on chronic clopidogrel therapy underwent platelet function testing prior to PCI using the VerifyNow assay to determine high on-treatment platelet reactivity (HTPR, ≥230 P2Y12 reactivity units, or PRU). Based on both PRU and clinical (ischemic and bleeding) risks, patients were switched to prasugrel or continued on clopidogrel per the study algorithm. The primary endpoints were (i) 1-year major adverse cardiovascular events (MACE), a composite of death, non-fatal myocardial infarction, or definite or probable stent thrombosis; and (ii) major bleeding, Bleeding Academic Research Consortium type 2, 3, or 5. Of 318 clopidogrel-treated patients with a mean age of 65.9 ± 9.8 years, HTPR was noted in 33.3%. Ninety (28.0%) patients overall were switched to prasugrel and 228 (72.0%) continued clopidogrel. The prasugrel group had fewer smokers and more patients with heart failure. At 1 year, MACE occurred in 4.4% of the majority-HTPR patients on prasugrel versus 3.5% of the primarily non-HTPR patients on clopidogrel (p = 0.7). Major bleeding (5.6% vs. 7.9%, p = 0.47) was numerically higher with clopidogrel than with prasugrel. Use of the study clinical risk algorithm for the choice and intensity of thienopyridine prescription following PCI resulted in similar ischemic outcomes in HTPR patients receiving prasugrel and primarily non-HTPR patients on clopidogrel, without an untoward increase in bleeding with prasugrel. However, the study was prematurely terminated and these findings are therefore hypothesis-generating. PMID:27100112
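The abstract specifies the HTPR cutoff (≥230 PRU) but not the full clinical-risk logic of the study algorithm. The sketch below is therefore a hypothetical illustration of an integrated rule of this general type, not the actual TRIAGE protocol; the clinical-risk branch is an assumption.

```python
HTPR_THRESHOLD_PRU = 230   # high on-treatment platelet reactivity cutoff (from the abstract)

def select_thienopyridine(pru, high_ischemic_risk, high_bleeding_risk):
    """Hypothetical integrated selection rule combining platelet function
    testing with clinical risk. Only the PRU cutoff comes from the abstract;
    the risk logic here is an illustrative assumption."""
    htpr = pru >= HTPR_THRESHOLD_PRU
    if htpr and high_ischemic_risk and not high_bleeding_risk:
        return "prasugrel"      # intensify antiplatelet therapy
    return "clopidogrel"        # continue standard therapy
```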
Developmental Algorithms Have Meaning!
ERIC Educational Resources Information Center
Green, John
1997-01-01
Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…
Yadav, Rakesh; Jaswal, Aparna; Chennapragada, Sridevi; Kamath, Prakash; Hiremath, Shirish M.S.; Kahali, Dhiman; Anand, Sumit; Sood, Naresh K.; Mishra, Anil; Makkar, Jitendra S.; Kaul, Upendra
2015-01-01
Background: Several past clinical studies have demonstrated that frequent and unnecessary right ventricular pacing in patients with sick sinus syndrome and compromised atrio-ventricular conduction (AVC) produces long-term adverse effects. The safety and efficacy of two pacemaker algorithms, Ventricular Intrinsic Preference™ (VIP) and Ventricular AutoCapture (VAC), were evaluated in a multi-center study in pacemaker patients. Methods: We evaluated 80 patients across 10 centers in India. Patients were enrolled within 15 days of dual-chamber pacemaker (DDDR) implantation, and within 45 days thereafter were classified to either a compromised AVC (cAVC) arm or an intact AVC (iAVC) arm based on intrinsic paced/sensed (AV/PV) delays. In each arm, patients were then randomized (1:1) into the following groups: VIP OFF and VAC OFF (Control Group; CG), or VIP ON and VAC ON (Treatment Group; TG). Subsequently, the AV/PV delays in the CG groups were mandatorily programmed at 180/150 ms, and to up to 350 ms in the TG groups. The percentage of right ventricular pacing (%RVp) evaluated at 12-month post-implantation follow-ups was compared between the two groups in each arm. Additionally, in-clinic time required for collecting device data was compared between patients programmed with the automated AutoCapture algorithm activated (VAC ON) vs. the manually programmed method (VAC OFF). Results: Patients randomized to the TG with the VIP algorithm activated exhibited a significantly lower %RVp at 12 months than those in the CG in both the cAVC arm (39±41% vs. 97±3%; p=0.0004) and the iAVC arm (15±25% vs. 68±39%; p=0.0067). In-clinic time required to collect device data was less in patients with the VAC algorithm activated. No device-related adverse events were reported during the year-long study period. Conclusions: In our study cohort, the use of the VIP algorithm significantly reduced the %RVp, while the VAC algorithm reduced in-clinic time needed to collect device data. PMID
Improved local linearization algorithm for solving the quaternion equations
NASA Technical Reports Server (NTRS)
Yen, K.; Cook, G.
1980-01-01
The objective of this paper is to develop a new and more accurate local linearization algorithm for numerically solving sets of linear time-varying differential equations. Of special interest is the application of this algorithm to the quaternion rate equations. The results are compared, both analytically and experimentally, with previous results using local linearization methods. The new algorithm requires approximately one-third more calculations per step than the previously developed local linearization algorithm; however, this disadvantage could be reduced by using parallel implementation. For some cases the new algorithm yields significant improvement in accuracy, even with an enlarged sampling interval. The reverse is true in other cases. The errors depend on the values of angular velocity, angular acceleration, and integration step size. One important result is that for the worst case the new algorithm can guarantee eigenvalues nearer the region of stability than can the previously developed algorithm.
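The quaternion rate equations targeted by the paper are the standard attitude kinematics, q̇ = ½ q ⊗ (0, ω). As a point of reference only (this is not the paper's local-linearization algorithm), a minimal sketch that advances a unit quaternion over one step, exact when the angular velocity ω is constant on the step:

```python
import math

def quat_mult(q, r):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def step(q, omega, dt):
    """One attitude step assuming omega (rad/s) is constant over dt:
    q_new = q * [cos(th/2), sin(th/2) * omega_hat], th = |omega| * dt."""
    wx, wy, wz = omega
    w_norm = math.sqrt(wx*wx + wy*wy + wz*wz)
    if w_norm < 1e-12:
        return q
    th = w_norm * dt
    c, s = math.cos(th / 2.0), math.sin(th / 2.0)
    dq = (c, s * wx / w_norm, s * wy / w_norm, s * wz / w_norm)
    return quat_mult(q, dq)
```

Because the incremental quaternion dq is exactly unit-norm, this update preserves normalization, the property that makes quaternion schemes attractive compared with integrating the four rate equations naively.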
NASA Technical Reports Server (NTRS)
Westphalen, H.; Spjeldvik, W. N.
1982-01-01
A theoretical method by which the energy dependence of the radial diffusion coefficient may be deduced from spectral observations of the particle population at the inner edge of the earth's radiation belts is presented. This region has previously been analyzed with numerical techniques; in this report an analytical treatment that illustrates characteristic limiting cases in the L shell range where the time scale of Coulomb losses is substantially shorter than that of radial diffusion (L approximately 1-2) is given. It is demonstrated both analytically and numerically that the particle spectra there are shaped by the energy dependence of the radial diffusion coefficient regardless of the spectral shapes of the particle populations diffusing inward from the outer radiation zone, so that from observed spectra the energy dependence of the diffusion coefficient can be determined. To ensure realistic simulations, inner zone data obtained from experiments on the DIAL, AZUR, and ESRO 2 spacecraft have been used as boundary conditions. Excellent agreement between analytic and numerical results is reported.
ERIC Educational Resources Information Center
Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.
2001-01-01
Describes the Collegiate Results Instrument (CRI), which measures a range of collegiate outcomes for alumni 6 years after graduation. The CRI was designed to target alumni from institutions across market segments and assess their values, abilities, work skills, occupations, and pursuit of lifelong learning. (EV)
High order hybrid numerical simulations of two dimensional detonation waves
NASA Technical Reports Server (NTRS)
Cai, Wei
1993-01-01
In order to study multi-dimensional unstable detonation waves, a high-order numerical scheme suitable for calculating the detailed transverse wave structures of multidimensional detonation waves was developed. The numerical algorithm uses a multi-domain approach so that different numerical techniques can be applied for different components of detonation waves. The detonation waves are assumed to undergo an irreversible, unimolecular reaction A → B. Several cases of unstable two-dimensional detonation waves are simulated and detailed transverse wave interactions are documented. The numerical results show the importance of resolving the detonation front without excessive numerical viscosity in order to obtain the correct cellular patterns.
NASA Technical Reports Server (NTRS)
Baker, John G.
2009-01-01
Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong field dynamics, the gravitational radiation wave forms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.
NASA Technical Reports Server (NTRS)
Back, L. H.
1972-01-01
The laminar flow equations in differential form are solved numerically on a digital computer for flow of a very high temperature gas through the entrance region of an externally cooled tube. The solution method is described and calculations are carried out in conjunction with experimental measurements. The agreement with experiment is good, with the result indicating relatively large energy and momentum losses in the highly cooled flows considered where the pressure is nearly uniform along the flow and the core flow becomes non-adiabatic a few diameters downstream of the inlet. The effects of a large range of Reynolds number and Mach number (viscous dissipation) are also investigated.
A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)
NASA Technical Reports Server (NTRS)
Straeter, T. A.; Markos, A. T.
1975-01-01
A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.
Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.
1995-09-01
This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.
An algorithm for the empirical optimization of antenna arrays
NASA Technical Reports Server (NTRS)
Blank, S.
1983-01-01
A numerical technique is presented to optimize the performance of arbitrary antenna arrays under realistic conditions. An experimental-computational algorithm is formulated in which n-dimensional minimization methods are applied to measured data obtained from the antenna array. A numerical update formula is used to induce partial derivative information without requiring special perturbations of the array parameters. The algorithm provides a new design for the antenna array, and the method proceeds in an iterative fashion. Test case results are presented showing the effectiveness of the algorithm.
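The abstract does not give the update formula itself, so the sketch below is a generic stand-in for the idea it describes: extracting derivative information from ordinary successive measurements (a secant-style slope estimate) rather than from dedicated perturbations of the array parameters. All names and the step-size choice are assumptions for illustration.

```python
def empirical_minimize(measure, x0, x1, step=0.3, iters=30):
    """Minimize a measured objective in one design variable using slope
    estimates formed only from successive ordinary iterates (no special
    perturbation measurements). A generic secant-style illustration, not
    the report's update formula."""
    f0, f1 = measure(x0), measure(x1)
    for _ in range(iters):
        if x1 == x0:
            break
        slope = (f1 - f0) / (x1 - x0)   # derivative info induced from iterates
        x0, f0 = x1, f1
        x1 = x1 - step * slope          # gradient-descent-style update
        f1 = measure(x1)
    return x1
```

In an experimental-computational loop of the kind the abstract describes, `measure` would be a physical measurement on the array rather than a closed-form function.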
NASA Technical Reports Server (NTRS)
Entekhabi, Dara; Njoku, Eni E.; O'Neill, Peggy E.; Kellogg, Kent H.; Entin, Jared K.
2010-01-01
Talk outline: (1) derivation of SMAP basic and applied science requirements from the NRC Earth Science Decadal Survey applications; (2) data products and latencies; (3) algorithm highlights; (4) SMAP Algorithm Testbed; (5) SMAP Working Groups and community engagement.
NASA Technical Reports Server (NTRS)
Cabra, R.; Chen, J. Y.; Dibble, R. W.; Myhrvold, T.; Karpetis, A. N.; Barlow, R. S.
2002-01-01
An experimental and numerical investigation is presented of a lifted turbulent H2/N2 jet flame in a coflow of hot, vitiated gases. The vitiated coflow burner emulates the coupling of turbulent mixing and chemical kinetics exemplary of the reacting flow in the recirculation region of advanced combustors. It also simplifies numerical investigation of this coupled problem by removing the complexity of recirculating flow. Scalar measurements are reported for a lifted turbulent jet flame of H2/N2 (Re = 23,600, H/d = 10) in a coflow of hot combustion products from a lean H2/Air flame (φ = 0.25, T = 1,045 K). The combination of Rayleigh scattering, Raman scattering, and laser-induced fluorescence is used to obtain simultaneous measurements of temperature and concentrations of the major species, OH, and NO. The data attest to the success of the experimental design in providing a uniform vitiated coflow throughout the entire test region. Two combustion models (PDF: joint scalar Probability Density Function and EDC: Eddy Dissipation Concept) are used in conjunction with various turbulence models to predict the lift-off height (H(sub PDF)/d = 7, H(sub EDC)/d = 8.5). Kalghatgi's classic phenomenological theory, which is based on scaling arguments, yields a reasonably accurate prediction (H(sub K)/d = 11.4) of the lift-off height for the present flame. The vitiated coflow admits the possibility of auto-ignition of mixed fluid, and the success of the present parabolic implementation of the PDF model in predicting a stable lifted flame is attributable to such ignition. The measurements indicate a thickened turbulent reaction zone at the flame base. Experimental results and numerical investigations support the plausibility of turbulent premixed flame propagation by small scale (on the order of the flame thickness) recirculation and mixing of hot products into reactants and subsequent rapid ignition of the mixture.
Numerical recipes for mold filling simulation
Kothe, D.; Juric, D.; Lam, K.; Lally, B.
1998-07-01
Has the ability to simulate the filling of a mold progressed to a point where an appropriate numerical recipe achieves the desired results? If results are defined to be topological robustness, computational efficiency, quantitative accuracy, and predictability, all within a computational domain that faithfully represents complex three-dimensional foundry molds, then the answer unfortunately remains no. Significant interfacial flow algorithm developments have occurred over the last decade, however, that could bring this answer closer to maybe. These developments have been both evolutionary and revolutionary, and will continue to transpire for the near future. Might they become useful numerical recipes for mold filling simulations? Quite possibly. Recent progress in algorithms for interface kinematics and dynamics, linear solution methods, computer science issues such as parallelization and object-oriented programming, high resolution Navier-Stokes (NS) solution methods, and unstructured mesh techniques, must all be pursued as possible paths toward higher fidelity mold filling simulations. A detailed exposition of these algorithmic developments is beyond the scope of this paper, hence the authors choose to focus here exclusively on algorithms for interface kinematics. These interface tracking algorithms are designed to model the movement of interfaces relative to a reference frame such as a fixed mesh. Current interface tracking algorithm choices are numerous; is any one best suited for mold filling simulation? Although a clear winner is not (yet) apparent, pros and cons are given in the following brief, critical review. Highlighted are those outstanding interface tracking algorithm issues the authors feel can hamper the reliable modeling of today's foundry mold filling processes.
On Numerical Methods For Hypersonic Turbulent Flows
NASA Astrophysics Data System (ADS)
Yee, H. C.; Sjogreen, B.; Shu, C. W.; Wang, W.; Magin, T.; Hadjadj, A.
2011-05-01
Proper control of numerical dissipation in numerical methods beyond the standard shock-capturing dissipation at discontinuities is an essential element for accurate and stable simulation of hypersonic turbulent flows, including combustion, and thermal and chemical nonequilibrium flows. Unlike rapidly developing shock interaction flows, turbulence computations involve long time integrations. Improper control of numerical dissipation from one time step to another would be compounded over time, resulting in the smearing of turbulent fluctuations to an unrecognizable form. Hypersonic turbulent flows around re-entry space vehicles involve mixed steady strong shocks and turbulence with unsteady shocklets that pose added computational challenges. Stiffness of the source terms and material mixing in combustion pose yet other types of numerical challenges. A low-dissipative, high-order, well-balanced scheme, which can preserve certain non-trivial steady solutions of the governing equations exactly, may help minimize some of these difficulties. For stiff reactions it is well known that the wrong propagation speed of discontinuities occurs due to the under-resolved numerical solutions in both space and time. Schemes to improve the wrong propagation speed of discontinuities for systems of stiff reacting flows remain a challenge for algorithm development. Some of the recent algorithm developments for direct numerical simulations (DNS) and large eddy simulations (LES) for the subject physics, including the aforementioned numerical challenges, will be discussed.
MFIX documentation numerical technique
Syamlal, M.
1998-01-01
MFIX (Multiphase Flow with Interphase eXchanges) is a general-purpose hydrodynamic model for describing chemical reactions and heat transfer in dense or dilute fluid-solids flows, which typically occur in energy conversion and chemical processing reactors. The calculations give time-dependent information on pressure, temperature, composition, and velocity distributions in the reactors. The theoretical basis of the calculations is described in the MFIX Theory Guide. Installation of the code, setting up of a run, and post-processing of results are described in the MFIX User's Manual. Work was started in April 1996 to increase the execution speed and accuracy of the code, which has resulted in MFIX 2.0. To improve the speed of the code, the old algorithm was replaced by a more implicit algorithm. In the different test cases conducted, the new version runs 3 to 30 times faster than the old version. To increase the accuracy of the computations, second-order accurate discretization schemes were included in MFIX 2.0. Bubbling fluidized bed simulations conducted with a second-order scheme show that the predicted bubble shape is rounded, unlike the (unphysical) pointed shape predicted by the first-order upwind scheme. This report describes the numerical technique used in MFIX 2.0.
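The qualitative difference the report attributes to first-order versus second-order discretization can be seen on a toy problem. The sketch below is plain 1-D linear advection (not the MFIX solver): first-order upwind smears a square wave through numerical diffusion, while a Beam-Warming-type second-order upwind scheme keeps the front much sharper, at the price of oscillations near discontinuities.

```python
def advect(u, cfl, order, steps):
    """March 1-D linear advection on a periodic grid.
    order=1: first-order upwind (monotone but numerically diffusive).
    order=2: Beam-Warming-type second-order upwind (sharper fronts)."""
    n = len(u)
    for _ in range(steps):
        new = [0.0] * n
        for i in range(n):
            upd = cfl * (u[i] - u[i - 1])               # first-order upwind part
            if order == 2:                               # second-order correction
                upd += 0.5 * cfl * (1.0 - cfl) * (u[i] - 2.0 * u[i - 1] + u[i - 2])
            new[i] = u[i] - upd
        u = new
    return u

# Square wave advected for 50 steps at CFL = 0.5.
square = [1.0 if 10 <= i < 30 else 0.0 for i in range(100)]
diffusive = advect(square, 0.5, 1, 50)  # smeared peak, strictly below 1
sharp = advect(square, 0.5, 2, 50)      # sharper front, with over/undershoots
```

Both updates telescope over the periodic domain, so each scheme conserves the total of u exactly, mirroring the conservative formulation a reactor code requires.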
Numerical analysis of bifurcations
Guckenheimer, J.
1996-06-01
This paper is a brief survey of numerical methods for computing bifurcations of generic families of dynamical systems. Emphasis is placed upon algorithms that reflect the structure of the underlying mathematical theory while retaining numerical efficiency. Significant improvements in the computational analysis of dynamical systems are to be expected from greater reliance on geometric insight coming from dynamical systems theory. © 1996 American Institute of Physics.
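The simplest task in this area, following an equilibrium branch toward a fold, can be sketched in a few lines. This is a hypothetical illustration (not taken from the survey) using the saddle-node normal form dx/dt = μ - x², whose bifurcation at μ = 0 is signaled by the vanishing of the Jacobian f_x = -2x along the branch x = √μ.

```python
def f(x, mu):
    """Right-hand side of the saddle-node normal form dx/dt = mu - x**2."""
    return mu - x * x

def fx(x):
    """Jacobian df/dx; it vanishes at the fold point x = 0."""
    return -2.0 * x

def continue_branch(mu_start, mu_end, steps, x0):
    """Natural-parameter continuation: step mu, correct with Newton at each
    value, and record |f_x| as a simple fold (saddle-node) indicator."""
    x = x0
    branch = []
    for k in range(steps + 1):
        mu = mu_start + (mu_end - mu_start) * k / steps
        for _ in range(50):              # Newton corrector at fixed mu
            x -= f(x, mu) / fx(x)
        branch.append((mu, x, abs(fx(x))))
    return branch
```

Natural-parameter continuation of this kind breaks down at the fold itself (the corrector Jacobian becomes singular), which is exactly why the structured algorithms the survey emphasizes, such as pseudo-arclength continuation, parametrize the branch by arclength instead.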
NASA Astrophysics Data System (ADS)
Agus, M.; Mascia, M. L.; Fastame, M. C.; Melis, V.; Pilloni, M. C.; Penna, M. P.
2015-02-01
A body of literature shows the significant role played by visual-spatial skills in the improvement of mathematical skills in primary school. The main goal of the current study was to investigate the impact of a combined visuo-spatial and mathematical training on the improvement of mathematical skills in 146 second graders of several schools located in Italy. Participants were presented with single pencil-and-paper visuo-spatial or mathematical trainings, computerised versions of the above-mentioned treatments, or a combined version of computer-assisted and pencil-and-paper visuo-spatial and mathematical trainings. Experimental groups were presented with training for 3 months, once a week. All children were treated collectively, in either computer-assisted or pencil-and-paper modalities. At pre- and post-test, all participants were presented with a battery of objective tests assessing numerical and visuo-spatial abilities. Our results suggest a positive effect of different types of training on the empowerment of visuo-spatial and numerical abilities. Specifically, the combination of computerised and pencil-and-paper versions of visuo-spatial and mathematical trainings is more effective than the single execution of the software or of the pencil-and-paper treatment.
Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics
NASA Astrophysics Data System (ADS)
Brown, David A.
New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied, and a methodology is presented for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated
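For context, the classical predictor-corrector approach that the monolithic algorithms improve upon can be sketched in a few lines on a Newton (global) homotopy H(x, lam) = F(x) - (1 - lam)F(x0), here applied to a small illustrative 2x2 system; the test system, step counts, and homotopy choice are assumptions for the sketch, not taken from the thesis.

```python
import numpy as np

def F(x):
    # Illustrative 2x2 nonlinear system; its positive root is (sqrt(2), sqrt(2)).
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])

def J_F(x):
    # Jacobian of F, used by the Newton corrector.
    return np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])

def continuation(x0, nsteps=20, newton_iters=8):
    # Classical predictor-corrector on the Newton homotopy
    # H(x, lam) = F(x) - (1 - lam) * F(x0), with a trivial predictor:
    # each lam-step reuses the previous corrected solution.
    x = x0.copy()
    Fx0 = F(x0)
    for k in range(1, nsteps + 1):
        lam = k / nsteps
        target = (1.0 - lam) * Fx0
        for _ in range(newton_iters):    # corrector: Newton on F(x) = target
            x = x - np.linalg.solve(J_F(x), F(x) - target)
    return x

x = continuation(np.array([1.0, 0.5]))
```

The inner Newton loop is the "corrector phase" the abstract refers to; the monolithic algorithms fold the predictor and this loop into a single update so that the path is not over-resolved at intermediate values of lam.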
QPSO-Based Adaptive DNA Computing Algorithm
Karakose, Mehmet; Cigdem, Ugur
2013-01-01
DNA (deoxyribonucleic acid) computing, a computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This approach aims to run the DNA computing algorithm with parameters adapted toward the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are tuned simultaneously for the adaptive process; (2) the adaptation is performed by the QPSO algorithm for goal-driven progress, faster operation, and flexibility with respect to the data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate the ability to provide effective optimization, considerable convergence speed, and high accuracy relative to the conventional DNA computing algorithm. PMID:23935409
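The QPSO update rule used as the tuner can be sketched on its own with a toy sphere objective (this stands in for the DNA-computing parameter fitness; the swarm size, iteration count, and contraction coefficient beta are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def qpso(f, dim, n=30, iters=200, beta=0.75, lo=-5.0, hi=5.0):
    # Quantum-behaved PSO: each particle is resampled around a local
    # attractor, with a spread set by the mean-best position (no velocity).
    x = rng.uniform(lo, hi, (n, dim))
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    for _ in range(iters):
        g = pbest[pval.argmin()]                  # global best position
        mbest = pbest.mean(axis=0)                # mean of personal bests
        phi = rng.uniform(size=(n, dim))
        attractor = phi * pbest + (1 - phi) * g   # per-particle attractor
        u = rng.uniform(size=(n, dim))
        sign = np.where(rng.uniform(size=(n, dim)) < 0.5, -1.0, 1.0)
        x = attractor + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
        x = np.clip(x, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval                       # update personal bests
        pbest[better], pval[better] = x[better], val[better]
    return pbest[pval.argmin()], pval.min()

best, fbest = qpso(lambda v: float(np.sum(v**2)), dim=5)
```

In the paper's setting, the decision vector would hold the DNA-computing parameters (population size, crossover rate, mutation rates, etc.) and f would evaluate identification error; the sphere function here is only a stand-in.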
X Liu; E Garboczi; M Grigoriu; Y Lu; S Erdogan
2011-12-31
Many parameters affect cyclone efficiency, and these parameters can have different effects in different flow regimes. The maximum-efficiency cyclone length is therefore a function of the specific geometry and operating conditions in use. In this study, we obtained a relationship describing the minimum particle diameter, or maximum cyclone efficiency, using a theoretical approach based on cyclone geometry and fluid properties. We compared the resulting predictions with corresponding literature data and observed good agreement. The results underline the importance of the fluid properties. Inlet and vortex-finder cross-sections, cone-apex diameter, inlet Reynolds number, and surface roughness are found to be the other important parameters affecting cyclone height; the surface friction coefficient, on the other hand, is difficult to employ in the calculations. We developed a theoretical approach to find the maximum-efficiency heights for cyclones with tangential inlets, and we suggested a relation for this height as a function of cyclone geometry and operating parameters. To generalize the use of the relation, two dimensionless parameters, for the geometric and operational variables respectively, were defined, and the results were presented in graphical form, so that one can calculate these dimensionless parameters and then find the maximum-efficiency height of one's own specific cyclone.
NASA Astrophysics Data System (ADS)
Boerstoel, J. W.
1988-01-01
The current status of a computer program system for the numerical simulation of Euler flows is presented, along with preliminary test calculation results concerning the three-dimensional flow around a wing-nacelle-propeller-outlet configuration. The system is constructed to execute four major tasks: block decomposition of the flow domain around given, possibly complex, three-dimensional aerodynamic surfaces; grid generation on the blocked flow domain; Euler-flow simulation on the blocked grid; and graphical visualization and postprocessing of the computed flow on the blocked grid. The system consists of about 20 codes interfaced by files. Most of the required tasks can be executed, and the geometry of complex aerodynamic surfaces in three-dimensional space can be handled. The validation test showed that the system must be improved to increase the speed of the grid generation process.
NASA Astrophysics Data System (ADS)
Wang, Y.; Qin, G.; Zhang, M.
2012-12-01
Solar energetic particle (SEP) flux data measured by multiple spacecraft provide important information about the transport of SEPs accelerated by interplanetary coronal mass ejection (ICME) shocks. Depending on their locations, observers in interplanetary space may be connected to different parts of an ICME shock by the interplanetary magnetic field (IMF). Simultaneous observations by multiple spacecraft in the ecliptic, e.g., ACE and STEREO A and B, usually show large differences between SEP time profiles. In this work, based on a numerical solution of the Fokker-Planck transport equation for energetic particles, we obtain the fluxes of SEPs accelerated by ICME shocks. In addition, we compare SEP events measured by these spacecraft, located at different longitudes, with our simulation results. The comparison enables us to determine particle transport parameters such as the parallel and perpendicular diffusion coefficients and the efficiency of particle injection at the ICME shock.
Numerical propagator through PIAA optics
NASA Astrophysics Data System (ADS)
Pueyo, Laurent; Shaklan, Stuart; Give'On, Amir; Krist, John
2009-08-01
In this communication we address two outstanding issues pertaining to the modeling of PIAA coronagraphs: accurate numerical propagation of edge effects, and fast propagation of mid-spatial frequencies for wavefront control. To solve them, we first derive a quadratic approximation of the Huygens wavelets that allows us to develop an angular spectrum propagator for pupil remapping. Using this result we introduce an independent method to verify the ultimate contrast floor, due to edge propagation effects, of PIAA units currently being tested in various testbeds. We then delve into the details of a novel fast algorithm, based on the recognition that angular spectrum computations with a pre-apodised system are computationally light. When used for the propagation of mid-spatial frequencies, such a fast propagator will ultimately allow us to develop robust wavefront control algorithms with DMs located before the pupil remapping mirrors.
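The angular spectrum propagator the authors build on can be sketched in its generic textbook form, an FFT-based transfer-function propagation (this is not the authors' PIAA-specific remapping code; grid size, wavelength, and aperture are illustrative assumptions):

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    # FFT-based angular-spectrum propagation of a sampled scalar field
    # over a distance z: multiply each plane-wave component by exp(i*kz*z).
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX)**2 - (wavelength * FY)**2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)          # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Demo: propagate a 1 mm circular aperture by 5 cm at 633 nm.
n, dx, lam = 256, 1e-5, 633e-9
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
pupil = ((X**2 + Y**2) < (0.5e-3)**2).astype(complex)
out = angular_spectrum(pupil, lam, dx, z=0.05)
```

Because the transfer function has unit modulus over the propagating band, energy in the sampled field is conserved; the paper's contribution is making such computations accurate for remapped (PIAA) pupils and fast enough for wavefront-control loops.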
NASA Technical Reports Server (NTRS)
Newman, P. A.; Allison, D. O.
1974-01-01
Numerical results obtained from two computer programs recently developed with NASA support, and now available for use by others, are compared with sample experimental data taken on a rectangular-wing configuration in the AEDC 16-Foot Transonic Tunnel at transonic and subsonic flow conditions. These data were used in an AEDC investigation as reference data to deduce the tunnel-wall interference effects for corresponding data taken in a smaller tunnel. The comparisons were originally intended to show how well a current state-of-the-art transonic flow calculation for a simple 3-D wing agreed with data that experimentalists considered relatively interference-free. Because of the discrepancies between the experimental data and the computational results at the quoted angle of attack, it was deduced from an approximate stress analysis that the sting had deflected appreciably. Thus the comparisons themselves are not very meaningful, since the calculations must be repeated at the proper angle of attack. Of more importance, however, is the demonstration of the utility of currently available computational tools in the analysis and correlation of transonic experimental data.
Numerical simulation of photoexcited polaron states in water
NASA Astrophysics Data System (ADS)
Zemlyanaya, E. V.; Volokhova, A. V.; Lakhno, V. D.; Amirkhanov, I. V.; Puzynin, I. V.; Puzynina, T. P.; Rikhvitskiy, V. S.; Atanasova, P. Kh.
2015-10-01
We consider the dynamic polaron model of the hydrated electron state on the basis of a system of three nonlinear partial differential equations with appropriate initial and boundary conditions. A parallel numerical algorithm for the numerical solution of this system has been developed. Its effectiveness has been tested on a few multi-processor systems. A numerical simulation of the polaron states formation in water under the action of the ultraviolet range laser irradiation has been performed. The numerical results are shown to be in a reasonable agreement with experimental data and theoretical predictions.
A multi-level solution algorithm for steady-state Markov chains
NASA Technical Reports Server (NTRS)
Horton, Graham; Leutenegger, Scott T.
1993-01-01
A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.
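For context, the single-level baseline that such multi-level and Gauss-Seidel/SOR schemes accelerate can be sketched as plain power iteration for the stationary distribution pi = pi P (the 3-state chain and tolerance here are illustrative assumptions, not the paper's test problems):

```python
import numpy as np

def stationary_power(P, tol=1e-12, max_iter=100000):
    # Power iteration for pi = pi P: the slow single-level baseline that
    # multi-level (recursively coarsened) schemes aim to accelerate.
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)          # start from the uniform distribution
    for _ in range(max_iter):
        new = pi @ P
        if np.abs(new - pi).sum() < tol:
            return new
        pi = new
    return pi

# 3-state birth-death chain with known stationary distribution.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = stationary_power(P)
# Detailed balance gives pi = (0.25, 0.5, 0.25) for this chain.
```

The convergence rate of this iteration is set by the chain's subdominant eigenvalue, which is exactly what degrades for nearly decomposable chains; the multi-level method's coarsened representations attack those slow modes, much as multigrid does for PDEs.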
A fast algorithm for nonlinear finite element analysis using equivalent magnetization current
NASA Astrophysics Data System (ADS)
Lee, Joon-Ho; Park, Il-Han; Kim, Dong-Hun; Lee, Ki-Sik
2002-05-01
A fast algorithm for iterative nonlinear finite element analysis is presented in this paper. The algorithm replaces updated permeability by an equivalent magnetization current and moves it to the source current term. Once the initial system matrix is decomposed in LU form, the iterative procedure involves only the trivial step of back-substitution from the LU factors. Consequently, the computation time for the nonlinear analysis is greatly reduced. A numerical model of a cylindrical conductor enclosed with saturable iron is tested to validate the proposed algorithm. Numerical results are compared with those obtained using the conventional Newton-Raphson algorithm with respect to accuracy and computational time.
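The core trick, factoring the initial system matrix once so each nonlinear iteration costs only a forward/back substitution, can be sketched on a toy 1-D problem (the hand-rolled Doolittle LU, mesh, and nonlinearity here are illustrative assumptions, not the paper's FEM system):

```python
import numpy as np

def lu_decompose(A):
    # Doolittle LU without pivoting (adequate for this diagonally
    # dominant test matrix).
    n = len(A)
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                   # forward substitution (L y = b)
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):         # back substitution (U x = y)
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Toy analogue of the scheme: keep the linear operator fixed and move the
# (mild) nonlinearity to the source term, so the loop body is only a
# forward/back substitution, never a refactorization.
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian
f = np.ones(n)
L, U = lu_decompose(A)       # factored once, outside the iteration
u = np.zeros(n)
for _ in range(100):         # fixed point of A u = f + 0.01*sin(u)
    u = lu_solve(L, U, f + 0.01 * np.sin(u))
residual = np.abs(A @ u - f - 0.01 * np.sin(u)).max()
```

In the paper the moved term is the equivalent magnetization current from the updated permeability; the design trade-off is the same as here: each iteration is O(n^2) back-substitution instead of an O(n^3) refactorization, at the cost of linear rather than Newton-like convergence.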
NASA Astrophysics Data System (ADS)
Li, Xiaoping; Hunt, Katharine L. C.; Pipin, Janusz; Bishop, David M.
1996-12-01
For atoms or molecules of D∞h or higher symmetry, this work gives equations for the long-range, collision-induced changes in the first (Δβ) and second (Δγ) hyperpolarizabilities, complete to order R-7 in the intermolecular separation R for Δβ, and order R-6 for Δγ. The results include nonlinear dipole-induced-dipole (DID) interactions, higher multipole induction, induction due to the nonuniformity of the local fields, back induction, and dispersion. For pairs containing H or He, we have used ab initio values of the static (hyper)polarizabilities to obtain numerical results for the induction terms in Δβ and Δγ. For dispersion effects, we have derived analytic results in the form of integrals of the dynamic (hyper)polarizabilities over imaginary frequencies, and we have evaluated these numerically for the pairs H...H, H...He, and He...He using the values of the fourth dipole hyperpolarizability ɛ(-iω; iω, 0, 0, 0, 0) obtained in this work, along with other hyperpolarizabilities calculated previously by Bishop and Pipin. For later numerical applications to molecular pairs, we have developed constant ratio approximations (CRA1 and CRA2) to estimate the dispersion effects in terms of static (hyper)polarizabilities and van der Waals energy or polarizability coefficients. Tests of the approximations against accurate results for the pairs H...H, H...He, and He...He show that the root mean square (rms) error in CRA1 is ˜20%-25% for Δβ and Δγ; for CRA2 the error in Δβ is similar, but the rms error in Δγ is less than 4%. At separations ˜1.0 a.u. outside the van der Waals minima of the pair potentials for H...H, H...He, and He...He, the nonlinear DID interactions make the dominant contributions to Δγzzzz (where z is the interatomic axis) and to Δγxxxx, accounting for ˜80%-123% of the total value. Contributions due to higher-multipole induction and the nonuniformity of the local field (Qα terms) may exceed 15%, while dispersion effects
NASA Astrophysics Data System (ADS)
Imada, Masatoshi; Kashima, Tsuyoshi
2000-09-01
A numerical algorithm for studying strongly correlated electron systems is proposed. The ground-state wavefunction is projected out after a numerical renormalization procedure in the path integral formalism, and is expressed as an optimized linear combination of retained states in the truncated Hilbert space with a numerically chosen basis. This algorithm does not suffer from the negative sign problem and can be applied to any type of Hamiltonian in any dimension. Its efficiency is tested on examples of the Hubbard model where the basis of Slater determinants is numerically optimized. We present results showing fast convergence and good accuracy achieved with a small number of retained states.
Numerical quadrature for slab geometry transport algorithms
Hennart, J.P.; Valle, E. del
1995-12-31
In recent papers, a generalized nodal finite element formalism has been presented for virtually all known linear finite difference approximations to the discrete ordinates equations in slab geometry. For a particular angular direction {mu}, the neutron flux {Phi} is approximated by a piecewise function {Phi}(sub h), which over each space interval can be polynomial or quasipolynomial. Here we restrict ourselves to the polynomial case. Over each space interval, {Phi} is a polynomial of degree k, with interpolating parameters given for the continuous and discontinuous cases, respectively. The interpolating parameters are the angular flux at the left and right ends and the kth Legendre moments of {Phi} over the cell considered.
Gregoire, C.; Joesten, P.K.; Lane, J.W., Jr.
2006-01-01
Ground penetrating radar is an efficient geophysical method for the detection and location of fractures and fracture zones in electrically resistive rocks. In this study, the use of down-hole (borehole) radar reflection logs to monitor the injection of steam in fractured rocks was tested as part of a field-scale, steam-enhanced remediation pilot study conducted at a fractured limestone quarry contaminated with chlorinated hydrocarbons at the former Loring Air Force Base, Limestone, Maine, USA. In support of the pilot study, borehole radar reflection logs were collected three times (before, during, and near the end of steam injection) using broadband 100 MHz electric dipole antennas. Numerical modelling was performed to predict the effect of heating on radar-frequency electromagnetic (EM) wave velocity, attenuation, and fracture reflectivity. The modelling results indicate that EM wave velocity and attenuation change substantially if heating increases the electrical conductivity of the limestone matrix. Furthermore, the net effect of heat-induced variations in fracture-fluid dielectric properties on average medium velocity is insignificant because the expected total fracture porosity is low. In contrast, changes in fracture-fluid electrical conductivity can have a significant effect on EM wave attenuation and fracture reflectivity. Total replacement of water by steam in a fracture decreases fracture reflectivity by a factor of 10 and induces a change in reflected wave polarity. Based on the numerical modelling results, a reflection amplitude analysis method was developed to delineate fractures where steam has displaced water. Radar reflection logs collected during the three acquisition periods were analysed in the frequency domain to determine whether steam had replaced water in the fractures (after normalizing the logs to compensate for differences in antenna performance between logging runs). Analysis of the radar reflection logs from a borehole where the temperature
NASA Astrophysics Data System (ADS)
Li, Zhaokun; Cao, Jingtai; Liu, Wei; Feng, Jianfeng; Zhao, Xiaohui
2015-03-01
Conventional adaptive optical systems that compensate atmospheric turbulence in free-space optical (FSO) communication systems yield unreliable wavefront measurements from the Shack-Hartmann sensor (SH) under strong scintillation. Since wavefront-sensorless adaptive optics is a feasible option, we propose several swarm intelligence algorithms to compensate the wavefront aberration caused by atmospheric turbulence in FSO, and discuss the algorithm principles, basic flows, and simulation results. The numerical simulation experiments and result analysis show that, compared with the SPGD algorithm, the proposed algorithms can effectively suppress wavefront aberration and substantially improve both the convergence rate of the algorithms and the coupling efficiency of the receiver.
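The SPGD baseline the swarm algorithms are compared against can be sketched in a few lines, with a toy quadratic quality metric standing in for measured coupling efficiency (actuator count, gain, and perturbation amplitude are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def spgd(metric, n_act, iters=500, gain=0.5, amp=0.1):
    # Stochastic parallel gradient descent: perturb all actuators at once
    # with random +/- amp, read the metric twice, and step along the
    # resulting gradient estimate (no wavefront sensor needed).
    u = np.zeros(n_act)
    for _ in range(iters):
        du = amp * rng.choice([-1.0, 1.0], size=n_act)
        dJ = metric(u + du) - metric(u - du)   # two metric readings
        u += gain * dJ * du                    # ascend the quality metric
    return u

# Toy metric: negative residual wavefront error; in a real FSO loop this
# would be the measured fiber-coupling efficiency at the receiver.
aberration = rng.normal(size=12)
metric = lambda u: -np.sum((u - aberration) ** 2)
u = spgd(metric, n_act=12)
```

Each SPGD iteration needs only two scalar metric readings regardless of actuator count, which is why it (and the swarm methods proposed here as alternatives) suit strong-scintillation regimes where SH measurements fail.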
Hsu, Hsiao-Ping; Binder, Kurt; Klushin, Leonid I; Skvortsov, Alexander M
2008-10-01
A polymer chain containing N monomers confined in a finite cylindrical tube of diameter D, grafted at a distance L from the open end of the tube, may undergo a rather abrupt transition, where part of the chain escapes from the tube to form a "crownlike" coil outside of the tube. When this problem is studied by Monte Carlo simulation of self-avoiding walks on the simple cubic lattice, applying a cylindrical confinement and using the standard pruned-enriched Rosenbluth method (PERM), one obtains spurious results: with increasing chain length the transition gets weaker and weaker, owing to insufficient sampling of the "escaped" states, as a detailed analysis shows. To solve this problem, a new variant of a biased sequential sampling algorithm with resampling is proposed, force-biased PERM: the difficulty of sampling both phases in the region of the first-order transition with the correct weights is treated by applying a force at the free end, pulling it out of the tube. Different strengths of this force need to be used, and reweighting techniques are applied. Using rather long chains (up to N=18000) and wide tubes (up to D=29 lattice spacings), the free energy of the chain, its end-to-end distance, and the number of "imprisoned" monomers can be estimated, as well as the order parameter and its distribution. It is suggested that this algorithm should be useful for other problems involving state changes of polymers, where the different states belong to rather disjunct "valleys" in the phase space of the system. PMID:18999448
Sheng, Zheng; Wang, Jun; Zhou, Shudao; Zhou, Bihua
2014-03-01
This paper introduces a novel hybrid optimization algorithm for estimating the parameters of chaotic systems. To address the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally the parameters of the cuckoo search algorithm are kept constant, which may reduce the efficiency of the algorithm; to balance and enhance its accuracy and convergence rate, the adaptive operation is introduced to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may degrade the quality of optimization, so the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated on the Lorenz chaotic system under noiseless and noisy conditions. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with those of the traditional cuckoo search algorithm, the genetic algorithm, and particle swarm optimization; the simulation results demonstrate the effectiveness and superior performance of the proposed algorithm. PMID:24697395
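The simulated annealing ingredient of the hybrid can be sketched on a toy chaotic-parameter estimation problem, recovering the growth rate of a logistic map from its trajectory (plain SA only, without the cuckoo search layer; the one-step prediction-error cost and the linear cooling schedule are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def logistic_series(r, x0=0.3, n=200):
    # Iterate the logistic map x -> r*x*(1-x); chaotic for r near 3.9.
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

obs = logistic_series(3.9)     # "measured" chaotic data, true r = 3.9

def cost(r):
    # One-step prediction error of the candidate parameter (smooth in r,
    # unlike a long-trajectory mismatch, which chaos makes jagged).
    pred = r * obs[:-1] * (1 - obs[:-1])
    return np.mean((obs[1:] - pred) ** 2)

def anneal(lo=3.5, hi=4.0, iters=2000, T0=1e-3):
    r = rng.uniform(lo, hi)
    c = cost(r)
    best, best_c = r, c
    for k in range(iters):
        T = T0 * (1 - k / iters) + 1e-12        # linear cooling schedule
        cand = np.clip(r + rng.normal(0.0, 0.05), lo, hi)
        cc = cost(cand)
        # Accept downhill moves always; uphill moves with Boltzmann odds.
        if cc < c or rng.uniform() < np.exp(-(cc - c) / T):
            r, c = cand, cc
            if cc < best_c:
                best, best_c = cand, cc
    return best

r_est = anneal()
```

In the paper this local-refinement step is embedded inside cuckoo search (which supplies the global exploration), and the system is the Lorenz attractor rather than the logistic map; the acceptance rule and cooling are the part the hybrid borrows from SA.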