François, Marianne M.
2015-05-28
A review of recent advances in numerical methods and algorithms within the volume tracking framework is presented. The volume tracking method, also known as the volume-of-fluid method, has become an established numerical approach to model and simulate interfacial flows. Its advantage is its strict mass conservation. However, because the interface is not explicitly tracked but captured via the material volume fraction on a fixed mesh, accurate estimation of the interface position and its geometric properties, and the modeling of interfacial physics, remain difficult within the volume tracking framework. Several improvements have been made over the last decade to address these challenges. In this study, the multimaterial interface reconstruction method via power diagrams, curvature estimation via heights and mean values, and the balanced-force algorithm for surface tension are highlighted.
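As a concrete illustration of the height-function curvature estimation highlighted above, the following sketch computes curvature from a 2D volume-fraction field on a uniform grid. It assumes a roughly horizontal interface and the usual 3-column, 7-cell stencil; the function name and array conventions are illustrative, not the authors' implementation.

```python
import numpy as np

def height_function_curvature(f, i, j, dx, dy):
    """Curvature at cell (i, j) from a volume-fraction field f on a uniform
    grid via the height-function method: sum fractions over 7 cells in each
    of 3 neighboring columns, then differentiate the resulting heights."""
    # Local heights (assumes the interface is roughly horizontal here).
    h = np.array([f[i + di, j - 3:j + 4].sum() * dy for di in (-1, 0, 1)])
    hx = (h[2] - h[0]) / (2.0 * dx)               # dh/dx, central difference
    hxx = (h[2] - 2.0 * h[1] + h[0]) / dx**2      # d2h/dx2
    return hxx / (1.0 + hx * hx) ** 1.5           # curvature of y = h(x)
```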
On recent advances and future research directions for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Baker, A. J.; Soliman, M. O.; Manhardt, P. D.
1986-01-01
This paper highlights some recent accomplishments regarding CFD numerical algorithm constructions for generation of discrete approximate solutions to classes of Reynolds-averaged Navier-Stokes equations. Following an overview of turbulent closure modeling, and development of appropriate conservation law systems, a Taylor weak-statement semi-discrete approximate solution algorithm is developed. Various forms for completion to the final linear algebra statement are cited, as are a range of candidate numerical linear algebra solution procedures. This development sequence emphasizes the key building blocks of a CFD RNS algorithm, including solution trial and test spaces, integration procedure and added numerical stability mechanisms. A range of numerical results are discussed focusing on key topics guiding future research directions.
Advanced rotorcraft control using parameter optimization
NASA Technical Reports Server (NTRS)
Vansteenwyk, Brett; Ly, Uy-Loi
1991-01-01
A reliable algorithm for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters is presented. The algorithm is part of a design algorithm for an optimal linear dynamic output-feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when applied to control-law synthesis for systems with densely packed modes, where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. Through the use of an accurate Padé series approximation, this approach does not require the closed-loop system matrix to be diagonalizable. The algorithm has been included in a control design package for optimal robust low-order controllers. The usefulness of the proposed numerical algorithm has been demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.
Ellison, C. L.; Burby, J. W.; Qin, H.
2015-11-01
One popular technique for the numerical time advance of charged particles interacting with electric and magnetic fields according to the Lorentz force law [1], [2], [3] and [4] is the Boris algorithm. Its popularity stems from simple implementation, rapid iteration, and excellent long-term numerical fidelity [1] and [5]. Excellent long-term behavior strongly suggests the numerical dynamics exhibit conservation laws analogous to those governing the continuous Lorentz force system [6]. Moreover, without conserved quantities to constrain the numerical dynamics, algorithms typically dissipate or accumulate important observables such as energy and momentum over long periods of simulated time [6]. Identification of the conservative properties of an algorithm is important for establishing rigorous expectations on the long-term behavior; energy-preserving, symplectic, and volume-preserving methods each have particular implications for the qualitative numerical behavior [6], [7], [8], [9], [10] and [11].
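For reference, a minimal sketch of the nonrelativistic Boris update described above (half electric kick, magnetic rotation, half electric kick); variable names are illustrative.

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """One nonrelativistic Boris step: half electric kick, magnetic
    rotation, half electric kick, then a position update."""
    v_minus = v + 0.5 * q_m * dt * E              # first half acceleration
    t = 0.5 * q_m * dt * B                        # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)       # pure rotation about B
    v_new = v_plus + 0.5 * q_m * dt * E           # second half acceleration
    return x + dt * v_new, v_new

# Uniform B along z: the particle gyrates with bounded energy error over
# arbitrarily many steps, the hallmark of the method's long-term fidelity.
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
for _ in range(100000):
    x, v = boris_push(x, v, E=np.zeros(3), B=np.array([0.0, 0.0, 1.0]),
                      q_m=1.0, dt=0.1)
```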
Structural reliability assessment capability in NESSUS
NASA Technical Reports Server (NTRS)
Millwater, H.; Wu, Y.-T.
1992-01-01
The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manteuffel, T.A.
The objective of this project is the development of numerical solution techniques for deterministic models of the transport of neutral and charged particles and the demonstration of their effectiveness in both a production environment and on advanced architecture computers. The primary focus is on various versions of the linear Boltzmann equation. These equations are fundamental in many important applications. This project is an attempt to integrate the development of numerical algorithms with the process of developing production software. A major thrust of this project will be the implementation of these algorithms on advanced architecture machines that reside at the Advanced Computing Laboratory (ACL) at Los Alamos National Laboratory (LANL).
NASA Astrophysics Data System (ADS)
Radev, Dimitar; Lokshina, Izabella
2010-11-01
The paper examines self-similar (or fractal) properties of real communication network traffic data over a wide range of time scales. These self-similar properties are very different from the properties of traditional models based on Poisson and Markov-modulated Poisson processes. Advanced fractal models of sequential generators and fixed-length sequence generators, and efficient algorithms that are used to simulate self-similar behavior of IP network traffic data, are developed and applied. Numerical examples are provided, and simulation results are obtained and analyzed.
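One standard construction of self-similar traffic, sketched below under the usual assumptions (not necessarily the authors' generators), superposes many ON/OFF sources with heavy-tailed Pareto sojourn times; the aggregate is asymptotically self-similar with Hurst parameter H = (3 - alpha) / 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def pareto(alpha, size):
    """Heavy-tailed Pareto samples >= 1 with tail index alpha."""
    return rng.pareto(alpha, size) + 1.0

def onoff_traffic(n_sources=100, horizon=10_000, alpha=1.5):
    """Aggregate many ON/OFF sources with Pareto sojourn times; the
    superposition is asymptotically self-similar with H = (3 - alpha) / 2."""
    load = np.zeros(horizon)
    for _ in range(n_sources):
        t, on = 0, rng.random() < 0.5
        while t < horizon:
            d = int(pareto(alpha, 1)[0] * 10)   # sojourn length in time slots
            if on:
                load[t:t + d] += 1.0            # unit emission rate while ON
            t, on = t + d, not on
    return load
```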
High-order hydrodynamic algorithms for exascale computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, Nathaniel Ray
Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack the requisite accuracy for complex multi-material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.
Numerical simulation of coupled electrochemical and transport processes in battery systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liaw, B.Y.; Gu, W.B.; Wang, C.Y.
1997-12-31
Advanced numerical modeling to simulate dynamic battery performance characteristics for several types of advanced batteries is being conducted using computational fluid dynamics (CFD) techniques. The CFD techniques provide efficient algorithms to solve a large set of highly nonlinear partial differential equations that represent the complex battery behavior governed by coupled electrochemical reactions and transport processes. The authors have recently successfully applied such techniques to model advanced lead-acid, Ni-Cd and Ni-MH cells. In this paper, the authors briefly discuss how the governing equations were numerically implemented, show some preliminary modeling results, and compare them with other modeling or experimental data reported in the literature. The authors describe the advantages and implications of using the CFD techniques and their capabilities in future battery applications.
Generalized Differential Calculus and Applications to Optimization
NASA Astrophysics Data System (ADS)
Rector, Robert Blake Hayden
This thesis contains contributions in three areas: the theory of generalized calculus, numerical algorithms for operations research, and applications of optimization to problems in modern electric power systems. A geometric approach is used to advance the theory and tools used for studying generalized notions of derivatives for nonsmooth functions. These advances specifically pertain to methods for calculating subdifferentials and to expanding our understanding of a certain notion of derivative of set-valued maps, called the coderivative, in infinite dimensions. A strong understanding of the subdifferential is essential for numerical optimization algorithms, which are developed and applied to nonsmooth problems in operations research, including non-convex problems. Finally, an optimization framework is applied to solve a problem in electric power systems involving a smart solar inverter and battery storage system providing energy and ancillary services to the grid.
Variational Algorithms for Test Particle Trajectories
NASA Astrophysics Data System (ADS)
Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.
2015-11-01
The theory of variational integration provides a novel framework for constructing conservative numerical methods for magnetized test particle dynamics. The retention of conservation laws in the numerical time advance captures the correct qualitative behavior of the long time dynamics. For modeling the Lorentz force system, new variational integrators have been developed that are both symplectic and electromagnetically gauge invariant. For guiding center test particle dynamics, discretization of the phase-space action principle yields multistep variational algorithms, in general. Obtaining the desired long-term numerical fidelity requires mitigation of the multistep method's parasitic modes or applying a discretization scheme that possesses a discrete degeneracy to yield a one-step method. Dissipative effects may be modeled using Lagrange-D'Alembert variational principles. Numerical results will be presented using a new numerical platform that interfaces with popular equilibrium codes and utilizes parallel hardware to achieve reduced times to solution. This work was supported by DOE Contract DE-AC02-09CH11466.
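A minimal example of a variational integrator is the Störmer-Verlet (leapfrog) update, obtained by discretizing the action integral; the sketch below, for a generic one-degree-of-freedom system, illustrates the bounded energy error the abstract relies on. It is a toy stand-in, not the gauge-invariant Lorentz-force or guiding-center integrators described above.

```python
import numpy as np

def leapfrog(q, p, force, dt, steps):
    """Stormer-Verlet update, derivable as a variational integrator from a
    discretized action; being symplectic, its energy error stays bounded."""
    qs = np.empty(steps + 1)
    qs[0] = q
    for n in range(steps):
        p += 0.5 * dt * force(q)      # half kick
        q += dt * p                   # drift (unit mass)
        p += 0.5 * dt * force(q)      # half kick
        qs[n + 1] = q
    return qs

# Harmonic oscillator test: energy oscillates but shows no secular drift,
# unlike forward Euler at the same step size.
traj = leapfrog(q=1.0, p=0.0, force=lambda x: -x, dt=0.1, steps=1000)
```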
Advances in Numerical Boundary Conditions for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.
1997-01-01
Advances in Computational Aeroacoustics (CAA) depend critically on the availability of accurate, nondispersive, minimally dissipative computational algorithms, as well as high-quality numerical boundary treatments. This paper focuses on the recent developments of numerical boundary conditions. In a typical CAA problem, one often encounters two types of boundaries. Because a finite computation domain is used, there are external boundaries. On the external boundaries, boundary conditions simulating the solution outside the computation domain are to be imposed. Inside the computation domain, there may be internal boundaries. On these internal boundaries, boundary conditions simulating the presence of an object or surface with specific acoustic characteristics are to be applied. Numerical boundary conditions, both external and internal, developed for simple model problems are reviewed and examined. Numerical boundary conditions for real aeroacoustic problems are also discussed through specific examples. The paper concludes with a description of some much needed research in numerical boundary conditions for CAA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fischetti, Sebastian; Cadonati, Laura; Mohapatra, Satyanarayan R. P.
Recent years have witnessed tremendous progress in numerical relativity and an ever-improving performance of ground-based interferometric gravitational wave detectors. In preparation for the Advanced Laser Interferometer Gravitational Wave Observatory (Advanced LIGO) and a new era in gravitational wave astronomy, the numerical relativity and gravitational wave data analysis communities are collaborating to ascertain the most useful role for numerical relativity waveforms in the detection and characterization of binary black hole coalescences. In this paper, we explore the detectability of equal-mass, merging black hole binaries with precessing spins and total mass M_T ∈ [80, 350] M_⊙, using numerical relativity waveforms and templateless search algorithms designed for gravitational wave bursts. In particular, we present a systematic study using waveforms produced by the MayaKranc code that are added to colored, Gaussian noise and analyzed with the Omega burst search algorithm. Detection efficiency is weighed against the orientation of one of the black holes' spin axes. We find a strong correlation between the detection efficiency and the radiated energy and angular momentum, and that the inclusion of the l=2, m=±1, 0 modes, at a minimum, is necessary to account for the full dynamics of precessing systems.
Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiu, Dongbin
2017-03-03
The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations at extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately resolve, in high-dimensional spaces, stochastic problems with limited smoothness, even those containing discontinuities.
Advancing MODFLOW Applying the Derived Vector Space Method
NASA Astrophysics Data System (ADS)
Herrera, G. S.; Herrera, I.; Lemus-García, M.; Hernandez-Garcia, G. D.
2015-12-01
The most effective domain decomposition methods (DDM) are non-overlapping DDMs. Recently a new approach, the DVS-framework, based on an innovative discretization method that uses a non-overlapping system of nodes (the derived-nodes), was introduced and developed by I. Herrera et al. [1, 2]. Using the DVS-approach, a group of four algorithms, referred to as the 'DVS-algorithms', which fulfill the DDM-paradigm (i.e., the solution of global problems is obtained by the resolution of local problems exclusively) has been derived. Such procedures are applicable to any boundary-value problem, or system of such equations, for which a standard discretization method is available, and software with a high degree of parallelization can then be constructed. In a parallel talk at this AGU Fall Meeting, Ismael Herrera will introduce the general DVS methodology. The application of the DVS-algorithms has been demonstrated in the solution of several boundary-value problems of interest in Geophysics. Numerical examples for a single equation, for the cases of symmetric, non-symmetric and indefinite problems, were demonstrated before [1, 2]. For these problems DVS-algorithms exhibited significantly improved numerical performance with respect to standard versions of DDM algorithms. In view of these results, our research group is in the process of applying the DVS method to a widely used simulator for the first time; here we present the advances in the application of this method to the parallelization of MODFLOW. Efficiency results for a group of tests will be presented. References: [1] I. Herrera, L. M. de la Cruz and A. Rosas-Medina, "Non-overlapping discretization methods for partial differential equations", Numer Meth Part D E (2013). [2] I. Herrera and I. Contreras, "An Innovative Tool for Effectively Applying Highly Parallelized Software to Problems of Elasticity", Geofísica Internacional, 2015 (in press).
A 1DVAR-based snowfall rate retrieval algorithm for passive microwave radiometers
NASA Astrophysics Data System (ADS)
Meng, Huan; Dong, Jun; Ferraro, Ralph; Yan, Banghua; Zhao, Limin; Kongoli, Cezar; Wang, Nai-Yu; Zavodsky, Bradley
2017-06-01
Snowfall rate retrieval from spaceborne passive microwave (PMW) radiometers has gained momentum in recent years. PMW can be so utilized because of its ability to sense in-cloud precipitation. A physically based, overland snowfall rate (SFR) algorithm has been developed using measurements from the Advanced Microwave Sounding Unit-A/Microwave Humidity Sounder sensor pair and the Advanced Technology Microwave Sounder. Currently, these instruments are aboard five polar-orbiting satellites, namely, NOAA-18, NOAA-19, Metop-A, Metop-B, and Suomi-NPP. The SFR algorithm relies on a separate snowfall detection algorithm that is composed of a satellite-based statistical model and a set of numerical weather prediction model-based filters. There are four components in the SFR algorithm itself: cloud properties retrieval, computation of ice particle terminal velocity, ice water content adjustment, and the determination of snowfall rate. The retrieval of cloud properties is the foundation of the algorithm and is accomplished using a one-dimensional variational (1DVAR) model. An existing model is adopted to derive ice particle terminal velocity. Since no measurement of cloud ice distribution is available when SFR is retrieved in near real time, such distribution is implicitly assumed by deriving an empirical function that adjusts retrieved SFR toward radar snowfall estimates. Finally, SFR is determined numerically from a complex integral. The algorithm has been validated against both radar and ground observations of snowfall events from the contiguous United States with satisfactory results. Currently, the SFR product is operationally generated at the National Oceanic and Atmospheric Administration and can be obtained from that organization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allgor, R.J.; Feehery, W.F.; Tolsma, J.E.
The batch process development problem serves as a good candidate to guide the development of process modeling environments. It demonstrates that very robust numerical techniques are required within an environment that can collect, organize, and maintain the data and models required to address the batch process development problem. This paper focuses on improving the robustness and efficiency of the numerical algorithms required in such a modeling environment through the development of hybrid numerical and symbolic strategies.
Recovery Discontinuous Galerkin Jacobian-free Newton-Krylov Method for all-speed flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
HyeongKae Park; Robert Nourgaliev; Vincent Mousseau
2008-07-01
There is increasing interest in developing the next generation of simulation tools for advanced nuclear energy systems. These tools will utilize state-of-the-art numerical algorithms and computer science technology in order to maximize predictive capability, support advanced reactor designs, reduce uncertainty and increase safety margins. In analyzing nuclear energy systems, we are interested in compressible low-Mach-number, high-heat-flux flows with a wide range of Re, Ra, and Pr numbers. Under these conditions, the focus is placed on turbulent heat transfer, in contrast to other industries whose main interest is in capturing turbulent mixing. Our objective is to develop single-point turbulence closure models for large-scale engineering CFD codes, using Direct Numerical Simulation (DNS) or Large Eddy Simulation (LES) tools, which requires very accurate and efficient numerical algorithms. The focus of this work is placed on fully implicit, high-order spatiotemporal discretization based on the discontinuous Galerkin method, solving the conservative form of the compressible Navier-Stokes equations. The method utilizes a local reconstruction procedure derived from the weak formulation of the problem, which is inspired by the recovery diffusion flux algorithm of van Leer and Nomura [?] and by the piecewise parabolic reconstruction [?] in the finite volume method. The developed methodology is integrated into the Jacobian-free Newton-Krylov framework [?] to allow a fully implicit solution of the problem.
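The Jacobian-free Newton-Krylov idea mentioned above can be sketched compactly: the Krylov solver needs only Jacobian-vector products, which are approximated by finite differences of the residual, so no Jacobian matrix is ever formed. This toy version (names and the pointwise test problem are illustrative) omits the preconditioning that production solvers require.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_step(F, u):
    """One Jacobian-free Newton-Krylov step: GMRES solves J du = -F(u),
    with J v approximated by a directional finite difference of F."""
    r = F(u)
    eps = 1e-7 * (1.0 + np.linalg.norm(u))

    def Jv(v):
        nv = np.linalg.norm(v)
        if nv == 0.0:
            return np.zeros_like(v)
        return (F(u + (eps / nv) * v) - r) * (nv / eps)

    J = LinearOperator((u.size, u.size), matvec=Jv)
    du, _ = gmres(J, -r)          # Krylov solve, Jacobian never stored
    return u + du

# Pointwise nonlinear test system F(u) = u**3 - 2, solved by Newton steps.
u = np.ones(50)
for _ in range(8):
    u = jfnk_step(lambda w: w**3 - 2.0, u)
```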
Advanced Combustion Numerics and Modeling - FY18 First Quarter Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitesides, R. A.; Killingsworth, N. J.; McNenly, M. J.
This project is focused on early stage research and development of numerical methods and models to improve advanced engine combustion concepts and systems. The current focus is on development of new mathematics and algorithms to reduce the time to solution for advanced combustion engine design using detailed fuel chemistry. The research is prioritized towards the most time-consuming workflow bottlenecks (computer and human) and accuracy gaps that slow ACS program members. Zero-RK, the fast and accurate chemical kinetics solver software developed in this project, is central to the research efforts and continues to be developed to address the current and emerging needs of engine designers, engine modelers and fuel mechanism developers.
NASA Astrophysics Data System (ADS)
Maximov, Ivan I.; Vinding, Mads S.; Tse, Desmond H. Y.; Nielsen, Niels Chr.; Shah, N. Jon
2015-05-01
There is an increasing need for the development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems, driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses that have many applications of clinical relevance, e.g., reduced field-of-view imaging and MR spectroscopy. The 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle vast controls and multiple constraints. With this study we aim at demonstrating that numerical, optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments, when robustness towards e.g. field inhomogeneity is in focus. We have chosen three popular OC algorithms; two which are gradient-based, concurrent methods using first- and second-order derivatives, respectively; and a third that belongs to the sequential, monotonically convergent family. We used two experimental models: a water phantom, and an in vivo human head. Taking into consideration the challenging experimental setup, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach when computational speed, experimental robustness, and image quality are key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community.
A possible explanation for foreland thrust propagation
NASA Astrophysics Data System (ADS)
Panian, John; Pilant, Walter
1990-06-01
A common feature of thin-skinned fold and thrust belts is the sequential nature of foreland-directed thrust systems. As a rule, younger thrusts develop in the footwalls of older thrusts, the whole sequence propagating towards the foreland in the transport direction. As each new younger thrust develops, the entire sequence is thickened, particularly in the frontal region. The compressive toe region can be likened to an advancing wave: as the mountainous thrust belt advances, the down-surface slope stresses drive thrusts ahead of it, much as a wave drives a surfboard rider. In an attempt to investigate the stresses in the frontal regions of thrust sheets, a numerical method has been devised from the algorithm given by McTigue and Mei [1981]. The algorithm yields a quickly computed approximate solution of the gravity- and tectonic-induced stresses of a two-dimensional homogeneous elastic half-space with an arbitrarily shaped free surface of small slope. A comparison of the numerical method with analytical examples shows excellent agreement. The numerical method was devised because it greatly facilitates the stress calculations and frees one from using the restrictive, simple topographic profiles necessary to obtain an analytical solution. The numerical version of the McTigue and Mei algorithm shows that there is a region of increased maximum resolved shear stress, τ, directly beneath the toe of the overthrust sheet. Utilizing the Mohr-Coulomb failure criterion, predicted fault lines are computed. It is shown that they flatten and become horizontal in some portions of this zone of increased τ. Thrust sheets are known to advance upon weak decollement zones. If there is a coincidence of increased τ, a weak rock layer, and a potential fault line parallel to this weak layer, we have in place all the elements necessary to initiate a new thrusting event; that is, this combination acts as a nucleating center for a new thrusting event. Therefore, thrusts develop in sequence towards the foreland as a consequence of the stress-concentrating abilities of the toe of the thrust sheet. The gravity- and tectonic-induced stresses due to the surface topography (usually ignored in previous analyses) of an advancing thrust sheet play a key role in the nature of shallow foreland thrust propagation.
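The Mohr-Coulomb step of the argument can be made concrete: for a 2D stress state, the two conjugate failure planes lie at ±(45° - φ/2) from the most compressive principal stress direction. A hedged sketch, assuming a compression-positive sign convention and an illustrative friction angle:

```python
import numpy as np

def coulomb_fault_angles(sxx, syy, sxy, phi_deg=30.0):
    """Orientations (degrees from x-axis) of the two conjugate Mohr-Coulomb
    failure planes for a 2D stress state, compression positive: planes lie
    at +/-(45 - phi/2) degrees from the sigma_1 direction."""
    theta_p = 0.5 * np.degrees(np.arctan2(2.0 * sxy, sxx - syy))  # sigma_1 axis
    offset = 45.0 - 0.5 * phi_deg
    return theta_p + offset, theta_p - offset

# Horizontal compression (sxx most compressive): conjugate faults at ~+/-30 deg.
print(coulomb_fault_angles(sxx=100.0, syy=40.0, sxy=0.0))
```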
Estimation of electric fields and current from ground-based magnetometer data
NASA Technical Reports Server (NTRS)
Kamide, Y.; Richmond, A. D.
1984-01-01
Recent advances in numerical algorithms for estimating ionospheric electric fields and currents from ground-based magnetometer data are reviewed and evaluated. Tests of the adequacy of one such algorithm in reproducing large-scale patterns of electrodynamic parameters in the high-latitude ionosphere have yielded generally positive results, at least for some simple cases. Some encouraging advances in producing realistic conductivity models, which are a critical input, are pointed out. When the algorithms are applied to extensive data sets, such as the ones from meridian chain magnetometer networks during the IMS, together with refined conductivity models, unique information on instantaneous electric field and current patterns can be obtained. Examples of electric potentials, ionospheric currents, field-aligned currents, and Joule heating distributions derived from ground magnetic data are presented. Possible directions for future improvements are also pointed out.
Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics
NASA Technical Reports Server (NTRS)
Stowers, S. T.; Bass, J. M.; Oden, J. T.
1993-01-01
A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows, with the ultimate goal of numerically modeling the complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies developed are classified as adaptive methods, which employ error estimation techniques to approximate the local numerical error and automatically refine or unrefine the mesh so as to deliver a given level of accuracy. The result is a scheme which attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. These types of schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.
NASA Astrophysics Data System (ADS)
Jiang, J.; Kaloti, A. P.; Levinson, H. R.; Nguyen, N.; Puckett, E. G.; Lokavarapu, H. V.
2016-12-01
We present the results of three standard benchmarks for the new active tracer particle algorithm in ASPECT. The three benchmarks are SolKz, SolCx, and SolVI (also known as the 'inclusion benchmark'), first proposed by Duretz, May, Gerya, and Tackley (G Cubed, 2011) and in subsequent work by Thielmann, May, and Kaus (Pure and Applied Geophysics, 2014). Each of the three benchmarks compares the accuracy of the numerical solution to a steady (time-independent) solution of the incompressible Stokes equations with a known exact solution. These benchmarks are specifically designed to test the accuracy and effectiveness of the numerical method when the viscosity varies by up to six orders of magnitude. ASPECT has been shown to converge to the exact solution of each of these benchmarks at the correct design rate when all of the flow variables, including the density and viscosity, are discretized on the underlying finite element grid (Kronbichler, Heister, and Bangerth, GJI, 2012). In our work we discretize the density and viscosity by initially placing the true values of the density and viscosity at the initial particle positions. At each time step, including the initialization step, the density and viscosity are interpolated from the particles onto the finite element grid. The resulting Stokes system is solved for the velocity and pressure, and the particle positions are advanced in time according to this new, numerical, velocity field. Note that this procedure effectively changes a steady solution of the Stokes equation (i.e., one that is independent of time) to a solution of the Stokes equations that is time dependent. Furthermore, the accuracy of the active tracer particle algorithm now also depends on the accuracy of the interpolation algorithm and of the numerical method one uses to advance the particle positions in time. Finally, we will present new interpolation algorithms designed to increase the overall accuracy of the active tracer algorithms in ASPECT, and interpolation algorithms designed to conserve properties, such as mass density, that are carried by the particles.
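A minimal sketch of the particle-to-grid step described above, using bilinear (cloud-in-cell) weights on a uniform grid; ASPECT's actual interpolation operates on finite element cells, so this illustrates the idea rather than the code's algorithm.

```python
import numpy as np

def particles_to_grid(xp, yp, vals, nx, ny, dx, dy):
    """Bilinear (cloud-in-cell) scatter of particle-carried values onto a
    uniform node grid, normalized by the accumulated weights."""
    acc = np.zeros((nx, ny))
    wsum = np.zeros((nx, ny))
    i = np.clip((xp / dx).astype(int), 0, nx - 2)
    j = np.clip((yp / dy).astype(int), 0, ny - 2)
    fx, fy = xp / dx - i, yp / dy - j            # fractional cell coordinates
    for di, dj, w in ((0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
                      (0, 1, (1 - fx) * fy), (1, 1, fx * fy)):
        np.add.at(acc, (i + di, j + dj), w * vals)
        np.add.at(wsum, (i + di, j + dj), w)
    return acc / np.maximum(wsum, 1e-300)        # weighted average per node
```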
On estimating the phase of a periodic waveform in additive Gaussian noise, part 3
NASA Technical Reports Server (NTRS)
Rauch, L. L.
1991-01-01
Motivated by advances in signal processing technology that support more complex algorithms, researchers have taken a new look at the problem of estimating the phase and other parameters of a nearly periodic waveform in additive Gaussian noise, based on observation during a given time interval. Parts 1 and 2 are very briefly reviewed. In part 3, the actual performances of some of the highly nonlinear estimation algorithms of parts 1 and 2 are evaluated by numerical simulation using Monte Carlo techniques.
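For a pure sinusoid in white Gaussian noise, the maximum-likelihood phase estimate reduces to correlating against quadrature references, and Monte Carlo runs can be compared with the Cramér-Rao bound, in the spirit of the simulations described; the sketch below uses illustrative signal parameters and is far simpler than the nearly periodic waveform models of parts 1 and 2.

```python
import numpy as np

rng = np.random.default_rng(1)

def phase_mle(y, t, omega):
    """ML phase of A*cos(omega*t + phi) in white Gaussian noise:
    correlate against in-phase and quadrature references."""
    I = np.dot(y, np.cos(omega * t))
    Q = np.dot(y, np.sin(omega * t))
    return np.arctan2(-Q, I)

# Monte Carlo spread of the estimator versus the Cramer-Rao bound.
t = np.arange(1000) * 1e-3
omega, A, sigma, phi = 2 * np.pi * 50.0, 1.0, 0.5, 0.3
est = [phase_mle(A * np.cos(omega * t + phi) + sigma * rng.standard_normal(t.size),
                 t, omega) for _ in range(2000)]
crlb = sigma**2 / (0.5 * A**2 * t.size)   # approximate phase variance bound
print(np.var(est), crlb)
```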
Perspectives on the Future of CFD
NASA Technical Reports Server (NTRS)
Kwak, Dochan
2000-01-01
This viewgraph presentation gives an overview of the future of computational fluid dynamics (CFD), which has pioneered the field of flow simulation. CFD has progressed alongside computing power: numerical methods have advanced as CPU and memory capacities have increased. Complex configurations are routinely computed now, and direct numerical simulations (DNS) and large eddy simulations (LES) are used to study turbulence. As computing resources have shifted to parallel and distributed platforms, computer science aspects such as scalability (algorithmic and implementation), portability, and transparent coding have advanced. Examples of potential future (or current) challenges include risk assessment, limitations of heuristic models, and the development of CFD and information technology (IT) tools.
Propagating Qualitative Values Through Quantitative Equations
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak
1992-01-01
In most practical problems where traditional numeric simulation is not adequate, one needs to reason about a system with both qualitative and quantitative equations. In this paper, we address the problem of propagating qualitative values, represented as interval values, through quantitative equations. Previous research has produced exponential-time algorithms for approximate solution of the problem, which may not meet the stringent requirements of many real-time applications. This paper advances the state of the art by producing a linear-time algorithm that can propagate a qualitative value through a class of complex quantitative equations exactly, and through arbitrary algebraic expressions approximately. The algorithm was found applicable to the Space Shuttle Reaction Control System model.
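For intuition, the baseline such algorithms improve upon can be written down directly: evaluating the expression at every corner of the input intervals is exact for expressions monotone in each variable but costs exponential time in the number of variables, which is precisely what the paper's linear-time algorithm avoids. A small sketch (function names illustrative):

```python
from itertools import product

def interval_eval(f, boxes):
    """Propagate interval inputs through f by evaluating all corner
    combinations: exact for per-variable-monotone f, conservative
    otherwise, and exponential in the number of variables."""
    corners = [f(*p) for p in product(*boxes)]
    return min(corners), max(corners)

# Example: q = m * v with m in [2, 3] and v in [-1, 4].
print(interval_eval(lambda m, v: m * v, [(2, 3), (-1, 4)]))  # (-3, 12)
```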
NAS Applications and Advanced Algorithms
NASA Technical Reports Server (NTRS)
Bailey, David H.; Biswas, Rupak; VanDerWijngaart, Rob; Kutler, Paul (Technical Monitor)
1997-01-01
This paper examines the applications most commonly run on the supercomputers at the Numerical Aerospace Simulation (NAS) facility. It analyzes the extent to which such applications are fundamentally oriented to vector computers, and whether or not they can be efficiently implemented on hierarchical memory machines, such as systems with cache memories and highly parallel, distributed memory systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halavanau, A.; Piot, P.
2015-06-01
In a cascaded longitudinal space-charge amplifier (LSCA), initial density noise in a relativistic e-beam is amplified via the interplay of longitudinal space-charge forces and properly located dispersive sections. This type of amplification process was shown to potentially result in large final density modulations [1] compatible with the production of broadband electromagnetic radiation. The technique was recently demonstrated in the optical domain [2]. In this paper we investigate, via numerical simulations, the performance of a cascaded LSCA beamline at Fermilab’s Advanced Superconducting Test Accelerator (ASTA). We especially explore the properties of the produced broadband radiation. Our studies have been conducted with a grid-less three-dimensional space-charge algorithm.
Numerical algorithms for cold-relativistic plasma models in the presence of discontinuities
NASA Astrophysics Data System (ADS)
Hakim, Ammar; Cary, John; Bruhwiler, David; Geddes, Cameron; Leemans, Wim; Esarey, Eric
2006-10-01
A numerical algorithm is presented to solve cold-relativistic electron fluid equations in the presence of sharp gradients and discontinuities. The intended application is to laser wake-field accelerator simulations, in which the laser induces accelerating fields thousands of times those achievable in conventional RF accelerators. The relativistic cold-fluid equations are formulated as a non-classical system of hyperbolic balance laws. It is shown that the flux Jacobian for this system cannot be diagonalized, which causes numerical difficulties when developing shock-capturing algorithms. Further, the system is shown to admit generalized delta-shock solutions, first discovered in the context of sticky-particle dynamics (Bouchut, Ser. Adv. Math App. Sci., 22 (1994) pp. 171--190). A new approach, based on relaxation schemes proposed by Jin and Xin (Comm. Pure Appl. Math. 48 (1995) pp. 235--276) and LeVeque and Pelanti (J. Comput. Phys. 172 (2001) pp. 572--591), is developed to solve this system of equations. The method consists of finding an exact solution to a Riemann problem at each cell interface and coupling these to advance the solution in time. Applications to an intense laser propagating in an under-dense plasma are presented.
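The relaxation idea cited above (Jin and Xin) is easiest to see in its stiff (ε → 0) limit for a scalar conservation law, where it reduces to an upwind flux along the frozen characteristics ±√a; the sketch below applies it to Burgers' equation, not to the cold-relativistic fluid system of the paper.

```python
import numpy as np

def relaxation_step(u, f, a, dx, dt):
    """First-order relaxed scheme (Jin-Xin, eps -> 0 limit) for
    u_t + f(u)_x = 0. The frozen characteristic speeds +/-sqrt(a) must
    bound f'(u) (subcharacteristic condition)."""
    c = np.sqrt(a)
    ul, ur = u[:-1], u[1:]
    flux = 0.5 * (f(ul) + f(ur)) - 0.5 * c * (ur - ul)   # interface fluxes
    unew = u.copy()
    unew[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
    return unew

# Burgers equation with a shock-forming initial condition.
x = np.linspace(0.0, 1.0, 401)
u = np.where(x < 0.5, 1.0, 0.0)
for _ in range(200):
    u = relaxation_step(u, lambda v: 0.5 * v * v, a=1.5,
                        dx=x[1] - x[0], dt=0.001)
```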
NASA Technical Reports Server (NTRS)
Ryabenkii, V. S.; Turchaninov, V. I.; Tsynkov, S. V.
1999-01-01
We propose a family of algorithms for solving numerically a Cauchy problem for the three-dimensional wave equation. The sources that drive the equation (i.e., the right-hand side) are compactly supported in space for any given time; they may, however, actually move in space with a subsonic speed. The solution is calculated inside a finite domain (e.g., a sphere) that also moves with a subsonic speed and always contains the support of the right-hand side. The algorithms employ a standard consistent and stable explicit finite-difference scheme for the wave equation. They allow one to calculate the solution for arbitrarily long time intervals without error accumulation and with a fixed, non-growing amount of CPU time and memory required for advancing one time step. The algorithms are inherently three-dimensional; they rely on the presence of lacunae in the solutions of the wave equation in odd-dimensional spaces. The methodology presented in the paper is, in fact, a building block for constructing nonlocal, highly accurate, unsteady artificial boundary conditions to be used for the numerical simulation of waves propagating with finite speed over unbounded domains.
Expert system constant false alarm rate processor
NASA Astrophysics Data System (ADS)
Baldygo, William J., Jr.; Wicks, Michael C.
1993-10-01
The requirements for high detection probability and low false alarm probability in modern wide area surveillance radars are rarely met due to spatial variations in clutter characteristics. Many filtering and CFAR detection algorithms have been developed to effectively deal with these variations; however, any single algorithm is likely to exhibit excessive false alarms and intolerably low detection probabilities in a dynamically changing environment. A great deal of research has led to advances in the state of the art in Artificial Intelligence (AI) and numerous areas have been identified for application to radar signal processing. The approach suggested here, discussed in a patent application submitted by the authors, is to intelligently select the filtering and CFAR detection algorithms being executed at any given time, based upon the observed characteristics of the interference environment. This approach requires sensing the environment, employing the most suitable algorithms, and applying an appropriate multiple algorithm fusion scheme or consensus algorithm to produce a global detection decision.
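A representative member of the algorithm bank such a system would select from is the classical cell-averaging CFAR detector, sketched below for a 1D range profile under an exponential-clutter assumption (parameter names illustrative):

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, pfa=1e-4):
    """Cell-averaging CFAR: threshold each cell by the mean power of its
    training cells (guard cells excluded), scaled to hold the design Pfa
    under exponentially distributed clutter power."""
    n = 2 * train                               # total training cells
    alpha = n * (pfa ** (-1.0 / n) - 1.0)       # standard CA-CFAR scale factor
    hits = np.zeros_like(power, dtype=bool)
    for k in range(guard + train, len(power) - guard - train):
        lead = power[k - guard - train:k - guard]
        lag = power[k + guard + 1:k + guard + 1 + train]
        hits[k] = power[k] > alpha * np.mean(np.r_[lead, lag])
    return hits
```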
Bayesian cloud detection for MERIS, AATSR, and their combination
NASA Astrophysics Data System (ADS)
Hollstein, A.; Fischer, J.; Carbajal Henken, C.; Preusker, R.
2014-11-01
A broad range of different Bayesian cloud detection schemes is applied to measurements from the Medium Resolution Imaging Spectrometer (MERIS), the Advanced Along-Track Scanning Radiometer (AATSR), and their combination. The cloud masks were designed to be numerically efficient and suited for the processing of large amounts of data. Results from the classical and naive approach to Bayesian cloud masking are discussed for MERIS and AATSR as well as for their combination. A sensitivity study on the resolution of multidimensional histograms, which were post-processed by Gaussian smoothing, shows how theoretically insufficient amounts of truth data can be used to set up accurate classical Bayesian cloud masks. Sets of exploited features from single and derived channels are numerically optimized and results for naive and classical Bayesian cloud masks are presented. The application of the Bayesian approach is discussed in terms of reproducing existing algorithms, enhancing existing algorithms, increasing the robustness of existing algorithms, and on setting up new classification schemes based on manually classified scenes.
Bayesian cloud detection for MERIS, AATSR, and their combination
NASA Astrophysics Data System (ADS)
Hollstein, A.; Fischer, J.; Carbajal Henken, C.; Preusker, R.
2015-04-01
A broad range of different Bayesian cloud detection schemes is applied to measurements from the Medium Resolution Imaging Spectrometer (MERIS), the Advanced Along-Track Scanning Radiometer (AATSR), and their combination. The cloud detection schemes were designed to be numerically efficient and suited for the processing of large numbers of data. Results from the classical and naive approach to Bayesian cloud masking are discussed for MERIS and AATSR as well as for their combination. A sensitivity study on the resolution of multidimensional histograms, which were post-processed by Gaussian smoothing, shows how theoretically insufficient numbers of truth data can be used to set up accurate classical Bayesian cloud masks. Sets of exploited features from single and derived channels are numerically optimized and results for naive and classical Bayesian cloud masks are presented. The application of the Bayesian approach is discussed in terms of reproducing existing algorithms, enhancing existing algorithms, increasing the robustness of existing algorithms, and on setting up new classification schemes based on manually classified scenes.
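A minimal sketch of the naive Bayesian classification step described in both versions of this work, assuming per-feature class-conditional densities stored as smoothed 1D histograms (the data structures and names are illustrative):

```python
import numpy as np

def naive_bayes_cloud_prob(x, hists, p_cloud=0.5):
    """Naive Bayesian cloud probability for one pixel. `hists` holds, per
    feature, (bin_edges, pdf_cloudy, pdf_clear) estimated from training
    scenes; features are treated as conditionally independent."""
    like_c, like_n = p_cloud, 1.0 - p_cloud
    for xi, (edges, pdf_c, pdf_n) in zip(x, hists):
        k = np.clip(np.searchsorted(edges, xi) - 1, 0, len(pdf_c) - 1)
        like_c *= pdf_c[k]                 # cloudy class-conditional density
        like_n *= pdf_n[k]                 # clear class-conditional density
    return like_c / (like_c + like_n)      # posterior cloud probability
```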
NASA Astrophysics Data System (ADS)
Wang, Xiaowei; Li, Huiping; Li, Zhichao
2018-04-01
The interfacial heat transfer coefficient (IHTC) is one of the most important thermal physical parameters and has significant effects on the calculation accuracy of physical fields in numerical simulation. In this study, the artificial fish swarm algorithm (AFSA) was used to evaluate the IHTC between the heated sample and the quenchant in a one-dimensional heat conduction problem. AFSA is a global optimization method. In order to speed up convergence, a hybrid method combining AFSA with a normal distribution method (ZAFSA) was presented. The IHTC values evaluated by ZAFSA were compared with those attained by AFSA and by the advanced-retreat and golden section methods. The results show that a reasonable IHTC is obtained by using ZAFSA and that the hybrid method converges well. The algorithm based on ZAFSA not only accelerates convergence but also reduces numerical oscillation in the evaluation of the IHTC.
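For comparison, the golden section method mentioned above can be sketched in a few lines; applied to IHTC evaluation, f would be the misfit between measured and simulated temperature histories as a function of a trial coefficient (a generic sketch, not the authors' ZAFSA):

```python
def golden_section(f, a, b, tol=1e-4):
    """Golden-section search for the minimizer of a unimodal objective f
    on [a, b], e.g. a misfit as a function of a trial heat transfer
    coefficient."""
    g = (5 ** 0.5 - 1) / 2                 # inverse golden ratio ~0.618
    c, d = b - g * (b - a), a + g * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                    # minimum lies in [a, d]
            c = b - g * (b - a)
        else:
            a, c = c, d                    # minimum lies in [c, b]
            d = a + g * (b - a)
    return 0.5 * (a + b)
```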
Computing row and column counts for sparse QR and LU factorization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, John R.; Li, Xiaoye S.; Ng, Esmond G.
2001-01-01
We present algorithms to determine the number of nonzeros in each row and column of the factors of a sparse matrix, for both the QR factorization and the LU factorization with partial pivoting. The algorithms use only the nonzero structure of the input matrix, and run in time nearly linear in the number of nonzeros in that matrix. They may be used to set up data structures or schedule parallel operations in advance of the numerical factorization. The row and column counts we compute are upper bounds on the actual counts. If the input matrix is strong Hall and there is no coincidental numerical cancellation, the counts are exact for QR factorization and are the tightest bounds possible for LU factorization. These algorithms are based on our earlier work on computing row and column counts for sparse Cholesky factorization, plus an efficient method to compute the column elimination tree of a sparse matrix without explicitly forming the product of the matrix and its transpose.
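The counts the paper predicts symbolically can be measured after the fact for comparison; the sketch below does so with SciPy's SuperLU interface on a random sparse matrix (illustrative sizes; note that SuperLU applies its own column permutation, and the paper's algorithms would bound these counts before any numerical work is done):

```python
import numpy as np
from scipy.sparse import identity, random as sprandom
from scipy.sparse.linalg import splu

# Diagonally shifted random matrix, almost surely nonsingular.
A = (identity(300) * 5.0 + sprandom(300, 300, density=0.01,
                                    random_state=0)).tocsc()
lu = splu(A)
count_L = np.diff(lu.L.indptr)   # actual nonzeros per column of L
count_U = np.diff(lu.U.indptr)   # actual nonzeros per column of U
print(count_L.sum(), count_U.sum())
```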
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Youngblood, John N.; Saha, Aindam
1987-01-01
Improvements and advances in the development of computer architecture now provide innovative technology for recasting traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the algorithmic execution of an avionics guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
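The critical path analysis underlying the allocation can be sketched as a longest-path computation over the task DAG; the dictionary-based representation below is illustrative, not the report's data structure:

```python
def critical_path(tasks):
    """Critical-path (longest-path) finish times over a task DAG given as
    {task: (duration, [dependencies])}; tasks on the longest chain are the
    natural priorities when mapping work onto processing elements."""
    finish = {}
    def t_finish(k):
        if k not in finish:
            dur, deps = tasks[k]
            finish[k] = dur + max((t_finish(d) for d in deps), default=0.0)
        return finish[k]
    return {k: t_finish(k) for k in tasks}

# Toy guidance task graph: integrate state, estimate target, form command.
print(critical_path({'integrate': (2.0, []), 'estimate': (3.0, []),
                     'command': (1.0, ['integrate', 'estimate'])}))
```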
Computational Astrophysical Magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Norman, M. L.
1994-05-01
Cosmic magnetic fields have intrigued and vexed astrophysicists seeking to understand their complex dynamics in a wide variety of astronomical settings. Magnetic fields are believed to play an important role in regulating star formation in molecular clouds, providing an effective viscosity in accretion disks, accelerating astrophysical jets, and influencing the large scale structure of the ISM of disk galaxies. Radio observations of supernova remnants and extragalactic radio jets prove that magnetic fields are fundamentally linked to astrophysical particle acceleration. Magnetic fields exist on cosmological scales as shown by the existence of radio halos in clusters of galaxies. Theoretical investigation of these and other phenomena requires numerical simulations due to the inherent complexity of MHD, but until now neither the computer power nor the numerical algorithms existed to mount a serious attack on the most important problems. That has now changed. Advances in parallel computing and numerical algorithms now permit the simulation of fully nonlinear, time-dependent astrophysical MHD in 2D and 3D. In this talk, I will describe the ZEUS codes for astrophysical MHD developed at the Laboratory for Computational Astrophysics (LCA) at the University of Illinois. These codes are now available to the national community. The numerical algorithms and test suite used to validate them are briefly discussed. Several applications of ZEUS to the topics listed above are presented. An extension of ZEUS to model ambipolar diffusion in weakly ionized plasmas is illustrated. I discuss how continuing exponential growth in computer power and new numerical algorithms under development will allow us to tackle two grand challenges: compressible MHD turbulence and relativistic MHD. This work is partially supported by grants NSF AST-9201113 and NASA NAG 5-2493.
Advanced FIREFLY Assessment Generalized Mechanization Requirements Report
1979-06-01
Subject terms: fire control systems; fire control computers; weapon control. Abstract (fragmentary): The requirements for ... an airborne digital computer which can be specialized to perform successfully in a variety of tactical aircraft with differing avionics sensors, fire ... Contents include: Time of Flight Computation Using a Modified (China Lake) Numerical Integration Algorithm.
A feedback control for the advanced launch system
NASA Technical Reports Server (NTRS)
Seywald, Hans; Cliff, Eugene M.
1991-01-01
A robust feedback algorithm is presented for a near-minimum-fuel ascent of a two-stage launch vehicle operating in the equatorial plane. The development of the algorithm is based on the ideas of neighboring optimal control and can be divided into three phases. In phase 1, the formalism of optimal control is employed to calculate fuel-optimal ascent trajectories for a simple point-mass model. In phase 2, these trajectories are used to numerically calculate gain functions of time for the control(s), the total flight time, and possibly, for other variables of interest. In phase 3, these gains are used to determine feedback expressions for the controls associated with a more realistic model of a launch vehicle. With the Advanced Launch System in mind, all calculations are performed on a two-stage vehicle with fixed thrust history, but this restriction is by no means important for the approach taken. Performance and robustness of the algorithm are found to be excellent.
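Phase 3 of the scheme amounts to evaluating u(t) = u_ref(t) + K(t) (x - x_ref(t)) from the precomputed tables; a hedged sketch, assuming a scalar control and time-gridded gains (the array layout is illustrative):

```python
import numpy as np

def neighboring_feedback(t, x, t_grid, x_ref, u_ref, K):
    """Neighboring-optimal-control feedback: interpolate the precomputed
    reference trajectory and gain schedule in time, then correct the
    nominal control linearly in the state deviation."""
    xr = np.array([np.interp(t, t_grid, x_ref[:, i])
                   for i in range(x_ref.shape[1])])    # reference state x_ref(t)
    ur = np.interp(t, t_grid, u_ref)                   # nominal control u_ref(t)
    Kt = np.array([np.interp(t, t_grid, K[:, i])
                   for i in range(K.shape[1])])        # gain row K(t)
    return ur + Kt @ (x - xr)
```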
Computational Fluid Dynamics Symposium on Aeropropulsion
NASA Technical Reports Server (NTRS)
1991-01-01
Recognizing the considerable advances that have been made in computational fluid dynamics, the Internal Fluid Mechanics Division of NASA Lewis Research Center sponsored this symposium with the objective of providing a forum for exchanging information regarding recent developments in numerical methods, physical and chemical modeling, and applications. This conference publication is a compilation of 4 invited and 34 contributed papers presented in six sessions: algorithms one and two, turbomachinery, turbulence, components application, and combustors. Topics include numerical methods, grid generation, chemically reacting flows, turbulence modeling, inlets, nozzles, and unsteady flows.
Denoising of gravitational wave signals via dictionary learning algorithms
NASA Astrophysics Data System (ADS)
Torres-Forné, Alejandro; Marquina, Antonio; Font, José A.; Ibáñez, José M.
2016-12-01
Gravitational wave astronomy has become a reality after the historical detections accomplished during the first observing run of the two advanced LIGO detectors. In the following years, the number of detections is expected to increase significantly with the full commissioning of the advanced LIGO, advanced Virgo and KAGRA detectors. The development of sophisticated data analysis techniques to improve the opportunities of detection for low signal-to-noise-ratio events is, hence, a most crucial effort. In this paper, we present one such technique, dictionary-learning algorithms, which have been extensively developed in the last few years and successfully applied mostly in the context of image processing. However, to the best of our knowledge, such algorithms have not yet been employed to denoise gravitational wave signals. By building dictionaries from numerical relativity templates of both binary black hole mergers and bursts of rotational core collapse, we show how machine-learning algorithms based on dictionaries can also be successfully applied for gravitational wave denoising. We use a subset of signals from both catalogs, embedded in nonwhite Gaussian noise, to assess our techniques with a large sample of tests and to find the best model parameters. The application of our method to the actual signal GW150914 shows promising results. Dictionary-learning algorithms could be a complementary addition to the gravitational wave data analysis toolkit. They may be used to extract signals from noise and to infer physical parameters if the data are in good enough agreement with the morphology of the dictionary atoms.
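A minimal sketch of the dictionary-learning denoising pipeline described above, using scikit-learn's generic dictionary learner and OMP sparse coder on waveform segments; the paper's own dictionaries, built from numerical relativity catalogs, and its optimization details differ.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

def train_dictionary(templates, n_atoms=64):
    """Learn an overcomplete dictionary from clean template segments
    (one segment per row)."""
    dl = DictionaryLearning(n_components=n_atoms, alpha=1.0, max_iter=200)
    dl.fit(templates)
    return dl.components_                   # learned atoms, one per row

def denoise(noisy, atoms, n_nonzero=8):
    """Sparse-code noisy segments on the learned atoms (OMP) and
    reconstruct; the sparse representation suppresses the noise."""
    coder = SparseCoder(dictionary=atoms, transform_algorithm='omp',
                        transform_n_nonzero_coefs=n_nonzero)
    code = coder.transform(noisy)           # sparse coefficients per segment
    return code @ atoms                     # denoised reconstruction
```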
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, Hong; Liu, Jian; Xiao, Jianyuan
Particle-in-cell (PIC) simulation is the most important numerical tool in plasma physics. However, its long-term accuracy has not been established. To overcome this difficulty, we developed a canonical symplectic PIC method for the Vlasov-Maxwell system by discretising its canonical Poisson bracket. A fast local algorithm to solve the symplectic implicit time advance is discovered without root searching or global matrix inversion, enabling applications of the proposed method to very large-scale plasma simulations with many, e.g., 10^9, degrees of freedom. The long-term accuracy and fidelity of the algorithm enables us to numerically confirm Mouhot and Villani's theory and conjecture on nonlinear Landau damping over several orders of magnitude using the PIC method, and to calculate the nonlinear evolution of the reflectivity during the mode conversion process from extraordinary waves to Bernstein waves.
Using adaptive grid in modeling rocket nozzle flow
NASA Technical Reports Server (NTRS)
Chow, Alan S.; Jin, Kang-Ren
1992-01-01
The mechanical behavior of a rocket motor internal flow field results in a system of nonlinear partial differential equations which cannot be solved analytically. However, this system of equations, called the Navier-Stokes equations, can be solved numerically. The accuracy and the convergence of the solution will depend largely on how precisely the sharp gradients in the domain of interest can be resolved. With the advances in computer technology, more sophisticated algorithms are available to improve the accuracy and convergence of the solutions. Adaptive grid generation is one such scheme, which can be incorporated into the algorithm to enhance the capability of numerical modeling. It is equivalent to putting intelligence into the algorithm to optimize the use of computer memory. With this scheme, the finite difference domain of the flow field, called the grid, has to be neither very fine nor strategically placed at the locations of sharp gradients. The grid is self-adapting as the solution evolves. This scheme significantly improves the methodology of solving flow problems in rocket nozzles by taking the refinement part of grid generation out of the hands of computational fluid dynamics (CFD) specialists and placing it into the computer algorithm itself.
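In 1D the adaptive idea reduces to equidistributing a monitor function, clustering nodes where the solution gradient is sharp; a minimal sketch (monitor choice and names illustrative, not the report's scheme):

```python
import numpy as np

def adapt_grid(x, u, n_new=None):
    """Redistribute grid nodes to equidistribute an arc-length-like monitor
    w = sqrt(1 + (du/dx)^2), so nodes cluster where gradients are sharp."""
    w = np.sqrt(1.0 + np.gradient(u, x) ** 2)
    wm = 0.5 * (w[1:] + w[:-1])                               # cell averages
    s = np.concatenate(([0.0], np.cumsum(wm * np.diff(x))))   # monitor integral
    targets = np.linspace(0.0, s[-1], n_new or len(x))
    return np.interp(targets, s, x)     # new nodes: equal monitor per cell
```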
Parallel Algorithms for Computational Models of Geophysical Systems
NASA Astrophysics Data System (ADS)
Carrillo Ledesma, A.; Herrera, I.; de la Cruz, L. M.; Hernández, G.; Grupo de Modelacion Matematica y Computacional
2013-05-01
Mathematical models of many systems of interest, including very important continuous systems of Earth Sciences and Engineering, lead to a great variety of partial differential equations (PDEs) whose solution methods are based on the computational processing of large-scale algebraic systems. Furthermore, the incredible expansion experienced by the existing computational hardware and software has made problems of ever increasing diversity and complexity, posed by scientific and engineering applications, amenable to effective treatment. Parallel computing is outstanding among the new computational tools and, in order to effectively use the most advanced computers available today, massively parallel software is required. Domain decomposition methods (DDMs) have been developed precisely for effectively treating PDEs in parallel. Ideally, the main objective of domain decomposition research is to produce algorithms capable of 'obtaining the global solution by exclusively solving local problems', but up to now this has only been an aspiration, that is, a strong desire for achieving such a property, and so we call it 'the DDM-paradigm'. In recent times, numerically competitive DDM-algorithms are non-overlapping, preconditioned and necessarily incorporate constraints, which pose an additional challenge for achieving the DDM-paradigm. Recently a group of four algorithms, referred to as the 'DVS-algorithms', which fulfill the DDM-paradigm, was developed. To derive them a new discretization method, which uses a non-overlapping system of nodes (the derived-nodes), was introduced. This discretization procedure can be applied to any boundary-value problem, or system of such equations. In turn, the resulting system of discrete equations can be treated using any available DDM-algorithm. In particular, two of the four DVS-algorithms mentioned above were obtained by application of the well-known and very effective algorithms BDDC and FETI-DP; these will be referred to as the DVS-BDDC and DVS-FETI-DP algorithms. The other two, which will be referred to as the DVS-PRIMAL and DVS-DUAL algorithms, were obtained by application of two new algorithms that had not been previously reported in the literature. As said before, the four DVS-algorithms constitute a group of preconditioned and constrained algorithms that, for the first time, fulfill the DDM-paradigm. Both BDDC and FETI-DP are very well known, and both are highly efficient. Recently, it was established that these two methods are closely related and their numerical performance is quite similar. On the other hand, through numerical experiments, we have established that the numerical performances of the members of the DVS-algorithms group (DVS-BDDC, DVS-FETI-DP, DVS-PRIMAL and DVS-DUAL) are very similar too. Furthermore, we have carried out comparisons of the performances of the standard versions of BDDC and FETI-DP with DVS-BDDC and DVS-FETI-DP, and in all such numerical experiments the DVS algorithms have performed significantly better.
Rotorcraft Brownout: Advanced Understanding, Control and Mitigation
2008-12-31
the Gauss-Seidel iterative method. The overall steps of the SIMPLER algorithm can be summarized as: 1. Guess the velocity field, 2. Calculate the momentum... techniques and numerical methods, and the team will begin to develop a methodology that is capable of integrating these solutions and highlighting... rotorcraft design optimization techniques will then be undertaken using the validated computational methods.
Modeling, Analysis, and Optimization Issues for Large Space Structures.
1983-02-01
There are numerous opportunities provided by new advances in computer hardware, firmware, software, CAD/CAM systems, computational algorithms and...
Final Report---Optimization Under Nonconvexity and Uncertainty: Algorithms and Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeff Linderoth
2011-11-06
The goal of this work was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state of the art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems. The focus of the work done in the continuation was on Mixed Integer Nonlinear Programs (MINLPs) and Mixed Integer Linear Programs (MILPs), especially those containing a great deal of symmetry.
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Housner, Jerrold M.
1993-01-01
Recent advances in computer technology that are likely to impact structural analysis and design of flight vehicles are reviewed. A brief summary is given of the advances in microelectronics, networking technologies, and in the user-interface hardware and software. The major features of new and projected computing systems, including high performance computers, parallel processing machines, and small systems, are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed. The impact of the advances in computer technology on structural analysis and the design of flight vehicles is described. A scenario for future computing paradigms is presented, and the near-term needs in the computational structures area are outlined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spotz, William F.
PyTrilinos is a set of Python interfaces to compiled Trilinos packages. This collection supports serial and parallel dense linear algebra, serial and parallel sparse linear algebra, direct and iterative linear solution techniques, algebraic and multilevel preconditioners, nonlinear solvers and continuation algorithms, eigensolvers and partitioning algorithms. Also included are a variety of related utility functions and classes, including distributed I/O, coloring algorithms and matrix generation. PyTrilinos vector objects are compatible with the popular NumPy Python package. As a Python front end to compiled libraries, PyTrilinos takes advantage of the flexibility and ease of use of Python, and the efficiency of the underlying C++, C and Fortran numerical kernels. This paper covers recent, previously unpublished advances in the PyTrilinos package.
NASA Astrophysics Data System (ADS)
Gilbert, B. K.; Robb, R. A.; Chu, A.; Kenue, S. K.; Lent, A. H.; Swartzlander, E. E., Jr.
1981-02-01
Rapid advances during the past ten years of several forms of computer-assisted tomography (CT) have resulted in the development of numerous algorithms to convert raw projection data into cross-sectional images. These reconstruction algorithms are either 'iterative,' in which a large matrix algebraic equation is solved by successive approximation techniques; or 'closed form'. Continuing evolution of the closed form algorithms has allowed the newest versions to produce excellent reconstructed images in most applications. This paper will review several computer software and special-purpose digital hardware implementations of closed form algorithms, either proposed during the past several years by a number of workers or actually implemented in commercial or research CT scanners. The discussion will also cover a number of recently investigated algorithmic modifications which reduce the amount of computation required to execute the reconstruction process, as well as several new special-purpose digital hardware implementations under development in laboratories at the Mayo Clinic.
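To make the 'closed form' branch concrete, here is a bare-bones filtered backprojection sketch for a parallel-beam sinogram. It is an assumption-laden toy (Ram-Lak filter, nearest-neighbor interpolation, shape conventions chosen for brevity), not any of the hardware implementations reviewed in the paper.

```python
# Minimal filtered backprojection for a parallel-beam sinogram of shape
# (n_angles, n_detectors).  Toy illustration of a closed-form reconstruction.
import numpy as np

def fbp(sino, angles):
    n_ang, n_det = sino.shape
    # Ramp (Ram-Lak) filter applied in the Fourier domain, row by row.
    freqs = np.fft.fftfreq(n_det)
    filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * np.abs(freqs),
                                   axis=1))
    # Backproject each filtered view across the image grid.
    half = n_det // 2
    xs, ys = np.meshgrid(np.arange(n_det) - half, np.arange(n_det) - half)
    image = np.zeros((n_det, n_det))
    for a, view in zip(angles, filtered):
        t = xs * np.cos(a) + ys * np.sin(a) + half   # detector coordinate
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        image += view[idx]                           # nearest-neighbor lookup
    return image * np.pi / (2 * len(angles))

# Usage: fbp(sino, np.linspace(0.0, np.pi, sino.shape[0], endpoint=False))
```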
Computational Challenges of Viscous Incompressible Flows
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Kiris, Cetin; Kim, Chang Sung
2004-01-01
Over the past thirty years, numerical methods and simulation tools for incompressible flows have been advanced as a subset of the computational fluid dynamics (CFD) discipline. Although incompressible flows are encountered in many areas of engineering, simulation of compressible flow has been the major driver for developing computational algorithms and tools. This is probably due to the rather stringent requirements for predicting aerodynamic performance characteristics of flight vehicles, while flow devices involving low-speed or incompressible flow could be reasonably well designed without resorting to accurate numerical simulations. As flow devices are required to be more sophisticated and highly efficient, CFD tools have become increasingly important in fluid engineering for incompressible and low-speed flow. This paper reviews some of the successes made possible by advances in computational technologies during the same period, and discusses some of the current challenges faced in computing incompressible flows.
Successes and Challenges of Incompressible Flow Simulation
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Kiris, Cetin
2003-01-01
During the past thirty years, numerical methods and simulation tools for incompressible flows have been advanced as a subset of CFD discipline. Even though incompressible flows are encountered in many areas of engineering, simulation of compressible flow has been the major driver for developing computational algorithms and tools. This is probably due to rather stringent requirements for predicting aerodynamic performance characteristics of flight vehicles, while flow devices involving low speed or incompressible flow could be reasonably well designed without resorting to accurate numerical simulations. As flow devices are required to be more sophisticated and highly efficient, CFD tools become indispensable in fluid engineering for incompressible and low speed flow. This paper is intended to review some of the successes made possible by advances in computational technologies during the same period, and discuss some of the current challenges.
Stochastic Forcing for Ocean Uncertainty Prediction
2013-09-30
using the desired dynamics and the fitting of that velocity field to the bathymetry, coasts and discretization for the desired simulation. New algorithms... numerical bias is removed. Pdfs of the forecast errors are shown to capture and evolve non-Gaussian statistics. Comparing the Kullback-Leibler... advances in collaborative sea exercises of opportunity vi) Strengthen existing and initiate new collaborations with NRL, using and leveraging the MIT
Astrophysical Computation in Research, the Classroom and Beyond
NASA Astrophysics Data System (ADS)
Frank, Adam
2009-03-01
In this talk I review progress in the use of simulations as a tool for astronomical research, for education and public outreach. The talk will include the basic elements of numerical simulations as well as advances in algorithms which have led to recent dramatic progress such as the use of Adaptive Mesh Refinement methods. The scientific focus of the talk will be star formation jets and outflows while the educational emphasis will be on the use of advanced platforms for simulation based learning in lecture and integrated homework. Learning modules for science outreach websites such as DISCOVER magazine will also be highlighted.
Genetic algorithms for multicriteria shape optimization of induction furnace
NASA Astrophysics Data System (ADS)
Kůs, Pavel; Mach, František; Karban, Pavel; Doležel, Ivo
2012-09-01
In this contribution we deal with a multi-criteria shape optimization of an induction furnace. We want to find shape parameters of the furnace such that two different criteria are optimized. Since they cannot be optimized simultaneously, instead of one optimum we find a set of partially optimal designs, the so-called Pareto front. We compare two different approaches to the optimization, one using the nonlinear conjugate gradient method and the second using a variant of a genetic algorithm. As can be seen from the numerical results, the genetic algorithm seems to be the right choice for this problem. The solution of the direct problem (a coupled problem consisting of magnetic and heat fields) is done using our own code Agros2D. It uses finite elements of higher order, leading to a fast and accurate solution of the relatively complicated coupled problem. It also provides advanced scripting support, allowing us to prepare a parametric model of the furnace and simply incorporate various types of optimization algorithms.
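The bookkeeping at the heart of any multi-criteria genetic algorithm is the non-dominance test that extracts the Pareto front from a population. The sketch below shows that test in isolation; the two toy objectives stand in for the furnace criteria, which in the paper are evaluated by the coupled Agros2D model.

```python
# Pareto-front extraction from a population of candidate designs.
# The two objectives are hypothetical stand-ins for the furnace criteria.
import numpy as np

def objectives(x):
    return np.array([x[0]**2 + x[1]**2, (x[0] - 1.0)**2 + x[1]**2])

def dominates(fa, fb):
    """fa dominates fb if it is no worse in all criteria and better in one."""
    return np.all(fa <= fb) and np.any(fa < fb)

rng = np.random.default_rng(1)
pop = rng.uniform(-1.0, 2.0, size=(200, 2))          # random designs
scores = np.array([objectives(x) for x in pop])
front = [i for i in range(len(pop))
         if not any(dominates(scores[j], scores[i])
                    for j in range(len(pop)) if j != i)]
pareto_designs = pop[front]                          # the non-dominated set
```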
Improving the Numerical Stability of Fast Matrix Multiplication
Ballard, Grey; Benson, Austin R.; Druinsky, Alex; ...
2016-10-04
Fast algorithms for matrix multiplication, namely those that perform asymptotically fewer scalar operations than the classical algorithm, have been considered primarily of theoretical interest. Apart from Strassen's original algorithm, few fast algorithms have been efficiently implemented or used in practical applications. However, there exist many practical alternatives to Strassen's algorithm with varying performance and numerical properties. Fast algorithms are known to be numerically stable, but because their error bounds are slightly weaker than the classical algorithm, they are not used even in cases where they provide a performance benefit. We argue in this study that the numerical sacrifice of fast algorithms, particularly for the typical use cases of practical algorithms, is not prohibitive, and we explore ways to improve the accuracy both theoretically and empirically. The numerical accuracy of fast matrix multiplication depends on properties of the algorithm and of the input matrices, and we consider both contributions independently. We generalize and tighten previous error analyses of fast algorithms and compare their properties. We discuss algorithmic techniques for improving the error guarantees from two perspectives: manipulating the algorithms, and reducing input anomalies by various forms of diagonal scaling. In conclusion, we benchmark performance and demonstrate our improved numerical accuracy.
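For reference, the textbook recursion behind Strassen's method is short enough to quote in full. The cutoff to a classical base case is the standard practical trick; the restriction to square power-of-two sizes is an assumption made here for brevity, not a limitation of the tuned implementations benchmarked in the paper.

```python
# Strassen's algorithm for square power-of-two sizes, with a classical
# base case below the cutoff.  Seven recursive products instead of eight.
import numpy as np

def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:
        return A @ B                      # classical multiply when small
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```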
A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.
Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J
2009-11-28
In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.
Dynamic Analysis of Sounding Rocket Pneumatic System Revision
NASA Technical Reports Server (NTRS)
Armen, Jerald
2010-01-01
The recent fusion of decades of advancements in mathematical models, numerical algorithms and curve fitting techniques marked the beginning of a new era in the science of simulation, which is becoming indispensable to the study of rockets and aerospace analysis. In the pneumatic system, which is the main focus of this paper, particular emphasis is placed on the effects of compressible flow in the Attitude Control System of a sounding rocket.
Extending HPF for advanced data parallel applications
NASA Technical Reports Server (NTRS)
Chapman, Barbara; Mehrotra, Piyush; Zima, Hans
1994-01-01
The stated goal of High Performance Fortran (HPF) was to 'address the problems of writing data parallel programs where the distribution of data affects performance'. After examining the current version of the language we are led to the conclusion that HPF has not fully achieved this goal. While the basic distribution functions offered by the language - regular block, cyclic, and block cyclic distributions - can support regular numerical algorithms, advanced applications such as particle-in-cell codes or unstructured mesh solvers cannot be expressed adequately. We believe that this is a major weakness of HPF, significantly reducing its chances of becoming accepted in the numeric community. The paper discusses the data distribution and alignment issues in detail, points out some flaws in the basic language, and outlines possible future paths of development. Furthermore, we briefly deal with the issue of task parallelism and its integration with the data parallel paradigm of HPF.
Numerical algebraic geometry: a new perspective on gauge and string theories
NASA Astrophysics Data System (ADS)
Mehta, Dhagash; He, Yang-Hui; Hauensteine, Jonathan D.
2012-07-01
There is a rich interplay between algebraic geometry and string and gauge theories which has been recently aided immensely by advances in computational algebra. However, symbolic (Gröbner) methods are severely limited by algorithmic issues such as exponential space complexity and being highly sequential. In this paper, we introduce a novel paradigm of numerical algebraic geometry which in a plethora of situations overcomes these shortcomings. The so-called `embarrassing parallelizability' allows us to solve many problems and extract physical information which elude symbolic methods. We describe the method and then use it to solve various problems arising from physics which could not be otherwise solved.
Modified symplectic schemes with nearly-analytic discrete operators for acoustic wave simulations
NASA Astrophysics Data System (ADS)
Liu, Shaolin; Yang, Dinghui; Lang, Chao; Wang, Wenshuai; Pan, Zhide
2017-04-01
Using a structure-preserving algorithm significantly increases the computational efficiency of solving wave equations. However, only a few explicit symplectic schemes are available in the literature, and the capabilities of these symplectic schemes have not been sufficiently exploited. Here, we propose a modified strategy to construct explicit symplectic schemes for time advance. The acoustic wave equation is transformed into a Hamiltonian system. The classical symplectic partitioned Runge-Kutta (PRK) method is used for the temporal discretization. Additional spatial differential terms are added to the PRK schemes, and two modified time-advancing symplectic methods with all positive symplectic coefficients are constructed. The spatial differential operators are approximated by nearly-analytic discrete (NAD) operators, and we call the fully discretized scheme the modified symplectic nearly analytic discrete (MSNAD) method. Theoretical analyses show that the MSNAD methods exhibit less numerical dispersion and higher stability limits than conventional methods. Three numerical experiments are conducted to verify the advantages of the MSNAD methods, such as their numerical accuracy, computational cost, stability, and long-term calculation capability.
NASA Technical Reports Server (NTRS)
Kaplan, Michael L.; Lin, Yuh-Lang
2005-01-01
The purpose of the research was to develop and test improved hazard algorithms that could result in the development of sensors that are better able to anticipate potentially severe atmospheric turbulence, which affects aircraft safety. The research focused on employing numerical simulation models to develop improved algorithms for the prediction of aviation turbulence. This involved producing both research simulations and real-time simulations of environments predisposed to moderate and severe aviation turbulence. The research resulted in the following fundamental advancements toward the aforementioned goal: 1) very high resolution simulations of turbulent environments indicated how predictive hazard indices could be improved, resulting in a candidate hazard index with potential for improvement over existing operational indices; 2) a real-time turbulence hazard numerical modeling system was improved by correcting deficiencies in its simulation of moist convection; and 3) the same real-time predictive system was tested by running the code twice daily, and the hazard prediction indices were updated and improved. Additionally, a simple validation study was undertaken to determine how well a real-time hazard predictive index performed when compared to commercial pilot observations of aviation turbulence. Simple statistical analyses performed in this validation study indicated potential skill in employing the hazard prediction index to predict regions of varying intensities of aviation turbulence. Data sets from a research numerical model were provided to NASA for use in a large eddy simulation numerical model. A NASA contractor report and several refereed journal articles were prepared and submitted for publication during the course of this research.
Application of a Fully Numerical Guidance to Mars Aerocapture
NASA Technical Reports Server (NTRS)
Matz, Daniel A.; Lu, Ping; Mendeck, Gavin F.; Sostaric, Ronald R.
2017-01-01
An advanced guidance algorithm, Fully Numerical Predictor-corrector Aerocapture Guidance (FNPAG), has been developed to perform aerocapture maneuvers in an optimal manner. It is a model-based, numerical guidance that benefits from requiring few adjustments across a variety of different hypersonic vehicle lift-to-drag ratios, ballistic coefficients, and atmospheric entry conditions. In this paper, FNPAG is first applied to the Mars Rigid Vehicle (MRV) mid lift-to-drag ratio concept. Then the study is generalized to a design map of potential Mars aerocapture missions and vehicles, ranging from the scale and requirements of recent robotic missions to potential human and precursor missions. The design map results show the versatility of FNPAG and provide insight for the design of Mars aerocapture vehicles and atmospheric entry conditions to achieve desired performance.
NASA Technical Reports Server (NTRS)
Chen, C. P.
1990-01-01
An existing Computational Fluid Dynamics code for simulating complex turbulent flows inside a liquid rocket combustion chamber was validated and further developed. The Advanced Rocket Injector/Combustor Code (ARICC) is simplified and validated against benchmark flow situations for laminar and turbulent flows. The numerical method used in the ARICC code is re-examined for incompressible flow calculations. For turbulent flows, both the subgrid and the two-equation k-epsilon turbulence models are studied. Cases tested include the idealized Burgers equation in complex geometries and boundaries, a laminar pipe flow, a high Reynolds number turbulent flow, and a confined coaxial jet with recirculations. The accuracy of the algorithm is examined by comparing the numerical results with analytical solutions as well as experimental data for different grid sizes.
Predicting catastrophes of non-autonomous networks with visibility graphs and horizontal visibility
NASA Astrophysics Data System (ADS)
Zhang, Haicheng; Xu, Daolin; Wu, Yousheng
2018-05-01
Prediction of potential catastrophes in engineering systems is a challenging problem. We make a first attempt to construct a complex network to predict catastrophes of a multi-modular floating system in advance of their occurrence. Response time series of the system can be mapped into a virtual network by using the visibility graph or horizontal visibility algorithm. The topology characteristics of the networks can then be used to forecast catastrophes of the system. Numerical results show that there is an obvious correspondence between the variation of topology characteristics and the onset of catastrophes. A Catastrophe Index (CI) is proposed as a numerical indicator to measure a qualitative change from a stable state to a catastrophic state. The two approaches, the visibility graph and horizontal visibility algorithms, are compared by using the index in a reliability analysis with different data lengths and sampling frequencies. The virtual network technique is potentially extendable to catastrophe prediction in other engineering systems.
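For readers unfamiliar with the mapping, the sketch below builds a natural visibility graph from a time series with the naive cubic-time test: two samples are linked if the straight line between them clears every intermediate sample. The input series is an arbitrary assumption; the horizontal visibility variant simply replaces the slope test with a height comparison.

```python
# Naive natural-visibility-graph construction from a time series.
# Horizontal visibility would use y[c] < min(y[a], y[b]) instead.
import numpy as np

def visibility_edges(y):
    n = len(y)
    edges = []
    for a in range(n):
        for b in range(a + 1, n):
            # (a, b) are linked if no intermediate sample blocks the line.
            visible = all(
                y[c] < y[a] + (y[b] - y[a]) * (c - a) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.append((a, b))
    return edges

rng = np.random.default_rng(2)
series = np.sin(np.linspace(0.0, 6.0, 50)) + 0.1 * rng.standard_normal(50)
edges = visibility_edges(series)
degrees = np.bincount(np.ravel(edges), minlength=len(series))  # topology stats
```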
NASA Technical Reports Server (NTRS)
Bakuckas, J. G.; Tan, T. M.; Lau, A. C. W.; Awerbuch, J.
1993-01-01
A finite element-based numerical technique has been developed to simulate damage growth in unidirectional composites. This technique incorporates elastic-plastic analysis, micromechanics analysis, failure criteria, and a node splitting and node force relaxation algorithm to create crack surfaces. Any combination of fiber and matrix properties can be used. One of the salient features of this technique is that damage growth can be simulated without pre-specifying a crack path. In addition, multiple damage mechanisms in the forms of matrix cracking, fiber breakage, fiber-matrix debonding and plastic deformation are capable of occurring simultaneously. The prevailing failure mechanism and the damage (crack) growth direction are dictated by the instantaneous near-tip stress and strain fields. Once the failure mechanism and crack direction are determined, the crack is advanced via the node splitting and node force relaxation algorithm. Simulations of the damage growth process in center-slit boron/aluminum and silicon carbide/titanium unidirectional specimens were performed. The simulation results agreed quite well with the experimental observations.
Catarinucci, L.; Tarricone, L.
2009-12-01
With the next transposition of the 2004/40/EC Directive, employers will become responsible for the electromagnetic field level at the workplace. To make this task easier, the scientific community is compiling practical guidelines to be followed. This work aims at enriching such guidelines, especially for the dosimetric issues. More specifically, some critical aspects related to the application of numerical dosimetric techniques for the verification of the safety limit compliance have been highlighted. In particular, three different aspects have been considered: the dosimetric parameter dependence on the shape and the inner characterisation of the exposed subject as well as on the numerical algorithm used, and the correlation between reference limits and basic restriction. Results and discussions demonstrate how, even by using sophisticated numerical techniques, in some cases a complex interpretation of the result is mandatory.
Mathematical and Numerical Techniques in Energy and Environmental Modeling
NASA Astrophysics Data System (ADS)
Chen, Z.; Ewing, R. E.
Mathematical models have been widely used to predict, understand, and optimize many complex physical processes, from semiconductor or pharmaceutical design to large-scale applications such as global weather models to astrophysics. In particular, simulation of environmental effects of air pollution is extensive. Here we address the need for using similar models to understand the fate and transport of groundwater contaminants and to design in situ remediation strategies. Three basic problem areas need to be addressed in the modeling and simulation of the flow of groundwater contamination. First, one obtains an effective model to describe the complex fluid/fluid and fluid/rock interactions that control the transport of contaminants in groundwater. This includes the problem of obtaining accurate reservoir descriptions at various length scales and modeling the effects of this heterogeneity in the reservoir simulators. Next, one develops accurate discretization techniques that retain the important physical properties of the continuous models. Finally, one develops efficient numerical solution algorithms that utilize the potential of the emerging computing architectures. We will discuss recent advances and describe the contribution of each of the papers in this book in these three areas. Keywords: reservoir simulation, mathematical models, partial differential equations, numerical algorithms
Numerical Methods for Forward and Inverse Problems in Discontinuous Media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chartier, Timothy P.
The research emphasis under this grant's funding is in the area of algebraic multigrid methods. The research has two main branches: 1) exploring interdisciplinary applications in which algebraic multigrid can make an impact and 2) extending the scope of algebraic multigrid methods with algorithmic improvements that are based in strong analysis. The work in interdisciplinary applications falls primarily in the field of biomedical imaging. Work under this grant demonstrated the effectiveness and robustness of multigrid for solving linear systems that result from highly heterogeneous finite element method models of the human head. The results in this work also give promise to medical advances possible with software that may be developed. Research to extend the scope of algebraic multigrid has been focused in several areas. In collaboration with researchers at the University of Colorado, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory, the PI developed an adaptive multigrid with subcycling via complementary grids. This method has very cheap computing costs per iterate and is showing promise as a preconditioner for conjugate gradient. Recent work with Los Alamos National Laboratory concentrates on developing algorithms that take advantage of the recent advances in adaptive multigrid research. The results of the various efforts in this research could ultimately have direct use and impact for researchers in a wide variety of applications, including astrophysics, neuroscience, contaminant transport in porous media, bi-domain heart modeling, modeling of tumor growth, and flow in heterogeneous porous media. This work has already led to basic advances in computational mathematics and numerical linear algebra and will continue to do so into the future.
Turbulence dissipation challenge: particle-in-cell simulations
NASA Astrophysics Data System (ADS)
Roytershteyn, V.; Karimabadi, H.; Omelchenko, Y.; Germaschewski, K.
2015-12-01
We discuss the application of three particle-in-cell (PIC) codes to problems relevant to the turbulence dissipation challenge. VPIC is a fully kinetic code extensively used to study a variety of diverse problems ranging from laboratory plasmas to astrophysics. PSC is a flexible fully kinetic code offering a variety of algorithms that can be advantageous for turbulence simulations, including high-order particle shapes, dynamic load balancing, and the ability to run efficiently on Graphics Processing Units (GPUs). Finally, HYPERS is a novel hybrid (kinetic ions + fluid electrons) code, which utilizes asynchronous time advance and a number of other advanced algorithms. We present examples drawn both from large-scale turbulence simulations and from the test problems outlined by the turbulence dissipation challenge. Special attention is paid to such issues as the small-scale intermittency of inertial range turbulence, the mode content of the sub-proton range of scales, the formation of electron-scale current sheets and the role of magnetic reconnection, as well as the numerical challenges of applying PIC codes to simulations of astrophysical turbulence.
A Polynomial Time, Numerically Stable Integer Relation Algorithm
NASA Technical Reports Server (NTRS)
Ferguson, Helaman R. P.; Bailey, Daivd H.; Kutler, Paul (Technical Monitor)
1998-01-01
Let x = (x_1, x_2, ..., x_n) be a vector of real numbers. x is said to possess an integer relation if there exist integers a_i, not all zero, such that a_1 x_1 + a_2 x_2 + ... + a_n x_n = 0. Beginning in 1977, several algorithms (with proofs) have been discovered to recover the a_i given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically. It often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
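A PSLQ implementation ships with the mpmath Python library, which makes the idea easy to try. The golden-ratio example below is our own choice: it recovers the relation 1 + phi - phi^2 = 0.

```python
# PSLQ via mpmath: recover the integer relation satisfied by the golden ratio.
from mpmath import mp, mpf, sqrt, pslq

mp.dps = 50                          # 50 digits of working precision
phi = (1 + sqrt(5)) / 2
print(pslq([mpf(1), phi, phi**2]))   # -> [1, 1, -1], i.e. 1 + phi - phi^2 = 0
```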
NASA Technical Reports Server (NTRS)
Dunn, M. H.; Tarkenton, G. M.
1992-01-01
This document describes the computational aspects of propeller noise prediction in the time domain and the use of high speed propeller noise prediction program ASSPIN (Advanced Subsonic and Supersonic Propeller Induced Noise). These formulations are valid in both the near and far fields. Two formulations are utilized by ASSPIN: (1) one is used for subsonic portions of the propeller blade; and (2) the second is used for transonic and supersonic regions on the blade. Switching between the two formulations is done automatically. ASSPIN incorporates advanced blade geometry and surface pressure modelling, adaptive observer time grid strategies, and contains enhanced numerical algorithms that result in reduced computational time. In addition, the ability to treat the nonaxial inflow case has been included.
Li, Tiejun; Min, Bin; Wang, Zhiming
2013-03-14
The stochastic integral ensuring the Newton-Leibnitz chain rule is essential in stochastic energetics. The Marcus canonical integral has this property and can be understood as the Wong-Zakai type smoothing limit when the driving process is non-Gaussian. However, this important concept seems not to be well known among physicists. In this paper, we discuss the Marcus integral for non-Gaussian processes and its computation in the context of stochastic energetics. We give a comprehensive introduction to the Marcus integral and compare three equivalent definitions in the literature. We introduce the exact pathwise simulation algorithm and give the error analysis. We show how to compute the thermodynamic quantities based on the pathwise simulation algorithm. We highlight the information hidden in the Marcus mapping, which plays the key role in determining thermodynamic quantities. We further propose the tau-leaping algorithm, which advances the process with deterministic time steps when the tau-leaping condition is satisfied. The numerical experiments and efficiency analysis show that it is very promising.
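Tau-leaping in its generic form, fixing a deterministic step and firing Poisson-distributed event counts, is easy to sketch. The birth-death process below is an illustrative assumption: it is not the Marcus-integral scheme of the paper, and the leap-condition check is omitted for brevity.

```python
# Generic Poisson tau-leaping for a birth-death process (toy illustration,
# not the Marcus-integral computation of the paper).
import numpy as np

rng = np.random.default_rng(3)
birth, death = 2.0, 0.1           # assumed rate constants
n, t, tau = 10, 0.0, 0.05
history = [(t, n)]
while t < 50.0:
    rates = np.array([birth, death * n])   # propensities a_j(x)
    counts = rng.poisson(rates * tau)      # events fired in one leap
    n = max(n + counts[0] - counts[1], 0)  # apply net state change
    t += tau
    history.append((t, n))
```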
Impact of new computing systems on computational mechanics and flight-vehicle structures technology
NASA Technical Reports Server (NTRS)
Noor, A. K.; Storaasli, O. O.; Fulton, R. E.
1984-01-01
Advances in computer technology which may have an impact on computational mechanics and flight vehicle structures technology were reviewed. The characteristics of supersystems, highly parallel systems, and small systems are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario for future hardware/software environment and engineering analysis systems is presented. Research areas with potential for improving the effectiveness of analysis methods in the new environment are identified.
Advances in Machine Learning and Data Mining for Astronomy
NASA Astrophysics Data System (ADS)
Way, Michael J.; Scargle, Jeffrey D.; Ali, Kamal M.; Srivastava, Ashok N.
2012-03-01
Advances in Machine Learning and Data Mining for Astronomy documents numerous successful collaborations among computer scientists, statisticians, and astronomers who illustrate the application of state-of-the-art machine learning and data mining techniques in astronomy. Due to the massive amount and complexity of data in most scientific disciplines, the material discussed in this text transcends traditional boundaries between various areas in the sciences and computer science. The book's introductory part provides context to issues in the astronomical sciences that are also important to health, social, and physical sciences, particularly probabilistic and statistical aspects of classification and cluster analysis. The next part describes a number of astrophysics case studies that leverage a range of machine learning and data mining technologies. In the last part, developers of algorithms and practitioners of machine learning and data mining show how these tools and techniques are used in astronomical applications. With contributions from leading astronomers and computer scientists, this book is a practical guide to many of the most important developments in machine learning, data mining, and statistics. It explores how these advances can solve current and future problems in astronomy and looks at how they could lead to the creation of entirely new algorithms within the data mining community.
Implementing a GPU-based numerical algorithm for modelling dynamics of a high-speed train
NASA Astrophysics Data System (ADS)
Sytov, E. S.; Bratus, A. S.; Yurchenko, D.
2018-04-01
This paper discusses the initiative of implementing a GPU-based numerical algorithm for studying various phenomena associated with the dynamics of high-speed railway transport. The proposed numerical algorithm for calculating the critical speed of the bogie is based on the first Lyapunov number. The numerical algorithm is validated against analytical results derived for a simple model. A dynamic model of a carriage connected to a new dual-wheelset flexible bogie is studied for linear and dry friction damping. Numerical results obtained by CPU, MPU and GPU approaches are compared and the appropriateness of these methods is discussed.
Adaptive mesh refinement and front-tracking for shear bands in an antiplane shear model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garaizar, F.X.; Trangenstein, J.
1998-09-01
In this paper the authors describe a numerical algorithm for the study of shear-band formation and growth in a two-dimensional antiplane shear of granular materials. The algorithm combines front-tracking techniques and adaptive mesh refinement. Tracking provides a more careful evolution of the band when coupled with special techniques to advance the ends of the shear band in the presence of a loss of hyperbolicity. The adaptive mesh refinement allows the computational effort to be concentrated in important areas of the deformation, such as the shear band and the elastic relief wave. The main challenges are the problems related to shear bands that extend across several grid patches and the effects that a nonhyperbolic growth rate of the shear bands has on the refinement process. They give examples of the success of the algorithm for various levels of refinement.
Automated Vectorization of Decision-Based Algorithms
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision- based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.
Advanced Fast 3-D Electromagnetic Solver for Microwave Tomography Imaging.
Simonov, Nikolai; Kim, Bo-Ra; Lee, Kwang-Jae; Jeon, Soon-Ik; Son, Seong-Ho
2017-10-01
This paper describes a fast forward electromagnetic solver (FFS) for the image reconstruction algorithm of our microwave tomography system. Our apparatus is a preclinical prototype of a biomedical imaging system, designed for the purpose of early breast cancer detection. It operates in the 3-6-GHz frequency band using a circular array of probe antennas immersed in a matching liquid; it produces image reconstructions of the permittivity and conductivity profiles of the breast under examination. Our reconstruction algorithm solves the electromagnetic (EM) inverse problem and takes into account the real EM properties of the probe antenna array as well as the influence of the patient's body and that of the upper metal screen sheet. This FFS algorithm is much faster than conventional EM simulation solvers: on the same PC, for the same EM model of a numerical breast phantom, the CST solver takes about 45 min while the FFS takes about 1 s of effective simulation time.
DOLPHIn—Dictionary Learning for Phase Retrieval
NASA Astrophysics Data System (ADS)
Tillmann, Andreas M.; Eldar, Yonina C.; Mairal, Julien
2016-12-01
We propose a new algorithm to learn a dictionary for reconstructing and sparsely encoding signals from measurements without phase. Specifically, we consider the task of estimating a two-dimensional image from squared-magnitude measurements of a complex-valued linear transformation of the original image. Several recent phase retrieval algorithms exploit underlying sparsity of the unknown signal in order to improve recovery performance. In this work, we consider such a sparse signal prior in the context of phase retrieval, when the sparsifying dictionary is not known in advance. Our algorithm jointly reconstructs the unknown signal - possibly corrupted by noise - and learns a dictionary such that each patch of the estimated image can be sparsely represented. Numerical experiments demonstrate that our approach can obtain significantly better reconstructions for phase retrieval problems with noise than methods that cannot exploit such "hidden" sparsity. Moreover, on the theoretical side, we provide a convergence result for our method.
Hydrodynamic Simulations of Protoplanetary Disks with GIZMO
NASA Astrophysics Data System (ADS)
Rice, Malena; Laughlin, Greg
2018-01-01
Over the past several decades, the field of computational fluid dynamics has rapidly advanced as the range of available numerical algorithms and computationally feasible physical problems has expanded. The development of modern numerical solvers has provided a compelling opportunity to reconsider previously obtained results in search of yet undiscovered effects that may be revealed through longer integration times and more precise numerical approaches. In this study, we compare the results of past hydrodynamic disk simulations with those obtained from modern analytical resources. We focus our study on the GIZMO code (Hopkins 2015), which uses meshless methods to solve the homogeneous Euler equations of hydrodynamics while eliminating problems arising as a result of advection between grid cells. By comparing modern simulations with prior results, we hope to provide an improved understanding of the impact of fluid mechanics upon the evolution of protoplanetary disks.
A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1976-01-01
The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, the accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
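The factorization at the core of the U-D filter writes a covariance as P = U D U^T with U unit upper triangular and D diagonal, so the filter can propagate U and D instead of the full P. A minimal sketch of that factorization (not the full filter mechanization) follows; it assumes P is symmetric positive definite.

```python
# U-D factorization P = U D U^T, the building block of the Bierman-Thornton
# filter.  Works backwards from the last column; assumes P is SPD.
import numpy as np

def ud_factor(P):
    n = P.shape[0]
    U, D = np.eye(n), np.zeros(n)
    P = P.copy()
    for j in range(n - 1, -1, -1):
        D[j] = P[j, j]
        U[:j, j] = P[:j, j] / D[j]
        # Downdate the leading submatrix before moving to the next column.
        P[:j, :j] -= D[j] * np.outer(U[:j, j], U[:j, j])
    return U, D

P = np.array([[4.0, 2.0], [2.0, 3.0]])
U, D = ud_factor(P)
assert np.allclose(U @ np.diag(D) @ U.T, P)
```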
On the numeric integration of dynamic attitude equations
NASA Technical Reports Server (NTRS)
Crouch, P. E.; Yan, Y.; Grossman, Robert
1992-01-01
We describe new types of numerical integration algorithms developed by the authors. The main aim of the algorithms is to numerically integrate differential equations which evolve on geometric objects, such as the rotation group. The algorithms provide iterates which lie on the prescribed geometric object, either exactly, or to some prescribed accuracy, independent of the order of the algorithm. This paper describes applications of these algorithms to the evolution of the attitude of a rigid body.
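As a concrete miniature of integrating on the rotation group, the sketch below advances an attitude matrix by the closed-form exponential of the skew body-rate matrix (Rodrigues' formula), so every iterate lies exactly on SO(3). The constant body rate is an illustrative assumption, not the authors' algorithm.

```python
# Attitude update that stays on SO(3): advance R by the matrix exponential of
# the skew body-rate matrix, evaluated in closed form via Rodrigues' formula.
import numpy as np

def hat(w):
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def expm_so3(w):
    """Closed-form exponential of the skew matrix hat(w)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

R, w, dt = np.eye(3), np.array([0.1, 0.2, -0.05]), 0.01
for _ in range(10_000):
    R = R @ expm_so3(w * dt)      # each iterate is exactly orthogonal
print(np.linalg.norm(R.T @ R - np.eye(3)))   # ~ machine precision
```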
Dwell time algorithm based on the optimization theory for magnetorheological finishing
NASA Astrophysics Data System (ADS)
Zhang, Yunfei; Wang, Yang; Wang, Yajun; He, Jianguo; Ji, Fang; Huang, Wen
2010-10-01
Magnetorheological finishing (MRF) is an advanced polishing technique capable of rapidly converging to the required surface figure. This process can deterministically control the amount of material removed by varying the time to dwell at each particular position on the workpiece surface. The dwell time algorithm is one of the key techniques of MRF. A dwell time algorithm based on a matrix equation and optimization theory is presented in this paper. The conventional mathematical model of the dwell time is transformed into a matrix equation containing the initial surface error, the removal function and the dwell time function; the dwell time to be calculated is simply the solution to this large, sparse matrix equation. A new mathematical model of the dwell time based on optimization theory is established, which aims to minimize the 2-norm or ∞-norm of the residual surface error. The solution meets almost all the requirements of precise computer numerical control (CNC) without any need for extra data processing, because the optimization model takes some polishing conditions as constraints. Practical approaches to finding a minimal least-squares solution and a minimal maximum solution are also discussed in this paper. Simulations have shown that the proposed algorithm is numerically robust and reliable. With this algorithm an experiment has been performed on the MRF machine developed by ourselves. After 4.7 minutes of polishing, the figure error of a flat workpiece with a 50 mm diameter is improved from 0.191λ to 0.087λ PV (λ = 632.8 nm) and from 0.041λ to 0.010λ RMS. This algorithm can be applied to workpieces of all shapes, including flats, spheres, aspheres, and prisms, and it is capable of improving the polishing figures dramatically.
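In matrix form the dwell-time problem is a constrained least-squares solve: minimize ||A t - e||_2 subject to t >= 0, where A holds shifted copies of the removal function and e is the initial surface error. A toy one-dimensional version using SciPy's non-negative least squares follows; the Gaussian removal function and cosine error profile are assumptions standing in for measured data.

```python
# Dwell time as non-negative least squares: min ||A t - e||_2 with t >= 0.
# One-dimensional toy version of the matrix model described in the paper.
import numpy as np
from scipy.optimize import nnls

m = 100
x = np.linspace(-1.0, 1.0, m)
A = np.exp(-((x[:, None] - x[None, :]) / 0.1) ** 2)  # influence (removal) matrix
e = 1.0 + 0.5 * np.cos(3 * np.pi * x)                # initial surface error

dwell, residual = nnls(A, e)      # nonnegative dwell time at each position
```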
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Delaney, Robert A.; Bettner, James L.
1991-01-01
The primary objective was the development of a time dependent 3-D Euler/Navier-Stokes aerodynamic analysis to predict unsteady compressible transonic flows about ducted and unducted propfan propulsion systems at angle of attack. The resulting computer codes are referred to as Advanced Ducted Propfan Analysis Codes (ADPAC). A computer program user's manual is presented for the ADPAC. Aerodynamic calculations were based on a four stage Runge-Kutta time marching finite volume solution technique with added numerical dissipation. A time accurate implicit residual smoothing operator was used for unsteady flow predictions. For unducted propfans, a single H-type grid was used to discretize each blade passage of the complete propeller. For ducted propfans, a coupled system of five grid blocks utilizing an embedded C grid about the cowl leading edge was used to discretize each blade passage. Grid systems were generated by a combined algebraic/elliptic algorithm developed specifically for ducted propfans. Numerical calculations were compared with experimental data for both ducted and unducted flows.
Information filtering via biased random walk on coupled social network.
Nie, Da-Cheng; Zhang, Zi-Ke; Dong, Qiang; Sun, Chongjing; Fu, Yan
2014-01-01
Recommender systems have advanced a great deal in the past two decades. However, most researchers focus their attention on mining the similarities among users or objects in recommender systems and overlook the social influence which plays an important role in users' purchase process. In this paper, we design a biased random walk algorithm on coupled social networks which gives recommendation results based on both social interests and users' preference. Numerical analyses on two real data sets, Epinions and Friendfeed, demonstrate the improvement of recommendation performance by taking social interests into account, and experimental results show that our algorithm can alleviate the user cold-start problem more effectively compared with the mass diffusion and user-based collaborative filtering methods.
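The mass-diffusion baseline the paper compares against is a two-step resource-spreading walk on the user-object bipartite network. The sketch below shows that baseline in isolation (the tiny adjacency matrix is an assumption); the paper's contribution is to bias such a walk with social-network information.

```python
# Two-step mass-diffusion scoring on a user-object bipartite network, the
# baseline that the paper's biased random walk builds upon (sketch only).
import numpy as np

A = np.array([[1, 1, 0, 0],       # rows: users, cols: objects
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
k_obj = A.sum(axis=0)             # object degrees
k_usr = A.sum(axis=1)             # user degrees

target = 0                        # recommend for user 0
f0 = A[target].copy()             # unit resource on collected objects
user_res = A @ (f0 / k_obj)       # step 1: objects spread to users
scores = A.T @ (user_res / k_usr) # step 2: users spread back to objects

scores[A[target] > 0] = -np.inf   # mask already-collected objects
ranking = np.argsort(scores)[::-1]  # recommendation list for the target user
```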
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, P. T.
1993-09-01
As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible, CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H^1 Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.
NASA Technical Reports Server (NTRS)
Hall, E. J.; Topp, D. A.; Delaney, R. A.
1996-01-01
The overall objective of this study was to develop a 3-D numerical analysis for compressor casing treatment flowfields. The current version of the computer code resulting from this study is referred to as ADPAC (Advanced Ducted Propfan Analysis Codes, Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code developed under Tasks 6 and 7 of the NASA contract. The ADPAC program is based on a flexible multiple-block grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. An iterative implicit algorithm is available for rapid time-dependent flow calculations, and an advanced two-equation turbulence model is incorporated to predict complex turbulent flows. The consolidated code generated during this study is capable of executing in either a serial or parallel computing mode from a single source code. Numerous examples are given in the form of test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations.
Probabilistic numerics and uncertainty in computations
Hennig, Philipp; Osborne, Michael A.; Girolami, Mark
2015-01-01
We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321
NASA Technical Reports Server (NTRS)
Wigton, Larry
1996-01-01
Improvements to the numerical linear algebra routines for use in new Navier-Stokes codes, specifically Tim Barth's unstructured grid code, with spin-offs to TRANAIR, are reported. A fast distance calculation routine for Navier-Stokes codes using the new one-equation turbulence models was written. The primary focus of this work was devoted to improving matrix-iterative methods. New algorithms have been developed which activate the full potential of classical Cray-class computers as well as distributed-memory parallel computers.
A spectral, quasi-cylindrical and dispersion-free Particle-In-Cell algorithm
Lehe, Remi; Kirchen, Manuel; Andriyash, Igor A.; ...
2016-02-17
We propose a spectral Particle-In-Cell (PIC) algorithm that is based on the combination of a Hankel transform and a Fourier transform. For physical problems that have close-to-cylindrical symmetry, this algorithm can be much faster than full 3D PIC algorithms. In addition, unlike standard finite-difference PIC codes, the proposed algorithm is free of spurious numerical dispersion in vacuum. This algorithm is benchmarked in several situations that are of interest for laser-plasma interactions. These benchmarks show that it avoids a number of numerical artifacts that would otherwise affect the physics in a standard PIC algorithm - including the zero-order numerical Cherenkov effect.
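The dispersion-free property comes from advancing each Fourier mode by its exact phase. The 1-D vacuum sketch below shows only this Fourier/axial ingredient (the quasi-cylindrical algorithm pairs it with a Hankel transform in the radial direction); the names and test pulse are illustrative, not the paper's code.

    import numpy as np

    def spectral_vacuum_step(Ey, cBz, dx, c_dt):
        """Advance 1-D vacuum Maxwell fields exactly in Fourier space.

        Each mode is rotated by its exact phase c*k*dt, so the update has
        no numerical dispersion for any time step.
        """
        k = 2.0 * np.pi * np.fft.fftfreq(Ey.size, d=dx)
        Ek, Bk = np.fft.fft(Ey), np.fft.fft(cBz)
        th = k * c_dt
        Ek2 = np.cos(th) * Ek - 1j * np.sin(th) * Bk
        Bk2 = -1j * np.sin(th) * Ek + np.cos(th) * Bk
        return np.fft.ifft(Ek2).real, np.fft.ifft(Bk2).real

    # A right-going pulse (Ey = c*Bz) keeps its shape for any time step.
    x = np.linspace(0.0, 1.0, 256, endpoint=False)
    Ey = np.exp(-300.0 * (x - 0.3) ** 2)
    cBz = Ey.copy()
    for _ in range(10):
        Ey, cBz = spectral_vacuum_step(Ey, cBz, dx=x[1] - x[0], c_dt=0.02)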
SEOM's Sentinel-3/OLCI project CAWA: advanced GRASP aerosol retrieval
NASA Astrophysics Data System (ADS)
Dubovik, Oleg; Litvinov, Pavel; Huang, Xin; Aspetsberger, Michael; Fuertes, David; Brockmann, Carsten; Fischer, Jürgen; Bojkov, Bojan
2016-04-01
The CAWA "Advanced Clouds, Aerosols and WAter vapour products for Sentinel-3/OLCI" ESA-SEOM project aims on the development of advanced atmospheric retrieval algorithms for the Sentinel-3/OLCI mission, and is prepared using Envisat/MERIS and Aqua/MODIS datasets. This presentation discusses mainly CAWA aerosol product developments and results. CAWA aerosol retrieval uses recently developed GRASP algorithm (Generalized Retrieval of Aerosol and Surface Properties) algorithm described by Dubovik et al. (2014). GRASP derives extended set of atmospheric parameters using multi-pixel concept - a simultaneous fitting of a large group of pixels under additional a priori constraints limiting the time variability of surface properties and spatial variability of aerosol properties. Over land GRASP simultaneously retrieves properties of both aerosol and underlying surface even over bright surfaces. GRAPS doesn't use traditional look-up-tables and performs retrieval as search in continuous space of solution. All radiative transfer calculations are performed as part of the retrieval. The results of comprehensive sensitivity tests, as well as results obtained from real Envisat/MERIS data will be presented. The tests analyze various aspects of aerosol and surface reflectance retrieval accuracy. In addition, the possibilities of retrieval improvement by means of implementing synergetic inversion of a combination of OLCI data with observations by SLSTR are explored. Both the results of numerical tests, as well as the results of processing several years of Envisat/MERIS data illustrate demonstrate reliable retrieval of AOD (Aerosol Optical Depth) and surface BRDF. Observed retrieval issues and advancements will be discussed. For example, for some situations we illustrate possibilities of retrieving aerosol absorption - property that hardly accessible from satellite observations with no multi-angular and polarimetric capabilities.
Broecker, Peter; Trebst, Simon
2016-12-01
In the absence of a fermion sign problem, auxiliary-field (or determinantal) quantum Monte Carlo (DQMC) approaches have long been the numerical method of choice for unbiased, large-scale simulations of interacting many-fermion systems. More recently, the conceptual scope of this approach has been expanded by introducing ingenious schemes to compute entanglement entropies within its framework. On a practical level, these approaches, however, suffer from a variety of numerical instabilities that have largely impeded their applicability. Here we report on a number of algorithmic advances to overcome many of these numerical instabilities and significantly improve the calculation of entanglement measures in the zero-temperature projective DQMC approach, ultimately allowing us to reach similar system sizes as for the computation of conventional observables. We demonstrate the applicability of this improved DQMC approach by providing an entanglement perspective on the quantum phase transition from a magnetically ordered Mott insulator to a band insulator in the bilayer square lattice Hubbard model at half filling.
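The numerical instabilities mentioned here stem largely from multiplying long chains of propagator matrices whose singular values are exponentially separated. A standard cure, sketched below, is to re-factorize the running product so the disparate scales stay isolated in a diagonal factor; this is a generic illustration of the stabilization idea, not the paper's specific scheme.

    import numpy as np

    def stabilized_product(b_list):
        """Form B_L ... B_1 as Q * diag(d) * T via repeated QR factorization.

        Re-factorizing after each multiplication keeps exponentially
        separated scales isolated in d, the usual remedy for DQMC-type
        instabilities in long propagator products.
        """
        n = b_list[0].shape[0]
        Q, d, T = np.eye(n), np.ones(n), np.eye(n)
        for B in b_list:
            M = (B @ Q) * d              # multiply in the current factorization
            Q, R = np.linalg.qr(M)
            d = np.abs(np.diag(R))       # scales, kept separate
            T = (R / d[:, None]) @ T     # well-conditioned triangular part
        return Q, d, T

    rng = np.random.default_rng(0)
    Bs = [rng.normal(size=(4, 4)) for _ in range(20)]
    Q, d, T = stabilized_product(Bs)
    # Q @ np.diag(d) @ T reproduces the product with scales kept explicit.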
Numerical methods for engine-airframe integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murthy, S.N.B.; Paynter, G.C.
1986-01-01
Various papers on numerical methods for engine-airframe integration are presented. The individual topics considered include: scientific computing environment for the 1980s, overview of prediction of complex turbulent flows, numerical solutions of the compressible Navier-Stokes equations, elements of computational engine/airframe integrations, computational requirements for efficient engine installation, application of CAE and CFD techniques to complete tactical missile design, CFD applications to engine/airframe integration, and application of second-generation low-order panel methods to powerplant installation studies. Also addressed are: three-dimensional flow analysis of turboprop inlet and nacelle configurations, application of computational methods to the design of large turbofan engine nacelles, comparison of full potential and Euler solution algorithms for aeropropulsive flow field computations, subsonic/transonic, supersonic nozzle flows and nozzle integration, subsonic/transonic prediction capabilities for nozzle/afterbody configurations, three-dimensional viscous design methodology of supersonic inlet systems for advanced technology aircraft, and a user's technology assessment.
NASA Astrophysics Data System (ADS)
Scovazzi, Guglielmo; Wheeler, Mary F.; Mikelić, Andro; Lee, Sanghyun
2017-04-01
The miscible displacement of one fluid by another in a porous medium has received considerable attention in subsurface, environmental and petroleum engineering applications. When a fluid of higher mobility displaces another of lower mobility, unstable patterns - referred to as viscous fingering - may arise. Their physical and mathematical study has been the object of numerous investigations over the past century. The objective of this paper is to present a review of these contributions with particular emphasis on variational methods. These algorithms are tailored to real field applications thanks to their advanced features: handling of general complex geometries, robustness in the presence of rough tensor coefficients, low sensitivity to mesh orientation in advection dominated scenarios, and provable convergence with fully unstructured grids. This paper is dedicated to the memory of Dr. Jim Douglas Jr., for his seminal contributions to miscible displacement and variational numerical methods.
Anatomisation with slicing: a new privacy preservation approach for multiple sensitive attributes.
Susan, V Shyamala; Christopher, T
2016-01-01
An enormous quantity of personal health information has become available in recent decades, and tampering with any part of this information poses a great risk to the health care field. Existing anonymization methods, notably generalization and bucketization, are only suited to preserving privacy for low-dimensional data with a single sensitive attribute. In this paper, an anonymization technique is proposed that combines the benefits of anatomization and an enhanced slicing approach, adhering to the principles of k-anonymity and l-diversity, for the purpose of dealing with high-dimensional data along with multiple sensitive attributes. The anatomization approach dissociates the correlation observed between the quasi-identifier attributes and the sensitive attributes (SA) and yields two separate tables with non-overlapping attributes. In the enhanced slicing algorithm, vertical partitioning groups the correlated SA in the sensitive table together and thereby minimizes the dimensionality by employing the advanced clustering algorithm. In order to obtain the optimal size of buckets, tuple partitioning is conducted by MFA. The experimental outcomes indicate that the proposed method can preserve the privacy of data with numerous SA. The anatomization approach minimizes the loss of information, and the slicing algorithm helps preserve correlation and utility, which in turn reduces data dimensionality and information loss. The advanced clustering algorithms prove their efficiency by minimizing time and complexity. Furthermore, this work adheres to the principles of k-anonymity and l-diversity and thus avoids privacy threats like membership, identity and attribute disclosure.
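Below is a minimal sketch of the anatomization step alone: records are grouped into buckets and published as two tables that share only a bucket id, so quasi-identifiers can no longer be joined directly to a sensitive value. The paper's full method additionally enforces l-diversity, clusters correlated sensitive attributes, and optimizes bucket sizes (the MFA step), none of which are shown; all names here are illustrative.

    def anatomize(records, qid_keys, sa_key, bucket_size=2):
        """Split records into a quasi-identifier table and a sensitive table.

        The two output tables are linked only through the bucket id 'gid',
        which is the core of the anatomization idea.
        """
        qit, st = [], []
        for gid in range(0, len(records), bucket_size):
            for rec in records[gid:gid + bucket_size]:
                qit.append({**{k: rec[k] for k in qid_keys}, "gid": gid})
                st.append({"gid": gid, sa_key: rec[sa_key]})
        return qit, st

    records = [
        {"age": 34, "zip": "47906", "disease": "flu"},
        {"age": 41, "zip": "47907", "disease": "cancer"},
        {"age": 29, "zip": "46805", "disease": "asthma"},
        {"age": 52, "zip": "46808", "disease": "flu"},
    ]
    qit, st = anatomize(records, ["age", "zip"], "disease")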
Snowfall Rate Retrieval using NPP ATMS Passive Microwave Measurements
NASA Technical Reports Server (NTRS)
Meng, Huan; Ferraro, Ralph; Kongoli, Cezar; Wang, Nai-Yu; Dong, Jun; Zavodsky, Bradley; Yan, Banghua; Zhao, Limin
2014-01-01
Passive microwave measurements at certain high frequencies are sensitive to the scattering effect of snow particles and can be utilized to retrieve snowfall properties. Some of the microwave sensors with snowfall-sensitive channels are the Advanced Microwave Sounding Unit (AMSU), the Microwave Humidity Sounder (MHS) and the Advanced Technology Microwave Sounder (ATMS). ATMS is the follow-on sensor to AMSU and MHS. Currently, an AMSU- and MHS-based land snowfall rate (SFR) product runs operationally at NOAA/NESDIS. Based on the AMSU/MHS SFR, an ATMS SFR algorithm has been developed recently. The algorithm performs retrieval in three steps: snowfall detection, retrieval of cloud properties, and estimation of snow particle terminal velocity and snowfall rate. The snowfall detection component utilizes principal component analysis and a logistic regression model. The model employs a combination of temperature and water vapor sounding channels to detect the scattering signal from falling snow and derive the probability of snowfall (Kongoli et al., 2014). In addition, a set of NWP model based filters is also employed to improve the accuracy of snowfall detection. Cloud properties are retrieved using an inversion method with an iteration algorithm and a two-stream radiative transfer model (Yan et al., 2008). A method developed by Heymsfield and Westbrook (2010) is adopted to calculate snow particle terminal velocity. Finally, snowfall rate is computed by numerically solving a complex integral. The ATMS SFR product is validated against radar and gauge snowfall data; the results show that the ATMS algorithm outperforms the AMSU/MHS SFR.
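The final step, a numerically evaluated integral over the particle size distribution, can be sketched as below. The exponential size distribution and the mass and fall-speed power laws, along with every parameter value, are placeholders for illustration, not the operational algorithm's values.

    import numpy as np
    from scipy.integrate import quad

    RHO_WATER = 1000.0  # kg m^-3

    def snowfall_rate(n0, lam, a=0.02, b=2.0, c=1.0, d=0.2):
        """Liquid-equivalent snowfall rate from an assumed size distribution.

        SFR = (1/rho_w) * int N(D) m(D) v(D) dD, with an exponential PSD
        N(D) = n0*exp(-lam*D), mass m(D) = a*D**b and fall speed v(D) = c*D**d.
        """
        integrand = lambda D: n0 * np.exp(-lam * D) * a * D**b * c * D**d
        flux, _ = quad(integrand, 0.0, np.inf)   # kg m^-2 s^-1
        return flux / RHO_WATER * 3.6e6          # convert m/s to mm per hour

    print(snowfall_rate(n0=1.0e6, lam=2.0e3))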
Knowledge-Based Object Detection in Laser Scanning Point Clouds
NASA Astrophysics Data System (ADS)
Boochs, F.; Karmacharya, A.; Marbs, A.
2012-07-01
Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects that are accessible to numerical models. We present an approach that brings the human expert knowledge about the scene, the objects inside, their representation by the data, and the behavior of algorithms to the machine. This "understanding" enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand the possibilities and limitations of algorithms and to take this into account within the processing chain. This not only assists the researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement in knowledge technologies within the Semantic Web framework. This advancement has provided a strong base for applications based on knowledge management. In the article we present and describe the knowledge technologies used for our approach, such as the Web Ontology Language (OWL), used for formulating the knowledge base, and the Semantic Web Rule Language (SWRL) with 3D processing and topologic built-ins, aiming to combine geometrical analysis of 3D point clouds with specialists' knowledge of the scene and algorithmic processing.
Trees, bialgebras and intrinsic numerical algorithms
NASA Technical Reports Server (NTRS)
Crouch, Peter; Grossman, Robert; Larson, Richard
1990-01-01
Preliminary work about intrinsic numerical integrators evolving on groups is described. Fix a finite dimensional Lie group G; let g denote its Lie algebra, and let Y(sub 1),...,Y(sub N) denote a basis of g. A class of numerical algorithms is presented that approximate solutions to differential equations evolving on G of the form: dot-x(t) = F(x(t)), x(0) = p is an element of G. The algorithms depend upon constants c(sub i) and c(sub ij), for i = 1,...,k and j is less than i. The algorithms have the property that if the algorithm starts on the group, then it remains on the group. In addition, they also have the property that if G is the abelian group R(N), then the algorithm becomes the classical Runge-Kutta algorithm. The Cayley algebra generated by labeled, ordered trees is used to generate the equations that the coefficients c(sub i) and c(sub ij) must satisfy in order for the algorithm to yield an rth order numerical integrator and to analyze the resulting algorithms.
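The group-preservation property described here is easiest to see in the lowest-order member of this family. The sketch below is a generic Lie-Euler step with a rigid-rotation example on SO(3), an illustration of the principle rather than the paper's construction; the tree/bialgebra calculus is what systematizes the higher-order coefficients c(sub i) and c(sub ij).

    import numpy as np
    from scipy.linalg import expm

    def hat(w):
        """Map R^3 to the Lie algebra so(3) (skew-symmetric matrices)."""
        return np.array([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])

    def lie_euler_step(X, F, h):
        """One intrinsic step for dX/dt = X * F(X) on a matrix Lie group.

        The update multiplies X by the group element expm(h*F(X)), so a
        trajectory that starts on the group stays on the group exactly.
        """
        return X @ expm(h * F(X))

    # Rigid rotation: F returns a fixed element of so(3).
    F = lambda X: hat(np.array([0.0, 0.0, 1.0]))
    X = np.eye(3)
    for _ in range(100):
        X = lie_euler_step(X, F, h=0.01)
    print(np.linalg.norm(X.T @ X - np.eye(3)))   # orthogonality preserved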
Development of numerical techniques for simulation of magnetogasdynamics and hypersonic chemistry
NASA Astrophysics Data System (ADS)
Damevin, Henri-Marie
Magnetogasdynamics, the science concerned with the mutual interaction between electromagnetic field and flow of electrically conducting gas, offers promising advances in flow control and propulsion of future hypersonic vehicles. Numerical simulations are essential for understanding phenomena, and for research and development. The current dissertation is devoted to the development and validation of numerical algorithms for the solution of multidimensional magnetogasdynamic equations and the simulation of hypersonic high-temperature effects. Governing equations are derived, based on classical magnetogasdynamic assumptions. Two sets of equations are considered, namely the full equations and equations in the low magnetic Reynolds number approximation. Equations are expressed in a suitable formulation for discretization by finite differences in a computational space. For the full equations, Gauss's law for magnetism is enforced using Powell's methodology. The time integration method is a four-stage modified Runge-Kutta scheme, amended with a Total Variation Diminishing model in a postprocessing stage. The eigensystem, required for the Total Variation Diminishing scheme, is derived in a generalized three-dimensional coordinate system. For the simulation of hypersonic high-temperature effects, two chemical models are utilized, namely a nonequilibrium model and an equilibrium model. A loosely coupled approach is implemented to communicate between the magnetogasdynamic equations and the chemical models. The nonequilibrium model is a one-temperature, five-species, seventeen-reaction model solved by an implicit flux-vector splitting scheme. The chemical equilibrium model computes thermodynamic properties using curve fit procedures. Selected results are provided, which explore the different features of the numerical algorithms. The shock-capturing properties are validated for shock-tube simulations using numerical solutions reported in the literature. The computations of superfast flows over corners and in convergent channels demonstrate the performance of the algorithm in multiple dimensions. The implementation of diffusion terms is validated by solving the magnetic Rayleigh problem and the Hartmann problem, for which analytical solutions are available. Predictions of blunt-body type flows are investigated and compared with numerical solutions reported in the literature. The effectiveness of the chemical models for hypersonic flow over blunt bodies is examined in various flow conditions. It is shown that the proposed schemes perform well in a variety of test cases, though some limitations have been identified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dahlin, J.-E.; Scheffel, J.
2005-06-15
In the advanced reversed-field pinch (RFP), the current density profile is externally controlled to diminish tearing instabilities. Thus the scaling of energy confinement time with plasma current and density is improved substantially as compared to the conventional RFP. This may be numerically simulated by introducing an ad hoc electric field, adjusted to generate a tearing mode stable parallel current density profile. In the present work a current profile control algorithm, based on feedback of the fluctuating electric field in Ohm's law, is introduced into the resistive magnetohydrodynamic code DEBSP [D. D. Schnack and D. C. Baxter, J. Comput. Phys. 55, 485 (1984); D. D. Schnack, D. C. Barnes, Z. Mikic, D. S. Marneal, E. J. Caramana, and R. A. Nebel, Comput. Phys. Commun. 43, 17 (1986)]. The resulting radial magnetic field is decreased considerably, causing an increase in energy confinement time and poloidal β. It is found that the parallel current density profile spontaneously becomes hollow, and that a formation related to persisting resistive g modes appears close to the reversal surface.
NASA Astrophysics Data System (ADS)
Schulthess, Thomas C.
2013-03-01
The continued thousand-fold improvement in sustained application performance per decade on modern supercomputers keeps opening new opportunities for scientific simulations. But supercomputers have become very complex machines, built with thousands or tens of thousands of complex nodes consisting of multiple CPU cores or, most recently, a combination of CPU and GPU processors. Efficient simulations on such high-end computing systems require tailored algorithms that optimally map numerical methods to particular architectures. These intricacies will be illustrated with simulations of strongly correlated electron systems, where the development of quantum cluster methods, Monte Carlo techniques, as well as their optimal implementation by means of algorithms with improved data locality and high arithmetic density have gone hand in hand with evolving computer architectures. The present work would not have been possible without continued access to computing resources at the National Center for Computational Science of Oak Ridge National Laboratory, which is funded by the Facilities Division of the Office of Advanced Scientific Computing Research, and the Swiss National Supercomputing Center (CSCS) that is funded by ETH Zurich.
Process simulation for advanced composites production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allendorf, M.D.; Ferko, S.M.; Griffiths, S.
1997-04-01
The objective of this project is to improve the efficiency and lower the cost of chemical vapor deposition (CVD) processes used to manufacture advanced ceramics by providing the physical and chemical understanding necessary to optimize and control these processes. Project deliverables include: numerical process models; databases of thermodynamic and kinetic information related to the deposition process; and process sensors and software algorithms that can be used for process control. Target manufacturing techniques include CVD fiber coating technologies (used to deposit interfacial coatings on continuous fiber ceramic preforms), chemical vapor infiltration, thin-film deposition processes used in the glass industry, and coating techniques used to deposit wear-, abrasion-, and corrosion-resistant coatings for use in the pulp and paper, metals processing, and aluminum industries.
Reliable numerical computation in an optimal output-feedback design
NASA Technical Reports Server (NTRS)
Vansteenwyk, Brett; Ly, Uy-Loi
1991-01-01
A reliable algorithm is presented for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters. The algorithm is a part of a design algorithm for optimal linear dynamic output-feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when it is applied to the control-law synthesis for systems with densely packed modes and where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. This approach through the use of an accurate Pade series approximation does not require the closed-loop system matrix to be diagonalizable. The algorithm was included in a control design package for optimal robust low-order controllers. Usefulness of the proposed numerical algorithm was demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.
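Since the key ingredient here is an accurate Padé evaluation of matrix exponentials that avoids diagonalizing the closed-loop matrix, the sketch below shows one standard way a finite-time quadratic index can be computed through a single Padé-based matrix exponential (Van Loan's block construction). It illustrates the idea under illustrative matrices; it is not the paper's design algorithm.

    import numpy as np
    from scipy.linalg import expm   # SciPy's expm uses a Pade approximation

    def finite_time_quadratic_cost(A, Q, x0, T):
        """J = int_0^T x'Qx dt for dx/dt = A x, via Van Loan's block exponential.

        W = int_0^T expm(A'*t) @ Q @ expm(A*t) dt is read off from one matrix
        exponential, so A need not be diagonalizable.
        """
        n = A.shape[0]
        M = np.block([[-A.T, Q], [np.zeros((n, n)), A]])
        E = expm(M * T)
        W = E[n:, n:].T @ E[:n, n:]
        return x0 @ W @ x0

    A = np.array([[0.0, 1.0], [-2.0, -0.1]])   # lightly damped oscillator
    Q = np.eye(2)
    print(finite_time_quadratic_cost(A, Q, np.array([1.0, 0.0]), T=10.0))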
Evaluation of on-line pulse control for vibration suppression in flexible spacecraft
NASA Technical Reports Server (NTRS)
Masri, Sami F.
1987-01-01
A numerical simulation was performed, by means of a large-scale finite element code capable of handling large deformations and/or nonlinear behavior, to investigate the suitability of the nonlinear pulse-control algorithm to suppress the vibrations induced in the Spacecraft Control Laboratory Experiment (SCOLE) components under realistic maneuvers. Among the topics investigated were the effects of various control parameters on the efficiency and robustness of the vibration control algorithm. Advanced nonlinear control techniques were applied to an idealized model of some of the SCOLE components to develop an efficient algorithm to determine the optimal locations of point actuators, considering the hardware on the SCOLE project as distributed in nature. The control was obtained from a quadratic optimization criterion, given in terms of the state variables of the distributed system. An experimental investigation was performed on a model flexible structure resembling the essential features of the SCOLE components, and electrodynamic and electrohydraulic actuators were used to investigate the applicability of the control algorithm with such devices in addition to mass-ejection pulse generators using compressed air.
NASA Astrophysics Data System (ADS)
Tuan, Le Anh; Lee, Soon-Geul
2018-03-01
In this study, a new mathematical model of crawler cranes is developed for heavy working conditions, with payload-lifting and boom-hoisting motions simultaneously activated. The system model is built with full consideration of wind disturbances, geometrical nonlinearities, and cable elasticities of cargo lifting and boom luffing. On the basis of this dynamic model, three versions of sliding mode control are analyzed and designed to control five system outputs with only two inputs. When used in complicated operations, the effectiveness of the controllers is analyzed using analytical investigation and numerical simulation. Results indicate the effectiveness of the control algorithms and the proposed dynamic model. The control algorithms asymptotically stabilize the system with finite-time convergences, remaining robust amid disturbances and parametric uncertainties.
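For readers unfamiliar with sliding mode control, the toy below applies a first-order sliding mode law to a disturbed double integrator. It is a minimal analogue with assumed gains, not a reproduction of the paper's five-output, two-input crane controllers.

    import numpy as np

    def smc_double_integrator(x0, v0, lam=1.0, k=1.5, dt=1e-3, steps=8000):
        """First-order sliding mode control of xddot = u + d, with |d| <= 1.

        Sliding surface s = v + lam*x; the law u = -lam*v - k*sign(s) drives
        s to zero in finite time whenever k exceeds the disturbance bound,
        after which x decays along s = 0 despite the disturbance.
        """
        x, v, t = x0, v0, 0.0
        for _ in range(steps):
            d = np.sin(5.0 * t)                 # bounded disturbance
            s = v + lam * x
            u = -lam * v - k * np.sign(s)
            v += dt * (u + d)
            x += dt * v
            t += dt
        return x, v

    print(smc_double_integrator(1.0, 0.0))      # near (0, 0) despite d(t)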
Optimal design of dampers within seismic structures
NASA Astrophysics Data System (ADS)
Ren, Wenjie; Qian, Hui; Song, Wali; Wang, Liqiang
2009-07-01
An improved multi-objective genetic algorithm for structural passive control system optimization is proposed. Based on the two-branch tournament genetic algorithm, the selection operator is constructed by evaluating individuals according to their dominance in one run. For constrained problems, a dominance-based penalty function method is advanced, containing information on an individual's status (feasible or infeasible), position in the search space, and distance from the Pareto optimal set. The proposed approach is used for the optimal design of a six-storey building with shape memory alloy dampers subjected to earthquake excitation. The number and positions of dampers are chosen as the design variables. The number of dampers and the peak relative inter-storey drift are considered as the objective functions. Numerical results generate a set of non-dominated solutions.
Research on numerical algorithms for large space structures
NASA Technical Reports Server (NTRS)
Denman, E. D.
1982-01-01
Numerical algorithms for large space structures were investigated, with particular emphasis on decoupling methods for analysis and design. Numerous aspects of the analysis of large systems, ranging from the algebraic theory of lambda matrices to identification algorithms, were considered. A general treatment of the algebraic theory of lambda matrices is presented, and the theory is applied to second-order lambda matrices.
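A second-order lambda matrix is L(s) = s^2 M + s C + K. The usual computational entry point, sketched below with illustrative matrices, is to linearize the quadratic eigenproblem to a standard one whose eigenvalues are the roots of det L(s) = 0; this is a generic step, not the report's decoupling algorithms.

    import numpy as np

    def lambda_matrix_eigen(M, C, K):
        """Eigenvalues of the second-order lambda matrix L(s) = s^2 M + s C + K.

        The companion linearization doubles the dimension but reduces the
        quadratic problem to a standard eigenproblem, the first step before
        decoupling a structural model into modal coordinates.
        """
        n = M.shape[0]
        Minv = np.linalg.inv(M)
        A = np.block([[np.zeros((n, n)), np.eye(n)],
                      [-Minv @ K, -Minv @ C]])
        return np.linalg.eigvals(A)

    M = np.eye(2)
    K = np.array([[2.0, -1.0], [-1.0, 2.0]])
    C = 0.05 * K                                 # light proportional damping
    print(lambda_matrix_eigen(M, C, K))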
A numerical algorithm for the explicit calculation of SU(N) and SL(N,C) Clebsch-Gordan coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alex, Arne; Delft, Jan von; Kalus, Matthias
2011-02-15
We present an algorithm for the explicit numerical calculation of SU(N) and SL(N,C) Clebsch-Gordan coefficients, based on the Gelfand-Tsetlin pattern calculus. Our algorithm is well suited for numerical implementation; we include a computer code in an appendix. Our exposition presumes only familiarity with the representation theory of SU(2).
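For the SU(2) special case that the Gelfand-Tsetlin construction reduces to, individual coefficients can be cross-checked against an existing symbolic implementation. The snippet below uses SymPy, an independent tool, and is not the computer code included with the paper.

    from sympy import Rational, sqrt
    from sympy.physics.quantum.cg import CG

    # <j1 m1; j2 m2 | j3 m3> for two spin-1/2 particles coupling to spin 1:
    half = Rational(1, 2)
    coeff = CG(half, half, half, -half, 1, 0).doit()
    print(coeff)                 # sqrt(2)/2, i.e. 1/sqrt(2)
    assert coeff == sqrt(2) / 2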
A numerical algorithm for MHD of free surface flows at low magnetic Reynolds numbers
NASA Astrophysics Data System (ADS)
Samulyak, Roman; Du, Jian; Glimm, James; Xu, Zhiliang
2007-10-01
We have developed a numerical algorithm and computational software for the study of magnetohydrodynamics (MHD) of free surface flows at low magnetic Reynolds numbers. The governing system of equations is a coupled hyperbolic-elliptic system in moving and geometrically complex domains. The numerical algorithm employs the method of front tracking and the Riemann problem for material interfaces, second order Godunov-type hyperbolic solvers, and the embedded boundary method for the elliptic problem in complex domains. The numerical algorithm has been implemented as an MHD extension of FronTier, a hydrodynamic code with free interface support. The code is applicable for numerical simulations of free surface flows of conductive liquids or weakly ionized plasmas. The code has been validated through the comparison of numerical simulations of a liquid metal jet in a non-uniform magnetic field with experiments and theory. Simulations of the Muon Collider/Neutrino Factory target have also been discussed.
Adaptive grid embedding for the two-dimensional flux-split Euler equations. M.S. Thesis
NASA Technical Reports Server (NTRS)
Warren, Gary Patrick
1990-01-01
A numerical algorithm is presented for solving the 2-D flux-split Euler equations using a multigrid method with adaptive grid embedding. The method uses an unstructured data set along with a system of pointers for communication on the irregularly shaped grid topologies. An explicit two-stage time advancement scheme is implemented. A multigrid algorithm is used to provide grid level communication and to accelerate the convergence of the solution to steady state. Results are presented for a subcritical airfoil and a transonic airfoil with 3 levels of adaptation. Comparisons are made with a structured upwind Euler code which uses the same flux integration techniques of the present algorithm. Good agreement is obtained with converged surface pressure coefficients. The lift coefficients of the adaptive code are within 2 1/2 percent of the structured code for the sub-critical case and within 4 1/2 percent of the structured code for the transonic case using approximately one-third the number of grid points.
Recent advances in numerical PDEs
NASA Astrophysics Data System (ADS)
Zuev, Julia Michelle
In this thesis, we investigate four neighboring topics, all in the general area of numerical methods for solving Partial Differential Equations (PDEs). Topic 1. Radial Basis Functions (RBF) are widely used for multi-dimensional interpolation of scattered data. This methodology offers smooth and accurate interpolants, which can be further refined, if necessary, by clustering nodes in select areas. We show, however, that local refinements with RBF (in a constant shape parameter ε regime) may lead to the oscillatory errors associated with the Runge phenomenon (RP). RP is best known in the case of high-order polynomial interpolation, where its effects can be accurately predicted via the Lebesgue constant L (which is based solely on the node distribution). We study the RP and the applicability of the Lebesgue constant (as well as other error measures) in RBF interpolation. Mainly, we allow for a spatially variable shape parameter, and demonstrate how it can be used to suppress RP-like edge effects and to improve the overall stability and accuracy. Topic 2. Although not as versatile as RBFs, cubic splines are useful for interpolating grid-based data. In 2-D, we consider a patch representation via Hermite basis functions, s_{i,j}(u,v) = Σ_{m,n} h_{mn} H_m(u) H_n(v), as opposed to the standard bicubic representation. Stitching requirements for the rectangular non-equispaced grid yield a 2-D tridiagonal linear system AX = B, where X represents the unknown first derivatives. We discover that the standard methods for solving this N×M system do not take advantage of the spline-specific format of the matrix B. We develop an alternative approach using this specialization of the RHS, which allows us to pre-compute coefficients only once, instead of N times. A MATLAB implementation of our fast 2-D cubic spline algorithm is provided. We confirm analytically and numerically that for large N (N > 200), our method is at least 3 times faster than the standard algorithm and is just as accurate. Topic 3. The well-known ADI-FDTD method for solving Maxwell's curl equations is second-order accurate in space/time, unconditionally stable, and computationally efficient. We research Richardson-extrapolation-based techniques to improve time discretization accuracy for spatially oversampled ADI-FDTD. A careful analysis of temporal accuracy, computational efficiency, and the algorithm's overall stability is presented. Given the context of wave-type PDEs, we find that only a limited number of extrapolations to the ADI-FDTD method are beneficial, if its unconditional stability is to be preserved. We propose a practical approach for choosing the size of a time step that can be used to improve the efficiency of the ADI-FDTD algorithm, while maintaining its accuracy and stability. Topic 4. Shock waves and their energy dissipation properties are critical to understanding the dynamics controlling MHD turbulence. Numerical advection algorithms used in MHD solvers (e.g. the ZEUS package) introduce undesirable numerical viscosity. To counteract its effects and to resolve shocks numerically, Richtmyer and von Neumann's artificial viscosity is commonly added to the model. We study shock power by analyzing the influence of both artificial and numerical viscosity on energy decay rates. Also, we analytically characterize the numerical diffusivity of various advection algorithms by quantifying their diffusion coefficients.
Li, Bai; Lin, Mu; Liu, Qiao; Li, Ya; Zhou, Changjun
2015-10-01
Protein folding is a fundamental topic in molecular biology. Conventional experimental techniques for protein structure identification or protein folding recognition involve strict laboratory requirements and heavy operating burdens, which have largely limited their applications. Alternatively, computer-aided techniques have been developed to optimize protein structures or to predict the protein folding process. In this paper, we utilize a 3D off-lattice model to describe the original protein folding scheme as a simplified energy-optimal numerical problem, where all types of amino acid residues are binarized into hydrophobic and hydrophilic ones. We apply a balance-evolution artificial bee colony (BE-ABC) algorithm as the minimization solver, which is featured by the adaptive adjustment of search intensity to cater for the varying needs during the entire optimization process. In this work, we establish a benchmark case set with 13 real protein sequences from the Protein Data Bank database and evaluate the convergence performance of the BE-ABC algorithm through strict comparisons with several state-of-the-art ABC variants in short-term numerical experiments. Besides that, our obtained best-so-far protein structures are compared to the ones in comprehensive previous literature. This study also provides preliminary insights into how artificial intelligence techniques can be applied to reveal the dynamics of protein folding.
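To make the optimization target concrete, here is the energy of a common AB off-lattice variant: a bending term plus a Lennard-Jones-like term whose depth depends on whether a residue pair is hydrophobic (A) or hydrophilic (B). The exact coefficients differ across papers, so treat this as an assumed form, not the authors' exact model; any black-box minimizer (the paper uses BE-ABC) can then search over the coordinates.

    import numpy as np

    def ab_offlattice_energy(coords, seq):
        """Energy of a toy AB off-lattice protein model (a common variant).

        coords : (N, 3) residue positions (unit bond lengths assumed)
        seq    : string of 'A' (hydrophobic) and 'B' (hydrophilic)
        Pair coefficient: 1 for AA, 0.5 for BB, -0.5 for mixed pairs.
        """
        coords = np.asarray(coords, dtype=float)
        bonds = coords[1:] - coords[:-1]
        cos_th = np.sum(bonds[:-1] * bonds[1:], axis=1) / (
            np.linalg.norm(bonds[:-1], axis=1) * np.linalg.norm(bonds[1:], axis=1))
        e_bend = 0.25 * np.sum(1.0 - cos_th)
        e_lj = 0.0
        for i in range(len(seq) - 2):
            for j in range(i + 2, len(seq)):
                r = np.linalg.norm(coords[i] - coords[j])
                c = 1.0 if seq[i] == seq[j] == 'A' else (
                    0.5 if seq[i] == seq[j] else -0.5)
                e_lj += 4.0 * (r**-12 - c * r**-6)
        return e_bend + e_lj

    coords = np.array([[i, 0.0, 0.0] for i in range(6)], dtype=float)
    print(ab_offlattice_energy(coords, "ABAABA"))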
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2013-07-01
The Mathematics and Computation Division of the American Nuclear Society (ANS) and the Idaho Section of the ANS hosted the 2013 International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering (M and C 2013). These proceedings contain over 250 full papers, with topics ranging from reactor physics; radiation transport; materials science; nuclear fuels; core performance and optimization; reactor systems and safety; fluid dynamics; medical applications; analytical and numerical methods; algorithms for advanced architectures; and validation, verification, and uncertainty quantification.
Numerical modeling of spray combustion with an advanced VOF method
NASA Technical Reports Server (NTRS)
Chen, Yen-Sen; Shang, Huan-Min; Shih, Ming-Hsin; Liaw, Paul
1995-01-01
This paper summarizes the technical development and validation of a multiphase computational fluid dynamics (CFD) numerical method using the volume-of-fluid (VOF) model and a Lagrangian tracking model which can be employed to analyze general multiphase flow problems involving free surfaces. The gas-liquid interface mass, momentum and energy conservation relationships are modeled by continuum surface mechanisms. A new solution method is developed such that the present VOF model can be applied for all-speed flow regimes. The objectives of the present study are to develop and verify the fractional volume-of-fluid cell partitioning approach within a predictor-corrector algorithm and to demonstrate the effectiveness of the present approach by simulating benchmark problems including laminar impinging jets, shear coaxial jet atomization and shear coaxial spray combustion flows.
Development of Multistep and Degenerate Variational Integrators for Applications in Plasma Physics
NASA Astrophysics Data System (ADS)
Ellison, Charles Leland
Geometric integrators yield high-fidelity numerical results by retaining conservation laws in the time advance. A particularly powerful class of geometric integrators is symplectic integrators, which are widely used in orbital mechanics and accelerator physics. An important application presently lacking symplectic integrators is the guiding center motion of magnetized particles represented by non-canonical coordinates. Because guiding center trajectories are foundational to many simulations of magnetically confined plasmas, geometric guiding center algorithms have high potential for impact. The motivation is compounded by the need to simulate long-pulse fusion devices, including ITER, and opportunities in high performance computing, including the use of petascale resources and beyond. This dissertation uses a systematic procedure for constructing geometric integrators --- known as variational integration --- to deliver new algorithms for guiding center trajectories and other plasma-relevant dynamical systems. These variational integrators are non-trivial because the Lagrangians of interest are degenerate - the Euler-Lagrange equations are first-order differential equations and the Legendre transform is not invertible. The first contribution of this dissertation is that variational integrators for degenerate Lagrangian systems are typically multistep methods. Multistep methods admit parasitic mode instabilities that can ruin the numerical results. These instabilities motivate the second major contribution: degenerate variational integrators. By replicating the degeneracy of the continuous system, degenerate variational integrators avoid parasitic mode instabilities. The new methods are therefore robust geometric integrators for degenerate Lagrangian systems. These developments in variational integration theory culminate in one-step degenerate variational integrators for non-canonical magnetic field line flow and guiding center dynamics. The guiding center integrator assumes coordinates such that one component of the magnetic field is zero; it is shown how to construct such coordinates for nested magnetic surface configurations. Additionally, collisional drag effects are incorporated in the variational guiding center algorithm for the first time, allowing simulation of energetic particle thermalization. Advantages relative to existing canonical-symplectic and non-geometric algorithms are numerically demonstrated. All algorithms have been implemented as part of a modern, parallel, ODE-solving library, suitable for use in high-performance simulations.
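The variational-integrator construction named above can be illustrated on a non-degenerate toy problem: discretize the action with a midpoint discrete Lagrangian and solve the discrete Euler-Lagrange (DEL) equation for the next configuration. This minimal pendulum sketch shows the recipe only; the dissertation's degenerate, multistep subtleties are beyond it, and all names are illustrative.

    import numpy as np
    from scipy.optimize import brentq

    def vprime(q):                       # pendulum potential V(q) = -cos(q)
        return np.sin(q)

    def del_step(q_prev, q_curr, h):
        """Solve the discrete Euler-Lagrange equation for the next point.

        Uses the midpoint discrete Lagrangian
          L_d(q0, q1) = h*(0.5*((q1 - q0)/h)**2 - V((q0 + q1)/2)),
        whose DEL equation D2 L_d(q_prev, q_curr) + D1 L_d(q_curr, q_next) = 0
        is solved for q_next.
        """
        def residual(q_next):
            return ((q_curr - q_prev) / h - (q_next - q_curr) / h
                    - 0.5 * h * (vprime(0.5 * (q_prev + q_curr))
                                 + vprime(0.5 * (q_curr + q_next))))
        return brentq(residual, q_curr - 1.0, q_curr + 1.0)

    h, traj = 0.1, [0.5, 0.5]            # start at rest: q0 = q1
    for _ in range(200):
        traj.append(del_step(traj[-2], traj[-1], h))
    # Energy error stays bounded over long times, the hallmark of
    # symplectic/variational methods.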
Salari, Nader; Shohaimi, Shamarina; Najafi, Farid; Nallappan, Meenakshii; Karishnarajah, Isthrinayagy
2014-01-01
Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered the most common and effective methods in classification problems in numerous studies. In the present study, the results of the implementation of a novel hybrid feature selection-classification model using the above-mentioned methods are presented. The purpose is to benefit from the synergies obtained by combining these technologies for the development of classification models. Such a combination creates an opportunity to invest in the strengths of each algorithm, and is an approach to make up for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, first, feature ranking techniques such as the Fisher's discriminant ratio and class separability criteria were used to prioritize features. Second, the obtained results that included arrays of the top-ranked features were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved method of backpropagation neural networks, the classification process was advanced based on optimum arrays of the features selected by genetic algorithms. The performance of the proposed model was compared with thirteen well-known classification models based on seven datasets. Furthermore, the statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel proposed hybrid model resulted in significantly better classification performance compared with all 13 classification methods. Finally, the performance results of the proposed model were benchmarked against the best ones reported as the state-of-the-art classifiers in terms of classification accuracy for the same data sets. The substantial findings of the comprehensive comparative study revealed that the performance of the proposed model in terms of classification accuracy is desirable, promising, and competitive with the existing state-of-the-art classification models. PMID:25419659
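A compact sketch of the general GA-wrapped feature-selection idea follows, using a plain kNN cross-validation score as the fitness (scikit-learn). The paper's model additionally uses Fisher-ratio initialization, a modified kNN, and an improved backpropagation network, none of which are reproduced here; the dataset and all GA parameters are illustrative.

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    X, y = load_breast_cancer(return_X_y=True)
    n_feat = X.shape[1]

    def fitness(mask):
        """Cross-validated kNN accuracy on the selected feature subset."""
        if not mask.any():
            return 0.0
        knn = KNeighborsClassifier(n_neighbors=5)
        return cross_val_score(knn, X[:, mask], y, cv=3).mean()

    # Tiny GA over boolean feature masks: tournament selection,
    # uniform crossover, bit-flip mutation.
    pop = rng.random((20, n_feat)) < 0.5
    for gen in range(15):
        scores = np.array([fitness(m) for m in pop])
        new_pop = []
        for _ in range(len(pop)):
            i, j = rng.integers(len(pop), size=2)
            p1 = pop[i] if scores[i] >= scores[j] else pop[j]
            i, j = rng.integers(len(pop), size=2)
            p2 = pop[i] if scores[i] >= scores[j] else pop[j]
            child = np.where(rng.random(n_feat) < 0.5, p1, p2)
            child = child ^ (rng.random(n_feat) < 0.02)   # mutation
            new_pop.append(child)
        pop = np.array(new_pop)

    best = max(pop, key=fitness)
    print(best.sum(), fitness(best))   # number of features kept, accuracy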
Verification of Numerical Programs: From Real Numbers to Floating Point Numbers
NASA Technical Reports Server (NTRS)
Goodloe, Alwyn E.; Munoz, Cesar; Kirchner, Florent; Correnson, Loiec
2013-01-01
Numerical algorithms lie at the heart of many safety-critical aerospace systems. The complexity and hybrid nature of these systems often requires the use of interactive theorem provers to verify that these algorithms are logically correct. Usually, proofs involving numerical computations are conducted in the infinitely precise realm of the field of real numbers. However, numerical computations in these algorithms are often implemented using floating point numbers. The use of a finite representation of real numbers introduces uncertainties as to whether the properties verified in the theoretical setting hold in practice. This short paper describes work in progress aimed at addressing these concerns. Given a formally proven algorithm, written in the Program Verification System (PVS), the Frama-C suite of tools is used to identify sufficient conditions and verify that under such conditions the rounding errors arising in a C implementation of the algorithm do not affect its correctness. The technique is illustrated using an algorithm for detecting loss of separation among aircraft.
NASA Astrophysics Data System (ADS)
Strippoli, L. S.; Gonzalez-Arjona, D. G.
2018-04-01
Under different ESA contracts, GMV has worked extensively on developing, validating, and verifying up to TRL-6 advanced GNC and IP algorithms for Mars Sample Return rendezvous, including the development of advanced algorithms for the VBN sensor.
Flow Navigation by Smart Microswimmers via Reinforcement Learning
NASA Astrophysics Data System (ADS)
Colabrese, Simona; Biferale, Luca; Celani, Antonio; Gustavsson, Kristian
2017-11-01
We have numerically modeled active particles which are able to acquire some limited knowledge of the fluid environment from simple mechanical cues and exert a control on their preferred steering direction. We show that those swimmers can learn effective strategies just by experience, using a reinforcement learning algorithm. As an example, we focus on smart gravitactic swimmers. These are active particles whose task is to reach the highest altitude within some time horizon, exploiting the underlying flow whenever possible. The reinforcement learning algorithm allows particles to learn effective strategies even in difficult situations when, in the absence of control, they would end up being trapped by flow structures. These strategies are highly nontrivial and cannot be easily guessed in advance. This work paves the way towards the engineering of smart microswimmers that solve difficult navigation problems. ERC AdG NewTURB 339032.
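The reinforcement learning machinery used here can be summarized by the one-step tabular Q-learning update, Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a)). The loop below is a generic sketch with a stubbed environment; the states, actions, rewards, and all parameters are placeholders, not the paper's setup (where states encode local flow cues and the reward is altitude gained).

    import numpy as np

    rng = np.random.default_rng(1)
    n_states, n_actions = 50, 3         # e.g. discretized flow cues / steering choices
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.1, 0.99, 0.1

    def step(s, a):
        """Placeholder environment returning (next_state, reward).

        A real implementation would integrate the swimmer in the flow and
        reward altitude gain; here both are stubbed with random values.
        """
        return rng.integers(n_states), rng.normal()

    s = rng.integers(n_states)
    for _ in range(10_000):
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # One-step Q-learning update toward the bootstrapped target.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2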
QKD-Based Secured Burst Integrity Design for Optical Burst Switched Networks
NASA Astrophysics Data System (ADS)
Balamurugan, A. M.; Sivasubramanian, A.; Parvathavarthini, B.
2016-03-01
The field of optical transmission has undergone numerous advancements and is still being researched, mainly because optical data transmission can be done at enormous speeds. It is quite evident that people prefer optical communication when it comes to transmitting large amounts of data. The concept of switching in networks has matured enormously through extensive research, with architectures and methods ranging from optical circuit switching to optical burst switching. Optical burst switching is regarded as a viable solution for switching bursts over networks, but it has several security vulnerabilities. This work addresses the security issues associated with optical burst switching with respect to the integrity of bursts. The proposed Quantum Key based Secure Hash Algorithm (QKBSHA-512), with an enhanced compression function design, provides a better avalanche effect than the conventional integrity algorithms.
Nonlinear dynamics of laser systems with elements of a chaos: Advanced computational code
NASA Astrophysics Data System (ADS)
Buyadzhi, V. V.; Glushkov, A. V.; Khetselius, O. Yu; Kuznetsova, A. A.; Buyadzhi, A. A.; Prepelitsa, G. P.; Ternovsky, V. B.
2017-10-01
A general, uniform chaos-geometric computational approach to the analysis, modelling and prediction of the non-linear dynamics of quantum and laser systems (laser and quantum generator systems, etc.) with elements of deterministic chaos is briefly presented. The approach is based on advanced generalized techniques such as wavelet analysis, multi-fractal formalism, the mutual information approach, correlation integral analysis, the false nearest neighbour algorithm, Lyapunov exponent analysis, the surrogate data method, and prediction models. We first present numerical data on the topological and dynamical invariants (in particular, the correlation, embedding and Kaplan-Yorke dimensions, the Lyapunov exponents, the Kolmogorov entropy and other parameters) for the dynamics of a laser system (a semiconductor GaAs/GaAlAs laser with retarded feedback) in chaotic and hyperchaotic regimes.
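Of the invariants listed, the correlation dimension is among the simplest to compute. Below is a minimal Grassberger-Procaccia sketch on a delay-embedded time series; the embedding parameters, test signal, and epsilon range are illustrative assumptions, not the authors' code.

    import numpy as np

    def correlation_dimension(x, eps_values):
        """Correlation dimension via the Grassberger-Procaccia integral.

        C(eps) is the fraction of point pairs within eps; the slope of
        log C against log eps estimates the correlation dimension.
        """
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        pairs = d[np.triu_indices(len(x), k=1)]
        C = np.array([np.mean(pairs < e) for e in eps_values])
        keep = C > 0
        return np.polyfit(np.log(eps_values[keep]), np.log(C[keep]), 1)[0]

    def delay_embed(s, m=3, tau=5):
        """Takens delay embedding of a scalar time series."""
        n = len(s) - (m - 1) * tau
        return np.column_stack([s[i * tau:i * tau + n] for i in range(m)])

    rng = np.random.default_rng(2)
    s = np.sin(np.linspace(0.0, 60.0, 1000)) + 0.01 * rng.normal(size=1000)
    print(correlation_dimension(delay_embed(s), np.logspace(-1.5, -0.5, 8)))
    # A noisy limit cycle gives a dimension close to 1.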
NASA Astrophysics Data System (ADS)
Nalli, N. R.; Gambacorta, A.; Tan, C.; Iturbide, F.; Barnet, C. D.; Reale, A.; Sun, B.; Liu, Q.
2017-12-01
This presentation overviews the performance of the operational SNPP NOAA Unique Combined Atmospheric Processing System (NUCAPS) environmental data record (EDR) products. The SNPP Cross-track Infrared Sounder and Advanced Technology Microwave Sounder (CrIS/ATMS) suite, the first of the Joint Polar Satellite System (JPSS) Program, is one of NOAA's major investments in our nation's future operational environmental observation capability. The NUCAPS algorithm is a world-class NOAA-operational IR/MW retrieval algorithm based upon the well-established AIRS science team algorithm for deriving temperature, moisture, ozone and carbon trace gases, providing users with state-of-the-art EDR products. Operational use of the products includes the NOAA National Weather Service (NWS) Advanced Weather Interactive Processing System (AWIPS), along with numerous science-user applications. NUCAPS EDR product assessments are made with reference to JPSS Level 1 global requirements, which provide the definitive metrics for assessing whether the products have minimally met predefined global performance specifications. The NESDIS/STAR NUCAPS development and validation team recently delivered the Phase 4 algorithm, which incorporated critical updates necessary for compatibility with full spectral-resolution (FSR) CrIS sensor data records (SDRs). Based on comprehensive analyses, the NUCAPS Phase 4 CrIS-FSR temperature, moisture and ozone profile EDRs, as well as the carbon trace gas EDRs (CO, CH4 and CO2), are shown to be meeting or close to meeting the JPSS program global requirements. Regional and temporal assessments of interest to EDR users (e.g., AWIPS) will also be presented.
Splitting algorithm for numerical simulation of Li-ion battery electrochemical processes
NASA Astrophysics Data System (ADS)
Iliev, Oleg; Nikiforova, Marina A.; Semenov, Yuri V.; Zakharov, Petr E.
2017-11-01
In this paper we present a splitting algorithm for the numerical simulation of Li-ion battery electrochemical processes. A Li-ion battery consists of three domains: anode, cathode and electrolyte. The mathematical model of the electrochemical processes is described on a microscopic scale and contains nonlinear equations for the concentration and potential in each domain. On the interface between the electrodes and the electrolyte, lithium ions undergo intercalation and deintercalation processes, which are described by the nonlinear Butler-Volmer equation. To approximate in the spatial coordinates we use finite element methods with discontinuous Galerkin elements. To simplify the numerical simulations we develop a splitting algorithm, which splits the original problem into three independent subproblems. We investigate the numerical convergence of the algorithm on a 2D model problem.
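Here is a loose sketch, under stated placeholder values, of the two ingredients named above: the Butler-Volmer interface relation and an operator-splitting step that advances the three domains independently with the interface fluxes frozen at the old time level. The per-domain solver is a stub standing in for the paper's discontinuous Galerkin discretization, and all overpotentials and parameters are assumptions.

    import numpy as np

    F, R, T = 96485.0, 8.314, 298.15   # Faraday and gas constants, temperature

    def butler_volmer(eta, j0=1.0, alpha_a=0.5, alpha_c=0.5):
        """Interfacial current density from the Butler-Volmer relation."""
        return j0 * (np.exp(alpha_a * F * eta / (R * T))
                     - np.exp(-alpha_c * F * eta / (R * T)))

    def split_step(c_anode, c_elyte, c_cathode, dt, advance):
        """One operator-splitting step over the three battery domains.

        Each subdomain is advanced independently for dt, with interface
        fluxes evaluated once from Butler-Volmer; 'advance' stands in for
        a per-domain PDE solver.
        """
        j_a = butler_volmer(0.05)    # placeholder anode overpotential
        j_c = butler_volmer(-0.05)   # placeholder cathode overpotential
        c_anode = advance(c_anode, dt, flux=-j_a / F)
        c_elyte = advance(c_elyte, dt, flux=(j_a - j_c) / F)
        c_cathode = advance(c_cathode, dt, flux=j_c / F)
        return c_anode, c_elyte, c_cathode

    # Trivial stub: a well-mixed box per domain.
    advance = lambda c, dt, flux: c + dt * flux
    print(split_step(1.0, 1.0, 1.0, 1e-3, advance))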
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Kok Foong; Patterson, Robert I.A.; Wagner, Wolfgang
2015-12-15
Highlights: problems concerning multi-compartment population balance equations are studied; a class of fragmentation weight transfer functions is presented; three stochastic weighted algorithms are compared against the direct simulation algorithm; the numerical errors of the stochastic solutions are assessed as a function of fragmentation rate; the algorithms are applied to a multi-dimensional granulation model. Abstract: This paper introduces stochastic weighted particle algorithms for the solution of multi-compartment population balance equations. In particular, it presents a class of fragmentation weight transfer functions which are constructed such that the number of computational particles stays constant during fragmentation events. The weight transfer functions are constructed based on systems of weighted computational particles, and each of them leads to a stochastic particle algorithm for the numerical treatment of population balance equations. Besides fragmentation, the algorithms also consider physical processes such as coagulation and the exchange of mass with the surroundings. The numerical properties of the algorithms are compared to the direct simulation algorithm and an existing method for the fragmentation of weighted particles. It is found that the new algorithms show better numerical performance than the two existing methods, especially for systems with a significant amount of large particles and high fragmentation rates.
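To convey the constant-particle-count idea in the simplest possible terms: instead of inserting new computational particles at each fragmentation event (as direct simulation would), a weighted scheme can keep one computational particle per event and transfer statistical weight so the expected number of physical fragments is preserved. The sketch below shows one such uniform-breakage example; the paper's weight transfer functions are more general, and everything here is illustrative.

    import numpy as np

    rng = np.random.default_rng(3)

    def fragment_weighted(x, w, n_frag=2):
        """Fragment a weighted computational particle without changing count.

        The particle (size x, weight w) breaks into n_frag random
        mass-conserving pieces; one piece is kept and the weight is scaled
        by n_frag, so fragment numbers are correct in expectation while the
        number of computational particles stays constant.
        """
        sizes = x * rng.dirichlet(np.ones(n_frag))   # random mass-conserving split
        keep = rng.integers(n_frag)
        return sizes[keep], w * n_frag

    particles = [(1.0, 1.0)] * 100
    particles = [fragment_weighted(x, w) for x, w in particles]
    # len(particles) is still 100; total statistical weight has doubled,
    # consistent with binary fragmentation.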
NASA Astrophysics Data System (ADS)
Aasi, J.; Abbott, B. P.; Abbott, R.; Abbott, T.; Abernathy, M. R.; Accadia, T.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Affeldt, C.; Agathos, M.; Aggarwal, N.; Aguiar, O. D.; Ain, A.; Ajith, P.; Alemic, A.; Allen, B.; Allocca, A.; Amariutei, D.; Andersen, M.; Anderson, R.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C.; Areeda, J.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Austin, L.; Aylott, B. E.; Babak, S.; Baker, P. T.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barbet, M.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Bauchrowitz, J.; Bauer, Th S.; Behnke, B.; Bejger, M.; Beker, M. G.; Belczynski, C.; Bell, A. S.; Bell, C.; Bergmann, G.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Beyersdorf, P. T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Biscans, S.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bloemen, S.; Blom, M.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bogan, C.; Bond, C.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, Sukanta; Bosi, L.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brückner, F.; Buchman, S.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burman, R.; Buskulic, D.; Buy, C.; Cadonati, L.; Cagnoli, G.; Calderón Bustillo, J.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannon, K. C.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Castiglia, A.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Celerier, C.; Cella, G.; Cepeda, C.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Chow, J.; Christensen, N.; Chu, Q.; Chua, S. S. Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C.; Colombini, M.; Cominsky, L.; Constancio, M., Jr.; Conte, A.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corpuz, A.; Corsi, A.; Costa, C. A.; Coughlin, M. W.; Coughlin, S.; Coulon, J.-P.; Countryman, S.; Couvares, P.; Coward, D. M.; Cowart, M.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dahl, K.; Dal Canton, T.; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daveloza, H.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Dayanga, T.; Debreczeni, G.; Degallaix, J.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Dhurandhar, S.; Díaz, M.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Virgilio, A.; Donath, A.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dossa, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dwyer, S.; Eberle, T.; Edo, T.; Edwards, M.; Effler, A.; Eggenstein, H.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Endrőczi, G.; Essick, R.; Etzel, T.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fehrmann, H.; Fejer, M. M.; Feldbaum, D.; Feroz, F.; Ferrante, I.; Ferrini, F.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R. 
P.; Flaminio, R.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gair, J.; Gammaitoni, L.; Gaonkar, S.; Garufi, F.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, C.; Gleason, J.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gordon, N.; Gorodetsky, M. L.; Gossan, S.; Goßler, S.; Gouaty, R.; Gräf, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Groot, P.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Hammer, D.; Hammond, G.; Hanke, M.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hart, M.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Heptonstall, A. W.; Heurs, M.; Hewitson, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Hooper, S.; Hopkins, P.; Hosken, D. J.; Hough, J.; Howell, E. J.; Hu, Y.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh, M.; Huynh-Dinh, T.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Iyer, B. R.; Izumi, K.; Jacobson, M.; James, E.; Jang, H.; Jaranowski, P.; Ji, Y.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; K, Haris; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karlen, J.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kawabe, K.; Kawazoe, F.; Kéfélian, F.; Keiser, G. M.; Keitel, D.; Kelley, D. B.; Kells, W.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, C.; Kim, K.; Kim, N.; Kim, N. G.; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kline, J.; Koehlenbeck, S.; Kokeyama, K.; Kondrashov, V.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kremin, A.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, A.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Kwee, P.; Landry, M.; Lantz, B.; Larson, S.; Lasky, P. D.; Lawrie, C.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C.-H.; Lee, H. K.; Lee, H. M.; Lee, J.; Leonardi, M.; Leong, J. R.; Le Roux, A.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B.; Lewis, J.; Li, T. G. F.; Libbrecht, K.; Libson, A.; Lin, A. C.; Littenberg, T. B.; Litvine, V.; Lockerbie, N. A.; Lockett, V.; Lodhia, D.; Loew, K.; Logue, J.; Lombardi, A. L.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J.; Lubinski, M. J.; Lück, H.; Luijten, E.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macarthur, J.; Macdonald, E. P.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magana-Sandoval, F.; Mageswaran, M.; Maglione, C.; Mailand, K.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Manca, G. M.; Mandel, I.; Mandic, V.; Mangano, V.; Mangini, N.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Martinelli, L.; Martynov, D.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIver, J.; McLin, K.; Meacher, D.; Meadors, G. D.; Mehmet, M.; Meidam, J.; Meinders, M.; Melatos, A.; Mendell, G.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyers, P.; Miao, H.; Michel, C.; Mikhailov, E. 
E.; Milano, L.; Milde, S.; Miller, J.; Minenkov, Y.; Mingarelli, C. M. F.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Moesta, P.; Mohan, M.; Mohapatra, S. R. P.; Moraru, D.; Moreno, G.; Morgado, N.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Munch, J.; Murphy, D.; Murray, P. G.; Mytidis, A.; Nagy, M. F.; Nanda Kumar, D.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nelemans, G.; Neri, I.; Neri, M.; Newton, G.; Nguyen, T.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Ochsner, E.; O'Dell, J.; Oelker, E.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oppermann, P.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Padilla, C.; Pai, A.; Palashov, O.; Palomba, C.; Pan, H.; Pan, Y.; Pankow, C.; Paoletti, F.; Paoletti, R.; Papa, M. A.; Paris, H.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Pedraza, M.; Penn, S.; Perreca, A.; Phelps, M.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pierro, V.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poeld, J.; Poggiani, R.; Poteomkin, A.; Powell, J.; Prasad, J.; Premachandra, S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Qin, J.; Quetschke, V.; Quintero, E.; Quiroga, G.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Raja, S.; Rajalakshmi, G.; Rakhmanov, M.; Ramet, C.; Ramirez, K.; Rapagnani, P.; Raymond, V.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Reid, S.; Reitze, D. H.; Rhoades, E.; Ricci, F.; Riles, K.; Robertson, N. A.; Robinet, F.; Rocchi, A.; Rodruck, M.; Rolland, L.; Rollins, J. G.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Salemi, F.; Sammut, L.; Sandberg, V.; Sanders, J. R.; Sannibale, V.; Santiago-Prieto, I.; Saracco, E.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Savage, R.; Scheuer, J.; Schilling, R.; Schnabel, R.; Schofield, R. M. S.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Shaddock, D.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sidery, T. L.; Siellez, K.; Siemens, X.; Sigg, D.; Simakov, D.; Singer, A.; Singer, L.; Singh, R.; Sintes, A. M.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M.; Smith, R. J. E.; Smith-Lefebvre, N. D.; Son, E. J.; Sorazu, B.; Souradeep, T.; Sperandio, L.; Staley, A.; Stebbins, J.; Steinlechner, J.; Steinlechner, S.; Stephens, B. C.; Steplewski, S.; Stevenson, S.; Stone, R.; Stops, D.; Strain, K. A.; Straniero, N.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, R.; ter Braack, A. P. M.; Thirugnanasambandam, M. P.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C. V.; Torrie, C. I.; Travasso, F.; Traylor, G.; Tse, M.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Urbanek, K.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Vallisneri, M.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van der Sluys, M. V.; van Heijningen, J.; van Veggel, A. A.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. 
J.; Venkateswara, K.; Verkindt, D.; Verma, S. S.; Vetrano, F.; Viceré, A.; Vincent-Finley, R.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Vousden, W. D.; Vyachanin, S. P.; Wade, A.; Wade, L.; Wade, M.; Walker, M.; Wallace, L.; Wang, M.; Wang, X.; Ward, R. L.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wiesner, K.; Wilkinson, C.; Williams, K.; Williams, L.; Williams, R.; Williams, T.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Wittel, H.; Woan, G.; Worden, J.; Yablon, J.; Yakushin, I.; Yamamoto, H.; Yancey, C. C.; Yang, H.; Yang, Z.; Yoshida, S.; Yvert, M.; Zadrożny, A.; Zanolin, M.; Zendri, J.-P.; Zhang, Fan; Zhang, L.; Zhao, C.; Zhu, X. J.; Zucker, M. E.; Zuraw, S.; Zweizig, J.; Boyle, M.; Brügmann, B.; Buchman, L. T.; Campanelli, M.; Chu, T.; Etienne, Z. B.; Hannam, M.; Healy, J.; Hinder, I.; Kidder, L. E.; Laguna, P.; Liu, Y. T.; London, L.; Lousto, C. O.; Lovelace, G.; MacDonald, I.; Marronetti, P.; Mösta, P.; Müller, D.; Mundim, B. C.; Nakano, H.; Paschalidis, V.; Pekowsky, L.; Pollney, D.; Pfeiffer, H. P.; Ponce, M.; Pürrer, M.; Reifenberger, G.; Reisswig, C.; Santamaría, L.; Scheel, M. A.; Shapiro, S. L.; Shoemaker, D.; Sopuerta, C. F.; Sperhake, U.; Szilágyi, B.; Taylor, N. W.; Tichy, W.; Tsatsin, P.; Zlochower, Y.
2014-06-01
The Numerical INJection Analysis (NINJA) project is a collaborative effort between members of the numerical relativity and gravitational-wave (GW) astrophysics communities. The purpose of NINJA is to study the ability to detect GWs emitted from merging binary black holes (BBH) and recover their parameters with next-generation GW observatories. We report here on the results of the second NINJA project, NINJA-2, which employs 60 complete BBH hybrid waveforms consisting of a numerical portion modelling the late inspiral, merger, and ringdown stitched to a post-Newtonian portion modelling the early inspiral. In a ‘blind injection challenge’ similar to that conducted in recent Laser Interferometer Gravitational Wave Observatory (LIGO) and Virgo science runs, we added seven hybrid waveforms to two months of data recoloured to predictions of Advanced LIGO (aLIGO) and Advanced Virgo (AdV) sensitivity curves during their first observing runs. The resulting data was analysed by GW detection algorithms and 6 of the waveforms were recovered with false alarm rates smaller than 1 in a thousand years. Parameter-estimation algorithms were run on each of these waveforms to explore the ability to constrain the masses, component angular momenta and sky position of these waveforms. We find that the strong degeneracy between the mass ratio and the BHs’ angular momenta will make it difficult to precisely estimate these parameters with aLIGO and AdV. We also perform a large-scale Monte Carlo study to assess the ability to recover each of the 60 hybrid waveforms with early aLIGO and AdV sensitivity curves. Our results predict that early aLIGO and AdV will have a volume-weighted average sensitive distance of 300 Mpc (1 Gpc) for 10M⊙ + 10M⊙ (50M⊙ + 50M⊙) BBH coalescences. We demonstrate that neglecting the component angular momenta in the waveform models used in matched-filtering will result in a reduction in sensitivity for systems with large component angular momenta. This reduction is estimated to be up to ~15% for 50M⊙ + 50M⊙ BBH coalescences with almost maximal angular momenta aligned with the orbit when using early aLIGO and AdV sensitivity curves.
Development of computational methods for heavy lift launch vehicles
NASA Technical Reports Server (NTRS)
Yoon, Seokkwan; Ryan, James S.
1993-01-01
The research effort has been focused on the development of an advanced flow solver for complex viscous turbulent flows with shock waves. The three-dimensional Euler and full/thin-layer Reynolds-averaged Navier-Stokes equations for compressible flows are solved on structured hexahedral grids. The Baldwin-Lomax algebraic turbulence model is used for closure. The space discretization is based on a cell-centered finite-volume method augmented by a variety of numerical dissipation models with optional total variation diminishing limiters. The governing equations are integrated in time by an implicit method based on lower-upper factorization and symmetric Gauss-Seidel relaxation. The algorithm is vectorized on diagonal planes of sweep using two-dimensional indices in three dimensions. A new computer program named CENS3D has been developed for viscous turbulent flows with discontinuities. Details of the code are described in Appendix A and Appendix B. With the development of the numerical algorithm and dissipation model, the simulation of three-dimensional viscous compressible flows has become more efficient and accurate. The results of the research are expected to have a direct impact on the design process of future liquid-fueled launch systems.
Numerical heating in Particle-In-Cell simulations with Monte Carlo binary collisions
NASA Astrophysics Data System (ADS)
Alves, E. Paulo; Mori, Warren; Fiuza, Frederico
2017-10-01
The binary Monte Carlo collision (BMCC) algorithm is a robust and popular method to include Coulomb collision effects in Particle-in-Cell (PIC) simulations of plasmas. While a number of works have focused on extending the validity of the model to different physical regimes of temperature and density, little attention has been given to the fundamental coupling between PIC and BMCC algorithms. Here, we show that the coupling between PIC and BMCC algorithms can give rise to (nonphysical) numerical heating of the system, which can be far greater than that observed when these algorithms operate independently. This deleterious numerical heating effect can significantly impact the evolution of the simulated system, particularly for long simulation times. In this work, we describe the source of this numerical heating and derive scaling laws for the numerical heating rates based on the numerical parameters of PIC-BMCC simulations. We compare our theoretical scalings with PIC-BMCC numerical experiments, and discuss strategies to minimize this parasitic effect. This work is supported by DOE FES under FWP 100237 and 100182.
Parallel Algorithm Solves Coupled Differential Equations
NASA Technical Reports Server (NTRS)
Hayashi, A.
1987-01-01
Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.
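The brief gives no code, but the decomposition idea can be illustrated with a minimal sketch, assuming a 1D diffusion problem split into subdomains that exchange ghost cells each step. The updates run serially in a loop here, but each subdomain update is independent and could be dispatched to its own processor, as on the hypercube; all names and parameters are illustrative, not taken from the original program.

import numpy as np

def step_subdomain(u_ext, dt, dx):
    # Explicit Euler update of one subdomain, given ghost cells at both ends
    return u_ext[1:-1] + dt / dx**2 * (u_ext[2:] - 2.0 * u_ext[1:-1] + u_ext[:-2])

def integrate(u, n_sub, dt, dx, n_steps):
    blocks = np.array_split(u, n_sub)
    for _ in range(n_steps):
        padded = []
        for i, b in enumerate(blocks):
            # Ghost cells come from neighbouring subdomains (messages on a hypercube)
            left = blocks[i - 1][-1] if i > 0 else b[0]
            right = blocks[i + 1][0] if i < n_sub - 1 else b[-1]
            padded.append(np.concatenate(([left], b, [right])))
        # Each update below is independent and could run on a separate processor
        blocks = [step_subdomain(p, dt, dx) for p in padded]
    return np.concatenate(blocks)

u0 = np.sin(np.linspace(0.0, np.pi, 64))
print(integrate(u0, n_sub=4, dt=1e-4, dx=1.0 / 63, n_steps=100).max())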
NASA Astrophysics Data System (ADS)
Mahlmann, J. F.; Cerdá-Durán, P.; Aloy, M. A.
2018-07-01
The study of the electrodynamics of static, axisymmetric, and force-free Kerr magnetospheres relies vastly on solutions of the so-called relativistic Grad-Shafranov equation (GSE). Different numerical approaches to the solution of the GSE have been introduced in the literature, but none of them has been fully assessed from the numerical point of view in terms of efficiency and quality of the solutions found. We present a generalization of these algorithms and give a detailed background on the algorithmic implementation. We assess the numerical stability of the implemented algorithms and quantify the convergence of the presented methodology for the most established set-ups (split-monopole, paraboloidal, BH disc, uniform).
Atmospheric Ascent Guidance for Rocket-Powered Launch Vehicles
NASA Technical Reports Server (NTRS)
Dukeman, Greg A.
2002-01-01
An advanced ascent guidance algorithm for rocket-powered launch vehicles is developed. This algorithm cyclically solves the calculus-of-variations two-point boundary-value problem starting at vertical rise completion through main engine cutoff. This is different from traditional ascent guidance algorithms, which operate in a simple open-loop mode until the high dynamic pressure (including the critical max-Q) portion of the trajectory is over, at which time guidance operates under the assumption of negligible aerodynamic acceleration (i.e., vacuum dynamics). The initial costate guess is corrected based on errors in the terminal state constraints and the transversality conditions. Judicious approximations are made to reduce the order and complexity of the state/costate system. Results comparing guided launch vehicle trajectories with POST open-loop trajectories are given, verifying the basic formulation of the algorithm. Multiple shooting is shown to be a very effective numerical technique for this application. In particular, just one intermediate shooting point, in addition to the initial shooting point, is sufficient to significantly reduce sensitivity to the guessed initial costates. Simulation results from a high-fidelity trajectory simulation are given for the case of launch to sub-orbital cutoff conditions as well as launch to orbit conditions. An abort to downrange landing site formulation of the algorithm is presented.
Event-driven Monte Carlo: Exact dynamics at all time scales for discrete-variable models
NASA Astrophysics Data System (ADS)
Mendoza-Coto, Alejandro; Díaz-Méndez, Rogelio; Pupillo, Guido
2016-06-01
We present an algorithm for the simulation of the exact real-time dynamics of classical many-body systems with discrete energy levels. In the same spirit as kinetic Monte Carlo methods, a stochastic solution of the master equation is found, with no need to define any other phase-space construction. However, unlike existing methods, the present algorithm does not assume any particular statistical distribution to perform moves or to advance the time, and thus is a unique tool for the numerical exploration of fast and ultra-fast dynamical regimes. By decomposing the problem into a set of two-level subsystems, we find a natural variable step size that is well defined by the normalization condition of the transition probabilities between the levels. We successfully test the algorithm with known exact solutions for non-equilibrium dynamics and equilibrium thermodynamical properties of Ising-spin models in one and two dimensions, and compare to standard implementations of kinetic Monte Carlo methods. The present algorithm is directly applicable to the study of the real-time dynamics of a large class of classical Markov chains, and particularly to short-time situations where the exact evolution is relevant.
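For contrast with the event-driven scheme summarized above, the standard rejection-free kinetic Monte Carlo step it is compared against can be sketched as follows for a 1D Ising chain. Note that this baseline assumes exponentially distributed waiting times, which is exactly the kind of distributional assumption the new algorithm avoids; the Metropolis rates and all parameters are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def kmc_step(spins, beta, J=1.0):
    # Energy change for flipping each spin of a periodic 1D Ising chain
    dE = 2.0 * J * spins * (np.roll(spins, 1) + np.roll(spins, -1))
    rates = np.minimum(1.0, np.exp(-beta * dE))   # Metropolis flip rates
    total = rates.sum()
    i = rng.choice(spins.size, p=rates / total)   # pick a flip proportional to its rate
    spins[i] = -spins[i]
    return rng.exponential(1.0 / total)           # exponential waiting time (the assumption)

spins = rng.choice(np.array([-1, 1]), size=100)
t = sum(kmc_step(spins, beta=0.5) for _ in range(1000))
print("simulated time:", t)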
A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.
Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun
2015-08-31
Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and in real environments.
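A minimal sketch of the refinement step described above, assuming a translation-only pose with known (identity) rotation and illustrative intrinsics and covariances: each 3D-to-2D reprojection residual is whitened by the Cholesky factor of its feature covariance, so ordinary least squares on the whitened residuals minimizes the summed Mahalanobis distances.

import numpy as np
from scipy.optimize import least_squares

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])

def project(t, X):
    # Pinhole projection with identity rotation and translation t (assumed known rotation)
    Xc = X + t
    uv = (K @ Xc.T).T
    return uv[:, :2] / uv[:, 2:3]

def whitened_residuals(t, X, obs, S):
    r = project(t, X) - obs
    # Whitening by the inverse Cholesky factor turns plain least squares into
    # minimization of the summed Mahalanobis distances
    Linv = [np.linalg.inv(np.linalg.cholesky(Si)) for Si in S]
    return np.concatenate([Li @ ri for Li, ri in zip(Linv, r)])

rng = np.random.default_rng(2)
X = rng.random((20, 3)) + np.array([0.0, 0.0, 5.0])   # map points in front of the camera
t_true = np.array([0.2, -0.1, 0.3])
obs = project(t_true, X)
S = np.stack([2.0 * np.eye(2)] * 20)                  # per-feature 2x2 covariances
fit = least_squares(whitened_residuals, np.zeros(3), args=(X, obs, S))
print(fit.x)   # recovers t_true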
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramis, Rafael, E-mail: rafael.ramis@upm.es
A new one-dimensional hydrodynamic algorithm, specifically developed for Inertial Confinement Fusion (ICF) applications, is presented. The scheme uses a fully conservative Lagrangian formulation in planar, cylindrical, and spherically symmetric geometries, and supports arbitrary equations of state with separate ion and electron components. Fluid equations are discretized on a staggered grid and stabilized by means of an artificial viscosity formulation. The space discretized equations are advanced in time using an implicit algorithm. The method includes several numerical parameters that can be adjusted locally. In regions with low Courant–Friedrichs–Lewy (CFL) number, where stability is not an issue, they can be adjusted to optimize the accuracy. In typical problems, the truncation error can be reduced by a factor between 2 and 10 in comparison with conventional explicit algorithms. On the other hand, in regions with high CFL numbers, the parameters can be set to guarantee unconditional stability. The method can be integrated into complex ICF codes. This is demonstrated through several examples covering a wide range of situations: from thermonuclear ignition physics, where alpha particles are managed as an additional species, to low intensity laser–matter interaction, where liquid–vapor phase transitions occur.
Overview of fast algorithm in 3D dynamic holographic display
NASA Astrophysics Data System (ADS)
Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian
2013-08-01
3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, a huge amount of 3D information must be processed and computed in real time to generate the hologram in 3D dynamic holographic display, which is a challenge even for the most advanced computer. Many fast algorithms have been proposed for speeding up the calculation and reducing the memory usage, such as the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT), and novel look-up table (N-LUT) methods based on the point-based method, as well as the full analytical and one-step methods based on the polygon-based method. In this presentation, we overview various fast algorithms based on the point-based method and the polygon-based method, and focus on two fast algorithms with low memory usage: the C-LUT, and the one-step polygon-based method based on 2D Fourier analysis of the 3D affine transformation. Numerical simulations and optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient methods for saving calculation time. It is believed that these methods could be used in real-time 3D holographic display in the future.
Resolution of the 1D regularized Burgers equation using a spatial wavelet approximation
NASA Technical Reports Server (NTRS)
Liandrat, J.; Tchamitchian, PH.
1990-01-01
The Burgers equation with a small viscosity term, initial and periodic boundary conditions is solved using a spatial approximation constructed from an orthonormal basis of wavelets. The algorithm is directly derived from the notions of multiresolution analysis and tree algorithms. Before the numerical algorithm is described, these notions are first recalled. The method uses extensively the localization properties of the wavelets in the physical and Fourier spaces. Moreover, the authors take advantage of the fact that the involved linear operators have constant coefficients. Finally, the algorithm can be considered as a time-marching version of the tree algorithm. The most important point is that an adaptive version of the algorithm exists: it allows one to significantly reduce the number of degrees of freedom required for a good computation of the solution. Numerical results and a description of the different elements of the algorithm are provided, together with mathematical comments on the method and some comparisons with more classical numerical algorithms.
Towards effective interactive three-dimensional colour postprocessing
NASA Technical Reports Server (NTRS)
Bailey, B. C.; Hajjar, J. F.; Abel, J. F.
1986-01-01
Recommendations for the development of effective three-dimensional, graphical color postprocessing are made. First, the evaluation of large, complex numerical models demands that a postprocessor be highly interactive. A menu of available functions should be provided and these operations should be performed quickly so that a sense of continuity and spontaneity exists during the post-processing session. Second, an agenda for three-dimensional color postprocessing is proposed. A postprocessor must be versatile with respect to application and basic algorithms must be designed so that they are flexible. A complete selection of tools is necessary to allow arbitrary specification of views, extraction of qualitative information, and access to detailed quantitative and problem information. Finally, full use of advanced display hardware is necessary if interactivity is to be maximized and effective postprocessing of today's numerical simulations is to be achieved.
Numerical simulation of steady supersonic flow. [spatial marching
NASA Technical Reports Server (NTRS)
Schiff, L. B.; Steger, J. L.
1981-01-01
A noniterative, implicit, space-marching, finite-difference algorithm was developed for the steady thin-layer Navier-Stokes equations in conservation-law form. The numerical algorithm is applicable to steady supersonic viscous flow over bodies of arbitrary shape. In addition, the same code can be used to compute supersonic inviscid flow or three-dimensional boundary layers. Computed results from two-dimensional and three-dimensional versions of the numerical algorithm are in good agreement with those obtained from more costly time-marching techniques.
Modifying a numerical algorithm for solving the matrix equation X + AX^T B = C
NASA Astrophysics Data System (ADS)
Vorontsov, Yu. O.
2013-06-01
Certain modifications are proposed for a numerical algorithm solving the matrix equation X + AX^T B = C. By keeping the intermediate results in storage and repeatedly using them, it is possible to reduce the total complexity of the algorithm from O(n^4) to O(n^3) arithmetic operations.
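The modified direct algorithm itself is not reproduced in the abstract; as a simple illustration of the equation and of an O(n^3)-per-sweep cost, a fixed-point iteration X_{k+1} = C - A X_k^T B can be sketched as below, under the contraction assumption ||A||·||B|| < 1 in spectral norm (the scaling of the random test matrices enforces this).

import numpy as np

def solve_fixed_point(A, B, C, tol=1e-12, max_iter=500):
    X = C.copy()
    for _ in range(max_iter):
        X_new = C - A @ X.T @ B          # two O(n^3) products per sweep
        if np.linalg.norm(X_new - X) < tol:
            return X_new
        X = X_new
    return X

rng = np.random.default_rng(1)
n = 50
A = 0.05 * rng.standard_normal((n, n))   # scaled so ||A||*||B|| < 1
B = 0.05 * rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
X = solve_fixed_point(A, B, C)
print(np.linalg.norm(X + A @ X.T @ B - C))   # residual ~ 0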
Numerical stability of the error diffusion concept
NASA Astrophysics Data System (ADS)
Weissbach, Severin; Wyrowski, Frank
1992-10-01
The error diffusion algorithm is an easily implementable means of handling nonlinearities in signal processing, e.g. in picture binarization and coding of diffractive elements. The numerical stability of the algorithm depends on the choice of the diffusion weights. A criterion for the stability of the algorithm is presented and evaluated for some examples.
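A minimal sketch of error diffusion for picture binarization, using the classic Floyd-Steinberg weights (7, 3, 5, 1)/16 as one stable choice: the weights are non-negative and sum to one, so the quantization error is redistributed without amplification.

import numpy as np

def error_diffuse(img):
    f = img.astype(float).copy()        # grayscale values in [0, 1]
    h, w = f.shape
    out = np.zeros_like(f)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if f[y, x] >= 0.5 else 0.0
            err = f[y, x] - out[y, x]   # quantization error to diffuse forward
            if x + 1 < w:
                f[y, x + 1] += err * 7.0 / 16.0
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3.0 / 16.0
                f[y + 1, x] += err * 5.0 / 16.0
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1.0 / 16.0
    return out

print(error_diffuse(np.tile(np.linspace(0.0, 1.0, 16), (4, 1))))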
Numerical Characterization of Piezoceramics Using Resonance Curves
Pérez, Nicolás; Buiochi, Flávio; Brizzotti Andrade, Marco Aurélio; Adamowski, Julio Cezar
2016-01-01
Piezoelectric materials characterization is a challenging problem involving physical concepts, electrical and mechanical measurements and numerical optimization techniques. Piezoelectric ceramics such as Lead Zirconate Titanate (PZT) belong to the 6 mm symmetry class, which requires five elastic, three piezoelectric and two dielectric constants to fully represent the material properties. If losses are considered, the material properties can be represented by complex numbers. In this case, 20 independent material constants are required to obtain the full model. Several numerical methods have been used to adjust the theoretical models to the experimental results. The continuous improvement of the computer processing ability has allowed the use of a specific numerical method, the Finite Element Method (FEM), to iteratively solve the problem of finding the piezoelectric constants. This review presents the recent advances in the numerical characterization of 6 mm piezoelectric materials from experimental electrical impedance curves. The basic strategy consists in measuring the electrical impedance curve of a piezoelectric disk, and then combining the Finite Element Method with an iterative algorithm to find a set of material properties that minimizes the difference between the numerical impedance curve and the experimental one. Different methods to validate the results are also discussed. Examples of characterization of some common piezoelectric ceramics are presented to show the practical application of the described methods. PMID:28787875
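A hedged sketch of the fitting loop described above, with the FEM forward model replaced by a simple Butterworth-Van Dyke equivalent circuit so that the example stays self-contained; all circuit constants are illustrative, not measured PZT values.

import numpy as np
from scipy.optimize import least_squares

def impedance(params, w):
    R, L, C, C0 = params
    z_motional = R + 1j * w * L + 1.0 / (1j * w * C)   # series RLC (motional) branch
    z_static = 1.0 / (1j * w * C0)                     # clamped-capacitance branch
    return 1.0 / (1.0 / z_motional + 1.0 / z_static)   # parallel combination

def residuals(params, w, z_meas):
    d = impedance(params, w) - z_meas
    return np.concatenate([d.real, d.imag])            # real residual vector

w = 2.0 * np.pi * np.linspace(1.9e6, 2.1e6, 400)       # frequency sweep around resonance
true = np.array([50.0, 1e-3, 6.3e-12, 1e-9])           # R, L, C, C0 (illustrative)
z_meas = impedance(true, w)                            # stands in for the measured curve
x0 = np.array([60.0, 1.2e-3, 6.0e-12, 1.2e-9])
fit = least_squares(residuals, x0, args=(w, z_meas), x_scale=np.abs(x0))
print(fit.x)   # converges back to the "true" constants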
NASA Astrophysics Data System (ADS)
Messenger, C.; Bulten, H. J.; Crowder, S. G.; Dergachev, V.; Galloway, D. K.; Goetz, E.; Jonker, R. J. G.; Lasky, P. D.; Meadors, G. D.; Melatos, A.; Premachandra, S.; Riles, K.; Sammut, L.; Thrane, E. H.; Whelan, J. T.; Zhang, Y.
2015-07-01
The low-mass X-ray binary Scorpius X-1 (Sco X-1) is potentially the most luminous source of continuous gravitational-wave radiation for interferometers such as LIGO and Virgo. For low-mass X-ray binaries this radiation would be sustained by active accretion of matter from its binary companion. With the Advanced Detector Era fast approaching, work is underway to develop an array of robust tools for maximizing the science and detection potential of Sco X-1. We describe the plans and progress of a project designed to compare the numerous independent search algorithms currently available. We employ a mock-data challenge in which the search pipelines are tested for their relative proficiencies in parameter estimation, computational efficiency, robustness, and most importantly, search sensitivity. The mock-data challenge data contains an ensemble of 50 Scorpius X-1 (Sco X-1) type signals, simulated within a frequency band of 50-1500 Hz. Simulated detector noise was generated assuming the expected best strain sensitivity of Advanced LIGO [1] and Advanced VIRGO [2] (4 × 10^-24 Hz^-1/2). A distribution of signal amplitudes was then chosen so as to allow a useful comparison of search methodologies. A factor of 2 in strain separates the quietest detected signal, at 6.8 × 10^-26 strain, from the torque-balance limit at a spin frequency of 300 Hz, although this limit could range from 1.2 × 10^-25 (25 Hz) to 2.2 × 10^-26 (750 Hz) depending on the unknown frequency of Sco X-1. With future improvements to the search algorithms and using advanced detector data, our expectations for probing below the theoretical torque-balance strain limit are optimistic.
Lang, Jun
2012-01-30
In this paper, we propose a novel secure image sharing scheme based on Shamir's three-pass protocol and the multiple-parameter fractional Fourier transform (MPFRFT), which can safely exchange information with no advance distribution of either secret keys or public keys between users. The image is encrypted directly by the MPFRFT spectrum without the use of phase keys, and information can be shared by transmitting the encrypted image (or message) three times between users. Numerical simulation results are given to verify the performance of the proposed algorithm.
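The MPFRFT-based image scheme is not reproduced here; the following sketch shows only the number-theoretic skeleton of Shamir's three-pass protocol on which it builds: commutative locking by exponentiation modulo a shared prime, with each party's exponent pair kept private (toy parameters for illustration only).

from math import gcd

p = 2**127 - 1                      # shared public prime (a toy choice)

def keypair(e):
    assert gcd(e, p - 1) == 1       # the lock exponent must be invertible mod p-1
    return e, pow(e, -1, p - 1)     # (lock, unlock) pair, kept private

ea, da = keypair(65537)             # Alice's private pair
eb, db = keypair(257)               # Bob's private pair

m = 123456789                       # the secret message, m < p
pass1 = pow(m, ea, p)               # Alice -> Bob: m locked by Alice
pass2 = pow(pass1, eb, p)           # Bob -> Alice: locked by both parties
pass3 = pow(pass2, da, p)           # Alice -> Bob: Alice's lock removed
assert pow(pass3, db, p) == m       # Bob removes his lock and reads m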
Concurrent design of an RTP chamber and advanced control system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spence, P.; Schaper, C.; Kermani, A.
1995-12-31
A concurrent-engineering approach is applied to the development of an axisymmetric rapid-thermal-processing (RTP) reactor and its associated temperature controller. Using a detailed finite-element thermal model as a surrogate for actual hardware, the authors have developed and tested a multi-input multi-output (MIMO) controller. Closed-loop simulations are performed by linking the control algorithm with the finite-element code. Simulations show that good temperature uniformity is maintained on the wafer during both steady and transient conditions. A numerical study shows the effect of ramp rate, feedback gain, sensor placement, and wafer-emissivity patterns on system performance.
Treatment of the neglected Achilles tendon rupture.
Bevilacqua, Nicholas J
2012-04-01
Achilles tendon ruptures are best managed acutely. Neglected Achilles tendon ruptures are debilitating injuries, and the increased complexity of the situation must be appreciated. Surgical management is recommended, and only in the poorest surgical candidate is conservative treatment entertained. Numerous treatment algorithms and surgical techniques have been described. A V-Y advancement flap and flexor hallucis longus tendon transfer have been found to be reliable and achieve good clinical outcomes for defects ranging from 2 cm to 8 cm. This article focuses on the treatment options for the neglected Achilles tendon rupture.
Icing detection from geostationary satellite data using machine learning approaches
NASA Astrophysics Data System (ADS)
Lee, J.; Ha, S.; Sim, S.; Im, J.
2015-12-01
Icing can cause significant structural damage to aircraft during flight, resulting in various aviation accidents. Icing studies have typically been performed using two approaches: one is a numerical model-based approach and the other is a remote sensing-based approach. The model-based approach diagnoses aircraft icing using numerical atmospheric parameters such as temperature, relative humidity, and vertical thermodynamic structure. This approach tends to over-estimate icing according to the literature. The remote sensing-based approach typically uses meteorological satellite/ground sensor data such as Geostationary Operational Environmental Satellite (GOES) and Dual-Polarization radar data. This approach detects icing areas by applying thresholds to parameters such as liquid water path and cloud optical thickness derived from remote sensing data. In this study, we propose an aircraft icing detection approach which optimizes thresholds for L1B bands and/or Cloud Optical Thickness (COT) from the Communication, Ocean and Meteorological Satellite-Meteorological Imager (COMS MI) and the newly launched Himawari-8 Advanced Himawari Imager (AHI) over East Asia. The proposed approach uses machine learning algorithms including decision trees (DT) and random forest (RF) for optimizing thresholds of L1B data and/or COT. Pilot Reports (PIREPs) from South Korea and Japan were used as icing reference data. Results show that RF produced a lower false alarm rate (1.5%) and a higher overall accuracy (98.8%) than DT (8.5% and 75.3%), respectively. The RF-based approach was also compared with the existing COMS MI and GOES-R icing mask algorithms. The agreements of the proposed approach with the two existing algorithms were 89.2% and 45.5%, respectively. The lower agreement with the GOES-R algorithm was possibly due to the high uncertainty of the cloud phase product from COMS MI.
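A hedged sketch of the threshold-learning setup described above: a random forest trained on satellite-derived predictors against icing/no-icing labels. The feature set and the synthetic data below are illustrative stand-ins for the COMS MI/AHI bands and PIREP labels, not the study's actual inputs.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2000
X = np.column_stack([
    rng.normal(250.0, 15.0, n),   # band brightness temperature [K] (assumed feature)
    rng.normal(260.0, 10.0, n),   # a second band [K] (assumed feature)
    rng.gamma(2.0, 8.0, n),       # cloud optical thickness (assumed feature)
])
# Synthetic rule standing in for PIREP labels: icing for cold, optically thick cloud
y = ((X[:, 0] < 250.0) & (X[:, 2] > 12.0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))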
NASA Astrophysics Data System (ADS)
Harmon, Michael; Gamba, Irene M.; Ren, Kui
2016-12-01
This work concerns the numerical solution of a coupled system of self-consistent reaction-drift-diffusion-Poisson equations that describes the macroscopic dynamics of charge transport in photoelectrochemical (PEC) solar cells with reactive semiconductor and electrolyte interfaces. We present three numerical algorithms, mainly based on a mixed finite element and a local discontinuous Galerkin method for spatial discretization, with carefully chosen numerical fluxes, and implicit-explicit time stepping techniques, for solving the time-dependent nonlinear systems of partial differential equations. We perform computational simulations under various model parameters to demonstrate the performance of the proposed numerical algorithms as well as the impact of these parameters on the solution to the model.
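A minimal sketch of the implicit-explicit time stepping mentioned above, on a scalar surrogate u_t = D u_xx + R(u) rather than the full reaction-drift-diffusion-Poisson system: the stiff diffusion term is treated implicitly (backward Euler, tridiagonal solve) and the nonlinear reaction explicitly. The 1D setting and the logistic reaction term are illustrative assumptions.

import numpy as np
from scipy.linalg import solve_banded

def imex_step(u, dt, dx, D):
    rhs = u + dt * u * (1.0 - u)             # explicit (logistic) reaction term
    n = u.size
    r = D * dt / dx**2
    ab = np.zeros((3, n))                    # banded form of I - dt*D*Laplacian
    ab[0, 1:] = -r                           # superdiagonal
    ab[1, :] = 1.0 + 2.0 * r                 # diagonal (homogeneous Dirichlet BCs)
    ab[2, :-1] = -r                          # subdiagonal
    return solve_banded((1, 1), ab, rhs)     # implicit diffusion solve

x = np.linspace(0.0, 1.0, 101)
u = np.exp(-100.0 * (x - 0.5) ** 2)
for _ in range(200):
    u = imex_step(u, dt=1e-3, dx=x[1] - x[0], D=0.01)
print(u.max())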
NASA Astrophysics Data System (ADS)
Labate, Demetrio; Negi, Pooran; Ozcan, Burcin; Papadakis, Manos
2015-09-01
As advances in imaging technologies make more and more data available for biomedical applications, there is an increasing need to develop efficient quantitative algorithms for the analysis and processing of imaging data. In this paper, we introduce an innovative multiscale approach called Directional Ratio which is especially effective at distinguishing isotropic from anisotropic structures. This task is especially useful in the analysis of images of neurons, the main units of the nervous system, which consist of a main cell body called the soma and many elongated processes called neurites. We analyze the theoretical properties of our method on idealized models of neurons and develop a numerical implementation of this approach for the analysis of fluorescent images of cultured neurons. We show that this algorithm is very effective for the detection of somas and the extraction of neurites in images of small circuits of neurons.
Modeling complex biological flows in multi-scale systems using the APDEC framework
NASA Astrophysics Data System (ADS)
Trebotich, David
2006-09-01
We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA laden fluids as ''bead-rod'' polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-sets method within the APDEC Framework for extracting surfaces from volume renderings of medical image data and used to simulate cardio-vascular and pulmonary flows in critical anatomies.
Advancements to the planogram frequency–distance rebinning algorithm
Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E
2010-01-01
In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact reconstruction) and planogram filtered backprojection image reconstruction algorithms. We show that the PFDRX algorithm produces images that are nearly as accurate as images reconstructed with the planogram filtered backprojection algorithm and more accurate than images reconstructed with the PFDR+FBP algorithm. Both the PFDR+FBP and PFDRX algorithms provide a dramatic improvement in computation time over the planogram filtered backprojection algorithm. PMID:20436790
NASA Technical Reports Server (NTRS)
Noor, A. K. (Editor); Hayduk, R. J. (Editor)
1985-01-01
Among the topics discussed are developments in structural engineering hardware and software, computation for fracture mechanics, trends in numerical analysis and parallel algorithms, mechanics of materials, advances in finite element methods, composite materials and structures, determinations of random motion and dynamic response, optimization theory, automotive tire modeling methods and contact problems, the damping and control of aircraft structures, and advanced structural applications. Specific topics covered include structural design expert systems, the evaluation of finite element system architectures, systolic arrays for finite element analyses, nonlinear finite element computations, hierarchical boundary elements, adaptive substructuring techniques in elastoplastic finite element analyses, automatic tracking of crack propagation, a theory of rate-dependent plasticity, the torsional stability of nonlinear eccentric structures, a computation method for fluid-structure interaction, the seismic analysis of three-dimensional soil-structure interaction, a stress analysis for a composite sandwich panel, toughness criterion identification for unidirectional composite laminates, the modeling of submerged cable dynamics, and damping synthesis for flexible spacecraft structures.
Projected role of advanced computational aerodynamic methods at the Lockheed-Georgia company
NASA Technical Reports Server (NTRS)
Lores, M. E.
1978-01-01
Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large and specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should be used to provide a tool for designing better aerospace vehicles while at the same time reducing development costs by performing computations using Navier-Stokes equations solution algorithms and permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry, therefore, the computer should handle many remote terminals efficiently. The capability of transferring data to and from other computers needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.
LLIMAS: Revolutionizing integrating modeling and analysis at MIT Lincoln Laboratory
NASA Astrophysics Data System (ADS)
Doyle, Keith B.; Stoeckel, Gerhard P.; Rey, Justin J.; Bury, Mark E.
2017-08-01
MIT Lincoln Laboratory's Integrated Modeling and Analysis Software (LLIMAS) enables the development of novel engineering solutions for advanced prototype systems through unique insights into engineering performance and interdisciplinary behavior to meet challenging size, weight, power, environmental, and performance requirements. LLIMAS is a multidisciplinary design optimization tool that wraps numerical optimization algorithms around an integrated framework of structural, thermal, optical, stray light, and computational fluid dynamics analysis capabilities. LLIMAS software is highly extensible and has developed organically across a variety of technologies including laser communications, directed energy, photometric detectors, chemical sensing, laser radar, and imaging systems. The custom software architecture leverages the capabilities of existing industry standard commercial software and supports the incorporation of internally developed tools. Recent advances in LLIMAS's Structural-Thermal-Optical Performance (STOP), aeromechanical, and aero-optical capabilities as applied to Lincoln prototypes are presented.
NASA Astrophysics Data System (ADS)
Sanger, Gregory M.; Reid, Paul B.; Baker, Lionel R.
1990-11-01
Consideration is given to advanced optical fabrication, profilometry and thin films, and metrology. Particular attention is given to automation for optics manufacturing, 3D contouring on a numerically controlled grinder, laser-scanning lens configurations, a noncontact precision measurement system, novel noncontact profiler design for measuring synchrotron radiation mirrors, laser-diode technologies for in-process metrology, measurements of X-ray reflectivities of Au-coatings at several energies, platinum coating of an X-ray mirror for SR lithography, a Hilbert transform algorithm for fringe-pattern analysis, structural error sources during fabrication of the AXAF optical elements, an in-process mirror figure qualification procedure for large deformable mirrors, interferometric evaluation of lenslet arrays for 2D phase-locked laser diode sources, and manufacturing and metrology tooling for the solar-A soft X-ray telescope.
e-DMDAV: A new privacy preserving algorithm for wearable enterprise information systems
NASA Astrophysics Data System (ADS)
Zhang, Zhenjiang; Wang, Xiaoni; Uden, Lorna; Zhang, Peng; Zhao, Yingsi
2018-04-01
Wearable devices have been widely used in many fields to improve the quality of people's lives. More and more data on individuals and businesses are collected by statistical organizations through those devices. Almost all of this data holds confidential information. Statistical Disclosure Control (SDC) seeks to protect statistical data in such a way that it can be released without giving away confidential information that can be linked to specific individuals or entities. The MDAV (Maximum Distance to Average Vector) algorithm is an efficient micro-aggregation algorithm belonging to SDC. However, the MDAV algorithm cannot survive homogeneity and background knowledge attacks because it was designed for static numerical data. This paper proposes a systematic dynamic-updating anonymity algorithm based on MDAV called the e-DMDAV algorithm. This algorithm introduces a new parameter and a table to ensure that each cluster holds at least k records and that the range of distinct values in each cluster is no less than e, for both numerical and non-numerical datasets. This new algorithm has been evaluated and compared with the MDAV algorithm. The simulation results show that the new algorithm outperforms MDAV in terms of minimizing distortion and disclosure risk with a similar computational cost.
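A hedged sketch of the classical MDAV microaggregation that e-DMDAV extends (the e-specific diversity check and the dynamic updating are not included): repeatedly pick the record farthest from the centroid, cluster it and its most distant counterpart with their k nearest neighbours, and publish cluster averages.

import numpy as np

def mdav(X, k):
    X = X.astype(float)
    idx = np.arange(len(X))
    clusters = []
    while len(idx) >= 3 * k:
        c = X[idx].mean(axis=0)
        r = idx[np.argmax(np.linalg.norm(X[idx] - c, axis=1))]     # farthest from centroid
        s = idx[np.argmax(np.linalg.norm(X[idx] - X[r], axis=1))]  # farthest from r
        for p in (r, s):
            if p not in idx:
                continue
            near = idx[np.argsort(np.linalg.norm(X[idx] - X[p], axis=1))[:k]]
            clusters.append(near)
            idx = np.setdiff1d(idx, near)
    clusters.append(idx)                   # the k to 3k-1 leftover records form one cluster
    out = X.copy()
    for cl in clusters:
        out[cl] = X[cl].mean(axis=0)       # publish the cluster average (k-anonymity)
    return out

data = np.random.default_rng(3).normal(size=(50, 2))
print(mdav(data, k=5)[:5])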
Random Walk Particle Tracking For Multiphase Heat Transfer
NASA Astrophysics Data System (ADS)
Lattanzi, Aaron; Yin, Xiaolong; Hrenya, Christine
2017-11-01
As computing capabilities have advanced, direct numerical simulation (DNS) has become a highly effective tool for quantitatively predicting the heat transfer within multiphase flows. Here we utilize a hybrid DNS framework that couples the lattice Boltzmann method (LBM) to the random walk particle tracking (RWPT) algorithm. The main difficulty of such a hybrid is that discontinuous fields pose a significant challenge to the RWPT framework, so special attention must be given to the handling of interfaces. We derive a method for addressing discontinuities in the diffusivity field, arising at the interface between two phases. Analytical means are utilized to develop an interfacial tracer balance and modify the RWPT algorithm. By expanding the modulus of the stochastic (diffusive) step and only allowing a subset of the tracers within the high-diffusivity medium to undergo a diffusive step, the correct equilibrium state can be restored (globally homogeneous tracer distribution). The new RWPT algorithm is implemented within the SUSP3D code and verified against a variety of systems: effective diffusivity of a static gas-solids mixture, hot sphere in unbounded diffusion, cooling sphere in unbounded diffusion, and uniform flow past a hot sphere.
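The following sketch does not reproduce the modified algorithm, but it demonstrates the artifact that motivates it: with a naive unmodified random walk across a diffusivity discontinuity, tracers initialized at the uniform equilibrium artificially accumulate in the low-diffusivity half of the domain (all parameters illustrative).

import numpy as np

rng = np.random.default_rng(0)
D1, D2, dt, L = 0.1, 1.0, 1e-4, 1.0          # low/high diffusivity halves of the domain
x = rng.uniform(0.0, L, 50000)               # start from the uniform equilibrium

for _ in range(1000):
    D = np.where(x < 0.5 * L, D1, D2)        # piecewise-constant diffusivity field
    x = x + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.size)
    x = np.abs(x)                            # reflecting wall at x = 0
    x = np.where(x > L, 2.0 * L - x, x)      # reflecting wall at x = L

print(f"fraction in low-D half: {np.mean(x < 0.5 * L):.3f} (should stay at 0.5)")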
NASA Technical Reports Server (NTRS)
Wang, Xu; Shi, Fang; Sigrist, Norbert; Seo, Byoung-Joon; Tang, Hong; Bikkannavar, Siddarayappa; Basinger, Scott; Lay, Oliver
2012-01-01
Large-aperture telescopes commonly feature segmented mirrors, and a coarse phasing step is needed to bring these individual segments into the fine phasing capture range. Dispersed Fringe Sensing (DFS) is a powerful coarse phasing technique, and a variant of it is currently being used for JWST. An Advanced Dispersed Fringe Sensing (ADFS) algorithm has recently been developed to improve the performance and robustness of previous DFS algorithms with better accuracy and a unique solution. The first part of the paper introduces the basic ideas and the essential features of the ADFS algorithm and presents some algorithm sensitivity study results. The second part of the paper describes the full details of the algorithm validation process through the advanced wavefront sensing and correction testbed (AWCT): first, the optimization of the DFS hardware of AWCT to ensure data accuracy and reliability is illustrated. Then, a few carefully designed algorithm validation experiments are implemented, and the corresponding data analysis results are shown. Finally, the fiducial calibration using the Range-Gate-Metrology technique is carried out, and an algorithm accuracy of <10 nm (<1%) is demonstrated.
Scalability Test of Multiscale Fluid-Platelet Model for Three Top Supercomputers
Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny
2016-01-01
We have tested the scalability of three supercomputers: the Tianhe-2, Stampede and CS-Storm with multiscale fluid-platelet simulations, in which a highly-resolved and efficient numerical model for nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S: 680,718-particle single-platelet; Exp-M: 2,722,872-particle 4-platelet; and Exp-L: 10,891,488-particle 16-platelet. Our implementations of the multiple time-stepping (MTS) algorithm improved on the performance of single time-stepping (STS) in all experiments. Using MTS, our model achieved the following simulation rates: 12.5, 25.0, 35.5 μs/day for Exp-S and 9.09, 6.25, 14.29 μs/day for Exp-M on Tianhe-2, CS-Storm 16-K80 and Stampede K20. The best rate for Exp-L was 6.25 μs/day for Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms bring within reach performing complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow which combines millisecond-scale hematology with microscale blood flow at resolutions of micro-to-nanoscale cellular components of platelets. This study of testing the performance characteristics of supercomputers with advanced computational algorithms that offer optimal trade-off to achieve enhanced computational performance serves to demonstrate that such simulations are feasible with currently available HPC resources. PMID:27570250
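A minimal sketch of the multiple time-stepping idea credited above with the speedups, in the r-RESPA style: cheap fast forces are integrated with a small inner step, while the expensive slow forces are applied only at the outer step. The force split and step sizes below are illustrative, not the model's actual force fields.

import numpy as np

def respa_step(x, v, f_fast, f_slow, dt, k):
    # One outer step of velocity-Verlet RESPA with k inner substeps
    v = v + 0.5 * dt * f_slow(x)              # slow kick (outer half step)
    h = dt / k
    for _ in range(k):                         # inner loop: fast dynamics only
        v = v + 0.5 * h * f_fast(x)
        x = x + h * v
        v = v + 0.5 * h * f_fast(x)
    v = v + 0.5 * dt * f_slow(x)              # slow kick (outer half step)
    return x, v

# Toy split: stiff spring (fast) plus weak spring (slow), unit mass
f_fast = lambda x: -100.0 * x
f_slow = lambda x: -1.0 * x
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, f_fast, f_slow, dt=0.05, k=10)
print(x, v)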
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Hixon, Duane; Sankar, L. N.
1993-01-01
During the past two decades, there has been significant progress in the field of numerical simulation of unsteady compressible viscous flows. At present, a variety of solution techniques exist such as the transonic small disturbance analyses (TSD), transonic full potential equation-based methods, unsteady Euler solvers, and unsteady Navier-Stokes solvers. These advances have been made possible by developments in three areas: (1) improved numerical algorithms; (2) automation of body-fitted grid generation schemes; and (3) advanced computer architectures with vector processing and massively parallel processing features. In this work, the GMRES scheme has been considered as a candidate for acceleration of a Newton iteration time marching scheme for unsteady 2-D and 3-D compressible viscous flow calculations; from preliminary calculations, this will provide up to a 65 percent reduction in the computer time requirements over the existing class of explicit and implicit time marching schemes. The proposed method has been tested on structured grids, but is flexible enough for extension to unstructured grids. The described scheme has been tested only on the current generation of vector processor architecture of the Cray Y/MP class, but should be suitable for adaptation to massively parallel machines.
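A hedged sketch of the GMRES-accelerated Newton iteration discussed above, applied to a small nonlinear surrogate F(u) = 0 rather than the compressible Navier-Stokes residual: each Newton step solves J du = -F with a matrix-free GMRES using finite-difference Jacobian-vector products.

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(u):
    # Discrete surrogate: -u'' + u^3 = 1 on (0,1) with homogeneous Dirichlet BCs
    n = u.size
    up = np.concatenate(([0.0], u, [0.0]))
    return -(up[2:] - 2.0 * u + up[:-2]) * (n + 1) ** 2 + u**3 - 1.0

def newton_gmres(u, tol=1e-10, eps=1e-7):
    for _ in range(20):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        # Matrix-free Jacobian-vector product by finite differences
        J = LinearOperator((u.size, u.size),
                           matvec=lambda v: (F(u + eps * v) - r) / eps)
        du, info = gmres(J, -r)            # Krylov solve of the Newton system
        u = u + du
    return u

u = newton_gmres(np.zeros(63))
print(np.linalg.norm(F(u)))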
UDU^T covariance factorization for Kalman filtering
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1980-01-01
There has been strong motivation to produce numerically stable formulations of the Kalman filter algorithms because it has long been known that the original discrete-time Kalman formulas are numerically unreliable. Numerical instability can be avoided by propagating certain factors of the estimate error covariance matrix rather than the covariance matrix itself. This paper documents filter algorithms that correspond to the covariance factorization P = UDU^T, where U is a unit upper triangular matrix and D is diagonal. Emphasis is on computational efficiency and numerical stability, since these properties are of key importance in real-time filter applications. The history of square-root and U-D covariance filters is reviewed. Simple examples are given to illustrate the numerical inadequacy of the Kalman covariance filter algorithms; these examples show how factorization techniques can give improved computational reliability.
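A minimal sketch of the factorization itself, assuming a symmetric positive-definite P: working from the last column backwards yields U unit upper triangular and D diagonal with P = U D U^T (the filter measurement- and time-update recursions built on it are not reproduced here).

import numpy as np

def udu_factor(P):
    P = P.astype(float).copy()
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):        # process the last column first
        d[j] = P[j, j]
        U[:j, j] = P[:j, j] / d[j]
        # Deflate: remove this column's rank-one contribution from the leading block
        P[:j, :j] -= d[j] * np.outer(U[:j, j], U[:j, j])
    return U, d

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5))
P = A @ A.T + 5.0 * np.eye(5)             # symmetric positive-definite test matrix
U, d = udu_factor(P)
print(np.allclose(U @ np.diag(d) @ U.T, P))   # True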
Second-order Poisson Nernst-Planck solver for ion channel transport
Zheng, Qiong; Chen, Duan; Wei, Guo-Wei
2010-01-01
The Poisson Nernst-Planck (PNP) theory is a simplified continuum model for a wide variety of chemical, physical and biological applications. Its ability of providing quantitative explanation and increasingly qualitative predictions of experimental measurements has earned itself much recognition in the research community. Numerous computational algorithms have been constructed for the solution of the PNP equations. However, in the realistic ion-channel context, no second order convergent PNP algorithm has ever been reported in the literature, due to many numerical obstacles, including discontinuous coefficients, singular charges, geometric singularities, and nonlinear couplings. The present work introduces a number of numerical algorithms to overcome the abovementioned numerical challenges and constructs the first second-order convergent PNP solver in the ion-channel context. First, a Dirichlet to Neumann mapping (DNM) algorithm is designed to alleviate the charge singularity due to the protein structure. Additionally, the matched interface and boundary (MIB) method is reformulated for solving the PNP equations. The MIB method systematically enforces the interface jump conditions and achieves the second order accuracy in the presence of complex geometry and geometric singularities of molecular surfaces. Moreover, two iterative schemes are utilized to deal with the coupled nonlinear equations. Furthermore, extensive and rigorous numerical validations are carried out over a number of geometries, including a sphere, two proteins and an ion channel, to examine the numerical accuracy and convergence order of the present numerical algorithms. Finally, application is considered to a real transmembrane protein, the Gramicidin A channel protein. The performance of the proposed numerical techniques is tested against a number of factors, including mesh sizes, diffusion coefficient profiles, iterative schemes, ion concentrations, and applied voltages. Numerical predictions are compared with experimental measurements. PMID:21552336
NASA Astrophysics Data System (ADS)
Die Moran, Andres; El kadi Abderrezzak, Kamal; Tassi, Pablo; Herouvet, Jean-Michel
2014-05-01
Bank erosion is a key process that may cause a large number of economic and environmental problems (e.g. land loss, damage to structures and aquatic habitat). Stream bank erosion (toe erosion and mass failure) represents an important form of channel morphology change and a significant source of sediment. With the advances made in computational techniques, two-dimensional (2-D) numerical models have become valuable tools for investigating flow and sediment transport in open channels at large temporal and spatial scales. However, the implementation of the mass failure process in 2-D numerical models is still a challenging task. In this paper, a simple, innovative algorithm is implemented in the Telemac-Mascaret modeling platform to handle bank failure: failure occurs whenever the actual slope of a given bed element exceeds the internal friction angle. The unstable bed elements are rotated around an appropriate axis, ensuring mass conservation. Mass failure of a bank due to slope instability is applied at the end of each sediment transport evolution iteration, once the bed evolution due to bed load (and/or suspended load) has been computed, but before the global sediment mass balance is verified. This bank failure algorithm is successfully tested on two laboratory experimental cases. Then, bank failure in a 1:40 scale physical model of the Rhine River composed of non-uniform material is simulated. The main features of the bank erosion and failure are correctly reproduced in the numerical simulations, namely the mass wasting at the bank toe, followed by failure at the bank head, and subsequent transport of the mobilised material in an aggradation front. The volumes of eroded material obtained are of the same order of magnitude as the volumes measured during the laboratory tests.
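To make the failure criterion concrete, here is a minimal one-dimensional analogue (Python; the function name and the relaxation loop are illustrative assumptions, not the Telemac-Mascaret implementation): any bed face whose slope exceeds the internal friction angle has material redistributed across it until it is stable, with total mass conserved.

```python
import numpy as np

def apply_bank_failure(z, dx, phi_deg, n_sweeps=200):
    """Relax a 1-D bed profile z so that no local slope exceeds the
    internal friction angle, moving material while conserving mass.
    A simplified 1-D illustration of the 2-D element-rotation idea."""
    tan_phi = np.tan(np.radians(phi_deg))
    z = z.copy()
    for _ in range(n_sweeps):
        slope = np.diff(z) / dx
        excess = np.abs(slope) - tan_phi
        if np.all(excess <= 0):
            break                        # all faces are stable
        i = np.argmax(excess)            # steepest unstable face
        # Move half of the excess height difference downslope;
        # z[i] + z[i+1] is unchanged, so mass is conserved.
        dz = 0.5 * excess[i] * dx * np.sign(slope[i])
        z[i] += dz
        z[i + 1] -= dz
    return z

z0 = np.array([0.0, 0.0, 2.0, 2.0])      # vertical bank face
z_stable = apply_bank_failure(z0, dx=1.0, phi_deg=30.0)
assert np.isclose(z0.sum(), z_stable.sum())   # mass conserved
```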
Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chiou, Jin-Chern
1990-01-01
Computational procedures for the kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE's) viewpoint. To minimize constraint violations during the time integration process, penalty constraint stabilization techniques and partitioning schemes are developed. To treat the governing equations of motion, a robust and parallelizable two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure is developed. This algorithm uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, a parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out. The DAE's and the constraint treatment techniques were transformed into arrowhead matrices, from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
Numerical Algorithms for Acoustic Integrals - The Devil is in the Details
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.
1996-01-01
The accurate prediction of the aeroacoustic field generated by aerospace vehicles or nonaerospace machinery is necessary for designers to control and reduce source noise. Powerful computational aeroacoustic methods, based on various acoustic analogies (primarily the Lighthill acoustic analogy) and Kirchhoff methods, have been developed for prediction of noise from complicated sources, such as rotating blades. Both methods ultimately predict the noise through a numerical evaluation of an integral formulation. In this paper, we consider three generic acoustic formulations and several numerical algorithms that have been used to compute the solutions to these formulations. Algorithms for retarded-time formulations are the most efficient and robust, but they are difficult to implement for supersonic-source motion. Collapsing-sphere and emission-surface formulations are good alternatives when supersonic-source motion is present, but the numerical implementations of these formulations are more computationally demanding. New algorithms - which utilize solution adaptation to provide a specified error level - are needed.
NASA Astrophysics Data System (ADS)
Lee, Yang-Sub
A time-domain numerical algorithm for solving the KZK (Khokhlov-Zabolotskaya-Kuznetsov) nonlinear parabolic wave equation is developed for pulsed, axisymmetric, finite amplitude sound beams in thermoviscous fluids. The KZK equation accounts for the combined effects of diffraction, absorption, and nonlinearity at the same order of approximation. The accuracy of the algorithm is established via comparison with analytical solutions for several limiting cases, and with numerical results obtained from a widely used algorithm for solving the KZK equation in the frequency domain. The time domain algorithm is used to investigate waveform distortion and shock formation in directive sound beams radiated by pulsed circular piston sources. New results include predictions for the entire process of self-demodulation, and for the effect of frequency modulation on pulse envelope distortion. Numerical results are compared with measurements, and focused sources are investigated briefly.
Design and Implementation of Hybrid CORDIC Algorithm Based on Phase Rotation Estimation for NCO
Zhang, Chaozhu; Han, Jinan; Li, Ke
2014-01-01
The numerically controlled oscillator has wide application in radar, digital receivers, and software radio systems. This paper first introduces the traditional CORDIC algorithm. Then, in order to improve computing speed and save resources, it proposes a hybrid CORDIC algorithm based on phase rotation estimation, applied to a numerically controlled oscillator (NCO). By estimating the direction of part of the phase rotations, the algorithm eliminates part of the rotation iterations and their add-subtract units, thereby decreasing delay. Furthermore, the paper simulates and implements the numerically controlled oscillator with the Quartus II and Modelsim software tools. Finally, simulation results indicate that an improvement over the traditional CORDIC algorithm is achieved in terms of ease of computation, resource utilization, and computing speed/delay, while maintaining precision. The design is suitable for high-speed, high-precision digital modulation and demodulation. PMID:25110750
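For reference, the plain rotation-mode CORDIC iteration that the hybrid algorithm improves on can be sketched as follows (Python floating-point emulation of the shift-add hardware; the phase-rotation-estimation step itself is not reproduced here):

```python
import math

def cordic_sin_cos(theta, n_iters=24):
    """Rotation-mode CORDIC: compute (cos(theta), sin(theta)) for
    theta in [-pi/2, pi/2] using only shifts and adds (emulated here
    in floating point)."""
    # Precomputed rotation angles atan(2^-i) and the scale factor K.
    angles = [math.atan(2.0 ** -i) for i in range(n_iters)]
    K = 1.0
    for i in range(n_iters):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(n_iters):
        d = 1.0 if z >= 0 else -1.0      # rotation direction drives z -> 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * K, y * K                  # (cos, sin)

c, s = cordic_sin_cos(0.6)
assert abs(c - math.cos(0.6)) < 1e-6 and abs(s - math.sin(0.6)) < 1e-6
```

Each iteration costs only shifts and adds, which is why removing a portion of the iterations (as the hybrid scheme does) translates directly into reduced delay.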
ANNIT - An Efficient Inversion Algorithm based on Prediction Principles
NASA Astrophysics Data System (ADS)
Růžek, B.; Kolář, P.
2009-04-01
The solution of inverse problems is an important task in geophysics. The amount of data is continuously increasing, methods of modeling are being improved, and computing facilities are making great technical progress. The development of new and efficient algorithms and computer codes for both forward and inverse modeling therefore remains timely. ANNIT contributes to this stream, since it is a tool for the efficient solution of sets of non-linear equations. Typical geophysical problems are based on a parametric approach: the system is characterized by a vector of parameters p, and the response of the system is characterized by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex; the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally, and in general it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D, M, respectively, are searched for, within which the forward mapping F is sufficiently smooth that the inverse mapping G does exist; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted by using this numerical approximation. ANNIT works iteratively, in cycles. The subspaces {pD, pM} are searched for by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping is made using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, and (c) linear prediction (also known as "kriging"). The ANNIT algorithm also has a built-in archive of already evaluated models. Archived models are re-used in a suitable way, and thus the number of forward evaluations is minimized. ANNIT is implemented in both MATLAB and SCILAB. Numerical tests show good performance of the algorithm. Both versions and documentation are available on the Internet and anybody can download them. The goal of this presentation is to offer the algorithm and computer codes to anybody interested in the solution of inverse problems.
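The prediction principle in steps (ii)-(iii) can be illustrated with a toy sketch (Python; the function names, the resampling rule, and the use of plain linear regression rather than RBF networks or kriging are illustrative assumptions, not the ANNIT code):

```python
import numpy as np

def predict_inverse(forward, d_obs, p_pop, n_cycles=20, sigma=0.5):
    """Fit a regression d -> p on an archive of forward-evaluated models,
    then predict a candidate p for the observed data d_obs."""
    archive_p, archive_d = [], []
    p_best = None
    for _ in range(n_cycles):
        for p in p_pop:                        # forward-evaluate population
            archive_p.append(p)
            archive_d.append(forward(p))
        P = np.array(archive_p)
        D = np.array(archive_d)
        # Linear regression approximating the inverse mapping G: d -> p.
        A = np.hstack([D, np.ones((len(D), 1))])
        coef, *_ = np.linalg.lstsq(A, P, rcond=None)
        p_best = np.append(d_obs, 1.0) @ coef   # predicted candidate
        # Re-sample a shrinking population around the prediction.
        p_pop = p_best + sigma * np.random.randn(8, len(p_best))
        sigma *= 0.7
    return p_best

# Toy example: recover p from d = F(p) for a mildly nonlinear forward map.
forward = lambda p: np.array([p[0] + 0.1 * p[1] ** 2, p[1]])
p_true = np.array([1.0, 2.0])
p_est = predict_inverse(forward, forward(p_true), np.random.randn(8, 2))
```

The archive plays the same role as in the abstract: every forward evaluation is kept and re-used by the regression, so expensive forward solves are never repeated.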
Studies of numerical algorithms for gyrokinetics and the effects of shaping on plasma turbulence
NASA Astrophysics Data System (ADS)
Belli, Emily Ann
Advanced numerical algorithms for gyrokinetic simulations are explored for more effective studies of plasma turbulent transport. The gyrokinetic equations describe the dynamics of particles in 5-dimensional phase space, averaging over the fast gyromotion, and provide a foundation for studying plasma microturbulence in fusion devices and in astrophysical plasmas. Several algorithms for Eulerian/continuum gyrokinetic solvers are compared. An iterative implicit scheme based on numerical approximations of the plasma response is developed. This method reduces the long time needed to set up the implicit arrays, yet retains larger-time-step advantages similar to a fully implicit method. Various model preconditioners and iteration schemes, including Krylov-based solvers, are explored. An Alternating Direction Implicit algorithm is also studied and is surprisingly found to yield a severe stability restriction on the time step. Overall, an iterative Krylov algorithm might be the best approach for extensions of core tokamak gyrokinetic simulations to edge kinetic formulations and may be particularly useful for studies of large-scale ExB shear effects. The effects of flux surface shape on the gyrokinetic stability and transport of tokamak plasmas are studied using the nonlinear GS2 gyrokinetic code with analytic equilibria based on interpolations of representative JET-like shapes. High shaping is found to be a stabilizing influence on both the linear ITG instability and nonlinear ITG turbulence. A scaling of the heat flux with elongation of chi ~ kappa^-1.5 or kappa^-2 (depending on the triangularity) is observed, which is consistent with previous gyrofluid simulations. Thus, the GS2 turbulence simulations explain a significant fraction, but not all, of the empirical elongation scaling. The remainder of the scaling may come from (1) the edge boundary conditions for core turbulence, and (2) the larger Dimits nonlinear critical temperature gradient shift due to the enhancement of zonal flows with shaping, which is observed in the GS2 simulations. Finally, a local linear trial-function-based gyrokinetic code is developed to aid in fast scoping studies of gyrokinetic linear stability. This code is successfully benchmarked against the full GS2 code in the collisionless, electrostatic limit, as well as in the more general electromagnetic description with higher-order Hermite basis functions.
Advances of the smooth variable structure filter: square-root and two-pass formulations
NASA Astrophysics Data System (ADS)
Gadsden, S. Andrew; Lee, Andrew S.
2017-01-01
The smooth variable structure filter (SVSF) has seen significant development and research activity in recent years. It is based on sliding mode concepts, which utilize a switching gain that brings an inherent amount of stability to the estimation process. In an effort to improve upon the numerical stability of the SVSF, a square-root formulation is derived. The square-root SVSF is based on Potter's algorithm. The proposed formulation is computationally more efficient and reduces the risks of failure due to numerical instability. The new strategy is applied on target tracking scenarios for the purposes of state estimation, and the results are compared with the popular Kalman filter. In addition, the SVSF is reformulated to present a two-pass smoother based on the SVSF gain. The proposed method is applied on an aerospace flight surface actuator, and the results are compared with the Kalman-based two-pass smoother.
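Potter's algorithm, the basis of the square-root formulation above, processes a scalar measurement by updating a square root S of the covariance directly, so that P = S S^T never loses positive semi-definiteness to round-off. A minimal sketch (Python; written for a generic Kalman-style gain to show Potter's mechanics, not the SVSF switching gain):

```python
import numpy as np

def potter_update(x, S, h, r, z):
    """Potter's square-root measurement update for a scalar observation
    z = h @ x + v, v ~ N(0, r), with covariance square root S (P = S S^T)."""
    phi = S.T @ h                      # n-vector
    a = 1.0 / (phi @ phi + r)
    gamma = 1.0 / (1.0 + np.sqrt(r * a))
    K = a * (S @ phi)                  # Kalman gain
    x_new = x + K * (z - h @ x)
    S_new = S - gamma * np.outer(K, phi)
    return x_new, S_new

# Consistency check against the conventional covariance update.
rng = np.random.default_rng(0)
S = np.tril(rng.standard_normal((3, 3))) + 3 * np.eye(3)
P = S @ S.T
h = np.array([1.0, 0.5, 0.0]); r = 0.4
x, S1 = potter_update(np.zeros(3), S, h, r, z=1.0)
P1 = P - np.outer(P @ h, P @ h) / (h @ P @ h + r)
assert np.allclose(S1 @ S1.T, P1)
```

Because only S is stored and updated, the effective numerical precision of the covariance is roughly doubled, which is the motivation for the square-root SVSF.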
Finite element solution of optimal control problems with inequality constraints
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.
1990-01-01
A finite-element method based on a weak Hamiltonian form of the necessary conditions is summarized for optimal control problems. Very crude shape functions (so simple that element numerical quadrature is not necessary) can be used to develop an efficient procedure for obtaining candidate solutions (i.e., those which satisfy all the necessary conditions) even for highly nonlinear problems. An extension of the formulation allowing for discontinuities in the states and derivatives of the states is given. A theory that includes control inequality constraints is fully developed. An advanced launch vehicle (ALV) model is presented. The model involves staging and control constraints, thus demonstrating the full power of the weak formulation to date. Numerical results are presented along with total elapsed computer time required to obtain the results. The speed and accuracy in obtaining the results make this method a strong candidate for a real-time guidance algorithm.
Computational Aeroacoustics: An Overview
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.
2003-01-01
An overview of recent advances in computational aeroacoustics (CAA) is presented. CAA algorithms must be neither dispersive nor dissipative: they should propagate waves supported by the Euler equations with the correct group velocities. Computation domains are inevitably finite in size. To avoid the reflection of acoustic and other outgoing waves at the boundaries of the computation domain, special boundary conditions must be imposed in the boundary region. These boundary conditions either absorb all the outgoing waves without reflection or allow the waves to exit smoothly. High-order schemes invariably support spurious short waves. These spurious waves tend to pollute the numerical solution and must be selectively damped or filtered out. All these issues and the relevant computation methods are briefly reviewed. Jet screech tones are known to have caused structural fatigue in military combat aircraft. Numerical simulation of the jet screech phenomenon is presented as an example of a successful application of CAA.
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Carpenter, Mark H.; Lockard, David P.
2009-01-01
Recent experience in the application of an optimized, second-order, backward-difference (BDF2OPT) temporal scheme is reported. The primary focus of the work is on obtaining accurate solutions of the unsteady Reynolds-averaged Navier-Stokes equations over long periods of time for aerodynamic problems of interest. The baseline flow solver under consideration uses a particular BDF2OPT temporal scheme with a dual-time-stepping algorithm for advancing the flow solutions in time. Numerical difficulties are encountered with this scheme when the flow code is run for a large number of time steps, a behavior not seen with the standard second-order, backward-difference temporal scheme. Based on a stability analysis, slight modifications to the BDF2OPT scheme are suggested. The performance and accuracy of this modified scheme are assessed by comparing the computational results with other numerical schemes and experimental data.
NASA Astrophysics Data System (ADS)
Murray, Ian B.; Densmore, Victor; Bora, Vaibhav; Pieratt, Matthew W.; Hibbard, Douglas L.; Milster, Tom D.
2011-06-01
Coatings of various metalized patterns are used for heating and electromagnetic interference (EMI) shielding applications. Previous work has focused on macro differences between different types of grids, and has shown good correlation between measurements and analyses of grid diffraction. To advance this work, we have utilized the University of Arizona's OptiScan software, which has been optimized for this application by using the Babinet Principle. When operating on an appropriate computer system, this algorithm produces results hundreds of times faster than standard Fourier-based methods, and allows realistic cases to be modeled for the first time. By using previously published derivations by Exotic Electro-Optics, we compare diffraction performance of repeating and randomized grid patterns with equivalent sheet resistance using numerical performance metrics. Grid patterns of each type are printed on optical substrates and measured energy is compared against modeled energy.
NASA Astrophysics Data System (ADS)
Recent advances in computational fluid dynamics are discussed in reviews and reports. Topics addressed include large-scale LESs for turbulent pipe and channel flows, numerical solutions of the Euler and Navier-Stokes equations on parallel computers, multigrid methods for steady high-Reynolds-number flow past sudden expansions, finite-volume methods on unstructured grids, supersonic wake flow on a blunt body, a grid-characteristic method for multidimensional gas dynamics, and CIC numerical simulation of a wave boundary layer. Consideration is given to vortex simulations of confined two-dimensional jets, supersonic viscous shear layers, spectral methods for compressible flows, shock-wave refraction at air/water interfaces, oscillatory flow in a two-dimensional collapsible channel, the growth of randomness in a spatially developing wake, and an efficient simplex algorithm for the finite-difference and dynamic linear-programming method in optimal potential control.
Research on numerical algorithms for large space structures
NASA Technical Reports Server (NTRS)
Denman, E. D.
1981-01-01
Numerical algorithms for the analysis and design of large space structures are investigated. The sign algorithm and its application to the decoupling of differential equations are presented. The generalized sign algorithm is given and its application to several problems discussed. The Laplace transforms of matrix functions and the diagonalization procedure for a finite element equation are discussed, and the diagonalization of matrix polynomials is considered. The quadrature method and Laplace transforms are discussed, and the identification of linear systems by the quadrature method is investigated.
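The sign algorithm referred to above is commonly realized with the Newton iteration X_{k+1} = (X_k + X_k^{-1})/2, whose limit is the matrix sign function used to decouple systems of differential equations. A minimal sketch (Python; the convergence test is an illustrative choice):

```python
import numpy as np

def matrix_sign(A, tol=1e-12, max_iter=100):
    """Newton iteration X <- (X + X^{-1})/2 for the matrix sign function.
    Converges quadratically for matrices with no purely imaginary
    eigenvalues; sign(A) has eigenvalues +/-1 and its spectral projectors
    can be used to decouple (block-diagonalize) differential equations."""
    X = A.astype(float).copy()
    for _ in range(max_iter):
        X_next = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(X_next - X, 1) < tol * np.linalg.norm(X, 1):
            return X_next
        X = X_next
    return X

A = np.array([[3.0, 1.0],
              [0.0, -2.0]])
S = matrix_sign(A)
assert np.allclose(S @ S, np.eye(2))   # sign(A)^2 = I
```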
Lykiardopoulos, Byron; Hagström, Hannes; Fredrikson, Mats; Ignatova, Simone; Stål, Per; Hultcrantz, Rolf; Ekstedt, Mattias; Kechagias, Stergios
2016-01-01
Detection of advanced fibrosis (F3-F4) in nonalcoholic fatty liver disease (NAFLD) is important for ascertaining prognosis. Serum markers have been proposed as alternatives to biopsy. We attempted to develop a novel algorithm for the detection of advanced fibrosis based on a more efficient combination of serological markers and to compare this with established algorithms. We included 158 patients with biopsy-proven NAFLD. Of these, 38 had advanced fibrosis. The following fibrosis algorithms were calculated: NAFLD fibrosis score, BARD, NIKEI, NASH-CRN regression score, APRI, FIB-4, King's score, GUCI, Lok index, Forns score, and ELF. The study population was randomly divided into a training and a validation group. A multiple logistic regression analysis using bootstrapping methods was applied to the training group. Among the many variables analyzed, age, fasting glucose, hyaluronic acid, and AST were included, and a model (LINKI-1) for predicting advanced fibrosis was created. Moreover, these variables were combined with platelet count in a mathematical formulation that exaggerates their opposing effects, and alternative models (LINKI-2) were also created. Models were compared using the area under the receiver operating characteristic curve (AUROC). Of the established algorithms, FIB-4 and King's score had the best diagnostic accuracy, with AUROCs of 0.84 and 0.83, respectively. Higher accuracy was achieved with the novel LINKI algorithms: the AUROC in the total cohort was 0.91 for LINKI-1 and 0.89 for the LINKI-2 models. The LINKI algorithms for the detection of advanced fibrosis in NAFLD showed better accuracy than established algorithms and should be validated in further studies including larger cohorts.
Efficient Parallel Algorithm For Direct Numerical Simulation of Turbulent Flows
NASA Technical Reports Server (NTRS)
Moitra, Stuti; Gatski, Thomas B.
1997-01-01
A distributed algorithm for a high-order-accurate finite-difference approach to the direct numerical simulation (DNS) of transition and turbulence in compressible flows is described. This work has two major objectives. The first objective is to demonstrate that parallel and distributed-memory machines can be successfully and efficiently used to solve computationally intensive and input/output intensive algorithms of the DNS class. The second objective is to show that the computational complexity involved in solving the tridiagonal systems inherent in the DNS algorithm can be reduced by algorithm innovations that obviate the need to use a parallelized tridiagonal solver.
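The tridiagonal systems mentioned above are classically solved with the Thomas algorithm, whose serial recurrences are precisely what makes naive parallelization hard. A minimal sketch of that serial kernel (Python; the DNS-specific restructuring that removes the bottleneck is not shown):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-, main-, and super-diagonals
    a, b, c and right-hand side d by the Thomas algorithm, in O(n).
    a[0] and c[-1] are unused. Assumes no pivoting is needed."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1-D Poisson test with the standard [-1, 2, -1] stencil.
n = 50
a = -np.ones(n); b = 2 * np.ones(n); c = -np.ones(n)
x = thomas_solve(a, b, c, np.ones(n))
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
assert np.allclose(A @ x, np.ones(n))
```

Both loops carry a data dependence from step to step, which is why the paper's second objective, restructuring the DNS algorithm to avoid a parallelized tridiagonal solver altogether, matters on distributed-memory machines.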
AutoBayes Program Synthesis System System Internals
NASA Technical Reports Server (NTRS)
Schumann, Johann Martin
2011-01-01
This lecture combines the theoretical background of schema-based program synthesis with the hands-on study of a powerful, open-source program synthesis system (AutoBayes). Schema-based program synthesis is a popular approach to program synthesis. The lecture provides an introduction to this topic and discusses how this technology can be used to generate customized algorithms. The synthesis of advanced numerical algorithms requires the availability of a powerful symbolic (algebra) system. Its task is to symbolically solve equations, simplify expressions, or symbolically calculate derivatives (among others) so that the synthesized algorithms become as efficient as possible. We discuss the use and importance of the symbolic system for synthesis. Any synthesis system is a large and complex piece of code. In this lecture, we study AutoBayes in detail. AutoBayes has been developed at NASA Ames and has been made open source. It takes a compact statistical specification and generates a customized data analysis algorithm (in C/C++) from it. AutoBayes is written in SWI Prolog and uses many concepts from rewriting, logic, functional, and symbolic programming. We discuss the system architecture, the schema library, and the extensive support infrastructure. Practical hands-on experiments and exercises enable the student to gain insight into a realistic program synthesis system and provide the knowledge needed to use, modify, and extend AutoBayes.
Numerical Algorithms Based on Biorthogonal Wavelets
NASA Technical Reports Server (NTRS)
Ponenti, Pj.; Liandrat, J.
1996-01-01
Wavelet bases are used to generate spaces of approximation for the resolution of bidimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is then stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multiresolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided, as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.
Algorithms for computing the geopotential using a simple density layer
NASA Technical Reports Server (NTRS)
Morrison, F.
1976-01-01
Several algorithms have been developed for computing the potential and attraction of a simple density layer. These are numerical cubature, Taylor series, and a mixed analytic and numerical integration using a singularity-matching technique. A computer program has been written to combine these techniques for computing the disturbing acceleration on an artificial earth satellite. A total of 1640 equal-area, constant surface density blocks on an oblate spheroid are used. The singularity-matching algorithm is used in the subsatellite region, Taylor series in the surrounding zone, and numerical cubature on the rest of the earth.
Multidisciplinary Optimization of a Transport Aircraft Wing using Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard
2002-01-01
The purpose of this paper is to demonstrate the application of particle swarm optimization to a realistic multidisciplinary optimization test problem. The paper's new contributions to multidisciplinary optimization are the application of a new algorithm for dealing with the unique challenges associated with multidisciplinary optimization problems, and recommendations as to the utility of the algorithm in future multidisciplinary optimization applications. The selected example is a bi-level optimization problem that demonstrates severe numerical noise and has a combination of continuous and truly discrete design variables. The use of traditional gradient-based optimization algorithms is thus not practical. The numerical results presented indicate that the particle swarm optimization algorithm is able to reliably find the optimum design for the problem presented here. The algorithm is capable of dealing with the unique challenges posed by multidisciplinary optimization, as well as with the numerical noise and truly discrete variables present in the current example problem.
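A basic global-best particle swarm optimizer of the kind applied here fits in a few lines. The sketch below (Python; the parameter values are illustrative, and the bi-level MDO structure is not reproduced) shows why the method tolerates numerical noise: it uses only function values, never gradients.

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5):
    """Basic global-best particle swarm optimizer (minimization)."""
    rng = np.random.default_rng(1)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    p_best = x.copy()                        # per-particle best positions
    p_val = np.array([f(p) for p in x])
    g_best = p_best[p_val.argmin()].copy()   # swarm-wide best position
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        improved = val < p_val
        p_best[improved], p_val[improved] = x[improved], val[improved]
        g_best = p_best[p_val.argmin()].copy()
    return g_best, p_val.min()

# Noisy quadratic: PSO still finds the neighborhood of the optimum at (3, 3).
f = lambda p: np.sum((p - 3.0) ** 2) + 0.01 * np.random.randn()
x_opt, f_opt = pso(f, (np.array([-10.0, -10.0]), np.array([10.0, 10.0])))
```

Rounding selected components of x to the nearest allowed value is one simple way such a swarm can be made to handle the truly discrete design variables mentioned above.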
An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices, part 2
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Nachtigal, Noel M.
1990-01-01
It is shown how the look-ahead Lanczos process (combined with a quasi-minimal residual (QMR) approach) can be used to develop a robust black box solver for large sparse non-Hermitian linear systems. Details of an implementation of the resulting QMR algorithm are presented. It is demonstrated that the QMR method is closely related to the biconjugate gradient (BCG) algorithm; however, unlike BCG, the QMR algorithm has smooth convergence curves and good numerical properties. We report numerical experiments with our implementation of the look-ahead Lanczos algorithm, both for eigenvalue problems and for linear systems. Program listings of FORTRAN implementations of the look-ahead algorithm and the QMR method are also included.
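Readers who want to experiment with QMR need not start from the FORTRAN listings: SciPy ships an implementation. A small usage sketch (Python; the test matrix is an arbitrary non-Hermitian example, not one from the paper):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import qmr

# A non-Hermitian tridiagonal test matrix (convection-diffusion flavored).
n = 200
A = diags([-1.2 * np.ones(n - 1), 2.0 * np.ones(n), -0.8 * np.ones(n - 1)],
          offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

# QMR in the Freund-Nachtigal sense: smooth convergence on
# non-Hermitian systems where BCG can oscillate.
x, info = qmr(A, b)
assert info == 0
assert np.linalg.norm(A @ x - b) / np.linalg.norm(b) < 1e-4
```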
On the Reprocessing and Reanalysis of Observations for Climate
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Kennedy, John; Dee, Dick; ONeill, Alan
2012-01-01
The long observational record is critical to our understanding of the Earth's climate, but most observing systems were not developed with a climate objective in mind. As a result, tremendous efforts have gone into assessing and reprocessing the data records to improve their usefulness in climate studies. Many challenges remain, such as tracking the improvement of processing algorithms and limited spatial coverage. Reanalyses have fostered significant research, yet reliable global trends in many physical fields are not yet attainable, despite significant advances in data assimilation and numerical modeling. Communication of the strengths, limitations and uncertainties of reprocessed observations and reanalysis data, not only among the community of developers, but also with the extended research community, including new generations of researchers and decision makers, is crucial for further advancement of the observational data records. WCRP provides the means to bridge the different motivating objectives on which national efforts focus.
Recent advances in the modeling of plasmas with the Particle-In-Cell methods
NASA Astrophysics Data System (ADS)
Vay, Jean-Luc; Lehe, Remi; Vincenti, Henri; Godfrey, Brendan; Lee, Patrick; Haber, Irv
2015-11-01
The Particle-In-Cell (PIC) approach is the method of choice for self-consistent simulations of plasmas from first principles. The fundamentals of the PIC method were established decades ago but improvements or variations are continuously being proposed. We report on several recent advances in PIC related algorithms, including: (a) detailed analysis of the numerical Cherenkov instability and its remediation, (b) analytic pseudo-spectral electromagnetic solvers in Cartesian and cylindrical (with azimuthal modes decomposition) geometries, (c) arbitrary-order finite-difference and generalized pseudo-spectral Maxwell solvers, (d) novel analysis of Maxwell's solvers' stencil variation and truncation, in application to domain decomposition strategies and implementation of Perfectly Matched Layers in high-order and pseudo-spectral solvers. Work supported by US-DOE Contracts DE-AC02-05CH11231 and the US-DOE SciDAC program ComPASS. Used resources of NERSC, supported by US-DOE Contract DE-AC02-05CH11231.
Multigrid for Staggered Lattice Fermions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brower, Richard C.; Clark, M. A.; Strelchenko, Alexei
Critical slowing down in Krylov methods for the Dirac operator presents a major obstacle to further advances in lattice field theory as it approaches the continuum limit. Here we formulate a multi-grid algorithm for the Kogut-Susskind (or staggered) fermion discretization, which has proven difficult relative to Wilson multigrid due to its first-order anti-Hermitian structure. The solution is to introduce a novel spectral transformation by the Kähler-Dirac spin structure prior to the Galerkin projection. We present numerical results for the two-dimensional, two-flavor Schwinger model; however, the general formalism is agnostic to dimension and is directly applicable to four-dimensional lattice QCD.
NASA Technical Reports Server (NTRS)
Freed, Alan; Diethelm, Kai; Luchko, Yury
2002-01-01
This is the first annual report to the U.S. Army Medical Research and Material Command for the three year project "Advanced Soft Tissue Modeling for Telemedicine and Surgical Simulation" supported by grant No. DAMD17-01-1-0673 to The Cleveland Clinic Foundation, to which the NASA Glenn Research Center is a subcontractor through Space Act Agreement SAA 3-445. The objective of this report is to extend popular one-dimensional (1D) fractional-order viscoelastic (FOV) materials models into their three-dimensional (3D) equivalents for finitely deforming continua, and to provide numerical algorithms for their solution.
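A common numerical backbone for such fractional-order viscoelastic (FOV) models is the Grünwald-Letnikov approximation of a fractional derivative of order 0 < α < 1. A minimal sketch (Python; the function name and the test case are illustrative, and this is not the report's own algorithm):

```python
import numpy as np
from math import gamma

def gl_fractional_derivative(f, alpha, t_end, n):
    """Grunwald-Letnikov approximation of the order-alpha fractional
    derivative of f on [0, t_end] with n steps. Assumes f(0) = 0, so
    the Riemann-Liouville and Caputo definitions coincide."""
    h = t_end / n
    t = np.linspace(0.0, t_end, n + 1)
    # Binomial weights w_k = (-1)^k C(alpha, k), built by recurrence.
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    y = f(t)
    # D^alpha f(t_m) ~ h^-alpha * sum_k w_k * f(t_{m-k})
    d = np.array([np.dot(w[: m + 1], y[m::-1]) for m in range(n + 1)])
    return t, d / h ** alpha

# Check against the exact result D^alpha t = t^(1-alpha) / Gamma(2-alpha).
alpha = 0.5
t, d = gl_fractional_derivative(lambda t: t, alpha, t_end=1.0, n=2000)
exact = t ** (1 - alpha) / gamma(2 - alpha)
assert np.max(np.abs(d[1:] - exact[1:])) < 1e-2
```

The full-history convolution sum is what makes 3D FOV computations expensive, and is one reason dedicated numerical algorithms are needed for the finite-deformation extension described above.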
Gong, Mali; Yuan, Yanyang; Li, Chen; Yan, Ping; Zhang, Haitao; Liao, Suying
2007-03-19
A model based on propagation-rate equations with consideration of transverse gain distribution is built up to describe the transverse mode competition in strongly pumped multimode fiber lasers and amplifiers. An approximate practical numerical algorithm by multilayer method is presented. Based on the model and the numerical algorithm, the behaviors of multitransverse mode competition are demonstrated and individual transverse modes power distributions of output are simulated numerically for both fiber lasers and amplifiers under various conditions.
Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction
Fahimian, Benjamin P.; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J.; Osher, Stanley J.; McNitt-Gray, Michael F.; Miao, Jianwei
2013-01-01
Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp Davis and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-bean CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method. PMID:23464329
NASA Astrophysics Data System (ADS)
Ha, J.; Chung, W.; Shin, S.
2015-12-01
Many waveform inversion algorithms have been proposed in order to construct subsurface velocity structures from seismic data sets. These algorithms have suffered from computational burden, local minima problems, and the lack of low-frequency components. Computational efficiency can be improved by the application of back-propagation techniques and advances in computing hardware. In addition, waveform inversion algorithms for obtaining long-wavelength velocity models can avoid both the local minima problem and the effect of the lack of low-frequency components in seismic data. In this study, we propose spectrogram inversion as a technique for recovering long-wavelength velocity models. In spectrogram inversion, frequency components decomposed from the spectrograms of traces in the observed and calculated data are utilized to generate traces with reproduced low-frequency components. Moreover, since each decomposed component can reveal different characteristics of a subsurface structure, several frequency components are utilized to analyze the velocity features of the subsurface. We performed spectrogram inversion using a modified SEG/EAGE salt A-A' line. Numerical results demonstrate that spectrogram inversion can recover long-wavelength velocity features. However, inversion results vary according to the frequency components utilized. Based on the results of inversion using a decomposed single-frequency component, we observe that robust inversion results are obtained when a dominant frequency component of the spectrogram is utilized. In addition, detailed information on the recovered long-wavelength velocity models is obtained by using a multi-frequency component combined with single-frequency components. Numerical examples indicate that various detailed analyses of long-wavelength velocity models can be carried out utilizing several frequency components.
Nonlinear Computational Aeroelasticity: Formulations and Solution Algorithms
2003-03-01
A formulation of the nonlinear computational aeroelasticity problem is proposed. Fluid-structure coupling algorithms are then discussed, with some emphasis on distributed computing strategies and on the exchange of structure motion to the fluid. The computational fluid dynamics code PFES is a finite element code for the numerical solution of flows on unstructured meshes. It was numerically demonstrated [1-3] that EBS can be less diffusive than SUPG [4-6] and standard finite volume schemes.
A numerically-stable algorithm for calibrating single six-ports for national microwave reflectometry
NASA Astrophysics Data System (ADS)
Hodgetts, T. E.
1990-11-01
A full description and analysis is given of the numerically stable algorithm currently used for calibrating single six-ports or multi-states for national microwave reflectometry, employing as standards four one-port devices having known voltage reflection coefficients.
Automatic extraction of numeric strings in unconstrained handwritten document images
NASA Astrophysics Data System (ADS)
Haji, M. Mehdi; Bui, Tien D.; Suen, Ching Y.
2011-01-01
Numeric strings such as identification numbers carry vital pieces of information in documents. In this paper, we present a novel algorithm for automatic extraction of numeric strings in unconstrained handwritten document images. The algorithm has two main phases: pruning and verification. In the pruning phase, the algorithm first performs a new segment-merge procedure on each text line, and then using a new regularity measure, it prunes all sequences of characters that are unlikely to be numeric strings. The segment-merge procedure is composed of two modules: a new explicit character segmentation algorithm which is based on analysis of skeletal graphs and a merging algorithm which is based on graph partitioning. All the candidate sequences that pass the pruning phase are sent to a recognition-based verification phase for the final decision. The recognition is based on a coarse-to-fine approach using probabilistic RBF networks. We developed our algorithm for the processing of real-world documents where letters and digits may be connected or broken in a document. The effectiveness of the proposed approach is shown by extensive experiments done on a real-world database of 607 documents which contains handwritten, machine-printed and mixed documents with different types of layouts and levels of noise.
NASA Astrophysics Data System (ADS)
Block, Martin M.; Durand, Loyal
2011-11-01
We recently derived a very accurate and fast new algorithm for numerically inverting the Laplace transforms needed to obtain gluon distributions from the proton structure function F_2^{γp}(x, Q^2). We numerically inverted the function g(s), s being the variable in Laplace space, to G(v), where v is the variable in ordinary space. We have since discovered that the algorithm does not work if g(s) → 0 less rapidly than 1/s as s → ∞, e.g., as 1/s^β for 0 < β < 1. In this note, we derive a new numerical algorithm for such cases, which holds for all positive and non-integer negative values of β. The new algorithm is exact if the original function G(v) is given by the product of a power v^{β-1} and a polynomial in v. We test the algorithm numerically for very small positive β, β = 10^{-6}, obtaining numerical results that imitate the Dirac delta function δ(v). We also devolve the published MSTW2008LO gluon distribution at virtuality Q^2 = 5 GeV^2 down to the lower virtuality Q^2 = 1.69 GeV^2. For devolution, β is negative, giving rise to inverse Laplace transforms that are distributions and not proper functions. This requires us to introduce the concept of Hadamard Finite Part integrals, which we discuss in detail.
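The slowly decaying transform pair at issue is easy to explore with a general-purpose numerical inverter. The sketch below (Python, using mpmath's Talbot inversion, which is unrelated to the authors' algorithm) checks the pair g(s) = 1/s^β ↔ G(v) = v^{β-1}/Γ(β) for β = 1/2:

```python
import mpmath as mp

# g(s) = 1/s^beta has exact inverse G(v) = v^(beta-1) / Gamma(beta),
# a transform that decays more slowly than 1/s as s -> infinity.
beta = mp.mpf("0.5")
g = lambda s: 1 / s ** beta

for v in [0.5, 1.0, 2.0]:
    num = mp.invertlaplace(g, v, method="talbot")
    exact = v ** (beta - 1) / mp.gamma(beta)
    assert mp.almosteq(num, exact, rel_eps=mp.mpf("1e-6"))
```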
A multi-level solution algorithm for steady-state Markov chains
NASA Technical Reports Server (NTRS)
Horton, Graham; Leutenegger, Scott T.
1993-01-01
A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.
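As a baseline for such comparisons, the Gauss-Seidel iteration for the steady-state equation pi = pi P, which the multi-level method is designed to accelerate, can be sketched as follows (Python; illustrative, not the paper's code):

```python
import numpy as np

def gauss_seidel_steady_state(P, tol=1e-12, max_sweeps=10000):
    """Gauss-Seidel iteration for the stationary vector pi of an
    irreducible Markov chain with row-stochastic transition matrix P,
    solving pi = pi P componentwise with the newest available values."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(max_sweeps):
        old = pi.copy()
        for j in range(n):
            # pi_j = sum_{i != j} pi_i P_ij / (1 - P_jj)
            pi[j] = (pi @ P[:, j] - pi[j] * P[j, j]) / (1.0 - P[j, j])
        pi /= pi.sum()                       # renormalize each sweep
        if np.abs(pi - old).max() < tol:
            break
    return pi

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = gauss_seidel_steady_state(P)
assert np.allclose(pi @ P, pi)               # stationary: pi = pi P
```

The multi-level idea replaces many such fine-grid sweeps with corrections computed on recursively aggregated (coarsened) chains, in direct analogy with multigrid for PDEs.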
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Youngjoon, E-mail: hongy@uic.edu; Nicholls, David P., E-mail: davidn@uic.edu
The accurate numerical simulation of linear waves interacting with periodic layered media is a crucial capability in engineering applications. In this contribution we study the stable and high-order accurate numerical simulation of the interaction of linear, time-harmonic waves with a periodic, triply layered medium with irregular interfaces. In contrast with volumetric approaches, High-Order Perturbation of Surfaces (HOPS) algorithms are inexpensive interfacial methods which rapidly and recursively estimate scattering returns by perturbation of the interface shape. In comparison with Boundary Integral/Element Methods, the stable HOPS algorithm we describe here does not require specialized quadrature rules, periodization strategies, or the solution of dense non-symmetric positive definite linear systems. In addition, the algorithm is provably stable as opposed to other classical HOPS approaches. With numerical experiments we show the remarkable efficiency, fidelity, and accuracy one can achieve with an implementation of this algorithm.
Numerical Asymptotic Solutions Of Differential Equations
NASA Technical Reports Server (NTRS)
Thurston, Gaylen A.
1992-01-01
Numerical algorithms are derived and compared with classical analytical methods. In this method, asymptotic expansions are replaced with integrals that are evaluated numerically. The resulting numerical solutions retain linear independence, the main advantage of asymptotic solutions.
Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging.
Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio; Ntziachristos, Vasilis; Rosenthal, Amir
2015-09-01
With recent advancements in the hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired in a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in a sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV-L1 model-based inversion is tested in the cross-section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. In all cases, model-based TV-L1 inversion showed better performance than conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV-L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV-L1 inversion yielded sharper images and weaker streak artifacts. The results herein show that TV-L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV-L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.
Spiral trajectory design: a flexible numerical algorithm and base analytical equations.
Pipe, James G; Zwart, Nicholas R
2014-01-01
Spiral-based trajectories for magnetic resonance imaging can be advantageous, but are often cumbersome to design or create. This work presents a flexible numerical algorithm for designing trajectories based on an explicit definition of radial undersampling, and also gives several analytical expressions for characterizing the base (critically sampled) class of these trajectories. Expressions for the gradient waveform, based on slew and amplitude limits, are developed such that a desired pitch in the spiral k-space trajectory is followed. The source code for this algorithm, written in C, is publicly available. Analytical expressions approximating the spiral trajectory (ignoring the radial component) are given to characterize measurement time, gradient heating, maximum gradient amplitude, and off-resonance phase for slew-limited and gradient-amplitude-limited cases. Several numerically calculated trajectories are illustrated, and base Archimedean spirals are compared with analytically obtained results. Several different waveforms illustrate that the desired slew and amplitude limits are reached, as are the desired undersampling patterns, using the numerical method. For base Archimedean spirals, the results of the numerical and analytical approaches are in good agreement. A versatile numerical algorithm was developed and written in publicly available code, and approximate analytical formulas are given that help characterize spiral trajectories. Copyright © 2013 Wiley Periodicals, Inc.
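The flavor of such a design loop can be conveyed with a short sketch (Python; the hardware limits, pitch, stopping radius, and small-angle cap are assumed values, and this is not the published C code): advance the spiral angle at each sample by the largest increment permitted by the gradient amplitude limit and an asymptotic slew-limit formula.

```python
import numpy as np

# Assumed hardware and design parameters (illustrative only).
gamma_bar = 42.58e6          # Hz/T, proton gyromagnetic ratio / 2*pi
g_max, s_max = 40e-3, 150.0  # gradient amplitude (T/m) and slew (T/m/s)
fov, dt = 0.24, 4e-6         # field of view (m), sample time (s)
a = 1.0 / (2 * np.pi * fov)  # pitch: radius grows 1/fov per turn (critical sampling)
k_max = 32.0 / fov           # small test k-space radius (cycles/m)

theta = 0.0
k = [0.0 + 0.0j]
while abs(k[-1]) < k_max:
    dk_dtheta = abs(a * (1 + 1j * theta) * np.exp(1j * theta))
    # Amplitude limit: |dk/dt| <= gamma_bar * g_max  (k in cycles/m).
    d_amp = gamma_bar * g_max * dt / dk_dtheta
    # Asymptotic slew limit: a*theta*(dtheta/dt)^2 <= gamma_bar*s_max.
    d_slew = np.sqrt(gamma_bar * s_max / (a * max(theta, 1.0))) * dt
    theta += min(d_amp, d_slew, 0.05)   # crude cap where the asymptotic
    k.append(a * theta * np.exp(1j * theta))  # formula breaks down near 0

k = np.array(k)  # complex spiral trajectory; gradients follow from np.diff(k)
```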
Highly uniform parallel microfabrication using a large numerical aperture system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Zi-Yu; Su, Ya-Hui, E-mail: ustcsyh@ahu.edu.cn, E-mail: dongwu@ustc.edu.cn; Zhang, Chen-Chu
In this letter, we report an improved algorithm to produce accurate phase patterns for generating highly uniform diffraction-limited multifocal arrays in a large numerical aperture objective system. It is shown that based on the original diffraction integral, the uniformity of the diffraction-limited focal arrays can be improved from ~75% to >97%, owing to the critical consideration of the aperture function and apodization effect associated with a large numerical aperture objective. The experimental results, e.g., 3 × 3 arrays of square and triangle, seven microlens arrays with high uniformity, further verify the advantage of the improved algorithm. This algorithm enables the laser parallel processing technology to realize uniform microstructures and functional devices in the microfabrication system with a large numerical aperture objective.
NASA Astrophysics Data System (ADS)
Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten
2018-06-01
This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined from additional boundary conditions. Unlike the existing methods found in the literature, which usually employ a first-order-in-time gradient-like system (such as steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order-in-time dissipative gradient-like system and a dynamically selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
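Stripped of the regularization dynamics, the core idea, a damped second-order flow integrated with a symplectic-style scheme, can be sketched as follows (Python; the step size and damping coefficient are illustrative assumptions):

```python
import numpy as np

def damped_dynamical_minimize(grad_F, x0, eta=1.0, dt=0.1, n_steps=2000):
    """Minimize F by integrating the damped second-order flow
        x'' + eta * x' = -grad F(x)
    with a semi-implicit (symplectic-style) Euler step: the damping is
    treated implicitly, the position update explicitly."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(n_steps):
        v = (v - dt * grad_F(x)) / (1.0 + dt * eta)  # implicit damping
        x = x + dt * v                               # explicit position update
    return x

# Toy quadratic: F(x) = 0.5 x^T A x - b^T x, minimized where A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_min = damped_dynamical_minimize(lambda x: A @ x - b, np.zeros(2))
assert np.allclose(A @ x_min, b, atol=1e-6)
```

The second-order dynamics carry momentum through flat regions where a first-order steepest-descent flow stalls, which is one stated motivation for the approach above.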
From Pacemaker to Wearable: Techniques for ECG Detection Systems.
Kumar, Ashish; Komaragiri, Rama; Kumar, Manjeet
2018-01-11
With the alarming rise in deaths due to cardiovascular diseases (CVD), the present medical research scenario places notable importance on techniques and methods to detect CVDs. As noted by the World Health Organization, technological advances in the field of cardiac function assessment have become the nucleus of leading research studies in CVDs, in which electrocardiogram (ECG) analysis is the most functional and convenient tool used to test the range of heart-related irregularities. Most of the approaches in the literature on ECG signal analysis consider noise removal, rhythm-based analysis, and heartbeat detection to improve the performance of a cardiac pacemaker. Advancements achieved in the field of ECG segment detection and beat classification have a limited evaluation and still require clinical approval. In this paper, approaches and techniques to implement an on-chip ECG detector for a cardiac pacemaker system are discussed. Moreover, the challenges of ECG signal morphology analysis arising from the medical literature are extensively reviewed. It is found that robustness to noise, wavelet parameter choice, numerical efficiency, and detection performance are essential performance indicators required of a state-of-the-art ECG detector. Furthermore, many algorithms described in the existing literature are not verified using ECG data from the standard databases. Some ECG detection algorithms show very high detection performance with the total number of detected QRS complexes. However, the high detection performance of these algorithms is verified using only a few datasets. Finally, gaps in current advancements and testing are identified, and the primary remaining challenge is implementing the bullseye test for morphology analysis evaluation.
NPP ATMS Snowfall Rate Product
NASA Technical Reports Server (NTRS)
Meng, Huan; Ferraro, Ralph; Kongoli, Cezar; Wang, Nai-Yu; Dong, Jun; Zavodsky, Bradley; Yan, Banghua
2015-01-01
Passive microwave measurements at certain high frequencies are sensitive to the scattering effect of snow particles and can be utilized to retrieve snowfall properties. Some of the microwave sensors with snowfall-sensitive channels are the Advanced Microwave Sounding Unit (AMSU), the Microwave Humidity Sounder (MHS) and the Advanced Technology Microwave Sounder (ATMS). ATMS is the follow-on sensor to AMSU and MHS. Currently, an AMSU- and MHS-based land snowfall rate (SFR) product runs operationally at NOAA/NESDIS. Based on the AMSU/MHS SFR, an ATMS SFR algorithm has been developed recently. The algorithm performs retrieval in three steps: snowfall detection, retrieval of cloud properties, and estimation of snow particle terminal velocity and snowfall rate. The snowfall detection component utilizes principal component analysis and a logistic regression model. The model employs a combination of temperature and water vapor sounding channels to detect the scattering signal from falling snow and derive the probability of snowfall (Kongoli et al., 2015). In addition, a set of NWP model based filters is also employed to improve the accuracy of snowfall detection. Cloud properties are retrieved using an inversion method with an iteration algorithm and a two-stream radiative transfer model (Yan et al., 2008). A method developed by Heymsfield and Westbrook (2010) is adopted to calculate snow particle terminal velocity. Finally, snowfall rate is computed by numerically solving a complex integral. NCEP CMORPH analysis has shown that integration of the ATMS SFR has improved the performance of CMORPH-Snow. The ATMS SFR product is also being assessed at several NWS Weather Forecast Offices for its usefulness in weather forecasting.
Parallel language constructs for tensor product computations on loosely coupled architectures
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Van Rosendale, John
1989-01-01
A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The authors focus on tensor product array computations, a simple but important class of numerical algorithms. They consider first the problem of programming one-dimensional kernel routines, such as parallel tridiagonal solvers, and then look at how such parallel kernels can be combined to form parallel tensor product algorithms.
A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows
NASA Technical Reports Server (NTRS)
Bui, Trong T.
1999-01-01
A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.
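The role of the Roe dissipation scaling can be seen in a one-dimensional scalar sketch (illustrative only, not the paper's 3D finite-volume scheme): the interface flux below blends a central average with a fraction eps of the standard upwind (Roe) dissipation, so eps = 1 recovers first-order upwinding and eps ≈ 0.03-0.05 mimics the low-dissipation setting the study recommends.

```python
import numpy as np

def roe_flux_1d(u_ext, a=1.0, eps=0.05):
    """Interface fluxes for u_t + a u_x = 0 with scaled Roe dissipation.

    eps = 1 gives full first-order upwinding; eps ~ 0.03-0.05 mimics the
    3-5 percent dissipation level the study found adequate for LES.
    """
    f = a * u_ext
    ul, ur = u_ext[:-1], u_ext[1:]
    return 0.5 * (f[:-1] + f[1:]) - 0.5 * eps * abs(a) * (ur - ul)

# One explicit step on a periodic grid (illustration only; in practice a
# low-dissipation flux is paired with the paper's MacCormack time marching).
n = 200
dx, dt = 1.0 / n, 0.002
x = np.arange(n) * dx
u = np.sin(2 * np.pi * x)
ue = np.concatenate([u[-1:], u, u[:1]])      # periodic ghost cells
flux = roe_flux_1d(ue)                       # n + 1 interface fluxes
u = u - dt / dx * (flux[1:] - flux[:-1])
```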
Three numerical algorithms were compared to provide a solution of a radiative transfer equation (RTE) for plane albedo (hemispherical reflectance) in semi-infinite one-dimensional plane-parallel layer. Algorithms were based on the invariant imbedding method and two different var...
A Comparison of Three Algorithms for Orion Drogue Parachute Release
NASA Technical Reports Server (NTRS)
Matz, Daniel A.; Braun, Robert D.
2015-01-01
The Orion Multi-Purpose Crew Vehicle is susceptible to flipping apex forward between drogue parachute release and main parachute inflation. A smart drogue release algorithm is required to select a drogue release condition that will not result in an apex-forward main parachute deployment. The baseline algorithm is simple and elegant, but does not perform as well as desired in drogue failure cases. A simple modification to the baseline algorithm can improve performance, but can also sometimes fail to identify a good release condition. A new algorithm employing simplified rotational dynamics and a numeric predictor to minimize a rotational energy metric is proposed. A Monte Carlo analysis of a drogue failure scenario is used to compare the performance of the algorithms. The numeric predictor prevents more of the cases from flipping apex forward, and also results in an improvement in the capsule attitude at main bag extraction. The sensitivity of the numeric predictor to aerodynamic dispersions, errors in the navigated state, and execution rate is investigated, showing little degradation in performance.
Recent advances on terrain database correlation testing
NASA Astrophysics Data System (ADS)
Sakude, Milton T.; Schiavone, Guy A.; Morelos-Borja, Hector; Martin, Glenn; Cortes, Art
1998-08-01
Terrain database correlation is a major requirement for interoperability in distributed simulation. There are numerous situations in which terrain database correlation problems can occur that, in turn, lead to lack of interoperability in distributed training simulations. Examples are the use of different run-time terrain databases derived from inconsistent source data, the use of different resolutions, and the use of different data models between databases for both terrain and culture data. IST has been developing a suite of software tools, named ZCAP, to address terrain database interoperability issues. In this paper we discuss recent enhancements made to this suite, including improved algorithms for sampling and calculating line-of-sight, an improved method for measuring terrain roughness, and the application of a sparse matrix method to the terrain remediation solution developed at the Visual Systems Lab of the Institute for Simulation and Training. We review the application of some of these new algorithms to the terrain correlation measurement processes. The application of these new algorithms improves our support for very large terrain databases, and provides the capability for performing test replications to estimate the sampling error of the tests. With this set of tools, a user can quantitatively assess the degree of correlation between large terrain databases.
NASA Astrophysics Data System (ADS)
Al-Asadi, H. A.
2013-02-01
We present a theoretical analysis of the additional nonlinear phase shift of the backward Stokes wave arising from stimulated Brillouin scattering in a system with a bi-directional pumping scheme. We optimize three parameters of the system, the numerical aperture, the optical loss and the pumping wavelength, to minimize the additional nonlinear phase shift of the backward Stokes wave due to stimulated Brillouin scattering. The optimization is performed for various Brillouin pump powers and optical reflectivity values using a modern global evolutionary computation algorithm, particle swarm optimization. It is shown that the additional nonlinear phase shift of the backward Stokes wave varies with optical fiber length and can be minimized to less than 0.07 rad by the particle swarm optimization algorithm for a 5 km fiber. The bi-directional pumping configuration is shown to be efficient: the transmitted output is advanced when the frequency detuning is negative and delayed when it is positive, with the optimum values of the three parameters achieving the reduction of the additional nonlinear phase shift.
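A minimal particle swarm optimization loop of the kind invoked here looks as follows; the three-parameter objective is a placeholder surrogate standing in for the SBS phase-shift model, not the paper's expression, and all bounds and constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_minimize(f, lo, hi, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best particle swarm optimization (illustrative sketch)."""
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lo, hi)                              # stay in bounds
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Hypothetical surrogate for the phase shift as a function of
# (numerical aperture, loss, pump wavelength) -- a stand-in objective only.
phase = lambda p: (p[0] - 0.14) ** 2 + 0.1 * (p[1] - 0.2) ** 2 + (p[2] - 1.55) ** 2
best, val = pso_minimize(phase, np.array([0.1, 0.1, 1.5]), np.array([0.3, 1.0, 1.6]))
```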
Multiresolution representation and numerical algorithms: A brief review
NASA Technical Reports Server (NTRS)
Harten, Amiram
1994-01-01
In this paper we review recent developments in techniques to represent data in terms of its local scale components. These techniques enable us to obtain data compression by eliminating scale-coefficients which are sufficiently small. This capability for data compression can be used to reduce the cost of many numerical solution algorithms by either applying it to the numerical solution operator in order to get an approximate sparse representation, or by applying it to the numerical solution itself in order to reduce the number of quantities that need to be computed.
Hasani, Mojtaba H; Gharibzadeh, Shahriar; Farjami, Yaghoub; Tavakkoli, Jahan
2013-09-01
Various numerical algorithms have been developed to solve the Khokhlov-Kuznetsov-Zabolotskaya (KZK) parabolic nonlinear wave equation. In this work, a generalized time-domain numerical algorithm is proposed to solve the diffraction term of the KZK equation. This algorithm solves the transverse Laplacian operator of the KZK equation in three-dimensional (3D) Cartesian coordinates using a finite-difference method based on the five-point implicit backward finite difference and the five-point Crank-Nicolson finite difference discretization techniques. This leads to a more uniform discretization of the Laplacian operator, which in turn requires fewer computational grid nodes without compromising accuracy in the diffraction term. In addition, a new empirical algorithm based on the LU decomposition technique is proposed to solve the system of linear equations obtained from this discretization. The proposed empirical algorithm improves the calculation speed and memory usage, while the order of computational complexity remains linear in the calculation of the diffraction term of the KZK equation. For evaluating the accuracy of the proposed algorithm, two previously published algorithms are used as comparison references: the conventional 2D Texas code and its generalization for 3D geometries. The results show that the accuracy/efficiency performance of the proposed algorithm is comparable with the established time-domain methods.
A Parallel Compact Multi-Dimensional Numerical Algorithm with Aeroacoustics Applications
NASA Technical Reports Server (NTRS)
Povitsky, Alex; Morris, Philip J.
1999-01-01
In this study we propose a novel method to parallelize high-order compact numerical algorithms for the solution of three-dimensional PDEs (Partial Differential Equations) in a space-time domain. For this numerical integration most of the computer time is spent in computation of spatial derivatives at each stage of the Runge-Kutta temporal update. The most efficient direct method to compute spatial derivatives on a serial computer is a version of Gaussian elimination for narrow linear banded systems known as the Thomas algorithm. In a straightforward pipelined implementation of the Thomas algorithm processors are idle due to the forward and backward recurrences of the Thomas algorithm. To utilize processors during this time, we propose to use them for either non-local data independent computations, solving lines in the next spatial direction, or local data-dependent computations by the Runge-Kutta method. To achieve this goal, control of processor communication and computations by a static schedule is adopted. Thus, our parallel code is driven by a communication and computation schedule instead of the usual "creative programming" approach. The obtained parallelization speed-up of the novel algorithm is about twice as much as that for the standard pipelined algorithm and close to that for the explicit DRP algorithm.
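For reference, the Thomas algorithm whose forward and backward recurrences cause the idling is the standard tridiagonal solve below (a textbook sketch, not the authors' scheduled parallel code):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a: sub-, b: main, c: super-diagonal.

    The forward elimination and back substitution are the inherently
    sequential recurrences that idle processors in a naive pipeline.
    """
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                     # forward recurrence
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # backward recurrence
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1-D Poisson-like system: -x_{i-1} + 2 x_i - x_{i+1} = 1.
n = 5
x = thomas(np.full(n, -1.0), np.full(n, 2.0), np.full(n, -1.0), np.ones(n))
```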
Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.
Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou
2015-01-01
Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: (1) β_k ≥ 0; (2) the search direction has the trust region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
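For orientation, the classical PRP+ rule that enforces β_k ≥ 0 can be sketched as below. The sketch pairs it with a simple Armijo backtracking step purely for illustration, whereas the paper's modified methods are constructed so that the descent and trust-region properties hold without any line search.

```python
import numpy as np

def prp_plus(f, grad, x, n_iter=200, tol=1e-8):
    """Nonlinear conjugate gradient with the nonnegative PRP (PRP+) beta."""
    g = grad(x)
    d = -g
    for _ in range(n_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                    # safeguard: restart along -g
            d = -g
        t = 1.0                           # Armijo backtracking (illustrative)
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # enforces beta_k >= 0
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Convex quadratic test: minimize 0.5*x^T Q x - b^T x.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x_star = prp_plus(lambda x: 0.5 * x @ Q @ x - b @ x,
                  lambda x: Q @ x - b, np.zeros(2))
```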
Analysis of data mining classification by comparison of C4.5 and ID3 algorithms
NASA Astrophysics Data System (ADS)
Sudrajat, R.; Irianingsih, I.; Krisnawan, D.
2017-01-01
The rapid development of information technology has triggered its intensive use; for example, data mining is widely used in investment. Among the techniques that can assist in investment, the method used here for classification is the decision tree. Decision trees have a variety of algorithms, such as C4.5 and ID3. The two algorithms can generate different models for similar data sets, with different accuracies. The C4.5 and ID3 algorithms with discrete data provide accuracies of 87.16% and 99.83%, respectively, and the C4.5 algorithm with numerical data achieves 89.69%. The C4.5 and ID3 algorithms with discrete data classify 520 and 598 customers, respectively, and the C4.5 algorithm with numerical data classifies 546 customers. From this analysis, both algorithms classify quite well, with error rates of less than 15%.
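The split criterion underlying both ID3 and C4.5 is information gain over the class entropy; a minimal computation on toy categorical data follows (hypothetical attribute and class arrays, not the study's customer data set).

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label array, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def information_gain(attribute, labels):
    """ID3 split criterion; C4.5 divides this by the split information."""
    cond = 0.0
    for v in np.unique(attribute):
        mask = attribute == v
        cond += mask.mean() * entropy(labels[mask])   # weighted child entropy
    return entropy(labels) - cond

# Toy data: does 'income' predict 'invests'?
income = np.array(["low", "low", "mid", "high", "high", "mid"])
invests = np.array([0, 0, 1, 1, 1, 0])
print(information_gain(income, invests))
```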
A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models
NASA Astrophysics Data System (ADS)
Li, Qia; Micchelli, Charles A.; Shen, Lixin; Xu, Yuesheng
2012-09-01
Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss-Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed.
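The elementary building block of such proximity algorithms, the proximity operator of the l1 norm, reduces to componentwise soft-thresholding, which is what makes componentwise Gauss-Seidel acceleration natural (a generic sketch, not the paper's full L1/TV scheme):

```python
import numpy as np

def prox_l1(v, lam):
    """prox of lam*||.||_1: componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Closed-form prox of the L1 data-fidelity term used in L1/TV models:
#   argmin_u  lam*||u - b||_1 + 0.5*||u - v||^2  =  b + prox_l1(v - b, lam)
v = np.array([0.10, 5.00, -0.20, 0.00])   # input with one impulsive outlier
b = np.array([0.00, 0.10, 0.00, 0.00])    # observed data
u = b + prox_l1(v - b, lam=0.5)
```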
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Delaney, Robert A.; Bettner, James L.
1991-01-01
The primary objective of this study was the development of a time-dependent three-dimensional Euler/Navier-Stokes aerodynamic analysis to predict unsteady compressible transonic flows about ducted and unducted propfan propulsion systems at angle of attack. The computer codes resulting from this study are referred to as Advanced Ducted Propfan Analysis Codes (ADPAC). This report is intended to serve as a computer program user's manual for the ADPAC developed under Task 2 of NASA Contract NAS3-25270, Unsteady Ducted Propfan Analysis. Aerodynamic calculations were based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. A time-accurate implicit residual smoothing operator was utilized for unsteady flow predictions. For unducted propfans, a single H-type grid was used to discretize each blade passage of the complete propeller. For ducted propfans, a coupled system of five grid blocks utilizing an embedded C-grid about the cowl leading edge was used to discretize each blade passage. Grid systems were generated by a combined algebraic/elliptic algorithm developed specifically for ducted propfans. Numerical calculations were compared with experimental data for both ducted and unducted propfan flows. The solution scheme demonstrated efficiency and accuracy comparable with other schemes of this class.
MHD code using multi graphical processing units: SMAUG+
NASA Astrophysics Data System (ADS)
Gyenge, N.; Griffiths, M. K.; Erdélyi, R.
2018-01-01
This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques, and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with the Brio-Wu shock tube simulations, with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slow-downs depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.
General advancing front packing algorithm for the discrete element method
NASA Astrophysics Data System (ADS)
Morfa, Carlos A. Recarey; Pérez Morales, Irvin Pablo; de Farias, Márcio Muniz; de Navarra, Eugenio Oñate Ibañez; Valera, Roberto Roselló; Casañas, Harold Díaz-Guzmán
2018-01-01
A generic formulation of a new method for packing particles is presented. It is based on a constructive advancing front method, and uses Monte Carlo techniques for the generation of particle dimensions. The method can be used to obtain virtual dense packings of particles with several geometrical shapes. It employs continuous, discrete, and empirical statistical distributions in order to generate the dimensions of particles. The packing algorithm is very flexible and allows alternatives for: (1) the direction of the advancing front (inwards or outwards); (2) the selection of the local advancing front; (3) the method for placing a mobile particle in contact with others; and (4) the overlap checks. The algorithm also allows obtaining highly porous media when it is slightly modified. The use of the algorithm to generate real particle packings from grain size distribution curves, in order to carry out engineering applications, is illustrated. Finally, basic applications of the algorithm, which prove its effectiveness in the generation of a large number of particles, are carried out.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walstrom, Peter Lowell
A numerical algorithm for computing the field components B_r and B_z and their r and z derivatives, with open boundaries in cylindrical coordinates, for circular current loops is described. An algorithm for computing the vector potential is also described. For the convenience of the reader, derivations of the final expressions from their defining integrals are given in detail, since their derivations (especially for the field derivatives) are not all easily found in textbooks. Numerical calculations are based on evaluation of complete elliptic integrals using the Bulirsch algorithm cel. Since cel can evaluate complete elliptic integrals of a fairly general type, in some cases the elliptic integrals can be evaluated without first reducing them to forms containing standard Legendre forms. The algorithms avoid the numerical difficulties that many of the textbook solutions have for points near the axis because of explicit factors of 1/r or 1/r^2 in some of the expressions.
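For reference, the standard Legendre-form expressions for a single loop can be sketched with SciPy's elliptic integrals instead of the Bulirsch cel routine the report builds on; the explicit on-axis branch sidesteps the 1/r factors discussed above (illustrative, SI units):

```python
import numpy as np
from scipy.special import ellipk, ellipe

MU0 = 4e-7 * np.pi

def loop_field(a, I, r, z):
    """B_r, B_z of a circular loop (radius a, current I) at (r, z), SI units.

    Standard Legendre-form expressions; scipy's ellipk/ellipe take the
    parameter m = k**2. The r -> 0 branch avoids the explicit 1/r factor.
    """
    if r < 1e-12 * a:                        # on-axis limit
        Bz = MU0 * I * a**2 / (2.0 * (a**2 + z**2) ** 1.5)
        return 0.0, Bz
    m = 4 * a * r / ((a + r) ** 2 + z**2)
    K, E = ellipk(m), ellipe(m)
    pref = MU0 * I / (2 * np.pi * np.sqrt((a + r) ** 2 + z**2))
    Br = pref * (z / r) * (-K + (a**2 + r**2 + z**2) / ((a - r) ** 2 + z**2) * E)
    Bz = pref * (K + (a**2 - r**2 - z**2) / ((a - r) ** 2 + z**2) * E)
    return Br, Bz

Br, Bz = loop_field(a=0.5, I=100.0, r=0.2, z=0.1)
```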
Automatic Boosted Flood Mapping from Satellite Data
NASA Technical Reports Server (NTRS)
Coltin, Brian; McMichael, Scott; Smith, Trey; Fong, Terrence
2016-01-01
Numerous algorithms have been proposed to map floods from Moderate Resolution Imaging Spectroradiometer (MODIS) imagery. However, most require human input to succeed, either to specify a threshold value or to manually annotate training data. We introduce a new algorithm based on Adaboost which effectively maps floods without any human input, allowing for a truly rapid and automatic response. The Adaboost algorithm combines multiple thresholds to achieve results comparable to state-of-the-art algorithms which do require human input. We evaluate Adaboost, as well as numerous previously proposed flood mapping algorithms, on multiple MODIS flood images, as well as on hundreds of non-flood MODIS lake images, demonstrating its effectiveness across a wide variety of conditions.
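The essence of combining multiple thresholds with AdaBoost can be sketched with one-feature threshold stumps (hypothetical feature matrix and labels, not the MODIS bands or the paper's feature set):

```python
import numpy as np

def boost_stumps(X, y, n_rounds=20):
    """AdaBoost over axis-aligned threshold stumps; labels y in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                      # sample weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                       # search feature/threshold/sign
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] > t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)    # guard the log
        alpha = 0.5 * np.log((1 - err) / err)    # weak-learner weight
        pred = s * np.where(X[:, j] > t, 1, -1)
        w *= np.exp(-alpha * y * pred)           # reweight hard examples
        w /= w.sum()
        stumps.append((alpha, j, t, s))
    return stumps

def boost_predict(stumps, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in stumps)
    return np.sign(score)

rng = np.random.default_rng(5)
X = rng.random((100, 3))                         # hypothetical per-pixel features
y = np.where(X[:, 0] + 0.2 * X[:, 1] > 0.6, 1, -1)
acc = (boost_predict(boost_stumps(X, y), X) == y).mean()
```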
Community detection in complex networks using deep auto-encoded extreme learning machine
NASA Astrophysics Data System (ADS)
Wang, Feifan; Zhang, Baihai; Chai, Senchun; Xia, Yuanqing
2018-06-01
Community detection has long been a fascinating topic in complex networks, since the community structure usually unveils valuable information of interest. The prevalence and evolution of deep learning and neural networks have been pushing forward the advancement of various research fields and also provide numerous useful, off-the-shelf techniques. In this paper, we put cascaded stacked autoencoders and the unsupervised extreme learning machine (ELM) together in a two-level embedding process and propose a novel community detection algorithm. Extensive comparison experiments on both synthetic and real-world networks manifest the advantages of the proposed algorithm. On one hand, it outperforms k-means clustering in terms of accuracy and stability, benefiting from the determinate dimensions of the ELM block and the integration of sparsity restrictions. On the other hand, it incurs lower complexity than the spectral clustering method, on account of the reduced time spent on the eigenvalue decomposition procedure.
Development of advanced Navier-Stokes solver
NASA Technical Reports Server (NTRS)
Yoon, Seokkwan
1994-01-01
The objective of research was to develop and validate new computational algorithms for solving the steady and unsteady Euler and Navier-Stokes equations. The end-products are new three-dimensional Euler and Navier-Stokes codes that are faster, more reliable, more accurate, and easier to use. The three-dimensional Euler and full/thin-layer Reynolds-averaged Navier-Stokes equations for compressible/incompressible flows are solved on structured hexahedral grids. The Baldwin-Lomax algebraic turbulence model is used for closure. The space discretization is based on a cell-centered finite-volume method augmented by a variety of numerical dissipation models with optional total variation diminishing limiters. The governing equations are integrated in time by an implicit method based on lower-upper factorization and symmetric Gauss-Seidel relaxation. The algorithm is vectorized on diagonal planes of sweep using two-dimensional indices in three dimensions. Convergence rates and the robustness of the codes are enhanced by the use of an implicit full approximation storage multigrid method.
Some issues in numerical simulation of nonlinear structural response
NASA Technical Reports Server (NTRS)
Hibbitt, H. D.
1989-01-01
The development of commercial finite element software is addressed. This software provides practical tools that are used in an astonishingly wide range of engineering applications that include critical aspects of the safety evaluation of nuclear power plants or of heavily loaded offshore structures in the hostile environments of the North Sea or the Arctic, major design activities associated with the development of airframes for high strength and minimum weight, thermal analysis of electronic components, and the design of sports equipment. In the more advanced application areas, the effectiveness of the product depends critically on the quality of the mechanics and mechanics related algorithms that are implemented. Algorithmic robustness is of primary concern. The methods chosen should maximize reliability while demanding minimal understanding on the part of the user. Computational efficiency is also important because resources are always limited, and hence some problems are too time consuming or costly. Finally, some areas where research work will provide new methods and improvements are discussed.
Underwater Acoustic Target Tracking: A Review
Han, Ying; Fan, Liying
2018-01-01
Advances in acoustic technology and instrumentation now make it possible to explore marine resources. As a significant component of ocean exploration, underwater acoustic target tracking has aroused wide attention in both military and civil fields. Due to the complexity of the marine environment, numerous techniques have been proposed to obtain better tracking performance. In this paper, we survey over 100 papers, ranging from innovative papers to the state-of-the-art in this field, to present underwater tracking technologies. Not only is the related knowledge of acoustic tracking instruments and tracking progress clarified in detail, but a novel taxonomy method is also proposed. In this paper, algorithms for underwater acoustic target tracking are classified based on the methods used as: (1) instrument-assisted methods; (2) mode-based methods; (3) tracking optimization methods. These algorithms are compared and analyzed with respect to the dimensions, numbers, and maneuvering of the tracking targets, which is different from other survey papers. Meanwhile, challenges, countermeasures, and lessons learned are illustrated in this paper. PMID:29301318
A critical review of principal traffic noise models: Strategies and implications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garg, Naveen, E-mail: ngarg@mail.nplindia.ernet.in; Department of Mechanical, Production and Industrial Engineering, Delhi Technological University, Delhi 110042; Maji, Sagar
2014-04-01
The paper presents an exhaustive comparison of principal traffic noise models adopted in recent years in developed nations. The comparison is drawn on the basis of technical attributes including source modelling and sound propagation algorithms. Although the characterization of the source in terms of rolling and propulsion noise, in conjunction with advanced numerical methods for sound propagation, has significantly reduced the uncertainty in traffic noise predictions, the approach followed is quite complex and requires specialized mathematical skills for predictions, which is sometimes cumbersome for town planners. Also, it is sometimes difficult to identify the best approach when a variety of solutions have been proposed. This paper critically reviews all these aspects pertaining to the recent models developed and adapted in some countries and also discusses the strategies followed and implications of these models. - Highlights: • Principal traffic noise models developed are reviewed. • Sound propagation algorithms used in traffic noise models are compared. • Implications of models are discussed.
Flow Navigation by Smart Microswimmers via Reinforcement Learning
NASA Astrophysics Data System (ADS)
Colabrese, Simona; Gustavsson, Kristian; Celani, Antonio; Biferale, Luca
2017-04-01
Smart active particles can acquire some limited knowledge of the fluid environment from simple mechanical cues and exert a control on their preferred steering direction. Their goal is to learn the best way to navigate by exploiting the underlying flow whenever possible. As an example, we focus our attention on smart gravitactic swimmers. These are active particles whose task is to reach the highest altitude within some time horizon, given the constraints enforced by fluid mechanics. By means of numerical experiments, we show that swimmers indeed learn nearly optimal strategies just by experience. A reinforcement learning algorithm allows particles to learn effective strategies even in difficult situations when, in the absence of control, they would end up being trapped by flow structures. These strategies are highly nontrivial and cannot be easily guessed in advance. This Letter illustrates the potential of reinforcement learning algorithms to model adaptive behavior in complex flows and paves the way towards the engineering of smart microswimmers that solve difficult navigation problems.
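The workhorse update in such reinforcement-learning navigation studies is typically tabular Q-learning with epsilon-greedy exploration; the sketch below uses a toy one-dimensional environment (all names and the environment are hypothetical, not the swimmers' state-action space):

```python
import numpy as np

class ChainWorld:
    """Toy 1-D environment (hypothetical): walk right to reach a reward."""
    def __init__(self, n=10):
        self.n = n
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = min(max(self.s + (1 if a == 1 else -1), 0), self.n - 1)
        done = self.s == self.n - 1
        return self.s, (1.0 if done else -0.01), done

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = np.random.default_rng(2)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
            s2, r, done = env.step(a)
            # Move Q[s, a] toward reward plus discounted best next value.
            Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s, a])
            s = s2
    return Q

Q = q_learning(ChainWorld(), n_states=10, n_actions=2)
```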
NASA Astrophysics Data System (ADS)
Mozaffari, Ahmad; Vajedi, Mahyar; Chehresaz, Maryyeh; Azad, Nasser L.
2016-03-01
The urgent need to meet increasingly tight environmental regulations and new fuel economy requirements has motivated system science researchers and automotive engineers to take advantage of emerging computational techniques to further advance hybrid electric vehicle and plug-in hybrid electric vehicle (PHEV) designs. In particular, research has focused on vehicle powertrain system design optimization, to reduce the fuel consumption and total energy cost while improving the vehicle's driving performance. In this work, two different natural optimization machines, namely the synchronous self-learning Pareto strategy and the elitism non-dominated sorting genetic algorithm, are implemented for component sizing of a specific power-split PHEV platform with a Toyota plug-in Prius as the baseline vehicle. To do this, a high-fidelity model of the Toyota plug-in Prius is employed for the numerical experiments using the Autonomie simulation software. Based on the simulation results, it is demonstrated that Pareto-based algorithms can successfully optimize the design parameters of the vehicle powertrain.
Machine learning strategy for accelerated design of polymer dielectrics
Mannodi-Kanakkithodi, Arun; Pilania, Ghanshyam; Huan, Tran Doan; ...
2016-02-15
The ability to efficiently design new and advanced dielectric polymers is hampered by the lack of sufficient, reliable data on wide polymer chemical spaces, and the difficulty of generating such data given time and computational/experimental constraints. Here, we address the issue of accelerating polymer dielectrics design by extracting learning models from data generated by accurate state-of-the-art first principles computations for polymers occupying an important part of the chemical subspace. The polymers are 'fingerprinted' as simple, easily attainable numerical representations, which are mapped to the properties of interest using a machine learning algorithm to develop an on-demand property prediction model. Further, a genetic algorithm is utilised to optimise polymer constituent blocks in an evolutionary manner, thus directly leading to the design of polymers with given target properties. Furthermore, while this philosophy of learning to make instant predictions and design is demonstrated here for the example of polymer dielectrics, it is equally applicable to other classes of materials as well.
A Bayesian approach to microwave precipitation profile retrieval
NASA Technical Reports Server (NTRS)
Evans, K. Franklin; Turk, Joseph; Wong, Takmeng; Stephens, Graeme L.
1995-01-01
A multichannel passive microwave precipitation retrieval algorithm is developed. Bayes theorem is used to combine statistical information from numerical cloud models with forward radiative transfer modeling. A multivariate lognormal prior probability distribution contains the covariance information about hydrometeor distribution that resolves the nonuniqueness inherent in the inversion process. Hydrometeor profiles are retrieved by maximizing the posterior probability density for each vector of observations. The hydrometeor profile retrieval method is tested with data from the Advanced Microwave Precipitation Radiometer (10, 19, 37, and 85 GHz) of convection over ocean and land in Florida. The CP-2 multiparameter radar data are used to verify the retrieved profiles. The results show that the method can retrieve approximate hydrometeor profiles, with larger errors over land than water. There is considerably greater accuracy in the retrieval of integrated hydrometeor contents than of profiles. Many of the retrieval errors are traced to problems with the cloud model microphysical information, and future improvements to the algorithm are suggested.
Research on the control of large space structures
NASA Technical Reports Server (NTRS)
Denman, E. D.
1983-01-01
The research effort on the control of large space structures at the University of Houston has concentrated on the mathematical theory of finite-element models; identification of the mass, damping, and stiffness matrix; assignment of damping to structures; and decoupling of structure dynamics. The objective of the work has been and will continue to be the development of efficient numerical algorithms for analysis, control, and identification of large space structures. The major consideration in the development of the algorithms has been the large number of equations that must be handled by the algorithm as well as sensitivity of the algorithms to numerical errors.
Wang, Peng; Zhu, Zhouquan; Huang, Shuai
2013-01-01
This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions.
On the impact of communication complexity in the design of parallel numerical algorithms
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1984-01-01
This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray
2014-11-25
A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design.
A hybrid artificial bee colony algorithm for numerical function optimization
NASA Astrophysics Data System (ADS)
Alqattan, Zakaria N.; Abdullah, Rosni
2015-02-01
The Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it competitive with other search algorithms in the area of optimization, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, the performance of the ABC local search process and its bee movement, or solution improvement, equation still has some weaknesses. The ABC is good at avoiding trapping at local optima, but it spends its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a Hybrid Particle-movement ABC algorithm called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to experimentally test the HPABC algorithm. The results illustrate that the HPABC algorithm can outperform the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).
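One way to graft a particle-style pull toward the global best onto the standard ABC neighbor move is sketched below; this mirrors gbest-guided ABC variants and is only indicative of the hybrid described, since the exact HPABC update is given in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def hybrid_candidate(x, neighbor, gbest, c=1.5):
    """One particle-style ABC move: neighbor term plus a global-best pull.

    x        : current food source (solution vector)
    neighbor : a randomly chosen other food source
    gbest    : best solution found by the colony so far (the PSO-like term)
    """
    j = rng.integers(len(x))                 # ABC perturbs a single dimension
    phi = rng.uniform(-1.0, 1.0)
    psi = rng.uniform(0.0, c)
    v = x.copy()
    v[j] = x[j] + phi * (x[j] - neighbor[j]) + psi * (gbest[j] - x[j])
    return v                                 # greedy selection keeps the better of x, v

x = rng.uniform(-1, 1, 5)
candidate = hybrid_candidate(x, rng.uniform(-1, 1, 5), np.zeros(5))
```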
The energy-dependent electron loss model: backscattering and application to heterogeneous slab media
NASA Astrophysics Data System (ADS)
Lee, Tae Kyu; Sandison, George A.
2003-01-01
Electron backscattering has been incorporated into the energy-dependent electron loss (EL) model and the resulting algorithm is applied to predict dose deposition in slab heterogeneous media. This algorithm utilizes a reflection coefficient from the interface that is computed on the basis of Goudsmit-Saunderson theory and an average energy for the backscattered electrons based on Everhart's theory. Predictions of dose deposition in slab heterogeneous media are compared to the Monte Carlo based dose planning method (DPM) and a numerical discrete ordinates method (DOM). The slab media studied comprised water/Pb, water/Al, water/bone, water/bone/water, and water/lung/water, and incident electron beam energies of 10 MeV and 18 MeV. The predicted dose enhancement due to backscattering is accurate to within 3% of dose maximum even for lead as the backscattering medium. Dose discrepancies at large depths beyond the interface were as high as 5% of dose maximum and we speculate that this error may be attributed to the EL model assuming a Gaussian energy distribution for the electrons at depth. The computational cost is low compared to Monte Carlo simulations making the EL model attractive as a fast dose engine for dose optimization algorithms. The predictive power of the algorithm demonstrates that the small angle scattering restriction on the EL model can be overcome while retaining dose calculation accuracy and requiring only one free variable, χ, in the algorithm to be determined in advance of calculation.
Improved detection of soma location and morphology in fluorescence microscopy images of neurons.
Kayasandik, Cihan Bilge; Labate, Demetrio
2016-12-01
Automated detection and segmentation of somas in fluorescent images of neurons is a major goal in quantitative studies of neuronal networks, including high-content screening applications where multiple morphological properties of neurons must be quantified. Despite recent advances in image processing targeted to neurobiological applications, existing algorithms for soma detection are often unreliable, especially when processing fluorescence image stacks of neuronal cultures. In this paper, we introduce an innovative algorithm for the detection and extraction of somas in fluorescent images of networks of cultured neurons where somas and other structures exist in the same fluorescent channel. Our method relies on a new geometrical descriptor called the Directional Ratio and a collection of multiscale orientable filters to quantify the level of local isotropy in an image. To optimize the application of this approach, we introduce a new construction of multiscale anisotropic filters that is implemented by separable convolution. Extensive numerical experiments using 2D and 3D confocal images show that our automated algorithm reliably detects somas, accurately segments them, and separates contiguous ones. We include a detailed comparison with state-of-the-art existing methods to demonstrate that our algorithm is extremely competitive in terms of accuracy, reliability and computational efficiency. Our algorithm will facilitate the development of automated platforms for high-content neuron image processing. A Matlab code is released open-source and freely available to the scientific community. Copyright © 2016 Elsevier B.V. All rights reserved.
Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling
NASA Astrophysics Data System (ADS)
Rastigejev, Y.
2011-12-01
Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties, associated with simulating a wide range of time and spatial scales. These difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. These computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It has been shown that this spurious diffusion significantly distorts the pollutant mixing and transport dynamics for typically used grid resolutions. These numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will be exacerbated over the next decade by the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding finer levels of resolution in the locations of fine-scale development and removing them in the locations of smooth solution behavior. The algorithm is based on the mathematically well-established wavelet theory, which allows us to provide error estimates of the solution that are used in conjunction with appropriate threshold criteria to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid. The method has been tested on a variety of benchmark problems, including numerical simulation of transpacific traveling pollution plumes. The generated pollution plumes are diluted due to turbulent mixing as they are advected downwind. Despite this dilution, it was recently discovered that pollution plumes in the remote troposphere can preserve their identity as well-defined structures for two weeks or more as they circle the globe. Present Global Chemical Transport Models (CTMs) implemented on quasi-uniform grids are incapable of reproducing these layered structures because of the high numerical plume dilution caused by numerical diffusion combined with the non-uniformity of atmospheric flow. It is shown that WAMR solutions of accuracy comparable to conventional numerical techniques are obtained with more than an order of magnitude reduction in the number of grid points; the adaptive algorithm is therefore capable of producing accurate results at a relatively low computational cost. The numerical simulations demonstrate that the WAMR algorithm applied to the traveling plume problem accurately reproduces the plume dynamics, unlike conventional numerical methods that utilize quasi-uniform numerical grids.
Doha, E.H.; Abd-Elhameed, W.M.; Youssri, Y.H.
2014-01-01
Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indexes are employed for solving third- and fifth-order two point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov–Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and the test functions satisfying the dual boundary conditions. The resulting linear systems from the application of our method are specially structured and they can be efficiently inverted. The use of generalized Jacobi polynomials simplify the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient. PMID:26425358
NASA Astrophysics Data System (ADS)
Liu, Shixing; Liu, Chang; Hua, Wei; Guo, Yongxin
2016-11-01
By using the discrete variational method, we study the numerical method of the general nonholonomic system in the generalized Birkhoffian framework, and construct a numerical method of generalized Birkhoffian equations called a self-adjoint-preserving algorithm. Numerical results show that it is reasonable to study the nonholonomic system by the structure-preserving algorithm in the generalized Birkhoffian framework. Project supported by the National Natural Science Foundation of China (Grant Nos. 11472124, 11572145, 11202090, and 11301350), the Doctor Research Start-up Fund of Liaoning Province, China (Grant No. 20141050), the China Postdoctoral Science Foundation (Grant No. 2014M560203), and the General Science and Technology Research Plans of Liaoning Educational Bureau, China (Grant No. L2013005).
An advancing front Delaunay triangulation algorithm designed for robustness
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1992-01-01
A new algorithm is described for generating an unstructured mesh about an arbitrary two-dimensional configuration. Mesh points are generated automatically by the algorithm in a manner which ensures a smooth variation of elements, and the resulting triangulation constitutes the Delaunay triangulation of these points. The algorithm combines the mathematical elegance and efficiency of Delaunay triangulation algorithms with the desirable point placement features, boundary integrity, and robustness traditionally associated with advancing-front-type mesh generation strategies. The method offers increased robustness over previous algorithms in that it cannot fail regardless of the initial boundary point distribution and the prescribed cell size distribution throughout the flow-field.
Structure and structure-preserving algorithms for plasma physics
NASA Astrophysics Data System (ADS)
Morrison, P. J.
2016-10-01
Conventional simulation studies of plasma physics are based on numerically solving the underpinning differential (or integro-differential) equations. Usual algorithms in general do not preserve known geometric structure of the physical systems, such as the local energy-momentum conservation law, Casimir invariants, and the symplectic structure (Poincaré invariants). As a consequence, numerical errors may accumulate coherently with time and long-term simulation results may be unreliable. Recently, a series of geometric algorithms that preserve the geometric structures resulting from the Hamiltonian and action principle (HAP) form of theoretical models in plasma physics have been developed by several authors. The superiority of these geometric algorithms has been demonstrated with many test cases. For example, symplectic integrators for guiding-center dynamics have been constructed to preserve the noncanonical symplectic structures and bound the energy-momentum errors for all simulation time-steps; variational and symplectic algorithms have been discovered and successfully applied to the Vlasov-Maxwell system, MHD, and other magnetofluid equations as well. Hamiltonian truncations of the full Vlasov-Maxwell system have opened the field of discrete gyrokinetics and led to the GEMPIC algorithm. The vision that future numerical capabilities in plasma physics should be based on structure-preserving geometric algorithms will be presented. It will be argued that the geometric consequences of HAP form and resulting geometric algorithms suitable for plasma physics studies cannot be adapted from existing mathematical literature but, rather, need to be discovered and worked out by theoretical plasma physicists. The talk will review existing HAP structures of plasma physics for a variety of models, and how they have been adapted for numerical implementation. Supported by DOE DE-FG02-04ER-54742.
Algorithms and a short description of the D1_Flow program for numerical modeling of one-dimensional steady-state flow in horizontally heterogeneous aquifers with uneven sloping bases are presented. The algorithms are based on the Dupuit-Forchheimer approximations. The program per...
Faster and More Accurate Transport Procedures for HZETRN
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Blattnig, Steve R.; Badavi, Francis F.
2010-01-01
Several aspects of code verification are examined for HZETRN. First, a detailed derivation of the numerical marching algorithms is given. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of various coding errors is also given, and the impact of these errors on exposure quantities is shown. Finally, a coupled convergence study is conducted. From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is also determined that almost all of the discretization error in HZETRN is caused by charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons are given for three applications in which HZETRN is commonly used. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.
Mission Simulation Facility: Simulation Support for Autonomy Development
NASA Technical Reports Server (NTRS)
Pisanich, Greg; Plice, Laura; Neukom, Christian; Flueckiger, Lorenzo; Wagner, Michael
2003-01-01
The Mission Simulation Facility (MSF) supports research in autonomy technology for planetary exploration vehicles. Using HLA (High Level Architecture) across distributed computers, the MSF connects users' autonomy algorithms with provided or third-party simulations of robotic vehicles and planetary surface environments, including onboard components and scientific instruments. Simulation fidelity is variable to meet changing needs as autonomy technology advances in Technical Readiness Level (TRL). A virtual robot operating in a virtual environment offers numerous advantages over actual hardware, including availability, simplicity, and risk mitigation. The MSF is in use by researchers at NASA Ames Research Center (ARC) and has demonstrated basic functionality. Continuing work will support the needs of a broader user base.
Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations.
Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot
2013-10-01
Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines the bulk conductivities that yield prescribed conduction velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios.
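A hedged sketch of the iterative tuning loop described above: the forward model below is a hypothetical stand-in (velocity proportional to the square root of conductivity, as cable theory suggests), whereas the actual algorithm runs tissue-scale bidomain simulations; the quadratic rescaling update is the illustrative part.

import numpy as np

# Sketch: adjust a bulk conductivity until a forward simulation yields a
# prescribed conduction velocity. forward_velocity() is a placeholder for a
# monodomain/bidomain simulation; c is an arbitrary illustrative constant.
def forward_velocity(sigma, c=0.6):
    return c * np.sqrt(sigma)   # hypothetical stand-in model

def tune_conductivity(v_target, sigma0=1.0, tol=1e-6, max_iter=50):
    sigma = sigma0
    for _ in range(max_iter):
        v = forward_velocity(sigma)
        if abs(v - v_target) < tol:
            break
        sigma *= (v_target / v) ** 2   # velocity ~ sqrt(sigma), so rescale quadratically
    return sigma

print(tune_conductivity(v_target=0.5))  # recovers sigma ~ (0.5/0.6)^2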
NASA Technical Reports Server (NTRS)
Steger, J. L.; Caradonna, F. X.
1980-01-01
An implicit finite difference procedure is developed to solve the unsteady full potential equation in conservation law form. Computational efficiency is maintained by use of approximate factorization techniques. The numerical algorithm is first order in time and second order in space. A circulation model and difference equations are developed for lifting airfoils in unsteady flow; however, thin airfoil body boundary conditions have been used with stretching functions to simplify the development of the numerical algorithm.
A numerical solution of Duffing's equations including the prediction of jump phenomena
NASA Technical Reports Server (NTRS)
Moyer, E. T., Jr.; Ghasghai-Abdi, E.
1987-01-01
Numerical methodology for the solution of Duffing's differential equation is presented. Algorithms for the prediction of multiple equilibrium solutions and jump phenomena are developed. In addition, a filtering algorithm for producing steady state solutions is presented. The problem of a rigidly clamped circular plate subjected to cosinusoidal pressure loading is solved using the developed algorithms (the plate is assumed to be in the geometrically nonlinear range). The results accurately predict regions of solution multiplicity and jump phenomena.
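As a sketch of the kind of computation involved, the following fragment integrates Duffing's equation with a fourth-order Runge-Kutta scheme and sweeps the forcing frequency to expose the jump in steady-state amplitude; parameters and the sweep protocol are illustrative, not those of the clamped-plate problem.

import numpy as np

# Integrate x'' + delta*x' + alpha*x + beta*x^3 = gamma*cos(omega*t) with RK4
# and track the steady-state amplitude while sweeping omega upward from the
# previous steady state, which is what reveals the jump (hysteresis).
def rhs(t, y, delta=0.1, alpha=1.0, beta=0.25, gamma=0.5, omega=1.2):
    x, v = y
    return np.array([v, gamma*np.cos(omega*t) - delta*v - alpha*x - beta*x**3])

def steady_amplitude(omega, y0, dt=0.01, t_transient=300.0, t_measure=100.0):
    y, t, amp = np.array(y0, float), 0.0, 0.0
    while t < t_transient + t_measure:
        k1 = rhs(t, y, omega=omega)
        k2 = rhs(t + dt/2, y + dt/2*k1, omega=omega)
        k3 = rhs(t + dt/2, y + dt/2*k2, omega=omega)
        k4 = rhs(t + dt, y + dt*k3, omega=omega)
        y, t = y + dt/6*(k1 + 2*k2 + 2*k3 + k4), t + dt
        if t > t_transient:
            amp = max(amp, abs(y[0]))
    return amp, y

y = [0.0, 0.0]
for omega in np.linspace(0.8, 1.6, 9):     # upward sweep, continuing from
    amp, y = steady_amplitude(omega, y)    # the previous steady state
    print(f"omega={omega:.2f}  amplitude={amp:.3f}")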
Density-matrix-based algorithm for solving eigenvalue problems
NASA Astrophysics Data System (ADS)
Polizzi, Eric
2009-03-01
A fast and stable numerical algorithm for solving the symmetric eigenvalue problem is presented. The technique deviates fundamentally from the traditional Krylov subspace iteration based techniques (Arnoldi and Lanczos algorithms) or other Davidson-Jacobi techniques and takes its inspiration from the contour integration and density-matrix representation in quantum mechanics. It will be shown that this algorithm—named FEAST—exhibits high efficiency, robustness, accuracy, and scalability on parallel architectures. Examples from electronic structure calculations of carbon nanotubes are presented, and numerical performances and capabilities are discussed.
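The contour-integration idea underlying FEAST can be sketched in a few lines: approximate the spectral projector onto the eigenspace inside a circle by quadrature over resolvents, then apply Rayleigh-Ritz in the filtered subspace. The toy below is dense, unrefined and assumption-laden (random test matrix, trapezoidal quadrature), a caricature of the production algorithm rather than an implementation of it.

import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 30                     # matrix size; subspace size >= #wanted eigenvalues
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric test matrix

c, r = 0.0, 1.0                    # contour: circle of radius r about c
Y = rng.standard_normal((n, m))
Q = np.zeros((n, m))
N = 16                             # quadrature nodes on the circle
for k in range(N):
    theta = 2*np.pi*(k + 0.5)/N
    z = c + r*np.exp(1j*theta)
    # (1/2*pi*i) dz contributes (r/N)*exp(i*theta) under the trapezoidal rule
    Q += np.real((r/N)*np.exp(1j*theta) * np.linalg.solve(z*np.eye(n) - A, Y))

# Rayleigh-Ritz in span(Q), then keep Ritz values inside the contour
Qo, _ = np.linalg.qr(Q)
evals, _ = np.linalg.eigh(Qo.T @ A @ Qo)
print("Ritz values inside contour:", np.sort(evals[np.abs(evals - c) < r]))
ref = np.linalg.eigvalsh(A)
print("reference eigenvalues     :", ref[np.abs(ref - c) < r])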
NASA Astrophysics Data System (ADS)
Min, Junhong; Carlini, Lina; Unser, Michael; Manley, Suliana; Ye, Jong Chul
2015-09-01
Localization microscopy such as STORM/PALM can achieve nanometer-scale spatial resolution by iteratively localizing fluorescent molecules. It has been shown that imaging of densely activated molecules can improve temporal resolution, which had been considered the major limitation of localization microscopy. However, such high-density imaging requires advanced localization algorithms to deal with overlapping point spread functions (PSFs). To address this technical challenge, we previously developed a localization algorithm called FALCON, which uses a quasi-continuous localization model with a sparsity prior on the image space and was demonstrated in both 2D and 3D live-cell imaging; nonetheless, it leaves several aspects to be improved. Here, we propose a new localization algorithm using an annihilating filter-based low-rank Hankel structured matrix approach (ALOHA). According to the ALOHA principle, sparsity in the image domain implies the existence of a rank-deficient Hankel structured matrix in Fourier space. Thanks to this fundamental duality, our new algorithm can perform data-adaptive PSF estimation and deconvolution of the Fourier spectrum, followed by truly grid-free localization using a spectral estimation technique. Furthermore, all these optimizations are conducted in Fourier space only. We validated the performance of the new method with numerical experiments and a live-cell imaging experiment. The results confirmed its higher localization performance in both experiments in terms of accuracy and detection rate.
Fast matrix multiplication and its algebraic neighbourhood
NASA Astrophysics Data System (ADS)
Pan, V. Ya.
2017-11-01
Matrix multiplication is among the most fundamental operations of modern computations. By 1969 it was still commonly believed that the classical algorithm was optimal, although the experts already knew that this was not so. Worldwide interest in matrix multiplication instantly exploded in 1969, when Strassen decreased the exponent 3 of cubic time to 2.807. Then everyone expected to see matrix multiplication performed in quadratic or nearly quadratic time very soon. Further progress, however, turned out to be capricious. It was at a stalemate for almost a decade, then a combination of surprising techniques (completely independent of Strassen's original ones and much more advanced) enabled a new decrease of the exponent in 1978-1981 and then again in 1986, to 2.376. By 2017 the exponent has still not passed through the barrier of 2.373, but most disturbing was the curse of recursion: even the decrease of exponents below 2.7733 required numerous recursive steps, and each of them squared the problem size. As a result, all algorithms supporting such exponents supersede the classical algorithm only for inputs of immense sizes, far beyond any potential interest for the user. We survey the long study of fast matrix multiplication, focusing on neglected algorithms for feasible matrix multiplication. We comment on their design, the techniques involved, implementation issues, the impact of their study on the modern theory and practice of Algebraic Computations, and perspectives for fast matrix multiplication. Bibliography: 163 titles.
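For reference, Strassen's 1969 scheme itself is short. The sketch below handles power-of-two sizes and falls back to ordinary multiplication below a cutoff, which is also how practical codes limit the recursion overhead discussed above.

import numpy as np

# Strassen's recursion: seven block products per level instead of eight.
def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(256, 256); B = np.random.rand(256, 256)
print(np.allclose(strassen(A, B), A @ B))  # True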
The Construction of 3-d Neutral Density for Arbitrary Data Sets
NASA Astrophysics Data System (ADS)
Riha, S.; McDougall, T. J.; Barker, P. M.
2014-12-01
The Neutral Density variable allows inference of water pathways from thermodynamic properties in the global ocean, and is therefore an essential component of global ocean circulation analysis. The widely used algorithm for the computation of Neutral Density yields accurate results for data sets which are close to the observed climatological ocean. Long-term numerical climate simulations, however, often generate a significant drift from present-day climate, which renders the existing algorithm inaccurate. To remedy this problem, new algorithms which operate on arbitrary data have been developed, which may potentially be used to compute Neutral Density during runtime of a numerical model. We review existing approaches for the construction of Neutral Density in arbitrary data sets, detail their algorithmic structure, and present an analysis of the computational cost for implementations on a single-CPU computer. We discuss possible strategies for the implementation in state-of-the-art numerical models, with a focus on distributed computing environments.
NASA Technical Reports Server (NTRS)
Merrill, W. C.; Delaat, J. C.
1986-01-01
An advanced sensor failure detection, isolation, and accommodation (ADIA) algorithm has been developed for use with an aircraft turbofan engine control system. In a previous paper the authors described the ADIA algorithm and its real-time implementation. Subsequent improvements made to the algorithm and implementation are discussed, and the results of an evaluation are presented. The evaluation used a real-time, hybrid computer simulation of an F100 turbofan engine.
NASA Technical Reports Server (NTRS)
Spratlin, Kenneth Milton
1987-01-01
An adaptive numeric predictor-corrector guidance algorithm is developed for atmospheric entry vehicles which utilize lift to achieve maximum footprint capability. Applicability of the guidance design to vehicles with a wide range of performance capabilities is desired so as to reduce the need for algorithm redesign with each new vehicle. Adaptability is desired to minimize mission-specific analysis and planning. The guidance algorithm motivation and design are presented. Performance is assessed for application of the algorithm to the NASA Entry Research Vehicle (ERV). The dispersions the guidance must be designed to handle are presented. The achievable operational footprint for expected worst-case dispersions is presented. The algorithm performs excellently for the expected dispersions and captures most of the achievable footprint.
Li, Longxiang; Xue, Donglin; Deng, Weijie; Wang, Xu; Bai, Yang; Zhang, Feng; Zhang, Xuejun
2017-11-10
In deterministic computer-controlled optical surfacing, accurate dwell time execution by computer numerical control machines is crucial to guaranteeing a high convergence ratio for the optical surface error. It is necessary to consider the machine dynamics limitations in numerical dwell time algorithms. In this paper, these constraints on the dwell time distribution are analyzed, and a model of equal extra material removal is established. A positive dwell time algorithm with minimum equal extra material removal is developed. Results of simulations based on deterministic magnetorheological finishing demonstrate the necessity of considering machine dynamics performance and illustrate the validity of the proposed algorithm. Indeed, the algorithm effectively facilitates the determinacy of sub-aperture optical surfacing processes.
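A hedged sketch of the positivity constraint at the heart of such dwell-time problems: removal is modeled as the tool influence function convolved with dwell time, and nonnegative least squares enforces positive dwell times. The influence function and target profile below are invented for illustration; this is not the equal-extra-removal model proposed in the paper.

import numpy as np
from scipy.optimize import nnls

x = np.linspace(-1, 1, 200)
target = 0.5 + 0.3 * np.cos(2 * np.pi * x)                  # desired removal
tif = np.exp(-0.5 * (np.linspace(-3, 3, 31) / 0.8) ** 2)    # Gaussian tool spot

# Removal matrix: column j = influence of dwelling at position j
R = np.zeros((len(x), len(x)))
half = len(tif) // 2
for j in range(len(x)):
    lo, hi = max(0, j - half), min(len(x), j + half + 1)
    R[lo:hi, j] = tif[half + lo - j: half + hi - j]

dwell, _ = nnls(R, target)                                  # dwell >= 0 enforced
print("max removal error:", np.abs(R @ dwell - target).max())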
Xu, Z N
2014-12-01
In this study, an error analysis is performed to study real water drop images and the corresponding numerically generated water drop profiles for three widely used static contact angle algorithms: the circle- and ellipse-fitting algorithms and the axisymmetric drop shape analysis-profile (ADSA-P) algorithm. The results demonstrate the accuracy of the numerically generated drop profiles based on the Laplace equation. A significant number of water drop profiles with different volumes, contact angles, and noise levels are generated, and the influences of the three factors on the accuracies of the three algorithms are systematically investigated. The results reveal that the three algorithms are complementary. In fact, the circle- and ellipse-fitting algorithms show low errors and are highly resistant to noise for water drops with small/medium volumes and contact angles, while for water drops with large volumes and contact angles only the ADSA-P algorithm can meet the accuracy requirement. However, the latter algorithm introduces significant errors for small volumes and contact angles because of its high sensitivity to noise. The critical water drop volumes of the circle- and ellipse-fitting algorithms corresponding to a given contact angle error are obtained through a significant amount of computation. To improve the precision of static contact angle measurement, a more accurate algorithm based on a combination of the three algorithms is proposed. Following a systematic investigation, the algorithm selection rule is described in detail; it maintains the advantages of the three algorithms while overcoming their deficiencies. In general, static contact angles over the entire hydrophobicity range can be accurately evaluated using the proposed algorithm, and erroneous judgments in static contact angle measurements are avoided. The proposed algorithm is validated by a static contact angle evaluation of real and numerically generated water drop images with different hydrophobicity values and volumes.
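The circle-fitting branch is simple enough to sketch: a Kasa-style algebraic fit of x^2 + y^2 + a*x + b*y + c = 0 to the drop profile, with the contact angle read off the fitted circle at the baseline y = 0. The synthetic drop below is an assumption for illustration; ellipse fitting and ADSA-P need considerably more machinery.

import numpy as np

def circle_fit_contact_angle(xs, ys):
    # Linear least squares for a, b, c in x^2 + y^2 + a*x + b*y + c = 0
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    a, b, c = np.linalg.lstsq(A, -(xs**2 + ys**2), rcond=None)[0]
    xc, yc = -a / 2, -b / 2
    R = np.sqrt(xc**2 + yc**2 - c)
    # tangent-baseline angle at the triple line (substrate at y = 0)
    return np.degrees(np.arccos(-yc / R))

# Synthetic drop: spherical cap with a 70-degree contact angle plus noise
rng = np.random.default_rng(0)
theta, R0 = np.radians(70), 1.0
yc0 = -R0 * np.cos(theta)              # circle center below the line for theta < 90
phi = np.linspace(np.pi/2 - theta, np.pi/2 + theta, 100)
xs = R0 * np.cos(phi)
ys = yc0 + R0 * np.sin(phi) + 0.002 * rng.standard_normal(phi.size)
print(circle_fit_contact_angle(xs, ys))   # ~70 degrees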
2016-01-01
Recent studies have highlighted the potential role of basic numerical processing in the acquisition of numerical and mathematical competences. However, it is debated whether high-level numerical skills and mathematics depend specifically on basic numerical representations. In this study mathematicians and nonmathematicians performed a basic number line task, which required mapping positive and negative numbers on a physical horizontal line, and has been shown to correlate with more advanced numerical abilities and mathematical achievement. We found that mathematicians were more accurate than nonmathematicians when mapping positive, but not negative, numbers, which are considered numerical primitives and cultural artifacts, respectively. Moreover, performance on positive number mapping could predict whether one is a mathematician or not, and was mediated by more advanced mathematical skills. This finding might suggest a link between basic and advanced mathematical skills. However, when we included visuospatial skills, as measured by the block design subtest, the mediation analysis revealed that the relation between performance in the number line task and group membership was explained by non-numerical visuospatial skills. These results demonstrate that the relation between basic, even specific, numerical skills and advanced mathematical achievement can be artifactual and explained by visuospatial processing. PMID:26913930
The GOES-R GeoStationary Lightning Mapper (GLM)
NASA Technical Reports Server (NTRS)
Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Mach, Douglas
2011-01-01
The Geostationary Operational Environmental Satellite (GOES-R) is the next series to follow the existing GOES system currently operating over the Western Hemisphere. Superior spacecraft and instrument technology will support expanded detection of environmental phenomena, resulting in more timely and accurate forecasts and warnings. Advancements over current GOES capabilities include a new capability for total lightning detection (cloud and cloud-to-ground flashes) from the Geostationary Lightning Mapper (GLM), and improved capability for the Advanced Baseline Imager (ABI). The Geostationary Lightning Mapper (GLM) will map total lightning activity (in-cloud and cloud-to-ground lightning flashes) continuously day and night with near-uniform spatial resolution of 8 km and a product refresh rate of less than 20 sec over the Americas and adjacent oceanic regions. This will aid in forecasting severe storms and tornado activity, and convective weather impacts on aviation safety and efficiency, among a number of potential applications. In parallel with the instrument development (a prototype and 4 flight models), a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms (environmental data records), cal/val performance monitoring tools, and new applications using GLM alone, in combination with the ABI, merged with ground-based sensors, and decision aids augmented by numerical weather prediction model forecasts. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds are being used to develop the pre-launch algorithms and applications, and also to improve our knowledge of thunderstorm initiation and evolution. An international field campaign planned for 2011-2012 will produce concurrent observations from a VHF lightning mapping array, Meteosat multi-band imagery, Tropical Rainfall Measuring Mission (TRMM) Lightning Imaging Sensor (LIS) overpasses, and related ground and in-situ lightning and meteorological measurements in the vicinity of Sao Paulo. These data will provide a new comprehensive proxy data set for algorithm and application development.
Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray
2014-01-01
A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design. PMID:25404761
The upwind control volume scheme for unstructured triangular grids
NASA Technical Reports Server (NTRS)
Giles, Michael; Anderson, W. Kyle; Roberts, Thomas W.
1989-01-01
A new algorithm for the numerical solution of the Euler equations is presented. This algorithm is particularly suited to the use of unstructured triangular meshes, allowing geometric flexibility. Solutions are second-order accurate in the steady state. Implementation of the algorithm requires minimal grid connectivity information, resulting in modest storage requirements, and should enhance the implementation of the scheme on massively parallel computers. A novel form of upwind differencing is developed, and is shown to yield sharp resolution of shocks. Two new artificial viscosity models are introduced that enhance the performance of the new scheme. Numerical results for transonic airfoil flows are presented, which demonstrate the performance of the algorithm.
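Upwind differencing itself is easiest to see in one dimension; the sketch below applies first-order upwinding to linear advection on a periodic grid. It is a toy analogue of, not a substitute for, the unstructured-triangle Euler scheme described above.

import numpy as np

# u_t + a*u_x = 0: difference in the direction the information comes from.
nx, a, cfl = 200, 1.0, 0.8
dx = 1.0 / nx
dt = cfl * dx / abs(a)
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)    # square pulse (sharp edges)

for _ in range(int(0.5 / dt)):
    if a > 0:
        u -= a * dt / dx * (u - np.roll(u, 1))    # backward (upwind) difference
    else:
        u -= a * dt / dx * (np.roll(u, -1) - u)   # forward (upwind) difference

print("pulse now centered near x =", x[np.argmax(u)])  # advected by a*t = 0.5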
On the efficient and reliable numerical solution of rate-and-state friction problems
NASA Astrophysics Data System (ADS)
Pipping, Elias; Kornhuber, Ralf; Rosenau, Matthias; Oncken, Onno
2016-03-01
We present a mathematically consistent numerical algorithm for the simulation of earthquake rupture with rate-and-state friction. Its main features are adaptive time stepping, a novel algebraic solution algorithm involving nonlinear multigrid and a fixed point iteration for the rate-and-state decoupling. The algorithm is applied to a laboratory scale subduction zone which allows us to compare our simulations with experimental results. Using physical parameters from the experiment, we find a good fit of recurrence time of slip events as well as their rupture width and peak slip. Computations in 3-D confirm efficiency and robustness of our algorithm.
NASA Astrophysics Data System (ADS)
Wichert, Viktoria; Arkenberg, Mario; Hauschildt, Peter H.
2016-10-01
Highly resolved state-of-the-art 3D atmosphere simulations will remain computationally extremely expensive for years to come. In addition to the need for more computing power, rethinking coding practices is necessary. We take a dual approach by introducing especially adapted, parallel numerical methods and correspondingly parallelizing critical code passages. In the following, we present our respective work on PHOENIX/3D. With new parallel numerical algorithms, there is a big opportunity for improvement when iteratively solving the system of equations emerging from the operator splitting of the radiative transfer equation J = ΛS. The narrow-banded approximate Λ-operator Λ*, which is used in PHOENIX/3D, occurs in each iteration step. By implementing a numerical algorithm which takes advantage of its characteristic traits, the parallel code's efficiency is further increased and a speed-up in computational time can be achieved.
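The payoff of a narrow-banded approximate operator can be sketched abstractly. With a two-level-atom-like closure S = eps*B + (1-eps)*J and J = Λ(S), operator splitting yields the preconditioned fixed-point iteration (I - (1-eps)*Λ*) S_new = eps*B + (1-eps)*(Λ - Λ*) S_old, whose left-hand solve involves only the banded Λ*. All operators below are random stand-ins, not PHOENIX/3D internals.

import numpy as np

rng = np.random.default_rng(1)
n, eps = 400, 0.1
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)  # row-stochastic part
Lam = 0.7 * np.eye(n) + 0.3 * P      # dense "full" operator, spectral radius <= 1
B = np.ones(n)                       # exact solution is S = B here

band = np.triu(np.tril(Lam, 1), -1)  # tridiagonal approximate operator
M = np.eye(n) - (1 - eps) * band     # banded in real codes -> cheap solves
S = B * 0.0
for it in range(200):
    S_new = np.linalg.solve(M, eps * B + (1 - eps) * (Lam - band) @ S)
    done = np.max(np.abs(S_new - S)) < 1e-10
    S = S_new
    if done:
        break
print("converged in", it, "iterations")
print("residual:", np.max(np.abs(S - (eps * B + (1 - eps) * (Lam @ S)))))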
Uniformly stable backpropagation algorithm to train a feedforward neural network.
Rubio, José de Jesús; Angelov, Plamen; Pacheco, Jaime
2011-03-01
Neural networks (NNs) have numerous applications to online processes, but the problem of stability is rarely discussed. This is an extremely important issue because, if the stability of a solution is not guaranteed, the equipment being used can be damaged, which can also cause serious accidents. It is true that in some research papers this problem has been considered, but only for continuous-time NNs. At the same time, there are many systems that are better described in the discrete-time domain, such as populations of animals, the annual expenses of an industry, the interest earned by a bank, or the prediction of the distribution of loads stored every hour in a warehouse. Therefore, it is of paramount importance to consider the stability of discrete-time NNs. This paper makes several important contributions. 1) A theorem is stated and proven which guarantees uniform stability of a general discrete-time system. 2) It is proven that the backpropagation (BP) algorithm with a new time-varying rate is uniformly stable for online identification and that the identification error converges to a small zone bounded by the uncertainty. 3) It is proven that the weights' error is bounded by the initial weights' error, i.e., overfitting is eliminated in the proposed algorithm. 4) The BP algorithm is applied to predict the distribution of loads that a transelevator receives from a trailer and places in the deposits of a warehouse every hour, so that the deposits in the warehouse can be reserved in advance using the prediction results. 5) The BP algorithm is compared with the recursive least squares (RLS) algorithm and with a Takagi-Sugeno type fuzzy inference system on the problem of predicting the distribution of loads in a warehouse, showing that the first two are stable and the third is unstable. 6) The BP algorithm is compared with the RLS algorithm and with the Kalman filter algorithm in a synthetic example.
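A minimal sketch of online backpropagation with a decaying, time-varying learning rate follows; the rate schedule used here is a common textbook choice, not the stability-derived rate proven in the paper.

import numpy as np

rng = np.random.default_rng(0)
W1 = 0.1 * rng.standard_normal((8, 2)); b1 = np.zeros(8)
w2 = 0.1 * rng.standard_normal(8);      b2 = 0.0

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return h, w2 @ h + b2

for k in range(1, 5001):                  # online identification loop
    x = rng.uniform(-1, 1, 2)
    y = np.sin(x[0]) + 0.5 * x[1]         # unknown plant to identify
    h, y_hat = forward(x)
    e = y_hat - y
    eta = 0.5 / (1.0 + 0.01 * k)          # time-varying learning rate
    # backpropagate the squared-error gradient
    w2 -= eta * e * h;  b2 -= eta * e
    dh = e * w2 * (1.0 - h**2)
    W1 -= eta * np.outer(dh, x);  b1 -= eta * dh

x = np.array([0.3, -0.2]); _, y_hat = forward(x)   # held-out check
print("prediction:", y_hat, " target:", np.sin(0.3) - 0.1)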
Direct single-shot phase retrieval from the diffraction pattern of separated objects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leshem, Ben; Xu, Rui; Dallal, Yehonatan
2016-02-22
The non-crystallographic phase problem arises in numerous scientific and technological fields. An important application is coherent diffractive imaging. Recent advances in X-ray free-electron lasers allow capturing of the diffraction pattern from a single nanoparticle before it disintegrates, in so-called ‘diffraction before destruction’ experiments. Presently, the phase is reconstructed by iterative algorithms, imposing a non-convex computational challenge, or by Fourier holography, requiring a well-characterized reference field. Here we present a convex scheme for single-shot phase retrieval for two (or more) sufficiently separated objects, demonstrated in two dimensions. In our approach, the objects serve as unknown references to one another, reducing the phase problem to a solvable set of linear equations. We establish our method numerically and experimentally in the optical domain and demonstrate a proof-of-principle single-shot coherent diffractive imaging using X-ray free-electron laser pulses. Lastly, our scheme alleviates several limitations of current methods, offering a new pathway towards direct reconstruction of complex objects.
NASA Astrophysics Data System (ADS)
Voytishek, Anton V.; Shipilov, Nikolay M.
2017-11-01
In this paper, a systematization of numerical (computer-implemented) randomized functional algorithms for approximating the solution of Fredholm integral equations of the second kind is carried out. Three types of such algorithms are distinguished: projection, mesh, and projection-mesh methods. The possibilities of using these algorithms to solve practically important problems are investigated in detail. A disadvantage of the mesh algorithms is identified: they require evaluating the kernels of the integral equations at fixed points, while in practice these kernels have integrable singularities, so that computing their values there is impossible. Thus, for applied problems involving Fredholm integral equations of the second kind, it is expedient to use not the mesh algorithms but the projection and projection-mesh randomized algorithms.
The MINERVA Software Development Process
NASA Technical Reports Server (NTRS)
Narkawicz, Anthony; Munoz, Cesar A.; Dutle, Aaron M.
2017-01-01
This paper presents a software development process for safety-critical software components of cyber-physical systems. The process is called MINERVA, which stands for Mirrored Implementation Numerically Evaluated against Rigorously Verified Algorithms. The process relies on formal methods for rigorously validating code against its requirements. The software development process uses: (1) a formal specification language for describing the algorithms and their functional requirements, (2) an interactive theorem prover for formally verifying the correctness of the algorithms, (3) test cases that stress the code, and (4) numerical evaluation on these test cases of both the algorithm specifications and their implementations in code. The MINERVA process is illustrated in this paper with an application to geo-containment algorithms for unmanned aircraft systems. These algorithms ensure that the position of an aircraft never leaves a predetermined polygon region and provide recovery maneuvers when the region is inadvertently exited.
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. This characterization of the solution via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835
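The skeleton of any alternating projection method is two projections applied in turn; the toy below (nonnegativity and a hyperplane) shows that skeleton only, with none of the proximity-operator machinery or EM-preconditioning that PAPA adds.

import numpy as np

def proj_nonneg(x):            # projection onto the nonnegative orthant
    return np.maximum(x, 0.0)

def proj_plane(x):             # projection onto the hyperplane sum(x) = 1
    return x + (1.0 - x.sum()) / x.size

x = np.random.randn(5)
for _ in range(200):           # alternate until the iterates settle
    x = proj_plane(proj_nonneg(x))
print(x, x.sum())              # (approximately) nonnegative and sums to 1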
3D printing application and numerical simulations in a fracture system
NASA Astrophysics Data System (ADS)
Yoon, H.; Martinez, M. J.
2017-12-01
The hydrogeological and mechanical properties in fractured and porous media are fundamental to predicting coupled multiphysics processes in the subsurface. Recent advances in experimental methods and multi-scale imaging capabilities have revolutionized our ability to quantitatively characterize geomaterials, and digital counterparts are now routinely used for numerical simulations to characterize petrophysical and mechanical properties across scales. 3D printing is a very effective and creative technique that reproduces digital images in a controlled way. For geoscience applications, 3D printing can be co-opted to print reproducible porous and fractured structures derived from CT imaging of actual rocks and from theoretical algorithms for experimental testing. In this work we used a stereolithography (SLA) method to create a single fracture network. The fracture in shale was first scanned using a microCT system, and the digital fracture network was then printed in two parts and assembled. Aperture ranges from 0.3 to 1 mm. In particular, we discuss the design of the single fracture network and the progress of printing practices to reproduce the fracture network system. Printed samples at different scales are used to measure permeability and surface roughness. Various numerical simulations including (non-)reactive transport and multiphase flow cases are performed to study fluid flow characterization. We will also discuss the innovative advancement of 3D printing techniques applicable to coupled processes in the subsurface. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
NASA Technical Reports Server (NTRS)
Crook, Andrew J.; Delaney, Robert A.
1992-01-01
The purpose of this study is the development of a three-dimensional Euler/Navier-Stokes flow analysis for fan section/engine geometries containing multiple blade rows and multiple spanwise flow splitters. An existing procedure developed by Dr. J. J. Adamczyk and associates and the NASA Lewis Research Center was modified to accept multiple spanwise splitter geometries and simulate engine core conditions. The procedure was also modified to allow coarse parallelization of the solution algorithm. This document is a final report outlining the development and techniques used in the procedure. The numerical solution is based upon a finite volume technique with a four stage Runge-Kutta time marching procedure. Numerical dissipation is used to gain solution stability but is reduced in viscous dominated flow regions. Local time stepping and implicit residual smoothing are used to increase the rate of convergence. Multiple blade row solutions are based upon the average-passage system of equations. The numerical solutions are performed on an H-type grid system, with meshes being generated by the system (TIGG3D) developed earlier under this contract. The grid generation scheme meets the average-passage requirement of maintaining a common axisymmetric mesh for each blade row grid. The analysis was run on several geometry configurations ranging from one to five blade rows and from one to four radial flow splitters. Pure internal flow solutions were obtained as well as solutions with flow about the cowl/nacelle and various engine core flow conditions. The efficiency of the solution procedure was shown to be the same as the original analysis.
Introduction to Numerical Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schoonover, Joseph A.
2016-06-14
These are slides for a lecture for the Parallel Computing Summer Research Internship at the National Security Education Center. This gives an introduction to numerical methods. Repetitive algorithms are used to obtain approximate solutions to mathematical problems, using sorting, searching, root finding, optimization, interpolation, extrapolation, least squares regression, eigenvalue problems, ordinary differential equations, and partial differential equations. Many equations are shown. Discretizations allow us to approximate solutions to mathematical models of physical systems using a repetitive algorithm, and they introduce errors that can lead to numerical instabilities if we are not careful.
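As one concrete instance of the repetitive algorithms listed, a minimal Newton root-finder: iterate x <- x - f(x)/f'(x) until the update falls below a tolerance.

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: sqrt(2) as the positive root of f(x) = x^2 - 2
print(newton(lambda x: x*x - 2.0, lambda x: 2.0*x, x0=1.0))  # 1.41421356...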
Karamintziou, Sofia D; Custódio, Ana Luísa; Piallat, Brigitte; Polosan, Mircea; Chabardès, Stéphan; Stathis, Pantelis G; Tagaris, George A; Sakas, Damianos E; Polychronaki, Georgia E; Tsirogiannis, George L; David, Olivier; Nikita, Konstantina S
2017-01-01
Advances in the field of closed-loop neuromodulation call for analysis and modeling approaches capable of confronting challenges related to the complex neuronal response to stimulation and the presence of strong internal and measurement noise in neural recordings. Here we elaborate on the algorithmic aspects of a noise-resistant closed-loop subthalamic nucleus deep brain stimulation system for advanced Parkinson's disease and treatment-refractory obsessive-compulsive disorder, ensuring remarkable performance in terms of both efficiency and selectivity of stimulation, as well as in terms of computational speed. First, we propose an efficient method drawn from dynamical systems theory, for the reliable assessment of significant nonlinear coupling between beta and high-frequency subthalamic neuronal activity, as a biomarker for feedback control. Further, we present a model-based strategy through which optimal parameters of stimulation for minimum energy desynchronizing control of neuronal activity are being identified. The strategy integrates stochastic modeling and derivative-free optimization of neural dynamics based on quadratic modeling. On the basis of numerical simulations, we demonstrate the potential of the presented modeling approach to identify, at a relatively low computational cost, stimulation settings potentially associated with a significantly higher degree of efficiency and selectivity compared with stimulation settings determined post-operatively. Our data reinforce the hypothesis that model-based control strategies are crucial for the design of novel stimulation protocols at the backstage of clinical applications.
NASA Astrophysics Data System (ADS)
Park, Jun Kwon; Kang, Kwan Hyoung
2012-04-01
Contact angle (CA) hysteresis is important in many natural and engineering wetting processes, but predicting it numerically is difficult. We developed an algorithm that accounts for CA hysteresis when analyzing the motion of the contact line (CL). The algorithm employs feedback control of the CA, which decelerates the CL so as to hold it stationary within the hysteretic range of the CA; one control coefficient must be determined heuristically from the characteristic time of the simulated system. The algorithm requires embedding only a simple additional routine, with little modification of a code that handles the dynamic CA. The method is non-iterative and explicit, and also has a lower computational load than other algorithms. For a drop hanging on a wire, the proposed algorithm accurately predicts the theoretical equilibrium CA. For a drop impacting on a dry surface, the results of the proposed algorithm agree well with experimental results, including the intermittent pinning of the CL. The proposed algorithm is as accurate as other algorithms, but faster.
Hatt, Mathieu; Lee, John A.; Schmidtlein, Charles R.; Naqa, Issam El; Caldwell, Curtis; De Bernardi, Elisabetta; Lu, Wei; Das, Shiva; Geets, Xavier; Gregoire, Vincent; Jeraj, Robert; MacManus, Michael P.; Mawlawi, Osama R.; Nestle, Ursula; Pugachev, Andrei B.; Schöder, Heiko; Shepherd, Tony; Spezi, Emiliano; Visvikis, Dimitris; Zaidi, Habib; Kirov, Assen S.
2017-01-01
Purpose: The purpose of this educational report is to provide an overview of the present state-of-the-art PET auto-segmentation (PET-AS) algorithms and their respective validation, with an emphasis on helping the user understand the challenges and pitfalls associated with selecting and implementing a PET-AS algorithm for a particular application. Approach: A brief description of the different types of PET-AS algorithms is provided using a classification based on method complexity and type. The advantages and limitations of the current PET-AS algorithms are highlighted based on current publications and existing comparison studies. A review of the available image datasets and contour evaluation metrics is provided in terms of their applicability for establishing a standardized evaluation of PET-AS algorithms. The performance requirements for the algorithms and their dependence on the application, the radiotracer used and the evaluation criteria are described and discussed. Finally, a procedure for algorithm acceptance and implementation, as well as the complementary roles of manual and auto-segmentation, are addressed. Findings: A large number of PET-AS algorithms have been developed within the last 20 years. Many of the proposed algorithms are based on either fixed or adaptively selected thresholds. More recently, numerous papers have proposed the use of more advanced image analysis paradigms to perform semi-automated delineation of PET images. However, the level of algorithm validation is variable, and for most published algorithms it is either insufficient or inconsistent, which prevents recommending a single algorithm. This is compounded by the fact that realistic image configurations with low signal-to-noise ratios (SNR) and heterogeneous tracer distributions have rarely been used. Large variations in the evaluation methods used in the literature point to the need for a standardized evaluation protocol. Conclusions: Available comparison studies suggest that PET-AS algorithms relying on advanced image analysis paradigms generally provide more accurate segmentation than approaches based on PET activity thresholds, particularly for realistic configurations. However, this may not be the case for simple lesion shapes in situations with a narrower range of parameters, where simpler methods may also perform well. Recent algorithms which employ some type of consensus or automatic selection between several PET-AS methods have the potential to overcome the limitations of the individual methods when appropriately trained. In either case, accuracy evaluation is required for each different PET scanner and each scanning and image reconstruction protocol. For the simpler, less robust approaches, adaptation to scanning conditions, tumor type, and tumor location by optimization of parameters is necessary. The results from the method evaluation stage can be used to estimate the contouring uncertainty. All PET-AS contours should be critically verified by a physician. A standard test, i.e., a benchmark dedicated to evaluating both existing and future PET-AS algorithms, needs to be designed to aid clinicians in evaluating and selecting PET-AS algorithms and to establish performance limits for their acceptance for clinical use. The initial steps toward designing and building such a standard are being undertaken by the task group members. PMID:28120467
A novel approach to solve nonlinear Fredholm integral equations of the second kind.
Li, Hu; Huang, Jin
2016-01-01
In this paper, we present a novel approach to solve nonlinear Fredholm integral equations of the second kind. The algorithm is constructed from the integral mean value theorem and Newton iteration. Convergence and error analysis of the numerical solutions are given. Moreover, numerical examples show that the algorithm is effective and simple.
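One standard route to such equations, sketched below under illustrative choices of kernel, data and nonlinearity, is Nystrom discretization of the integral followed by Newton iteration on the resulting algebraic system; note this is not the integral-mean-value construction proposed in the paper.

import numpy as np

# Solve u(x) = f(x) + integral over [0,1] of k(x,t) g(u(t)) dt
n = 101
x, h = np.linspace(0.0, 1.0, n, retstep=True)
w = np.full(n, h); w[0] = w[-1] = h / 2          # trapezoidal weights
K = 0.5 * np.outer(x, x)                         # kernel k(x,t) = 0.5*x*t
f = np.sin(x)

g  = lambda u: u**2                              # nonlinearity g(u) = u^2
dg = lambda u: 2.0 * u

u = f.copy()                                     # initial guess
for _ in range(20):
    F = u - f - K @ (w * g(u))                   # residual of the system
    J = np.eye(n) - K * (w * dg(u))              # Jacobian: I - K diag(w g'(u))
    du = np.linalg.solve(J, -F)
    u += du
    if np.max(np.abs(du)) < 1e-12:
        break

print("residual:", np.max(np.abs(u - f - K @ (w * g(u)))))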
ERIC Educational Resources Information Center
Gonzalez-Vega, Laureano
1999-01-01
Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM)
Pre-Hardware Optimization and Implementation Of Fast Optics Closed Control Loop Algorithms
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Lyon, Richard G.; Herman, Jay R.; Abuhassan, Nader
2004-01-01
One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). The FFT is particularly useful in two-dimensional (2-D) image processing (FFT2) within optical systems control. However, timing constraints of a fast optics closed control loop would require a supercomputer to run the software implementation of the FFT2 and its inverse, as well as other representative image processing algorithms, such as numerical image folding and fringe feature extraction. A laboratory supercomputer is not always available even for ground operations and is not feasible for a flight project. However, the computationally intensive algorithms still warrant alternative implementation using reconfigurable computing (RC) technologies such as Digital Signal Processors (DSP) and Field Programmable Gate Arrays (FPGA), which provide low-cost, compact super-computing capabilities. We present a new RC hardware implementation and utilization architecture that significantly reduces the computational complexity of a few basic image-processing algorithms, such as FFT2, image folding and phase diversity, for the NASA Solar Viewing Interferometer Prototype (SVIP) using a cluster of DSPs and FPGAs. The DSP cluster utilization architecture also assures avoidance of a single point of failure, while using commercially available hardware. This, combined with pre-hardware optimization of the control algorithms, for the first time allows construction of image-based 800 Hertz (Hz) optics closed control loops on board a spacecraft, based on the SVIP ground instrument. That spacecraft is the proposed Earth Atmosphere Solar Occultation Imager (EASI) to study the greenhouse gases CO2, C2H, H2O, O3, O2 and N2O from the Lagrange-2 point in space. This paper provides advanced insight into a new type of science capability for future space exploration missions based on on-board image processing for control and for robotics missions using vision sensors. It presents a top-level description of the technologies required for the design and construction of SVIP and EASI, and to advance the spatial-spectral imaging and large-scale space interferometry science and engineering.
The minimal residual QR-factorization algorithm for reliably solving subset regression problems
NASA Technical Reports Server (NTRS)
Verhaegen, M. H.
1987-01-01
A new algorithm for reliably solving subset regression problems is described, called the minimal residual QR factorization algorithm (MRQR). This scheme performs a QR factorization with a new column pivoting strategy. Basically, this strategy is based on the change in the residual of the least squares problem. Furthermore, it is demonstrated that this basic scheme can be extended in a numerically efficient way to combine the advantages of existing numerical procedures, such as the singular value decomposition, with those of more classical statistical procedures, such as stepwise regression. This extension is presented as an advisory expert system that guides the user in solving the subset regression problem. The advantages of the new procedure are highlighted by a numerical example.
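The residual-driven pivoting idea can be sketched with a greedy column-selection loop (essentially stepwise regression / orthogonal matching pursuit); this is a generic illustration of the principle, not the MRQR factorization itself.

import numpy as np

def select_subset(A, b, k):
    # At each step admit the column most correlated with the current residual,
    # then refit on the selected columns and update the residual.
    chosen, r = [], b.copy()
    for _ in range(k):
        scores = np.abs(A.T @ r)
        scores[chosen] = -np.inf                 # skip already-selected columns
        chosen.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(A[:, chosen], b, rcond=None)
        r = b - A[:, chosen] @ coef              # updated residual
    return chosen, r

rng = np.random.default_rng(3)
A = rng.standard_normal((100, 20))
b = A[:, 4] * 2.0 - A[:, 11] + 0.01 * rng.standard_normal(100)
cols, resid = select_subset(A, b, k=2)
print("selected columns:", cols, " residual norm:", np.linalg.norm(resid))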
An algorithm for the numerical solution of linear differential games
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polovinkin, E S; Ivanov, G E; Balashov, M V
2001-10-31
A numerical algorithm for the construction of stable Krasovskii bridges, Pontryagin alternating sets, and also of piecewise program strategies solving two-person linear differential (pursuit or evasion) games on a fixed time interval is developed on the basis of a general theory. The aim of the first player (the pursuer) is to hit a prescribed target (terminal) set with the phase vector of the control system at the prescribed time. The aim of the second player (the evader) is the opposite. A description of the numerical algorithms used in the solution of differential games of the type under consideration is presented, and estimates of the errors resulting from the approximation of the game sets by polyhedra are given.
Advanced biologically plausible algorithms for low-level image processing
NASA Astrophysics Data System (ADS)
Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan
1999-08-01
At present, in computer vision, the approach based on modeling biological vision mechanisms is extensively developed. However, up to now, real-world image processing has no effective solution within the frameworks of either the biologically inspired or the conventional approach. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed to solve the computational problems related to this visual task. A basic problem that must be solved to create an effective artificial visual system for processing real-world images is the search for new algorithms of low-level image processing, which to a great extent determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on local space-variant filtering, context encoding of the visual information presented at the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as non-collinear oriented segments, and composite feature map formation. The developed algorithms were integrated into the foveal active vision model MARR. It is expected that the proposed algorithms may significantly improve model performance in real-world image processing during memorization, search, and recognition.
NASA Astrophysics Data System (ADS)
Vasquez, D. A.; Swift, J. N.; Tan, S.; Darrah, T. H.
2013-12-01
The integration of precise geochemical analyses with quantitative engineering modeling into an interactive GIS system allows for a sophisticated and efficient method of reservoir engineering and characterization. Geographic Information Systems (GIS) are utilized as an advanced technique for oil field reservoir analysis by combining field engineering and geological/geochemical spatial datasets with the available systematic modeling and mapping methods to integrate the information into a spatially correlated first-hand approach to defining surface and subsurface characteristics. Three key methods of analysis include: 1) geostatistical modeling to create a static and volumetric 3-dimensional representation of the geological body, 2) numerical modeling to develop a dynamic and interactive 2-dimensional model of fluid flow across the reservoir, and 3) noble gas geochemistry to further define the physical conditions, components and history of the geologic system. Results thus far include using engineering algorithms for interpolating electrical well log properties across the field (spontaneous potential, resistivity), yielding a highly accurate and high-resolution 3D model of rock properties, and using numerical finite-difference methods (Crank-Nicolson) to solve the equations describing the distribution of pressure across the field, yielding a 2D simulation model of fluid flow across the reservoir. Ongoing noble gas geochemistry work will also include determination of the source, thermal maturity and the extent/style of fluid migration (connectivity, continuity and directionality). Future work will include developing an inverse engineering algorithm to model permeability, porosity and water saturation. This combination of new and efficient technological and analytical capabilities is geared to provide a better understanding of the field geology and hydrocarbon dynamics system, with applications to determine the presence of hydrocarbon pay zones (or other reserves) and to improve oil field management (e.g. perforating, drilling, EOR and reserves estimation).
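A Crank-Nicolson step of the kind mentioned is sketched below for 1D pressure diffusion; the grid, diffusivity and boundary pressures are invented for illustration.

import numpy as np

# p_t = D * p_xx, advanced with (I - r*L) p_new = (I + r*L) p_old,
# where L is the second-difference operator and r = D*dt/(2*dx^2).
n, D, dx, dt = 50, 1.0, 1.0, 0.5
r = D * dt / (2 * dx**2)
L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
Aimp = np.eye(n) - r * L
Aexp = np.eye(n) + r * L

# Dirichlet rows: fixed pressure 1 at the left boundary, 0 at the right
Aimp[0, :] = 0.0;  Aimp[0, 0] = 1.0
Aimp[-1, :] = 0.0; Aimp[-1, -1] = 1.0

p = np.zeros(n); p[0] = 1.0
for _ in range(100):
    rhs = Aexp @ p
    rhs[0], rhs[-1] = 1.0, 0.0
    p = np.linalg.solve(Aimp, rhs)

print("pressure profile head/tail:", p[:3], p[-3:])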
Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio
2015-09-15
Purpose: With recent advancement in hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired at a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. Methods: In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV–L1 model-based inversion is tested in the cross section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. Results: In all cases, model-based TV–L1 inversion showed a better performance over the conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV–L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV–L1 inversion yielded sharper images and weaker streak artifact. Conclusions: The results herein show that TV–L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV–L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.
Recent advances in computational-analytical integral transforms for convection-diffusion problems
NASA Astrophysics Data System (ADS)
Cotta, R. M.; Naveira-Cotta, C. P.; Knupp, D. C.; Zotin, J. L. Z.; Pontes, P. C.; Almeida, A. P.
2017-10-01
A unifying overview of the Generalized Integral Transform Technique (GITT) as a computational-analytical approach for solving convection-diffusion problems is presented. This work aims at bringing together some of the most recent developments on both accuracy and convergence improvements of this well-established hybrid numerical-analytical methodology for partial differential equations. Special emphasis is given to novel algorithm implementations, all directly connected to enhancing the eigenfunction expansion basis, such as a single-domain reformulation strategy for handling complex geometries, an integral balance scheme for dealing with multiscale problems, the adoption of convective eigenvalue problems in formulations with significant convection effects, and the direct integral transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Then, selected examples are presented that illustrate the improvement achieved in each class of extension, in terms of convergence acceleration and accuracy gain, related to conjugated heat transfer in complex or multiscale microchannel-substrate geometries, the multidimensional Burgers equation model, and diffusive metal extraction through polymeric hollow fiber membranes. Numerical results are reported for each application and, where appropriate, critically compared against the traditional GITT scheme without convergence enhancement and against commercial or dedicated purely numerical approaches.
Nivala, Michael; de Lange, Enno; Rovetti, Robert; Qu, Zhilin
2012-01-01
Intracellular calcium (Ca) cycling dynamics in cardiac myocytes is regulated by a complex network of spatially distributed organelles, such as the sarcoplasmic reticulum (SR), mitochondria, and myofibrils. In this study, we present a mathematical model of intracellular Ca cycling together with numerical and computational methods for computer simulation. The model consists of a coupled Ca release unit (CRU) network, which includes an SR domain and a myoplasm domain. Each CRU contains 10 L-type Ca channels and 100 ryanodine receptor channels, with individual channels simulated stochastically using a variant of Gillespie’s method, modified here to handle time-dependent transition rates. Both the SR domain and the myoplasm domain in each CRU are modeled by 5 × 5 × 5 voxels to maintain proper Ca diffusion. Advanced numerical algorithms implemented on graphics processing units were used for fast computational simulations. For a myocyte containing 100 × 20 × 10 CRUs, simulating 1 s of heart time takes about 10 min of machine time on a single NVIDIA Tesla C2050. Examples of simulated Ca cycling dynamics, such as Ca sparks, Ca waves, and Ca alternans, are shown. PMID:22586402
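Time-dependent transition rates in Gillespie-type simulation are commonly handled by thinning; the sketch below applies Ogata/Lewis-style thinning to a two-state channel with a sinusoidally driven opening rate. The rates are illustrative, and the model is far simpler than the CRU network above.

import numpy as np

rng = np.random.default_rng(0)

def k_open(t):  return 2.0 + 1.5 * np.sin(2 * np.pi * t)   # time-dependent rate
K_CLOSE = 1.0
RATE_MAX = 4.5                          # upper bound on any total rate

t, state, T_END = 0.0, 0, 5.0           # state: 0 = closed, 1 = open
events = []
while True:
    t += rng.exponential(1.0 / RATE_MAX)        # candidate event time
    if t > T_END:
        break
    rate = k_open(t) if state == 0 else K_CLOSE
    if rng.random() < rate / RATE_MAX:          # thinning: accept or reject
        state ^= 1
        events.append((round(t, 3), state))

print(f"{len(events)} transitions; last few: {events[-3:]}")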
NASA Astrophysics Data System (ADS)
Haworth, Daniel
2013-11-01
The importance of explicitly accounting for the effects of unresolved turbulent fluctuations in Reynolds-averaged and large-eddy simulations of chemically reacting turbulent flows is increasingly recognized. Transported probability density function (PDF) methods have emerged as one of the most promising modeling approaches for this purpose. In particular, PDF methods provide an elegant and effective resolution to the closure problems that arise from averaging or filtering terms that correspond to nonlinear point processes, including chemical reaction source terms and radiative emission. PDF methods traditionally have been associated with studies of turbulence-chemistry interactions in laboratory-scale, atmospheric-pressure, nonluminous, statistically stationary nonpremixed turbulent flames; and Lagrangian particle-based Monte Carlo numerical algorithms have been the predominant method for solving modeled PDF transport equations. Recent advances and trends in PDF methods are reviewed and discussed. These include advances in particle-based algorithms, alternatives to particle-based algorithms (e.g., Eulerian field methods), treatment of combustion regimes beyond low-to-moderate-Damköhler-number nonpremixed systems (e.g., premixed flamelets), extensions to include radiation heat transfer and multiphase systems (e.g., soot and fuel sprays), and the use of PDF methods as the basis for subfilter-scale modeling in large-eddy simulation. Examples are provided that illustrate the utility and effectiveness of PDF methods for physics discovery and for applications to practical combustion systems. These include comparisons of results obtained using the PDF method with those from models that neglect unresolved turbulent fluctuations in composition and temperature in the averaged or filtered chemical source terms and/or the radiation heat transfer source terms. In this way, the effects of turbulence-chemistry-radiation interactions can be isolated and quantified.
Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu
2015-11-11
Aiming to address the high computational cost of the traditional Kalman filter in SINS/GPS integration, a practical optimization algorithm with offline derivation and parallel processing, based on the numerical characteristics of the system, is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so that plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring the calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, as a numerical approach the method needs no precision-losing transformation or approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
Firefly Algorithm, Lévy Flights and Global Optimization
NASA Astrophysics Data System (ADS)
Yang, Xin-She
Nature-inspired algorithms such as Particle Swarm Optimization and the Firefly Algorithm are among the most powerful algorithms for optimization. In this paper, we formulate a new metaheuristic algorithm by combining Lévy flights with the search strategy of the Firefly Algorithm. Numerical studies and results suggest that the proposed Lévy-flight firefly algorithm is superior to existing metaheuristic algorithms. Finally, implications for further research and wider applications are discussed.
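A minimal sketch of the combination the abstract describes: the standard firefly attraction move plus Lévy-distributed random steps drawn with Mantegna's algorithm. Parameter values are common illustrative defaults, not those of the paper.

```python
import numpy as np
from math import gamma, sin, pi

def levy_steps(dim, beta, rng):
    """Mantegna's algorithm for symmetric Levy-stable steps of index beta."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def levy_firefly(f, lo, hi, n=25, iters=200, beta0=1.0, gam=1.0,
                 alpha=0.1, levy_beta=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, len(lo)))
    val = np.apply_along_axis(f, 1, x)
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if val[j] < val[i]:              # firefly j is brighter
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    x[i] += (beta0 * np.exp(-gam * r2) * (x[j] - x[i])
                             + alpha * levy_steps(len(lo), levy_beta, rng))
                    x[i] = np.clip(x[i], lo, hi)
                    val[i] = f(x[i])
    k = np.argmin(val)
    return x[k], val[k]

# Toy test: minimize the 2-D sphere function.
xb, fb = levy_firefly(lambda x: float(np.sum(x ** 2)),
                      np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
print("best point:", xb, "value:", fb)
```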
Multispectral image analysis for object recognition and classification
NASA Astrophysics Data System (ADS)
Viau, C. R.; Payeur, P.; Cretu, A.-M.
2016-05-01
Computer and machine vision applications are used in numerous fields to analyze static and dynamic imagery in order to assist or automate decision-making processes. Advancements in sensor technologies now make it possible to capture and visualize imagery at various wavelengths (or bands) of the electromagnetic spectrum. Multispectral imaging has countless applications in various fields including (but not limited to) security, defense, space, medical, manufacturing and archeology. The development of advanced algorithms to process and extract salient information from the imagery is a critical component of the overall system performance. The fundamental objective of this research project was to investigate the benefits of combining imagery from the visual and thermal bands of the electromagnetic spectrum to improve the recognition rates and accuracy of commonly found objects in an office setting. A multispectral dataset (visual and thermal) was captured and features from the visual and thermal images were extracted and used to train support vector machine (SVM) classifiers. The SVM's class prediction ability was evaluated separately on the visual, thermal and multispectral testing datasets.
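As a hedged illustration of the feature-level fusion described above, the sketch below concatenates per-band feature vectors and trains SVMs with scikit-learn, comparing a visual-only classifier against a fused one. The arrays stand in for extracted descriptors; the paper's actual features, dataset, and kernel settings are not reproduced.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 400
visual = rng.normal(size=(n, 64))     # stand-in visual-band descriptors
thermal = rng.normal(size=(n, 16))    # stand-in thermal-band descriptors
# Synthetic ground truth that depends on both bands, so fusion helps.
labels = (visual[:, 0] + 1.5 * thermal[:, 0] > 0).astype(int)

fused = np.hstack([visual, thermal])  # feature-level multispectral fusion
for name, feats in (("visual only", visual), ("visual+thermal", fused)):
    X_tr, X_te, y_tr, y_te = train_test_split(feats, labels,
                                              test_size=0.3, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(clf.score(X_te, y_te), 3))
```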
Dual-TRACER: High resolution fMRI with constrained evolution reconstruction.
Li, Xuesong; Ma, Xiaodong; Li, Lyu; Zhang, Zhe; Zhang, Xue; Tong, Yan; Wang, Lihong; Sen Song; Guo, Hua
2018-01-01
fMRI with high spatial resolution is beneficial for studies in psychology and neuroscience, but is limited by various factors such as prolonged imaging time, low signal-to-noise ratio, and scarcity of advanced facilities. Compressed Sensing (CS) based methods for accelerating fMRI data acquisition are promising, and other advanced algorithms such as k-t FOCUSS and PICCS have been developed to improve performance. This study investigates a new method, Dual-TRACER, based on Temporal Resolution Acceleration with Constrained Evolution Reconstruction (TRACER), for accelerating fMRI acquisitions using a golden-angle variable-density spiral. Both numerical simulations and in vivo experiments at 3T were conducted to evaluate and characterize this method. Results show that Dual-TRACER can provide functional images with a high spatial resolution (1 × 1 mm²) under an acceleration factor of 20 while maintaining hemodynamic signals well. Compared with the other investigated methods, Dual-TRACER provides better signal recovery, higher fMRI sensitivity, and more reliable activation detection.
McIDAS-V: A Data Analysis and Visualization Tool for Global Satellite Data
NASA Astrophysics Data System (ADS)
Achtor, T. H.; Rink, T. D.
2011-12-01
The Man-computer Interactive Data Access System (McIDAS-V) is a Java-based, open-source, freely available system for scientists, researchers and algorithm developers working with atmospheric data. The McIDAS-V software tools provide powerful new data manipulation and visualization capabilities, including 4-dimensional displays, an abstract data model with integrated metadata, user-defined computation, and a powerful scripting capability. As such, McIDAS-V is a valuable tool for scientists and researchers within the GEO and GEOSS domains. The advancing polar and geostationary orbit environmental satellite missions conducted by several countries will carry advanced instrumentation and systems that will collect and distribute land, ocean, and atmosphere data. These systems provide atmospheric and sea surface temperatures, humidity soundings, cloud and aerosol properties, and numerous other environmental products. This presentation will display and demonstrate some of the capabilities of McIDAS-V to analyze and display high temporal and spectral resolution data, using examples from international environmental satellites.
Qiao, Guixiu; Weiss, Brian A.
2016-01-01
Unexpected equipment downtime is a 'pain point' for manufacturers, especially in that this event usually translates to financial losses. To minimize this pain point, manufacturers are developing new health monitoring, diagnostic, prognostic, and maintenance (collectively known as prognostics and health management (PHM)) techniques to advance the state-of-the-art in their maintenance strategies. The manufacturing community has a wide range of needs with respect to the advancement and integration of PHM technologies to enhance manufacturing robotic system capabilities. Numerous researchers, including personnel from the National Institute of Standards and Technology (NIST), have identified a broad landscape of barriers and challenges to advancing PHM technologies. One such challenge is the verification and validation of PHM technology through the development of performance metrics, test methods, reference datasets, and supporting tools. Besides documenting and presenting the research landscape, NIST personnel are actively researching PHM for robotics to promote the development of innovative sensing technology and prognostic decision algorithms and to produce a positional accuracy test method that emphasizes the identification of static and dynamic positional accuracy. The test method development will provide manufacturers with a methodology that will allow them to quickly assess the positional health of their robot systems along with supporting the verification and validation of PHM techniques for the robot system.
Topology of large-scale structure. IV - Topology in two dimensions
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Cohen, Alexander P.; Hamilton, Andrew J. S.; Gott, J. Richard, III; Weinberg, David H.
1989-01-01
In a recent series of papers, an algorithm was developed for quantitatively measuring the topology of the large-scale structure of the universe and this algorithm was applied to numerical models and to three-dimensional observational data sets. In this paper, it is shown that topological information can be derived from a two-dimensional cross section of a density field, and analytic expressions are given for a Gaussian random field. The application of a two-dimensional numerical algorithm for measuring topology to cross sections of three-dimensional models is demonstrated.
Interior point techniques for LP and NLP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evtushenko, Y.
By using a surjective mapping, the initial constrained optimization problem is transformed to a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After inverse transformation to the initial space we obtain a family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case, after some simplification, we obtain Dikin's algorithm, the affine scaling algorithm, and a generalized primal-dual interior point linear programming algorithm.
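For the linear programming case mentioned above, a minimal sketch of Dikin-style affine scaling for a standard-form LP (min cᵀx subject to Ax = b, x > 0) is given below. It assumes a strictly feasible starting point is supplied and omits the stopping tests a production solver would need.

```python
import numpy as np

def affine_scaling(A, c, x0, step=0.9, iters=50):
    """Dikin's affine scaling. x0 must satisfy A x0 = b with x0 > 0;
    every step keeps A dx = 0, so feasibility is preserved."""
    x = x0.astype(float)
    for _ in range(iters):
        X2 = np.diag(x ** 2)
        y = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)   # dual estimate
        z = c - A.T @ y                                 # reduced cost
        dx = -X2 @ z                # steepest descent in the scaled space
        neg = dx < -1e-12
        if not neg.any():
            break                   # (near-)optimal or unbounded
        alpha = step * np.min(-x[neg] / dx[neg])        # stay interior
        x = x + alpha * dx
    return x

# Toy LP: min x1 + 2*x2  s.t.  x1 + x2 + s = 4, x >= 0; optimum (0, 0, 4).
A = np.array([[1.0, 1.0, 1.0]])
c = np.array([1.0, 2.0, 0.0])
print(affine_scaling(A, c, x0=np.array([1.0, 1.0, 2.0])))
```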
Na, X D; Zang, S Y; Wu, C S; Li, W L
2015-11-01
Knowledge of the spatial extent of forested wetlands is essential to many studies, including wetland functional assessment, greenhouse gas flux estimation, and identification of suitable wildlife habitat. To discriminate forested wetlands from adjacent land cover types, researchers have applied image analysis techniques to numerous remotely sensed data sets. While these have had some success, there is still no consensus on the optimal approach for mapping forested wetlands. To address this problem, we examined two machine learning approaches, the random forest (RF) and K-nearest neighbor (KNN) algorithms, and applied them within both pixel-based and object-based classification frameworks. The RF and KNN classifiers were constructed using predictors derived from Landsat 8 imagery, Radarsat-2 advanced synthetic aperture radar (SAR), and topographical indices. The results show that the object-based classifications performed better than the per-pixel classifications using the same algorithm (RF) in terms of overall accuracy, and the difference between their kappa coefficients is statistically significant (p<0.01). There were noticeable omissions of forested and herbaceous wetlands in the per-pixel classifications using the RF algorithm. For the object-based image analysis, there were also statistically significant differences (p<0.01) in kappa coefficient between results based on the RF and KNN algorithms. The object-based classification using RF provided a more visually adequate distribution of the land cover types of interest, while the object-based classification using KNN showed noticeable commissions of forested wetlands and omissions of agricultural land. This research shows that object-based classification with RF using optical, radar, and topographical data improves land cover mapping accuracy and provides a feasible approach to discriminating forested wetlands from other land cover types in forested areas.
Sella, Francesco; Sader, Elie; Lolliot, Simon; Cohen Kadosh, Roi
2016-09-01
Recent studies have highlighted the potential role of basic numerical processing in the acquisition of numerical and mathematical competences. However, it is debated whether high-level numerical skills and mathematics depend specifically on basic numerical representations. In this study, mathematicians and nonmathematicians performed a basic number line task, which required mapping positive and negative numbers on a physical horizontal line, and which has been shown to correlate with more advanced numerical abilities and mathematical achievement. We found that mathematicians were more accurate than nonmathematicians when mapping positive, but not negative, numbers, which are considered numerical primitives and cultural artifacts, respectively. Moreover, performance on positive number mapping could predict whether one is a mathematician or not, and was mediated by more advanced mathematical skills. This finding might suggest a link between basic and advanced mathematical skills. However, when we included visuospatial skills, as measured by the block design subtest, the mediation analysis revealed that the relation between performance in the number line task and group membership was explained by non-numerical visuospatial skills. These results demonstrate that the relation between basic, even specific, numerical skills and advanced mathematical achievement can be artifactual and explained by visuospatial processing.
Number Partitioning via Quantum Adiabatic Computation
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadim N.; Toussaint, Udo
2002-01-01
We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
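The qualitative picture can be checked directly at small sizes: build the interpolating Hamiltonian H(s) = (1-s)H_B + sH_P for a set partition instance, with H_P the square of the weighted spin sum, and scan the gap between the two lowest levels. The sketch below pins the first spin to remove the trivial global-flip degeneracy; the instance and the grid over s are illustrative.

```python
import numpy as np
from functools import reduce

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def on_qubit(op, i, n):
    """Embed a single-qubit operator at site i of an n-qubit register."""
    return reduce(np.kron, [op if k == i else I2 for k in range(n)])

def min_gap(numbers, n_s=101):
    """Minimal gap of H(s) = (1-s)*H_B + s*H_P over s in [0, 1], where
    H_P = (sum_i a_i sz_i)^2 encodes set partitioning. The first spin is
    pinned 'up' to break the global spin-flip symmetry."""
    a0, rest = numbers[0], numbers[1:]
    n = len(rest)
    HB = -sum(on_qubit(sx, i, n) for i in range(n))
    Sz = a0 * np.eye(2 ** n) + sum(a * on_qubit(sz, i, n)
                                   for i, a in enumerate(rest))
    HP = Sz @ Sz
    return min(np.diff(np.linalg.eigvalsh((1 - s) * HB + s * HP)[:2])[0]
               for s in np.linspace(0.0, 1.0, n_s))

print(min_gap([3.0, 1.1, 0.9, 2.2]))   # a small illustrative instance
```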
Progress on a Taylor weak statement finite element algorithm for high-speed aerodynamic flows
NASA Technical Reports Server (NTRS)
Baker, A. J.; Freels, J. D.
1989-01-01
A new finite element numerical Computational Fluid Dynamics (CFD) algorithm has matured to the point of efficiently solving two-dimensional high speed real-gas compressible flow problems in generalized coordinates on modern vector computer systems. The algorithm employs a Taylor Weak Statement classical Galerkin formulation, a variably implicit Newton iteration, and a tensor matrix product factorization of the linear algebra Jacobian under a generalized coordinate transformation. Allowing for a general two-dimensional conservation law system, the algorithm has been exercised on the Euler and laminar forms of the Navier-Stokes equations. Real-gas fluid properties are admitted, and numerical results verify solution accuracy, efficiency, and stability over a range of test problem parameters.
ERIC Educational Resources Information Center
Guerrero, Lourdes; Rivera, Antonio
Fourteen third graders were given numerical computation and division-with-remainder (DWR) problems both before and after they were taught the division algorithm in classrooms. Their solutions were examined. The results show that students' initial acquisition of the division algorithm did improve their performance in numerical division computations…
Translation and integration of numerical atomic orbitals in linear molecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinäsmäki, Sami, E-mail: sami.heinasmaki@gmail.com
2014-02-14
We present algorithms for translation and integration of atomic orbitals for LCAO calculations in linear molecules. The method applies to arbitrary radial functions given on a numerical mesh. The algorithms are based on pseudospectral differentiation matrices in two dimensions and the corresponding two-dimensional Gaussian quadratures. As a result, multicenter overlap and Coulomb integrals can be evaluated effectively.
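The paper's two-dimensional matrices are not reproduced here, but the basic ingredient, a pseudospectral differentiation matrix, is compact enough to sketch in one dimension. Below is the standard Chebyshev construction (Trefethen's cheb program), which differentiates smooth grid functions to near machine precision with a handful of nodes.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and nodes x on [-1, 1]
    (Trefethen, Spectral Methods in MATLAB, program cheb)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # negative-sum trick for diagonal
    return D, x

# Differentiating exp(x) on 17 nodes is accurate to ~1e-10.
D, x = cheb(16)
print("max error:", np.max(np.abs(D @ np.exp(x) - np.exp(x))))
```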
Numerical simulation of disperse particle flows on a graphics processing unit
NASA Astrophysics Data System (ADS)
Sierakowski, Adam J.
In both nature and technology, we commonly encounter solid particles being carried within fluid flows, from dust storms to sediment erosion and from food processing to energy generation. The motion of uncountably many particles in highly dynamic flow environments characterizes the tremendous complexity of such phenomena. While methods exist for the full-scale numerical simulation of such systems, current computational capabilities require the simplification of the numerical task with significant approximation using closure models widely recognized as insufficient. There is therefore a fundamental need for the investigation of the underlying physical processes governing these disperse particle flows. In the present work, we develop a new tool based on the Physalis method for the first-principles numerical simulation of thousands of particles (a small fraction of an entire disperse particle flow system) in order to assist in the search for new reduced-order closure models. We discuss numerous enhancements to the efficiency and stability of the Physalis method, which introduces the influence of spherical particles to a fixed-grid incompressible Navier-Stokes flow solver using a local analytic solution to the flow equations. Our first-principles investigation demands the modeling of unresolved length and time scales associated with particle collisions. We introduce a collision model alongside Physalis, incorporating lubrication effects and proposing a new nonlinearly damped Hertzian contact model. By reproducing experimental studies from the literature, we document extensive validation of the methods. We discuss the implementation of our methods for massively parallel computation using a graphics processing unit (GPU). We combine Eulerian grid-based algorithms with Lagrangian particle-based algorithms to achieve computational throughput up to 90 times faster than the legacy implementation of Physalis for a single central processing unit. By avoiding all data communication between the GPU and the host system during the simulation, we utilize with great efficacy the GPU hardware with which many high performance computing systems are currently equipped. We conclude by looking forward to the future of Physalis with multi-GPU parallelization in order to perform resolved disperse flow simulations of more than 100,000 particles and further advance the development of reduced-order closure models.
NSR&D FY17 Report: CartaBlanca Capability Enhancements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Christopher Curtis; Dhakal, Tilak Raj; Zhang, Duan Zhong
Over the last several years, particle technology in the CartaBlanca code has been matured and has been successfully applied to a wide variety of physical problems. It has been shown that the particle methods, especially Los Alamos's dual domain material point method, is capable of computing many problems involves complex physics, chemistries accompanied by large material deformations, where the traditional finite element or Eulerian method encounter significant difficulties. In FY17, the CartaBlanca code has been enhanced with physical models and numerical algorithms. We started out to compute penetration and HE safety problems. Most of the year we focused on themore » TEPLA model improvement testing against the sweeping wave experiment by Gray et al., because it was found that pore growth and material failure are essentially important for our tasks and needed to be understood for modeling the penetration and the can experiments efficiently. We extended the TEPLA mode from the point view of ensemble phase average to include the effects of nite deformation. It is shown that the assumed pore growth model in TEPLA is actually an exact result from the theory. Alone this line, we then generalized the model to include finite deformations to consider nonlinear dynamics of large deformation. The interaction between the HE product gas and the solid metal is based on the multi-velocity formation. Our preliminary numerical results suggest good agreement between the experiment and the numerical results, pending further verification. To improve the parallel processing capabilities of the CartaBlanca code, we are actively working with the Next Generation Code (NGC) project to rewrite selected packages using C++. This work is expected to continue in the following years. This effort also makes the particle technology developed with CartaBlanca project available to other part of the laboratory. Working with the NGC project and rewriting some parts of the code also given us an opportunity to improve our numerical implementations of the method and to take advantage of recently advances in the numerical methods, such as multiscale algorithms.« less
Silletta, Emilia V; Franzoni, María B; Monti, Gustavo A; Acosta, Rodolfo H
2018-01-01
Two-dimensional (2D) Nuclear Magnetic Resonance relaxometry experiments are a powerful tool, extensively used to probe the interaction among different pore structures, mostly in inorganic systems. Analysis of the collected experimental data generally consists of a 2D numerical inversion of time-domain data from which T2-T2 maps are generated. Through the years, different algorithms for the numerical inversion have been proposed. In this paper, two algorithms for numerical inversion are tested and compared under different conditions of exchange dynamics: the method based on the Butler-Reeds-Dawson (BRD) algorithm and the fast iterative shrinkage-thresholding algorithm (FISTA). By constructing a theoretical model, the algorithms were tested on two- and three-site porous media, varying the exchange rate parameters, the pore sizes, and the signal-to-noise ratio. To test the methods under realistic experimental conditions, a challenging organic system was chosen: the molecular exchange rates of water confined in hierarchical porous polymeric networks were obtained for two- and three-site porous media. Data processed with the BRD method were found to be accurate only under certain conditions of the exchange parameters, while data processed with the FISTA method were precise for all the studied parameters except under extreme SNR conditions.
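The full 2D T2-T2 inversion is too large to reproduce here, but the FISTA machinery being compared is easy to sketch in one dimension: recover a nonnegative relaxation-time distribution from multiexponential decay data. The kernel, regularization weight, and test distribution are illustrative, not the paper's.

```python
import numpy as np

def fista_nonneg(K, d, lam=0.05, iters=500):
    """FISTA for min_f 0.5*||K f - d||^2 + lam*sum(f) subject to f >= 0
    (accelerated proximal gradient with a nonnegativity prox)."""
    L = np.linalg.norm(K, 2) ** 2          # Lipschitz constant of gradient
    f = np.zeros(K.shape[1]); y = f.copy(); t = 1.0
    for _ in range(iters):
        grad = K.T @ (K @ y - d)
        f_new = np.maximum(y - (grad + lam) / L, 0.0)   # prox step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = f_new + ((t - 1.0) / t_new) * (f_new - f)   # momentum step
        f, t = f_new, t_new
    return f

# Toy 1-D relaxometry: d(t) = sum_j f_j exp(-t / T2_j) + noise.
t = np.linspace(0.001, 1.0, 200)
T2 = np.logspace(-3, 0, 80)
K = np.exp(-t[:, None] / T2[None, :])
f_true = np.exp(-0.5 * ((np.log10(T2) + 1.5) / 0.1) ** 2)  # one peak
d = K @ f_true + 0.01 * np.random.default_rng(0).normal(size=t.size)
f_hat = fista_nonneg(K, d)
print("recovered peak near T2 =", T2[np.argmax(f_hat)])
```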
Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction
NASA Astrophysics Data System (ADS)
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-11-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. This characterization of the solution via proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the PAPA theoretically. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms the EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality.
An Algorithm For Climate-Quality Atmospheric Profiling Continuity From EOS Aqua To Suomi-NPP
NASA Astrophysics Data System (ADS)
Moncet, J. L.
2015-12-01
We will present results from an algorithm that is being developed to produce climate-quality atmospheric profiling earth system data records (ESDRs) for application to hyperspectral sounding instrument data from Suomi-NPP, EOS Aqua, and other spacecraft. The current focus is on data from the S-NPP Cross-track Infrared Sounder (CrIS) and Advanced Technology Microwave Sounder (ATMS) instruments as well as the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua. The algorithm development at Atmospheric and Environmental Research (AER) has common heritage with the optimal estimation (OE) algorithm operationally processing S-NPP data in the Interface Data Processing Segment (IDPS), but the ESDR algorithm has a flexible, modular software structure to support experimentation and collaboration and has several features adapted to the climate orientation of ESDRs. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. The radiative transfer component uses an enhanced version of optimal spectral sampling (OSS) with updated spectroscopy, treatment of emission that is not in local thermodynamic equilibrium (non-LTE), efficiency gains with "global" optimal sampling over all channels, and support for channel selection. The algorithm is designed for adaptive treatment of clouds, with capability to apply "cloud clearing" or simultaneous cloud parameter retrieval, depending on conditions. We will present retrieval results demonstrating the impact of a new capability to perform the retrievals on sigma or hybrid vertical grid (as opposed to a fixed pressure grid), which particularly affects profile accuracy over land with variable terrain height and with sharp vertical structure near the surface. In addition, we will show impacts of alternative treatments of regularization of the inversion. While OE algorithms typically implement regularization by using background estimates from climatological or numerical forecast data, those sources are problematic for climate applications due to the imprint of biases from past climate analyses or from model error.
NASA Astrophysics Data System (ADS)
Lee, Sangkyu
Illicit trafficking and smuggling of radioactive materials and special nuclear materials (SNM) are considered among the most important recent global nuclear threats. Monitoring the transport and safety of radioisotopes and SNM is challenging due to their weak signals and easy shielding. Great efforts worldwide are focused on developing and improving detection technologies and algorithms for accurate and reliable detection of radioisotopes of interest, thus better securing borders against nuclear threats. In general, radiation portal monitors enable detection of gamma- and neutron-emitting radioisotopes. Passive and active interrogation techniques, existing and under development, all aim at increasing accuracy and reliability and at shortening the time of interrogation as well as reducing the cost of the equipment. Equally important efforts are aimed at advancing algorithms to process imaging data efficiently, providing reliable "readings" of the interiors of examined volumes of various sizes, ranging from cargo containers to suitcases. The main objective of this thesis is to develop two synergistic algorithms with the goal of providing highly reliable, low-noise identification of radioisotope signatures. These algorithms combine analysis from a passive radioactive detection technique with active interrogation imaging techniques such as gamma radiography or muon tomography. One algorithm combines gamma spectroscopy and cosmic muon tomography; the other combines gamma spectroscopy and gamma radiography. The purpose of fusing two detection methodologies per algorithm is to find both high-Z radioisotopes and shielding materials, since radionuclides can be identified with gamma spectroscopy, and shielding materials can be detected using muon tomography or gamma radiography. These combined algorithms are created and analyzed based on numerically generated images of various cargo sizes and materials. In summary, the three detection methodologies are fused into two algorithms with mathematical functions providing reliable identification of radioisotopes in gamma spectroscopy, noise reduction and precision enhancement in muon tomography, and atomic number and density estimation in gamma radiography. It is expected that these new algorithms may be implemented in portal scanning systems to enhance the accuracy and reliability of detecting nuclear materials inside cargo containers.
Sokol, Serguei; Millard, Pierre; Portais, Jean-Charles
2012-03-01
The problem of stationary metabolic flux analysis based on isotope labelling experiments first appeared in the early 1950s and was basically solved in the early 2000s. Several algorithms and software packages are available for this problem. However, the generic stochastic algorithms (simulated annealing or evolutionary algorithms) currently used in these software packages require a lot of time to achieve acceptable precision. For deterministic algorithms, a common drawback is the lack of convergence stability for ill-conditioned systems or when started from a random point. In this article, we present a new deterministic algorithm with significantly increased numerical stability and accuracy of flux estimation compared with commonly used algorithms. It requires relatively little CPU time (from several seconds to several minutes on a standard PC architecture) to estimate fluxes in the central carbon metabolism network of Escherichia coli. The software package influx_s implementing this algorithm is distributed under an open-source licence at http://metasys.insa-toulouse.fr/software/influx/. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Chen, Jiaoxuan; Zhang, Maomao; Liu, Yinyan; Chen, Jiaoliao; Li, Yi
2017-03-01
Electrical capacitance tomography (ECT) is a promising technique applied in many fields. However, the solutions for ECT are not unique and are highly sensitive to measurement noise. To preserve the shape of the reconstructed object while tolerating noisy data, a Rudin-Osher-Fatemi (ROF) model with total variation regularization is applied to image reconstruction in ECT. Two numerical methods, simplified augmented Lagrangian (SAL) and accelerated alternating direction method of multipliers (AADMM), are introduced to address the above-mentioned problems in ECT. The effects of the parameters and the number of iterations for the different algorithms, and of the noise level in the capacitance data, are discussed. Both simulation and experimental tests were carried out to validate the feasibility of the proposed algorithms, in comparison with the Landweber iteration (LI) algorithm. The results show that the SAL and AADMM algorithms can handle a high level of noise, and that the AADMM algorithm outperforms the others in identifying the object against its background.
A model-based 3D template matching technique for pose acquisition of an uncooperative space object.
Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele
2015-03-16
This paper presents a customized three-dimensional template matching technique for autonomous pose determination of uncooperative targets. This topic is relevant to advanced space applications, like active debris removal and on-orbit servicing. The proposed technique is model-based and produces estimates of the target pose without any prior pose information, by processing three-dimensional point clouds provided by a LIDAR. These estimates are then used to initialize a pose tracking algorithm. Peculiar features of the proposed approach are the use of a reduced number of templates and the idea of building the database of templates on-line, thus significantly reducing the amount of on-board stored data with respect to traditional techniques. An algorithm variant is also introduced aimed at further accelerating the pose acquisition time and reducing the computational cost. Technique performance is investigated within a realistic numerical simulation environment comprising a target model, LIDAR operation and various target-chaser relative dynamics scenarios, relevant to close-proximity flight operations. Specifically, the capability of the proposed techniques to provide a pose solution suitable to initialize the tracking algorithm is demonstrated, as well as their robustness against highly variable pose conditions determined by the relative dynamics. Finally, a criterion for autonomous failure detection of the presented techniques is presented.
NASA Astrophysics Data System (ADS)
Didari, Azadeh; Pinar Mengüç, M.
2017-08-01
Advances in nanotechnology and nanophotonics are inextricably linked with the need for reliable computational algorithms to be adapted as design tools for the development of new concepts in energy harvesting, radiative cooling, nanolithography and nano-scale manufacturing, among others. In this paper, we provide an outline for such a computational tool, named NF-RT-FDTD, to determine the near-field radiative transfer between structured surfaces using the Finite Difference Time Domain (FDTD) method. NF-RT-FDTD is a direct and non-stochastic algorithm, which accounts for the statistical nature of thermal radiation and is easily applicable to any arbitrary geometry at thermal equilibrium. We present a review of the fundamental relations for far- and near-field radiative transfer between different geometries with nano-scale surface and volumetric features and gaps, and then we discuss the details of the NF-RT-FDTD formulation, its application to sample geometries, and its future expansion to more complex geometries. In addition, we briefly discuss some recent numerical works for direct and indirect calculation of near-field thermal radiation transfer, including the Scattering Matrix method, the FDTD method, Wiener Chaos Expansion, Fluctuating Surface Current (FSC), Fluctuating Volume Current (FVC) and Thermal Discrete Dipole Approximation (TDDA) methods.
NASA Astrophysics Data System (ADS)
Morfa, Carlos Recarey; Farias, Márcio Muniz de; Morales, Irvin Pablo Pérez; Navarra, Eugenio Oñate Ibañez de; Valera, Roberto Roselló
2018-04-01
The influence of the microstructural heterogeneities is an important topic in the study of materials. In the context of computational mechanics, it is therefore necessary to generate virtual materials that are statistically equivalent to the microstructure under study, and to connect that geometrical description to the different numerical methods. Herein, the authors present a procedure to model continuous solid polycrystalline materials, such as rocks and metals, preserving their representative statistical grain size distribution. The first phase of the procedure consists of segmenting an image of the material into adjacent polyhedral grains representing the individual crystals. This segmentation allows estimating the grain size distribution, which is used as the input for an advancing front sphere packing algorithm. Finally, Laguerre diagrams are calculated from the obtained sphere packings. The centers of the spheres give the centers of the Laguerre cells, and their radii determine the cells' weights. The cell sizes in the obtained Laguerre diagrams have a distribution similar to that of the grains obtained from the image segmentation. That is why those diagrams are a convenient model of the original crystalline structure. The above-outlined procedure has been used to model real polycrystalline metallic materials. The main difference with previously existing methods lies in the use of a better particle packing algorithm.
NASA Astrophysics Data System (ADS)
Yang, Chen; Zhang, Xuepan; Huang, Xiaoqi; Cheng, ZhengAi; Zhang, Xinghua; Hou, Xinbin
2017-11-01
The concept of the space solar power satellite (SSPS) is an advanced system for collecting solar energy in space and transmitting it wirelessly to earth. However, due to the long service life, in-orbit damage may occur in the structural system of an SSPS. Therefore, sensor placement layouts for structural health monitoring should be considered from the outset in this concept. In this paper, an optimal sensor placement method based on a genetic algorithm is proposed for health monitoring of the deployable antenna module in an SSPS. According to the characteristics of the deployable antenna module, candidate sensor placement designs are listed. Furthermore, based on the effective independence method and an effective interval index, a combined fitness function is defined to maximize linear independence of the targeted modes while avoiding redundant information from nearby positions. In addition, another fitness function is constituted by considering the reliability of sensors located at the deployable mechanisms. Moreover, the solution process for optimal sensor placement using the genetic algorithm is clearly demonstrated. Finally, a numerical example of the sensor placement layout in a deployable antenna module of an SSPS, synthetically considering all the above-mentioned criteria, is presented. The results illustrate the effectiveness and feasibility of the proposed sensor placement method for SSPS.
NASA Technical Reports Server (NTRS)
Bui, Trong T.; Mankbadi, Reda R.
1995-01-01
Numerical simulation of a very small amplitude acoustic wave interacting with a shock wave in a quasi-1D convergent-divergent nozzle is performed using an unstructured finite volume algorithm with piecewise-linear, least-squares reconstruction, Roe flux difference splitting, and second-order MacCormack time marching. First, the spatial accuracy of the algorithm is evaluated for steady flows with and without the normal shock by running the simulation with a sequence of successively finer meshes. Then the accuracy of the Roe flux difference splitting near the sonic transition point is examined for different reconstruction schemes. Finally, the unsteady numerical solutions with the acoustic perturbation are presented and compared with linear theory results.
A multistage time-stepping scheme for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, E.
1985-01-01
A class of explicit multistage time-stepping schemes is used to construct an algorithm for solving the compressible Navier-Stokes equations. Flexibility in treating arbitrary geometries is obtained with a finite-volume formulation. Numerical efficiency is achieved by employing techniques for accelerating convergence to steady state. Computer processing is enhanced through vectorization of the algorithm. The scheme is evaluated by solving laminar and turbulent flows over a flat plate and an NACA 0012 airfoil. Numerical results are compared with theoretical solutions or other numerical solutions and/or experimental data.
Analytical and numerical analysis of frictional damage in quasi brittle materials
NASA Astrophysics Data System (ADS)
Zhu, Q. Z.; Zhao, L. Y.; Shao, J. F.
2016-07-01
Frictional sliding and crack growth are the two main dissipation processes in quasi-brittle materials. Frictional sliding along closed cracks is the origin of macroscopic plastic deformation, while crack growth induces material damage. The main difficulty of modeling is accounting for the inherent coupling between these two processes. Various models and associated numerical algorithms have been proposed, but so far there are no analytical solutions, even for simple loading paths, against which such algorithms can be validated. In this paper, we first present a micro-mechanical model taking into account the damage-friction coupling for a large class of quasi-brittle materials. The model is formulated by combining a linear homogenization procedure with the Mori-Tanaka scheme and the irreversible thermodynamics framework. As an original contribution, a series of analytical solutions of stress-strain relations is developed for various loading paths. Based on the micro-mechanical model, two numerical integration algorithms are developed. The first involves a coupled friction/damage correction scheme, which is consistent with the coupled nature of the constitutive model. The second contains a decoupled friction/damage scheme with two consecutive steps: the friction correction followed by the damage correction. With the analytical solutions as reference results, the two algorithms are assessed through a series of numerical tests. It is found that the decoupled correction scheme is efficient and guarantees systematic numerical convergence.
Numerical algorithms for computations of feedback laws arising in control of flexible systems
NASA Technical Reports Server (NTRS)
Lasiecka, Irena
1989-01-01
Several continuous models will be examined, which describe flexible structures with boundary or point control/observation. Issues related to the computation of feedback laws are examined (particularly stabilizing feedbacks), with sensors and actuators located either on the boundary or at specific point locations of the structure. One of the main difficulties is the great sensitivity of the system (hyperbolic systems with unbounded control actions) with respect to perturbations caused either by uncertainty in the model or by errors introduced in implementing numerical algorithms. Thus, special care must be taken in the choice of appropriate numerical schemes which eventually lead to implementable finite dimensional solutions. Finite dimensional algorithms are constructed on the basis of an a priori analysis of the properties of the original, continuous (infinite dimensional) systems, with the following criteria in mind: (1) convergence and stability of the algorithms and (2) robustness (reasonable insensitivity with respect to the unknown parameters of the systems). Examples with mixed finite element methods and spectral methods are provided.
NASA Astrophysics Data System (ADS)
Bang, Jeongho; Lee, Seung-Woo; Lee, Chang-Woo; Jeong, Hyunseok
2015-01-01
We propose a quantum algorithm to obtain the lowest eigenstate of any Hamiltonian simulated by a quantum computer. The proposed algorithm begins with an arbitrary initial state of the simulated system. A finite series of transforms is iteratively applied to the initial state, assisted by an ancillary qubit. The fraction of the lowest eigenstate in the initial state is then amplified up to 1. We prove in the theoretical analysis that our algorithm can faithfully work for any arbitrary Hamiltonian. Numerical analyses are also carried out. We first provide a numerical proof-of-principle demonstration with a simple Hamiltonian in order to compare our scheme with the so-called "Demon-like algorithmic cooling (DLAC)", recently proposed in Xu (Nat Photonics 8:113, 2014). The result shows good agreement with our theoretical analysis, exhibiting behavior comparable to the best 'cooling' with the DLAC method. We then consider a random Hamiltonian model for further analysis of our algorithm. By numerical simulations, we show how the total number of iterations scales with the difference between the two lowest eigenvalues and with an error defined as the probability that the finally obtained system state is in an unexpected (i.e., not the lowest) eigenstate.
NASA Astrophysics Data System (ADS)
Kim, Juhye; Nam, Haewon; Lee, Rena
2015-07-01
In CT (computed tomography) images, metal objects such as dental restorations or surgical clips can cause metal artifacts and degrade image quality. In severe cases, this may lead to misdiagnosis. In this research, we developed a new MAR (metal artifact reduction) algorithm using an edge-preserving filter and the MATLAB program (Mathworks, version R2012a). The proposed algorithm consists of six steps: image reconstruction from projection data, metal segmentation, forward projection, interpolation, application of an edge-preserving smoothing filter, and new image reconstruction. To evaluate the proposed algorithm, we obtained both numerical simulation data and data for a Rando phantom. In the numerical simulation data, four metal regions were inserted into the Shepp-Logan phantom to create metal artifacts. The projection data of the metal-inserted Rando phantom were obtained using a prototype CBCT scanner manufactured by the medical engineering and medical physics (MEMP) laboratory research group in medical science at Ewha Womans University. The proposed algorithm was then applied, and the result was compared with the original image (with metal artifacts, without correction) and with a corrected image based on linear interpolation. Both visual and quantitative evaluations were performed. Compared with the original image with metal artifacts and with the image corrected using linear interpolation, both the numerical and the experimental phantom data demonstrated that the proposed algorithm reduced the metal artifacts. In conclusion, the evaluation in this research showed that the proposed algorithm outperformed the interpolation-based MAR algorithm. If optimization and a stability evaluation of the proposed algorithm can be performed, the developed algorithm is expected to be an effective tool for eliminating metal artifacts even in commercial CT systems.
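The edge-preserving filter that distinguishes the proposed method is not specified in enough detail to reproduce, so the sketch below implements only the linear-interpolation baseline against which it is compared: forward-project a metal mask, replace the corrupted sinogram bins by per-view interpolation, and reconstruct. It uses scikit-image; the phantom and metal geometry are invented for illustration.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

# Phantom with a small, strongly attenuating "metal" insert.
img = resize(shepp_logan_phantom(), (128, 128))
yy, xx = np.mgrid[:128, :128]
metal = (yy - 70) ** 2 + (xx - 60) ** 2 < 9
img[metal] += 5.0                                  # exaggerated attenuation

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(img, theta=theta)
metal_trace = radon(metal.astype(float), theta=theta) > 0.1

# Interpolation-based MAR: fill metal-corrupted bins view by view.
sino_corr = sino.copy()
idx = np.arange(sino.shape[0])
for j in range(sino.shape[1]):
    bad = metal_trace[:, j]
    if bad.any():
        sino_corr[bad, j] = np.interp(idx[bad], idx[~bad], sino[~bad, j])

recon = iradon(sino_corr, theta=theta)
err = recon - img
print("RMSE outside metal:", float(np.sqrt(np.mean(err[~metal] ** 2))))
```

The paper's proposed method would insert its edge-preserving smoothing between the interpolation and the final reconstruction.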
A Food Chain Algorithm for Capacitated Vehicle Routing Problem with Recycling in Reverse Logistics
NASA Astrophysics Data System (ADS)
Song, Qiang; Gao, Xuexia; Santos, Emmanuel T.
2015-12-01
This paper introduces the capacitated vehicle routing problem with recycling in reverse logistics and designs a food chain algorithm for it. Illustrative examples are selected for simulation and comparison. Numerical results show that the performance of the food chain algorithm is better than that of the genetic algorithm, particle swarm optimization, and the quantum evolutionary algorithm.
Algorithms and Application of Sparse Matrix Assembly and Equation Solvers for Aeroacoustics
NASA Technical Reports Server (NTRS)
Watson, W. R.; Nguyen, D. T.; Reddy, C. J.; Vatsa, V. N.; Tang, W. H.
2001-01-01
An algorithm for symmetric sparse equation solutions on an unstructured grid is described. Efficient sequential sparse algorithms for degree-of-freedom reordering, supernodes, symbolic/numerical factorization, and the forward/backward solution phases are reviewed. Three sparse algorithms for the generation and assembly of symmetric systems of matrix equations are presented. The accuracy and numerical performance of the sequential version of the sparse algorithms are evaluated over the frequency range of interest in a three-dimensional aeroacoustics application. Results show that the solver solutions are accurate using a discretization of 12 points per wavelength. Results also show that the first assembly algorithm is impractical for high-frequency noise calculations. The second and third assembly algorithms have nearly equal performance at low source frequencies, but at higher source frequencies the third algorithm saves CPU time and RAM. The CPU time and RAM required by the second and third assembly algorithms are two orders of magnitude smaller than those required by the sparse equation solver. A sequential version of these sparse algorithms can therefore be conveniently incorporated into a substructuring formulation for domain decomposition to achieve parallel computation, where different substructures are handled by different parallel processors.
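The paper's solver is custom sequential code, but the same phases (reordering, symbolic/numeric factorization, forward/backward solves) and the triplet-style generation/assembly split can be illustrated with SciPy. The 1-D chain of 2×2 element matrices below is a stand-in, not the aeroacoustic system.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Generation + assembly: each "element" emits a small dense block as
# (row, col, value) triplets; duplicates are summed on conversion.
n, rows, cols, vals = 1000, [], [], []
ke = np.array([[1.0, -1.0], [-1.0, 1.0]])        # stand-in element matrix
for e in range(n - 1):
    for a in range(2):
        for b in range(2):
            rows.append(e + a); cols.append(e + b); vals.append(ke[a, b])
K = sp.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsc()
K = K + sp.eye(n, format="csc")                  # keep the system nonsingular

# Reordering + symbolic/numeric factorization, then triangular solves.
lu = splu(K, permc_spec="COLAMD")
b = np.ones(n)
x = lu.solve(b)
print("residual:", np.linalg.norm(K @ x - b))
```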
q-Space Deep Learning: Twelve-Fold Shorter and Model-Free Diffusion MRI Scans.
Golkov, Vladimir; Dosovitskiy, Alexey; Sperl, Jonathan I; Menzel, Marion I; Czisch, Michael; Samann, Philipp; Brox, Thomas; Cremers, Daniel
2016-05-01
Numerous scientific fields rely on elaborate but partly suboptimal data processing pipelines. An example is diffusion magnetic resonance imaging (diffusion MRI), a non-invasive microstructure assessment method with a prominent application in neuroimaging. Advanced diffusion models providing accurate microstructural characterization so far have required long acquisition times and thus have been inapplicable for children and adults who are uncooperative, uncomfortable, or unwell. We show that the long scan time requirements are mainly due to disadvantages of classical data processing. We demonstrate how deep learning, a group of algorithms based on recent advances in the field of artificial neural networks, can be applied to reduce diffusion MRI data processing to a single optimized step. This modification allows obtaining scalar measures from advanced models at twelve-fold reduced scan time and detecting abnormalities without using diffusion models. We set a new state of the art by estimating diffusion kurtosis measures from only 12 data points and neurite orientation dispersion and density measures from only 8 data points. This allows unprecedentedly fast and robust protocols facilitating clinical routine and demonstrates how classical data processing can be streamlined by means of deep learning.
PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pizzocri, D.; Rabiti, C.; Luzzi, L.
2016-09-01
This paper describes the development of a new numerical algorithm (called PolyPole-1) to efficiently solve the equation for intra-granular fission gas release in nuclear fuel. The work was carried out in collaboration with Politecnico di Milano and the Institute for Transuranium Elements. The PolyPole-1 algorithm is being implemented in INL's fuel performance code BISON as part of BISON's fission gas release model. The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this work, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, with the addition of polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of the PolyPole-1 solution is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.
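PolyPole-1's polynomial corrective terms are not reproduced here, but the constant-conditions modal solution it builds on is classical: the fractional release from a spherical grain is an eigenfunction series. The sketch below evaluates that series; the diffusion coefficient and grain radius are illustrative numbers, not values from the paper.

```python
import numpy as np

def fractional_release(D, a, t, n_modes=200):
    """Analytic modal solution for diffusion out of a sphere at constant
    conditions: f(t) = 1 - (6/pi^2) * sum_n exp(-n^2 pi^2 D t / a^2) / n^2."""
    n = np.arange(1, n_modes + 1)
    tau = D * t / a ** 2                      # dimensionless time
    return 1.0 - (6.0 / np.pi ** 2) * np.sum(
        np.exp(-(n * np.pi) ** 2 * tau) / n ** 2)

# Illustrative values only: D = 1e-19 m^2/s, grain radius a = 5 um.
for t in (1e6, 1e7, 1e8):                     # seconds
    print(f"t = {t:.0e} s  f = {fractional_release(1e-19, 5e-6, t):.3f}")
```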
Advanced technology development multi-color holography
NASA Technical Reports Server (NTRS)
Vikram, Chandra S.
1994-01-01
Several key aspects of multi-color holography and some non-conventional ways to study holographic reconstructions are considered. The error analysis of three-color holography is treated in detail, with the particular example of a typical triglycine sulfate crystal growth situation. For the numerical analysis of the fringe patterns, a new algorithm is introduced, with experimental verification using a sugar-water solution. The role of the phase difference among component holograms is also critically considered, with examples of several two- and three-color situations. The status of experimentation on two-color holography and fabrication of a small breadboard system is also reported. Finally, some successful demonstrations of unconventional ways to study holographic reconstructions are described. These methods are deflectometry and confocal optical processing, applied to some Spacelab III holograms.
Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations
Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot
2014-01-01
Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities that yield prescribed velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios.
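A sketch of the iterative idea: for cable-like tissue models, conduction velocity scales approximately with the square root of the bulk conductivity, so a fixed-point update converges in a few simulator calls. The simulate_cv black box below is a hypothetical stand-in for a strand simulation; the square-root scaling is the standard theoretical relation, not a detail quoted from the paper.

```python
def tune_conductivity(simulate_cv, sigma0, v_target, tol=1e-3, max_iter=20):
    """Adjust a bulk conductivity until the simulated conduction velocity
    matches the prescribed one, inverting the scaling v ~ sqrt(sigma)."""
    sigma = sigma0
    for _ in range(max_iter):
        v = simulate_cv(sigma)
        if abs(v - v_target) / v_target < tol:
            break
        sigma *= (v_target / v) ** 2       # fixed-point update
    return sigma

# Stand-in "simulator" obeying v = k*sqrt(sigma), k unknown to the tuner.
sigma = tune_conductivity(lambda s: 0.7 * s ** 0.5, sigma0=0.1, v_target=0.6)
print("tuned conductivity:", round(sigma, 4))
```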
Fast algorithm for bilinear transforms in optics
NASA Astrophysics Data System (ADS)
Ostrovsky, Andrey S.; Martinez-Niconoff, Gabriel C.; Ramos Romero, Obdulio; Cortes, Liliana
2000-10-01
A fast algorithm for calculating bilinear transforms in optical systems is proposed. The algorithm is based on the coherent-mode representation of the cross-spectral density function of the illumination, and it is computationally efficient when the illumination is partially coherent. Numerical examples are studied and compared with theoretical results.
A split finite element algorithm for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Baker, A. J.
1979-01-01
An accurate and efficient numerical solution algorithm is established for solution of the high Reynolds number limit of the Navier-Stokes equations governing the multidimensional flow of a compressible essentially inviscid fluid. Finite element interpolation theory is used within a dissipative formulation established using Galerkin criteria within the Method of Weighted Residuals. An implicit iterative solution algorithm is developed, employing tensor product bases within a fractional steps integration procedure, that significantly enhances solution economy concurrent with sharply reduced computer hardware demands. The algorithm is evaluated for resolution of steep field gradients and coarse grid accuracy using both linear and quadratic tensor product interpolation bases. Numerical solutions for linear and nonlinear, one, two and three dimensional examples confirm and extend the linearized theoretical analyses, and results are compared to competitive finite difference derived algorithms.
Numerical algorithm for rigid body position estimation using the quaternion approach
NASA Astrophysics Data System (ADS)
Zigic, Miodrag; Grahovac, Nenad
2017-11-01
This paper deals with rigid body attitude estimation on the basis of the data obtained from an inertial measurement unit mounted on the body. The aim of this work is to present the numerical algorithm, which can be easily applied to the wide class of problems concerning rigid body positioning, arising in aerospace and marine engineering, or in increasingly popular robotic systems and unmanned aerial vehicles. Following the considerations of kinematics of rigid bodies, the relations between accelerations of different points of the body are given. A rotation matrix is formed using the quaternion approach to avoid singularities. We present numerical procedures for determination of the absolute accelerations of the center of mass and of an arbitrary point of the body expressed in the inertial reference frame, as well as its attitude. An application of the algorithm to the example of a heavy symmetrical gyroscope is presented, where input data for the numerical procedure are obtained from the solution of differential equations of motion, instead of using sensor measurements.
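The quaternion machinery relied on here is standard and compact enough to sketch: propagate the kinematic equation q̇ = ½ q ⊗ [0, ω] with renormalization, and form the singularity-free rotation matrix from the unit quaternion. The spin rate and step size are illustrative.

```python
import numpy as np

def quat_mult(p, q):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def quat_to_rotmat(q):
    """Rotation matrix of a unit quaternion (no gimbal-lock singularity)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def step_attitude(q, omega_body, dt):
    """One Euler step of q_dot = 0.5 * q (x) [0, omega], renormalized."""
    q = q + dt * 0.5 * quat_mult(q, np.r_[0.0, omega_body])
    return q / np.linalg.norm(q)

# Spin at 90 deg/s about body z for 1 s: expect a 90-degree rotation.
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(1000):
    q = step_attitude(q, np.array([0.0, 0.0, np.pi / 2]), 1e-3)
print(np.round(quat_to_rotmat(q), 3))
```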
A Numerical Model for Trickle Bed Reactors
NASA Astrophysics Data System (ADS)
Propp, Richard M.; Colella, Phillip; Crutchfield, William Y.; Day, Marcus S.
2000-12-01
Trickle bed reactors are governed by equations of flow in porous media such as Darcy's law and the conservation of mass. Our numerical method for solving these equations is based on a total-velocity splitting, sequential formulation which leads to an implicit pressure equation and a semi-implicit mass conservation equation. We use high-resolution finite-difference methods to discretize these equations. Our solution scheme extends previous work in modeling porous media flows in two ways. First, we incorporate physical effects due to capillary pressure, a nonlinear inlet boundary condition, spatial porosity variations, and inertial effects on phase mobilities. In particular, capillary forces introduce a parabolic component into the recast evolution equation, and the inertial effects give rise to hyperbolic nonconvexity. Second, we introduce a modification of the slope-limiting algorithm to prevent our numerical method from producing spurious shocks. We present a numerical algorithm for accommodating these difficulties, show the algorithm is second-order accurate, and demonstrate its performance on a number of simplified problems relevant to trickle bed reactor modeling.
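The paper's specific slope-limiting modification is not reproduced here, but the role of a limiter in suppressing spurious shocks can be illustrated with the classical minmod limiter, a standard high-resolution ingredient shown only as background:

```python
import numpy as np

def minmod(a, b):
    """Classical minmod limiter: returns the smaller-magnitude slope when
    a and b agree in sign, and zero at extrema (so no new oscillations
    are introduced into the reconstruction)."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)),
                    0.0)

def limited_slopes(u):
    """Limited cell slopes for a 1D array of cell averages u."""
    du_left = u[1:-1] - u[:-2]   # backward differences
    du_right = u[2:] - u[1:-1]   # forward differences
    return minmod(du_left, du_right)
```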
Fast numerics for the spin orbit equation with realistic tidal dissipation and constant eccentricity
NASA Astrophysics Data System (ADS)
Bartuccelli, Michele; Deane, Jonathan; Gentile, Guido
2017-08-01
We present an algorithm for the rapid numerical integration of a time-periodic ODE with a small dissipation term that is C^1 in the velocity. Such an ODE arises as a model of spin-orbit coupling in a star/planet system, and the motivation for devising a fast algorithm for its solution comes from the desire to estimate the probability of capture in various solutions via Monte Carlo simulation: the integration times are very long, since we are interested in phenomena occurring on timescales of the order of 10^6-10^7 years. The proposed algorithm is based on the high-order Euler method described in Bartuccelli et al. (Celest Mech Dyn Astron 121(3):233-260, 2015), and it requires computer algebra to set up the code for its implementation. The payoff is an overall increase in speed by a factor of about 7.5 compared to standard numerical methods. Means of accelerating the purely numerical computation are also discussed.
Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems
NASA Technical Reports Server (NTRS)
Cerro, J. A.; Scotti, S. J.
1991-01-01
Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the need for stepping in time as required by conventional time integration schemes. The elimination of time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently, attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.
Application of symbolic/numeric matrix solution techniques to the NASTRAN program
NASA Technical Reports Server (NTRS)
Buturla, E. M.; Burroughs, S. H.
1977-01-01
The matrix-solving algorithm of any finite element program is extremely important, since solution of the matrix equations requires a large amount of elapsed time due to null calculations and excessive input/output operations. An alternate method of solving the matrix equations is presented. A symbolic processing step followed by numeric solution yields the solution very rapidly and is especially useful for nonlinear problems.
A sensitivity equation approach to shape optimization in fluid flows
NASA Technical Reports Server (NTRS)
Borggaard, Jeff; Burns, John
1994-01-01
A sensitivity equation method is applied to shape optimization problems. An algorithm is developed and tested on the problem of designing optimal forebody simulators for a 2D, inviscid supersonic flow. The algorithm uses a BFGS/trust-region optimization scheme with sensitivities computed by numerically approximating the linear partial differential equations that determine the flow sensitivities. Numerical examples are presented to illustrate the method.
Effects of illumination on image reconstruction via Fourier ptychography
NASA Astrophysics Data System (ADS)
Cao, Xinrui; Sinzinger, Stefan
2017-12-01
The Fourier ptychographic microscopy (FPM) technique provides high-resolution images by combining a traditional imaging system, e.g. a microscope or a 4f-imaging system, with a multiplexing illumination system, e.g. an LED array, and numerical image processing for enhanced image reconstruction. In order to numerically combine images that are captured under varying illumination angles, an iterative phase-retrieval algorithm is often applied. However, in practice, the performance of the FPM algorithm degrades due to imperfections of the optical system, image noise caused by the camera, etc. To eliminate the influence of the aberrations of the imaging system, an embedded pupil function recovery (EPRY)-FPM algorithm has been proposed [Opt. Express 22, 4960-4972 (2014)]. In this paper, we study how the performance of the FPM and EPRY-FPM algorithms is affected by imperfections of the illumination system, using both numerical simulations and experiments. The investigated imperfections include varying and non-uniform intensities, and wavefront aberrations. Our study shows that aberrations of the illumination system significantly affect the performance of both the FPM and EPRY-FPM algorithms; hence, in practice, they have a significant influence on the resulting image quality.
New algorithm and system for measuring size distribution of blood cells
NASA Astrophysics Data System (ADS)
Yao, Cuiping; Li, Zheng; Zhang, Zhenxi
2004-06-01
In optical scattering particle sizing, a numerical transform is sought so that a particle size distribution can be determined from angular measurements of near-forward scattering; this approach has been adopted in the measurement of blood cells. In this paper, a new method for counting and classifying blood cells, based on laser light scattering from stationary suspensions, is presented. A genetic algorithm combined with a nonnegative least squares algorithm is employed to invert the size distribution of blood cells. Numerical tests show that these techniques can be successfully applied to measuring the size distribution of blood cells with high stability.
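The nonnegative least squares step of such an inversion can be sketched with SciPy's standard routine (generic usage; the kernel matrix and data here are placeholders supplied by a scattering model, not values from the paper):

```python
import numpy as np
from scipy.optimize import nnls

def invert_size_distribution(K, I_measured):
    """Recover a nonnegative size distribution f from angular scattering
    data, assuming the linear model I = K @ f, where K[i, j] is the
    model-supplied scattered intensity at angle i for size bin j."""
    f, residual = nnls(K, I_measured)
    return f, residual
```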
On the impact of communication complexity on the design of parallel numerical algorithms
NASA Technical Reports Server (NTRS)
Gannon, D. B.; Van Rosendale, J.
1984-01-01
This paper describes two models of the cost of data movement in parallel numerical alorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In this second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm-independent upper bounds on system performance are derived for several problems that are important to scientific computation.
An improved cylindrical FDTD method and its application to field-tissue interaction study in MRI.
Chi, Jieru; Liu, Feng; Xia, Ling; Shao, Tingting; Mason, David G; Crozier, Stuart
2010-01-01
This paper presents a three dimensional finite-difference time-domain (FDTD) scheme in cylindrical coordinates with an improved algorithm for accommodating the numerical singularity associated with the polar axis. The regularization of this singularity problem is entirely based on Ampere's law. The proposed algorithm has been detailed and verified against a problem with a known solution obtained from a commercial electromagnetic simulation package. The numerical scheme is also illustrated by modeling high-frequency RF field-human body interactions in MRI. The results demonstrate the accuracy and capability of the proposed algorithm.
Advances/applications of MAGIC and SOS
NASA Astrophysics Data System (ADS)
Warren, Gary; Ludeking, Larry; Nguyen, Khanh; Smithe, David; Goplen, Bruce
1993-12-01
MAGIC and SOS have been applied to investigate a variety of accelerator-related devices. Examples include high brightness electron guns, beam-RF interactions in klystrons, cold-test modes in an RFQ and in RF sources, and a high-quality, flexible, electron gun with operating modes appropriate for gyrotrons, peniotrons, and other RF sources. Algorithmic improvements for PIC have been developed and added to MAGIC and SOS to facilitate these modeling efforts. Two new field algorithms allow improved control of computational numerical noise and selective control of harmonic modes in RF cavities. An axial filter in SOS accelerates simulations in cylindrical coordinates. The recent addition of an export/import feature now allows long devices to be modeled in sections. Interfaces have been added to receive electromagnetic field information from the Poisson group of codes and from EGUN and to send beam information to PARMELA for subsequent tracing of bunches through beam optics. Post-processors compute and display beam properties including geometric, normalized, and slice emittances, and phase-space parameters, and video. VMS, UNIX, and DOS versions are supported, with migration underway toward windows environments.
Simulation-driven machine learning: Bearing fault classification
NASA Astrophysics Data System (ADS)
Sobie, Cameron; Freitas, Carina; Nicolai, Mike
2018-01-01
Increasing the accuracy of mechanical fault detection has the potential to improve system safety and economic performance by minimizing scheduled maintenance and the probability of unexpected system failure. Advances in computational performance have enabled the application of machine learning algorithms across numerous applications, including condition monitoring and failure detection. Past applications of machine learning to physical failure have relied explicitly on historical data, which limits the feasibility of this approach to in-service components with extended service histories. Furthermore, recorded failure data is often only valid for the specific circumstances and components for which it was collected. This work directly addresses these challenges for roller bearings with race faults by generating training data from high-resolution simulations of roller bearing dynamics; the simulated data are used to train machine learning algorithms that are then validated against four experimental datasets. Several machine learning methodologies are compared, from well-established statistical feature-based methods to convolutional neural networks, and a novel application of dynamic time warping (DTW) to bearing fault classification is proposed as a robust, parameter-free method for race fault detection.
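Dynamic time warping itself is a standard dynamic program; a minimal version of the distance computation (textbook form, not the paper's implementation) is:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1D signals.
    D[i, j] holds the cost of the best alignment of a[:i] with b[:j]."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```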
The accurate particle tracer code
NASA Astrophysics Data System (ADS)
Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun
2017-11-01
The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the Lua and HDF5 libraries are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of the Sunway many-core processors. Based on large-scale simulations of a runaway beam under ITER tokamak parameters, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and, at the same time, improve the confinement of the energetic runaway beam.
Advanced Energy Storage Management in Distribution Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Guodong; Ceylan, Oguzhan; Xiao, Bailu
2016-01-01
With increasing penetration of distributed generation (DG) in distribution networks (DN), the secure and optimal operation of DN has become an important concern. In this paper, an iterative mixed-integer quadratically-constrained quadratic programming model is developed to optimize the operation of a three-phase unbalanced distribution system with high penetration of photovoltaic (PV) panels, DG and energy storage (ES). The proposed model minimizes not only the operating cost, including fuel cost and purchasing cost, but also voltage deviations and power loss. The optimization model is based on the linearized sensitivity coefficients between state variables (e.g., node voltages) and control variables (e.g., real and reactive power injections of DG and ES). To avoid slow convergence when close to the optimum, a golden search method is introduced to control the step size and accelerate the convergence. The proposed algorithm is demonstrated on modified IEEE 13-node test feeders with multiple PV panels, DG and ES. Numerical simulation results validate the proposed algorithm. Various scenarios of system configuration are studied and some critical findings are presented.
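The golden search step-size control mentioned above is presumably a variant of the classical golden-section line search, sketched here in generic form (a standard method, not code from the paper):

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Locate the minimizer of a unimodal function f on [a, b].
    Each iteration shrinks the bracket by the golden ratio."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                    # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                    # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return 0.5 * (a + b)
```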
An extension of the QZ algorithm for solving the generalized matrix eigenvalue problem
NASA Technical Reports Server (NTRS)
Ward, R. C.
1973-01-01
This algorithm is an extension of Moler and Stewart's QZ algorithm with some added features for saving time and operations. Also, some additional properties of the QR algorithm which were not practical to implement in the QZ algorithm can be generalized with the combination-shift QZ algorithm. Numerous test cases are presented to give practical application tests for the algorithm. Based on these results, this algorithm should be preferred over existing algorithms which attempt to solve the class of generalized eigenproblems where both matrices are singular or nearly singular.
Pernice, W H; Payne, F P; Gallagher, D F
2007-09-03
We present a novel numerical scheme for the simulation of the field enhancement by metal nano-particles in the time domain. The algorithm is based on a combination of the finite-difference time-domain method and the pseudo-spectral time-domain method for dispersive materials. The hybrid solver leads to an efficient subgridding algorithm that does not suffer from spurious field spikes as do FDTD schemes. Simulation of the field enhancement by gold particles shows the expected exponential field profile. The enhancement factors are computed for single particles and particle arrays. Due to the geometry-conforming mesh, the algorithm is stable for long integration times and thus suitable for the simulation of resonance phenomena in coupled nano-particle structures.
Faster and more accurate transport procedures for HZETRN
NASA Astrophysics Data System (ADS)
Slaba, T. C.; Blattnig, S. R.; Badavi, F. F.
2010-12-01
The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ⩽ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm^2 in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm^2 of aluminum, polyethylene, and water. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.
NASA Astrophysics Data System (ADS)
Mehdinejadiani, Behrouz
2017-08-01
This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation (sFADE) using the Bees Algorithm. Numerical as well as experimental studies were performed to verify the integrity of the Bees Algorithm; the experimental ones were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparative study was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and the LSQNONLIN routines in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm estimated the sFADE parameters much more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values near 1 in the numerical study. The results obtained from the Bees Algorithm were also more reliable than those from the Genetic Algorithm. The Bees Algorithm showed relatively similar performance in all cases, while the Genetic Algorithm and LSQNONLIN yielded different performances for different cases. The performance of LSQNONLIN depends strongly on the initial guess values, so that, compared to the Genetic Algorithm, it can estimate the sFADE parameters more accurately when suitable initial guesses are provided. In summary, the Bees Algorithm was found to be a very simple, robust and accurate approach for estimating the transport parameters of the spatial fractional advection-dispersion equation.
NASA Astrophysics Data System (ADS)
Mucha, Waldemar; Kuś, Wacław
2018-01-01
The paper presents a practical implementation of hybrid simulation using the Real Time Finite Element Method (RTFEM). Hybrid simulation is a technique for investigating dynamic material and structural properties of mechanical systems by performing numerical analysis and experiment at the same time. It applies to mechanical systems with elements too difficult or impossible to model numerically; these elements are tested experimentally, while the rest of the system is simulated numerically, and data are exchanged between the experiment and the numerical simulation in real time. The authors use the Finite Element Method to perform the numerical simulation. The paper presents the general algorithm for hybrid simulation using RTFEM, together with possible improvements, developed by the authors, for reducing computation time. The focus is on a practical implementation of the presented methods: the testing of a mountain bicycle frame, where the shock absorber is tested experimentally while the rest of the frame is simulated numerically.
Analysis of close encounters with Ganymede and Callisto using a genetic n-body algorithm
NASA Astrophysics Data System (ADS)
Winter, Philip M.; Galiazzo, Mattia A.; Maindl, Thomas I.
2018-05-01
In this work we describe a genetic algorithm which is used to study orbits of minor bodies in the context of close encounters. We find that the algorithm, in combination with standard orbital numerical integrators, can be used as a good proxy for finding typical orbits of minor bodies in close encounters with planets and even their moons, saving a large amount of computational time compared to long-term orbital numerical integrations. Here we study close encounters of Centaurs with Callisto and Ganymede in particular, and also perform n-body numerical simulations for comparison. We find typical impact velocities to be between v_rel = 20 v_esc and v_rel = 30 v_esc for Ganymede, and between v_rel = 25 v_esc and v_rel = 35 v_esc for Callisto.
The GOES-R Geostationary Lightning Mapper (GLM)
NASA Astrophysics Data System (ADS)
Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Mach, Douglas; Bailey, Jeffrey; Buechler, Dennis; Carey, Larry; Schultz, Chris; Bateman, Monte; McCaul, Eugene; Stano, Geoffrey
2013-05-01
The Geostationary Operational Environmental Satellite R-series (GOES-R) is the next block of four satellites to follow the existing GOES constellation currently operating over the Western Hemisphere. Advanced spacecraft and instrument technology will support expanded detection of environmental phenomena, resulting in more timely and accurate forecasts and warnings. Advancements over current GOES capabilities include a new capability for total lightning detection (cloud and cloud-to-ground flashes) from the Geostationary Lightning Mapper (GLM), and improved cloud and moisture imagery with the 16-channel Advanced Baseline Imager (ABI). The GLM will map total lightning activity continuously day and night with near-uniform storm-scale spatial resolution of 8 km with a product refresh rate of less than 20 s over the Americas and adjacent oceanic regions in the western hemisphere. This will aid in forecasting severe storms and tornado activity, and convective weather impacts on aviation safety and efficiency. In parallel with the instrument development, an Algorithm Working Group (AWG) Lightning Detection Science and Applications Team developed the Level 2 (stroke and flash) algorithms from the Level 1 lightning event (pixel level) data. Proxy data sets used to develop the GLM operational algorithms as well as cal/val performance monitoring tools were derived from the NASA Lightning Imaging Sensor (LIS) and Optical Transient Detector (OTD) instruments in low Earth orbit, and from ground-based lightning networks and intensive prelaunch field campaigns. The GLM will produce the same or similar lightning flash attributes provided by the LIS and OTD, and thus extend their combined climatology over the western hemisphere into the coming decades. Science and application development along with preoperational product demonstrations and evaluations at NWS forecast offices and NOAA testbeds will prepare the forecasters to use GLM as soon as possible after the planned launch and checkout of GOES-R in late 2015. New applications will use GLM alone, in combination with the ABI, or integrated (fused) with other available tools (weather radar and ground strike networks, nowcasting systems, mesoscale analysis, and numerical weather prediction models) in the hands of the forecaster responsible for issuing more timely and accurate forecasts and warnings.
Algorithms for the Fractional Calculus: A Selection of Numerical Methods
NASA Technical Reports Server (NTRS)
Diethelm, K.; Ford, N. J.; Freed, A. D.; Luchko, Yu.
2003-01-01
Many recently developed models in areas like viscoelasticity, electrochemistry, diffusion processes, etc. are formulated in terms of derivatives (and integrals) of fractional (non-integer) order. In this paper we present a collection of numerical algorithms for the solution of the various problems arising in this context. We believe that this will give the engineer the necessary tools required to work with fractional models in an efficient way.
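As one concrete example of the class of methods surveyed, the Grünwald-Letnikov approximation of a fractional derivative of order α can be coded in a few lines (a standard scheme, shown purely as illustration):

```python
import numpy as np

def grunwald_letnikov(f_vals, alpha, h):
    """Grünwald-Letnikov approximation of the order-alpha fractional
    derivative of f on a uniform grid with spacing h. The weights
    w_k = (-1)^k * C(alpha, k) satisfy the recurrence
    w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    n = len(f_vals)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    d = np.empty(n)
    for i in range(n):
        # Sum over the whole history: fractional operators are nonlocal.
        d[i] = np.dot(w[: i + 1], f_vals[i::-1]) / h**alpha
    return d
```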
KAM Tori Construction Algorithms
NASA Astrophysics Data System (ADS)
Wiesel, W.
In this paper we evaluate and compare two algorithms for the calculation of KAM tori in Hamiltonian systems. The direct fitting of a torus Fourier series to a numerically integrated trajectory is the first method, while an accelerated finite Fourier transform is the second method. The finite Fourier transform, with Hanning window functions, is by far superior in both computational loading and numerical accuracy. Some thoughts on applications of KAM tori are offered.
An improved conjugate gradient scheme to the solution of least squares SVM.
Chu, Wei; Ong, Chong Jin; Keerthi, S Sathiya
2005-03-01
The least squares support vector machine (LS-SVM) formulation corresponds to the solution of a linear system of equations. Several approaches to its numerical solution have been proposed in the literature. In this letter, we propose an improved method for the numerical solution of LS-SVM and show that the problem can be solved using one reduced system of linear equations. Compared with the existing algorithm for LS-SVM, the approach used in this letter is about twice as efficient. Numerical results using the proposed method are provided for comparison with other existing algorithms.
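For context, the conjugate gradient iteration at the heart of such LS-SVM solvers, in generic textbook form rather than the authors' improved variant, is:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive definite A, the structure
    that arises from the (reduced) LS-SVM linear system."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b.copy()            # residual b - A x for x = 0
    p = r.copy()            # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```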
Numerical taxonomy on data: Experimental results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, J.; Farach, M.
1997-12-01
The numerical taxonomy problems associated with most of the optimization criteria described above are NP-hard [3, 5, 1, 4]. In earlier work, the first positive result for numerical taxonomy was presented: it was shown that if e is the distance to the closest tree metric under the L∞ norm, i.e., e = min_T L∞(T − D), then it is possible to construct a tree T such that L∞(T − D) ≤ 3e; that is, a 3-approximation algorithm for this problem. We will refer to this algorithm as the Single Pivot (SP) heuristic.
Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng
2015-01-01
The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of the time series. In order to deal with the weaknesses of the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance the performance of optimization; the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rossler maps for validation, and the effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and that the forecasting precision in the presence of a certain amount of noise is also satisfactory. It can be concluded that the IGSA algorithm is energy-efficient and superior. PMID:26000011
A parallel time integrator for noisy nonlinear oscillatory systems
NASA Astrophysics Data System (ADS)
Subber, Waad; Sarkar, Abhijit
2018-06-01
In this paper, we adapt a parallel time integration scheme to track the trajectories of noisy nonlinear dynamical systems. Specifically, we formulate a parallel algorithm to generate the sample path of a nonlinear oscillator defined by stochastic differential equations (SDEs) using the so-called parareal method for ordinary differential equations (ODEs). The presence of the Wiener process in SDEs causes difficulties in the direct application of any numerical integration technique for ODEs, including the parareal algorithm. The parallel implementation of the algorithm involves two SDE solvers, namely a fine-level scheme to integrate the system in parallel and a coarse-level scheme to generate and correct the initial conditions required to start the fine-level integrators. For the numerical illustration, a randomly excited Duffing oscillator is investigated in order to study the performance of the stochastic parallel algorithm with respect to a range of system parameters. The distributed implementation of the algorithm exploits the Message Passing Interface (MPI).
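The deterministic skeleton of the parareal correction can be sketched as follows (written serially for clarity; in the paper the fine solves are distributed over MPI ranks, and the stochastic setting additionally requires keeping Wiener increments consistent across iterations). `F` and `G` are placeholder propagators over one time window.

```python
def parareal(F, G, u0, n_windows, n_iters):
    """Generic parareal iteration. F(u) and G(u) propagate a state over one
    time window with the fine and coarse integrators, respectively.
    Correction: U[n+1] = G(U_new[n]) + F(U_old[n]) - G(U_old[n])."""
    U = [u0]
    for n in range(n_windows):                        # initial coarse sweep
        U.append(G(U[-1]))
    for _ in range(n_iters):
        F_old = [F(U[n]) for n in range(n_windows)]   # parallel in practice
        G_old = [G(U[n]) for n in range(n_windows)]
        for n in range(n_windows):                    # sequential correction
            U[n + 1] = G(U[n]) + F_old[n] - G_old[n]
    return U
```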
Maximum likelihood estimates, from censored data, for mixed-Weibull distributions
NASA Astrophysics Data System (ADS)
Jiang, Siyuan; Kececioglu, Dimitri
1992-06-01
A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimation (MLE) through the expectation-maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. MLEs from the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a 5-subpopulation mixture. Numerical examples indicate that some of the log-likelihood functions of mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should be started from several initial guesses of the parameter set.
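A compact illustration of the EM structure for a two-component Weibull mixture on complete (non-censored) data is given below; the paper's algorithm additionally handles censoring and up to five subpopulations. Weighted Weibull maximum likelihood has no closed form, so the M-step here calls a numerical optimizer.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def weibull_mixture_em(t, n_iter=50):
    """EM for a 2-component Weibull mixture fit to complete failure times t."""
    pi = np.array([0.5, 0.5])                      # mixing proportions
    shape = np.array([1.0, 2.0])                   # initial shape guesses
    scale = np.array([np.mean(t), 2.0 * np.mean(t)])
    for _ in range(n_iter):
        # E-step: responsibility r[i, j] of component j for observation i.
        pdf = np.stack([weibull_min.pdf(t, shape[j], scale=scale[j])
                        for j in range(2)], axis=1)
        r = pi * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, then weighted Weibull MLE per component.
        pi = r.mean(axis=0)
        for j in range(2):
            def nll(theta, w=r[:, j]):
                k, lam = np.exp(theta)             # log-params keep k, lam > 0
                return -np.sum(w * weibull_min.logpdf(t, k, scale=lam))
            res = minimize(nll, np.log([shape[j], scale[j]]), method="Nelder-Mead")
            shape[j], scale[j] = np.exp(res.x)
    return pi, shape, scale
```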
NASA Astrophysics Data System (ADS)
Zhang, Ruili; Wang, Yulei; He, Yang; Xiao, Jianyuan; Liu, Jian; Qin, Hong; Tang, Yifa
2018-02-01
Relativistic dynamics of a charged particle in time-dependent electromagnetic fields has theoretical significance and a wide range of applications. The numerical simulation of relativistic dynamics is often multi-scale and requires accurate long-term integration. Therefore, explicit symplectic algorithms are much more preferable than non-symplectic methods and implicit symplectic algorithms. In this paper, we employ the proper time and express the Hamiltonian as the sum of exactly solvable terms and product-separable terms in space-time coordinates. We then give explicit symplectic algorithms based on generating functions of orders 2 and 3 for the relativistic dynamics of a charged particle. The methodology is not new (it has previously been applied to the non-relativistic dynamics of charged particles), but the algorithm for relativistic dynamics is of considerable significance in practical simulations, such as the secular simulation of runaway electrons in tokamaks.
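The paper's generating-function construction is not reproduced here, but the basic flavor of an explicit symplectic update for a separable Hamiltonian H(q, p) = T(p) + V(q) can be shown with the first-order symplectic Euler step (a standard scheme, for orientation only; the relativistic electromagnetic case requires the splitting described in the paper):

```python
def symplectic_euler_step(q, p, grad_T, grad_V, h):
    """One explicit symplectic Euler step for H(q, p) = T(p) + V(q).
    The map (q, p) -> (q_new, p_new) exactly preserves the symplectic
    two-form, which underlies the long-term fidelity of such schemes."""
    p_new = p - h * grad_V(q)       # kick: momentum update from the potential
    q_new = q + h * grad_T(p_new)   # drift: position update from the kinetic part
    return q_new, p_new
```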
Ensemble-type numerical uncertainty information from single model integrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter
2015-07-01
We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size to those of a stochastic physics ensemble.
Advanced Techniques for Ultrasonic Imaging in the Presence of Material and Geometrical Complexity
NASA Astrophysics Data System (ADS)
Brath, Alexander Joseph
The complexity of modern engineering systems is increasing in several ways: advances in materials science are leading to the design of materials which are optimized for material strength, conductivity, temperature resistance etc., leading to complex material microstructure; the combination of additive manufacturing and shape optimization algorithms are leading to components with incredibly intricate geometrical complexity; and engineering systems are being designed to operate at larger scales in ever harsher environments. As a result, at the same time that there is an increasing need for reliable and accurate defect detection and monitoring capabilities, many of the currently available non-destructive evaluation techniques are rendered ineffective by this increasing material and geometrical complexity. This thesis addresses the challenges posed by inspection and monitoring problems in complex engineering systems with a three-part approach. In order to address material complexities, a model of wavefront propagation in anisotropic materials is developed, along with efficient numerical techniques to solve for the wavefront propagation in inhomogeneous, anisotropic material. Since material and geometrical complexities significantly affect the ability of ultrasonic energy to penetrate into the specimen, measurement configurations are tailored to specific applications which utilize arrays of either piezoelectric (PZT) or electromagnetic acoustic transducers (EMAT). These measurement configurations include novel array architectures as well as the exploration of ice as an acoustic coupling medium. Imaging algorithms which were previously developed for isotropic materials with simple geometry are adapted to utilize the more powerful wavefront propagation model and novel measurement configurations.
Karamintziou, Sofia D.; Custódio, Ana Luísa; Piallat, Brigitte; Polosan, Mircea; Chabardès, Stéphan; Stathis, Pantelis G.; Tagaris, George A.; Sakas, Damianos E.; Polychronaki, Georgia E.; Tsirogiannis, George L.; David, Olivier; Nikita, Konstantina S.
2017-01-01
Advances in the field of closed-loop neuromodulation call for analysis and modeling approaches capable of confronting challenges related to the complex neuronal response to stimulation and the presence of strong internal and measurement noise in neural recordings. Here we elaborate on the algorithmic aspects of a noise-resistant closed-loop subthalamic nucleus deep brain stimulation system for advanced Parkinson’s disease and treatment-refractory obsessive-compulsive disorder, ensuring remarkable performance in terms of both efficiency and selectivity of stimulation, as well as in terms of computational speed. First, we propose an efficient method drawn from dynamical systems theory, for the reliable assessment of significant nonlinear coupling between beta and high-frequency subthalamic neuronal activity, as a biomarker for feedback control. Further, we present a model-based strategy through which optimal parameters of stimulation for minimum energy desynchronizing control of neuronal activity are being identified. The strategy integrates stochastic modeling and derivative-free optimization of neural dynamics based on quadratic modeling. On the basis of numerical simulations, we demonstrate the potential of the presented modeling approach to identify, at a relatively low computational cost, stimulation settings potentially associated with a significantly higher degree of efficiency and selectivity compared with stimulation settings determined post-operatively. Our data reinforce the hypothesis that model-based control strategies are crucial for the design of novel stimulation protocols at the backstage of clinical applications. PMID:28222198
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lechman, Jeremy B.; Battaile, Corbett Chandler.; Bolintineanu, Dan
This report summarizes a project in which the authors sought to develop and deploy: (i) experimental techniques to elucidate the complex, multiscale nature of thermal transport in particle-based materials; and (ii) modeling approaches to address current challenges in predicting performance variability of materials (e.g., identifying and characterizing physical-chemical processes and their couplings across multiple length and time scales, modeling information transfer between scales, and statically and dynamically resolving material structure and its evolution during manufacturing and device performance). Experimentally, several capabilities were successfully advanced. As discussed in Chapter 2, a flash diffusivity capability for measuring homogeneous thermal conductivity of pyrotechnic powders (and beyond) was advanced, leading to enhanced characterization of pyrotechnic materials and properties impacting component development. Chapter 4 describes success for the first time, although preliminary, in resolving thermal fields at speeds and spatial scales relevant to energetic components. Chapter 7 summarizes the first (as far as the authors know) application of TDTR to actual pyrotechnic materials, and the first attempt to characterize these materials at the interfacial scale. On the modeling side, new capabilities in image processing of experimental microstructures and direct numerical simulation on complicated structures were advanced (see Chapters 3 and 5). In addition, modeling work described in Chapter 8 led to improved prediction of interface thermal conductance from first-principles calculations. Toward the second point, for a model system of packed particles, significant headway was made in implementing numerical algorithms and collecting data to justify the approach, in terms of highlighting the phenomena at play and pointing the way forward in developing and informing the kind of modeling approach originally envisioned (see Chapter 6). In both cases much more remains to be accomplished.
Advanced detection, isolation and accommodation of sensor failures: Real-time evaluation
NASA Technical Reports Server (NTRS)
Merrill, Walter C.; Delaat, John C.; Bruton, William M.
1987-01-01
The objective of the Advanced Detection, Isolation, and Accommodation (ADIA) Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines by using analytical redundancy to detect sensor failures. The results of a real-time hybrid computer evaluation of the ADIA algorithm are presented, and minimum detectable levels of sensor failures for an F100 engine control system are determined. Also included are details about the microprocessor implementation of the algorithm, as well as a description of the algorithm itself.
New Parallel Algorithms for Landscape Evolution Model
NASA Astrophysics Data System (ADS)
Jin, Y.; Zhang, H.; Shi, Y.
2017-12-01
Most landscape evolution models (LEM) developed in the last two decades solve the diffusion equation to simulate the transportation of surface sediments. This numerical approach is difficult to parallelize due to the computation of the drainage area for each node, which requires a huge amount of communication if run in parallel. In order to overcome this difficulty, we developed two parallel algorithms for LEM with a stream net. One algorithm handles the partition of the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other algorithm is based on a new partition algorithm, which first partitions the nodes in catchments between processes and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massive computing techniques, and numerical experiments show that they are both adequate to handle large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
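The serial kernel that makes drainage area hard to parallelize can be stated compactly: with a single flow receiver per node, areas are accumulated by sweeping nodes from high to low elevation (a standard O(N log N) formulation, shown for orientation; the contribution of the paper lies in how this dependency chain is partitioned across processes):

```python
import numpy as np

def drainage_area(elevation, receiver, cell_area=1.0):
    """Accumulate drainage area on a flow network. receiver[i] is the index
    of the node that node i drains to (receiver[i] == i for outlets).
    Sweeping in order of decreasing elevation guarantees that every
    upstream contribution arrives before a node passes its total on."""
    area = np.full(len(elevation), cell_area, dtype=float)
    for i in np.argsort(-np.asarray(elevation)):
        if receiver[i] != i:
            area[receiver[i]] += area[i]
    return area
```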
Event and Apparent Horizon Finders for 3 + 1 Numerical Relativity.
Thornburg, Jonathan
2007-01-01
Event and apparent horizons are key diagnostics for the presence and properties of black holes. In this article I review numerical algorithms and codes for finding event and apparent horizons in numerically-computed spacetimes, focusing on calculations done using the 3 + 1 ADM formalism. The event horizon of an asymptotically-flat spacetime is the boundary between those events from which a future-pointing null geodesic can reach future null infinity and those events from which no such geodesic exists. The event horizon is a (continuous) null surface in spacetime. The event horizon is defined nonlocally in time: it is a global property of the entire spacetime and must be found in a separate post-processing phase after all (or at least the nonstationary part) of spacetime has been numerically computed. There are three basic algorithms for finding event horizons, based on integrating null geodesics forwards in time, integrating null geodesics backwards in time, and integrating null surfaces backwards in time. The last of these is generally the most efficient and accurate. In contrast to an event horizon, an apparent horizon is defined locally in time in a spacelike slice and depends only on data in that slice, so it can be (and usually is) found during the numerical computation of a spacetime. A marginally outer trapped surface (MOTS) in a slice is a smooth closed 2-surface whose future-pointing outgoing null geodesics have zero expansion Θ. An apparent horizon is then defined as a MOTS not contained in any other MOTS. The MOTS condition is a nonlinear elliptic partial differential equation (PDE) for the surface shape, containing the ADM 3-metric, its spatial derivatives, and the extrinsic curvature as coefficients. Most "apparent horizon" finders actually find MOTSs. There are a large number of apparent horizon finding algorithms, with differing trade-offs between speed, robustness, accuracy, and ease of programming. In axisymmetry, shooting algorithms work well and are fairly easy to program. In slices with no continuous symmetries, spectral integral-iteration algorithms and elliptic-PDE algorithms are fast and accurate, but require good initial guesses to converge. In many cases, Schnetter's "pretracking" algorithm can greatly improve an elliptic-PDE algorithm's robustness. Flow algorithms are generally quite slow but can be very robust in their convergence. Minimization methods are slow and relatively inaccurate in the context of a finite differencing simulation, but in a spectral code they can be relatively faster and more robust.
Advances in multi-sensor data fusion: algorithms and applications.
Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying
2009-01-01
With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. The paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection and maneuvering-target tracking, are described. Both advantages and limitations of those applications are then discussed. Recommendations are made, including: (1) improvement of fusion algorithms; (2) development of "algorithm fusion" methods; (3) establishment of an automatic quality assessment scheme.
The foamed structures in numerical testing
NASA Astrophysics Data System (ADS)
John, Antoni; John, Małgorzata
2018-01-01
In this paper, numerical simulation of foamed metal structures using a numerical homogenization algorithm is described. First, a numerical FEM model of simplified heterogeneous porous structures of a typical foamed metal was built, and material parameters (coefficients of the elasticity matrix of the considered structure) were determined using the numerical homogenization algorithm. Different RVE models of the structure were created and their properties compared at different relative densities and for different numbers, sizes and arrangements of voids. Finally, the obtained results were used in modeling typical elements made from foamed metal structures: a sandwich structure and a profile filled with metal foam. Simulations were performed for different dimensions of cladding and core. Additionally, the influence of material orientation (arrangement of voids in the RVE element) on the maximum stresses and displacements during a bending test was examined.
Numerical implementation of the S-matrix algorithm for modeling of relief diffraction gratings
NASA Astrophysics Data System (ADS)
Yaremchuk, Iryna; Tamulevičius, Tomas; Fitio, Volodymyr; Gražulevičiūte, Ieva; Bobitski, Yaroslav; Tamulevičius, Sigitas
2013-11-01
A new numerical implementation is developed to calculate the diffraction efficiency of relief diffraction gratings. In the new formulation, vectors containing the expansion coefficients of electric and magnetic fields on boundaries of the grating layer are expressed by additional constants. An S-matrix algorithm has been systematically described in detail and adapted to a simple matrix form. This implementation is suitable for the study of optical characteristics of periodic structures by using modern object-oriented programming languages and different standard mathematical software. The modeling program has been developed on the basis of this numerical implementation and tested by comparison with other commercially available programs and experimental data. Numerical examples are given to show the usefulness of the new implementation.
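The core of any S-matrix algorithm is the stable composition of two scattering matrices via the Redheffer star product; a generic version of that composition (standard formulas, independent of the authors' particular implementation) reads:

```python
import numpy as np

def redheffer_star(A, B):
    """Combine scattering matrices A and B (each a dict with blocks
    S11, S12, S21, S22) via the Redheffer star product. Unlike naive
    transfer-matrix multiplication, only bounded quantities are inverted,
    which keeps the recursion stable for deep or lossy gratings."""
    I = np.eye(A["S11"].shape[0])
    inv1 = np.linalg.inv(I - B["S11"] @ A["S22"])
    inv2 = np.linalg.inv(I - A["S22"] @ B["S11"])
    return {
        "S11": A["S11"] + A["S12"] @ inv1 @ B["S11"] @ A["S21"],
        "S12": A["S12"] @ inv1 @ B["S12"],
        "S21": B["S21"] @ inv2 @ A["S21"],
        "S22": B["S22"] + B["S21"] @ inv2 @ A["S22"] @ B["S12"],
    }
```

Composing the per-layer scattering matrices of a grating stack then reduces to a left-to-right fold of `redheffer_star` over the layers.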
NASA Astrophysics Data System (ADS)
Senkerik, Roman; Zelinka, Ivan; Davendra, Donald; Oplatkova, Zuzana
2010-06-01
This research deals with the optimization of the control of chaos by means of evolutionary algorithms. The work aims to explain how to use evolutionary algorithms (EAs) and how to properly define an advanced targeting cost function (CF) securing very fast and precise stabilization of the desired state for any initial conditions. The one-dimensional Logistic equation was used as a model of a deterministic chaotic system. The evolutionary algorithm Self-Organizing Migrating Algorithm (SOMA) was used in four versions. For each version, repeated simulations were conducted to outline the effectiveness and robustness of the used method and targeting CF.
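The general shape of a targeting cost function can be illustrated on the logistic map: a candidate feedback gain is scored by how quickly and tightly it drives trajectories to the target state, averaged over initial conditions. This is a schematic CF under an assumed proportional-feedback control law, not one of the CF definitions studied in the paper.

```python
def targeting_cost(K, r=3.8, n_steps=200):
    """Schematic targeting cost for stabilizing the unstable fixed point of
    the logistic map x <- r x (1 - x) with feedback u = K (x - x_target).
    Lower cost means faster, tighter stabilization; an EA minimizes over K."""
    x_target = 1.0 - 1.0 / r          # fixed point of the logistic map
    cost = 0.0
    for i in range(8):                # average over several initial conditions
        x = (i + 1) / 9.0
        for _ in range(n_steps):
            x = r * x * (1.0 - x) + K * (x - x_target)
            if not 0.0 <= x <= 1.0:
                return float("inf")   # trajectory left the unit interval
            cost += abs(x - x_target)
    return cost
```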
NASA Astrophysics Data System (ADS)
Schott, John R.; Brown, Scott D.; Raqueno, Rolando V.; Gross, Harry N.; Robinson, Gary
1999-01-01
The need for robust image data sets for algorithm development and testing has prompted the consideration of synthetic imagery as a supplement to real imagery. The unique ability of synthetic image generation (SIG) tools to supply per-pixel truth allows algorithm writers to test difficult scenarios that would require expensive collection and instrumentation efforts. In addition, SIG data products can supply the user with 'actual' truth measurements of the entire image area that are not subject to measurement error, thereby allowing the user to more accurately evaluate the performance of their algorithm. Advanced algorithms place a high demand on synthetic imagery to reproduce both the spectro-radiometric and spatial character observed in real imagery. This paper describes a synthetic image generation model that strives to include the radiometric processes that affect spectral image formation and capture. In particular, it addresses recent advances in SIG modeling that attempt to capture the spatial/spectral correlation inherent in real images. The model is capable of simultaneously generating imagery from a wide range of sensors, allowing it to generate daylight, low-light-level and thermal image inputs for broadband, multi- and hyper-spectral exploitation algorithms.
SIMULATION OF DISPERSION OF A POWER PLANT PLUME USING AN ADAPTIVE GRID ALGORITHM
A new dynamic adaptive grid algorithm has been developed for use in air quality modeling. This algorithm uses a higher-order numerical scheme, the piecewise parabolic method (PPM), for computing advective solution fields; a weight function capable of promoting grid node clustering ...
ERIC Educational Resources Information Center
Iler, H. Darrell; Brown, Amber; Landis, Amanda; Schimke, Greg; Peters, George
2014-01-01
A numerical analysis of the free radical addition polymerization system is described that provides those teaching polymer, physical, or advanced organic chemistry courses the opportunity to introduce students to numerical methods in the context of a simple but mathematically stiff chemical kinetic system. Numerical analysis can lead students to an…
NASA Astrophysics Data System (ADS)
Fikri, Fariz Fahmi; Nuraini, Nuning
2018-03-01
Differential equations are a branch of mathematics closely related to problems in human life. Some problems that occur in our lives can be modeled as differential equations or systems of differential equations, such as the Lotka-Volterra model and the SIR model; therefore, solving differential equations is very important. Some differential equations are difficult to solve, so numerical methods are needed. Widely used numerical methods for solving differential equations include the Euler method, the Heun method, Runge-Kutta methods and others. However, some of these methods have restrictions that prevent them from being applied to more complex problems, such as an evaluation interval that cannot be changed freely. New methods are needed to overcome these limitations. One method that can be used is the artificial bee colony algorithm, a metaheuristic that can escape a local search space and explore the wider solution search space, and thus can obtain better solutions than other methods.
Wang, Hua; Liu, Feng; Xia, Ling; Crozier, Stuart
2008-11-21
This paper presents a stabilized bi-conjugate gradient algorithm (BiCGstab) that can significantly improve the performance of the impedance method, which has been widely applied to model low-frequency field induction phenomena in voxel phantoms. The improved impedance method offers remarkable computational advantages in terms of convergence performance and memory consumption over the conventional, successive over-relaxation (SOR)-based algorithm. The scheme has been validated against other numerical/analytical solutions on a lossy, multilayered sphere phantom excited by an ideal coil loop. To demonstrate the computational performance and application capability of the developed algorithm, the induced fields inside a human phantom due to a low-frequency hyperthermia device are evaluated. The simulation results show the numerical accuracy and superior performance of the method.
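A minimal illustration of applying a stabilized bi-conjugate gradient solver to a sparse linear system, here via SciPy's bicgstab on a toy 1D finite-difference matrix standing in for the voxel impedance system; the matrix and sizes are assumptions for demonstration, not the paper's model.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

# toy 1D finite-difference Laplacian standing in for the voxel system
n = 1000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = bicgstab(A, b, maxiter=5000)   # info == 0 signals convergence
print("info:", info, " residual norm:", np.linalg.norm(b - A @ x))
```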
Real Time Optima Tracking Using Harvesting Models of the Genetic Algorithm
NASA Technical Reports Server (NTRS)
Baskaran, Subbiah; Noever, D.
1999-01-01
Tracking optima in real time propulsion control, particularly for non-stationary optimization problems, is a challenging task. Several approaches have been put forward for such a study, including the numerical method called the genetic algorithm. In brief, this approach is built upon Darwinian-style competition between numerical alternatives displayed in the form of binary strings, or by analogy to 'pseudogenes'. Breeding of improved solutions is an often-cited parallel to natural selection in evolutionary or soft computing. In this report we present our results of applying a novel model of a genetic algorithm for tracking optima in propulsion engineering and in real time control. We specialize the algorithm to mission profiling and planning optimizations, both to select reduced propulsion needs through trajectory planning and to explore time or fuel conservation strategies.
NASA Astrophysics Data System (ADS)
Penenko, Alexey; Penenko, Vladimir; Nuterman, Roman; Baklanov, Alexander; Mahura, Alexander
2015-11-01
Atmospheric chemistry dynamics is studied with a convection-diffusion-reaction model. The numerical data assimilation algorithm presented is based on additive-averaged splitting schemes. It carries out 'fine-grained' variational data assimilation on the separate splitting stages with respect to spatial dimensions and processes, i.e., the same measurement data are assimilated to different parts of the split model. This design admits an efficient implementation thanks to direct data assimilation algorithms for the transport process along coordinate lines. Results of numerical experiments with the chemical data assimilation algorithm on in situ concentration measurements in a real-data scenario are presented. To construct the scenario, meteorological data were taken from EnviroHIRLAM model output, initial conditions from MOZART model output, and measurements from the Airbase database.
The generalization ability of online SVM classification based on Markov sampling.
Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang
2015-03-01
In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish a bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies on the learning ability of online SVM classification based on Markov sampling for benchmark repository datasets. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of the training sample grows.
A Christoffel function weighted least squares algorithm for collocation approximations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narayan, Akil; Jakeman, John D.; Zhou, Tao
Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
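A compact one-dimensional sketch of the sampling-and-weighting scheme described, assuming Legendre polynomials on [-1, 1], whose equilibrium measure is the arcsine (Chebyshev) density; the target function and sample sizes are illustrative.

```python
import numpy as np
from numpy.polynomial import legendre

def orthonormal_legendre_vandermonde(x, n):
    """Vandermonde matrix of Legendre polynomials orthonormal with
    respect to the uniform probability density 1/2 on [-1, 1]."""
    V = legendre.legvander(x, n - 1)           # columns P_0 .. P_{n-1}
    return V * np.sqrt(2 * np.arange(n) + 1)   # normalize each column

rng = np.random.default_rng(1)
n, m = 20, 200                     # basis size, sample count
f = lambda x: np.exp(x) * np.sin(5 * x)

# sample from the (arcsine/Chebyshev) equilibrium measure of [-1, 1]
x = np.cos(np.pi * rng.uniform(size=m))
V = orthonormal_legendre_vandermonde(x, n)

# Christoffel-function weights: w(x) = n / sum_k p_k(x)^2
w = n / np.sum(V**2, axis=1)

# weighted least squares
sw = np.sqrt(w)
coeffs, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f(x), rcond=None)

xt = np.linspace(-1, 1, 1000)
err = np.max(np.abs(orthonormal_legendre_vandermonde(xt, n) @ coeffs - f(xt)))
print(f"max approximation error: {err:.2e}")
```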
NASA Astrophysics Data System (ADS)
Titeux, Isabelle; Li, Yuming M.; Debray, Karl; Guo, Ying-Qiao
2004-11-01
This Note deals with an efficient algorithm to carry out the plastic integration and compute the stresses due to large strains for materials satisfying Hill's anisotropic yield criterion. The classical plastic integration algorithm, the 'return mapping method', is widely used for nonlinear analyses of structures and numerical simulations of forming processes, but it requires an iterative scheme and may have convergence problems. A new direct algorithm based on a scalar method is developed that allows the plastic multiplier to be obtained directly without an iterative procedure; the computation time is thus greatly reduced and the numerical problems are avoided. To cite this article: I. Titeux et al., C. R. Mecanique 332 (2004).
Conservative algorithms for non-Maxwellian plasma kinetics
Le, Hai P.; Cambier, Jean -Luc
2017-12-08
Here, we present a numerical model and a set of conservative algorithms for non-Maxwellian plasma kinetics with inelastic collisions. These algorithms self-consistently solve for the time evolution of an isotropic electron energy distribution function interacting with an atomic state distribution function of an arbitrary number of levels through collisional excitation, deexcitation, as well as ionization and recombination. Electron-electron collisions, responsible for thermalization of the electron distribution, are also included in the model. The proposed algorithms guarantee mass/charge and energy conservation in a single step, and are applied to the case of non-uniform gridding of the energy axis in the phase space of the electron distribution function. Numerical test cases are shown to demonstrate the accuracy of the method and its conservation properties.
Implementation of a partitioned algorithm for simulation of large CSI problems
NASA Technical Reports Server (NTRS)
Alvin, Kenneth F.; Park, K. C.
1991-01-01
The implementation of a partitioned numerical algorithm for determining the dynamic response of coupled structure/controller/estimator finite-dimensional systems is reviewed. The partitioned approach leads to a set of coupled first and second-order linear differential equations which are numerically integrated with extrapolation and implicit step methods. The present software implementation, ACSIS, utilizes parallel processing techniques at various levels to optimize performance on a shared-memory concurrent/vector processing system. A general procedure for the design of controller and filter gains is also implemented, which utilizes the vibration characteristics of the structure to be solved. Also presented are: example problems; a user's guide to the software; the procedures and algorithm scripts; a stability analysis for the algorithm; and the source code for the parallel implementation.
Multi-fidelity uncertainty quantification in large-scale predictive simulations of turbulent flow
NASA Astrophysics Data System (ADS)
Geraci, Gianluca; Jofre-Cruanyes, Lluis; Iaccarino, Gianluca
2017-11-01
The performance characterization of complex engineering systems often relies on accurate, but computationally intensive numerical simulations. It is also well recognized that in order to obtain a reliable numerical prediction the propagation of uncertainties needs to be included. Therefore, Uncertainty Quantification (UQ) plays a fundamental role in building confidence in predictive science. Despite great improvements in recent years, even the more advanced UQ algorithms are still limited to fairly simplified applications and only moderate parameter dimensionality. Moreover, in the case of extremely large dimensionality, sampling methods, i.e. Monte Carlo (MC) based approaches, appear to be the only viable alternative. In this talk we describe and compare a family of approaches which aim to accelerate the convergence of standard MC simulations. These methods are based on hierarchies of generalized numerical resolutions (multi-level) or model fidelities (multi-fidelity), and attempt to leverage the correlation between Low- and High-Fidelity (HF) models to obtain a more accurate statistical estimator without introducing additional HF realizations. The performance of these methods is assessed on an irradiated particle-laden turbulent flow (PSAAP II solar energy receiver). This investigation was funded by the United States Department of Energy's (DoE) National Nuclear Security Administration (NNSA) under the Predictive Science Academic Alliance Program (PSAAP) II at Stanford University.
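A toy two-level estimator of the kind described, assuming a cheap low-fidelity model evaluated on the same random inputs as the expensive high-fidelity one; both models and sample sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def model_hf(xi):
    """High-fidelity quantity of interest (expensive in practice)."""
    return np.sin(xi) + 0.1 * xi**2

def model_lf(xi):
    """Low-fidelity QoI: cheap and well correlated with the HF model."""
    return np.sin(xi)

N_lf, N_hf = 100_000, 100        # many cheap samples, few expensive ones
xi_lf = rng.normal(size=N_lf)
xi_hf = rng.normal(size=N_hf)    # coupled: same inputs fed to both models

# two-level estimator: E[Q_hf] ~ mean(Q_lf) + mean(Q_hf - Q_lf)
q_lf = model_lf(xi_lf).mean()
corr = (model_hf(xi_hf) - model_lf(xi_hf)).mean()
print("multilevel estimate:", q_lf + corr)        # exact answer is 0.1
print("crude MC (same HF budget):", model_hf(xi_hf).mean())
```

Because the difference Q_hf - Q_lf has much smaller variance than Q_hf itself, the correction term needs far fewer expensive samples than a crude Monte Carlo estimate of the same accuracy.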
NASA Technical Reports Server (NTRS)
Werner, Frank; Wind, Galina; Zhang, Zhibo; Platnick, Steven; Di Girolamo, Larry; Zhao, Guangyu; Amarasinghe, Nandana; Meyer, Kerry
2016-01-01
A research-level retrieval algorithm for cloud optical and microphysical properties is developed for the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) aboard the Terra satellite. It is based on the operational MODIS algorithm. This paper documents the technical details of this algorithm and evaluates the retrievals for selected marine boundary layer cloud scenes through comparisons with the operational MODIS Data Collection 6 (C6) cloud product. The newly developed, ASTER-specific cloud masking algorithm is evaluated through comparison with an independent algorithm reported in Zhao and Di Girolamo (2006). To validate and evaluate the cloud optical thickness (tau) and cloud effective radius (r(sub eff)) from ASTER, the high-spatial-resolution ASTER observations are first aggregated to the same 1000 m resolution as MODIS. Subsequently, tau(sub aA) and r(sub eff, aA) retrieved from the aggregated ASTER radiances are compared with the collocated MODIS retrievals. For overcast pixels, the two data sets agree very well with Pearson's product-moment correlation coefficients of R greater than 0.970. However, for partially cloudy pixels there are significant differences between r(sub eff, aA) and the MODIS results, which can exceed 10 micrometers. Moreover, it is shown that the numerous delicate cloud structures in the example marine boundary layer scenes, resolved by the high-resolution ASTER retrievals, are smoothed by the MODIS observations. The overall good agreement between the research-level ASTER results and the operational MODIS C6 products proves the feasibility of MODIS-like retrievals from ASTER reflectance measurements and provides the basis for future studies concerning the scale dependency of satellite observations and three-dimensional radiative effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moignier, C; Huet, C; Barraux, V
Purpose: Advanced stereotactic radiotherapy (SRT) treatments require accurate dose calculation for treatment planning, especially for treatment sites involving heterogeneous patient anatomy. The purpose of this study was to evaluate the accuracy of the dose calculation algorithms, Raytracing and Monte Carlo (MC), implemented in the MultiPlan treatment planning system (TPS) in the presence of heterogeneities. Methods: First, the LINAC of a CyberKnife radiotherapy facility was modeled with the PENELOPE MC code. A protocol for the measurement of dose distributions with EBT3 films was established and validated through comparison between experimental dose distributions and calculated dose distributions obtained with the MultiPlan Raytracing and MC algorithms as well as with the PENELOPE MC model for treatments planned with the homogeneous Easycube phantom. Finally, bone and lung inserts were used to set up a heterogeneous Easycube phantom. Treatment plans with the 10, 7.5 or 5 mm field sizes were generated in the MultiPlan TPS with different tumor localizations (in the lung and at the lung/bone/soft tissue interface). Experimental dose distributions were compared to the PENELOPE MC and MultiPlan calculations using the gamma index method. Results: Regarding the experiment in the homogeneous phantom, 100% of the points passed the 3%/3mm tolerance criteria. These criteria include the global error of the method (CT-scan resolution, EBT3 dosimetry, LINAC positioning …), and were used afterwards to estimate the accuracy of the MultiPlan algorithms in heterogeneous media. Comparison of the dose distributions obtained in the heterogeneous phantom is in progress. Conclusion: This work has led to the development of numerical and experimental dosimetric tools for small beam dosimetry. The Raytracing and MC algorithms implemented in the MultiPlan TPS were evaluated in heterogeneous media.
Higher-order time integration of Coulomb collisions in a plasma using Langevin equations
Dimits, A. M.; Cohen, B. I.; Caflisch, R. E.; ...
2013-02-08
The extension of Langevin-equation Monte-Carlo algorithms for Coulomb collisions from the conventional Euler-Maruyama time integration to the next higher order of accuracy, the Milstein scheme, has been developed, implemented, and tested. This extension proceeds via a formulation of the angular scattering directly as stochastic differential equations in the two fixed-frame spherical-coordinate velocity variables. Results from the numerical implementation show the expected improvement [O(Δt) vs. O(Δt^(1/2))] in the strong convergence rate both for the speed |v| and angular components of the scattering. An important result is that this improved convergence is achieved for the angular component of the scattering if and only if the "area-integral" terms in the Milstein scheme are included. The resulting Milstein scheme is of value as a step towards algorithms with both improved accuracy and efficiency. These include both algorithms with improved convergence in the averages (weak convergence) and multi-time-level schemes. The latter have been shown to give a greatly reduced cost for a given overall error level when compared with conventional Monte-Carlo schemes, and their performance is improved considerably when the Milstein algorithm is used for the underlying time advance versus the Euler-Maruyama algorithm. A new method for sampling the area integrals is given which is a simplification of an earlier direct method and which retains high accuracy. Lastly, this method, while being useful in its own right because of its relative simplicity, is also expected to considerably reduce the computational requirements for the direct conditional sampling of the area integrals that is needed for adaptive strong integration.
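The strong-convergence gap between the two schemes can be seen on a scalar test SDE. Note that for scalar multiplicative noise the Milstein correction needs no area integrals; those arise only in the multi-dimensional, non-commutative setting treated in the paper. Geometric Brownian motion is an assumed test problem here, not the Coulomb-collision model.

```python
import numpy as np

rng = np.random.default_rng(3)

# geometric Brownian motion dX = mu*X dt + sigma*X dW; exact solution known
mu, sigma, x0, T = 0.5, 0.8, 1.0, 1.0

def strong_error(n_steps, n_paths=2000):
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    W = dW.sum(axis=1)
    x_em = np.full(n_paths, x0)
    x_mil = np.full(n_paths, x0)
    for k in range(n_steps):
        dw = dW[:, k]
        x_em = x_em + mu * x_em * dt + sigma * x_em * dw
        # Milstein adds 0.5*b*b'*(dW^2 - dt); here b(x) = sigma*x, b' = sigma
        x_mil = (x_mil + mu * x_mil * dt + sigma * x_mil * dw
                 + 0.5 * sigma**2 * x_mil * (dw**2 - dt))
    x_exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W)
    return np.abs(x_em - x_exact).mean(), np.abs(x_mil - x_exact).mean()

for n in (10, 40, 160):
    em, mil = strong_error(n)
    print(f"n={n:4d}  Euler-Maruyama: {em:.4f}  Milstein: {mil:.4f}")
```

Halving the step size should shrink the Euler-Maruyama error by roughly sqrt(2) and the Milstein error by roughly 2, consistent with the O(Δt^(1/2)) and O(Δt) strong rates quoted above.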
2013-01-01
Background: Next generation sequencing technologies have greatly advanced many research areas of the biomedical sciences through their capability to generate massive amounts of genetic information at unprecedented rates. The advent of next generation sequencing has led to the development of numerous computational tools to analyze and assemble the millions to billions of short sequencing reads produced by these technologies. While these tools filled an important gap, current approaches for storing, processing, and analyzing short read datasets generally have remained simple and lack the complexity needed to efficiently model the produced reads and assemble them correctly. Results: Previously, we presented an overlap graph coarsening scheme for modeling read overlap relationships on multiple levels. Most current read assembly and analysis approaches use a single graph or set of clusters to represent the relationships among a read dataset. Instead, we use a series of graphs to represent the reads and their overlap relationships across a spectrum of information granularity. At each information level our algorithm is capable of generating clusters of reads from the reduced graph, forming an integrated graph modeling and clustering approach for read analysis and assembly. Previously we applied our algorithm to simulated and real 454 datasets to assess its ability to efficiently model and cluster next generation sequencing data. In this paper we extend our algorithm to large simulated and real Illumina datasets to demonstrate that our algorithm is practical for both sequencing technologies. Conclusions: Our overlap graph theoretic algorithm is able to model next generation sequencing reads at various levels of granularity through the process of graph coarsening. Additionally, our model allows for efficient representation of the read overlap relationships, is scalable for large datasets, and is practical for both Illumina and 454 sequencing technologies. PMID:24564333
Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm
Veladi, H.
2014-01-01
A pushover analysis method based on semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to the conventional design methods to show the power or weakness of the algorithm. PMID:25202717
Fort, J C
1988-01-01
We present an application of the Kohonen algorithm to the traveling salesman problem: using only this algorithm, without an energy function or any parameters chosen ad hoc, we found good suboptimal tours. We give a neural model version of this algorithm that is closer to classical neural networks. This is illustrated with various numerical examples.
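A rough sketch of the elastic-ring (Kohonen) approach to the TSP: a ring of neurons is pulled toward randomly chosen cities while the learning rate and neighborhood shrink, and the tour is read off the final node ordering. City count, ring size, and decay rates are illustrative guesses, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
cities = rng.uniform(size=(30, 2))          # random city coordinates
n_nodes = 8 * len(cities)                   # elastic ring of neurons
theta = np.linspace(0, 2 * np.pi, n_nodes, endpoint=False)
nodes = 0.5 + 0.3 * np.c_[np.cos(theta), np.sin(theta)]

lr, radius = 0.8, n_nodes // 4
for it in range(20000):
    city = cities[rng.integers(len(cities))]
    winner = np.argmin(np.sum((nodes - city) ** 2, axis=1))
    # circular neighborhood of the winning neuron on the ring
    d = np.abs(np.arange(n_nodes) - winner)
    d = np.minimum(d, n_nodes - d)
    h = np.exp(-(d ** 2) / (2 * max(radius, 1) ** 2))
    nodes += lr * h[:, None] * (city - nodes)
    lr *= 0.99997                            # slowly decay learning rate
    radius *= 0.9997                         # and neighborhood size

# read the tour off the ring: order cities by their winning node index
order = np.argsort([np.argmin(np.sum((nodes - c) ** 2, axis=1)) for c in cities])
tour = cities[order]
length = np.sum(np.linalg.norm(np.roll(tour, -1, axis=0) - tour, axis=1))
print(f"tour length: {length:.3f}")
```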
NASA Technical Reports Server (NTRS)
Pflaum, Christoph
1996-01-01
A multilevel algorithm is presented that solves general second order elliptic partial differential equations on adaptive sparse grids. The multilevel algorithm consists of several V-cycles. Suitable discretizations ensure that the discrete system of equations can be solved efficiently. Numerical experiments show a convergence rate of order O(1) for the multilevel algorithm.
2017-01-01
Computational scientists have designed many useful algorithms by exploring a biological process or imitating natural evolution. These algorithms can be used to solve engineering optimization problems. Inspired by changes in the state of matter, we propose a novel optimization algorithm called the differential cloud particles evolution algorithm based on a data-driven mechanism (CPDD). In the proposed algorithm, the optimization process is divided into two stages, namely, a fluid stage and a solid stage. The algorithm carries out a strategy of integrating global exploration with local exploitation in the fluid stage, while local exploitation is carried out mainly in the solid stage. The quality of the solution and the efficiency of the search are influenced greatly by the control parameters. Therefore, the data-driven mechanism is designed to obtain better control parameters and thus ensure good performance on numerical benchmark problems. In order to verify the effectiveness of CPDD, numerical experiments are carried out on all the CEC2014 contest benchmark functions. Finally, two application problems of artificial neural networks are examined. The experimental results show that CPDD is competitive with respect to eight other state-of-the-art intelligent optimization algorithms. PMID:28761438
NVU dynamics. I. Geodesic motion on the constant-potential-energy hypersurface.
Ingebrigtsen, Trond S; Toxvaerd, Søren; Heilmann, Ole J; Schrøder, Thomas B; Dyre, Jeppe C
2011-09-14
An algorithm is derived for computer simulation of geodesics on the constant-potential-energy hypersurface of a system of N classical particles. First, a basic time-reversible geodesic algorithm is derived by discretizing the geodesic stationarity condition and implementing the constant-potential-energy constraint via standard Lagrangian multipliers. The basic NVU algorithm is tested by single-precision computer simulations of the Lennard-Jones liquid. Excellent numerical stability is obtained if the force cutoff is smoothed and the two initial configurations have identical potential energy within machine precision. Nevertheless, just as for NVE algorithms, stabilizers are needed for very long runs in order to compensate for the accumulation of numerical errors that eventually lead to "entropic drift" of the potential energy towards higher values. A modification of the basic NVU algorithm is introduced that ensures potential-energy and step-length conservation; center-of-mass drift is also eliminated. Analytical arguments confirmed by simulations demonstrate that the modified NVU algorithm is absolutely stable. Finally, we present simulations showing that the NVU algorithm and the standard leap-frog NVE algorithm have identical radial distribution functions for the Lennard-Jones liquid. © 2011 American Institute of Physics
A new approach of watermarking technique by means multichannel wavelet functions
NASA Astrophysics Data System (ADS)
Agreste, Santa; Puccio, Luigia
2012-12-01
Digital piracy involving images, music, movies, books, and so on is a legal problem that has not yet found a solution. It therefore becomes crucial to create and develop methods and numerical algorithms to resolve copyright problems. In this paper we focus attention on a new approach to watermarking applied to digital color images. Our aim is to describe the realized watermarking algorithm based on multichannel wavelet functions with multiplicity r = 3, called MCWM 1.0. We report extensive experiments and some important numerical results showing the robustness of the proposed algorithm to geometrical attacks.
Andrianov, Alexey; Szabo, Aron; Sergeev, Alexander; Kim, Arkady; Chvykov, Vladimir; Kalashnikov, Mikhail
2016-11-14
We developed an improved approach to calculate the Fourier transform of signals with arbitrarily large quadratic phase which can be efficiently implemented in numerical simulations utilizing the fast Fourier transform. The proposed algorithm significantly reduces the computational cost of the Fourier transform of a highly chirped and stretched pulse by splitting it into two separate transforms of almost transform-limited pulses, thereby reducing the required grid size roughly by a factor of the pulse stretching. The application of our improved Fourier transform algorithm in the split-step method for numerical modeling of CPA and OPCPA shows excellent agreement with standard algorithms.
Hromadka, T.V.; Guymon, G.L.
1985-01-01
An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fawley, William M.
We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser (FEL) simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes that are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results, including the predicted incoherent, spontaneous emission, as tests of the shot noise algorithm's correctness.
NASA Technical Reports Server (NTRS)
Farhat, C.; Park, K. C.; Dubois-Pelerin, Y.
1991-01-01
An unconditionally stable second order accurate implicit-implicit staggered procedure for the finite element solution of fully coupled thermoelasticity transient problems is proposed. The procedure is stabilized with a semi-algebraic augmentation technique. A comparative cost analysis reveals the superiority of the proposed computational strategy to other conventional staggered procedures. Numerical examples of one and two-dimensional thermomechanical coupled problems demonstrate the accuracy of the proposed numerical solution algorithm.
Electromagnetic Field Effects in Semiconductor Crystal Growth
NASA Technical Reports Server (NTRS)
Dulikravich, George S.
1996-01-01
This proposed two-year research project involved the development of an analytical model, a numerical algorithm for its integration, and software for the analysis of a solidification process under the influence of electric and magnetic fields in microgravity. Due to the complexity of the analytical model that was developed and of its boundary conditions, only a preliminary version of the numerical algorithm was completed, while the development of the software package was not finished.
A mathematical model for simulating noise suppression of lined ejectors
NASA Technical Reports Server (NTRS)
Watson, Willie R.
1994-01-01
A mathematical model containing the essential features embodied in the noise suppression of lined ejectors is presented. Although some simplification of the physics is necessary to render the model mathematically tractable, the present model is the most versatile and technologically advanced available at this time. A system of linearized equations and the boundary conditions governing the sound field are derived starting from the equations of fluid dynamics. A nonreflecting boundary condition is developed. In view of the complex nature of the equations, a parametric study requires the use of numerical techniques and modern computers. A finite element algorithm that solves the differential equations coupled with the boundary condition is then introduced. The numerical method results in a matrix equation with several hundred thousand degrees of freedom that is solved efficiently on a supercomputer. The model is validated by comparing results either with exact solutions or with approximate solutions from other works. In each case, excellent correlations are obtained. The usefulness of the model as an optimization tool and the importance of variable impedance liners as a mechanism for achieving broadband suppression within a lined ejector are demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steich, D J; Brugger, S T; Kallman, J S
2000-02-01
This final report describes our efforts on the Three-Dimensional Massively Parallel CEM Technologies LDRD project (97-ERD-009). A significant need exists for more advanced time domain computational electromagnetics modeling. Bookkeeping details and modifying inflexible software constitute the vast majority of the effort required to address such needs, and the required effort escalates rapidly as problem complexity increases, for example with hybrid meshes requiring hybrid numerics on massively parallel platforms (MPPs). This project attempts to alleviate the above limitations by investigating flexible abstractions for these numerical algorithms on MPPs using object-oriented methods, providing a programming environment that insulates the physics from the bookkeeping. The three major design iterations during the project, known as TIGER-I to TIGER-III, are discussed. Each version of TIGER is briefly described along with lessons learned during the development and implementation. An Application Programming Interface (API) of the object-oriented interface for TIGER-III is included in three appendices. The three appendices contain the Utilities, Entity-Attribute, and Mesh libraries developed during the project. The API libraries represent a snapshot of our latest attempt at insulating the physics from the bookkeeping.
NASA Astrophysics Data System (ADS)
Nguyen, Dinh-Liem; Klibanov, Michael V.; Nguyen, Loc H.; Kolesov, Aleksandr E.; Fiddy, Michael A.; Liu, Hui
2017-09-01
We analyze in this paper the performance of a newly developed globally convergent numerical method for a coefficient inverse problem for the case of multi-frequency experimental backscatter data associated with a single incident wave. These data were collected using a microwave scattering facility at the University of North Carolina at Charlotte. The challenges for the inverse problem under consideration arise not only from its high nonlinearity and severe ill-posedness but also from the facts that the amount of measured data is minimal and that these raw data are contaminated by a significant amount of noise, due to a non-ideal experimental setup. This setup is motivated by our target application in detecting and identifying explosives. We show in this paper how the raw data can be preprocessed and successfully inverted using our inversion method. More precisely, we are able to reconstruct the dielectric constants and the locations of the scattering objects with good accuracy, without using any advanced a priori knowledge of their physical and geometrical properties.
A spectral-finite difference solution of the Navier-Stokes equations in three dimensions
NASA Astrophysics Data System (ADS)
Alfonsi, Giancarlo; Passoni, Giuseppe; Pancaldo, Lea; Zampaglione, Domenico
1998-07-01
A new computational code for the numerical integration of the three-dimensional Navier-Stokes equations in their non-dimensional velocity-pressure formulation is presented. The system of non-linear partial differential equations governing the time-dependent flow of a viscous incompressible fluid in a channel is managed by means of a mixed spectral-finite difference method, in which different numerical techniques are applied: Fourier decomposition is used along the homogeneous directions, second-order Crank-Nicolson algorithms are employed for the spatial derivatives in the direction orthogonal to the solid walls and a fourth-order Runge-Kutta procedure is implemented for both the calculation of the convective term and the time advancement. The pressure problem, cast in the Helmholtz form, is solved with the use of a cyclic reduction procedure. No-slip boundary conditions are used at the walls of the channel and cyclic conditions are imposed at the other boundaries of the computing domain. Results are provided for different values of the Reynolds number at several time steps of integration and are compared with results obtained by other authors.
A numerical algorithm of tooth profile of non-circular cylindrical gear
NASA Astrophysics Data System (ADS)
Wang, Xuan
2017-08-01
Non-circular cylindrical gear (NCCG) is a common form of non-circular gear. Unlike that of a circular gear, the tooth profile equation of an NCCG cannot be obtained in closed form, so a numerical algorithm is needed to calculate its tooth profile. For this reason, this paper presents a simple and highly efficient numerical algorithm to obtain the tooth profile of an NCCG. Firstly, the mathematical model of the tooth profile envelope of the NCCG is established based on the principle of gear shaping, and the tooth profile envelope of the NCCG is obtained. Secondly, the polar radius and polar angle of the shaper cutter tooth profile are chosen as the criteria by which the points of the NCCG tooth cogging can be screened out. Finally, the boundary of the tooth cogging points is extracted by a distance criterion and correspondingly the tooth profile of the NCCG is obtained.
Asymptotic integration algorithms for first-order ODEs with application to viscoplasticity
NASA Technical Reports Server (NTRS)
Freed, Alan D.; Yao, Minwu; Walker, Kevin P.
1992-01-01
When constructing an algorithm for the numerical integration of a differential equation, one must first convert the known ordinary differential equation (ODE), which is defined at a point, into an ordinary difference equation (O(delta)E), which is defined over an interval. Asymptotic, generalized midpoint, and trapezoidal O(delta)E algorithms are derived for a nonlinear first order ODE written in the form of a linear ODE. The asymptotic forward (typically underdamped) and backward (typically overdamped) integrators bound the midpoint and trapezoidal integrators, which tend to cancel out unwanted numerical damping by averaging, in some sense, the forward and backward integrations. Viscoplasticity presents itself as a system of nonlinear, coupled first-order ODEs that are mathematically stiff and therefore difficult to integrate numerically. They are an excellent application for the asymptotic integrators. Considering a general viscoplastic structure, it is demonstrated that one can either integrate the viscoplastic stresses or their associated eigenstrains.
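The forward/backward/trapezoidal family can be written as a one-parameter theta-scheme; on a stiff linear test equation the explicit (forward) member goes unstable at a step size where the backward and trapezoidal members remain well behaved. The test equation and parameters below are assumptions for illustration, not the paper's viscoplastic system.

```python
import numpy as np

lam, h, T = 50.0, 0.05, 1.0               # stiff rate, step, horizon
g = lambda t: np.sin(t)                   # test ODE: y' = -lam*(y - g(t))

def theta_scheme(theta):
    """theta = 0: forward (explicit) Euler; 1: backward Euler;
    0.5: trapezoidal rule."""
    y, t = 0.0, 0.0
    for _ in range(round(T / h)):
        y = (y - h * lam * (1 - theta) * (y - g(t))
             + h * lam * theta * g(t + h)) / (1 + h * lam * theta)
        t += h
    return y

# exact solution of y' = -lam*(y - sin t), y(0) = 0
y_exact = (lam**2 * np.sin(T) - lam * np.cos(T)
           + lam * np.exp(-lam * T)) / (1 + lam**2)
for name, th in [("forward", 0.0), ("trapezoidal", 0.5), ("backward", 1.0)]:
    print(f"{name:12s} error: {abs(theta_scheme(th) - y_exact):.3e}")
```

Here h*lam = 2.5, so the forward member's amplification factor |1 - h*lam| = 1.5 exceeds one and its error blows up, while the other two members stay accurate.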
Fast algorithms for Quadrature by Expansion I: Globally valid expansions
NASA Astrophysics Data System (ADS)
Rachh, Manas; Klöckner, Andreas; O'Neil, Michael
2017-09-01
The use of integral equation methods for the efficient numerical solution of PDE boundary value problems requires two main tools: quadrature rules for the evaluation of layer potential integral operators with singular kernels, and fast algorithms for solving the resulting dense linear systems. Classically, these tools were developed separately. In this work, we present a unified numerical scheme based on coupling Quadrature by Expansion, a recent quadrature method, to a customized Fast Multipole Method (FMM) for the Helmholtz equation in two dimensions. The method allows the evaluation of layer potentials in linear-time complexity, anywhere in space, with a uniform, user-chosen level of accuracy as a black-box computational method. Providing this capability requires geometric and algorithmic considerations beyond the needs of standard FMMs as well as careful consideration of the accuracy of multipole translations. We illustrate the speed and accuracy of our method with various numerical examples.
Cognitive correlates of performance in advanced mathematics.
Wei, Wei; Yuan, Hongbo; Chen, Chuansheng; Zhou, Xinlin
2012-03-01
Much research has been devoted to understanding cognitive correlates of elementary mathematics performance, but little such research has been done for advanced mathematics (e.g., modern algebra, statistics, and mathematical logic). To promote mathematical knowledge among college students, it is necessary to understand what factors (including cognitive factors) are important for acquiring advanced mathematics. We recruited 80 undergraduates from four universities in Beijing. The current study investigated the associations between students' performance on a test of advanced mathematics and a battery of 17 cognitive tasks on basic numerical processing, complex numerical processing, spatial abilities, language abilities, and general cognitive processing. The results showed that spatial abilities were significantly correlated with performance in advanced mathematics after controlling for other factors. In addition, certain language abilities (i.e., comprehension of words and sentences) also made unique contributions. In contrast, basic numerical processing and computation were generally not correlated with performance in advanced mathematics. Results suggest that spatial abilities and language comprehension, but not basic numerical processing, may play an important role in advanced mathematics. These results are discussed in terms of their theoretical significance and practical implications. ©2011 The British Psychological Society.
Parabolized Navier-Stokes Code for Computing Magneto-Hydrodynamic Flowfields
NASA Technical Reports Server (NTRS)
Mehta, Unmeel B. (Technical Monitor); Tannehill, J. C.
2003-01-01
This report consists of two published papers, 'Computation of Magnetohydrodynamic Flows Using an Iterative PNS Algorithm' and 'Numerical Simulation of Turbulent MHD Flows Using an Iterative PNS Algorithm'.
Simultaneous and semi-alternating projection algorithms for solving split equality problems.
Dong, Qiao-Li; Jiang, Dan
2018-01-01
In this article, we first introduce two simultaneous projection algorithms for solving the split equality problem by using a new choice of the stepsize, and then propose two semi-alternating projection algorithms. The weak convergence of the proposed algorithms is analyzed under standard conditions. As applications, we extend the results to solve the split feasibility problem. Finally, a numerical example is presented to illustrate the efficiency and advantage of the proposed algorithms.
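A sketch of the classical simultaneous projection iteration for the split equality problem (find x in C, y in Q with Ax = By), using a fixed stepsize derived from the operator norms rather than the paper's new stepsize choice; the sets, matrices, and starting points are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(8, 10))
B = rng.normal(size=(8, 12))

proj_C = lambda x: np.maximum(x, 0.0)        # C: nonnegative orthant
def proj_Q(y, radius=2.0):                   # Q: Euclidean ball
    n = np.linalg.norm(y)
    return y if n <= radius else radius * y / n

# classical fixed stepsize; the paper's contribution is a better choice
gamma = 1.0 / (np.linalg.norm(A, 2) ** 2 + np.linalg.norm(B, 2) ** 2)

x, y = rng.normal(size=10), rng.normal(size=12)
print("initial ||Ax - By||:", np.linalg.norm(A @ x - B @ y))
for _ in range(20000):
    resid = A @ x - B @ y
    x = proj_C(x - gamma * A.T @ resid)      # gradient step on x, project
    y = proj_Q(y + gamma * B.T @ resid)      # gradient step on y, project
print("final   ||Ax - By||:", np.linalg.norm(A @ x - B @ y))
```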
Numerical simulation of a bubble rising in an environment consisting of Xanthan gum
NASA Astrophysics Data System (ADS)
Aguirre, Víctor A.; Castillo, Byron A.; Narvaez, Christian P.
2017-09-01
An improved numerical algorithm for the front tracking method is developed to simulate a bubble rising in a viscous liquid. In the new numerical algorithm, a volume correction is introduced to conserve the bubble volume while tracking the bubble's rise and deformation, and volume flux conservation is adopted to solve the Navier-Stokes equations for the fluid flow using the finite volume method. Non-Newtonian fluids are widely used in industry, for example in the food and energy industries. In this research we used xanthan gum, a microbial polysaccharide. In order to obtain the properties of the xanthan gum, such as viscosity, storage and loss modulus, shear rate, etc., it was necessary to carry out an amplitude sweep and a steady flow test in a rheometer with a concentric cylinder geometry. Based on the data obtained and using numerical regression, the coefficients required by the Giesekus model are determined. With these coefficients, it is possible to simulate the behavior of the fluid with the developed algorithm. The data produced by OpenFOAM are then compared with the experimental data.
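In two dimensions, a volume (area) correction of the kind mentioned can be as simple as rescaling the marker loop about its centroid to restore the enclosed area; this is a stand-in guess at the idea, not the authors' exact correction.

```python
import numpy as np

def polygon_area(pts):
    """Shoelace formula for a closed marker loop of shape (n, 2)."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * np.abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def volume_correct(pts, target_area):
    """Rescale markers about their centroid so the enclosed area
    matches the target value."""
    centroid = pts.mean(axis=0)
    scale = np.sqrt(target_area / polygon_area(pts))
    return centroid + scale * (pts - centroid)

# markers on a unit-area bubble, perturbed as if by advection errors
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
r0 = 1.0 / np.sqrt(np.pi)                     # circle of area 1
pts = r0 * np.c_[np.cos(theta), np.sin(theta)]
pts *= 1.01                                   # ~2% spurious volume gain
print("before:", polygon_area(pts),
      "after:", polygon_area(volume_correct(pts, 1.0)))
```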
Optimization methods and silicon solar cell numerical models
NASA Technical Reports Server (NTRS)
Girardini, K.; Jacobsen, S. E.
1986-01-01
An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.
Derivative Free Gradient Projection Algorithms for Rotation
ERIC Educational Resources Information Center
Jennrich, Robert I.
2004-01-01
A simple modification substantially simplifies the use of the gradient projection (GP) rotation algorithms of Jennrich (2001, 2002). These algorithms require subroutines to compute the value and gradient of any specific rotation criterion of interest. The gradient can be difficult to derive and program. It is shown that using numerical gradients…
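The appeal of numerical gradients is that any rotation criterion can be plugged in without deriving its gradient. Below is a central-difference sketch for an orthogonal rotation of a loading matrix, with one common form of the varimax criterion assumed as the example; the matrix sizes and criterion form are illustrative.

```python
import numpy as np

def varimax_criterion(L):
    """One common form of the varimax complexity (to be maximized)."""
    L2 = L**2
    return np.sum(L2**2) / L.shape[0] - np.sum(L2.mean(axis=0) ** 2)

def numerical_gradient(Q, A, T, eps=1e-6):
    """Central-difference gradient of Q(A @ T) with respect to T,
    sparing the user an analytic derivation."""
    G = np.zeros_like(T)
    for i in range(T.shape[0]):
        for j in range(T.shape[1]):
            Tp, Tm = T.copy(), T.copy()
            Tp[i, j] += eps
            Tm[i, j] -= eps
            G[i, j] = (Q(A @ Tp) - Q(A @ Tm)) / (2 * eps)
    return G

rng = np.random.default_rng(6)
A = rng.normal(size=(12, 3))          # unrotated loading matrix
T = np.eye(3)                         # start at the identity rotation
print(numerical_gradient(varimax_criterion, A, T))
```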
A Locomotion Control Algorithm for Robotic Linkage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dohner, Jeffrey L.
This dissertation describes the development of a control algorithm that transitions a robotic linkage system between stabilized states producing responsive locomotion. The developed algorithm is demonstrated using a simple robotic construction consisting of a few links with actuation and sensing at each joint. Numerical and experimental validation is presented.
ERIC Educational Resources Information Center
Pliszka, Steven R.; Crismon, M. Lynn; Hughes, Carroll W.; Corners, C. Keith; Emslie, Graham J.; Jensen, Peter S.; McCracken, James T.; Swanson, James M.; Lopez, Molly
2006-01-01
Objective: In 1998, the Texas Department of Mental Health and Mental Retardation developed algorithms for medication treatment of attention-deficit/hyperactivity disorder (ADHD). Advances in the psychopharmacology of ADHD and results of a feasibility study of algorithm use in community mental health centers caused the algorithm to be modified and…
NASA Technical Reports Server (NTRS)
Russell, B. Don
1989-01-01
This research concentrated on the application of advanced signal processing, expert system, and digital technologies for the detection and control of low grade, incipient faults on spaceborne power systems. The researchers have considerable experience in the application of advanced digital technologies and the protection of terrestrial power systems. This experience was used in the current contracts to develop new approaches for protecting the electrical distribution system in spaceborne applications. The project was divided into three distinct areas: (1) investigate the applicability of fault detection algorithms developed for terrestrial power systems to the detection of faults in spaceborne systems; (2) investigate the digital hardware and architectures required to monitor and control spaceborne power systems with full capability to implement new detection and diagnostic algorithms; and (3) develop a real-time expert operating system for implementing diagnostic and protection algorithms. Significant progress has been made in each of the above areas. Several terrestrial fault detection algorithms were modified to better adapt to spaceborne power system environments. Several digital architectures were developed and evaluated in light of the fault detection algorithms.
Quantitative imaging technique using the layer-stripping algorithm
NASA Astrophysics Data System (ADS)
Beilina, L.
2017-07-01
We present the layer-stripping algorithm for the solution of the hyperbolic coefficient inverse problem (CIP). Our numerical examples show quantitative reconstruction of small tumor-like inclusions in two dimensions.
Advanced Physiological Estimation of Cognitive Status (APECS)
2009-09-15
Final report. Physiological sensors can monitor fitness and transmit data to command and control systems. Some of the signals that the physiological sensors measure are readily interpreted, while the electroencephalogram (EEG) and other signals require a complex series of mathematical transformations or algorithms to interpret.
Reconstructing householder vectors from Tall-Skinny QR
Ballard, Grey Malone; Demmel, James; Grigori, Laura; ...
2015-08-05
The Tall-Skinny QR (TSQR) algorithm is more communication efficient than the standard Householder algorithm for QR decomposition of matrices with many more rows than columns. However, TSQR produces a different representation of the orthogonal factor and therefore requires more software development to support the new representation. Further, implicitly applying the orthogonal factor to the trailing matrix in the context of factoring a square matrix is more complicated and costly than with the Householder representation. We show how to perform TSQR and then reconstruct the Householder vector representation with the same asymptotic communication efficiency and little extra computational cost. We demonstrate the high performance and numerical stability of this algorithm both theoretically and empirically. The new Householder reconstruction algorithm allows us to design more efficient parallel QR algorithms, with significantly lower latency cost compared to Householder QR and lower bandwidth and latency costs compared with the Communication-Avoiding QR (CAQR) algorithm. Experiments on supercomputers demonstrate the benefits of the communication cost improvements: in particular, our experiments show substantial improvements over tuned library implementations for tall-and-skinny matrices. Furthermore, we also provide algorithmic improvements to the Householder QR and CAQR algorithms, and we investigate several alternatives to the Householder reconstruction algorithm that sacrifice guarantees on numerical stability in some cases in order to obtain higher performance.
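A serial numpy sketch of the two-stage TSQR factorization itself (the Householder reconstruction step the paper adds is omitted): factor row blocks independently, QR the stacked R factors, and propagate the second-stage Q back to the blocks. Block count and matrix sizes are illustrative assumptions.

```python
import numpy as np

def tsqr(A, n_blocks=4):
    """Two-stage Tall-Skinny QR: each row block is factored independently
    (in parallel, on a real machine), then the stacked R factors are
    factored once more and the global Q is assembled."""
    blocks = np.array_split(A, n_blocks, axis=0)
    Qs, Rs = zip(*(np.linalg.qr(blk) for blk in blocks))
    Q2, R = np.linalg.qr(np.vstack(Rs))          # second-stage QR
    # distribute the rows of Q2 back to the local Q factors
    splits = np.cumsum([r.shape[0] for r in Rs])[:-1]
    Q2_blocks = np.split(Q2, splits, axis=0)
    Q = np.vstack([q @ q2 for q, q2 in zip(Qs, Q2_blocks)])
    return Q, R

rng = np.random.default_rng(7)
A = rng.normal(size=(10_000, 16))               # many more rows than columns
Q, R = tsqr(A)
print("factorization error:", np.linalg.norm(A - Q @ R))
print("orthogonality error:", np.linalg.norm(Q.T @ Q - np.eye(16)))
```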
Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baudron, Anne-Marie; Lautard, Jean-Jacques
2014-12-15
In this paper we present a time-parallel algorithm for the 3D neutrons calculation of a transient model in a nuclear reactor core. The neutrons calculation consists in numerically solving the time dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with finite elements method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model presents moving control rods during the time of the reaction. Therefore, cross-sections (piecewise constants) are taken into account by interpolations with respect to the velocity of the control rods. The parallelism across the time is achieved by an adequate use of the parareal in time algorithm to the handled problem. This parallel method is a predictor corrector scheme that iteratively combines the use of two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with large time step and fixed position control rods model, while the fine propagator is assumed to be a high order numerical approximation of the full model. The parallel implementation of our method provides a good scalability of the algorithm. Numerical results show the efficiency of the parareal method on large light water reactor transient model corresponding to the Langenbuch–Maurer–Werner benchmark.
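A scalar sketch of the parareal predictor-corrector iteration U(n+1, k+1) = G(U(n, k+1)) + F(U(n, k)) - G(U(n, k)), with an implicit-Euler coarse propagator and an RK4 fine propagator on a toy ODE standing in for the diffusion model; in a real implementation the fine sweeps run in parallel across the time slabs. The test equation and slab sizes are illustrative.

```python
import numpy as np

f = lambda t, u: -u + np.sin(t)          # toy stand-in for the full model

def coarse(u, t, dt):                    # one large implicit-Euler step
    return (u + dt * np.sin(t + dt)) / (1 + dt)

def fine(u, t, dt, m=50):                # m small RK4 steps across a slab
    h = dt / m
    for k in range(m):
        tk = t + k * h
        k1 = f(tk, u); k2 = f(tk + h/2, u + h/2 * k1)
        k3 = f(tk + h/2, u + h/2 * k2); k4 = f(tk + h, u + h * k3)
        u = u + h / 6 * (k1 + 2*k2 + 2*k3 + k4)
    return u

N, dt, u0 = 20, 0.5, 1.0                 # N time slabs of width dt
U = np.empty(N + 1); U[0] = u0
for n in range(N):                       # serial coarse prediction
    U[n + 1] = coarse(U[n], n * dt, dt)

for k in range(5):                       # parareal corrections
    F = np.array([fine(U[n], n * dt, dt) for n in range(N)])   # parallel part
    G_old = np.array([coarse(U[n], n * dt, dt) for n in range(N)])
    Unew = np.empty_like(U); Unew[0] = u0
    for n in range(N):                   # cheap serial update sweep
        Unew[n + 1] = coarse(Unew[n], n * dt, dt) + F[n] - G_old[n]
    U = Unew

print("endpoint value after parareal iterations:", U[-1])
```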
A dose error evaluation study for 4D dose calculations
NASA Astrophysics Data System (ADS)
Milz, Stefan; Wilkens, Jan J.; Ullrich, Wolfgang
2014-10-01
Previous studies have shown that respiration induced motion is not negligible for Stereotactic Body Radiation Therapy. The intrafractional breathing induced motion influences the delivered dose distribution on the underlying patient geometry such as the lung or the abdomen. If a static geometry is used, a planning process for these indications does not represent the entire dynamic process. The quality of a full 4D dose calculation approach depends on the dose coordinate transformation process between deformable geometries. This article provides an evaluation study that introduces an advanced method to verify the quality of numerical dose transformation generated by four different algorithms. The used transformation metric value is based on the deviation of the dose mass histogram (DMH) and the mean dose throughout dose transformation. The study compares the results of four algorithms. In general, two elementary approaches are used: dose mapping and energy transformation. Dose interpolation (DIM) and an advanced concept, so called divergent dose mapping model (dDMM), are used for dose mapping. The algorithms are compared to the basic energy transformation model (bETM) and the energy mass congruent mapping (EMCM). For evaluation 900 small sample regions of interest (ROI) are generated inside an exemplary lung geometry (4DCT). A homogeneous fluence distribution is assumed for dose calculation inside the ROIs. The dose transformations are performed with the four different algorithms. The study investigates the DMH-metric and the mean dose metric for different scenarios (voxel sizes: 8 mm, 4 mm, 2 mm, 1 mm; 9 different breathing phases). dDMM achieves the best transformation accuracy in all measured test cases with 3-5% lower errors than the other models. The results of dDMM are reasonable and most efficient in this study, although the model is simple and easy to implement. The EMCM model also achieved suitable results, but the approach requires a more complex programming structure. The study discloses disadvantages for the bETM and for the DIM. DIM yielded insufficient results for large voxel sizes, while bETM is prone to errors for small voxel sizes.
On-the-fly Numerical Surface Integration for Finite-Difference Poisson-Boltzmann Methods.
Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray
2011-11-01
Most implicit solvation models require the definition of a molecular surface as the interface that separates the solute in atomic detail from the solvent approximated as a continuous medium. Commonly used surface definitions include the solvent accessible surface (SAS), the solvent excluded surface (SES), and the van der Waals surface. In this study, we present an efficient numerical algorithm to compute the SES and SAS areas to facilitate the applications of finite-difference Poisson-Boltzmann methods in biomolecular simulations. Different from previous numerical approaches, our algorithm is physics-inspired and intimately coupled to the finite-difference Poisson-Boltzmann methods to fully take advantage of its existing data structures. Our analysis shows that the algorithm can achieve very good agreement with the analytical method in the calculation of the SES and SAS areas. Specifically, in our comprehensive test of 1,555 molecules, the average unsigned relative error is 0.27% in the SES area calculations and 1.05% in the SAS area calculations at the grid spacing of 1/2 Å. In addition, a systematic correction analysis can be used to improve the accuracy for the coarse-grid SES area calculations, with the average unsigned relative error in the SES areas reduced to 0.13%. These validation studies indicate that the proposed algorithm can be applied to biomolecules over a broad range of sizes and structures. Finally, the numerical algorithm can also be adapted to evaluate the surface integral of either a vector field or a scalar field defined on the molecular surface for additional solvation energetics and force calculations.
Diedrich, Karl T; Roberts, John A; Schmidt, Richard H; Parker, Dennis L
2012-12-01
Attributes like length, diameter, and tortuosity of tubular anatomical structures such as blood vessels in medical images can be measured from centerlines. This study develops methods for comparing the accuracy and stability of centerline algorithms. Sample data included numeric phantoms simulating arteries and clinical human brain artery images. Centerlines were calculated from segmented phantoms and arteries with shortest-path centerline algorithms developed with different cost functions. The cost functions were the inverse modified distance from edge (MDFE(i)), the center of mass (COM), the binary-thinned (BT)-MDFE(i), and the BT-COM. The accuracy of the centerline algorithms was measured by the root mean square error from known centerlines of phantoms. The stability of the centerlines was measured by starting the centerline tree from different points and measuring the differences between trees. The accuracy and stability of the centerlines were visualized by overlaying centerlines on vasculature images. The BT-COM cost function centerline was the most stable in numeric phantoms and human brain arteries. The MDFE(i)-based centerline was most accurate in the numeric phantoms. The COM-based centerline correctly handled the "kissing" artery in 16 of 16 arteries in eight subjects, whereas the BT-COM was correct in 10 of 16 and MDFE(i) was correct in 6 of 16. The COM-based centerline algorithm was selected for future use based on its ability to handle arteries where the initial binary vessel segmentation exhibits closed loops. The selected COM centerline was found to measure numerical phantoms to within 2% of the known length. Copyright © 2012 Wiley Periodicals, Inc.
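The shortest-path core of such centerline extraction is easy to prototype; the following Python sketch uses an inverse distance-from-edge edge cost on a toy 2D segmentation (the segmentation, connectivity, and cost weighting are illustrative assumptions, not the authors' exact MDFE or COM definitions):

    import numpy as np
    from scipy import ndimage
    from scipy.sparse import coo_matrix
    from scipy.sparse.csgraph import dijkstra

    seg = np.zeros((20, 60), dtype=bool)       # toy "vessel": a thick bar
    seg[8:13, 5:55] = True

    dfe = ndimage.distance_transform_edt(seg)  # distance from the vessel edge
    cost = 1.0 / (dfe + 1e-6)                  # cheap mid-lumen, expensive near walls

    idx = -np.ones(seg.shape, dtype=int)       # voxel -> graph node numbering
    idx[seg] = np.arange(seg.sum())
    rows, cols, w = [], [], []
    for r, c in np.argwhere(seg):              # 8-connected weighted graph
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr, dc) != (0, 0) and 0 <= rr < 20 and 0 <= cc < 60 and seg[rr, cc]:
                    rows.append(idx[r, c]); cols.append(idx[rr, cc])
                    w.append(0.5 * (cost[r, c] + cost[rr, cc]) * np.hypot(dr, dc))
    G = coo_matrix((w, (rows, cols)), shape=(seg.sum(), seg.sum()))

    src, dst = idx[10, 5], idx[10, 54]
    _, pred = dijkstra(G, indices=src, return_predecessors=True)
    path = [dst]                               # walk predecessors back to the source
    while path[-1] != src:
        path.append(pred[path[-1]])
    centerline = np.argwhere(seg)[path[::-1]]  # voxel coordinates along the path

Because the cost grows sharply near the vessel wall, the cheapest path hugs the middle of the lumen, which is exactly the behavior the different cost functions above are tuning.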
NASA Technical Reports Server (NTRS)
Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)
1998-01-01
Recent advances in computational power enable computational fluid dynamic modeling of increasingly complex configurations. A review of grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 is presented. In strategizing topological constructs and blocking structures, the factors considered are the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of the grid is demonstrated through a comparison of computational and experimental results of the aeroheating environment experienced by the X-38 vehicle. Special grid generation strategies to model control surface deflections and material mapping are also addressed.
Structural reliability methods: Code development status
NASA Astrophysics Data System (ADS)
Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.
1991-05-01
The Probabilistic Structures Analysis Method (PSAM) program integrates state-of-the-art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module, NESSUS/FEM, is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module, NESSUS/FPI, estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.
Ethics and Epistemology in Big Data Research.
Lipworth, Wendy; Mason, Paul H; Kerridge, Ian; Ioannidis, John P A
2017-12-01
Biomedical innovation and translation are increasingly emphasizing research using "big data." The hope is that big data methods will both speed up research and make its results more applicable to "real-world" patients and health services. While big data research has been embraced by scientists, politicians, industry, and the public, numerous ethical, organizational, and technical/methodological concerns have also been raised. With respect to technical and methodological concerns, there is a view that these will be resolved through sophisticated information technologies, predictive algorithms, and data analysis techniques. While such advances will likely go some way towards resolving technical and methodological issues, we believe that the epistemological issues raised by big data research have important ethical implications and raise questions about the very possibility of big data research achieving its goals.
Fuzzy multi objective transportation problem – evolutionary algorithm approach
NASA Astrophysics Data System (ADS)
Karthy, T.; Ganesan, K.
2018-04-01
This paper deals with the fuzzy multi-objective transportation problem. A fuzzy optimal compromise solution is obtained by using a fuzzy genetic algorithm. A numerical example is provided to illustrate the methodology.
NASA Astrophysics Data System (ADS)
Liu, Y.; Wu, W.; Zhang, Y.; Kucera, P. A.; Liu, Y.; Pan, L.
2012-12-01
Weather forecasting in the Middle East is challenging because of its complicated geography, including extensive coastal areas and heterogeneous land, and a sparse regional observational network. Strong air-land-sea interactions form multi-scale weather regimes in the area, which require a numerical weather prediction model capable of properly representing multi-scale atmospheric flow with appropriate initial conditions. The WRF-based Real-Time Four Dimensional Data Assimilation (RTFDDA) system is one of the advanced multi-scale weather analysis and forecasting facilities developed at the Research Applications Laboratory (RAL) of NCAR. The forecasting system is applied to the Middle East with careful configuration. To overcome the limitation of the very sparsely available conventional observations in the region, we develop a hybrid data assimilation algorithm combining RTFDDA and WRF-3DVAR, which ingests remote sensing data from satellites and radar. This hybrid data assimilation blends Newtonian nudging FDDA and 3DVAR technology to effectively assimilate both conventional observations and remote sensing measurements and to provide improved initial conditions for the forecasting system. For brevity, the forecasting system is called RTF3H (RTFDDA-3DVAR Hybrid). In this presentation, we will discuss the hybrid data assimilation algorithm, its implementation, and its applications to high-impact weather events in the area. Sensitivity studies are conducted to understand the strengths and limitations of this hybrid data assimilation algorithm.
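At its core, the nudging half of such a hybrid scheme adds a relaxation term toward observations to every model tendency; a minimal Python sketch (the coefficient, weighting, and time step below are placeholders, not RTFDDA's actual configuration):

    import numpy as np

    def nudge_step(x, tendency, obs, weight, g=1.0e-4, dt=60.0):
        # x: gridded model state; tendency: dx/dt from model dynamics/physics
        # obs: observation analysis on the grid; weight in [0, 1] (0 where no data)
        # g: nudging (relaxation) coefficient [1/s]; dt: time step [s]
        return x + dt * (tendency + g * weight * (obs - x))

The 3DVAR component, by contrast, produces analysis increments at discrete times; blending continuous relaxation with those discrete increments is what the RTF3H system automates.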
High Impact Weather Forecasts and Warnings with the GOES-R Geostationary Lightning Mapper (GLM)
NASA Technical Reports Server (NTRS)
Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William; Mach, Douglas M.
2011-01-01
The Geostationary Operational Environmental Satellite (GOES-R) is the next series to follow the existing GOES system currently operating over the Western Hemisphere. A major advancement over the current GOES is a new capability for total lightning detection (cloud and cloud-to-ground flashes) from the Geostationary Lightning Mapper (GLM). The GLM will operate continuously day and night with a near-uniform spatial resolution of 8 km and a product refresh rate of less than 20 sec over the Americas and adjacent oceanic regions. This will aid in forecasting severe storms and tornado activity and in assessing convective weather impacts on aviation safety and efficiency. In parallel with the instrument development, a GOES-R Risk Reduction Science Team and the Algorithm Working Group Lightning Applications Team have begun to develop cal/val performance monitoring tools and new applications using the GLM alone, in conjunction with other instruments, and merged or blended integrated observing system products combining satellite, radar, in-situ and numerical models. Proxy total lightning data from the NASA Lightning Imaging Sensor (LIS) on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional ground-based lightning networks are being used to develop the pre-launch algorithms, test data sets, and applications, as well as to improve our knowledge of thunderstorm initiation and evolution. In this presentation we review the planned implementation of the instrument and the suite of operational algorithms.
Recent progress in 3-D imaging of sea freight containers
NASA Astrophysics Data System (ADS)
Fuchs, Theobald; Schön, Tobias; Dittmann, Jonas; Sukowski, Frank; Hanke, Randolf
2015-03-01
The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea freight container takes several hours, which is of course too slow to be applied to a large number of containers. However, the benefits of a 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without the legal complications, high time consumption, and risks to security personnel of a manual inspection. Recently, distinct progress has been made in the field of reconstruction from projections at only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms has the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is their high computational cost. But as computational power is getting steadily cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to the CT of extremely large objects, scanning a sea freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution and as a function of the number of projections.
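A minimal sketch of one common member of this class of iterative algorithms, SIRT, assuming a dense NumPy projection matrix A is available (the paper does not specify which iterative method it uses):

    import numpy as np

    def sirt(A, b, n_iter=200, lam=0.5):
        # SIRT update: x <- x + lam * C * A^T * (R * (b - A x)),
        # with R and C the inverse row and column sums of A
        x = np.zeros(A.shape[1])
        R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)
        C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)
        for _ in range(n_iter):
            x += lam * C * (A.T @ (R * (b - A @ x)))
            x = np.clip(x, 0.0, None)        # attenuation cannot be negative
        return x

Each iteration costs one forward and one back projection, which is where the high computational demand quoted above comes from; combining such updates with sparsity-promoting regularization (e.g., total variation) is what enables the roughly tenfold reduction in projection angles.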
NASA Astrophysics Data System (ADS)
Poulter, Benjamin; Goodall, Jonathan L.; Halpin, Patrick N.
2008-08-01
The vulnerability of coastal landscapes to sea level rise is compounded by the existence of extensive artificial drainage networks initially built to lower water tables for agriculture, forestry, and human settlements. These drainage networks are found in landscapes with little topographic relief where channel flow is characterized by bi-directional movement across multiple time-scales and related to precipitation, wind, and tidal patterns. The current configuration of many artificial drainage networks exacerbates impacts associated with sea level rise such as salt intrusion and increased flooding. This suggests that in the short term, drainage networks might be managed to mitigate sea level rise related impacts. The challenge, however, is that hydrologic processes in regions where channel flow direction is weakly related to slope and topography require extensive parameterization for numerical models, which is limited where network size is on the order of a hundred or more kilometers in total length. Here we present an application of graph theoretic algorithms to efficiently investigate network properties relevant to the management of a large artificial drainage system in coastal North Carolina, USA. We created a digital network model representing the observed network topology and four types of drainage features (canal, collector and field ditches, and streams). We applied betweenness-centrality concepts (using Dijkstra's shortest path algorithm) to determine major hydrologic flowpaths based on hydraulic resistance. Following this, we identified sub-networks that could be managed independently using a community structure and modularity approach. Lastly, a betweenness-centrality algorithm was applied to identify major shoreline entry points to the network that disproportionately control water movement in and out of the network. We demonstrate that graph theory can be applied to solving management and monitoring problems associated with sea level rise for poorly understood drainage networks in advance of numerical methods.
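The graph-theoretic machinery involved is available off the shelf; a toy Python sketch with networkx (node names and resistance values below are invented for illustration):

    import networkx as nx

    # toy drainage network: nodes are junctions, edge weights are hydraulic resistance
    G = nx.Graph()
    G.add_weighted_edges_from([
        ("outlet", "c1", 1.0), ("c1", "c2", 2.0), ("c1", "d1", 5.0),
        ("c2", "d2", 4.0), ("c2", "outlet2", 1.5), ("d1", "d2", 3.0),
    ])

    # shortest-path betweenness with resistance as path cost (Dijkstra internally)
    bc = nx.betweenness_centrality(G, weight="weight")
    print(sorted(bc.items(), key=lambda kv: -kv[1])[:3])  # candidate flowpath hubs

    # modularity-based community detection to find separately manageable sub-networks
    parts = nx.algorithms.community.greedy_modularity_communities(G, weight="weight")

For a network of a hundred or more kilometers of canals and ditches, these computations remain essentially instantaneous, which is the practical advantage over a fully parameterized hydrodynamic model.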
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
The Second SIAM Conference on Computational Science and Engineering was held in San Diego from February 10-12, 2003. Total conference attendance was 553, a 23% increase over the first conference. The focus of this conference was to draw attention to the tremendous range of major computational efforts on large problems in science and engineering, to promote the interdisciplinary culture required to meet these large-scale challenges, and to encourage the training of the next generation of computational scientists. Computational Science & Engineering (CS&E) is now widely accepted, along with theory and experiment, as a crucial third mode of scientific investigation and engineering design. Aerospace, automotive, biological, chemical, semiconductor, and other industrial sectors now rely on simulation for technical decision support. For federal agencies also, CS&E has become an essential support for decisions on resources, transportation, and defense. CS&E is, by nature, interdisciplinary. It grows out of physical applications and it depends on computer architecture, but at its heart are powerful numerical algorithms and sophisticated computer science techniques. From an applied mathematics perspective, much of CS&E has involved analysis, but the future surely includes optimization and design, especially in the presence of uncertainty. Another mathematical frontier is the assimilation of very large data sets through such techniques as adaptive multi-resolution, automated feature search, and low-dimensional parameterization. The themes of the 2003 conference included, but were not limited to: Advanced Discretization Methods; Computational Biology and Bioinformatics; Computational Chemistry and Chemical Engineering; Computational Earth and Atmospheric Sciences; Computational Electromagnetics; Computational Fluid Dynamics; Computational Medicine and Bioengineering; Computational Physics and Astrophysics; Computational Solid Mechanics and Materials; CS&E Education; Meshing and Adaptivity; Multiscale and Multiphysics Problems; Numerical Algorithms for CS&E; Discrete and Combinatorial Algorithms for CS&E; Inverse Problems; Optimal Design, Optimal Control, and Inverse Problems; Parallel and Distributed Computing; Problem-Solving Environments; Software and Middleware Systems; Uncertainty Estimation and Sensitivity Analysis; and Visualization and Computer Graphics.
Morphological evolution of protective works by Genetic Algorithms: An application to Mt Etna
NASA Astrophysics Data System (ADS)
Marocco, Davide; Spataro, William; D'Ambrosio, Donato; Filippone, Giuseppe; Rongo, Rocco; Iovine, Giulio; Neri, Marco
2013-04-01
The hazard induced by dangerous flow-type phenomena (e.g., lava flows, earth flows, debris flows, and debris avalanches) has increased in recent years due to continuous urbanization. In many cases, the numerical simulation of hypothetical events can help to forecast the flow path in advance and therefore give indications about the areas that can be considered for the construction of protective works, e.g., earth barriers or channels. In this way, urbanized areas, as well as cultural heritage sites or even important infrastructures, can be protected by diverting the flow towards regions of lower interest. Here, we have considered the numerical Cellular Automata model Sciara-fv2 for simulating lava flows at Mt Etna and Genetic Algorithms for optimizing the position, orientation and extension of an earth barrier built to protect the Rifugio Sapienza, a well-known touristic facility located near the summit of the volcano. The Rifugio Sapienza area was in fact affected by a lava flow in 2003, which destroyed a service center, a parking area and a cafeteria. In this study, a perimeter was devised around the Rifugio (i.e., a security perimeter), which delimits the area that has to be protected from the flow. Furthermore, another perimeter was devised (i.e., a work perimeter), specifying the area in which the earth barrier can be located. The barrier is specified by three parameters, namely the two geographic coordinates of the vertex and the height; in this preliminary analysis the barrier was modeled as a segment (in plan view) of constant height. Though preliminary, the study has produced extremely positive results. Among the different alternatives generated by the genetic algorithm, an interesting scenario consists of a 35-meter-high barrier that completely diverts the flow, preventing the lava from reaching the inhabited area. The considerable height of the barrier is due to the fact that the crater is located close to the area to be protected and, consequently, the lava flow rate is very high.
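A stripped-down genetic-algorithm loop over the three barrier parameters can be sketched as follows in Python; the cost function here is a dummy stand-in, since in the study each candidate barrier is scored by running a Sciara-fv2 lava-flow simulation:

    import numpy as np
    rng = np.random.default_rng(0)

    def cost(p):
        # Hypothetical placeholder: penalize barrier height plus a dummy
        # "lava reaches the protected area" term; the real objective comes
        # from simulating the flow with the candidate barrier in place.
        x, y, h = p
        return h + 100.0 * np.exp(-h / 10.0) + 0.01 * (x ** 2 + y ** 2)

    pop = rng.uniform([-500.0, -500.0, 5.0], [500.0, 500.0, 50.0], size=(40, 3))
    for gen in range(100):
        f = np.array([cost(p) for p in pop])
        parents = pop[np.argsort(f)[:20]]                   # truncation selection
        children = parents[rng.integers(0, 20, 20)].copy()
        children += rng.normal(0.0, [10.0, 10.0, 1.0], children.shape)  # mutation
        pop = np.vstack([parents, children])
    best = pop[np.argmin([cost(p) for p in pop])]           # (x, y, height)

Because every cost evaluation requires a full flow simulation, keeping the population small and evaluating candidates in parallel is what makes the approach practical.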
An algorithm for the automatic synchronization of Omega receivers
NASA Technical Reports Server (NTRS)
Stonestreet, W. M.; Marzetta, T. L.
1977-01-01
The Omega navigation system and the requirement for receiver synchronization are discussed. A description of the synchronization algorithm is provided. The numerical simulation and its associated assumptions are examined, and results of the simulation are presented. The suggested form of the synchronization algorithm and the suggested receiver design values are surveyed. A Fortran implementation of the synchronization algorithm used in the simulation is also included.
Stitzel, Joel D; Weaver, Ashley A; Talton, Jennifer W; Barnard, Ryan T; Schoell, Samantha L; Doud, Andrea N; Martin, R Shayn; Meredith, J Wayne
2016-06-01
Advanced Automatic Crash Notification algorithms use vehicle telemetry measurements to predict the risk of serious motor vehicle crash injury. The objective of the study was to develop an Advanced Automatic Crash Notification algorithm to reduce response time, increase triage efficiency, and improve patient outcomes by minimizing undertriage (<5%) and overtriage (<50%), as recommended by the American College of Surgeons. A list of injuries associated with a patient's need for Level I/II trauma center treatment, known as the Target Injury List, was determined using an approach based on 3 facets of injury: severity, time sensitivity, and predictability. Multivariable logistic regression was used to predict an occupant's risk of sustaining an injury on the Target Injury List based on crash severity and restraint factors for occupants in the National Automotive Sampling System - Crashworthiness Data System 2000-2011. The Advanced Automatic Crash Notification algorithm was optimized and evaluated to minimize triage rates, per American College of Surgeons recommendations. The following rates were achieved: <50% overtriage and <5% undertriage in side impacts, and 6% to 16% undertriage in other crash modes. Nationwide implementation of our algorithm is estimated to improve triage decisions for 44% of undertriaged and 38% of overtriaged occupants. Annually, this translates to more appropriate care for >2,700 seriously injured occupants and reduced unnecessary use of trauma center resources for >162,000 minimally injured occupants. The algorithm could be incorporated into vehicles to inform emergency personnel of recommended motor vehicle crash triage decisions. Lower under- and overtriage was achieved, and nationwide implementation of the algorithm would yield improved triage decision making for an estimated 165,000 occupants annually. Copyright © 2016. Published by Elsevier Inc.
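At prediction time the published approach reduces to evaluating a fitted logistic model on telemetry-derived predictors; schematically, with hypothetical coefficients and predictors rather than the values fitted to NASS-CDS data:

    import numpy as np

    def injury_risk(delta_v, belted, side_impact,
                    beta=(-6.0, 0.15, -0.8, 1.1)):   # hypothetical coefficients
        # probability of a Target Injury List injury from crash telemetry
        z = beta[0] + beta[1] * delta_v + beta[2] * belted + beta[3] * side_impact
        return 1.0 / (1.0 + np.exp(-z))

    # triage rule: recommend a Level I/II trauma center above a risk threshold
    print(injury_risk(45.0, belted=1, side_impact=0) > 0.20)

The under/overtriage trade-off quoted above is controlled by exactly this threshold, which is why the algorithm can be tuned against the American College of Surgeons targets.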
NASA Technical Reports Server (NTRS)
Baker, A. J.
1974-01-01
The finite-element method is used to establish a numerical solution algorithm for the Navier-Stokes equations for two-dimensional flows of a viscous compressible fluid. Numerical experiments confirm the advection property for the finite-element equivalent of the nonlinear convection term for both unidirectional and recirculating flowfields. For linear functionals, the algorithm demonstrates good accuracy using coarse discretizations and h squared convergence with discretization refinement.
1991-06-01
algorithms (for the analysis of mechanisms), traditional numerical simulation methods, and algorithms that examine the simulation results and reinterpret them in qualitative terms. Moreover, the Workbench can use symbolic procedures to help guide or simplify the task.
NASA Astrophysics Data System (ADS)
Alfonso, Lester; Zamora, Jose; Cruz, Pedro
2015-04-01
The stochastic approach to coagulation considers the coalescence process going on in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results were obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial conditions is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with an excellent correspondence between the analytical and numerical solutions. To speed up the algorithm, software parallelization techniques based on the OpenMP standard were used, along with an implementation that takes advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.
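For readers who want a feel for the underlying stochastic process, a Gillespie-type realization of constant-kernel coalescence takes only a few lines of Python; this simulates individual realizations rather than integrating the master equation directly, as the authors do:

    import numpy as np
    rng = np.random.default_rng(1)

    def ssa_coalescence(n0=100, K=1.0, t_end=0.05):
        # constant kernel K(i, j) = K, monodisperse (unit mass) initial condition
        masses = np.ones(n0)
        t = 0.0
        while len(masses) > 1:
            n = len(masses)
            total_rate = K * n * (n - 1) / 2.0       # all pairs equally likely
            t += rng.exponential(1.0 / total_rate)
            if t > t_end:
                break
            i, j = rng.choice(n, size=2, replace=False)
            masses[i] += masses[j]                   # coalesce the chosen pair
            masses = np.delete(masses, j)
        return masses

    # average the spectrum over an ensemble to compare with the analytical solution
    spectra = [ssa_coalescence() for _ in range(200)]

Averaging many such realizations recovers the same mean mass spectrum that the master equation describes, which makes this a convenient cross-check for a deterministic solver.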
Structure-preserving and rank-revealing QR-factorizations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C.H.; Hansen, P.C.
1991-11-01
The rank-revealing QR-factorization (RRQR-factorization) is a special QR-factorization that is guaranteed to reveal the numerical rank of the matrix under consideration. This makes the RRQR-factorization a useful tool in the numerical treatment of many rank-deficient problems in numerical linear algebra. In this paper, a framework is presented for the efficient implementation of RRQR algorithms, in particular, for sparse matrices. A sparse RRQR-algorithm should seek to preserve the structure and sparsity of the matrix as much as possible while retaining the ability to capture safely the numerical rank. To this end, the paper proposes to compute an initial QR-factorization using a restricted pivoting strategy guarded by incremental condition estimation (ICE), and then applies the algorithm suggested by Chan and Foster to this QR-factorization. The column exchange strategy used in the initial QR-factorization exploits the fact that certain column exchanges do not change the sparsity structure, and computes a sparse QR-factorization that is a good approximation of the sought-after RRQR-factorization. Due to quantities produced by ICE, the Chan/Foster RRQR algorithm can be implemented very cheaply, thus verifying that the sought-after RRQR-factorization has indeed been computed. Experimental results on a model problem show that the initial QR-factorization is indeed very likely to produce an RRQR-factorization.
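For dense matrices, the classical column-pivoted QR underlying such RRQR strategies is directly available in SciPy; a small sketch of numerical-rank detection (plain Businger-Golub pivoting, not the ICE-guarded sparsity-preserving strategy of the paper; the test matrix and tolerance are arbitrary):

    import numpy as np
    from scipy.linalg import qr

    A = np.random.default_rng(2).normal(size=(50, 10))
    A[:, 3] = A[:, 0] + 1e-10 * A[:, 1]           # make one column nearly dependent

    Q, R, piv = qr(A, pivoting=True)              # column-pivoted QR
    tol = abs(R[0, 0]) * 1e-8
    rank = int((np.abs(np.diag(R)) > tol).sum())  # numerical rank from |R_ii| decay
    print(rank, piv[:rank])                       # well-conditioned leading columns

Unrestricted column pivoting destroys sparsity, which is precisely why the paper restricts the pivoting and falls back on the Chan/Foster step for verification.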
Uzuner, Özlem; Szolovits, Peter
2017-01-01
Research on extracting biomedical relations has received growing attention recently, with numerous biological and clinical applications including those in pharmacogenomics, clinical trial screening and adverse drug reaction detection. The ability to accurately capture both semantic and syntactic structures in text expressing these relations becomes increasingly critical to enable deep understanding of scientific papers and clinical narratives. Shared task challenges have been organized by both bioinformatics and clinical informatics communities to assess and advance the state-of-the-art research. Significant progress has been made in algorithm development and resource construction. In particular, graph-based approaches bridge semantics and syntax, often achieving the best performance in shared tasks. However, a number of problems at the frontiers of biomedical relation extraction continue to pose interesting challenges and present opportunities for great improvement and fruitful research. In this article, we place biomedical relation extraction against the backdrop of its versatile applications, present a gentle introduction to its general pipeline and shared resources, review the current state-of-the-art in methodology advancement, discuss limitations and point out several promising future directions. PMID:26851224
Time-domain hybrid method for simulating large amplitude motions of ships advancing in waves
NASA Astrophysics Data System (ADS)
Liu, Shukui; Papanikolaou, Apostolos D.
2011-03-01
Typical results obtained by a newly developed, nonlinear time-domain hybrid method for simulating large amplitude motions of ships advancing with constant forward speed in waves are presented. The method is hybrid in that it combines a time-domain transient Green function method and a Rankine source method. The present approach employs a simple double integration algorithm with respect to time to simulate the free-surface boundary condition. During the simulation, the diffraction and radiation forces are computed by pressure integration over the mean wetted surface, whereas the incident wave and hydrostatic restoring forces/moments are calculated on the instantaneously wetted surface of the hull. Typical numerical results of applying the method to the seakeeping performance of a standard containership, namely the ITTC S175, are presented. Comparisons have been made between the results from the present method, the frequency-domain 3D panel method (NEWDRIFT) of NTUA-SDL, and available experimental data, and good agreement has been observed for all studied cases.
New computing systems and their impact on structural analysis and design
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1989-01-01
A review is given of the recent advances in computer technology that are likely to impact structural analysis and design. The computational needs for future structures technology are described. The characteristics of new and projected computing systems are summarized. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed, and a novel partitioning strategy is outlined for maximizing the degree of parallelism. The strategy is designed for computers with a shared memory and a small number of powerful processors (or a small number of clusters of medium-range processors). It is based on approximating the response of the structure by a combination of symmetric and antisymmetric response vectors, each obtained using a fraction of the degrees of freedom of the original finite element model. The strategy was implemented on the CRAY X-MP/4 and the Alliant FX/8 computers. For nonlinear dynamic problems on the CRAY X-MP with four CPUs, it resulted in an order of magnitude reduction in total analysis time, compared with the direct analysis on a single-CPU CRAY X-MP machine.
Collignon, Bertrand; Séguret, Axel; Halloy, José
2016-01-01
Collective motion is one of the most ubiquitous behaviours displayed by social organisms and has led to the development of numerous models. Recent advances in the understanding of sensory systems and information processing by animals impel one to revise classical assumptions made in decisional algorithms. In this context, we present a model describing the three-dimensional visual sensory system of fish that adjust their trajectory according to their perception field. Furthermore, we introduce a stochastic process based on a probability distribution function to move in targeted directions rather than on a summation of influential vectors as is classically assumed by most models. In parallel, we present experimental results of zebrafish (alone or in groups of 10) swimming in both homogeneous and heterogeneous environments. We use these experimental data to set the parameter values of our model and show that this perception-based approach can simulate the collective motion of species showing cohesive behaviour in heterogeneous environments. Finally, we discuss the advances of this multilayer model and its possible outcomes in biological, physical and robotic sciences. PMID:26909173
NASA Astrophysics Data System (ADS)
Ming, Mei-Jun; Xu, Long-Kun; Wang, Fan; Bi, Ting-Jun; Li, Xiang-Yuan
2017-07-01
In this work, a matrix form of the numerical algorithm for spectral shift is presented, based on the novel nonequilibrium solvation model that is established by introducing the constrained equilibrium manipulation. This form is convenient for the development of codes for numerical solution. By means of the integral equation formulation polarizable continuum model (IEF-PCM), a subroutine has been implemented to compute spectral shift numerically. Here, the spectral shifts of absorption spectra for several popular chromophores, N,N-diethyl-p-nitroaniline (DEPNA), methylenecyclopropene (MCP), acrolein (ACL) and p-nitroaniline (PNA), were investigated in different solvents with various polarities. The computed spectral shifts can explain the available experimental findings reasonably well. The contributions of solute geometry distortion, electrostatic polarization and other non-electrostatic interactions to the spectral shift are discussed.
Numerical simulation of three-dimensional transonic turbulent projectile aerodynamics by TVD schemes
NASA Technical Reports Server (NTRS)
Shiau, Nae-Haur; Hsu, Chen-Chi; Chyu, Wei-Jao
1989-01-01
The two-dimensional symmetric TVD scheme proposed by Yee has been extended to and investigated for three-dimensional thin-layer Navier-Stokes simulation of complex aerodynamic problems. An existing three-dimensional Navier-Stokes code based on the Beam-Warming algorithm is modified to provide an option of using the TVD algorithm, and the flow problem considered is a transonic turbulent flow past a projectile with a sting at a ten-degree angle of attack. Numerical experiments conducted for three flow cases (free-stream Mach numbers of 0.91, 0.96 and 1.20) show that the symmetric TVD algorithm can provide surface pressure distributions in excellent agreement with measured data; moreover, the rate of convergence to a steady state solution is about two times faster than with the original Beam-Warming algorithm.
Nash equilibrium and multi criterion aerodynamic optimization
NASA Astrophysics Data System (ADS)
Tang, Zhili; Zhang, Lianhe
2016-06-01
Game theory, and in particular the Nash Equilibrium (NE), has gained importance in solving Multi Criterion Optimization (MCO) problems in engineering over the past decade. The solution of an MCO problem can be viewed as a NE under the concept of competitive games. This paper surveys and proposes four efficient algorithms for calculating a NE of an MCO problem. Existence and equivalence of the solution are analyzed and proved in the paper based on a fixed point theorem. A specific virtual symmetric Nash game is also presented to set up an optimization strategy for single objective optimization problems. Two numerical examples are presented to verify the proposed algorithms. One is the optimization of mathematical functions, illustrating the detailed numerical procedures of the algorithms; the other is aerodynamic drag reduction of a civil transport wing-fuselage configuration using the virtual game. The successful application validates the efficiency of the algorithms in solving complex aerodynamic optimization problems.
Multi-objective optimal design of sandwich panels using a genetic algorithm
NASA Astrophysics Data System (ADS)
Xu, Xiaomei; Jiang, Yiping; Pueh Lee, Heow
2017-10-01
In this study, an optimization problem concerning sandwich panels is investigated by simultaneously considering the two objectives of minimizing the panel mass and maximizing the sound insulation performance. First of all, the acoustic model of sandwich panels is discussed, which provides a foundation to model the acoustic objective function. Then the optimization problem is formulated as a bi-objective programming model, and a solution algorithm based on the non-dominated sorting genetic algorithm II (NSGA-II) is provided to solve the proposed model. Finally, taking an example of a sandwich panel that is expected to be used as an automotive roof panel, numerical experiments are carried out to verify the effectiveness of the proposed model and solution algorithm. Numerical results demonstrate in detail how the core material, geometric constraints and mechanical constraints impact the optimal designs of sandwich panels.
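At the heart of NSGA-II is non-dominated sorting of candidate designs; a minimal Python sketch of extracting the first Pareto front (the four sample designs, with objectives panel mass and negated sound insulation, are invented for illustration):

    import numpy as np

    def first_front(F):
        # indices of designs not dominated by any other (all columns minimized)
        n = F.shape[0]
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
            keep[i] = not dominates_i.any()
        return np.where(keep)[0]

    # columns: panel mass [kg], negated sound transmission loss [-dB]
    F = np.array([[2.0, -30.0], [3.0, -38.0], [2.5, -29.0], [4.0, -40.0]])
    print(first_front(F))   # [0 1 3]: design 2 is dominated by design 0

NSGA-II repeatedly peels off such fronts and adds a crowding-distance criterion so that the retained designs stay spread along the mass/insulation trade-off curve.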
NASA Astrophysics Data System (ADS)
Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.
2017-12-01
This paper describes the details of the parallel implementation of the SIMPLE algorithm for numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. The iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of the grid model decomposition using fictitious cells. We discuss the specific features for the storage of distributed matrices and implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of the multigrid SLAE solver tuning on the general efficiency of the algorithm; the tuning involves the types of the cycles used (V, W, and F), the number of iterations of a smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of efficiency evaluation for parallelization of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by two algorithms. It is shown that the proposed parallel implementation enables efficient computations for the problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walstrom, Peter Lowell
A numerical algorithm for computing the field components B r and B z and their r and z derivatives with open boundaries in cylindrical coordinates for radially thin solenoids with uniform current density is described in this note. An algorithm for computing the vector potential A θ is also described. For the convenience of the reader, derivations of the final expressions from their defining integrals are given in detail, since their derivations are not all easily found in textbooks. Numerical calculations are based on evaluation of complete elliptic integrals using the Bulirsch algorithm cel. The (apparently) new feature of the algorithms described in this note applies to cases where the field point is outside of the bore of the solenoid and the field-point radius approaches the solenoid radius. Since the elliptic integrals of the third kind normally used in computing B z and A θ become infinite in this region of parameter space, fields for points with the axial coordinate z outside of the ends of the solenoid and near the solenoid radius are treated by use of elliptic integrals of the third kind of modified argument, derived by use of an addition theorem. The algorithms also avoid the numerical difficulties the textbook solutions have for points near the axis, arising from explicit factors of 1/r or 1/r 2 in some of the expressions.
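The on-axis special case needs no elliptic integrals at all and makes a handy sanity check for such field routines; a small Python sketch (geometry and current values are arbitrary illustration values):

    import numpy as np

    MU0 = 4e-7 * np.pi

    def bz_on_axis(z, a, L, n_per_m, current):
        # axial field of a radially thin solenoid of radius a and length L
        # centered at z = 0 (closed form; off-axis points need the complete
        # elliptic integrals discussed above)
        zp, zm = z + L / 2.0, z - L / 2.0
        return 0.5 * MU0 * n_per_m * current * (
            zp / np.hypot(zp, a) - zm / np.hypot(zm, a))

    print(bz_on_axis(0.0, a=0.05, L=0.4, n_per_m=1000, current=10.0))
    # approaches the long-solenoid limit mu0 * n * I = 1.2566e-2 T as L/a grows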
Operational algorithm development and refinement approaches
NASA Astrophysics Data System (ADS)
Ardanuy, Philip E.
2003-11-01
Next-generation polar and geostationary systems, such as the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the Geostationary Operational Environmental Satellite (GOES)-R, will deploy new generations of electro-optical reflective and emissive capabilities. These will include low-radiometric-noise, improved spatial resolution multi-spectral and hyperspectral imagers and sounders. To achieve specified performances (e.g., measurement accuracy, precision, uncertainty, and stability), and to best utilize the advanced space-borne sensing capabilities, a new generation of retrieval algorithms will be implemented. In most cases, these advanced algorithms benefit from ongoing testing and validation using heritage research mission algorithms and data [e.g., the Earth Observing System (EOS) Moderate-resolution Imaging Spectroradiometer (MODIS) and the Shuttle Ozone Limb Scattering Experiment (SOLSE)/Limb Ozone Retrieval Experiment (LORE)]. In these instances, an algorithm's theoretical basis is not static, but rather improves with time. Once frozen, an operational algorithm can "lose ground" relative to research analogs. Cost/benefit analyses provide a basis for change management. The challenge is in reconciling and balancing the stability, and "comfort," that today's generation of operational platforms provide (well-characterized, known sensors and algorithms) with the greatly improved quality, opportunities, and risks that the next generation of operational sensors and algorithms offer. By using the best practices and lessons learned from heritage/groundbreaking activities, it is possible to implement an agile process that enables change, while managing change. This approach combines a "known-risk" frozen baseline and preset completion schedules with insertion opportunities for algorithm advances as ongoing validation activities identify and repair areas of weak performance. This paper describes an objective, adaptive implementation roadmap that takes into account the specific maturities of each system's (sensor and algorithm) technology to provide for a program that contains continuous improvement while retaining its manageability.
Adaptive Wavelet Modeling of Geophysical Data
NASA Astrophysics Data System (ADS)
Plattner, A.; Maurer, H.; Dahmen, W.; Vorloeper, J.
2009-12-01
Despite the ever-increasing power of modern computers, realistic modeling of complex three-dimensional Earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modeling approaches includes either finite difference or non-adaptive finite element algorithms, and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behavior of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the modeled data discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet based approach that is applicable to a large scope of problems, also including nonlinear problems. To the best of our knowledge such algorithms have not yet been applied in geophysics. Adaptive wavelet algorithms offer several attractive features: (i) for a given subsurface model, they allow the forward modeling domain to be discretized with a quasi minimal number of degrees of freedom, (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient, and (iii) the modeling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving three-dimensional geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. Results were compared with a non-adaptive finite element algorithm, which incorporates an unstructured mesh to best fit subsurface boundaries. Such algorithms represent the current state-of-the-art in geoelectrical modeling. An analysis of the numerical accuracy as a function of the number of degrees of freedom revealed that the adaptive wavelet algorithm outperforms the finite element solver for simple and moderately complex models, whereas the results become comparable for models with spatially highly variable electrical conductivities. The linear dependency of the modeling error and the computing time proved to be model-independent. This feature will allow very efficient computations using large-scale models as soon as our experimental code is optimized in terms of its implementation.
Numerical and experimental investigation of the 3D free surface flow in a model Pelton turbine
NASA Astrophysics Data System (ADS)
Fiereder, R.; Riemann, S.; Schilling, R.
2010-08-01
This investigation focuses on the numerical and experimental analysis of the 3D free surface flow in a Pelton turbine. In particular, two typical flow conditions occurring in a full scale Pelton turbine - a configuration with a straight inlet as well as a configuration with a 90 degree elbow upstream of the nozzle - are considered. Thereby, the effect of secondary flow due to the 90 degree bending of the upstream pipe on the characteristics of the jet is explored. The hybrid flow field consists of pure liquid flow within the conduit and free surface two component flow of the liquid jet emerging out of the nozzle into air. The numerical results are validated against experimental investigations performed in the laboratory of the Institute of Fluid Mechanics (FLM). For the numerical simulation of the flow the in-house unstructured fully parallelized finite volume solver solver3D is utilized. An advanced interface capturing model based on the classic Volume of Fluid method is applied. In order to ensure sharp interface resolution an additional convection term is added to the transport equation of the volume fraction. A collocated variable arrangement is used and the set of non-linear equations, containing fluid conservation equations and model equations for turbulence and volume fraction, are solved in a segregated manner. For pressure-velocity coupling the SIMPLE and PISO algorithms are implemented. Detailed analysis of the observed flow patterns in the jet and of the jet geometry are presented.
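For reference, a widely used form of the sharpened volume-fraction transport equation is the interface-compression formulation (the paper does not spell out its exact added term, so this is the common textbook variant):

    \frac{\partial \alpha}{\partial t}
      + \nabla \cdot (\alpha \, \mathbf{u})
      + \nabla \cdot \bigl( \alpha (1 - \alpha) \, \mathbf{u}_r \bigr) = 0

where \alpha is the volume fraction, \mathbf{u} the fluid velocity, and \mathbf{u}_r an artificial compression velocity; the factor \alpha(1 - \alpha) vanishes away from the interface, so the extra convection acts only where the interface is being resolved.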
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dr. Tarasankar DebRoy
In recent years, applications of numerical heat transfer and fluid flow models of fusion welding have resulted in improved understanding of both the welding processes and welded materials. They have been used to accurately calculate thermal cycles and fusion zone geometry in many cases. Here we report the following three major advancements from this project. First, we show how microstructures, grain size distribution and topology of welds of several important engineering alloys can be computed, starting from better understanding of the fusion welding process through numerical heat transfer and fluid flow calculations. Second, we provide conclusive proof that the reliability of numerical heat transfer and fluid flow calculations can be significantly improved by optimizing several uncertain model parameters. Third, we demonstrate how the numerical heat transfer and fluid flow models can be combined with a suitable global optimization program, such as a genetic algorithm, for the tailoring of weld attributes such as attaining a specified weld geometry or a weld thermal cycle. The results of the project have been published in many papers, and a listing of these is included together with a list of the graduate theses that resulted from this project. The work supported by the DOE award has resulted in several important national and international awards. A listing of these awards and the status of the graduate students are also presented in this report.
MCore: A High-Order Finite-Volume Dynamical Core for Atmospheric General Circulation Models
NASA Astrophysics Data System (ADS)
Ullrich, P.; Jablonowski, C.
2011-12-01
The desire for increasingly accurate predictions of the atmosphere has driven numerical models to smaller and smaller resolutions, while exponentially driving up the cost of existing numerical models. Even with the modern rapid advancement of computational performance, it is estimated that it will take more than twenty years before existing models approach the scales needed to resolve atmospheric convection. However, smarter numerical methods may allow us to glimpse the types of results we would expect from these fine-scale simulations while only requiring a fraction of the computational cost. The next generation of atmospheric models will likely need to rely on both high-order accuracy and adaptive mesh refinement in order to properly capture features of interest. We present our ongoing research on developing a set of "smart" numerical methods for simulating the global non-hydrostatic fluid equations which govern atmospheric motions. We have harnessed a high-order finite-volume based approach in developing an atmospheric dynamical core on the cubed-sphere. This type of method is desirable for applications involving adaptive grids, since it has been shown that spuriously reflected wave modes are intrinsically damped out under this approach. The model further makes use of an implicit-explicit Runge-Kutta-Rosenbrock (IMEX-RKR) time integrator for accurate and efficient coupling of the horizontal and vertical model components. We survey the algorithmic development of the model and present results from idealized dynamical core test cases, as well as give a glimpse at future work with our model.
Strong convergence of an extragradient-type algorithm for the multiple-sets split equality problem.
Zhao, Ying; Shi, Luoyi
2017-01-01
This paper introduces a new extragradient-type method to solve the multiple-sets split equality problem (MSSEP). Under some suitable conditions, the strong convergence of the algorithm is established in infinite-dimensional Hilbert spaces. Moreover, several numerical results are given to show the effectiveness of our algorithm.
Advancing solar energy forecasting through the underlying physics
NASA Astrophysics Data System (ADS)
Yang, H.; Ghonima, M. S.; Zhong, X.; Ozge, B.; Kurtz, B.; Wu, E.; Mejia, F. A.; Zamora, M.; Wang, G.; Clemesha, R.; Norris, J. R.; Heus, T.; Kleissl, J. P.
2017-12-01
As solar power comprises an increasingly large portion of the energy generation mix, the ability to accurately forecast solar photovoltaic generation becomes increasingly important. Due to the variability of solar power caused by cloud cover, knowledge of both the magnitude and timing of expected solar power production ahead of time facilitates the integration of solar power onto the electric grid by reducing electricity generation from traditional ancillary generators such as gas and oil power plants, as well as decreasing the ramping of all generators, reducing start and shutdown costs, and minimizing solar power curtailment, thereby providing annual economic value. The time scales involved in both the energy markets and solar variability range from intra-hour to several days ahead. This wide range of time horizons led to the development of a multitude of techniques, with each offering unique advantages in specific applications. For example, sky imagery provides site-specific forecasts on the minute-scale. Statistical techniques including machine learning algorithms are commonly used in the intra-day forecast horizon for regional applications, while numerical weather prediction models can provide mesoscale forecasts on both the intra-day and days-ahead time scale. This talk will provide an overview of the challenges unique to each technique and highlight the advances in their ongoing development which come alongside advances in the fundamental physics underneath.
Park, Chunjae; Kwon, Ohin; Woo, Eung Je; Seo, Jin Keun
2004-03-01
In magnetic resonance electrical impedance tomography (MREIT), we try to visualize cross-sectional conductivity (or resistivity) images of a subject. We inject electrical currents into the subject through surface electrodes and measure the z component Bz of the induced internal magnetic flux density using an MRI scanner. Here, z is the direction of the main magnetic field of the MRI scanner. We formulate the conductivity image reconstruction problem in MREIT from a careful analysis of the relationship between the injection current and the induced magnetic flux density Bz. Based on the novel mathematical formulation, we propose the gradient Bz decomposition algorithm to reconstruct conductivity images. This new algorithm needs to differentiate Bz only once, in contrast to the previously developed harmonic Bz algorithm where the numerical computation of ∇²Bz is required. The new algorithm, therefore, has the important advantage of much improved noise tolerance. Numerical simulations with added random noise of realistic amounts show the feasibility of the algorithm in practical applications and also its robustness against measurement noise.
ERIC Educational Resources Information Center
Laszewski, Audrey; Wichman, Christina L.; Doering, Jennifer J.; Maletta, Kristyn; Hammel, Jennifer
2016-01-01
Early childhood professionals do many things to support young families. This is true now more than ever, as researchers continue to discover the long-term benefits of early, healthy, nurturing relationships. This article provides an overview of the development of an advanced practice perinatal depression algorithm created as a step-by-step guide…
Advanced processing for high-bandwidth sensor systems
NASA Astrophysics Data System (ADS)
Szymanski, John J.; Blain, Phil C.; Bloch, Jeffrey J.; Brislawn, Christopher M.; Brumby, Steven P.; Cafferty, Maureen M.; Dunham, Mark E.; Frigo, Janette R.; Gokhale, Maya; Harvey, Neal R.; Kenyon, Garrett; Kim, Won-Ha; Layne, J.; Lavenier, Dominique D.; McCabe, Kevin P.; Mitchell, Melanie; Moore, Kurt R.; Perkins, Simon J.; Porter, Reid B.; Robinson, S.; Salazar, Alfonso; Theiler, James P.; Young, Aaron C.
2000-11-01
Compute performance and algorithm design are key problems of image processing and scientific computing in general. For example, imaging spectrometers are capable of producing data in hundreds of spectral bands with millions of pixels. These data sets show great promise for remote sensing applications, but require new and computationally intensive processing. The goal of the Deployable Adaptive Processing Systems (DAPS) project at Los Alamos National Laboratory is to develop advanced processing hardware and algorithms for high-bandwidth sensor applications. The project has produced electronics for processing multi- and hyper-spectral sensor data, as well as LIDAR data, while employing processing elements using a variety of technologies. The project team is currently working on reconfigurable computing technology and advanced feature extraction techniques, with an emphasis on their application to image and RF signal processing. This paper presents reconfigurable computing technology and advanced feature extraction algorithm work and their application to multi- and hyperspectral image processing. Related projects on genetic algorithms as applied to image processing will be introduced, as will the collaboration between the DAPS project and the DARPA Adaptive Computing Systems program. Further details are presented in other talks during this conference and in other conferences taking place during this symposium.
NASA Technical Reports Server (NTRS)
Powell, Richard W.
1998-01-01
This paper describes the development and evaluation of a numerical roll reversal predictor-corrector guidance algorithm for the atmospheric flight portion of the Mars Surveyor Program 2001 Orbiter and Lander missions. The Lander mission utilizes direct entry and has a demanding requirement to deploy its parachute within 10 km of the target deployment point. The Orbiter mission utilizes aerocapture to achieve a precise captured orbit with a single atmospheric pass. Detailed descriptions of these predictor-corrector algorithms are given. Also, results of three and six degree-of-freedom Monte Carlo simulations which include navigation, aerodynamics, mass properties and atmospheric density uncertainties are presented.
Parallel language constructs for tensor product computations on loosely coupled architectures
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Vanrosendale, John
1989-01-01
Distributed memory architectures offer high levels of performance and flexibility, but have proven awkward to program. Current languages for nonshared memory architectures provide a relatively low-level programming environment and are poorly suited to modular programming and to the construction of libraries. A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The focus is on tensor product array computations, a simple but important class of numerical algorithms. The problem of programming 1-D kernel routines, such as parallel tridiagonal solvers, is addressed first; it is then examined how such parallel kernels can be combined to form parallel tensor product algorithms.
Parallel processing in finite element structural analysis
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1987-01-01
A brief review is made of the fundamental concepts and basic issues of parallel processing. Discussion focuses on parallel numerical algorithms, performance evaluation of machines and algorithms, and parallelism in finite element computations. A computational strategy is proposed for maximizing the degree of parallelism at different levels of the finite element analysis process including: 1) formulation level (through the use of mixed finite element models); 2) analysis level (through additive decomposition of the different arrays in the governing equations into the contributions to a symmetrized response plus correction terms); 3) numerical algorithm level (through the use of operator splitting techniques and application of iterative processes); and 4) implementation level (through the effective combination of vectorization, multitasking and microtasking, whenever available).
Model of a Frame of Dynamic Routing and Its Equilibrium
NASA Astrophysics Data System (ADS)
Zhang, Shu; Yuan, Yuan; Xu, Jian
Dynamic routing algorithms based on the shortest-path principle are criticized for the oscillations such routing schemes induce. In the present work, we propose a model of the TCP/RED algorithm under a new framework of dynamic routing based on measuring the occupation ratio of the router buffer for different links; the scheme requires only the queue size at the router buffer to stabilize the system. We classify several types of equilibrium and employ numerical methods to study the stability of the steady state. Our numerical results show that careful selection of the parameters characterizing the dynamic routing algorithm can stabilize the system in some cases.
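A minimal sketch of the buffer-occupancy idea, under illustrative assumptions (a toy two-link network, unit packet arrivals, random service): each step forwards traffic along the link whose buffer is least occupied, rather than a fixed shortest path.

```python
# Toy illustration of routing by buffer occupation ratio
# (hypothetical interfaces; not the paper's TCP/RED model).
import random

class Link:
    def __init__(self, capacity):
        self.capacity = capacity   # buffer size in packets
        self.queue = 0             # current queue length

    def occupation_ratio(self):
        return self.queue / self.capacity

def choose_link(links):
    """Dynamic routing rule: forward along the link whose router
    buffer has the lowest occupation ratio."""
    return min(links, key=lambda l: l.occupation_ratio())

# Two parallel links; occupancy-based routing balances the load.
a, b = Link(100), Link(100)
for _ in range(1000):
    choose_link([a, b]).queue += 1
    for l in (a, b):                       # both links drain packets
        l.queue = max(0, l.queue - random.randint(0, 1))
print(a.queue, b.queue)  # queues stay close: no shortest-path oscillation
```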
NASA Astrophysics Data System (ADS)
Doha, E. H.; Abd-Elhameed, W. M.; Youssri, Y. H.
2013-10-01
In this paper, we present a new second-kind Chebyshev (S2KC) operational matrix of derivatives. With the aid of S2KC, an algorithm is described to obtain numerical solutions of a class of linear and nonlinear Lane-Emden type singular initial value problems (IVPs). The idea of obtaining such solutions is essentially based on reducing the differential equation with its initial conditions to a system of algebraic equations. Two illustrative examples concerning relevant physical problems (the Lane-Emden equations of the first and second kind) are discussed to demonstrate the validity and applicability of the suggested algorithm. The numerical results obtained compare favorably with the known analytical solutions.
QMR: A Quasi-Minimal Residual method for non-Hermitian linear systems
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Nachtigal, Noel M.
1990-01-01
The biconjugate gradient (BCG) method is the natural generalization of the classical conjugate gradient algorithm for Hermitian positive definite matrices to general non-Hermitian linear systems. Unfortunately, the original BCG algorithm is susceptible to possible breakdowns and numerical instabilities. A novel BCG-like approach is presented, called the quasi-minimal residual (QMR) method, which overcomes the problems of BCG. An implementation of QMR based on a look-ahead version of the nonsymmetric Lanczos algorithm is proposed. It is shown how BCG iterates can be recovered stably from the QMR process. Some further properties of the QMR approach are given and an error bound is presented. Finally, numerical experiments are reported.
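For readers who want to experiment, a QMR implementation is available in SciPy as scipy.sparse.linalg.qmr. The example below applies it to a small non-Hermitian system; the matrix is illustrative, not taken from the paper.

```python
# Usage example: SciPy's QMR on a small non-Hermitian sparse system.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import qmr

n = 200
# Non-symmetric tridiagonal matrix, convection-diffusion-like stencil.
A = sp.diags([-1.3, 2.0, -0.7], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = qmr(A, b)          # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))
```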
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakeman, John D.; Narayan, Akil; Zhou, Tao
2017-06-22
We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain, and subsequently solves a preconditioned $\ell^1$-minimization problem in which the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm, together with numerical results showing that our method is superior to standard Monte Carlo methods in many situations of interest. Finally, numerical examples demonstrate that the proposed algorithm yields comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.
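A compact sketch of the sampling-and-preconditioning idea on [-1, 1], where the equilibrium measure is the arcsine (Chebyshev) density. Everything below is illustrative: the basis size, sample count, noise level, and the use of a LASSO solver as a stand-in for the paper's $\ell^1$-minimization are assumptions, not the authors' implementation.

```python
# Sketch of Christoffel-function preconditioning for sparse Legendre
# recovery on [-1, 1] (illustrative sizes; LASSO stands in for l1-min).
import numpy as np
from numpy.polynomial.legendre import legvander
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
N, M = 64, 30                       # basis size, number of samples

# Sample from the equilibrium measure of [-1, 1]: the arcsine density.
x = np.cos(np.pi * rng.random(M))

# L2-orthonormal Legendre basis evaluated at the samples.
V = legvander(x, N - 1) * np.sqrt((2 * np.arange(N) + 1) / 2)

# Christoffel-function weights: w_j = N / sum_k p_k(x_j)^2.
w = N / np.sum(V ** 2, axis=1)
S = np.sqrt(w)[:, None]

# Sparse ground truth and slightly noisy samples.
c_true = np.zeros(N)
c_true[rng.choice(N, 5, replace=False)] = rng.normal(size=5)
y = V @ c_true + 1e-3 * rng.normal(size=M)

# Preconditioned l1 recovery: weight both sides, then solve a LASSO.
lasso = Lasso(alpha=1e-4, fit_intercept=False, max_iter=50000)
lasso.fit(S * V, S.ravel() * y)
print(np.linalg.norm(lasso.coef_ - c_true))
```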
NASA Astrophysics Data System (ADS)
Havemann, Frank; Heinz, Michael; Struck, Alexander; Gläser, Jochen
2011-01-01
We propose a new local, deterministic and parameter-free algorithm that detects fuzzy and crisp overlapping communities in a weighted network and simultaneously reveals their hierarchy. Using a local fitness function, the algorithm greedily expands natural communities of seeds until the whole graph is covered. The hierarchy of communities is obtained analytically by calculating resolution levels at which communities grow rather than numerically by testing different resolution levels. This analytic procedure is not only more exact than its numerical alternatives such as LFM and GCE but also much faster. Critical resolution levels can be identified by searching for intervals in which large changes of the resolution do not lead to growth of communities. We tested our algorithm on benchmark graphs and on a network of 492 papers in information science. Combined with a specific post-processing, the algorithm gives much more precise results on LFR benchmarks with high overlap compared to other algorithms and performs very similarly to GCE.
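The greedy expansion step can be sketched with networkx as follows. The local fitness function f = k_in / (k_in + k_out)^alpha and the fixed resolution parameter alpha used here are simplifications: the paper computes the resolution levels at which communities grow analytically rather than fixing one.

```python
# Sketch of greedy local community expansion with a resolution-dependent
# fitness function (simplified relative to the paper's analytic scheme).
import networkx as nx

def fitness(G, community, alpha):
    k_in = 2 * G.subgraph(community).number_of_edges()
    k_out = sum(1 for u in community for v in G[u] if v not in community)
    return k_in / (k_in + k_out) ** alpha

def expand_community(G, seed, alpha=1.0):
    """Greedily grow the natural community of a seed node."""
    community = {seed}
    improved = True
    while improved:
        improved = False
        # candidate nodes: neighbours of the current community
        frontier = {v for u in community for v in G[u]} - community
        for v in frontier:
            if fitness(G, community | {v}, alpha) > fitness(G, community, alpha):
                community.add(v)
                improved = True
    return community

G = nx.karate_club_graph()
print(sorted(expand_community(G, seed=0)))
```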
A simple algorithm for beam profile diagnostics using a thermographic camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katagiri, Ken; Hojo, Satoru; Honma, Toshihiro
2014-03-15
A new algorithm for digital image processing apparatuses is developed to evaluate profiles of high-intensity DC beams from temperature images of irradiated thin foils. Numerical analyses are performed to examine the reliability of the algorithm. To simulate the temperature images acquired by a thermographic camera, temperature distributions are numerically calculated for 20 MeV proton beams with different parameters. Noise added to the temperature images by the camera sensor is also simulated to account for its effect. Using the algorithm, beam profiles are evaluated from the simulated temperature images and compared with exact solutions. We find that niobium is an appropriate material for the thin foil used in the diagnostic system. We also confirm that the algorithm is adaptable over a wide beam current range of 0.11-214 μA, even when employing a general-purpose thermographic camera with rather high noise (ΔT_NETD ≃ 0.3 K; NETD: noise equivalent temperature difference).
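As a simplified stand-in for the profile-evaluation step, the sketch below fits a 2-D Gaussian to a synthetic temperature image with NETD-like sensor noise. The real algorithm must also account for heat conduction in the foil; the image size, beam shape, and noise level here are assumptions.

```python
# Simplified stand-in for beam-profile evaluation: fit a 2-D Gaussian
# to a noisy synthetic temperature image (illustrative parameters).
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sx, sy, base):
    x, y = xy
    return (amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                           + (y - y0) ** 2 / (2 * sy ** 2))) + base).ravel()

# Simulated camera image: Gaussian hot spot plus NETD-like sensor noise.
x, y = np.meshgrid(np.arange(128), np.arange(128))
rng = np.random.default_rng(1)
image = gauss2d((x, y), 5.0, 70, 60, 12, 8, 300.0).reshape(128, 128)
image += rng.normal(0.0, 0.3, image.shape)          # ~0.3 K NETD

p0 = (np.ptp(image), 64, 64, 10, 10, image.min())   # rough initial guess
popt, _ = curve_fit(gauss2d, (x, y), image.ravel(), p0=p0)
print("centre=(%.1f, %.1f) sigma=(%.1f, %.1f)" % tuple(popt[1:5]))
```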
The NLO jet vertex in the small-cone approximation for kt and cone algorithms
NASA Astrophysics Data System (ADS)
Colferai, D.; Niccoli, A.
2015-04-01
We determine the jet vertex for Mueller-Navelet jets and forward jets in the small-cone approximation for two particular choices of jet algorithms: the kt algorithm and the cone algorithm. These choices are motivated by the extensive use of such algorithms in the phenomenology of jets. The differences with the original calculations of the small-cone jet vertex by Ivanov and Papa, which is found to be equivalent to an algorithm formerly proposed by Furman, are shown at both the analytic and numerical level, and turn out to be sizeable. A detailed numerical study of the error introduced by the small-cone approximation is also presented for various observables of phenomenological interest. For a jet "radius" of R = 0.5, the use of the small-cone approximation amounts to an error of about 5% at the level of the cross section, while it reduces to less than 2% for ratios of distributions such as those involved in the measurement of the azimuthal decorrelation of dijets.
Wei, Jiangyong; Hu, Xiaohua; Zou, Xiufen; Tian, Tianhai
2017-12-28
Recent advances in omics technologies have raised great opportunities to study large-scale regulatory networks inside the cell. In addition, single-cell experiments have measured the gene and protein activities in a large number of cells under the same experimental conditions. However, a significant challenge in computational biology and bioinformatics is how to derive quantitative information from the single-cell observations and how to develop sophisticated mathematical models to describe the dynamic properties of regulatory networks using the derived quantitative information. This work designs an integrated approach to reverse-engineer gene networks for regulating early blood development based on single-cell experimental observations. The wanderlust algorithm is initially used to develop the pseudo-trajectory for the activities of a number of genes. Since the gene expression data in the developed pseudo-trajectory show large fluctuations, we then use Gaussian process regression methods to smooth the gene expression data in order to obtain pseudo-trajectories with much smaller fluctuations (see the sketch below). The proposed integrated framework consists of both bioinformatics algorithms to reconstruct the regulatory network and mathematical models using differential equations to describe the dynamics of gene expression. The developed approach is applied to study the network regulating early blood cell development. A graphic model is constructed for a regulatory network with forty genes, and a dynamic model using differential equations is developed for a network of nine genes. Numerical results suggest that the proposed model is able to match experimental data very well. We also examine networks with more regulatory relations, and numerical results show that more regulations may exist. We test the possibility of auto-regulation, but numerical simulations do not support positive auto-regulation. In addition, robustness is used as an important additional criterion to select candidate networks. The research results in this work show that the developed approach is an efficient and effective method to reverse-engineer gene networks using single-cell experimental observations.
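The smoothing step can be illustrated in a few lines of scikit-learn. The pseudo-time values and expression curve below are synthetic, and the choice of an RBF-plus-white-noise kernel is an assumption rather than the authors' configuration.

```python
# Sketch of the smoothing step: Gaussian process regression applied to a
# noisy gene-expression pseudo-trajectory (synthetic data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
t = np.sort(rng.random(200))[:, None]          # pseudo-time of 200 cells
expr = np.sin(3 * t).ravel() + 0.4 * rng.normal(size=200)  # noisy expression

kernel = 1.0 * RBF(length_scale=0.2) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, expr)

t_grid = np.linspace(0, 1, 100)[:, None]
smooth, std = gp.predict(t_grid, return_std=True)  # smoothed trajectory
```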
NASA Technical Reports Server (NTRS)
Goodman, Steven J.; Blakeslee, Richard; Koshak, William; Petersen, Walter; Carey, Larry; Mach, Douglas; Buechler, Dennis; Bateman, Monte; McCaul, Eugene; Bruning, Eric;
2010-01-01
The next-generation Geostationary Operational Environmental Satellite (GOES-R) series, with a planned launch in 2015, is a follow-on to the existing GOES system currently operating over the Western Hemisphere. The system will aid in forecasting severe storms and tornado activity, and convective weather impacts on aviation safety and efficiency. The system provides products including lightning, cloud properties, rainfall rate, volcanic ash, air quality, hurricane intensity, and fire/hot-spot characterization. Advancements over the current GOES include a new capability for total lightning detection (cloud and cloud-to-ground flashes) from the Geostationary Lightning Mapper (GLM), and improved spectral, spatial, and temporal resolution for the 16-channel Advanced Baseline Imager (ABI). The GLM, an optical transient detector, will map total (in-cloud and cloud-to-ground) lightning flashes continuously day and night with near-uniform spatial resolution of 8 km and a product refresh rate of less than 20 s over the Americas and adjacent oceanic regions, from the west coast of Africa (GOES-E) to New Zealand (GOES-W), when the constellation is fully operational. In parallel with the instrument development, a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the higher-level algorithms and applications using the GLM alone, as well as decision aids incorporating information from the ABI, ground-based weather radar, and numerical models. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and from regional lightning networks are being used to develop the pre-launch algorithms and applications, and also to improve our knowledge of thunderstorm initiation and evolution. Real-time total lightning mapping data are also being provided in an experimental mode to selected National Weather Service (NWS) national centers and forecast offices via the GOES-R Proving Ground, to help improve our understanding of the application of these data in operational settings and to facilitate early on-orbit user readiness for this new capability.
Parallel Algorithms for Least Squares and Related Computations.
1991-03-22
for dense computations in linear algebra. The work has recently been published in a general reference book on parallel algorithms by SIAM. AFOSR ... written his Ph.D. dissertation with the principal investigator. (See publication 6.) • Parallel Algorithms for Dense Linear Algebra Computations. Our ... and describe and put into perspective a selection of the more important parallel algorithms for numerical linear algebra. We give a major new
Convergence and Applications of a Gossip-Based Gauss-Newton Algorithm
NASA Astrophysics Data System (ADS)
Li, Xiao; Scaglione, Anna
2013-11-01
The Gauss-Newton algorithm is a popular and efficient centralized method for solving non-linear least squares problems. In this paper, we propose a multi-agent distributed version of this algorithm, named Gossip-based Gauss-Newton (GGN) algorithm, which can be applied in general problems with non-convex objectives. Furthermore, we analyze and present sufficient conditions for its convergence and show numerically that the GGN algorithm achieves performance comparable to the centralized algorithm, with graceful degradation in case of network failures. More importantly, the GGN algorithm provides significant performance gains compared to other distributed first order methods.
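A minimal sketch of the gossip-plus-Gauss-Newton pattern on a toy range-localization problem. Assumptions throughout: a synchronous network of ten agents, idealized averaging applied in one shot in place of repeated gossip rounds, and a small damping term; the paper's distributed protocol and convergence analysis are not reproduced.

```python
# Sketch of the gossip-based Gauss-Newton idea: agents average their
# local J^T J and J^T r contributions, then take a common GN step.
import numpy as np

rng = np.random.default_rng(3)
n_agents, dim = 10, 2
anchors = rng.random((n_agents, dim)) * 10         # agent positions
target = np.array([4.0, 6.0])
ranges = np.linalg.norm(anchors - target, axis=1)  # local range measurements

def local_terms(x, a, r):
    """One agent's contribution to the Gauss-Newton normal equations."""
    d = np.linalg.norm(x - a)
    J = (x - a) / d                  # gradient of the predicted range
    return np.outer(J, J), J * (d - r)

x = np.array([1.0, 1.0])             # common initial estimate
for _ in range(20):
    terms = [local_terms(x, anchors[i], ranges[i]) for i in range(n_agents)]
    # gossip averaging: neighbour exchanges drive every agent's copy of
    # (H, g) to the network average; idealized here as an exact mean
    H_avg = sum(t[0] for t in terms) / n_agents
    g_avg = sum(t[1] for t in terms) / n_agents
    x = x - np.linalg.solve(H_avg + 1e-9 * np.eye(dim), g_avg)
print(x, target)
```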
An implicit iterative algorithm with a tuning parameter for Itô Lyapunov matrix equations
NASA Astrophysics Data System (ADS)
Zhang, Ying; Wu, Ai-Guo; Sun, Hui-Jie
2018-01-01
In this paper, an implicit iterative algorithm is proposed for solving a class of Lyapunov matrix equations arising in Itô stochastic linear systems. A tuning parameter is introduced in this algorithm, through which the convergence rate of the algorithm can be changed. Conditions are presented under which the developed algorithm converges. In addition, an explicit expression is derived for the optimal tuning parameter, which guarantees that the algorithm achieves its fastest convergence rate. Finally, numerical examples are employed to illustrate the effectiveness of the given algorithm.
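To make the role of the tuning parameter concrete, here is a simplified relaxed fixed-point iteration for an Itô-type Lyapunov equation A'X + XA + C'XC + Q = 0. It lags the stochastic term and relaxes with a parameter theta; this is a stand-in illustrating the idea, not the paper's implicit scheme or its optimal-parameter formula.

```python
# Simplified fixed-point sketch for A' X + X A + C' X C + Q = 0
# with a relaxation (tuning) parameter theta; illustrative only.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(4)
n = 5
A = -2 * np.eye(n) + 0.3 * rng.standard_normal((n, n))   # stable drift
C = 0.2 * rng.standard_normal((n, n))                    # diffusion term
Q = np.eye(n)

theta = 0.8                       # tuning parameter in (0, 1]
X = np.zeros((n, n))
for k in range(200):
    # solve a deterministic Lyapunov equation with the stochastic
    # term lagged, then relax with parameter theta
    X_new = solve_continuous_lyapunov(A.T, -(Q + C.T @ X @ C))
    delta = np.linalg.norm(X_new - X)
    X = (1 - theta) * X + theta * X_new
    if delta < 1e-12:
        break

residual = A.T @ X + X @ A + C.T @ X @ C + Q
print(k, np.linalg.norm(residual))
```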
DOE Office of Scientific and Technical Information (OSTI.GOV)
Priimak, Dmitri
2014-12-01
We present a finite-difference numerical algorithm for solving the two-dimensional spatially homogeneous Boltzmann transport equation describing electron transport in a semiconductor superlattice subject to crossed time-dependent electric and constant magnetic fields. The algorithm is implemented both in C, targeting the CPU, and in CUDA C, targeting commodity NVIDIA GPUs. We compare the performance and merits of the two implementations and discuss various software optimisation techniques.
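The CPU-versus-GPU split can be mimicked in Python by swapping the array module, which is analogous in spirit (though not in performance detail) to the paper's C versus CUDA C comparison. The finite-difference update below is a generic upwind advection step on a 2-D grid, not the paper's Boltzmann solver.

```python
# Illustrative CPU/GPU split: the same finite-difference update runs on
# NumPy (CPU) or CuPy (NVIDIA GPU) just by swapping the array module.
import numpy as np
try:
    import cupy as xp            # GPU path, if CuPy and a GPU are present
except ImportError:
    xp = np                      # CPU fallback

def step(f, dt, dx, v):
    """One upwind finite-difference step of df/dt + v df/dx = 0."""
    return f - v * dt / dx * (f - xp.roll(f, 1, axis=0))

f = xp.exp(-((xp.linspace(0, 1, 4096)[:, None] - 0.5) ** 2) / 0.01)
f = xp.broadcast_to(f, (4096, 512)).copy()   # 2-D grid
for _ in range(100):
    f = step(f, dt=1e-4, dx=1.0 / 4096, v=1.0)
```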
Numerical study of a matrix-free trust-region SQP method for equality constrained optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinkenschloss, Matthias; Ridzal, Denis; Aguilo, Miguel Antonio
2011-12-01
This is a companion publication to the paper 'A Matrix-Free Trust-Region SQP Algorithm for Equality Constrained Optimization' [11]. In [11], we develop and analyze a trust-region sequential quadratic programming (SQP) method that supports the matrix-free (iterative, inexact) solution of linear systems. In this report, we document the numerical behavior of the algorithm applied to a variety of equality constrained optimization problems, with constraints given by partial differential equations (PDEs).
Multi-fidelity stochastic collocation method for computation of statistical moments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Xueyu, E-mail: xueyu-zhu@uiowa.edu; Linebarger, Erin M., E-mail: aerinline@sci.utah.edu; Xiu, Dongbin, E-mail: xiu.16@osu.edu
We present an efficient numerical algorithm to approximate the statistical moments of stochastic problems in the presence of models with different fidelities. The method extends a previously developed multi-fidelity approximation method. By combining the efficiency of low-fidelity models and the accuracy of high-fidelity models, our method exhibits fast convergence with a limited number of high-fidelity simulations. We establish an error bound for the method and present several numerical examples to demonstrate the efficiency and applicability of the multi-fidelity algorithm.
Optimization-Based Inverse Identification of the Parameters of a Concrete Cap Material Model
NASA Astrophysics Data System (ADS)
Král, Petr; Hokeš, Filip; Hušek, Martin; Kala, Jiří; Hradil, Petr
2017-10-01
Issues concerning the advanced numerical analysis of concrete building structures in sophisticated computing systems currently require the involvement of nonlinear mechanics tools. Efforts to design safer, more durable and more economically efficient concrete structures are supported by the use of advanced nonlinear concrete material models and the geometrically nonlinear approach. The application of nonlinear mechanics tools undoubtedly presents another step towards approximating the real behaviour of concrete building structures within computer numerical simulations. However, the success of this application depends on a thorough understanding of the behaviour of the concrete material models used and of the meaning of their parameters. The effective application of nonlinear concrete material models within computer simulations often becomes problematic because these models frequently contain parameters (material constants) whose values are difficult to obtain, yet correct parameter values are essential to the proper functioning of the model. One possibility that permits a successful solution of this problem is the use of optimization algorithms for optimization-based inverse material parameter identification. Parameter identification goes hand in hand with experimental investigation: it seeks parameter values of the used material model such that the resulting data obtained from the computer simulation best approximate the experimental data. This paper is focused on the optimization-based inverse identification of the parameters of a concrete cap material model known as the Continuous Surface Cap Model. Material parameters of the model are identified on the basis of interaction between nonlinear computer simulations, gradient-based and nature-inspired optimization algorithms, and experimental data, the latter taking the form of a load-extension curve obtained from the evaluation of uniaxial tensile test results. The aim of this research was to obtain material model parameters corresponding to quasi-static tensile loading, which may be further used in research involving dynamic and high-speed tensile loading. Based on the obtained results, it can be concluded that the set goal has been reached.
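The general identification loop can be sketched in a few lines: choose parameters, run the model, compare with the measured load-extension curve, and let an optimizer close the loop. The softening law, parameter names, and noise level below are illustrative stand-ins for the Continuous Surface Cap Model and the nonlinear finite element simulation.

```python
# Sketch of optimization-based parameter identification: fit material
# parameters so a simulated load-extension curve matches "experiment"
# (toy softening law; illustrative stand-in for the real FE model).
import numpy as np
from scipy.optimize import least_squares

def simulate_load(params, extension):
    """Toy tensile response: linear rise, then exponential softening."""
    stiffness, peak, decay = params
    e_peak = peak / stiffness
    return np.where(extension < e_peak,
                    stiffness * extension,
                    peak * np.exp(-decay * (extension - e_peak)))

extension = np.linspace(0, 0.5, 50)
measured = simulate_load((200.0, 30.0, 8.0), extension)  # stand-in data
measured += np.random.default_rng(5).normal(0, 0.3, 50)

def residuals(params):
    return simulate_load(params, extension) - measured

fit = least_squares(residuals, x0=(100.0, 20.0, 5.0),
                    bounds=([1, 1, 0.1], [1e3, 1e2, 50]))
print(fit.x)                     # identified stiffness, peak load, decay
```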
NASA Astrophysics Data System (ADS)
Liu, Jinxin; Chen, Xuefeng; Gao, Jiawei; Zhang, Xingwu
2016-12-01
Air vehicles, space vehicles and underwater vehicles, the cabins of which can be viewed as variable-section cylindrical structures, have multiple rotational vibration sources (e.g., engines, propellers, compressors and motors), making the spectrum of the noise multiple-harmonic. The suppression of such noise has been a focus of interest in the field of active vibration control (AVC). In this paper, a multiple-source multiple-harmonic (MSMH) active vibration suppression algorithm with a feed-forward structure is proposed based on reference amplitude rectification and the conjugate gradient method (CGM). An AVC simulation scheme called finite element model in-loop simulation (FEMILS) is also proposed for rapid algorithm verification. Numerical studies of AVC are conducted on a variable-section cylindrical structure based on the proposed MSMH algorithm and FEMILS scheme. The numerical studies show that: (1) the proposed MSMH algorithm can individually suppress each component of the multiple-harmonic noise with a unified and improved convergence rate; and (2) the FEMILS scheme is convenient and straightforward for multiple-source simulations with an acceptable loop time. Moreover, the simulations follow a similar procedure to real-life control and can be easily extended to a physical model platform.
A model reduction approach to numerical inversion for a parabolic partial differential equation
NASA Astrophysics Data System (ADS)
Borcea, Liliana; Druskin, Vladimir; Mamonov, Alexander V.; Zaslavsky, Mikhail
2014-12-01
We propose a novel numerical inversion algorithm for the coefficients of parabolic partial differential equations, based on model reduction. The study is motivated by the application of controlled source electromagnetic exploration, where the unknown is the subsurface electrical resistivity and the data are time resolved surface measurements of the magnetic field. The algorithm presented in this paper considers inversion in one and two dimensions. The reduced model is obtained with rational interpolation in the frequency (Laplace) domain and a rational Krylov subspace projection method. It amounts to a nonlinear mapping from the function space of the unknown resistivity to the small dimensional space of the parameters of the reduced model. We use this mapping as a nonlinear preconditioner for the Gauss-Newton iterative solution of the inverse problem. The advantage of the inversion algorithm is twofold. First, the nonlinear preconditioner resolves most of the nonlinearity of the problem. Thus the iterations are less likely to get stuck in local minima and the convergence is fast. Second, the inversion is computationally efficient because it avoids repeated accurate simulations of the time-domain response. We study the stability of the inversion algorithm for various rational Krylov subspaces, and assess its performance with numerical experiments.
Inverse Problems in Geodynamics Using Machine Learning Algorithms
NASA Astrophysics Data System (ADS)
Shahnas, M. H.; Yuen, D. A.; Pysklywec, R. N.
2018-01-01
During the past few decades, numerical studies have been widely employed to explore the style of circulation and mixing in the mantle of Earth and other planets. However, these geodynamical numerical models depend on many properties from mineral physics, geochemistry, and petrology. Machine learning, as a computational statistics-related technique and a subfield of artificial intelligence, has rapidly emerged in many fields of science and engineering. We focus here on the application of supervised machine learning (SML) algorithms to the prediction of mantle flow processes. Specifically, we emphasize estimating mantle properties by employing machine learning techniques to solve an inverse problem. Using snapshots of numerical convection models as training samples, we enable machine learning models to determine the magnitude of the spin-transition-induced density anomalies that can cause flow stagnation at mid-mantle depths. Employing support vector machine algorithms, we show that SML techniques can successfully predict the magnitude of mantle density anomalies and can also be used to characterize mantle flow patterns. The technique can be extended to more complex geodynamic problems in mantle dynamics by employing deep learning algorithms to put constraints on properties such as viscosity, elastic parameters, and the nature of thermal and chemical anomalies.
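The supervised-learning setup can be sketched with scikit-learn: feature vectors extracted from convection snapshots are regressed against the anomaly magnitude. The features below are synthetic placeholders; the actual models were trained on snapshots of numerical convection simulations.

```python
# Sketch of the SML setup: convection-model snapshots become feature
# vectors, and a support vector machine regresses the density-anomaly
# magnitude (synthetic features; illustrative only).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_snapshots, n_features = 400, 50
X = rng.standard_normal((n_snapshots, n_features))  # e.g. binned T/velocity
anomaly = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(n_snapshots)

X_tr, X_te, y_tr, y_te = train_test_split(X, anomaly, random_state=0)
model = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)
print("R^2 on held-out snapshots:", model.score(X_te, y_te))
```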
Improved transition path sampling methods for simulation of rare events
NASA Astrophysics Data System (ADS)
Chopra, Manan; Malshe, Rohit; Reddy, Allam S.; de Pablo, J. J.
2008-04-01
The free energy surfaces of a wide variety of systems encountered in physics, chemistry, and biology are characterized by the existence of deep minima separated by numerous barriers. One of the central aims of recent research in computational chemistry and physics has been to determine how transitions occur between deep local minima on rugged free energy landscapes, and transition path sampling (TPS) Monte Carlo methods have emerged as an effective means for the numerical investigation of such transitions. Many of the shortcomings of TPS-like approaches stem from their high computational demands. Two new algorithms are presented in this work that improve the efficiency of TPS simulations. The first algorithm uses biased shooting moves to render the sampling of reactive trajectories more efficient. The second algorithm substantially improves the accuracy of the transition state ensemble by introducing a subset of local transition path simulations in the transition state. The system considered in this work consists of a two-dimensional rough energy surface that is representative of numerous systems encountered in applications. Taken together, these algorithms provide gains in efficiency of over two orders of magnitude when compared to traditional TPS simulations.
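For orientation, here is a minimal, unbiased one-way shooting move for overdamped dynamics on a 2-D double-well surface; the paper's contribution is to bias such moves and to refine the transition-state ensemble, neither of which is implemented in this toy. The potential, temperature, and path length are illustrative assumptions.

```python
# Minimal transition-path-sampling shooting move on a 2-D double well
# (plain one-way shooting; the paper's biased variant is not reproduced).
import numpy as np

rng = np.random.default_rng(7)
beta, dt, n_steps = 2.0, 1e-3, 2000

def force(x):
    # double well in x[0], harmonic confinement in x[1]
    return np.array([-4 * x[0] * (x[0] ** 2 - 1), -2 * x[1]])

def propagate(x0, n):
    """Overdamped Langevin trajectory of n steps starting from x0."""
    path = [x0.copy()]
    x = x0.copy()
    for _ in range(n):
        x = x + force(x) * dt + np.sqrt(2 * dt / beta) * rng.standard_normal(2)
        path.append(x.copy())
    return np.array(path)

in_A = lambda x: x[0] < -0.8          # reactant basin
in_B = lambda x: x[0] > 0.8           # product basin
reactive = lambda p: in_A(p[0]) and in_B(p[-1])

# In production TPS one starts from a known reactive path.
path = propagate(np.array([-1.0, 0.0]), n_steps)
for _ in range(200):                   # one-way shooting moves
    i = int(rng.integers(1, n_steps))  # random shooting point
    trial = np.vstack([path[:i], propagate(path[i], n_steps - i)])
    if reactive(trial):                # keep only reactive trial paths
        path = trial
```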