Approximation concepts for efficient structural synthesis
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Miura, H.
1976-01-01
It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical, efficient, large-scale structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.
Optimal feedback control of infinite dimensional parabolic evolution systems: Approximation techniques
NASA Technical Reports Server (NTRS)
Banks, H. T.; Wang, C.
1989-01-01
A general approximation framework is discussed for computation of optimal feedback controls in linear quadratic regulator problems for nonautonomous parabolic distributed parameter systems. This is done in the context of a theoretical framework using general evolution systems in infinite dimensional Hilbert spaces. Conditions are discussed for preservation under approximation of stabilizability and detectability hypotheses on the infinite dimensional system. The special case of periodic systems is also treated.
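The finite-dimensional approximations produced by such a framework reduce, at each approximation level, to a standard matrix LQR problem. A minimal sketch (not the paper's scheme; the modal truncation, actuator vector, and weights below are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical finite-dimensional (modal) truncation of a parabolic system:
# a 1-D heat equation on (0, pi) has eigenvalues -k^2, so the truncated
# state matrix is diagonal. B, Q, R below are illustrative choices.
n = 8
A = np.diag([-float(k + 1) ** 2 for k in range(n)])  # stable modal dynamics
B = np.ones((n, 1))        # assumed influence of one actuator on each mode
Q = np.eye(n)              # state weighting
R = np.array([[1.0]])      # control weighting

# Solve the algebraic Riccati equation and form the feedback gain u = -K x
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

eigs = np.linalg.eigvals(A - B @ K)   # closed-loop spectrum
print(eigs.real.max())                # negative: closed loop is stable
```

Convergence of the infinite-dimensional theory then concerns the behavior of the gains K as n grows.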
A hybrid Padé-Galerkin technique for differential equations
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1993-01-01
A three-step hybrid analysis technique, which successively uses the regular perturbation expansion method, the Padé expansion method, and then a Galerkin approximation, is presented and applied to some model boundary value problems. In the first step of the method, the regular perturbation method is used to construct an approximation to the solution in the form of a finite power series in a small parameter epsilon associated with the problem. In the second step of the method, the series approximation obtained in step one is used to construct a Padé approximation in the form of a rational function in the parameter epsilon. In the third step, the various powers of epsilon which appear in the Padé approximation are replaced by new (unknown) parameters (delta_j). These new parameters are determined by requiring that the residual formed by substituting the new approximation into the governing differential equation is orthogonal to each of the perturbation coordinate functions used in step one. The technique is applied to model problems involving ordinary or partial differential equations. In general, the technique appears to provide good approximations to the solution even when the perturbation and Padé approximations fail to do so. The method is discussed and topics for future investigations are indicated.
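Step two of the method, building a Padé approximant from the coefficients of a truncated power series, can be sketched as follows (a generic [m/n] construction, not the paper's code; the exp(x) series is just a test case):

```python
import numpy as np
from math import factorial

def pade(c, m, n):
    """Coefficients (a, b) of the [m/n] Pade approximant built from the
    Taylor coefficients c[0..m+n], normalized so that b[0] = 1."""
    # Denominator: enforce that coefficients m+1..m+n of Q(x)f(x) vanish
    C = np.array([[c[m + k - j] if m + k - j >= 0 else 0.0
                   for j in range(1, n + 1)] for k in range(1, n + 1)])
    rhs = -np.array([c[m + k] for k in range(1, n + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(C, rhs)))
    # Numerator: Cauchy product of the denominator with the series
    a = np.array([sum(b[j] * c[k - j] for j in range(min(k, n) + 1))
                  for k in range(m + 1)])
    return a, b

c = [1.0 / factorial(k) for k in range(5)]  # Taylor coefficients of exp(x)
a, b = pade(c, 2, 2)
x = 1.0
approx = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
print(approx)  # 19/7 ~ 2.7143, close to e ~ 2.71828
```

The rational form often extends the region of validity of the truncated series, which is exactly what step three then refines.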
Hamiltonian Analysis of Subcritical Stochastic Epidemic Dynamics
2017-01-01
We extend a technique of approximation of the long-term behavior of a supercritical stochastic epidemic model, using the WKB approximation and a Hamiltonian phase space, to the subcritical case. The limiting behavior of the model and approximation are qualitatively different in the subcritical case, requiring a novel analysis of the limiting behavior of the Hamiltonian system away from its deterministic subsystem. This yields a general technique for approximating the quasistationary distribution of stochastic epidemic and birth-death models, and may lead to techniques for analyzing these models beyond the quasistationary distribution. For a classic SIS model, the approximation found for the quasistationary distribution is very similar to published approximations but not identical. For a birth-death process without depletion of susceptibles, the approximation is exact. Dynamics on the phase plane similar to those predicted by the Hamiltonian analysis are demonstrated in cross-sectional data from trachoma treatment trials in Ethiopia, in which declining prevalences are consistent with subcritical epidemic dynamics. PMID:28932256
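For a finite-state SIS model, the quasistationary distribution being approximated can also be computed directly as the dominant left eigenvector of the generator restricted to the transient states. A small numerical sketch under assumed parameter values (not the paper's WKB construction):

```python
import numpy as np

# Stochastic SIS: I -> I+1 at rate beta*I*(N-I)/N, I -> I-1 at rate gamma*I.
N, beta, gamma = 100, 0.8, 1.0   # beta < gamma: subcritical (R0 < 1)
Q = np.zeros((N, N))             # generator restricted to states I = 1..N
for i in range(1, N + 1):
    up = beta * i * (N - i) / N
    down = gamma * i
    if i < N:
        Q[i - 1, i] = up         # infection: I -> I+1
    if i > 1:
        Q[i - 1, i - 2] = down   # recovery: I -> I-1 (I=1 -> extinction)
    Q[i - 1, i - 1] = -(up + down)

# Quasistationary distribution: left eigenvector of Q for the eigenvalue
# with largest real part, normalized to a probability vector.
w, V = np.linalg.eig(Q.T)
qsd = np.real(V[:, np.argmax(np.real(w))])
qsd = np.abs(qsd) / np.abs(qsd).sum()
print(qsd[:3])  # mass concentrated near I = 1 in the subcritical regime
```

This brute-force eigenvector is what the WKB/Hamiltonian analysis approximates analytically for large N.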
NASA Astrophysics Data System (ADS)
Kokurin, M. Yu.
2010-11-01
A general scheme for improving approximate solutions to irregular nonlinear operator equations in Hilbert spaces is proposed and analyzed in the presence of errors. A modification of this scheme designed for equations with quadratic operators is also examined. The technique of universal linear approximations of irregular equations is combined with the projection onto finite-dimensional subspaces of a special form. It is shown that, for finite-dimensional quadratic problems, the proposed scheme provides information about the global geometric properties of the intersections of quadrics.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Kunisch, K.
1982-01-01
Approximation results from linear semigroup theory are used to develop a general framework for convergence of approximation schemes in parameter estimation and optimal control problems for nonlinear partial differential equations. These ideas are used to establish theoretical convergence results for parameter identification using modal (eigenfunction) approximation techniques. Results from numerical investigations of these schemes for both hyperbolic and parabolic systems are given.
Intermediate boundary conditions for LOD, ADI and approximate factorization methods
NASA Technical Reports Server (NTRS)
Leveque, R. J.
1985-01-01
A general approach to determining the correct intermediate boundary conditions for dimensional splitting methods is presented. The intermediate solution U* is viewed as a second-order accurate approximation to a modified equation. Deriving the modified equation and using the relationship between this equation and the original equation allows us to determine the correct boundary conditions for U*. This technique is illustrated by applying it to locally one-dimensional (LOD) and alternating direction implicit (ADI) methods for the heat equation in two and three space dimensions. The approximate factorization method is considered in slightly more generality.
Numerical realization of the variational method for generating self-trapped beams
NASA Astrophysics Data System (ADS)
Duque, Erick I.; Lopez-Aguayo, Servando; Malomed, Boris A.
2018-03-01
We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
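The basic sensitivity formula underlying such methods, for a simple eigenvalue of a non-hermitian matrix, involves both the right and left eigenvectors. A hedged numerical sketch (random test matrices, not the paper's structural matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # non-hermitian
dA = rng.normal(size=(n, n))              # direction of a design change

w, V = np.linalg.eig(A)                   # right eigenpairs
wl, U = np.linalg.eig(A.conj().T)         # left eigenvectors of A
k = 0
lam, x = w[k], V[:, k]
y = U[:, np.argmin(np.abs(wl.conj() - lam))]  # left vector matching lam

# First-order eigenvalue sensitivity for a non-hermitian matrix:
#   d(lambda) = y^H dA x / (y^H x)
dlam = (y.conj() @ dA @ x) / (y.conj() @ x)

# Finite-difference check: track the eigenvalue nearest to lam
eps = 1e-6
w2 = np.linalg.eigvals(A + eps * dA)
lam_fd = w2[np.argmin(np.abs(w2 - lam))]
print(abs((lam_fd - lam) / eps - dlam))   # small: first-order agreement
```

Note that the formula is normalization-independent, since any rescaling of x or y cancels in the quotient; the paper's discussion of normalization matters for eigenvector (not eigenvalue) derivatives.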
NASA Astrophysics Data System (ADS)
Ngom, Ndèye Fatou; Monga, Olivier; Ould Mohamed, Mohamed Mahmoud; Garnier, Patricia
2012-02-01
This paper focuses on the modeling of soil microstructures using generalized cylinders, with a specific application to pore space. The geometric modeling of these microstructures is a recent area of study, made possible by the improved performance of computed tomography techniques. X-ray scanners provide very high-resolution 3D volume images (3-5 μm) of soil samples in which pore spaces can be extracted by thresholding. However, in most cases, the pore space defines a complex volume shape that cannot be approximated using simple analytical functions. We propose representing this shape using a compact, stable, and robust piecewise approximation by means of generalized cylinders. This intrinsic shape representation preserves its topological and geometric properties. Our algorithm includes three main processing stages. The first stage describes the volume shape using a minimum number of balls included within the shape, such that their union recovers the shape skeleton. The second stage involves the optimal extraction of simply connected chains of balls. The final stage handles the approximation of each optimal simply connected chain using generalized cylinders: circular generalized cylinders, tori, cylinders, and truncated cones. This technique was applied to several data sets formed by real volume computed tomography soil samples, demonstrating that our geometric representation supplies a good approximation of the pore space. We also stress the compactness and robustness of this method with respect to changes affecting the initial data, as well as its coherence with the intuitive notion of pores. In future studies, this geometric pore space representation will be used to simulate biological dynamics.
Rahaman, Mijanur; Pang, Chin-Tzong; Ishtyak, Mohd; Ahmad, Rais
2017-01-01
In this article, we introduce a perturbed system of generalized mixed quasi-equilibrium-like problems involving multi-valued mappings in Hilbert spaces. To calculate the approximate solutions of the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems, firstly we develop a perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems, and then by using the celebrated Fan-KKM technique, we establish the existence and uniqueness of solutions of the perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems. By deploying an auxiliary principle technique and an existence result, we formulate an iterative algorithm for solving the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems. Lastly, we study the strong convergence analysis of the proposed iterative sequences under monotonicity and some mild conditions. These results are new and generalize some known results in this field.
Solution of linear systems by a singular perturbation technique
NASA Technical Reports Server (NTRS)
Ardema, M. D.
1976-01-01
An approximate solution is obtained for a singularly perturbed system of initial valued, time invariant, linear differential equations with multiple boundary layers. Conditions are stated under which the approximate solution converges uniformly to the exact solution as the perturbation parameter tends to zero. The solution is obtained by the method of matched asymptotic expansions. Use of the results for obtaining approximate solutions of general linear systems is discussed. An example is considered to illustrate the method and it is shown that the formulas derived give a readily computed uniform approximation.
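The method of matched asymptotic expansions can be illustrated on a standard scalar model problem (illustrative only, not the paper's multi-boundary-layer system), whose leading-order composite approximation can be checked against the exact solution:

```python
import numpy as np

# Model problem:  eps*y'' + y' + y = 0,  y(0) = 0,  y(1) = 1,  0 < eps << 1,
# with a boundary layer of width O(eps) at x = 0.
eps = 0.01
r = np.sort(np.roots([eps, 1.0, 1.0]).real)   # characteristic roots
r1, r2 = r                                    # r1 ~ -1/eps (fast), r2 ~ -1 (slow)

# Exact solution c1*exp(r1 x) + c2*exp(r2 x) from the boundary conditions
M = np.array([[1.0, 1.0], [np.exp(r1), np.exp(r2)]])
c = np.linalg.solve(M, np.array([0.0, 1.0]))
x = np.linspace(0.0, 1.0, 201)
exact = c[0] * np.exp(r1 * x) + c[1] * np.exp(r2 * x)

# Leading-order composite (outer + inner - common part) approximation:
# outer e^{1-x} matched to the inner layer solution at x = 0.
composite = np.exp(1.0 - x) - np.exp(1.0 - x / eps)
print(np.abs(exact - composite).max())        # uniform O(eps) error
```

The uniform validity of the composite expansion as eps -> 0 is exactly the convergence property the paper establishes for the linear-system setting.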
Linear time relational prototype based learning.
Gisbrecht, Andrej; Mokbel, Bassam; Schleif, Frank-Michael; Zhu, Xibin; Hammer, Barbara
2012-10-01
Prototype based learning offers an intuitive interface to inspect large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike the Euclidean counterparts, the techniques have quadratic time complexity due to the underlying quadratic dissimilarity matrix. Thus, they are infeasible already for medium sized data sets. The contribution of this article is twofold: On the one hand we propose a novel supervised prototype based classification technique for dissimilarity data based on popular learning vector quantization (LVQ), on the other hand we transfer a linear time approximation technique, the Nyström approximation, to this algorithm and an unsupervised counterpart, the relational generative topographic mapping (GTM). This way, linear time and space methods result. We evaluate the techniques on three examples from the biomedical domain.
Estimation of nonlinear damping in second order distributed parameter systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Reich, Simeon; Rosen, I. G.
1989-01-01
An approximation and convergence theory for the identification of nonlinear damping in abstract wave equations is developed. It is assumed that the unknown dissipation mechanism to be identified can be described by a maximal monotone operator acting on the generalized velocity. The stiffness is assumed to be linear and symmetric. Functional analytic techniques are used to establish that solutions to a sequence of finite dimensional (Galerkin) approximating identification problems in some sense approximate a solution to the original infinite dimensional inverse problem.
Continuation of probability density functions using a generalized Lyapunov approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baars, S.; Viebahn, J.P.; Mulder, T.E.
Techniques from numerical bifurcation theory are very useful for studying transitions between steady fluid flow patterns and the instabilities involved. Here, we provide computational methodology to use parameter continuation in determining probability density functions of systems of stochastic partial differential equations near fixed points, under a small-noise approximation. The key innovation is the efficient solution of a generalized Lyapunov equation using an iterative method involving low-rank approximations. We apply and illustrate the capabilities of the method using a problem in physical oceanography, i.e., the occurrence of multiple steady states of the Atlantic Ocean circulation.
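Near a stable fixed point, the small-noise steady covariance solves a Lyapunov equation; the paper's contribution is a low-rank iterative solver for the generalized version of this problem. A dense sketch of the underlying equation (illustrative matrices, not the ocean model):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Near a stable fixed point, linearized stochastic dynamics
#   dx = A x dt + B dW
# have steady covariance P solving  A P + P A^T + B B^T = 0.
rng = np.random.default_rng(2)
n = 10
A = -np.eye(n) + 0.1 * rng.normal(size=(n, n))  # assumed stable Jacobian
B = rng.normal(size=(n, 2))                     # low-rank noise forcing

P = solve_continuous_lyapunov(A, -B @ B.T)      # steady-state covariance
residual = A @ P + P @ A.T + B @ B.T
print(np.abs(residual).max())                   # ~ machine precision
```

The Gaussian density exp(-x^T P^{-1} x / 2) (up to normalization) is then the probability density function being continued in the bifurcation parameter; for discretized PDEs, dense solves like this are infeasible, motivating the low-rank method.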
Post-shock temperatures in minerals
NASA Technical Reports Server (NTRS)
Raikes, S. A.; Ahrens, T. J.
1979-01-01
An experimental technique was developed for measuring post-shock temperatures in a wide variety of materials, including those of geophysical interest such as silicates. The technique uses an infrared radiation detector to determine the brightness temperature of samples shocked to pressures in the range 5 to approximately 30 GPa; in these experiments measurements were made in two wavelength ranges (4.5 to 5.75 microns and 7 to 14 microns). Reproducible results, with the temperatures in the two wavelength bands generally in excellent agreement, were obtained for aluminum-2024 (10.5 to 33 GPa, 125 to 260 C), stainless steel-304 (11.5 to 50 GPa, 80 to 350 C), crystalline quartz (5.0 to 21.5 GPa, 80 to 250 C), forsterite (7.5 to 28.0 GPa, approximately 30 to 160 C) and Bamble bronzite (6.0 to 26.0 GPa, approximately 30 to 225 C). It is concluded that release adiabat data should be used, wherever available, for calculations of residual temperature, and that adequate descriptions of the shock and release processes in minerals are more complex than generally assumed.
NASA Astrophysics Data System (ADS)
Bescond, Marc; Li, Changsheng; Mera, Hector; Cavassilas, Nicolas; Lannoo, Michel
2013-10-01
We present a one-shot current-conserving approach to model the influence of electron-phonon scattering in nano-transistors using the non-equilibrium Green's function formalism. The approach is based on the lowest order approximation (LOA) to the current and its simplest analytic continuation (LOA+AC). By means of a scaling argument, we show how both LOA and LOA+AC can be easily obtained from the first iteration of the usual self-consistent Born approximation (SCBA) algorithm. Both LOA and LOA+AC are then applied to model n-type silicon nanowire field-effect-transistors and are compared to SCBA current characteristics. In this system, the LOA fails to describe electron-phonon scattering, mainly because of the interactions with acoustic phonons at the band edges. In contrast, the LOA+AC still well approximates the SCBA current characteristics, thus demonstrating the power of analytic continuation techniques. The limits of validity of LOA+AC are also discussed, and more sophisticated and general analytic continuation techniques are suggested for more demanding cases.
Nonequilibrium flow computations. 1: An analysis of numerical formulations of conservation laws
NASA Technical Reports Server (NTRS)
Liu, Yen; Vinokur, Marcel
1988-01-01
Modern numerical techniques employing properties of flux Jacobian matrices are extended to general, nonequilibrium flows. Generalizations of the Beam-Warming scheme, Steger-Warming and van Leer Flux-vector splittings, and Roe's approximate Riemann solver are presented for 3-D, time-varying grids. The analysis is based on a thermodynamic model that includes the most general thermal and chemical nonequilibrium flow of an arbitrary gas. Various special cases are also discussed.
Course 4: Density Functional Theory, Methods, Techniques, and Applications
NASA Astrophysics Data System (ADS)
Chrétien, S.; Salahub, D. R.
Contents:
1 Introduction
2 Density functional theory
  2.1 Hohenberg and Kohn theorems
  2.2 Levy's constrained search
  2.3 Kohn-Sham method
3 Density matrices and pair correlation functions
4 Adiabatic connection or coupling strength integration
5 Comparing and contrasting KS-DFT and HF-CI
6 Preparing new functionals
7 Approximate exchange and correlation functionals
  7.1 The Local Spin Density Approximation (LSDA)
  7.2 Gradient Expansion Approximation (GEA)
  7.3 Generalized Gradient Approximation (GGA)
  7.4 meta-Generalized Gradient Approximation (meta-GGA)
  7.5 Hybrid functionals
  7.6 The Optimized Effective Potential method (OEP)
  7.7 Comparison between various approximate functionals
8 LAP correlation functional
9 Solving the Kohn-Sham equations
  9.1 The Kohn-Sham orbitals
  9.2 Coulomb potential
  9.3 Exchange-correlation potential
  9.4 Core potential
  9.5 Other choices and sources of error
  9.6 Functionality
10 Applications
  10.1 Ab initio molecular dynamics for an alanine dipeptide model
  10.2 Transition metal clusters: The ecstasy, and the agony...
  10.3 The conversion of acetylene to benzene on Fe clusters
11 Conclusions
NASA Technical Reports Server (NTRS)
Smalley, L. L.
1975-01-01
The coordinate independence of gravitational radiation and the parameterized post-Newtonian approximation from which it is extended are described. The general consistency of the field equations with Bianchi identities, gauge conditions, and the Newtonian limit of the perfect fluid equations of hydrodynamics are studied. A technique of modification is indicated for application to vector-metric or double metric theories, as well as to scalar-tensor theories.
Phase Retrieval for Radio Telescope and Antenna Control
NASA Technical Reports Server (NTRS)
Dean, Bruce
2011-01-01
Phase retrieval is a general term used in optics for the estimation of optical imperfections, or "aberrations." The purpose of this innovation is to apply phase retrieval to radio telescope and antenna control in the millimeter-wave band. Unlike earlier techniques, the present approach approximates the incoherent subtraction process as a coherent propagation; this approximation reduces noise in the data and allows a straightforward application of conventional iterative-transform phase-retrieval techniques to radio telescope and antenna control. Thus, for systems utilizing both positive and negative polarity feeds, surface and alignment errors can both be assessed without the use of additional hardware or laser metrology. Knowledge of the antenna surface profile then allows errors to be corrected at a given surface temperature and observing angle. Beyond imperfections of the antenna surface figure, the misalignment of multiple antennas operating in unison can degrade the signal-to-noise ratio of the received or broadcast signals, so the technique also applies to the alignment of antenna array configurations.
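Iterative-transform phase retrieval of the Gerchberg-Saxton type alternates between the aperture and focal planes, imposing the measured amplitude in each. A generic optical sketch (synthetic square pupil and random phase screen as assumptions; not the antenna-control code):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
aperture = np.zeros((n, n))
aperture[16:48, 16:48] = 1.0                     # assumed square pupil
true_phase = 0.5 * rng.normal(size=(n, n)) * aperture
field = aperture * np.exp(1j * true_phase)
focal_amp = np.abs(np.fft.fft2(field))           # "measured" focal amplitude

def fourier_err(g):
    """Relative mismatch between |FFT(g)| and the measured amplitude."""
    return (np.linalg.norm(np.abs(np.fft.fft2(g)) - focal_amp)
            / np.linalg.norm(focal_amp))

g = aperture * np.exp(1j * rng.normal(size=(n, n)))  # random initial phase
err0 = fourier_err(g)
for _ in range(200):
    G = np.fft.fft2(g)
    G = focal_amp * np.exp(1j * np.angle(G))     # impose focal-plane amplitude
    g = np.fft.ifft2(G)
    g = aperture * np.exp(1j * np.angle(g))      # impose aperture amplitude/support
err = fourier_err(g)
print(err0, err)  # the mismatch does not increase (error-reduction property)
```

The recovered phase np.angle(g) on the support plays the role of the antenna surface error map in the application described above.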
Calculation of light delay for coupled microrings by FDTD technique and Padé approximation.
Huang, Yong-Zhen; Yang, Yue-De
2009-11-01
The Padé approximation with Baker's algorithm is compared with the least-squares Prony method and the generalized pencil-of-functions (GPOF) method for calculating mode frequencies and mode Q factors for coupled optical microdisks by FDTD technique. Comparisons of intensity spectra and the corresponding mode frequencies and Q factors show that the Padé approximation can yield more stable results than the Prony and the GPOF methods, especially the intensity spectrum. The results of the Prony method and the GPOF method are greatly influenced by the selected number of resonant modes, which need to be optimized during the data processing, in addition to the length of the time response signal. Furthermore, the Padé approximation is applied to calculate light delay for embedded microring resonators from complex transmission spectra obtained by the Padé approximation from a FDTD output. The Prony and the GPOF methods cannot be applied to calculate the transmission spectra, because the transmission signal obtained by the FDTD simulation cannot be expressed as a sum of damped complex exponentials.
Fuzzy rationality and parameter elicitation in decision analysis
NASA Astrophysics Data System (ADS)
Nikolova, Natalia D.; Tenekedjiev, Kiril I.
2010-07-01
It is widely recognised by decision analysts that real decision-makers always make estimates in an interval form. An overview is presented of techniques for finding an optimal alternative among alternatives described by imprecise and interval probabilities. Scalarisation methods are outlined as the most appropriate. A proper continuation of such techniques is fuzzy rational (FR) decision analysis. A detailed account is given of the elicitation process as influenced by fuzzy rationality. The interval character of probabilities leads to the introduction of ribbon functions, whose general form and special cases are compared with p-boxes. As demonstrated, the approximation of utilities in FR decision analysis does not depend on the probabilities, but the approximation of probabilities does depend on preferences.
Numerical approximations for fractional diffusion equations via a Chebyshev spectral-tau method
NASA Astrophysics Data System (ADS)
Doha, Eid H.; Bhrawy, Ali H.; Ezz-Eldien, Samer S.
2013-10-01
In this paper, a class of fractional diffusion equations with variable coefficients is considered. An accurate and efficient spectral tau technique for solving the fractional diffusion equations numerically is proposed. This method is based upon Chebyshev tau approximation together with the Chebyshev operational matrix of Caputo fractional differentiation. This approach has the advantage of reducing the problem to the solution of a system of algebraic equations, which may then be solved by any standard numerical technique. We apply this general method to solve four specific examples. In each of the examples considered, the numerical results show that the proposed method is of high accuracy and is efficient for solving the time-dependent fractional diffusion equations.
New operational matrices for solving fractional differential equations on the half-line.
Bhrawy, Ali H; Taha, Taha M; Alzahrani, Ebraheem O; Alzahrani, Ebrahim O; Baleanu, Dumitru; Alzahrani, Abdulrahim A
2015-01-01
In this paper, the fractional-order generalized Laguerre operational matrices (FGLOM) of fractional derivatives and fractional integration are derived. These operational matrices are used together with spectral tau method for solving linear fractional differential equations (FDEs) of order ν (0 < ν < 1) on the half line. An upper bound of the absolute errors is obtained for the approximate and exact solutions. Fractional-order generalized Laguerre pseudo-spectral approximation is investigated for solving nonlinear initial value problem of fractional order ν. The extension of the fractional-order generalized Laguerre pseudo-spectral method is given to solve systems of FDEs. We present the advantages of using the spectral schemes based on fractional-order generalized Laguerre functions and compare them with other methods. Several numerical examples are implemented for FDEs and systems of FDEs including linear and nonlinear terms. We demonstrate the high accuracy and the efficiency of the proposed techniques.
Applications of Laplace transform methods to airfoil motion and stability calculations
NASA Technical Reports Server (NTRS)
Edwards, J. W.
1979-01-01
This paper reviews the development of generalized unsteady aerodynamic theory and presents a derivation of the generalized Possio integral equation. Numerical calculations resolve questions concerning subsonic indicial lift functions and demonstrate the generation of Kutta waves at high values of reduced frequency, subsonic Mach number, or both. The use of rational function approximations of unsteady aerodynamic loads in aeroelastic stability calculations is reviewed, and a reformulation of the matrix Pade approximation technique is given. Numerical examples of flutter boundary calculations for a wing which is to be flight tested are given. Finally, a simplified aerodynamic model of transonic flow is used to study the stability of an airfoil exposed to supersonic and subsonic flow regions.
Analytical approximation of a distorted reflector surface defined by a discrete set of points
NASA Technical Reports Server (NTRS)
Acosta, Roberto J.; Zaman, Afroz A.
1988-01-01
Reflector antennas on Earth-orbiting spacecraft generally cannot be described analytically. The reflector surface is subjected to large temperature fluctuations and gradients, and is thus warped from its true geometrical shape. Aside from distortion by thermal stresses, reflector surfaces are often purposely shaped to minimize phase aberrations and scanning losses. To analyze distorted reflector antennas defined by discrete surface points, a numerical technique must be applied to compute an interpolatory surface passing through a grid of discrete points. In this paper, the distorted reflector surface points are approximated by two analytical components: an undistorted surface component and a surface error component. The undistorted surface component is a best-fit paraboloid polynomial for the given set of points, and the surface error component is a Fourier series expansion of the deviation of the actual surface points from the best-fit paraboloid. By applying the numerical technique to approximate the surface normals of the distorted reflector surface, the induced surface current can be obtained using the physical optics technique. These surface currents are integrated to find the far-field radiation pattern.
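The first component of such a decomposition, the best-fit paraboloid, is a linear least-squares problem; the residual is what would then be expanded in a Fourier series. A sketch on synthetic surface data (the distortion term and sampling below are illustrative assumptions, not the paper's data):

```python
import numpy as np

# Fit z = a x^2 + b y^2 + c x y + d x + e y + f to sampled surface points;
# the residual is the "surface error component" left for a Fourier expansion.
rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, 400)
y = rng.uniform(-1, 1, 400)
z = 0.25 * (x**2 + y**2) + 0.01 * np.sin(6 * x)   # paraboloid + distortion

M = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(M, z, rcond=None)      # least-squares paraboloid
residual = z - M @ coef                           # deviation from best fit

print(coef[:2])                # ~[0.25, 0.25]: recovered curvature terms
print(np.abs(residual).max())  # small: the remaining distortion
```

Keeping the large smooth part in the paraboloid means the Fourier expansion only has to represent the small residual, which keeps the series short.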
NASA Astrophysics Data System (ADS)
Chen, Shuhong; Tan, Zhong
2007-11-01
In this paper, we consider nonlinear elliptic systems under a controllable growth condition. We use a new method introduced by Duzaar and Grotowski for proving partial regularity of weak solutions, based on a generalization of the technique of harmonic approximation. We extend previous partial regularity results under the natural growth condition to the case of the controllable growth condition, and directly establish the optimal Hölder exponent for the derivative of a weak solution.
Applying the Zel'dovich approximation to general relativity
NASA Astrophysics Data System (ADS)
Croudace, K. M.; Parry, J.; Salopek, D. S.; Stewart, J. M.
1994-03-01
Starting from general relativity, we give a systematic derivation of the Zel'dovich approximation describing the nonlinear evolution of collisionless dust. We begin by evolving dust along world lines, and we demonstrate that the Szekeres line element is an exact but apparently unstable solution of the evolution equations describing pancake collapse. Next, we solve the Einstein field equations by employing Hamilton-Jacobi techniques and a spatial gradient expansion. We give a prescription for evolving a primordial or 'seed' metric up to the formation of pancakes, and demonstrate its validity by rederiving the Szekeres solution approximately at third order and exactly at fifth order in spatial gradients. Finally we show that the range of validity of the expansion can be improved quite significantly if one notes that the 3-metric must have nonnegative eigenvalues. With this improvement the exact Szekeres solution is obtained after only one iteration.
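For reference, the standard Newtonian statement of the Zel'dovich approximation that the paper rederives relativistically can be written as (one common sign convention; the displacement potential and growth factor are the usual definitions):

```latex
% Particles move ballistically along their initial displacement directions,
% scaled by the linear growth factor D(t):
\mathbf{x}(\mathbf{q},t) = \mathbf{q} - D(t)\,\nabla_{\mathbf{q}}\Phi(\mathbf{q}),
\qquad
\rho(\mathbf{x},t) =
  \frac{\bar{\rho}(t)}{\det\!\bigl[\delta_{ij} - D(t)\,\partial_i\partial_j\Phi\bigr]} .
```

Pancake collapse corresponds to the vanishing of the determinant, i.e. D(t) times the largest eigenvalue of the deformation tensor reaching unity, which is the event the spatial-gradient expansion above must track.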
Wavelet Algorithms for Illumination Computations
NASA Astrophysics Data System (ADS)
Schroder, Peter
One of the core problems of computer graphics is the computation of the equilibrium distribution of light in a scene. This distribution is given as the solution to a Fredholm integral equation of the second kind involving an integral over all surfaces in the scene. In the general case such solutions can only be numerically approximated, and are generally costly to compute, due to the geometric complexity of typical computer graphics scenes. For this computation both Monte Carlo and finite element techniques (or hybrid approaches) are typically used. A simplified version of the illumination problem is known as radiosity, which assumes that all surfaces are diffuse reflectors. For this case hierarchical techniques, first introduced by Hanrahan et al. (32), have recently gained prominence. The hierarchical approaches lead to an asymptotic improvement when only finite precision is required. The resulting algorithms have cost proportional to O(k^2 + n) versus the usual O(n^2) (k is the number of input surfaces, n the number of finite elements into which the input surfaces are meshed). Similarly a hierarchical technique has been introduced for the more general radiance problem (which allows glossy reflectors) by Aupperle et al. (6). In this dissertation we show the equivalence of these hierarchical techniques to the use of a Haar wavelet basis in a general Galerkin framework. By so doing, we come to a deeper understanding of the properties of the numerical approximations used and are able to extend the hierarchical techniques to higher orders. In particular, we show the correspondence of the geometric arguments underlying hierarchical methods to the theory of Calderon-Zygmund operators and their sparse realization in wavelet bases. The resulting wavelet algorithms for radiosity and radiance are analyzed and numerical results achieved with our implementation are reported. We find that the resulting algorithms achieve smaller and smoother errors at equivalent work.
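The equivalence between hierarchical methods and a Haar basis can be illustrated with a toy transform. The sketch below (an illustrative kernel, not a real form-factor matrix) applies an orthonormal 2D Haar transform to a smooth n x n kernel and counts how few coefficients remain significant, which is the sparsity that hierarchical radiosity exploits.

```python
import math

# Illustrative smooth "transport kernel": entries decay away from the
# diagonal, loosely mimicking a form-factor matrix between patches.
n = 32
K = [[1.0 / (1.0 + abs(i - j)) for j in range(n)] for i in range(n)]

def haar_rows(M):
    """Full orthonormal 1D Haar transform applied to each row."""
    out = [row[:] for row in M]
    for row in out:
        m = len(row)
        while m > 1:
            half = m // 2
            tmp = row[:m]
            for k in range(half):
                row[k] = (tmp[2 * k] + tmp[2 * k + 1]) / math.sqrt(2)
                row[half + k] = (tmp[2 * k] - tmp[2 * k + 1]) / math.sqrt(2)
            m = half
    return out

def transpose(M):
    return [list(col) for col in zip(*M)]

# 2D Haar transform: rows, then columns.
W = transpose(haar_rows(transpose(haar_rows(K))))

flat = [abs(v) for row in W for v in row]
kept = sum(1 for v in flat if v > 1e-3 * max(flat))
print(kept, n * n)  # significant coefficients vs. total entries
```

The transform is orthonormal, so the Frobenius norm of the kernel is preserved exactly while most of the energy concentrates in a small fraction of the coefficients.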
75 FR 18246 - Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-09
... be issued in connection with employee benefit plans. We estimate that Form S-8 takes approximately 24... techniques or other forms of information technology. Consideration will be given to comments and suggestions... Boucher, Director/ CIO, Securities and Exchange Commission, C/O Shirley Martinson, 6432 General Green Way...
Sanz, Luis; Alonso, Juan Antonio
2017-12-01
In this work we develop approximate aggregation techniques in the context of slow-fast linear population models governed by stochastic differential equations, and apply the results to the treatment of populations with spatial heterogeneity. Approximate aggregation techniques allow one to replace a complex system involving many coupled variables, in which there are processes with different time scales, by a simpler reduced model with fewer 'global' variables, in such a way that the dynamics of the former can be approximated by that of the latter. In our model we consider a linear fast deterministic process together with a linear slow process in which the parameters are affected by additive noise, and give conditions for the solutions corresponding to positive initial conditions to remain positive for all times. By letting the fast process reach equilibrium we build a reduced system with fewer variables, and provide results relating the asymptotic behaviour of the first- and second-order moments of the population vector for the original and the reduced systems. The general technique is illustrated by analysing a multiregional stochastic system in which dispersal is deterministic and the growth rate of the population in each patch is affected by additive noise.
Approximation methods for stochastic petri nets
NASA Technical Reports Server (NTRS)
Jungnitz, Hauke Joerg
1992-01-01
Stochastic marked graphs are a concurrent, decision-free formalism provided with a powerful synchronization mechanism generalizing conventional fork-join queueing networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and macroplace/macrotransition nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT-nets is much larger than that of marked graphs; e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively, leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method, and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better. Delay equivalence often fails to converge, while flow equivalent aggregation can lead to potentially bad results if a strong dependence of the mean completion time on the interarrival process exists.
A-posteriori error estimation for the finite point method with applications to compressible flow
NASA Astrophysics Data System (ADS)
Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio
2017-08-01
An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1988-01-01
An abstract approximation and convergence theory for the closed-loop solution of discrete-time linear-quadratic regulator problems for parabolic systems with unbounded input is developed. Under relatively mild stabilizability and detectability assumptions, functional analytic, operator techniques are used to demonstrate the norm convergence of Galerkin-based approximations to the optimal feedback control gains. The application of the general theory to a class of abstract boundary control systems is considered. Two examples, one involving the Neumann boundary control of a one-dimensional heat equation, and the other, the vibration control of a cantilevered viscoelastic beam via shear input at the free end, are discussed.
A fast efficient implicit scheme for the gasdynamic equations using a matrix reduction technique
NASA Technical Reports Server (NTRS)
Barth, T. J.; Steger, J. L.
1985-01-01
An efficient implicit finite-difference algorithm for the gasdynamic equations utilizing matrix reduction techniques is presented. A significant reduction in arithmetic operations is achieved without loss of the stability characteristics or generality found in the Beam and Warming approximate factorization algorithm. Steady-state solutions to the conservative Euler equations in generalized coordinates are obtained for transonic flows and used to show that the method offers computational advantages over the conventional Beam and Warming scheme. Existing Beam and Warming codes can be retrofitted with minimal effort. The theoretical extension of the matrix reduction technique to the full Navier-Stokes equations in Cartesian coordinates is presented in detail. Linear stability, using a Fourier stability analysis, is demonstrated and discussed for the one-dimensional Euler equations.
Anandakrishnan, Ramu; Scogland, Tom R. W.; Fenley, Andrew T.; Gordon, John C.; Feng, Wu-chun; Onufriev, Alexey V.
2010-01-01
Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multiscale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. PMID:20452792
Linear Approximation to Optimal Control Allocation for Rocket Nozzles with Elliptical Constraints
NASA Technical Reports Server (NTRS)
Orr, Jeb S.; Wall, John W.
2011-01-01
In this paper we present a straightforward technique for assessing and realizing the maximum control moment effectiveness for a launch vehicle with multiple constrained rocket nozzles, where elliptical deflection limits in gimbal axes are expressed as an ensemble of independent quadratic constraints. A direct method of determining an approximating ellipsoid that inscribes the set of attainable angular accelerations is derived. In the case of a parameterized linear generalized inverse, the geometry of the attainable set is computationally expensive to obtain but can be approximated to a high degree of accuracy with the proposed method. A linear inverse can then be optimized to maximize the volume of the true attainable set by maximizing the volume of the approximating ellipsoid. The use of a linear inverse does not preclude the use of linear methods for stability analysis and control design, preferred in practice for assessing the stability characteristics of the inertial and servoelastic coupling appearing in large boosters. The present techniques are demonstrated via application to the control allocation scheme for a concept heavy-lift launch vehicle.
The scale invariant generator technique for quantifying anisotropic scale invariance
NASA Astrophysics Data System (ADS)
Lewis, G. M.; Lovejoy, S.; Schertzer, D.; Pecknold, S.
1999-11-01
Scale invariance is rapidly becoming a new paradigm for geophysics. However, little attention has been paid to the anisotropy that is invariably present in geophysical fields in the form of differential stratification and rotation, texture and morphology. In order to account for scaling anisotropy, the formalism of generalized scale invariance (GSI) was developed. Until now there has existed only a single fairly ad hoc GSI analysis technique valid for studying differential rotation. In this paper, we use a two-dimensional representation of the linear approximation to generalized scale invariance, to obtain a much improved technique for quantifying anisotropic scale invariance called the scale invariant generator technique (SIG). The accuracy of the technique is tested using anisotropic multifractal simulations and error estimates are provided for the geophysically relevant range of parameters. It is found that the technique yields reasonable estimates for simulations with a diversity of anisotropic and statistical characteristics. The scale invariant generator technique can profitably be applied to the scale invariant study of vertical/horizontal and space/time cross-sections of geophysical fields as well as to the study of the texture/morphology of fields.
Coherent states, quantum gravity, and the Born-Oppenheimer approximation. I. General considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stottmeister, Alexander, E-mail: alexander.stottmeister@gravity.fau.de; Thiemann, Thomas, E-mail: thomas.thiemann@gravity.fau.de
2016-06-15
This article, as the first of three, aims at establishing the (time-dependent) Born-Oppenheimer approximation, in the sense of space adiabatic perturbation theory, for quantum systems constructed by techniques of the loop quantum gravity framework, especially the canonical formulation of the latter. The analysis presented here fits into a rather general framework and offers a solution to the problem of applying the usual Born-Oppenheimer ansatz for molecular (or structurally analogous) systems to more general quantum systems (e.g., spin-orbit models) by means of space adiabatic perturbation theory. The proposed solution is applied to a simple, finite dimensional model of interacting spin systems, which serves as a non-trivial, minimal model of the aforesaid problem. Furthermore, it is explained how the content of this article and its companion affect the possible extraction of quantum field theory on curved spacetime from loop quantum gravity (including matter fields).
Taylor, K.R.; James, R.W.; Helinsky, B.M.
1986-01-01
Two traveltime and dispersion measurements using rhodamine dye were conducted on a 178-mile reach of the Shenandoah River between Waynesboro, Virginia, and Harpers Ferry, West Virginia. The flows during the two measurements were at approximately the 85% and 45% flow durations. The two sets of data were used to develop a generalized procedure for predicting traveltimes and downstream concentrations resulting from spillage of water-soluble substances at any point along the river reach studied. The procedure can be used to calculate traveltime and concentration data for almost any spillage that occurs during relatively steady flow between a 40% and 95% flow duration. Based on an analogy between the general shape of a time-concentration curve and a scalene triangle, the procedure can be used on long river reaches to approximate the conservative time-concentration curve for instantaneous spills of contaminants. The triangular approximation technique can be combined with a superposition technique to predict the approximate, conservative time-concentration curve for constant-rate and variable-rate injections of contaminants. The procedure was applied to a hypothetical situation in which 5,000 pounds of contaminant is spilled instantaneously at Island Ford, Virginia. The times required for the leading edge, the peak concentration, and the trailing edge of the contaminant cloud to reach the water intake at Front Royal, Virginia (85 miles downstream), are 234, 280, and 340 hours, respectively, for a flow at an 80% flow duration. The conservative peak concentration would be approximately 940 micrograms/L at Front Royal. The procedures developed cannot be depended upon when a significant hydraulic wave or other unsteady flow condition exists in the flow system or when the spilled material floats or is immiscible in water. (Author's abstract)
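The triangular approximation and superposition described above can be sketched directly. The snippet uses the abstract's Front Royal figures (leading edge 234 h, peak 280 h, trailing edge 340 h, peak 940 µg/L) for a single instantaneous spill, then illustrates superposition for a hypothetical 3-hour constant-rate injection; the sub-spill split is an assumption for illustration, not the report's procedure verbatim.

```python
def triangle(t, t_lead, t_peak, t_trail, c_peak):
    """Concentration at time t (hours) for one instantaneous spill,
    modeled as a scalene triangle in the time-concentration plane."""
    if t <= t_lead or t >= t_trail:
        return 0.0
    if t <= t_peak:
        return c_peak * (t - t_lead) / (t_peak - t_lead)   # rising limb
    return c_peak * (t_trail - t) / (t_trail - t_peak)     # recession limb

def superpose(t, spill_times, t_lead, t_peak, t_trail, c_peak):
    """Constant- or variable-rate injection as a sum of shifted triangles."""
    return sum(triangle(t - t0, t_lead, t_peak, t_trail, c_peak)
               for t0 in spill_times)

# Single instantaneous spill: peak concentration occurs at t_peak.
c_single = triangle(280.0, 234.0, 280.0, 340.0, 940.0)

# Hypothetical 3-hour constant-rate injection, split into hourly
# sub-spills each carrying a third of the mass (illustrative split).
c_multi = superpose(281.0, [0.0, 1.0, 2.0],
                    234.0, 280.0, 340.0, 940.0 / 3.0)
print(c_single, round(c_multi, 1))
```

As expected, spreading the same mass over a finite injection period slightly lowers and broadens the predicted peak.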
Front dynamics and entanglement in the XXZ chain with a gradient
NASA Astrophysics Data System (ADS)
Eisler, Viktor; Bauernfeind, Daniel
2017-11-01
We consider the XXZ spin chain with a magnetic field gradient and study the profiles of the magnetization as well as the entanglement entropy. For a slowly varying field, it is shown that, by means of a local density approximation, the ground-state magnetization profile can be obtained with standard Bethe ansatz techniques. Furthermore, it is argued that the low-energy description of the theory is given by a Luttinger liquid with slowly varying parameters. This allows us to obtain a very good approximation of the entanglement profile using a recently introduced technique of conformal field theory in curved spacetime. Finally, the front dynamics is also studied after the gradient field has been switched off, following arguments of generalized hydrodynamics for integrable systems. While for the XX chain the hydrodynamic solution can be found analytically, the XXZ case appears to be more complicated and the magnetization profiles are recovered only around the edge of the front via an approximate numerical solution.
Johnson, Jason K.; Oyen, Diane Adele; Chertkov, Michael; ...
2016-12-01
Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering. However, exact inference is intractable in general graphical models, which suggests the problem of seeking the best approximation to a collection of random variables within some tractable family of graphical models. In this paper, we focus on the class of planar Ising models, for which exact inference is tractable using techniques of statistical physics. Based on these techniques and recent methods for planarity testing and planar embedding, we propose a greedy algorithm for learning the best planar Ising model to approximate an arbitrary collection of binary random variables (possibly from sample data). Given the set of all pairwise correlations among variables, we select a planar graph and optimal planar Ising model defined on this graph to best approximate that set of correlations. Finally, we demonstrate our method in simulations and for two applications: modeling senate voting records and identifying geo-chemical depth trends from Mars rover data.
A unified framework for approximation in inverse problems for distributed parameter systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.
1988-01-01
A theoretical framework is presented that can be used to treat approximation techniques for very general classes of parameter estimation problems involving distributed systems that are either first or second order in time. Using the approach developed, one can obtain both convergence and stability (continuous dependence of parameter estimates with respect to the observations) under very weak regularity and compactness assumptions on the set of admissible parameters. This unified theory can be used for many problems found in the recent literature and in many cases offers significant improvements to existing results.
Data-Driven Model Reduction and Transfer Operator Approximation
NASA Astrophysics Data System (ADS)
Klus, Stefan; Nüske, Feliks; Koltai, Péter; Wu, Hao; Kevrekidis, Ioannis; Schütte, Christof; Noé, Frank
2018-06-01
In this review paper, we will present different data-driven dimension reduction techniques for dynamical systems that are based on transfer operator theory as well as methods to approximate transfer operators and their eigenvalues, eigenfunctions, and eigenmodes. The goal is to point out similarities and differences between methods developed independently by the dynamical systems, fluid dynamics, and molecular dynamics communities such as time-lagged independent component analysis, dynamic mode decomposition, and their respective generalizations. As a result, extensions and best practices developed for one particular method can be carried over to other related methods.
Using Logistic Approximations of Marginal Trace Lines to Develop Short Assessments
ERIC Educational Resources Information Center
Stucky, Brian D.; Thissen, David; Edelen, Maria Orlando
2013-01-01
Test developers often need to create unidimensional scales from multidimensional data. For item analysis, "marginal trace lines" capture the relation with the general dimension while accounting for nuisance dimensions and may prove to be a useful technique for creating short-form tests. This article describes the computations needed to obtain…
Smooth function approximation using neural networks.
Ferrari, Silvia; Stengel, Robert F
2005-01-01
An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly gradient information. The training set is associated with the network's adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.
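One simple instance of the "linear weight equations" idea can be sketched as follows. This is an illustrative toy, not the paper's four algorithms: with the input-side weights of a one-hidden-layer sigmoidal network held fixed, exact matching of p input-output pairs reduces to a p x p linear system in the output weights, solvable by plain Gaussian elimination.

```python
import math
import random

random.seed(0)
xs = [0.0, 0.5, 1.0, 1.5]             # training inputs (illustrative)
ys = [math.sin(x) for x in xs]        # target outputs

p = len(xs)
w = [random.uniform(-2, 2) for _ in range(p)]  # fixed input weights
b = [random.uniform(-2, 2) for _ in range(p)]  # fixed biases

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hidden-layer matrix S[i][j] = sigmoid(w[j]*x_i + b[j]); exact matching
# then requires solving the linear system S v = y for output weights v.
S = [[sigmoid(w[j] * x + b[j]) for j in range(p)] for x in xs]

def solve(A, y):
    """Gaussian elimination with partial pivoting (works on a copy)."""
    m = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for k in range(m):
        piv = max(range(k, m), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, m):
            f = M[r][k] / M[k][k]
            for c in range(k, m + 1):
                M[r][c] -= f * M[k][c]
    v = [0.0] * m
    for k in range(m - 1, -1, -1):
        v[k] = (M[k][m] - sum(M[k][c] * v[c]
                              for c in range(k + 1, m))) / M[k][k]
    return v

v = solve(S, ys)                      # output weights

def net(x):
    return sum(v[j] * sigmoid(w[j] * x + b[j]) for j in range(p))

residual = max(abs(net(x) - y) for x, y in zip(xs, ys))
print(residual)
```

The training set is matched to machine precision without any iterative optimization, which is the essence of treating the weight equations as linear systems.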
Radiative transfer in dusty nebulae. III - The effects of dust albedo
NASA Technical Reports Server (NTRS)
Petrosian, V.; Dana, R. A.
1980-01-01
The effects of the albedo of internal dust on quantities such as the ionization structure and the dust grain temperature were studied using the quasi-diffusion method with an iterative technique for solving the radiative transfer equations. It was found that for zero albedo the generalized on-the-spot approximation solution is adequate for most astrophysical applications; for nonzero albedo, the Eddington approximation is more accurate. The albedo increases the average energy of the diffuse photons, increasing the ionization level of hydrogen and heavy elements when the Eddington approximation is applied; the dust thermal gradient is reduced, so that the infrared spectrum approaches a blackbody spectrum with increasing albedo.
Anandakrishnan, Ramu; Scogland, Tom R W; Fenley, Andrew T; Gordon, John C; Feng, Wu-chun; Onufriev, Alexey V
2010-06-01
Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed-up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Asymptotic Poincare lemma and its applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ziolkowski, R.W.; Deschamps, G.A.
1984-05-01
An asymptotic version of Poincare's lemma is defined and solutions are obtained with the calculus of exterior differential forms. They are used to construct the asymptotic approximations of multidimensional oscillatory integrals whose forms are commonly encountered, for example, in electromagnetic problems. In particular, the boundary and stationary point evaluations of these integrals are considered. The former is applied to the Kirchhoff representation of a scalar field diffracted through an aperture and simply recovers the Maggi-Rubinowicz-Miyamoto-Wolf results. Asymptotic approximations in the presence of other (standard) critical points are also discussed. Techniques developed for the asymptotic Poincare lemma are used to generate a general representation of the Leray form. All of the (differential form) expressions presented are generalizations of known (vector calculus) results. 14 references, 4 figures.
Nguyen, Sy-Tuan; Vu, Mai-Ba; Vu, Minh-Ngoc; To, Quy-Dong
2018-02-01
Closed-form solutions for the effective rheological properties of a 2D viscoelastic drained porous medium made of a Generalized Maxwell viscoelastic matrix and pore inclusions are developed and applied to cortical bone. The in-plane (transverse) effective viscoelastic bulk and shear moduli of the Generalized Maxwell rheology of the homogenized medium are expressed as functions of the porosity and the viscoelastic properties of the solid phase. When deriving these functions, the classical inverse Laplace-Carson transformation technique is avoided, due to its complexity, by considering short- and long-term approximations. The approximated results are validated against exact solutions obtained from the inverse Laplace-Carson transform for a simple configuration for which the latter is available. An application to cortical bone, with the assumption of a circular pore in the transverse plane, shows that the proposed approximation fits the experimental data very well. Copyright © 2017 Elsevier Ltd. All rights reserved.
Enhanced Approximate Nearest Neighbor via Local Area Focused Search.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzales, Antonio; Blazier, Nicholas Paul
Approximate Nearest Neighbor (ANN) algorithms are increasingly important in machine learning, data mining, and image processing applications. There is a large family of space-partitioning ANN algorithms, such as randomized KD-trees, that work well in practice but are limited by an exponential increase in similarity comparisons required to optimize recall. Additionally, they only support a small set of similarity metrics. We present Local Area Focused Search (LAFS), a method that enhances the way queries are performed using an existing ANN index. Instead of a single query, LAFS performs a number of smaller (fewer similarity comparisons) queries and focuses on a local neighborhood which is refined as candidates are identified. We show that our technique improves performance on several well known datasets and is easily extended to general similarity metrics using kernel projection techniques.
Application of Weibull analysis to SSME hardware
NASA Technical Reports Server (NTRS)
Gray, L. A. B.
1986-01-01
It has generally been documented that the wear of engine parts produces a failure distribution which can be approximated by the function developed by Weibull. The purpose here is to examine to what extent the Weibull distribution approximates failure data for designated engine parts of the Space Shuttle Main Engine (SSME). The current testing certification requirements will be examined in order to establish confidence levels. The failure history of SSME parts and assemblies (turbine blades, the main combustion chamber, and high-pressure fuel pump first-stage impellers) whose usage is limited by time or starts will be examined using updated Weibull techniques. The investigator will also attempt to predict failure trends by using Weibull techniques for SSME parts (turbine temperature sensors, chamber pressure transducers, actuators, and controllers) that are not severely limited by time or starts.
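A standard way to fit a Weibull distribution to a small set of failure times is median-rank regression: plotting ln(-ln(1 - F_i)) against ln(t_i) gives a line whose slope is the shape parameter beta and whose intercept yields the scale parameter eta. The sketch below uses made-up failure times, not SSME data.

```python
import math

# Illustrative ordered failure times (hours or starts); NOT SSME data.
times = sorted([55.0, 78.0, 92.0, 110.0, 130.0, 155.0, 170.0, 200.0])
n = len(times)

# Bernard's approximation to the median rank of the i-th ordered failure.
F = [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]

# Linearize the Weibull CDF: ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta).
X = [math.log(t) for t in times]
Y = [math.log(-math.log(1.0 - f)) for f in F]

# Ordinary least-squares slope (beta) and derived scale (eta).
mx, my = sum(X) / n, sum(Y) / n
beta = (sum((x - mx) * (y - my) for x, y in zip(X, Y))
        / sum((x - mx) ** 2 for x in X))
eta = math.exp(mx - my / beta)
print(round(beta, 2), round(eta, 1))
```

A shape parameter beta greater than 1, as here, indicates wear-out behavior, the regime the abstract associates with time- or start-limited hardware.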
GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING
Liu, Hongcheng; Yao, Tao; Li, Runze
2015-01-01
This paper is concerned with solving nonconvex learning problems with folded concave penalties. Although their global solutions entail desirable statistical properties, optimization techniques that guarantee global optimality in a general setting have been lacking. In this paper, we show that a class of nonconvex learning problems are equivalent to general quadratic programs. This equivalence facilitates the development of mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as the mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate that MIPGO significantly outperforms the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in the literature in terms of solution quality. PMID:27141126
NASA Astrophysics Data System (ADS)
Reber, E. E.; Foote, F. B.; Schellenbaum, R. L.; Bradley, R. G.
1981-07-01
The potential of the radiometric imaging technique to detect shielded nuclear materials and explosives carried covertly by personnel was investigated. This method of detecting contraband depends upon differences in the emissivity and reflectivity of the contraband relative to human tissue. Explosives, unlike metals and metal composites, generally have high emissivities and low reflectivities that closely approximate those of human tissue, making explosives difficult to detect. Samples of several common types of explosives (TNT, Detasheet, C4, and several types of water gels) were examined at the 1.4- and 3-mm wavelengths using active and passive radiometric techniques.
Countably QC-Approximating Posets
Mao, Xuxin; Xu, Luoshan
2014-01-01
As a generalization of countably C-approximating posets, the concept of countably QC-approximating posets is introduced. With the countably QC-approximating property, some characterizations of generalized completely distributive lattices and generalized countably approximating posets are given. The main results are as follows: (1) a complete lattice is generalized completely distributive if and only if it is countably QC-approximating and weakly generalized countably approximating; (2) a poset L having countably directed joins is generalized countably approximating if and only if the lattice σ c(L)op of all σ-Scott-closed subsets of L is weakly generalized countably approximating. PMID:25165730
Lay-Ekuakille, Aimé; Fabbiano, Laura; Vacca, Gaetano; Kitoko, Joël Kidiamboko; Kulapa, Patrice Bibala; Telesca, Vito
2018-06-04
Pipelines conveying fluids are considered strategic infrastructures to be protected and maintained. They generally serve for transportation of important fluids such as drinkable water, waste water, oil, gas, chemicals, etc. Monitoring and continuous testing, especially on-line, are necessary to assess the condition of pipelines. The paper presents findings related to a comparison between two spectral response algorithms based on the decimated signal diagonalization (DSD) and decimated Padé approximant (DPA) techniques, which allow one to process signals delivered by pressure sensors mounted on an experimental pipeline.
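The DPA technique above builds on the classical Padé construction. A minimal sketch of computing an [m/n] Padé approximant from Taylor coefficients by the standard linear-algebra route (a generic illustration, not the authors' DPA implementation):

```python
import numpy as np

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c[0..m+n].

    Returns numerator coefficients a (degree m) and denominator
    coefficients b (degree n, normalized so b[0] = 1)."""
    c = np.asarray(c, dtype=float)
    assert len(c) >= m + n + 1
    # Denominator from the linear "Pade equations":
    #   sum_{j=1..n} b_j c[m+k-j] = -c[m+k],  k = 1..n
    A = np.array([[c[m + k - j] if m + k - j >= 0 else 0.0
                   for j in range(1, n + 1)] for k in range(1, n + 1)])
    rhs = -c[m + 1:m + n + 1]
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # Numerator by truncated convolution of c with b.
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, n) + 1))
                  for i in range(m + 1)])
    return a, b
```

For example, the [1/1] approximant of exp(x) from c = [1, 1, 1/2] is (1 + x/2)/(1 - x/2).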
A structure preserving Lanczos algorithm for computing the optical absorption spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, Meiyue; Jornada, Felipe H. da; Lin, Lin
2016-11-16
We present a new structure preserving Lanczos algorithm for approximating the optical absorption spectrum in the context of solving the full Bethe-Salpeter equation without the Tamm-Dancoff approximation. The new algorithm is based on a structure preserving Lanczos procedure which exploits the special block structure of Bethe-Salpeter Hamiltonian matrices. A recently developed technique of generalized averaged Gauss quadrature is incorporated to accelerate the convergence. We also establish the connection between our structure preserving Lanczos procedure and several existing Lanczos procedures developed in different contexts. Numerical examples are presented to demonstrate the effectiveness of our Lanczos algorithm.
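The structure preserving variant is specialized to the Bethe-Salpeter block structure, but the underlying Lanczos procedure can be sketched in its plain symmetric form (a generic sketch with full reorthogonalization, not the authors' algorithm):

```python
import numpy as np

def lanczos(A, v0, k):
    """Plain symmetric Lanczos: build an orthonormal basis V and a
    tridiagonal T = V^T A V whose eigenvalues approximate those of A."""
    n = len(v0)
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        # Full reorthogonalization against all previous vectors.
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return V, T
```

Spectral quantities such as absorption spectra are then read off from the small tridiagonal T rather than the full matrix.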
Fine tuning classical and quantum molecular dynamics using a generalized Langevin equation
NASA Astrophysics Data System (ADS)
Rossi, Mariana; Kapil, Venkat; Ceriotti, Michele
2018-03-01
Generalized Langevin Equation (GLE) thermostats have been used very effectively as a tool to manipulate and optimize the sampling of thermodynamic ensembles and the associated static properties. Here we show that a similar, exquisite level of control can be achieved for the dynamical properties computed from thermostatted trajectories. We develop quantitative measures of the disturbance induced by the GLE on the Hamiltonian dynamics of a harmonic oscillator, and show that these analytical results accurately predict the behavior of strongly anharmonic systems. We also show that it is possible to correct, to a significant extent, the effects of the GLE term on the corresponding microcanonical dynamics, which puts the use of non-equilibrium Langevin dynamics to approximate quantum nuclear effects on more solid ground and could help improve the prediction of dynamical quantities from techniques that use a Langevin term to stabilize dynamics. Finally, we address the use of thermostats in the context of approximate path-integral-based models of quantum nuclear dynamics. We demonstrate that a custom-tailored GLE can alleviate some of the artifacts associated with these techniques, improving the quality of results for the modeling of vibrational dynamics of molecules, liquids, and solids.
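For context, the Markovian (white-noise) limit of a Langevin thermostat can be sketched with the standard BAOAB splitting for a harmonic oscillator; this is a generic illustration, not the colored-noise GLE machinery of the paper:

```python
import numpy as np

def baoab_step(q, p, dt, gamma, kT, m=1.0, omega=1.0, rng=None):
    """One BAOAB step of white-noise Langevin dynamics for a harmonic
    oscillator V(q) = 0.5*m*omega^2*q^2 (the simplest, Markovian limit
    of a GLE thermostat)."""
    rng = rng or np.random.default_rng()
    force = lambda x: -m * omega**2 * x
    p += 0.5 * dt * force(q)                  # B: half kick
    q += 0.5 * dt * p / m                     # A: half drift
    c1 = np.exp(-gamma * dt)                  # O: exact Ornstein-Uhlenbeck
    c2 = np.sqrt((1.0 - c1**2) * m * kT)
    p = c1 * p + c2 * rng.standard_normal()
    q += 0.5 * dt * p / m                     # A: half drift
    p += 0.5 * dt * force(q)                  # B: half kick
    return q, p
```

At kT = 0 the noise vanishes and the friction term simply drains energy from the oscillator, which makes the disturbance of the Hamiltonian dynamics easy to see.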
Analysis of biochemical phase shift oscillators by a harmonic balancing technique.
Rapp, P
1976-11-25
The use of harmonic balancing techniques for theoretically investigating a large class of biochemical phase shift oscillators is outlined, and the accuracy of this approximate technique for large dimension nonlinear chemical systems is considered. It is concluded that for the equations under study these techniques can be successfully employed both to find periodic solutions and to identify those cases which cannot oscillate. The technique is a general one, and it is possible to state a step by step procedure for its application. It has a substantial advantage in producing results which are immediately valid for arbitrary dimension. As the accuracy of the method increases with dimension, it complements classical small dimension methods. The results obtained by harmonic balancing analysis are compared with those obtained by studying the local stability properties of the singular points of the differential equation. A general theorem is derived which identifies those special cases where the results of first order harmonic balancing are identical to those of local stability analysis, and a necessary condition for this equivalence is derived. As a concrete example, the n-dimensional Goodwin oscillator is considered, where p, the Hill coefficient of the feedback metabolite, is equal to three or four. It is shown that for p = 3 or 4 and n less than or equal to 4 the approximation indicates that it is impossible to construct a set of physically permissible reaction constants such that the system possesses a periodic solution. However, for n greater than or equal to 5 it is always possible to find a large domain in the reaction constant space giving stable oscillations. A means of constructing such a parameter set is given. The results obtained here are compared with previously derived results for p = 1 and p = 2.
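The first-order harmonic balancing step can be illustrated on a simple stand-in system, the undamped Duffing oscillator x'' + x + eps*x^3 = 0, rather than the paper's biochemical equations: substitute x = A*cos(w*t) and balance the fundamental harmonic, which yields w^2 = 1 + (3/4)*eps*A^2.

```python
import numpy as np

def hb1_frequency(eps, A, n=400):
    """First-order harmonic balance for x'' + x + eps*x^3 = 0:
    project the residual of x = A*cos(theta) onto cos(theta) over one
    period and solve the fundamental balance for the frequency w."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x = A * np.cos(theta)
    # w^2 * A <cos^2> = <(x + eps*x^3) cos>, from balancing cos(w*t)
    num = np.mean((x + eps * x**3) * np.cos(theta))
    den = np.mean(A * np.cos(theta)**2)
    return np.sqrt(num / den)
```

The numerical projection reproduces the closed-form amplitude-frequency relation, mirroring how the paper's procedure yields results valid for arbitrary dimension.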
NASA Astrophysics Data System (ADS)
Chen, Gui-Qiang G.; Schrecker, Matthew R. I.
2018-04-01
We are concerned with globally defined entropy solutions to the Euler equations for compressible fluid flows in transonic nozzles with general cross-sectional areas. Such nozzles include the de Laval nozzles and other more general nozzles whose cross-sectional area functions are allowed at the nozzle ends to be either zero (closed ends) or infinity (unbounded ends). To achieve this, in this paper, we develop a vanishing viscosity method to construct globally defined approximate solutions and then establish essential uniform estimates in weighted L^p norms for the whole range of physical adiabatic exponents γ ∈ (1, ∞), so that the viscosity approximate solutions satisfy the general L^p compensated compactness framework. The viscosity method is designed to incorporate artificial viscosity terms with the natural Dirichlet boundary conditions to ensure the uniform estimates. Then such estimates lead to both the convergence of the approximate solutions and the existence theory of globally defined finite-energy entropy solutions to the Euler equations for transonic flows that may have different end-states in the class of nozzles with general cross-sectional areas for all γ ∈ (1, ∞). The approach and techniques developed here apply to other problems with similar difficulties. In particular, we successfully apply them to construct globally defined spherically symmetric entropy solutions to the Euler equations for all γ ∈ (1, ∞).
Field by field hybrid upwind splitting methods
NASA Technical Reports Server (NTRS)
Coquel, Frederic; Liou, Meng-Sing
1993-01-01
A new and general approach to upwind splitting is presented. The design principle combines the robustness of flux vector splitting schemes in the capture of nonlinear waves and the accuracy of some flux difference splitting (FDS) schemes in the resolution of linear waves. The new schemes are derived following a general hybridization technique applied directly at the basic level of the field by field decomposition involved in FDS methods. The schemes do not require a spatial switch tuned according to the local smoothness of the approximate solution.
NASA Technical Reports Server (NTRS)
Tiffany, S. H.; Adams, W. M., Jr.
1984-01-01
A technique which employs both linear and nonlinear methods in a multilevel optimization structure to best approximate generalized unsteady aerodynamic forces for arbitrary motion is described. Optimum selection of free parameters is made in a rational function approximation of the aerodynamic forces in the Laplace domain such that a best fit is obtained, in a least squares sense, to tabular data for purely oscillatory motion. The multilevel structure and the corresponding formulation of the objective models are presented which separate the reduction of the fit error into linear and nonlinear problems, thus enabling the use of linear methods where practical. Certain equality and inequality constraints that may be imposed are identified; a brief description of the nongradient, nonlinear optimizer which is used is given; and results which illustrate application of the method are presented.
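The linear subproblem of such a rational function approximation can be sketched as follows: with the nonlinear lag roots held fixed, the remaining coefficients follow from a real-valued least-squares fit to tabular data at sample reduced frequencies (an illustrative Roger-type form, not the program described in the abstract):

```python
import numpy as np

def rfa_fit(k_vals, Q, betas):
    """Linear least-squares step of a rational function approximation:
    with the nonlinear lag roots `betas` held fixed, fit
        Q(ik) ~ a0 + a1*(ik) + sum_j b_j * (ik)/(ik + beta_j)
    to tabular (complex) data Q at reduced frequencies k_vals."""
    s = 1j * np.asarray(k_vals, dtype=float)
    cols = [np.ones_like(s), s] + [s / (s + b) for b in betas]
    M = np.column_stack(cols)
    # Stack real and imaginary parts so the unknown coefficients stay real.
    A = np.vstack([M.real, M.imag])
    rhs = np.concatenate([np.asarray(Q).real, np.asarray(Q).imag])
    coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return coef
```

In the multilevel scheme the nonlinear optimizer adjusts the lag roots in an outer loop, calling a linear fit of this kind in the inner loop.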
Numerical Schemes for the Hamilton-Jacobi and Level Set Equations on Triangulated Domains
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Sethian, James A.
1997-01-01
Borrowing from techniques developed for conservation law equations, numerical schemes which discretize the Hamilton-Jacobi (H-J), level set, and Eikonal equations on triangulated domains are presented. The first scheme is a provably monotone discretization for certain forms of the H-J equations. Unfortunately, the basic scheme lacks proper Lipschitz continuity of the numerical Hamiltonian. By employing a virtual edge flipping technique, Lipschitz continuity of the numerical flux is restored on acute triangulations. Next, schemes are introduced and developed based on the weaker concept of positive coefficient approximations for homogeneous Hamiltonians. These schemes possess a discrete maximum principle on arbitrary triangulations and naturally exhibit proper Lipschitz continuity of the numerical Hamiltonian. Finally, a class of Petrov-Galerkin approximations are considered. These schemes are stabilized via a least-squares bilinear form. The Petrov-Galerkin schemes do not possess a discrete maximum principle but generalize to high order accuracy.
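The monotone upwind idea can be illustrated in its simplest setting: a first-order Godunov discretization of the Eikonal equation |grad u| = 1 on a uniform Cartesian grid with Gauss-Seidel sweeps (a structured-grid simplification; the schemes in the paper operate on triangulations):

```python
import numpy as np

def eikonal_sweep(n=101, h=0.02, sweeps=3):
    """First-order upwind (Godunov) solver for |grad u| = 1 with a point
    source at the grid center, using Gauss-Seidel sweeps in alternating
    directions (fast sweeping)."""
    u = np.full((n, n), 1e10)
    c = n // 2
    u[c, c] = 0.0
    orders = [(1, 1), (-1, 1), (1, -1), (-1, -1)]
    for _ in range(sweeps):
        for di, dj in orders:
            for i in range(n)[::di]:
                for j in range(n)[::dj]:
                    if i == c and j == c:
                        continue
                    a = min(u[max(i - 1, 0), j], u[min(i + 1, n - 1), j])
                    b = min(u[i, max(j - 1, 0)], u[i, min(j + 1, n - 1)])
                    if abs(a - b) >= h:
                        cand = min(a, b) + h       # one-sided update
                    else:                           # two-sided quadratic update
                        cand = 0.5 * (a + b + np.sqrt(2 * h * h - (a - b)**2))
                    u[i, j] = min(u[i, j], cand)
    return u
```

The result approximates the distance function from the source; the update is monotone, which is the property the triangulated schemes above generalize.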
Extrapolation techniques applied to matrix methods in neutron diffusion problems
NASA Technical Reports Server (NTRS)
Mccready, Robert R
1956-01-01
A general matrix method is developed for the solution of characteristic-value problems of the type arising in many physical applications. The scheme employed is essentially that of Gauss and Seidel with appropriate modifications needed to make it applicable to characteristic-value problems. An iterative procedure produces a sequence of estimates to the answer; and extrapolation techniques, based upon previous behavior of iterants, are utilized in speeding convergence. Theoretically sound limits are placed on the magnitude of the extrapolation that may be tolerated. This matrix method is applied to the problem of finding criticality and neutron fluxes in a nuclear reactor with control rods. The two-dimensional finite-difference approximation to the two-group neutron-diffusion equations is treated. Results for this example are indicated.
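The extrapolation idea can be sketched on a simpler characteristic-value problem: power iteration for a dominant eigenvalue with Aitken delta-squared acceleration of the eigenvalue estimates (an illustrative stand-in for the report's extrapolated Gauss-Seidel scheme):

```python
import numpy as np

def power_iteration_aitken(A, tol=1e-12, itmax=200):
    """Power iteration for the dominant eigenvalue, with Aitken
    delta-squared extrapolation of the Rayleigh-quotient sequence
    used both to accelerate and to test convergence."""
    x = np.ones(A.shape[0])
    lams = []
    for _ in range(itmax):
        y = A @ x
        lam = x @ y / (x @ x)           # Rayleigh quotient estimate
        lams.append(lam)
        x = y / np.linalg.norm(y)
        if len(lams) >= 3:
            l0, l1, l2 = lams[-3:]
            denom = l2 - 2.0 * l1 + l0
            if abs(denom) > 1e-30:
                acc = l2 - (l2 - l1)**2 / denom   # Aitken extrapolation
                if abs(acc - lam) < tol * max(1.0, abs(acc)):
                    return acc
    return lams[-1]
```

As in the report, the extrapolated value is built from the previous behavior of the iterants; here the extrapolation magnitude is implicitly limited by the convergence test.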
Utilization of advanced calibration techniques in stochastic rock fall analysis of quarry slopes
NASA Astrophysics Data System (ADS)
Preh, Alexander; Ahmadabadi, Morteza; Kolenprat, Bernd
2016-04-01
In order to study rock fall dynamics, a research project was conducted by the Vienna University of Technology and the Austrian Central Labour Inspectorate (Federal Ministry of Labour, Social Affairs and Consumer Protection). A part of this project included 277 full-scale drop tests at three different quarries in Austria and recording key parameters of the rock fall trajectories. The tests involved a total of 277 boulders ranging from 0.18 to 1.8 m in diameter and from 0.009 to 8.1 Mg in mass. The geology of these sites included strong rock belonging to igneous, metamorphic and volcanic types. In this paper the results of the tests are used for calibration and validation of a new stochastic computer model. It is demonstrated that the error of the model (i.e. the difference between observed and simulated results) has a lognormal distribution. Selecting two parameters, advanced calibration techniques, including the Markov chain Monte Carlo technique, maximum likelihood and root mean square error (RMSE), are utilized to minimize the error. Validation of the model based on the cross-validation technique reveals that, in general, reasonable stochastic approximations of the rock fall trajectories are obtained in all dimensions, including runout, bounce heights and velocities. The approximations are compared to the measured data in terms of median, 95% and maximum values. The results of the comparisons indicate that approximate first-order predictions, using a single set of input parameters, are possible and can be used to aid practical hazard and risk assessment.
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1988-01-01
The approximation of unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft are discussed. Two methods of formulating these approximations are extended to include the same flexibility in constraining the approximations and the same methodology in optimizing nonlinear parameters as another currently used extended least-squares method. Optimal selection of nonlinear parameters is made in each of the three methods by use of the same nonlinear, nongradient optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is lower order than that required when no optimization of the nonlinear terms is performed. The free linear parameters are determined using the least-squares matrix techniques of a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from different approaches are described and results are presented that show comparative evaluations from application of each of the extended methods to a numerical example.
Resummed memory kernels in generalized system-bath master equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mavros, Michael G.; Van Voorhis, Troy, E-mail: tvan@mit.edu
2014-08-07
Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system-bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
Magnitude of pseudopotential localization errors in fixed node diffusion quantum Monte Carlo
Kent, Paul R.; Krogel, Jaron T.
2017-06-22
Growth in computational resources has led to the application of real space diffusion quantum Monte Carlo to increasingly heavy elements. Although generally assumed to be small, we find that when using standard techniques, the pseudopotential localization error can be large, on the order of an electron volt for an isolated cerium atom. We formally show that the localization error can be reduced to zero with improvements to the Jastrow factor alone, and we define a metric of Jastrow sensitivity that may be useful in the design of pseudopotentials. We employ an extrapolation scheme to extract the bare fixed node energy and estimate the localization error in both the locality approximation and the T-moves schemes for the Ce atom in charge states 3+/4+. The locality approximation exhibits the lowest Jastrow sensitivity and generally smaller localization errors than T-moves, although the locality approximation energy approaches the localization-free limit from above/below for the 3+/4+ charge state. We find that energy minimized Jastrow factors including three-body electron-electron-ion terms are the most effective at reducing the localization error for both the locality approximation and T-moves for the case of the Ce atom. Less complex or variance minimized Jastrows are generally less effective. Finally, our results suggest that further improvements to Jastrow factors and trial wavefunction forms may be needed to reduce localization errors to chemical accuracy when medium core pseudopotentials are applied to heavy elements such as Ce.
NASA Technical Reports Server (NTRS)
Tsai, C.; Szabo, B. A.
1973-01-01
An approach to the finite element method which utilizes families of conforming finite elements based on complete polynomials is presented. Finite element approximations based on this method converge with respect to progressively reduced element sizes as well as with respect to progressively increasing orders of approximation. Numerical results of static and dynamic applications of plates are presented to demonstrate the efficiency of the method. Comparisons are made with plate elements in NASTRAN and the high-precision plate element developed by Cowper and his co-workers. Some consideration is given to implementation of the constraint method into general purpose computer programs such as NASTRAN.
Effective quadrature formula in solving linear integro-differential equations of order two
NASA Astrophysics Data System (ADS)
Eshkuvatov, Z. K.; Kammuji, M.; Long, N. M. A. Nik; Yunus, Arif A. M.
2017-08-01
In this note, we approximately solve the general form of second-order Fredholm-Volterra integro-differential equations (IDEs) with boundary conditions and show that the proposed method is effective and reliable. The IDE is first reduced to an integral equation of the third kind by using standard integration techniques and an identity between multiple and single integrals; truncated Legendre series are then used to estimate the unknown function. For the kernel integrals, we apply the Gauss-Legendre quadrature formula, with collocation points chosen as the roots of the Legendre polynomials. Finally, the integral equation of the third kind is reduced to a system of algebraic equations, and Gaussian elimination is applied to obtain approximate solutions. Numerical examples and comparisons with other methods reveal that the proposed method is very effective and dominates the others in many cases. A general theory of the existence of the solution is also discussed.
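The quadrature step can be sketched as follows: an n-point Gauss-Legendre rule, exact for polynomials of degree up to 2n - 1, mapped to an arbitrary interval using nodes and weights from NumPy (a generic illustration of the formula, not the authors' solver):

```python
import numpy as np

def gauss_legendre_integrate(f, a, b, n):
    """Integrate f over [a, b] with an n-point Gauss-Legendre rule."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)       # affine map to [a, b]
    return 0.5 * (b - a) * np.dot(w, f(t))
```

With n = 3 the rule integrates degree-5 polynomials exactly, e.g. the integral of x^5 over [0, 1] is 1/6.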
Efficient Kriging via Fast Matrix-Vector Products
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Raykar, Vikas C.; Duraiswami, Ramani; Mount, David M.
2008-01-01
Interpolating scattered data points is a problem of wide ranging interest. Ordinary kriging is an optimal scattered data estimator, widely used in geosciences and remote sensing. A generalized version of this technique, called cokriging, can be used for image fusion of remotely sensed data. However, it is computationally very expensive for large data sets. We demonstrate the time efficiency and accuracy of approximating ordinary kriging through the use of fast matrix-vector products combined with iterative methods. Our implementations of the fast matrix-vector products are based on fast multipole methods and nearest-neighbor searching techniques.
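The pairing of an iterative solver with a fast matrix-vector product can be sketched with plain conjugate gradients driven by a user-supplied matvec callable; here a dense product stands in for a multipole-accelerated one:

```python
import numpy as np

def cg(matvec, b, tol=1e-10, itmax=1000):
    """Conjugate gradients on A x = b, where A is supplied only through
    its matrix-vector product. In kriging, A is the (SPD) covariance
    matrix and matvec can be a fast multipole-accelerated product."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(itmax):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

The solver never forms A, which is what makes the fast-matvec substitution pay off for large data sets.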
Globally convergent techniques in nonlinear Newton-Krylov
NASA Technical Reports Server (NTRS)
Brown, Peter N.; Saad, Youcef
1989-01-01
Some convergence theory is presented for nonlinear Krylov subspace methods. The basic idea of these methods is to use variants of Newton's iteration in conjunction with a Krylov subspace method for solving the Jacobian linear systems. These methods are variants of inexact Newton methods where the approximate Newton direction is taken from a subspace of small dimension. The main focus is to analyze these methods when they are combined with global strategies such as linesearch techniques and model trust region algorithms. Most of the convergence results are formulated for projection onto general subspaces rather than just Krylov subspaces.
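A minimal sketch of the combination analyzed above: an inexact Newton iteration whose step comes from GMRES applied to a finite-difference Jacobian-vector product, globalized by a backtracking linesearch (illustrative tolerances and parameters, using SciPy's GMRES):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_krylov_ls(F, x0, tol=1e-10, itmax=50):
    """Inexact Newton: solve J(x) s = -F(x) with GMRES via a
    finite-difference Jacobian-vector product, then apply a
    backtracking linesearch on ||F|| as the global strategy."""
    x = np.asarray(x0, dtype=float)
    for _ in range(itmax):
        fx = F(x)
        nf = np.linalg.norm(fx)
        if nf < tol:
            break
        eps = 1e-7
        jv = lambda v: (F(x + eps * v) - fx) / eps     # J(x) @ v, approx.
        J = LinearOperator((len(x), len(x)), matvec=jv)
        s, _ = gmres(J, -fx)
        t = 1.0                                        # backtracking linesearch
        while np.linalg.norm(F(x + t * s)) > (1.0 - 1e-4 * t) * nf and t > 1e-8:
            t *= 0.5
        x = x + t * s
    return x
```

The GMRES step is exactly the "approximate Newton direction from a small subspace" of the analysis, and the linesearch is one of the global strategies whose convergence the paper studies.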
LVQ and backpropagation neural networks applied to NASA SSME data
NASA Technical Reports Server (NTRS)
Doniere, Timothy F.; Dhawan, Atam P.
1993-01-01
Feedforward neural networks with backpropagation learning have been used as function approximators for modeling the space shuttle main engine (SSME) sensor signals. The modeling of these sensor signals is aimed at the development of a sensor fault detection system that can be used during ground test firings. The generalization capability of a neural network based function approximator depends on the training vectors, which in this application may be derived from a number of SSME ground test firings. This yields a large number of training vectors. Large training sets can cause the time required to train the network to be very large. Also, the network may not be able to generalize for large training sets. To reduce the size of the training sets, the SSME test-firing data is reduced using the learning vector quantization (LVQ) based technique. Different compression ratios were used to obtain compressed data in training the neural network model. The performance of the neural model trained using reduced sets of training patterns is presented and compared with the performance of the model trained using complete data. The LVQ can also be used as a function approximator. The performance of the LVQ as a function approximator using reduced training sets is presented and compared with the performance of the backpropagation network.
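The LVQ1 update rule used for such data reduction can be sketched as follows (a generic implementation, not the SSME processing code): the winning prototype is pulled toward a training vector of its own class and pushed away otherwise.

```python
import numpy as np

def lvq1_train(X, y, protos, labels, lr=0.1, epochs=20):
    """LVQ1: for each sample, move the nearest prototype toward it if
    the classes match, away from it if they do not."""
    P = protos.copy()
    for _ in range(epochs):
        for x, t in zip(X, y):
            i = np.argmin(np.linalg.norm(P - x, axis=1))   # winner
            step = lr * (x - P[i])
            P[i] += step if labels[i] == t else -step
    return P

def lvq1_predict(X, protos, labels):
    """Classify by the label of the nearest prototype."""
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]
```

A handful of trained prototypes can then stand in for the full training set, which is the compression used above.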
Ruíz, A; Ramos, A; San Emeterio, J L
2004-04-01
An estimation procedure to efficiently find approximate values of internal parameters in ultrasonic transducers intended for broadband operation would be a valuable tool for discovering internal construction data. This information is necessary in the modelling and simulation of the acoustic and electrical behaviour of ultrasonic systems containing commercial transducers. There is no general solution to this problem of parameter estimation in the case of broadband piezoelectric probes. In this paper, this general problem is briefly analysed for broadband conditions. The viability of applying in this field an artificial intelligence technique, based on modelling of the transducer's internal components, is studied. A genetic algorithm (GA) procedure is presented and applied to the estimation of different parameters related to two transducers working as pulsed transmitters. The efficiency of this GA technique is studied, considering the influence of the number and variation range of the estimated parameters. Estimation results are experimentally ratified.
Munro, Peter R.T.; Ignatyev, Konstantin; Speller, Robert D.; Olivo, Alessandro
2013-01-01
X-ray phase contrast imaging is a very promising technique which may lead to significant advancements in medical imaging. One of the impediments to the clinical implementation of the technique is the general requirement to have an x-ray source of high coherence. The radiation physics group at UCL is currently developing an x-ray phase contrast imaging technique which works with laboratory x-ray sources. Validation of the system requires extensive modelling of relatively large samples of tissue. To aid this, we have undertaken a study of when geometrical optics may be employed to model the system in order to avoid the need to perform a computationally expensive wave optics calculation. In this paper, we derive the relationship between the geometrical and wave optics model for our system imaging an infinite cylinder. From this model we are able to draw conclusions regarding the general applicability of the geometrical optics approximation. PMID:20389424
A robust multilevel simultaneous eigenvalue solver
NASA Technical Reports Server (NTRS)
Costiner, Sorin; Taasan, Shlomo
1993-01-01
Multilevel (ML) algorithms for eigenvalue problems are often faced with several types of difficulties such as: the mixing of approximated eigenvectors by the solution process, the approximation of incomplete clusters of eigenvectors, the poor representation of solutions on coarse levels, and the existence of close or equal eigenvalues. Algorithms that do not treat these difficulties appropriately usually fail, or their performance degrades when facing them. These issues motivated the development of a robust adaptive ML algorithm which treats these difficulties, for the calculation of a few eigenvectors and their corresponding eigenvalues. The main techniques used in the new algorithm include: the adaptive completion and separation of the relevant clusters on different levels, the simultaneous treatment of solutions within each cluster, and robustness tests which monitor the algorithm's efficiency and convergence. The eigenvectors' separation efficiency is based on a new ML projection technique generalizing the Rayleigh-Ritz projection, combined with a technique, the backrotations. These separation techniques, when combined with an FMG formulation, in many cases lead to algorithms of O(qN) complexity, for q eigenvectors of size N on the finest level. Previously developed ML algorithms are less focused on the mentioned difficulties. Moreover, algorithms which employ fine level separation techniques are of O(q^2 N) complexity and usually do not overcome all these difficulties. Computational examples are presented where Schrödinger type eigenvalue problems in 2-D and 3-D, having equal and closely clustered eigenvalues, are solved with the efficiency of the Poisson multigrid solver. A second order approximation is obtained in O(qN) work, where the total computational work is equivalent to only a few fine level relaxations per eigenvector.
Summation by parts, projections, and stability
NASA Technical Reports Server (NTRS)
Olsson, Pelle
1993-01-01
We have derived stability results for high-order finite difference approximations of mixed hyperbolic-parabolic initial-boundary value problems (IBVP). The results are obtained using summation by parts and a new way of representing general linear boundary conditions as an orthogonal projection. By slightly rearranging the analytic equations, we can prove strict stability for hyperbolic-parabolic IBVP. Furthermore, we generalize our technique so as to yield strict stability on curvilinear non-smooth domains in two space dimensions. Finally, we show how to incorporate inhomogeneous boundary data while retaining strict stability. Using the same procedure one can prove strict stability in higher dimensions as well.
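The summation-by-parts idea can be illustrated with the standard second-order SBP first-derivative operator D = H^{-1} Q, whose matrices satisfy the discrete integration-by-parts identity Q + Q^T = B = diag(-1, 0, ..., 0, 1) (a textbook construction, not the operators of the report):

```python
import numpy as np

def sbp_d1(n, h):
    """Second-order SBP first-derivative operator on n grid points with
    spacing h: D = H^{-1} Q with norm H = h*diag(1/2, 1, ..., 1, 1/2)
    and Q + Q^T = diag(-1, 0, ..., 0, 1)."""
    H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])
    Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
    Q[0, 0], Q[-1, -1] = -0.5, 0.5
    D = np.linalg.solve(H, Q)
    return D, H, Q
```

The identity Q + Q^T = B mimics integration by parts, so energy estimates for the continuous problem carry over discretely; this is the property the stability proofs above rely on.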
Analysis of entry accelerometer data: A case study of Mars Pathfinder
NASA Astrophysics Data System (ADS)
Withers, Paul; Towner, M. C.; Hathi, B.; Zarnecki, J. C.
2003-08-01
Accelerometers are regularly flown on atmosphere-entering spacecraft. Using their measurements, the spacecraft trajectory and the vertical structure of density, pressure, and temperature in the atmosphere through which it descends can be calculated. We review the general procedures for trajectory and atmospheric structure reconstruction and outline them here in detail. We discuss which physical properties are important in atmospheric entry, instead of working exclusively with the dimensionless numbers of fluid dynamics. Integration of the equations of motion governing the spacecraft trajectory is carried out in a novel and general formulation. This does not require an axisymmetric gravitational field or many of the other assumptions that are present in the literature. We discuss four techniques - head-on, drag-only, acceleration ratios, and gyroscopes - for constraining spacecraft attitude, which is the critical issue in the trajectory reconstruction. The head-on technique uses an approximate magnitude and direction for the aerodynamic acceleration, whereas the drag-only technique uses the correct magnitude and an approximate direction. The acceleration ratios technique uses the correct magnitude and an indirect way of finding the correct direction and the gyroscopes technique uses the correct magnitude and a direct way of finding the correct direction. The head-on and drag-only techniques are easy to implement and require little additional information. The acceleration ratios technique requires extensive and expensive aerodynamic modelling. The gyroscopes technique requires additional onboard instrumentation. The effects of errors are briefly addressed. Our implementations of these trajectory reconstruction procedures have been verified on the Mars Pathfinder dataset. We find inconsistencies within the published work of the Pathfinder science team, and in the PDS archive itself, relating to the entry state of the spacecraft. 
Our atmospheric structure reconstruction, which uses only a simple aerodynamic database, is consistent with the PDS archive to about 4%. Surprisingly accurate profiles of atmospheric temperatures can be derived with no information about the spacecraft aerodynamics. Using no aerodynamic information whatsoever about Pathfinder, our profile of atmospheric temperature is still consistent with the PDS archive to about 8%. As a service to the community, we have placed simplified versions of our trajectory and atmospheric structure computer programmes online for public use.
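The atmospheric-structure step described above can be sketched compactly: given a density profile from the reconstructed trajectory, hydrostatic equilibrium is integrated downward for pressure and the ideal gas law gives temperature. The following is a minimal illustration of that procedure, not the authors' programme; the constant gravity, molar mass, and the test profile are assumed values.

```python
import numpy as np

# Illustrative atmospheric-structure reconstruction: integrate dp/dz = -rho*g
# downward from an assumed upper-boundary pressure, then recover temperature
# from the ideal gas law T = p*mu/(rho*R). Constants are rough Mars values,
# not Pathfinder data.

R_GAS = 8.314   # J mol^-1 K^-1
MU = 0.04334    # kg mol^-1, approximate molar mass of a CO2 atmosphere (assumed)
G = 3.71        # m s^-2, treated as constant over the profile for simplicity

def atmospheric_structure(z, rho, p_top=0.0):
    """Given ascending altitude grid z and density rho, return (p, T)."""
    p = np.zeros_like(rho)
    p[-1] = p_top
    # trapezoidal integration of dp = rho * g * dz, from the top downward
    for i in range(len(z) - 2, -1, -1):
        p[i] = p[i + 1] + 0.5 * (rho[i] + rho[i + 1]) * G * (z[i + 1] - z[i])
    T = p * MU / (rho * R_GAS)
    return p, T
```

For an isothermal test atmosphere with scale height H = R T / (mu g), the recovered temperature profile should return the assumed constant temperature, which makes a convenient self-check of the implementation.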
NASA Astrophysics Data System (ADS)
Teter, Andrzej; Kolakowski, Zbigniew
2018-01-01
The numerical modelling of a plate structure was performed with the finite element method and a one-mode approach based on Koiter's method. The first order approximation of Koiter's method enables one to solve the eigenvalue problem. The second order approximation describes post-buckling equilibrium paths. In the finite element analysis, the Lanczos method was used to solve the linear buckling problem. Simulations of the non-linear problem were performed with the Newton-Raphson method. Detailed calculations were carried out for a short Z-column made of general laminates, with non-symmetric configurations of the laminated layers. General laminates are of particular interest because of their wide range of possible applications. The length of the samples was chosen to obtain the lowest value of the local buckling load. The amplitude of initial imperfections was 10% of the wall thickness. The thin-walled structures were simply supported on both ends. The numerical results were verified in experimental tests, in which a strain-gauge technique was applied. A static compression test was performed on a universal testing machine using a special grip consisting of two rigid steel plates and clamping sleeves. Specimens were manufactured with an autoclave technique. Tests were performed at a constant cross-bar velocity of 2 mm/min. The compressive load was kept below 150% of the bifurcation load. Additionally, soft and thin pads were used to reduce inaccuracy of the sample ends.
Petersson, N. Anders; Sjogreen, Bjorn
2015-07-20
We develop a fourth order accurate finite difference method for solving the three-dimensional elastic wave equation in general heterogeneous anisotropic materials on curvilinear grids. The proposed method is an extension of the method for isotropic materials previously described by Sjögreen and Petersson (2012) [11]. It discretizes the anisotropic elastic wave equation in second order formulation, using a node-centered finite difference method that satisfies the principle of summation by parts. The summation-by-parts technique results in a provably stable numerical method that is energy conserving. We also generalize and evaluate the super-grid far-field technique for truncating unbounded domains. Unlike the commonly used perfectly matched layers (PML), the super-grid technique is stable for general anisotropic materials, because it is based on a coordinate stretching combined with an artificial dissipation. Moreover, the discretization satisfies an energy estimate, proving that the numerical approximation is stable. We demonstrate by numerical experiments that sufficiently wide super-grid layers result in very small artificial reflections. Applications of the proposed method are demonstrated by three-dimensional simulations of anisotropic wave propagation in crystals.
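The summation-by-parts principle the method relies on can be illustrated with the simplest member of the family: a second-order, one-dimensional first-derivative operator D = H⁻¹Q (the paper's operators are fourth order and three-dimensional). This sketch only shows the structural property that yields the discrete energy estimate.

```python
import numpy as np

# Minimal second-order summation-by-parts (SBP) first-derivative operator on a
# 1-D grid of n points with spacing h: D = H^{-1} Q, where H is a diagonal
# quadrature (norm) matrix and Q is skew-symmetric except at the two corners.
# The relation Q + Q^T = diag(-1, 0, ..., 0, 1) is the discrete analogue of
# integration by parts that underlies energy-stable schemes.

def sbp_operator(n, h):
    w = np.ones(n)
    w[0] = w[-1] = 0.5
    H = np.diag(w * h)              # trapezoidal quadrature weights
    Q = np.zeros((n, n))
    for i in range(n - 1):
        Q[i, i + 1] = 0.5
        Q[i + 1, i] = -0.5
    Q[0, 0] = -0.5                  # boundary closure
    Q[-1, -1] = 0.5
    D = np.linalg.inv(H) @ Q
    return D, H, Q
```

The operator differentiates linear functions exactly, including at the boundary rows, which is a standard consistency check for SBP closures.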
NASA Technical Reports Server (NTRS)
Laurenson, R. M.; Baumgarten, J. R.
1975-01-01
An approximation technique has been developed for determining the transient response of a nonlinear dynamic system. The nonlinearities in the system considered appear in its dissipation function, which was expressed as a second order polynomial in the system's velocity. The developed approximation is an extension of the classic Kryloff-Bogoliuboff technique. Two examples of the developed approximation are presented for comparison with other approximation methods.
Structural design using equilibrium programming formulations
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
1995-01-01
Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davies, J. A.; Perry, C. H.; Harrison, R. A.
2013-11-10
The twin-spacecraft STEREO mission has enabled simultaneous white-light imaging of the solar corona and inner heliosphere from multiple vantage points. This has led to the development of numerous stereoscopic techniques to investigate the three-dimensional structure and kinematics of solar wind transients such as coronal mass ejections (CMEs). Two such methods (triangulation and the tangent to a sphere) can be used to determine time profiles of the propagation direction and radial distance (and thereby radial speed) of a solar wind transient as it travels through the inner heliosphere, based on its time-elongation profile viewed by two observers. These techniques are founded on the assumption that the transient can be characterized as a point source (fixed φ, FP, approximation) or a circle attached to Sun-center (harmonic mean, HM, approximation), respectively. These geometries constitute extreme descriptions of solar wind transients in terms of their cross-sectional extent. Here, we present the stereoscopic expressions necessary to derive propagation direction and radial distance/speed profiles of such transients based on the more generalized self-similar expansion (SSE) geometry, for which the FP and HM geometries form the limiting cases; our implementation of these equations is termed the stereoscopic SSE method. We apply the technique to two Earth-directed CMEs from different phases of the STEREO mission: the well-studied event of 2008 December and a more recent event from 2012 March. The latter CME was fast, with an initial speed exceeding 2000 km s^-1, and highly geoeffective, in stark contrast to the slow and ineffectual 2008 December CME.
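The single-observer building blocks of these geometries can be written down directly. The hedged sketch below encodes the SSE elongation-to-distance relation, with the FP and HM expressions recovered as its δ → 0 and δ → 90° limits; the angles and observer distance used are arbitrary illustrative inputs, not STEREO data.

```python
import numpy as np

# Elongation-to-distance conversions for a transient of angular half-width
# delta, seen at elongation eps by an observer at distance d from the Sun,
# with phi the angle between the observer-Sun line and the propagation
# direction. The SSE form contains the fixed-phi (FP, delta -> 0) and
# harmonic-mean (HM, delta -> 90 deg) geometries as limiting cases.

def sse_distance(eps, phi, delta, d=1.0):
    return d * np.sin(eps) * (1.0 + np.sin(delta)) / (np.sin(eps + phi) + np.sin(delta))

def fp_distance(eps, phi, d=1.0):   # point-source limit
    return d * np.sin(eps) / np.sin(eps + phi)

def hm_distance(eps, phi, d=1.0):   # Sun-centred circle limit
    return 2.0 * d * np.sin(eps) / (1.0 + np.sin(eps + phi))
```

Checking that the SSE expression collapses to the FP and HM formulas at the two limiting half-widths is a quick way to validate an implementation.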
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Yao, Yuchen; Ruan, Liming
2014-12-01
The Ant Colony Optimization algorithm based on the probability density function (PDF-ACO) is applied to estimate the bimodal aerosol particle size distribution (PSD). The direct problem is solved by the modified Anomalous Diffraction Approximation (ADA, an approximation for optically large and soft spheres, i.e., χ⪢1 and |m-1|⪡1) and the Beer-Lambert law. First, a popular bimodal aerosol PSD and three other bimodal PSDs are retrieved in the dependent model by the multi-wavelength extinction technique. All the results reveal that the PDF-ACO algorithm can be used as an effective technique to investigate the bimodal PSD. Then, the Johnson SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the bimodal PSDs under the independent model. Finally, the J-SB and M-β functions are applied to recover actual measured aerosol PSDs over Beijing and Shanghai obtained from the Aerosol Robotic Network (AERONET). The numerical simulation and experimental results demonstrate that these two general functions, especially the J-SB function, can serve as versatile distribution functions for retrieving the bimodal aerosol PSD when no a priori information about the PSD is available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golosio, Bruno; Carpinelli, Massimo; Masala, Giovanni Luca
Phase contrast imaging is a technique widely used in synchrotron facilities for nondestructive analysis. Such a technique can also be implemented through microfocus x-ray tube systems. Recently, a relatively new type of compact, quasi-monochromatic x-ray source based on Compton backscattering has been proposed for phase contrast imaging applications. In order to plan a phase contrast imaging system setup, to evaluate the system performance, and to choose the experimental parameters that optimize the image quality, it is important to have reliable software for phase contrast imaging simulation. Several software tools have been developed and tested against experimental measurements at synchrotron facilities devoted to phase contrast imaging. However, many approximations that are valid in such conditions (e.g., large source-object distance, small transverse size of the object, plane wave approximation, monochromatic beam, and Gaussian-shaped source focal spot) are not generally suitable for x-ray tubes and other compact systems. In this work we describe a general method for the simulation of phase contrast imaging using polychromatic sources based on a spherical wave description of the beam and on a double-Gaussian model of the source focal spot, we discuss the validity of some possible approximations, and we test the simulations against experimental measurements using a microfocus x-ray tube on three types of polymers (nylon, poly-ethylene-terephthalate, and poly-methyl-methacrylate) at varying source-object distance. It is shown that, as long as all experimental conditions are described accurately in the simulations, the described method yields results that are in good agreement with experimental measurements.
NASA Astrophysics Data System (ADS)
Plimak, L. I.; Fleischhauer, M.; Olsen, M. K.; Collett, M. J.
2003-01-01
We present an introduction to phase-space techniques (PST) based on a quantum-field-theoretical (QFT) approach. In addition to bridging the gap between PST and QFT, our approach results in a number of generalizations of the PST. First, for problems where the usual PST do not result in a genuine Fokker-Planck equation (even after phase-space doubling) and hence fail to produce a stochastic differential equation (SDE), we show how the system in question may be approximated via stochastic difference equations (SΔE). Second, we show that introducing sources into the SDE’s (or SΔE’s) generalizes them to a full quantum nonlinear stochastic response problem (thus generalizing Kubo’s linear reaction theory to a quantum nonlinear stochastic response theory). Third, we establish general relations linking quantum response properties of the system in question to averages of operator products ordered in a way different from time normal. This extends PST to a much wider assemblage of operator products than are usually considered in phase-space approaches. In all cases, our approach yields a very simple and straightforward way of deriving stochastic equations in phase space.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spanner, Michael; Batista, Victor S.; Brumer, Paul
2005-02-22
The utility of the Filinov integral conditioning technique, as implemented in semiclassical initial value representation (SC-IVR) methods, is analyzed for a number of regular and chaotic systems. For nonchaotic systems of low dimensionality, the Filinov technique is found to be quite ineffective at accelerating convergence of semiclassical calculations since, contrary to the conventional wisdom, the semiclassical integrands usually do not exhibit significant phase oscillations in regions of large integrand amplitude. In the case of chaotic dynamics, it is found that the regular component is accurately represented by the SC-IVR, even when using the Filinov integral conditioning technique, but that quantum manifestations of chaotic behavior are easily overdamped by the filtering technique. Finally, it is shown that the level of approximation introduced by the Filinov filter is, in general, comparable to the simpler ad hoc truncation procedure introduced by Kay [J. Chem. Phys. 101, 2250 (1994)].
System Identification for Nonlinear Control Using Neural Networks
NASA Technical Reports Server (NTRS)
Stengel, Robert F.; Linse, Dennis J.
1990-01-01
An approach to incorporating artificial neural networks in nonlinear, adaptive control systems is described. The controller contains three principal elements: a nonlinear inverse dynamic control law whose coefficients depend on a comprehensive model of the plant, a neural network that models system dynamics, and a state estimator whose outputs drive the control law and train the neural network. Attention is focused on the system identification task, which combines an extended Kalman filter with generalized spline function approximation. Continual learning is possible during normal operation, without taking the system off line for specialized training. Nonlinear inverse dynamic control requires smooth derivatives as well as function estimates, imposing stringent goals on the approximating technique.
NASA Astrophysics Data System (ADS)
Sanders, Sören; Holthaus, Martin
2017-11-01
We explore in detail how analytic continuation of divergent perturbation series by generalized hypergeometric functions is achieved in practice. Using the example of strong-coupling perturbation series provided by the two-dimensional Bose-Hubbard model, we compare hypergeometric continuation to Shanks and Padé techniques, and demonstrate that the former yields a powerful, efficient and reliable alternative for computing the phase diagram of the Mott insulator-to-superfluid transition. In contrast to Shanks transformations and Padé approximations, hypergeometric continuation also allows us to determine the exponents which characterize the divergence of correlation functions at the transition points. Therefore, hypergeometric continuation constitutes a promising tool for the study of quantum phase transitions.
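As a point of comparison for the hypergeometric approach, the Shanks transformation mentioned above is easy to state: it eliminates the leading geometric transient of a sequence of partial sums via S(Aₙ) = (Aₙ₊₁Aₙ₋₁ − Aₙ²)/(Aₙ₊₁ + Aₙ₋₁ − 2Aₙ). A minimal sketch on a slowly converging alternating series (not the Bose-Hubbard strong-coupling series of the paper):

```python
from math import log

# Shanks transformation: assumes A_n ~ A + alpha * q^n and extrapolates to A.
def shanks(seq):
    return [(seq[i + 1] * seq[i - 1] - seq[i] ** 2) /
            (seq[i + 1] + seq[i - 1] - 2 * seq[i])
            for i in range(1, len(seq) - 1)]

# Partial sums of the slowly converging series ln 2 = 1 - 1/2 + 1/3 - ...
partial = []
s = 0.0
for n in range(1, 12):
    s += (-1) ** (n + 1) / n
    partial.append(s)

once = shanks(partial)
twice = shanks(once)   # the transformation can be iterated for further gain
```

With only eleven terms, the raw partial sums are accurate to roughly two digits, while the twice-iterated transform reaches several more; this acceleration is what Padé and hypergeometric continuation are benchmarked against.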
A numerical projection technique for large-scale eigenvalue problems
NASA Astrophysics Data System (ADS)
Gamillscheg, Ralf; Haase, Gundolf; von der Linden, Wolfgang
2011-10-01
We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique used in strongly correlated quantum many-body systems, where an effective approximate model of smaller complexity is first constructed by projecting out high-energy degrees of freedom, and the resulting model is then solved by some standard eigenvalue solver. Here we introduce a generalization of this idea in which both steps are performed numerically and which, in contrast to the standard projection technique, converges in principle to the exact eigenvalues. This approach is applicable not just to eigenvalue problems encountered in many-body systems but also to other areas of research that give rise to large-scale eigenvalue problems for matrices which have, roughly speaking, a pronounced dominant diagonal part. We present detailed studies of the approach guided by two many-body models.
A Survey of Techniques for Approximate Computing
Mittal, Sparsh
2016-03-18
Approximate computing trades off computation quality against the effort expended; as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU, and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. The aim of this paper is to provide researchers with insights into the working of AC techniques and to inspire more efforts in this area to make AC the mainstream computing approach in future systems.
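One concrete technique from the approximable-program-portion category of the survey is loop perforation: execute only every k-th iteration of an error-tolerant loop and rescale the result, trading accuracy for work. A toy sketch on illustrative data (not an example from the survey itself):

```python
# Loop perforation: do 1/skip of the iterations of an approximable reduction
# and compensate by averaging over the sampled subset only. For smooth or
# statistically regular data, the quality loss is small while the work drops
# by a factor of ~skip.

def perforated_mean(data, skip=4):
    sampled = data[::skip]              # run only every skip-th iteration
    return sum(sampled) / len(sampled)

signal = [0.5 + 0.001 * i for i in range(10000)]   # smooth synthetic input
exact = sum(signal) / len(signal)
approx = perforated_mean(signal, skip=4)
```

For this smooth input the perforated result deviates from the exact mean by well under one percent while performing a quarter of the additions, which is the quality-versus-effort trade the survey formalizes.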
On the energy integral for first post-Newtonian approximation
NASA Astrophysics Data System (ADS)
O'Leary, Joseph; Hill, James M.; Bennett, James C.
2018-07-01
The post-Newtonian approximation for general relativity is widely adopted by the geodesy and astronomy communities. It has been successfully exploited for the inclusion of relativistic effects in practically all geodetic applications and techniques such as satellite/lunar laser ranging and very long baseline interferometry. Presently, the levels of accuracy required in geodetic techniques require that reference frames, planetary and satellite orbits and signal propagation be treated within the post-Newtonian regime. For arbitrary scalar W and vector gravitational potentials W^j (j=1,2,3), we present a novel derivation of the energy associated with a test particle in the post-Newtonian regime. The integral so obtained appears not to have been given previously in the literature and is deduced through algebraic manipulation on seeking a Jacobi-like integral associated with the standard post-Newtonian equations of motion. The new integral is independently verified through a variational formulation using the post-Newtonian metric components and is subsequently verified by numerical integration of the post-Newtonian equations of motion.
Spatial homogenization methods for pin-by-pin neutron transport calculations
NASA Astrophysics Data System (ADS)
Kozlowski, Tomasz
For practical reactor core applications, low-order transport approximations such as SP3 have been shown to provide sufficient accuracy for both static and transient calculations with considerably less computational expense than the discrete ordinates or the full spherical harmonics methods. These methods have been applied in several core simulators where homogenization was performed at the level of the pin cell. One of the principal problems has been to recover the error introduced by pin-cell homogenization. Two basic approaches to treat pin-cell homogenization error have been proposed: Superhomogenization (SPH) factors and Pin-Cell Discontinuity Factors (PDF). These methods are based on the well established Equivalence Theory and Generalized Equivalence Theory to generate appropriate group constants. They are able to treat all sources of error together, allowing even few-group diffusion with one mesh per cell to reproduce the reference solution. A detailed investigation and consistent comparison of both homogenization techniques showed the potential of the PDF approach to improve the accuracy of core calculations, but also revealed its limitations. In principle, the method is applicable only for the boundary conditions at which it was created, i.e. for the boundary conditions considered during the homogenization process: normally zero current. Therefore, there exists a need to improve this method, making it more general and environment-independent. The goal of the proposed general homogenization technique is to create a function that can correctly predict the appropriate correction factor with only homogeneous information available, i.e. a function based on the heterogeneous solution that can approximate PDFs using the homogeneous solution. It has been shown that the PDF can be well approximated by a least-squares polynomial fit of the non-dimensional heterogeneous solution and later used for PDF prediction from the homogeneous solution.
This shows promise for PDF prediction at off-reference conditions, such as during reactor transients, which produce conditions that cannot typically be anticipated a priori.
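The least-squares fit-then-predict pattern described above can be illustrated schematically. The correction-factor function and the non-dimensional parameter below are stand-ins, not actual pin-cell data; the sketch only shows the shape of the procedure.

```python
import numpy as np

# Schematic of the proposed prediction step: tabulate a correction factor
# (a made-up smooth function standing in for a pin-cell discontinuity factor)
# against a non-dimensional parameter of reference heterogeneous solutions,
# fit a least-squares polynomial, and later evaluate the fit using only
# homogeneous-solution information.

x_ref = np.linspace(0.8, 1.2, 21)                              # hypothetical flux ratio
pdf_ref = 1.0 + 0.15 * (x_ref - 1.0) - 0.4 * (x_ref - 1.0) ** 2  # stand-in PDFs

coeffs = np.polyfit(x_ref, pdf_ref, deg=2)   # least-squares polynomial fit
predict = np.poly1d(coeffs)

x_new = 1.05                 # parameter value from a homogeneous calculation
pdf_new = predict(x_new)     # predicted correction factor
```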
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.
2007-01-01
Interpolating scattered data points is a problem of wide-ranging interest. A number of approaches to interpolation have been proposed, both from theoretical domains such as computational geometry and in application fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is called ordinary kriging. It is popular because it is a best linear unbiased estimator. The price for its statistical optimality is that the estimator is computationally very expensive, because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. Determining the proper size for this neighborhood is typically done by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. This process achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. The technique had previously been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems.
We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.
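A minimal one-dimensional sketch of ordinary kriging with covariance tapering follows. The Gaussian covariance model, the Wendland-type taper, and the data are illustrative, and the paper's sparse iterative solver is replaced by a dense solve for brevity; the point is that tapering zeroes covariance entries beyond the taper range, making the kriging system sparse.

```python
import numpy as np

def gaussian_cov(h, sill=1.0, rng=1.0):
    # assumed covariance model for the illustration
    return sill * np.exp(-(h / rng) ** 2)

def wendland_taper(h, theta=2.0):
    # compactly supported Wendland-type taper: zero beyond range theta
    t = np.clip(1.0 - h / theta, 0.0, None)
    return t ** 4 * (1.0 + 4.0 * h / theta)

def ordinary_kriging(x, y, x0, rng=1.0, theta=2.0):
    n = len(x)
    h = np.abs(x[:, None] - x[None, :])
    C = gaussian_cov(h, rng=rng) * wendland_taper(h, theta)   # tapered covariance
    # ordinary-kriging system: covariance block plus the unbiasedness
    # constraint sum(w) = 1, enforced via a Lagrange multiplier row/column
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = C
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.zeros(n + 1)
    h0 = np.abs(x - x0)
    b[:n] = gaussian_cov(h0, rng=rng) * wendland_taper(h0, theta)
    b[n] = 1.0
    w = np.linalg.solve(A, b)[:n]    # kriging weights
    return w @ y
```

With no nugget effect, the kriging predictor interpolates exactly at the data points, which is a convenient correctness check.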
Nonadditivity of van der Waals forces on liquid surfaces
NASA Astrophysics Data System (ADS)
Venkataram, Prashanth S.; Whitton, Jeremy D.; Rodriguez, Alejandro W.
2016-09-01
We present an approach for modeling nanoscale wetting and dewetting of textured solid surfaces that exploits recently developed, sophisticated techniques for computing exact long-range dispersive van der Waals (vdW) or (more generally) Casimir forces in arbitrary geometries. We apply these techniques to solve the variational formulation of the Young-Laplace equation and predict the equilibrium shapes of liquid-vacuum interfaces near solid gratings. We show that commonly employed methods of computing vdW interactions based on additive Hamaker or Derjaguin approximations, which neglect important electromagnetic boundary effects, can result in large discrepancies in the shapes and behaviors of liquid surfaces compared to exact methods.
NASA Astrophysics Data System (ADS)
Macías-Díaz, J. E.; Hendy, A. S.; De Staelen, R. H.
2018-03-01
In this work, we investigate a general nonlinear wave equation with Riesz space-fractional derivatives that generalizes various classical hyperbolic models, including the sine-Gordon and the Klein-Gordon equations from relativistic quantum mechanics. A finite-difference discretization of the model is provided using fractional centered differences. The scheme is capable of preserving an energy-like quantity at each iteration. Some computational comparisons against solutions available in the literature are performed in order to assess the capability of the method to preserve the invariant. Our experiments confirm that the technique yields good approximations to the solutions considered. As an application of our scheme, we provide simulations that confirm, for the first time in the literature, the presence of the phenomenon of nonlinear supratransmission in Riesz space-fractional Klein-Gordon equations driven by a harmonic perturbation at the boundary.
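The fractional centered differences used in such discretizations admit a compact closed form for the stencil coefficients, g_k = (-1)^k Γ(α+1) / [Γ(α/2 - k + 1) Γ(α/2 + k + 1)], with the Riesz derivative approximated by -h^(-α) Σ_k g_k u(x - kh). A sketch of the coefficient computation follows; the reduction to the classical three-point second-difference stencil at α = 2 serves as a built-in check.

```python
import numpy as np
from math import gamma
from scipy.special import rgamma   # reciprocal Gamma; vanishes at the poles

# Fractional centered difference coefficients for the Riesz derivative of
# order alpha, for k = -kmax, ..., kmax. Using the reciprocal Gamma function
# handles the poles of Gamma at nonpositive integers cleanly (the coefficient
# is simply zero there).

def fc_coefficients(alpha, kmax):
    k = np.arange(-kmax, kmax + 1)
    return ((-1.0) ** k * gamma(alpha + 1.0)
            * rgamma(alpha / 2.0 - k + 1.0)
            * rgamma(alpha / 2.0 + k + 1.0))
```

At α = 2 the coefficients collapse to (..., 0, -1, 2, -1, 0, ...), so -h⁻² Σ g_k u(x - kh) reduces to the standard second derivative stencil (u_{j+1} - 2u_j + u_{j-1})/h².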
An Optimal Order Nonnested Mixed Multigrid Method for Generalized Stokes Problems
NASA Technical Reports Server (NTRS)
Deng, Qingping
1996-01-01
A multigrid algorithm is developed and analyzed for generalized Stokes problems discretized by various nonnested mixed finite elements within a unified framework. It is abstractly proved by an element-independent analysis that the multigrid algorithm converges with an optimal order if there exists a 'good' prolongation operator. A technique to construct a 'good' prolongation operator for nonnested multilevel finite element spaces is proposed. Its basic idea is to introduce a sequence of auxiliary nested multilevel finite element spaces and define a prolongation operator as a composite of two single-grid-level operators. This not only makes the construction of a prolongation operator much easier (the final explicit forms of such prolongation operators are fairly simple), but also simplifies the verification of the approximation properties of prolongation operators. Finally, as an application, the framework and technique are applied to seven typical nonnested mixed finite elements.
Riemann Solvers in Relativistic Hydrodynamics: Basics and Astrophysical Applications
NASA Astrophysics Data System (ADS)
Ibanez, Jose M.
2001-12-01
My contribution to these proceedings gives a general overview of High Resolution Shock Capturing (HRSC) methods in the field of relativistic hydrodynamics, with special emphasis on Riemann solvers. HRSC techniques achieve highly accurate numerical approximations (formally second order or better) in smooth regions of the flow, and capture the motion of unresolved steep gradients without creating spurious oscillations. In the first part I will show how these techniques have been extended to relativistic hydrodynamics, making it possible to explore some challenging astrophysical scenarios. I will review recent literature concerning the main properties of different special relativistic Riemann solvers, and discuss several 1D and 2D test problems which are commonly used to evaluate the performance of numerical methods in relativistic hydrodynamics. In the second part I will illustrate the use of HRSC methods in several astrophysical applications where special and general relativistic hydrodynamical processes play a crucial role.
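The shock-capturing behaviour HRSC methods are built on can be demonstrated in miniature with a scalar, non-relativistic example: a first-order Godunov scheme with an exact Riemann solver for the inviscid Burgers equation. This is only a sketch of the idea, far simpler than the relativistic solvers reviewed in the paper.

```python
import numpy as np

# Exact Riemann solver for Burgers' flux f(u) = u^2/2 at a cell interface
# with left/right states ul, ur. The interface flux is upwinded according to
# the wave structure: shock (ul > ur) or rarefaction (ul <= ur).
def godunov_burgers_flux(ul, ur):
    if ul > ur:                                   # shock, speed s = (ul+ur)/2
        return 0.5 * ul ** 2 if ul + ur > 0 else 0.5 * ur ** 2
    if ul > 0:                                    # right-moving rarefaction
        return 0.5 * ul ** 2
    if ur < 0:                                    # left-moving rarefaction
        return 0.5 * ur ** 2
    return 0.0                                    # sonic point inside the fan

def step(u, dx, dt):
    f = np.array([godunov_burgers_flux(u[i], u[i + 1]) for i in range(len(u) - 1)])
    un = u.copy()
    un[1:-1] -= dt / dx * (f[1:] - f[:-1])        # conservative update
    return un

# right-moving shock from a step initial condition; CFL number 0.4
n, dx, dt = 200, 1.0 / 200, 0.4 / 200
u = np.where(np.linspace(0, 1, n) < 0.3, 1.0, 0.0)
for _ in range(150):
    u = step(u, dx, dt)
```

Because the scheme is monotone under the CFL condition, the computed shock stays within the initial bounds with no spurious oscillations, and it travels at the correct Rankine-Hugoniot speed of 1/2.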
Simple Skin-Stretching Device in Assisted Tension-Free Wound Closure.
Cheng, Li-Fu; Lee, Jiunn-Tat; Hsu, Honda; Wu, Meng-Si
2017-03-01
Numerous conventional wound reconstruction methods, such as wound undermining with direct suture, skin graft, and flap surgery, can be used to treat large wounds. The adequate undermining of the skin flaps of a wound is a commonly used technique for achieving the closure of large tension wounds; however, the use of tension to approximate and suture the skin flaps can cause ischemic marginal necrosis. The purpose of this study is to use elastic rubber bands to relieve the tension of direct wound closure for simultaneously minimizing the risks of wound dehiscence and wound edge ischemia that lead to necrosis. This retrospective study was conducted to evaluate our clinical experiences with 22 large wounds, which involved performing primary closures under a considerable amount of tension by using elastic rubber bands in a skin-stretching technique after a wide undermining procedure. Assessment of the results entailed complete wound healing and related complications. All 22 wounds in our study showed fair to good results except for one. The mean success rate was approximately 95.45%. The simple skin-stretching design enabled tension-free skin closure, which pulled the bilateral undermining skin flaps as bilateral fasciocutaneous advancement flaps. The skin-stretching technique was generally successful.
A Stochastic Mixed Finite Element Heterogeneous Multiscale Method for Flow in Porous Media
2010-08-01
applicable for flow in porous media has drawn significant interest in the last few years. Several techniques like generalized polynomial chaos expansions (gPC) ... represents the stochastic solution as a polynomial approximation. This interpolant is constructed via independent function calls to the deterministic ... of orthogonal polynomials [34,38] or sparse grid approximations [39-41]. It is well known that the global polynomial interpolation cannot resolve lo...
Computer-Aided Engineering of Semiconductor Integrated Circuits
1979-07-01
equation using a five point finite difference approximation. Section 4.3.6 describes the numerical techniques and iterative algorithms which are used ... neighbor points. This is generally referred to as a five point finite difference scheme on a rectangular grid, as described below. The finite difference ... problems in steady state have been analyzed by the finite difference method [4.16], [4.17] or finite element method [4.18], [4.19] as reported last
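The five point finite difference scheme referred to above couples each interior node to its four nearest neighbours on a rectangular grid. A minimal sketch of the stencil with Jacobi iteration, one of the simplest of the iterative algorithms of the kind described, applied to Laplace's equation on a square (the grid size and boundary data here are illustrative):

```python
import numpy as np

# Five-point stencil for Laplace's equation with Jacobi relaxation: each
# interior node is replaced by the average of its four nearest neighbours.
# The boundary rows/columns of u hold the Dirichlet data and are never
# modified by the update.
def jacobi_laplace(u, iters=2000):
    for _ in range(iters):
        un = u.copy()
        un[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                 u[1:-1, :-2] + u[1:-1, 2:])
        u = un
    return u

# example: unit square, u = 1 on the top edge and 0 on the other three sides
n = 21
u0 = np.zeros((n, n))
u0[0, :] = 1.0
u = jacobi_laplace(u0)
```

By superposition of the four rotated one-hot-edge problems, the converged discrete value at the centre of the square is exactly 1/4, and the discrete maximum principle keeps every interior value strictly between the boundary extremes.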
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.
1988-01-01
Numerical techniques for parameter identification in distributed-parameter systems are developed analytically. A general convergence and stability framework (for continuous dependence on observations) is derived for first-order systems on the basis of (1) a weak formulation in terms of sesquilinear forms and (2) the resolvent convergence form of the Trotter-Kato approximation. The extension of this framework to second-order systems is considered.
Gravitational Lensing from a Spacetime Perspective.
Perlick, Volker
2004-01-01
The theory of gravitational lensing is reviewed from a spacetime perspective, without quasi-Newtonian approximations. More precisely, the review covers all aspects of gravitational lensing where light propagation is described in terms of lightlike geodesics of a metric of Lorentzian signature. It includes the basic equations and the relevant techniques for calculating the position, the shape, and the brightness of images in an arbitrary general-relativistic spacetime. It also includes general theorems on the classification of caustics, on criteria for multiple imaging, and on the possible number of images. The general results are illustrated with examples of spacetimes where the lensing features can be explicitly calculated, including the Schwarzschild spacetime, the Kerr spacetime, the spacetime of a straight string, plane gravitational waves, and others.
A full-wave Helmholtz model for continuous-wave ultrasound transmission.
Huttunen, Tomi; Malinen, Matti; Kaipio, Jari P; White, Phillip Jason; Hynynen, Kullervo
2005-03-01
A full-wave Helmholtz model of continuous-wave (CW) ultrasound fields may offer several attractive features over widely used partial-wave approximations. For example, many full-wave techniques can be easily adjusted for complex geometries, and multiple reflections of sound are automatically taken into account in the model. To date, however, the full-wave modeling of CW fields in general 3D geometries has been avoided due to the large computational cost associated with the numerical approximation of the Helmholtz equation. Recent developments in computing capacity together with improvements in finite element type modeling techniques are making possible wave simulations in 3D geometries which reach over tens of wavelengths. The aim of this study is to investigate the feasibility of a full-wave solution of the 3D Helmholtz equation for modeling of continuous-wave ultrasound fields in an inhomogeneous medium. The numerical approximation of the Helmholtz equation is computed using the ultraweak variational formulation (UWVF) method. In addition, an inverse problem technique is utilized to reconstruct the velocity distribution on the transducer which is used to model the sound source in the UWVF scheme. The modeling method is verified by comparing simulated and measured fields in the case of transmission of 531 kHz CW fields through layered plastic plates. The comparison shows a reasonable agreement between simulations and measurements at low angles of incidence but, due to mode conversion, the Helmholtz model becomes insufficient for simulating ultrasound fields in plates at large angles of incidence.
A direct-measurement technique for estimating discharge-chamber lifetime [for ion thrusters]
NASA Technical Reports Server (NTRS)
Beattie, J. R.; Garvin, H. L.
1982-01-01
The use of short-term measurement techniques for predicting the wearout of ion thrusters resulting from sputter-erosion damage is investigated. The laminar-thin-film technique is found to provide high precision erosion-rate data, although the erosion rates are generally substantially higher than those found during long-term erosion tests, so that the results must be interpreted in a relative sense. A technique for obtaining absolute measurements is developed using a masked-substrate arrangement. This new technique provides a means for estimating the lifetimes of critical discharge-chamber components based on direct measurements of sputter-erosion depths obtained during short-duration (approximately 1 hr) tests. Results obtained using the direct-measurement technique are shown to agree with sputter-erosion depths calculated for the plasma conditions of the test. The direct-measurement approach is found to be applicable to both mercury and argon discharge-plasma environments and will be useful for estimating the lifetimes of inert gas and extended performance mercury ion thrusters currently under development.
Tools for Analysis and Visualization of Large Time-Varying CFD Data Sets
NASA Technical Reports Server (NTRS)
Wilhelms, Jane; VanGelder, Allen
1997-01-01
In the second year, we continued to build upon and improve the scanline-based direct volume renderer that we developed in the first year of this grant. This extremely general rendering approach can handle regular or irregular grids, including overlapping multiple grids, and polygon mesh surfaces. It runs in parallel on multi-processors. It can also be used in conjunction with a k-d tree hierarchy, where approximate models and error terms are stored in the nodes of the tree, and approximate fast renderings can be created. We have extended our software to handle time-varying data where the data changes but the grid does not. We are now working on extending it to handle more general time-varying data. We have also developed a new extension of our direct volume renderer that uses automatic decimation of the 3D grid, as opposed to an explicit hierarchy. We explored this alternative approach as being more appropriate for very large data sets, where the extra expense of a tree may be unacceptable. We also describe a new approach to direct volume rendering that uses hardware 3D textures and incorporates lighting effects. Volume rendering using hardware 3D textures is extremely fast, and machines capable of using this technique are becoming more affordable. While this technique is, at present, limited to use with regular grids, we are pursuing algorithms for extending the approach to more general grid types. We have also begun to explore a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH '96. In our initial implementation, we automatically image the volume from 32 equidistant positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation. We are studying whether this will give a quantitative measure of the effects of approximation.
We have created new tools for exploring the differences between images produced by various rendering methods. Images created by our software can be stored in the SGI RGB format. Our idtools software reads in a pair of images and compares them using various metrics. The differences between the images under the RGB, HSV, and HSL color models can be calculated and shown. We can also calculate the auto-correlation function and the Fourier transform of the images and image differences. We will explore how these image differences compare in order to find useful metrics for quantifying the success of various visualization approaches. In general, progress was consistent with our research plan for the second year of the grant.
NASA Astrophysics Data System (ADS)
Khusainov, T. A.; Shalashov, A. G.; Gospodchikov, E. D.
2018-05-01
The field structure of quasi-optical wave beams tunneled through the evanescence region in the vicinity of the plasma cutoff in a nonuniform magnetoactive plasma is analyzed. This problem is traditionally associated with the process of linear transformation of ordinary and extraordinary waves. An approximate analytical solution is constructed for a rather general magnetic configuration applicable to spherical tokamaks, optimized stellarators, and other magnetic confinement systems with a constant plasma density on magnetic surfaces. A general technique for calculating the transformation coefficient of a finite-aperture wave beam is proposed, and the physical conditions required for the most efficient transformation are analyzed.
The HVT technique and the 'uncertainty' relation for central potentials
NASA Astrophysics Data System (ADS)
Grypeos, M. E.; Koutroulos, C. G.; Oyewumi, K. J.; Petridou, Th
2004-08-01
The quantum mechanical hypervirial theorems (HVT) technique is used to treat the so-called 'uncertainty' relation for quite a general class of central potential wells, including the (reduced) Pöschl-Teller and the Gaussian one. It is shown that this technique is quite suitable for deriving an approximate analytic expression, in the form of a truncated power series expansion, for the dimensionless product P_nl ≡ ⟨r^2⟩_nl ⟨p^2⟩_nl / ħ^2, for every (deeply) bound state of a particle moving non-relativistically in the well, provided that a (dimensionless) parameter s is sufficiently small. Attention is also paid to a number of cases, among the limited existing ones, in which exact analytic or semi-analytic expressions for P_nl can be derived. Finally, numerical results are given and discussed.
Twidwell, Dirac; Wonkka, Carissa L; Sindelar, Michael T; Weir, John R
2015-01-01
Fire is widely recognized as a critical ecological and evolutionary driver that needs to be at the forefront of land management actions if conservation targets are to be met. However, the prevailing view is that prescribed fire is riskier than other land management techniques. Perceived risks associated with the application of fire limits its use and reduces agency support for prescribed burning in the private sector. As a result, considerably less cost-share support is given for prescribed fire compared to mechanical techniques. This study tests the general perception that fire is a riskier technique relative to other land management options. Due to the lack of data available to directly test this notion, we use a combination of approaches including 1) a comparison of fatalities resulting from different occupations that are proxies for techniques employed in land management, 2) a comparison of fatalities resulting from wildland fire versus prescribed fire, and 3) an exploration of causal factors responsible for wildland fire-related fatalities. This approach establishes a first approximation of the relative risk of fatality to private citizens using prescribed fire compared to other management techniques that are readily used in ecosystem management. Our data do not support using risks of landowner fatalities as justification for the use of alternative land management techniques, such as mechanical (machine-related) equipment, over prescribed fire. Vehicles and heavy machinery are consistently leading reasons for fatalities within occupations selected as proxies for management techniques employed by ranchers and agricultural producers, and also constitute a large proportion of fatalities among firefighters. Our study provides the foundation for agencies to establish data-driven decisions regarding the degree of support they provide for prescribed burning on private lands.
PMID:26465329
NASA Technical Reports Server (NTRS)
Poole, L. R.
1975-01-01
A study of the effects of using different methods for approximating bottom topography in a wave-refraction computer model was conducted. Approximation techniques involving quadratic least squares, cubic least squares, and constrained bicubic polynomial interpolation were compared for computed wave patterns and parameters in the region of Saco Bay, Maine. Although substantial local differences can be attributed to use of the different approximation techniques, results indicated that overall computed wave patterns and parameter distributions were quite similar.
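The quadratic least-squares technique compared above admits a compact sketch: fit z = a + bx + cy + dx^2 + exy + fy^2 to sampled depths by solving the normal equations. The soundings and coefficients below are synthetic illustrations, not Saco Bay data.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_quadratic_surface(pts):
    """Least-squares fit of z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2."""
    basis = lambda x, y: [1.0, x, y, x * x, x * y, y * y]
    AtA = [[0.0] * 6 for _ in range(6)]
    Atz = [0.0] * 6
    for x, y, z in pts:
        phi = basis(x, y)
        for i in range(6):
            Atz[i] += phi[i] * z
            for j in range(6):
                AtA[i][j] += phi[i] * phi[j]
    coeff = solve(AtA, Atz)
    return lambda x, y: sum(c * p for c, p in zip(coeff, basis(x, y)))

# Synthetic soundings drawn from a known quadratic bottom profile.
truth = lambda x, y: 5.0 + 0.3 * x - 0.2 * y + 0.05 * x * x + 0.01 * x * y + 0.02 * y * y
samples = [(x, y, truth(x, y)) for x in range(-3, 4) for y in range(-3, 4)]
depth = fit_quadratic_surface(samples)
```

Because the synthetic depths are exactly quadratic, the fitted surface reproduces them to rounding error; real bathymetry would leave a residual that differs between the fitting techniques compared in the study.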
Blending Velocities In Task Space In Computing Robot Motions
NASA Technical Reports Server (NTRS)
Volpe, Richard A.
1995-01-01
Blending of linear and angular velocities between sequential specified points in task space constitutes theoretical basis of improved method of computing trajectories followed by robotic manipulators. In method, generalized velocity-vector-blending technique provides relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. Velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames," which represent specified robot poses. Linear-velocity-blending functions chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities blended by use of first-order approximation of previous orientation-matrix-blending formulation. Angular-velocity approximation yields small residual error, which is quantified and corrected. Method offers both relative simplicity and speed needed for generation of robot-manipulator trajectories in real time.
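The cycloidal blending option mentioned above can be sketched in a few lines: a weight function with zero slope at both ends blends two segment velocity vectors componentwise. Function names and vectors are illustrative, not taken from the NTRS report.

```python
import math

def cycloidal_weight(t):
    """Cycloidal blend weight on [0, 1]: w(0)=0, w(1)=1, w'(0)=w'(1)=0."""
    return t - math.sin(2.0 * math.pi * t) / (2.0 * math.pi)

def blend_velocity(v_prev, v_next, t):
    """Blend two segment velocity vectors componentwise over normalized time t."""
    w = cycloidal_weight(min(max(t, 0.0), 1.0))
    return [a + w * (b - a) for a, b in zip(v_prev, v_next)]
```

The zero end slopes mean the blend joins each straight-line segment without a jump in acceleration, which is the point of preferring cycloidal over first-order blending.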
Linear approximations of nonlinear systems
NASA Technical Reports Server (NTRS)
Hunt, L. R.; Su, R.
1983-01-01
The development of a method for designing an automatic flight controller for short and vertical takeoff aircraft is discussed. This technique involves transformations of nonlinear systems to controllable linear systems and takes into account the nonlinearities of the aircraft. In general, the transformations cannot always be given in closed form. Using partial differential equations, an approximate linear system called the modified tangent model was introduced. A linear transformation of this tangent model to Brunovsky canonical form can be constructed, and from this the linear part (about a state space point x_0) of an exact transformation for the nonlinear system can be found. It is shown that a canonical expansion in Lie brackets about the point x_0 yields the same modified tangent model.
Finite state modeling of aeroelastic systems
NASA Technical Reports Server (NTRS)
Vepa, R.
1977-01-01
A general theory of finite state modeling of aerodynamic loads on thin airfoils and lifting surfaces performing completely arbitrary, small, time-dependent motions in an airstream is developed and presented. The nature of the behavior of the unsteady airloads in the frequency domain is explained, using as raw materials any of the unsteady linearized theories that have been mechanized for simple harmonic oscillations. Each desired aerodynamic transfer function is approximated by means of an appropriate Pade approximant, that is, a rational function of finite degree polynomials in the Laplace transform variable. The modeling technique is applied to several two dimensional and three dimensional airfoils. Circular, elliptic, rectangular and tapered planforms are considered as examples. Identical functions are also obtained for control surfaces for two and three dimensional airfoils.
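The simplest instance of the rational-function fitting described above is the [1/1] Padé approximant, which is determined by the first three series coefficients. The sketch below is purely illustrative (checked against e^x), not the paper's aerodynamic transfer functions.

```python
import math

def pade_1_1(c0, c1, c2):
    """[1/1] Padé approximant matching a series c0 + c1*x + c2*x^2 + O(x^3).

    Matching powers of x in (a0 + a1*x) = (c0 + c1*x + c2*x^2)(1 + b1*x)
    gives b1 = -c2/c1, a0 = c0, a1 = c1 + c0*b1.
    """
    b1 = -c2 / c1
    a0 = c0
    a1 = c1 + c0 * b1
    return lambda x: (a0 + a1 * x) / (1.0 + b1 * x)

# e^x has Taylor coefficients 1, 1, 1/2, so its [1/1] Padé is (1 + x/2)/(1 - x/2).
f = pade_1_1(1.0, 1.0, 0.5)
```

A rational approximant of this form has a pole, which is exactly what makes Padé approximants better than truncated polynomials at capturing the pole-like behavior of unsteady aerodynamic transfer functions in the Laplace variable.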
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haut, T. S.; Babb, T.; Martinsson, P. G.
2015-06-16
Our manuscript demonstrates a technique for efficiently solving the classical wave equation, the shallow water equations, and, more generally, equations of the form ∂u/∂t = Lu, where L is a skew-Hermitian differential operator. The idea is to explicitly construct an approximation to the time-evolution operator exp(τL) for a relatively large time-step τ. Recently developed techniques for approximating oscillatory scalar functions by rational functions, and accelerated algorithms for computing functions of discretized differential operators, are exploited. Principal advantages of the proposed method include: stability even for large time-steps, the possibility to parallelize in time over many characteristic wavelengths, and large speed-ups over existing methods in situations where simulations over long times are required. Numerical examples involving the 2D rotating shallow water equations and the 2D wave equation in an inhomogeneous medium are presented, and the method is compared to the 4th order Runge-Kutta (RK4) method and to the use of Chebyshev polynomials. The new method achieves high accuracy over long time intervals, at speeds that are orders of magnitude faster than both RK4 and the use of Chebyshev polynomials.
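The advantage of applying exp(τL) directly can already be seen on a 2x2 skew-symmetric toy operator (a rotation generator), where the exponential is a rotation matrix. The sketch below, with illustrative values, contrasts one exact large step with a single RK4 step of the same size; it is a toy model, not the paper's rational-approximation machinery.

```python
import math

def apply_L(u, w):
    """Skew-symmetric L: rotation generator with angular frequency w."""
    return [-w * u[1], w * u[0]]

def step_exact(u, w, tau):
    """One step with the exact time-evolution operator exp(tau*L), a rotation."""
    c, s = math.cos(w * tau), math.sin(w * tau)
    return [c * u[0] - s * u[1], s * u[0] + c * u[1]]

def step_rk4(u, w, tau):
    """Classical RK4 step for u' = L u."""
    k1 = apply_L(u, w)
    k2 = apply_L([u[i] + 0.5 * tau * k1[i] for i in range(2)], w)
    k3 = apply_L([u[i] + 0.5 * tau * k2[i] for i in range(2)], w)
    k4 = apply_L([u[i] + tau * k3[i] for i in range(2)], w)
    return [u[i] + tau / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]

norm = lambda u: math.hypot(u[0], u[1])
u0, w, tau = [1.0, 0.0], 1.0, 2.0   # one large step (tau*w = 2 radians)
u_exact = step_exact(u0, w, tau)
u_rk4 = step_rk4(u0, w, tau)
```

Since exp(τL) is unitary for skew-Hermitian L, the exact step preserves the norm for any τ, while a single RK4 step of that size visibly damps the solution; this is the stability property the abstract claims for large time-steps.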
Metamodels for Computer-Based Engineering Design: Survey and Recommendations
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.
1997-01-01
The use of statistical techniques to build approximations of expensive computer analysis codes pervades much of today's engineering design. These statistical approximations, or metamodels, are used to replace the actual expensive computer analyses, facilitating multidisciplinary, multiobjective optimization and concept exploration. In this paper we review several of these techniques, including design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We survey their existing application in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of statistical approximation techniques in given situations and how common pitfalls can be avoided.
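Of the metamodeling techniques surveyed, kriging admits a compact sketch: interpolate samples of an "expensive" function with a Gaussian covariance and use the surrogate for cheap predictions. Everything below (the lengthscale, nugget, sample sites, and the sine stand-in for an expensive code) is an illustrative assumption, not from the paper.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def kriging(xs, ys, length=0.7, nugget=1e-10):
    """Simple-kriging-style interpolator with a Gaussian covariance."""
    k = lambda a, b: math.exp(-((a - b) ** 2) / (2.0 * length ** 2))
    K = [[k(a, b) + (nugget if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    wts = solve(K, ys)
    return lambda x: sum(w * k(x, xi) for w, xi in zip(wts, xs))

# A handful of "expensive" evaluations, then cheap surrogate predictions.
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [math.sin(x) for x in xs]
surrogate = kriging(xs, ys)
```

Unlike a least-squares response surface, the kriging surrogate interpolates the training data, which is exactly the property the paper discusses when warning about applying traditional (error-averaging) statistics to deterministic codes.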
NASA Technical Reports Server (NTRS)
Patel, D. K.; Czarnecki, K. R.
1975-01-01
A theoretical investigation of the pressure distributions and drag characteristics was made for forward facing steps in turbulent flow at supersonic speeds. An approximate solution technique proposed by Uebelhack has been modified and extended to obtain a more consistent numerical procedure. A comparison of theoretical calculations with experimental data generally indicated good agreement over the experimentally available range of ratios of step height to boundary layer thickness from 7 to 0.05.
Friedman, David P; Maitino, Andrea J
2003-08-01
Debate in the neuroradiology community surrounds the amount of formal training in sonography of the carotid arteries that should be provided to fellows. This study was designed to assess current practice patterns at both academic and nonacademic practices regarding the performance of carotid sonography. A neurovascular radiology survey was sent to all 102 program directors of neuroradiology fellowships in the United States and Canada (academic practices). The survey was also sent to 146 randomly selected senior members of the ASNR (three per state, except one each for Alaska and Vermont) who were not affiliated with fellowship programs (nonacademic practices). Fifty-seven surveys from academic practices and 70 surveys from nonacademic practices were returned. Radiologists at academic practices performed approximately 42% of studies (general radiologists or sonography specialists, 36%; neuroradiologists, 5%; cardiovascular radiologists, 1%). Nonradiologists performed approximately 58% of studies (vascular surgeons, 47%; neurologists, 10%; cardiologists, 1%; neurosurgeons, <1%). Neuroradiologists performed carotid sonography at 11% (6/57) of academic practices. On average, radiologists at nonacademic practices performed approximately 62% of studies (general radiologists or sonography specialists, 38%; neuroradiologists, 15%; cardiovascular radiologists, 9%). Nonradiologists performed approximately 38% of studies (vascular surgeons, 25%; neurologists, 6%; cardiologists or internists, 6%). Neuroradiologists performed carotid sonography at 53% (37/70) of nonacademic practices. At most academic practices, neuroradiologists do not perform sonography of the carotid arteries. This may explain the reluctance of some fellowships to provide formal training in this technique. 
In contrast, although neuroradiologists perform carotid sonography at a majority of the nonacademic practices, the percentage of studies that they perform is small; moreover, neuroradiologists perform far fewer studies than do general radiologists or sonography specialists.
Erukhimovich, I Ya; Kudryavtsev, Ya V
2003-08-01
An extended generalization of the dynamic random phase approximation (DRPA) for L-component polymer systems is presented. Unlike the original version of the DRPA, which relates the (LxL) matrices of the collective density-density time correlation functions and the corresponding susceptibilities of concentrated polymer systems to those of the tracer macromolecules and so-called broken-links system (BLS), our generalized DRPA solves this problem for the (5xL) x (5xL) matrices of the coupled susceptibilities and time correlation functions of the component number, kinetic energy and flux densities. The presented technique is used to study propagation of sound and dynamic form-factor in disentangled (Rouse) monodisperse homopolymer melt. The calculated ultrasonic velocity and absorption coefficient reveal substantial frequency dispersion. The relaxation time tau is proportional to the degree of polymerization N, which is N times less than the Rouse time and evidences strong dynamic screening because of interchain interaction. We also discuss some peculiarities of the Brillouin scattering in polymer melts. In addition, a new convenient expression for the dynamic structure function of the single Rouse chain in (q,p) representation is found.
Proximal tibial osteotomy. A survivorship analysis.
Ritter, M A; Fechtman, R A
1988-01-01
Proximal tibial osteotomy is generally accepted as a treatment for the patient with unicompartmental arthritis. However, only a few reports of the long-term results of this procedure are available in the literature, and none have used the technique known as survivorship analysis. This technique has an advantage over conventional analysis because it does not exclude patients for inadequate follow-up, loss to follow-up, or patient death. In this study, survivorship analysis was applied to 78 proximal tibial osteotomies, performed exclusively by the senior author for the correction of a preoperative varus deformity, and a survival curve was constructed. It was concluded that the reliable longevity of the proximal tibial osteotomy is approximately 6 years.
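The survivorship (Kaplan-Meier) technique the authors apply can be sketched as a product over failure times, with censored patients contributing to the at-risk count but not counted as failures. The follow-up data below are hypothetical, not the study's 78 osteotomies.

```python
def kaplan_meier(records):
    """Kaplan-Meier estimator.

    records: (time, event) pairs, event = 1 for failure (e.g. revision)
    and 0 for censoring (lost to follow-up, death, short follow-up).
    Returns (time, survival) after each distinct failure time.
    """
    s = 1.0
    curve = []
    for t in sorted({t for t, e in records if e == 1}):
        n_at_risk = sum(1 for ti, _ in records if ti >= t)   # still under observation
        d = sum(1 for ti, e in records if ti == t and e == 1)  # failures at time t
        s *= 1.0 - d / n_at_risk
        curve.append((t, s))
    return curve

# Hypothetical follow-up in years: (time, 1 = revised, 0 = censored).
data = [(2, 1), (3, 0), (4, 1), (4, 1), (5, 0), (6, 1), (7, 0), (8, 0)]
curve = kaplan_meier(data)
```

This is precisely why the method does not have to exclude patients with inadequate follow-up: a censored record simply drops out of the at-risk set after its last observation time.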
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.
2004-01-01
A recent paper is generalized to a case where the spatial region is taken in R^3. The region is assumed to be a thin body, such as a panel on the wing or fuselage of an aerospace vehicle. The traditional h- as well as hp-finite element methods are applied to the surface defined in the x-y variables, while, through the thickness, the technique of the p-element is employed. A time and spatial discretization scheme, based upon an assumption of a certain weak singularity of ‖u_t‖_2, is used to derive an optimal a priori error estimate for the current method.
NASA Technical Reports Server (NTRS)
Smedes, H. W.; Linnerud, H. J.; Woolaver, L. B.; Su, M. Y.; Jayroe, R. R.
1972-01-01
Two clustering techniques were used for terrain mapping by computer of test sites in Yellowstone National Park. One test was made with multispectral scanner data using a composite technique which consists of (1) a strictly sequential statistical clustering which is a sequential variance analysis, and (2) a generalized K-means clustering. In this composite technique, the output of (1) is a first approximation of the cluster centers. This is the input to (2), which consists of steps to improve the determination of cluster centers by iterative procedures. Another test was made using the three emulsion layers of color-infrared aerial film as a three-band spectrometer. Relative film densities were analyzed using a simple clustering technique in three-color space. Important advantages of the clustering technique over conventional supervised computer programs are (1) human intervention, preparation time, and manipulation of data are reduced, (2) the computer map gives an unbiased indication of where best to select the reference ground-control data, (3) use of easy-to-obtain, inexpensive film, and (4) the geometric distortions can be easily rectified by simple standard photogrammetric techniques.
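The composite technique, a sequential first pass that seeds cluster centers followed by K-means refinement, can be sketched on toy two-band data. The threshold, points, and a distance-based seeding rule standing in for the sequential variance analysis are all illustrative assumptions.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def seed_centers(points, threshold):
    """First pass (stand-in for sequential clustering): create a new center
    whenever a point falls farther than `threshold` from all existing centers."""
    centers = []
    for p in points:
        if not centers or min(dist(p, c) for c in centers) > threshold:
            centers.append(p)
    return centers

def kmeans(points, centers, iters=20):
    """Second pass: generalized K-means (Lloyd) iterations refine the seeds."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda i: dist(p, centers[i]))
            groups[i].append(p)
        centers = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g)) if g else c
            for g, c in zip(groups, centers)
        ]
    return centers

# Two well-separated synthetic "spectral" clusters in a two-band space.
pts = [(0.0, 0.0), (0.2, 0.1), (-0.1, 0.2), (5.0, 5.0), (5.2, 4.9), (4.8, 5.1)]
centers = kmeans(pts, seed_centers(pts, threshold=2.0))
```

The point of the two-stage design is that the first pass supplies both the number of clusters and their approximate centers, so no supervised training data are needed before the K-means refinement.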
Sparse approximation problem: how rapid simulated annealing succeeds and fails
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Kabashima, Yoshiyuki
2016-03-01
Information processing techniques based on sparseness have been actively studied in several disciplines. Among them, a mathematical framework to approximately express a given dataset by a combination of a small number of basis vectors of an overcomplete basis is termed the sparse approximation. In this paper, we apply simulated annealing, a metaheuristic algorithm for general optimization problems, to sparse approximation in the situation where the given data have a planted sparse representation and noise is present. The result in the noiseless case shows that our simulated annealing works well in a reasonable parameter region: the planted solution is found fairly rapidly. This is true even in the case where a common relaxation of the sparse approximation problem, the G-relaxation, is ineffective. On the other hand, when the dimensionality of the data is close to the number of non-zero components, another metastable state emerges, and our algorithm fails to find the planted solution. This phenomenon is associated with a first-order phase transition. In the case of very strong noise, it is no longer meaningful to search for the planted solution. In this situation, our algorithm determines a solution with close-to-minimum distortion fairly quickly.
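A minimal version of the annealing scheme: anneal over size-k supports of a random dictionary whose signal has a planted sparse representation. Fixing all nonzero coefficients at 1 is a simplification of the paper's setting, and every parameter (dimensions, schedule, seed) is illustrative.

```python
import math
import random

def energy(support, A, y):
    """Squared residual ||y - sum of selected columns||^2 (coefficients fixed at 1)."""
    m = len(y)
    r = [y[i] - sum(A[i][j] for j in support) for i in range(m)]
    return sum(v * v for v in r)

def anneal(A, y, n, k, sweeps=400, t0=1.0, cool=0.99, seed=0):
    """Metropolis annealing over k-element supports with a geometric schedule."""
    rng = random.Random(seed)
    support = set(rng.sample(range(n), k))
    e = energy(support, A, y)
    t = t0
    for _ in range(sweeps):
        # Propose swapping one selected column for an unselected one (keeps sparsity k).
        out = rng.choice(sorted(support))
        inn = rng.choice([j for j in range(n) if j not in support])
        cand = (support - {out}) | {inn}
        e_new = energy(cand, A, y)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / t):
            support, e = cand, e_new
        t *= cool
    return support, e

# Planted instance: y is the sum of k hidden columns of a random dictionary A.
random.seed(1)
m, n, k = 12, 20, 3
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
planted = {2, 7, 15}
y = [sum(A[i][j] for j in planted) for i in range(m)]
support, e_final = anneal(A, y, n, k)
```

With a fixed seed the run is reproducible; in the benign regime the abstract describes, such a schedule typically reaches the zero-energy planted support, while near the metastable regime it can stall at a nonzero residual.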
NASA Astrophysics Data System (ADS)
Hirt, Christian; Rexer, Moritz; Claessens, Sten; Rummel, Reiner
2017-10-01
Comparisons between high-degree models of the Earth's topographic and gravitational potential may give insight into the quality and resolution of the source data sets, provide feedback on the modelling techniques and help to better understand the gravity field composition. Degree correlations (cross-correlation coefficients) or reduction rates (quantifying the amount of topographic signal contained in the gravitational potential) are indicators used in a number of contemporary studies. However, depending on the modelling techniques and underlying levels of approximation, the correlation at high degrees may vary significantly, as do the conclusions drawn. The present paper addresses this problem by attempting to provide a guide on global correlation measures with particular emphasis on approximation effects and variants of topographic potential modelling. We investigate and discuss the impact of different effects (e.g., truncation of series expansions of the topographic potential, mass compression, ellipsoidal versus spherical approximation, ellipsoidal harmonic coefficient versus spherical harmonic coefficient (SHC) representation) on correlation measures. Our study demonstrates that the correlation coefficients are realistic only when the model's harmonic coefficients of a given degree are largely independent of the coefficients of other degrees, permitting degree-wise evaluations. This is the case, e.g., when both models are represented in terms of SHCs and spherical approximation (i.e. spherical arrangement of field-generating masses). Alternatively, a representation in ellipsoidal harmonics can be combined with ellipsoidal approximation. The usual ellipsoidal approximation level (i.e. ellipsoidal mass arrangement) is shown to bias correlation coefficients when SHCs are used. Importantly, gravity models from the International Centre for Global Earth Models (ICGEM) are inherently based on this approximation level. 
A transformation is presented that converts ICGEM geopotential models from ellipsoidal to spherical approximation. The transformation is applied to generate a spherical transform of EGM2008 (sphEGM2008) that can meaningfully be correlated degree-wise with the topographic potential. We exploit this new technique and compare a number of models of topographic potential constituents (e.g., potential implied by land topography, ocean water masses) based on the Earth2014 global relief model and a mass-layer forward modelling technique with sphEGM2008. In contrast to previous findings, our results show very significant short-scale correlation between Earth's gravitational potential and the potential generated by Earth's land topography (correlation +0.92, and 60% of EGM2008 signals are delivered through the forward modelling). Our tests reveal that the potential generated by Earth's ocean water masses is largely unrelated to the geopotential at short scales, suggesting that altimetry-derived gravity and/or bathymetric data sets are significantly underpowered at 5 arc-min scales. We further decompose the topographic potential into the Bouguer shell and terrain correction and show that they are responsible for about 20 and 25% of EGM2008 short-scale signals, respectively. As a general conclusion, the paper shows the importance of using compatible models in topographic/gravitational potential comparisons and recommends the use of SHCs together with spherical approximation or EHCs with ellipsoidal approximation in order to avoid biases in the correlation measures.
Multi-level adaptive finite element methods. 1: Variation problems
NASA Technical Reports Server (NTRS)
Brandt, A.
1979-01-01
A general numerical strategy for solving partial differential equations and other functional problems by cycling between coarser and finer levels of discretization is described. Optimal discretization schemes are provided together with very fast general solvers. The strategy is described in terms of finite element discretizations of general nonlinear minimization problems. The basic processes (relaxation sweeps, fine-grid-to-coarse-grid transfers of residuals, coarse-to-fine interpolations of corrections) are directly and naturally determined by the objective functional and the sequence of approximation spaces. The natural processes, however, are not always optimal. Concrete examples are given and some new techniques are reviewed, including the local truncation extrapolation and a multilevel procedure for inexpensively solving chains of many boundary value problems, such as those arising in the solution of time-dependent problems.
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively. Computations with the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
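The preconditioned conjugate gradient iteration at the core of the program can be sketched with a Jacobi (diagonal) preconditioner; the tiny SPD system below merely stands in for the mixed-model equations, which in practice are far too large to hold as a dense matrix.

```python
def pcg(A, b, tol=1e-10, max_iter=200):
    """Conjugate gradients with a Jacobi (diagonal) preconditioner for SPD A."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    precond = lambda r: [r[i] / A[i][i] for i in range(n)]   # M = diag(A)
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    x = [0.0] * n
    r = b[:]                 # residual b - A*x with x = 0
    z = precond(r)
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / dot(p, Ap)
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = precond(r)
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = [z[i] + beta * p[i] for i in range(n)]
    return x

# Small SPD system standing in for the mixed-model equations.
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 5.0]]
b = [1.0, 2.0, 3.0]
x = pcg(A, b)
```

The "iteration on data" idea in the paper amounts to replacing the explicit `matvec` above with a pass over the records, so the coefficient matrix is never stored; the CG recurrences themselves are unchanged.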
Wang, Ning; Sun, Jing-Chao; Han, Min; Zheng, Zhongjiu; Er, Meng Joo
2017-09-06
In this paper, for a general class of uncertain nonlinear (cascade) systems, including unknown dynamics, which are not feedback linearizable and cannot be solved by existing approaches, an innovative adaptive approximation-based regulation control (AARC) scheme is developed. Within the framework of adding a power integrator (API), by deriving adaptive laws for output weights and prediction error compensation pertaining to a single-hidden-layer feedforward network (SLFN) from the Lyapunov synthesis, a series of SLFN-based approximators are explicitly constructed to exactly dominate completely unknown dynamics. By virtue of significant advancements on the API technique, an adaptive API methodology is eventually established in combination with SLFN-based adaptive approximators, and it contributes to a recursive mechanism for the AARC scheme. As a consequence, the output regulation error can asymptotically converge to the origin, and all other signals of the closed-loop system are uniformly ultimately bounded. Simulation studies and comprehensive comparisons with backstepping- and API-based approaches demonstrate that the proposed AARC scheme achieves remarkable performance and superiority in dealing with unknown dynamics.
An implicit-iterative solution of the heat conduction equation with a radiation boundary condition
NASA Technical Reports Server (NTRS)
Williams, S. D.; Curry, D. M.
1977-01-01
For the problem of predicting one-dimensional heat transfer between conducting and radiating mediums by an implicit finite difference method, four different formulations were used to approximate the surface radiation boundary condition while retaining an implicit formulation for the interior temperature nodes. These formulations are an explicit boundary condition, a linearized boundary condition, an iterative boundary condition, and a semi-iterative boundary condition. The results of these methods in predicting surface temperature on the space shuttle orbiter thermal protection system model under a variety of heating rates were compared. The iterative technique kept the surface temperature bounded at each step. While the linearized and explicit methods were generally more efficient, the iterative and semi-iterative techniques provided a realistic surface temperature response without requiring step size control techniques.
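The iterative boundary treatment amounts to solving a nonlinear surface energy balance at each time step. A minimal sketch, with assumed material values (emissivity, conductivity, node spacing) and a made-up heating rate, using Newton's method on the balance eps*sigma*T^4 + (k/dx)*(T - T_interior) = q:

```python
# Sketch of an iterative radiating-surface boundary condition: Newton
# iteration on the surface energy balance. All material values and the
# heating rate are assumptions, not the shuttle TPS model of the paper.

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)

def surface_temp(q, T_in, eps=0.85, k=0.05, dx=0.01, tol=1e-8):
    """Solve eps*SIGMA*T^4 + (k/dx)*(T - T_in) - q = 0 for T by Newton."""
    c = k / dx
    T = T_in                              # initial guess: interior temperature
    for _ in range(50):
        f = eps * SIGMA * T**4 + c * (T - T_in) - q
        fp = 4.0 * eps * SIGMA * T**3 + c # derivative of the balance
        T_new = T - f / fp
        if abs(T_new - T) < tol:
            return T_new
        T = T_new
    return T

T = surface_temp(q=5.0e4, T_in=300.0)     # 50 kW/m^2 on a 300 K interior
```

A linearized boundary condition would instead freeze the T^3 factor at the previous time level and take a single step; the Newton loop above is what "bounds the surface temperature at each step" at the cost of a few extra evaluations.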
General relaxation schemes in multigrid algorithms for higher order singularity methods
NASA Technical Reports Server (NTRS)
Oskam, B.; Fray, J. M. J.
1981-01-01
Relaxation schemes based on approximate and incomplete factorization techniques (AF) are described. The AF schemes allow construction of a fast multigrid method for solving integral equations of the second and first kind. The smoothing factors for integral equations of the first kind, and their comparison with corresponding results for equations of the second kind, are a novel item. Application of the multigrid algorithm shows convergence to the level of truncation error of a second order accurate panel method.
Quasi-linear theory via the cumulant expansion approach
NASA Technical Reports Server (NTRS)
Jones, F. C.; Birmingham, T. J.
1974-01-01
The cumulant expansion technique of Kubo was used to derive an integro-differential equation for f, the average one-particle distribution function for particles being accelerated by electric and magnetic fluctuations of a general nature. For a very restricted class of fluctuations, the f equation degenerates exactly to a differential equation of Fokker-Planck type. Quasi-linear theory, including the adiabatic assumption, is an exact theory for this limited class of fluctuations. For more physically realistic fluctuations, however, quasi-linear theory is at best approximate.
Transient Gratings, Four-Wave Mixing and Polariton Effects in Nonlinear Optics
1991-06-01
ω, are, under some very general conditions, equal to the Fourier transform of the TG signal [36]. The possibility of exciton localization [37-39], which is the analogue of the Anderson electron localization, could also be probed ideally by the grating technique [40]. In this review we develop a ... often handled using a mean-field theory (the local-field approximation) [7]. Our general formalism reduces to these common procedures ...
Gálvez, Akemi; Iglesias, Andrés
2013-01-01
Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
PMID:24376380
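Once the knots and the data parameterization are fixed, the remaining fit is linear least squares in the spline coefficients. The sketch below illustrates that step with degree-1 (hat) B-splines and normal equations solved by Gaussian elimination rather than the SVD used in the paper; the data, parameterization, and knot vector are all made up.

```python
# Sketch: with knots and data parameters fixed, spline fitting reduces to
# linear least squares in the coefficients. Degree-1 B-splines and normal
# equations are used here for brevity (the paper uses SVD).

def hat(t, nodes, i):
    """Degree-1 B-spline: the hat function centered at nodes[i]."""
    m = nodes[i]
    if t == m:
        return 1.0
    if i > 0 and nodes[i - 1] < t < m:
        return (t - nodes[i - 1]) / (m - nodes[i - 1])
    if i < len(nodes) - 1 and m < t < nodes[i + 1]:
        return (nodes[i + 1] - t) / (nodes[i + 1] - m)
    return 0.0

def solve(M, y):
    """Dense Gaussian elimination with partial pivoting."""
    n = len(y)
    A = [row[:] + [y[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            fac = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= fac * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

nodes = [0.0, 0.25, 0.5, 0.75, 1.0]      # assumed knot vector
ts = [k / 20.0 for k in range(21)]       # assumed data parameterization
ys = [t * t for t in ts]                 # data sampled from y = t^2
nb = len(nodes)
B = [[hat(t, nodes, i) for i in range(nb)] for t in ts]
# Normal equations (B^T B) c = B^T y for the spline coefficients c.
G = [[sum(B[k][i] * B[k][j] for k in range(len(ts))) for j in range(nb)]
     for i in range(nb)]
rhs = [sum(B[k][i] * ys[k] for k in range(len(ts))) for i in range(nb)]
coef = solve(G, rhs)
fit = [sum(coef[i] * B[k][i] for i in range(nb)) for k in range(len(ts))]
```

The hard part of the paper's problem is exactly what this sketch assumes away: choosing the parameter values and knots, which is where the firefly metaheuristic and the knot refinement come in.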
Parameter estimation in a structural acoustic system with fully nonlinear coupling conditions
NASA Technical Reports Server (NTRS)
Banks, H. T.; Smith, Ralph C.
1994-01-01
A methodology for estimating physical parameters in a class of structural acoustic systems is presented. The general model under consideration consists of an interior cavity which is separated from an exterior noise source by an enclosing elastic structure. Piezoceramic patches are bonded to or embedded in the structure; these can be used both as actuators and sensors in applications ranging from the control of interior noise levels to the determination of structural flaws through nondestructive evaluation techniques. The presence and excitation of patches, however, changes the geometry and material properties of the structure and introduces unknown patch parameters, thus necessitating the development of parameter estimation techniques which are applicable in this coupled setting. In developing a framework for approximation, parameter estimation and implementation, strong consideration is given to the fact that the input operator is unbounded due to the discrete nature of the patches. Moreover, the model is weakly nonlinear as a result of the coupling mechanism between the structural vibrations and the interior acoustic dynamics. Within this context, an illustrative model is given, well-posedness and approximation results are discussed, and an applicable parameter estimation methodology is presented. The scheme is then illustrated through several numerical examples with simulations modeling a variety of commonly used structural acoustic techniques for system excitation and data collection.
Simple skin-stretching device in assisted tension-free wound closure
Cheng, Li-Fu; Lee, Jiunn-Tat; Hsu, Honda; Wu, Meng-Si
2017-01-01
Background: Numerous conventional wound reconstruction methods such as wound undermining with direct suture, skin graft, and flap surgery can be used to treat large wounds. The adequate undermining of the skin flaps of a wound is a commonly used technique for achieving the closure of large tension wounds; however, the use of tension to approximate and suture the skin flaps can cause ischemic marginal necrosis. The purpose of this study is to use elastic rubber bands to relieve the tension of direct wound closure, simultaneously minimizing the risks of wound dehiscence and wound edge ischemia that lead to necrosis. Materials and Methods: This retrospective study was conducted to evaluate our clinical experiences with 22 large wounds, which involved performing primary closures under a considerable amount of tension by using elastic rubber bands in a skin-stretching technique following a wide undermining procedure. Assessment of the results entailed complete wound healing and related complications. Results: All 22 wounds in our study showed fair to good results except for one. The mean success rate was approximately 95.45%. Conclusion: The simple skin-stretching design enabled tension-free skin closure, which pulled the bilateral undermined skin flaps as bilateral fasciocutaneous advancement flaps. The skin-stretching technique was generally successful. PMID:28195891
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mai, Sebastian; Marquetand, Philipp; González, Leticia
2014-08-21
An efficient perturbational treatment of spin-orbit coupling within the framework of high-level multi-reference techniques has been implemented in the most recent version of the COLUMBUS quantum chemistry package, extending the existing fully variational two-component (2c) multi-reference configuration interaction singles and doubles (MRCISD) method. The proposed scheme follows related implementations of quasi-degenerate perturbation theory (QDPT) model space techniques. Our model space is built either from uncontracted, large-scale scalar relativistic MRCISD wavefunctions or based on the scalar-relativistic solutions of the linear-response-theory-based multi-configurational averaged quadratic coupled cluster method (LRT-MRAQCC). The latter approach allows for a consistent, approximately size-consistent and size-extensive treatment of spin-orbit coupling. The approach is described in detail and compared to a number of related techniques. The inherent accuracy of the QDPT approach is validated by comparing cuts of the potential energy surfaces of acrolein and its S, Se, and Te analogues with the corresponding data obtained from matching fully variational spin-orbit MRCISD calculations. The conceptual availability of approximate analytic gradients with respect to geometrical displacements is an attractive feature of the 2c-QDPT-MRCISD and 2c-QDPT-LRT-MRAQCC methods for structure optimization and ab initio molecular dynamics simulations.
Nguyen, N; Milanfar, P; Golub, G
2001-01-01
In many image restoration/resolution enhancement applications, the blurring process, i.e., point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problem from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation method (GCV). We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
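For a blur operator diagonalized by the SVD, the GCV score has a cheap closed form; the point of the paper's Lanczos and Gauss-quadrature machinery is to approximate the trace term when no such diagonalization is affordable. A sketch with made-up singular values and data follows.

```python
# Sketch: generalized cross-validation (GCV) for choosing the Tikhonov
# regularization parameter, in the easy case of a diagonal operator
# diag(s) where the trace is explicit. Singular values and data are
# assumptions; the paper's contribution is approximating the trace for
# large, non-diagonalized systems via Lanczos/Gauss quadrature.

import random

def gcv_score(lam, s, y):
    """GCV(lam) = ||(I - A(lam)) y||^2 / trace(I - A(lam))^2 for diag(s)."""
    n = len(y)
    resid = sum((y[i] * lam / (s[i] ** 2 + lam)) ** 2 for i in range(n))
    trace = sum(lam / (s[i] ** 2 + lam) for i in range(n))
    return resid / trace ** 2

random.seed(0)
n = 50
s = [1.0 / (1 + i) for i in range(n)]                 # decaying singular values
y = [s[i] * 1.0 + random.gauss(0.0, 0.05) for i in range(n)]  # blurred + noise

lams = [10 ** (k / 4.0) for k in range(-32, 1)]       # grid over 1e-8 .. 1
best = min(lams, key=lambda lam: gcv_score(lam, s, y))
```

The appeal of GCV here is that it needs no estimate of the noise level, only the residual and the effective degrees of freedom captured by the trace.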
Metallic lithium by quantum Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugiyama, G.; Zerah, G.; Alder, B.J.
Lithium was chosen as the simplest known metal for the first application of quantum Monte Carlo methods in order to evaluate the accuracy of conventional one-electron band theories. Lithium has been extensively studied using such techniques. Band theory calculations have certain limitations in general and specifically in their application to lithium. Results depend on such factors as charge shape approximations (muffin tins), pseudopotentials (a special problem for lithium, where the lack of p core states requires a strong pseudopotential), and the form and parameters chosen for the exchange potential. The calculations are all one-electron methods in which the correlation effects are included in an ad hoc manner. This approximation may be particularly poor in the high compression regime, where the core states become delocalized. Furthermore, band theory provides only self-consistent results rather than strict limits on the energies. The quantum Monte Carlo method is a totally different technique using a many-body rather than a mean field approach which yields an upper bound on the energies. 18 refs., 4 figs., 1 tab.
Automated segmentation and feature extraction of product inspection items
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.
1997-03-01
X-ray film and linescan images of pistachio nuts on conveyor trays for product inspection are considered. The final objective is the categorization of pistachios into good, blemished and infested nuts. A crucial step before classification is the separation of touching products and the extraction of features essential for classification. This paper addresses new detection and segmentation algorithms to isolate touching or overlapping items. These algorithms employ a new filter, a new watershed algorithm, and morphological processing to produce nutmeat-only images. Tests on a large database of x-ray film and real-time x-ray linescan images of around 2900 small, medium and large nuts showed excellent segmentation results. A new technique to detect and segment dark regions in nutmeat images is also presented and tested on approximately 300 x-ray film and approximately 300 real-time linescan x-ray images with 95-97 percent detection and correct segmentation. New algorithms are described that determine nutmeat fill ratio and locate splits in nutmeat. The techniques formulated in this paper are of general use in many different product inspection and computer vision problems.
A computer model for the 30S ribosome subunit.
Kuntz, I D; Crippen, G M
1980-01-01
We describe a computer-generated model for the locations of the 21 proteins of the 30S subunit of the E. coli ribosome. The model uses a new method of incorporating experimental measurements based on a mathematical technique called distance geometry. In this paper, we use data from two sources: immunoelectron microscopy and neutron-scattering studies. The data are generally self-consistent and lead to a set of relatively well-defined structures in which individual protein coordinates differ by approximately 20 Å from one structure to another. Two important features of this calculation are the use of extended proteins rather than just the centers of mass, and the ability to confine the protein locations within an arbitrary boundary surface so that only solutions with an approximate 30S "shape" are permitted. PMID:7020786
A generalized vortex lattice method for subsonic and supersonic flow applications
NASA Technical Reports Server (NTRS)
Miranda, L. R.; Elliot, R. D.; Baker, W. M.
1977-01-01
If the discrete vortex lattice is considered as an approximation to the surface-distributed vorticity, then the concept of the generalized principal part of an integral yields a residual term to the vorticity-induced velocity field. The proper incorporation of this term to the velocity field generated by the discrete vortex lines renders the present vortex lattice method valid for supersonic flow. Special techniques for simulating nonzero thickness lifting surfaces and fusiform bodies with vortex lattice elements are included. Thickness effects of wing-like components are simulated by a double (biplanar) vortex lattice layer, and fusiform bodies are represented by a vortex grid arranged on a series of concentric cylindrical surfaces. The analysis of sideslip effects by the subject method is described. Numerical considerations peculiar to the application of these techniques are also discussed. The method has been implemented in a digital computer code. A user's manual is included along with a complete FORTRAN compilation, an executed case, and conversion programs for transforming input for the NASA wave drag program.
NASA Astrophysics Data System (ADS)
Xia, Ya-Rong; Zhang, Shun-Li; Xin, Xiang-Peng
2018-03-01
In this paper, we propose the concept of the perturbed invariant subspaces (PISs), and study the approximate generalized functional variable separation solution for the nonlinear diffusion-convection equation with weak source by the approximate generalized conditional symmetries (AGCSs) related to the PISs. Complete classification of the perturbed equations which admit the approximate generalized functional separable solutions (AGFSSs) is obtained. As a consequence, some AGFSSs to the resulting equations are explicitly constructed by way of examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broda, Jill Terese
The neutron flux across the nuclear reactor core is of interest to reactor designers and others. The diffusion equation, an integro-differential equation in space and energy, is commonly used to determine the flux level. However, the solution of a simplified version of this equation when automated is very time consuming. Since the flux level changes with time, in general, this calculation must be made repeatedly. Therefore, solution techniques that speed the calculation while maintaining accuracy are desirable. One factor that contributes to the solution time is the spatial flux shape approximation used. It is common practice to use the same order flux shape approximation in each energy group even though this method may not be the most efficient. The one-dimensional, two-energy group diffusion equation was solved, for the node average flux and core k-effective, using two sets of spatial shape approximations for each of three reactor types. A fourth-order approximation in both energy groups forms the first set of approximations used. The second set used combines a second-order approximation with a fourth-order approximation in energy group two. Comparison of the results from the two approximation sets show that the use of a different order spatial flux shape approximation results in considerable loss in accuracy for the pressurized water reactor modeled. However, the loss in accuracy is small for the heavy water and graphite reactors modeled. The use of different order approximations in each energy group produces mixed results. Further investigation into the accuracy and computing time is required before any quantitative advantage of the use of the second-order approximation in energy group one and the fourth-order approximation in energy group two can be determined.
Density functional theory calculations of the water interactions with ZrO2 nanoparticles Y2O3 doped
NASA Astrophysics Data System (ADS)
Subhoni, Mekhrdod; Kholmurodov, Kholmirzo; Doroshkevich, Aleksandr; Asgerov, Elmar; Yamamoto, Tomoyuki; Lyubchyk, Andrei; Almasan, Valer; Madadzada, Afag
2018-03-01
Development of new electricity generation techniques is one of the most relevant tasks, especially nowadays under conditions of extreme growth in energy consumption. The exothermic heterogeneous electrochemical conversion of energy to electric energy through interaction of the ZrO2 based nanopowder system with atmospheric moisture is one way of obtaining electric energy. The conversion into electric form of the energy of water molecule adsorption in 3 mol% Y2O3 doped ZrO2 nanopowder systems was investigated using density functional theory calculations. The density functional theory calculations were carried out in the Kohn-Sham formulation, where the exchange-correlation potential is approximated by a functional of the electronic density. The electronic density, total energy, and band structure calculations were performed using the all-electron, full-potential, linear augmented plane wave method, with the exchange-correlation functional treated in the local density approximation, the generalized gradient approximation, and their hybrids.
Approximate techniques of structural reanalysis
NASA Technical Reports Server (NTRS)
Noor, A. K.; Lowder, H. E.
1974-01-01
A study is made of two approximate techniques for structural reanalysis. These include Taylor series expansions for response variables in terms of design variables and the reduced-basis method. In addition, modifications to these techniques are proposed to overcome some of their major drawbacks. The modifications include a rational approach to the selection of the reduced-basis vectors and the use of Taylor series approximation in an iterative process. For the reduced basis a normalized set of vectors is chosen which consists of the original analyzed design and the first-order sensitivity analysis vectors. The use of the Taylor series approximation as a first (initial) estimate in an iterative process, can lead to significant improvements in accuracy, even with one iteration cycle. Therefore, the range of applicability of the reanalysis technique can be extended. Numerical examples are presented which demonstrate the gain in accuracy obtained by using the proposed modification techniques, for a wide range of variations in the design variables.
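The proposed modification, using the Taylor estimate as the starting point of an iterative refinement, can be sketched on a two-spring example (all stiffness values and the load are assumed). The refinement reuses the factorized baseline stiffness, which is what makes reanalysis cheaper than a fresh solve.

```python
# Sketch of Taylor-series reanalysis on a 2-DOF series-spring model:
# first-order estimate of the displacements at a modified design, then
# iterative refinement that reuses the baseline solve. All values assumed.

def stiffness(k1, k2):
    """2-DOF series-spring stiffness matrix."""
    return [[k1 + k2, -k2], [-k2, k2]]

def solve2(K, f):
    """Direct 2x2 solve (stands in for the factorized baseline stiffness)."""
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return [(K[1][1] * f[0] - K[0][1] * f[1]) / det,
            (K[0][0] * f[1] - K[1][0] * f[0]) / det]

def matvec(K, u):
    return [K[0][0] * u[0] + K[0][1] * u[1],
            K[1][0] * u[0] + K[1][1] * u[1]]

f = [0.0, 1.0]                        # tip load
k1, k2, dk1 = 10.0, 5.0, 2.0          # baseline design and a change in k1
K0 = stiffness(k1, k2)
u0 = solve2(K0, f)                    # baseline (analyzed) design

# First-order Taylor estimate: du/dk1 = -K0^{-1} (dK/dk1) u0.
dK = [[1.0, 0.0], [0.0, 0.0]]         # dK/dk1 for this model
du = solve2(K0, [-v for v in matvec(dK, u0)])
u = [u0[i] + dk1 * du[i] for i in range(2)]   # Taylor first estimate

# Refinement: u <- u + K0^{-1} (f - K_new u), reusing the baseline solve;
# this converges when the design change is modest.
K_new = stiffness(k1 + dk1, k2)
for _ in range(30):
    Ku = matvec(K_new, u)
    r = [f[i] - Ku[i] for i in range(2)]
    c = solve2(K0, r)
    u = [u[i] + c[i] for i in range(2)]
```

Starting the iteration from the Taylor estimate rather than from the old solution shrinks the initial error, which is the accuracy gain the abstract reports even for a single iteration cycle.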
NASA Technical Reports Server (NTRS)
Coddington, Odele; Pilewskie, Peter; Schmidt, K. Sebastian; McBride, Patrick J.; Vukicevic, Tomislava
2013-01-01
This paper presents an approach using the GEneralized Nonlinear Retrieval Analysis (GENRA) tool and general inverse theory diagnostics, including the maximum likelihood solution and the Shannon information content, to investigate the performance of a new spectral technique for the retrieval of cloud optical properties from surface based transmittance measurements. The cumulative retrieval information over broad ranges in cloud optical thickness (tau), droplet effective radius (r(sub e)), and overhead sun angles is quantified under two conditions known to impact transmitted radiation: the variability in land surface albedo and atmospheric water vapor content. Our conclusions are: (1) the retrieved cloud properties are more sensitive to the natural variability in land surface albedo than to water vapor content; (2) the new spectral technique is more accurate (but still imprecise) than a standard approach, in particular for tau between 5 and 60 and r(sub e) less than approximately 20 μm; and (3) the retrieved cloud properties are dependent on sun angle for clouds of tau from 5 to 10 and r(sub e) less than 10 μm, with maximum sensitivity obtained for an overhead sun.
Quantitative Characterization of the Microstructure and Transport Properties of Biopolymer Networks
Jiao, Yang; Torquato, Salvatore
2012-01-01
Biopolymer networks are of fundamental importance to many biological processes in normal and tumorous tissues. In this paper, we employ the panoply of theoretical and simulation techniques developed for characterizing heterogeneous materials to quantify the microstructure and effective diffusive transport properties (diffusion coefficient De and mean survival time τ) of collagen type I networks at various collagen concentrations. In particular, we compute the pore-size probability density function P(δ) for the networks and present a variety of analytical estimates of the effective diffusion coefficient De for finite-sized diffusing particles, including the low-density approximation, the Ogston approximation, and the Torquato approximation. The Hashin-Shtrikman upper bound on the effective diffusion coefficient De and the pore-size lower bound on the mean survival time τ are used as benchmarks to test our analytical approximations and numerical results. Moreover, we generalize the efficient first-passage-time techniques for Brownian-motion simulations in suspensions of spheres to the case of fiber networks and compute the associated effective diffusion coefficient De as well as the mean survival time τ, which is related to nuclear magnetic resonance (NMR) relaxation times. Our numerical results for De are in excellent agreement with analytical results for simple network microstructures, such as periodic arrays of parallel cylinders. Specifically, the Torquato approximation provides the most accurate estimates of De for all collagen concentrations among all of the analytical approximations we consider. We formulate a universal curve for τ for the networks at different collagen concentrations, extending the work of Yeong and Torquato [J. Chem. Phys. 106, 8814 (1997)]. We apply rigorous cross-property relations to estimate the effective bulk modulus of collagen networks from a knowledge of the effective diffusion coefficient computed here.
The use of cross-property relations to link other physical properties to the transport properties of collagen networks is also discussed. PMID:22683739
COMPLEXITY & APPROXIMABILITY OF QUANTIFIED & STOCHASTIC CONSTRAINT SATISFACTION PROBLEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, H. B.; Marathe, M. V.; Stearns, R. E.
2001-01-01
Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S and T be an arbitrary finite set of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SATc(S)). Here, we study simultaneously the complexity of decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. We present simple yet general techniques to characterize simultaneously the complexity or efficient approximability of a number of versions/variants of the problems SAT(S), Q-SAT(S), S-SAT(S), MAX-Q-SAT(S), etc., for many different such D, C, S, T. These versions/variants include decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. Our unified approach is based on the following two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Some of the results extend the earlier results in [Pa85, LMP99, CF+93, CF+94]. Our techniques and results reported here also provide significant steps towards obtaining dichotomy theorems for a number of the problems above, including the problems MAX-Q-SAT(S) and MAX-S-SAT(S). The discovery of such dichotomy theorems, for unquantified formulas, has received significant recent attention in the literature [CF+93, CF+94, Cr95, KSW97].
Lu, Dan; Zhang, Guannan; Webster, Clayton G.; ...
2016-12-30
In this paper, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, which require a significantly large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost with the use of multifidelity approximations. The improved performance of the MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as a fine-grid oil reservoir model considered in this effort. The numerical results reveal that with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
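The core idea, smoothing the indicator so the level correction has small variance, can be sketched with a two-level toy model. Here the coarse model is just the fine one plus a small deterministic perturbation, and all parameters (smoothing width, sample counts) are assumptions, not the paper's calibrated values.

```python
# Sketch of two-level Monte Carlo for a CDF value P(Q <= q): the
# discontinuous indicator 1{Q <= q} is replaced by a smooth sigmoid so
# the level-1 correction has small variance. Toy models; all parameters
# are assumptions.

import math
import random

def smooth_indicator(q, x, delta):
    """Smoothed version of 1{x <= q}; delta is the smoothing width."""
    return 1.0 / (1.0 + math.exp(-(q - x) / delta))

def fine(u):
    return u                               # fine model: Q = U(0,1)

def coarse(u):
    return u + 0.03 * math.sin(20.0 * u)   # coarse model: perturbed fine

random.seed(1)
q, delta = 0.5, 0.1
n0, n1 = 20000, 2000                       # many coarse, few fine samples

# Level 0: cheap coarse samples.
level0 = sum(smooth_indicator(q, coarse(random.random()), delta)
             for _ in range(n0)) / n0

# Level 1: correction on coupled samples (same u drives both models),
# which is where the smoothing keeps the variance small.
corr = 0.0
for _ in range(n1):
    u = random.random()
    corr += smooth_indicator(q, fine(u), delta) - smooth_indicator(q, coarse(u), delta)
corr /= n1

estimate = level0 + corr                   # two-level estimate of P(Q <= 0.5)
```

By telescoping, the expectation of the estimator equals the fine-model expectation, so the coarse-model bias cancels; what remains is the smoothing bias, which the paper's a posteriori strategy trades off against variance.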
Experimental confirmation of a PDE-based approach to design of feedback controls
NASA Technical Reports Server (NTRS)
Banks, H. T.; Smith, Ralph C.; Brown, D. E.; Silcox, R. J.; Metcalf, Vern L.
1995-01-01
Issues regarding the experimental implementation of partial differential equation based controllers are discussed in this work. While the motivating application involves the reduction of vibration levels for a circular plate through excitation of surface-mounted piezoceramic patches, the general techniques described here will extend to a variety of applications. The initial step is the development of a PDE model which accurately captures the physics of the underlying process. This model is then discretized to yield a vector-valued initial value problem. Optimal control theory is used to determine continuous-time voltages to the patches, and the approximations needed to facilitate discrete time implementation are addressed. Finally, experimental results demonstrating the control of both transient and steady state vibrations through these techniques are presented.
NASA Astrophysics Data System (ADS)
Yannopapas, Vassilios; Paspalakis, Emmanuel
2018-07-01
We present a new theoretical tool for simulating optical trapping of nanoparticles in the presence of an arbitrary metamaterial design. The method is based on rigorously solving Maxwell's equations for the metamaterial via a hybrid discrete-dipole approximation/multiple-scattering technique and direct calculation of the optical force exerted on the nanoparticle by means of the Maxwell stress tensor. We apply the method to the case of a spherical polystyrene probe trapped within the optical landscape created by illumination of a plasmonic metamaterial consisting of periodically arranged tapered metallic nanopyramids. The developed technique is ideally suited for general optomechanical calculations involving metamaterial designs and can compete with purely numerical methods such as finite-difference or finite-element schemes.
nu-Anomica: A Fast Support Vector Based Novelty Detection Technique
NASA Technical Reports Server (NTRS)
Das, Santanu; Bhaduri, Kanishka; Oza, Nikunj C.; Srivastava, Ashok N.
2009-01-01
In this paper we propose nu-Anomica, a novel anomaly detection technique that can be trained on huge data sets with much reduced running time compared to the benchmark one-class Support Vector Machines algorithm. In nu-Anomica, the idea is to train the machine such that it can provide a close approximation to the exact decision plane using fewer training points and without losing much of the generalization performance of the classical approach. We have tested the proposed algorithm on a variety of continuous data sets under different conditions. We show that under all test conditions the developed procedure closely preserves the accuracy of standard one-class Support Vector Machines while reducing both the training time and the test time by 5-20 times.
A BAYESIAN APPROACH TO DERIVING AGES OF INDIVIDUAL FIELD WHITE DWARFS
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Malley, Erin M.; Von Hippel, Ted; Van Dyk, David A., E-mail: ted.vonhippel@erau.edu, E-mail: dvandyke@imperial.ac.uk
2013-09-20
We apply a self-consistent and robust Bayesian statistical approach to determine the ages, distances, and zero-age main sequence (ZAMS) masses of 28 field DA white dwarfs (WDs) with ages of approximately 4-8 Gyr. Our technique requires only quality optical and near-infrared photometry to derive ages with <15% uncertainties, generally with little sensitivity to our choice of modern initial-final mass relation. We find that age, distance, and ZAMS mass are correlated in a manner that is too complex to be captured by traditional error propagation techniques. We further find that the posterior distributions of age are often asymmetric, indicating that the standard approach to deriving WD ages can yield misleading results.
Choux, Alexandre; Busvelle, Eric; Gauthier, Jean Paul; Pascal, Ghislain
2007-11-20
Our work is set in the context of the French "laser mégajoule" project on inertial confinement fusion. The project leads to the problem of characterizing the inner surface of the approximately spherical target by optical shadowgraphy techniques. Our work rests on the basic idea that optical shadowgraphy produces "caustics" of systems of optical rays, which contain a great deal of 3D information about the surface to be characterized. We develop a method of 3D reconstruction based upon this idea together with a "small perturbations" technique. Although the computations are carried out in the special "spherical" case, the method is in fact general and may be extended to several other situations.
NASA Astrophysics Data System (ADS)
Piazza, Roberto; Buzzaccaro, Stefano; Secchi, Eleonora; Parola, Alberto
Particle settling is a pervasive process in nature, and centrifugation is a highly versatile separation technique. Yet the results of settling and ultracentrifugation experiments often appear to contradict the very law on which they are based: Archimedes' principle, arguably the oldest physical law. The purpose of this paper is to delve into the very roots of the concept of buoyancy by means of a combined experimental and theoretical study of sedimentation profiles in colloidal mixtures. Our analysis shows that the standard Archimedes' principle is only a limiting approximation, valid for mesoscopic particles settling in a molecular fluid, and we provide a general expression for the actual buoyancy force. This "generalized Archimedes' principle" accounts for unexpected effects, such as denser particles floating on top of a lighter fluid, which in fact we observe in our experiments.
2D-pattern matching image and video compression: theory, algorithms, and experiments.
Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth
2002-01-01
In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures, such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level, to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression, making it particularly suitable for networked multimedia applications.
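A minimal sketch of the fixed-database idea: each image block is encoded as just the index of its best-matching database block (a toy stand-in for 2D-PMC; the block size, database, and test image here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Fixed "database" of 4x4 reference blocks shared by encoder and decoder,
# and a 32x32 test image assembled from noisy copies of database blocks.
db = rng.integers(0, 256, size=(64, 4, 4)).astype(float)
choice = rng.integers(0, 64, size=(8, 8))
img = np.block([[db[choice[i, j]] + rng.normal(0.0, 2.0, (4, 4))
                 for j in range(8)] for i in range(8)])

def encode(img, db, bs=4):
    """Store, per block, only the index of the closest database block."""
    h, w = img.shape
    idx = np.empty((h // bs, w // bs), dtype=int)
    for i in range(h // bs):
        for j in range(w // bs):
            blk = img[i*bs:(i+1)*bs, j*bs:(j+1)*bs]
            idx[i, j] = np.argmin(((db - blk) ** 2).sum(axis=(1, 2)))
    return idx

def decode(idx, db):
    return np.block([[db[k] for k in row] for row in idx])

idx = encode(img, db)
rec = decode(idx, db)
rmse = np.sqrt(((img - rec) ** 2).mean())  # distortion of the lossy code
```

Each 16-byte block is replaced by a 6-bit index into the shared database, at the cost of a bounded reconstruction distortion.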
NASA Technical Reports Server (NTRS)
Anuta, P. E.
1975-01-01
Least squares approximation techniques were developed for use in computer-aided correction of spatial image distortions for registration of multitemporal remote sensor imagery. Polynomials were first used to define image distortion over the entire two-dimensional image space. Spline functions were then investigated to determine if a combination of lower-order polynomials could approximate a higher-order distortion with less computational difficulty. Algorithms for generating approximating functions were developed and applied to the description of image distortion in aircraft multispectral scanner imagery. Other applications of the techniques were suggested for earth resources data processing areas other than geometric distortion representation.
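The polynomial stage described above amounts to a linear least-squares fit of mapping polynomials to control points; a minimal sketch (the synthetic warp and control points are assumptions for illustration):

```python
import numpy as np

# Control points: coordinates (x, y) in the distorted image and their true
# map locations (u, v); here the "distortion" is a synthetic warp that
# happens to lie in the span of the fitted basis.
rng = np.random.default_rng(2)
x, y = rng.uniform(0, 100, 50), rng.uniform(0, 100, 50)
u = 1.02 * x - 0.05 * y + 3.0 + 1e-4 * x * y
v = 0.04 * x + 0.98 * y - 7.0 + 2e-4 * y ** 2

def basis(x, y):
    """Second-order bivariate polynomial basis."""
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

A = basis(x, y)
cu, *_ = np.linalg.lstsq(A, u, rcond=None)  # least-squares fit of u(x, y)
cv, *_ = np.linalg.lstsq(A, v, rcond=None)  # least-squares fit of v(x, y)
resid = np.hypot(A @ cu - u, A @ cv - v)    # per-point registration error
```

With the fitted coefficients, any pixel coordinate can be mapped to its corrected location by evaluating the two polynomials.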
Tomographic reconstruction of tokamak plasma light emission using wavelet-vaguelette decomposition
NASA Astrophysics Data System (ADS)
Schneider, Kai; Nguyen van Yen, Romain; Fedorczak, Nicolas; Brochard, Frederic; Bonhomme, Gerard; Farge, Marie; Monier-Garbet, Pascale
2012-10-01
Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we proposed in Nguyen van yen et al., Nucl. Fus., 52 (2012) 013005, an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.
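The singular value decomposition used above as the comparison baseline can be illustrated on a generic ill-posed 1-D deblurring problem (an assumed toy operator, not the tokamak optics): truncating the small singular values regularizes the inversion.

```python
import numpy as np

n = 100
x = np.linspace(0.0, 1.0, n)
# Ill-conditioned forward operator: Gaussian blur of a 1-D emission profile.
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.03 ** 2))
A /= A.sum(axis=1, keepdims=True)

truth = np.sin(2 * np.pi * x) + 0.5 * np.sin(6 * np.pi * x)
rng = np.random.default_rng(3)
data = A @ truth + 1e-3 * rng.normal(size=n)  # noisy measurements

U, s, Vt = np.linalg.svd(A)
coeffs = U.T @ data
naive = Vt.T @ (coeffs / s)             # plain inverse: noise is amplified
k = 20                                  # keep the 20 largest singular values
tsvd = Vt[:k].T @ (coeffs[:k] / s[:k])  # truncated-SVD (regularized) solution

err_naive = np.linalg.norm(naive - truth)
err_tsvd = np.linalg.norm(tsvd - truth)
```

The truncation rank plays the role of the regularization parameter; the wavelet-vaguelette alternative discussed above aims to retain localized features that such a global spectral cutoff tends to smear.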
NASA Astrophysics Data System (ADS)
Nguyen van yen, R.; Fedorczak, N.; Brochard, F.; Bonhomme, G.; Schneider, K.; Farge, M.; Monier-Garbet, P.
2012-01-01
Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we propose an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.
An analytical technique for approximating unsteady aerodynamics in the time domain
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1980-01-01
An analytical technique is presented for approximating unsteady aerodynamic forces in the time domain. The order of the elements of a matrix Pade approximation was postulated, and the resulting polynomial coefficients were determined through a combination of least squares estimates for the numerator coefficients and a constrained gradient search for the denominator coefficients, which ensures stable approximating functions. The number of differential equations required to represent the aerodynamic forces to a given accuracy tends to be smaller than that employed in certain existing techniques where the denominator coefficients are chosen a priori. Results are shown for an aeroelastic, cantilevered, semispan wing which indicate that a good fit to the aerodynamic forces for oscillatory motion can be achieved with a matrix Pade approximation having fourth-order numerator and second-order denominator polynomials.
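A reduced illustration of rational-function fitting of frequency-response data: here both numerator and denominator coefficients come from a single linearized least-squares solve (Levi's method), a deliberate simplification of the scheme above, which fits the numerator by least squares but finds the denominator by a constrained gradient search:

```python
import numpy as np

# Sample a known rational transfer function along the imaginary axis
# (reduced frequencies), then recover a rational fit from the samples.
k = np.linspace(0.05, 2.0, 60)
s = 1j * k
f = (2.0 + 0.5 * s) / (1.0 + 0.8 * s + 0.25 * s ** 2)

# Linearize f = (n0 + n1 s)/(1 + d1 s + d2 s^2) by multiplying through:
#   n0 + n1 s - d1 (f s) - d2 (f s^2) = f,  linear in [n0, n1, d1, d2].
A = np.column_stack([np.ones_like(s), s, -f * s, -f * s ** 2])
c, *_ = np.linalg.lstsq(np.vstack([A.real, A.imag]),
                        np.concatenate([f.real, f.imag]), rcond=None)
n0, n1, d1, d2 = c
fit = (n0 + n1 * s) / (1.0 + d1 * s + d2 * s ** 2)
max_err = np.abs(fit - f).max()
```

Because the target is exactly rational of the postulated order, the linearized fit recovers the coefficients; a stability constraint on the denominator roots, as in the paper, would be needed for real aerodynamic data.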
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, MeiYue; Lin, Lin; Yang, Chao
The single particle energies obtained in a Kohn-Sham density functional theory (DFT) calculation are generally known to be poor approximations to electron excitation energies that are measured in transport, tunneling and spectroscopic experiments such as photo-emission spectroscopy. The correction to these energies can be obtained from the poles of a single particle Green's function derived from a many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depends on how a self energy term that properly accounts for dynamic screening of electrons is approximated. The G0W0 approximation is a widely used technique in which the self energy is expressed as the convolution of a noninteracting Green's function (G0) and a screened Coulomb interaction (W0) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating W0 at multiple frequencies. In this paper, we discuss how the cost of a G0W0 calculation can be reduced by constructing a low rank approximation to the frequency dependent part of W0. In particular, we examine the effect of such a low rank approximation on the accuracy of the G0W0 approximation. We also discuss how the numerical convolution of G0 and W0 can be evaluated efficiently and accurately by using a contour deformation technique with an appropriate choice of the contour.
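The central linear-algebra ingredient, a low-rank approximation with a controlled spectral-norm error, can be sketched with a truncated SVD on a stand-in matrix with decaying singular values (illustrative only; the actual W0 is frequency dependent):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
# Stand-in matrix with rapidly decaying singular values, playing the role
# of the frequency-dependent part of the screened interaction.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
decay = 2.0 ** -np.arange(n)
W = (Q * decay) @ Q.T                    # symmetric, spectrum = decay

U, sigma, Vt = np.linalg.svd(W)
k = 10
W_k = U[:, :k] @ np.diag(sigma[:k]) @ Vt[:k]  # best rank-10 approximation

# Eckart-Young: the spectral-norm error of the best rank-k approximation
# equals the (k+1)-th singular value.
err = np.linalg.norm(W - W_k, 2)
# Storage drops from n^2 entries to roughly 2*n*k + k.
```

When the spectrum decays quickly, as assumed here, a small rank k captures the matrix to high accuracy, which is the premise behind compressing W0.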
Theory of inhomogeneous quantum systems. III. Variational wave functions for Fermi fluids
NASA Astrophysics Data System (ADS)
Krotscheck, E.
1985-04-01
We develop a general variational theory for inhomogeneous Fermi systems such as the electron gas in a metal surface, the surface of liquid 3He, or simple models of heavy nuclei. The ground-state wave function is expressed in terms of two-body correlations, a one-body attenuation factor, and a model-system Slater determinant. Massive partial summations of cluster expansions are performed by means of Born-Green-Yvon and hypernetted-chain techniques. An optimal single-particle basis is generated by a generalized Hartree-Fock equation in which the two-body correlations screen the bare interparticle interaction. The optimization of the pair correlations leads to a state-averaged random-phase-approximation equation and a strictly microscopic determination of the particle-hole interaction.
Particle Transport through Scattering Regions with Clear Layers and Inclusions
NASA Astrophysics Data System (ADS)
Bal, Guillaume
2002-08-01
This paper introduces generalized diffusion models for the transport of particles in scattering media with nonscattering inclusions. Classical diffusion is known as a good approximation of transport only in scattering media. Based on asymptotic expansions and the coupling of transport and diffusion models, generalized diffusion equations with nonlocal interface conditions are proposed which offer a computationally cheap, yet accurate, alternative to solving the full phase-space transport equations. The paper shows which computational model should be used depending on the size and shape of the nonscattering inclusions in the simplified setting of two space dimensions. An important application is the treatment of clear layers in near-infrared (NIR) spectroscopy, an imaging technique based on the propagation of NIR photons in human tissues.
Quasistatic Evolution in Perfect Plasticity for General Heterogeneous Materials
NASA Astrophysics Data System (ADS)
Solombrino, Francesco
2014-04-01
Inspired by some recent developments in the theory of small-strain heterogeneous elastoplasticity, we both revisit and generalize the formulation of the quasistatic evolutionary problem in perfect plasticity given by Francfort and Giacomini (Commun Pure Appl Math, 65:1185-1241, 2012). We show that their definition of the plastic dissipation measure is equivalent to an abstract one, where it is defined as the supremum of the dualities between the deviatoric parts of admissible stress fields and the plastic strains. By means of this abstract definition, a viscoplastic approximation and variational techniques from the theory of rate-independent processes give the existence of an evolution satisfying an energy-dissipation balance and consequently Hill's maximum plastic work principle for an abstract and very large class of yield conditions.
An efficient and accurate molecular alignment and docking technique using ab initio quality scoring
Füsti-Molnár, László; Merz, Kenneth M.
2008-01-01
An accurate and efficient molecular alignment technique is presented based on first-principles electronic structure calculations. This new scheme maximizes quantum similarity matrices in the relative orientation of the molecules and uses Fourier transform techniques for two purposes. First, building up the numerical representation of true ab initio electronic densities and their Coulomb potentials is accelerated by the previously described Fourier transform Coulomb method. Second, the Fourier convolution technique is applied to accelerate optimizations in the translational coordinates. In order to avoid any interpolation error, the necessary analytical formulas are derived for the transformation of the ab initio wavefunctions in rotational coordinates. The results of our first implementation for a small test set are analyzed in detail and compared with published results from the literature. A new way of refining existing shape-based alignments is also proposed, using Fourier convolutions of ab initio or other approximate electron densities. This new alignment technique is generally applicable to overlap, Coulomb, kinetic energy, and other quantum similarity measures and can be extended to a genuine docking solution with ab initio scoring. PMID:18624561
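The translational-search acceleration rests on the convolution theorem: overlap scores for all shifts are obtained from a single FFT product rather than by scanning shifts one by one. A minimal 2-D sketch with an assumed discrete density (the work above operates on ab initio densities and also handles rotational coordinates analytically):

```python
import numpy as np

# Two 2-D "density" grids differing by an integer translation; the overlap
# score for every shift at once follows from one FFT product.
rng = np.random.default_rng(5)
a = np.zeros((64, 64))
a[20:30, 12:25] = rng.random((10, 13))   # arbitrary blob
shift = (7, -5)
b = np.roll(a, shift, axis=(0, 1))       # translated copy

corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
dy = dy - 64 if dy > 32 else dy          # map wrap-around to negative shifts
dx = dx - 64 if dx > 32 else dx
```

The peak of the circular cross-correlation recovers the relative translation in O(N log N) work, independent of how many candidate shifts are scored.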
On the connection between multigrid and cyclic reduction
NASA Technical Reports Server (NTRS)
Merriam, M. L.
1984-01-01
A technique is shown whereby it is possible to relate a particular multigrid process to cyclic reduction using purely mathematical arguments. This technique suggests methods for solving Poisson's equation in one, two, or three dimensions with Dirichlet or Neumann boundary conditions. In one dimension the method is exact and, in fact, reduces to cyclic reduction. This provides a valuable reference point for understanding multigrid techniques. The particular multigrid process analyzed is referred to here as Approximate Cyclic Reduction (ACR) and is one of a class known in the literature as Multigrid Reduction methods. It involves one approximation with a known error term. It is possible to relate the error term in this approximation to certain eigenvector components of the error. These are sharply reduced in amplitude by classical relaxation techniques. The approximation can thus be made a very good one.
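For reference, one-dimensional cyclic reduction for the Poisson problem, the exact case mentioned above, can be sketched as a recursion that eliminates every other unknown (a NumPy illustration restricted to the constant-coefficient tridiagonal case):

```python
import numpy as np

def cyclic_reduction(a, b, d):
    """Solve the tridiagonal Toeplitz system (sub/super-diagonal a,
    diagonal b, size 2**k - 1) by cyclic reduction: eliminate every
    other unknown, recurse on the half-size system, back-substitute."""
    n = len(d)
    if n == 1:
        return np.array([d[0] / b])
    alpha = a / b
    dp = np.zeros(n + 2)
    dp[1:-1] = d                              # zero-padded for boundary terms
    odd = np.arange(1, n, 2)                  # unknowns kept at this level
    d_red = d[odd] - alpha * (dp[odd] + dp[odd + 2])
    x = np.zeros(n)
    x[odd] = cyclic_reduction(-a * alpha, b - 2 * a * alpha, d_red)
    xp = np.zeros(n + 2)
    xp[1:-1] = x
    even = np.arange(0, n, 2)                 # back-substitute the rest
    x[even] = (d[even] - a * (xp[even] + xp[even + 2])) / b
    return x

# 1-D Poisson problem -u'' = f with u(0) = u(1) = 0, second-order differences.
n = 127                                       # 2**7 - 1 interior points
h = 1.0 / (n + 1)
grid = np.linspace(h, 1.0 - h, n)
f = np.sin(np.pi * grid)
u = cyclic_reduction(-1.0, 2.0, f * h * h)    # -u_{i-1} + 2u_i - u_{i+1} = h^2 f_i
u_ref = np.linalg.solve(2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1), f * h * h)
```

Each level halves the number of unknowns while keeping the tridiagonal Toeplitz structure, which is the pattern the multigrid reduction analysis exploits.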
Low rank approximation in G 0W 0 calculations
Shao, MeiYue; Lin, Lin; Yang, Chao; ...
2016-06-04
The single particle energies obtained in a Kohn-Sham density functional theory (DFT) calculation are generally known to be poor approximations to electron excitation energies that are measured in tr ansport, tunneling and spectroscopic experiments such as photo-emission spectroscopy. The correction to these energies can be obtained from the poles of a single particle Green’s function derived from a many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depends on how a self energy term that properly accounts for dynamic screening of electrons is approximated. The G 0W 0 approximation is a widely used techniquemore » in which the self energy is expressed as the convolution of a noninteracting Green’s function (G 0) and a screened Coulomb interaction (W 0) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating W 0 at multiple frequencies. In this paper, we discuss how the cost of G 0W 0 calculation can be reduced by constructing a low rank approximation to the frequency dependent part of W 0 . In particular, we examine the effect of such a low rank approximation on the accuracy of the G 0W 0 approximation. We also discuss how the numerical convolution of G 0 and W 0 can be evaluated efficiently and accurately by using a contour deformation technique with an appropriate choice of the contour.« less
"Tools For Analysis and Visualization of Large Time- Varying CFD Data Sets"
NASA Technical Reports Server (NTRS)
Wilhelms, Jane; vanGelder, Allen
1999-01-01
During the four years of this grant (including the one-year extension), we have explored many aspects of the visualization of large CFD (Computational Fluid Dynamics) datasets. These have included new direct volume rendering approaches, hierarchical methods, volume decimation, error metrics, parallelization, hardware texture mapping, and methods for analyzing and comparing images. First, we implemented an extremely general direct volume rendering approach that can be used to render rectilinear, curvilinear, or tetrahedral grids, including overlapping multiple-zone grids and time-varying grids. Next, we developed techniques for associating the sample data with a k-d tree, a simple hierarchical data model to approximate samples in the regions covered by each node of the tree, and an error metric for the accuracy of the model. We also explored a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH (Association for Computing Machinery Special Interest Group on Computer Graphics) '96. In our initial implementation, we automatically image the volume from 32 approximately evenly distributed positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation.
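The k-d-tree idea, node-wise approximation of samples plus an error metric, can be sketched as a median-split tree whose nodes store the mean sample value and the maximum deviation from it (a toy sketch on random 2-D points; the grant work applied this to CFD grids):

```python
import numpy as np

def build(pts, vals, depth=0, max_err=0.05):
    """Median-split tree: each node stores the mean of its samples and the
    max deviation from that mean (the error metric); split until the
    error tolerance is met or few samples remain."""
    node = {"mean": vals.mean(), "err": np.abs(vals - vals.mean()).max()}
    if node["err"] <= max_err or len(vals) <= 4:
        return node
    axis = depth % pts.shape[1]
    cut = np.median(pts[:, axis])
    left = pts[:, axis] <= cut
    node.update(axis=axis, cut=cut,
                lo=build(pts[left], vals[left], depth + 1, max_err),
                hi=build(pts[~left], vals[~left], depth + 1, max_err))
    return node

def query(node, p):
    """Approximate the field at p by the mean stored in its leaf."""
    while "axis" in node:
        node = node["lo"] if p[node["axis"]] <= node["cut"] else node["hi"]
    return node["mean"]

rng = np.random.default_rng(6)
pts = rng.random((5000, 2))
vals = np.sin(3 * pts[:, 0]) * pts[:, 1]      # smooth scalar "sample data"
tree = build(pts, vals)
approx = np.array([query(tree, p) for p in pts])
errs = np.abs(approx - vals)
```

A renderer can traverse only as deep as the stored error metric requires, trading accuracy for speed region by region.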
Using Approximations to Accelerate Engineering Design Optimization
NASA Technical Reports Server (NTRS)
Torczon, Virginia; Trosset, Michael W.
1998-01-01
Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
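A toy 1-D version of the merit-function idea above: minimize the surrogate's prediction minus a reward for stepping away from existing samples, so each new (expensive) evaluation improves both the incumbent solution and the approximation (the objective, weight, and candidate grid are invented for illustration):

```python
import numpy as np

def f(x):
    """Stand-in for an expensive objective; each call is 'costly'."""
    return (x - 0.7) ** 2

X = np.array([0.0, 0.5, 1.0])        # initial designs
y = f(X)
grid = np.linspace(0.0, 1.0, 401)    # candidate points

for it in range(8):
    coef = np.polyfit(X, y, deg=2)   # cheap quadratic surrogate of f
    surro = np.polyval(coef, grid)
    # Merit: predicted objective minus a reward for moving away from
    # existing samples, so new evaluations also refine the surrogate.
    dist = np.abs(grid[:, None] - X[None, :]).min(axis=1)
    merit = surro - 0.5 * dist
    x_new = grid[np.argmin(merit)]
    X = np.append(X, x_new)
    y = np.append(y, f(x_new))       # one expensive evaluation per iteration

best = y.min()
```

With the distance weight set to zero, this degenerates to pure surrogate minimization and can stall on a poor model; the merit term is what keeps the sampling informative.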
Scattering by ensembles of small particles experiment, theory and application
NASA Technical Reports Server (NTRS)
Gustafson, B. A. S.
1980-01-01
A hypothetical self-consistent picture of the evolution of prestellar interstellar dust through a comet phase leads to predictions about the composition of the circumsolar dust cloud. The scattering properties of the resulting conglomerates, which have a bird's-nest type of structure, are investigated using a microwave analogue technique. Approximate theoretical methods of general interest are developed which compare favorably with the experimental results. The principal features of the scattering of visible radiation by zodiacal light particles are reasonably reproduced. A component which is suggestive of alpha-meteoroids is also predicted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strzalka, J.; Liu, J; Tronin, A
2009-01-01
We previously reported the synthesis and structural characterization of a model membrane protein comprised of an amphiphilic 4-helix bundle peptide with a hydrophobic domain based on a synthetic ion channel and a hydrophilic domain with designed cavities for binding the general anesthetic halothane. In this work, we synthesized an improved version of this halothane-binding amphiphilic peptide with only a single cavity and an otherwise identical control peptide with no such cavity, and applied x-ray reflectivity to monolayers of these peptides to probe the distribution of halothane along the length of the core of the 4-helix bundle as a function of the concentration of halothane. At the moderate concentrations achieved in this study, approximately three molecules of halothane were found to be localized within a broad symmetric unimodal distribution centered about the designed cavity. At the lowest concentration achieved, of approximately one molecule per bundle, the halothane distribution became narrower and more peaked due to a component of approximately 19 Å width centered about the designed cavity. At higher concentrations, approximately six to seven molecules were found to be uniformly distributed along the length of the bundle, corresponding to approximately one molecule per heptad. Monolayers of the control peptide showed only the latter behavior, namely a uniform distribution along the length of the bundle irrespective of the halothane concentration over this range. The results provide insight into the nature of such weak binding when the dissociation constant is in the mM regime, relevant for clinical applications of anesthesia. They also demonstrate the suitability of both the model system and the experimental technique for additional work on the mechanism of general anesthesia, some of it presented in the companion parts II and III under this title.
Mixed models and reduction method for dynamic analysis of anisotropic shells
NASA Technical Reports Server (NTRS)
Noor, A. K.; Peters, J. M.
1985-01-01
A time-domain computational procedure is presented for predicting the dynamic response of laminated anisotropic shells. The two key elements of the procedure are: (1) use of mixed finite element models having independent interpolation (shape) functions for stress resultants and generalized displacements for the spatial discretization of the shell, with the stress resultants allowed to be discontinuous at interelement boundaries; and (2) use of a dynamic reduction method, with the global approximation vectors consisting of the static solution and an orthogonal set of Lanczos vectors. The dynamic reduction is accomplished by means of successive application of the finite element method and the classical Rayleigh-Ritz technique. The finite element method is first used to generate the global approximation vectors. Then the Rayleigh-Ritz technique is used to generate a reduced system of ordinary differential equations in the amplitudes of these modes. The temporal integration of the reduced differential equations is performed by using an explicit half-station central difference scheme (Leap-frog method). The effectiveness of the proposed procedure is demonstrated by means of a numerical example and its advantages over reduction methods used with the displacement formulation are discussed.
NASA Astrophysics Data System (ADS)
Lohmann, Christoph; Kuzmin, Dmitri; Shadid, John N.; Mabuza, Sibusiso
2017-09-01
This work extends the flux-corrected transport (FCT) methodology to arbitrary order continuous finite element discretizations of scalar conservation laws on simplex meshes. Using Bernstein polynomials as local basis functions, we constrain the total variation of the numerical solution by imposing local discrete maximum principles on the Bézier net. The design of accuracy-preserving FCT schemes for high order Bernstein-Bézier finite elements requires the development of new algorithms and/or generalization of limiting techniques tailored for linear and multilinear Lagrange elements. In this paper, we propose (i) a new discrete upwinding strategy leading to local extremum bounded low order approximations with compact stencils, (ii) high order variational stabilization based on the difference between two gradient approximations, and (iii) new localized limiting techniques for antidiffusive element contributions. The optional use of a smoothness indicator, based on a second derivative test, makes it possible to potentially avoid unnecessary limiting at smooth extrema and achieve optimal convergence rates for problems with smooth solutions. The accuracy of the proposed schemes is assessed in numerical studies for the linear transport equation in 1D and 2D.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartlep, T.; Zhao, J.; Kosovichev, A. G.
2013-01-10
The meridional flow in the Sun is an axisymmetric flow that is generally directed poleward at the surface, and is presumed to be of fundamental importance in the generation and transport of magnetic fields. Its true shape and strength, however, are debated. We present a numerical simulation of helioseismic wave propagation in the whole solar interior in the presence of a prescribed, stationary, single-cell, deep meridional circulation serving as synthetic data for helioseismic measurement techniques. A deep-focusing time-distance helioseismology technique is applied to the synthetic data, showing that it can in fact be used to measure the effects of the meridional flow very deep in the solar convection zone. It is shown that the ray approximation that is commonly used for interpretation of helioseismology measurements remains a reasonable approximation even for very long distances between 12° and 42°, corresponding to depths between 52 and 195 Mm. From the measurement noise, we extrapolate that time-resolved observations on the order of a full solar cycle may be needed to probe the flow all the way to the base of the convection zone.
Semiclassical states on Lie algebras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsobanjan, Artur, E-mail: artur.tsobanjan@gmail.com
2015-03-15
The effective technique for analyzing representation-independent features of quantum systems based on the semiclassical approximation (developed elsewhere) has been successfully used in the context of the canonical (Weyl) algebra of the basic quantum observables. Here, we perform the important step of extending this effective technique to the quantization of a more general class of finite-dimensional Lie algebras. The case of a Lie algebra with a single central element (the Casimir element) is treated in detail by considering semiclassical states on the corresponding universal enveloping algebra. Restriction to an irreducible representation is performed by “effectively” fixing the Casimir condition, following the methods previously used for constrained quantum systems. We explicitly determine the conditions under which this restriction can be consistently performed alongside the semiclassical truncation.
Born approximation in linear-time invariant system
NASA Astrophysics Data System (ADS)
Gumjudpai, Burin
2017-09-01
An alternative way of finding the solution of a linear time-invariant (LTI) system using the Born approximation is investigated. We use the Born approximation in the LTI system and in the transformed LTI system in the form of a Helmholtz equation. General solutions are considered as infinite series or as Feynman graphs. The slow-roll approximation is explored. Transforming the LTI system into a Helmholtz equation, an approximate general solution can be found for any given form of the force with its initial value.
Approximate Computing Techniques for Iterative Graph Algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh
Approximate computing enables processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also due to the availability of large-scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics to scale the performance with minimal loss of accuracy, including loop perforation, data caching, incomplete graph coloring and synchronization, and we evaluate their efficiency. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both algorithms. We expect the proposed approximate techniques will enable scalable graph analytics on data of importance to several applications in science and their subsequent adoption to scale similar graph algorithms.
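One of the named heuristics, loop perforation, can be sketched for PageRank by truncating the power-iteration loop once updates fall below a tolerance (a toy NumPy illustration on a random graph, not the paper's parallel implementation):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
A = (rng.random((n, n)) < 0.02).astype(float)  # random directed link matrix
A[:, A.sum(axis=0) == 0] = 1.0                 # patch dangling nodes
P = A / A.sum(axis=0)                          # column-stochastic transitions

def pagerank(P, d=0.85, max_iter=100, tol=0.0):
    """Power iteration; tol > 0 'perforates' the loop by stopping early."""
    n = P.shape[0]
    r = np.full(n, 1.0 / n)
    for it in range(max_iter):
        r_new = (1.0 - d) / n + d * (P @ r)
        delta = np.abs(r_new - r).sum()
        r = r_new
        if delta < tol:
            break
    return r, it + 1

exact, n_full = pagerank(P)               # full 100 iterations
approx, n_perf = pagerank(P, tol=1e-4)    # perforated: stops much earlier
l1_err = np.abs(exact - approx).sum()
top10_exact = set(np.argsort(exact)[-10:])
top10_approx = set(np.argsort(approx)[-10:])
```

The perforated run skips a large fraction of the iterations yet leaves the ranking of the top nodes essentially unchanged, which is the quality/performance trade-off the abstract quantifies at scale.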
Construction of a general human chromosome jumping library, with application to cystic fibrosis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, F.S.; Drumm, M.L.; Cole, J.L.
1987-02-27
In many genetic disorders, the responsible gene and its protein product are unknown. The technique known as reverse genetics, in which chromosomal map positions and genetically linked DNA markers are used to identify and clone such genes, is complicated by the fact that the molecular distances from the closest DNA markers to the gene itself are often too large to traverse by standard cloning techniques. To address this situation, a general human chromosome jumping library was constructed that allows the cloning of DNA sequences approximately 100 kilobases away from any starting point in genomic DNA. As an illustration of its usefulness, this library was searched for a jumping clone, starting at the met oncogene, which is a marker tightly linked to the cystic fibrosis gene that is located on human chromosome 7. Mapping of the new genomic fragment by pulsed field gel electrophoresis confirmed that it resides on chromosome 7 within 240 kilobases downstream of the met gene. The use of chromosome jumping should be applicable to any genetic locus for which a closely linked DNA marker is available.
Numerical integration techniques for curved-element discretizations of molecule-solvent interfaces.
Bardhan, Jaydeep P; Altman, Michael D; Willis, David J; Lippow, Shaun M; Tidor, Bruce; White, Jacob K
2007-07-07
Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, here methods were developed to model several important surface formulations using exact surface discretizations. Following and refining Zauhar's work [J. Comput.-Aided Mol. Des. 9, 149 (1995)], two classes of curved elements were defined that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. Numerical integration techniques are presented that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, a set of calculations are presented that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. 
The extra accuracy is attributed to the exact representation of the solute-solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that the methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online as supplemental material.
Calculating TMDs of a large nucleus: Quasi-classical approximation and quantum evolution
Kovchegov, Yuri V.; Sievert, Matthew D.
2015-12-24
We set up a formalism for calculating transverse-momentum-dependent parton distribution functions (TMDs) of a large nucleus using the tools of saturation physics. By generalizing the quasi-classical Glauber–Gribov–Mueller/McLerran–Venugopalan approximation to allow for the possibility of spin–orbit coupling, we show how any TMD can be calculated in the saturation framework. This can also be applied to the TMDs of a proton by modeling it as a large “nucleus.” To illustrate our technique, we calculate the quark TMDs of an unpolarized nucleus at large-x: the unpolarized quark distribution and the quark Boer–Mulders distribution. Here, we observe that spin–orbit coupling leads to mixing between different TMDs of the nucleus and of the nucleons. We then consider the evolution of TMDs: at large-x, in the double-logarithmic approximation, we obtain the Sudakov form factor. At small-x the evolution of unpolarized-target quark TMDs is governed by BK/JIMWLK evolution, while the small-x evolution of polarized-target quark TMDs appears to be dominated by the QCD Reggeon.
NASA Astrophysics Data System (ADS)
Wang, Wei; Shen, Jianqi
2018-06-01
The use of a shaped beam for applications relying on light scattering depends strongly on the ability to evaluate the beam shape coefficients (BSC) effectively. Numerical techniques for evaluating the BSCs of a shaped beam, such as the quadrature, the localized approximation (LA), and the integral localized approximation (ILA) methods, have been developed within the framework of generalized Lorenz-Mie theory (GLMT). The quadrature methods usually employ 2-/3-dimensional integrations. In this work, the expressions of the BSCs for an elliptical Gaussian beam (EGB) are simplified into a 1-dimensional integral so as to speed up the numerical computation. Numerical results of the BSCs are used to reconstruct the beam field, and the fidelity of the reconstructed field to the given beam field is estimated. It is demonstrated that the proposed method is much faster than the 2-dimensional integrations and that it can acquire more accurate results than the LA method. Limitations of the quadrature method and also of the LA method in the numerical calculation are analyzed in detail.
Fast summation of divergent series and resurgent transseries from Meijer-G approximants
NASA Astrophysics Data System (ADS)
Mera, Héctor; Pedersen, Thomas G.; Nikolić, Branislav K.
2018-05-01
We develop a resummation approach based on Meijer-G functions and apply it to approximate the Borel sum of divergent series and the Borel-Écalle sum of resurgent transseries in quantum mechanics and quantum field theory (QFT). The proposed method is shown to vastly outperform the conventional Borel-Padé and Borel-Padé-Écalle summation methods. The resulting Meijer-G approximants are easily parametrized by means of a hypergeometric ansatz and can be thought of as a generalization to arbitrary order of the Borel-hypergeometric method [Mera et al., Phys. Rev. Lett. 115, 143001 (2015), 10.1103/PhysRevLett.115.143001]. Here we demonstrate the accuracy of this technique in various examples from quantum mechanics and QFT, traditionally employed as benchmark models for resummation, such as zero-dimensional ϕ4 theory; the quartic anharmonic oscillator; the calculation of critical exponents for the N -vector model; ϕ4 with degenerate minima; self-interacting QFT in zero dimensions; and the summation of one- and two-instanton contributions in the quantum-mechanical double-well problem.
A new estimator for VLBI baseline length repeatability
NASA Astrophysics Data System (ADS)
Titov, O.
2009-11-01
The goal of this paper is to introduce a more effective technique for approximating the “repeatability-baseline length” relationship that is used to evaluate the quality of geodetic VLBI results. Traditionally, this relationship is approximated by a quadratic function of baseline length over all baselines. The new model incorporates the mean number of observed group delays of the reference radio sources (i.e. estimated as global parameters) used in the estimation of each baseline. It is shown that the new method provides a better approximation of the “repeatability-baseline length” relationship than the traditional model. Further development of the new approach comes down to modeling the repeatability as a function of two parameters: baseline length and baseline slewing rate. Within the framework of this new approach the station vertical and horizontal uncertainties can be treated as a function of baseline length. While the previous relationship indicated that the station vertical uncertainties are generally 4-5 times larger than the horizontal uncertainties, the vertical uncertainties as determined by the new method are only larger by a factor of 1.44 over all baseline lengths.
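As a concrete illustration of the traditional model that the paper improves upon, an ordinary least-squares quadratic fit of repeatability against baseline length can be sketched as follows; the data values below are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical sample: baseline lengths (km) and length repeatabilities (mm).
lengths = np.array([500., 1000., 2000., 4000., 6000., 8000., 10000.])
wrms = np.array([2.1, 2.8, 4.5, 7.9, 12.3, 18.0, 25.5])

# Traditional model: repeatability as a quadratic function of baseline
# length, fitted by ordinary least squares over all baselines.
coeffs = np.polyfit(lengths, wrms, deg=2)
model = np.poly1d(coeffs)

residuals = wrms - model(lengths)
print("fitted coefficients:", coeffs)
print("rms of fit residuals: %.3f mm" % np.sqrt(np.mean(residuals ** 2)))
```

The paper's refinement replaces this single-variable quadratic with a model that also uses the mean number of observed group delays per baseline.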
An operator calculus for surface and volume modeling
NASA Technical Reports Server (NTRS)
Gordon, W. J.
1984-01-01
The mathematical techniques which form the foundation for most of the surface and volume modeling techniques used in practice are briefly described. An outline of what may be termed an operator calculus for the approximation and interpolation of functions of more than one independent variable is presented. By considering the linear operators associated with bivariate and multivariate interpolation/approximation schemes, it is shown how they can be compounded by operator multiplication and Boolean addition to obtain a distributive lattice of approximation operators. It is then demonstrated via specific examples how this operator calculus leads to practical techniques for sculptured surface and volume modeling.
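The Boolean addition of operators mentioned above can be illustrated with the classic bilinearly blended (Coons) patch, where the Boolean sum P1 ⊕ P2 = P1 + P2 − P1P2 of two ruled "lofting" operators interpolates all four boundary curves. A minimal sketch, with hypothetical boundary curves over the unit square:

```python
import numpy as np

# Four compatible boundary curves of a patch over the unit square
# (hypothetical example surface; corners match where curves meet).
bottom = lambda u: np.array([u, 0.0, np.sin(np.pi * u)])
top    = lambda u: np.array([u, 1.0, 0.0])
left   = lambda v: np.array([0.0, v, 0.0])
right  = lambda v: np.array([1.0, v, 0.0])

def coons(u, v):
    """Boolean sum P1 + P2 - P1*P2 of the two ruled interpolation
    operators: blending bottom/top in v, blending left/right in u,
    minus the bilinear interpolant of the four corner points."""
    ruled_v = (1 - v) * bottom(u) + v * top(u)          # P1
    ruled_u = (1 - u) * left(v) + u * right(v)          # P2
    bilinear = ((1 - u) * (1 - v) * bottom(0.0) + u * (1 - v) * bottom(1.0)
                + (1 - u) * v * top(0.0) + u * v * top(1.0))  # P1*P2
    return ruled_v + ruled_u - bilinear

# The Boolean sum reproduces every boundary curve exactly,
# e.g. coons(u, 0) == bottom(u) for all u.
print(coons(0.5, 0.0))
```

Either ruled operator alone interpolates only two of the four boundaries; the Boolean sum is the simplest member of the distributive lattice that interpolates all of them.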
Jing, Liwen; Li, Zhao; Wang, Wenjie; Dubey, Amartansh; Lee, Pedro; Meniconi, Silvia; Brunone, Bruno; Murch, Ross D
2018-05-01
An approximate inverse scattering technique is proposed for reconstructing cross-sectional area variation along water pipelines to deduce the size and position of blockages. The technique allows the reconstructed blockage profile to be written explicitly in terms of the measured acoustic reflectivity. It is based upon the Born approximation and provides good accuracy, low computational complexity, and insight into the reconstruction process. Numerical simulations and experimental results are provided for long pipelines with mild and severe blockages of different lengths. Good agreement is found between the inverse result and the actual pipe condition for mild blockages.
A Discrete Approximation Framework for Hereditary Systems.
1980-05-01
schemes which are included in the general framework and which may be implemented directly on high-speed computing machines are developed. A numerical...an appropriately chosen Hilbert space. We then proceed to develop general approximation schemes for the solutions to the homogeneous AEE which in turn...rich classes of these schemes . In addition, two particular families of approximation schemes included in the general framework are developed and
NASA Astrophysics Data System (ADS)
FernáNdez Pantoja, M.; Yarovoy, A. G.; Rubio Bretones, A.; GonzáLez GarcíA, S.
2009-12-01
This paper presents a procedure to extend the method of moments in the time domain for the transient analysis of thin-wire antennas to include those cases where the antennas are located over a lossy half-space. This extended technique is based on the reflection coefficient (RC) approach, which approximates the fields incident on the ground interface as plane waves and calculates the time domain RC using the inverse Fourier transform of the Fresnel equations. The implementation presented in this paper uses general expressions for the RC which extend its range of applicability to lossy grounds, and is proven to be accurate and fast for antennas located not too near the ground. The resulting general-purpose procedure, able to treat arbitrarily oriented thin-wire antennas, is appropriate for all kinds of half-spaces, including lossy cases, and it has turned out to be as computationally fast in solving the problem of an arbitrary ground as in dealing with a perfect electric conductor ground plane. Results show a numerical validation of the method for different half-spaces, paying special attention to the influence of the antenna-to-ground distance on the accuracy of the results.
Structural factoring approach for analyzing stochastic networks
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Shier, Douglas R.
1991-01-01
The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.
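For scale, the complete-enumeration baseline that the conditional-factoring algorithm is designed to avoid can be sketched on a toy network with discrete edge-length distributions; the topology and probabilities below are hypothetical:

```python
import itertools

# Toy directed network: each edge has a discrete random length,
# listed as (value, probability) pairs.
edges = {
    ('s', 'a'): [(1, 0.5), (3, 0.5)],
    ('s', 'b'): [(2, 1.0)],
    ('a', 'b'): [(1, 0.6), (2, 0.4)],
    ('a', 't'): [(4, 0.5), (6, 0.5)],
    ('b', 't'): [(2, 0.7), (5, 0.3)],
}
paths = [('s', 'a', 't'), ('s', 'b', 't'), ('s', 'a', 'b', 't')]

def shortest_path_distribution(edges, paths):
    """Exact distribution of the shortest s-t path length by complete
    enumeration of joint edge-length realizations -- the approach whose
    cost the conditional-factoring decomposition reduces."""
    keys = list(edges)
    dist = {}
    for combo in itertools.product(*(edges[k] for k in keys)):
        lengths = {k: v for k, (v, _) in zip(keys, combo)}
        prob = 1.0
        for _, p in combo:
            prob *= p
        sp = min(sum(lengths[(path[i], path[i + 1])]
                     for i in range(len(path) - 1))
                 for path in paths)
        dist[sp] = dist.get(sp, 0.0) + prob
    return dist

dist = shortest_path_distribution(edges, paths)
print(sorted(dist.items()))
```

Enumeration grows as the product of all edge supports, which is why factoring the network into smaller conditioned subnetworks matters for realistic sizes.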
NASA Technical Reports Server (NTRS)
Paknys, J. R.
1982-01-01
The reflector antenna may be thought of as an aperture antenna. The classical solution for the radiation pattern of such an antenna is found by the aperture integration (AI) method. Success with this method depends on how accurately the aperture currents are known beforehand. In the past, geometrical optics (GO) has been employed to find the aperture currents. This approximation is suitable for calculating the main beam and possibly the first few sidelobes. A better approximation is to use aperture currents calculated from the geometrical theory of diffraction (GTD). Integration of the GTD currents over an extended aperture yields more accurate results for the radiation pattern. This approach is useful when conventional AI and GTD solutions have no common region of validity. This problem arises in reflector antennas. Two-dimensional models of parabolic reflectors are studied; however, the techniques discussed can be applied to any aperture antenna.
A Group Action Method for Construction of Strong Substitution Box
NASA Astrophysics Data System (ADS)
Jamal, Sajjad Shaukat; Shah, Tariq; Attaullah, Atta
2017-06-01
In this paper, the method to develop a cryptographically strong substitution box is presented, which can be used in multimedia security and data hiding techniques. The algorithm of construction depends on the action of a projective general linear group over the set of units of a finite commutative ring. The strength of the substitution box and its ability to create confusion is assessed with different available analyses. Moreover, the ability of resistance against malicious attacks is also evaluated. The substitution box is examined by the bit independence criterion, strict avalanche criterion, nonlinearity test, linear approximation probability test and differential approximation probability test. This substitution box is compared with well-recognized substitution boxes such as AES, Gray, APA, S8, prime of residue, Xyi and Skipjack. The comparison shows encouraging results about the strength of the proposed box. The majority logic criterion is also calculated to analyze the strength and its practical implementation.
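One of the analyses named above, the nonlinearity test, can be sketched via the Walsh transform; the S-boxes used here (the identity box, and a well-known optimal 4-bit box in the usage note) are illustrative and unrelated to the paper's group-action construction:

```python
def nonlinearity(sbox):
    """Nonlinearity of an n-bit S-box, computed via the Walsh transform:
    NL = 2^(n-1) - max|W(a, b)| / 2 over all nonzero output masks b and
    all input masks a, i.e. the minimum distance of any component
    function b.S(x) to the affine functions."""
    size = len(sbox)                       # 2**n entries
    worst = 0
    for b in range(1, size):               # nonzero output mask
        for a in range(size):              # input mask
            # parity of b.S(x) XOR a.x, summed with signs over all x
            w = sum((-1) ** (bin((b & sbox[x]) ^ (a & x)).count('1') % 2)
                    for x in range(size))
            worst = max(worst, abs(w))
    return size // 2 - worst // 2

# The identity 4-bit S-box is purely linear, hence nonlinearity 0.
print(nonlinearity(list(range(16))))  # -> 0
```

For comparison, an optimal 4-bit S-box such as the PRESENT box reaches the maximum achievable nonlinearity of 4 under this test.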
Approximating high-dimensional dynamics by barycentric coordinates with linear programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
NASA Technical Reports Server (NTRS)
Wang, C.-W.; Stark, W.
2005-01-01
This article considers a quaternary direct-sequence code-division multiple-access (DS-CDMA) communication system with asymmetric quadrature phase-shift-keying (AQPSK) modulation for unequal error protection (UEP) capability. Both time synchronous and asynchronous cases are investigated. An expression for the probability distribution of the multiple-access interference is derived. The exact bit-error performance and the approximate performance using a Gaussian approximation and random signature sequences are evaluated by extending the techniques used for uniform quadrature phase-shift-keying (QPSK) and binary phase-shift-keying (BPSK) DS-CDMA systems. Finally, a general system model with unequal user power and the near-far problem is considered and analyzed. The results show that, for a system with UEP capability, the less protected data bits are more sensitive to the near-far effect that occurs in a multiple-access environment than are the more protected bits.
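The Gaussian-approximation step mentioned above reduces, in the simplest BPSK case, to a Q-function expression; the sketch below is illustrative only and does not reproduce the paper's AQPSK/UEP expressions:

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x), via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_ber(ebno_db):
    """Gaussian-approximation bit-error rate for BPSK in AWGN,
    Pb = Q(sqrt(2 Eb/N0)). In a DS-CDMA system, the same approximation
    folds the multiple-access interference into an additional Gaussian
    noise variance in the argument of Q."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return qfunc(math.sqrt(2.0 * ebno))

print(bpsk_ber(0.0))  # error rate at Eb/N0 = 0 dB
```

The paper's contribution is the exact interference distribution against which such Gaussian approximations are benchmarked for the asymmetric QPSK constellation.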
Approximating high-dimensional dynamics by barycentric coordinates with linear programming.
Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
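The core idea above, expressing a state as barycentric coordinates of nearby reference points, can be sketched in the idealized noise-free case by an exact affine solve; the paper's method instead uses linear programming so that approximation errors are tolerated explicitly and more (or noisier) reference points can be used:

```python
import numpy as np

def barycentric(point, vertices):
    """Barycentric coordinates of `point` with respect to a simplex of
    d+1 affinely independent vertices in d dimensions: solve the affine
    system V^T w = p together with sum(w) = 1."""
    V = np.asarray(vertices, dtype=float)          # shape (d+1, d)
    A = np.vstack([V.T, np.ones(len(V))])          # (d+1) x (d+1) system
    b = np.append(np.asarray(point, dtype=float), 1.0)
    return np.linalg.solve(A, b)

# Point inside the standard triangle; weights sum to 1 and reproduce it.
w = barycentric([0.25, 0.25], [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(w)
```

In the paper's setting the reference points are delay-embedded states from the observed time series, and the same weights applied to the points' successors yield the free-running prediction.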
Probabilistic Structural Analysis Theory Development
NASA Technical Reports Server (NTRS)
Burnside, O. H.
1985-01-01
The objective of the Probabilistic Structural Analysis Methods (PSAM) project is to develop analysis techniques and computer programs for predicting the probabilistic response of critical structural components for current and future space propulsion systems. This technology will play a central role in establishing system performance and durability. The first year's technical activity is concentrating on probabilistic finite element formulation strategy and code development. Work is also in progress to survey critical materials and space shuttle main engine components. The probabilistic finite element computer program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) is being developed. The final probabilistic code will have, in the general case, the capability of performing nonlinear dynamic analysis of stochastic structures. It is the goal of the approximate methods effort to increase problem-solving efficiency relative to finite element methods by using energy methods to generate trial solutions which satisfy the structural boundary conditions. These approximate methods will be less computer intensive than the finite element approach.
Almost sure convergence in quantum spin glasses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buzinski, David, E-mail: dab197@case.edu; Meckes, Elizabeth, E-mail: elizabeth.meckes@case.edu
2015-12-15
Recently, Keating, Linden, and Wells [Markov Processes Relat. Fields 21(3), 537-555 (2015)] showed that the density of states measure of a nearest-neighbor quantum spin glass model is approximately Gaussian when the number of particles is large. The density of states measure is the ensemble average of the empirical spectral measure of a random matrix; in this paper, we use concentration of measure and entropy techniques together with the result of Keating, Linden, and Wells to show that in fact the empirical spectral measure of such a random matrix is almost surely approximately Gaussian itself with no ensemble averaging. We also extend this result to a spherical quantum spin glass model and to the more general coupling geometries investigated by Erdős and Schröder [Math. Phys., Anal. Geom. 17(3-4), 441–464 (2014)].
Linear Optical and SERS Study on Metallic Membranes with Subwavelength Complementary Patterns
NASA Astrophysics Data System (ADS)
Hao, Qingzhen; Zeng, Yong; Jensen, Lasse; Werner, Douglas; Crespi, Vincent; Huang, Tony Jun; Interdepartmental Collaboration
2011-03-01
An efficient technique is developed to fabricate optically thin metallic films with subwavelength patterns and their complements simultaneously. By comparing the spectra of the complementary films, we show that Babinet's principle nearly holds in the optical domain. A discrete-dipole approximation can qualitatively describe their spectral dependence on the geometry of the constituent particles and the illuminating polarization. Using pyridine as probe molecules, we studied surface-enhanced Raman spectroscopy (SERS) from the complementary structures. Although the complementary structures possess closely related linear spectra, they have quite different near-field behaviors. For hole arrays, the averaged local field gains as well as the SERS enhancements are strongly correlated with their transmission spectra. We can therefore use cos⁴θ to approximately describe the dependence of the Raman intensity on the excitation polarization angle θ, while the complementary particle arrays present maximal local field gains at wavelengths generally much larger than their localized surface plasmon resonance wavelengths.
The difference between LSMC and replicating portfolio in insurance liability modeling.
Pelsser, Antoon; Schweizer, Janina
2016-01-01
Solvency II requires insurers to calculate the 1-year value at risk of their balance sheet. This involves the valuation of the balance sheet in 1 year's time. As closed-form solutions for the value of insurance liabilities are generally not available, insurers turn to estimation procedures. While pure Monte Carlo simulation set-ups are theoretically sound, they are often infeasible in practice. Therefore, approximation methods are exploited. Among these, least squares Monte Carlo (LSMC) and portfolio replication are prominent and widely applied in practice. In this paper, we show that, while both are variants of regression-based Monte Carlo methods, they differ in one significant aspect. While the replicating portfolio approach only contains an approximation error, which converges to zero in the limit, in LSMC a projection error is additionally present, which cannot be eliminated. It is revealed that the replicating portfolio technique enjoys numerous advantages and is therefore an attractive model choice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renzi, N.E.; Roseberry, R.J.
The experimental measurements and nuclear analysis of a uniformly loaded, unpoisoned slab core with a partially inserted hafnium rod are described. Comparisons of experimental data with calculated results of the UFO code and flux synthesis techniques are given. It was concluded that one of the flux synthesis techniques and the UFO code are able to predict flux distributions to within approximately 5% of experiment for most cases. An error of approximately 10% was found in the synthesis technique for a channel near the partially inserted rod. The various calculations were able to predict neutron pulsed shutdowns to only approximately 30%. (auth)
NASA Astrophysics Data System (ADS)
Blanchet, Luc; Detweiler, Steven; Le Tiec, Alexandre; Whiting, Bernard F.
2010-03-01
The problem of a compact binary system whose components move on circular orbits is addressed using two different approximation techniques in general relativity. The post-Newtonian (PN) approximation involves an expansion in powers of v/c≪1, and is most appropriate for small orbital velocities v. The perturbative self-force analysis requires an extreme mass ratio m1/m2≪1 for the components of the binary. A particular coordinate-invariant observable is determined as a function of the orbital frequency of the system using these two different approximations. The post-Newtonian calculation is pushed up to the third post-Newtonian (3PN) order. It involves the metric generated by two point particles and evaluated at the location of one of the particles. We regularize the divergent self-field of the particle by means of dimensional regularization. We show that the poles ∝ (d-3)^(-1) appearing in dimensional regularization at the 3PN order cancel out from the final gauge-invariant observable. The 3PN analytical result, through first order in the mass ratio, and the numerical self-force calculation are found to agree well. The consistency of this cross-cultural comparison confirms the soundness of both approximations in describing compact binary systems. In particular, it provides an independent test of the very different regularization procedures invoked in the two approximation schemes.
Fuzzy logic, neural networks, and soft computing
NASA Technical Reports Server (NTRS)
Zadeh, Lotfi A.
1994-01-01
The past few years have witnessed a rapid growth of interest in a cluster of modes of modeling and computation which may be described collectively as soft computing. The distinguishing characteristic of soft computing is that its primary aims are to achieve tractability, robustness, low cost, and high MIQ (machine intelligence quotient) through an exploitation of the tolerance for imprecision and uncertainty. Thus, in soft computing what is usually sought is an approximate solution to a precisely formulated problem or, more typically, an approximate solution to an imprecisely formulated problem. A simple case in point is the problem of parking a car. Generally, humans can park a car rather easily because the final position of the car is not specified exactly. If it were specified to within, say, a few millimeters and a fraction of a degree, it would take hours or days of maneuvering and precise measurements of distance and angular position to solve the problem. What this simple example points to is the fact that, in general, high precision carries a high cost. The challenge, then, is to exploit the tolerance for imprecision by devising methods of computation which lead to an acceptable solution at low cost. By its nature, soft computing is much closer to human reasoning than the traditional modes of computation. At this juncture, the major components of soft computing are fuzzy logic (FL), neural network theory (NN), and probabilistic reasoning techniques (PR), including genetic algorithms, chaos theory, and part of learning theory. Increasingly, these techniques are used in combination to achieve significant improvement in performance and adaptability. Among the important application areas for soft computing are control systems, expert systems, data compression techniques, image processing, and decision support systems. 
It may be argued that it is soft computing, rather than the traditional hard computing, that should be viewed as the foundation for artificial intelligence. In the years ahead, this may well become a widely held position.
An approximate generalized linear model with random effects for informative missing data.
Follmann, D; Wu, M
1995-03-01
This paper develops a class of models to deal with missing data from longitudinal studies. We assume that separate models for the primary response and missingness (e.g., number of missed visits) are linked by a common random parameter. Such models have been developed in the econometrics (Heckman, 1979, Econometrica 47, 153-161) and biostatistics (Wu and Carroll, 1988, Biometrics 44, 175-188) literature for a Gaussian primary response. We allow the primary response, conditional on the random parameter, to follow a generalized linear model and approximate the generalized linear model by conditioning on the data that describes missingness. The resultant approximation is a mixed generalized linear model with possibly heterogeneous random effects. An example is given to illustrate the approximate approach, and simulations are performed to critique the adequacy of the approximation for repeated binary data.
Tan, Ferdinand Frederik Som Ling; Schiere, Sjouke; Reidinga, Auke C; Wit, Fennie; Veldman, Peter Hjm
2015-01-01
Regional anesthesia is gaining popularity with anesthesiologists as it offers superb postoperative analgesia. However, as the sole anesthetic technique in high-risk patients in whom general anesthesia is not preferred, some regional anesthetic possibilities may be easily overlooked. By presenting two cases of very old patients with considerable comorbidities, we would like to bring the mental nerve field block under renewed attention as a safe alternative to general anesthesia and to achieve broader application of this simple nerve block. Two very old male patients (84 and 91 years) both presented with an ulcerative lesion at the lower lip for which surgical removal was scheduled. Because of their considerable comorbidities and increased frailty, bilateral blockade of the mental nerve was considered superior to general anesthesia. As an additional advantage for the 84-year-old patient, who had a pneumonectomy in his medical history, the procedure could be safely performed in a beach-chair position to prevent atelectasis and optimize the ventilation/perfusion ratio of the single lung. The mental nerve blockades were performed intraorally in a blind fashion, after eversion of the lip and identifying the lower canine. A 5 mL syringe with a 23-gauge needle attached was passed into the buccal mucosa until it approximated the mental foramen, where 2 mL of lidocaine 2% with adrenaline 1:100,000 was injected. The other side was anesthetized in a similar fashion. Both patients underwent the surgical procedure uneventfully under a bilateral mental nerve block and were discharged from the hospital on the same day. A mental nerve block is an easy-to-perform regional anesthetic technique for lower lip surgery. This technique might be especially advantageous in the very old, frail patient.
Differentially Private Empirical Risk Minimization
Chaudhuri, Kamalika; Monteleoni, Claire; Sarwate, Anand D.
2011-01-01
Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the ε-differential privacy definition due to Dwork et al. (2006). First we apply the output perturbation ideas of Dwork et al. (2006), to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance. PMID:21892342
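The output-perturbation mechanism described above can be sketched as follows; the sensitivity bound 2/(nλ) and the Gamma-distributed noise norm follow the general recipe for L2-regularized, 1-Lipschitz losses, while the training routine and the data are simplified stand-ins rather than the paper's exact algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logistic(X, y, lam, steps=500, lr=0.1):
    """L2-regularized logistic regression by plain gradient descent
    (labels in {-1, +1}); stands in for the ERM solver."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ w)
        grad = -(X * (y / (1 + np.exp(margins)))[:, None]).mean(axis=0) + lam * w
        w -= lr * grad
    return w

def output_perturbation(w, n, lam, eps):
    """Release w plus noise whose norm is Gamma(d, 2/(n*lam*eps)) with a
    uniform random direction, matching the L2 sensitivity 2/(n*lam) of
    regularized ERM with a 1-Lipschitz loss."""
    d = len(w)
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    norm = rng.gamma(shape=d, scale=2.0 / (n * lam * eps))
    return w + norm * direction

# Toy data (hypothetical): two Gaussian blobs with labels -1/+1.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([-1] * 100 + [1] * 100)
w = train_logistic(X, y, lam=0.1)
w_priv = output_perturbation(w, n=len(y), lam=0.1, eps=1.0)
print(w, w_priv)
```

Objective perturbation, the paper's preferred method, instead adds a random linear term to the objective before optimization, which typically costs less accuracy for the same ε.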
Technique for evaluation of the strong potential Born approximation for electron capture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sil, N.C.; McGuire, J.H.
1985-04-01
A technique is presented for evaluating differential cross sections in the strong potential Born (SPB) approximation. The final result is expressed as a finite sum of one-dimensional integrals, each expressible as a finite sum of derivatives of hypergeometric functions.
RATIONAL APPROXIMATIONS TO GENERALIZED HYPERGEOMETRIC FUNCTIONS.
Under weak restrictions on the various free parameters, general theorems for rational representations of the generalized hypergeometric functions...and certain Meijer G-functions are developed. Upon specialization, these theorems yield a sequence of rational approximations which converge to the
Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process
NASA Astrophysics Data System (ADS)
Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh
2018-06-01
Layered manufacturing machines use the stereolithography (STL) file to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, it results in geometrical distortion and chordal error. Parts manufactured with this file might not satisfy geometric dimensioning and tolerance requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in the file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The mesh subdivision algorithms considered in this work are the modified butterfly subdivision technique, the Loop subdivision technique and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and the build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is more suitable for the geometry under consideration. Only the wheel cap part is then manufactured, on a Stratasys MOJO FDM machine. The surface roughness of the part is measured on a Talysurf surface roughness tester.
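Of the three schemes compared, the general triangular midpoint subdivision is the simplest to state: each triangle is split into four by joining edge midpoints, with no vertex repositioning (Loop and modified butterfly additionally reposition or interpolate vertices). A minimal sketch:

```python
def midpoint_subdivide(triangles):
    """One pass of triangular midpoint subdivision: each triangle
    (a, b, c) is replaced by four smaller triangles formed by the
    edge midpoints. Vertices are plain coordinate tuples."""
    def mid(p, q):
        return tuple((a + b) / 2.0 for a, b in zip(p, q))
    out = []
    for a, b, c in triangles:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

# One triangle becomes four; the surface itself is unchanged, which is
# why midpoint subdivision adds no new chordal error of its own.
mesh = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
refined = midpoint_subdivide(mesh)
print(len(refined))  # -> 4
```

Each pass quadruples the triangle count, which is the file-size growth the paper weighs against the reduced chordal error of the smoothing schemes.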
Beland, Laurent Karim; Osetskiy, Yury N.; Stoller, Roger E.; ...
2015-02-07
Here, we present a comparison of the Kinetic Activation–Relaxation Technique (k-ART) and the Self-Evolving Atomistic Kinetic Monte Carlo (SEAKMC), two off-lattice, on-the-fly Kinetic Monte Carlo (KMC) techniques that were recently used to solve several materials science problems. We show that if the initial displacements are localized, the dimer method and the Activation–Relaxation Technique nouveau provide similar performance. We also show that k-ART and SEAKMC, although based on different approximations, are in agreement with each other, as demonstrated by the examples of 50 vacancies in a 1950-atom Fe box and of interstitial loops in 16,000-atom boxes. Generally speaking, k-ART's treatment of geometry and flickers is more flexible and rigorous than SEAKMC's (e.g., it can handle amorphous systems), while the latter's concept of active volumes permits a significant speedup of simulations for the systems under consideration and therefore allows investigations of processes requiring large systems that are not accessible without localizing calculations.
Electroencephalography signatures of attention-deficit/hyperactivity disorder: clinical utility.
Alba, Guzmán; Pereda, Ernesto; Mañas, Soledad; Méndez, Leopoldo D; González, Almudena; González, Julián J
2015-01-01
This work reviews the techniques and the most important results on the use of electroencephalography (EEG) to extract measures that can be clinically useful in studying subjects with attention-deficit/hyperactivity disorder (ADHD). First, we discuss briefly and in simple terms the EEG analysis and processing techniques most used in the context of ADHD. We review techniques that analyze individual EEG channels (univariate measures) as well as techniques that study the statistical interdependence between different EEG channels (multivariate measures), the so-called functional brain connectivity. Among the former, we review the classical indices of absolute and relative spectral power and estimations of the complexity of the channels, such as the approximate entropy and the Lempel-Ziv complexity. Among the latter, we focus on the magnitude squared coherence and on different measures based on the concept of generalized synchronization and its estimation in state space. Second, from a historical point of view, we present the most important results achieved with these techniques and their clinical utility (sensitivity, specificity, and accuracy) in diagnosing ADHD. Finally, we propose future research lines based on these results.
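As an illustration of one of the univariate measures reviewed, approximate entropy can be computed directly from its definition: count template matches of length m and m+1 within a tolerance r and compare. The sketch below is our own minimal implementation; the parameter choices m=2, r=0.2 are illustrative, not taken from the review.

```python
import math

def approximate_entropy(x, m=2, r=0.2):
    """Approximate entropy of a 1-D signal (Pincus-style definition)."""
    n = len(x)
    def phi(mm):
        # All length-mm templates and their match fractions (Chebyshev distance <= r).
        templates = [x[i:i + mm] for i in range(n - mm + 1)]
        total = 0.0
        for t1 in templates:
            c = sum(1 for t2 in templates
                    if max(abs(a - b) for a, b in zip(t1, t2)) <= r)
            total += math.log(c / len(templates))
        return total / len(templates)
    return phi(m) - phi(m + 1)

regular = [0.0, 1.0] * 50          # perfectly predictable alternating signal
print(approximate_entropy(regular))  # near zero for a regular signal
```

Low values indicate regularity; noisier, less predictable channels score higher, which is the property exploited when comparing ADHD and control EEGs.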
Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.
Wei, Qinglai; Li, Benkai; Song, Ruizhuo
2018-04-01
In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite-horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described using the GPI structure. This is the first time that approximation errors have been explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
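The flavor of the convergence result, that the iterative value function reaches a finite neighborhood of the optimum when the per-step errors are bounded, can be seen in a toy value iteration. This is our own sketch on a hypothetical 5-state ring MDP, not the paper's algorithm:

```python
import random

random.seed(0)
GAMMA = 0.9
N = 5                               # states on a ring (toy MDP, ours)
ACTIONS = (-1, 1)                   # step left or right

def reward(s, a):
    return 1.0 if (s + a) % N == 0 else 0.0   # reward for entering state 0

def backup(V, eps=0.0):
    """One Bellman backup, optionally corrupted by a bounded error eps."""
    return [max(reward(s, a) + GAMMA * V[(s + a) % N] for a in ACTIONS)
            + random.uniform(-eps, eps)
            for s in range(N)]

V_exact, V_noisy = [0.0] * N, [0.0] * N
for _ in range(200):
    V_exact = backup(V_exact)               # error-free iteration
    V_noisy = backup(V_noisy, eps=0.01)     # bounded approximation error

gap = max(abs(a - b) for a, b in zip(V_exact, V_noisy))
print(gap <= 0.01 / (1 - GAMMA))            # True: gap stays within eps/(1-gamma)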
Quantum dynamics in continuum for proton transport—Generalized correlation
NASA Astrophysics Data System (ADS)
Chen, Duan; Wei, Guo-Wei
2012-04-01
As a key process of many biological reactions such as biological energy transduction or human sensory systems, proton transport has attracted much research attention in biological, biophysical, and mathematical fields. A quantum dynamics in continuum framework has been proposed to study proton permeation through membrane proteins in our earlier work, and the present work focuses on the generalized correlation of protons with their environment. Being complementary to electrostatic potentials, generalized correlations consist of proton-proton, proton-ion, proton-protein, and proton-water interactions. In our approach, protons are treated as quantum particles while other components of generalized correlations are described classically, at different levels of approximation depending on simulation feasibility and difficulty. Specifically, the membrane protein is modeled as a group of discrete atoms, while ion densities are approximated by Boltzmann distributions, and water molecules are represented as a dielectric continuum. These proton-environment interactions are formulated as convolutions between number densities of species and their corresponding interaction kernels, in which parameters are obtained from experimental data. In the present formulation, generalized correlations are important components in the total Hamiltonian of protons, and are thus seamlessly embedded in the multiscale/multiphysics total variational model of the system. This formulation accounts for non-electrostatic interactions, including the finite size effect, the geometry-confinement-induced channel barriers, dehydration and hydrogen bond effects, etc. The variational principle or the Euler-Lagrange equation is utilized to minimize the total energy functional, which includes the total Hamiltonian of protons, and obtain a new version of the generalized Laplace-Beltrami equation, generalized Poisson-Boltzmann equation and generalized Kohn-Sham equation.
A set of numerical algorithms, such as the matched interface and boundary method, the Dirichlet to Neumann mapping, Gummel iteration, and Krylov space techniques, is employed to improve the accuracy, efficiency, and robustness of model simulations. Finally, comparisons between the present model predictions and experimental data of current-voltage curves, as well as current-concentration curves of the Gramicidin A channel, verify our new model.
A model of head-related transfer functions based on a state-space analysis
NASA Astrophysics Data System (ADS)
Adams, Norman Herkamp
This dissertation develops and validates a novel state-space method for binaural auditory display. Binaural displays seek to immerse a listener in a 3D virtual auditory scene with a pair of headphones. The challenge for any binaural display is to compute the two signals to supply to the headphones. The present work considers a general framework capable of synthesizing a wide variety of auditory scenes. The framework models collections of head-related transfer functions (HRTFs) simultaneously. This framework improves the flexibility of contemporary displays, but it also compounds the steep computational cost of the display. The cost is reduced dramatically by formulating the collection of HRTFs in the state-space and employing order-reduction techniques to design efficient approximants. Order-reduction techniques based on the Hankel-operator are found to yield accurate low-cost approximants. However, the inter-aural time difference (ITD) of the HRTFs degrades the time-domain response of the approximants. Fortunately, this problem can be circumvented by employing a state-space architecture that allows the ITD to be modeled outside of the state-space. Accordingly, three state-space architectures are considered. Overall, a multiple-input, single-output (MISO) architecture yields the best compromise between performance and flexibility. The state-space approximants are evaluated both empirically and psychoacoustically. An array of truncated FIR filters is used as a pragmatic reference system for comparison. For a fixed cost bound, the state-space systems yield lower approximation error than FIR arrays for D>10, where D is the number of directions in the HRTF collection. A series of headphone listening tests are also performed to validate the state-space approach, and to estimate the minimum order N of indiscriminable approximants. For D = 50, the state-space systems yield order thresholds less than half those of the FIR arrays. 
Depending upon the stimulus uncertainty, a minimum state-space order of 7≤N≤23 appears to be adequate. In conclusion, the proposed state-space method enables a more flexible and immersive binaural display with low computational cost.
Approximated Stable Inversion for Nonlinear Systems with Nonhyperbolic Internal Dynamics. Revised
NASA Technical Reports Server (NTRS)
Devasia, Santosh
1999-01-01
A technique to achieve output tracking for nonminimum phase nonlinear systems with nonhyperbolic internal dynamics is presented. The present paper integrates stable inversion techniques (which achieve exact tracking) with approximation techniques (which modify the internal dynamics) to circumvent the nonhyperbolicity of the internal dynamics; this nonhyperbolicity is an obstruction to applying presently available stable inversion techniques. The theory is developed for nonlinear systems and the method is applied to a two-cart with inverted-pendulum example.
Lindsay, Kaitlin E; Rühli, Frank J; Deleon, Valerie Burke
2015-06-01
The technique of forensic facial approximation, or reconstruction, is one of many facets of the field of mummy studies. Although far from a rigorous scientific technique, evidence-based visualization of antemortem appearance may supplement radiological, chemical, histological, and epidemiological studies of ancient remains. Published guidelines exist for creating facial approximations, but few approximations are published with documentation of the specific process and references used. Additionally, significant new research has taken place in recent years which helps define best practices in the field. This case study records the facial approximation of a 3,000-year-old ancient Egyptian woman using medical imaging data and the digital sculpting program, ZBrush. It represents a synthesis of current published techniques based on the most solid anatomical and/or statistical evidence. Through this study, it was found that although certain improvements have been made in developing repeatable, evidence-based guidelines for facial approximation, there are many proposed methods still awaiting confirmation from comprehensive studies. This study attempts to assist artists, anthropologists, and forensic investigators working in facial approximation by presenting the recommended methods in a chronological and usable format. © 2015 Wiley Periodicals, Inc.
An Explicit Upwind Algorithm for Solving the Parabolized Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Korte, John J.
1991-01-01
An explicit, upwind algorithm was developed for the direct (noniterative) integration of the 3-D Parabolized Navier-Stokes (PNS) equations in a generalized coordinate system. The new algorithm uses upwind approximations of the numerical fluxes for the pressure and convection terms obtained by combining flux difference splittings (FDS) formed from the solution of an approximate Riemann problem (RP). The approximate RP is solved using an extension of the method developed by Roe for steady supersonic flow of an ideal gas. Roe's method is extended for use with the 3-D PNS equations expressed in generalized coordinates and to include Vigneron's technique of splitting the streamwise pressure gradient. The difficulty associated with applying Roe's scheme in the subsonic region is overcome. The second-order upwind differencing of the flux derivatives is obtained by adding FDS to either an original forward or backward differencing of the flux derivative. This approach is used to modify an explicit MacCormack differencing scheme into an upwind differencing scheme. The second-order upwind flux approximations, applied with flux limiters, provide a method for numerically capturing shocks without the need for additional artificial damping terms which require adjustment by the user. In addition, a cubic equation is derived for determining Vigneron's pressure splitting coefficient using the updated streamwise flux vector. Decoding the streamwise flux vector with the updated value of Vigneron's pressure splitting improves the stability of the scheme. The new algorithm is applied to 2-D and 3-D supersonic and hypersonic laminar flow test cases. Results are presented for the experimental studies of Holden and of Tracy. In addition, a flow field solution is presented for a generic hypersonic aircraft at a Mach number of 24.5 and angle of attack of 1 degree. The computed results compare well to both experimental data and numerical results from other algorithms.
Computational times required for the upwind PNS code are approximately equal to an explicit PNS MacCormack's code and existing implicit PNS solvers.
The Role of 3 Tesla MRA in the Detection of Intracranial Aneurysms
Kapsalaki, Eftychia Z.; Rountas, Christos D.; Fountas, Kostas N.
2012-01-01
Intracranial aneurysms constitute a common pathological entity, affecting approximately 1–8% of the general population. Their early detection is essential for their prompt treatment. Digital subtraction angiography is considered the imaging method of choice. However, other noninvasive methodologies such as CTA and MRA have been employed in the investigation of patients with suspected aneurysms. MRA is a noninvasive angiographic modality requiring no radiation exposure. However, its sensitivity and diagnostic accuracy were initially inadequate. Several MRA techniques have been developed for overcoming all these drawbacks and for improving its sensitivity. 3D TOF MRA and contrast-enhanced MRA are the most commonly employed techniques. The introduction of 3 T magnetic field further increased MRA's sensitivity, allowing detection of aneurysms smaller than 3 mm. The development of newer MRA techniques may provide valuable information regarding the flow characteristics of an aneurysm. Meticulous knowledge of MRA's limitations and pitfalls is of paramount importance for avoiding any erroneous interpretation of its findings. PMID:22292121
NASA Technical Reports Server (NTRS)
Wilmoth, R. G.
1973-01-01
A molecular beam time-of-flight technique is studied as a means of determining surface stay times for physical adsorption. The experimental approach consists of pulsing a molecular beam, allowing the pulse to strike an adsorbing surface and detecting the molecular pulse after it has subsequently desorbed. The technique is also found to be useful for general studies of adsorption under nonequilibrium conditions including the study of adsorbate-adsorbate interactions. The shape of the detected pulse is analyzed in detail for a first-order desorption process. For mean stay times, tau, less than the mean molecular transit times involved, the peak of the detected pulse is delayed by an amount approximately equal to tau. For tau much greater than these transit times, the detected pulse should decay as exp(-t/tau). However, for stay times of the order of the transit times, both the molecular speed distributions and the incident pulse duration time must be taken into account.
NASA Astrophysics Data System (ADS)
Wong, Meng Fei; Heng, Xiangxin; Zeng, Kaiyang
2008-10-01
Domain structures of [001]T and [011]T-cut Pb(Zn1/3Nb2/3)O3-(6%-7%)PbTiO3 (PZN-PT) single crystals are studied using the scanning electron acoustic microscope (SEAM) technique. The observed orientations of the domain walls agree reasonably well with the trigonometric projections of rhombohedral and orthorhombic dipoles on the (001) and (011) surfaces, respectively. After mechanical loading with microindentation, domain switching is also observed to form a hyperbolic butterfly shape and extend preferentially along four diagonal directions, i.e., ⟨110⟩ on the (001) surface and ⟨111¯⟩ on the (011) surface. The critical shear stress to cause domain switching for the PZN-PT crystal is estimated to be approximately 49 MPa for both the {110} and {111¯} planes based on theoretical analysis. Overall, the SEAM technique has been successfully demonstrated to be valid for the observation of domain structures in single-crystal PZN-PTs.
Solar variability. [measurements by spaceborne instruments
NASA Technical Reports Server (NTRS)
Sofia, S.
1981-01-01
Reference is made to direct measurements carried out by space-borne detectors, which have shown variations of the solar constant at the 0.2 percent level, with time scales ranging from days to tens of days. It is contended that these changes do not necessarily reflect variations in the solar luminosity and that, in general, direct measurements have not yet been able to establish (or exclude) solar luminosity changes with longer time scales. Indirect techniques, however, especially radius measurements, suggest that solar luminosity variations of up to approximately 0.7 percent have occurred within a period of tens to hundreds of years.
NASA Technical Reports Server (NTRS)
Young, David P.; Melvin, Robin G.; Bieterman, Michael B.; Johnson, Forrester T.; Samant, Satish S.
1991-01-01
The present FEM technique addresses both linear and nonlinear boundary value problems encountered in computational physics by handling general three-dimensional regions, boundary conditions, and material properties. The box finite elements used are defined by a Cartesian grid independent of the boundary definition, and local refinements proceed by dividing a given box element into eight subelements. Discretization employs trilinear approximations on the box elements; special element stiffness matrices are included for boxes cut by any boundary surface. Illustrative results are presented for representative aerodynamics problems involving up to 400,000 elements.
NASA Astrophysics Data System (ADS)
Palombi, Filippo; Toti, Simona
2015-05-01
Approximate weak solutions of the Fokker-Planck equation represent a useful tool to analyze the equilibrium fluctuations of birth-death systems, as they provide a quantitative knowledge lying in between numerical simulations and exact analytic arguments. In this paper, we adapt the general mathematical formalism known as the Ritz-Galerkin method for partial differential equations to the Fokker-Planck equation with time-independent polynomial drift and diffusion coefficients on the simplex. Then, we show how the method works in two examples, namely the binary and multi-state voter models with zealots.
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Cannizzaro, Frank; Melson, N. D.
1991-01-01
A general multiblock method for the solution of the three-dimensional, unsteady, compressible, thin-layer Navier-Stokes equations has been developed. The convective and pressure terms are spatially discretized using Roe's flux differencing technique while the viscous terms are centrally differenced. An explicit Runge-Kutta method is used to advance the solution in time. Local time stepping, adaptive implicit residual smoothing, and the Full Approximation Storage (FAS) multigrid scheme are added to the explicit time stepping scheme to accelerate convergence to steady state. Results for three-dimensional test cases are presented and discussed.
Supersonic second order analysis and optimization program user's manual
NASA Technical Reports Server (NTRS)
Clever, W. C.
1984-01-01
Approximate nonlinear inviscid theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at supersonic and moderate hypersonic speeds were developed. Emphasis was placed on approaches that would be responsive to a conceptual configuration design level of effort. Second-order small disturbance theory was utilized to meet this objective. Numerical codes were developed for analysis and design of relatively general three-dimensional geometries. Results from the computations indicate good agreement with experimental results for a variety of wing, body, and wing-body shapes. Computational times of approximately one minute on a CDC 176 are typical for a practical aircraft arrangement.
NASA Astrophysics Data System (ADS)
Abdelzaher, Tarek; Roy, Heather; Wang, Shiguang; Giridhar, Prasanna; Al Amin, Md. Tanvir; Bowman, Elizabeth K.; Kolodny, Michael A.
2016-05-01
Signal processing techniques such as filtering, detection, estimation and frequency domain analysis have long been applied to extract information from noisy sensor data. This paper describes the exploitation of these signal processing techniques to extract information from social networks, such as Twitter and Instagram. Specifically, we view social networks as noisy sensors that report events in the physical world. We then present a data processing stack for detection, localization, tracking, and veracity analysis of reported events using social network data. We show using a controlled experiment that the behavior of social sources as information relays varies dramatically depending on context. In benign contexts, there is general agreement on events, whereas in conflict scenarios, a significant amount of collective filtering is introduced by conflicted groups, creating a large data distortion. We describe signal processing techniques that mitigate such distortion, resulting in meaningful approximations of actual ground truth, given noisy reported observations. Finally, we briefly present an implementation of the aforementioned social network data processing stack in a sensor network analysis toolkit, called Apollo. Experiences with Apollo show that our techniques are successful at identifying and tracking credible events in the physical world.
Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz
2008-02-01
A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed-variable extension to the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid point discretization of the parameter space.
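The continuous-variable core of generalized pattern search is simple: poll a stencil of coordinate steps around the current point, accept any improvement, and halve the mesh when a poll fails. A minimal sketch of that core (ours; the paper's method additionally handles categorical variables and uses a surrogate):

```python
def pattern_search(f, x0, step=1.0, tol=1e-6):
    """Coordinate-direction poll with step halving on a failed poll."""
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d                   # trial point on the current mesh
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5                     # refine the mesh
    return x, fx

x, fx = pattern_search(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2, [0.0, 0.0])
print(x, fx)                                # converges to [1.0, -2.0], 0.0
```

Because the poll uses only function values, no gradients, the same skeleton extends naturally to categorical parameters (polling discrete alternatives), which is the mixed-variable extension the paper exploits.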
Nonlinear programming extensions to rational function approximations of unsteady aerodynamics
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1987-01-01
This paper deals with approximating unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft. Two methods of formulating these approximations are extended to include both the same flexibility in constraining them and the same methodology in optimizing nonlinear parameters as another currently used 'extended least-squares' method. Optimal selection of 'nonlinear' parameters is made in each of the three methods by use of the same nonlinear (nongradient) optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free 'linear' parameters are determined using least-squares matrix techniques on a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented which show comparative evaluations from application of each of the extended methods to a numerical example. The results obtained for the example problem show a significant (up to 63 percent) reduction in the number of differential equations used to represent the unsteady aerodynamic forces in linear time-invariant equations of motion as compared to a conventional method in which nonlinear terms are not optimized.
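A common baseline for such rational-function approximations fixes the "nonlinear" lag roots and solves for the free "linear" coefficients by least squares; that linear subproblem is what the paper's methods embed inside a nonlinear optimizer over the lag roots. A sketch for hypothetical scalar data (the basis form, lag values, and names are our assumptions, not the paper's formulation):

```python
import numpy as np

def rfa_fit(k, Q, lags):
    """Least-squares fit of Q(ik) ~ A0 + A1*(ik) + A2*(ik)^2 + sum_j Bj*ik/(ik+b_j),
    with the lag roots b_j held fixed."""
    s = 1j * np.asarray(k, dtype=float)
    cols = [np.ones_like(s), s, s ** 2] + [s / (s + b) for b in lags]
    A = np.column_stack(cols)
    # Stack real and imaginary parts to get a real-valued least-squares problem.
    M = np.vstack([A.real, A.imag])
    rhs = np.concatenate([np.asarray(Q).real, np.asarray(Q).imag])
    coef, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return coef

k = np.linspace(0.05, 1.0, 20)              # reduced frequencies (illustrative)
lags = [0.2, 0.6]                           # fixed lag roots b_j (illustrative)
true_coef = np.array([1.0, -0.5, 0.2, 0.8, -0.3])
s = 1j * k
basis = np.column_stack([np.ones_like(s), s, s ** 2, s / (s + 0.2), s / (s + 0.6)])
Q = basis @ true_coef                       # synthetic noiseless tabular data
print(np.allclose(rfa_fit(k, Q, lags), true_coef))  # True: linear coefficients recovered
```

Optimizing the lag roots themselves, as the paper does with a nongradient optimizer and equality constraints, is what allows a lower-order state-space realization for the same fit quality.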
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadid, John Nicolas; Elman, Howard; Shuttleworth, Robert R.
2007-04-01
In recent years, considerable effort has been placed on developing efficient and robust solution algorithms for the incompressible Navier-Stokes equations based on preconditioned Krylov methods. These include physics-based methods, such as SIMPLE, and purely algebraic preconditioners based on the approximation of the Schur complement. All these techniques can be represented as approximate block factorization (ABF) type preconditioners. The goal is to decompose the application of the preconditioner into simplified sub-systems in which scalable multi-level type solvers can be applied. In this paper we develop a taxonomy of these ideas based on an adaptation of a generalized approximate factorization of the Navier-Stokes system first presented in [25]. This taxonomy illuminates the similarities and differences among these preconditioners and the central role played by efficient approximation of certain Schur complement operators. We then present a parallel computational study that examines the performance of these methods and compares them to an additive Schwarz domain decomposition (DD) algorithm. Results are presented for two and three-dimensional steady state problems for enclosed domains and inflow/outflow systems on both structured and unstructured meshes. The numerical experiments are performed using MPSalsa, a stabilized finite element code.
Multivariate moment closure techniques for stochastic kinetic models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lakatos, Eszter, E-mail: e.lakatos13@imperial.ac.uk; Ale, Angelique; Kirk, Paul D. W.
2015-09-07
Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive, and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, there is an interplay between the nonlinearities and the stochastic dynamics that is much harder to capture correctly with such approximations to the true stochastic processes. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closure and illustrate their use in the context of two models that have proved challenging to the previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinases signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.
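To make the closure idea concrete, here is our own toy example (not one of the paper's models): a birth-death process with birth rate beta*x and quadratic death rate gamma*x**2. The second-moment equation involves the third moment, so a Gaussian closure sets the third central moment to zero, i.e. <x^3> ~ 3<x^2><x> - 2<x>^3, which closes the system of moment ODEs.

```python
# Gaussian moment closure for a toy birth-death process (our illustration):
# jumps +1 at rate beta*x and -1 at rate gamma*x**2.
# Exact moment equations:
#   d<x>/dt   = beta*<x> - gamma*<x^2>
#   d<x^2>/dt = beta*(2<x^2> + <x>) + gamma*(<x^2> - 2<x^3>)   (needs <x^3>)
beta, gamma = 10.0, 0.1
m, M2 = 100.0, 100.0 ** 2         # initial mean and second moment (zero variance)
dt = 1e-3
for _ in range(5000):             # forward-Euler integration to t = 5
    M3 = 3.0 * M2 * m - 2.0 * m ** 3          # Gaussian closure for <x^3>
    dm = beta * m - gamma * M2
    dM2 = beta * (2.0 * M2 + m) + gamma * (M2 - 2.0 * M3)
    m, M2 = m + dt * dm, M2 + dt * dM2
print(m, M2 - m ** 2)             # steady-state mean and variance under the closure
```

The closed system settles near mean 99 with variance of the same order as the mean, the near-Poissonian equilibrium fluctuation one expects for this process; multivariate closures generalize exactly this step to coupled species with correlations.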
Special cases of friction and applications
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Coy, J. J.
1983-01-01
Two techniques for reducing friction forces are presented. The techniques are applied to the generalized problem of reducing the friction between kinematic pairs which connect a moveable link to a frame. The basic principles are: (1) Let the moveable link be supported by two bearings where the relative velocities of the link with respect to each bearing are of opposite directions. Thus the resultant force (torque) of friction acting on the link due to the bearings is approximately zero. Then, additional perturbation of motion parallel to the main motion of the moveable link will require only a very small force; (2) Let the perturbation in motion be perpendicular to the main motion. Equations are developed which explain these two methods. The results are discussed in relation to friction in geared couplings, gyroscope gimbal bearings and a rotary conveyor system. Design examples are presented.
A two-dimensional composite grid numerical model based on the reduced system for oceanography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Y.F.; Browning, G.L.; Chesshire, G.
The proper mathematical limit of a hyperbolic system with multiple time scales, the reduced system, is a system that contains no high-frequency motions and is well posed if suitable boundary conditions are chosen for the initial-boundary value problem. The composite grid method, a robust and efficient grid-generation technique that smoothly and accurately treats general irregular boundaries, is used to approximate the two-dimensional version of the reduced system for oceanography on irregular ocean basins. A change-of-variable technique that substantially increases the accuracy of the model and a method for efficiently solving the elliptic equation for the geopotential are discussed. Numerical results are presented for circular and kidney-shaped basins by using a set of analytic solutions constructed in this paper.
NASA Astrophysics Data System (ADS)
Arshad, Muhammad; Lu, Dianchen; Wang, Jun
2017-07-01
In this paper, we extend the fractional reduced differential transform method (DTM) to the (N+1)-dimensional case, so that fractional-order partial differential equations (PDEs) can be solved effectively. The most distinctive aspect of this method is that no prescribed assumptions are required; the heavy computational effort is reduced and round-off errors are avoided. We apply the proposed scheme to some initial value problems and obtain approximate numerical solutions of linear and nonlinear time-fractional PDEs, which shows that the method is highly accurate and simple to apply. The proposed technique is thus a powerful tool for solving fractional PDEs and fractional-order problems occurring in engineering, physics, etc. Numerical results are obtained for verification and demonstration purposes using Mathematica software.
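In the simplest integer-order, one-dimensional case, the differential transform reduces an ODE to a recurrence on Taylor coefficients; the fractional (N+1)-dimensional method of the paper generalizes this idea. Our own minimal illustration for y' = y, y(0) = 1, where the transform gives (k+1) Y(k+1) = Y(k):

```python
import math

def dtm_exp(t, terms=20):
    """Differential transform solution of y' = y, y(0) = 1 (truncated series)."""
    Y = [1.0]                        # Y(0) = y(0)
    for k in range(terms - 1):
        Y.append(Y[k] / (k + 1))     # recurrence (k+1) Y(k+1) = Y(k)
    return sum(Yk * t ** k for k, Yk in enumerate(Y))

# The recurrence reproduces the Taylor coefficients 1/k! of exp(t).
print(abs(dtm_exp(1.0) - math.e) < 1e-12)   # True
```

The same mechanics, with fractional-order recurrences in several variables, underlie the (N+1)-dimensional scheme described above.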
NASA Technical Reports Server (NTRS)
Schwenke, David W.; Truhlar, Donald G.
1990-01-01
The Generalized Newton Variational Principle for 3D quantum mechanical reactive scattering is briefly reviewed. Then three techniques are described which improve the efficiency of the computations. First, the fact that the Hamiltonian is Hermitian is used to reduce the number of integrals computed, and then the properties of localized basis functions are exploited in order to eliminate redundant work in the integral evaluation. A new type of localized basis function with desirable properties is suggested. It is shown how partitioned matrices can be used with localized basis functions to reduce the amount of work required to handle the complex boundary conditions. The new techniques do not introduce any approximations into the calculations, so they may be used to obtain converged solutions of the Schroedinger equation.
NASA Astrophysics Data System (ADS)
Huber, Franz J. T.; Will, Stefan; Daun, Kyle J.
2016-11-01
Inferring the size distribution of aerosolized fractal aggregates from the angular distribution of elastically scattered light is a mathematically ill-posed problem. This paper presents a procedure for analyzing Wide-Angle Light Scattering (WALS) data using Bayesian inference. The outcome is probability densities for the recovered size distribution and aggregate morphology parameters. This technique is applied to both synthetic data and experimental data collected on soot-laden aerosols, using a measurement equation derived from Rayleigh-Debye-Gans fractal aggregate (RDG-FA) theory. In the case of experimental data, the recovered aggregate size distribution parameters are generally consistent with TEM-derived values, but the accuracy is impaired by the known limitations of RDG-FA theory. Finally, we show how this bias could potentially be avoided using the approximation error technique.
On the integration of reinforcement learning and approximate reasoning for control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1991-01-01
The author discusses the importance of strengthening the knowledge representation characteristic of reinforcement learning techniques using methods such as approximate reasoning. The ARIC (approximate reasoning-based intelligent control) architecture is an example of such a hybrid approach in which the fuzzy control rules are modified (fine-tuned) using reinforcement learning. ARIC also demonstrates that it is possible to start with an approximately correct control knowledge base and learn to refine this knowledge through further experience. On the other hand, techniques such as the TD (temporal difference) algorithm and Q-learning establish stronger theoretical foundations for their use in adaptive control and also in stability analysis of hybrid reinforcement learning and approximate reasoning-based controllers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kavenoky, A.
1973-01-01
From the national topical meeting on mathematical models and computational techniques for analysis of nuclear systems; Ann Arbor, Michigan, USA (8 Apr 1973). APOLLO calculates the space- and energy-dependent flux for a one dimensional medium, in the multigroup approximation of the transport equation. For a one dimensional medium, refined collision probabilities have been developed for the resolution of the integral form of the transport equation; these collision probabilities increase accuracy and save computing time. The interaction between a few cells can also be treated by the multicell option of APOLLO. The diffusion coefficient and the material buckling can be computed in the various B and P approximations with a linearly anisotropic scattering law, even in the thermal range of the spectrum. Eventually this coefficient is corrected for streaming by use of Benoist's theory. The self-shielding of the heavy isotopes is treated by a new and accurate technique which preserves the reaction rates of the fundamental fine structure flux. APOLLO can perform a depletion calculation for one cell, a group of cells, or a complete reactor. The results of an APOLLO calculation are the space- and energy-dependent flux, the material buckling, or any reaction rate; these results can also be macroscopic cross sections used as input data for a 2D or 3D depletion and diffusion code in reactor geometry. 10 references. (auth)
NASA Technical Reports Server (NTRS)
Greene, William H.
1990-01-01
A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
Empirical single sample quantification of bias and variance in Q-ball imaging.
Hainline, Allison E; Nath, Vishwesh; Parvathaneni, Prasanna; Blaber, Justin A; Schilling, Kurt G; Anderson, Adam W; Kang, Hakmook; Landman, Bennett A
2018-02-06
The bias and variance of high angular resolution diffusion imaging metrics have not been thoroughly explored in the literature; the simulation extrapolation (SIMEX) and bootstrap techniques offer a way to estimate them. The SIMEX approach is well established in the statistics literature and uses simulation of increasingly noisy data to extrapolate back to a hypothetical case with no noise. The bias of calculated metrics can then be computed by subtracting the SIMEX estimate from the original pointwise measurement. The SIMEX technique has been studied in the context of diffusion imaging to accurately capture the bias in fractional anisotropy measurements in DTI. Herein, we extend the application of SIMEX and bootstrap approaches to characterize bias and variance in metrics obtained from a Q-ball imaging reconstruction of high angular resolution diffusion imaging data. The results demonstrate that SIMEX and bootstrap approaches provide consistent estimates of the bias and variance of generalized fractional anisotropy, respectively. The RMSE for the generalized fractional anisotropy estimates shows a 7% decrease in white matter and an 8% decrease in gray matter when compared with the observed generalized fractional anisotropy estimates. On average, the bootstrap technique results in SD estimates that are approximately 97% of the true variation in white matter, and 86% in gray matter. Both SIMEX and bootstrap methods are flexible, estimate population characteristics based on single scans, and may be extended for bias and variance estimation on a variety of high angular resolution diffusion imaging metrics. © 2018 International Society for Magnetic Resonance in Medicine.
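The SIMEX idea described in this abstract (simulate extra noise at increasing levels, then extrapolate back to the hypothetical noise-free case) can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the function name, the quadratic extrapolant, and the toy metric x**2 (whose noise-induced bias is exactly sigma**2) are all choices of this sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

def simex_correct(y, sigma, metric, lambdas=(0.5, 1.0, 1.5, 2.0), n_sim=200):
    """SIMEX correction of metric(y) for a measurement y with noise sd sigma.

    For each lambda, add simulated noise of variance lambda*sigma**2, average
    the metric over the simulations, then fit a quadratic in lambda and
    extrapolate back to lambda = -1 (the hypothetical noise-free case).
    """
    lams = np.asarray(lambdas)
    means = [metric(y + np.sqrt(lam) * sigma * rng.standard_normal(n_sim)).mean()
             for lam in lams]
    return np.polyval(np.polyfit(lams, means, 2), -1.0)

# Demo: the metric x**2 is biased upward by sigma**2 under additive noise.
sigma, x_true = 1.0, 2.0
y = x_true + sigma * rng.standard_normal(500)          # 500 noisy measurements
raw = np.mean(y ** 2)                                  # ~ x_true**2 + sigma**2
corrected = np.mean([simex_correct(yi, sigma, np.square) for yi in y])
bias_estimate = raw - corrected                        # ~ sigma**2
```

Subtracting the extrapolated estimate from the raw pointwise value recovers the bias, mirroring the procedure the abstract describes for Q-ball metrics.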
Recursive least-squares learning algorithms for neural networks
NASA Astrophysics Data System (ADS)
Lewis, Paul S.; Hwang, Jenq N.
1990-11-01
This paper presents the development of a pair of recursive least squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second order information about the training error surface in order to achieve faster learning rates than are possible using first order gradient descent algorithms such as the generalized delta rule. A least squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least squares approximation, either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is O(N^2), where N is the number of network parameters. This is due to the estimation of the N x N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can be easily derived by using only block diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two dimensional Gaussian bump. In this example, RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule (6…).

1 BACKGROUND. Artificial neural networks (ANNs) offer an interesting and potentially useful paradigm for signal processing and pattern recognition. The majority of ANN applications employ the feed-forward multilayer perceptron (MLP) network architecture, in which network parameters are "trained" by a supervised learning algorithm employing the generalized delta rule (GDR) [1, 2]. The GDR algorithm approximates a fixed step steepest descent algorithm using derivatives computed by error backpropagation.
The GDR algorithm is sometimes referred to as the backpropagation algorithm. However, in this paper we will use the term backpropagation to refer only to the process of computing error derivatives. While multilayer perceptrons provide a very powerful nonlinear modeling capability, GDR training can be very slow and inefficient. In linear adaptive filtering, the analog of the GDR algorithm is the least-mean-squares (LMS) algorithm. Steepest descent-based algorithms such as GDR or LMS are first order because they use only first derivative, or gradient, information about the training error to be minimized. To speed up the training process, second order algorithms may be employed that take advantage of second derivative, or Hessian matrix, information. Second order information can be incorporated into MLP training in different ways. In many applications, especially in the area of pattern recognition, the training set is finite. In these cases, block learning can be applied using standard nonlinear optimization techniques [3, 4, 5].
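The recursion at the heart of the linearized formulation above is the standard RLS update with a rank-one (Sherman-Morrison) update of the inverse correlation matrix. A minimal sketch on a purely linear model follows; the function name is illustrative, and a real MLP would supply the linearized pattern gradients in place of the raw inputs x.

```python
import numpy as np

def rls_update(w, P, x, d, lam=1.0):
    """One recursive least squares step for the linear model d ~ w @ x.

    P tracks the inverse (Hessian-like) input correlation matrix via a
    rank-one Sherman-Morrison update; lam is an exponential forgetting factor.
    """
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = d - w @ x                    # a priori prediction error
    w = w + k * e
    P = (P - np.outer(k, Px)) / lam
    return w, P

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
w = np.zeros(3)
P = 1e3 * np.eye(3)                  # large initial P = weak prior on w
for _ in range(200):
    x = rng.standard_normal(3)
    d = w_true @ x + 1e-3 * rng.standard_normal()
    w, P = rls_update(w, P, x, d)
```

Each step costs O(N^2) in the number of parameters N, matching the complexity quoted in the abstract; restricting P to block diagonal form gives the cheaper partitioned variant the authors mention.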
High order discretization techniques for real-space ab initio simulations
NASA Astrophysics Data System (ADS)
Anderson, Christopher R.
2018-03-01
In this paper, we present discretization techniques to address numerical problems that arise when constructing ab initio approximations that use real-space computational grids. We present techniques to accommodate the singular nature of idealized nuclear and idealized electronic potentials, and we demonstrate the utility of using high order accurate grid based approximations to Poisson's equation in unbounded domains. To demonstrate the accuracy of these techniques, we present results for a Full Configuration Interaction computation of the dissociation of H2 using a computed, configuration dependent, orbital basis set.
Critical behavior of the anisotropic Heisenberg model by effective-field renormalization group
NASA Astrophysics Data System (ADS)
de Sousa, J. Ricardo; Fittipaldi, I. P.
1994-05-01
A real-space effective-field renormalization-group method (ERFG) recently derived for computing critical properties of Ising spins is extended to treat the quantum spin-1/2 anisotropic Heisenberg model. The formalism is based on a generalized but approximate Callen-Suzuki spin relation and utilizes a convenient differential operator expansion technique. The method is illustrated in several lattice structures by employing its simplest approximation version in which clusters with one (N'=1) and two (N=2) spins are used. The results are compared with those obtained from the standard mean-field (MFRG) and Migdal-Kadanoff (MKRG) renormalization-group treatments and it is shown that this technique leads to rather accurate results. It is shown that, in contrast with the MFRG and MKRG predictions, the EFRG, besides correctly distinguishing the geometries of different lattice structures, also provides a vanishing critical temperature for all two-dimensional lattices in the isotropic Heisenberg limit. For the simple cubic lattice, the dependence of the transition temperature Tc with the exchange anisotropy parameter Δ [i.e., Tc(Δ)], and the resulting value for the critical thermal crossover exponent φ [i.e., Tc≂Tc(0)+AΔ1/φ ] are in quite good agreement with results available in the literature in which more sophisticated treatments are used.
A Sensitivity Analysis of Circular Error Probable Approximation Techniques
1992-03-01
SENSITIVITY ANALYSIS OF CIRCULAR ERROR PROBABLE APPROXIMATION TECHNIQUES. THESIS. Presented to the Faculty of the School of Engineering of the Air Force...programming skills. Major Paul Auclair patiently advised me in this endeavor, and Major Andy Howell added numerous insightful contributions. I thank my...techniques. The two most accurate techniques require numerical integration and can take several hours to run on a personal computer [2:1-2,4-6]. Some
Neurocontrol and fuzzy logic: Connections and designs
NASA Technical Reports Server (NTRS)
Werbos, Paul J.
1991-01-01
Artificial neural networks (ANNs) and fuzzy logic are complementary technologies. ANNs extract information from systems to be learned or controlled, while fuzzy techniques mainly use verbal information from experts. Ideally, both sources of information should be combined. For example, one can learn rules in a hybrid fashion, and then calibrate them for better whole-system performance. ANNs offer universal approximation theorems, pedagogical advantages, very high-throughput hardware, and links to neurophysiology. Neurocontrol - the use of ANNs to directly control motors or actuators, etc. - uses five generalized designs, related to control theory, which can work on fuzzy logic systems as well as ANNs. These designs can copy what experts do instead of what they say, learn to track trajectories, generalize adaptive control, and maximize performance or minimize cost over time, even in noisy environments. Design tradeoffs and future directions are discussed throughout.
NASA Technical Reports Server (NTRS)
Stolzer, Alan J.; Halford, Carl
2007-01-01
In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.
Analytical approximate solutions for a general class of nonlinear delay differential equations.
Căruntu, Bogdan; Bota, Constantin
2014-01-01
We use the polynomial least squares method (PLSM), which allows us to compute analytical approximate polynomial solutions for a very general class of strongly nonlinear delay differential equations. The method is tested by computing approximate solutions for several applications including the pantograph equations and a nonlinear time-delay model from biology. The accuracy of the method is illustrated by a comparison with approximate solutions previously computed using other methods.
Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.
Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian
2018-05-23
Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
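The constant time and memory property described above can be illustrated with a swing-filter-style segmenter, a classic technique related to (but not identical to) the authors' algorithm: the only state kept per segment is an anchor point and a feasible slope interval, so each sample costs O(1), and every emitted segment keeps all covered samples within a maximum error eps.

```python
import math

def swing_pla(samples, eps):
    """Online piecewise linear approximation with O(1) work per sample.

    Maintains an anchor (t0, y0) and the interval [lo, hi] of slopes that keep
    every sample seen so far within +/- eps of the line; when the interval
    empties, the segment is closed and a new one starts at its end point.
    """
    if len(samples) < 2:
        return [(0, samples[0], 0, samples[0])] if samples else []
    segments = []                      # (t_start, y_start, t_end, y_end)
    t0, y0 = 0, float(samples[0])
    lo, hi = -math.inf, math.inf
    for t in range(1, len(samples)):
        y = samples[t]
        dt = t - t0
        new_lo = max(lo, (y - eps - y0) / dt)
        new_hi = min(hi, (y + eps - y0) / dt)
        if new_lo > new_hi:            # no single line fits: close the segment
            slope = (lo + hi) / 2.0    # any slope in [lo, hi] satisfies eps
            y_end = y0 + slope * (t - 1 - t0)
            segments.append((t0, y0, t - 1, y_end))
            t0, y0 = t - 1, y_end      # restart from the segment's end point
            lo = (y - eps - y0) / (t - t0)
            hi = (y + eps - y0) / (t - t0)
        else:
            lo, hi = new_lo, new_hi
    slope = 0.0 if math.isinf(lo) else (lo + hi) / 2.0
    t_last = len(samples) - 1
    segments.append((t0, y0, t_last, y0 + slope * (t_last - t0)))
    return segments

# Demo: a slowly varying signal with small deterministic jitter.
data = [math.sin(0.1 * i) + 0.02 * ((i * 37) % 7 - 3) for i in range(200)]
segments = swing_pla(data, 0.1)
```

Unlike buffer-based segmenters, nothing here grows with segment length, which is the property the paper targets for microcontroller deployment.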
Iterative design of one- and two-dimensional FIR digital filters. [Finite duration Impulse Response
NASA Technical Reports Server (NTRS)
Suk, M.; Choi, K.; Algazi, V. R.
1976-01-01
The paper describes a new iterative technique for designing FIR (finite duration impulse response) digital filters using a frequency weighted least squares approximation. The technique is as easy to implement (via FFT) and as effective in two dimensions as in one dimension, and there are virtually no limitations on the class of filter frequency spectra approximated. An adaptive adjustment of the frequency weight to achieve other types of design approximation such as Chebyshev type design is discussed.
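An FFT-based iteration of the kind the abstract describes can be sketched as alternating projections: enforce finite impulse response support in the time domain, then pull the response toward the desired spectrum with a per-frequency weight. This is a generic sketch under those assumptions, not the paper's exact algorithm; the lowpass target, band edges, and weights below are illustrative.

```python
import numpy as np

def fir_weighted_ls(desired, weight, n_taps, n_iter=200):
    """Iterative frequency-weighted FIR design sketch (via FFT).

    Alternates a time-domain support constraint (truncate to n_taps) with a
    weighted correction toward the desired frequency response; small weights
    relax a band (e.g. a transition band), large weights enforce it.
    """
    H = desired.astype(complex).copy()
    for _ in range(n_iter):
        h = np.fft.ifft(H)
        h[n_taps:] = 0.0                         # finite impulse response
        H = np.fft.fft(h)
        H += weight * (desired - H)              # weighted frequency correction
    return np.fft.ifft(H).real[:n_taps]

# Illustrative linear-phase lowpass: cutoff 0.2, relaxed transition band.
n_fft, n_taps = 256, 31
k = np.arange(n_fft)
f = np.minimum(k, n_fft - k) / n_fft             # normalized frequency
delay = (n_taps - 1) / 2                         # linear-phase target
D = np.where(f < 0.2, np.exp(-2j * np.pi * k * delay / n_fft), 0.0)
W = np.where((f > 0.15) & (f < 0.25), 0.1, 1.0)  # de-emphasize transition band
h = fir_weighted_ls(D, W, n_taps)
```

The same loop works unchanged in two dimensions with 2D FFTs, which is the point the abstract emphasizes; adapting the weight between iterations gives the Chebyshev-like designs it mentions.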
Fast, Inclusive Searches for Geographic Names Using Digraphs
Donato, David I.
2008-01-01
An algorithm specifies how to quickly identify names that approximately match any specified name when searching a list or database of geographic names. Based on comparisons of the digraphs (ordered letter pairs) contained in geographic names, this algorithmic technique identifies approximately matching names by applying an artificial but useful measure of name similarity. A digraph index enables computer name searches that are carried out using this technique to be fast enough for deployment in a Web application. This technique, which is a member of the class of n-gram algorithms, is related to, but distinct from, the soundex, PHONIX, and metaphone phonetic algorithms. Despite this technique's tendency to return some counterintuitive approximate matches, it is an effective aid for fast, inclusive searches for geographic names when the exact name sought, or its correct spelling, is unknown.
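A digraph-based similarity of the kind described above can be made concrete with a Dice coefficient over the sets of ordered letter pairs; this is one standard instance of an n-gram measure, not necessarily the author's exact formula, and the function names are illustrative.

```python
def digraphs(name):
    """Set of ordered letter pairs (digraphs) of a normalized name."""
    s = "".join(ch for ch in name.lower() if ch.isalpha())
    return {s[i:i + 2] for i in range(len(s) - 1)}

def digraph_similarity(a, b):
    """Dice coefficient over shared digraphs: 1.0 means identical pair sets."""
    da, db = digraphs(a), digraphs(b)
    if not da and not db:
        return 1.0
    return 2 * len(da & db) / (len(da) + len(db))
```

Because the measure compares letter pairs rather than whole strings, misspellings ("Pittsburg" for "Pittsburgh") and reorderings ("Tahoe, Lake" for "Lake Tahoe") still score highly, which is exactly the inclusive behavior the abstract describes; an index from digraph to names containing it keeps lookups fast.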
NASA Astrophysics Data System (ADS)
Kavcar, Nevzat; Korkmaz, Cihan
2017-02-01
The purpose of this work is to determine physics teacher candidates' views on the content and general properties of the Physics 10 textbook prepared for the 2013 Secondary School Physics Curriculum. Twenty-three teacher candidates in the 2014-2015 school year constituted the sample of the study, in which a survey model based on qualitative research techniques was used, with document analysis. The data collection tools were forms containing 51 open-ended questions on the subject content and nine on the general properties of the textbook. It was concluded that the textbook was sufficient in terms of its life-context-based, activity-based, and student-centered approach, its language, and its development of social and inquiry skills, but insufficient in addressing the learning outcomes of the curriculum and in providing activities, projects, and homework for application. Activities and applications addressing the affective domain, and assessment and evaluation tools such as concept maps, concept networks, and semantic analysis tables, could be added to the textbook.
Effect of shape and size of lung and chest wall on stresses in the lung
NASA Technical Reports Server (NTRS)
Vawter, D. L.; Matthews, F. L.; West, J. B.
1975-01-01
To understand better the effect of shape and size of lung and chest wall on the distribution of stresses, strains, and surface pressures, we analyzed a theoretical model using the technique of finite elements. First we investigated the effects of changing the chest wall shape during expansion, and second we studied lungs of a variety of inherent shapes and sizes. We found that, in general, the distributions of alveolar size, mechanical stresses, and surface pressures in the lungs were dominated by the weight of the lung and that changing the shape of the lung or chest wall had relatively little effect. Only at high states of expansion where the lung was very stiff did changing the shape of the chest wall cause substantial changes. Altering the inherent shape of the lung generally had little effect but the topographical differences in stresses and surface pressures were approximately proportional to lung height. The results are generally consistent with those found in the dog by Hoppin et al (1969).
A note on generalized Genome Scan Meta-Analysis statistics
Koziol, James A; Feng, Anne C
2005-01-01
Background: Wise et al. introduced a rank-based statistical technique for meta-analysis of genome scans, the Genome Scan Meta-Analysis (GSMA) method. Levinson et al. recently described two generalizations of the GSMA statistic: (i) a weighted version of the GSMA statistic, so that different studies could be ascribed different weights for analysis; and (ii) an order statistic approach, reflecting the fact that a GSMA statistic can be computed for each chromosomal region or bin width across the various genome scan studies. Results: We provide an Edgeworth approximation to the null distribution of the weighted GSMA statistic, we examine the limiting distribution of the GSMA statistics under the order statistic formulation, and we quantify the relevance of the pairwise correlations of the GSMA statistics across different bins on this limiting distribution. We also remark on aggregate criteria and multiple testing for determining significance of GSMA results. Conclusion: Theoretical considerations detailed herein can lead to clarification and simplification of testing criteria for generalizations of the GSMA statistic. PMID:15717930
Towards a general object-oriented software development methodology
NASA Technical Reports Server (NTRS)
Seidewitz, ED; Stark, Mike
1986-01-01
An object is an abstract software model of a problem domain entity. Objects are packages of both data and operations on that data (Goldberg 83, Booch 83). The Ada (tm) package construct is representative of this general notion of an object. Object-oriented design is the technique of using objects as the basic unit of modularity in systems design. The Software Engineering Laboratory at the Goddard Space Flight Center is currently involved in a pilot program to develop a flight dynamics simulator in Ada (approximately 40,000 statements) using object-oriented methods. Several authors have applied object-oriented concepts to Ada (e.g., Booch 83, Cherry 85). It was found that these methodologies are limited. As a result, a more general approach was synthesized which allows a designer to apply powerful object-oriented principles to a wide range of applications and at all stages of design. An overview is provided of this approach. Further, how object-oriented design fits into the overall software life-cycle is considered.
NASA Technical Reports Server (NTRS)
Karpel, M.
1994-01-01
Various control analysis, design, and simulation techniques of aeroservoelastic systems require the equations of motion to be cast in a linear, time-invariant state-space form. In order to account for unsteady aerodynamics, rational function approximations must be obtained to represent them in the first order equations of the state-space formulation. A computer program, MIST, has been developed which determines minimum-state approximations of the coefficient matrices of the unsteady aerodynamic forces. The Minimum-State Method facilitates the design of lower-order control systems, analysis of control system performance, and near real-time simulation of aeroservoelastic phenomena such as the outboard-wing acceleration response to gust velocity. Engineers using this program will be able to calculate minimum-state rational approximations of the generalized unsteady aerodynamic forces. Using the Minimum-State formulation of the state-space equations, they will be able to obtain state-space models with good open-loop characteristics while reducing the number of aerodynamic equations by an order of magnitude more than traditional approaches. These low-order state-space mathematical models are good for design and simulation of aeroservoelastic systems. The computer program, MIST, accepts tabular values of the generalized aerodynamic forces over a set of reduced frequencies. It then determines approximations to these tabular data in the Laplace domain using rational functions. MIST provides the capability to select the denominator coefficients in the rational approximations, to selectably constrain the approximations without increasing the problem size, and to determine and emphasize critical frequency ranges in determining the approximations. MIST has been written to allow two types of data weighting options. The first weighting is a traditional normalization of the aerodynamic data to the maximum unit value of each aerodynamic coefficient.
The second allows weighting the importance of different tabular values in determining the approximations based upon physical characteristics of the system. Specifically, the physical weighting capability is such that each tabulated aerodynamic coefficient, at each reduced frequency value, is weighted according to the effect of an incremental error of this coefficient on aeroelastic characteristics of the system. In both cases, the resulting approximations yield a relatively low number of aerodynamic lag states in the subsequent state-space model. MIST is written in ANSI FORTRAN 77 for DEC VAX series computers running VMS. It requires approximately 1Mb of RAM for execution. The standard distribution medium for this package is a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. It is also available on a TK50 tape cartridge in DEC VAX BACKUP format. MIST was developed in 1991. DEC VAX and VMS are trademarks of Digital Equipment Corporation. FORTRAN 77 is a registered trademark of Lahey Computer Systems, Inc.
Building detection in SAR imagery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steinbach, Ryan Matthew
Current techniques for building detection in Synthetic Aperture Radar (SAR) imagery can be computationally expensive and/or enforce stringent requirements for data acquisition. I present two techniques that are effective and efficient at determining an approximate building location. This approximate location can be used to extract a portion of the SAR image to then perform a more robust detection. The proposed techniques assume that for the desired image, bright lines and shadows, SAR artifact effects, are approximately labeled. These labels are enhanced and utilized to locate buildings, only if the related bright lines and shadows can be grouped. In order to find which of the bright lines and shadows are related, all of the bright lines are connected to all of the shadows. This allows the problem to be solved from a connected graph viewpoint, where the nodes are the bright lines and shadows and the arcs are the connections between bright lines and shadows. For the first technique, constraints based on angle of depression and the relationship between connected bright lines and shadows are applied to remove unrelated arcs. The second technique calculates weights for the connections and then performs a series of increasingly relaxed hard and soft thresholds. This results in groups with varying levels of validity. Once the related bright lines and shadows are grouped, their locations are combined to provide an approximate building location. Experimental results demonstrate the outcome of the two techniques. The two techniques are compared and discussed.
Convergence analysis of surrogate-based methods for Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Yan, Liang; Zhang, Yuan-Xiang
2017-12-01
The major challenges in the Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations that are performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least two times faster in the KL sense. The error bound on the Hellinger distance is also provided. To provide concrete examples of surrogate model based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate the Bayesian inference approach. The Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems, involving the inference of parameters appearing in partial differential equations.
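The offline/online split described above can be sketched end to end on a one-parameter toy problem. The sketch below uses a plain least squares polynomial fit in place of the Christoffel-weighted construction, and the forward model, prior, and tuning constants are all illustrative assumptions; the point is only that MCMC touches the cheap surrogate, never the expensive model.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(theta):
    """Stand-in for an expensive forward model (e.g. a PDE solve)."""
    return np.sin(theta) + 0.5 * theta

# Offline phase: fit a polynomial surrogate by least squares over the
# support of the prior (here Uniform(-2, 2)).
train_t = rng.uniform(-2.0, 2.0, 50)
surrogate = np.polynomial.Polynomial.fit(train_t, forward(train_t), deg=7)

# Online phase: Metropolis sampling of the posterior, evaluating only the
# surrogate in the likelihood.
theta_true, noise_sd = 0.8, 0.1
y_obs = forward(theta_true)

def log_post(theta):
    if not -2.0 < theta < 2.0:                 # flat prior on (-2, 2)
        return -np.inf
    return -0.5 * ((y_obs - surrogate(theta)) / noise_sd) ** 2

theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + 0.3 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
posterior_mean = np.mean(chain[1000:])
```

The paper's bounds say how the surrogate's accuracy in the prior-weighted L2 norm controls the KL distance between this surrogate posterior and the true one; in the sketch the degree-7 fit is accurate enough that the chain concentrates near the true parameter.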
Application of Discrete Fracture Modeling and Upscaling Techniques to Complex Fractured Reservoirs
NASA Astrophysics Data System (ADS)
Karimi-Fard, M.; Lapene, A.; Pauget, L.
2012-12-01
During the last decade, an important effort has been made to improve data acquisition (seismic and borehole imaging) and workflow for reservoir characterization which has greatly benefited the description of fractured reservoirs. However, the geological models resulting from the interpretations need to be validated or calibrated against dynamic data. Flow modeling in fractured reservoirs remains a challenge due to the difficulty of representing mass transfers at different heterogeneity scales. The majority of the existing approaches are based on dual continuum representation where the fracture network and the matrix are represented separately and their interactions are modeled using transfer functions. These models are usually based on idealized representation of the fracture distribution which makes the integration of real data difficult. In recent years, due to increases in computer power, discrete fracture modeling techniques (DFM) are becoming popular. In these techniques the fractures are represented explicitly allowing the direct use of data. In this work we consider the DFM technique developed by Karimi-Fard et al. [1] which is based on an unstructured finite-volume discretization. The mass flux between two adjacent control-volumes is evaluated using an optimized two-point flux approximation. The result of the discretization is a list of control-volumes with the associated pore-volumes and positions, and a list of connections with the associated transmissibilities. Fracture intersections are simplified using a connectivity transformation which contributes considerably to the efficiency of the methodology. In addition, the method is designed for general purpose simulators and any connectivity based simulator can be used for flow simulations. The DFM technique is either used standalone or as part of an upscaling technique. The upscaling techniques are required for large reservoirs where the explicit representation of all fractures and faults is not possible. 
Karimi-Fard et al. [2] have developed an upscaling technique based on the DFM representation. The original version of this technique was developed to construct a dual-porosity model from a discrete fracture description. This technique has been extended and generalized so it can be applied to a wide range of problems, from reservoirs with few or no fractures to highly fractured reservoirs. In this work, we present the application of these techniques to two three-dimensional fractured reservoirs constructed using real data. The first model contains more than 600 medium- and large-scale fractures. The fractures are not always connected, which requires a general modeling technique. The reservoir has 50 wells (injectors and producers), and water-flooding simulations are performed. The second test case is a larger reservoir with sparsely distributed faults. Single-phase simulations are performed with 5 producing wells. [1] Karimi-Fard M., Durlofsky L.J., and Aziz K. 2004. An efficient discrete-fracture model applicable for general-purpose reservoir simulators. SPE Journal, 9(2): 227-236. [2] Karimi-Fard M., Gong B., and Durlofsky L.J. 2006. Generation of coarse-scale continuum flow models from detailed fracture characterizations. Water Resources Research, 42(10): W10423.
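The discretization output described above, a list of control volumes plus a connection list with transmissibilities, maps directly onto a connectivity-based solver. The following is a minimal sketch only, not the Karimi-Fard implementation: steady single-phase incompressible flow with pressures prescribed at well cells (all names and the tiny test case are illustrative assumptions).

```python
import numpy as np

def solve_pressure(n_cells, connections, fixed_p):
    """Steady single-phase incompressible flow on a connectivity list.

    connections: (i, j, T) tuples; the two-point flux approximation
    gives the mass flux between cells as q_ij = T * (p_i - p_j).
    fixed_p: {cell: pressure} for cells with prescribed pressure (wells).
    """
    A = np.zeros((n_cells, n_cells))
    b = np.zeros(n_cells)
    for i, j, T in connections:
        A[i, i] += T
        A[j, j] += T
        A[i, j] -= T
        A[j, i] -= T
    for cell, p in fixed_p.items():  # overwrite rows with Dirichlet data
        A[cell, :] = 0.0
        A[cell, cell] = 1.0
        b[cell] = p
    return np.linalg.solve(A, b)

# three cells in series, unit transmissibility, injector at 0, producer at 2
p = solve_pressure(3, [(0, 1, 1.0), (1, 2, 1.0)], {0: 1.0, 2: 0.0})
```

Because only the connection list is used, fractures, matrix cells, and their intersections are all treated uniformly, which is what makes the representation simulator-agnostic.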
Derivative information recovery by a selective integration technique
NASA Technical Reports Server (NTRS)
Johnson, M. A.
1974-01-01
A nonlinear stationary homogeneous digital filter DIRSIT (derivative information recovery by a selective integration technique) is investigated. The spectrum of a quasi-linear discrete describing function (DDF) for DIRSIT is obtained by a digital measuring scheme. A finite impulse response (FIR) approximation to the quasi-linearization is then obtained. Finally, DIRSIT is compared with its quasi-linear approximation and with a standard digital differentiating technique. Results indicate the effects of DIRSIT on a wide variety of practical signals.
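DIRSIT itself is nonlinear and is not reproduced here, but a standard digital differentiating technique of the kind it is compared against can be sketched as a central-difference FIR filter (a generic sketch; the report does not specify which FIR design was used):

```python
import numpy as np

def fir_differentiator(x, dt):
    """Central-difference FIR differentiating filter:
    y[n] = (x[n+1] - x[n-1]) / (2*dt), exact for quadratic signals."""
    return (x[2:] - x[:-2]) / (2.0 * dt)

t = np.linspace(0.0, 1.0, 11)
d = fir_differentiator(t**2, t[1] - t[0])  # derivative of t^2 is 2t
```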
Webster, Victoria A; Nieto, Santiago G; Grosberg, Anna; Akkus, Ozan; Chiel, Hillel J; Quinn, Roger D
2016-10-01
In this study, new techniques for approximating the contractile properties of cells in biohybrid devices using Finite Element Analysis (FEA) have been investigated. Many current techniques for modeling biohybrid devices use individual cell forces to simulate the cellular contraction. However, such techniques result in long simulation runtimes. In this study we investigated the effect of using thermal contraction on simulation runtime. The thermal contraction model was significantly faster than models using individual cell forces, making it beneficial for rapidly designing or optimizing devices. Three techniques, Stoney's Approximation, a Modified Stoney's Approximation, and a Thermostat Model, were explored for calibrating the thermal expansion/contraction parameters (TECPs) needed to simulate cellular contraction using thermal contraction. The TECP values were calibrated using published data on the deflections of muscular thin films (MTFs). Using these techniques, TECP values that suitably approximate experimental deflections can be determined from experimental data obtained on cardiomyocyte MTFs. Furthermore, a sensitivity analysis was performed in order to investigate the contribution of individual variables, such as elastic modulus and layer thickness, to the final calibrated TECP for each calibration technique. Additionally, the TECP values are applicable to other types of biohybrid devices: two non-MTF models were simulated based on devices reported in the existing literature. Copyright © 2016 Elsevier Ltd. All rights reserved.
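The classical Stoney relation underlying the first calibration technique can be written down directly. A sketch with generic parameter names and consistent units assumed; this is the textbook formula, not the paper's calibration code:

```python
def stoney_film_stress(E_s, nu_s, t_s, t_f, R):
    """Stoney's approximation: stress in a thin film inferred from the
    curvature radius R it induces in a much thicker substrate,

        sigma_f = E_s * t_s**2 / (6 * (1 - nu_s) * t_f * R)

    with substrate modulus E_s, Poisson ratio nu_s, substrate thickness
    t_s and film thickness t_f. Valid when t_f << t_s and deflections
    are small."""
    return E_s * t_s**2 / (6.0 * (1.0 - nu_s) * t_f * R)
```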
NASA Technical Reports Server (NTRS)
Ostroff, A. J.
1973-01-01
Some of the major difficulties associated with large orbiting astronomical telescopes are the cost of manufacturing the primary mirror to precise tolerances and maintaining diffraction-limited tolerances while in orbit. One successfully demonstrated approach for minimizing these problems is the technique of actively deforming the primary mirror by applying discrete forces to the rear of the mirror. A modal control technique, as applied to active optics, has previously been developed and analyzed. The modal control technique represents the plant to be controlled in terms of its eigenvalues and eigenfunctions, which are estimated via numerical approximation techniques. The report includes an extension of previous work using the modal control technique and also describes an optimal feedback controller. The equations for both control laws are developed in state-space differential form and include such considerations as stability, controllability, and observability. These equations are general and allow the incorporation of various mode-analyzer designs; two design approaches are presented. The report also includes a technique for placing actuators and sensors at points on the mirror based upon the flexibility matrix of the uncontrolled or unobserved modes of the structure. The locations selected by this technique are used in the computer runs which are described. The results are based upon three different initial error distributions, two mode-analyzer designs, and both the modal and optimal control laws.
Multiple Scattering Effects on Pulse Propagation in Optically Turbid Media.
NASA Astrophysics Data System (ADS)
Joelson, Bradley David
The effects of multiple scattering in an optically turbid medium are examined for an impulse solution to the radiative transfer equation for a variety of geometries and phase functions. In regions where the complexities of the phase function proved too cumbersome for analytic methods, Monte Carlo techniques were developed to describe the entire scalar radiance distribution. The determination of a general spread function is strongly dependent on geometry and on the particular regions where limits can be placed on the variables of the problem. Hence, the general spread function is first simplified by considering optical regimes which reduce the complexity of the variable dependence. First, in the small-angle limit we calculate some contracted spread functions along with their moments and then use Monte Carlo techniques to establish the limitations imposed by the small-angle approximation in planar geometry. The point spread function (PSF) for a spherical geometry is calculated for the full angular spread in the forward direction of ocean waters, using Monte Carlo methods at optically thin and moderate depths and analytic methods in the diffusion domain. The angular dependence of the PSF for various ocean waters is examined for a range of optical parameters. The analytic method used in the diffusion calculation is justified by examining the angular dependence of the radiance of an impulse solution in a planar geometry for a prolongated Henyey-Greenstein phase function with an asymmetry factor approximately equal to that of the ocean phase functions. The Legendre moments of the radiance are examined in order to assess the viability of the diffusion approximation, which assumes a linearly anisotropic angular distribution for the radiance. A realistic lidar calculation is performed for a variety of ocean waters to determine the effects of multiple scattering on the determination of the speed of sound by using the range-gated frequency spectrum of the lidar signal.
It is shown that the optical properties of the ocean help to ensure a single-scatter form for the frequency spectrum of the lidar signal. This spectrum can then be used to compute the speed of sound and the backscatter probability.
Silvestrelli, Pier Luigi; Ambrosetti, Alberto
2014-03-28
The Density Functional Theory (DFT)/van der Waals-Quantum Harmonic Oscillator-Wannier function (vdW-QHO-WF) method, recently developed to include the vdW interactions in approximated DFT by combining the quantum harmonic oscillator model with the maximally localized Wannier function technique, is applied to the cases of atoms and small molecules (X=Ar, CO, H2, H2O) weakly interacting with benzene and with the ideal planar graphene surface. Comparison is also presented with the results obtained by other DFT vdW-corrected schemes, including PBE+D, vdW-DF, vdW-DF2, rVV10, and by the simpler Local Density Approximation (LDA) and semilocal generalized gradient approximation approaches. While for the X-benzene systems all the considered vdW-corrected schemes perform reasonably well, it turns out that an accurate description of the X-graphene interaction requires a proper treatment of many-body contributions and of short-range screening effects, as demonstrated by adopting an improved version of the DFT/vdW-QHO-WF method. We also comment on the widespread attitude of relying on LDA to get a rough description of weakly interacting systems.
Positron confinement in embedded lithium nanoclusters
NASA Astrophysics Data System (ADS)
van Huis, M. A.; van Veen, A.; Schut, H.; Falub, C. V.; Eijt, S. W.; Mijnarends, P. E.; Kuriplach, J.
2002-02-01
Quantum confinement of positrons in nanoclusters offers the opportunity to obtain detailed information on the electronic structure of nanoclusters by application of positron annihilation spectroscopy techniques. In this work, positron confinement is investigated in lithium nanoclusters embedded in monocrystalline MgO. These nanoclusters were created by means of ion implantation and subsequent annealing. It was found from the results of Doppler broadening positron beam analysis that approximately 92% of the implanted positrons annihilate in lithium nanoclusters rather than in the embedding MgO, while the local fraction of lithium at the implantation depth is only 1.3 at. %. The results of two-dimensional angular correlation of annihilation radiation confirm the presence of crystalline bulk lithium. The confinement of positrons is ascribed to the difference in positron affinity between lithium and MgO. The nanocluster acts as a potential well for positrons, where the depth of the potential well is equal to the difference in the positron affinities of lithium and MgO. These affinities were calculated using the linear muffin-tin orbital atomic sphere approximation method. This yields a positronic potential step at the MgO||Li interface of 1.8 eV using the generalized gradient approximation and 2.8 eV using the insulator model.
Creation of 0.10-cm⁻¹ resolution quantitative infrared spectral libraries for gas samples
NASA Astrophysics Data System (ADS)
Sharpe, Steven W.; Sams, Robert L.; Johnson, Timothy J.; Chu, Pamela M.; Rhoderick, George C.; Guenther, Franklin R.
2002-02-01
The National Institute of Standards and Technology (NIST) and the Pacific Northwest National Laboratory (PNNL) are independently creating quantitative, approximately 0.10 cm⁻¹ resolution, infrared spectral libraries of vapor-phase compounds. The NIST library will consist of approximately 100 vapor-phase spectra of volatile hazardous air pollutants (HAPs) and suspected greenhouse gases. The PNNL library will consist of approximately 400 vapor-phase spectra associated with DOE's remediation mission. A critical part of creating and validating any quantitative library involves independent verification based on inter-laboratory comparisons. The two laboratories use significantly different sample preparation and handling techniques: NIST uses gravimetric dilution and a continuously flowing sample, while PNNL uses partial-pressure dilution and a static sample. Agreement is generally found to be within the statistical uncertainties of the Beer's law fit and less than 3 percent of the total integrated band areas for the four chemicals used in this comparison. There does appear to be a small systematic difference between the PNNL and NIST data, however. Possible sources of the systematic difference will be discussed as well as technical details concerning the sample preparation and the procedures for overcoming instrumental artifacts.
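The Beer's-law fits referred to above reduce, per spectral channel, to a zero-intercept linear regression of absorbance on concentration. A toy sketch with made-up numbers (the concentrations and the effective absorptivity-pathlength product are invented for illustration):

```python
import numpy as np

# Beer's law: absorbance A = eps * l * c is linear in concentration c,
# so each spectral channel is calibrated by least squares through the origin.
c = np.array([0.5, 1.0, 2.0, 4.0])   # concentrations (arbitrary units)
absorbance = 0.31 * c                # synthetic data with eps * l = 0.31
slope = np.linalg.lstsq(c[:, None], absorbance, rcond=None)[0][0]
```

Deviations of the fitted slope between laboratories, beyond the fit's statistical uncertainty, are exactly the kind of systematic difference the inter-laboratory comparison is designed to expose.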
NASA Technical Reports Server (NTRS)
Collins, L.; Saunders, D.
1986-01-01
User information for program PROFILE, an aerodynamic design utility for refining, plotting, and tabulating airfoil profiles, is provided. The theory and implementation details for two of the more complex options are also presented. These are the REFINE option, for smoothing curvature in selected regions while retaining or seeking some specified thickness ratio, and the OPTIMIZE option, which seeks a specified curvature distribution. REFINE uses linear techniques to manipulate ordinates via the central difference approximation to second derivatives, while OPTIMIZE works directly with curvature using nonlinear least squares techniques. Use of programs QPLOT and BPLOT is also described, since all of the plots provided by PROFILE (airfoil coordinates, curvature distributions) are achieved via the general-purpose QPLOT utility. BPLOT illustrates (again, via QPLOT) the shape functions used by two of PROFILE's options. The programs were designed and implemented for the Applied Aerodynamics Branch at NASA Ames Research Center, Moffett Field, California; they are written in FORTRAN and run on a VAX-11/780 under VMS.
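The central-difference curvature evaluation at the heart of the REFINE option can be sketched as follows. This is a generic uniform-grid version in Python; PROFILE itself is FORTRAN and handles general abscissas:

```python
import numpy as np

def curvature(x, y):
    """Discrete curvature kappa = y'' / (1 + y'**2)**1.5 at interior
    points of a uniformly spaced surface y(x), with y' and y'' from
    central difference approximations."""
    h = x[1] - x[0]
    yp = (y[2:] - y[:-2]) / (2.0 * h)            # first derivative
    ypp = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h**2  # second derivative
    return ypp / (1.0 + yp**2) ** 1.5

# sanity check: upper arc of a radius-2 circle has curvature -1/2
x = np.linspace(-0.1, 0.1, 21)
kappa = curvature(x, np.sqrt(4.0 - x**2))
```

Because curvature depends linearly on the second-difference stencil (for fixed slopes), smoothing it while holding a thickness constraint leads to the linear systems REFINE solves.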
Homogenization techniques for population dynamics in strongly heterogeneous landscapes.
Yurk, Brian P; Cobbold, Christina A
2018-12-01
An important problem in spatial ecology is to understand how population-scale patterns emerge from individual-level birth, death, and movement processes. These processes, which depend on local landscape characteristics, vary spatially and may exhibit sharp transitions through behavioural responses to habitat edges, leading to discontinuous population densities. Such systems can be modelled using reaction-diffusion equations with interface conditions that capture local behaviour at patch boundaries. In this work we develop a novel homogenization technique to approximate the large-scale dynamics of the system. We illustrate our approach, which also generalizes to multiple species, with an example of logistic growth within a periodic environment. We find that population persistence and the large-scale population carrying capacity are influenced by patch residence times that depend on patch preference, as well as movement rates in adjacent patches. The forms of the homogenized coefficients yield key theoretical insights into how large-scale dynamics arise from the small-scale features.
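For the simplest periodic one-dimensional setting, with no interface jump conditions, the homogenized diffusion coefficient is the classical harmonic mean of the patch coefficients; the coefficients derived in the paper generalize this to include patch preference at the interfaces. A textbook sketch of the classical result only:

```python
import numpy as np

def homogenized_diffusion(d, f):
    """Harmonic-mean homogenized diffusion coefficient for a 1-D
    periodic medium with patch diffusivities d and patch length
    fractions f (sum(f) == 1):  D_hom = 1 / sum(f_k / d_k)."""
    d = np.asarray(d, dtype=float)
    f = np.asarray(f, dtype=float)
    return 1.0 / np.sum(f / d)
```

Slow-diffusion patches dominate the harmonic mean, mirroring the finding that residence time in a patch shapes the large-scale dynamics.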
[Management of Acute Type A Dissection Complicated with Acute Mesenteric Ischemia].
Abe, Tomonobu; Usui, Akihiko
2017-07-01
Acute mesenteric ischemia as a malperfusion syndrome associated with acute aortic dissection is a difficult situation. The incidence is approximately 3-4% in acute type A dissection. Traditionally, most of these patients underwent immediate simple central aortic repair in the expectation that mesenteric artery obstruction and intestinal ischemia would be resolved by the repair. However, short-term mortality has been reported to be very high with this strategy. With the aid of rapidly progressing imaging techniques and newer endovascular repair techniques, results appear to be improving in recent years. Newer management strategies include aggressive and patient-specific revascularization of the mesenteric arteries, delayed central aortic repair, and meticulous intensive care. Diagnosis and management of this condition require a high level of expertise: cardiac surgeons, vascular surgeons, interventional radiologists, gastroenterologists, general surgeons, anesthesiologists, and intensivists must cooperate to save these patients' lives. Since this is a relatively rare condition, scientific evidence is insufficient to make robust recommendations. Further studies are warranted.
Hyperspherical Sparse Approximation Techniques for High-Dimensional Discontinuity Detection
Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max; ...
2016-08-04
This work proposes a hyperspherical sparse approximation framework for detecting jump discontinuities in functions in high-dimensional spaces. The need for a novel approach results from the theoretical and computational inefficiencies of well-known approaches, such as adaptive sparse grids, for discontinuity detection. Our approach constructs the hyperspherical coordinate representation of the discontinuity surface of a function. Then sparse approximations of the transformed function are built in the hyperspherical coordinate system, with values at each point estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Several approaches are used to approximate the transformed discontinuity surface in the hyperspherical system, including adaptive sparse grid and radial basis function interpolation, discrete least squares projection, and compressed sensing approximation. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. In conclusion, rigorous complexity analyses of the new methods are provided, as are several numerical examples that illustrate the effectiveness of our approach.
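The one-dimensional discontinuity detection problem solved at each point in the hyperspherical angles can be sketched by bisection along a ray. This is a toy two-dimensional example with the discontinuity surface assumed to be the unit circle; the paper's setting is high-dimensional, with sparse approximations over the angular variables:

```python
import numpy as np

def jump_radius(f, theta, lo=0.0, hi=2.0, iters=60):
    """Bisect on the radius along the ray at angle theta until the
    jump location of f is pinned down: keep lo on the same side of
    the discontinuity as the origin, hi on the other side."""
    direction = np.array([np.cos(theta), np.sin(theta)])
    f_lo = f(lo * direction)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid * direction) == f_lo:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# piecewise-constant test function jumping across the unit circle
f = lambda x: 1.0 if np.dot(x, x) < 1.0 else 0.0
r = jump_radius(f, theta=0.7)  # radius of the discontinuity surface
```

Repeating this for many angles yields samples of the radius function r(theta), which is smooth and therefore cheap to approximate with sparse grids or compressed sensing.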
The mechanism of double-exponential growth in hyper-inflation
NASA Astrophysics Data System (ADS)
Mizuno, T.; Takayasu, M.; Takayasu, H.
2002-05-01
Analyzing historical data of price indices, we find an extraordinary growth phenomenon in several examples of hyper-inflation in which price changes are approximated nicely by double-exponential functions of time. In order to explain such behavior we introduce the general coarse-graining technique in physics, the Monte Carlo renormalization group method, to the price dynamics. Starting from a microscopic stochastic equation describing dealers’ actions in open markets, we obtain a macroscopic noiseless equation of price consistent with the observation. The effect of auto-catalytic shortening of characteristic time caused by mob psychology is shown to be responsible for the double-exponential behavior.
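A double-exponential trajectory p(t) = exp(a * exp(b * t)) has the signature that log(log p) is linear in t, which is how such growth can be identified in price data. A sketch with synthetic numbers (the parameter values 0.5 and 0.1 are made up for illustration):

```python
import numpy as np

def double_exp(t, a, b):
    """p(t) = exp(a * exp(b * t)); then log(log(p)) = log(a) + b * t."""
    return np.exp(a * np.exp(b * t))

# recover the inner growth rate from a linear fit of log(log p) vs t
t = np.arange(20.0)
p = double_exp(t, 0.5, 0.1)
b_fit, log_a_fit = np.polyfit(t, np.log(np.log(p)), 1)
```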
Weierstrass method for quaternionic polynomial root-finding
NASA Astrophysics Data System (ADS)
Falcão, M. Irene; Miranda, Fernando; Severino, Ricardo; Soares, M. Joana
2018-01-01
Quaternions, introduced by Hamilton in 1843 as a generalization of complex numbers, have found, in more recent years, a wealth of applications in a number of different areas which motivated the design of efficient methods for numerically approximating the zeros of quaternionic polynomials. In fact, one can find in the literature recent contributions to this subject based on the use of complex techniques, but numerical methods relying on quaternion arithmetic remain scarce. In this paper we propose a Weierstrass-like method for finding simultaneously all the zeros of unilateral quaternionic polynomials. The convergence analysis and several numerical examples illustrating the performance of the method are also presented.
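For complex coefficients, the classical Weierstrass (Durand-Kerner) iteration updates all root estimates simultaneously; the paper's quaternionic method replaces the complex arithmetic below with quaternion operations. This sketch is the standard complex version only, for a monic polynomial:

```python
import numpy as np

def weierstrass_roots(coeffs, iters=200):
    """Durand-Kerner iteration: for a monic degree-n polynomial p
    (coeffs listed leading-first), iterate

        z_i <- z_i - p(z_i) / prod_{j != i} (z_i - z_j)

    from distinct non-real starting points, converging simultaneously
    to all n roots."""
    n = len(coeffs) - 1
    z = (0.4 + 0.9j) ** np.arange(1, n + 1)  # customary distinct seeds
    for _ in range(iters):
        for i in range(n):
            den = np.prod([z[i] - z[j] for j in range(n) if j != i])
            z[i] -= np.polyval(coeffs, z[i]) / den
    return z

roots = weierstrass_roots([1.0, 0.0, -1.0])  # z**2 - 1, roots +1 and -1
```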
Computational fluid dynamics in a marine environment
NASA Technical Reports Server (NTRS)
Carlson, Arthur D.
1987-01-01
The introduction of the supercomputer and recent advances in both Reynolds-averaged and large eddy simulation approximation techniques for the Navier-Stokes equations have created a robust environment for the exploration of problems of interest to the Navy in general, and the Naval Underwater Systems Center in particular. The nature of the problems that are of interest, and the type of resources needed for their solution, are addressed. The goal is to achieve a good engineering solution to the fluid-structure interaction problem. A paper by D. Chapman played a major role in developing interest in the approach discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, R. B.; Dion, S.; Konigslow, K. von
Self-consistent field theory equations are presented that are suitable for use as a coarse-grained model for DNA-coated colloids, polymer-grafted nanoparticles and other systems with approximately isotropic interactions. The equations are generalized for arbitrary numbers of chemically distinct colloids. The advantages and limitations of such a coarse-grained approach for DNA-coated colloids are discussed, as are similarities with block copolymer self-assembly. In particular, preliminary results for three-species self-assembly are presented that parallel results from a two-dimensional ABC triblock copolymer phase. The possibility of incorporating crystallization, dynamics, inverse statistical mechanics and multiscale modelling techniques is discussed.
Patent foramen ovale: a new disease?
Drighil, Abdenasser; El Mosalami, Hanane; Elbadaoui, Nadia; Chraibi, Said; Bennis, Ahmed
2007-10-31
Patent foramen ovale (PFO) is a frequent remnant of the fetal circulation, affecting approximately 25% of the adult population. Its recognition, evaluation and treatment have attracted increasing interest as the importance and frequency of its implication in several pathologic processes, including ischemic stroke secondary to paradoxic embolism, the platypnea-orthodeoxia syndrome, decompression sickness (DCS) (an occupational hazard for underwater divers and high-altitude aviators and astronauts) and migraine headache, have become better understood. Echocardiographic techniques have emerged as the principal means for diagnosis and assessment of PFO, in particular contrast echocardiography and transcranial Doppler. Its treatment remains controversial, with a general tendency to propose percutaneous closure in symptomatic patients.
Elastic electron scattering from formamide
NASA Astrophysics Data System (ADS)
Buk, M. V.; Bardela, F. P.; da Silva, L. A.; Iga, I.; Homem, M. G. P.
2018-05-01
Differential cross sections for elastic electron scattering by formamide (NH2CHO) were measured in the 30–800 eV and 10°–120° ranges. The angular distribution of scattered electrons was obtained using a crossed electron beam-molecular beam geometry. The relative flow technique was applied to normalize our data. Integral and momentum-transfer cross sections were derived from the measured differential cross sections. Theoretical results in the framework of the independent-atom model at the static-exchange-polarization plus absorption level of approximation are also given. The present measured and calculated results are compared with those available in the literature, showing generally good agreement.
Instability of meridional axial system in f(R) gravity
NASA Astrophysics Data System (ADS)
Sharif, M.; Yousaf, Z.
2015-05-01
We analyze the dynamical instability of a non-static reflection axial stellar structure by taking into account the generalized Euler equation in metric f(R) gravity. Such an equation is obtained by contracting the Bianchi identities of the usual anisotropic and effective stress-energy tensors, which after applying a radial perturbation technique gives a modified collapse equation. Within this gravity model, we investigate instability constraints at the Newtonian and post-Newtonian approximations. We find that the instability of a meridional axial self-gravitating system depends upon the static profile of the structure coefficients, while the f(R) extra-curvature terms induce the stability of the evolving celestial body.
Implicit solvers for unstructured meshes
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.; Mavriplis, Dimitri J.
1991-01-01
Implicit methods for unstructured mesh computations are developed and tested. The approximate system which arises from the Newton linearization of the nonlinear evolution operator is solved by using the preconditioned generalized minimum residual (GMRES) technique. Three different preconditioners are investigated: incomplete LU factorization (ILU), block-diagonal factorization, and symmetric successive over-relaxation (SSOR). The preconditioners have been optimized to have good vectorization properties. The various methods are compared over a wide range of problems. Ordering of the unknowns, which affects the convergence of these sparse matrix iterative methods, is also investigated. Results are presented for inviscid and turbulent viscous calculations on single- and multielement airfoil configurations using globally and adaptively generated meshes.
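The first of the preconditioners listed, incomplete LU, combines with GMRES as in the following sketch. A generic 1-D convection-diffusion matrix stands in for the unstructured airfoil Jacobian of the paper, and SciPy is used for illustration:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# generic nonsymmetric sparse test matrix (1-D convection-diffusion)
A = sp.diags([-1.2, 2.5, -0.8], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-5)           # incomplete LU factors
M = spla.LinearOperator((n, n), ilu.solve)   # apply M ~ A^{-1}
x, info = spla.gmres(A, b, M=M)              # preconditioned GMRES
```

A looser `drop_tol` cheapens the factorization but weakens the preconditioner, the same accuracy/cost trade-off the paper explores across ILU, block-diagonal, and SSOR variants.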
The gravitational potential of axially symmetric bodies from a regularized green kernel
NASA Astrophysics Data System (ADS)
Trova, A.; Huré, J.-M.; Hersant, F.
2011-12-01
The determination of the gravitational potential inside celestial bodies (rotating stars, discs, planets, asteroids) is a common challenge in numerical Astrophysics. Under axial symmetry, the potential is classically found from a two-dimensional integral over the body's meridional cross-section. Because it involves an improper integral, high accuracy is generally difficult to reach. We have discovered that, for homogeneous bodies, the singular Green kernel can be converted into a regular kernel by direct analytical integration. This new kernel, easily managed with standard techniques, opens interesting horizons, not only for numerical calculus but also to generate approximations, in particular for geometrically thin discs and rings.
NASA Astrophysics Data System (ADS)
Mercan, Kadir; Demir, Çiǧdem; Civalek, Ömer
2016-01-01
In the present manuscript, the free vibration response of circular cylindrical shells made of functionally graded material (FGM) is investigated. The method of discrete singular convolution (DSC) is used for the numerical solution of the governing equation of motion of the FGM cylindrical shell. The constitutive relations are based on Love's first-approximation shell theory. The material properties are graded in the thickness direction according to volume-fraction power-law indexes. Frequency values are calculated for different types of boundary conditions, material and geometric parameters. In general, close agreement between the obtained results and those of other researchers has been found.
Genten: Software for Generalized Tensor Decompositions v. 1.0.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phipps, Eric T.; Kolda, Tamara G.; Dunlavy, Daniel
Tensors, or multidimensional arrays, are a powerful mathematical means of describing multiway data. This software provides computational means for decomposing or approximating a given tensor in terms of smaller tensors of lower dimension, focusing on decomposition of large, sparse tensors. These techniques have applications in many scientific areas, including signal processing, linear algebra, computer vision, numerical analysis, data mining, graph analysis, neuroscience and more. The software is designed to take advantage of the parallelism present in emerging computer architectures such as multi-core CPUs, many-core accelerators such as the Intel Xeon Phi, and computation-oriented GPUs to enable efficient processing of large tensors.
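A minimal dense alternating-least-squares CP decomposition illustrates the kind of factorization such software computes. This is an illustrative NumPy sketch only; Genten itself targets large sparse tensors with parallel kernels:

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Kronecker product: row (i*Y.shape[0] + j) is X[i]*Y[j]."""
    return np.einsum('ir,jr->ijr', X, Y).reshape(-1, X.shape[1])

def cp_als(T, r, iters=200, seed=0):
    """Rank-r CP decomposition of a 3-way tensor by alternating least
    squares: T ~ sum_k A[:,k] (outer) B[:,k] (outer) C[:,k]."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A, B, C = (rng.standard_normal((n, r)) for n in (I, J, K))
    for _ in range(iters):
        # solve each factor against the matching tensor unfolding
        A = np.linalg.lstsq(khatri_rao(B, C), T.reshape(I, -1).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), T.transpose(1, 0, 2).reshape(J, -1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), T.transpose(2, 0, 1).reshape(K, -1).T, rcond=None)[0].T
    return A, B, C

# quick check on an exactly rank-2 synthetic tensor
rng = np.random.default_rng(1)
T = np.einsum('ir,jr,kr->ijk', rng.standard_normal((4, 2)),
              rng.standard_normal((5, 2)), rng.standard_normal((6, 2)))
A, B, C = cp_als(T, 2)
err = np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - T)
```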
Water Reclamation Using a Ceramic Nanofiltration Membrane and Surface Flushing with Ozonated Water
Hoang, Anh T.; Okuda, Tetsuji; Takeuchi, Haruka; Tanaka, Hiroaki; Nghiem, Long D.
2018-01-01
A new membrane fouling control technique using ozonated water flushing was evaluated for direct nanofiltration (NF) of secondary wastewater effluent using a ceramic NF membrane. Experiments were conducted at a permeate flux of 44 L/m²h to evaluate the ozonated water flushing technique for fouling mitigation. Surface flushing with clean water did not effectively remove foulants from the NF membrane. In contrast, surface flushing with ozonated water (4 mg/L dissolved ozone) could effectively remove most foulants to restore the membrane permeability. This surface flushing technique using ozonated water was able to limit the progression of fouling to 35% in transmembrane pressure increase over five filtration cycles. Results from this study also heighten the need for further development of ceramic NF membrane to ensure adequate removal of pharmaceuticals and personal care products (PPCPs) for water recycling applications. The ceramic NF membrane used in this study showed approximately 40% TOC rejection, and the rejection of PPCPs was generally low and highly variable. It is expected that the fouling mitigation technique developed here is even more important for ceramic NF membranes with smaller pore size and thus better PPCP rejection. PMID:29671797
Numerical Integration Techniques for Curved-Element Discretizations of Molecule–Solvent Interfaces
Bardhan, Jaydeep P.; Altman, Michael D.; Willis, David J.; Lippow, Shaun M.; Tidor, Bruce; White, Jacob K.
2012-01-01
Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, we have developed methods to model several important surface formulations using exact surface discretizations. Following and refining Zauhar’s work (J. Comp.-Aid. Mol. Des. 9:149-159, 1995), we define two classes of curved elements that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. We then present numerical integration techniques that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, we present a set of calculations that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces.
The extra accuracy is attributed to the exact representation of the solute–solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that our methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online at http://web.mit.edu/tidor. PMID:17627358
Quantum Approximate Methods for the Atomistic Modeling of Multicomponent Alloys. Chapter 7
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Garces, Jorge; Mosca, Hugo; Gargano, Pablo; Noebe, Ronald D.; Abel, Phillip
2007-01-01
This chapter describes the role of quantum approximate methods in the understanding of complex multicomponent alloys at the atomic level. The need to accelerate materials design programs based on economical and efficient modeling techniques provides the framework for the introduction of approximations and simplifications in otherwise rigorous theoretical schemes. As a promising example of the role that such approximate methods might have in the development of complex systems, the BFS method for alloys is presented and applied to Ru-rich Ni-base superalloys and also to the NiAl(Ti,Cu) system, highlighting the benefits that can be obtained from introducing simple modeling techniques to the investigation of such complex systems.
Computing aerodynamic sound using advanced statistical turbulence theories
NASA Technical Reports Server (NTRS)
Hecht, A. M.; Teske, M. E.; Bilanin, A. J.
1981-01-01
It is noted that the calculation of turbulence-generated aerodynamic sound requires knowledge of the spatial and temporal variation of Q_ij(xi_k, tau), the two-point, two-time turbulent velocity correlations. A technique is presented to obtain an approximate form of these correlations based on closure of the Reynolds stress equations by modeling of higher order terms. The governing equations for Q_ij are first developed for a general flow. The case of homogeneous, stationary turbulence in a unidirectional constant-shear mean flow is then assumed. The required closure form for Q_ij is selected which is capable of qualitatively reproducing experimentally observed behavior. This form contains separation-time-dependent scale factors as parameters and depends explicitly on spatial separation. The approximate forms of Q_ij are used in the differential equations and integral moments are taken over the spatial domain. The velocity correlations are used in the Lighthill theory of aerodynamic sound by assuming normal joint probability.
Sounding rocket flight report: MUMP 9 and MUMP 10
NASA Technical Reports Server (NTRS)
Grassl, H. J.
1971-01-01
The results of the launching of two Marshall-University of Michigan Probes (MUMP 9 and MUMP 10), Nike-Tomahawk sounding rocket payloads, are summarized. The MUMP 9 payload included an omegatron mass analyzer, a molecular fluorescence densitometer, a mini-tilty filter, and a lunar position sensor. This complement of instruments permitted the determination of the molecular nitrogen density and temperature in the altitude range from approximately 143 to 297 km over Wallops Island, Virginia, during January 1971. The MUMP 10 payload included an omegatron mass analyzer, an electron temperature probe (Spencer, Brace, and Carignan, 1962), a cryogenic densitometer, and a solar position sensor. This complement of instruments permitted the determination of the molecular nitrogen density and temperature and the charged particle density and temperature in the altitude range from approximately 145 to 290 km over Wallops Island, Virginia, during the afternoon preceding the MUMP 9 launch in January 1971. A general description of the payload kinematics, orientation analysis, and the technique for the reduction and analysis of the data is given.
Parallel Computing of Upwelling in a Rotating Stratified Flow
NASA Astrophysics Data System (ADS)
Cui, A.; Street, R. L.
1997-11-01
A code for three-dimensional, unsteady, incompressible, turbulent flow has been implemented on the IBM SP2, using message passing. The effects of rotation and variable density are included. A finite volume method is used to discretize the Navier-Stokes equations in general curvilinear coordinates on a non-staggered grid. All the spatial derivatives are approximated using second-order central differences with the exception of the convection terms, which are handled with special upwind-difference schemes. The semi-implicit, second-order accurate, time-advancement scheme employs the Adams-Bashforth method for the explicit terms and Crank-Nicolson for the implicit terms. A multigrid method, with the four-color ZEBRA as smoother, is used to solve the Poisson equation for pressure, while the momentum equations are solved with an approximate factorization technique. The code was successfully validated for a variety of test cases. Simulations of a laboratory model of coastal upwelling in a rotating annulus are in progress and will be presented.
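The mixed explicit/implicit time advancement described above (Adams-Bashforth for advection, Crank-Nicolson for diffusion, central differences in space) can be illustrated on a much smaller problem. The sketch below applies that splitting to 1D periodic advection-diffusion; the grid size, wave speed, and viscosity are made-up values, not parameters from the upwelling simulations.

```python
import numpy as np

# Toy 1D advection-diffusion solve, u_t = -c u_x + nu u_xx, on a periodic grid.
# Advection: second-order central differences, explicit 2nd-order Adams-Bashforth.
# Diffusion: implicit Crank-Nicolson. All parameters are illustrative.
n, L = 64, 2 * np.pi
c, nu, dt, steps = 1.0, 0.05, 0.01, 200
x = np.linspace(0.0, L, n, endpoint=False)
dx = L / n
u = np.sin(x)

def rhs_adv(u):
    # central difference for -c u_x on a periodic grid
    return -c * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

# Periodic discrete Laplacian and the Crank-Nicolson system matrices
D = (np.roll(np.eye(n), 1, axis=1) - 2 * np.eye(n) + np.roll(np.eye(n), -1, axis=1)) / dx**2
A = np.eye(n) - 0.5 * dt * nu * D     # implicit (left-hand) side
B = np.eye(n) + 0.5 * dt * nu * D     # explicit (right-hand) side

f_old = rhs_adv(u)                    # first step degenerates to forward Euler
for _ in range(steps):
    f_new = rhs_adv(u)
    # AB2 for the advective terms, CN for the diffusive terms
    u = np.linalg.solve(A, B @ u + dt * (1.5 * f_new - 0.5 * f_old))
    f_old = f_new

# Diffusion damps the sine mode: amplitude ~ exp(-nu * t), slightly below 1
amp = float(np.max(np.abs(u)))
```

In a production code the dense `np.linalg.solve` would of course be replaced by the multigrid and approximate-factorization machinery the abstract describes; the point here is only the time-splitting structure.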
NASA Astrophysics Data System (ADS)
Zhou, Weimin; Anastasio, Mark A.
2018-03-01
It has been advocated that task-based measures of image quality (IQ) should be employed to evaluate and optimize imaging systems. Task-based measures of IQ quantify the performance of an observer on a medically relevant task. The Bayesian Ideal Observer (IO), which employs complete statistical information of the object and noise, achieves the upper limit of the performance for a binary signal classification task. However, computing the IO performance is generally analytically intractable and can be computationally burdensome when Markov-chain Monte Carlo (MCMC) techniques are employed. In this paper, supervised learning with convolutional neural networks (CNNs) is employed to approximate the IO test statistics for a signal-known-exactly and background-known-exactly (SKE/BKE) binary detection task. The receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) are compared to those produced by the analytically computed IO. The advantages of the proposed supervised learning approach for approximating the IO are demonstrated.
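For the SKE/BKE task in Gaussian noise the Ideal Observer test statistic is linear (a prewhitened matched filter), so its AUC is available in closed form and provides the kind of analytic reference the learned test statistic is compared against. The toy check below (signal, covariance, and dimensions are all made up) compares the empirical AUC of that statistic with the analytic value Phi(d'/sqrt(2)).

```python
import numpy as np
from math import erf, sqrt

# Ideal Observer for a known signal s in additive Gaussian noise with known
# covariance K: the test statistic is t(g) = (K^-1 s)^T g, with detectability
# d' = sqrt(s^T K^-1 s) and AUC = Phi(d'/sqrt(2)). All numbers illustrative.
rng = np.random.default_rng(0)
n = 16
s = 0.3 * np.ones(n)                     # known signal
K = 0.5 * np.eye(n) + 0.1                # known noise covariance
Kinv = np.linalg.inv(K)
w = Kinv @ s                             # IO template (prewhitened matched filter)
d = sqrt(float(s @ Kinv @ s))            # detectability index d'

Lc = np.linalg.cholesky(K)
t0 = (rng.standard_normal((4000, n)) @ Lc.T) @ w          # signal-absent trials
t1 = ((rng.standard_normal((4000, n)) @ Lc.T) + s) @ w    # signal-present trials

auc_emp = float(np.mean(t1[:, None] > t0[None, :]))  # Mann-Whitney AUC estimate
auc_th = 0.5 * (1.0 + erf(d / 2.0))                  # Phi(d/sqrt(2)) in closed form
```

A CNN trained on such simulated trials, as in the paper, should drive its ROC toward this analytic curve; for non-Gaussian object models no closed form exists and the learned approximation becomes genuinely useful.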
NASA Technical Reports Server (NTRS)
Gnoffo, P. A.
1978-01-01
A coordinate transformation, which can approximate many different two-dimensional and axisymmetric body shapes with an analytic function, is used as a basis for solving the Navier-Stokes equations for the purpose of predicting 0 deg angle of attack supersonic flow fields. The transformation defines a curvilinear, orthogonal coordinate system in which coordinate lines are perpendicular to the body and the body is defined by one coordinate line. This system is mapped into a rectangular computational domain in which the governing flow field equations are solved numerically. Advantages of this technique are that the specification of boundary conditions is simplified and, most importantly, the entire flow field can be obtained, including flow in the wake. Good agreement has been obtained with experimental data for pressure distributions, density distributions, and heat transfer over spheres and cylinders in supersonic flow. Approximations to the Viking aeroshell and to a candidate Jupiter probe are presented and flow fields over these shapes are calculated.
First-principles calculations on the four phases of BaTiO3.
Evarestov, Robert A; Bandura, Andrei V
2012-04-30
Calculations based on a linear combination of atomic orbitals basis, as implemented in the CRYSTAL09 computer code, have been performed for the cubic, tetragonal, orthorhombic, and rhombohedral modifications of the BaTiO(3) crystal. Structural and electronic properties as well as phonon frequencies were obtained using local density approximation, generalized gradient approximation, and hybrid exchange-correlation density functional theory (DFT) functionals for the four stable phases of BaTiO(3). A comparison was made between the results of the different DFT techniques. It is concluded that the hybrid PBE0 [J. P. Perdew, K. Burke, M. Ernzerhof, J. Chem. Phys. 1996, 105, 9982.] functional is able to predict correctly the structural stability and phonon properties both for the cubic and the ferroelectric phases of BaTiO(3). A comparative phonon symmetry analysis of the four phases of BaTiO(3) has been performed for the first time, based on the site symmetry and irreducible representation indexes. Copyright © 2012 Wiley Periodicals, Inc.
Cosner, O.J.; Harsh, J.F.
1978-01-01
The city of Cortland, New York, and surrounding areas obtain water from the highly productive glacial-outwash aquifer underlying the Otter Creek-Dry Creek basin. Pumpage from the aquifer in 1976 was approximately 6.3 million gallons per day and is expected to increase as a result of population growth and urbanization. A digital ground-water model that uses a finite-difference approximation technique to solve partial differential equations of flow through a porous medium was used to simulate the movement of water within the aquifer. The model was calibrated to equilibrium conditions by comparing water levels measured in the aquifer in March 1976 with those computed by the model. Then, from the simulated water-level surface for March, a transient-condition run was made to simulate the surface as measured in September 1976. Computed water levels presented as contours are generally in close agreement with potentiometric-surface maps prepared from field measurements of March and September 1976. (Woodard-USGS)
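The finite-difference approach mentioned above can be sketched on a toy grid. The example below (all heads, geometry, and the well location are invented, not values from the Cortland model) relaxes Laplace's equation for hydraulic head with fixed-head boundaries and a pumping well represented as a fixed-head cell.

```python
import numpy as np

# Minimal finite-difference ground-water-flow sketch: steady confined flow obeys
# Laplace's equation for head h; discretize on a grid and relax with
# Gauss-Seidel sweeps. Parameters are illustrative only.
n = 21
h = np.zeros((n, n))
h[:, 0] = 100.0                        # fixed head, recharge boundary (west)
h[:, -1] = 80.0                        # fixed head, discharge boundary (east)
h[0, :] = np.linspace(100.0, 80.0, n)  # interpolated heads on north edge
h[-1, :] = np.linspace(100.0, 80.0, n) # and south edge
well, h_well = (10, 10), 60.0          # pumping well as a fixed-head cell

for _ in range(2000):                  # Gauss-Seidel relaxation sweeps
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            if (i, j) == well:
                h[i, j] = h_well       # drawdown head held at the well
            else:
                h[i, j] = 0.25 * (h[i+1, j] + h[i-1, j] + h[i, j+1] + h[i, j-1])

# Head now decreases smoothly from the recharge boundary toward the well,
# the discrete analogue of the computed potentiometric-surface contours.
```

A calibrated model like the one in the abstract additionally carries transmissivity, storage, and recharge terms and is fitted to observed water levels; the five-point averaging stencil above is the core of the equilibrium computation.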
The New Zealand gravimetric quasigeoid model 2017 that incorporates nationwide airborne gravimetry
NASA Astrophysics Data System (ADS)
McCubbine, J. C.; Amos, M. J.; Tontini, F. C.; Smith, E.; Winefied, R.; Stagpoole, V.; Featherstone, W. E.
2017-12-01
A one arc-minute resolution gravimetric quasigeoid model has been computed for New Zealand, covering the region 25°S-60°S, 160°E-170°W. It was calculated by Wong and Gore modified Stokes integration using the remove-compute-restore technique with the EIGEN-6C4 global gravity model as the reference field. The gridded gravity data used for the computation consisted of 40,677 land gravity observations, satellite altimetry-derived marine gravity anomalies, historical shipborne marine gravity observations and, importantly, approximately one million new airborne gravity observations. The airborne data were collected with the specific intention of remedying the shortcomings of the existing data in areas of rough topography inaccessible to land gravimetry and in coastal areas where shipborne gravimetry cannot be collected and altimeter-derived gravity anomalies are generally poor. The new quasigeoid has a nominal precision of ±48 mm when compared with GPS-levelling data, approximately 14 mm better than its predecessor NZGeoid09.
NASA Astrophysics Data System (ADS)
Fine, Dana S.; Sawin, Stephen
2017-01-01
Feynman's time-slicing construction approximates the path integral by a product, determined by a partition of a finite time interval, of approximate propagators. This paper formulates general conditions to impose on a short-time approximation to the propagator in a general class of imaginary-time quantum mechanics on a Riemannian manifold which ensure that these products converge. The limit defines a path integral which agrees pointwise with the heat kernel for a generalized Laplacian. The result is a rigorous construction of the propagator for supersymmetric quantum mechanics, with potential, as a path integral. Further, the class of Laplacians includes the square of the twisted Dirac operator, which corresponds to an extension of N = 1/2 supersymmetric quantum mechanics. General results on the rate of convergence of the approximate path integrals suffice in this case to derive the local version of the Atiyah-Singer index theorem.
The impact of approximations and arbitrary choices on geophysical images
NASA Astrophysics Data System (ADS)
Valentine, Andrew P.; Trampert, Jeannot
2016-01-01
Whenever a geophysical image is to be constructed, a variety of choices must be made. Some, such as those governing data selection and processing, or model parametrization, are somewhat arbitrary: there may be little reason to prefer one choice over another. Others, such as defining the theoretical framework within which the data are to be explained, may be more straightforward: typically, an `exact' theory exists, but various approximations may need to be adopted in order to make the imaging problem computationally tractable. Differences between any two images of the same system can be explained in terms of differences between these choices. Understanding the impact of each particular decision is essential if images are to be interpreted properly, but little progress has been made towards a quantitative treatment of this effect. In this paper, we consider a general linearized inverse problem, applicable to a wide range of imaging situations. We write down an expression for the difference between two images produced using similar inversion strategies, but where different choices have been made. This provides a framework within which inversion algorithms may be analysed, and allows us to consider how image effects may arise. In this paper, we take a general view, and do not specialize our discussion to any specific imaging problem or setup (beyond the restrictions implied by the use of linearized inversion techniques). In particular, we look at the concept of `hybrid inversion', in which highly accurate synthetic data (typically the result of an expensive numerical simulation) are combined with an inverse operator constructed from theoretical approximations. It is generally supposed that this offers the benefits of using the more complete theory, without the full computational costs. We argue that the inverse operator is as important as the forward calculation in determining the accuracy of results.
We illustrate this using a simple example, based on imaging the density structure of a vibrating string.
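The hybrid-inversion point can be demonstrated numerically in a few lines. In the sketch below (all matrices are made up, not the vibrating-string example of the paper), data are generated with an "exact" forward operator but inverted with an operator built from a perturbed, "approximate" theory; the recovered model inherits an error controlled by the operator mismatch even though the data are exact.

```python
import numpy as np

# Toy linearized inverse problem d = G m: compare inversion with the operator
# that generated the data against "hybrid" inversion with an approximate one.
rng = np.random.default_rng(1)
m_true = rng.standard_normal(10)

G_exact = rng.standard_normal((30, 10))                   # "exact" theory
G_approx = G_exact + 0.1 * rng.standard_normal((30, 10))  # approximate theory

d = G_exact @ m_true                          # exact (expensive) synthetic data

m_consistent = np.linalg.pinv(G_exact) @ d    # inverse operator matches the data
m_hybrid = np.linalg.pinv(G_approx) @ d       # exact data, approximate operator

err_consistent = float(np.linalg.norm(m_consistent - m_true))
err_hybrid = float(np.linalg.norm(m_hybrid - m_true))
# err_consistent is at round-off level; err_hybrid is set by the 10% mismatch.
```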
Constraints to solve parallelogram grid problems in 2D non separable linear canonical transform
NASA Astrophysics Data System (ADS)
Zhao, Liang; Healy, John J.; Muniraj, Inbarasan; Cui, Xiao-Guang; Malallah, Ra'ed; Ryle, James P.; Sheridan, John T.
2017-05-01
The 2D non-separable linear canonical transform (2D-NS-LCT) can model a range of paraxial optical systems. Digital algorithms to evaluate 2D-NS-LCTs are important in modeling light field propagation and are also of interest in many digital signal processing applications. In [Zhao 14] we reported that a given 2D input image with a rectangular shape/boundary, in general, results in a parallelogram output sampling grid (generally in affine rather than Cartesian coordinates), thus limiting further calculations, e.g. the inverse transform. One possible solution is to use interpolation techniques; however, these reduce the speed and accuracy of the numerical approximations. To alleviate this problem, in this paper, some constraints are derived under which the output samples are located in Cartesian coordinates. Therefore, no interpolation operation is required and the calculation error can be largely eliminated.
Management of failed instability surgery: how to get it right the next time.
Boone, Julienne L; Arciero, Robert A
2010-07-01
Traumatic anterior shoulder dislocations are the most frequent type of joint dislocation and affect approximately 1.7% of the general population. The literature supports the consideration of primary stabilization in high-risk patients because of reported recurrences as high as 80% to 90% with nonoperative treatment regimens. Successful stabilization of anterior glenohumeral instability relies on not only good surgical techniques but also careful patient selection. Failure rates after open and arthroscopic stabilization have been reported to range from 2% to 8% and 4% to 13%, respectively. Recurrent shoulder instability leads to increased morbidity to the patient, increased pain, decreased activity level, prolonged time away from work and sports, and a general decrease in quality of life. This article reviews the potential pitfalls in anterior shoulder stabilization and discusses appropriate methods of addressing them in revision surgery. Copyright 2010 Elsevier Inc. All rights reserved.
The mode branching route to localization of the finite-length floating elastica
NASA Astrophysics Data System (ADS)
Rivetti, Marco; Neukirch, Sébastien
2014-09-01
The beam on elastic foundation is a general model used in physical, biological, and technological problems to study delamination, wrinkling, or pattern formation. Recent focus has been given to the buckling of beams deposited on liquid baths, and in the regime where the beam is soft compared to hydrostatic forces the wrinkling pattern observed at buckling has been shown to lead to localization of the deformation when the confinement is increased. Here we perform a global study of the general case where the intensity of the liquid foundation and the confinement are both varied. We compute equilibrium and stability of the solutions and unravel secondary bifurcations that play a major role in the route to localization. Moreover we classify the post-buckling solutions and shed light on the mechanism leading to localization. Finally, using an asymptotic technique imported from fluid mechanics, we derive an approximate analytical solution to the problem.
Computational prediction of muon stopping sites using ab initio random structure searching (AIRSS)
NASA Astrophysics Data System (ADS)
Liborio, Leandro; Sturniolo, Simone; Jochym, Dominik
2018-04-01
The stopping site of the muon in a muon-spin relaxation experiment is in general unknown. There are some techniques that can be used to guess the muon stopping site, but they often rely on approximations and are not generally applicable to all cases. In this work, we propose a purely theoretical method to predict muon stopping sites in crystalline materials from first principles. The method is based on a combination of ab initio calculations, random structure searching, and machine learning, and it has successfully predicted the MuT and MuBC stopping sites of muonium in Si, diamond, and Ge, as well as the muonium stopping site in LiF, without any recourse to experimental results. The method makes use of Soprano, a Python library developed to aid ab initio computational crystallography, that was publicly released and contains all the software tools necessary to reproduce our analysis.
Estimation of conformational entropy in protein-ligand interactions: a computational perspective.
Polyansky, Anton A; Zubac, Ruben; Zagrovic, Bojan
2012-01-01
Conformational entropy is an important component of the change in free energy upon binding of a ligand to its target protein. As a consequence, development of computational techniques for reliable estimation of conformational entropies is currently receiving an increased level of attention in the context of computational drug design. Here, we review the most commonly used techniques for conformational entropy estimation from classical molecular dynamics simulations. Although by-and-large still not directly used in practical drug design, these techniques provide a gold standard for developing other, computationally less-demanding methods for such applications, in addition to furthering our understanding of protein-ligand interactions in general. In particular, we focus on the quasi-harmonic approximation and discuss different approaches that can be used to go beyond it, most notably, when it comes to treating anharmonic and/or correlated motions. In addition to reviewing basic theoretical formalisms, we provide a concrete set of steps required to successfully calculate conformational entropy from molecular dynamics simulations, as well as discuss a number of practical issues that may arise in such calculations.
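The quasi-harmonic approximation highlighted in the abstract can be sketched compactly: effective mode frequencies are obtained from the eigenvalues of the mass-weighted coordinate covariance matrix, and the entropy is that of the corresponding quantum harmonic oscillators. Below, synthetic correlated samples stand in for an MD trajectory, and the (uniform, carbon-like) mass and covariance scale are invented for illustration.

```python
import numpy as np

# Quasi-harmonic conformational entropy from a (fake) trajectory.
# Mode frequencies: omega_i = sqrt(kB*T / lambda_i), where lambda_i are the
# eigenvalues of the mass-weighted coordinate covariance matrix; entropy is the
# quantum harmonic-oscillator sum over modes. SI units throughout.
kB, hbar, T = 1.380649e-23, 1.0545718e-34, 300.0
mass = 12 * 1.6605e-27                      # one carbon-like mass per coordinate

rng = np.random.default_rng(2)
# synthetic "trajectory": 5000 frames of 6 correlated coordinates (meters)
A = rng.standard_normal((6, 6))
cov_target = 1e-22 * (A @ A.T + 6 * np.eye(6))   # ~0.1-0.3 Angstrom fluctuations
frames = rng.multivariate_normal(np.zeros(6), cov_target, size=5000)

cov = np.cov(frames.T) * mass               # mass-weighted covariance (uniform mass)
lam = np.linalg.eigvalsh(cov)
omega = np.sqrt(kB * T / lam)               # quasi-harmonic mode frequencies
x = hbar * omega / (kB * T)
# Quantum harmonic-oscillator entropy summed over modes
S = kB * np.sum(x / np.expm1(x) - np.log1p(-np.exp(-x)))
```

In a real calculation the covariance would come from aligned MD frames with per-atom masses, and the anharmonic/correlation corrections the review discusses would be layered on top of this baseline.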
Optimization of auxiliary basis sets for the LEDO expansion and a projection technique for LEDO-DFT.
Götz, Andreas W; Kollmar, Christian; Hess, Bernd A
2005-09-01
We present a systematic procedure for the optimization of the expansion basis for the limited expansion of diatomic overlap density functional theory (LEDO-DFT) and report on optimized auxiliary orbitals for the Ahlrichs split valence plus polarization basis set (SVP) for the elements H, Li--F, and Na--Cl. A new method to deal with near-linear dependences in the LEDO expansion basis is introduced, which greatly reduces the computational effort of LEDO-DFT calculations. Numerical results for a test set of small molecules demonstrate the accuracy of electronic energies, structural parameters, dipole moments, and harmonic frequencies. For larger molecular systems the numerical errors introduced by the LEDO approximation can lead to an uncontrollable behavior of the self-consistent field (SCF) process. A projection technique suggested by Löwdin is presented in the framework of LEDO-DFT, which guarantees SCF convergence. Numerical results on some critical test molecules suggest the general applicability of the auxiliary orbitals presented in combination with this projection technique. Timing results indicate that LEDO-DFT is competitive with conventional density fitting methods. (c) 2005 Wiley Periodicals, Inc.
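One standard, generic way to tame near-linear dependence in an expansion basis is canonical orthogonalization: diagonalize the overlap matrix and drop eigenvectors with tiny eigenvalues. The LEDO-specific projection of the abstract differs in detail; the sketch below, with a made-up 3-function overlap matrix, only illustrates the underlying idea.

```python
import numpy as np

# Canonical orthogonalization sketch: given an overlap matrix S with two nearly
# dependent basis functions, keep only eigenvectors with eigenvalue > eps and
# scale them so the retained functions are orthonormal.
S = np.array([[1.00, 0.95, 0.20],
              [0.95, 1.00, 0.25],
              [0.20, 0.25, 1.00]])    # functions 1 and 2 are nearly dependent

eps = 1e-1                            # drop threshold (illustrative)
lam, U = np.linalg.eigh(S)
keep = lam > eps                      # discard the near-null-space direction
X = U[:, keep] / np.sqrt(lam[keep])   # transformation to an orthonormal subset

# In the retained subspace the overlap becomes the identity:
S_ortho = X.T @ S @ X
```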
NASA Astrophysics Data System (ADS)
Ikot, Akpan N.; Maghsoodi, Elham; Hassanabadi, Hassan; Obu, Joseph A.
2014-05-01
In this paper, we obtain the approximate analytical bound-state solutions of the Dirac particle with the generalized Yukawa potential within the framework of spin and pseudospin symmetries for the arbitrary κ state with a generalized tensor interaction. The generalized parametric Nikiforov-Uvarov method is used to obtain the energy eigenvalues and the corresponding wave functions in closed form. We also report some numerical results and present figures to show the effect of the tensor interaction.
NASA Astrophysics Data System (ADS)
Zhong, XiaoXu; Liao, ShiJun
2018-01-01
Analytic approximations of the von Kármán plate equations in integral form for a circular plate under external uniform pressure of arbitrary magnitude are successfully obtained by means of the homotopy analysis method (HAM), an analytic approximation technique for highly nonlinear problems. Two HAM-based approaches are proposed for either a given external uniform pressure Q or a given central deflection, respectively. Both of them are valid for uniform pressure of arbitrary magnitude by choosing proper values of the so-called convergence-control parameters c1 and c2 in the frame of the HAM. Besides, it is found that the HAM-based iteration approaches generally converge much faster than the interpolation iterative method. Furthermore, we prove that the interpolation iterative method is a special case of the first-order HAM iteration approach for a given external uniform pressure Q when c1 = -θ and c2 = -1, where θ denotes the interpolation iterative parameter. Therefore, according to the convergence theorem of Zheng and Zhou about the interpolation iterative method, the HAM-based approaches are valid for uniform pressure of arbitrary magnitude at least in the special case c1 = -θ and c2 = -1. In addition, we prove that the HAM approach for the von Kármán plate equations in differential form is just a special case of the HAM for the von Kármán plate equations in integral form mentioned in this paper. All of these illustrate the validity and great potential of the HAM for highly nonlinear problems, and its superiority over perturbation techniques.
De Paëpe, Gaël; Lewandowski, Józef R; Griffin, Robert G
2008-03-28
We introduce a family of solid-state NMR pulse sequences that generalizes the concept of second averaging in the modulation frame and therefore provides a new approach to perform magic angle spinning dipolar recoupling experiments. Here, we focus on two particular recoupling mechanisms: cosine modulated rotary resonance (CMpRR) and cosine modulated recoupling with isotropic chemical shift reintroduction (COMICS). The first technique, CMpRR, is based on a cosine modulation of the rf phase and yields broadband double-quantum (DQ) (13)C recoupling using >70 kHz omega(1,C)/2pi rf field for the spinning frequency omega(r)/2pi = 10-30 kHz and (1)H Larmor frequency omega(0,H)/2pi up to 900 MHz. Importantly, for p >= 5, CMpRR recouples efficiently in the absence of (1)H decoupling. Extension to lower p values (3.5
NASA Astrophysics Data System (ADS)
Voloshinov, V. V.
2018-03-01
In computations related to mathematical programming problems, one often has to consider approximate, rather than exact, solutions satisfying the constraints of the problem and the optimality criterion with a certain error. For determining stopping rules for iterative procedures, in the stability analysis of solutions with respect to errors in the initial data, etc., a justified characteristic of such solutions that is independent of the numerical method used to obtain them is needed. A necessary δ-optimality condition in the smooth mathematical programming problem that generalizes the Karush-Kuhn-Tucker theorem for the case of approximate solutions is obtained. The Lagrange multipliers corresponding to the approximate solution are determined by solving an approximating quadratic programming problem.
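The approximate-KKT idea above can be made concrete on a tiny example. The sketch below (problem and tolerance are invented) takes an approximate solution of a smooth constrained problem, chooses a multiplier by the small least-squares problem that the approximating quadratic program reduces to when a single constraint is active, and checks a δ-optimality certificate: near-feasibility plus a small stationarity residual.

```python
import numpy as np

# Toy delta-optimality (approximate KKT) check for
#     min  x1^2 + x2^2   subject to   x1 + x2 >= 1,
# whose exact solution is (0.5, 0.5) with multiplier lambda = 1.
x = np.array([0.52, 0.49])              # approximate solution to be certified
grad_f = 2.0 * x                        # gradient of the objective
grad_g = np.array([1.0, 1.0])           # gradient of the (active) constraint
g = float(x.sum() - 1.0)                # constraint value (>= 0 means feasible)

# Multiplier minimizing ||grad_f - lam * grad_g||, clipped to lam >= 0
lam = max(0.0, float(grad_g @ grad_f) / float(grad_g @ grad_g))
residual = float(np.linalg.norm(grad_f - lam * grad_g))

delta = 0.05
# delta-KKT certificate: feasible within delta AND stationary within delta
is_delta_kkt = (g >= -delta) and (residual <= delta)
```

The residual here plays the role of the method-independent error characteristic the abstract calls for: it certifies the point without reference to whichever iterative procedure produced it.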
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1984-01-01
The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
NASA Technical Reports Server (NTRS)
Ito, Kazufumi; Teglas, Russell
1987-01-01
The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
General properties and analytical approximations of photorefractive solitons
NASA Astrophysics Data System (ADS)
Geisler, A.; Homann, F.; Schmidt, H.-J.
2004-08-01
We investigate general properties of spatial 1-dimensional bright photorefractive solitons and discuss various analytical approximations for the soliton profile and the half width, both depending on an intensity parameter r. The case of dark solitons is also briefly addressed.
Cosmic non-TEM radiation and synthetic feed array sensor system in ASIC mixed signal technology
NASA Astrophysics Data System (ADS)
Centureli, F.; Scotti, G.; Tommasino, P.; Trifiletti, A.; Romano, F.; Cimmino, R.; Saitto, A.
2014-08-01
The paper deals with the opportunity to introduce a "not strictly TEM waves" synthetic detection method (NTSM), consisting of three-axis digital beam processing (3ADBP), to enhance the performance of radio telescope and sensor systems. Current radio telescopes generally use the classic 3D "TEM waves" approximate detection method, a linear tomography process (single- or dual-axis beam-forming) that neglects the small longitudinal (z) component. The synthetic feed-array three-axis sensor system is an innovative technique that synthetically detects the generic "not strictly TEM" wave radiation coming from the cosmos, processing the longitudinal component of the angular momentum as well. The simultaneous extraction from the radiation of both the linear and the quadratic information components may reduce the complexity of reconstructing the early Universe at the different scales of interest. This next-order approximate detection of the observed cosmological processes may improve the efficacy of the statistical numerical models used to process the acquired information. The present work focuses on the detection of such waves at carrier frequencies in the bands ranging from LF to MMW. It describes in further detail the new generation of online programmable and reconfigurable mixed-signal ASIC technology that made the innovative synthetic sensor possible. Furthermore, the paper shows the ability of this technique to increase the performance of radio telescope array antennas.
Kirsch, A J; Chang, D T; Kayton, M L; Libutti, S K; Connor, J P; Hensle, T W
1996-01-01
Tissue welding using laser-activated protein solders may soon become an alternative to sutured tissue approximation. In most cases, approximating sutures are used both to align tissue edges and to provide added tensile strength. Collateral thermal injury, however, may cause disruption of tissue alignment and weaken the tensile strength of sutures. The objective of this study was to evaluate the effect of laser welding on the tensile strength of suture materials used in urologic surgery. Eleven types of sutures were exposed to diode laser energy (power density = 15.9 W/cm2) for 10, 30, and 60 seconds. Each suture was compared with and without the addition of dye-enhanced albumin-based solder. After exposure, each suture material was strained (2"/min) until ultimate breakage on a tensometer and compared to untreated sutures using ANOVA. The strength of undyed sutures was not significantly affected; however, violet- and green-dyed sutures were in general weakened by laser exposure in the presence of dye-enhanced glue. Laser activation of the smallest-caliber dyed sutures (7-0) in the presence of glue caused the most significant loss of tensile strength of all sutures tested. These results indicate that the thermal effects of laser welding using our technique decrease the tensile strength of dyed sutures. A thermally resistant suture material (undyed or clear) may prevent disruption of wounds closed by laser welding techniques.
Modeling a 400 Hz Signal Transmission Through the South China Sea Basin
2009-03-01
Hamiltonian Ray Tracing: 1. General Ray Theory and the Eikonal Approximation; 2. Hamiltonian Ray Tracing. In general, modeling acoustic propagation through the ocean necessitates ... eikonal and represents the phase component of the solution. Since solutions of constant phase represent wave fronts, and rays travel in a direction ...
Comparison of dynamical approximation schemes for non-linear gravitational clustering
NASA Technical Reports Server (NTRS)
Melott, Adrian L.
1994-01-01
We have recently conducted a controlled comparison of a number of approximations for gravitational clustering against the same n-body simulations. These include ordinary linear perturbation theory (Eulerian), the adhesion approximation, the frozen-flow approximation, the Zel'dovich approximation (describable as first-order Lagrangian perturbation theory), and its second-order generalization. In the last two cases we also created new versions of the approximation by truncation, i.e., smoothing the initial conditions with various smoothing window shapes and varying their sizes. The primary tool for comparing simulations to approximation schemes was cross-correlation of the evolved mass density fields, testing the extent to which mass was moved to the right place. The Zel'dovich approximation, with initial convolution with a Gaussian exp(-k^2/k_G^2), where k_G is adjusted to be just into the nonlinear regime of the evolved model (details in text), worked extremely well. Its second-order generalization worked slightly better. All other schemes, including those proposed as generalizations of the Zel'dovich approximation created by adding forces, were in fact generally worse by this measure. By explicitly checking, we verified that the success of our best choice was a result of the best treatment of the phases of nonlinear Fourier components. Of all schemes tested, the adhesion approximation produced the most accurate nonlinear power spectrum and density distribution, but its phase errors suggest mass condensations were moved to slightly the wrong location. Due to its better reproduction of the mass density distribution function and power spectrum, it might be preferred for some uses. We recommend either n-body simulations or our modified versions of the Zel'dovich approximation, depending upon the purpose.
The theoretical implication is that pancaking is implicit in all cosmological gravitational clustering, at least from Gaussian initial conditions, even when subcondensations are present.
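The truncated Zel'dovich approximation described above can be sketched in one dimension: smooth the initial density contrast with the Gaussian window exp(-k^2/k_G^2), derive the displacement field from it, and move particles away from their Lagrangian positions. The grid size, power-law spectrum, growth factor, and k_G value below are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, boxsize = 256, 100.0                      # grid cells and box size (illustrative)
k = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
nz = k != 0

# Gaussian random initial density contrast with an assumed power-law spectrum
amp = np.zeros(n)
amp[nz] = np.abs(k[nz]) ** -0.5
delta_k = (rng.normal(size=n) + 1j * rng.normal(size=n)) * amp

# Truncation: smooth the initial conditions with the window exp(-k^2/k_G^2)
k_G = 0.5                                    # placed near the nonlinear scale (assumed)
delta_k *= np.exp(-(k / k_G) ** 2)

# Zel'dovich displacement field: psi_k = i k delta_k / k^2 in one dimension
psi_k = np.zeros(n, dtype=complex)
psi_k[nz] = 1j * k[nz] * delta_k[nz] / k[nz] ** 2
psi = np.fft.ifft(psi_k).real                # real part in lieu of Hermitian symmetry

# Map particles from Lagrangian positions q using the linear growth factor D
q = np.linspace(0, boxsize, n, endpoint=False)
D = 1.0
x = (q + D * psi) % boxsize
```

The same construction in 3-D, with k_G tuned to the evolved model, is the scheme the comparison found most successful.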
Chao, Jerry; Ram, Sripad; Ward, E. Sally; Ober, Raimund J.
2014-01-01
The extraction of information from images acquired under low light conditions represents a common task in diverse disciplines. In single molecule microscopy, for example, techniques for superresolution image reconstruction depend on the accurate estimation of the locations of individual particles from generally low light images. In order to estimate a quantity of interest with high accuracy, however, an appropriate model for the image data is needed. To this end, we previously introduced a data model for an image that is acquired using the electron-multiplying charge-coupled device (EMCCD) detector, a technology of choice for low light imaging due to its ability to amplify weak signals significantly above its readout noise floor. Specifically, we proposed the use of a geometrically multiplied branching process to model the EMCCD detector’s stochastic signal amplification. Geometric multiplication, however, can be computationally expensive and challenging to work with analytically. We therefore describe here two approximations for geometric multiplication that can be used instead. The high gain approximation is appropriate when a high level of signal amplification is used, a scenario which corresponds to the typical usage of an EMCCD detector. It is an accurate approximation that is computationally more efficient, and can be used to perform maximum likelihood estimation on EMCCD image data. In contrast, the Gaussian approximation is applicable at all levels of signal amplification, but is only accurate when the initial signal to be amplified is relatively large. As we demonstrate, it can importantly facilitate the analysis of an information-theoretic quantity called the noise coefficient. PMID:25075263
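The high-gain approximation can be illustrated by comparing a per-stage branching simulation of EMCCD multiplication against its continuous Gamma-distribution form, a standard approximation in the EMCCD literature. The duplication probability, stage count, and input electron count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def emccd_branching(n_init, p=0.01, stages=500, trials=20000):
    """Simulate stage-by-stage stochastic multiplication: each electron entering
    a register stage produces a secondary with probability p (illustrative model)."""
    n = np.full(trials, n_init)
    for _ in range(stages):
        n = n + rng.binomial(n, p)
    return n

def high_gain_approx(n_init, gain, trials=20000):
    """High-gain approximation: the amplified signal for n_init input electrons
    is approximately Gamma(shape=n_init, scale=gain)."""
    return rng.gamma(shape=n_init, scale=gain, size=trials)

p, stages, n_init = 0.01, 500, 3
gain = (1 + p) ** stages            # mean multiplication factor, roughly 145x here
exact = emccd_branching(n_init, p, stages)
approx = high_gain_approx(n_init, gain)
# Both distributions share the mean n_init * gain; compare the sample means.
print(exact.mean() / (n_init * gain), approx.mean() / (n_init * gain))
```

The Gamma form is far cheaper to evaluate than the geometric branching recursion, which is the computational advantage the abstract describes.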
Metaheuristic optimisation methods for approximate solving of singular boundary value problems
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong
2017-07-01
This paper presents a novel approximation technique based on metaheuristics and a weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, singular BVPs can be approximated as an optimisation problem with boundary conditions as constraints. The target is to minimise the WRF (i.e. the error function) constructed in the approximation of BVPs. The scheme involves the generational distance metric for quality evaluation of the approximate solutions against exact solutions (i.e. an error evaluator metric). Four test problems, including two linear and two non-linear singular BVPs, are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers: the particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. Optimisation results obtained show that the suggested technique can be successfully applied for approximate solving of singular BVPs.
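A minimal sketch of the weighted-residual approach: a trial solution that satisfies the boundary conditions by construction, a WRF summed over collocation points, and a bare-bones stochastic search standing in for the PSO, water cycle, and harmony search optimisers of the paper. The test problem (Lane-Emden of index 0, exact solution y = 1 - x^2/6) and all search settings are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Singular BVP (Lane-Emden, index 0): y'' + (2/x) y' + 1 = 0, y(0)=1, y'(0)=0,
# with exact solution y(x) = 1 - x**2 / 6.
x = np.linspace(0.05, 1.0, 40)       # collocation points, avoiding x = 0

def wrf(coeffs):
    """Weighted residual function: squared ODE residual summed over the
    collocation points for the trial solution y = 1 + a*x^2 + b*x^4,
    which satisfies both boundary conditions by construction."""
    a, b = coeffs
    y1 = 2 * a * x + 4 * b * x**3    # y'
    y2 = 2 * a + 12 * b * x**2       # y''
    return np.sum((y2 + (2.0 / x) * y1 + 1.0) ** 2)

# Bare-bones stochastic search with a slowly cooled step size, standing in
# for the PSO / water cycle / harmony search optimisers used in the paper.
best = np.zeros(2)
best_err = wrf(best)
step = 0.5
for _ in range(20000):
    cand = best + rng.normal(scale=step, size=2)
    err = wrf(cand)
    if err < best_err:
        best, best_err = cand, err
    step = max(step * 0.9997, 1e-3)

print(best)   # approaches (-1/6, 0)
```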
NASA Astrophysics Data System (ADS)
González-Vida, Jose M.; Macías, Jorge; Mercado, Aurelio; Ortega, Sergio; Castro, Manuel J.
2017-04-01
Tsunami-HySEA model is used to simulate the Caribbean LANTEX 2013 scenario (LANTEX is the acronym for Large AtlaNtic Tsunami EXercise, which is carried out annually). The numerical simulation of the propagation and inundation phases is performed using different mesh resolutions and nested meshes. Some comparisons with the MOST tsunami model available at the University of Puerto Rico (UPR) are made. Both models compare well for propagating tsunami waves in the open sea, producing very similar results. In near-shore shallow waters, Tsunami-HySEA should be compared with the inundation version of MOST, since the propagation version of MOST is limited to deeper waters. For the inundation phase, a 1 arc-sec (approximately 30 m) resolution mesh covering all of Puerto Rico is used, and a three-level nested-mesh technique is implemented. In the inundation phase, larger differences between model results are observed. Nevertheless, the most striking difference resides in computational time: Tsunami-HySEA is coded to exploit GPU architecture and can produce a 4 h simulation on a 60 arc-sec resolution grid for the whole Caribbean Sea in less than 4 min with a single general-purpose GPU, and in as little as 11 s with 32 general-purpose GPUs. In the inundation stage with nested meshes, approximately 8 hours of wall clock time is needed for a 2-h simulation on a single GPU (versus more than 2 days for the MOST inundation, running three different parts of the island, West, Center, and East, at the same time due to memory limitations in MOST). When domain decomposition techniques are finally implemented by breaking up the computational domain into sub-domains and assigning a GPU to each sub-domain (multi-GPU Tsunami-HySEA version), we show that the wall clock time significantly decreases, allowing high-resolution inundation modelling in very short computational times, reducing the wall clock time to around 1 hour when eight GPUs are used, for example.
Moreover, these computational times are obtained using general-purpose GPU hardware.
Regularized quasinormal modes for plasmonic resonators and open cavities
NASA Astrophysics Data System (ADS)
Kamandar Dezfouli, Mohsen; Hughes, Stephen
2018-03-01
Optical mode theory and analysis of open cavities and plasmonic particles is an essential component of optical resonator physics, offering considerable insight and efficiency for connecting to classical and quantum optical properties such as the Purcell effect. However, obtaining the dissipative modes in normalized form for arbitrarily shaped open-cavity systems is notoriously difficult, often involving complex spatial integrations, even after performing the necessary full space solutions to Maxwell's equations. The formal solutions are termed quasinormal modes, which are known to diverge in space, and additional techniques are frequently required to obtain more accurate field representations in the far field. In this work, we introduce a finite-difference time-domain technique that can be used to obtain normalized quasinormal modes using a simple dipole-excitation source, and an inverse Green function technique, in real frequency space, without having to perform any spatial integrations. Moreover, we show how these modes are naturally regularized to ensure the correct field decay behavior in the far field, and thus can be used at any position within and outside the resonator. We term these modes "regularized quasinormal modes" and show the reliability and generality of the theory by studying the generalized Purcell factor of dipole emitters near metallic nanoresonators, hybrid devices with metal nanoparticles coupled to dielectric waveguides, as well as coupled cavity-waveguides in photonic crystal slabs. We also directly compare our results with full-dipole simulations of Maxwell's equations without any approximations, and show excellent agreement.
Treatment of systematic errors in land data assimilation systems
NASA Astrophysics Data System (ADS)
Crow, W. T.; Yilmaz, M.
2012-12-01
Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of land surface states. Such differences are commonly resolved prior to data assimilation through implementation of a pre-processing rescaling step whereby observations are scaled (or non-linearly transformed) to somehow "match" comparable predictions made by an assimilation model. While the rationale for removing systematic differences in means (i.e., bias) between models and observations is well-established, relatively little theoretical guidance is currently available to determine the appropriate treatment of higher-order moments during rescaling. This talk presents a simple analytical argument to define an optimal linear-rescaling strategy for observations prior to their assimilation into a land surface model. While a technique based on triple collocation theory is shown to replicate this optimal strategy, commonly applied rescaling techniques (e.g., the so-called "least-squares regression" and "variance matching" approaches) are shown to represent only sub-optimal approximations to it. Since the triple collocation approach is likely infeasible in many real-world circumstances, general advice for deciding between various feasible (yet sub-optimal) rescaling approaches will be presented, with an emphasis on the implications of this work for the case of directly assimilating satellite radiances. While the bulk of the analysis will deal with linear rescaling techniques, its extension to nonlinear cases will also be discussed.
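The "variance matching" rescaling discussed above reduces to two lines of arithmetic: shift and scale the observation series so its first two moments match the model climatology. The soil-moisture-like numbers below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def variance_match(obs, model):
    """Linearly rescale obs so its mean and variance match the model series.
    This is the common 'variance matching' pre-processing step, which the talk
    argues is only a sub-optimal approximation to the optimal linear rescaling."""
    return (obs - obs.mean()) * (model.std() / obs.std()) + model.mean()

# Synthetic soil-moisture-like example (all numbers illustrative)
truth = rng.normal(0.25, 0.05, 1000)
model = truth + rng.normal(0.02, 0.02, 1000)       # biased model estimates
obs = 2.0 * truth + rng.normal(0.0, 0.03, 1000)    # retrieval on a different scale

obs_rescaled = variance_match(obs, model)
print(obs_rescaled.mean(), obs_rescaled.std(), model.mean(), model.std())
```

By construction the rescaled observations reproduce the model mean and standard deviation; the talk's point is that matching these two moments is not, in general, the optimal linear choice.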
NASA Technical Reports Server (NTRS)
Greene, William H.
1989-01-01
A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
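A static analog of the semianalytical technique (the study itself treats transient response) can be sketched as follows: differentiate K u = f analytically, so du/dp = -K^{-1} (dK/dp) u, and approximate dK/dp with a finite difference of the coefficient matrix. The two-spring stiffness model and design variable are illustrative assumptions.

```python
import numpy as np

def semianalytical_sensitivity(K_of_p, f, p, dp=1e-6):
    """Semianalytical design sensitivity (static analog of the transient case):
    differentiate K u = f analytically, du/dp = -K^{-1} (dK/dp) u, with dK/dp
    approximated by a central finite difference of the coefficient matrix."""
    K = K_of_p(p)
    u = np.linalg.solve(K, f)
    dK_dp = (K_of_p(p + dp) - K_of_p(p - dp)) / (2 * dp)
    return u, np.linalg.solve(K, -dK_dp @ u)

# Two-spring example: the stiffness matrix depends on a design variable p
def K_of_p(p):
    return np.array([[2.0 * p, -p], [-p, p]])

f = np.array([0.0, 1.0])
u, du_dp = semianalytical_sensitivity(K_of_p, f, p=1.0)

# Overall finite difference on the full analysis, for comparison
dp = 1e-6
u_fd = (np.linalg.solve(K_of_p(1.0 + dp), f) - np.linalg.solve(K_of_p(1.0 - dp), f)) / (2 * dp)
print(du_dp, u_fd)   # the two sensitivity estimates agree
```

Here u = [1, 2]/p, so du/dp at p = 1 is [-1, -2], which both routes recover; the semianalytical route reuses the factored K rather than repeating the whole analysis for each perturbed design.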
Al-Shayyab, Mohammad H; Ryalat, Soukaina; Dar-Odeh, Najla; Alsoleihat, Firas
2013-01-01
The study reported here aimed to identify current sedation practice among general dental practitioners (GDPs) and specialist dental practitioners (SDPs) in Jordan in 2010. Questionnaires were sent by email to 1683 GDPs and SDPs who were working in Jordan at the time of the study. The contact details of these dental practitioners were obtained from a Jordan Dental Association list. Details on personal status, use of, and training in, conscious sedation techniques were sought by the questionnaires. A total of 1003 (60%) questionnaires were returned, with 748 (86.9%) GDPs and 113 (13.1%) SDPs responding. Only ten (1.3%) GDPs and 63 (55.8%) SDPs provided information on the different types of treatments related to their specialties undertaken under some form of sedation performed by specialist and/or assistant anesthetists. Approximately 0.075% of the Jordanian population received some form of sedation during the year 2010, with approximately 0.054% having been treated by oral and maxillofacial surgeons. The main reason for the majority of GDPs (55.0%) and many SDPs (40%) not performing sedation was lack of training in this field, while some SDPs (26.0%) indicated they did not use sedation because of inadequate sedative facilities. Within the limitations of the present study, it can be concluded that the provision of conscious sedation services in general and specialist dental practices in Jordan is inconsistent and inadequate. This stresses the great need to train practitioners and dental assistants in Jordan to enable them to safely and effectively perform all forms of sedation.
Local thermodynamic mapping for effective liquid density-functional theory
NASA Technical Reports Server (NTRS)
Kyrlidis, Agathagelos; Brown, Robert A.
1992-01-01
The structural-mapping approximation introduced by Lutsko and Baus (1990) in the generalized effective-liquid approximation is extended to include a local thermodynamic mapping based on a spatially dependent effective density for approximating the solid phase in terms of the uniform liquid. This latter approximation, called the local generalized effective-liquid approximation (LGELA), yields excellent predictions for the free energy of hard-sphere solids and for the conditions of coexistence of a hard-sphere fcc solid with a liquid. Moreover, the predicted free energy remains single valued for calculations with more loosely packed crystalline structures, such as the diamond lattice. The spatial dependence of the weighted density makes the LGELA useful in the study of inhomogeneous solids.
NASA Technical Reports Server (NTRS)
Labudde, R. A.
1971-01-01
A technique is described which can be used to evaluate Jacobian determinants which occur in classical mechanical and quasiclassical approximation descriptions of molecular scattering. The method may be valuable in the study of reactive scattering using the quasiclassical approximation.
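A generic numerical counterpart of such an evaluation (not the specific method of the report, which is tailored to scattering calculations) is a finite-difference Jacobian determinant; the polar-to-Cartesian test map is an illustrative assumption.

```python
import numpy as np

def jacobian_det(f, x, eps=1e-6):
    """Numerically evaluate the Jacobian determinant det(df_i/dx_j) of a
    mapping f: R^n -> R^n at x, using central finite differences."""
    x = np.asarray(x, dtype=float)
    n = x.size
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (f(x + e) - f(x - e)) / (2 * eps)
    return np.linalg.det(J)

# Polar-to-Cartesian map: the exact Jacobian determinant is r
f = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])
print(jacobian_det(f, [2.0, 0.5]))   # approximately 2.0
```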
Belvedere, Claudio; Siegler, Sorin; Ensini, Andrea; Toy, Jason; Caravaggi, Paolo; Namani, Ramya; Giannini, Giulia; Durante, Stefano; Leardini, Alberto
2017-02-28
The mechanical characteristics of the ankle such as its kinematics and load transfer properties are influenced by the geometry of the articulating surfaces. A recent, image-based study found that these surfaces can be approximated by a saddle-shaped, skewed, truncated cone with its apex oriented laterally. The goal of this study was to establish a reliable experimental technique to study the relationship between the geometry of the articular surfaces of the ankle and its mobility and stability characteristics, and to use this technique to determine if morphological approximations of the ankle surfaces based on recent discoveries produce close to normal behavior. The study was performed on ten cadavers. For each specimen, a process based on medical imaging, modeling and 3D printing was used to produce two subject-specific artificial implantable sets of the ankle surfaces. One set was a replica of the natural surfaces. The second approximated the ankle surfaces as a saddle-shaped truncated cone with apex oriented laterally. Testing under cyclic loading conditions was then performed on each specimen following a previously established technique to determine its mobility and stability characteristics under three different conditions: natural surfaces; artificial surfaces replicating the natural surface morphology; and artificial approximation based on the saddle-shaped truncated cone concept. A repeated measure analysis of variance was then used to compare between the three conditions. The results show that (1) the artificial surfaces replicating natural morphology produce close to natural mobility and stability behavior, thus establishing the reliability of the technique; and (2) the approximated surfaces based on the saddle-shaped truncated cone concept produce mobility and stability behavior close to that of the ankle with natural surfaces. Copyright © 2017 Elsevier Ltd. All rights reserved.
Improving semantic scene understanding using prior information
NASA Astrophysics Data System (ADS)
Laddha, Ankit; Hebert, Martial
2016-05-01
Perception for ground robot mobility requires automatic generation of descriptions of the robot's surroundings from sensor input (cameras, LADARs, etc.). Effective techniques for scene understanding have been developed, but they are generally purely bottom-up in that they rely entirely on classifying features from the input data based on learned models. In fact, perception systems for ground robots have a lot of information at their disposal from knowledge about the domain and the task. For example, a robot in urban environments might have access to approximate maps that can guide the scene interpretation process. In this paper, we explore practical ways to combine such prior information with state of the art scene understanding approaches.
Error analysis of multipoint flux domain decomposition methods for evolutionary diffusion problems
NASA Astrophysics Data System (ADS)
Arrarás, A.; Portero, L.; Yotov, I.
2014-01-01
We study space and time discretizations for mixed formulations of parabolic problems. The spatial approximation is based on the multipoint flux mixed finite element method, which reduces to an efficient cell-centered pressure system on general grids, including triangles, quadrilaterals, tetrahedra, and hexahedra. The time integration is performed by using a domain decomposition time-splitting technique combined with multiterm fractional step diagonally implicit Runge-Kutta methods. The resulting scheme is unconditionally stable and computationally efficient, as it reduces the global system to a collection of uncoupled subdomain problems that can be solved in parallel without the need for Schwarz-type iteration. Convergence analysis for both the semidiscrete and fully discrete schemes is presented.
Bubble colloidal AFM probes formed from ultrasonically generated bubbles.
Vakarelski, Ivan U; Lee, Judy; Dagastine, Raymond R; Chan, Derek Y C; Stevens, Geoffrey W; Grieser, Franz
2008-02-05
Here we introduce a simple and effective experimental approach to measuring the interaction forces between two small bubbles (approximately 80-140 μm) in aqueous solution during controlled collisions on the scale of micrometers to nanometers. The colloidal probe technique using atomic force microscopy (AFM) was extended to measure interaction forces between a cantilever-attached bubble and surface-attached bubbles of various sizes. By using an ultrasonic source, we generated numerous small bubbles on a mildly hydrophobic surface of a glass slide. A single bubble picked up with a strongly hydrophobized V-shaped cantilever was used as the colloidal probe. Sample force measurements were used to evaluate the pure water bubble cleanliness and the general consistency of the measurements.
Trends in extreme learning machines: a review.
Huang, Gao; Huang, Guang-Bin; Song, Shiji; You, Keyou
2015-01-01
Extreme learning machine (ELM) has gained increasing interest from various research fields recently. In this review, we aim to report the current state of the theoretical research and practical advances on this subject. We first give an overview of ELM from the theoretical perspective, including the interpolation theory, universal approximation capability, and generalization ability. Then we focus on the various improvements made to ELM which further improve its stability, sparsity, and accuracy under general or specific conditions. Apart from classification and regression, ELM has recently been extended for clustering, feature selection, representational learning and many other learning tasks. These newly emerging algorithms greatly expand the applications of ELM. From the implementation aspect, hardware implementation and parallel computation techniques have substantially sped up the training of ELM, making it feasible for big data processing and real-time reasoning. Due to its remarkable efficiency, simplicity, and impressive generalization performance, ELM has been applied in a variety of domains, such as biomedical engineering, computer vision, system identification, and control and robotics. In this review, we try to provide a comprehensive view of these advances in ELM together with its future perspectives.
Extending generalized Kubelka-Munk to three-dimensional radiative transfer.
Sandoval, Christopher; Kim, Arnold D
2015-08-10
The generalized Kubelka-Munk (gKM) approximation is a linear transformation of the double spherical harmonics of order one (DP1) approximation of the radiative transfer equation. Here, we extend the gKM approximation to study problems in three-dimensional radiative transfer. In particular, we derive the gKM approximation for the problem of collimated beam propagation and scattering in a plane-parallel slab composed of a uniform absorbing and scattering medium. The result is an 8×8 system of partial differential equations that is much easier to solve than the radiative transfer equation. We compare the solutions of the gKM approximation with Monte Carlo simulations of the radiative transfer equation to identify the range of validity for this approximation. We find that the gKM approximation is accurate for isotropic scattering media that are sufficiently thick and much less accurate for anisotropic, forward-peaked scattering media.
Theory of biaxial graded-index optical fiber. M.S. Thesis
NASA Technical Reports Server (NTRS)
Kawalko, Stephen F.
1990-01-01
A biaxial graded-index fiber with a homogeneous cladding is studied. Two methods, wave equation and matrix differential equation, of formulating the problem and their respective solutions are discussed. For the wave equation formulation of the problem it is shown that for the case of a diagonal permittivity tensor the longitudinal electric and magnetic fields satisfy a pair of coupled second-order differential equations. Also, a generalized dispersion relation is derived in terms of the solutions for the longitudinal electric and magnetic fields. For the case of a step-index fiber, either isotropic or uniaxial, these differential equations can be solved exactly in terms of Bessel functions. For the cases of an isotropic graded-index and a uniaxial graded-index fiber, a solution using the Wentzel, Kramers, and Brillouin (WKB) approximation technique is shown. Results for some particular permittivity profiles are presented. Also, the WKB solution is compared with the vector solution found by Kurtz and Streifer. For the matrix formulation it is shown that the tangential components of the electric and magnetic fields satisfy a system of four first-order differential equations which can be conveniently written in matrix form. For the special case of meridional modes, the system of equations splits into two systems of two equations. A general iterative technique, asymptotic partitioning of systems of equations, for solving systems of differential equations is presented. As a simple example, Bessel's differential equation is written in matrix form and is solved using this asymptotic technique. Low order solutions for particular examples of a biaxial and uniaxial graded-index fiber are presented. Finally, numerical results obtained using the asymptotic technique are presented for particular examples of isotropic and uniaxial step-index fibers and isotropic, uniaxial and biaxial graded-index fibers.
James, Kevin R; Dowling, David R
2008-09-01
In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations, and up to 10^N when N>1.
Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering
NASA Astrophysics Data System (ADS)
Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki
2018-03-01
We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
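The combination of sampling strategies can be illustrated outside a renderer: below, a stratified-uniform technique and an importance technique are combined with the balance heuristic to estimate a 1-D integral. The integrand, densities, and sample counts are illustrative assumptions, not the SSAO estimator itself.

```python
import numpy as np

rng = np.random.default_rng(5)

def mis_estimate(f, n1=64, n2=64):
    """Multiple importance sampling with the balance heuristic, combining a
    stratified-uniform technique (p1 = 1 on [0,1]) with an importance
    technique that samples proportionally to p2(x) = 2x on [0,1]."""
    # Technique 1: stratified uniform samples
    x1 = (np.arange(n1) + rng.random(n1)) / n1
    p1_x1, p2_x1 = np.ones(n1), 2 * x1
    # Technique 2: importance samples from p2 via the inverse CDF, x = sqrt(u)
    x2 = np.sqrt(rng.random(n2))
    p1_x2, p2_x2 = np.ones(n2), 2 * x2
    # Balance-heuristic weights w_i = n_i p_i / (n1 p1 + n2 p2)
    w1 = n1 * p1_x1 / (n1 * p1_x1 + n2 * p2_x1)
    w2 = n2 * p2_x2 / (n1 * p1_x2 + n2 * p2_x2)
    est1 = np.sum(w1 * f(x1) / p1_x1) / n1
    est2 = np.sum(w2 * f(x2) / p2_x2) / n2
    return est1 + est2

# Integrand loosely standing in for an occlusion-weighted shading term
f = lambda x: x**3
print(mis_estimate(f))   # close to the exact integral 1/4
```

The balance heuristic down-weights each technique wherever the other places samples more densely, which is the mechanism that lets the combined estimator use fewer samples overall.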
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herbert, John M.
1997-01-01
Rayleigh-Schroedinger perturbation theory is an effective and popular tool for describing low-lying vibrational and rotational states of molecules. This method, in conjunction with ab initio techniques for computation of electronic potential energy surfaces, can be used to calculate first-principles molecular vibrational-rotational energies to successive orders of approximation. Because of mathematical complexities, however, such perturbation calculations are rarely extended beyond the second order of approximation, although recent work by Herbert has provided a formula for the nth-order energy correction. This report extends that work and furnishes the remaining theoretical details (including a general formula for the Rayleigh-Schroedinger expansion coefficients) necessary for calculation of energy corrections to arbitrary order. The commercial computer algebra software Mathematica is employed to perform the prohibitively tedious symbolic manipulations necessary for derivation of generalized energy formulae in terms of universal constants, molecular constants, and quantum numbers. As a pedagogical example, a Hamiltonian operator tailored specifically to diatomic molecules is derived, and the perturbation formulae obtained from this Hamiltonian are evaluated for a number of such molecules. This work provides a foundation for future analyses of polyatomic molecules, since it demonstrates that arbitrary-order perturbation theory can successfully be applied with the aid of commercially available computer algebra software.
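The second-order Rayleigh-Schroedinger correction can be sketched numerically for a Hamiltonian given in the eigenbasis of H0; the two-level example below is an illustrative assumption, not the vibrational-rotational Hamiltonian of the report.

```python
import numpy as np

def rspt_corrections(H0_diag, V):
    """Ground-state Rayleigh-Schroedinger corrections through second order for
    H = H0 + V with H0 diagonal: E1 = V_00 and
    E2 = sum_{n != 0} |V_{0n}|^2 / (E_0 - E_n)."""
    E = np.asarray(H0_diag, dtype=float)
    n = np.arange(1, E.size)
    return E[0], V[0, 0], np.sum(np.abs(V[0, n]) ** 2 / (E[0] - E[n]))

# Two-level check against the exact lowest eigenvalue
H0_diag = [0.0, 1.0]
V = 0.1 * np.array([[0.0, 1.0], [1.0, 0.0]])
E0, E1, E2 = rspt_corrections(H0_diag, V)
exact = np.linalg.eigvalsh(np.diag(H0_diag) + V)[0]
print(E0 + E1 + E2, exact)   # -0.01 versus about -0.00990
```

The gap between the second-order sum and the exact eigenvalue is exactly the kind of residual the report's arbitrary-order machinery is built to shrink.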
Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David
2015-01-01
Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
NASA Astrophysics Data System (ADS)
Tayebi, A.; Shekari, Y.; Heydari, M. H.
2017-07-01
Several physical phenomena such as transformation of pollutants, energy, particles and many others can be described by the well-known convection-diffusion equation, which is a combination of the diffusion and advection equations. In this paper, this equation is generalized with the concept of variable-order fractional derivatives. The generalized equation is called the variable-order time fractional advection-diffusion equation (V-OTFA-DE). An accurate and robust meshless method based on the moving least squares (MLS) approximation and the finite difference scheme is proposed for its numerical solution on two-dimensional (2-D) arbitrary domains. The finite difference technique with a θ-weighted scheme is employed in the time domain, and the MLS approximation in the space domain, to obtain appropriate semi-discrete solutions. Since the newly developed method is a meshless approach, it does not require any background mesh structure to obtain semi-discrete solutions of the problem under consideration, and the numerical solutions are constructed entirely from a set of scattered nodes. The proposed method is validated on three different examples, including two benchmark problems and an applied problem of pollutant distribution in the atmosphere. In all such cases, the obtained results show that the proposed method is very accurate and robust. Moreover, a remarkable property, the so-called positive scheme, is observed for the proposed method in solving concentration transport phenomena.
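The MLS building block of such a method can be sketched in one dimension: each evaluation point solves a small weighted least-squares problem over the scattered nodes, with no background mesh. The Gaussian weight, quadratic basis, support size, and test field are illustrative assumptions (the paper works on 2-D domains coupled with a θ-weighted time scheme).

```python
import numpy as np

def mls_approx(x_eval, nodes, values, h=0.08):
    """Moving least squares with a shifted quadratic basis [1, d, d^2] and a
    Gaussian weight: each evaluation point solves its own small weighted
    least-squares problem over the scattered nodes (no mesh needed)."""
    out = np.empty_like(x_eval)
    for i, xe in enumerate(x_eval):
        d = nodes - xe
        w = np.exp(-(d / h) ** 2)                    # Gaussian weight function
        P = np.column_stack([np.ones_like(d), d, d**2])
        A = P.T @ (w[:, None] * P)
        rhs = P.T @ (w * values)
        out[i] = np.linalg.solve(A, rhs)[0]          # local fit evaluated at xe
    return out

# Scattered nodes sampling a smooth field
rng = np.random.default_rng(6)
nodes = np.sort(rng.random(100))
values = np.cos(2 * np.pi * nodes)
x_eval = np.linspace(0.1, 0.9, 50)
err = np.max(np.abs(mls_approx(x_eval, nodes, values) - np.cos(2 * np.pi * x_eval)))
print(err)   # small interior approximation error
```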
NASA Astrophysics Data System (ADS)
Turcksin, Bruno; Ragusa, Jean C.; Morel, Jim E.
2012-01-01
It is well known that the diffusion synthetic acceleration (DSA) methods for the Sn equations become ineffective in the Fokker-Planck forward-peaked scattering limit. In response to this deficiency, Morel and Manteuffel (1991) developed an angular multigrid method for the 1-D Sn equations. This method is very effective, costing roughly twice as much as DSA per source iteration, and yielding a maximum spectral radius of approximately 0.6 in the Fokker-Planck limit. Pautz, Adams, and Morel (PAM) (1999) later generalized the angular multigrid to 2-D, but it was found that the method was unstable with sufficiently forward-peaked mappings between the angular grids. The method was stabilized via a filtering technique based on diffusion operators, but this filtering also degraded the effectiveness of the overall scheme. The spectral radius was not bounded away from unity in the Fokker-Planck limit, although the method remained more effective than DSA. The purpose of this article is to recast the multidimensional PAM angular multigrid method without the filtering as an Sn preconditioner and use it in conjunction with the Generalized Minimal RESidual (GMRES) Krylov method. The approach ensures stability and our computational results demonstrate that it is also significantly more efficient than an analogous DSA-preconditioned Krylov method.
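The preconditioned-Krylov idea can be illustrated with a generic sparse system: a solve-based preconditioner wrapped as a LinearOperator plays the role the angular multigrid plays for the Sn equations. The tridiagonal test matrix is an illustrative assumption, and an exact LU solve stands in for the (approximate) multigrid preconditioner.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres, splu

# A sparse test system standing in for the discretized transport equations
n = 200
A = diags([-1.0, 2.05, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Preconditioner: a direct solve wrapped as a LinearOperator.  An exact LU
# factorization is used here for brevity; in practice a cheaper approximate
# solve (such as the angular multigrid) would take its place.
lu = splu(A)
M = LinearOperator((n, n), matvec=lu.solve)

def run(prec=None):
    count = {"n": 0}
    x, info = gmres(A, b, M=prec, callback=lambda arg: count.update(n=count["n"] + 1))
    return x, info, count["n"]

x_plain, info_plain, it_plain = run()
x_prec, info_prec, it_prec = run(M)
print(it_plain, it_prec)   # the preconditioned solve needs far fewer iterations
```

Wrapping the acceleration scheme as a preconditioner, rather than as a stationary iteration, is what guarantees the stability the article describes: GMRES converges even where the filtered fixed-point iteration did not.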
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming
2014-10-01
Four improved Ant Colony Optimization (ACO) algorithms, i.e., the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm, and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e., the Rosin-Rammler (R-R), the normal (N-N), and the logarithmic normal (L-N) distribution functions, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only the length of the rotational semi-axis is varied.
Hohenforst-Schmidt, Wolfgang; Linsmeier, Bernd; Zarogoulidis, Paul; Freitag, Lutz; Darwiche, Kaid; Browning, Robert; Turner, J Francis; Huang, Haidong; Li, Qiang; Vogl, Thomas; Zarogoulidis, Konstantinos; Brachmann, Johannes; Rittger, Harald
2015-01-01
Tracheomalacia or tracheobronchomalacia (TM or TBM) is a common problem, especially for elderly patients who are often unfit for surgical techniques. Several surgical or minimally invasive techniques have already been described. Stenting is one option, but long-term stenting is generally accompanied by a high complication rate. Stent removal is more difficult for self-expandable nitinol stents, or metallic stents in general, than for silicone stents. The main disadvantages of silicone stents compared with uncovered metallic stents are migration and plugging. We compared the operation time, and in particular the duration of a sufficient Dumon stent fixation, across different techniques in a patient with severe posttracheotomy TM and strongly reduced mobility of the vocal cords due to Parkinson's disease. The combined approach of simultaneous Dumon stenting and endoluminal transtracheal externalized suturing under cone-beam computed tomography guidance with the Berci needle was by far the fastest approach compared with a (not performed) surgical intervention or purely endoluminal suturing through the rigid bronchoscope. The endoluminal transtracheal externalized suture took between 5 and 9 minutes with the Berci needle; the purely endoluminal approach needed 51 minutes. The alternative of tracheobronchoplasty was refused by the patient; in general, 180 minutes is calculated for this surgical approach. The costs of the different approaches are expected to vary widely, given that in Germany one minute in an operating room costs on average approximately 50-60€ including taxes. In our own hospital (tertiary level), it is nearly 30€ per minute in an operating room for a surgical approach. Allowing an additional 15 minutes for patient preparation and transfer to the wake-up room, for a total duration of 30 minutes inside the investigation room, the average cost of a flexible bronchoscopy is less than 6€ per minute.
Although Dumon stenting requires a set-up with more expensive anaesthesiology support and takes longer than a flexible investigation, estimated at 1 hour in an operating room, the surgical approach would still consume at least 3,000€ more than the minimally invasive approach performed with the Berci needle, even before counting the costs of materials and specialized staff. This difference is due to the longer duration of the surgical intervention, calculated at approximately 180 minutes, compared with the 60 minutes in the operation suite achieved by the non-surgical approach.
A new model to predict weak-lensing peak counts. II. Parameter constraint strategies
NASA Astrophysics Data System (ADS)
Lin, Chieh-An; Kilbinger, Martin
2015-11-01
Context. Peak counts have been shown to be an excellent tool for extracting the non-Gaussian part of the weak lensing signal. Recently, we developed a fast stochastic forward model to predict weak-lensing peak counts. Our model is able to reconstruct the underlying distribution of observables for analysis. Aims: In this work, we explore and compare various strategies for constraining a parameter using our model, focusing on the matter density Ωm and the density fluctuation amplitude σ8. Methods: First, we examine the impact from the cosmological dependency of covariances (CDC). Second, we perform the analysis with the copula likelihood, a technique that makes a weaker assumption than does the Gaussian likelihood. Third, direct, non-analytic parameter estimations are applied using the full information of the distribution. Fourth, we obtain constraints with approximate Bayesian computation (ABC), an efficient, robust, and likelihood-free algorithm based on accept-reject sampling. Results: We find that neglecting the CDC effect enlarges parameter contours by 22% and that the covariance-varying copula likelihood is a very good approximation to the true likelihood. The direct techniques work well in spite of noisier contours. Concerning ABC, the iterative process converges quickly to a posterior distribution that is in excellent agreement with results from our other analyses. The time cost for ABC is reduced by two orders of magnitude. Conclusions: The stochastic nature of our weak-lensing peak count model allows us to use various techniques that approach the true underlying probability distribution of observables, without making simplifying assumptions. Our work can be generalized to other observables where forward simulations provide samples of the underlying distribution.
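The accept-reject sampling underlying ABC can be shown in a few lines (a toy Gaussian example; the paper's forward model simulates weak-lensing peak counts, which is far beyond this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal ABC rejection sketch: infer the mean mu of a Gaussian from
# observed data, keeping proposed parameters whose forward-simulated
# summary statistic lies within a tolerance of the observed one.
obs = rng.normal(2.0, 1.0, size=500)    # "observed" data, true mu = 2
s_obs = obs.mean()                      # summary statistic

accepted = []
for _ in range(20000):
    mu = rng.uniform(-5.0, 5.0)         # draw a candidate from the prior
    sim = rng.normal(mu, 1.0, size=500) # forward-simulate a data set
    if abs(sim.mean() - s_obs) < 0.05:  # accept-reject step
        accepted.append(mu)

posterior = np.array(accepted)
# posterior.mean() should be close to the true mu = 2.0
```

The key feature, as the abstract notes, is that no likelihood is ever evaluated: only the ability to simulate from the forward model is required.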
NASA Technical Reports Server (NTRS)
Barlow, N. G.
1993-01-01
This study determines crater depth through use of photoclinometric profiles. Random checks of the photoclinometric results are performed using shadow estimation techniques. The images are Viking Orbiter digital format frames; in cases where the digital image is unusable for photoclinometric analysis, shadow estimation is used to determine crater depths. The two techniques provide depth results within 2 percent of each other. Crater diameters are obtained from the photoclinometric profiles and checked against the diameters measured from the hard-copy images using a digitizer. All images used in this analysis are of approximately 40 m/pixel resolution. The sites that have been analyzed to date include areas within Arabia, Maja Valles, Memnonia, Acidalia, and Elysium. Only results for simple craters (craters less than 5 km in diameter) are discussed here because of the low numbers of complex craters presently measured in the analysis. General results indicate that impact craters are deeper than average. A single d/D relationship for fresh impact craters on Mars does not exist due to changes in target properties across the planet's surface. Within regions where target properties are approximately constant, however, d/D ratios for fresh craters can be determined. In these regions, the d/D ratios of nonpristine craters can be compared with the fresh crater d/D relationship to obtain information on relative degrees of crater degradation. This technique reveals that regional episodes of enhanced degradation have occurred. However, the lack of statistically reliable size-frequency distribution data prevents comparison of the relative ages of these events between different regions, and thus determination of a large-scale episode (or perhaps several episodes) cannot be made at this time.
Dalbayrak, Sedat; Yaman, Onur; Yılmaz, Mesut
2013-01-01
Context: Treatment of Hangman's fractures is still controversial. Hangman's fractures Type II and IIA are usually treated with surgical procedures. Aim: This study aims at describing the Neurospinal Academy (NSA) technique as an attempt to achieve an approximation of the fracture line to the axis body, which may be used for Type II and IIA patients with severe displacement and angulation. Settings and Design: In the NSA technique, both pars or pedicle screws are placed bicortically so that they cross the anterior surface of the C2 vertebral body by 1-2 mm. A rod of suitable length and curvature is prepared to connect the two screws. To place the rod, a sufficient amount of bone is resected from the C2 spinous process. The C2 vertebral body is pulled back by means of the screws crossing its anterior surface. Materials and Methods: Patients with Hangman's fractures Type II and IIA were treated with the NSA technique. Result: The angulated and tilted C2 vertebral body was pulled back and approximated to the posterior elements. Conclusions: In Hangman's fractures Type II and IIA with severe vertebral body and pedicle displacement, the NSA technique is an effective and reliable treatment alternative for the approximation of the posterior elements to the tilted, angulated, and dislocated C2 vertebral body. PMID:24744563
Solving fractional optimal control problems within a Chebyshev-Legendre operational technique
NASA Astrophysics Data System (ADS)
Bhrawy, A. H.; Ezz-Eldien, S. S.; Doha, E. H.; Abdelkawy, M. A.; Baleanu, D.
2017-06-01
In this manuscript, we report a new operational technique for approximating the numerical solution of fractional optimal control (FOC) problems. The operational matrix of the Caputo fractional derivative of the orthonormal Chebyshev polynomial and the Legendre-Gauss quadrature formula are used, and then the Lagrange multiplier scheme is employed for reducing such problems into those consisting of systems of easily solvable algebraic equations. We compare the approximate solutions achieved using our approach with the exact solutions and with those presented in other techniques and we show the accuracy and applicability of the new numerical approach, through two numerical examples.
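One building block of the method, the Legendre-Gauss quadrature formula, is readily available in NumPy (a sketch of the quadrature alone; the operational matrix of the Caputo fractional derivative is not reproduced here):

```python
import numpy as np

# An n-point Legendre-Gauss rule integrates polynomials of degree
# up to 2n - 1 exactly on [-1, 1].
nodes, weights = np.polynomial.legendre.leggauss(5)

approx = np.sum(weights * nodes**8)   # integral of x^8 over [-1, 1]
print(approx)                         # exact value is 2/9 ≈ 0.2222
```

In operational techniques of this kind, such a rule evaluates the inner products that turn the continuous optimality conditions into an algebraic system.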
NASA Technical Reports Server (NTRS)
Butera, M. K.
1979-01-01
The success of remotely mapping wetland vegetation of the southwestern coast of Florida is examined. A computerized technique to process aircraft and LANDSAT multispectral scanner data into vegetation classification maps was used. The cost effectiveness of this mapping technique was evaluated in terms of user requirements, accuracy, and cost. Results indicate that mangrove communities are classified most cost effectively by the LANDSAT technique, with an accuracy of approximately 87 percent and a cost of approximately 3 cents per hectare, compared to $46.50 per hectare for conventional ground survey methods.
A nonperturbative light-front coupled-cluster method
NASA Astrophysics Data System (ADS)
Hiller, J. R.
2012-10-01
The nonperturbative Hamiltonian eigenvalue problem for bound states of a quantum field theory is formulated in terms of Dirac's light-front coordinates and then approximated by the exponential-operator technique of the many-body coupled-cluster method. This approximation eliminates any need for the usual approximation of Fock-space truncation. Instead, the exponentiated operator is truncated, and the terms retained are determined by a set of nonlinear integral equations. These equations are solved simultaneously with an effective eigenvalue problem in the valence sector, where the number of constituents is small. Matrix elements can be calculated, with extensions of techniques from standard coupled-cluster theory, to obtain form factors and other observables.
Baù, Marco; Ferrari, Marco; Ferrari, Vittorio
2017-01-01
A technique for contactless electromagnetic interrogation of AT-cut quartz piezoelectric resonator sensors is proposed based on a primary coil electromagnetically air-coupled to a secondary coil connected to the electrodes of the resonator. The interrogation technique periodically switches between interleaved excitation and detection phases. During the excitation phase, the resonator is set into vibration by a driving voltage applied to the primary coil, whereas in the detection phase, the excitation signal is turned off and the transient decaying response of the resonator is sensed without contact by measuring the voltage induced back across the primary coil. This approach ensures that the readout frequency of the sensor signal is to a first order approximation independent of the interrogation distance between the primary and secondary coils. A detailed theoretical analysis of the interrogation principle based on a lumped-element equivalent circuit is presented. The analysis has been experimentally validated on a 4.432 MHz AT-cut quartz crystal resonator, demonstrating the accurate readout of the series resonant frequency and quality factor over an interrogation distance of up to 2 cm. As an application, the technique has been applied to the measurement of liquid microdroplets deposited on a 4.8 MHz AT-cut quartz crystal. More generally, the proposed technique can be exploited for the measurement of any physical or chemical quantities affecting the resonant response of quartz resonator sensors. PMID:28574459
Accurate Estimation of Solvation Free Energy Using Polynomial Fitting Techniques
Shyu, Conrad; Ytreberg, F. Marty
2010-01-01
This report details an approach to improve the accuracy of free energy difference estimates using thermodynamic integration data (slope of the free energy with respect to the switching variable λ) and its application to calculating solvation free energy. The central idea is to utilize polynomial fitting schemes to approximate the thermodynamic integration data and thereby improve the accuracy of the free energy difference estimates. Previously, we introduced the use of a polynomial regression technique to fit thermodynamic integration data (Shyu and Ytreberg, J Comput Chem 30: 2297–2304, 2009). In this report we introduce polynomial and spline interpolation techniques. Two systems with analytically solvable relative free energies are used to test the accuracy of the interpolation approach. We also use both interpolation and regression methods to determine a small molecule solvation free energy. Our simulations show that, using such polynomial techniques and non-equidistant λ values, the solvation free energy can be estimated with high accuracy without using soft-core scaling and separate simulations for Lennard-Jones and partial charges. The results from our study suggest that these polynomial techniques, especially with the use of non-equidistant λ values, improve the accuracy of ΔF estimates without demanding additional simulations. We also provide general guidelines for the use of polynomial fitting to estimate free energy. To allow researchers to immediately utilize these methods, free software and documentation are provided via http://www.phys.uidaho.edu/ytreberg/software. PMID:20623657
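The core idea, fitting the ⟨dU/dλ⟩ data with a polynomial and then integrating the fit analytically over λ ∈ [0, 1], can be sketched with NumPy's polynomial tools (the slope data below are synthetic, not simulation output):

```python
import numpy as np

# Polynomial-fitting sketch for thermodynamic integration:
# Delta F = integral over [0, 1] of <dU/dlambda> d(lambda).
lam = np.array([0.0, 0.1, 0.3, 0.6, 0.85, 1.0])   # non-equidistant lambdas
dudl = 3.0 * lam**2 - 2.0 * lam + 0.5              # toy <dU/dlambda> values

# Fit a quadratic to the slope data, then integrate the fit exactly.
coeffs = np.polynomial.polynomial.polyfit(lam, dudl, deg=2)
poly = np.polynomial.Polynomial(coeffs)
antideriv = poly.integ()
dF = antideriv(1.0) - antideriv(0.0)
print(dF)  # the toy slope integrates exactly to 0.5
```

Because the fitted polynomial is integrated in closed form, no quadrature error is added on top of the fitting error, which is the advantage over simple trapezoidal integration of the λ samples.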
Liposuction: Anaesthesia challenges
Sood, Jayashree; Jayaraman, Lakshmi; Sethi, Nitin
2011-01-01
Liposuction is one of the most popular treatment modalities in aesthetic surgery, with certain unique anaesthetic considerations. Liposuction is often performed as an office procedure. There are four main types of liposuction techniques based on the volume of infiltration or wetting solution injected, viz., dry, wet, superwet, and tumescent. The tumescent technique is one of the most common liposuction techniques, in which large volumes of dilute local anaesthetic (wetting solution) are injected into the fat to facilitate anaesthesia and decrease blood loss. The amount of lignocaine injected may be very large, approximately 35-55 mg/kg, raising concerns regarding local anaesthetic toxicity. Liposuction can be of two types according to the volume of solution aspirated: high volume (>4,000 ml aspirated) or low volume (<4,000 ml aspirated). While small-volume liposuction may be done under local/monitored anaesthesia care, large-volume liposuction requires general anaesthesia. As a large volume of wetting solution is injected into the subcutaneous tissue, the intraoperative fluid management has to be carefully titrated along with haemodynamic monitoring and temperature control. Assessment of blood loss is difficult, as it is mixed with the aspirated fat. Since most obese patients opt for liposuction as a quick method to lose weight, all concerns related to obesity need to be addressed in the preoperative evaluation. PMID:21808392
Piezosurgery for the lingual split technique in mandibular third molar removal: a suggestion.
Pippi, Roberto; Alvaro, Roberto
2013-03-01
The lingual split technique is a surgical procedure for the extraction of impacted mandibular third molars through a lingual approach. The main disadvantage of this technique is the high rate of temporary lingual nerve injury, mainly because of the trauma induced by lingual flap retraction. The purpose of this paper is to suggest the use of piezosurgery in performing the lingual cortical plate osteotomy of the third molar alveolar process. The surgical procedure was performed under general anesthesia and lasted approximately 60 minutes. After the buccal and lingual full-thickness flaps were incised and elevated, a piezosurgical device was used for the osteotomy. A well-defined bony window was then removed, allowing the entire tooth to be extracted in a lingual direction. The patient did not show any neurological postoperative complication. Lingual and inferior alveolar nerve functionality was normal before as well as after surgery. The use of piezoelectric surgery seems to be a good option for removing lower third molars when a lingual access is clearly indicated. The only disadvantage of this technique is a possible lengthening of the operating time, due to the lower cutting power of the piezoelectric device, the high mineralization of the mandibular cortical bone, and the use of inserts with a low degree of sharpening.
Feasibility of the partial-single arc technique in RapidArc planning for prostate cancer treatment
Rana, Suresh; Cheng, ChihYao
2013-01-01
The volumetric modulated arc therapy (VMAT) technique, in the form of RapidArc, is widely used to treat prostate cancer. The full-single arc (f-SA) technique in RapidArc planning for prostate cancer treatment provides efficient treatment, but it also delivers a higher radiation dose to the rectum. This study aimed to compare the dosimetric results from the new partial-single arc (p-SA) technique with those from the f-SA technique in RapidArc planning for prostate cancer treatment. In this study, 10 patients with low-risk prostate cancer were selected. For each patient, two sets of RapidArc plans (f-SA and p-SA) were created in the Eclipse treatment planning system. The f-SA plan was created using one full arc, and the p-SA plan was created using planning parameters identical to those of the f-SA plan but with anterior and posterior avoidance sectors. Various dosimetric parameters of the f-SA and p-SA plans were evaluated and compared for the same target coverage and identical plan optimization parameters. The f-SA and p-SA plans showed an average difference of ±1% for the doses to the planning target volume (PTV), and there were no clear differences in dose homogeneity or plan conformity. In comparison to the f-SA technique, the p-SA technique reduced the doses to the rectum by approximately 6.1% to 21.2%, to the bladder by approximately 10.3% to 29.5%, and to the penile bulb by approximately 2.2%. In contrast, the dose to the femoral heads, the integral dose, and the number of monitor units were higher in the p-SA plans by approximately 34.4%, 7.7%, and 9.2%, respectively. In conclusion, it is feasible to use the p-SA technique for RapidArc planning for prostate cancer treatment. For the same PTV coverage and identical plan optimization parameters, the p-SA technique is better in sparing the rectum and bladder without compromising plan conformity or target homogeneity when compared to the f-SA technique. PMID:23845140
A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain
2015-05-18
…approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we will apply to compute the pdf of our… The project has two parts to it: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a…
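A kernel density estimate of the kind referred to above is available directly in SciPy (an illustrative Gaussian example, not the report's maritime laser-propagation data):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Kernel Density Method sketch: approximate an unknown pdf from samples
# by summing Gaussian kernels centered at each data point.
samples = rng.normal(0.0, 1.0, size=2000)
kde = gaussian_kde(samples)            # bandwidth chosen by Scott's rule

x = np.linspace(-3.0, 3.0, 7)
true_pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
print(np.max(np.abs(kde(x) - true_pdf)))  # small for this sample size
```

The estimate converges to the underlying pdf as the sample size grows, with the bandwidth controlling the bias-variance trade-off.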
Order reduction for a model of marine bacteriophage evolution
NASA Astrophysics Data System (ADS)
Pagliarini, Silvia; Korobeinikov, Andrei
2017-02-01
A typical mechanistic model of viral evolution necessarily includes several time scales, which can differ by orders of magnitude. Such a diversity of time scales makes analysis of these models difficult, and reducing the order of a model is highly desirable when handling it. A typical approach applied to such slow-fast (or singularly perturbed) systems is the time-scale separation technique, and constructing the so-called quasi-steady-state approximation is the usual first step in applying it. While this technique is commonly applied, in some cases its straightforward application can lead to unsatisfactory results. In this paper we construct the quasi-steady-state approximation for a model of the evolution of marine bacteriophages based on the Beretta-Kuang model. We show that for this particular model the quasi-steady-state approximation is able to produce only a qualitative but not a quantitative fit.
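The construction can be illustrated on a generic slow-fast system (a toy example, not the Beretta-Kuang model): the fast equation is set to its equilibrium, and the resulting algebraic relation is substituted into the slow equation:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy slow-fast system: x' = -x*y (slow), eps*y' = x - y (fast).
eps = 1e-3

def full(t, z):
    x, y = z
    return [-x * y, (x - y) / eps]

# Quasi-steady-state approximation: set the fast equation to its
# equilibrium, x - y = 0, so y ≈ x and the reduced slow equation
# becomes x' = -x**2.
def reduced(t, z):
    return [-z[0] ** 2]

sol_full = solve_ivp(full, (0.0, 5.0), [1.0, 1.0],
                     method="LSODA", rtol=1e-8, atol=1e-10)
sol_red = solve_ivp(reduced, (0.0, 5.0), [1.0], rtol=1e-8, atol=1e-10)

# The reduced model tracks the full one up to O(eps); the exact reduced
# solution is x(t) = 1/(1+t), so x(5) ≈ 0.1667.
print(sol_full.y[0, -1], sol_red.y[0, -1])
```

In this toy case the reduction is both qualitatively and quantitatively accurate; the point of the paper is that for the bacteriophage model the analogous construction preserves only the qualitative behaviour.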
Approximate Bayesian computation for spatial SEIR(S) epidemic models.
Brown, Grant D; Porter, Aaron T; Oleson, Jacob J; Hinman, Jessica A
2018-02-01
Approximate Bayesian Computation (ABC) provides an attractive approach to estimation in complex Bayesian inferential problems for which evaluation of the kernel of the posterior distribution is impossible or computationally expensive. These highly parallelizable techniques have been successfully applied in many fields, particularly in cases where more traditional approaches such as Markov chain Monte Carlo (MCMC) are impractical. In this work, we demonstrate the application of approximate Bayesian inference to spatially heterogeneous Susceptible-Exposed-Infectious-Removed (SEIR) stochastic epidemic models. Although these models have a tractable posterior distribution, MCMC techniques nevertheless become computationally infeasible for moderately sized problems. We discuss the practical implementation of these techniques via the open source ABSEIR package for R. The performance of ABC relative to traditional MCMC methods is explored under simulation in a small problem, as well as in the spatially heterogeneous context of the 2014 epidemic of Chikungunya in the Americas. Copyright © 2017 Elsevier Ltd. All rights reserved.
Quantum simulation of a quantum stochastic walk
NASA Astrophysics Data System (ADS)
Govia, Luke C. G.; Taketani, Bruno G.; Schuhmacher, Peter K.; Wilhelm, Frank K.
2017-03-01
The study of quantum walks has been shown to have a wide range of applications in areas such as artificial intelligence, the study of biological processes, and quantum transport. The quantum stochastic walk (QSW), which allows for incoherent movement of the walker, and therefore, directionality, is a generalization on the fully coherent quantum walk. While a QSW can always be described in Lindblad formalism, this does not mean that it can be microscopically derived in the standard weak-coupling limit under the Born-Markov approximation. This restricts the class of QSWs that can be experimentally realized in a simple manner. To circumvent this restriction, we introduce a technique to simulate open system evolution on a fully coherent quantum computer, using a quantum trajectories style approach. We apply this technique to a broad class of QSWs, and show that they can be simulated with minimal experimental resources. Our work opens the path towards the experimental realization of QSWs on large graphs with existing quantum technologies.
Textbook Multigrid Efficiency for Leading Edge Stagnation
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.; Mineck, Raymond E.
2004-01-01
A multigrid solver is defined as having textbook multigrid efficiency (TME) if the solutions to the governing system of equations are attained in a computational work which is a small (less than 10) multiple of the operation count in evaluating the discrete residuals. TME in solving the incompressible inviscid fluid equations is demonstrated for leading-edge stagnation flows. The contributions of this paper include (1) a special formulation of the boundary conditions near stagnation allowing convergence of the Newton iterations on coarse grids, (2) the boundary relaxation technique to facilitate relaxation and residual restriction near the boundaries, (3) a modified relaxation scheme to prevent initial error amplification, and (4) new general analysis techniques for multigrid solvers. Convergence of algebraic errors below the level of discretization errors is attained by a full multigrid (FMG) solver with one full approximation scheme (FAS) cycle per grid. Asymptotic convergence rates of the FAS cycles for the full system of flow equations are very fast, approaching those for scalar elliptic equations.
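The flavor of a multigrid cycle (smooth, restrict the residual, solve a coarse problem, prolong the correction, smooth again) can be sketched for a 1-D Poisson problem. This toy two-grid code is only illustrative; it does not implement the paper's FAS/FMG solver, its boundary relaxation, or the flow equations:

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    # Damped Jacobi smoothing for -u'' = f with zero Dirichlet boundaries.
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def coarse_solve(r2, h2):
    # Direct solve of the coarse-grid correction equation.
    m = r2.size - 2
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (h2 * h2)
    e2 = np.zeros_like(r2)
    e2[1:-1] = np.linalg.solve(A, r2[1:-1])
    return e2

def two_grid(u, f, h):
    u = jacobi(u, f, h, 3)                        # pre-smooth
    e2 = coarse_solve(residual(u, f, h)[::2], 2 * h)  # restrict + solve
    e = np.zeros_like(u)
    e[::2] = e2                                   # prolong: copy coarse values,
    e[1:-1:2] = 0.5 * (e2[:-1] + e2[1:])          # interpolate in between
    return jacobi(u + e, f, h, 3)                 # post-smooth

n = 129
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)                  # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(25):
    u = two_grid(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))       # approaches discretization error
```

A full multigrid solver applies this two-grid idea recursively and, in the textbook-efficiency regime, drives the algebraic error below the discretization error in a handful of residual evaluations per grid.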
A general numerical model for wave rotor analysis
NASA Technical Reports Server (NTRS)
Paxson, Daniel W.
1992-01-01
Wave rotors represent one of the promising technologies for achieving very high core temperatures and pressures in future gas turbine engines. Their operation depends upon unsteady gas dynamics and as such, their analysis is quite difficult. This report describes a numerical model which has been developed to perform such an analysis. Following a brief introduction, a summary of the wave rotor concept is given. The governing equations are then presented, along with a summary of the assumptions used to obtain them. Next, the numerical integration technique is described. This is an explicit finite volume technique based on the method of Roe. The discussion then focuses on the implementation of appropriate boundary conditions. Following this, some results are presented which first compare the numerical approximation to the governing differential equations and then compare the overall model to an actual wave rotor experiment. Finally, some concluding remarks are presented concerning the limitations of the simplifying assumptions and areas where the model may be improved.
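The explicit finite-volume update can be sketched for the simplest hyperbolic model, linear advection, where the Roe interface flux reduces to plain upwinding (illustrative only; the wave-rotor model applies Roe's method to the full unsteady gas-dynamic equations with source terms and port boundary conditions):

```python
import numpy as np

# First-order finite-volume sketch for u_t + a u_x = 0 on a periodic
# domain; for this scalar equation the Roe flux is simple upwinding.
a = 1.0
n = 200
dx = 1.0 / n
dt = 0.4 * dx / abs(a)                  # CFL number 0.4
x = (np.arange(n) + 0.5) * dx           # cell centers
u0 = np.exp(-100.0 * (x - 0.3) ** 2)    # initial pulse
u = u0.copy()

for _ in range(round(0.4 / dt)):        # advance to t = 0.4
    flux = a * u                        # upwind (Roe) interface flux, a > 0
    u = u - dt / dx * (flux - np.roll(flux, 1))

# The pulse advects conservatively a distance of 0.4 (to x ≈ 0.7),
# broadened by the numerical diffusion of the first-order scheme.
```

Conservation of the cell averages under the flux-difference update is the property that makes such schemes suitable for tracking the waves on which wave-rotor operation depends.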
NASA Astrophysics Data System (ADS)
Fosas de Pando, Miguel; Schmid, Peter J.; Sipp, Denis
2016-11-01
Nonlinear model reduction for large-scale flows is an essential component in many fluid applications such as flow control, optimization, parameter space exploration, and statistical analysis. In this article, we generalize the POD-DEIM method, introduced by Chaturantabut & Sorensen [1], to address nonlocal nonlinearities in the equations without loss of performance or efficiency. The nonlinear terms are represented by nested DEIM approximations using multiple expansion bases based on the Proper Orthogonal Decomposition. These extensions are imperative, for example, for applications of the POD-DEIM method to large-scale compressible flows. The efficient implementation of the presented model-reduction technique follows our earlier work [2] on linearized and adjoint analyses and takes advantage of the modular structure of our compressible flow solver. The efficacy of the nonlinear model-reduction technique is demonstrated on the flow around an airfoil and its acoustic footprint. We obtain an accurate and robust low-dimensional model that captures the main features of the full flow.
Hydroxyapatite scaffolds processed using a TBA-based freeze-gel casting/polymer sponge technique.
Yang, Tae Young; Lee, Jung Min; Yoon, Seog Young; Park, Hong Chae
2010-05-01
A novel freeze-gel casting/polymer sponge technique has been introduced to fabricate porous hydroxyapatite scaffolds with controlled "designer" pore structures and improved compressive strength for bone tissue engineering applications. Tertiary-butyl alcohol (TBA) was used as the solvent in this work. The merits of each production process (freeze casting, gel casting, and the polymer sponge route) were characterized by the sintered microstructure and mechanical strength. A reticulated structure with large pores of 180-360 microm, formed on burn-out of the polyurethane foam, consisted of struts with highly interconnected, unidirectional, long pore channels (approximately 4.5 microm in diameter), produced by evaporation of the frozen TBA during freeze casting, together with dense inner walls containing a few isolated fine pores (<2 microm) produced by gel casting. The sintered porosity and pore size generally behaved in an opposite manner to the solid loading; that is, a high solid loading gave low porosity, small pore size, and a thickening of the strut cross section, thus leading to higher compressive strengths.
Molecular Dynamics Simulations of G Protein-Coupled Receptors.
Bruno, Agostino; Costantino, Gabriele
2012-04-01
G protein-coupled receptors (GPCRs) constitute the largest family of membrane-bound receptors, with more than 800 members encoded by 351 genes in humans. It has been estimated that more than 50% of clinically available drugs act on GPCRs, with approximately 400, 50, and 25 druggable proteins in classes A, B, and C, respectively. Furthermore, class A GPCRs, which account for approximately 25% of marketed small-molecule drugs, represent the most attractive pharmaceutical class. The recent availability of high-resolution three-dimensional structures of some GPCRs supports the notion that GPCRs are dynamically versatile and that their functions can be modulated by several factors. In this scenario, molecular dynamics (MD) simulation techniques are crucial for studying GPCR flexibility associated with function and ligand recognition. A general overview of biased and unbiased MD techniques is presented here, with special emphasis on recent results obtained in the GPCR field. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Deep Learning for Flow Sculpting: Insights into Efficient Learning using Scientific Simulation Data
NASA Astrophysics Data System (ADS)
Stoecklein, Daniel; Lore, Kin Gwn; Davies, Michael; Sarkar, Soumik; Ganapathysubramanian, Baskar
2017-04-01
A new technique for shaping microfluidic flow, known as flow sculpting, offers an unprecedented level of passive fluid flow control, with potential breakthrough applications in advancing manufacturing, biology, and chemistry research at the microscale. However, efficiently solving the inverse problem of designing a flow sculpting device for a desired fluid flow shape remains a challenge. Current approaches struggle with the many-to-one design space, requiring substantial user interaction and the necessity of building intuition, all of which are time and resource intensive. Deep learning has emerged as an efficient function approximation technique for high-dimensional spaces, and presents a fast solution to the inverse problem, yet the science of its implementation in similarly defined problems remains largely unexplored. We propose that deep learning methods can completely outpace current approaches for scientific inverse problems while delivering comparable designs. To this end, we show how intelligent sampling of the design space inputs can make deep learning methods more competitive in accuracy, while illustrating their generalization capability to out-of-sample predictions.
Imprecise results: Utilizing partial computations in real-time systems
NASA Technical Reports Server (NTRS)
Lin, Kwei-Jay; Natarajan, Swaminathan; Liu, Jane W.-S.
1987-01-01
In real-time systems, a computation may not have time to complete its execution because of deadline requirements. In such cases, no result except the approximate results produced by the computation up to that point will be available. It is desirable to utilize these imprecise results if possible. Two approaches are proposed to enable computations to return imprecise results when executions cannot be completed normally. The milestone approach records results periodically; if a deadline is reached, the last recorded result is returned. The sieve approach demarcates sections of code that can be skipped if the time available is insufficient. By using these approaches, the system is able to produce imprecise results when deadlines are reached. The design of the Concord project, which supports imprecise computations using these techniques, is described. Also presented is a general model of imprecise computations using these techniques, as well as one that takes into account the influence of the environment, showing where the latter approach fits into this model.
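A minimal sketch of the milestone approach, assuming a simple convergent computation (Newton iteration) rather than the actual Concord mechanisms; the function name and deadline handling are hypothetical:

```python
import time

def milestone_sqrt(x, deadline, tol=1e-12):
    """Newton iteration for sqrt(x): record a milestone after every step and
    return the last recorded (imprecise) result if the deadline arrives first."""
    estimate = x if x > 1 else 1.0    # initial guess doubles as first milestone
    milestone = estimate
    while time.monotonic() < deadline:
        nxt = 0.5 * (estimate + x / estimate)
        if abs(nxt - estimate) < tol:
            return nxt, True          # converged: precise result
        estimate = nxt
        milestone = estimate          # record intermediate result
    return milestone, False           # deadline reached: imprecise result

result, precise = milestone_sqrt(2.0, time.monotonic() + 0.01)
```

The sieve approach would instead wrap each skippable refinement section in its own time check, so whole sections are omitted when time runs short.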
Evidence-Based Medicine: Cleft Palate.
Woo, Albert S
2017-01-01
After studying this article, the participant should be able to: 1. Describe the incidence of cleft palate and risk factors associated with development of an orofacial cleft. 2. Understand differences among several techniques to repair clefts of both the hard and soft palates. 3. Discuss risk factors for development of postoperative fistulas, velopharyngeal insufficiency, and facial growth problems. 4. Establish a treatment plan for individualized care of a cleft palate patient. Orofacial clefts are the most common congenital malformations of the head and neck region, and approximately three-quarters of these patients have some form of cleft palate deformity. Cleft palate repair is generally performed in children between 6 and 12 months of age. The goals of palate repair are to minimize the occurrence of fistulas, establish a normal velopharyngeal mechanism, and optimize facial growth. This Maintenance of Certification review discusses the incidence and epidemiology associated with cleft palate deformity and specifics associated with patient care, including analgesia, surgical repair techniques, and complications associated with repair of the cleft palate.
Konevskikh, Tatiana; Ponossov, Arkadi; Blümel, Reinhold; Lukacs, Rozalia; Kohler, Achim
2015-06-21
The appearance of fringes in the infrared spectroscopy of thin films seriously hinders the interpretation of chemical bands because fringes change the relative peak heights of chemical spectral bands. Thus, for the correct interpretation of chemical absorption bands, physical properties need to be separated from chemical characteristics. In the paper at hand we revisit the theory of the scattering of infrared radiation at thin absorbing films. Although, in general, scattering and absorption are connected by a complex refractive index, we show that for the scattering of infrared radiation at thin biological films, fringes and chemical absorbance can in good approximation be treated as additive. We further introduce a model-based pre-processing technique for separating fringes from chemical absorbance by extended multiplicative signal correction (EMSC). The technique is validated by simulated and experimental FTIR spectra. It is further shown that EMSC, as opposed to other suggested filtering methods for the removal of fringes, does not remove information related to chemical absorption.
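The additivity result suggests a simple EMSC-style correction: regress each measured spectrum on a chemical reference spectrum plus baseline and fringe components, then subtract the non-chemical part. The sketch below is a toy version with a synthetic band and a single known fringe period; in practice the fringe frequency is tied to the film's optical thickness and the EMSC model contains further terms.

```python
import numpy as np

rng = np.random.default_rng(4)
wn = np.linspace(1000.0, 1800.0, 500)             # wavenumber axis (cm^-1)
pure = np.exp(-0.5 * ((wn - 1650.0) / 20.0)**2)   # reference "chemical" band
fringe = 0.15 * np.sin(2 * np.pi * wn / 95.0)     # additive fringe component
spectrum = 0.8 * pure + 0.1 + fringe + 0.005 * rng.standard_normal(wn.size)

# EMSC-style design matrix: chemical reference, baseline, fringe sin/cos pair
M = np.column_stack([pure, np.ones_like(wn),
                     np.sin(2 * np.pi * wn / 95.0),
                     np.cos(2 * np.pi * wn / 95.0)])
coef, *_ = np.linalg.lstsq(M, spectrum, rcond=None)
corrected = spectrum - M[:, 1:] @ coef[1:]        # remove baseline + fringes
```

After correction, `corrected` retains essentially only the chemical band (scaled by `coef[0]`), so relative peak heights are no longer distorted by the fringes.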
Kushwaha, Jitendra Kumar
2013-01-01
Approximation theory is a very important field with various applications in pure and applied mathematics. The present study deals with a new theorem on the approximation of functions of Lipschitz class by using Euler means of the conjugate series of Fourier series. In this paper, the degree of approximation by using Euler means of the conjugates of functions belonging to the Lip (ξ(t), p) class has been obtained. The Lip α and Lip (α, p) classes are particular cases of the Lip (ξ(t), p) class. The main result of this paper generalizes some well-known results in this direction.
Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz
2016-10-03
A novel, accurate, and useful approximation of the well-known Beckmann distribution is presented here, which is used to model generalized pointing errors in the context of free-space optical (FSO) communication systems. We derive an approximate closed-form probability density function (PDF) for the composite gamma-gamma (GG) atmospheric turbulence with the pointing error model using the proposed approximation of the Beckmann distribution, which is valid for most practical terrestrial FSO links. This approximation takes into account the effect of the beam width, different jitters for the elevation and the horizontal displacement, and the simultaneous effect of nonzero boresight errors for each axis at the receiver plane. Additionally, the proposed approximation allows us to delimit two different FSO scenarios: the first, in which atmospheric turbulence is the dominant effect relative to generalized pointing errors, and the second, in which generalized pointing errors are the dominant effect relative to atmospheric turbulence. The second FSO scenario has not been studied in depth by the research community. Moreover, the accuracy of the method is measured both visually and quantitatively using curve-fitting metrics. Simulation results are further included to confirm the analytical results.
USDA-ARS?s Scientific Manuscript database
Grazing lands are the most dominant land cover type in the United States with approximately 311.7 Mha being defined as rangelands. Approximately 53% of the Nation’s rangelands are owned and managed by the private sector while the Federal government manages approximately 43% of the Nation’s rangelan...
Computational methods for estimation of parameters in hyperbolic systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.; Murphy, K. A.
1983-01-01
Approximation techniques for estimating spatially varying coefficients and unknown boundary parameters in second order hyperbolic systems are discussed. Methods for state approximation (cubic splines, tau-Legendre) and approximation of function space parameters (interpolatory splines) are outlined and numerical findings for use of the resulting schemes in model "one dimensional seismic inversion" problems are summarized.
Is approximated de-epithelized glanuloplasty beneficial for hypospadiologist?
ZakiEldahshoury, M; Gamal, W; Salem, E; Rashed, E; Mamdouh, A
2016-05-01
Further evaluation of the cosmetic and functional results of approximated de-epithelized glanuloplasty in different degrees of hypospadias. This study included 96 male patients (DPH=68 & MPH=28). Patients selected for repair with glans approximation should have a wide urethral plate and a grooved glans. All cases were repaired with the classic TIP and glans approximation technique. Follow-up was for one year by clinical examination of the meatal shape, size, and site, glans shape, skin covering, suture line, urethral catheter, edema, and fistula, in addition to parent satisfaction. Mean operative time was 49±9 minutes. As regards the functional and cosmetic outcomes, success was reported in 95.8%, while failure occurred in 4.16%, in the form of glanular disruption in two patients and subcoronal urethrocutaneous fistula in another two patients. Glans approximation has many advantages: good cosmetic and functional results, short operative time, less blood loss, and no need for a tourniquet. Further work should study a larger number of cases and compare glans approximation with the classic TIP technique. Copyright © 2015 AEU. Publicado por Elsevier España, S.L.U. All rights reserved.
NASA Astrophysics Data System (ADS)
Ödén, Jakob; Toma-Dasu, Iuliana; Yu, Cedric X.; Feigenberg, Steven J.; Regine, William F.; Mutaf, Yildirim D.
2013-07-01
The GammaPod™ device, manufactured by Xcision Medical Systems, is a novel stereotactic breast irradiation device. It consists of a hemispherical source carrier containing 36 Cobalt-60 sources, a tungsten collimator with two built-in collimation sizes, a dynamically controlled patient support table and a breast immobilization cup also functioning as the stereotactic frame for the patient. The dosimetric output of the GammaPod™ was modelled using a Monte Carlo based treatment planning system. For the comparison, three-dimensional (3D) models of commonly used intra-cavitary breast brachytherapy techniques utilizing single lumen and multi-lumen balloon as well as peripheral catheter multi-lumen implant devices were created and corresponding 3D dose calculations were performed using the American Association of Physicists in Medicine Task Group-43 formalism. Dose distributions for clinically relevant target volumes were optimized using dosimetric goals set forth in the National Surgical Adjuvant Breast and Bowel Project Protocol B-39. For clinical scenarios assuming similar target sizes and proximity to critical organs, dose coverage, dose fall-off profiles beyond the target and skin doses at given distances beyond the target were calculated for GammaPod™ and compared with the doses achievable by the brachytherapy techniques. The dosimetric goals within the protocol guidelines were fulfilled for all target sizes and irradiation techniques. For central targets, at small distances from the target edge (up to approximately 1 cm) the brachytherapy techniques generally have a steeper dose fall-off gradient compared to GammaPod™ and at longer distances (more than about 1 cm) the relation is generally observed to be opposite. For targets close to the skin, the relative skin doses were considerably lower for GammaPod™ than for any of the brachytherapy techniques. 
In conclusion, GammaPod™ allows adequate and more uniform dose coverage to centrally and peripherally located targets with an acceptable dose fall-off and lower relative skin dose than the brachytherapy techniques considered in this study.
Segmentation of left atrial intracardiac ultrasound images for image guided cardiac ablation therapy
NASA Astrophysics Data System (ADS)
Rettmann, M. E.; Stephens, T.; Holmes, D. R.; Linte, C.; Packer, D. L.; Robb, R. A.
2013-03-01
Intracardiac echocardiography (ICE), a technique in which structures of the heart are imaged using a catheter navigated inside the cardiac chambers, is an important imaging technique for guidance in cardiac ablation therapy. Automatic segmentation of these images is valuable for guidance and targeting of treatment sites. In this paper, we describe an approach to segment ICE images by generating an empirical model of blood pool and tissue intensities. Normal, Weibull, Gamma, and Generalized Extreme Value (GEV) distributions are fit to histograms of tissue and blood pool pixels from a series of ICE scans. A total of 40 images from 4 separate studies were evaluated. The model was trained and tested using two approaches. In the first approach, the model was trained on all images from 3 studies and subsequently tested on the 40 images from the 4th study. This procedure was repeated 4 times using a leave-one-out strategy. This is termed the between-subjects approach. In the second approach, the model was trained on 10 randomly selected images from a single study and tested on the remaining 30 images in that study. This is termed the within-subjects approach. For both approaches, the model was used to automatically segment ICE images into blood and tissue regions. Each pixel is classified using the Generalized Likelihood Ratio Test across neighborhood sizes ranging from 1 to 49. Automatic segmentation results were compared against manual segmentations for all images. In the between-subjects approach, the GEV distribution using a neighborhood size of 17 was found to be the most accurate with a misclassification rate of approximately 17%. In the within-subjects approach, the GEV distribution using a neighborhood size of 19 was found to be the most accurate with a misclassification rate of approximately 15%. As expected, the majority of misclassified pixels were located near the boundaries between tissue and blood pool regions for both methods.
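The intensity-model classification can be sketched with SciPy's GEV implementation; the gamma-distributed samples below are synthetic stand-ins for real blood-pool and tissue pixel intensities, and the "neighborhood" here is a flat array of values rather than an image patch:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
blood = rng.gamma(2.0, 10.0, 2000)     # synthetic "blood pool" intensities
tissue = rng.gamma(9.0, 12.0, 2000)    # synthetic "tissue" intensities

# Fit a Generalized Extreme Value distribution to each class
blood_gev = genextreme.fit(blood)
tissue_gev = genextreme.fit(tissue)

def classify(neigh):
    """Label a pixel neighborhood via a generalized likelihood ratio:
    compare summed log-likelihoods under the two fitted GEV models."""
    ll_blood = genextreme.logpdf(neigh, *blood_gev).sum()
    ll_tissue = genextreme.logpdf(neigh, *tissue_gev).sum()
    return "tissue" if ll_tissue > ll_blood else "blood"
```

A neighborhood of 17 values (matching the best between-subjects setting reported above) would be passed to `classify` for each pixel.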
Estimation of correlation functions by stochastic approximation.
NASA Technical Reports Server (NTRS)
Habibi, A.; Wintz, P. A.
1972-01-01
Estimation of the autocorrelation function of a zero-mean stationary random process is considered. The techniques are applicable to processes with nonzero mean provided the mean is estimated first and subtracted. Two recursive techniques are proposed, both of which are based on the method of stochastic approximation and assume a functional form for the correlation function that depends on a number of parameters that are recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other technique provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.
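The first recursive technique, minimizing the mean-square error between point estimates and a parametric correlation model, can be sketched as a Robbins-Monro recursion; the exponential model form, true parameters, noise level, and gain sequence below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
lags = np.arange(8.0)

def model(s2, a, tau):
    """Assumed parametric form R(tau) = sigma^2 * exp(-a * tau)."""
    return s2 * np.exp(-a * tau)

true_s2, true_a = 4.0, 0.5
theta = np.array([1.0, 1.0])             # initial (sigma^2, a) estimates

for k in range(1, 4001):                 # successive records
    # noisy point estimates of R(tau) computed from the k-th record
    point_est = model(true_s2, true_a, lags) + 0.05 * rng.standard_normal(lags.size)
    resid = model(theta[0], theta[1], lags) - point_est
    grad = np.array([                    # gradient of the squared error
        np.sum(resid * np.exp(-theta[1] * lags)),
        np.sum(resid * (-theta[0]) * lags * np.exp(-theta[1] * lags)),
    ])
    theta -= grad / (10.0 + k)           # decreasing Robbins-Monro gain
```

Each record contributes one small correction, so the parameters are refined continuously without storing past records.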
Basis Function Approximation of Transonic Aerodynamic Influence Coefficient Matrix
NASA Technical Reports Server (NTRS)
Li, Wesley Waisang; Pak, Chan-Gi
2010-01-01
A technique for approximating the modal aerodynamic influence coefficients [AIC] matrices by using basis functions has been developed and validated. An application of the resulting approximated modal AIC matrix for a flutter analysis in transonic speed regime has been demonstrated. This methodology can be applied to the unsteady subsonic, transonic and supersonic aerodynamics. The method requires the unsteady aerodynamics in frequency-domain. The flutter solution can be found by the classic methods, such as rational function approximation, k, p-k, p, root-locus et cetera. The unsteady aeroelastic analysis for design optimization using unsteady transonic aerodynamic approximation is being demonstrated using the ZAERO(TradeMark) flutter solver (ZONA Technology Incorporated, Scottsdale, Arizona). The technique presented has been shown to offer consistent flutter speed prediction on an aerostructures test wing [ATW] 2 configuration with negligible loss in precision in transonic speed regime. These results may have practical significance in the analysis of aircraft aeroelastic calculations and could lead to a more efficient design optimization cycle.
Application of Approximate Unsteady Aerodynamics for Flutter Analysis
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Li, Wesley W.
2010-01-01
A technique for approximating the modal aerodynamic influence coefficient (AIC) matrices by using basis functions has been developed. A process for using the resulting approximated modal AIC matrix in aeroelastic analysis has also been developed. The method requires the unsteady aerodynamics in frequency domain, and this methodology can be applied to the unsteady subsonic, transonic, and supersonic aerodynamics. The flutter solution can be found by the classic methods, such as rational function approximation, k, p-k, p, root locus et cetera. The unsteady aeroelastic analysis using unsteady subsonic aerodynamic approximation is demonstrated herein. The technique presented is shown to offer consistent flutter speed prediction on an aerostructures test wing (ATW) 2 and a hybrid wing body (HWB) type of vehicle configuration with negligible loss in precision. This method computes AICs that are functions of the changing parameters being studied and are generated within minutes of CPU time instead of hours. These results may have practical application in parametric flutter analyses as well as more efficient multidisciplinary design and optimization studies.
Basis Function Approximation of Transonic Aerodynamic Influence Coefficient Matrix
NASA Technical Reports Server (NTRS)
Li, Wesley W.; Pak, Chan-gi
2011-01-01
A technique for approximating the modal aerodynamic influence coefficients matrices by using basis functions has been developed and validated. An application of the resulting approximated modal aerodynamic influence coefficients matrix for a flutter analysis in transonic speed regime has been demonstrated. This methodology can be applied to the unsteady subsonic, transonic, and supersonic aerodynamics. The method requires the unsteady aerodynamics in frequency-domain. The flutter solution can be found by the classic methods, such as rational function approximation, k, p-k, p, root-locus et cetera. The unsteady aeroelastic analysis for design optimization using unsteady transonic aerodynamic approximation is being demonstrated using the ZAERO flutter solver (ZONA Technology Incorporated, Scottsdale, Arizona). The technique presented has been shown to offer consistent flutter speed prediction on an aerostructures test wing 2 configuration with negligible loss in precision in transonic speed regime. These results may have practical significance in the analysis of aircraft aeroelastic calculation and could lead to a more efficient design optimization cycle.
NASA Astrophysics Data System (ADS)
Coşkun, Nart; Çakır, Özcan; Erduran, Murat; Arif Kutlu, Yusuf
2014-05-01
The Nevşehir Kale region, located in the middle of Cappadocia and approximately conical in shape, is investigated for the existence of an underground city using the geophysical methods of electrical resistivity and seismic surface wave tomography together. Underground cities are generally known to exist in Cappadocia, and the current study has obtained important clues that there may be another one under the Nevşehir Kale region. Two-dimensional resistivity and seismic profiles approximately 4 km long surrounding the Nevşehir Kale are measured to determine the distribution of electrical resistivities and seismic velocities under the profiles. Several high-resistivity anomalies within a depth range of 8-20 m are discovered to be associated with a systematic void structure beneath the region. Because of the high-resolution resistivity measurement system employed, we were able to isolate the void structure from the embedding structure. Low seismic velocity zones associated with the high-resistivity depths are also discovered. Using three-dimensional visualization techniques, we show the extension of the void structure under the measured profiles.
Modular thermal analyzer routine, volume 1
NASA Technical Reports Server (NTRS)
Oren, J. A.; Phillips, M. A.; Williams, D. R.
1972-01-01
The Modular Thermal Analyzer Routine (MOTAR) is a general thermal analysis routine with strong capabilities for performing thermal analysis of systems containing flowing fluids, fluid system controls (valves, heat exchangers, etc.), life support systems, and thermal radiation situations. Its modular organization permits the analysis of a very wide range of thermal problems, from simple problems containing a few conduction nodes to those containing complicated flow and radiation analysis, with each problem type being analyzed with peak computational efficiency and maximum ease of use. The organization and programming methods applied to MOTAR achieved a high degree of computer utilization efficiency in terms of computer execution time and storage space required for a given problem. The computer time required to perform a given problem on MOTAR is approximately 40 to 50 percent that required for the currently existing widely used routines. The computer storage requirement for MOTAR is approximately 25 percent more than the most commonly used routines for the simplest problems, but the data storage techniques for the more complicated options should save a considerable amount of space.
Heuristic analogy in Ars Conjectandi: From Archimedes' De Circuli Dimensione to Bernoulli's theorem.
Campos, Daniel G
2018-02-01
This article investigates the way in which Jacob Bernoulli proved the main mathematical theorem that undergirds his art of conjecturing-the theorem that founded, historically, the field of mathematical probability. It aims to contribute a perspective into the question of problem-solving methods in mathematics while also contributing to the comprehension of the historical development of mathematical probability. It argues that Bernoulli proved his theorem by a process of mathematical experimentation in which the central heuristic strategy was analogy. In this context, the analogy functioned as an experimental hypothesis. The article expounds, first, Bernoulli's reasoning for proving his theorem, describing it as a process of experimentation in which hypothesis-making is crucial. Next, it investigates the analogy between his reasoning and Archimedes' approximation of the value of π, by clarifying both Archimedes' own experimental approach to the said approximation and its heuristic influence on Bernoulli's problem-solving strategy. The discussion includes some general considerations about analogy as a heuristic technique to make experimental hypotheses in mathematics. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Izmaylov, Artur F.; Staroverov, Viktor N.; Scuseria, Gustavo E.; Davidson, Ernest R.; Stoltz, Gabriel; Cancès, Eric
2007-02-01
We have recently formulated a new approach, named the effective local potential (ELP) method, for calculating local exchange-correlation potentials for orbital-dependent functionals based on minimizing the variance of the difference between a given nonlocal potential and its desired local counterpart [V. N. Staroverov et al., J. Chem. Phys. 125, 081104 (2006)]. Here we show that under a mildly simplifying assumption of frozen molecular orbitals, the equation defining the ELP has a unique analytic solution which is identical with the expression arising in the localized Hartree-Fock (LHF) and common energy denominator approximations (CEDA) to the optimized effective potential. The ELP procedure differs from the CEDA and LHF in that it yields the target potential as an expansion in auxiliary basis functions. We report extensive calculations of atomic and molecular properties using the frozen-orbital ELP method and its iterative generalization to prove that ELP results agree with the corresponding LHF and CEDA values, as they should. Finally, we make the case for extending the iterative frozen-orbital ELP method to full orbital relaxation.
Autonomic Closure for Turbulent Flows Using Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Doronina, Olga; Christopher, Jason; Hamlington, Peter; Dahm, Werner
2017-11-01
Autonomic closure is a new technique for achieving fully adaptive and physically accurate closure of coarse-grained turbulent flow governing equations, such as those solved in large eddy simulations (LES). Although autonomic closure has been shown in recent a priori tests to more accurately represent unclosed terms than do dynamic versions of traditional LES models, the computational cost of the approach makes it challenging to implement for simulations of practical turbulent flows at realistically high Reynolds numbers. The optimization step used in the approach introduces large matrices that must be inverted and is highly memory intensive. In order to reduce memory requirements, here we propose to use approximate Bayesian computation (ABC) in place of the optimization step, thereby yielding a computationally-efficient implementation of autonomic closure that trades memory-intensive for processor-intensive computations. The latter challenge can be overcome as co-processors such as general purpose graphical processing units become increasingly available on current generation petascale and exascale supercomputers. In this work, we outline the formulation of ABC-enabled autonomic closure and present initial results demonstrating the accuracy and computational cost of the approach.
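The core substitution proposed above, replacing the memory-intensive optimization step with approximate Bayesian computation, can be illustrated by a minimal rejection-ABC loop on a toy inference problem; the Gaussian model, uniform prior, summary statistic, and tolerance below are placeholders, not the LES closure setting:

```python
import numpy as np

rng = np.random.default_rng(2)
observed = rng.normal(3.0, 1.0, 200)          # "data" from a hidden parameter
s_obs = observed.mean()                       # summary statistic of the data

# Rejection ABC: draw candidates from the prior, simulate, and keep those
# whose simulated summary statistic lies within epsilon of the observed one.
accepted = []
for _ in range(20000):
    theta = rng.uniform(0.0, 10.0)            # prior draw
    sim = rng.normal(theta, 1.0, 200)         # forward simulation
    if abs(sim.mean() - s_obs) < 0.05:        # epsilon = 0.05
        accepted.append(theta)

posterior_mean = float(np.mean(accepted))
```

The accepted samples approximate the posterior over the unknown parameter without ever forming or inverting the large matrices an explicit least-squares solve would require, trading memory for repeated forward simulations.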
Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.
Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai
2017-11-01
For big data analysis, high computational cost for Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
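The surrogate-construction step can be sketched with random cosine bases fitted by least squares to evaluations of an "expensive" log-density; here a standard Gaussian stands in for the target, and the basis count and training design are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):                     # "expensive" log-density (stand-in)
    return -0.5 * x**2

# Random bases: cosine features with random frequencies and phases
W = rng.normal(0.0, 1.0, 50)
b = rng.uniform(0.0, 2 * np.pi, 50)

def features(x):
    return np.cos(np.outer(np.atleast_1d(x), W) + b)

xs = rng.uniform(-3.0, 3.0, 400)       # e.g. states visited early in the chain
coef, *_ = np.linalg.lstsq(features(xs), log_target(xs), rcond=None)

def surrogate(x):                      # cheap approximation of log_target
    return features(x) @ coef
```

In a full Hamiltonian Monte Carlo implementation, the surrogate's analytic gradient would typically drive the leapfrog integrator, with the exact density consulted only where needed to keep the chain targeting the correct distribution.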
Methods in the study of discrete upper hybrid waves
NASA Astrophysics Data System (ADS)
Yoon, P. H.; Ye, S.; Labelle, J.; Weatherwax, A. T.; Menietti, J. D.
2007-11-01
Naturally occurring plasma waves characterized by fine frequency structure or discrete spectrum, detected by satellite, rocket-borne instruments, or ground-based receivers, can be interpreted as eigenmodes excited and trapped in field-aligned density structures. This paper overviews various theoretical methods to study such phenomena for a one-dimensional (1-D) density structure. Among the various methods are parabolic approximation, eikonal matching, eigenfunction matching, and full numerical solution based upon shooting method. Various approaches are compared against the full numerical solution. Among the analytic methods it is found that the eigenfunction matching technique best approximates the actual numerical solution. The analysis is further extended to 2-D geometry. A detailed comparative analysis between the eigenfunction matching and fully numerical methods is carried out for the 2-D case. Although in general the two methods compare favorably, significant differences are also found such that for application to actual observations it is prudent to employ the fully numerical method. Application of the methods developed in the present paper to actual geophysical problems will be given in a companion paper.
NASA Astrophysics Data System (ADS)
Kudryavtsev, O.; Rodochenko, V.
2018-03-01
We propose a new general numerical method aimed at solving integro-differential equations with variable coefficients. The problem under consideration arises in finance in the context of pricing barrier options in a wide class of stochastic volatility models with jumps. To handle the effect of the correlation between the price and the variance, we use a suitable substitution of processes. Then we construct a Markov-chain approximation for the variance process on small time intervals and apply a maturity randomization technique. The result is a system of boundary problems for integro-differential equations with constant coefficients on the line at each vertex of the chain. We solve the arising problems using a numerical Wiener-Hopf factorization method. The approximate formulae for the factors are efficiently implemented by means of the fast Fourier transform. Finally, we use a recurrent procedure that moves backwards in time on the variance tree. We demonstrate the convergence of the method using Monte Carlo simulations and compare our results with those obtained by the Wiener-Hopf method with closed-form expressions for the factors.
Design of fuzzy systems using neurofuzzy networks.
Figueiredo, M; Gomide, F
1999-01-01
This paper introduces a systematic approach for fuzzy system design based on a class of neural fuzzy networks built upon a general neuron model. The network structure is such that it encodes the knowledge learned in the form of if-then fuzzy rules and processes data following fuzzy reasoning principles. The technique provides a mechanism to obtain rules covering the whole input/output space as well as the membership functions (including their shapes) for each input variable. Such characteristics are of utmost importance in fuzzy systems design and application. In addition, after learning, it is very simple to extract fuzzy rules in the linguistic form. The network has universal approximation capability, a property very useful in, e.g., modeling and control applications. Here we focus on function approximation problems as a vehicle to illustrate its usefulness and to evaluate its performance. Comparisons with alternative approaches are also included. Both noise-free and noisy data were considered in the computational experiments. The neural fuzzy network developed here, and consequently the underlying approach, has been shown to provide good results from the accuracy, complexity, and system design points of view.
Approximate labeling via graph cuts based on linear programming.
Komodakis, Nikos; Tziritas, Georgios
2007-08-01
A new framework is presented for both understanding and developing graph-cut-based combinatorial algorithms suitable for the approximate optimization of a very wide class of Markov Random Fields (MRFs) that are frequently encountered in computer vision. The proposed framework utilizes tools from the duality theory of linear programming in order to provide an alternative and more general view of state-of-the-art techniques like the α-expansion algorithm, which is included merely as a special case. Moreover, contrary to α-expansion, the derived algorithms generate solutions with guaranteed optimality properties for a much wider class of problems, for example, even for MRFs with nonmetric potentials. In addition, they are capable of providing per-instance suboptimality bounds on all occasions, including discrete MRFs with an arbitrary potential function. These bounds prove to be very tight in practice (that is, very close to 1), which means that the resulting solutions are almost optimal. Our algorithms' effectiveness is demonstrated by presenting experimental results on a variety of low-level vision tasks, such as stereo matching, image restoration, image completion, and optical flow estimation, as well as on synthetic problems.
TORO II simulations of induction heating in ferromagnetic materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adkins, D.R.; Gartling, D.K.; Kelley, J.B.
TORO II is a finite element computer program that is used in the simulation of electric and magnetic fields. This code, which was developed at Sandia National Laboratories, has been coupled with a finite element thermal code, COYOTE II, to predict temperature profiles in inductively heated parts. The development of an effective technique to account for the nonlinear behavior of the magnetic permeability in ferromagnetic parts is one of the more difficult aspects of solving induction heating problems. In the TORO II code, nonlinear, spatially varying magnetic permeability is approximated by an effective permeability on an element-by-element basis that effectively provides the same energy deposition that is produced when the true permeability is used. This approximation has been found to give an accurate estimate of the volumetric heating distribution in the part, and predicted temperature distributions have been experimentally verified using a medium carbon steel and a 10kW industrial induction heating unit. Work on the model was funded through a Cooperative Research and Development Agreement (CRADA) between the Department of Energy and General Motors' Delphi Saginaw Steering Systems.
Rhenium in seawater - Confirmation of generally conservative behavior
NASA Technical Reports Server (NTRS)
Anbar, A. D.; Creaser, R. A.; Papanastassiou, D. A.; Wasserburg, G. J.
1992-01-01
A depth profile of the concentration of Re was measured in the Pacific Ocean using a technique developed for the clean chemical separation and the precise measurement of Re by isotope dilution and negative thermal ionization mass spectrometry (ID-NTIMS). We obtain a narrow range for Re from 7.20 +/- 0.03 to 7.38 +/- 0.03 ng/kg for depths between 45 m and 4700 m. This demonstrates that Re is relatively well mixed throughout the water column and confirms the theoretical prediction that the behavior of Re in the oceans is conservative. When examined in detail, both salinity and the concentration of Re increase by approximately 1.5 percent between 400 and 4700 m, a correlation consistent with conservative behavior. However, Re appears to be depleted relative to salinity by 1.0-1.5 percent at 100 m, and enriched by approximately 4 percent at the surface. These observations suggest a minor level of Re scavenging in near surface waters, and an input of Re to the ocean surface. This work demonstrates the utility of geochemical investigations of certain trace elements not previously amenable to detailed study.
Model-checking techniques based on cumulative residuals.
Lin, D Y; Wei, L J; Ying, Z
2002-03-01
Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
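A minimal sketch of the cumulative-residual idea for a linear model, using sign-flipped (wild-bootstrap style) realizations as a simple stand-in for the zero-mean Gaussian process realizations described above; the data-generating model and sample sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.5, n)              # data match the fitted form
beta = np.polyfit(x, y, 1)
resid = y - np.polyval(beta, x)

# observed cumulative-residual process over the covariate, and its sup
w_obs = np.max(np.abs(np.cumsum(resid)))
# sign-flip realizations approximate the null distribution of the sup
sims = [np.max(np.abs(np.cumsum(resid * rng.choice([-1, 1], n)))) for _ in range(500)]
p_value = float(np.mean([s >= w_obs for s in sims]))     # small p suggests misspecification
```

A small p-value indicates that the observed cumulative-sum process is larger than natural variation under the assumed model would produce.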
Chen, Xing; Tume, Ron K; Xu, Xinglian; Zhou, Guanghong
2017-10-13
The qualitative characteristics of meat products are closely related to the functionality of muscle proteins. Myofibrillar proteins (MPs), comprising approximately 50% of total muscle proteins, are generally considered to be insoluble in solutions of low ionic strength (< 0.2 M), requiring high concentrations of salt (> 0.3 M) for solubilization. These soluble proteins determine many functional properties of meat products, including emulsification and thermal gelation. In order to increase the utilization of meat and meat products, many studies have investigated the solubilization of MPs in water or low ionic strength media and the determination of their functionality. However, there still remains a lack of systematic information on the functional properties of MPs solubilized in this manner. Hence, this review will explore some typical techniques that have been used. The main procedures used for their solubilization, the fundamental principles and their functionalities in water (low ionic strength medium) are comprehensively discussed. In addition, advantages and disadvantages of each technique are summarized. Finally, future considerations are presented to facilitate progress in this new area and to enable water soluble muscle MPs to be utilized as novel meat ingredients in the food industry.
Spike: Artificial intelligence scheduling for Hubble space telescope
NASA Technical Reports Server (NTRS)
Johnston, Mark; Miller, Glenn; Sponsler, Jeff; Vick, Shon; Jackson, Robert
1990-01-01
Efficient utilization of spacecraft resources is essential, but the accompanying scheduling problems are often computationally intractable and are difficult to approximate because of the presence of numerous interacting constraints. Artificial intelligence techniques were applied to the scheduling of the NASA/ESA Hubble Space Telescope (HST). This presents a particularly challenging problem since a yearlong observing program can contain some tens of thousands of exposures which are subject to a large number of scientific, operational, spacecraft, and environmental constraints. New techniques were developed for machine reasoning about scheduling constraints and goals, especially in cases where uncertainty is an important scheduling consideration and where resolving conflicting preferences is essential. These techniques were utilized in a set of workstation-based scheduling tools (Spike) for HST. Graphical displays of activities, constraints, and schedules are an important feature of the system. High level scheduling strategies using both rule based and neural network approaches were developed. While the specific constraints implemented are those most relevant to HST, the framework developed is far more general and could easily handle other kinds of scheduling problems. The concept and implementation of the Spike system are described along with some experiments in adapting Spike to other spacecraft scheduling domains.
Sakai, Yusuke; Koike, Makiko; Hasegawa, Hideko; Yamanouchi, Kosho; Soyama, Akihiko; Takatsuki, Mitsuhisa; Kuroki, Tamotsu; Ohashi, Kazuo; Okano, Teruo; Eguchi, Susumu
2013-01-01
Cell sheet engineering is attracting attention from investigators in various fields, from basic research scientists to clinicians focused on regenerative medicine. However, hepatocytes have a limited proliferation potential in vitro, and it generally takes several days to form a sheet morphology and multi-layered sheets. We herein report our rapid and efficient technique for generating multi-layered human hepatic cell (HepaRG® cell) sheets using pre-cultured fibroblast monolayers derived from human skin (TIG-118 cells) as a feeder layer on a temperature-responsive culture dish. Multi-layered TIG-118/HepaRG cell sheets with a thick morphology were harvested on day 4 of culturing HepaRG cells by forceful contraction of the TIG-118 cells, and the resulting sheet could be easily handled. In addition, the human albumin and alpha 1-antitrypsin synthesis activities of TIG-118/HepaRG cells were approximately 1.2 and 1.3 times higher than those of HepaRG cells, respectively. Therefore, this technique is considered to be a promising modality for rapidly fabricating multi-layered human hepatocyte sheets from cells with limited proliferation potential, and the engineered cell sheet could be used for cell transplantation with highly specific functions.
NASA Astrophysics Data System (ADS)
Allphin, Devin
Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing computational resources for other uses. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine.
These forces were determined using parallel surrogate and closed-form approximation methods, thus demonstrating the comparative benefits of the technique. For the offline approximation, Latin hypercube sampling (LHS) was used for design-space filling across four independent design-variable degrees of freedom (DOF). Flow solutions at the mapped test sites were converged using STAR-CCM+, with aerodynamic forces from the CFD models then functionally approximated using Kriging interpolation. For the closed-form approximation, the problem was interpreted as an ideal 2-D converging-diverging (C-D) nozzle, where aerodynamic forces were directly mapped by application of the Euler equation solutions for isentropic compression/expansion. A cost-weighting procedure was finally established for creating model-selective discretionary logic, with a synthesized parallel simulation resource summary provided.
Estimation of under-reporting in epidemics using approximations.
Gamado, Kokouvi; Streftaris, George; Zachary, Stan
2017-06-01
Under-reporting in epidemics, when it is ignored, leads to under-estimation of the infection rate and therefore of the reproduction number. In the case of stochastic models with temporal data, a usual approach for dealing with such issues is to apply data augmentation techniques through Bayesian methodology. Departing from earlier literature approaches implemented using reversible jump Markov chain Monte Carlo (RJMCMC) techniques, we make use of approximations to obtain faster estimation with simple MCMC. Comparisons among the methods developed here, and with the RJMCMC approach, are carried out and highlight that approximation-based methodology offers useful alternative inference tools for large epidemics, with a good trade-off between time cost and accuracy.
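A toy version of inference under under-reporting, far simpler than the epidemic models above and purely illustrative: true daily counts are Poisson, each case is reported with probability p, and a random-walk Metropolis chain recovers p (the rates and chain settings are assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, p_true, days = 30.0, 0.6, 200
true_cases = rng.poisson(lam, days)
observed = rng.binomial(true_cases, p_true)              # thinned (under-reported) counts

def log_post(p):
    # thinning a Poisson(lam) count gives Poisson(lam * p) observations
    if not 0.0 < p < 1.0:
        return -np.inf
    return np.sum(observed * np.log(lam * p) - lam * p)  # flat prior on (0, 1)

p, chain = 0.5, []
for _ in range(5000):
    prop = p + rng.normal(0.0, 0.05)                     # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(p):
        p = prop
    chain.append(p)
p_est = float(np.mean(chain[1000:]))
```

The posterior mean of the reporting probability concentrates near the true value as the observation window grows.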
Locating CVBEM collocation points for steady state heat transfer problems
Hromadka, T.V.
1985-01-01
The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst. © 1985.
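The CVBEM itself uses complex-variable boundary elements; as a hedged stand-in for the general idea, the following collocates boundary data for a harmonic function on the unit disk against the harmonic basis Re(z^k) and measures the boundary error that would guide placement of additional collocation points (the boundary data and basis size are illustrative assumptions):

```python
import numpy as np

n_pts, n_terms = 40, 8
theta = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
z = np.exp(1j * theta)                                   # collocation points on |z| = 1
g = np.exp(np.cos(theta)) * np.cos(np.sin(theta))        # boundary data: Re(e^z)
basis = np.real(z[:, None] ** np.arange(n_terms))        # harmonic basis Re(z^k)
coef, *_ = np.linalg.lstsq(basis, g, rcond=None)

# error of the approximation on a finer boundary grid, which would guide
# where to place additional collocation points
ang = np.linspace(0.0, 2.0 * np.pi, 400)
fine = np.exp(1j * ang)
target = np.exp(np.cos(ang)) * np.cos(np.sin(ang))
max_err = np.max(np.abs(np.real(fine[:, None] ** np.arange(n_terms)) @ coef - target))
```

Because each basis function is exactly harmonic, the interior solution inherits the boundary accuracy, mirroring the CVBEM property that the field equation is satisfied exactly.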
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Koushik; Jawulski, Konrad; Pastorczak, Ewa
A perfect-pairing generalized valence bond (GVB) approximation is known to be one of the simplest approximations, which allows one to capture the essence of static correlation in molecular systems. In spite of its attractive feature of being relatively computationally efficient, this approximation misses a large portion of dynamic correlation and does not offer sufficient accuracy to be generally useful for studying electronic structure of molecules. We propose to correct the GVB model and alleviate some of its deficiencies by amending it with the correlation energy correction derived from the recently formulated extended random phase approximation (ERPA). On the examples of systems of diverse electronic structures, we show that the resulting ERPA-GVB method greatly improves upon the GVB model. ERPA-GVB recovers most of the electron correlation and it yields energy barrier heights of excellent accuracy. Thanks to a balanced treatment of static and dynamic correlation, ERPA-GVB stays reliable when one moves from systems dominated by dynamic electron correlation to those for which the static correlation comes into play.
Buto, Susan G.; Gold, Brittany L.; Jones, Kimberly A.
2014-01-01
Irrigation in arid environments can alter the natural rate at which salts are dissolved and transported to streams. Irrigated agricultural lands are the major anthropogenic source of dissolved solids in the Upper Colorado River Basin (UCRB). Understanding the location, spatial distribution, and irrigation status of agricultural lands and the method used to deliver water to agricultural lands are important to help improve the understanding of agriculturally derived dissolved-solids loading to surface water in the UCRB. Irrigation status is the presence or absence of irrigation on an agricultural field during the selected growing season or seasons. Irrigation method is the system used to irrigate a field. Irrigation method can broadly be grouped into sprinkler or flood methods, although other techniques such as drip irrigation are used in the UCRB. Flood irrigation generally causes greater dissolved-solids loading to streams than sprinkler irrigation. Agricultural lands in the UCRB mapped by state agencies at varying spatial and temporal resolutions were assembled and edited to represent conditions in the UCRB between 2007 and 2010. Edits were based on examination of 1-meter resolution aerial imagery collected between 2009 and 2011. Remote sensing classification techniques were used to classify irrigation status for the June to September growing seasons between 2007 and 2010. The final dataset contains polygons representing approximately 1,759,900 acres of agricultural lands in the UCRB. Approximately 66 percent of the mapped agricultural lands were likely irrigated during the study period.
Wang, Zhen; Li, Ru; Yu, Guolin
2017-01-01
In this work, several extended approximately invex vector-valued functions of higher order involving a generalized Jacobian are introduced, and some examples are presented to illustrate their existence. The notions of higher-order (weak) quasi-efficiency with respect to a function are proposed for a multi-objective programming. Under the introduced generalization of higher-order approximate invexities assumptions, we prove that the solutions of generalized vector variational-like inequalities in terms of the generalized Jacobian are the generalized quasi-efficient solutions of nonsmooth multi-objective programming problems. Moreover, the equivalent conditions are presented, namely, a vector critical point is a weakly quasi-efficient solution of higher order with respect to a function.
Transformation of general binary MRF minimization to the first-order case.
Ishikawa, Hiroshi
2011-06-01
We introduce a transformation of general higher-order Markov random field with binary labels into a first-order one that has the same minima as the original. Moreover, we formalize a framework for approximately minimizing higher-order multi-label MRF energies that combines the new reduction with the fusion-move and QPBO algorithms. While many computer vision problems today are formulated as energy minimization problems, they have mostly been limited to using first-order energies, which consist of unary and pairwise clique potentials, with a few exceptions that consider triples. This is because of the lack of efficient algorithms to optimize energies with higher-order interactions. Our algorithm challenges this restriction that limits the representational power of the models so that higher-order energies can be used to capture the rich statistics of natural scenes. We also show that some minimization methods can be considered special cases of the present framework, and we compare the new method experimentally with other such techniques.
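One standard ingredient of such reductions (a classic special case, not the paper's full construction) replaces a negative third-order term with a first-order expression over one auxiliary binary variable w, verified here by brute force over all binary assignments:

```python
from itertools import product

def reduced_min(x, y, z):
    # min over the auxiliary variable w of the first-order expression
    return min(-w * (x + y + z - 2) for w in (0, 1))

# verify  -x*y*z == min_w -w*(x + y + z - 2)  over all binary assignments
ok = all(-x * y * z == reduced_min(x, y, z) for x, y, z in product((0, 1), repeat=3))
```

After the substitution, the energy contains only unary and pairwise terms in (x, y, z, w), so standard first-order graph-cut machinery applies.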
Selection of regularization parameter in total variation image restoration.
Liao, Haiyong; Li, Fang; Ng, Michael K
2009-11-01
We consider and study total variation (TV) image restoration. In the literature there are several regularization parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with an automatic selection of the regularization parameter scheme to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results for testing different kinds of noise show that the visual quality and SNRs of images restored by the proposed method are promising. We also demonstrate that the method is efficient, as it can restore images of size 256 x 256 in approximately 20 s in the MATLAB computing environment.
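A minimal sketch of GCV-based parameter selection for the simpler Tikhonov (quadratic) case in 1-D with a known circulant blur; the TV setting of the paper is analogous but nonquadratic, and the blur, noise level, and search grid here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
signal = np.zeros(n); signal[20:40] = 1.0
kernel = np.zeros(n); kernel[:3] = 1.0 / 3.0             # circulant moving-average blur
h = np.fft.fft(kernel)
blurred = np.real(np.fft.ifft(h * np.fft.fft(signal))) + rng.normal(0, 0.02, n)

def gcv(alpha):
    # Tikhonov filter factors in the Fourier domain; GCV = ||resid||^2 / trace(I - A)^2
    f = np.abs(h) ** 2 / (np.abs(h) ** 2 + alpha)
    resid = np.real(np.fft.ifft((1.0 - f) * np.fft.fft(blurred)))
    return np.sum(resid ** 2) / (n - np.sum(f)) ** 2

alphas = 10.0 ** np.linspace(-6, 0, 61)
best = alphas[np.argmin([gcv(a) for a in alphas])]
restored = np.real(np.fft.ifft(np.conj(h) * np.fft.fft(blurred) / (np.abs(h) ** 2 + best)))
```

The key point, shared with the paper's scheme, is that the GCV score is cheap to evaluate per candidate parameter, so the selection adds little cost to each restoration step.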
Prabhu, Neeta T; Nunn, June H; Evans, D J; Girdler, N M
2010-01-01
The goal of this study was to elicit the views of patients or parents/caregivers of patients with disabilities regarding access to dental care. A questionnaire was generated both from interviews with patients/parents/caregivers already treated under sedation or general anesthesia as well as by use of the Delphi technique with other stakeholders. One hundred thirteen patients from across six community dental clinics and one dental hospital were included. Approximately 38% of the subjects used a general dental practitioner and 35% used the community dental service for their dental care, with only 27% using the hospital dental services. Overall waiting time for an appointment at the secondary care setting was longer than for the primary care clinics. There was a high rate of parent/caregiver satisfaction with dental services and only five patients reported any difficulty with travel and access to clinics. This study highlights the need for a greater investment in education and training to improve skills in the primary dental care sector.
Jana, Subrata; Samal, Prasanjit
2017-06-29
Semilocal density functionals for the exchange-correlation energy of electrons are extensively used as they produce realistic and accurate results for finite and extended systems. The choice of techniques plays a crucial role in constructing such functionals of improved accuracy and efficiency. An accurate and efficient semilocal exchange energy functional in two dimensions is constructed by making use of the corresponding hole, which is derived based on the density matrix expansion. The exchange hole involved is localized under the generalized coordinate transformation and satisfies all the relevant constraints. Comprehensive testing demonstrates the excellent performance of the functional versus exact exchange results. The accuracy of results obtained by using the newly constructed functional is quite remarkable as it substantially reduces the errors present in the local and nonempirical exchange functionals proposed so far for two-dimensional quantum systems. The underlying principles involved in the functional construction are physically appealing and hold promise for developing range separated and nonlocal exchange functionals in two dimensions.
Superconducting transition temperature of a boron nitride layer with a high niobium coverage.
NASA Astrophysics Data System (ADS)
Vazquez, Gerardo; Magana, Fernando
We explore the possibility of inducing superconductivity in a boron nitride (BN) sheet by doping its surface with Nb atoms sitting at the centers of the hexagons. We used first-principles density functional theory in the generalized gradient approximation. The Quantum ESPRESSO package was used with norm-conserving pseudopotentials. The structure considered was relaxed to its minimum energy configuration. Phonon frequencies were calculated using the linear-response technique on several phonon wave-vector meshes. The electron-phonon coupling parameter was calculated for a number of k meshes. The superconducting critical temperature was estimated using the Allen-Dynes formula with μ* = 0.1 - 0.15. We note that Nb is a good candidate material to show a superconducting transition for the BN-metal system. We thank the Dirección General de Asuntos del Personal Académico de la Universidad Nacional Autónoma de México for partial financial support through Grant IN-106514, and we also thank the Miztli supercomputing center for technical assistance.
Curvature perturbations in the early universe: Theoretical models and observational tests
NASA Astrophysics Data System (ADS)
Vallinotto, Alberto
A very general prediction of inflation is that the power spectrum of density perturbations is characterized by a spectral index ns which is scale independent and approximately equal to unity. Drawing from the potential reconstruction method and adopting the slow-roll parameter expansion technique, we derive all possible single-field inflationary potentials that would lead to a scale-invariant density spectral index, consistent with current observations. In the process, a new method to determine the functional form of the inflationary potential in the slow-roll approximation is devised, based on the reparametrization of the field dynamics with respect to the slow-roll parameter epsilon; this also allowed us to show that, under the assumptions made, the investigation is exhaustive and no other solutions are available. Next, we focus on the fact that there exists a large class of inflationary models currently ruled out because the predicted production of curvature perturbations during the slow-roll stage is exponentially suppressed. We investigate whether an alternative mechanism for the generation of curvature perturbations can be devised for such a class of models. In the process, it is shown that it is sufficient for the inflationary potential to exhibit a broken symmetry to successfully convert isocurvature perturbations, which are excited during the slow-roll stage, into curvature perturbations through an inhomogeneous decay stage. This conclusion is general, requiring as a sufficient condition only that the inflationary potential is characterized by a broken symmetry. Finally, we show that the perturbations thus produced are generally characterized by a non-negligible degree of non-gaussianity, which provides a clear signature for experimental detection or rejection.
Al-Shayyab, Mohammad H; Ryalat, Soukaina; Dar-odeh, Najla; Alsoleihat, Firas
2013-01-01
Purpose: The study reported here aimed to identify current sedation practice among general dental practitioners (GDPs) and specialist dental practitioners (SDPs) in Jordan in 2010. Methods: Questionnaires were sent by email to 1683 GDPs and SDPs who were working in Jordan at the time of the study. The contact details of these dental practitioners were obtained from a Jordan Dental Association list. Details on personal status and the use of, and training in, conscious sedation techniques were sought by the questionnaires. Results: A total of 1003 (60%) questionnaires were returned, with 748 (86.9%) GDPs and 113 (13.1%) SDPs responding. Only ten (1.3%) GDPs and 63 (55.8%) SDPs provided information on the different types of treatments related to their specialties undertaken under some form of sedation performed by specialist and/or assistant anesthetists. Approximately 0.075% of the Jordanian population received some form of sedation during the year 2010, with approximately 0.054% having been treated by oral and maxillofacial surgeons. The main reason for the majority of GDPs (55.0%) and many SDPs (40%) not to perform sedation was lack of training in this field, while some SDPs (26.0%) indicated they did not use sedation because of inadequate sedation facilities. Conclusion: Within the limitations of the present study, it can be concluded that the provision of conscious sedation services in general and specialist dental practices in Jordan is inconsistent and inadequate. This stresses the great need to train practitioners and dental assistants in Jordan to enable them to safely and effectively perform all forms of sedation. PMID:23700369
Symmetric rotating-wave approximation for the generalized single-mode spin-boson system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert, Victor V.; Scholes, Gregory D.; Brumer, Paul
2011-10-15
The single-mode spin-boson model exhibits behavior not included in the rotating-wave approximation (RWA) in the ultra and deep-strong coupling regimes, where counter-rotating contributions become important. We introduce a symmetric rotating-wave approximation that treats rotating and counter-rotating terms equally, preserves the invariances of the Hamiltonian with respect to its parameters, and reproduces several qualitative features of the spin-boson spectrum not present in the original rotating-wave approximation both off-resonance and at deep-strong coupling. The symmetric rotating-wave approximation allows for the treatment of certain ultra- and deep-strong coupling regimes with similar accuracy and mathematical simplicity as does the RWA in the weak-coupling regime. Additionally, we symmetrize the generalized form of the rotating-wave approximation to obtain the same qualitative correspondence with the addition of improved quantitative agreement with the exact numerical results. The method is readily extended to higher accuracy if needed. Finally, we introduce the two-photon parity operator for the two-photon Rabi Hamiltonian and obtain its generalized symmetric rotating-wave approximation. The existence of this operator reveals a parity symmetry similar to that in the Rabi Hamiltonian as well as another symmetry that is unique to the two-photon case, providing insight into the mathematical structure of the two-photon spectrum, significantly simplifying the numerics, and revealing some interesting dynamical properties.
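The failure of the ordinary RWA at strong coupling can be seen numerically. A hedged sketch (not the authors' symmetric RWA): diagonalize a truncated single-mode Rabi Hamiltonian and its Jaynes-Cummings/RWA truncation and compare ground energies, with illustrative parameters in the strong-coupling regime:

```python
import numpy as np

n = 30                                                   # boson-space truncation
a = np.diag(np.sqrt(np.arange(1, n)), 1)                 # annihilation operator
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])                  # sigma_+
sm = sp.T                                                # sigma_-
ib, ispin = np.eye(n), np.eye(2)

omega, delta, g = 1.0, 1.0, 0.8                          # illustrative strong coupling
h0 = omega * np.kron(a.T @ a, ispin) + 0.5 * delta * np.kron(ib, sz)
h_rabi = h0 + g * np.kron(a + a.T, sx)                   # keeps counter-rotating terms
h_rwa = h0 + g * (np.kron(a, sp) + np.kron(a.T, sm))     # ordinary RWA (Jaynes-Cummings)

gap = np.linalg.eigvalsh(h_rwa).min() - np.linalg.eigvalsh(h_rabi).min()
```

The counter-rotating terms lower the true ground energy, so the RWA ground energy sits visibly above it at this coupling, which is precisely the discrepancy a symmetrized treatment aims to reduce.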
Dynamics of polymers: A mean-field theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fredrickson, Glenn H.; Materials Research Laboratory, University of California, Santa Barbara, California 93106; Department of Materials, University of California, Santa Barbara, California 93106
2014-02-28
We derive a general mean-field theory of inhomogeneous polymer dynamics; a theory whose form has been conjectured and widely applied, but not heretofore derived. Our approach involves a functional integral representation of a Martin-Siggia-Rose (MSR) type description of the exact many-chain dynamics. A saddle point approximation to the generating functional, involving conditions where the MSR action is stationary with respect to a collective density field ρ and a conjugate MSR response field ϕ, produces the desired dynamical mean-field theory. Besides clarifying the proper structure of mean-field theory out of equilibrium, our results have implications for numerical studies of polymer dynamics involving hybrid particle-field simulation techniques such as the single-chain in mean-field method.
A methodology for commonality analysis, with applications to selected space station systems
NASA Technical Reports Server (NTRS)
Thomas, Lawrence Dale
1989-01-01
The application of commonality in a system represents an attempt to reduce costs by reducing the number of unique components. A formal method for conducting commonality analysis has not been established. In this dissertation, commonality analysis is characterized as a partitioning problem. The cost impacts of commonality are quantified in an objective function, and the solution is that partition which minimizes this objective function. Clustering techniques are used to approximate a solution, and sufficient conditions are developed which can be used to verify the optimality of the solution. This method for commonality analysis is general in scope. It may be applied to the various types of commonality analysis required in the conceptual, preliminary, and detail design phases of the system development cycle.
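The partitioning formulation can be sketched in a few lines. The cost model and the greedy merging rule below are illustrative assumptions, not the dissertation's actual objective function: each cluster of components is served by one common design sized to its most demanding member, so the objective trades a fixed development cost per unique design against an over-design penalty.

```python
# Hypothetical cost model for commonality-as-partitioning (an assumption for
# illustration): req[i] is the requirement level of component i; a cluster's
# common design must meet the peak requirement among its members.

def partition_cost(partition, req, dev_cost=10.0, unit_penalty=1.0):
    """Fixed development cost per cluster plus the cost of over-design."""
    cost = 0.0
    for cluster in partition:
        peak = max(req[i] for i in cluster)   # common design meets the peak requirement
        cost += dev_cost + unit_penalty * sum(peak - req[i] for i in cluster)
    return cost

def greedy_commonality(req):
    """Agglomerative clustering: repeatedly merge the pair of clusters that
    most reduces the objective; stop when no merge helps."""
    partition = [[i] for i in range(len(req))]
    while True:
        base = partition_cost(partition, req)
        best = None
        for a in range(len(partition)):
            for b in range(a + 1, len(partition)):
                trial = [c for k, c in enumerate(partition) if k not in (a, b)]
                trial.append(partition[a] + partition[b])
                delta = partition_cost(trial, req) - base
                if delta < 0 and (best is None or delta < best[0]):
                    best = (delta, trial)
        if best is None:
            return partition              # no merge lowers the objective
        partition = best[1]
```

A greedy merge only approximates the minimizing partition; in the spirit of the abstract, sufficient conditions would then be checked to verify whether the resulting partition is in fact optimal.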
Space measurements of tropospheric aerosols
NASA Technical Reports Server (NTRS)
Griggs, M.
1981-01-01
A global-scale ground-truth experiment was conducted in the summer of 1980 with the AVHRR sensor on NOAA-6 to investigate the relationship between the upwelling visible radiance and the aerosol optical thickness over oceans at different sites around the globe. The possibility of using inland bodies of water such as rivers, lakes and reservoirs has been recently investigated using the Landsat MSS7 (approximately 0.9 micron) channel. This upwelling near-infrared radiance is less influenced than the visible radiance by the suspended matter generally found in the inland bodies of water, and by the adjacency effect of the surrounding higher albedo land. It is found that the water turbidity has more influence than the adjacency effect and reduces the effectiveness of the technique for inland observations.
A method for reducing the order of nonlinear dynamic systems
NASA Astrophysics Data System (ADS)
Masri, S. F.; Miller, R. K.; Sassi, H.; Caughey, T. K.
1984-06-01
An approximate method is presented for reducing the order of discrete multidegree-of-freedom dynamic systems that possess arbitrary nonlinear characteristics; it combines conventional condensation techniques for linear systems with nonparametric identification of the generalized nonlinear restoring forces of the reduced-order model. The utility of the proposed method is demonstrated by considering a redundant three-dimensional finite-element model, half of whose elements incorporate hysteretic properties. A nonlinear reduced-order model, of one-third the order of the original model, is developed on the basis of wideband stationary random excitation, and the validity of the reduced-order model is subsequently demonstrated by its ability to predict with adequate accuracy the transient response of the original nonlinear model under a different nonstationary random excitation.
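The linear condensation step can be illustrated with Guyan (static) condensation, one conventional choice; this is a sketch of the general idea, not the paper's specific reduction, and the nonparametric identification of the nonlinear restoring forces is not shown. Slave degrees of freedom are eliminated by assuming they carry no applied load.

```python
import numpy as np

# Guyan (static) condensation: partition the DOFs into masters (kept) and
# slaves (eliminated), and enforce x_slave = -Kss^{-1} Ksm x_master.

def guyan_reduce(K, M, masters):
    """Condense stiffness K and mass M onto the master DOFs."""
    n = K.shape[0]
    slaves = [i for i in range(n) if i not in masters]
    order = list(masters) + slaves
    Kp = K[np.ix_(order, order)]          # reorder as [masters; slaves]
    Mp = M[np.ix_(order, order)]
    m = len(masters)
    # Transformation from master DOFs to the full (reordered) DOF vector
    T = np.vstack([np.eye(m), -np.linalg.solve(Kp[m:, m:], Kp[m:, :m])])
    return T.T @ Kp @ T, T.T @ Mp @ T
```

For example, condensing the middle DOF of a fixed-fixed three-spring chain with unit stiffnesses and unit masses yields a 2x2 reduced system whose stiffness couples the two retained end masses.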
Real time estimation and prediction of ship motions using Kalman filtering techniques
NASA Technical Reports Server (NTRS)
Triantafyllou, M. A.; Bodson, M.; Athans, M.
1982-01-01
A scheme for landing V/STOL aircraft on rolling ships was sought using computerized simulations. The equations of motion derived from hydrodynamics, their form, the physical mechanisms involved, and the general form of the approximation are discussed, as is the modeling of the sea. The derivation of the state-space equations for the DD-963 destroyer is described. Kalman filter studies are presented and the influence of the various parameters is assessed. The effect of various modeling parameters on the rms error is assessed and simplifying conclusions are drawn. An upper bound for prediction time of about five seconds is established, with the exception of roll, which can be predicted up to ten seconds ahead.
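The estimation-then-prediction idea can be sketched generically. The state-space model below is a stand-in oscillatory motion channel, not the DD-963 model: a Kalman filter assimilates noisy measurements, and the filtered state is then propagated forward with no further measurements to obtain an n-step-ahead prediction.

```python
import numpy as np

# Minimal discrete-time Kalman filter (illustrative matrices, not the paper's).

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the discrete Kalman filter."""
    x = F @ x                         # time update (predict)
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)           # measurement update
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def predict_ahead(x, F, n):
    """Propagate the filtered state n steps ahead with no measurements."""
    for _ in range(n):
        x = F @ x
    return x
```

With a harmonic-oscillator transition matrix and position-only measurements of a roll-like sinusoid, the filter converges to the true state, after which `predict_ahead` gives the short-horizon motion prediction discussed in the abstract.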
Implicit solvers for unstructured meshes
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.; Mavriplis, Dimitri J.
1991-01-01
Implicit methods were developed and tested for unstructured mesh computations. The approximate linear system that arises from the Newton linearization of the nonlinear evolution operator is solved by using the preconditioned GMRES (Generalized Minimum Residual) technique. Three different preconditioners were studied, namely, the incomplete LU factorization (ILU), block diagonal factorization, and the symmetric successive over-relaxation (SSOR). The preconditioners were optimized to have good vectorization properties. SSOR and ILU were also studied as iterative schemes. The various methods are compared over a wide range of problems. Ordering of the unknowns, which affects the convergence of these sparse matrix iterative methods, is also studied. Results are presented for inviscid and turbulent viscous calculations on single and multielement airfoil configurations using globally and adaptively generated meshes.
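The solution strategy can be sketched with SciPy (assumed available): the linearized system A x = b from a Newton step is solved with GMRES preconditioned by an incomplete LU factorization. The matrix here is a simple nonsymmetric convection-diffusion stencil standing in for an unstructured-mesh Jacobian, not the paper's flow operators.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in linear system: 1-D convection-diffusion tridiagonal operator.
n = 200
main = 2.0 * np.ones(n)
lower = -1.0 * np.ones(n - 1)
upper = 0.5 * np.ones(n - 1)
A = sp.diags([lower, main, upper], [-1, 0, 1], format="csc")
b = np.ones(n)

# Incomplete LU factors used as a preconditioner M ~ A^{-1}.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

# Restarted, preconditioned GMRES.
x, info = spla.gmres(A, b, M=M, restart=30)
```

A good preconditioner collapses the GMRES iteration count; with exact LU factors the solver would converge in one iteration, and the `drop_tol` knob trades factorization cost against that convergence rate.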
Hybrid and Constrained Resolution-of-Identity Techniques for Coulomb Integrals.
Duchemin, Ivan; Li, Jing; Blase, Xavier
2017-03-14
The introduction of auxiliary bases to approximate molecular orbital products has paved the way to significant savings in the evaluation of four-center two-electron Coulomb integrals. We present a generalized dual space strategy that sheds a new light on variants over the standard density and Coulomb-fitting schemes, including the possibility of introducing minimization constraints. We improve in particular the charge- or multipole-preserving strategies introduced respectively by Baerends and Van Alsenoy that we compare to a simple scheme where the Coulomb metric is used for lowest angular momentum auxiliary orbitals only. We explore the merits of these approaches on the basis of extensive Hartree-Fock and MP2 calculations over a standard set of medium size molecules.
Biologically inspired intelligent decision making
Manning, Timmy; Sleator, Roy D; Walsh, Paul
2014-01-01
Artificial neural networks (ANNs) are a class of powerful machine learning models for classification and function approximation which have analogs in nature. An ANN learns to map stimuli to responses through repeated evaluation of exemplars of the mapping. This learning approach results in networks which are recognized for their noise tolerance and ability to generalize meaningful responses for novel stimuli. It is these properties of ANNs which make them appealing for applications to bioinformatics problems where interpretation of data may not always be obvious, and where the domain knowledge required for deductive techniques is incomplete or can cause a combinatorial explosion of rules. In this paper, we provide an introduction to artificial neural network theory and review some interesting recent applications to bioinformatics problems. PMID:24335433
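The learning approach described above can be made concrete with a minimal sketch (not from the paper): a one-hidden-layer network trained by gradient descent through repeated evaluation of exemplars, here the XOR mapping, the classic stimulus-response pattern that a single-layer model cannot learn.

```python
import numpy as np

# Tiny feedforward ANN trained by backpropagation on XOR (illustrative only).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # stimuli
y = np.array([[0], [1], [1], [0]], dtype=float)               # target responses

W1, b1 = rng.normal(0.0, 1.0, (2, 8)), np.zeros(8)            # hidden layer
W2, b2 = rng.normal(0.0, 1.0, (8, 1)), np.zeros(1)            # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                  # repeated evaluation of the exemplars
    h = np.tanh(X @ W1 + b1)            # hidden-layer activations
    out = sigmoid(h @ W2 + b2)          # network response
    d2 = out - y                        # output error (sigmoid + cross-entropy)
    d1 = (d2 @ W2.T) * (1.0 - h**2)     # error backpropagated through tanh
    W2 -= 0.1 * h.T @ d2; b2 -= 0.1 * d2.sum(axis=0)
    W1 -= 0.1 * X.T @ d1; b1 -= 0.1 * d1.sum(axis=0)
```

After training, the network's thresholded responses reproduce the exemplar mapping; the noise tolerance and generalization properties mentioned above come from querying such a trained network on stimuli it never saw.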
Plasma observations near Saturn: initial results from Voyager 2.
Bridge, H S; Bagenal, F; Belcher, J W; Lazarus, A J; McNutt, R L; Sullivan, J D; Gazis, P R; Hartle, R E; Ogilvie, K W; Scudder, J D; Sittler, E C; Eviatar, A; Siscoe, G L; Goertz, C K; Vasyliunas, V M
1982-01-29
Results of measurements of plasma electrons and positive ions made during the Voyager 2 encounter with Saturn have been combined with measurements from Voyager 1 and Pioneer 11 to define more clearly the configuration of plasma in the Saturnian magnetosphere. The general morphology is well represented by four regions: (i) the shocked solar wind plasma in the magnetosheath, observed between about 30 and 22 Saturn radii (RS) near the noon meridian; (ii) a variable density region between approximately 17 RS and the magnetopause; (iii) an extended thick plasma sheet between approximately 17 and approximately 7 RS symmetrical with respect to Saturn's equatorial plane and rotation axis; and (iv) an inner plasma torus that probably originates from local sources and extends inward from L approximately 7 to less than L approximately 2.7 (L is the magnetic shell parameter). In general, the heavy ions, probably O(+), are more closely confined to the equatorial plane than H(+), so that the ratio of heavy to light ions varies along the trajectory according to the distance of the spacecraft from the equatorial plane. The general configuration of the plasma sheet at Saturn found by Voyager 1 is confirmed, with some notable differences and additions. The "extended plasma sheet," observed between L approximately 7 and L approximately 15 by Voyager 1, is considerably thicker as observed by Voyager 2. Inward of L approximately 4, the plasma sheet collapses to a thin region about the equatorial plane. At the ring plane crossing, L approximately 2.7, the observations are consistent with a density of O(+) of approximately 100 per cubic centimeter, with a temperature of approximately 10 electron volts. The locations of the bow shock and magnetopause crossings were consistent with those previously observed.
The entire magnetosphere was larger during the outbound passage of Voyager 2 than had been previously observed; however, a magnetosphere of this size or larger is expected approximately 3 percent of the time.
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Rosen, I. G.
1988-01-01
In controlling distributed parameter systems it is often desirable to obtain low-order, finite-dimensional controllers in order to minimize real-time computational requirements. Standard approaches to this problem employ model/controller reduction techniques in conjunction with LQG theory. In this paper we consider the finite-dimensional approximation of the infinite-dimensional Bernstein/Hyland optimal projection theory. This approach yields fixed-finite-order controllers which are optimal with respect to high-order, approximating, finite-dimensional plant models. The technique is illustrated by computing a sequence of first-order controllers for one-dimensional, single-input/single-output, parabolic (heat/diffusion) and hereditary systems using spline-based, Ritz-Galerkin, finite element approximation. Numerical studies indicate convergence of the feedback gains with less than 2 percent performance degradation over full-order LQG controllers for the parabolic system and 10 percent degradation for the hereditary system.
Recent developments in LIBXC - A comprehensive library of functionals for density functional theory
NASA Astrophysics Data System (ADS)
Lehtola, Susi; Steigemann, Conrad; Oliveira, Micael J. T.; Marques, Miguel A. L.
2018-01-01
LIBXC is a library of exchange-correlation functionals for density-functional theory. We are concerned with semi-local functionals (or the semi-local part of hybrid functionals), namely local-density approximations, generalized-gradient approximations, and meta-generalized-gradient approximations. Currently we include around 400 functionals for the exchange, correlation, and the kinetic energy, spanning more than 50 years of research. Moreover, LIBXC is by now used by more than 20 codes, not only from the atomic, molecular, and solid-state physics communities, but also from quantum chemistry.
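As a minimal illustration of what one such semi-local functional looks like (a textbook sketch, not LIBXC's implementation): the spin-unpolarized Slater (LDA) exchange energy per particle is e_x(ρ) = -(3/4)(3/π)^(1/3) ρ^(1/3), so the exchange energy density is ρ·e_x(ρ). A functional library tabulates hundreds of such expressions, together with their derivatives, behind one interface.

```python
import numpy as np

# Slater LDA exchange (spin-unpolarized), the simplest local-density functional.
C_X = -0.75 * (3.0 / np.pi) ** (1.0 / 3.0)

def lda_exchange_energy_density(rho):
    """Exchange energy per unit volume, rho * e_x(rho) = C_X * rho^(4/3)."""
    rho = np.asarray(rho, dtype=float)
    return C_X * rho ** (4.0 / 3.0)

def lda_exchange_potential(rho):
    """Functional derivative v_x = d(rho e_x)/d rho = (4/3) C_X rho^(1/3)."""
    rho = np.asarray(rho, dtype=float)
    return (4.0 / 3.0) * C_X * rho ** (1.0 / 3.0)
```

GGA and meta-GGA functionals extend this pattern by letting the integrand depend also on the density gradient and on the kinetic-energy density, respectively.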
NASA Astrophysics Data System (ADS)
Weinberg, Steven
2015-09-01
Preface; Notation; 1. Historical introduction; 2. Particle states in a central potential; 3. General principles of quantum mechanics; 4. Spin; 5. Approximations for energy eigenstates; 6. Approximations for time-dependent problems; 7. Potential scattering; 8. General scattering theory; 9. The canonical formalism; 10. Charged particles in electromagnetic fields; 11. The quantum theory of radiation; 12. Entanglement; Author index; Subject index.
Arbitrary-level hanging nodes for adaptive hp-FEM approximations in 3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavel Kus; Pavel Solin; David Andrs
2014-11-01
In this paper we discuss constrained approximation with arbitrary-level hanging nodes in adaptive higher-order finite element methods (hp-FEM) for three-dimensional problems. This technique enables the use of highly irregular meshes, and it greatly simplifies the design of adaptive algorithms as it prevents refinements from propagating recursively through the finite element mesh. The technique makes it possible to design efficient adaptive algorithms for purely hexahedral meshes. We present a detailed mathematical description of the method and illustrate it with numerical examples.
NASA Technical Reports Server (NTRS)
Sidery, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.;
2014-01-01
The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided into two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self consistency of parameter estimation methods and carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general, and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with a total mass of 20 solar masses or less and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor of approximately 20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor of approximately 1000 longer processing time.
NASA Astrophysics Data System (ADS)
Sidery, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.; Kalogera, V.; Mandel, I.; O'Shaughnessy, R.; Pitkin, M.; Price, L.; Raymond, V.; Röver, C.; Singer, L.; van der Sluys, M.; Smith, R. J. E.; Vecchio, A.; Veitch, J.; Vitale, S.
2014-04-01
The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided into two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self consistency of parameter estimation methods and carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general, and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with a total mass of ≤20M⊙ and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor ≈20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor ≈1000 longer processing time.
NASA Astrophysics Data System (ADS)
Yang, Jianwen
2012-04-01
A general analytical solution is derived by using the Laplace transformation to describe transient reactive silica transport in a conceptualized 2-D system involving a set of parallel fractures embedded in an impermeable host rock matrix, taking into account hydrodynamic dispersion and advection of silica transport along the fractures, molecular diffusion from each fracture to the intervening rock matrix, and dissolution of quartz. A special analytical solution is also developed by ignoring the longitudinal hydrodynamic dispersion term while keeping the other conditions the same. The general and special solutions are in the form of a double infinite integral and a single infinite integral, respectively, and can be evaluated using Gauss-Legendre quadrature. A simple criterion is developed to determine under what conditions the general analytical solution can be approximated by the special analytical solution. It is proved analytically that the general solution always lags behind the special solution, unless a dimensionless parameter is less than a critical value. Several illustrative calculations are undertaken to demonstrate the effect of fracture spacing, fracture aperture and fluid flow rate on silica transport. The analytical solutions developed here can serve as a benchmark to validate numerical models that simulate reactive mass transport in fractured porous media.
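The quadrature step can be sketched generically: a solution expressed as a single infinite integral is evaluated with Gauss-Legendre quadrature after mapping the half-line [0, ∞) onto a finite interval via t = u/(1-u). The integrand below is a stand-in with a known value, not the silica-transport kernel.

```python
import numpy as np

# Gauss-Legendre evaluation of an integral over [0, infinity) using the
# rational change of variables t = u/(1-u), dt = du/(1-u)^2.

def gauss_legendre_infinite(f, n=80):
    """Approximate the integral of f over [0, inf) with n-point Gauss-Legendre."""
    u, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    u = 0.5 * (u + 1.0)                         # map to (0, 1)
    w = 0.5 * w
    t = u / (1.0 - u)                           # map (0, 1) -> (0, inf)
    jac = 1.0 / (1.0 - u) ** 2                  # Jacobian dt/du
    return np.sum(w * f(t) * jac)
```

A double infinite integral, like the general solution above, would simply nest two such rules; the abstract's criterion then tells you when the cheaper single-integral (special) solution suffices.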
A new technique is presented for the retrieval of ozone concentration profiles from backscattered signals obtained by a multi-wavelength differential-absorption lidar (DIAL). The technique makes it possible to reduce erroneous local fluctuations induced in the ozone-concentration...
Variationally consistent approximation scheme for charge transfer
NASA Technical Reports Server (NTRS)
Halpern, A. M.
1978-01-01
The author has developed a technique for testing various charge-transfer approximation schemes for consistency with the requirements of the Kohn variational principle for the amplitude to guarantee that the amplitude is correct to second order in the scattering wave functions. Applied to Born-type approximations for charge transfer it allows the selection of particular groups of first-, second-, and higher-Born-type terms that obey the consistency requirement, and hence yield more reliable approximation to the amplitude.
VIEWDEX: an efficient and easy-to-use software for observer performance studies.
Håkansson, Markus; Svensson, Sune; Zachrisson, Sara; Svalkvist, Angelica; Båth, Magnus; Månsson, Lars Gunnar
2010-01-01
The development of investigation techniques, image processing, workstation monitors, analysing tools etc. within the field of radiology is vast, and the need for efficient tools in the evaluation and optimisation process of image and investigation quality is important. ViewDEX (Viewer for Digital Evaluation of X-ray images) is an image viewer and task manager suitable for research and optimisation tasks in medical imaging. ViewDEX is DICOM compatible and the features of the interface (tasks, image handling and functionality) are general and flexible. The configuration of a study and output (for example, answers given) can be edited in any text editor. ViewDEX is developed in Java and can run from any disc area connected to a computer. It is free to use for non-commercial purposes and can be downloaded from http://www.vgregion.se/sas/viewdex. In the present work, an evaluation of the efficiency of ViewDEX for receiver operating characteristic (ROC) studies, free-response ROC (FROC) studies and visual grading (VG) studies was conducted. For VG studies, the total scoring rate was dependent on the number of criteria per case. A scoring rate of approximately 150 cases h⁻¹ can be expected for a typical VG study using single images and five anatomical criteria. For ROC and FROC studies using clinical images, the scoring rate was approximately 100 cases h⁻¹ using single images and approximately 25 cases h⁻¹ using image stacks (approximately 50 images per case). In conclusion, ViewDEX is an efficient and easy-to-use software for observer performance studies.
Large-scale structure in brane-induced gravity. I. Perturbation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scoccimarro, Roman
2009-11-15
We study the growth of subhorizon perturbations in brane-induced gravity using perturbation theory. We solve for the linear evolution of perturbations taking advantage of the symmetry under gauge transformations along the extra-dimension to decouple the bulk equations in the quasistatic approximation, which we argue may be a better approximation at large scales than previously thought. We then study the nonlinearities in the bulk and brane equations, concentrating on the workings of the Vainshtein mechanism by which the theory becomes general relativity (GR) at small scales. We show that at the level of the power spectrum, to a good approximation, the effect of nonlinearities in the modified gravity sector may be absorbed into a renormalization of the gravitational constant. Since the relation between the lensing potential and density perturbations is entirely unaffected by the extra physics in these theories, the modified gravity can be described in this approximation by a single function, an effective gravitational constant for nonrelativistic motion that depends on space and time. We develop a resummation scheme to calculate it, and provide predictions for the nonlinear power spectrum. At the level of the large-scale bispectrum, the leading order corrections are obtained by standard perturbation theory techniques, and show that the suppression of the brane-bending mode leads to characteristic signatures in the non-Gaussianity generated by gravity, generic to models that become GR at small scales through second-derivative interactions. We compare the predictions in this work to numerical simulations in a companion paper.
NASA Astrophysics Data System (ADS)
Wang, Y. Z.; Wang, B.; Xiong, X. M.; Zhang, J. X.
2011-03-01
In much previous research work on the deformation of a fluid interface interacting with a solid, the surface energy density on the deformed fluid interface (or its interaction surface pressure) is calculated approximately by using the expression for the interaction energy per unit area (or pressure) between two parallel macroscopic plates, e.g. σ(D) = −A/(12πD²) or Π(D) = −A/(6πD³) for the van der Waals (vdW) interaction, through invoking the Derjaguin approximation (DA). This approximation, however, can over-predict or even incorrectly predict the interaction force and the corresponding deformation of the fluid interface, because the Derjaguin approximation breaks down for microscopic or submacroscopic solids. To circumvent these limitations of the previous DA-based theoretical work, a more accurate and quantitative theoretical model is presented in this paper for exactly calculating the vdW-induced deformation of a planar fluid interface interacting with a sphere, and the corresponding interaction forces taking that deformation into account. The validity and advantage of the new mathematical and physical technique are rigorously verified by comparison with numerical results based on the previous paraboloid-solid (PS) model and on Hamaker's sphere-flat expression (viz. F = −2Aa³/(3D²(D + 2a)²)), as well as its well-known DA-based general form F/a = −A/(6z_p0²).
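The breakdown of the Derjaguin approximation (DA) discussed above is easy to check numerically for the rigid sphere-flat geometry, before any interface deformation enters: the DA force F_DA = −Aa/(6D²) agrees with Hamaker's exact pairwise-summed result F = −2Aa³/(3D²(D + 2a)²) only when the separation D is much smaller than the sphere radius a.

```python
# Hamaker sphere/half-space vdW force and its Derjaguin approximation.
# A is the Hamaker constant, a the sphere radius, D the closest separation.

def hamaker_sphere_flat(A, a, D):
    """Exact (pairwise-summed) vdW force between a sphere and a half-space."""
    return -2.0 * A * a**3 / (3.0 * D**2 * (D + 2.0 * a)**2)

def derjaguin_sphere_flat(A, a, D):
    """DA force F = 2 pi a W(D) with W(D) = -A/(12 pi D^2), i.e. -A a/(6 D^2)."""
    return -A * a / (6.0 * D**2)

def da_over_exact(a, D):
    """Ratio of DA to exact force; equals (D + 2a)^2 / (4 a^2)."""
    return derjaguin_sphere_flat(1.0, a, D) / hamaker_sphere_flat(1.0, a, D)
```

For D = a/1000 the ratio is essentially 1, while already at D = a the DA overestimates the attraction by a factor of 9/4, which is the over-prediction for small solids that motivates the exact treatment above.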
Automating approximate Bayesian computation by local linear regression.
Thornton, Kevin R
2009-07-07
In several biological contexts, parameter inference often relies on computationally intensive techniques. "Approximate Bayesian Computation", or ABC, methods based on summary statistics have become increasingly popular. A particular flavor of ABC, based on using a linear regression to approximate the posterior distribution of the parameters conditional on the summary statistics, is computationally appealing, yet no standalone tool exists to automate the procedure. Here, I describe a program to implement the method. The software package ABCreg implements the local linear-regression approach to ABC. The advantages are: 1. The code is standalone and fully documented. 2. The program will automatically process multiple data sets and create unique output files for each (which may be processed immediately in R), facilitating the testing of inference procedures on simulated data, or the analysis of multiple data sets. 3. The program implements two different transformation methods for the regression step. 4. Analysis options are controlled on the command line by the user, and the program is designed to output warnings for cases where the regression fails. 5. The program does not depend on any particular simulation machinery (coalescent, forward-time, etc.), and therefore is a general tool for processing the results from any simulation. 6. The code is open source and modular. Examples of applying the software to empirical data from Drosophila melanogaster, and of testing the procedure on simulated data, are shown. In practice, ABCreg simplifies implementing ABC based on local linear regression.
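The regression-adjustment idea that ABCreg automates can be sketched in a few lines (a minimal, scalar-summary version of Beaumont-style local linear regression, not the package's actual code): accept the simulations whose summary statistic s falls closest to the observed value s_obs, fit a local linear model θ ≈ α + β(s − s_obs), and project the accepted parameters onto s_obs.

```python
import numpy as np

# Local linear-regression ABC adjustment for a scalar parameter and summary.

def abc_reg_adjust(theta, s, s_obs, accept_frac=0.1):
    """Return regression-adjusted draws approximating the posterior of theta."""
    theta, s = np.asarray(theta, float), np.asarray(s, float)
    d = np.abs(s - s_obs)                                  # distance to observation
    keep = np.argsort(d)[: max(2, int(accept_frac * len(theta)))]
    X = np.column_stack([np.ones(len(keep)), s[keep] - s_obs])
    coef, *_ = np.linalg.lstsq(X, theta[keep], rcond=None)
    alpha, beta = coef                                     # alpha estimates E[theta | s_obs]
    return theta[keep] - beta * (s[keep] - s_obs)          # adjusted posterior sample
```

On a toy problem where s = θ + noise, the adjusted sample concentrates around the true posterior mean even with a fairly loose acceptance window, which is the variance reduction that makes the regression step worthwhile.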
Sharma, Ity; Kaminski, George A.
2012-01-01
We have computed pKa values for eleven substituted phenol compounds using the continuum Fuzzy-Border (FB) solvation model. Hydration energies for 40 other compounds, including alkanes, alkenes, alkynes, ketones, amines, alcohols, ethers, aromatics, amides, heterocycles, thiols, sulfides and acids have been calculated. The overall average unsigned error in the calculated acidity constant values was equal to 0.41 pH units and the average error in the solvation energies was 0.076 kcal/mol. We have also reproduced pKa values of propanoic and butanoic acids within ca. 0.1 pH units from the experimental values by fitting the solvation parameters for carboxylate ion carbon and oxygen atoms. The FB model combines two distinguishing features. First, it limits the amount of noise which is common in numerical treatment of continuum solvation models by using fixed-position grid points. Second, it employs either second- or first-order approximation for the solvent polarization, depending on a particular implementation. These approximations are similar to those used for solute and explicit solvent fast polarization treatment which we developed previously. This article describes results of employing the first-order technique. This approximation places the presented methodology between the Generalized Born and Poisson-Boltzmann continuum solvation models with respect to their accuracy of reproducing the many-body effects in modeling a continuum solvent. PMID:22815192
Feng, Hao; Ashkar, Rana; Steinke, Nina; ...
2018-02-01
A method dubbed grating-based holography was recently used to determine the structure of colloidal fluids in the rectangular grooves of a diffraction grating from X-ray scattering measurements. Similar grating-based measurements have also been recently made with neutrons using a technique called spin-echo small-angle neutron scattering. The analysis of the X-ray diffraction data was done using an approximation that treats the X-ray phase change caused by the colloidal structure as a small perturbation to the overall phase pattern generated by the grating. In this paper, the adequacy of this weak phase approximation is explored for both X-ray and neutron grating holography. Additionally, it is found that there are several approximations hidden within the weak phase approximation that can lead to incorrect conclusions from experiments. In particular, the phase contrast for the empty grating is a critical parameter. Finally, while the approximation is found to be perfectly adequate for X-ray grating holography experiments performed to date, it cannot be applied to similar neutron experiments because the latter technique requires much deeper grating channels.
Mauda, R.; Pinchas, M.
2014-01-01
Recently a new blind equalization method was proposed for the 16QAM constellation input inspired by the maximum entropy density approximation technique with improved equalization performance compared to the maximum entropy approach, Godard's algorithm, and others. In addition, an approximated expression for the minimum mean square error (MSE) was obtained. The idea was to find those Lagrange multipliers that bring the approximated MSE to minimum. Since the derivation of the obtained MSE with respect to the Lagrange multipliers leads to a nonlinear equation for the Lagrange multipliers, the part in the MSE expression that caused the nonlinearity in the equation for the Lagrange multipliers was ignored. Thus, the obtained Lagrange multipliers were not those Lagrange multipliers that bring the approximated MSE to minimum. In this paper, we derive a new set of Lagrange multipliers based on the nonlinear expression for the Lagrange multipliers obtained from minimizing the approximated MSE with respect to the Lagrange multipliers. Simulation results indicate that for the high signal-to-noise ratio (SNR) case, a faster convergence rate is obtained for a channel causing a high initial intersymbol interference (ISI), while the same equalization performance is obtained for an easy channel (low initial ISI). PMID:24723813
NASA Technical Reports Server (NTRS)
Pototzky, Anthony S.
2008-01-01
A simple matrix polynomial approach is introduced for approximating unsteady aerodynamics in the s-plane and ultimately, after combining matrix polynomial coefficients with matrices defining the structure, a matrix polynomial of the flutter equations of motion (EOM) is formed. A technique of recasting the matrix-polynomial form of the flutter EOM into a first order form is also presented that can be used to determine the eigenvalues near the origin and everywhere on the complex plane. The aeroservoelastic (ASE) EOM has been generalized to include the gust terms on the right-hand side. The reasons for developing the new matrix polynomial approach are also presented, which are the following: first, the "workhorse" methods such as the NASTRAN flutter analysis lack the capability to consistently find roots near the origin, along the real axis or accurately find roots farther away from the imaginary axis of the complex plane; and, second, the existing s-plane methods, such as Roger's s-plane approximation method as implemented in ISAC, do not always give suitable fits of some tabular data of the unsteady aerodynamics. A method available in MATLAB is introduced that will accurately fit generalized aerodynamic force (GAF) coefficients in a tabular data form into the coefficients of a matrix polynomial form. The root-locus results from the NASTRAN pknl flutter analysis, the ISAC-Roger's s-plane method and the present matrix polynomial method are presented and compared for accuracy and for the number and locations of roots.
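The fitting step can be sketched for a single entry of the GAF matrix (notional data and a plain least-squares fit, not the paper's MATLAB method): a coefficient tabulated at reduced frequencies k is fit by a polynomial in s evaluated on the imaginary axis, Q(ik) ≈ A0 + A1(ik) + A2(ik)², with real matrix-polynomial coefficients; each entry of the GAF matrix gets its own set of coefficients.

```python
import numpy as np

# Least-squares fit of tabular Q(i k) by a polynomial in s with real coefficients.

def fit_s_polynomial(k, Q, order=2):
    """Fit Q(i k) ~ sum_j A_j (i k)^j; returns the real coefficients [A0, A1, ...]."""
    s = 1j * np.asarray(k, float)
    V = np.column_stack([s**j for j in range(order + 1)])   # Vandermonde in s
    # Stack real and imaginary parts so the A_j come out real.
    Vri = np.vstack([V.real, V.imag])
    Qri = np.concatenate([np.real(Q), np.imag(Q)])
    A, *_ = np.linalg.lstsq(Vri, Qri, rcond=None)
    return A
```

Once every entry is fit this way, the A_j matrices combine with the structural mass, damping, and stiffness matrices to give the matrix-polynomial flutter EOM described above, which is then recast in first-order form for eigenvalue extraction.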
Precision cosmological parameter estimation
NASA Astrophysics Data System (ADS)
Fendt, William Ashton, Jr.
2009-09-01
Experimental efforts of the last few decades have brought a golden age to mankind's endeavor to understand the physical properties of the Universe throughout its history. Recent measurements of the cosmic microwave background (CMB) provide strong confirmation of the standard big bang paradigm, as well as introducing new mysteries unexplained by current physical models. In the following decades, even more ambitious scientific endeavours will begin to shed light on the new physics by looking at the detailed structure of the Universe both at very early and recent times. Modern data has allowed us to begin to test inflationary models of the early Universe, and the near future will bring higher precision data and much stronger tests. Cracking the codes hidden in these cosmological observables is a difficult and computationally intensive problem. The challenges will continue to increase as future experiments bring larger and more precise data sets. Because of the complexity of the problem, we are forced to use approximate techniques and make simplifying assumptions to ease the computational workload. While this has been reasonably sufficient until now, hints of the limitations of our techniques have begun to come to light. For example, the likelihood approximation used for analysis of CMB data from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite was shown to have shortfalls, leading to pre-emptive conclusions drawn about current cosmological theories. Also, it can be shown that an approximate method used by all current analysis codes to describe the recombination history of the Universe will not be sufficiently accurate for future experiments. With a new CMB satellite scheduled for launch in the coming months, it is vital that we develop techniques to improve the analysis of cosmological data.
This work develops a novel technique that both avoids the use of approximate computational codes and allows the application of new, more precise analysis methods. These techniques will help in the understanding of new physics contained in current and future data sets as well as benefit the research efforts of the cosmology community. Our idea is to shift the computationally intensive pieces of the parameter estimation framework to a parallel training step. We then provide a machine learning code that uses this training set to learn the relationship between the underlying cosmological parameters and the function we wish to compute. This code is very accurate and simple to evaluate. It can provide incredible speed-ups of parameter estimation codes. For some applications this provides the convenience of obtaining results faster, while in other cases this allows the use of codes that would be impossible to apply in the brute-force setting. In this thesis we provide several examples where our method allows more accurate computation of functions important for data analysis than is currently possible. As the techniques developed in this work are very general, there is no doubt a wide array of applications both inside and outside of cosmology. We have already seen this interest as other scientists have presented ideas for using our algorithm to improve their computational work, indicating its importance as modern experiments push forward. In fact, our algorithm will play an important role in the parameter analysis of Planck, the next generation CMB space mission.
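The emulator idea can be illustrated with a deliberately toy example: an "expensive" function is sampled once in an offline, parallelizable training step, then replaced by a cheap fitted surrogate inside the analysis loop. The function names and the polynomial surrogate below are illustrative stand-ins, not the thesis's actual machine learning code.

```python
import numpy as np

# Toy stand-in for an expensive Boltzmann-code call: one parameter in,
# one observable out (the functional form is purely illustrative).
def expensive_observable(theta):
    return np.sin(3.0 * theta) + 0.5 * theta**2

# Offline training step: sample the parameter space once.
train_theta = np.linspace(0.0, 1.0, 40)
train_obs = expensive_observable(train_theta)

# Cheap emulator: a polynomial regression fit to the training set.
coeffs = np.polyfit(train_theta, train_obs, deg=7)
emulator = np.poly1d(coeffs)

# Inside an MCMC loop the emulator would replace the expensive call.
test_theta = np.linspace(0.05, 0.95, 17)
err = np.max(np.abs(emulator(test_theta) - expensive_observable(test_theta)))
```

The trade-off is the one the abstract describes: the training set is built once in parallel, after which each likelihood evaluation costs only a polynomial evaluation.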
Approximation of Failure Probability Using Conditional Sampling
NASA Technical Reports Server (NTRS)
Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.
2008-01-01
In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
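A minimal sketch of the conditional-sampling idea, using a one-dimensional standard-normal example of our own (not taken from the paper): the failure set {x > 2.5} is contained in a bounding set {x > 2} whose probability is known analytically, so samples are drawn only inside the bounding set and the two factors are multiplied.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):  # standard normal CDF via erf (no SciPy needed)
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_ppf(p, lo=-10.0, hi=10.0):  # inverse CDF by bisection
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(1)
b, f = 2.0, 2.5                    # bounding threshold and failure threshold
p_bound = 1.0 - norm_cdf(b)        # analytic probability of the bounding set

# Conditional sampling: draw only from inside the bounding set B = {x > b}
# via the inverse CDF of the truncated normal.
u = rng.uniform(size=10_000)
x = np.array([norm_ppf(norm_cdf(b) + ui * p_bound) for ui in u])

p_fail = p_bound * np.mean(x > f)  # P(F) = P(B) * P(F | B)
p_exact = 1.0 - norm_cdf(f)
```

Because every sample lands inside the bounding set, none are wasted in the vast safe region, which is exactly how the conditional estimator shrinks the variance for a given sample budget.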
Infantile hemangioma: pulsed dye laser versus surgical therapy
NASA Astrophysics Data System (ADS)
Remlova, E.; Dostalova, T.; Michalusova, I.; Vranova, J.; Jelinkova, H.; Hubacek, M.
2014-05-01
Hemangioma is a benign mesenchymal tumor formed by blood vessels. These anomalies affect up to 10% of children and are more common in females than in males. The aim of our study was to compare the treatment efficacy, namely the curative effect and adverse events such as loss of pigment and appearance of scarring, between classical surgical techniques and laser techniques. For that reason, a group of 223 patients with hemangioma was retrospectively reviewed. For treatment, a pulsed dye laser (PDL) (Rhodamine G, wavelength 595 nm, pulse width between 0.45 and 40 ms, spot diameter 7 mm, energy density 9-11 J cm⁻²) was used, and the results were compared with a control group treated with classical surgical therapy under general anesthesia. The curative effects, mainly the number of sessions, appearance of scars, loss of pigment, and relapses, were evaluated as markers of successful treatment. From the results it was evident that the therapeutic effects of the two approaches are similar. The PDL was successful in all cases. The surgery patients had four relapses. Classical surgery is directly connected with the presence of scars, but it is safe for larger hemangiomas. It was confirmed that the PDL had the optimal curative effect, without scars, for small lesions (approximately 10 mm). Surgical treatment under general anesthesia is better for large hemangiomas; the disadvantage is the presence of scars.
Comparing capacity value estimation techniques for photovoltaic solar power
Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul
2012-09-28
In this paper, we estimate the capacity value of photovoltaic (PV) solar plants in the western U.S. Our results show that PV plants have capacity values that range between 52% and 93%, depending on location and sun-tracking capability. We further compare more robust but data- and computationally-intensive reliability-based estimation techniques with simpler approximation methods. We show that if implemented properly, these techniques provide accurate approximations of reliability-based methods. Overall, methods that are based on the weighted capacity factor of the plant provide the most accurate estimate. As a result, we also examine the sensitivity of PV capacity value to the inclusion of sun-tracking systems.
Design of a laser system for instantaneous location of a longwall shearer
NASA Technical Reports Server (NTRS)
Stein, R.
1981-01-01
Calculations and measurements were made for the design of a laser system for instantaneous location of a longwall shearer. The designs determine shearer location to approximately one foot. The roll, pitch, and yaw angles of the shearer track are determined to approximately two degrees. The first technique uses the water target system: a single silicon sensor system and three gallium arsenide laser beams. The second technique is based on an arrangement similar to that employed in aircraft omnidirectional position finding: the angle between two points is determined by combining information in an omnidirectional flash with a scanned, narrow-beam beacon. It is concluded that this approach maximizes the signal levels.
NASA Astrophysics Data System (ADS)
Rosenfeld, Yaakov
1989-01-01
The linearized mean-force-field approximation, leading to a Gaussian distribution, provides an exact formal solution to the mean-spherical integral equation model for the electric microfield distribution at a charged point in the general charged-hard-particle fluid. Lado's explicit solution for plasmas follows immediately from this general observation.
ERIC Educational Resources Information Center
Seco, Guillermo Vallejo; Izquierdo, Marcelino Cuesta; Garcia, M. Paula Fernandez; Diez, F. Javier Herrero
2006-01-01
The authors compare the operating characteristics of the bootstrap-F approach, a direct extension of the work of Berkovits, Hancock, and Nevitt, with Huynh's improved general approximation (IGA) and the Brown-Forsythe (BF) multivariate approach in a mixed repeated measures design when normality and multisample sphericity assumptions do not hold.…
Ground-to-Flight Handling Qualities Comparisons for a High Performance Airplane
NASA Technical Reports Server (NTRS)
Brandon, Jay M.; Glaab, Louis J.; Brown, Philip W.; Phillips, Michael R.
1995-01-01
A flight test program was conducted in conjunction with a ground-based piloted simulation study to enable a comparison of handling qualities ratings for a variety of maneuvers between flight and simulation of a modern high performance airplane. Specific objectives included an evaluation of pilot-induced oscillation (PIO) tendencies and a determination of maneuver types which result in either good or poor ground-to-flight pilot handling qualities ratings. A General Dynamics F-16XL aircraft was used for the flight evaluations, and the NASA Langley Differential Maneuvering Simulator was employed for the ground based evaluations. Two NASA research pilots evaluated both the airplane and simulator characteristics using tasks developed in the simulator. Simulator and flight tests were all conducted within approximately a one month time frame. Maneuvers included numerous fine tracking evaluations at various angles of attack, load factors and speed ranges, gross acquisitions involving longitudinal and lateral maneuvering, roll angle captures, and an ILS task with a sidestep to landing. Overall results showed generally good correlation between ground and flight for PIO tendencies and general handling qualities comments. Differences in pilot technique used in simulator evaluations and effects of airplane accelerations and motions are illustrated.
Magnetic resonance imaging in Mexico
NASA Astrophysics Data System (ADS)
Rodriguez, A. O.; Rojas, R.; Barrios, F. A.
2001-10-01
MR imaging has experienced important growth worldwide, particularly in the USA and Japan. This imaging technique has also shown an important rise in the number of MR imagers in Mexico. However, the development of MRI there has followed a path typical of Latin American countries, which is very different from that of the industrialised countries. Despite the fact that Mexico was one of the very first countries in the world to install and operate MR imagers, it still lacks qualified clinical and technical personnel. Since the first MR scanner started to operate, the number of units has grown at a moderate pace and now totals approximately 60 systems installed nationwide. Nevertheless, there are no official records of the number of MR units operating or of the physicians and technicians involved in this imaging modality. The MRI market is dominated by two companies: General Electric (approximately 51%) and Siemens (approximately 17.5%); the rest is shared by five other companies. According to field intensity, medium-field systems (0.5 Tesla) represent 60%, while a further 35% are 1.0 T or higher. Almost all of these units are in private hospitals and clinics; there are no high-field MR imagers in any public hospital. Because of the political changes in the country, a new public plan for health care is still in process and will be published later this year. This plan will be shaped by the new Congress, the North American Free Trade Agreement (NAFTA), and President Fox. Experience acquired in the past shows that the demand for qualified professionals will grow in the near future. Therefore, systematic training of clinical and technical professionals will be in high demand to meet the needs of this technique. The National University (UNAM) and the Metropolitan University (UAM-Iztapalapa) are collaborating with diverse clinical groups in private facilities to create a systematic training program and to carry out research and development in MRI.
Radiative heat transfer in strongly forward scattering media using the discrete ordinates method
NASA Astrophysics Data System (ADS)
Granate, Pedro; Coelho, Pedro J.; Roger, Maxime
2016-03-01
The discrete ordinates method (DOM) is widely used to solve the radiative transfer equation, often yielding satisfactory results. However, in the presence of strongly forward scattering media, this method does not generally conserve the scattering energy and the phase function asymmetry factor. Because of this, the normalization of the phase function has been proposed to guarantee that the scattering energy and the asymmetry factor are conserved. Various authors have used different normalization techniques. Three of these are compared in the present work, along with two other methods, one based on the finite volume method (FVM) and another one based on the spherical harmonics discrete ordinates method (SHDOM). In addition, the approximation of the Henyey-Greenstein phase function by a different one is investigated as an alternative to the phase function normalization. The approximate phase function is given by the sum of a Dirac delta function, which accounts for the forward scattering peak, and a smoother scaled phase function. In this study, these techniques are applied to three scalar radiative transfer test cases, namely a three-dimensional cubic domain with a purely scattering medium, an axisymmetric cylindrical enclosure containing an emitting-absorbing-scattering medium, and a three-dimensional transient problem with collimated irradiation. The present results show that accurate predictions are achieved for strongly forward scattering media when the phase function is normalized in such a way that both the scattered energy and the phase function asymmetry factor are conserved. The normalization of the phase function may be avoided using the FVM or the SHDOM to evaluate the in-scattering term of the radiative transfer equation. Both methods yield results whose accuracy is similar to that obtained using the DOM along with normalization of the phase function. 
Very satisfactory predictions were also achieved using the delta-M phase function, while the delta-Eddington phase function and the transport approximation may perform poorly.
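The simplest of the normalization techniques compared, rescaling the discretized phase function so that the scattered energy is conserved for every incident ordinate (the asymmetry factor is not constrained in this sketch), can be illustrated as follows. The quadrature set is an illustrative equal-weight one, not an actual S_N set.

```python
import numpy as np

def fibonacci_sphere(n):
    """Nearly uniform unit directions with equal weights 4*pi/n
    (an illustrative stand-in for an S_N quadrature set)."""
    i = np.arange(n) + 0.5
    phi = np.pi * (1.0 + 5.0**0.5) * i
    z = 1.0 - 2.0 * i / n
    r = np.sqrt(1.0 - z * z)
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

def henyey_greenstein(mu, g):
    return (1.0 - g * g) / (1.0 + g * g - 2.0 * g * mu) ** 1.5

n, g = 48, 0.9
omega = fibonacci_sphere(n)
w = np.full(n, 4.0 * np.pi / n)
mu = np.clip(omega @ omega.T, -1.0, 1.0)   # cosines between ordinate pairs
phase = henyey_greenstein(mu, g)           # strongly forward-peaked

# Zeroth moment (1/4pi) * sum_j w_j * Phi_ij should equal 1 for every
# incident ordinate i; the raw quadrature badly violates this for a
# peaked phase function, mainly through the forward (self) term.
raw = phase @ w / (4.0 * np.pi)

# Simplest normalization: rescale each row so scattered energy is conserved.
phase_norm = phase / raw[:, None]
check = phase_norm @ w / (4.0 * np.pi)
```

The techniques the abstract compares go further and also conserve the asymmetry factor, which this single-factor rescaling does not.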
The derivation and approximation of coarse-grained dynamics from Langevin dynamics
NASA Astrophysics Data System (ADS)
Ma, Lina; Li, Xiantao; Liu, Chun
2016-11-01
We present a derivation of a coarse-grained description, in the form of a generalized Langevin equation, from the Langevin dynamics model that describes the dynamics of bio-molecules. The focus is placed on the form of the memory kernel function, the colored noise, and the second fluctuation-dissipation theorem that connects them. Also presented is a hierarchy of approximations for the memory and random noise terms, using rational approximations in the Laplace domain. These approximations offer increasing accuracy. More importantly, they eliminate the need to evaluate the integral associated with the memory term at each time step. Direct sampling of the colored noise can also be avoided within this framework. Therefore, the numerical implementation of the generalized Langevin equation is much more efficient.
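The key mechanism, replacing the memory integral by auxiliary local ODEs, can be shown for the simplest rational (single-pole) approximation in the Laplace domain, where the kernel is a pure exponential. The kernel parameters and velocity history below are arbitrary illustrations, not values from the paper.

```python
import numpy as np

# Single-pole kernel K(t) = c*exp(-t/tau), i.e. K̂(s) = c/(s + 1/tau).
# The memory force z(t) = -∫_0^t K(t-s) v(s) ds then satisfies the local ODE
#   dz/dt = -c*v(t) - z/tau,  z(0) = 0,
# so no history integral has to be evaluated at each time step.
c, tau = 2.0, 0.5                       # illustrative kernel parameters
v = lambda t: np.cos(2.0 * t)           # prescribed velocity history

T, n = 3.0, 30000
ts = np.linspace(0.0, T, n + 1)
dt = ts[1] - ts[0]
z = 0.0
for t in ts[:-1]:                       # explicit midpoint (RK2) for the ODE
    k1 = -c * v(t) - z / tau
    zh = z + 0.5 * dt * k1
    z += dt * (-c * v(t + 0.5 * dt) - zh / tau)

# Reference: brute-force evaluation of the convolution at time T (trapezoid).
f = c * np.exp(-(T - ts) / tau) * v(ts)
direct = -dt * (f.sum() - 0.5 * (f[0] + f[-1]))
```

Higher-order rational approximations add more auxiliary variables in the same way, one pair of ODE coefficients per pole, which is how the hierarchy trades accuracy for a fixed per-step cost.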
On the estimation of the current density in space plasmas: Multi- versus single-point techniques
NASA Astrophysics Data System (ADS)
Perri, Silvia; Valentini, Francesco; Sorriso-Valvo, Luca; Reda, Antonio; Malara, Francesco
2017-06-01
Thanks to multi-spacecraft mission, it has recently been possible to directly estimate the current density in space plasmas, by using magnetic field time series from four satellites flying in a quasi perfect tetrahedron configuration. The technique developed, commonly called ;curlometer; permits a good estimation of the current density when the magnetic field time series vary linearly in space. This approximation is generally valid for small spacecraft separation. The recent space missions Cluster and Magnetospheric Multiscale (MMS) have provided high resolution measurements with inter-spacecraft separation up to 100 km and 10 km, respectively. The former scale corresponds to the proton gyroradius/ion skin depth in ;typical; solar wind conditions, while the latter to sub-proton scale. However, some works have highlighted an underestimation of the current density via the curlometer technique with respect to the current computed directly from the velocity distribution functions, measured at sub-proton scales resolution with MMS. In this paper we explore the limit of the curlometer technique studying synthetic data sets associated to a cluster of four artificial satellites allowed to fly in a static turbulent field, spanning a wide range of relative separation. This study tries to address the relative importance of measuring plasma moments at very high resolution from a single spacecraft with respect to the multi-spacecraft missions in the current density evaluation.
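For a linearly varying field, the four-point curlometer estimate can be written with the reciprocal vectors of the spacecraft tetrahedron. This is the standard construction behind the technique, sketched here with synthetic positions and fields rather than mission data.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def curlometer(r, B):
    """Four-point 'curlometer' estimate of J = (curl B)/mu0.
    r, B : (4, 3) arrays of spacecraft positions and magnetic field samples.
    The estimate is exact when B varies linearly across the tetrahedron."""
    k = np.zeros((4, 3))
    for a in range(4):
        b, c, d = (i for i in range(4) if i != a)
        n = np.cross(r[c] - r[b], r[d] - r[b])   # reciprocal vectors of
        k[a] = n / np.dot(r[a] - r[b], n)        # the tetrahedron
    curl_B = np.cross(k, B).sum(axis=0)          # curl B ≈ sum_a k_a x B_a
    return curl_B / MU0

# Check against a linear field B = (0, x, 0), for which curl B = (0, 0, 1).
r = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
B = np.array([[0.0, x, 0.0] for x in r[:, 0]])
J = curlometer(r, B)
```

When the field varies nonlinearly on scales comparable to the separation, the linear-variation assumption breaks down, which is precisely the regime the paper probes with its synthetic turbulent fields.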
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Hao; Ashkar, Rana; Steinke, Nina
A method dubbed grating-based holography was recently used to determine the structure of colloidal fluids in the rectangular grooves of a diffraction grating from X-ray scattering measurements. Similar grating-based measurements have also recently been made with neutrons using a technique called spin-echo small-angle neutron scattering. The analysis of the X-ray diffraction data was done using an approximation that treats the X-ray phase change caused by the colloidal structure as a small perturbation to the overall phase pattern generated by the grating. In this paper, the adequacy of this weak phase approximation is explored for both X-ray and neutron grating holography. Additionally, it is found that there are several approximations hidden within the weak phase approximation that can lead to incorrect conclusions from experiments. In particular, the phase contrast for the empty grating is a critical parameter. Finally, while the approximation is found to be perfectly adequate for the X-ray grating holography experiments performed to date, it cannot be applied to similar neutron experiments because the latter technique requires much deeper grating channels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buda, I. G.; Lane, C.; Barbiellini, B.
We discuss self-consistently obtained ground-state electronic properties of monolayers of graphene and a number of 'beyond graphene' compounds, including films of transition-metal dichalcogenides (TMDs), using the recently proposed strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) to density functional theory. The SCAN meta-GGA results are compared with those based on the local density approximation (LDA) as well as the generalized gradient approximation (GGA). As expected, the GGA yields expanded lattices and softened bonds in relation to the LDA, but the SCAN meta-GGA systematically improves the agreement with experiment. Our study suggests the efficacy of the SCAN functional for accurate modeling of electronic structures of layered materials in high-throughput calculations more generally.
Buda, I. G.; Lane, C.; Barbiellini, B.; ...
2017-03-23
Anesthetic management of prophylactic cervical cerclage: a retrospective multicenter cohort study.
Ioscovich, Alexander; Popov, Alla; Gimelfarb, Yuri; Gozal, Yaacov; Orbach-Zinger, Sharon; Shapiro, Joel; Ginosar, Yehuda
2015-03-01
Cervical incompetence complicates approximately 1 in 500 pregnancies and is the most common cause of second-trimester spontaneous abortion and preterm labor. No prospective or large retrospective studies have compared regional and general anesthesia for cervical cerclage. Following IRB approval, we performed a retrospective study in the two main medical centers over an 8-year period to assess the association of anesthesia choice with anesthetic and obstetric outcomes. Anesthetic and perioperative details were retrospectively collected from the files of all patients undergoing cervical cerclage from 01/01/2005 until 31/12/2012. Details included demographic data, anesthetic technique, PACU data and perioperative complications. We identified 487 cases of cervical cerclage in 327 women during the study period. The most commonly used anesthetic technique was general anesthesia (GA) (402/487; 82.5%) compared with regional anesthesia (RA) (85/487; 17.5%). When GA was performed, facemask was the most commonly used technique (275/402; 68.4%), followed by intravenous deep sedation (61/402; 15.2%), LMA (51/402; 12.7%) and tracheal intubation (13/402; 3.2%). There were no significant differences in demographic characteristics between women receiving general and regional anesthesia. Average duration of suturing the cervix was 9.8 ± 1.6 min in the GA group and 10.6 ± 2.1 min in the RA group (p < 0.001). Average length of stay in the operating room was 20.5 ± 3.9 min in the GA group and 23 ± 4.6 min in the RA group (p < 0.001). In the PACU, patients receiving GA received more opioids (6.2 versus 1.2%; p < 0.05) and more non-opioid analgesics (36.8 versus 9.4%; p < 0.001). Duration of PACU stay was shorter after GA (49.5 ± 18 min) than after RA (62.4 ± 28 min; p < 0.001). There were no other differences in anesthetic or perioperative outcome between groups.
This study was not designed to provide evidence that RA reduces the risk of pulmonary aspiration, airway complications or adverse fetal neurological effects from maternal anesthetic exposure. Both regional and general anesthesia were used safely for the performance of cerclage. Patients after general anesthesia had a shorter recovery time but a higher demand for opioid and non-opioid analgesia.
On twelve types of covering-based rough sets.
Safari, Samira; Hooshmandasl, Mohammad Reza
2016-01-01
Covering approximation spaces are a generalization of equivalence-based rough set theories. In this paper, we consider twelve types of covering-based approximation operators by combining four types of covering lower approximation operators with three types of covering upper approximation operators. We then study the properties of these new pairs and show that they have most of the properties common among existing covering approximation pairs. Finally, the relations between these new pairs are studied.
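One commonly used lower/upper pair (the union of covering blocks contained in, respectively meeting, the target set) can be sketched as follows; this is only one of the twelve combinations the paper studies, and the covering here is an invented toy example.

```python
def lower_approx(cover, X):
    """Union of covering blocks entirely contained in X."""
    out = set()
    for K in cover:
        if K <= X:
            out |= K
    return out

def upper_approx(cover, X):
    """Union of covering blocks that intersect X."""
    out = set()
    for K in cover:
        if K & X:
            out |= K
    return out

U = {1, 2, 3, 4, 5}
cover = [{1, 2}, {2, 3}, {3, 4, 5}, {5}]   # a covering, not a partition
X = {1, 2, 3}
lo = lower_approx(cover, X)   # blocks inside X: {1,2} and {2,3}
hi = upper_approx(cover, X)   # all blocks meeting X
```

Unlike the equivalence-class (partition) case, covering blocks may overlap, which is why distinct lower and upper operators no longer come in a single canonical pair.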
Pawlak Algebra and Approximate Structure on Fuzzy Lattice
Zhuang, Ying; Liu, Wenqi; Wu, Chin-Chia; Li, Jinhai
2014-01-01
The aim of this paper is to investigate the general approximation structure, weak approximation operators, and Pawlak algebra in the framework of fuzzy lattice, lattice topology, and auxiliary ordering. First, we prove that the weak approximation operator space forms a complete distributive lattice. Then we study the properties of transitive closure of approximation operators and apply them to rough set theory. We also investigate molecule Pawlak algebra and obtain some related properties. PMID:25152922
Real-time dynamics of matrix quantum mechanics beyond the classical approximation
NASA Astrophysics Data System (ADS)
Buividovich, Pavel; Hanada, Masanori; Schäfer, Andreas
2018-03-01
We describe a numerical method which makes it possible to go beyond the classical approximation for the real-time dynamics of many-body systems by approximating the many-body Wigner function by the most general Gaussian function with time-dependent mean and dispersion. Using a simple example of a classically chaotic system with two degrees of freedom, we demonstrate that this Gaussian state approximation is accurate for significantly smaller field strengths and longer times than the classical one. Applying this approximation to matrix quantum mechanics, we demonstrate that the quantum Lyapunov exponents are in general smaller than their classical counterparts, and even seem to vanish below some temperature. This behavior resembles the finite-temperature phase transition which was found for this system in Monte Carlo simulations, and ensures that the system does not violate the Maldacena-Shenker-Stanford bound λL < 2πT, which inevitably happens for classical dynamics at sufficiently small temperatures.
Hohenforst-Schmidt, Wolfgang; Linsmeier, Bernd; Zarogoulidis, Paul; Freitag, Lutz; Darwiche, Kaid; Browning, Robert; Turner, J Francis; Huang, Haidong; Li, Qiang; Vogl, Thomas; Zarogoulidis, Konstantinos; Brachmann, Johannes; Rittger, Harald
2015-01-01
Tracheomalacia or tracheobronchomalacia (TM or TBM) is a common problem, especially for elderly patients who are often unfit for surgical techniques. Several surgical or minimally invasive techniques have already been described. Stenting is one option, but in general long-term stenting is accompanied by a high complication rate. Stent removal is more difficult in the case of self-expandable nitinol stents, or metallic stents in general, than for silicone stents. The main disadvantage of silicone stents in comparison to uncovered metallic stents is migration and plugging. We compared the operation time, and in particular the duration of a sufficient Dumon stent fixation, with different techniques in a patient with severe posttracheotomy TM and strongly reduced mobility of the vocal cords due to Parkinson's disease. The combined approach of simultaneous Dumon stenting and endoluminal transtracheal externalized suture under cone-beam computed tomography guidance with the Berci needle was by far the fastest approach compared to a (not performed) surgical intervention or even purely endoluminal suturing through the rigid bronchoscope. The duration of the endoluminal transtracheal externalized suture was between 5 minutes and 9 minutes with the Berci needle; the purely endoluminal approach needed 51 minutes. The alternative of tracheobronchoplasty was refused by the patient. In general, 180 minutes is calculated for this surgical approach. The costs of the different approaches are expected to vary widely, given that in Germany 1 minute in an operating room costs on average approximately 50-60€ inclusive of taxes. In our own hospital (tertiary level), it is nearly 30€ per minute in an operating room for a surgical approach. Calculating an additional 15 minutes for patient preparation and transfer to the wake-up room, and therefore a total duration inside the investigation room of 30 minutes, the cost of flexible bronchoscopy is on average less than 6€ per minute.
Although Dumon stenting requires a set-up with more expensive anesthesiology accompaniment and takes longer than a flexible investigation, estimated at 1 hour in an operating room, even without counting the costs of materials and specialized staff the surgical approach would consume at least 3,000€ more than a minimally invasive approach performed with the Berci needle. This difference is due to the longer duration of the surgical intervention, calculated at approximately 180 minutes, in comparison to the achieved non-surgical approach of 60 minutes in the operating suite. PMID:26045666
NASA Astrophysics Data System (ADS)
Dhage, P. M.; Raghuwanshi, N. S.; Singh, R.; Mishra, A.
2017-05-01
Production of the principal paddy crop in the West Bengal state of India is vulnerable to climate change due to limited water resources and strong dependence on surface irrigation. Therefore, assessment of the impact of temperature scenarios on crop evapotranspiration (ETc) is essential for irrigation management in the Kangsabati command (West Bengal). In the present study, the impact of the projected temperatures on ETc was studied under climate change scenarios. Further, the performance of the bias correction and spatial downscaling (BCSD) technique was compared with two well-known downscaling techniques, namely multiple linear regression (MLR) and kernel regression (KR), for the projection of daily maximum and minimum air temperatures at four stations, namely Purulia, Bankura, Jhargram, and Kharagpur. Fourteen predictors from the National Centers for Environmental Prediction (NCEP) reanalysis and a General Circulation Model (GCM) were used in the MLR and KR techniques, whereas the maximum and minimum surface air temperature predictors of the CanESM2 GCM were used in the BCSD technique. The comparison indicated that the performance of the BCSD technique was better than that of the MLR and KR techniques. Therefore, the BCSD technique was used to project the future temperatures of the study locations under three Representative Concentration Pathway (RCP) scenarios for the period 2006-2100. The warming tendencies of maximum and minimum temperatures over the Kangsabati command area were projected as 0.013 and 0.014 °C/year under RCP 2.6, 0.015 and 0.023 °C/year under RCP 4.5, and 0.056 and 0.061 °C/year under RCP 8.5 for the 2011-2100 period, respectively. As a result, the kharif (monsoon) crop evapotranspiration demand of the Kangsabati reservoir command (project area) will increase by approximately 10, 8, and 18% over historical demand under the RCP 2.6, 4.5, and 8.5 scenarios, respectively.
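The bias-correction step at the heart of BCSD-type techniques is essentially quantile mapping between the model's and the observed historical distributions: each model value is replaced by the observed value at the same quantile. The sketch below uses synthetic Gaussian "climatologies" purely for illustration, not the study's actual station data.

```python
import numpy as np

rng = np.random.default_rng(2)
obs = rng.normal(30.0, 4.0, 5000)          # "observed" Tmax climatology [°C]
gcm_hist = rng.normal(27.0, 6.0, 5000)     # biased model climatology

def quantile_map(x, model_clim, obs_clim):
    """Empirical quantile mapping: find each value's quantile in the model's
    historical distribution, then read off the observed value there."""
    q = np.searchsorted(np.sort(model_clim), x) / len(model_clim)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(obs_clim, q)

gcm_future = rng.normal(27.0, 6.0, 2000)   # further draws from the biased model
corrected = quantile_map(gcm_future, gcm_hist, obs)
```

After mapping, the corrected series inherits the observed mean and spread, which is why the bias-corrected projections can be compared directly with station records.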
NASA Astrophysics Data System (ADS)
Wang, L.-P.; Ochoa-Rodríguez, S.; Onof, C.; Willems, P.
2015-09-01
Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited, as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique, and the performance of the resulting singularity-sensitive method is compared with that of the original (non-singularity-sensitive) Bayesian technique and the commonly used mean field bias adjustment. This test is conducted using four storm events observed in the Portobello catchment (53 km²) (Edinburgh, UK) during 2011, for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in the local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates.
Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system's dynamics, particularly of peak runoff flows.
Influence of different treatment techniques on radiation dose to the LAD coronary artery
Nieder, Carsten; Schill, Sabine; Kneschaurek, Peter; Molls, Michael
2007-01-01
Background The purpose of this proof-of-principle study was to test the ability of an intensity-modulated radiotherapy (IMRT) technique to reduce the radiation dose to the heart plus the left ventricle and a coronary artery. Radiation-induced heart disease can be a serious complication in long-term cancer survivors. Methods Planning CT scans from 6 female patients were available. They were part of a previous study of mediastinal IMRT for target volumes used in lymphoma treatment that included 8 patients, and represent all cases where the left anterior descending coronary artery (LAD) could be contoured. We compared 6 MV AP/PA opposed fields to a 3D conformal 4-field technique and an optimised 7-field step-and-shoot IMRT technique, and evaluated DVHs for several structures. The planning system was BrainSCAN 5.21 (BrainLAB, Heimstetten, Germany). Results IMRT maintained target volume coverage but resulted in better dose reduction to the heart, left ventricle and LAD than the other techniques. Selective dose reduction could be accomplished, although not to the degree initially attempted. The median LAD dose was approximately 50% lower with IMRT. In 5 out of 6 patients, IMRT was the best technique with regard to heart sparing. Conclusion IMRT techniques are able to reduce the radiation dose to the heart. In addition to dose reduction to the whole heart, individualised dose distributions can be created which spare, e.g., one ventricle plus one of the coronary arteries. Certain patients with well-defined vessel pathology might profit from an approach of general heart sparing with further selective dose reduction, accounting for the individual aspects of pre-existing damage. PMID:17547777
Suvarapu, Lakshmi Narayana; Baek, Sung-Ok
2015-01-01
This paper reviews the speciation and determination of mercury by various analytical techniques such as atomic absorption spectrometry, voltammetry, inductively coupled plasma techniques, spectrophotometry, spectrofluorometry, high performance liquid chromatography, and gas chromatography. Approximately 126 research papers on the speciation and determination of mercury by various analytical techniques published in international journals since 2013 are reviewed. PMID:26236539
Convergence of generalized MUSCL schemes
NASA Technical Reports Server (NTRS)
Osher, S.
1984-01-01
Semi-discrete generalizations of the second order extension of Godunov's scheme, known as the MUSCL scheme, are constructed, starting with any three point E scheme. They are used to approximate scalar conservation laws in one space dimension. For convex conservation laws, each member of a wide class is proven to be a convergent approximation to the correct physical solution. Comparison with another class of high resolution convergent schemes is made.
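As an illustrative sketch only (not the paper's construction, which starts from a general three-point E scheme), a minimal MUSCL-type second-order scheme for the convex scalar conservation law u_t + (u^2/2)_x = 0 can be written with minmod-limited slopes and a Godunov interface flux:

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: zero at extrema, the smaller-magnitude slope otherwise."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_step(u, dx, dt):
    """One forward-Euler step of a MUSCL scheme for Burgers' equation
    u_t + (u^2/2)_x = 0 with periodic boundaries and a Godunov flux."""
    s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))  # limited cell slopes
    uL = u + 0.5 * s                # left state at interface i+1/2
    uR = np.roll(u - 0.5 * s, -1)   # right state at interface i+1/2
    f = lambda v: 0.5 * v * v       # convex flux f(u) = u^2/2
    # Godunov flux: exact Riemann solution for a convex flux
    flux = np.where(uL > uR,
                    np.maximum(f(uL), f(uR)),                                # shock
                    np.where(uL > 0, f(uL), np.where(uR < 0, f(uR), 0.0)))   # rarefaction
    return u - dt / dx * (flux - np.roll(flux, 1))

# Propagate a smooth profile that steepens into a shock
N = 200
x = np.linspace(0, 1, N, endpoint=False)
u = 0.5 + 0.25 * np.sin(2 * np.pi * x)
dx, dt = 1.0 / N, 0.4 / N           # CFL = max|u|*dt/dx well below 1
for _ in range(100):
    u = muscl_step(u, dx, dt)
```

The conservative update preserves the mean exactly, and the limiter keeps the solution within its initial bounds even as the shock forms.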
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pribram-Jones, Aurora; Grabowski, Paul E.; Burke, Kieron
2016-06-08
The van Leeuwen proof of linear-response time-dependent density functional theory (TDDFT) is generalized to thermal ensembles. This allows generalization to finite temperatures of the Gross-Kohn relation, the exchange-correlation kernel of TDDFT, and the fluctuation-dissipation theorem for DFT. Finally, this produces a natural method for generating new thermal exchange-correlation approximations.
Generalized model of seismic pulse
NASA Astrophysics Data System (ADS)
Rabinovich, E. V.; Filipenko, N. Y.; Shefel, G. S.
2018-05-01
The paper presents a pulse model suitable for generalizing the known models of seismic pulses. It is shown that for each of the known models a highly accurate quadratic approximation can be obtained using the proposed model. As an example, a fragment of a real seismic trace is approximated with high accuracy by a set of pulses formed on the basis of the proposed model.
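As a hedged sketch only (the paper's actual pulse model is not reproduced here), the idea of a quadratic least-squares approximation can be illustrated on a standard seismic pulse, the Ricker (Mexican-hat) wavelet:

```python
import numpy as np

def ricker(t, f0=25.0):
    """Ricker (Mexican-hat) wavelet, a standard seismic pulse model; f0 in Hz."""
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# Quadratic least-squares approximation of the pulse near its peak
t = np.linspace(-0.005, 0.005, 201)      # 10 ms window around the peak
coeffs = np.polyfit(t, ricker(t), 2)     # returns [c2, c1, c0]
approx = np.polyval(coeffs, t)
rms_error = np.sqrt(np.mean((ricker(t) - approx) ** 2))
```

Near the peak the wavelet is dominated by its curvature, so a concave parabola (c2 < 0) reproduces it closely within the fitting window.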
James, Andrew I.; Jawitz, James W.; Munoz-Carpena, Rafael
2009-01-01
A model to simulate transport of materials in surface water and ground water has been developed to numerically approximate solutions to the advection-dispersion equation. This model, known as the Transport and Reaction Simulation Engine (TaRSE), uses an algorithm that incorporates a time-splitting technique where the advective part of the equation is solved separately from the dispersive part. An explicit finite-volume Godunov method is used to approximate the advective part, while a mixed-finite element technique is used to approximate the dispersive part. The dispersive part uses an implicit discretization, which allows it to run stably with a larger time step than the explicit advective step. The potential exists to develop algorithms that run several advective steps, and then one dispersive step that encompasses the time interval of the advective steps. Because the dispersive step is computationally most expensive, schemes can be implemented that are more computationally efficient than non-time-split algorithms. This technique enables scientists to solve problems with high grid Peclet numbers, such as transport problems with sharp solute fronts, without spurious oscillations in the numerical approximation to the solution and with virtually no artificial diffusion.
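The time-splitting idea can be sketched in one dimension (a simplified stand-in for TaRSE's finite-volume Godunov advection and mixed-finite-element dispersion, which are not reproduced here): an explicit upwind advection step followed by an implicit backward-Euler dispersion step.

```python
import numpy as np

def split_step(c, v, D, dx, dt):
    """One time-split step for c_t + v c_x = D c_xx (v > 0):
    explicit first-order upwind advection, then implicit (backward Euler)
    dispersion, which stays stable for time steps that would break an
    explicit diffusion update."""
    # -- advective part: first-order upwind (monotone, so no spurious oscillations)
    c = c - v * dt / dx * (c - np.roll(c, 1))
    c[0] = c[1]                      # crude zero-gradient inflow boundary
    # -- dispersive part: backward Euler, dense solve for clarity only
    n = len(c)
    r = D * dt / dx**2
    A = np.eye(n) * (1 + 2 * r)
    A += np.diag(np.full(n - 1, -r), 1) + np.diag(np.full(n - 1, -r), -1)
    A[0, 0] = A[-1, -1] = 1 + r      # zero-flux ends
    return np.linalg.solve(A, c)

# Transport a sharp solute front without over/undershoot
n = 100
c = np.zeros(n); c[:10] = 1.0
for _ in range(50):
    c = split_step(c, v=1.0, D=0.01, dx=1.0, dt=0.5)
```

Both sub-steps are monotone, so the concentration remains within its initial bounds [0, 1] as the front advects downstream.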
1983-03-21
zero, it is necessary that B_M(0) be nonzero. In the case considered here, B_M(0) is taken to be nonsingular and without loss of generality it may be set...452. (c.51 D. Levin, "General order Padé-type rational approximants defined from a double power series," J. Inst. Maths. Applics., 18, 1976, pp. 1-8...common zeros in the closed unit bidisc, U^2. The 2-D setting provides a nice theoretical framework for generalization of these stabilization results to
NASA Technical Reports Server (NTRS)
Poole, L. R.
1976-01-01
The Langley Research Center and Virginia Institute of Marine Science wave refraction computer model was applied to the Baltimore Canyon region of the mid-Atlantic continental shelf. Wave refraction diagrams for a wide range of normally expected wave periods and directions were computed by using three bottom topography approximation techniques: quadratic least squares, cubic least squares, and constrained bicubic interpolation. Mathematical or physical interpretation of certain features appearing in the computed diagrams is discussed.
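The first of the three techniques, quadratic least squares, amounts to fitting a six-term quadratic surface to scattered depth soundings. A minimal sketch (with hypothetical synthetic soundings, not the report's Baltimore Canyon data):

```python
import numpy as np

# Hypothetical depth soundings (x, y in km, z in m) on a gently curved shelf
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 10, 50), rng.uniform(0, 10, 50)
z_true = 20.0 + 1.5 * x - 0.8 * y + 0.05 * x**2 - 0.02 * x * y + 0.03 * y**2
z = z_true + rng.normal(0, 0.1, 50)          # measurement noise

# Quadratic least-squares surface: z ~ a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2
A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
depth_hat = A @ coef                          # smoothed depth at the soundings
```

The fitted surface is differentiable everywhere, which is what a refraction model needs to trace rays through the depth field; the cubic and bicubic variants add higher-order terms or piecewise continuity.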
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malin, Martha J.; Bartol, Laura J.; DeWerd, Larry A., E-mail: mmalin@wisc.edu, E-mail: ladewerd@wisc.edu
2015-05-15
Purpose: To investigate why dose-rate constants for {sup 125}I and {sup 103}Pd seeds computed using the spectroscopic technique, Λ{sub spec}, differ from those computed with standard Monte Carlo (MC) techniques. A potential cause of these discrepancies is the spectroscopic technique’s use of approximations of the true fluence distribution leaving the source, φ{sub full}. In particular, the fluence distribution used in the spectroscopic technique, φ{sub spec}, approximates the spatial, angular, and energy distributions of φ{sub full}. This work quantified the extent to which each of these approximations affects the accuracy of Λ{sub spec}. Additionally, this study investigated how the simplified water-only model used in the spectroscopic technique impacts the accuracy of Λ{sub spec}. Methods: Dose-rate constants as described in the AAPM TG-43U1 report, Λ{sub full}, were computed with MC simulations using the full source geometry for each of 14 different {sup 125}I and 6 different {sup 103}Pd source models. In addition, the spectrum emitted along the perpendicular bisector of each source was simulated in vacuum using the full source model and used to compute Λ{sub spec}. Λ{sub spec} was compared to Λ{sub full} to verify the discrepancy reported by Rodriguez and Rogers. Using MC simulations, a phase space of the fluence leaving the encapsulation of each full source model was created. The spatial and angular distributions of φ{sub full} were extracted from the phase spaces and were qualitatively compared to those used by φ{sub spec}. Additionally, each phase space was modified to reflect one of the approximated distributions (spatial, angular, or energy) used by φ{sub spec}. The dose-rate constant resulting from using approximated distribution i, Λ{sub approx,i}, was computed using the modified phase space and compared to Λ{sub full}.
For each source, this process was repeated for each approximation in order to determine which approximations used in the spectroscopic technique affect the accuracy of Λ{sub spec}. Results: For all sources studied, the angular and spatial distributions of φ{sub full} were more complex than the distributions used in φ{sub spec}. Differences between Λ{sub spec} and Λ{sub full} ranged from −0.6% to +6.4%, confirming the discrepancies found by Rodriguez and Rogers. The largest contribution to the discrepancy was the assumption of isotropic emission in φ{sub spec}, which caused differences in Λ of up to +5.3% relative to Λ{sub full}. Use of the approximated spatial and energy distributions caused smaller average discrepancies in Λ of −0.4% and +0.1%, respectively. The water-only model introduced an average discrepancy in Λ of −0.4%. Conclusions: The approximations used in φ{sub spec} caused discrepancies between Λ{sub approx,i} and Λ{sub full} of up to 7.8%. With the exception of the energy distribution, the approximations used in φ{sub spec} contributed to this discrepancy for all source models studied. To improve the accuracy of Λ{sub spec}, the spatial and angular distributions of φ{sub full} could be measured, with the measurements replacing the approximated distributions. The methodology used in this work could be used to determine the resolution that such measurements would require by computing the dose-rate constants from phase spaces modified to reflect φ{sub full} binned at different spatial and angular resolutions.
Yang, Weitao; Mori-Sánchez, Paula; Cohen, Aron J
2013-09-14
The exact conditions for density functionals and density matrix functionals in terms of fractional charges and fractional spins are known, and their violation in commonly used functionals has been shown to be the root of many major failures in practical applications. However, approximate functionals are designed for physical systems with integer charges and spins, not in terms of the fractional variables. Here we develop a general framework for extending approximate density functionals and many-electron theory to fractional-charge and fractional-spin systems. Our development allows for the fractional extension of any approximate theory that is a functional of G(0), the one-electron Green's function of the non-interacting reference system. The extension to fractional charge and fractional spin systems is based on the ensemble average of the basic variable, G(0). We demonstrate the fractional extension for the following theories: (1) any explicit functional of the one-electron density, such as the local density approximation and generalized gradient approximations; (2) any explicit functional of the one-electron density matrix of the non-interacting reference system, such as the exact exchange functional (or Hartree-Fock theory) and hybrid functionals; (3) many-body perturbation theory; and (4) random-phase approximations. A general rule for such an extension has also been derived through scaling the orbitals and should be useful for functionals where the link to the Green's function is not obvious. The development thus enables the examination of approximate theories against known exact conditions on the fractional variables and the analysis of their failures in chemical and physical applications in terms of violations of exact conditions of the energy functionals. 
The present work should facilitate the calculation of chemical potentials and fundamental bandgaps with approximate functionals and many-electron theories through the energy derivatives with respect to the fractional charge. It should play an important role in developing accurate approximate density functionals and many-body theory.
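The ensemble-average construction can be sketched compactly (our notation, paraphrasing the abstract rather than reproducing the paper's own equations):

```latex
% Fractional particle number N + \delta (0 \le \delta \le 1) is handled by
% an ensemble average of the non-interacting Green's function:
G^{0}_{N+\delta} = (1-\delta)\, G^{0}_{N} + \delta\, G^{0}_{N+1}.
% Any approximate functional E[G^{0}] is then evaluated on this averaged
% variable; the known exact condition on the energy is piecewise linearity
% in the fractional charge:
E(N+\delta) = (1-\delta)\, E(N) + \delta\, E(N+1),
% whose violation by an approximate functional signals delocalization error.
```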
Application of perturbation theory to lattice calculations based on method of cyclic characteristics
NASA Astrophysics Data System (ADS)
Assawaroongruengchot, Monchai
Perturbation theory is a technique used for estimating changes in performance functionals, such as linear reaction rate ratios and eigenvalues, caused by small variations in reactor core compositions. Here the algorithm of perturbation theory is developed for the multigroup integral neutron transport problems in 2D fuel assemblies with isotropic scattering. The integral transport equation is used in the perturbative formulation because it represents the interconnecting neutronic systems of the lattice assemblies via the tracking lines. When the integral neutron transport equation is used in the formulation, one needs to solve the resulting integral transport equations for the flux importance and generalized flux importance functions. The relationship between the generalized flux importance and generalized source importance functions is defined in order to transform the generalized flux importance transport equations into the integro-differential equations for the generalized adjoints. Next we develop the adjoint and generalized adjoint transport solution algorithms based on the method of cyclic characteristics (MOCC) in DRAGON code. In the MOCC method, the adjoint characteristics equations associated with a cyclic tracking line are formulated in such a way that a closed form for the adjoint angular function can be obtained. The MOCC method then requires only one cycle of scanning over the cyclic tracking lines in each spatial iteration. We also show that the source importance function by CP method is mathematically equivalent to the adjoint function by MOCC method. In order to speed up the MOCC solution algorithm, group-reduction and group-splitting techniques based on the structure of the adjoint scattering matrix are implemented.
A combined forward flux/adjoint function iteration scheme, based on the group-splitting technique and the common use of a large number of variables storing tracking-line data and exponential values, is proposed to reduce the computing time when both direct and adjoint solutions are required. A problem that arises for the generalized adjoint problem is that the direct use of the negative external generalized adjoint sources in the adjoint solution algorithm results in negative generalized adjoint functions. A coupled flux biasing/decontamination scheme is applied to make the generalized adjoint functions positive using the adjoint functions in such a way that it can be used for the multigroup rebalance technique. Next we consider the application of the perturbation theory to the reactor problems. Since the coolant void reactivity (CVR) is an important factor in reactor safety analysis, we have decided to select this parameter for optimization studies. We consider the optimization and adjoint sensitivity techniques for the adjustments of CVR at beginning of burnup cycle (BOC) and keff at end of burnup cycle (EOC) for a 2D Advanced CANDU Reactor (ACR) lattice. The sensitivity coefficients are evaluated using the perturbation theory based on the integral transport equations. Three sets of parameters for CVR-BOC and keff-EOC adjustments are studied: (1) Dysprosium density in the central pin with Uranium enrichment in the outer fuel rings, (2) Dysprosium density and Uranium enrichment both in the central pin, and (3) the same parameters as in the first case but the objective is to obtain a negative checkerboard CVR at beginning of cycle (CBCVR-BOC). To approximate the sensitivity coefficient at EOC, we perform constant-power burnup/depletion calculations for 600 full power days (FPD) using a slightly perturbed nuclear library and the unperturbed neutron fluxes to estimate the variation of nuclide densities at EOC.
Sensitivity analyses of CVR and eigenvalue are included in the study. In addition the optimization and adjoint sensitivity techniques are applied to the CBCVR-BOC and keff-EOC adjustment of the ACR lattices with Gadolinium in the central pin. Finally we apply these techniques to the CVR-BOC, CVR-EOC and keff-EOC adjustment of a CANDU lattice of which the burnup period is extended from 300 to 450 FPDs. The cases with the central pin containing either Dysprosium or Gadolinium in the natural Uranium are considered in our study. (Abstract shortened by UMI.)
CONTRIBUTIONS TO RATIONAL APPROXIMATION,
Some of the key results of linear Chebyshev approximation theory are extended to generalized rational functions. Prominent among these is Haar's theorem, which yields necessary and sufficient conditions for uniqueness. Some new results in the classic field of rational function Chebyshev approximation are presented. Furthermore, a Weierstrass-type theorem is proven for rational Chebyshev approximation. A characterization theorem for rational trigonometric Chebyshev approximation in terms of sign alternation is developed. (Author)
Application of geometric approximation to the CPMG experiment: Two- and three-site exchange.
Chao, Fa-An; Byrd, R Andrew
2017-04-01
The Carr-Purcell-Meiboom-Gill (CPMG) experiment is one of the most classical and well-known relaxation dispersion experiments in NMR spectroscopy, and it has been successfully applied to characterize biologically relevant conformational dynamics in many cases. Although the data analysis of the CPMG experiment for the 2-site exchange model can be facilitated by analytical solutions, data analysis with a more complex exchange model generally requires computationally intensive numerical analysis. Recently, a powerful computational strategy, geometric approximation, has been proposed to provide approximate numerical solutions for the adiabatic relaxation dispersion experiments where analytical solutions are neither available nor feasible. Here, we demonstrate the general potential of geometric approximation by providing a data analysis solution of the CPMG experiment for both the traditional 2-site model and a linear 3-site exchange model. The approximate numerical solution deviates less than 0.5% from the numerical solution on average, and the new approach is computationally 60,000-fold more efficient than the numerical approach. Moreover, we find that accurate dynamic parameters can be determined in most cases, and, for a range of experimental conditions, the relaxation can be assumed to follow mono-exponential decay. The method is general and applicable to any CPMG RD experiment (e.g., N, C′, Cα, Hα, etc.). The approach forms a foundation for building solution surfaces to analyze the CPMG experiment for different models of 3-site exchange. Thus, the geometric approximation is a general strategy to analyze relaxation dispersion data in any system (biological or chemical) if the appropriate library can be built in a physically meaningful domain. Published by Elsevier Inc.
Comparative In Situ Measurements of Plasma Instabilities in the Equatorial and Auroral Electrojets
NASA Technical Reports Server (NTRS)
Pfaff, Robert F.
2008-01-01
This presentation provides a comparison of in situ measurements of plasma instabilities gathered by rocket-borne probes in the equatorial and auroral electrojets. Specifically, using detailed measurements of the DC electric fields, current density, and plasma number density within the unstable daytime equatorial electrojet from Brazil (Guara Campaign) and in the auroral electrojet from Sweden (ERRIS Campaign), we present comparative observations and general conclusions regarding the observed physical properties of Farley-Buneman two-stream waves and large scale, gradient drift waves. The two-stream observations reveal coherent-like waves propagating near the E x B direction but at reduced speeds (nearer to the presumed acoustic velocity) with wavelengths of approximately 5-10 m in both the equatorial and auroral electrojet, as measured using the spaced-receiver technique. The auroral electrojet data generally show extensions to shorter wavelengths, in concert with the fact that these waves are driven harder. With respect to gradient-drift driven waves, observations of this instability are much more pronounced in the equatorial electrojet, given the more favorable geometry for growth provided by the vertical gradient and horizontal magnetic field lines. We present new analysis of Guara rocket observations of electric field and plasma density data that reveal considerable structuring in the middle and lower portion of the electrojet (90-105 km) where the ambient plasma density gradient is unstable. Although the electric field amplitudes are largest (approximately 10-15 mV/m) in the zonal direction, considerable structure (approximately 5-10 mV/m) is also observed in the vertical electric field component as well, implying that the dominant large scale waves involve significant vertical interaction and coupling within the narrow altitude range where they are observed.
Furthermore, a detailed examination of the phase of the waveforms show that on some, but not all occasions, locally enhanced eastward fields are associated with locally enhanced upwards (polarization) electric fields. The measurements are discussed in terms of theories involving the non-linear evolution and structuring of plasma waves.
UNAERO: A package of FORTRAN subroutines for approximating unsteady aerodynamics in the time domain
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1985-01-01
This report serves as an instruction and maintenance manual for a collection of CDC CYBER FORTRAN IV subroutines for approximating the unsteady aerodynamic forces in the time domain. The result is a set of constant-coefficient first-order differential equations that approximate the dynamics of the vehicle. Provisions are included for adjusting the number of modes used for calculating the approximations so that an accurate approximation is generated. The number of data points at different values of reduced frequency can also be varied to adjust the accuracy of the approximation over the reduced-frequency range. The denominator coefficients of the approximation may be calculated by means of a gradient method or a least-squares approximation technique. Both approximation methods use weights on the residual error. A new set of system equations, at a different dynamic pressure, can be generated without the approximations being recalculated.
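The least-squares stage of such a fit can be sketched for a Roger-type rational approximation of tabulated frequency-domain data Q(ik) (a generic form, not necessarily UNAERO's exact parameterization). The lag roots b are held fixed here, so the remaining coefficients follow from a linear least-squares solve; the report's gradient method for optimizing the denominator coefficients is not reproduced. The synthetic "data" below lie exactly in the model's span, so the fit recovers them; real tabulated forces would be matched only approximately.

```python
import numpy as np

k = np.linspace(0.01, 2.0, 40)      # reduced frequencies
s = 1j * k
# Synthetic aerodynamic transfer function with two lag terms
Q = 1.0 + 0.5 * s + 0.8 * s / (s + 0.2) - 0.3 * s / (s + 0.5)

b = np.array([0.2, 0.5])            # assumed (fixed) lag roots
# Model: Q(s) ~ c0 + c1*s + c2*s^2 + sum_j c_{3+j} * s/(s + b_j)
cols = [np.ones_like(s), s, s**2] + [s / (s + bj) for bj in b]
A = np.column_stack(cols)
# Stack real and imaginary parts so the unknown coefficients come out real
Ar = np.vstack([A.real, A.imag])
Qr = np.concatenate([Q.real, Q.imag])
coef, *_ = np.linalg.lstsq(Ar, Qr, rcond=None)
Q_fit = A @ coef
```

Each recovered lag term s/(s + b_j) corresponds to one constant-coefficient first-order differential equation in the resulting time-domain state-space model.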
Atomistic Modeling of Nanostructures via the BFS Quantum Approximate Method
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Garces, Jorge E.; Noebe, Ronald D.; Farias, D.
2003-01-01
Ideally, computational modeling techniques for nanoscopic physics would be able to perform free of limitations on the type and number of elements, while providing comparable accuracy when dealing with bulk or surface problems. Computational efficiency is also desirable, if not mandatory, for properly dealing with the complexity of typical nano-structured systems. A quantum approximate technique, the BFS method for alloys, which attempts to meet these demands, is introduced for the calculation of the energetics of nanostructures. The versatility of the technique is demonstrated through analysis of diverse systems, including multi-phase precipitation in a five-element Ni-Al-Ti-Cr-Cu alloy and the formation of mixed composition Co-Cu islands on a metallic Cu(111) substrate.
Formulation of aerodynamic prediction techniques for hypersonic configuration design
NASA Technical Reports Server (NTRS)
1979-01-01
An investigation of approximate theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at moderate hypersonic speeds was performed. Emphasis was placed on approaches that would be responsive to preliminary configuration design level of effort. Supersonic second order potential theory was examined in detail to meet this objective. Shock layer integral techniques were considered as an alternative means of predicting gross aerodynamic characteristics. Several numerical pilot codes were developed for simple three dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the second order computations indicated good agreement with higher order solutions and experimental results for a variety of wing-like shapes and values of the hypersonic similarity parameter Mδ approaching one.
Symbolic Execution Enhanced System Testing
NASA Technical Reports Server (NTRS)
Davies, Misty D.; Pasareanu, Corina S.; Raman, Vishwanath
2012-01-01
We describe a testing technique that uses information computed by symbolic execution of a program unit to guide the generation of inputs to the system containing the unit, in such a way that the unit's, and hence the system's, coverage is increased. The symbolic execution computes unit constraints at run-time, along program paths obtained by system simulations. We use machine learning techniques (treatment learning and function fitting) to approximate the system input constraints that will lead to the satisfaction of the unit constraints. Execution of system input predictions either uncovers new code regions in the unit under analysis or provides information that can be used to improve the approximation. We have implemented the technique and we have demonstrated its effectiveness on several examples, including one from the aerospace domain.
Testing approximations for non-linear gravitational clustering
NASA Technical Reports Server (NTRS)
Coles, Peter; Melott, Adrian L.; Shandarin, Sergei F.
1993-01-01
The accuracy of various analytic approximations for following the evolution of cosmological density fluctuations into the nonlinear regime is investigated. The Zel'dovich approximation is found to be consistently the best approximation scheme. It is extremely accurate for power spectra characterized by n = -1 or less; when the approximation is 'enhanced' by truncating highly nonlinear Fourier modes the approximation is excellent even for n = +1. The performance of linear theory is less spectrum-dependent, but this approximation is less accurate than the Zel'dovich one for all cases because of the failure to treat dynamics. The lognormal approximation generally provides a very poor fit to the spatial pattern.
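The Zel'dovich approximation moves particles on straight-line trajectories set by the initial displacement field, x(q, a) = q + D(a) ψ(q). A minimal one-dimensional sketch (illustrative only, not the paper's N-body comparison) shows the characteristic pile-up of particles into a caustic:

```python
import numpy as np

# 1D Zel'dovich approximation: Lagrangian coordinates q, growth factor D,
# single-mode displacement field psi. In 1D this is exact up to the first
# shell crossing, which occurs at D = 1 for this normalization.
q = np.linspace(0, 1, 1000, endpoint=False)    # equal-mass particles
psi = -np.sin(2 * np.pi * q) / (2 * np.pi)     # displacement field
for D in (0.2, 0.5, 1.0):                      # D = 1: caustic (pancake) forms
    x = (q + D * psi) % 1.0                    # Eulerian positions, periodic box
    rho, _ = np.histogram(x, bins=100, range=(0, 1))   # density by particle count
```

By D = 1 the Jacobian dx/dq = 1 - D cos(2πq) vanishes at q = 0, so particles pile up there and the binned density develops a sharp spike, the 1D analogue of pancake formation.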
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roseberry, R.J.
The experimental measurements and nuclear analysis of a uniformly loaded, unpoisoned slab core with a partially inserted hafnium rod and/or a partially inserted water gap are described. Comparisons of experimental data with calculated results of the UFO core and flux synthesis techniques are given. It is concluded that one of the flux synthesis techniques and the UFO code are able to predict flux distributions to within approximately -5% of experiment for most cases, with a maximum error of approximately -10% for a channel at the core-reflector boundary. The second synthesis technique failed to give comparable agreement with experiment even when various refinements were used, e.g. increasing the number of mesh points, performing the flux synthesis technique of iteration, and spectrum-weighting the appropriate calculated fluxes through the use of the SWAKRAUM code. These results are comparable to those reported in Part I of this study. (auth)
Presurgical cleft lip and palate orthopedics: an overview
Alzain, Ibtesam; Batwa, Waeil; Cash, Alex; Murshid, Zuhair A
2017-01-01
Patients with cleft lip and/or palate go through a lifelong journey of multidisciplinary care, starting from before birth and extending until adulthood. Presurgical orthopedic (PSO) treatment is one of the earliest stages of this care plan. In this paper we provide a review of the PSO treatment. This review should help general and specialist dentists to better understand the cleft patient care path and to be able to answer patient queries more efficiently. The objectives of this paper were to review the basic principles of PSO treatment, the various types of techniques used in this therapy, and the protocol followed, and to critically evaluate the advantages and disadvantages of some of these techniques. In conclusion, we believe that PSO treatment, specifically nasoalveolar molding, does help to approximate the segments of the cleft maxilla and does reduce the intersegment space in readiness for the surgical closure of cleft sites. However, what we remain unable to prove unequivocally at this point is whether the reduction in the dimensions of the cleft presurgically and the manipulation of the nasal complex benefit our patients in the long term. PMID:28615974
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armas-Perez, Julio C.; Londono-Hurtado, Alejandro; Guzman, Orlando
2015-07-27
A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.
Wang, Huanqing; Liu, Peter Xiaoping; Li, Shuai; Wang, Ding
2017-08-29
This paper presents the development of an adaptive neural controller for a class of nonlinear systems with unmodeled dynamics and immeasurable states. An observer is designed to estimate system states. The structure consistency of virtual control signals and the variable partition technique are combined to overcome the difficulties appearing in a nonlower triangular form. An adaptive neural output-feedback controller is developed based on the backstepping technique and the universal approximation property of the radial basis function (RBF) neural networks. By using the Lyapunov stability analysis, the semiglobal and uniform ultimate boundedness of all signals within the closed-loop system is guaranteed. The simulation results show that the controlled system converges quickly, and all the signals are bounded. This paper is novel in at least two aspects: 1) an output-feedback control strategy is developed for a class of nonlower triangular nonlinear systems with unmodeled dynamics and 2) the nonlinear disturbances and their bounds are the functions of all states, which is in a more general form than existing results.
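The universal approximation property invoked above means an unknown nonlinear function can be represented as W^T S(x) plus a small residual, where S(x) is a vector of radial basis functions. A minimal sketch (fitting the output weights offline by least squares, rather than by the paper's adaptive laws):

```python
import numpy as np

def rbf_features(x, centers, width=0.5):
    """Gaussian RBF feature matrix S(x): one column per center."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))

# An unknown smooth nonlinearity, standing in for unmodeled dynamics
x = np.linspace(-2, 2, 200)
f = np.sin(2 * x) + 0.3 * x**2

# Approximate f(x) ~ W^T S(x) with 15 evenly spaced Gaussian centers
centers = np.linspace(-2, 2, 15)
S = rbf_features(x, centers)
W, *_ = np.linalg.lstsq(S, f, rcond=None)
f_hat = S @ W
max_err = np.max(np.abs(f_hat - f))
```

In the adaptive-control setting the weights W are instead updated online by adaptation laws derived from the Lyapunov analysis; the least-squares fit here only illustrates that the RBF basis can approximate the nonlinearity to small error on a compact set.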
Fast Learning for Immersive Engagement in Energy Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, Brian W; Bugbee, Bruce; Gruchalla, Kenny M
Fast computation, which is critical for immersive engagement with and learning from energy simulations, would be furthered by developing a general method for creating rapidly computed simplified versions of NREL's computation-intensive energy simulations. Created using machine learning techniques, these 'reduced form' simulations can provide statistically sound estimates of the results of the full simulations at a fraction of the computational cost with response times - typically less than one minute of wall-clock time - suitable for real-time human-in-the-loop design and analysis. Additionally, uncertainty quantification techniques can document the accuracy of the approximate models and their domain of validity. Approximation methods are applicable to a wide range of computational models, including supply-chain models, electric power grid simulations, and building models. These reduced-form representations cannot replace or re-implement existing simulations, but instead supplement them by enabling rapid scenario design and quality assurance for large sets of simulations. We present an overview of the framework and methods we have implemented for developing these reduced-form representations.
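The reduced-form idea can be sketched with a toy example (the simulation, its inputs, and the surrogate form below are all hypothetical, not NREL's models): sample an "expensive" simulation on a design space, fit a cheap regression surrogate, and quantify its error on held-out runs.

```python
import numpy as np

def expensive_simulation(load, price):
    """Hypothetical stand-in for a computation-intensive energy model."""
    return 10.0 + 3.0 * load + 0.5 * load**2 - 2.0 * price + 0.1 * load * price

# Sample the design space (200 runs of the "full" simulation)
rng = np.random.default_rng(1)
X = rng.uniform(0, 5, size=(200, 2))
y = np.array([expensive_simulation(l, p) for l, p in X])

# Reduced-form model: quadratic response surface fitted on 150 training runs
l, p = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(l), l, p, l**2, l * p, p**2])
beta, *_ = np.linalg.lstsq(A[:150], y[:150], rcond=None)

# Held-out error on the remaining 50 runs, a stand-in for uncertainty quantification
holdout_err = np.max(np.abs(A[150:] @ beta - y[150:]))
```

Once fitted, evaluating the surrogate is a single matrix-vector product, fast enough for the real-time human-in-the-loop exploration the abstract describes, while the held-out error documents its domain of validity.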
Deep Learning for Flow Sculpting: Insights into Efficient Learning using Scientific Simulation Data
Stoecklein, Daniel; Lore, Kin Gwn; Davies, Michael; Sarkar, Soumik; Ganapathysubramanian, Baskar
2017-01-01
A new technique for shaping microfluidic flow, known as flow sculpting, offers an unprecedented level of passive fluid flow control, with potential breakthrough applications in advancing manufacturing, biology, and chemistry research at the microscale. However, efficiently solving the inverse problem of designing a flow sculpting device for a desired fluid flow shape remains a challenge. Current approaches struggle with the many-to-one design space, requiring substantial user interaction and intuition-building, both of which are time- and resource-intensive. Deep learning has emerged as an efficient function approximation technique for high-dimensional spaces, and presents a fast solution to the inverse problem, yet the science of its implementation in similarly defined problems remains largely unexplored. We propose that deep learning methods can completely outpace current approaches for scientific inverse problems while delivering comparable designs. To this end, we show how intelligent sampling of the design space inputs can make deep learning methods more competitive in accuracy, while illustrating their generalization capability to out-of-sample predictions. PMID:28402332
NASA Technical Reports Server (NTRS)
Moes, Timothy R.; Iliff, Kenneth
2002-01-01
A maximum-likelihood output-error parameter estimation technique is used to obtain stability and control derivatives for the NASA Dryden Flight Research Center SR-71A airplane and for configurations that include experiments externally mounted to the top of the fuselage. This research is being done as part of the envelope clearance for the new experiment configurations. Flight data are obtained at speeds ranging from Mach 0.4 to Mach 3.0, with numerous test points at approximately Mach 1.0. Pilot-input pitch and yaw-roll doublets are used to obtain the data. This report defines the parameter estimation technique used, presents stability and control derivative results, and compares the derivatives for the three configurations tested. The experimental configurations studied generally show acceptable stability, control, trim, and handling qualities throughout the Mach regimes tested. The reduction of directional stability for the experimental configurations is the most significant aerodynamic effect measured and identified as a design constraint for future experimental configurations. This report also shows the significant effects of aircraft flexibility on the stability and control derivatives.
Aquino, Arturo; Gegundez-Arias, Manuel Emilio; Marin, Diego
2010-11-01
Optic disc (OD) detection is an important step in developing systems for automated diagnosis of various serious ophthalmic pathologies. This paper presents a new template-based methodology for segmenting the OD from digital retinal images. This methodology uses morphological and edge detection techniques followed by the Circular Hough Transform to obtain a circular OD boundary approximation. It requires a pixel located within the OD as initial information. For this purpose, a location methodology based on a voting-type algorithm is also proposed. The algorithms were evaluated on the 1200 images of the publicly available MESSIDOR database. The location procedure succeeded in 99% of cases, taking an average computational time of 1.67 s with a standard deviation of 0.14 s. On the other hand, the segmentation algorithm rendered an average common-area overlap between automated segmentations and true OD regions of 86%. The average computational time was 5.69 s with a standard deviation of 0.54 s. Moreover, a discussion of the advantages and disadvantages of the models most commonly used for OD segmentation is also presented in this paper.
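The Circular Hough Transform at the core of the segmentation step can be illustrated on a synthetic edge map: each edge pixel votes for candidate circle centers at every trial radius, and the accumulator peak gives the circle parameters. This is a generic sketch of the transform, not the authors' full pipeline (which adds morphology and an OD-interior seed pixel); the image and circle here are invented test data.

```python
import numpy as np

# Synthetic edge map: a circle of radius 12 centered at (30, 40) in a 64x64 image.
H, W, cy0, cx0, r0 = 64, 64, 30, 40, 12
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
edges = np.zeros((H, W), dtype=bool)
edges[np.round(cy0 + r0 * np.sin(theta)).astype(int),
      np.round(cx0 + r0 * np.cos(theta)).astype(int)] = True

# Circular Hough Transform: every edge pixel votes for the centers of all
# circles (over a range of trial radii) that could pass through it.
radii = np.arange(8, 17)
acc = np.zeros((len(radii), H, W))
ey, ex = np.nonzero(edges)
for i, r in enumerate(radii):
    cy = np.round(ey[:, None] - r * np.sin(theta)[None, :]).astype(int)
    cx = np.round(ex[:, None] - r * np.cos(theta)[None, :]).astype(int)
    ok = (cy >= 0) & (cy < H) & (cx >= 0) & (cx < W)
    np.add.at(acc[i], (cy[ok], cx[ok]), 1)

# The accumulator peak recovers the circle parameters.
ri, py, px = np.unravel_index(np.argmax(acc), acc.shape)
print(f"detected center=({py}, {px}), radius={radii[ri]}")
```

On real retinal images the edge map is noisy and the OD boundary is only approximately circular, which is why the peak provides a boundary *approximation* rather than an exact segmentation.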
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warpinski, N.R.
At present, the only viable technique for accurately measuring stresses at depth in a borehole is hydraulic fracturing. Such tests have been termed microfracs because very small amounts of fluid are injected at low flow rates into the formation. When the well is shut in, the pressure immediately drops from the injection pressure to the instantaneous shut-in pressure (ISIP), which is approximately equal to sigma/sub min/. In general, the ISIP can be measured quite accurately in open holes. For most oil and gas applications, however, it is impossible or impractical to conduct these tests in an open-hole environment. The effects of the casing, cement annulus, explosive perforation damage, and random perforation orientation are impossible to predict theoretically, and laboratory tests are usually conducted under nonrealistic conditions. A set of in situ experiments was conducted to evaluate the accuracy and reliability of this technique, to aid in the selection of an optimum perforation schedule, and to develop a diagnostic capability from the pressure response.
Transition-Independent Decentralized Markov Decision Processes
NASA Technical Reports Server (NTRS)
Becker, Raphen; Zilberstein, Shlomo; Lesser, Victor; Goldman, Claudia V.; Morris, Robert (Technical Monitor)
2003-01-01
There has been substantial progress with formal models for sequential decision making by individual agents using the Markov decision process (MDP). However, similar treatment of multi-agent systems is lacking. A recent complexity result, showing that solving decentralized MDPs is NEXP-hard, provides a partial explanation. To overcome this complexity barrier, we identify a general class of transition-independent decentralized MDPs that is widely applicable. The class consists of independent collaborating agents that are tied together by a global reward function that depends on both of their histories. We present a novel algorithm for solving this class of problems and examine its properties. The result is the first effective technique to optimally solve a class of decentralized MDPs. This lays the foundation for further work in this area on both exact and approximate solutions.
Quantum speedup of Monte Carlo methods.
Montanaro, Ashley
2015-09-08
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
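The classical baseline that the quantum algorithm above quadratically improves is plain Monte Carlo mean estimation: the RMS error of an n-sample estimate of a bounded-variance quantity scales as O(1/sqrt(n)), so additive error eps costs O(1/eps^2) samples classically versus roughly O(1/eps) quantumly. A minimal classical sketch (the subroutine v = x^2 with x uniform is an invented example):

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_estimate(n):
    # Randomized subroutine with bounded variance: v = x^2, x ~ U(0, 1).
    # True mean is E[x^2] = 1/3; Var[x^2] = 4/45 is bounded.
    x = rng.uniform(0.0, 1.0, n)
    return np.mean(x ** 2)

# Classical RMS error shrinks like 1/sqrt(n); Montanaro's quantum algorithm
# achieves additive error eps with roughly O(1/eps) quantum samples instead
# of the classical O(1/eps^2).
for n in (100, 10_000, 1_000_000):
    errs = [abs(mc_estimate(n) - 1.0 / 3.0) for _ in range(20)]
    print(n, float(np.mean(errs)))
```

The quantum speedup itself requires amplitude-estimation machinery and cannot be reproduced with classical sampling; the point of the sketch is only the classical error scaling being improved upon.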
Spectral analysis of the turbulent mixing of two fluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steinkamp, M.J.
1996-02-01
The authors describe a spectral approach to the investigation of fluid instability, generalized turbulence, and the interpenetration of fluids across an interface. The technique also applies to a single fluid with large variations in density. Departures of fluctuating velocity components from the local mean are far subsonic, but the mean Mach number can be large. Validity of the description is demonstrated by comparisons with experiments on turbulent mixing due to the late stages of Rayleigh-Taylor instability, when the dynamics become approximately self-similar in response to a constant body force. Generic forms for anisotropic spectral structure are described and used as a basis for deriving spectrally integrated moment equations that can be incorporated into computer codes for scientific and engineering analyses.
Counting Unfolding Events in Stretched Helices with Induced Oscillation by Optical Tweezers
NASA Astrophysics Data System (ADS)
Bacabac, Rommel Gaud; Otadoy, Roland
Correlation measures based on embedded probe fluctuations, single or paired, are now widely used for characterizing the viscoelastic properties of biological samples. However, more robust applications using this technique are still lacking. Considering that the study of living matter routinely demonstrates new and complex phenomena, mathematical and experimental tools for analysis have to catch up in order to arrive at newer insights. Therefore, we derive ways of probing non-equilibrium events in helical biopolymers stretched beyond thermal forces. We generalize, for the first time, calculations for winding turn probabilities to account for unfolding events in single fibrous biopolymers and globular proteins under tensile stretching using twin optical traps. The approach is based on approximating the ensuing probe fluctuations as originating from a damped harmonic oscillator under oscillatory forcing.
Manning, Timmy; Sleator, Roy D; Walsh, Paul
2014-01-01
Artificial neural networks (ANNs) are a class of powerful machine learning models for classification and function approximation which have analogs in nature. An ANN learns to map stimuli to responses through repeated evaluation of exemplars of the mapping. This learning approach results in networks which are recognized for their noise tolerance and ability to generalize meaningful responses for novel stimuli. It is these properties of ANNs which make them appealing for applications to bioinformatics problems where interpretation of data may not always be obvious, and where the domain knowledge required for deductive techniques is incomplete or can cause a combinatorial explosion of rules. In this paper, we provide an introduction to artificial neural network theory and review some interesting recent applications to bioinformatics problems.
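The function-approximation property of ANNs described above can be demonstrated in miniature: a single hidden layer of nonlinear units, combined linearly, can fit a smooth target function. For a deterministic, self-contained sketch we fix random hidden weights and solve the output weights in closed form (an "extreme learning machine"-style shortcut, assumed here for brevity rather than the full backpropagation training the review discusses).

```python
import numpy as np

rng = np.random.default_rng(0)

# Target mapping to learn: y = sin(2*pi*x) on [0, 1].
x = np.linspace(0.0, 1.0, 100)[:, None]
y = np.sin(2 * np.pi * x)

# One hidden layer of tanh units with random, fixed input weights.
n_hidden = 40
W = rng.normal(0.0, 8.0, (1, n_hidden))   # input-to-hidden weights
b = rng.uniform(-8.0, 8.0, n_hidden)      # hidden biases
H = np.tanh(x @ W + b)                    # hidden-layer activations

# Fit the hidden-to-output weights by linear least squares.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = H @ beta
print(f"max abs fit error on training grid: {np.max(np.abs(y_hat - y)):.4f}")
```

A full ANN would instead learn W and b too, by repeated evaluation of exemplars as the review describes; the sketch only shows why a layer of nonlinear units is a flexible basis for function approximation.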
The unsaturated flow in porous media with dynamic capillary pressure
NASA Astrophysics Data System (ADS)
Milišić, Josipa-Pina
2018-05-01
In this paper we consider a degenerate pseudoparabolic equation for the wetting saturation of an unsaturated two-phase flow in porous media with dynamic capillary pressure-saturation relationship where the relaxation parameter depends on the saturation. Following the approach given in [13] the existence of a weak solution is proved using Galerkin approximation and regularization techniques. A priori estimates needed for passing to the limit when the regularization parameter goes to zero are obtained by using appropriate test-functions, motivated by the fact that considered PDE allows a natural generalization of the classical Kullback entropy. Finally, a special care was given in obtaining an estimate of the mixed-derivative term by combining the information from the capillary pressure with the obtained a priori estimates on the saturation.
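The Galerkin approximation used in the existence proof above can be illustrated on a much simpler elliptic model problem: project the equation onto a finite basis and solve for the coefficients. Here we take -u'' = f on [0, 1] with homogeneous Dirichlet conditions and a sine basis (a standard textbook setting, not the paper's degenerate pseudoparabolic system).

```python
import numpy as np

# Galerkin sketch for -u'' = f on [0,1], u(0) = u(1) = 0, with basis
# phi_k(x) = sin(k*pi*x). The stiffness form is diagonal in this basis:
# a(phi_k, phi_k) = (k*pi)^2 / 2, so each coefficient solves independently.
f = lambda x: np.pi ** 2 * np.sin(np.pi * x)   # manufactured so u = sin(pi*x)
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]

N = 5
coeffs = []
for k in range(1, N + 1):
    phi = np.sin(k * np.pi * x)
    load = np.sum(f(x) * phi) * dx             # (f, phi_k), integrand vanishes at endpoints
    coeffs.append(load / ((k * np.pi) ** 2 / 2))

u_h = sum(c * np.sin(k * np.pi * x) for k, c in zip(range(1, N + 1), coeffs))
err = np.max(np.abs(u_h - np.sin(np.pi * x)))
print(f"max error vs exact solution: {err:.2e}")
```

For the degenerate pseudoparabolic case the same projection idea applies mode by mode in time, but the a priori estimates and the passage to the limit require the entropy-type test functions the paper constructs.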
NASA Astrophysics Data System (ADS)
Pan, Jun-Yang; Xie, Yi
2015-02-01
With tremendous advances in modern techniques, Einstein's general relativity has become an inevitable part of deep space missions. We investigate the relativistic algorithm for time transfer between the proper time τ of the onboard clock and the Geocentric Coordinate Time, which extends some previous works by including the effects of propagation of electromagnetic signals. In order to evaluate the implicit algebraic equations and integrals in the model, we take an analytic approach to work out their approximate values. This analytic model might be used in an onboard computer because of its limited capability to perform calculations. Taking an orbiter like Yinghuo-1 as an example, we find that the contributions of the Sun, the ground station and the spacecraft dominate the outcomes of the relativistic corrections to the model.
NASA Technical Reports Server (NTRS)
Hilsenrath, E.; Kirschner, P. T.
1980-01-01
The chemiluminescent rocket ozonesonde utilizing rhodamine-B as a detector and self-pumping for air sampling has been improved. The instrument employs standard meteorological sounding systems and is the only technique available for routine nighttime ozone measurements above balloon altitudes. The chemiluminescent detector, when properly calibrated, is shown to be specific to ozone, stable, and of sufficient sensitivity for accurate measurements of ozone from about 65-20 km. An error analysis indicates that the measured ozone profiles have an absolute accuracy of about ±12% and a precision of about ±6%. Approximately 20 flights have been conducted for geophysical investigations, while additional flights were conducted with other rocket and satellite ozone soundings for comparisons. In general, these comparisons showed good agreement.
Du, Shouqiang; Chen, Miao
2018-01-01
We consider a kind of nonsmooth optimization problem with [Formula: see text]-norm minimization, which has many applications in compressed sensing, signal reconstruction, and related engineering problems. Using smoothing approximation techniques, this kind of nonsmooth optimization problem can be transformed into a general unconstrained optimization problem, which can be solved by the proposed smoothing modified three-term conjugate gradient method. The smoothing modified three-term conjugate gradient method is based on the Polak-Ribière-Polyak conjugate gradient method. Because the Polak-Ribière-Polyak conjugate gradient method has good numerical properties, the proposed method possesses the sufficient descent property without any line search, and it is also proved to be globally convergent. Finally, the numerical experiments show the efficiency of the proposed method.
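The smoothing step above can be made concrete: replace each nonsmooth term |t| by the smooth surrogate sqrt(t^2 + mu^2), after which any smooth unconstrained solver applies. The sketch below uses plain fixed-step gradient descent on a small l1-regularized least-squares instance; the paper's actual solver is a modified three-term PRP conjugate gradient method, and the problem data here are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Smoothed l1 problem: minimize 0.5*||Ax - b||^2 + lam * sum_i sqrt(x_i^2 + mu^2).
A = rng.normal(size=(20, 8))
b = rng.normal(size=20)
lam, mu = 0.1, 1e-2

def grad(x):
    # Gradient of the smoothed objective; the second term approximates
    # the l1 subgradient sign(x) as x / sqrt(x^2 + mu^2).
    return A.T @ (A @ x - b) + lam * x / np.sqrt(x ** 2 + mu ** 2)

# Fixed step 1/L, with L a Lipschitz bound on the gradient:
# ||A||_2^2 from the quadratic term, lam/mu from the smoothed l1 term.
L = np.linalg.norm(A, 2) ** 2 + lam / mu
x = np.zeros(8)
for _ in range(5000):
    x -= grad(x) / L

print("final gradient norm:", float(np.linalg.norm(grad(x))))
```

As mu shrinks the smoothed problem tracks the original nonsmooth one more closely, at the price of a larger Lipschitz constant (and hence smaller steps); the conjugate gradient machinery in the paper is aimed at exactly this trade-off.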
Controller Synthesis for Periodically Forced Chaotic Systems
NASA Astrophysics Data System (ADS)
Basso, Michele; Genesio, Roberto; Giovanardi, Lorenzo
Delayed feedback controllers are an appealing tool for stabilization of periodic orbits in chaotic systems. Despite their conceptual simplicity, specific and reliable design procedures are difficult to obtain, partly also because of their inherent infinite-dimensional structure. This chapter considers the use of finite dimensional linear time invariant controllers for stabilization of periodic solutions in a general class of sinusoidally forced nonlinear systems. For such controllers — which can be interpreted as rational approximations of the delayed ones — we provide a computationally attractive synthesis technique based on Linear Matrix Inequalities (LMIs), by mixing results concerning absolute stability of nonlinear systems and robustness of uncertain linear systems. The resulting controllers prove to be effective for chaos suppression in electronic circuits and systems, as shown by two different application examples.
NASA Astrophysics Data System (ADS)
Yarmohammadi, Mohsen
2016-12-01
Using the Harrison model and Green's function technique, impurity doping effects on the orbital density of states (DOS), electronic heat capacity (EHC) and magnetic susceptibility (MS) of a monolayer hydrogenated graphene, chair-like graphane, are investigated. The effect of scattering between electrons and dilute charged impurities is discussed in terms of the self-consistent Born approximation. Our results show that graphane is a semiconductor and its band gap decreases with impurity doping. Remarkably, the EHC rises almost linearly toward a Schottky anomaly and remains essentially unchanged at low temperatures in the presence of impurities. Generally, EHC and MS increase with impurity doping. Surprisingly, impurity doping affects only the salient behavior of the p_y orbital contribution of the carbon atoms, owing to symmetry breaking.
Bendifallah, Sofiane; Ballester, Marcos; Darai, Emile
2017-12-01
Endometriosis is a benign pathology that affects 3% of the general population and about 10% of women of reproductive age. Three anatomoclinical entities are described: peritoneal, ovarian (endometrioma) and deep endometriosis characterized by the infiltration of anatomical structures or organs beyond the peritoneum. Laparoscopic surgery should be performed, as this is associated with a reduction in postoperative complications, length of hospitalization and convalescence. Several surgical techniques allow the removal of deep endometriosis with colorectal involvement: rectal shaving, anterior discoid resection, segmental resection. Deep endometriosis surgery with colorectal involvement is a source of postoperative complications: anastomotic fistula, rectovaginal fistula, intestinal occlusion, digestive haemorrhage, urinary fistula, deep pelvic abscess. Involvement of the urinary tract by endometriosis affects approximately 1% of patients with endometriosis.
Full-order optimal compensators for flow control: the multiple inputs case
NASA Astrophysics Data System (ADS)
Semeraro, Onofrio; Pralits, Jan O.
2018-03-01
Flow control has been the subject of numerous experimental and theoretical works. We analyze full-order, optimal controllers for large dynamical systems in the presence of multiple actuators and sensors. The full-order controllers do not require any preliminary model reduction or low-order approximation: this feature allows us to assess the optimal performance of an actuated flow without relying on any estimation process or further hypothesis on the disturbances. We start from the original technique proposed by Bewley et al. (Meccanica 51(12):2997-3014, 2016. https://doi.org/10.1007/s11012-016-0547-3), the adjoint of the direct-adjoint (ADA) algorithm. The algorithm is iterative and allows bypassing the solution of the algebraic Riccati equation associated with the optimal control problem, typically infeasible for large systems. In this numerical work, we extend the ADA iteration into a more general framework that includes the design of controllers with multiple, coupled inputs and robust controllers (H_{∞} methods). First, we demonstrate our results by showing the analytical equivalence between the full Riccati solutions and the ADA approximations in the multiple inputs case. In the second part of the article, we analyze the performance of the algorithm in terms of convergence of the solution, by comparing it with analogous techniques. We find an excellent scalability with the number of inputs (actuators), making the method a viable way for full-order control design in complex settings. Finally, the applicability of the algorithm to fluid mechanics problems is shown using the linearized Kuramoto-Sivashinsky equation and the Kármán vortex street past a two-dimensional cylinder.
NASA Astrophysics Data System (ADS)
Resmini, Ronald G.; Graver, William R.; Kappus, Mary E.; Anderson, Mark E.
1996-11-01
Constrained energy minimization (CEM) has been applied to the mapping of the quantitative areal distribution of the mineral alunite in an approximately 1.8 km2 area of the Cuprite mining district, Nevada. CEM is a powerful technique for rapid quantitative mineral mapping which requires only the spectrum of the mineral to be mapped. A priori knowledge of background spectral signatures is not required. Our investigation applies CEM to calibrated radiance data converted to apparent reflectance (AR) and to single scattering albedo (SSA) spectra. The radiance data were acquired by the 210 channel, 0.4 micrometers to 2.5 micrometers airborne Hyperspectral Digital Imagery Collection Experiment sensor. CEM applied to AR spectra assumes linear mixing of the spectra of the materials exposed at the surface. This assumption is likely invalid because surface materials, which are often mixtures of particulates of different substances, are more properly modeled as intimate mixtures, and thus spectral mixing analyses must take account of nonlinear effects. One technique for approximating nonlinear mixing requires the conversion of AR spectra to SSA spectra. The results of CEM applied to SSA spectra are compared to those of CEM applied to AR spectra. The mapped occurrence of alunite is similar, though not identical, between the SSA- and AR-derived mineral maps. Alunite is slightly more widespread based on processing with the SSA spectra. Further, fractional abundances derived from the SSA spectra are, in general, higher than those derived from AR spectra. Implications for the interpretation of quantitative mineral mapping with hyperspectral remote sensing data are discussed.
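The CEM filter referred to above has a closed form: given the target spectrum d and the sample correlation matrix R of the scene, the filter w = R^-1 d / (d^T R^-1 d) minimizes average output energy subject to a unit response on d, so target pixels score near their target fraction while the (unknown) background is suppressed. The sketch below applies it to invented synthetic pixels, not the HYDICE Cuprite data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic scene: 500 pixels x 30 bands. Most pixels are noisy scalings of
# one background spectrum; the first 25 contain 90% target spectrum d.
bands, n_pix = 30, 500
d = np.abs(rng.normal(size=bands)); d /= np.linalg.norm(d)       # target spectrum
bg = np.abs(rng.normal(size=bands)); bg /= np.linalg.norm(bg)    # background spectrum
X = np.outer(rng.uniform(0.5, 1.5, n_pix), bg) + 0.02 * rng.normal(size=(n_pix, bands))
X[:25] = 0.9 * d + 0.1 * bg + 0.02 * rng.normal(size=(25, bands))

# CEM filter: w = R^-1 d / (d^T R^-1 d); note only d is needed a priori,
# since R is estimated from the image itself.
R = X.T @ X / n_pix
w = np.linalg.solve(R, d)
w /= d @ w

scores = X @ w
print("mean target-pixel score:    ", float(scores[:25].mean()))
print("mean background-pixel score:", float(scores[25:].mean()))
```

The unit constraint w^T d = 1 is why CEM scores approximate fractional abundance under a linear mixing assumption; the paper's AR-versus-SSA comparison probes what happens when that linearity fails.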
Kramers problem: Numerical Wiener-Hopf-like model characteristics
NASA Astrophysics Data System (ADS)
Ezin, A. N.; Samgin, A. L.
2010-11-01
Since the Kramers problem cannot, in general, be solved in terms of elementary functions, various numerical techniques or approximate methods must be employed. We present a study of characteristics for a particle in a damped well, which can be considered as a discretized version of the Melnikov [Phys. Rev. E 48, 3271 (1993)]10.1103/PhysRevE.48.3271 turnover theory. The main goal is to justify the direct computational scheme for the basic Wiener-Hopf model. In contrast to the Melnikov approach, which implements factorization through a Cauchy-theorem-based formulation, we employ the Wiener-Levy theorem to reduce the Kramers problem to a Wiener-Hopf sum equation written in terms of Toeplitz matrices. The latter can provide a stringent test of the reliability of analytic approximations for energy distribution functions occurring in the Kramers problem at arbitrary damping. For certain conditions, the simulated characteristics compare well with those determined using the conventional Fourier-integral formulas, but sometimes may differ slightly depending on the value of a dissipation parameter. Another important feature is that, with our method, we can avoid some complications inherent to the Melnikov method. The calculational technique reported in the present paper may gain particular importance in situations where the energy losses of the particle to the bath are a complex-shaped function of the particle energy and analytic solutions of desired accuracy are not at hand. In order to appreciate more readily the significance and scope of the present numerical approach, we also discuss concrete aspects relating to the field of superionic conductors.
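The reduction to Toeplitz matrices makes the Wiener-Hopf sum equation directly solvable by standard linear algebra: discretizing sum_j K(i - j) f_j = g_i over a finite index range yields a matrix whose entries depend only on i - j. The sketch below solves such a system for a hypothetical exponentially decaying kernel; it illustrates the Toeplitz structure only, not Melnikov's specific energy-loss kernel or the Wiener-Levy factorization.

```python
import numpy as np

# Discretized Wiener-Hopf-type sum equation: sum_j K(i - j) f_j = g_i for
# i, j = 0..n-1. K here is a hypothetical exponentially decaying kernel,
# which is positive definite, so the Toeplitz matrix is well conditioned.
n = 64
k = np.arange(-(n - 1), n)
K = np.exp(-np.abs(k) / 2.0)               # kernel values K(-n+1), ..., K(n-1)
T = np.array([[K[(i - j) + n - 1] for j in range(n)] for i in range(n)])

# Right-hand side: a smooth pulse standing in for a source term.
g = np.exp(-((np.arange(n) - n / 2) ** 2) / 50.0)
f = np.linalg.solve(T, g)                  # energy-distribution-function analog

residual = np.max(np.abs(T @ f - g))
print(f"max residual of the Toeplitz solve: {residual:.2e}")
```

For large n, specialized Toeplitz solvers (Levinson recursion and relatives) reduce the cost from O(n^3) to O(n^2) or better, which is one practical advantage of casting the problem in this form.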