Sample records for method requires approximately

  1. Neural Network and Regression Approximations in High Speed Civil Transport Aircraft Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.

    1998-01-01

    Nonlinear mathematical-programming-based design optimization can be an elegant method. However, the calculations required to generate the merit function, constraints, and their gradients, which are frequently required, can make the process computationally intensive. The computational burden can be greatly reduced by using approximating analyzers derived from an original analyzer utilizing neural networks and linear regression methods. The experience gained from using both of these approximation methods in the design optimization of a high speed civil transport aircraft is the subject of this paper. The Langley Research Center's Flight Optimization System was selected for the aircraft analysis. This software was exercised to generate a set of training data with which a neural network and a regression method were trained, thereby producing the two approximating analyzers. The derived analyzers were coupled to the Lewis Research Center's CometBoards test bed to provide the optimization capability. With the combined software, both approximation methods were examined for use in aircraft design optimization, and both performed satisfactorily. The CPU time for solution of the problem, which had been measured in hours, was reduced to minutes with the neural network approximation and to seconds with the regression method. Instability encountered in the aircraft analysis software at certain design points was also eliminated. On the other hand, there were costs and difficulties associated with training the approximating analyzers. The CPU time required to generate the input-output pairs and to train the approximating analyzers was seven times that required for solution of the problem.
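    The workflow this abstract describes (sample the expensive analyzer, fit a cheap approximator, optimize against the approximator) can be sketched compactly. Below is a minimal illustration, assuming a hypothetical `expensive_analyzer` in place of FLOPS and a quadratic least-squares fit in place of the paper's regression and neural-network analyzers:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for an expensive analysis code such as FLOPS:
# maps a 2-D design vector to a merit value (illustration only).
def expensive_analyzer(x):
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2 + 0.1 * x[0] * x[1]

# Step 1: exercise the analyzer to generate input-output training pairs.
rng = np.random.default_rng(0)
X_train = rng.uniform(-5.0, 5.0, size=(200, 2))
y_train = np.array([expensive_analyzer(x) for x in X_train])

# Step 2: train a cheap regression approximator (full quadratic basis).
def quad_features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

coef, *_ = np.linalg.lstsq(quad_features(X_train), y_train, rcond=None)

def surrogate(x):
    return (quad_features(np.atleast_2d(x)) @ coef)[0]

# Step 3: run the optimizer against the fast approximator instead of the
# original analyzer; each merit evaluation is now trivially cheap.
result = minimize(surrogate, x0=np.zeros(2))
print("approximate optimum:", result.x)
```

    As in the paper, the cost moves out of the optimization loop and into the one-time generation of training pairs.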

  2. A Subsonic Aircraft Design Optimization With Neural Network and Regression Approximators

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.; Haller, William J.

    2004-01-01

    The Flight-Optimization-System (FLOPS) code encountered difficulty in analyzing a subsonic aircraft. The limitation made the design optimization problematic. The deficiencies have been alleviated through use of neural network and regression approximations. The insight gained from using the approximators is discussed in this paper. The FLOPS code is reviewed. Analysis models are developed and validated for each approximator. The regression method appears to hug the data points, while the neural network approximation follows a mean path. For an analysis cycle, the approximate model required milliseconds of central processing unit (CPU) time versus seconds by the FLOPS code. Performance of the approximators was satisfactory for aircraft analysis. A design optimization capability has been created by coupling the derived analyzers to the optimization test bed CometBoards. The approximators were efficient reanalysis tools in the aircraft design optimization. Instability encountered in the FLOPS analyzer was eliminated. The convergence characteristics were improved for the design optimization. The CPU time required to calculate the optimum solution, measured in hours with the FLOPS code, was reduced to minutes with the neural network approximation and to seconds with the regression method. Generation of the approximators required the manipulation of a very large quantity of data. Design sensitivity with respect to the bounds of aircraft constraints is easily generated.

  3. Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards

    2013-01-01

    Kernel methods have difficulties scaling to large modern data sets. The scalability issues are based on computational and memory requirements for working with a large matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model, and the overall computational complexity associated with tuning hyperparameters are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.
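    The O(n log n) cost rests on the fact that circulant matrices are diagonalized by the DFT, so a circulant system is solved with a handful of FFTs. A single-level toy sketch of that ingredient (the paper's construction is multi-level; the wrapped-kernel column and ridge term below are illustrative assumptions):

```python
import numpy as np

# Solve C x = b for circulant C with first column c, in O(n log n):
# a circulant matrix is diagonalized by the DFT.
def circulant_solve(c, b):
    return np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))

n = 1024
t = np.arange(n)
c = np.exp(-np.minimum(t, n - t) ** 2 / 50.0)  # wrapped RBF kernel column
c[0] += 1e-2                                   # ridge term for stability
b = np.sin(t / 100.0)
x = circulant_solve(c, b)

# Verify against a dense solve with the explicitly built circulant matrix.
C = np.array([np.roll(c, k) for k in range(n)]).T
print(np.allclose(C @ x, b))
```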

  4. Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method

    NASA Technical Reports Server (NTRS)

    Kowal, Michael T.

    1997-01-01

    The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability of failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.

  5. Exact Solution of Gas Dynamics Equations Through Reduced Differential Transform and Sumudu Transform Linked with Padé Approximants

    NASA Astrophysics Data System (ADS)

    Rao, T. R. Ramesh

    2018-04-01

    In this paper, we study an analytical method based on the reduced differential transform method coupled with the Sumudu transform through Padé approximants. The proposed method may be considered an alternative approach for finding the exact solution of the gas dynamics equations in an effective manner. The method does not require any discretization, linearization, or perturbation.

  6. ELECTRONIC DIGITAL COMPUTER

    DOEpatents

    Stone, J.J. Jr.; Bettis, E.S.; Mann, E.R.

    1957-10-01

    The electronic digital computer is designed to solve systems involving a plurality of simultaneous linear equations. The computer can solve a system which converges rather rapidly when using Von Seidel's method of approximation and performs the summations required for solving for the unknown terms by a method of successive approximations.
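    The successive-approximation scheme the patent refers to as Von Seidel's method is what is now usually called the Gauss-Seidel iteration. A minimal sketch for a small, diagonally dominant system, where the iteration converges rapidly:

```python
import numpy as np

# Gauss-Seidel iteration for A x = b: sweep through the unknowns,
# always using the newest available values.
def gauss_seidel(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros(len(b))
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])  # diagonally dominant, so convergence is fast
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b), np.linalg.solve(A, b))
```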

  7. Flexible scheme to truncate the hierarchy of pure states.

    PubMed

    Zhang, P-P; Bentley, C D B; Eisfeld, A

    2018-04-07

    The hierarchy of pure states (HOPS) is a wavefunction-based method that can be used for numerically modeling open quantum systems. Formally, HOPS recovers the exact system dynamics for an infinite depth of the hierarchy. However, truncation of the hierarchy is required to numerically implement HOPS. We want to choose a "good" truncation method, where by "good" we mean that it is numerically feasible to check convergence of the results. For the truncation approximation used in previous applications of HOPS, convergence checks are numerically challenging. In this work, we demonstrate the application of the "n-particle approximation" to HOPS. We also introduce a new approximation, which we call the "n-mode approximation." We then explore the convergence of these truncation approximations with respect to the number of equations required in the hierarchy in two exemplary problems: absorption and energy transfer of molecular aggregates.

  8. Flexible scheme to truncate the hierarchy of pure states

    NASA Astrophysics Data System (ADS)

    Zhang, P.-P.; Bentley, C. D. B.; Eisfeld, A.

    2018-04-01

    The hierarchy of pure states (HOPS) is a wavefunction-based method that can be used for numerically modeling open quantum systems. Formally, HOPS recovers the exact system dynamics for an infinite depth of the hierarchy. However, truncation of the hierarchy is required to numerically implement HOPS. We want to choose a "good" truncation method, where by "good" we mean that it is numerically feasible to check convergence of the results. For the truncation approximation used in previous applications of HOPS, convergence checks are numerically challenging. In this work, we demonstrate the application of the "n-particle approximation" to HOPS. We also introduce a new approximation, which we call the "n-mode approximation." We then explore the convergence of these truncation approximations with respect to the number of equations required in the hierarchy in two exemplary problems: absorption and energy transfer of molecular aggregates.

  9. Trajectories for High Specific Impulse High Specific Power Deep Space Exploration

    NASA Technical Reports Server (NTRS)

    Polsgrove, T.; Adams, R. B.; Brady, Hugh J. (Technical Monitor)

    2002-01-01

    Preliminary results are presented for two methods to approximate the mission performance of high specific impulse high specific power vehicles. The first method is based on an analytical approximation derived by Williams and Shepherd and can be used to approximate mission performance to outer planets and interstellar space. The second method is based on a parametric analysis of trajectories created using the well known trajectory optimization code, VARITOP. This parametric analysis allows the reader to approximate payload ratios and optimal power requirements for both one-way and round-trip missions. While this second method only addresses missions to and from Jupiter, future work will encompass all of the outer planet destinations and some interstellar precursor missions.

  10. Ion mobility spectrometry: A personal view of its development at UCSB

    DTIC Science & Technology

    2014-09-15

    molecules. As we progressed we realized that new, more accurate algorithms were needed to augment our early projection approximation (PA) for determining...required. The goal was to maintain some of the speed of the projection approximation and retain the accuracy of the trajectory method. Christian...Bleiholder, while a postdoc in my group, did just that by development of the projection superposition approximation (PSA) [31–35]. This new method is 100

  11. A hybrid Padé-Galerkin technique for differential equations

    NASA Technical Reports Server (NTRS)

    Geer, James F.; Andersen, Carl M.

    1993-01-01

    A three-step hybrid analysis technique, which successively uses the regular perturbation expansion method, the Padé expansion method, and then a Galerkin approximation, is presented and applied to some model boundary value problems. In the first step of the method, the regular perturbation method is used to construct an approximation to the solution in the form of a finite power series in a small parameter epsilon associated with the problem. In the second step of the method, the series approximation obtained in step one is used to construct a Padé approximation in the form of a rational function in the parameter epsilon. In the third step, the various powers of epsilon which appear in the Padé approximation are replaced by new (unknown) parameters delta_j. These new parameters are determined by requiring that the residual formed by substituting the new approximation into the governing differential equation is orthogonal to each of the perturbation coordinate functions used in step one. The technique is applied to model problems involving ordinary or partial differential equations. In general, the technique appears to provide good approximations to the solution even when the perturbation and Padé approximations fail to do so. The method is discussed and topics for future investigations are indicated.
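    Step two of the hybrid technique, building a Padé approximant from series coefficients, reduces to one small linear solve plus a Cauchy product. A sketch of that step alone (steps one and three are problem specific and omitted), using the Taylor series of exp(x) as a stand-in for a perturbation series:

```python
import numpy as np
from math import factorial

# [L/M] Pade approximant P(x)/Q(x) from series coefficients c[0..L+M].
def pade(c, L, M):
    # Denominator coefficients b[1..M] solve a small Toeplitz system,
    # with b[0] normalized to 1.
    A = np.array([[c[L + k - j] for j in range(1, M + 1)]
                  for k in range(1, M + 1)])
    rhs = -np.array([c[L + k] for k in range(1, M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # Numerator coefficients follow from the Cauchy product of Q and the series.
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, M) + 1))
                  for i in range(L + 1)])
    return a, b  # P(x) = sum_i a_i x^i, Q(x) = sum_j b_j x^j

c = [1.0 / factorial(k) for k in range(5)]   # series of exp(x)
a, b = pade(c, 2, 2)
x = 1.0
print(np.polyval(a[::-1], x) / np.polyval(b[::-1], x), np.exp(x))
```

    At x = 1 the [2/2] approximant is noticeably more accurate than the truncated series built from the same five coefficients, which is the behavior the hybrid method exploits.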

  12. Low rank approximation method for efficient Green's function calculation of dissipative quantum transport

    NASA Astrophysics Data System (ADS)

    Zeng, Lang; He, Yu; Povolotskyi, Michael; Liu, XiaoYan; Klimeck, Gerhard; Kubis, Tillmann

    2013-06-01

    In this work, the low rank approximation concept is extended to the non-equilibrium Green's function (NEGF) method to achieve a very efficient approximated algorithm for coherent and incoherent electron transport. This new method is applied to inelastic transport in various semiconductor nanodevices. Detailed benchmarks with exact NEGF solutions show (1) a very good agreement between approximated and exact NEGF results, (2) a significant reduction of the required memory, and (3) a large reduction of the computational time (a speed-up factor as high as 150 is observed). A non-recursive solution of the inelastic NEGF transport equations of a 1000 nm long resistor on standard hardware illustrates nicely the capability of this new method.

  13. Efficient l1-norm-based low-rank matrix approximations for large-scale problems using alternating rectified gradient method.

    PubMed

    Kim, Eunwoo; Lee, Minsik; Choi, Chong-Ho; Kwak, Nojun; Oh, Songhwai

    2015-02-01

    Low-rank matrix approximation plays an important role in the area of computer vision and image processing. Most of the conventional low-rank matrix approximation methods are based on the l2-norm (Frobenius norm) with principal component analysis (PCA) being the most popular among them. However, this can give a poor approximation for data contaminated by outliers (including missing data), because the l2-norm exaggerates the negative effect of outliers. Recently, to overcome this problem, various methods based on the l1-norm, such as robust PCA methods, have been proposed for low-rank matrix approximation. Despite the robustness of the methods, they require heavy computational effort and substantial memory for high-dimensional data, which is impractical for real-world problems. In this paper, we propose two efficient low-rank factorization methods based on the l1-norm that find proper projection and coefficient matrices using the alternating rectified gradient method. The proposed methods are applied to a number of low-rank matrix approximation problems to demonstrate their efficiency and robustness. The experimental results show that our proposals are efficient in both execution time and reconstruction performance unlike other state-of-the-art methods.

  14. Improved Linear Algebra Methods for Redshift Computation from Limited Spectrum Data - II

    NASA Technical Reports Server (NTRS)

    Foster, Leslie; Waagen, Alex; Aijaz, Nabella; Hurley, Michael; Luis, Apolo; Rinsky, Joel; Satyavolu, Chandrika; Gazis, Paul; Srivastava, Ashok; Way, Michael

    2008-01-01

    Given photometric broadband measurements of a galaxy, Gaussian processes may be used with a training set to solve the regression problem of approximating the redshift of this galaxy. However, in practice solving the traditional Gaussian processes equation is too slow and requires too much memory. We employed several methods to avoid this difficulty using algebraic manipulation and low-rank approximation, and were able to quickly approximate the redshifts in our testing data within 17 percent of the known true values using limited computational resources. The accuracy of one method, the V Formulation, is comparable to the accuracy of the best methods currently used for this problem.
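    A generic illustration of this kind of low-rank shortcut: a Nyström approximation of the kernel matrix combined with the Woodbury identity reduces the Gaussian-process solve to an m x m system. This is a sketch in the spirit of the paper, not its specific V Formulation; the RBF kernel, the inducing-point choice, and the noise level are assumptions:

```python
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(2000, 1))               # training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(2000)
Xm = X[rng.choice(len(X), size=50, replace=False)]   # m inducing points
sigma2 = 0.01

U = rbf(X, Xm)                            # n x m cross-kernel
W = rbf(Xm, Xm) + 1e-8 * np.eye(len(Xm))  # m x m kernel block
# alpha = (K + sigma2*I)^-1 y with K ~ U W^-1 U^T, via the Woodbury
# identity: only an m x m system is ever factorized.
alpha = (y - U @ np.linalg.solve(sigma2 * W + U.T @ U, U.T @ y)) / sigma2

Xs = np.linspace(-3, 3, 5)[:, None]       # test inputs
mean = rbf(Xs, X) @ alpha                 # approximate posterior mean
print(np.column_stack([np.sin(Xs[:, 0]), mean]))
```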

  15. Approximate Genealogies Under Genetic Hitchhiking

    PubMed Central

    Pfaffelhuber, P.; Haubold, B.; Wakolbinger, A.

    2006-01-01

    The rapid fixation of an advantageous allele leads to a reduction in linked neutral variation around the target of selection. The genealogy at a neutral locus in such a selective sweep can be simulated by first generating a random path of the advantageous allele's frequency and then a structured coalescent in this background. Usually the frequency path is approximated by a logistic growth curve. We discuss an alternative method that approximates the genealogy by a random binary splitting tree, a so-called Yule tree that does not require first constructing a frequency path. Compared to the coalescent in a logistic background, this method gives a slightly better approximation for identity by descent during the selective phase and a much better approximation for the number of lineages that stem from the founder of the selective sweep. In applications such as the approximation of the distribution of Tajima's D, the two approximation methods perform equally well. For relevant parameter ranges, the Yule approximation is faster. PMID:17182733

  16. Minimal-Approximation-Based Decentralized Backstepping Control of Interconnected Time-Delay Systems.

    PubMed

    Choi, Yun Ho; Yoo, Sung Jin

    2016-12-01

    A decentralized adaptive backstepping control design using minimal function approximators is proposed for nonlinear large-scale systems with unknown unmatched time-varying delayed interactions and unknown backlash-like hysteresis nonlinearities. Compared with existing decentralized backstepping methods, the contribution of this paper is to design a simple local control law for each subsystem, consisting of an actual control with one adaptive function approximator, without requiring the use of multiple function approximators and regardless of the order of each subsystem. The virtual controllers for each subsystem are used as intermediate signals for designing a local actual control at the last step. For each subsystem, a lumped unknown function including the unknown nonlinear terms and the hysteresis nonlinearities is derived at the last step and is estimated by one function approximator. Thus, the proposed approach only uses one function approximator to implement each local controller, while existing decentralized backstepping control methods require the number of function approximators equal to the order of each subsystem and a calculation of virtual controllers to implement each local actual controller. The stability of the total controlled closed-loop system is analyzed using the Lyapunov stability theorem.

  17. Efficient solution of parabolic equations by Krylov approximation methods

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Y.

    1990-01-01

    Numerical techniques for solving parabolic equations by the method of lines are addressed. The main motivation for the proposed approach is the possibility of exploiting a high degree of parallelism in a simple manner. The basic idea of the method is to approximate the action of the evolution operator on a given state vector by means of a projection process onto a Krylov subspace. Thus, the resulting approximation consists of applying an evolution operator of a very small dimension to a known vector which is, in turn, computed accurately by exploiting well-known rational approximations to the exponential. Because the rational approximation is only applied to a small matrix, the only operations required with the original large matrix are matrix-by-vector multiplications, and as a result the algorithm can easily be parallelized and vectorized. Some relevant approximation and stability issues are discussed. We present some numerical experiments with the method and compare its performance with a few explicit and implicit algorithms.
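    The core step, projecting the action of the evolution operator onto a small Krylov subspace, can be sketched directly. A minimal Arnoldi-based version (a dense `expm` on the small Hessenberg matrix stands in for the paper's rational approximations to the exponential):

```python
import numpy as np
from scipy.linalg import expm

# Approximate exp(t*A) @ v using an m-dimensional Krylov subspace:
# only matrix-vector products with the large matrix A are needed.
def krylov_expm(A, v, t=1.0, m=30):
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]                    # the only large-matrix operation
        for i in range(j + 1):             # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:            # lucky breakdown: subspace exact
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m); e1[0] = 1.0
    return beta * V[:, :m] @ (expm(t * H[:m, :m]) @ e1)

# Method-of-lines test: scaled 1-D Laplacian (a parabolic model problem).
n = 400
A = 16.0 * (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1))
v = np.exp(-np.linspace(-4, 4, n) ** 2)
print(np.linalg.norm(krylov_expm(A, v) - expm(A) @ v))
```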

  18. Newton's method applied to finite-difference approximations for the steady-state compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bailey, Harry E.; Beam, Richard M.

    1991-01-01

    Finite-difference approximations for steady-state compressible Navier-Stokes equations, whose two spatial dimensions are written in generalized curvilinear coordinates and strong conservation-law form, are presently solved by means of Newton's method in order to obtain a lifting-airfoil flow field under subsonic and transonic conditions. In addition to ascertaining the computational requirements of an initial guess ensuring convergence and the degree of computational efficiency obtainable via the approximate Newton method's freezing of the Jacobian matrices, attention is given to the need for auxiliary methods assessing the temporal stability of steady-state solutions. It is demonstrated that nonunique solutions of the finite-difference equations are obtainable by Newton's method in conjunction with a continuation method.
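    The frozen-Jacobian variant of Newton's method mentioned here is easy to isolate on a toy problem: refactor the Jacobian only every few iterations, accepting a slower convergence rate per iteration in exchange for much cheaper iterations. The 2x2 system below is a hypothetical stand-in for the discretized Navier-Stokes residual:

```python
import numpy as np

# Newton's method for F(u) = 0 with the Jacobian refreshed only every
# `refresh` iterations (refresh=1 recovers standard Newton).
def newton_frozen(F, J, u0, refresh=1, tol=1e-12, max_iter=50):
    u = u0.copy()
    J_frozen = None
    for k in range(max_iter):
        r = F(u)
        if np.linalg.norm(r) < tol:
            return u, k
        if J_frozen is None or k % refresh == 0:
            J_frozen = J(u)                # costly step, done infrequently
        u = u - np.linalg.solve(J_frozen, r)
    return u, max_iter

F = lambda u: np.array([u[0]**2 + u[1]**2 - 4.0, u[0] - u[1]])
J = lambda u: np.array([[2.0 * u[0], 2.0 * u[1]], [1.0, -1.0]])
for refresh in (1, 5):
    u, iters = newton_frozen(F, J, np.array([3.0, 1.0]), refresh=refresh)
    print(f"refresh={refresh}: {iters} iterations, u = {u}")
```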

  19. An Implementation Method of the Fractional-Order PID Control System Considering the Memory Constraint and its Application to the Temperature Control of Heat Plate

    NASA Astrophysics Data System (ADS)

    Sasano, Koji; Okajima, Hiroshi; Matsunaga, Nobutomo

    Recently, fractional-order PID (FO-PID) control, an extension of PID control, has attracted attention. The FO-PID controller requires a high-order filter, which is difficult to realize given the memory limitation of a digital computer. Implementing FO-PID therefore requires approximating the fractional integrator and differentiator. The short memory principle (SMP) is one effective approximation method, but it has the disadvantage that the approximated filter cannot eliminate the steady-state error. To address this problem, we introduce a distributed implementation of the integrator and a dynamic quantizer to make efficient use of the permissible memory. The objective of this study is to clarify how to implement an accurate FO-PID controller with limited memory. In this paper, we propose an implementation method for FO-PID under a memory constraint using a dynamic quantizer, and we examine the trade-off between the approximation of the fractional elements and the quantized data size so that the response approaches that of the ideal FO-PID controller. The effectiveness of the proposed method is evaluated by a numerical example and by an experiment in the temperature control of a heat plate.
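    The approximation at issue is the Grünwald-Letnikov (GL) definition truncated by the short memory principle. A minimal sketch comparing a full-memory GL evaluation against SMP windows (the paper's distributed integrator and dynamic quantizer are not reproduced here):

```python
import numpy as np

# GL approximation of the order-alpha fractional derivative of samples f
# with step h, keeping only the most recent L samples (short memory).
def gl_derivative(f, h, alpha, L):
    w = np.empty(L + 1)
    w[0] = 1.0
    for j in range(1, L + 1):           # GL binomial weights, recursively
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    out = np.zeros_like(f)
    for k in range(len(f)):
        m = min(k, L)
        out[k] = (w[:m + 1] @ f[k - m:k + 1][::-1]) / h**alpha
    return out

# Exact half-derivative of f(t) = t is 2*sqrt(t/pi); shrinking the memory
# window L trades accuracy for storage, which is the trade-off at issue.
h, alpha, n = 1e-3, 0.5, 5000
t = np.arange(1, n + 1) * h
exact = 2.0 * np.sqrt(t / np.pi)
for L in (n, 1000, 100):
    err = np.abs(gl_derivative(t, h, alpha, L) - exact).max()
    print(f"L = {L:5d}  max error = {err:.3e}")
```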

  20. Nonlinear programming extensions to rational function approximation methods for unsteady aerodynamic forces

    NASA Technical Reports Server (NTRS)

    Tiffany, Sherwood H.; Adams, William M., Jr.

    1988-01-01

    The approximation of unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft is discussed. Two methods of formulating these approximations are extended to include the same flexibility in constraining the approximations and the same methodology in optimizing nonlinear parameters as another currently used extended least-squares method. Optimal selection of nonlinear parameters is made in each of the three methods by use of the same nonlinear, nongradient optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is lower order than that required when no optimization of the nonlinear terms is performed. The free linear parameters are determined using the least-squares matrix techniques of a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from different approaches are described and results are presented that show comparative evaluations from application of each of the extended methods to a numerical example.

  1. Spline Approximation of Thin Shell Dynamics

    NASA Technical Reports Server (NTRS)

    delRosario, R. C. H.; Smith, R. C.

    1996-01-01

    A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.

  2. Multi-level methods and approximating distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E.

    2016-07-15

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
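    The telescoping sum behind the multi-level method is easy to show on a toy problem. The sketch below estimates E[X(T)] for geometric Brownian motion with coupled Euler-Maruyama levels; this is a stand-in for the tau-leap chemical-kinetics setting of the paper, and the step counts and sample allocations are illustrative only:

```python
import numpy as np

# Multi-level Monte Carlo: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
# Fine and coarse paths on each level share Brownian increments, so the
# corrections have small variance and need few samples.
rng = np.random.default_rng(2)
mu, sigma, T, X0 = 0.05, 0.2, 1.0, 1.0

def level_estimator(l, N):
    nf = 2 ** (l + 1)                    # fine steps on level l
    hf = T / nf
    dW = np.sqrt(hf) * rng.standard_normal((N, nf))
    Xf = np.full(N, X0)
    for k in range(nf):
        Xf += mu * Xf * hf + sigma * Xf * dW[:, k]
    if l == 0:
        return Xf.mean()                 # base level: plain Monte Carlo
    Xc = np.full(N, X0)                  # coarse path: paired increments
    for k in range(nf // 2):
        Xc += mu * Xc * 2 * hf + sigma * Xc * (dW[:, 2 * k] + dW[:, 2 * k + 1])
    return (Xf - Xc).mean()              # correction term E[P_l - P_{l-1}]

samples_per_level = [10**5, 10**4, 10**3, 10**2]   # decreasing, as in MLMC
estimate = sum(level_estimator(l, N) for l, N in enumerate(samples_per_level))
print(estimate, X0 * np.exp(mu * T))               # exact mean for GBM
```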

  3. Comments on localized and integral localized approximations in spherical coordinates

    NASA Astrophysics Data System (ADS)

    Gouesbet, Gérard; Lock, James A.

    2016-08-01

    Localized approximation procedures are efficient ways to evaluate beam shape coefficients of laser beams, and are particularly useful when other methods are ineffective or inefficient. Comments on these procedures are, however, required in order to help researchers make correct decisions concerning their use. This paper has the flavor of a short review and takes the opportunity to attract the attention of the readers to a required refinement of terminology.

  4. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    NASA Astrophysics Data System (ADS)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.

  5. Structural optimization with approximate sensitivities

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.

    1994-01-01

    Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.

  6. Exact Doppler broadening of tabulated cross sections. [SIGMA 1 kernel broadening method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cullen, D.E.; Weisbin, C.R.

    1976-07-01

    The SIGMA1 kernel broadening method is presented to Doppler broaden to any required accuracy a cross section that is described by a table of values and linear-linear interpolation in energy-cross section between tabulated values. The method is demonstrated to have no temperature or energy limitations and to be equally applicable to neutron or charged-particle cross sections. The method is qualitatively and quantitatively compared to contemporary approximate methods of Doppler broadening with particular emphasis on the effect of each approximation introduced.

  7. Method of making thermally removable epoxies

    DOEpatents

    Loy, Douglas A.; Wheeler, David R.; Russick, Edward M.; McElhanon, James R.; Saunders, Randall S.

    2002-01-01

    A method of making a thermally-removable epoxy by mixing a bis(maleimide) compound with a monomeric furan compound containing an oxirane group to form a di-epoxy mixture and then adding a curing agent at temperatures from approximately room temperature to less than approximately 90 °C to form a thermally-removable epoxy. The thermally-removable epoxy can be easily removed within approximately an hour by heating to temperatures greater than approximately 90 °C in a polar solvent. The epoxy material can be used in protecting electronic components that may require subsequent removal of the solid material for component repair, modification or quality control.

  8. Achieving accuracy in first-principles calculations for EOS: basis completeness at high temperatures

    NASA Astrophysics Data System (ADS)

    Wills, John; Mattsson, Ann

    2013-06-01

    First-principles electronic structure calculations can provide EOS data in regimes of pressure and temperature where accurate experimental data is difficult or impossible to obtain. This lack, however, also precludes validation of calculations in those regimes. Factors that influence the accuracy of first-principles data include (1) the theoretical approximations and (2) the computational approximations used in implementing and solving the underlying equations. In the first category are the approximate exchange/correlation functionals and the approximate wave equations that stand in for the Dirac equation; in the second are basis completeness, series convergence, and truncation errors. We are using two rather different electronic structure methods (VASP and RSPt) to establish definitively the requirements for accuracy of the second type, which are common to both. In this talk, we discuss requirements for converged calculations at high temperature and moderate pressure. At convergence we show that both methods give identical results. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  9. A coupling method for a cardiovascular simulation model which includes the Kalman filter.

    PubMed

    Hasegawa, Yuki; Shimayoshi, Takao; Amano, Akira; Matsuda, Tetsuya

    2012-01-01

    Multi-scale models of the cardiovascular system provide new insight that was unavailable with in vivo and in vitro experiments. For the cardiovascular system, multi-scale simulations provide a valuable perspective in analyzing the interaction of three phenomena occurring at different spatial scales: circulatory hemodynamics, ventricular structural dynamics, and myocardial excitation-contraction. In order to simulate these interactions, multi-scale cardiovascular simulation systems couple models that simulate different phenomena. However, coupling methods require a significant amount of calculation, since a system of non-linear equations must be solved for each timestep. Therefore, we propose a coupling method which decreases the amount of calculation by using the Kalman filter. In our method, the Kalman filter calculates approximations for the solution to the system of non-linear equations at each timestep. The approximations are then used as initial values for solving the system of non-linear equations. The proposed method decreases the number of iterations required by 94.0% compared to the conventional strong coupling method. When compared with a smoothing spline predictor, the proposed method required 49.4% fewer iterations.
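    A minimal sketch of the coupling idea: a small constant-velocity Kalman filter predicts each timestep's solution, and the prediction seeds the nonlinear solver. The scalar test equation and the filter tuning below are illustrative assumptions, not the paper's cardiovascular model:

```python
import numpy as np
from scipy.optimize import fsolve

F = np.array([[1.0, 1.0], [0.0, 1.0]])     # state transition (value, rate)
H = np.array([[1.0, 0.0]])                 # we observe the solved value
Q, R = 1e-4 * np.eye(2), np.array([[1e-6]])
s, P = np.zeros(2), np.eye(2)

def g(x, t):                               # per-timestep nonlinear equation
    return x**3 + x - np.sin(t)

total_nfev = 0
for t in np.linspace(0.0, 5.0, 200):
    s = F @ s                              # Kalman predict
    P = F @ P @ F.T + Q
    x, info, ier, msg = fsolve(g, s[0], args=(t,), full_output=True)
    total_nfev += info["nfev"]             # fewer evals with good guesses
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    s = s + K @ (x - H @ s)                # Kalman update with the solution
    P = (np.eye(2) - K @ H) @ P
print("total residual evaluations:", total_nfev)
```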

  10. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule

    NASA Astrophysics Data System (ADS)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.

  11. An Approximate Dissipation Function for Large Strain Rubber Thermo-Mechanical Analyses

    NASA Technical Reports Server (NTRS)

    Johnson, Arthur R.; Chen, Tzi-Kang

    2003-01-01

    Mechanically induced viscoelastic dissipation is difficult to compute. When the constitutive model is defined by history integrals, the formula for dissipation is a double convolution integral. Since double convolution integrals are difficult to approximate, coupled thermo-mechanical analyses of highly viscous rubber-like materials cannot be made with most commercial finite element software. In this study, we present a method to approximate the dissipation for history integral constitutive models that represent Maxwell-like materials without approximating the double convolution integral. The method requires that the total stress can be separated into elastic and viscous components, and that the relaxation form of the constitutive law is defined with a Prony series. Numerical data is provided to demonstrate the limitations of this approximate method for determining dissipation. Rubber cylinders with imbedded steel disks and with an imbedded steel ball are dynamically loaded, and the nonuniform heating within the cylinders is computed.

  12. Research on Modeling of Propeller in a Turboprop Engine

    NASA Astrophysics Data System (ADS)

    Huang, Jiaqin; Huang, Xianghua; Zhang, Tianhong

    2015-05-01

    In the simulation of an engine-propeller integrated control system for a turboprop aircraft, a real-time propeller model with high accuracy is required. A study is conducted to compare the real-time performance and precision of propeller models based on strip theory and on lifting surface theory. The modeling by strip theory focuses on three points: First, FLUENT is adopted to calculate the lift and drag coefficients of the propeller. Next, a method to calculate the induced velocity which occurs in the ground rig test is presented. Finally, an approximate method is proposed to obtain the downwash angle of the propeller when the conventional algorithm has no solution. An advanced approximation of the velocities induced by helical horseshoe vortices is applied in the model based on lifting surface theory. This approximation reduces computing time while retaining good accuracy. Comparison between the two modeling techniques shows that the model based on strip theory, which has the advantage in both real-time performance and accuracy, can meet the requirement.

  13. Solution of Cubic Equations by Iteration Methods on a Pocket Calculator

    ERIC Educational Resources Information Center

    Bamdad, Farzad

    2004-01-01

    A method is developed to give students a vision of how they can write iteration programs on an inexpensive programmable pocket calculator, without requiring a PC or a graphing calculator. Two iteration methods are used: successive approximations and bisection.
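    Both schemes translate directly from calculator keystrokes to code. A sketch for one sample cubic (the cubic and the fixed-point rearrangement are illustrative; the rearrangement must be a contraction near the root for successive approximation to converge):

```python
# Solve x^3 + 2x - 5 = 0 by the two iteration methods from the article.

def bisection(f, lo, hi, tol=1e-10):
    # Keep halving an interval on which f changes sign.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def successive_approximation(g, x, tol=1e-10, max_iter=100000):
    # Fixed-point iteration x_{n+1} = g(x_n).
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f = lambda x: x**3 + 2 * x - 5
g = lambda x: 5 / (x**2 + 2)   # rearrangement of f(x) = 0 that contracts
print(bisection(f, 1.0, 2.0), successive_approximation(g, 1.0))
```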

  14. Correlation energy extrapolation by many-body expansion

    DOE PAGES

    Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus; ...

    2017-01-09

    Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines a MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. Finally, the method consistently achieves agreement with CI calculations to within a few millihartree and often achieves agreement to within ~1 millihartree or less, while requiring significantly less computational resources.

  15. Correlation energy extrapolation by many-body expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus

    Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines a MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. Finally, the method consistently achieves agreement with CI calculations to within a few millihartree and often achieves agreement to within ~1 millihartree or less, while requiring significantly less computational resources.

  16. Using digital inpainting to estimate incident light intensity for the calculation of red blood cell oxygen saturation from microscopy images.

    PubMed

    Sové, Richard J; Drakos, Nicole E; Fraser, Graham M; Ellis, Christopher G

    2018-05-25

    Red blood cell oxygen saturation is an important indicator of oxygen supply to tissues in the body. Oxygen saturation can be measured by taking advantage of spectroscopic properties of hemoglobin. When this technique is applied to transmission microscopy, the calculation of saturation requires determination of incident light intensity at each pixel occupied by the red blood cell; this value is often approximated from a sequence of images as the maximum intensity over time. This method often fails when the red blood cells are moving too slowly, or if hematocrit is too large, since there is not a large enough gap between the cells to accurately calculate the incident intensity value. A new method of approximating incident light intensity is proposed using digital inpainting. This novel approach estimates incident light intensity with an average percent error of approximately 3%, which exceeds the accuracy of the maximum-intensity-based method in most cases. The error in incident light intensity corresponds to a maximum error of approximately 2% saturation. Therefore, though this new method is computationally more demanding than the traditional technique, it can be used in cases where the maximum-intensity-based method fails (e.g. stationary cells), or when higher accuracy is required.
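    A sketch of the central idea using OpenCV's off-the-shelf inpainting (assuming OpenCV is available; the synthetic frame and circular cell mask are stand-ins for a real segmented microscopy image):

```python
import cv2
import numpy as np

# Treat pixels covered by a red blood cell as "damaged" and reconstruct
# the incident intensity I0 there from the surrounding background.
frame = (200 + 10 * np.random.rand(256, 256)).astype(np.uint8)  # background
cv2.circle(frame, (128, 128), 20, 90, -1)        # dark "cell" blocking light

cell_mask = np.zeros_like(frame)                 # nonzero where a cell sits
cv2.circle(cell_mask, (128, 128), 22, 255, -1)   # slightly dilated mask

I0 = cv2.inpaint(frame, cell_mask, 5, cv2.INPAINT_TELEA)

# Optical density at cell pixels then follows from Beer-Lambert.
od = -np.log(np.maximum(frame, 1) / np.maximum(I0, 1).astype(float))
print(float(od[128, 128]))
```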

  17. Local Approximation and Hierarchical Methods for Stochastic Optimization

    NASA Astrophysics Data System (ADS)

    Cheng, Bolong

    In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision processes problem class, we are motivated by an application where we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computation bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state space. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the PJM Interconnect and show that they outperform the baseline approach used in the industry.

  18. Information loss in approximately Bayesian data assimilation: a comparison of generative and discriminative approaches to estimating agricultural yield

    USDA-ARS's Scientific Manuscript database

    Data assimilation and regression are two commonly used methods for predicting agricultural yield from remote sensing observations. Data assimilation is a generative approach because it requires explicit approximations of the Bayesian prior and likelihood to compute the probability density function...

  19. A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer.

    PubMed

    Pagoulatos, N; Haynor, D R; Kim, Y

    2001-09-01

    We have developed a fast calibration method for computing the position and orientation of 2-D ultrasound (US) images in 3-D space where a position sensor is mounted on the US probe. This calibration is required in the fields of 3-D ultrasound and registration of ultrasound with other imaging modalities. Most of the existing calibration methods require a complex and tedious experimental procedure. Our method is simple and it is based on a custom-built phantom. Thirty N-fiducials (markers in the shape of the letter "N") embedded in the phantom provide the basis for our calibration procedure. We calibrated a 3.5-MHz sector phased-array probe with a magnetic position sensor, and we studied the accuracy and precision of our method. A typical calibration procedure requires approximately 2 min. We conclude that we can achieve accurate and precise calibration using a single US image, provided that a large number (approximately ten) of N-fiducials are captured within the US image, enabling a representative sampling of the imaging plane.

  20. On the use of finite difference matrix-vector products in Newton-Krylov solvers for implicit climate dynamics with spectral elements

    DOE PAGES

    Woodward, Carol S.; Gardner, David J.; Evans, Katherine J.

    2015-01-01

    Efficient solutions of global climate models require effectively handling disparate length and time scales. Implicit solution approaches allow time integration of the physical system with a step size governed by accuracy of the processes of interest rather than by stability of the fastest time scales present. Implicit approaches, however, require the solution of nonlinear systems within each time step. Usually, Newton's method is applied to solve these systems. Each iteration of Newton's method, in turn, requires the solution of a linear model of the nonlinear system. This model employs the Jacobian of the problem-defining nonlinear residual, but this Jacobian can be costly to form. If a Krylov linear solver is used for the solution of the linear system, the action of the Jacobian matrix on a given vector is required. In the case of spectral element methods, the Jacobian is not calculated but only implemented through matrix-vector products. The matrix-vector multiply can also be approximated by a finite difference approximation which may introduce inaccuracy in the overall nonlinear solver. In this paper, we review the advantages and disadvantages of finite difference approximations of these matrix-vector products for climate dynamics within the spectral element shallow water dynamical core of the Community Atmosphere Model.
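    The finite-difference product in question replaces J(u) v with a directional difference of the residual, so the Jacobian is never formed. A minimal sketch; the step-size heuristic is one common choice, and that choice is exactly where the inaccuracy discussed in the paper enters:

```python
import numpy as np

# Matrix-free Jacobian-vector product: J(u) @ v ~ (F(u + eps*v) - F(u)) / eps.
def jacvec_fd(F, u, v, eps=None):
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return np.zeros_like(u)
    if eps is None:
        eps = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u)) / nv
    return (F(u + eps * v) - F(u)) / eps

# Check against the analytic Jacobian of a small residual function.
F = lambda u: np.array([u[0]**2 + u[1], np.sin(u[0]) + u[1]**3])
J = lambda u: np.array([[2.0 * u[0], 1.0], [np.cos(u[0]), 3.0 * u[1]**2]])
u, v = np.array([0.7, -0.3]), np.array([0.2, 1.0])
print(jacvec_fd(F, u, v), J(u) @ v)
```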

  1. On the Accuracy of Double Scattering Approximation for Atmospheric Polarization Computations

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Marshak, Alexander L.

    2011-01-01

    Interpretation of multi-angle spectro-polarimetric data in remote sensing of atmospheric aerosols requires fast and accurate methods of solving the vector radiative transfer equation (VRTE). The single and double scattering approximations could provide an analytical framework for the inversion algorithms and are relatively fast; however, accuracy assessments of these approximations for aerosol atmospheres in the atmospheric window channels have been missing. This paper provides such analysis for a vertically homogeneous aerosol atmosphere with weak and strong asymmetry of scattering. In both cases, the double scattering approximation gives a high accuracy result (relative error approximately 0.2%) only for a low optical path of about 10^(-2). As the error grows rapidly with optical thickness, a full VRTE solution is required for practical remote sensing analysis. It is shown that the scattering anisotropy is not important at low optical thicknesses either for reflected or for transmitted polarization components of radiation.

  2. Advanced reliability methods for structural evaluation

    NASA Technical Reports Server (NTRS)

    Wirsching, P. H.; Wu, Y.-T.

    1985-01-01

    Fast probability integration (FPI) methods, which can yield approximate solutions to such general structural reliability problems as the computation of the probabilities of complicated functions of random variables, are known to require one-tenth the computer time of Monte Carlo methods for a probability level of 0.001; lower probabilities yield even more dramatic differences. A strategy is presented in which a computer routine is run k times with selected perturbed values of the variables to obtain k solutions for a response variable Y. An approximating polynomial is fit to the k 'data' sets, and FPI methods are employed for this explicit form.

  3. A novel Cs-(129)Xe atomic spin gyroscope with closed-loop Faraday modulation.

    PubMed

    Fang, Jiancheng; Wan, Shuangai; Qin, Jie; Zhang, Chen; Quan, Wei; Yuan, Heng; Dong, Haifeng

    2013-08-01

    We report a novel Cs-(129)Xe atomic spin gyroscope (ASG) with a closed-loop Faraday modulation method. This ASG requires approximately 30 min to start up and operates at 110 °C. A closed-loop Faraday modulation method for measurement of the optical rotation was used in this ASG. This method uses an additional Faraday modulator to suppress the laser intensity fluctuation and the thermally induced fluctuation of the Faraday modulator. We theoretically and experimentally validate this method in the Cs-(129)Xe ASG and achieve a bias stability of approximately 3.25 °/h.

  4. The arbitrary order mixed mimetic finite difference method for the diffusion equation

    DOE PAGES

    Gyrya, Vitaliy; Lipnikov, Konstantin; Manzini, Gianmarco

    2016-05-01

    Here, we propose an arbitrary-order accurate mimetic finite difference (MFD) method for the approximation of diffusion problems in mixed form on unstructured polygonal and polyhedral meshes. As usual in the mimetic numerical technology, the method satisfies local consistency and stability conditions, which determines the accuracy and the well-posedness of the resulting approximation. The method also requires the definition of a high-order discrete divergence operator that is the discrete analog of the divergence operator and is acting on the degrees of freedom. The new family of mimetic methods is proved theoretically to be convergent, and optimal error estimates for the flux and scalar variables are derived from the convergence analysis. A numerical experiment confirms the high-order accuracy of the method in solving diffusion problems with variable diffusion tensor. It is worth mentioning that the approximation of the scalar variable presents a superconvergence effect.

  5. Subsonic Aircraft With Regression and Neural-Network Approximators Designed

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.

    2004-01-01

    At the NASA Glenn Research Center, NASA Langley Research Center's Flight Optimization System (FLOPS) and the design optimization testbed COMETBOARDS with regression and neural-network-analysis approximators have been coupled to obtain a preliminary aircraft design methodology. For a subsonic aircraft, the optimal design, that is the airframe-engine combination, is obtained by the simulation. The aircraft is powered by two high-bypass-ratio engines with a nominal thrust of about 35,000 lbf. It is to carry 150 passengers at a cruise speed of Mach 0.8 over a range of 3000 n mi and to operate on a 6000-ft runway. The aircraft design utilized a neural network and a regression-approximations-based analysis tool, along with a multioptimizer cascade algorithm that uses sequential linear programming, sequential quadratic programming, the method of feasible directions, and then sequential quadratic programming again. Optimal aircraft weight versus the number of design iterations is shown. The central processing unit (CPU) time to solution is given. It is shown that the regression-method-based analyzer exhibited a smoother convergence pattern than the FLOPS code. The optimum weight obtained by the approximation technique and the FLOPS code differed by 1.3 percent. Prediction by the approximation technique exhibited no error for the aircraft wing area and turbine entry temperature, whereas it was within 2 percent for most other parameters. Cascade strategy was required by FLOPS as well as the approximators. The regression method had a tendency to hug the data points, whereas the neural network exhibited a propensity to follow a mean path. The performance of the neural network and regression methods was considered adequate. It was at about the same level for small, standard, and large models with redundancy ratios (defined as the number of input-output pairs to the number of unknown coefficients) of 14, 28, and 57, respectively. In an SGI Octane workstation (Silicon Graphics, Inc., Mountain View, CA), the regression training required a fraction of a CPU second, whereas neural network training took between 1 and 9 min. For a single analysis cycle, the 3-sec CPU time required by the FLOPS code was reduced to milliseconds by the approximators. For design calculations, the time with the FLOPS code was 34 min. It was reduced to 2 sec with the regression method and to 4 min by the neural network technique. The performance of the regression and neural network methods was found to be satisfactory for the analysis and design optimization of the subsonic aircraft.

  6. Application of Approximate Unsteady Aerodynamics for Flutter Analysis

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley W.

    2010-01-01

    A technique for approximating the modal aerodynamic influence coefficient (AIC) matrices by using basis functions has been developed. A process for using the resulting approximated modal AIC matrix in aeroelastic analysis has also been developed. The method requires the unsteady aerodynamics in frequency domain, and this methodology can be applied to the unsteady subsonic, transonic, and supersonic aerodynamics. The flutter solution can be found by the classic methods, such as rational function approximation, k, p-k, p, root locus et cetera. The unsteady aeroelastic analysis using unsteady subsonic aerodynamic approximation is demonstrated herein. The technique presented is shown to offer consistent flutter speed prediction on an aerostructures test wing (ATW) 2 and a hybrid wing body (HWB) type of vehicle configuration with negligible loss in precision. This method computes AICs that are functions of the changing parameters being studied and are generated within minutes of CPU time instead of hours. These results may have practical application in parametric flutter analyses as well as more efficient multidisciplinary design and optimization studies.

  7. A simple low-computation-intensity model for approximating the distribution function of a sum of non-identical lognormals for financial applications

    NASA Astrophysics Data System (ADS)

    Messica, A.

    2016-10-01

    The probability distribution function of a weighted sum of non-identical lognormal random variables is required in various fields of science and engineering, and specifically in finance for portfolio management as well as exotic options valuation. Unfortunately, it has no known closed form and therefore has to be approximated. Most of the approximations presented to date are complex as well as complicated to implement. This paper presents a simple, easy-to-implement approximation method via modified moments matching and a polynomial asymptotic series expansion correction for a central limit theorem of a finite sum. The method results in an intuitively appealing and computation-efficient approximation for a finite sum of lognormals of at least ten summands, and it naturally improves as the number of summands increases. The accuracy of the method is tested against the results of Monte Carlo simulations and is also compared against the standard central limit theorem and the commonly practiced Markowitz portfolio equations.
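
    As a rough illustration of moment-based lognormal-sum approximation, the sketch below matches the first two moments of a weighted sum of independent lognormals to a single lognormal (classic Fenton-Wilkinson style) and checks it against Monte Carlo. The paper's actual method differs: it uses modified moments plus a polynomial asymptotic series correction; the portfolio parameters here are invented.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Hypothetical portfolio: weights and per-asset lognormal parameters.
    w = np.array([0.2, 0.3, 0.1, 0.25, 0.15])
    mu = np.array([0.00, 0.05, -0.10, 0.02, 0.08])
    sigma = np.array([0.20, 0.35, 0.15, 0.30, 0.25])

    # Exact first two moments of Y = sum_i w_i X_i under independence.
    m1 = np.sum(w * np.exp(mu + 0.5 * sigma**2))
    var = np.sum(w**2 * np.exp(2 * mu + sigma**2) * (np.exp(sigma**2) - 1.0))

    # Match a single lognormal(mu_Y, sigma_Y) to (m1, var).
    sigma_y2 = np.log(1.0 + var / m1**2)
    mu_y = np.log(m1) - 0.5 * sigma_y2
    approx = stats.lognorm(s=np.sqrt(sigma_y2), scale=np.exp(mu_y))

    # Monte Carlo reference, as in the paper's accuracy test.
    samples = (w * rng.lognormal(mu, sigma, size=(100_000, 5))).sum(axis=1)
    q = np.linspace(0.5, 2.0, 7)
    print("approx CDF:", approx.cdf(q).round(3))
    print("MC     CDF:", np.array([(samples <= v).mean() for v in q]).round(3))
    ```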

  8. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was built using the OpenMDAO framework. Pycycle provides analytic derivatives allowing for an efficient use of gradient-based optimization methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
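
    The advantage of analytic derivatives over finite differencing, which motivates this approach, can be seen on any scalar function; the toy below is illustrative and unrelated to engine cycles. Forward finite differences face a truncation/round-off trade-off that analytic derivatives avoid entirely.

    ```python
    import numpy as np

    def f(x):
        return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

    def df_analytic(x):
        # d/dx of f via product/chain rule; g = sin^3 x + cos^3 x.
        s, c = np.sin(x), np.cos(x)
        g = s**3 + c**3
        return f(x) * (1.0 - 1.5 * (s**2 * c - c**2 * s) / g)

    x0 = 1.5
    exact = df_analytic(x0)
    for h in [1e-2, 1e-6, 1e-10, 1e-14]:
        fd = (f(x0 + h) - f(x0)) / h          # forward finite difference
        print(f"h={h:.0e}  FD error = {abs(fd - exact):.2e}")
    # Truncation error dominates at large h, round-off at tiny h; analytic
    # derivatives have neither problem, which is what gradient-based
    # optimization in OpenMDAO exploits.
    ```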

  9. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    PubMed

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
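
    A minimal sketch of the core step, assuming a synthetic stand-in dictionary: a Halko-Martinsson-Tropp-style randomized SVD computes a low-rank factorization without ever forming the full SVD, so only a small factor pair must be kept in memory. This is not the authors' MRF pipeline; the sizes and rank are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def randomized_svd(D, rank, n_oversample=10):
        """Randomized SVD of D (sketch the range, then solve a small problem)."""
        k = rank + n_oversample
        Y = D @ rng.standard_normal((D.shape[1], k))   # sample the range of D
        Q, _ = np.linalg.qr(Y)                         # orthonormal range basis
        B = Q.T @ D                                    # small k x n_time matrix
        Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
        return Q @ Ub, s, Vt

    # Synthetic low-rank "dictionary": 20k fingerprints x 400 timepoints, rank 20.
    n_entries, n_time, rank = 20_000, 400, 20
    D = rng.standard_normal((n_entries, rank)) @ rng.standard_normal((rank, n_time))

    U, s, Vt = randomized_svd(D, rank)
    D_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    print("relative error:", np.linalg.norm(D - D_hat) / np.linalg.norm(D))
    # Matching acquired signals against the small factor s * Vt instead of the
    # full dictionary is where the reported memory savings come from.
    ```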

  10. Edge-augmented Fourier partial sums with applications to Magnetic Resonance Imaging (MRI)

    NASA Astrophysics Data System (ADS)

    Larriva-Latt, Jade; Morrison, Angela; Radgowski, Alison; Tobin, Joseph; Iwen, Mark; Viswanathan, Aditya

    2017-08-01

    Certain applications such as Magnetic Resonance Imaging (MRI) require the reconstruction of functions from Fourier spectral data. When the underlying functions are piecewise-smooth, standard Fourier approximation methods suffer from the Gibbs phenomenon - with associated oscillatory artifacts in the vicinity of edges and an overall reduced order of convergence in the approximation. This paper proposes an edge-augmented Fourier reconstruction procedure which uses only the first few Fourier coefficients of an underlying piecewise-smooth function to accurately estimate jump information and then incorporate it into a Fourier partial sum approximation. We provide both theoretical and empirical results showing the improved accuracy of the proposed method, as well as comparisons demonstrating superior performance over existing state-of-the-art sparse optimization-based methods.
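
    A hedged illustration of the edge-augmentation idea: subtract a sawtooth carrying the jump, Fourier-approximate the now-continuous remainder, and add the sawtooth back in closed form. Here the jump location and height are assumed known exactly, whereas the paper estimates them from the first few Fourier coefficients.

    ```python
    import numpy as np

    N = 16                       # Fourier modes -N..N ("first few coefficients")
    a, jump = 1.0, 2.0           # assumed-known edge location and height
    x = np.linspace(-np.pi, np.pi, 4096, endpoint=False)

    def saw(x):
        """2*pi-periodic, zero-mean sawtooth with a unit jump at x = 0."""
        return (np.pi - np.mod(x, 2 * np.pi)) / (2 * np.pi)

    f = np.exp(np.sin(x)) + jump * (x > a)     # piecewise-smooth test function

    def partial_sum(values, x, N):
        """Fourier coefficients by the periodic rectangle rule, resummed at x."""
        out = np.zeros_like(x)
        for k in range(-N, N + 1):
            ck = np.mean(values * np.exp(-1j * k * x))
            out = out + np.real(ck * np.exp(1j * k * x))
        return out

    plain = partial_sum(f, x, N)               # Gibbs oscillations near x = a
    smooth = f - jump * saw(x - a)             # continuous remainder
    augmented = partial_sum(smooth, x, N) + jump * saw(x - a)

    away = np.abs(x - a) > 0.3                 # compare away from the edge
    print("plain     max error:", np.abs(plain - f)[away].max())
    print("augmented max error:", np.abs(augmented - f)[away].max())
    ```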

  11. Fuzzy classification for strawberry diseases-infection using machine vision and soft-computing techniques

    NASA Astrophysics Data System (ADS)

    Altıparmak, Hamit; Al Shahadat, Mohamad; Kiani, Ehsan; Dimililer, Kamil

    2018-04-01

    Robotic agriculture requires smart and workable techniques to substitute machine intelligence for human intelligence. Strawberry is one of the important Mediterranean products, and enhancing its productivity requires modern, machine-based methods. Whereas a human identifies disease-infected leaves by eye, the machine should also be capable of vision-based disease identification. The objective of this paper is to practically verify the applicability of a new computer-vision method for discriminating between healthy and disease-infected strawberry leaves that requires neither neural networks nor time-consuming training. The proposed method was tested under outdoor lighting conditions using a regular DSLR camera without any particular lens. Since the type and degree of infection are approximated in the way a human brain would, a fuzzy decision maker classifies the leaves over images captured on-site with the same properties as human vision. Optimizing the fuzzy parameters for a typical strawberry production area at summer midday in Cyprus produced 96% accuracy for segmented iron deficiency and 93% accuracy for the second segmented class, using a typical human instant-classification approximation as the benchmark, with higher accuracy than a human-eye identifier. The fuzzy-based classifier provides an approximate result for deciding whether a leaf is healthy or not.

  12. Method of making thermally removable polymeric encapsulants

    DOEpatents

    Small, James H.; Loy, Douglas A.; Wheeler, David R.; McElhanon, James R.; Saunders, Randall S.

    2001-01-01

    A method of making a thermally-removable encapsulant by heating a mixture of at least one bis(maleimide) compound and at least one monomeric tris(furan) or tetrakis(furan) compound at temperatures from above room temperature to less than approximately 90 °C to form a gel and cooling the gel to form the thermally-removable encapsulant. The encapsulant can be easily removed within approximately an hour by heating to temperatures greater than approximately 90 °C, preferably in a polar solvent. The encapsulant can be used in protecting electronic components that may require subsequent removal of the encapsulant for component repair, modification or quality control.

  13. Exact and approximate Fourier rebinning algorithms for the solution of the data truncation problem in 3-D PET.

    PubMed

    Bouallègue, Fayçal Ben; Crouzet, Jean-François; Comtat, Claude; Fourcade, Marjolaine; Mohammadi, Bijan; Mariano-Goulart, Denis

    2007-07-01

    This paper presents an extended 3-D exact rebinning formula in the Fourier space that leads to an iterative reprojection algorithm (iterative FOREPROJ), which enables the estimation of unmeasured oblique projection data on the basis of the whole set of measured data. To a first approximation, this analytical formula also leads to an extended Fourier rebinning equation that is the basis for an approximate reprojection algorithm (extended FORE). These algorithms were evaluated on numerically simulated 3-D positron emission tomography (PET) data for the solution of the truncation problem, i.e., the estimation of the missing portions in the oblique projection data, before the application of algorithms that require complete projection data such as some rebinning methods (FOREX) or 3-D reconstruction algorithms (3DRP or direct Fourier methods). By taking advantage of all the 3-D data statistics, the iterative FOREPROJ reprojection provides a reliable alternative to the classical FOREPROJ method, which only exploits the low-statistics nonoblique data. It significantly improves the quality of the external reconstructed slices without loss of spatial resolution. As for the approximate extended FORE algorithm, it clearly exhibits limitations due to axial interpolations, but will require clinical studies with more realistic measured data in order to decide on its pertinence.

  14. Stability of semidiscrete approximations for hyperbolic initial-boundary-value problems: An eigenvalue analysis

    NASA Technical Reports Server (NTRS)

    Warming, Robert F.; Beam, Richard M.

    1986-01-01

    A hyperbolic initial-boundary-value problem can be approximated by a system of ordinary differential equations (ODEs) by replacing the spatial derivatives by finite-difference approximations. The resulting system of ODEs is called a semidiscrete approximation. A complication is the fact that more boundary conditions are required for the spatially discrete approximation than are specified for the partial differential equation. Consequently, additional numerical boundary conditions are required and improper treatment of these additional conditions can lead to instability. For a linear initial-boundary-value problem (IBVP) with homogeneous analytical boundary conditions, the semidiscrete approximation results in a system of ODEs of the form du/dt = Au whose solution can be written as u(t) = exp(At)u(0). Lax-Richtmyer stability requires that the matrix norm of exp(At) be uniformly bounded for 0 ≤ t ≤ T independent of the spatial mesh size. Although the classical Lax-Richtmyer stability definition involves a conventional vector norm, there is no known algebraic test for the uniform boundedness of the matrix norm of exp(At) for hyperbolic IBVPs. An alternative but more complicated stability definition is used in the theory developed by Gustafsson, Kreiss, and Sundstrom (GKS). The two methods are compared.
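
    The matrix-norm criterion quoted above can be checked numerically for a simple example. The sketch below builds the semidiscrete upwind matrix for u_t + u_x = 0 with inflow boundary u(0, t) = 0 and verifies that ||exp(At)|| stays bounded under mesh refinement; this is an illustrative test, not the GKS analysis.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def upwind_matrix(m):
        """du/dt = A u: first-order upwind for u_t + u_x = 0, h = 1/m, inflow 0."""
        h = 1.0 / m
        return (np.diag(np.ones(m - 1), -1) - np.eye(m)) / h

    T = 1.0
    for m in [20, 40, 80, 160]:
        A = upwind_matrix(m)
        norms = [np.linalg.norm(expm(A * t), 2) for t in np.linspace(0.0, T, 11)]
        print(f"m = {m:4d}   max_t ||exp(At)|| = {max(norms):.4f}")
    # A bound independent of m over 0 <= t <= T is the Lax-Richtmyer requirement;
    # an unstable numerical boundary closure would make these norms grow with m.
    ```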

  15. On the parallel solution of parabolic equations

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Youcef

    1989-01-01

    Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Padé and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems and thus offers the potential for increased time parallelism in time-dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
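
    A minimal sketch of the first method, under simplifying assumptions: a truncated Taylor polynomial (the simplest polynomial approximation to the exponential, standing in for the paper's schemes) advances a semidiscrete 1-D heat equation using only matrix-vector products, hence no linear solves.

    ```python
    import numpy as np
    from scipy.linalg import expm

    m = 100                                        # interior points on (0, 1)
    h = 1.0 / (m + 1)
    A = (np.diag(np.ones(m - 1), -1) - 2.0 * np.eye(m)
         + np.diag(np.ones(m - 1), 1)) / h**2      # 1-D Laplacian, Dirichlet BCs

    x = np.linspace(h, 1.0 - h, m)
    u0 = x * (1.0 - x)                             # initial condition

    def poly_expm_apply(A, v, t, degree):
        """Evaluate sum_{j<=degree} (tA)^j v / j! with mat-vecs only (no solves)."""
        term, out = v.copy(), v.copy()
        for j in range(1, degree + 1):
            term = (t / j) * (A @ term)
            out = out + term
        return out

    t = 2e-4                                       # one time step
    exact = expm(A * t) @ u0
    approx = poly_expm_apply(A, u0, t, degree=30)
    print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
    ```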

  16. Bypassing the malfunction junction in warm dense matter simulations

    NASA Astrophysics Data System (ADS)

    Cangi, Attila; Pribram-Jones, Aurora

    2015-03-01

    Simulation of warm dense matter requires computational methods that capture both quantum and classical behavior efficiently under high-temperature and high-density conditions. The state-of-the-art approach to model electrons and ions under those conditions is density functional theory molecular dynamics, but this method's computational cost skyrockets as temperatures and densities increase. We propose finite-temperature potential functional theory as an in-principle-exact alternative that suffers no such drawback. In analogy to the zero-temperature theory developed previously, we derive an orbital-free free energy approximation through a coupling-constant formalism. Our density approximation and its associated free energy approximation demonstrate the method's accuracy and efficiency. A.C. has been partially supported by NSF Grant CHE-1112442. A.P.J. is supported by DOE Grant DE-FG02-97ER25308.

  17. Second derivatives for approximate spin projection methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Lee M.; Hratchian, Hrant P., E-mail: hhratchian@ucmerced.edu

    2015-02-07

    The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.

  18. Rapid methods for jugular bleeding of dogs requiring one technician.

    PubMed

    Frisk, C S; Richardson, M R

    1979-06-01

    Two methods were used to collect blood from the jugular vein of dogs. In both techniques, only one technician was required. A rope with a slip knot was placed around the base of the neck to assist in restraint and act as a tourniquet for the vein. The technician used one hand to restrain the dog by the muzzle and position the head. The other hand was used for collecting the sample. One of the methods could be accomplished with the dog in its cage. The bleeding techniques were rapid, requiring approximately 1 minute per dog.

  19. Approximation Model Building for Reliability & Maintainability Characteristics of Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Morris, W. Douglas; White, Nancy H.; Lepsch, Roger A.; Brown, Richard W.

    2000-01-01

    This paper describes the development of parametric models for estimating operational reliability and maintainability (R&M) characteristics for reusable vehicle concepts, based on vehicle size and technology support level. An R&M analysis tool (RMAT) and response surface methods are utilized to build parametric approximation models for rapidly estimating operational R&M characteristics such as mission completion reliability. These models, which approximate RMAT, can then be utilized for fast analysis of operational requirements, for life-cycle cost estimating, and for multidisciplinary design optimization.

  20. Fast computation of the electrolyte-concentration transfer function of a lithium-ion cell model

    NASA Astrophysics Data System (ADS)

    Rodríguez, Albert; Plett, Gregory L.; Trimboli, M. Scott

    2017-08-01

    One approach to creating physics-based reduced-order models (ROMs) of battery-cell dynamics requires first generating linearized Laplace-domain transfer functions of all cell internal electrochemical variables of interest. Then, the resulting infinite-dimensional transfer functions can be reduced by various means in order to find an approximate low-dimensional model. These methods include Padé approximation or the Discrete-Time Realization algorithm. In a previous article, Lee and colleagues developed a transfer function of the electrolyte concentration for a porous-electrode pseudo-two-dimensional lithium-ion cell model. Their approach used separation of variables and Sturm-Liouville theory to compute an infinite-series solution to the transfer function, which they then truncated to a finite number of terms for reasons of practicality. Here, we instead use a variation-of-parameters approach to arrive at a different representation of the identical solution that does not require a series expansion. The primary benefits of the new approach are speed of computation of the transfer function and the removal of the requirement to approximate the transfer function by truncating the number of terms evaluated. Results show that the speedup of the new method can be a factor of more than 3800.

  1. The Atmospheric Mutual Coherence Function From the First and Second Rytov Approximations and Its Comparison to That of Strong Fluctuation Theory

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    2011-01-01

    An expression for the mutual coherence function (MCF) of an electromagnetic beam wave propagating through atmospheric turbulence is derived within the confines of the Rytov approximation. It is shown that both the first and second Rytov approximations are required. The Rytov MCF is then compared to that which issues from the parabolic equation method of strong fluctuation theory. The agreement is found to be quite good in the weak fluctuation case. However, an instability is observed for the special case of beam wave intensities. The source of the instabilities is identified to be the characteristic way beam wave amplitudes are treated within the Rytov method.

  2. A numerical scheme based on radial basis function finite difference (RBF-FD) technique for solving the high-dimensional nonlinear Schrödinger equations using an explicit time discretization: Runge-Kutta method

    NASA Astrophysics Data System (ADS)

    Dehghan, Mehdi; Mohammadi, Vahid

    2017-08-01

    In this research, we investigate the numerical solution of nonlinear Schrödinger equations in two and three dimensions. The numerical meshless method used here is the RBF-FD technique. The main advantage of this method is the approximation of the required derivatives based on the finite difference technique at each local support domain Ωi. At each Ωi, we need to solve a small linear system of algebraic equations with a conditionally positive definite matrix of order 1 (the interpolation matrix). This scheme is efficient, and its computational cost is the same as that of the moving least squares (MLS) approximation. A challenging issue is choosing a suitable shape parameter for the interpolation matrix. To overcome this, an algorithm established by Sarra (2012) is applied; it computes the condition number of the local interpolation matrix using the singular value decomposition (SVD) to obtain the smallest and largest singular values of that matrix. Moreover, an explicit method based on a fourth-order-accurate Runge-Kutta formula is applied for approximating the time variable, which also decreases the computational cost at each time step since no nonlinear system must be solved. To compare the RBF-FD method with another meshless technique, the moving kriging least squares (MKLS) approximation is considered for the studied model. Our results demonstrate the ability of the present approach for solving the applicable model investigated in the current research work.
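
    A minimal RBF-FD sketch in one dimension, assuming a Gaussian RBF and a fixed shape parameter (the paper uses a conditionally positive definite kernel and selects the shape parameter with Sarra's SVD-based algorithm, both omitted here): derivative weights on a local stencil come from one small linear solve.

    ```python
    import numpy as np

    eps = 2.0                                # shape parameter (fixed assumption)
    xc = 0.3                                 # stencil center
    nodes = xc + np.array([-0.2, -0.1, 0.0, 0.1, 0.2])

    phi = lambda r: np.exp(-(eps * r) ** 2)
    # d^2/dx^2 of phi(|x - xi|) with respect to x:
    d2phi = lambda x, xi: (4 * eps**4 * (x - xi)**2 - 2 * eps**2) * phi(x - xi)

    # Local interpolation matrix and right-hand side L phi(|x - x_i|) at x = xc.
    Phi = phi(nodes[:, None] - nodes[None, :])
    b = d2phi(xc, nodes)
    w = np.linalg.solve(Phi, b)              # RBF-FD weights for this stencil

    # The weights act like finite-difference coefficients on nodal values.
    approx = w @ np.sin(nodes)
    print("RBF-FD f''(xc):", approx, "  exact:", -np.sin(xc))
    ```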

  3. Indexing and retrieving point and region objects

    NASA Astrophysics Data System (ADS)

    Ibrahim, Azzam T.; Fotouhi, Farshad A.

    1996-03-01

    R-tree and its variants are examples of spatial data structures for paged secondary memory. To process a query, these structures require multiple path traversals. In this paper, we present a new image access method, the SB+-tree, which requires a single path traversal to process a query. The SB+-tree also gives commercial databases an access method for spatial objects without major changes, since most commercial databases already support the B+-tree as an access method for text data. The SB+-tree can be used for zero and non-zero size data objects. Non-zero size objects are approximated by their minimum bounding rectangles (MBRs). The number of SB+-trees generated depends upon the number of dimensions of the approximation of the object. The structure supports efficient spatial operations such as region overlap, distance, and direction. In this paper, we experimentally and analytically demonstrate the superiority of the SB+-tree over the R-tree.

  4. The NonConforming Virtual Element Method for the Stokes Equations

    DOE PAGES

    Cangiani, Andrea; Gyrya, Vitaliy; Manzini, Gianmarco

    2016-01-01

    In this paper, we present the nonconforming virtual element method (VEM) for the numerical approximation of velocity and pressure in the steady Stokes problem. The pressure is approximated using discontinuous piecewise polynomials, while each component of the velocity is approximated using the nonconforming virtual element space. On each mesh element the local virtual space contains the space of polynomials of up to a given degree, plus suitable nonpolynomial functions. The virtual element functions are implicitly defined as the solution of local Poisson problems with polynomial Neumann boundary conditions. As typical in VEM approaches, the explicit evaluation of the non-polynomial functions is not required. This approach makes it possible to construct nonconforming (virtual) spaces for any polynomial degree regardless of the parity, for two- and three-dimensional problems, and for meshes with very general polygonal and polyhedral elements. We show that the nonconforming VEM is inf-sup stable and establish optimal a priori error estimates for the velocity and pressure approximations. Finally, numerical examples confirm the convergence analysis and the effectiveness of the method in providing high-order accurate approximations.

  5. A Continuous Method for Gene Flow

    PubMed Central

    Palczewski, Michal; Beerli, Peter

    2013-01-01

    Most modern population genetics inference methods are based on the coalescence framework. Methods that allow estimating parameters of structured populations commonly insert migration events into the genealogies. For these methods the calculation of the coalescence probability density of a genealogy requires a product over all time periods between events. Data sets that contain populations with high rates of gene flow among them require an enormous number of calculations. A new method, transition probability-structured coalescence (TPSC), replaces the discrete migration events with probability statements. Because the speed of calculation is independent of the amount of gene flow, this method allows calculating the coalescence densities efficiently. The current implementation of TPSC uses an approximation simplifying the interaction among lineages. Simulations and coverage comparisons of TPSC vs. MIGRATE show that TPSC allows estimation of high migration rates more precisely, but because of the approximation the estimation of low migration rates is biased. The implementation of TPSC into programs that calculate quantities on phylogenetic tree structures is straightforward, so the TPSC approach will facilitate more general inferences in many computer programs. PMID:23666937

  6. ASP: Automated symbolic computation of approximate symmetries of differential equations

    NASA Astrophysics Data System (ADS)

    Jefferson, G. F.; Carminati, J.

    2013-03-01

    A recent paper (Pakdemirli et al. (2004) [12]) compared three methods of determining approximate symmetries of differential equations. Two of these methods are well known and involve either a perturbation of the classical Lie symmetry generator of the differential system (Baikov, Gazizov and Ibragimov (1988) [7], Ibragimov (1996) [6]) or a perturbation of the dependent variable/s and subsequent determination of the classical Lie point symmetries of the resulting coupled system (Fushchych and Shtelen (1989) [11]), both up to a specified order in the perturbation parameter. The third method, proposed by Pakdemirli, Yürüsoy and Dolapçi (2004) [12], simplifies the calculations required by Fushchych and Shtelen's method through the assignment of arbitrary functions to the non-linear components prior to computing symmetries. All three methods have been implemented in the new MAPLE package ASP (Automated Symmetry Package) which is an add-on to the MAPLE symmetry package DESOLVII (Vu, Jefferson and Carminati (2012) [25]). To our knowledge, this is the first computer package to automate all three methods of determining approximate symmetries for differential systems. Extensions to the theory have also been suggested for the third method, which generalise the first method to systems of differential equations. Finally, a number of approximate symmetries and corresponding solutions are compared with results in the literature.

  7. Numerical Modeling of Fluorescence Emission Energy Dispersion in Luminescent Solar Concentrator

    NASA Astrophysics Data System (ADS)

    Li, Lanfang; Sheng, Xing; Rogers, John; Nuzzo, Ralph

    2013-03-01

    We present a numerical modeling method, and the corresponding experimental results, to address fluorescence emission dispersion for applications such as luminescent solar concentrators and light-emitting-diode color correction. Previously established modeling methods utilized a statistical-thermodynamic theory (e.g., Kennard-Stepanov) that required a thorough understanding of the free energy landscape of the fluorophores. Some more recent work used an empirical approximation of the measured emission energy dispersion profile without considering anti-Stokes shifting during absorption and emission. In this work we present a technique for modeling fluorescence absorption and emission that utilizes the experimentally measured spectrum, approximates the observable Franck-Condon vibronic states as a continuum, and takes into account thermodynamic energy relaxation by allowing thermal fluctuations. This new approximation method relaxes the requirement for knowledge of the fluorophore system and reduces the demand on computing resources while still capturing the essence of the physical process. We present simulation results for the energy distribution of emitted photons and compare them with experimental results, with good agreement in terms of peak red-shift and intensity attenuation in a luminescent solar concentrator. This work is supported by the DOE `Light-Material Interactions in Energy Conversion' Energy Frontier Research Center under grant DE-SC0001293.

  8. Sample size for post-marketing safety studies based on historical controls.

    PubMed

    Wu, Yu-te; Makuch, Robert W

    2010-08-01

    As part of a drug's entire life cycle, post-marketing studies are an important part of the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. An exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is the outcome of interest. Performance of the exact method is compared to its approximate large-sample theory counterpart. The proposed hybrid design requires a smaller sample size compared to the standard, two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined. The proposed hybrid design satisfies the advantages and rationale of the two-group design with smaller sample sizes generally required. 2010 John Wiley & Sons, Ltd.
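
    A hedged miniature of exact-versus-approximate sample sizing for a rare event. Simplifying assumption not in the paper: the historical control rate is treated as known, reducing the problem to a one-group exact Poisson test rather than the full two-group hybrid design; the rates and targets are invented.

    ```python
    import numpy as np
    from scipy import stats

    lam0 = 0.001        # historical control rate (events/person-year), assumed known
    rr = 2.0            # rate ratio to detect
    alpha, power = 0.05, 0.80

    def exact_n(lam0, rr, alpha, power):
        """Smallest person-years n so the exact one-sided Poisson test has the power."""
        for n in range(100, 200_000, 100):
            mu0, mu1 = n * lam0, n * lam0 * rr
            k = stats.poisson.isf(alpha, mu0) + 1   # reject when X >= k
            if stats.poisson.sf(k - 1, mu1) >= power:
                return n
        return None

    def approx_n(lam0, rr, alpha, power):
        """Classical normal-approximation counterpart."""
        za, zb = stats.norm.isf(alpha), stats.norm.isf(1 - power)
        lam1 = rr * lam0
        return (za * np.sqrt(lam0) + zb * np.sqrt(lam1)) ** 2 / (lam1 - lam0) ** 2

    print("exact n :", exact_n(lam0, rr, alpha, power))
    print("approx n:", round(approx_n(lam0, rr, alpha, power)))
    ```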

  9. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.

  10. Gaussian-Beam/Physical-Optics Design Of Beam Waveguide

    NASA Technical Reports Server (NTRS)

    Veruttipong, Watt; Chen, Jacqueline C.; Bathker, Dan A.

    1993-01-01

    In iterative method of designing wideband beam-waveguide feed for paraboloidal-reflector antenna, Gaussian-beam approximation alternated with more nearly exact physical-optics analysis of diffraction. Includes curved and straight reflectors guiding radiation from feed horn to subreflector. For iterative design calculations, curved mirrors mathematically modeled as thin lenses. Each distance Li is combined length of two straight-line segments intersecting at one of flat mirrors. Method useful for designing beam-waveguide reflectors or mirrors required to have diameters less than approximately 30 wavelengths at one or more intended operating frequencies.

  11. Estimating the expected value of partial perfect information in health economic evaluations using integrated nested Laplace approximation.

    PubMed

    Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca

    2016-10-15

    The Expected Value of Perfect Partial Information (EVPPI) is a decision-theoretic measure of the 'cost' of parametric uncertainty in decision making used principally in health economic decision making. Despite this decision-theoretic grounding, the uptake of EVPPI calculations in practice has been slow. This is in part due to the prohibitive computational time required to estimate the EVPPI via Monte Carlo simulations. However, recent developments have demonstrated that the EVPPI can be estimated by non-parametric regression methods, which have significantly decreased the computation time required to approximate the EVPPI. Under certain circumstances, high-dimensional Gaussian Process (GP) regression is suggested, but this can still be prohibitively expensive. Applying fast computation methods developed in spatial statistics using Integrated Nested Laplace Approximations (INLA) and projecting from a high-dimensional into a low-dimensional input space allows us to decrease the computation time for fitting these high-dimensional GP, often substantially. We demonstrate that the EVPPI calculated using our method for GP regression is in line with the standard GP regression method and that despite the apparent methodological complexity of this new method, R functions are available in the package BCEA to implement it simply and efficiently. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  12. Tilt-tuned etalon locking for tunable laser stabilization.

    PubMed

    Gibson, Bradley M; McCall, Benjamin J

    2015-06-15

    Locking to a fringe of a tilt-tuned etalon provides a simple, inexpensive method for stabilizing tunable lasers. Here, we describe the use of such a system to stabilize an external-cavity quantum cascade laser; the locked laser has an Allan deviation of approximately 1 MHz over a one-second integration period, and has a single-scan tuning range of approximately 0.4 cm^(-1). The system is robust, with minimal alignment requirements and automated lock acquisition, and can be easily adapted to different wavelength regions or more stringent stability requirements with minor alterations.

  13. Approximate Solution Methods for Spectral Radiative Transfer in High Refractive Index Layers

    NASA Technical Reports Server (NTRS)

    Siegel, R.; Spuckler, C. M.

    1994-01-01

    Some ceramic materials for high temperature applications are partially transparent for radiative transfer. The refractive indices of these materials can be substantially greater than one which influences internal radiative emission and reflections. Heat transfer behavior of single and laminated layers has been obtained in the literature by numerical solutions of the radiative transfer equations coupled with heat conduction and heating at the boundaries by convection and radiation. Two-flux and diffusion methods are investigated here to obtain approximate solutions using a simpler formulation than required for exact numerical solutions. Isotropic scattering is included. The two-flux method for a single layer yields excellent results for gray and two band spectral calculations. The diffusion method yields a good approximation for spectral behavior in laminated multiple layers if the overall optical thickness is larger than about ten. A hybrid spectral model is developed using the two-flux method in the optically thin bands, and radiative diffusion in bands that are optically thick.

  14. Optimal control of a variable spin speed CMG system for space vehicles. [Control Moment Gyros

    NASA Technical Reports Server (NTRS)

    Liu, T. C.; Chubb, W. B.; Seltzer, S. M.; Thompson, Z.

    1973-01-01

    Many future NASA programs require very accurate pointing stability. These pointing requirements are well beyond anything attempted to date. This paper suggests a control system that has the capability of meeting these requirements. An optimal control law for the suggested system is specified. However, since no direct method of solution is known for this complicated system, a computation technique using successive approximations is used to develop the required solution. The calculus of variations is applied to estimate the changes in the index of performance as well as the inequality constraints on the state variables and terminal conditions. Thus, an algorithm is obtained by the steepest descent method and/or the conjugate gradient method. Numerical examples are given to show the optimal controls.

  15. Basis Function Approximation of Transonic Aerodynamic Influence Coefficient Matrix

    NASA Technical Reports Server (NTRS)

    Li, Wesley Waisang; Pak, Chan-Gi

    2010-01-01

    A technique for approximating the modal aerodynamic influence coefficient [AIC] matrices by using basis functions has been developed and validated. An application of the resulting approximated modal AIC matrix for a flutter analysis in the transonic speed regime has been demonstrated. This methodology can be applied to unsteady subsonic, transonic, and supersonic aerodynamics. The method requires the unsteady aerodynamics in the frequency domain. The flutter solution can be found by the classic methods, such as rational function approximation, k, p-k, p, and root-locus methods. The unsteady aeroelastic analysis for design optimization using the unsteady transonic aerodynamic approximation is demonstrated using the ZAERO(TradeMark) flutter solver (ZONA Technology Incorporated, Scottsdale, Arizona). The technique presented has been shown to offer consistent flutter speed prediction on an aerostructures test wing [ATW] 2 configuration with negligible loss in precision in the transonic speed regime. These results may have practical significance in aircraft aeroelastic calculations and could lead to a more efficient design optimization cycle.

  16. Basis Function Approximation of Transonic Aerodynamic Influence Coefficient Matrix

    NASA Technical Reports Server (NTRS)

    Li, Wesley W.; Pak, Chan-gi

    2011-01-01

    A technique for approximating the modal aerodynamic influence coefficients matrices by using basis functions has been developed and validated. An application of the resulting approximated modal aerodynamic influence coefficients matrix for a flutter analysis in the transonic speed regime has been demonstrated. This methodology can be applied to unsteady subsonic, transonic, and supersonic aerodynamics. The method requires the unsteady aerodynamics in the frequency domain. The flutter solution can be found by the classic methods, such as rational function approximation, k, p-k, p, and root-locus methods. The unsteady aeroelastic analysis for design optimization using the unsteady transonic aerodynamic approximation is demonstrated using the ZAERO flutter solver (ZONA Technology Incorporated, Scottsdale, Arizona). The technique presented has been shown to offer consistent flutter speed prediction on an aerostructures test wing 2 configuration with negligible loss in precision in the transonic speed regime. These results may have practical significance in aircraft aeroelastic calculations and could lead to a more efficient design optimization cycle.

  17. Method for calculating the aerodynamic loading on an oscillating finite wing in subsonic and sonic flow

    NASA Technical Reports Server (NTRS)

    Runyan, Harry L; Woolston, Donald S

    1957-01-01

    A method is presented for calculating the loading on a finite wing oscillating in subsonic or sonic flow. The method is applicable to any plan form and may be used for determining the loading on deformed wings. The procedure is approximate and requires numerical integration over the wing surface.

  18. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

    An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi-3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
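
    The first-order moment propagation at the heart of this approach reduces to a one-line formula, var(f) ≈ Σ_i (∂f/∂x_i σ_i)², which the sketch below checks against Monte Carlo. The algebraic response function is a stand-in for the Euler code, chosen only so its derivatives are easy to verify.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def output(m, a):
        """Stand-in response, e.g., a lift-like quantity vs. Mach m and alpha a."""
        return a * np.sqrt(1.0 - m**2) + 0.1 * m * a**2

    def grad(m, a):
        dm = -a * m / np.sqrt(1.0 - m**2) + 0.1 * a**2
        da = np.sqrt(1.0 - m**2) + 0.2 * m * a
        return np.array([dm, da])

    mean = np.array([0.6, 2.0])          # mean Mach and alpha (illustrative)
    sigma = np.array([0.01, 0.05])       # input standard deviations

    # First-order second-moment approximation of the output variance.
    g = grad(*mean)
    var_fosm = np.sum((g * sigma) ** 2)

    # Monte Carlo reference over independent normal inputs.
    X = rng.normal(mean, sigma, size=(100_000, 2))
    samples = output(X[:, 0], X[:, 1])
    print("FOSM std:", np.sqrt(var_fosm))
    print("MC   std:", samples.std())
    ```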

  19. Issues of planning trajectory of parallel robots taking into account zones of singularity

    NASA Astrophysics Data System (ADS)

    Rybak, L. A.; Khalapyan, S. Y.; Gaponenko, E. V.

    2018-03-01

    A method for determining the design characteristics of a parallel robot necessary to provide specified parameters of its working space that satisfy the controllability requirement is developed. The experimental verification of the proposed method was carried out using an approximate planar 3-RPR mechanism.

  20. Precision of Sensitivity in the Design Optimization of Indeterminate Structures

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Hopkins, Dale A.

    2006-01-01

    Design sensitivity is central to most optimization methods. The analytical sensitivity expression for an indeterminate structural design optimization problem can be factored into a simple determinate term and a complicated indeterminate component. Sensitivity can be approximated by retaining only the determinate term and setting the indeterminate factor to zero. The optimum solution is reached with the approximate sensitivity. The central processing unit (CPU) time to solution is substantially reduced. The benefit that accrues from using the approximate sensitivity is quantified by solving a set of problems in a controlled environment. Each problem is solved twice: first using the closed-form sensitivity expression, then using the approximation. The problem solutions use the CometBoards testbed as the optimization tool with the integrated force method as the analyzer. The modification that may be required to use the stiffness method as the analysis tool in optimization is discussed. The design optimization problem of an indeterminate structure contains many dependent constraints because of the implicit relationship between stresses, as well as the relationship between the stresses and displacements. The design optimization process can become problematic because the implicit relationship reduces the rank of the sensitivity matrix. The proposed approximation restores the full rank and enhances the robustness of the design optimization method.

  1. Subsonic aircraft: Evolution and the matching of size to performance

    NASA Technical Reports Server (NTRS)

    Loftin, L. K., Jr.

    1980-01-01

    Methods for estimating the approximate size, weight, and power of aircraft intended to meet specified performance requirements are presented for both jet-powered and propeller-driven aircraft. The methods are simple and require only the use of a pocket computer for rapid application to specific sizing problems. Application of the methods is illustrated by means of sizing studies of a series of jet-powered and propeller-driven aircraft with varying design constraints. Some aspects of the technical evolution of the airplane from 1918 to the present are also briefly discussed.

  2. Semi-discrete approximations to nonlinear systems of conservation laws; consistency and L(infinity)-stability imply convergence

    NASA Technical Reports Server (NTRS)

    Tadmor, Eitan

    1988-01-01

    A convergence theory for semi-discrete approximations to nonlinear systems of conservation laws is developed. It is shown, by a series of scalar counter-examples, that consistency with the conservation law alone does not guarantee convergence. Instead, a notion of consistency which takes into account both the conservation law and its augmenting entropy condition is introduced. In this context it is concluded that consistency and L(infinity)-stability guarantee, for a relevant class of admissible entropy functions, that their entropy production rate belongs to a compact subset of H^(-1)_loc(x,t). One can now use compensated compactness arguments in order to turn this conclusion into a convergence proof. The current state of the art for these arguments includes the scalar and a wide class of 2 x 2 systems of conservation laws. The general framework of the vanishing viscosity method is studied as an effective way to meet the consistency and L(infinity)-stability requirements. How this method is utilized to enforce consistency and stability for scalar conservation laws is shown. In this context we prove, under the appropriate assumptions, the convergence of finite difference approximations (e.g., the high resolution TVD and UNO methods), finite element approximations (e.g., the Streamline-Diffusion methods) and spectral and pseudospectral approximations (e.g., the Spectral Viscosity methods).

  3. Semi-discrete approximations to nonlinear systems of conservation laws; consistency and L(infinity)-stability imply convergence. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tadmor, E.

    1988-07-01

    A convergence theory for semi-discrete approximations to nonlinear systems of conservation laws is developed. It is shown, by a series of scalar counter-examples, that consistency with the conservation law alone does not guarantee convergence. Instead, a notion of consistency which takes into account both the conservation law and its augmenting entropy condition is introduced. In this context it is concluded that consistency and L(infinity)-stability guarantee, for a relevant class of admissible entropy functions, that their entropy production rate belongs to a compact subset of H^(-1)_loc(x,t). One can now use compensated compactness arguments in order to turn this conclusion into a convergence proof. The current state of the art for these arguments includes the scalar and a wide class of 2 x 2 systems of conservation laws. The general framework of the vanishing viscosity method is studied as an effective way to meet the consistency and L(infinity)-stability requirements. How this method is utilized to enforce consistency and stability for scalar conservation laws is shown. In this context we prove, under the appropriate assumptions, the convergence of finite difference approximations (e.g., the high resolution TVD and UNO methods), finite element approximations (e.g., the Streamline-Diffusion methods) and spectral and pseudospectral approximations (e.g., the Spectral Viscosity methods).

  4. Generation of optimal artificial neural networks using a pattern search algorithm: application to approximation of chemical systems.

    PubMed

    Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz

    2008-02-01

    A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed variable extension to the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid point discretization of the parameter space.
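
    A sketch of the poll-and-shrink core of pattern search on a continuous test function. The paper's mixed-variable extension additionally handles categorical parameters (transfer functions, connectivities) and a surrogate, none of which appear in this minimal compass-search variant.

    ```python
    import numpy as np

    def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
        x = np.asarray(x0, float)
        fx = f(x)
        n = x.size
        while step > tol and max_iter > 0:
            max_iter -= 1
            improved = False
            # Poll the 2n compass directions +/- e_i at the current step length.
            for i in range(n):
                for s in (+step, -step):
                    trial = x.copy()
                    trial[i] += s
                    ft = f(trial)
                    if ft < fx:
                        x, fx, improved = trial, ft, True
            if not improved:
                step *= 0.5            # unsuccessful poll: shrink the mesh
        return x, fx

    rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    x, fx = pattern_search(rosen, [-1.2, 1.0])
    print("minimizer ~", x.round(4), " f =", fx)
    ```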

  5. An alternative approach for computing seismic response with accidental eccentricity

    NASA Astrophysics Data System (ADS)

    Fan, Xuanhua; Yin, Jiacong; Sun, Shuli; Chen, Pu

    2014-09-01

    Accidental eccentricity is a non-standard assumption for the seismic design of tall buildings. Taking it into consideration requires reanalysis of seismic resistance, which involves either time-consuming computation of the natural vibration of eccentric structures or finding a static displacement solution by applying an approximated equivalent torsional moment for each eccentric case. This study proposes an alternative modal response spectrum analysis (MRSA) approach to calculate seismic responses with accidental eccentricity. The proposed approach, called the Rayleigh-Ritz Projection-MRSA (RRP-MRSA), is developed based on MRSA and two strategies: (a) an RRP method to obtain a fast calculation of approximate modes of eccentric structures; and (b) an approach to assemble the mass matrices of eccentric structures. The efficiency of RRP-MRSA is tested via engineering examples and compared with the standard MRSA (ST-MRSA) and one approximate method, i.e., the equivalent torsional moment hybrid MRSA (ETM-MRSA). Numerical results show that RRP-MRSA not only achieves almost the same precision as ST-MRSA, and is much better than ETM-MRSA, but is also more economical. Thus, RRP-MRSA can be used in place of current accidental eccentricity computations in seismic design.

  6. Progress in the Determination of the Earth's Gravity Field

    NASA Technical Reports Server (NTRS)

    Rapp, Richard H. (Editor)

    1989-01-01

    Topics addressed include: global gravity model development; methods for approximation of the gravity field; gravity field measuring techniques; global gravity field applications and requirements in geophysics and oceanography; and future gravity missions.

  7. Incorporating approximation error in surrogate based Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L.; Li, W.; Wu, L.

    2015-12-01

    There is increasing interest in applying surrogates in inverse Bayesian modeling to reduce repetitive evaluations of the original model and thereby save computational cost. However, the approximation error of the surrogate model is usually overlooked, partly because it is difficult to evaluate for many surrogates. Previous studies have shown that the direct combination of surrogates and Bayesian methods (e.g., Markov chain Monte Carlo, MCMC) may lead to biased estimations when the surrogate cannot emulate the highly nonlinear original system. This problem can be alleviated by implementing MCMC in a two-stage manner, but the computational cost is still high since a relatively large number of original model simulations are required. In this study, we illustrate the importance of incorporating approximation error in inverse Bayesian modeling. A Gaussian process (GP) is chosen to construct the surrogate because its approximation error is convenient to evaluate. Numerical cases of Bayesian experimental design and parameter estimation for contaminant source identification are used to illustrate this idea. It is shown that, once the surrogate approximation error is well incorporated into the Bayesian framework, promising results can be obtained even when the surrogate is used directly, with no further original model simulations required.
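
    A 1-D miniature of the idea, with an invented forward model, kernel, and priors: folding the GP surrogate's predictive variance into the Gaussian likelihood down-weights parameter regions where the surrogate itself is uncertain, which is what prevents the biased estimates described above.

    ```python
    import numpy as np

    def forward(theta):
        return np.sin(3.0 * theta) + 0.5 * theta   # "expensive" model stand-in

    # Train a small GP surrogate on a handful of model runs.
    Xt = np.linspace(-2.0, 2.0, 8)
    yt = forward(Xt)
    ell, tau, jitter = 0.7, 1.0, 1e-8
    k = lambda a, b: tau**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)
    Kinv = np.linalg.inv(k(Xt, Xt) + jitter * np.eye(Xt.size))

    def gp_predict(xs):
        Ks = k(xs, Xt)
        mean = Ks @ Kinv @ yt
        var = tau**2 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks)
        return mean, np.maximum(var, 0.0)

    # Observed datum from the true model plus a small error.
    theta_true, sig_obs = 0.8, 0.1
    y_obs = forward(theta_true) + 0.05

    # Grid posterior with flat prior: ignoring vs. including GP variance.
    grid = np.linspace(-2.0, 2.0, 2001)
    mu, var = gp_predict(grid)
    log_post_naive = -0.5 * (y_obs - mu) ** 2 / sig_obs**2
    log_post_full = (-0.5 * (y_obs - mu) ** 2 / (sig_obs**2 + var)
                     - 0.5 * np.log(sig_obs**2 + var))
    for name, lp in [("naive", log_post_naive), ("full", log_post_full)]:
        p = np.exp(lp - lp.max()); p /= p.sum()
        print(name, "posterior mean:", (grid * p).sum().round(3))
    ```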

  8. Data-driven discovery of Koopman eigenfunctions using deep learning

    NASA Astrophysics Data System (ADS)

    Lusch, Bethany; Brunton, Steven L.; Kutz, J. Nathan

    2017-11-01

    Koopman operator theory transforms any autonomous non-linear dynamical system into an infinite-dimensional linear system. Since linear systems are well-understood, a mapping of non-linear dynamics to linear dynamics provides a powerful approach to understanding and controlling fluid flows. However, finding the correct change of variables remains an open challenge. We present a strategy to discover an approximate mapping using deep learning. Our neural networks find this change of variables, its inverse, and a finite-dimensional linear dynamical system defined on the new variables. Our method is completely data-driven and only requires measurements of the system, i.e. it does not require derivatives or knowledge of the governing equations. We find a minimal set of approximate Koopman eigenfunctions that are sufficient to reconstruct and advance the system to future states. We demonstrate the method on several dynamical systems.

  9. A diffusion approximation for ocean wave scatterings by randomly distributed ice floes

    NASA Astrophysics Data System (ADS)

    Zhao, Xin; Shen, Hayley

    2016-11-01

    This study presents a continuum approach using a diffusion approximation method to solve the scattering of ocean waves by randomly distributed ice floes. In order to model both strong and weak scattering, the proposed method decomposes the wave action density function into two parts: the transmitted part and the scattered part. For a given wave direction, the transmitted part of the wave action density is defined as the part of wave action density in the same direction before the scattering; and the scattered part is a first order Fourier series approximation for the directional spreading caused by scattering. An additional approximation is also adopted for simplification, in which the net directional redistribution of wave action by a single scatterer is assumed to be the reflected wave action of a normally incident wave into a semi-infinite ice cover. Other required input includes the mean shear modulus, diameter and thickness of ice floes, and the ice concentration. The directional spreading of wave energy from the diffusion approximation is found to be in reasonable agreement with the previous solution using the Boltzmann equation. The diffusion model provides an alternative method to implement wave scattering into an operational wave model.

  10. A method for approximating acoustic-field-amplitude uncertainty caused by environmental uncertainties.

    PubMed

    James, Kevin R; Dowling, David R

    2008-09-01

    In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations and up to 10^N when N>1.
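
    A toy version of the shift technique, under the idealized assumption that the uncertain parameter acts on the field as a pure spatial shift (the regime where the abstract notes the method is accurate): two field calculations yield the shift rate, after which PDF(A) follows from cheap interpolation of the reference field alone.

    ```python
    import numpy as np

    x = np.linspace(0.0, 200.0, 4000)
    dx = x[1] - x[0]

    def field(c):
        """Toy amplitude in which the uncertain parameter acts, by construction,
        as a pure spatial shift -- the regime where the technique matches well."""
        return np.abs(np.sin(2.0 * np.pi * (x - 5.0 * c) / 12.0)) * np.exp(-x / 300.0)

    c0, dc = 10.0, 0.05              # mean parameter, one perturbation (N+1 = 2 runs)
    A0, A1 = field(c0), field(c0 + dc)

    # Optimum shift between the two computed fields via cross-correlation.
    r0, r1 = A0 - A0.mean(), A1 - A1.mean()
    lag = np.argmax(np.correlate(r1, r0, mode="full")) - (x.size - 1)
    shift_per_dc = lag * dx / dc     # recovers 5.0 for this toy field

    # Convert the parameter PDF into PDF(A) at one range, reusing only A0.
    rng = np.random.default_rng(4)
    x0 = 150.0
    c_samples = rng.normal(c0, 0.02, size=50_000)
    A_samples = np.interp(x0 - (c_samples - c0) * shift_per_dc, x, A0)
    print("shift-method amplitude mean/std:",
          A_samples.mean().round(4), A_samples.std().round(4))

    # Brute-force reference (many field evaluations -- what the method avoids).
    ref = np.array([np.abs(np.sin(2.0 * np.pi * (x0 - 5.0 * c) / 12.0))
                    * np.exp(-x0 / 300.0) for c in c_samples[:2000]])
    print("brute-force mean/std:", ref.mean().round(4), ref.std().round(4))
    ```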

  11. Conjugate-gradient preconditioning methods for shift-variant PET image reconstruction.

    PubMed

    Fessler, J A; Booth, S D

    1999-01-01

    Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e., for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on nonquadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
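
    For the approximately shift-invariant case mentioned above, a circulant preconditioner is a few lines of FFT code. The sketch below compares CG iteration counts on an invented SPD Toeplitz system; it illustrates circulant preconditioning generally, not the paper's shift-variant preconditioners.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz
    from scipy.sparse.linalg import cg, LinearOperator

    n = 512
    col = np.exp(-0.2 * np.arange(n))
    col[0] += 1.0                      # SPD Toeplitz: identity plus a blur
    A = toeplitz(col)
    b = np.ones(n)

    # Strang-style circulant approximation: copy the nearest Toeplitz diagonals,
    # then invert it in O(n log n) by FFT diagonalization.
    k = np.arange(n)
    c = col[np.minimum(k, n - k)]
    eig = np.fft.fft(c).real           # circulant eigenvalues (positive here)
    M = LinearOperator((n, n),
                       matvec=lambda v: np.fft.ifft(np.fft.fft(v) / eig).real)

    def iterations(precond):
        """Run CG at the default tolerance and count iterations via callback."""
        count = [0]
        _, info = cg(A, b, M=precond, maxiter=1000,
                     callback=lambda xk: count.__setitem__(0, count[0] + 1))
        return count[0]

    print("CG iterations, unpreconditioned:", iterations(None))
    print("CG iterations, circulant M     :", iterations(M))
    ```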

  12. Navier-Stokes and viscous-inviscid interaction

    NASA Technical Reports Server (NTRS)

    Steger, Joseph L.; Vandalsem, William R.

    1989-01-01

    Some considerations toward developing numerical procedures for simulating viscous compressible flows are discussed. Both Navier-Stokes and boundary layer field methods are considered. Because efficient viscous-inviscid interaction methods have been difficult to extend to complex 3-D flow simulations, Navier-Stokes procedures are more frequently being utilized even though they require considerably more work per grid point. It would seem a mistake, however, not to make use of the more efficient approximate methods in those regions in which they are clearly valid. Ideally, a general purpose compressible flow solver that can optionally take advantage of approximate solution methods would suffice, both to improve accuracy and efficiency. Some potentially useful steps toward this goal are described: a generalized 3-D boundary layer formulation and the fortified Navier-Stokes procedure.

  13. A computational algorithm addressing how vessel length might depend on vessel diameter

    Treesearch

    Jing Cai; Shuoxin Zhang; Melvin T. Tyree

    2010-01-01

    The objective of this method paper was to examine a computational algorithm that may reveal how vessel length might depend on vessel diameter within any given stem or species. The computational method requires the assumption that vessels remain approximately constant in diameter over their entire length. When this method is applied to three species or hybrids in the...

  14. Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brabec, Jiri; Lin, Lin; Shao, Meiyue

    We present two iterative algorithms for approximating the absorption spectrum of molecules within the linear response time-dependent density functional theory (TDDFT) framework. These methods do not attempt to compute eigenvalues or eigenvectors of the linear response matrix. They are designed to approximate the absorption spectrum as a function directly. They take advantage of the special structure of the linear response matrix. Neither method requires the linear response matrix to be constructed explicitly. They only require a procedure that performs the multiplication of the linear response matrix with a vector. These methods can also be easily modified to efficiently estimate the density of states (DOS) of the linear response matrix without computing the eigenvalues of this matrix. We show by computational experiments that the methods proposed in this paper can be much more efficient than methods that are based on the exact diagonalization of the linear response matrix. We show that they can also be more efficient than real-time TDDFT simulations. We compare the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost.
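
    The matrix-free idea can be illustrated with a plain Lanczos sketch (an illustration only, not the paper's algorithms): a spectral function is approximated from k matrix-vector products plus the eigendecomposition of a small k-by-k tridiagonal matrix, with Lorentzian broadening standing in for the lineshape. The diagonal test matrix is an assumption for demonstration.

        import numpy as np

        def lanczos_spectrum(matvec, n, k, omega, eta=0.05, seed=0):
            """Approximate sum_i |<v0|x_i>|^2 * delta(omega - lam_i) using k
            Lanczos steps; only matvec(v) is ever required."""
            rng = np.random.default_rng(seed)
            v = rng.standard_normal(n)
            v /= np.linalg.norm(v)
            v_prev = np.zeros(n)
            beta = 0.0
            alphas, betas = [], []
            for _ in range(k):
                w = matvec(v) - beta * v_prev
                alpha = v @ w
                w -= alpha * v
                beta = np.linalg.norm(w)
                alphas.append(alpha); betas.append(beta)
                v_prev, v = v, w / beta
            T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
            lam, S = np.linalg.eigh(T)          # k x k: cheap
            weights = S[0, :] ** 2              # moments w.r.t. the start vector
            return sum(wt * eta / ((omega - l) ** 2 + eta ** 2)
                       for l, wt in zip(lam, weights))

        # Usage: the "linear response matrix" is never formed by the routine.
        n = 500
        A = np.diag(np.linspace(0.0, 10.0, n))
        omega = np.linspace(0.0, 10.0, 400)
        spec = lanczos_spectrum(lambda x: A @ x, n, k=60, omega=omega)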

  15. Adsorption energies of benzene on close packed transition metal surfaces using the random phase approximation

    NASA Astrophysics Data System (ADS)

    Garrido Torres, José A.; Ramberger, Benjamin; Früchtl, Herbert A.; Schaub, Renald; Kresse, Georg

    2017-11-01

    The adsorption energy of benzene on various metal substrates is predicted using the random phase approximation (RPA) for the correlation energy. Agreement with available experimental data is systematically better than 10% for both coinage and reactive metals. The results are also compared with more approximate methods, including van der Waals density functional theory (DFT), as well as dispersion-corrected DFT functionals. Although dispersion-corrected DFT can yield accurate results, for instance, on coinage metals, the adsorption energies are clearly overestimated on more reactive transition metals. Furthermore, coverage dependent adsorption energies are well described by the RPA. This shows that for the description of aromatic molecules on metal surfaces further improvements in density functionals are necessary, or more involved many-body methods such as the RPA are required.

  16. Connection between the regular approximation and the normalized elimination of the small component in relativistic quantum theory

    NASA Astrophysics Data System (ADS)

    Filatov, Michael; Cremer, Dieter

    2005-02-01

    The regular approximation to the normalized elimination of the small component (NESC) in the modified Dirac equation has been developed and presented in matrix form. The matrix form of the infinite-order regular approximation (IORA) expressions, obtained in [Filatov and Cremer, J. Chem. Phys. 118, 6741 (2003)] using the resolution of the identity, is the exact matrix representation and corresponds to the zeroth-order regular approximation to NESC (NESC-ZORA). Because IORA (=NESC-ZORA) is a variationally stable method, it was used as a suitable starting point for the development of the second-order regular approximation to NESC (NESC-SORA). As shown for hydrogenlike ions, NESC-SORA energies are closer to the exact Dirac energies than the energies from the fifth-order Douglas-Kroll approximation, which is much more computationally demanding than NESC-SORA. For the application of IORA (=NESC-ZORA) and NESC-SORA to many-electron systems, the number of the two-electron integrals that need to be evaluated (identical to the number of the two-electron integrals of a full Dirac-Hartree-Fock calculation) was drastically reduced by using the resolution of the identity technique. An approximation was derived, which requires only the two-electron integrals of a nonrelativistic calculation. The accuracy of this approach was demonstrated for heliumlike ions. The total energy based on the approximate integrals deviates from the energy calculated with the exact integrals by less than 5×10^-9 hartree. NESC-ZORA and NESC-SORA can easily be implemented in any nonrelativistic quantum chemical program. Their application is comparable in cost with that of nonrelativistic methods. The methods can be run with density functional theory and any wave function method. NESC-SORA has the advantage that it does not imply a picture change.

  17. A 2D Gaussian-Beam-Based Method for Modeling the Dichroic Surfaces of Quasi-Optical Systems

    NASA Astrophysics Data System (ADS)

    Elis, Kevin; Chabory, Alexandre; Sokoloff, Jérôme; Bolioli, Sylvain

    2016-08-01

    In this article, we propose a spectral-domain approach to treat the interaction of a field with a dichroic surface in two dimensions. For Gaussian beam illumination of the surface, the reflected and transmitted fields are each approximated by a single Gaussian beam. Their characteristics are determined by matching in the spectral domain, which requires a second-order approximation of the dichroic surface response when excited by plane waves. This approximation is of the same order as the one used in Gaussian beam shooting algorithms to model the curved interfaces associated with lenses, reflectors, etc. The method uses general analytical formulations for the GBs that depend on either a paraxial or a far-field approximation. Numerical experiments are conducted to test the efficiency of the method in terms of accuracy and computation time. They include a parametric study and a case in which the illumination is provided by a horn antenna. For the latter, the incident field is first expressed as a sum of Gaussian beams by means of Gabor frames.

  18. Simple algorithms for digital pulse-shape discrimination with liquid scintillation detectors

    NASA Astrophysics Data System (ADS)

    Alharbi, T.

    2015-01-01

    The development of compact, battery-powered digital liquid scintillation neutron detection systems for field applications requires digital pulse processing (DPP) algorithms with minimum computational overhead. To meet this demand, two DPP algorithms for the discrimination of neutrons and γ-rays with liquid scintillation detectors were developed and examined by using a NE213 liquid scintillation detector in a mixed radiation field. The first algorithm is based on the relation between the amplitude of a current pulse at the output of a photomultiplier tube and the amount of charge contained in the pulse. A figure-of-merit (FOM) value of 0.98 with a 450 keVee (electron-equivalent energy) threshold was achieved with this method when pulses were sampled at 250 MSample/s with 8-bit resolution. Compared to the similar charge-comparison method, this method requires only a single integration window, thereby reducing the amount of computation by approximately 40%. The second approach is a digital version of the trailing-edge constant-fraction discrimination method. A FOM value of 0.84 with an energy threshold of 450 keVee was achieved with this method. In comparison with the similar rise-time discrimination method, this method requires a single time pick-off, thereby reducing the amount of computation by approximately 50%. The algorithms described in this work are useful for developing portable detection systems for applications such as homeland security, radiation dosimetry and environmental monitoring.
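
    The single-window amplitude-to-charge idea reduces to a few array operations. The pulse shapes, sampling grid, and noise level below are illustrative assumptions; the FOM as peak separation over the sum of FWHMs is the usual convention.

        import numpy as np

        rng = np.random.default_rng(2)

        def pmt_pulse(t, tau_fast, tau_slow, slow_frac):
            # Toy PMT current pulse: fast + slow scintillation components.
            return (1 - slow_frac) * np.exp(-t / tau_fast) + slow_frac * np.exp(-t / tau_slow)

        t = np.arange(0, 300, 4.0)          # 250 MSample/s -> 4 ns per sample

        def make_pulses(n, slow_frac):
            amp = rng.uniform(0.5, 1.0, n)
            noise = rng.normal(0, 0.01, (n, t.size))
            return amp[:, None] * pmt_pulse(t, 5.0, 100.0, slow_frac) + noise

        gammas = make_pulses(2000, 0.10)    # less slow light
        neutrons = make_pulses(2000, 0.30)  # more slow light

        def amp_over_charge(pulses):
            # One integration window: peak amplitude divided by total charge.
            return pulses.max(axis=1) / pulses.sum(axis=1)

        def fom(x, y):
            # FOM = peak separation / (FWHM_x + FWHM_y); Gaussian FWHM = 2.355 sigma.
            return abs(x.mean() - y.mean()) / (2.355 * (x.std() + y.std()))

        print("FOM:", fom(amp_over_charge(gammas), amp_over_charge(neutrons)))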

  19. A general moment expansion method for stochastic kinetic models

    NASA Astrophysics Data System (ADS)

    Ale, Angelique; Kirk, Paul; Stumpf, Michael P. H.

    2013-05-01

    Moment approximation methods are gaining increasing attention for their use in approximating the stochastic kinetics of chemical reaction systems. In this paper we derive a general moment expansion method that applies to any type of propensity and allows expansion up to any number of moments. For some chemical reaction systems, more than two moments are necessary to describe the dynamic properties of the system, which the linear noise approximation is unable to provide. Moreover, even for systems in which the mean does not depend strongly on higher-order moments, moment approximation methods give information about higher-order moments of the underlying probability distribution. We demonstrate the method using a dimerisation reaction, Michaelis-Menten kinetics and a model of an oscillating p53 system. We show that for the dimerisation reaction and the Michaelis-Menten enzyme kinetics system higher-order moments have limited influence on the estimation of the mean, while for the p53 system the solution for the mean can require several moments to converge to the average obtained from many stochastic simulations. We also find that agreement between lower-order moments does not guarantee that higher moments will agree. Compared to stochastic simulations, our approach is numerically highly efficient at capturing the behaviour of stochastic systems in terms of the average and higher moments, and we provide expressions for the computational cost for different system sizes and orders of approximation. We show how the moment expansion method can be employed to efficiently quantify parameter sensitivity. Finally, we investigate the effects of using too few moments on parameter estimation, and provide guidance on how to estimate whether the distribution can be accurately approximated using only a few moments.

  20. An Incompressible, Depth-Averaged Lattice Boltzmann Method for Liquid Flow in Microfluidic Devices with Variable Aperture

    DOE PAGES

    Laleian, Artin; Valocchi, Albert J.; Werth, Charles J.

    2015-11-24

    Two-dimensional (2D) pore-scale models have successfully simulated microfluidic experiments of aqueous-phase flow with mixing-controlled reactions in devices with small aperture. A standard 2D model is not generally appropriate when the presence of mineral precipitate or biomass creates complex and irregular three-dimensional (3D) pore geometries. We modify the 2D lattice Boltzmann method (LBM) to incorporate viscous drag from the top and bottom microfluidic device (micromodel) surfaces, typically excluded in a 2D model. Viscous drag from these surfaces can be approximated by uniformly scaling a steady-state 2D velocity field at low Reynolds number. We demonstrate increased accuracy by approximating the viscous drag with an analytically derived body force which assumes a local parabolic velocity profile across the micromodel depth. Accuracy of the generated 2D velocity field and simulation permeability has not previously been evaluated in geometries with variable aperture. We obtain permeabilities within approximately 10% error and accurate streamlines from the proposed 2D method relative to results obtained from 3D simulations. Additionally, the proposed method requires a CPU run time approximately 40 times less than a standard 3D method, representing a significant computational benefit for permeability calculations.
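
    The parabolic-profile body force can be illustrated outside the LBM: for a local Poiseuille profile across an aperture h, the top and bottom walls contribute a depth-averaged drag of -12*mu*u/h^2 per unit volume (the Hele-Shaw closure). A minimal explicit update with invented geometry and parameters:

        import numpy as np

        nx, ny = 64, 64
        mu, rho = 1.0e-3, 1000.0            # water-like viscosity/density, SI
        # Variable aperture field, 50-75 microns (invented).
        h = 50e-6 * (1 + 0.5 * np.random.default_rng(3).random((ny, nx)))

        u = np.full((ny, nx), 1e-4)          # depth-averaged x-velocity
        gx = 1.0                             # driving body force per unit mass
        dt = 1e-6
        for _ in range(1000):
            # Parabolic-profile closure: walls exert -12*mu*u/h^2 per unit volume.
            drag = -12.0 * mu * u / (rho * h ** 2)
            u += dt * (gx + drag)
        # Steady state approaches u = rho*gx*h^2/(12*mu) cell by cell, so cells
        # with smaller aperture carry proportionally less flow.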

  1. Nonlinear programming extensions to rational function approximations of unsteady aerodynamics

    NASA Technical Reports Server (NTRS)

    Tiffany, Sherwood H.; Adams, William M., Jr.

    1987-01-01

    This paper deals with approximating unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft. Two methods of formulating these approximations are extended to include the same flexibility in constraining them, and the same methodology for optimizing nonlinear parameters, as a third, currently used 'extended least-squares' method. Optimal selection of 'nonlinear' parameters is made in each of the three methods by use of the same nonlinear (nongradient) optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free 'linear' parameters are determined using least-squares matrix techniques on a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented which show comparative evaluations from application of each of the extended methods to a numerical example. The results obtained for the example problem show a significant (up to 63 percent) reduction in the number of differential equations used to represent the unsteady aerodynamic forces in linear time-invariant equations of motion, as compared to a conventional method in which nonlinear terms are not optimized.

  2. Reduced and simplified chemical kinetics for air dissociation using Computational Singular Perturbation

    NASA Technical Reports Server (NTRS)

    Goussis, D. A.; Lam, S. H.; Gnoffo, P. A.

    1990-01-01

    The Computational Singular Perturbation (CSP) method is employed (1) in the modeling of a homogeneous isothermal reacting system and (2) in the numerical simulation of the chemical reactions in a hypersonic flowfield. Reduced and simplified mechanisms are constructed. The solutions obtained on the basis of these approximate mechanisms are shown to be in very good agreement with the exact solution based on the full mechanism. Physically meaningful approximations are derived. It is demonstrated that the deduction of these approximations from CSP is independent of the complexity of the problem and requires no intuition or experience in chemical kinetics.

  3. Accuracy of least-squares methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bochev, Pavel B.; Gunzburger, Max D.

    1993-01-01

    Recently there has been substantial interest in least-squares finite element methods for velocity-vorticity-pressure formulations of the incompressible Navier-Stokes equations. The main cause for this interest is the fact that algorithms for the resulting discrete equations can be devised which require the solution of only symmetric, positive definite systems of algebraic equations. On the other hand, it is well documented that methods using the vorticity as a primary variable often yield very poor approximations. Thus, here we study the accuracy of these methods through a series of computational experiments, and also comment on theoretical error estimates. Although standard techniques for deriving error estimates fail for these methods, the computational evidence suggests that they are, at the least, nearly optimally accurate. Thus, in addition to the desirable matrix properties yielded by least-squares methods, one also obtains accurate approximations.

  4. Including fluid shear viscosity in a structural acoustic finite element model using a scalar fluid representation

    PubMed Central

    Cheng, Lei; Li, Yizeng; Grosh, Karl

    2013-01-01

    An approximate boundary condition is developed in this paper to model fluid shear viscosity at the boundaries of a coupled fluid-structure system. The effect of shear viscosity is approximated by a correction term to the inviscid boundary condition, written in terms of second-order in-plane derivatives of pressure. Both thin and thick viscous boundary layer approximations are formulated; the latter subsumes the former. These approximations are used to develop a variational formulation, upon which a viscous finite element method (FEM) model is based, requiring only minor modifications to the boundary integral contributions of an existing inviscid FEM model. Since this FEM formulation has only one degree of freedom for pressure, it holds a great computational advantage over the conventional viscous FEM formulation, which requires discretization of the full set of linearized Navier-Stokes equations. The results from the thick viscous boundary layer approximation are found to be in good agreement with the prediction from a Navier-Stokes model. When applicable, the thin viscous boundary layer approximation also gives accurate results with computational simplicity compared to the thick boundary layer formulation. Direct comparisons of simulation results using the boundary layer approximations and a full, linearized Navier-Stokes model are made and used to evaluate the accuracy of the approximate technique. Guidelines are given for the parameter ranges over which the thick and thin boundary approximations can be accurately applied to a fluid-structure interaction problem. PMID:23729844

  5. Including fluid shear viscosity in a structural acoustic finite element model using a scalar fluid representation.

    PubMed

    Cheng, Lei; Li, Yizeng; Grosh, Karl

    2013-08-15

    An approximate boundary condition is developed in this paper to model fluid shear viscosity at the boundaries of a coupled fluid-structure system. The effect of shear viscosity is approximated by a correction term to the inviscid boundary condition, written in terms of second-order in-plane derivatives of pressure. Both thin and thick viscous boundary layer approximations are formulated; the latter subsumes the former. These approximations are used to develop a variational formulation, upon which a viscous finite element method (FEM) model is based, requiring only minor modifications to the boundary integral contributions of an existing inviscid FEM model. Since this FEM formulation has only one degree of freedom for pressure, it holds a great computational advantage over the conventional viscous FEM formulation, which requires discretization of the full set of linearized Navier-Stokes equations. The results from the thick viscous boundary layer approximation are found to be in good agreement with the prediction from a Navier-Stokes model. When applicable, the thin viscous boundary layer approximation also gives accurate results with computational simplicity compared to the thick boundary layer formulation. Direct comparisons of simulation results using the boundary layer approximations and a full, linearized Navier-Stokes model are made and used to evaluate the accuracy of the approximate technique. Guidelines are given for the parameter ranges over which the thick and thin boundary approximations can be accurately applied to a fluid-structure interaction problem.

  6. Deep learning and model predictive control for self-tuning mode-locked lasers

    NASA Astrophysics Data System (ADS)

    Baumeister, Thomas; Brunton, Steven L.; Nathan Kutz, J.

    2018-03-01

    Self-tuning optical systems are of growing importance in technological applications such as mode-locked fiber lasers. Such self-tuning paradigms require intelligent algorithms capable of inferring approximate models of the underlying physics and discovering appropriate control laws in order to maintain robust performance for a given objective. In this work, we demonstrate the first integration of a deep learning (DL) architecture with model predictive control (MPC) in order to self-tune a mode-locked fiber laser. Not only can our DL-MPC algorithmic architecture approximate the unknown fiber birefringence, it also builds a dynamical model of the laser and an appropriate control law for maintaining robust, high-energy pulses despite a stochastically drifting birefringence. We demonstrate the effectiveness of this method on a fiber laser which is mode-locked by nonlinear polarization rotation. The method advocated can be broadly applied to a variety of optical systems that require robust controllers.

  7. An artificial light source influences mating and oviposition of black soldier flies, Hermetia illucens.

    PubMed

    Zhang, Jibin; Huang, Ling; He, Jin; Tomberlin, Jeffery K; Li, Jianhong; Lei, Chaoliang; Sun, Ming; Liu, Ziduo; Yu, Ziniu

    2010-01-01

    Current methods for mass-rearing black soldier flies, Hermetia illucens (L.) (Diptera: Stratiomyidae), in the laboratory are dependent on sunlight. Quartz-iodine lamps and rare earth lamps were examined as artificial light sources for stimulating H. illucens to mate and lay eggs. Sunlight was used as the control. Adults in the quartz-iodine lamp treatment had a mating rate of 61% of those in the sunlight control. No mating occurred when the rare earth lamp was used as a substitute. Egg hatch for the quartz-iodine lamp and sunlight treatments occurred in approximately 4 days, and the hatch rate was similar between these two treatments. Larval and pupal development under these treatments required approximately 18 and 15 days at 28°C, respectively. Development of methods for mass rearing of H. illucens using artificial light will enable production of this fly throughout the year without investing in greenhouse space or requiring sunlight.

  8. A Novel Iterative Scheme for the Very Fast and Accurate Solution of Non-LTE Radiative Transfer Problems

    NASA Astrophysics Data System (ADS)

    Trujillo Bueno, J.; Fabiani Bendicho, P.

    1995-12-01

    Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor of 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than that of G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/(2√2). This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions on very fine grids.
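
    The factor-of-2 speedup of Gauss-Seidel over Jacobi is easy to reproduce on a model problem; the sketch below uses a 1D Poisson equation rather than a radiative transfer operator, an assumption made purely for brevity.

        import numpy as np

        # 1D Poisson model problem: -u'' = f on a grid, zero Dirichlet ends.
        n = 64
        f = np.ones(n)
        h2 = 1.0 / (n + 1) ** 2

        def sweep_jacobi(u):
            new = u.copy()
            new[1:-1] = 0.5 * (u[:-2] + u[2:] + h2 * f[1:-1])
            return new

        def sweep_gauss_seidel(u):
            # Uses freshly updated neighbors immediately, which roughly doubles
            # the asymptotic convergence rate relative to Jacobi here.
            u = u.copy()
            for i in range(1, n - 1):
                u[i] = 0.5 * (u[i - 1] + u[i + 1] + h2 * f[i])
            return u

        def iterations_to_tol(sweep, tol=1e-8):
            u = np.zeros(n)
            for k in range(1, 100000):
                u_new = sweep(u)
                if np.max(np.abs(u_new - u)) < tol:
                    return k
                u = u_new

        print("Jacobi:      ", iterations_to_tol(sweep_jacobi))
        print("Gauss-Seidel:", iterations_to_tol(sweep_gauss_seidel))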

  9. Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering

    NASA Astrophysics Data System (ADS)

    Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki

    2018-03-01

    We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cangiani, Andrea; Gyrya, Vitaliy; Manzini, Gianmarco

    In this paper, we present the nonconforming virtual element method (VEM) for the numerical approximation of velocity and pressure in the steady Stokes problem. The pressure is approximated using discontinuous piecewise polynomials, while each component of the velocity is approximated using the nonconforming virtual element space. On each mesh element the local virtual space contains the space of polynomials of up to a given degree, plus suitable nonpolynomial functions. The virtual element functions are implicitly defined as the solution of local Poisson problems with polynomial Neumann boundary conditions. As is typical in VEM approaches, the explicit evaluation of the nonpolynomial functions is not required. This approach makes it possible to construct nonconforming (virtual) spaces for any polynomial degree regardless of the parity, for two- and three-dimensional problems, and for meshes with very general polygonal and polyhedral elements. We show that the nonconforming VEM is inf-sup stable and establish optimal a priori error estimates for the velocity and pressure approximations. Finally, numerical examples confirm the convergence analysis and the effectiveness of the method in providing high-order accurate approximations.

  11. Grating-based holographic diffraction methods for X-rays and neutrons: phase object approximation and dynamical theory

    DOE PAGES

    Feng, Hao; Ashkar, Rana; Steinke, Nina; ...

    2018-02-01

    A method dubbed grating-based holography was recently used to determine the structure of colloidal fluids in the rectangular grooves of a diffraction grating from X-ray scattering measurements. Similar grating-based measurements have also been recently made with neutrons using a technique called spin-echo small-angle neutron scattering. The analysis of the X-ray diffraction data was done using an approximation that treats the X-ray phase change caused by the colloidal structure as a small perturbation to the overall phase pattern generated by the grating. In this paper, the adequacy of this weak phase approximation is explored for both X-ray and neutron grating holography. Additionally, it is found that there are several approximations hidden within the weak phase approximation that can lead to incorrect conclusions from experiments. In particular, the phase contrast for the empty grating is a critical parameter. Finally, while the approximation is found to be perfectly adequate for the X-ray grating holography experiments performed to date, it cannot be applied to similar neutron experiments because the latter technique requires much deeper grating channels.

  12. A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows

    NASA Astrophysics Data System (ADS)

    Allphin, Devin

    Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. The determination of these forces was to be found using parallel surrogate and exact approximation methods, demonstrating the comparative benefits of this technique. For the offline approximation, Latin hypercube sampling (LHS) was used for design space filling across four independent design variable degrees of freedom (DOF). Flow solutions at the mapped test sites were converged using STAR-CCM+, and the aerodynamic forces from the CFD models were then functionally approximated using Kriging interpolation. For the closed-form approximation, the problem was interpreted as an ideal 2-D converging-diverging (C-D) nozzle, where aerodynamic forces were directly mapped by application of the Euler equation solutions for isentropic compression/expansion. A cost-weighting procedure was finally established for creating model-selective discretionary logic, with a synthesized parallel simulation resource summary provided.
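
    The offline-approximation pipeline described here (space-filling sampling, expensive runs, Kriging fit, cheap reanalysis) can be sketched with standard tools; the objective function, bounds, and sample counts below are stand-ins for the CFD model, not the study's setup.

        import numpy as np
        from scipy.stats import qmc
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        # Stand-in for an expensive CFD evaluation of an aerodynamic force
        # over four design variables (purely illustrative).
        def expensive_cfd(x):
            return np.sin(x[:, 0]) + x[:, 1] ** 2 + 0.5 * x[:, 2] * x[:, 3]

        l_bounds, u_bounds = [0, 0, 0, 0], [np.pi, 2, 1, 1]
        sampler = qmc.LatinHypercube(d=4, seed=4)
        X = qmc.scale(sampler.random(n=40), l_bounds, u_bounds)  # 40 "CFD runs"
        y = expensive_cfd(X)

        # Kriging (Gaussian process) surrogate trained on the sampled runs.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
        gp.fit(X, y)

        # Cheap reanalysis at new design points, with uncertainty estimates.
        X_test = qmc.scale(sampler.random(n=5), l_bounds, u_bounds)
        y_hat, y_std = gp.predict(X_test, return_std=True)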

  13. Radiation/convection coupling in rocket motors and plumes

    NASA Technical Reports Server (NTRS)

    Farmer, R. C.; Saladino, A. J.

    1993-01-01

    The three commonly used propellant systems - H2/O2, RP-1/O2, and solid propellants - primarily radiate as molecular emitters, non-scattering small particles, and scattering larger particles, respectively. Present technology has accepted the uncoupling of the radiation analysis from that of the flowfield. This approximation becomes increasingly inaccurate as one considers plumes, interior rocket chambers, and nuclear rocket propulsion devices. This study will develop a hierarchy of methods which will address radiation/convection coupling in all of the aforementioned propulsion systems. The nature of the radiation/convection coupled problem is that the divergence of the radiative heat flux must be included in the energy equation and that the local, volume-averaged intensity of the radiation must be determined by a solution of the radiative transfer equation (RTE). The intensity is approximated by solving the RTE along several lines of sight (LOS) for each point in the flowfield. Such a procedure is extremely costly; therefore, further approximations are needed. Modified differential approximations are being developed for this purpose. It is not obvious which order of approximation is required for a given rocket motor analysis. Therefore, LOS calculations have been made for typical rocket motor operating conditions in order to select the type of approximation required. The results of these radiation calculations, and the interpretation of these intensity predictions, are presented herein.

  14. Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.

    2001-01-01

    This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
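
    The first- and second-order moment matching itself is compact. A sketch under simplifying assumptions (central finite-difference derivatives instead of the code's sensitivity derivatives, and an invented output function), with a Monte Carlo check of the approximation:

        import numpy as np

        def moment_match(f, mu, sigma, h=1e-4):
            """For independent normal inputs:
            mean ~= f(mu) + 0.5 * sum_i f_ii * sigma_i^2
            var  ~= sum_i (f_i * sigma_i)^2"""
            mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
            f0 = f(mu)
            grad = np.empty(mu.size)
            curv = np.empty(mu.size)
            for i in range(mu.size):
                e = np.zeros(mu.size); e[i] = h
                fp, fm = f(mu + e), f(mu - e)
                grad[i] = (fp - fm) / (2 * h)              # first derivative
                curv[i] = (fp - 2 * f0 + fm) / h ** 2      # second derivative
            mean = f0 + 0.5 * np.sum(curv * sigma ** 2)
            var = np.sum((grad * sigma) ** 2)
            return mean, np.sqrt(var)

        # Toy "CFD output" and a Monte Carlo comparison.
        f = lambda x: x[0] ** 2 + np.sin(x[1]) * x[2]
        mu, sigma = [1.0, 0.5, 2.0], [0.05, 0.02, 0.1]
        print(moment_match(f, mu, sigma))
        rng = np.random.default_rng(5)
        X = rng.normal(mu, sigma, (200000, 3))
        mc = f(X.T)
        print(mc.mean(), mc.std())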

  15. Neural Network Assisted Inverse Dynamic Guidance for Terminally Constrained Entry Flight

    PubMed Central

    Chen, Wanchun

    2014-01-01

    This paper presents a neural network assisted entry guidance law that is designed by applying Bézier approximation. It is shown that a fully constrained approximation of a reference trajectory can be made by using the Bézier curve. Applying this approximation, an inverse dynamic system for an entry flight is solved to generate the guidance command. The guidance solution thus obtained satisfies the terminal constraints on position, flight path, and azimuth angle. To satisfy the terminal velocity constraint as well, a prediction of the terminal velocity is required, based on which the approximating Bézier curve is adjusted. An artificial neural network is used for this prediction of the terminal velocity. The method enables faster implementation in achieving fully constrained entry flight. Results from simulations indicate improved performance of the neural network assisted method. The scheme is expected to have prospects for further research on automated onboard control of terminal velocity for both reentry and terminal guidance laws. PMID:24723821
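
    A Bézier reference trajectory is evaluated by repeated linear interpolation (de Casteljau's algorithm); the control points below are invented for illustration, with the endpoints playing the role of terminal position constraints.

        import numpy as np

        def de_casteljau(ctrl, t):
            """Evaluate a Bezier curve at parameter t in [0, 1] from its
            control points (k x dim array) by repeated linear interpolation."""
            pts = np.asarray(ctrl, dtype=float)
            while len(pts) > 1:
                pts = (1 - t) * pts[:-1] + t * pts[1:]
            return pts[0]

        # Illustrative altitude-vs-downrange reference (invented numbers):
        # endpoints pin the boundary constraints; interior points shape the
        # descent and are what an adjustment step would move.
        ctrl = np.array([[0.0, 80.0], [150.0, 60.0], [350.0, 30.0], [500.0, 10.0]])
        curve = np.array([de_casteljau(ctrl, t) for t in np.linspace(0, 1, 100)])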

  16. A new approximation of Fermi-Dirac integrals of order 1/2 for degenerate semiconductor devices

    NASA Astrophysics Data System (ADS)

    AlQurashi, Ahmed; Selvakumar, C. R.

    2018-06-01

    There has been tremendous growth in the field of integrated circuits (ICs) in the past fifty years. Scaling laws have mandated that both lateral and vertical dimensions be reduced, along with a steady increase in doping densities. Most modern semiconductor devices invariably have heavily doped regions where Fermi-Dirac integrals are required. Several attempts have been made to develop analytical approximations for Fermi-Dirac integrals, since direct numerical computation of the integrals is impractical inside semiconductor device models, although several highly accurate tabulated functions are available. Most of these analytical expressions are not well suited to semiconductor device applications because of poor accuracy, complicated calculations, or difficulty in differentiating and integrating them. A new approximation for the Fermi-Dirac integral of order 1/2 has been developed using Prony's method and is discussed in this paper. The approximation is accurate enough (mean absolute error (MAE) = 0.38%) and simple enough to be used in semiconductor device equations. The new approximation of the Fermi-Dirac integral is applied to a more generalized Einstein relation, which is an important relation in semiconductor devices.
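
    For reference, the integral being approximated can be evaluated by brute-force quadrature under one common normalization (Gamma(3/2) = sqrt(pi)/2); a closed-form fit such as the paper's Prony-based approximation replaces this with something cheap to evaluate and differentiate. The cutoff and grid below are assumptions.

        import numpy as np
        from math import gamma

        def fermi_dirac_half(eta, x_max=60.0, n=20000):
            """F_{1/2}(eta) = (1/Gamma(3/2)) * int_0^inf sqrt(x)/(1+exp(x-eta)) dx,
            evaluated by trapezoidal quadrature on a truncated grid."""
            x = np.linspace(0.0, x_max, n)
            integrand = np.sqrt(x) / (1.0 + np.exp(x - eta))
            return np.trapz(integrand, x) / gamma(1.5)

        # Nondegenerate limit check: F_{1/2}(eta) -> exp(eta) as eta -> -inf.
        for eta in (-5.0, 0.0, 5.0):
            print(eta, fermi_dirac_half(eta))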

  17. Method of making thermally removable polyurethanes

    DOEpatents

    Loy, Douglas A.; Wheeler, David R.; McElhanon, James R.; Saunders, Randall S.; Durbin-Voss, Marvie Lou

    2002-01-01

    A method of making a thermally removable polyurethane material by heating a mixture of a maleimide compound and a furan compound, and introducing alcohol and isocyanate functional groups, where the alcohol group and the isocyanate group react to form the urethane linkages, and the furan compound and the maleimide compound react to form the thermally weak Diels-Alder adducts that are incorporated into the backbone of the urethane linkages during formation of the polyurethane material at temperatures from above room temperature to less than approximately 90 °C. The polyurethane material can be easily removed within approximately an hour by heating to temperatures greater than approximately 90 °C in a polar solvent. The polyurethane material can be used to protect electronic components that may require subsequent removal of the solid material for component repair, modification or quality control.

  18. MLFMA-accelerated Nyström method for ultrasonic scattering - Numerical results and experimental validation

    NASA Astrophysics Data System (ADS)

    Gurrala, Praveen; Downs, Andrew; Chen, Kun; Song, Jiming; Roberts, Ron

    2018-04-01

    Full-wave scattering models for ultrasonic waves are necessary for the accurate prediction of voltage signals received from complex defects/flaws in practical nondestructive evaluation (NDE) measurements. We propose the high-order Nyström method accelerated by the multilevel fast multipole algorithm (MLFMA) as an improvement to the state-of-the-art full-wave scattering models that are based on boundary integral equations. We present numerical results demonstrating improvements in simulation time and memory requirements. In particular, we demonstrate the need for higher-order geometry and field approximation in modeling NDE measurements. We also illustrate the importance of full-wave scattering models using experimental pulse-echo data from a spherical inclusion in a solid, which cannot be modeled accurately by approximation-based scattering models such as the Kirchhoff approximation.

  19. Incorporation of varying types of temporal data in a neural network

    NASA Technical Reports Server (NTRS)

    Cohen, M. E.; Hudson, D. L.

    1992-01-01

    Most neural network models do not specifically deal with temporal data. Handling of these variables is complicated by the different uses to which temporal data are put, depending on the application. Even within the same application, temporal variables are often used in a number of different ways. In this paper, types of temporal data are discussed, along with their implications for approximate reasoning. Methods for integrating approximate temporal reasoning into existing neural network structures are presented. These methods are illustrated in a medical application for diagnosis of graft-versus-host disease which requires the use of several types of temporal data.

  20. A Fresh Math Perspective Opens New Possibilities for Computational Chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vu, Linda; Govind, Niranjan; Yang, Chao

    2017-05-26

    By reformulating the TDDFT problem as a matrix function approximation, making use of a special transformation and taking advantage of the underlying symmetry with respect to a non-Euclidean metric, Yang and his colleagues were able to apply the Lanczos algorithm and a Kernel Polynomial Method (KPM) to approximate the absorption spectrum of several molecules. Both of these algorithms require relatively little memory compared to nonsymmetric alternatives, which is the key to the computational savings.

  1. Meta-analysis of Odds Ratios: Current Good Practices

    PubMed Central

    Chang, Bei-Hung; Hoaglin, David C.

    2016-01-01

    Background Many systematic reviews of randomized clinical trials lead to meta-analyses of odds ratios. The customary methods of estimating an overall odds ratio involve weighted averages of the individual trials’ estimates of the logarithm of the odds ratio. That approach, however, has several shortcomings, arising from assumptions and approximations, that render the results unreliable. Although the problems have been documented in the literature for many years, the conventional methods persist in software and applications. A well-developed alternative approach avoids the approximations by working directly with the numbers of subjects and events in the arms of the individual trials. Objective We aim to raise awareness of methods that avoid the conventional approximations, can be applied with widely available software, and produce more-reliable results. Methods We summarize the fixed-effect and random-effects approaches to meta-analysis; describe conventional, approximate methods and alternative methods; apply the methods in a meta-analysis of 19 randomized trials of endoscopic sclerotherapy in patients with cirrhosis and esophagogastric varices; and compare the results. We demonstrate the use of SAS, Stata, and R software for the analysis. Results In the example, point estimates and confidence intervals for the overall log-odds-ratio differ between the conventional and alternative methods, in ways that can affect inferences. Programming is straightforward in the three software packages; an appendix gives the details. Conclusions The modest additional programming required should not be an obstacle to adoption of the alternative methods. Because their results are unreliable, use of the conventional methods for meta-analysis of odds ratios should be discontinued. PMID:28169977
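
    The conventional approach criticized above is easy to state concretely: per-trial log odds ratios with the Woolf variance approximation, pooled by inverse-variance weighting. A sketch with invented counts (the alternative methods instead model the arm-level counts directly):

        import numpy as np

        # Per-trial 2x2 counts (invented for illustration):
        # a = treatment events, b = treatment non-events,
        # c = control events,   d = control non-events.
        trials = np.array([
            [12, 88, 20, 80],
            [ 5, 45,  9, 41],
            [30, 70, 44, 56],
        ], dtype=float)

        a, b, c, d = trials.T
        log_or = np.log((a * d) / (b * c))      # per-trial log odds ratio
        var = 1 / a + 1 / b + 1 / c + 1 / d     # Woolf variance approximation
        w = 1 / var                             # inverse-variance weights

        pooled = np.sum(w * log_or) / np.sum(w) # fixed-effect pooled log OR
        se = np.sqrt(1 / np.sum(w))
        ci = pooled + np.array([-1.96, 1.96]) * se
        print("OR =", np.exp(pooled), "95% CI:", np.exp(ci))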

  2. DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro

    2016-10-01

    This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating directions method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.

  3. Modal propagation angles in ducts with soft walls and their connection with suppressor performance

    NASA Technical Reports Server (NTRS)

    Rice, E. J.

    1979-01-01

    The angles of propagation of the wave fronts associated with duct modes are derived for a cylindrical duct with soft walls (acoustic suppressors) and a uniform steady flow. The angle of propagation with respect to the radial coordinate (angle of incidence on the wall) is shown to be a better correlating parameter for the optimum wall impedance of spinning modes than the previously used mode cutoff ratio. Both the angle of incidence upon the duct wall and the propagation angle with respect to the duct axis are required to describe the attenuation of a propagating mode. Using the modal propagation angles, a geometric acoustics approach to suppressor acoustic performance was developed. Results from this approximate method were compared to exact modal propagation calculations to check the accuracy of the approximate method. The results are favorable except in the immediate vicinity of the modal optimum impedance where the approximate method yields about one-half of the exact maximum attenuation.

  4. TRUNCATED RANDOM MEASURES

    DTIC Science & Technology

    2018-01-12

    sequential representations, a method is required for determining which to use for the application at hand and, once a representation is selected, for... Methods, Assumptions, and Procedures 3.1 Background 3.1.1 CRMs and truncation Consider a Poisson point process on R+ := [0... the heart of the study of truncated CRMs. They provide an iterative method that can be terminated at any point to yield a finite approximation to the

  5. Perceptually informed synthesis of bandlimited classical waveforms using integrated polynomial interpolation.

    PubMed

    Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan

    2012-01-01

    Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
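
    A widely used low-order relative of this idea, the polyBLEP residual derived from linear (first-order) interpolation of the integrated step, shows the correction-function mechanism; it is a sketch of the general approach, not the paper's integrated third-order B-spline correction.

        import numpy as np

        def polyblep(t, dt):
            """Two-sample polynomial residual around a unit step discontinuity;
            subtracting it from a naive waveform suppresses aliasing."""
            if t < dt:                      # just after the discontinuity
                x = t / dt
                return x + x - x * x - 1.0
            if t > 1.0 - dt:                # just before the discontinuity
                x = (t - 1.0) / dt
                return x * x + x + x + 1.0
            return 0.0

        def sawtooth(f0, fs, n):
            """Naive sawtooth with a per-discontinuity polynomial correction."""
            out = np.empty(n)
            phase, dt = 0.0, f0 / fs
            for i in range(n):
                out[i] = 2.0 * phase - 1.0 - polyblep(phase, dt)
                phase += dt
                if phase >= 1.0:
                    phase -= 1.0
            return out

        y = sawtooth(f0=1661.0, fs=44100.0, n=4096)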

  6. Data-driven robust approximate optimal tracking control for unknown general nonlinear systems using adaptive dynamic programming method.

    PubMed

    Zhang, Huaguang; Cui, Lili; Zhang, Xin; Luo, Yanhong

    2011-12-01

    In this paper, a novel data-driven robust approximate optimal tracking control scheme is proposed for unknown general nonlinear systems by using the adaptive dynamic programming (ADP) method. In the design of the controller, only available input-output data is required instead of known system dynamics. A data-driven model is established by a recurrent neural network (NN) to reconstruct the unknown system dynamics using available input-output data. By adding a novel adjustable term related to the modeling error, the resultant modeling error is first guaranteed to converge to zero. Then, based on the obtained data-driven model, the ADP method is utilized to design the approximate optimal tracking controller, which consists of the steady-state controller and the optimal feedback controller. Further, a robustifying term is developed to compensate for the NN approximation errors introduced by implementing the ADP method. Based on Lyapunov approach, stability analysis of the closed-loop system is performed to show that the proposed controller guarantees the system state asymptotically tracking the desired trajectory. Additionally, the obtained control input is proven to be close to the optimal control input within a small bound. Finally, two numerical examples are used to demonstrate the effectiveness of the proposed control scheme.

  7. A broadband fast multipole accelerated boundary element method for the three dimensional Helmholtz equation.

    PubMed

    Gumerov, Nail A; Duraiswami, Ramani

    2009-01-01

    The development of a fast multipole method (FMM) accelerated iterative solution of the boundary element method (BEM) for the Helmholtz equations in three dimensions is described. The FMM for the Helmholtz equation is significantly different for problems with low and high kD (where k is the wavenumber and D the domain size), and for large problems the method must be switched between levels of the hierarchy. The BEM requires several approximate computations (numerical quadrature, approximations of the boundary shapes using elements), and these errors must be balanced against approximations introduced by the FMM and the convergence criterion for iterative solution. These different errors must all be chosen in a way that, on the one hand, excess work is not done and, on the other, that the error achieved by the overall computation is acceptable. Details of translation operators for low and high kD, choice of representations, and BEM quadrature schemes, all consistent with these approximations, are described. A novel preconditioner using a low accuracy FMM accelerated solver as a right preconditioner is also described. Results of the developed solvers for large boundary value problems with 0.0001 less, similarkD less, similar500 are presented and shown to perform close to theoretical expectations.

  8. Numerical solutions of the macroscopic Maxwell equations for scattering by non-spherical particles: A tutorial review

    NASA Astrophysics Data System (ADS)

    Kahnert, Michael

    2016-07-01

    Numerical solution methods for electromagnetic scattering by non-spherical particles comprise a variety of different techniques, which can be traced back to different assumptions and solution strategies applied to the macroscopic Maxwell equations. One can distinguish between time- and frequency-domain methods; further, one can divide numerical techniques into finite-difference methods (which are based on approximating the differential operators), separation-of-variables methods (which are based on expanding the solution in a complete set of functions, thus approximating the fields), and volume integral-equation methods (which are usually solved by discretisation of the target volume and invoking the long-wave approximation in each volume cell). While existing reviews of the topic often tend to have a target audience of program developers and expert users, this tutorial review is intended to accommodate the needs of practitioners as well as novices to the field. The required conciseness is achieved by limiting the presentation to a selection of illustrative methods, and by omitting many technical details that are not essential at a first exposure to the subject. On the other hand, the theoretical basis of numerical methods is explained with little compromises in mathematical rigour; the rationale is that a good grasp of numerical light scattering methods is best achieved by understanding their foundation in Maxwell's theory.

  9. Development and Application of a Numerical Framework for Improving Building Foundation Heat Transfer Calculations

    NASA Astrophysics Data System (ADS)

    Kruis, Nathanael J. F.

    Heat transfer from building foundations varies significantly in all three spatial dimensions and has important dynamic effects at all timescales, from one hour to several years. With the additional consideration of moisture transport, ground freezing, evapotranspiration, and other physical phenomena, the estimation of foundation heat transfer becomes increasingly sophisticated and computationally intensive to the point where accuracy must be compromised for reasonable computation time. The tools currently available to calculate foundation heat transfer are often either too limited in their capabilities to draw meaningful conclusions or too sophisticated to use in common practice. This work presents Kiva, a new foundation heat transfer computational framework. Kiva provides a flexible environment for testing different numerical schemes, initialization methods, spatial and temporal discretizations, and geometric approximations. Comparisons within this framework provide insight into the balance of computation speed and accuracy relative to highly detailed reference solutions. The accuracy and computational performance of six finite difference numerical schemes are verified against established IEA BESTEST test cases for slab-on-grade heat conduction. Of the schemes tested, the Alternating Direction Implicit (ADI) scheme demonstrates the best balance between accuracy, performance, and numerical stability. Kiva features four approaches to initializing soil temperatures for an annual simulation. A new accelerated initialization approach is shown to significantly reduce the required years of presimulation. Methods of approximating three-dimensional heat transfer within a representative two-dimensional context further improve computational performance. A new approximation called the boundary layer adjustment method is shown to improve accuracy over other established methods with a negligible increase in computation time. This method accounts for the reduced heat transfer from concave foundation shapes, which has not been adequately addressed to date. Within the Kiva framework, three-dimensional heat transfer that can require several days to simulate is approximated in two dimensions in a matter of seconds while maintaining a mean absolute deviation within 3%.

  10. Automated segmentation of blood-flow regions in large thoracic arteries using 3D-cine PC-MRI measurements.

    PubMed

    van Pelt, Roy; Nguyen, Huy; ter Haar Romeny, Bart; Vilanova, Anna

    2012-03-01

    Quantitative analysis of vascular blood flow, acquired by phase-contrast MRI, requires accurate segmentation of the vessel lumen. In clinical practice, 2D-cine velocity-encoded slices are inspected, and the lumen is segmented manually. However, segmentation of time-resolved volumetric blood-flow measurements is a tedious and time-consuming task requiring automation. Automated segmentation of large thoracic arteries was performed, based solely on 3D-cine phase-contrast MRI (PC-MRI) blood-flow data. An active surface model, which is fast and topologically stable, was used. The active surface model requires an initial surface approximating the desired segmentation. A method to generate this surface was developed based on a voxel-wise temporal maximum of blood-flow velocities. The active surface model balances forces, based on the surface structure and image features derived from the blood-flow data. The segmentation results were validated using volunteer studies, including time-resolved 3D and 2D blood-flow data. The segmented surface was intersected with a velocity-encoded PC-MRI slice, resulting in a cross-sectional contour of the lumen. These cross-sections were compared to reference contours that were manually delineated on high-resolution 2D-cine slices. The automated approach closely approximates the manual blood-flow segmentations, with error distances on the order of the voxel size. The initial surface provides a close approximation of the desired luminal geometry. This improves the convergence time of the active surface and facilitates parametrization. An active surface approach for vessel lumen segmentation was developed, suitable for quantitative analysis of 3D-cine PC-MRI blood-flow data. As opposed to prior thresholding and level-set approaches, the active surface model is topologically stable. A method to generate an initial approximate surface was developed, and various features that influence the segmentation model were evaluated. The active surface segmentation results were shown to closely approximate manual segmentations.

  11. Noniterative estimation of a nonlinear parameter

    NASA Technical Reports Server (NTRS)

    Bergstroem, A.

    1973-01-01

    An algorithm is described which solves for the parameters X = (x1, x2, ..., xm) and p in an approximation problem Ax ≈ y(p), where the parameter p occurs nonlinearly in y. Instead of linearization methods, which require an approximate value of p to be supplied as a priori information, and which may lead to finding local minima, the proposed algorithm finds the global minimum by permitting the use of series expansions of arbitrary order, exploiting the a priori knowledge that the addition of a particular function, corresponding to a new column in A, will not improve the goodness of the approximation.

  12. A hybrid continuous-discrete method for stochastic reaction-diffusion processes.

    PubMed

    Lo, Wing-Cheong; Zheng, Likun; Nie, Qing

    2016-09-01

    Stochastic fluctuations in reaction-diffusion processes often have a substantial effect on the spatial and temporal dynamics of signal transduction in complex biological systems. One popular approach for simulating these processes is to divide the system into small spatial compartments, assuming that molecules react only within the same compartment and jump between adjacent compartments driven by diffusion. While the approach is convenient in terms of its implementation, its computational cost may become prohibitive when diffusive jumps occur significantly more frequently than reactions, as in the case of rapid diffusion. Here, we present a hybrid continuous-discrete method in which diffusion is simulated using a continuous approximation while reactions are based on the Gillespie algorithm. Specifically, the diffusive jumps are approximated as continuous Gaussian random vectors with time-dependent means and covariances, allowing use of a large time step, even for rapid diffusion. By considering the correlation among diffusive jumps, the approximation is accurate for the second moment of the diffusion process. In addition, a criterion is obtained for identifying the region in which such a diffusion approximation is required, enabling adaptive calculations for better accuracy. Applications to a linear diffusion system and two nonlinear systems of morphogens demonstrate the effectiveness and benefits of the new hybrid method.
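
    A minimal sketch of the key idea, under simplifying assumptions (a 1-D compartment lattice, diffusion only): the Poisson number of jumps along each directed edge over a step tau is replaced by a Gaussian with matching mean and variance, and sampling per edge preserves the correlation between adjacent compartments. Reactions, which the paper handles with the Gillespie algorithm, are omitted here.

        # Gaussian (continuous) approximation of compartment-to-compartment diffusion:
        # over a step tau, the jump count along each directed edge i -> i+1 is Poisson
        # with mean d*n_i*tau; here it is approximated by a Gaussian with the same
        # mean and variance.  Sampling per edge keeps neighboring compartments
        # correlated.  Reactions (handled by the Gillespie SSA in the paper) omitted.
        import numpy as np

        rng = np.random.default_rng(0)

        def gaussian_diffusion_step(n, d, tau):
            n = n.astype(float)
            mean_r = d * n[:-1] * tau                 # expected jumps i -> i+1
            mean_l = d * n[1:] * tau                  # expected jumps i+1 -> i
            right = rng.normal(mean_r, np.sqrt(mean_r))
            left = rng.normal(mean_l, np.sqrt(mean_l))
            n[:-1] += left - right
            n[1:] += right - left
            return np.maximum(n, 0.0)                 # crude guard against negatives

        n = np.zeros(50)
        n[25] = 1.0e4                                 # molecules start in the middle
        for _ in range(200):
            n = gaussian_diffusion_step(n, d=50.0, tau=0.01)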

  13. A Mathematica program for the approximate analytical solution to a nonlinear undamped Duffing equation by a new approximate approach

    NASA Astrophysics Data System (ADS)

    Wu, Dongmei; Wang, Zhongcheng

    2006-03-01

    According to Mickens [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563], the general HB (harmonic balance) method is an approximation to the convergent Fourier series representation of the periodic solution of a nonlinear oscillator and not an approximation to an expansion in terms of a small parameter. Consequently, for a nonlinear undamped Duffing equation with a driving force B cos(ωx), to find a periodic solution when the fundamental frequency is identical to ω, the corresponding Fourier series can be written as ỹ(x) = ∑_{n=1}^{m} a_n cos[(2n−1)ωx]. How to calculate the coefficients of the Fourier series efficiently with a computer program is still an open problem. In the HB method, by substituting the approximation ỹ(x) into the force equation, expanding the resulting expression into a trigonometric series, and then letting the coefficients of the resulting lowest-order harmonic be zero, one can obtain approximate coefficients of the approximation ỹ(x) [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563]. But for nonlinear differential equations such as the Duffing equation, it is very difficult to construct higher-order analytical approximations, because the HB method requires solving a set of algebraic equations for a large number of unknowns with very complex nonlinearities. To overcome the difficulty, forty years ago, Urabe derived a computational method for the Duffing equation based on the Galerkin procedure [M. Urabe, A. Reiter, Numerical computation of nonlinear forced oscillations by Galerkin's procedure, J. Math. Anal. Appl. 14 (1966) 107-140]. Dooren obtained an approximate solution of the Duffing oscillator with a special set of parameters by using Urabe's method [R. van Dooren, Stabilization of Cowell's classic finite difference method for numerical integration, J. Comput. Phys. 16 (1974) 186-192]. In this paper, in the frame of the general HB method, we present a new iteration algorithm to calculate the coefficients of the Fourier series. With this new method, the iteration procedure starts with a1 cos(ωx) + b1 sin(ωx), and the accuracy is improved gradually as new coefficients a2, a3, ... are produced automatically in a one-by-one manner. At every stage of the calculation, we need only solve a cubic equation. Using this new algorithm, we have developed a Mathematica program, which demonstrates the following main advantages over the previous HB method: (1) it avoids solving a set of associated nonlinear equations; (2) it is easier to implement in a computer program, and it efficiently produces a highly accurate solution with an analytical expression. It is interesting to find that, generally, for a given set of parameters, a nonlinear Duffing equation can have three independent oscillation modes. For some sets of the parameters, it can have two modes with complex displacement and one with real displacement; in other cases, it can have three modes, all of them having real displacement. Therefore, we can divide the parameters into two classes according to the solution property: those for which there is only one mode with real displacement, and those for which there are three modes with real displacement. This program should be useful for studying the dynamically periodic behavior of a Duffing oscillator and can provide a high-accuracy approximate analytical solution for testing the error behavior of newly developed numerical methods over a wide range of parameters.
    Program summary
    Title of program: AnalyDuffing.nb
    Catalogue identifier: ADWR_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWR_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Licensing provisions: none
    Computer for which the program is designed and others on which it has been tested: designed for a microcomputer and tested on one
    Computers: IBM PC
    Operating systems under which the program has been tested: Windows XP
    Programming language used: Mathematica 4.2, 5.0 and 5.1
    No. of lines in distributed program, including test data, etc.: 23 663
    No. of bytes in distributed program, including test data, etc.: 152 321
    Distribution format: tar.gz
    Memory required to execute with typical data: 51 712 bytes
    No. of processors used: 1
    Has the code been vectorized?: no
    Peripherals used: no
    Program Library subprograms used: no
    Nature of physical problem: To find an approximate solution with analytical expressions for the undamped nonlinear Duffing equation with a periodic driving force when the fundamental frequency is identical to that of the driving force.
    Method of solution: In the frame of the general HB method, a new iteration algorithm calculates the coefficients of the Fourier series, yielding a high-accuracy approximate analytical solution efficiently.
    Restrictions on the complexity of the problem: For problems with a large driving frequency, convergence may be slow because more iterations are needed.
    Typical running time: several seconds
    Unusual features of the program: For an undamped Duffing equation, it can provide all the solutions or oscillation modes with real displacement for any parameters of interest, to the required accuracy, efficiently. The program can be used to study the dynamically periodic behavior of a nonlinear oscillator and can provide a high-accuracy approximate analytical solution for developing high-accuracy numerical methods.
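
    The cubic mentioned above appears already at the lowest HB order. Substituting y ≈ a cos(ωx) into the Duffing equation and matching the cos(ωx) coefficient gives a single cubic in the amplitude a, whose real roots are the oscillation modes with real displacement. The sketch below is a Python illustration of this first step, not the AnalyDuffing.nb Mathematica program; the parameter values are arbitrary examples.

        # Lowest-order harmonic balance for y'' + c1*y + c3*y^3 = B*cos(w*x):
        # substituting y ~ a*cos(w*x) and using cos^3 = (3*cos(wx) + cos(3wx))/4,
        # the coefficient of cos(w*x) gives the cubic
        #     (3/4)*c3*a^3 + (c1 - w^2)*a - B = 0.
        # Its real roots are the modes with real displacement (one or three,
        # depending on the parameters).  Sketch only, not the authors' program.
        import numpy as np

        def hb_first_order_amplitudes(c1, c3, B, w):
            roots = np.roots([0.75 * c3, 0.0, c1 - w**2, -B])
            return sorted(r.real for r in roots if abs(r.imag) < 1e-10)

        print(hb_first_order_amplitudes(c1=1.0, c3=1.0, B=0.05, w=1.2))  # three real modes
        print(hb_first_order_amplitudes(c1=1.0, c3=1.0, B=2.0, w=1.2))   # one real mode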

  14. General method and thermodynamic tables for computation of equilibrium composition and temperature of chemical reactions

    NASA Technical Reports Server (NTRS)

    Huff, Vearl N; Gordon, Sanford; Morrell, Virginia E

    1951-01-01

    A rapidly convergent successive approximation process is described that simultaneously determines both composition and temperature resulting from a chemical reaction. This method is suitable for use with any set of reactants over the complete range of mixture ratios as long as the products of reaction are ideal gases. An approximate treatment of limited amounts of liquids and solids is also included. This method is particularly suited to problems having a large number of products of reaction and to problems that require determination of such properties as specific heat or velocity of sound of a dissociating mixture. The method presented is applicable to a wide variety of problems that include (1) combustion at constant pressure or volume; and (2) isentropic expansion to an assigned pressure, temperature, or Mach number. Tables of thermodynamic functions needed with this method are included for 42 substances for convenience in numerical computations.

  15. Time domain convergence properties of Lyapunov stable penalty methods

    NASA Technical Reports Server (NTRS)

    Kurdila, A. J.; Sunkel, John

    1991-01-01

    Linear hyperbolic partial differential equations are analyzed using standard techniques to show that a sequence of solutions generated by the Lyapunov stable penalty equations approaches the solution of the differential-algebraic equations governing the dynamics of multibody problems arising in linear vibrations. The analysis does not require that the system be conservative and does not impose any specific integration scheme. Variational statements are derived which bound the error in approximation by the norm of the constraint violation obtained in the approximate solutions.

  16. A fast Cauchy-Riemann solver. [differential equation solution for boundary conditions by finite difference approximation

    NASA Technical Reports Server (NTRS)

    Ghil, M.; Balgovind, R.

    1979-01-01

    The inhomogeneous Cauchy-Riemann equations in a rectangle are discretized by a finite difference approximation. Several different boundary conditions are treated explicitly, leading to algorithms which have overall second-order accuracy. All boundary conditions with either u or v prescribed along a side of the rectangle can be treated by similar methods. The algorithms presented here have nearly minimal time and storage requirements and seem suitable for development into a general-purpose direct Cauchy-Riemann solver for arbitrary boundary conditions.

  17. Optimal remediation of unconfined aquifers: Numerical applications and derivative calculations

    NASA Astrophysics Data System (ADS)

    Mansfield, Christopher M.; Shoemaker, Christine A.

    1999-05-01

    This paper extends earlier work on derivative-based optimization for cost-effective remediation to unconfined aquifers, which have more complex, nonlinear flow dynamics than confined aquifers. Most previous derivative-based optimization of contaminant removal has been limited to consideration of confined aquifers; however, contamination is more common in unconfined aquifers. Exact derivative equations are presented, and two computationally efficient approximations, the quasi-confined (QC) and head independent from previous (HIP) unconfined-aquifer finite element equation derivative approximations, are presented and demonstrated to be highly accurate. The derivative approximations can be used with any nonlinear optimization method requiring derivatives for computation of either time-invariant or time-varying pumping rates. The QC and HIP approximations are combined with the nonlinear optimal control algorithm SALQR into the unconfined-aquifer algorithm, which is shown to compute solutions for unconfined aquifers in CPU times that were not significantly longer than those required by the confined-aquifer optimization model. Two of the three example unconfined-aquifer cases considered obtained pumping policies with substantially lower objective function values with the unconfined model than were obtained with the confined-aquifer optimization, even though the mean differences in hydraulic heads predicted by the unconfined- and confined-aquifer models were small (less than 0.1%). We suggest a possible geophysical index based on differences in drawdown predictions between unconfined- and confined-aquifer models to estimate which aquifers require unconfined-aquifer optimization and which can be adequately approximated by the simpler confined-aquifer analysis.

  18. Two-dimensional analytic weighting functions for limb scattering

    NASA Astrophysics Data System (ADS)

    Zawada, D. J.; Bourassa, A. E.; Degenstein, D. A.

    2017-10-01

    Through the inversion of limb scatter measurements it is possible to obtain vertical profiles of trace species in the atmosphere. Many of these inversion methods require what is often referred to as weighting functions, or derivatives of the radiance with respect to concentrations of trace species in the atmosphere. Several radiative transfer models have implemented analytic methods to calculate weighting functions, alleviating the computational burden of traditional numerical perturbation methods. Here we describe the implementation of analytic two-dimensional weighting functions, where derivatives are calculated relative to atmospheric constituents in a two-dimensional grid of altitude and angle along the line of sight direction, in the SASKTRAN-HR radiative transfer model. Two-dimensional weighting functions are required for two-dimensional inversions of limb scatter measurements. Examples are presented where the analytic two-dimensional weighting functions are calculated with an underlying one-dimensional atmosphere. It is shown that the analytic weighting functions are more accurate than ones calculated with a single scatter approximation, and are orders of magnitude faster than a typical perturbation method. Evidence is presented that weighting functions for stratospheric aerosols calculated under a single scatter approximation may not be suitable for use in retrieval algorithms under solar backscatter conditions.
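
    To make the terminology concrete: a weighting function is simply the derivative of the measured radiance with respect to the constituent amount in one grid cell. The toy Beer-Lambert sketch below (an assumption for illustration, not the SASKTRAN-HR model) contrasts the analytic derivative, available for all cells from one expression, with the numerical perturbation approach that requires one extra model run per cell.

        # Weighting functions as derivatives of radiance with respect to constituent
        # amounts, on a toy Beer-Lambert path:  L = L0 * exp(-sum_i k*n_i*dz).
        # The analytic derivative dL/dn_i = -k*dz*L is compared against the
        # numerical perturbation estimate that analytic schemes replace.
        # Toy model only -- not the SASKTRAN-HR radiative transfer model.
        import numpy as np

        k, dz, L0 = 2.5e-3, 1.0, 1.0          # absorption coefficient, layer depth, source
        n = np.linspace(5.0, 1.0, 20)         # constituent amount in each of 20 layers

        def radiance(n):
            return L0 * np.exp(-k * dz * n.sum())

        L = radiance(n)
        analytic = -k * dz * L * np.ones_like(n)      # one derivative per layer, for free

        eps = 1e-6                                    # perturbation method: one model
        numeric = np.array([(radiance(n + eps * (np.arange(n.size) == i)) - L) / eps
                            for i in range(n.size)])  # run per perturbed layer

        print(np.max(np.abs(analytic - numeric)))     # agreement to round-off level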

  19. Population genetics inference for longitudinally-sampled mutants under strong selection.

    PubMed

    Lacerda, Miguel; Seoighe, Cathal

    2014-11-01

    Longitudinal allele frequency data are becoming increasingly prevalent. Such samples permit statistical inference of the population genetics parameters that influence the fate of mutant variants. To infer these parameters by maximum likelihood, the mutant frequency is often assumed to evolve according to the Wright-Fisher model. For computational reasons, this discrete model is commonly approximated by a diffusion process that requires the assumption that the forces of natural selection and mutation are weak. This assumption is not always appropriate. For example, mutations that impart drug resistance in pathogens may evolve under strong selective pressure. Here, we present an alternative approximation to the mutant-frequency distribution that does not make any assumptions about the magnitude of selection or mutation and is much more computationally efficient than the standard diffusion approximation. Simulation studies are used to compare the performance of our method to that of the Wright-Fisher and Gaussian diffusion approximations. For large populations, our method is found to provide a much better approximation to the mutant-frequency distribution when selection is strong, while all three methods perform comparably when selection is weak. Importantly, maximum-likelihood estimates of the selection coefficient are severely attenuated when selection is strong under the two diffusion models, but not when our method is used. This is further demonstrated with an application to mutant-frequency data from an experimental study of bacteriophage evolution. We therefore recommend our method for estimating the selection coefficient when the effective population size is too large to utilize the discrete Wright-Fisher model. Copyright © 2014 by the Genetics Society of America.
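
    For concreteness, a minimal sketch of the discrete Wright-Fisher model with selection that the diffusion methods approximate (parameter values are arbitrary examples): the mutant frequency is shifted deterministically by selection and then resampled binomially, which is exact but costly for the large populations targeted by the method above.

        # Discrete Wright-Fisher sampling with selection: the mutant frequency p is
        # deterministically shifted by selection, then resampled binomially.  This
        # is the exact model the diffusion approximations replace; it becomes
        # expensive for large N, the regime addressed in the paper.
        import numpy as np

        rng = np.random.default_rng(1)

        def wright_fisher(p0, N, s, generations):
            p = p0
            traj = [p]
            for _ in range(generations):
                p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))   # selection step
                p = rng.binomial(N, p_sel) / N                   # drift step
                traj.append(p)
            return np.array(traj)

        # Strong selection (s = 0.5): the mutant sweeps in a few dozen generations.
        print(wright_fisher(p0=0.05, N=10_000, s=0.5, generations=30)[-1])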

  20. Adaptive optics system performance approximations for atmospheric turbulence correction

    NASA Astrophysics Data System (ADS)

    Tyson, Robert K.

    1990-10-01

    Analysis of adaptive optics system behavior often can be reduced to a few approximations and scaling laws. For atmospheric turbulence correction, the deformable mirror (DM) fitting error is most often used to determine a priori the interactuator spacing and the total number of correction zones required. This paper examines the mirror fitting error in terms of its most commonly used exponential form. The explicit constant in the error term depends on the deformable mirror influence function shape and actuator geometry. The method of least squares fitting of discrete influence functions to the turbulent wavefront is compared to the linear spatial filtering approximation of system performance. It is found that the spatial filtering method overestimates the correctability of the adaptive optics system by a small amount. Fitting error constants, evaluated for a number of DM configurations, actuator geometries, and influence functions, verify some earlier investigations.
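
    The exponential form referred to is the standard scaling law sigma_fit^2 = a_F (d/r0)^(5/3), with d the interactuator spacing and r0 the Fried parameter. The sketch below uses typical literature values for the constant a_F as placeholders; they are assumptions, not the constants derived in this paper.

        # Deformable-mirror fitting error in its commonly used form:
        #   sigma_fit^2 = a_F * (d / r0)**(5/3)   [rad^2]
        # where d is the interactuator spacing, r0 the Fried parameter, and a_F a
        # constant set by the influence-function shape and actuator geometry.
        # The a_F values below are typical literature values for illustration,
        # not the constants derived in this paper.
        import numpy as np

        def fitting_error_var(d, r0, a_F):
            return a_F * (d / r0) ** (5.0 / 3.0)

        r0 = 0.15                                   # Fried parameter [m]
        for name, a_F in [("continuous facesheet (typ.)", 0.28),
                          ("piston-only segments (typ.)", 1.26)]:
            var = fitting_error_var(d=0.20, r0=r0, a_F=a_F)
            strehl = np.exp(-var)                   # extended Marechal approximation
            print(f"{name}: sigma^2 = {var:.3f} rad^2, Strehl ~ {strehl:.2f}")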

  1. An Explicit Upwind Algorithm for Solving the Parabolized Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Korte, John J.

    1991-01-01

    An explicit, upwind algorithm was developed for the direct (noniterative) integration of the 3-D Parabolized Navier-Stokes (PNS) equations in a generalized coordinate system. The new algorithm uses upwind approximations of the numerical fluxes for the pressure and convection terms obtained by combining flux difference splittings (FDS) formed from the solution of an approximate Riemann problem (RP). The approximate RP is solved using an extension of the method developed by Roe for steady supersonic flow of an ideal gas. Roe's method is extended for use with the 3-D PNS equations expressed in generalized coordinates and to include Vigneron's technique of splitting the streamwise pressure gradient. The difficulty associated with applying Roe's scheme in the subsonic region is overcome. The second-order upwind differencing of the flux derivatives is obtained by adding FDS to either an original forward or backward differencing of the flux derivative. This approach is used to modify an explicit MacCormack differencing scheme into an upwind differencing scheme. The second order upwind flux approximations, applied with flux limiters, provide a method for numerically capturing shocks without the need for additional artificial damping terms which require adjustment by the user. In addition, a cubic equation is derived for determining Vigneron's pressure splitting coefficient using the updated streamwise flux vector. Decoding the streamwise flux vector with the updated value of Vigneron's pressure splitting improves the stability of the scheme. The new algorithm is applied to 2-D and 3-D supersonic and hypersonic laminar flow test cases. Results are presented for the experimental studies of Holden and of Tracy. In addition, a flow field solution is presented for a generic hypersonic aircraft at a Mach number of 24.5 and angle of attack of 1 degree. The computed results compare well to both experimental data and numerical results from other algorithms. Computational times required for the upwind PNS code are approximately equal to those of an explicit MacCormack PNS code and of existing implicit PNS solvers.

  2. Spectral ratio method for measuring emissivity

    USGS Publications Warehouse

    Watson, K.

    1992-01-01

    The spectral ratio method is based on the concept that although the spectral radiances are very sensitive to small changes in temperature, the ratios are not. Only an approximate estimate of temperature is required; thus, for example, we can determine the emissivity ratio to an accuracy of 1% with a temperature estimate that is only accurate to 12.5 K. Selecting the maximum value of the channel brightness temperatures provides an unbiased estimate. Laboratory and field spectral data are easily converted into spectral ratio plots. The ratio method is limited by system signal-to-noise ratio and spectral bandwidth. The images can appear quite noisy because ratios enhance high frequencies and may require spatial filtering. Atmospheric effects tend to rescale the ratios and require using an atmospheric model or a calibration site. © 1992.
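
    A minimal sketch of the ratio idea using monochromatic Planck radiances (band averaging and the atmosphere are ignored; the wavelengths and emissivities are example values): take the maximum channel brightness temperature as the temperature estimate, convert each radiance to an apparent emissivity, and form the ratio, in which the residual temperature error largely cancels.

        # Spectral ratio sketch: estimate temperature as the maximum channel
        # brightness temperature, convert each band radiance to an apparent
        # emissivity L / B(lambda, T_hat), and form band ratios.
        # Monochromatic Planck radiances; band averaging and atmosphere ignored.
        import numpy as np

        h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

        def planck(lam, T):                 # spectral radiance, W m^-2 sr^-1 m^-1
            return 2*h*c**2 / lam**5 / (np.exp(h*c / (lam*kB*T)) - 1.0)

        def brightness_T(lam, L):           # invert Planck for temperature
            return h*c / (lam*kB) / np.log(1.0 + 2*h*c**2 / (lam**5 * L))

        lam = np.array([10.5e-6, 11.5e-6])  # two thermal-IR bands (example values)
        eps_true, T_true = np.array([0.95, 0.99]), 300.0
        L = eps_true * planck(lam, T_true)  # simulated measured radiances

        T_hat = brightness_T(lam, L).max()  # max brightness temperature estimate
        eps_hat = L / planck(lam, T_hat)
        print(eps_hat[0] / eps_hat[1], eps_true[0] / eps_true[1])  # ratios agree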

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechant, Lawrence J.

    Wave packet analysis provides a connection between linear small disturbance theory and subsequent nonlinear turbulent spot flow behavior. The traditional association between linear stability analysis and nonlinear wave form is developed via the method of stationary phase, whereby asymptotic (simplified) mean flow solutions are used to estimate dispersion behavior and stationary phase approximations are used to invert the associated Fourier transform. The resulting process typically requires inversions of nonlinear algebraic equations that are best performed numerically, which partially mitigates the value of the approximation as compared to more complete approaches, e.g. DNS or linear/nonlinear adjoint methods. To obtain a simpler, closed-form analytical result, the complete packet solution is modeled via approximate amplitude (linear convected kinematic wave initial value problem) and local sinusoidal (wave equation) expressions. Significantly, the initial value for the kinematic wave transport expression follows from a separable variable coefficient approximation to the linearized pressure fluctuation Poisson expression. The resulting amplitude solution, while approximate in nature, nonetheless appears to mimic many of the global features, e.g. transitional flow intermittency and pressure fluctuation magnitude behavior. A low-wave-number wave packet model also recovers meaningful auto-correlation and low-frequency spectral behaviors.

  4. Branching-ratio approximation for the self-exciting Hawkes process

    NASA Astrophysics Data System (ADS)

    Hardiman, Stephen J.; Bouchaud, Jean-Philippe

    2014-12-01

    We introduce a model-independent approximation for the branching ratio of Hawkes self-exciting point processes. Our estimator requires knowing only the mean and variance of the event count in a sufficiently large time window, statistics that are readily obtained from empirical data. The method we propose greatly simplifies the estimation of the Hawkes branching ratio, recently proposed as a proxy for market endogeneity and formerly estimated using numerical likelihood maximization. We employ our method to support recent theoretical and experimental results indicating that the best fitting Hawkes model to describe S&P futures price changes is in fact critical (now and in the recent past) in light of the long memory of financial market activity.
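
    The estimator can be sketched in a few lines. For a stationary Hawkes process with branching ratio n, the counts in sufficiently large windows satisfy Var/Mean -> 1/(1-n)^2, which suggests the plug-in estimate below; the window size and the Poisson test data are assumptions for illustration.

        # Branching-ratio estimate from count statistics: for a stationary Hawkes
        # process, counts N_W in large windows satisfy Var[N_W]/E[N_W] -> 1/(1-n)^2,
        # suggesting the plug-in estimator n_hat = 1 - sqrt(mean/variance).
        # Window size and the test data below are illustrative assumptions.
        import numpy as np

        def branching_ratio(event_times, window):
            t_max = event_times.max()
            counts, _ = np.histogram(event_times, bins=np.arange(0.0, t_max, window))
            return 1.0 - np.sqrt(counts.mean() / counts.var())

        # Poisson events have Var = Mean, so the estimate should be near zero.
        rng = np.random.default_rng(2)
        poisson_times = np.cumsum(rng.exponential(1.0, size=200_000))
        print(branching_ratio(poisson_times, window=500.0))   # ~ 0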

  5. Universal single level implicit algorithm for gasdynamics

    NASA Technical Reports Server (NTRS)

    Lombard, C. K.; Venkatapathy, E.

    1984-01-01

    A single level effectively explicit implicit algorithm for gasdynamics is presented. The method meets all the requirements for unconditionally stable global iteration over flows with mixed subsonic and supersonic zones, including blunt body flow and boundary layer flows with strong interaction and streamwise separation. For hyperbolic (supersonic flow) regions the method is automatically equivalent to contemporary space marching methods. For elliptic (subsonic flow) regions, rapid convergence is facilitated by alternating direction solution sweeps which bring both sets of eigenvectors and the influence of both boundaries of a coordinate line equally into play. Point-by-point updating of the data with local iteration on the solution procedure at each spatial step as the sweeps progress not only renders the method single level in storage but also improves nonlinear accuracy to accelerate convergence by an order of magnitude over related two level linearized implicit methods. The method derives robust stability from the combination of an eigenvector split upwind difference method (CSCM) with diagonally dominant ADI (DDADI) approximate factorization and computed characteristic boundary approximations.

  6. Effect of Heat Generation of Ultrasound Transducer on Ultrasonic Power Measured by Calorimetric Method

    NASA Astrophysics Data System (ADS)

    Uchida, Takeyoshi; Kikuchi, Tsuneo

    2013-07-01

    Ultrasonic power is one of the key quantities closely related to the safety of medical ultrasonic equipment. An ultrasonic power standard is required for establishment of safety. Generally, an ultrasonic power standard below approximately 20 W is established by the radiation force balance (RFB) method as the most accurate measurement method. However, RFB is not suitable for high ultrasonic power because of thermal damage to the absorbing target. Consequently, an alternative method to RFB is required. We have been developing a measurement technique for high ultrasonic power by the calorimetric method. In this study, we examined the effect of heat generation of an ultrasound transducer on ultrasonic power measured by the calorimetric method. As a result, an excessively high ultrasonic power was measured owing to the effect of heat generation from internal loss in the transducer. A reference ultrasound transducer with low heat generation is required for a high ultrasonic power standard established by the calorimetric method.

  7. A new voluntary blood collection method for the Andean bear (Tremarctos ornatus) and Asiatic black bear (Ursus thibetanus).

    PubMed

    Otaki, Yusuke; Kido, Nobuhide; Omiya, Tomoko; Ono, Kaori; Ueda, Miya; Azumano, Akinori; Tanaka, Sohei

    2015-01-01

    Various training methods have been developed for animal husbandry and health care in zoos; one of these is training for blood collection. One training method, recently widely used for blood collection in Ursidae, requires setting up a sleeve outside the cage and gives access to limited blood collection sites. A new voluntary blood collection method without a sleeve was applied to the Andean bear (Tremarctos ornatus) and Asiatic black bear (Ursus thibetanus), with access to various veins at the same time. The present study evaluated the effectiveness of this new method and suggests improvements. Two Andean and two Asiatic black bears in Yokohama and Nogeyama Zoological Gardens, respectively, were trained to hold a bamboo pipe outside their cages. We could thereby simultaneously access the superficial dorsal veins, the dorsal venous network of the hand, the cephalic vein from the carpal joint, and an area approximately 10 cm proximal to the carpal joint. This allowed us to evaluate which vein was most suitable for blood collection. We found that the cephalic vein, approximately 10 cm proximal to the carpal joint, was the most suitable for blood collection. This new method requires little or no modification of zoo facilities and provides a useful alternative method for blood collection. It could be adapted for use in other clinical examinations such as ultrasound examination. © 2015 Wiley Periodicals, Inc.

  8. An Efficient Algorithm for Perturbed Orbit Integration Combining Analytical Continuation and Modified Chebyshev Picard Iteration

    NASA Astrophysics Data System (ADS)

    Elgohary, T.; Kim, D.; Turner, J.; Junkins, J.

    2014-09-01

    Several methods exist for integrating the motion in high order gravity fields. Some recent methods use an approximate starting orbit, and an efficient method is needed for generating warm starts that account for specific low order gravity approximations. By introducing two scalar Lagrange-like invariants and employing the Leibniz product rule, the perturbed motion is integrated by a novel recursive formulation. The Lagrange-like invariants allow exact arbitrary order time derivatives. Restricting attention to the perturbations due to the zonal harmonics J2 through J6, we illustrate the approach. The recursively generated vector-valued time derivatives for the trajectory are used to develop a continuation series-based solution for propagating position and velocity. Numerical comparisons indicate performance improvements of ~70X over existing explicit Runge-Kutta methods while maintaining mm accuracy for the orbit predictions. The Modified Chebyshev Picard Iteration (MCPI) is an iterative path approximation method to solve nonlinear ordinary differential equations. The MCPI utilizes Picard iteration with orthogonal Chebyshev polynomial basis functions to recursively update the states. The key advantages of the MCPI are as follows: 1) Large segments of a trajectory can be approximated by evaluating the forcing function at multiple nodes along the current approximation during each iteration. 2) It can readily handle general gravity perturbations as well as non-conservative forces. 3) Parallel applications are possible. The Picard sequence converges to the solution over large time intervals when the forces are continuous and differentiable. Depending on the accuracy of the starting solution, however, the MCPI may require a significant number of iterations and function evaluations compared to other integrators. In this work, we provide an efficient methodology to establish good starting solutions from the continuation series method; this warm start improves the performance of the MCPI significantly and will likely be useful for other applications where efficiently computed approximate orbit solutions are needed.
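
    The core fixed-point structure of Picard iteration, which MCPI accelerates with Chebyshev nodes and orthogonal-polynomial quadrature, can be sketched with a uniform grid and the trapezoid rule (these simplifications are assumptions; this is not MCPI itself):

        # Core of Picard iteration: x_{k+1}(t) = x0 + integral_0^t f(s, x_k(s)) ds,
        # evaluated along the entire current trajectory on each sweep.  MCPI uses
        # Chebyshev nodes and orthogonal-polynomial quadrature; this sketch uses a
        # uniform grid and the trapezoid rule for clarity.
        import numpy as np
        from scipy.integrate import cumulative_trapezoid

        def picard(f, x0, t, sweeps=40):
            x = np.full_like(t, x0)               # warm start: constant trajectory
            for _ in range(sweeps):
                x = x0 + cumulative_trapezoid(f(t, x), t, initial=0.0)
            return x

        t = np.linspace(0.0, 2.0, 401)
        x = picard(lambda t, x: -x, x0=1.0, t=t)  # x' = -x, exact solution exp(-t)
        print(np.max(np.abs(x - np.exp(-t))))     # small trapezoid-rule error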

  9. Stress Measurement by Geometrical Optics

    NASA Technical Reports Server (NTRS)

    Robinson, R. S.; Rossnagel, S. M.

    1986-01-01

    Fast, simple technique measures stresses in thin films. Sample disk bowed by stress into approximately spherical shape. Reflected image of disk magnified by amount related to curvature and, therefore, stress. Method requires sample substrate, such as cheap microscope cover slide, two mirrors, laser light beam, and screen.

  10. Efficient implementation of neural network deinterlacing

    NASA Astrophysics Data System (ADS)

    Seo, Guiwon; Choi, Hyunsoo; Lee, Chulhee

    2009-02-01

    Interlaced scanning has been widely used in most broadcasting systems. However, it produces undesirable artifacts such as jagged patterns, flickering, and line twitter. Moreover, most recent TV monitors use flat panel display technologies such as LCD or PDP, and these displays require progressive formats. Consequently, the conversion of interlaced video into progressive video is required in many applications, and a number of deinterlacing methods have been proposed. Recently, deinterlacing methods based on neural networks have been proposed with good results. On the other hand, with high resolution video content such as HDTV, the amount of video data to be processed is very large. As a result, processing time and hardware complexity become important issues. In this paper, we propose an efficient implementation of neural network deinterlacing using polynomial approximation of the sigmoid function. Experimental results show that these approximations provide equivalent performance with a considerable reduction of complexity. This implementation of neural network deinterlacing can be efficiently incorporated into hardware implementations.
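
    The essential trick can be sketched in a few lines: fit a low-order polynomial to the sigmoid over the input range the network actually exercises, then evaluate it with multiply-adds only. The degree and fitting range below are illustrative assumptions, not the paper's design choices.

        # Polynomial approximation of the sigmoid for cheap fixed-function
        # evaluation: fit a low-order polynomial over the active input range,
        # then evaluate with multiply-adds only (Horner form).
        # Degree and fitting range are illustrative assumptions.
        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        x = np.linspace(-6.0, 6.0, 2001)
        coeffs = np.polyfit(x, sigmoid(x), deg=7)     # least-squares fit, degree 7

        approx = np.polyval(coeffs, x)                # Horner evaluation: 7 mul-adds
        print(np.max(np.abs(approx - sigmoid(x))))    # worst-case error on the range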

  11. An Artificial Light Source Influences Mating and Oviposition of Black Soldier Flies, Hermetia illucens

    PubMed Central

    Zhang, Jibin; Huang, Ling; He, Jin; Tomberlin, Jeffery K.; Li, Jianhong; Lei, Chaoliang; Sun, Ming; Liu, Ziduo; Yu, Ziniu

    2010-01-01

    Current methods for mass-rearing black soldier flies, Hermetia illucens (L.) (Diptera: Stratiomyidae), in the laboratory are dependent on sunlight. Quartz-iodine lamps and rare earth lamps were examined as artificial light sources for stimulating H. illucens to mate and lay eggs. Sunlight was used as the control. Adults in the quartz-iodine lamp treatment had a mating rate of 61% of those in the sunlight control. No mating occurred when the rare earth lamp was used as a substitute. Egg hatch for the quartz-iodine lamp and sunlight treatments occurred in approximately 4 days, and the hatch rate was similar between these two treatments. Larval and pupal development under these treatments required approximately 18 and 15 days at 28 °C, respectively. Development of methods for mass rearing of H. illucens using artificial light will enable production of this fly throughout the year without investing in greenhouse space or requiring sunlight. PMID:21268697

  12. A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays.

    PubMed

    Lutton, Rebecca E M; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A David; Donnelly, Ryan F

    2015-10-15

    A novel manufacturing process for fabricating microneedle arrays (MN) has been designed and evaluated. The prototype is able to successfully produce 14×14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with those made by centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results proved that there was negligible difference between the two methods, with each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted into a skin simulant. In both cases the insertion depth was approximately 60% of the needle length, and the height reduction after insertion was approximately 3%. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays

    PubMed Central

    Lutton, Rebecca E.M.; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A.David; Donnelly, Ryan F.

    2015-01-01

    A novel manufacturing process for fabricating microneedle arrays (MN) has been designed and evaluated. The prototype is able to successfully produce 14 × 14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with those made by centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results proved that there was negligible difference between the two methods, with each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted into a skin simulant. In both cases the insertion depth was approximately 60% of the needle length, and the height reduction after insertion was approximately 3%. PMID:26302858

  14. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  15. Approximating high-dimensional dynamics by barycentric coordinates with linear programming.

    PubMed

    Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
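
    A minimal sketch of the central computation as we read it: express a query state as a convex (barycentric) combination of stored states by linear programming, with the approximation error appearing explicitly as the objective. The infinity-norm formulation and the SciPy solver are assumptions for illustration.

        # Barycentric coordinates by linear programming: find weights w >= 0 with
        # sum(w) = 1 such that X @ w approximates the query y, minimizing an
        # explicit error bound t with |(X @ w - y)_j| <= t for every coordinate.
        # Infinity-norm objective and solver choice are illustrative assumptions.
        import numpy as np
        from scipy.optimize import linprog

        def barycentric_weights(X, y):
            """X: (d, K) library states as columns; y: (d,) query state."""
            d, K = X.shape
            c = np.zeros(K + 1)
            c[-1] = 1.0                                    # minimize the error t
            A_ub = np.block([[X, -np.ones((d, 1))],
                             [-X, -np.ones((d, 1))]])
            b_ub = np.concatenate([y, -y])
            A_eq = np.concatenate([np.ones(K), [0.0]])[None, :]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                          bounds=[(0.0, None)] * (K + 1))
            return res.x[:K], res.x[-1]

        rng = np.random.default_rng(3)
        X = rng.normal(size=(10, 30))                      # 30 stored 10-D states
        y = X @ np.full(30, 1.0 / 30.0)                    # a true convex mixture
        w, err = barycentric_weights(X, y)
        print(err)                                         # ~0: y is in the hull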

  16. Comparison of the iterated equation of motion approach and the density matrix formalism for the quantum Rabi model

    NASA Astrophysics Data System (ADS)

    Kalthoff, Mona; Keim, Frederik; Krull, Holger; Uhrig, Götz S.

    2017-05-01

    The density matrix formalism and the equation of motion approach are two semi-analytical methods that can be used to compute the non-equilibrium dynamics of correlated systems. While for a bilinear Hamiltonian both formalisms yield the exact result, for any non-bilinear Hamiltonian a truncation is necessary. Because the commonly used truncation schemes differ for these two methods, the accuracy of the obtained results depends significantly on the chosen approach. In this paper, both formalisms are applied to the quantum Rabi model. This allows us to compare the approximate results and the exact dynamics of the system and enables us to discuss the accuracy of the approximations as well as the advantages and disadvantages of both methods. It is shown to what extent the results fulfill physical requirements for the observables and which properties of the methods lead to unphysical results.

  17. Continuous control of chaos based on the stability criterion.

    PubMed

    Yu, Hong Jie; Liu, Yan Zhu; Peng, Jian Hua

    2004-06-01

    A method of chaos control based on a stability criterion is proposed in the present paper. This method can stabilize chaotic systems onto a desired periodic orbit by a small time-continuous nonlinear feedback perturbation. The method does not require linearization of the system around the stabilized orbit; only an approximate location of the desired periodic orbit is needed, and this can be detected automatically during the control process. The control can be started at any moment by choosing an appropriate perturbation restriction condition. More flexibility and convenience appear to be the main advantages of this method. Control of the attitude motion of a spacecraft, the Rössler system, and two coupled Duffing oscillators are discussed as numerical examples.

  18. MEANS: python package for Moment Expansion Approximation, iNference and Simulation

    PubMed Central

    Fan, Sisi; Geissmann, Quentin; Lakatos, Eszter; Lukauskas, Saulius; Ale, Angelique; Babtie, Ann C.; Kirk, Paul D. W.; Stumpf, Michael P. H.

    2016-01-01

    Motivation: Many biochemical systems require stochastic descriptions. Unfortunately these can only be solved for the simplest cases and their direct simulation can become prohibitively expensive, precluding thorough analysis. As an alternative, moment closure approximation methods generate equations for the time-evolution of the system’s moments and apply a closure ansatz to obtain a closed set of differential equations that can become the basis for the deterministic analysis of the moments of the outputs of stochastic systems. Results: We present a free, user-friendly tool implementing an efficient moment expansion approximation with parametric closures that integrates well with the IPython interactive environment. Our package enables the analysis of complex stochastic systems without any constraints on the number of species and moments studied and the type of rate laws in the system. In addition to the approximation method our package provides numerous tools to help non-expert users in stochastic analysis. Availability and implementation: https://github.com/theosysbio/means Contacts: m.stumpf@imperial.ac.uk or e.lakatos13@imperial.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153663

  19. MEANS: python package for Moment Expansion Approximation, iNference and Simulation.

    PubMed

    Fan, Sisi; Geissmann, Quentin; Lakatos, Eszter; Lukauskas, Saulius; Ale, Angelique; Babtie, Ann C; Kirk, Paul D W; Stumpf, Michael P H

    2016-09-15

    Many biochemical systems require stochastic descriptions. Unfortunately these can only be solved for the simplest cases and their direct simulation can become prohibitively expensive, precluding thorough analysis. As an alternative, moment closure approximation methods generate equations for the time-evolution of the system's moments and apply a closure ansatz to obtain a closed set of differential equations that can become the basis for the deterministic analysis of the moments of the outputs of stochastic systems. We present a free, user-friendly tool implementing an efficient moment expansion approximation with parametric closures that integrates well with the IPython interactive environment. Our package enables the analysis of complex stochastic systems without any constraints on the number of species and moments studied and the type of rate laws in the system. In addition to the approximation method our package provides numerous tools to help non-expert users in stochastic analysis. https://github.com/theosysbio/means m.stumpf@imperial.ac.uk or e.lakatos13@imperial.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  20. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
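
    The flavor of a parametric likelihood approximation inside a conventional MCMC sampler can be sketched on a toy stochastic model (the model, summary statistic, and tuning constants below are assumptions; FORMIND and the paper's summary statistics are far richer): at each proposed parameter value, the model is simulated repeatedly, a Gaussian is fitted to the simulated statistic, and its density at the observed statistic serves as the likelihood.

        # Simulation-based parametric likelihood inside a standard Metropolis
        # sampler: at each proposed parameter, run the stochastic model several
        # times, fit a Gaussian to the simulated summary statistic, and use its
        # density at the observed statistic as the likelihood.  Toy model and
        # tuning constants are assumptions, not the FORMIND setup.
        import numpy as np

        rng = np.random.default_rng(4)

        def model(theta, n=50):                      # toy stochastic model output
            return rng.normal(theta, 2.0, size=n).mean()

        def synthetic_loglik(theta, obs, replicates=40):
            sims = np.array([model(theta) for _ in range(replicates)])
            mu, sd = sims.mean(), sims.std(ddof=1)
            return -0.5 * ((obs - mu) / sd) ** 2 - np.log(sd)

        obs = model(3.0)                             # pretend field observation
        theta, ll = 0.0, synthetic_loglik(0.0, obs)
        chain = []
        for _ in range(2000):                        # Metropolis random walk, flat prior
            prop = theta + rng.normal(0.0, 0.5)
            ll_prop = synthetic_loglik(prop, obs)
            if np.log(rng.uniform()) < ll_prop - ll:
                theta, ll = prop, ll_prop
            chain.append(theta)
        print(np.mean(chain[500:]))                  # posterior mean near 3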

  1. Safety approaches for high power modular laser operation

    NASA Astrophysics Data System (ADS)

    Handren, R. T.

    1993-03-01

    Approximately 20 years ago, a program was initiated at the Lawrence Livermore National Laboratory (LLNL) to study the feasibility of using lasers to separate isotopes of uranium and other materials. Of particular interest was the development of a uranium enrichment method for the production of commercial nuclear power reactor fuel to replace current more expensive methods. The Uranium Atomic Vapor Laser Isotope Separation (U-AVLIS) Program progressed to the point where a plant-scale facility to demonstrate commercial feasibility was built and is being tested. The U-AVLIS Program uses copper vapor lasers which pump frequency selective dye lasers to photoionize uranium vapor produced by an electron beam. The selectively ionized isotopes are electrostatically collected. The copper lasers are arranged in oscillator/amplifier chains. The current configuration consists of 12 chains, each with a nominal output of 800 W for a system output in excess of 9 kW. The system requirements are for continuous operation (24 h a day, 7 days a week) and high availability. To meet these requirements, the lasers are designed in a modular form allowing for rapid change-out of the lasers requiring maintenance. Since beginning operation in early 1985, the copper lasers have accumulated over 2 million unit hours at a greater than 90% availability. The dye laser system provides approximately 2.5 kW average power in the visible wavelength range. This large-scale laser system has many safety considerations, including high-power laser beams, high voltage, and large quantities (approximately 3000 gal) of ethanol dye solutions. The Laboratory's safety policy requires that safety controls be designed into any process, equipment, or apparatus in the form of engineering controls. Administrative controls further reduce the risk to an acceptable level. Selected examples of engineering and administrative controls currently being used in the U-AVLIS Program are described.

  2. An investigation of several numerical procedures for time-asymptotic compressible Navier-Stokes solutions

    NASA Technical Reports Server (NTRS)

    Rudy, D. H.; Morris, D. J.; Blanchard, D. K.; Cooke, C. H.; Rubin, S. G.

    1975-01-01

    The status of an investigation of four numerical techniques for the time-dependent compressible Navier-Stokes equations is presented. Results for free shear layer calculations in the Reynolds number range from 1000 to 81000 indicate that a sequential alternating-direction implicit (ADI) finite-difference procedure requires longer computing times to reach steady state than a low-storage hopscotch finite-difference procedure. A finite-element method with cubic approximating functions was found to require excessive computer storage and computation times. A fourth method, an alternating-direction cubic spline technique which is still being tested, is also described.

  3. The generalized scattering coefficient method for plane wave scattering in layered structures

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Li, Chao; Wang, Huai-Yu; Zhou, Yun-Song

    2017-02-01

    The generalized scattering coefficient (GSC) method is pedagogically derived and employed to study the scattering of plane waves in homogeneous and inhomogeneous layered structures. The numerical stabilities and accuracies of this method and other commonly used numerical methods are discussed and compared. For homogeneous layered structures, concise scattering formulas with clear physical interpretations and strong numerical stability are obtained by introducing the GSCs. For inhomogeneous layered structures, three numerical methods are employed: the staircase approximation method, the power series expansion method, and the differential equation based on the GSCs. We investigate the accuracies and convergence behaviors of these methods by comparing their predictions to the exact results. The conclusions are as follows. The staircase approximation method has a slow convergence in spite of its simple and intuitive implementation, and a fine stratification within the inhomogeneous layer is required for obtaining accurate results. The expansion method results are sensitive to the expansion order, and the treatment becomes very complicated for relatively complex configurations, which restricts its applicability. By contrast, the GSC-based differential equation possesses a simple implementation while providing fast and accurate results.
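
    The staircase approximation lends itself to a compact sketch using the standard 2x2 transfer matrix for the field and its derivative at normal incidence (this is the simple intuitive scheme discussed above, not the GSC formulation; the graded-index profile and wavelength are example assumptions). Refining the stratification illustrates the slow convergence described in the abstract.

        # Staircase approximation for an inhomogeneous layer: slice a graded
        # refractive-index profile into thin uniform slabs and chain 2x2 transfer
        # matrices for [E, E'] at normal incidence, then solve for the reflection
        # coefficient by matching fields at both boundaries.
        import numpy as np

        def reflectance(n_profile, d, k0_vac, n_in=1.0, n_out=1.0):
            M = np.eye(2)
            dz = d / len(n_profile)
            for n_slab in n_profile:                 # chain slabs left to right
                k = k0_vac * n_slab
                M = np.array([[np.cos(k*dz),     np.sin(k*dz)/k],
                              [-k*np.sin(k*dz),  np.cos(k*dz)]]) @ M
            ki, kt = k0_vac*n_in, k0_vac*n_out
            # Left field (1+r, i*ki*(1-r)); right field (t, i*kt*t); right = M @ left.
            A = np.array([[M[0, 0] - 1j*ki*M[0, 1], -1.0],
                          [M[1, 0] - 1j*ki*M[1, 1], -1j*kt]])
            b = -np.array([M[0, 0] + 1j*ki*M[0, 1],
                           M[1, 0] + 1j*ki*M[1, 1]])
            r, t = np.linalg.solve(A, b)
            return abs(r)**2

        lam = 0.5
        k0 = 2*np.pi/lam
        z = lambda m: np.linspace(0, 1, m, endpoint=False) + 0.5/m   # slab centers
        for m in (4, 16, 64, 256):                   # refine the staircase
            profile = 1.0 + 0.5*np.sin(np.pi*z(m))**2
            print(m, reflectance(profile, d=1.0, k0_vac=k0))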

  4. Uncertainty Analysis Based on Sparse Grid Collocation and Quasi-Monte Carlo Sampling with Application in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.

    2011-12-01

    Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids the disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently than traditional MCMC.
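
    The two ingredients can be sketched on a one-parameter toy problem (the forward model, prior, and noise level are assumptions, and 1-D Chebyshev collocation stands in for a true Smolyak sparse grid): a polynomial surrogate is built from a few forward model runs, and Sobol quasi-Monte Carlo samples are then weighted by the surrogate posterior.

        # (1) Build a cheap polynomial surrogate of an "expensive" forward model
        # from a few collocation runs; (2) draw Sobol quasi-Monte Carlo samples
        # of the parameter and weight them by the surrogate posterior.
        # 1-D Chebyshev collocation stands in for a Smolyak sparse grid.
        import numpy as np
        from scipy.stats import qmc

        def forward(theta):                            # stand-in expensive model
            return np.sin(3.0 * theta) + theta**2

        # Surrogate from 9 collocation runs at Chebyshev points on [0, 1].
        nodes = 0.5 * (np.cos(np.pi * np.arange(9) / 8) + 1.0)
        surrogate = np.polynomial.Polynomial.fit(nodes, forward(nodes), deg=8)

        # Sobol samples of the parameter, weighted by likelihood x flat prior.
        obs, sigma = 0.9, 0.1
        theta = qmc.Sobol(d=1, seed=5).random_base2(m=12).ravel()   # 4096 samples
        w = np.exp(-0.5 * ((surrogate(theta) - obs) / sigma) ** 2)
        w /= w.sum()
        print(np.sum(w * theta))                       # posterior mean estimate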

  5. An automated and universal method for measuring mean grain size from a digital image of sediment

    USGS Publications Warehouse

    Buscombe, Daniel D.; Rubin, David M.; Warrick, Jonathan A.

    2010-01-01

    Existing methods for estimating mean grain size of sediment in an image require either complicated sequences of image processing (filtering, edge detection, segmentation, etc.) or statistical procedures involving calibration. We present a new approach which uses Fourier methods to calculate grain size directly from the image without requiring calibration. Based on analysis of over 450 images, we found the accuracy to be within approximately 16% across the full range from silt to pebbles. Accuracy is comparable to, or better than, existing digital methods. The new method, in conjunction with recent advances in technology for taking appropriate images of sediment in a range of natural environments, promises to revolutionize the logistics and speed at which grain-size data may be obtained from the field.

  6. Jacobian-free approximate solvers for hyperbolic systems: Application to relativistic magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Castro, Manuel J.; Gallardo, José M.; Marquina, Antonio

    2017-10-01

    We present recent advances in PVM (Polynomial Viscosity Matrix) methods based on internal approximations to the absolute value function, and compare them with Chebyshev-based PVM solvers. These solvers only require a bound on the maximum wave speed, so no spectral decomposition is needed. Another important feature of the proposed methods is that they can be written in Jacobian-free form, in which only evaluations of the physical flux are used. This is particularly interesting when considering systems for which the Jacobians involve complex expressions, e.g., the relativistic magnetohydrodynamics (RMHD) equations. On the other hand, the proposed Jacobian-free solvers have also been extended to the case of approximate DOT (Dumbser-Osher-Toro) methods, which can be regarded as simple and efficient approximations to the classical Osher-Solomon method, sharing most of its interesting features and being applicable to general hyperbolic systems. To test the properties of our schemes a number of numerical experiments involving the RMHD equations are presented, both in one and two dimensions. The obtained results are in good agreement with those found in the literature and show that our schemes are robust and accurate, running stably under a satisfactory time step restriction. It is worth emphasizing that, although this work focuses on RMHD, the proposed schemes are suitable for application to general hyperbolic systems.
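
    The simplest member of this family is easy to sketch: taking the viscosity "polynomial" to be the constant alpha (a bound on the fastest wave) gives the Rusanov/local Lax-Friedrichs flux, which needs only physical flux evaluations and no Jacobian. The 1-D Burgers demonstration below illustrates the Jacobian-free idea, not the RMHD solvers of the paper.

        # Simplest Jacobian-free PVM-type flux: approximate |A| by alpha*I, with
        # alpha a bound on the maximum wave speed (the Rusanov choice).  Only
        # physical flux evaluations are needed -- no spectral decomposition.
        # Demonstrated on 1-D Burgers; grid and CFL constants are assumptions.
        import numpy as np

        def flux(u):                        # Burgers physical flux
            return 0.5 * u**2

        def pvm0_step(u, dx, dt):
            uL, uR = u[:-1], u[1:]
            alpha = np.maximum(np.abs(uL), np.abs(uR))        # local speed bound
            F = 0.5 * (flux(uL) + flux(uR)) - 0.5 * alpha * (uR - uL)
            u_new = u.copy()
            u_new[1:-1] -= dt / dx * (F[1:] - F[:-1])         # interior update
            return u_new

        x = np.linspace(0.0, 1.0, 401)
        u = np.sin(2 * np.pi * x) + 1.5                       # smooth data steepening
        dx = x[1] - x[0]
        for _ in range(200):
            u = pvm0_step(u, dx, dt=0.4 * dx / np.abs(u).max())   # CFL-limited step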

  7. Evaluation of Strain-Life Fatigue Curve Estimation Methods and Their Application to a Direct-Quenched High-Strength Steel

    NASA Astrophysics Data System (ADS)

    Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.

    2018-03-01

    Methods to estimate the strain-life curve, divided into three categories (simple approximations, artificial neural network-based approaches, and continuum damage mechanics models), were examined, and their accuracy was assessed in the strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. Simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed an inconsistency in estimation of the fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting early stages of crack initiation. This model requires more experimental data for calibration than approaches using simple approximations. As a result of the different theories underlying the analyzed methods, the different approaches have different strengths and weaknesses. However, it was found that the group of parametric equations categorized as simple approximations is the easiest for practical use, their applicability having already been verified for a broad range of materials.
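
    As a concrete example of the "simple approximations" category, Manson's universal slopes method estimates the whole strain-life curve from monotonic tensile properties alone; the abstract does not list the exact equations tested, so this classic textbook form and the material values below are offered as representative assumptions.

        # Manson's universal slopes method, a classic "simple approximation":
        # from ultimate strength Su, modulus E, and reduction of area RA,
        #   total strain range  de = 3.5*(Su/E)*N**-0.12 + ef**0.6 * N**-0.6,
        # with true fracture ductility ef = ln(1/(1 - RA)).
        # Representative textbook constants; not necessarily the equations tested.
        import numpy as np

        def universal_slopes(N, Su, E, RA):
            ef = np.log(1.0 / (1.0 - RA))
            elastic = 3.5 * (Su / E) * N ** -0.12
            plastic = ef ** 0.6 * N ** -0.6
            return elastic + plastic            # total strain range

        N = np.logspace(2, 6, 5)                # cycles to failure
        # Example values loosely typical of a high-strength steel (assumptions).
        for n, de in zip(N, universal_slopes(N, Su=900e6, E=210e9, RA=0.55)):
            print(f"N = {n:8.0f}  strain range = {de:.4%}")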

  8. Less-Complex Method of Classifying MPSK

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2006-01-01

    An alternative to an optimal method of automated classification of signals modulated with M-ary phase-shift-keying (M-ary PSK or MPSK) has been derived. The alternative method is approximate, but it offers nearly optimal performance and entails much less complexity, which translates to much less computation time. Modulation classification is becoming increasingly important in radio-communication systems that utilize multiple data modulation schemes and include software-defined or software-controlled receivers. Such a receiver may "know" little a priori about an incoming signal but may be required to correctly classify its data rate, modulation type, and forward error-correction code before properly configuring itself to acquire and track the symbol timing, carrier frequency, and phase, and ultimately produce decoded bits. Modulation classification has long been an important component of military interception of initially unknown radio signals transmitted by adversaries. Modulation classification may also be useful for enabling cellular telephones to automatically recognize different signal types and configure themselves accordingly. The concept of modulation classification as outlined in the preceding paragraph is quite general. However, at the present early stage of development, and for the purpose of describing the present alternative method, the term "modulation classification" or simply "classification" signifies, more specifically, a distinction between M-ary and M'-ary PSK, where M and M' represent two different integer multiples of 2. Both the prior optimal method and the present alternative method require the acquisition of magnitude and phase values of a number (N) of consecutive baseband samples of the incoming signal + noise. The prior optimal method is based on a maximum-likelihood (ML) classification rule that requires a calculation of likelihood functions for the M and M' hypotheses: Each likelihood function is an integral, over a full cycle of carrier phase, of a complicated sum of functions of the baseband sample values, the carrier phase, the carrier-signal and noise magnitudes, and M or M'. Then the likelihood ratio, defined as the ratio between the likelihood functions, is computed, leading to the choice of whichever hypothesis, M or M', is more likely. In the alternative method, the integral in each likelihood function is approximated by a sum over values of the integrand sampled at a number, l, of equally spaced values of carrier phase. Used in this way, l is a parameter that can be adjusted to trade computational complexity against the probability of misclassification. In the limit as l approaches infinity, one obtains the integral form of the likelihood function and thus recovers the ML classification. The present approximate method has been tested in comparison with the ML method by means of computational simulations. The results of the simulations have shown that the performance (as quantified by probability of misclassification) of the approximate method is nearly indistinguishable from that of the ML method (see figure).
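
    A compact version of the approximation described above: the integral over carrier phase is replaced by an average over l equally spaced trial phases, and each sample's contribution is averaged over the M constellation points with a log-sum-exp for numerical stability. Unit symbol amplitude and a known noise variance are simplifying assumptions not fixed by the abstract.

        # Approximate MPSK classification: replace the integral over carrier phase
        # by an average over l equally spaced trial phases.  For unit-amplitude
        # symbols in complex AWGN with total noise variance sigma2, the per-sample
        # likelihood at trial phase theta is proportional to
        #   (1/M) * sum_m exp(2*Re{r * conj(s_m * e^{j*theta})} / sigma2).
        # Unit amplitude and known sigma2 are simplifying assumptions.
        import numpy as np
        from scipy.special import logsumexp

        rng = np.random.default_rng(6)

        def approx_loglik(r, M, sigma2, l=16):
            phases = np.exp(2j * np.pi * np.arange(l) / l)    # trial carrier phases
            symbols = np.exp(2j * np.pi * np.arange(M) / M)   # M-PSK constellation
            # z[n, i, m] = Re{ r_n * conj(symbol_m * phase_i) }
            z = (r[:, None, None]
                 * np.conj(phases[None, :, None] * symbols[None, None, :])).real
            per_sample = logsumexp(2.0 * z / sigma2, axis=2) - np.log(M)
            per_phase = per_sample.sum(axis=0)                # joint over samples
            return logsumexp(per_phase) - np.log(l)           # average over phases

        # Simulate QPSK at about 11 dB SNR with an unknown carrier phase of 0.3 rad,
        # then decide between the BPSK (M = 2) and QPSK (M = 4) hypotheses.
        sigma2 = 0.08
        tx = np.exp(1j * (2.0 * np.pi * rng.integers(0, 4, 300) / 4.0 + 0.3))
        noise = (rng.normal(0.0, np.sqrt(sigma2 / 2.0), 300)
                 + 1j * rng.normal(0.0, np.sqrt(sigma2 / 2.0), 300))
        r = tx + noise
        print("M =", 2 if approx_loglik(r, 2, sigma2) > approx_loglik(r, 4, sigma2) else 4)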

  9. Real-time automatic registration in optical surgical navigation

    NASA Astrophysics Data System (ADS)

    Lin, Qinyong; Yang, Rongqian; Cai, Ken; Si, Xuan; Chen, Xiuwen; Wu, Xiaoming

    2016-05-01

    An image-guided surgical navigation system requires the patient-to-image registration time to be improved in order to enhance the convenience of the registration procedure. A critical step toward this aim is performing a fully automatic patient-to-image registration. This study reports on the design of custom fiducial markers and on the performance of a real-time automatic patient-to-image registration method that uses these markers with an optical tracking system for rigid anatomy. The custom fiducial markers are designed to be localized automatically in both the patient and image spaces. Automatic localization is performed by registering a point cloud, sampled from the three-dimensional (3D) model surface of a fiducial-marker pedestal, to each pedestal found in image space. A head phantom is constructed to evaluate the performance of the real-time automatic registration method under four fiducial configurations. The head-phantom experiments demonstrate that the real-time automatic registration method is more convenient, rapid, and accurate than the manual method. The time required for each registration is approximately 0.1 s. The automatic localization method precisely localizes the fiducial markers in image space, and the average target registration error for the four configurations is approximately 0.7 mm. The registration performance is independent of the positions relative to the tracking system and of patient movement during the operation.
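
    Once corresponding fiducial positions are known in both spaces, the rigid patient-to-image transform can be obtained in closed form. The following sketch shows the standard SVD-based (Kabsch) least-squares solution, a generic stand-in for the paper's registration step; function names are illustrative.

      import numpy as np

      def rigid_register(P, Q):
          # Least-squares rigid transform (R, t) mapping points P (patient
          # space) onto corresponding points Q (image space), via SVD.
          cp, cq = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cp).T @ (Q - cq)                  # cross-covariance matrix
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
          R = Vt.T @ D @ U.T
          t = cq - R @ cp
          return R, t

      def target_registration_error(R, t, targets_p, targets_q):
          # RMS distance between transformed and true target positions.
          return np.sqrt(((targets_p @ R.T + t - targets_q) ** 2).sum(axis=1).mean())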

  10. Quiet Quincy Quarter. Teacher's Guide [and] Student Materials.

    ERIC Educational Resources Information Center

    Zishka, Phyllis

    This document suggests learning activities, teaching methods, objectives, and evaluation measures for a second grade consumer education unit on quarters. The unit, which requires approximately six hours of class time, reinforces basic social studies and mathematics skills including following sequences of numbers, distinguishing left from right,…

  11. Some comparisons of complexity in dictionary-based and linear computational models.

    PubMed

    Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello

    2011-03-01

    Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks one may also adjust the parameters of the functions being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best-approximation operators) are not satisfied by neural networks, and optimization of parameters in neural networks is more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accurate approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators: the traditional linear ones and the so-called variable-basis type, which includes neural networks and radial and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.

  12. Transport of phase space densities through tetrahedral meshes using discrete flow mapping

    NASA Astrophysics Data System (ADS)

    Bajars, Janis; Chappell, David J.; Søndergaard, Niels; Tanner, Gregor

    2017-01-01

    Discrete flow mapping was recently introduced as an efficient ray-based method for determining wave energy distributions in complex built-up structures. Wave energy densities are transported along ray trajectories through polygonal mesh elements using a finite-dimensional approximation of a ray transfer operator; in this way, the method can be viewed as a smoothed ray-tracing method defined over meshed surfaces. Many applications require the resolution of wave energy distributions in three-dimensional domains, such as in room acoustics, underwater acoustics, and electromagnetic cavity problems. In this work we extend discrete flow mapping to three-dimensional domains by propagating wave energy densities through tetrahedral meshes. The geometric simplicity of the tetrahedral mesh elements is utilised to compute the ray transfer operator efficiently using a mixture of analytic and spectrally accurate numerical integration. The important issue of how to choose a suitable basis approximation in phase space whilst maintaining a reasonable computational cost is addressed via low-order local approximations on tetrahedral faces in the position coordinate and high-order orthogonal polynomial expansions in momentum space.

  13. Review of probabilistic analysis of dynamic response of systems with random parameters

    NASA Technical Reports Server (NTRS)

    Kozin, F.; Klosner, J. M.

    1989-01-01

    The various methods that have been studied in the past to allow probabilistic analysis of dynamic response for systems with random parameters are reviewed. Dynamic response could be obtained deterministically if the variations about the nominal values were small; however, for space structures that require precise pointing, the variations about the nominal values of the structural details and of the environmental conditions are too large to be considered negligible. These uncertainties are accounted for in terms of probability distributions about the nominal values. The quantities of concern for describing the response of the structure include displacements, velocities, and the distributions of natural frequencies. The exact statistical characterization of the response would yield joint probability distributions for the response variables. Since the random quantities appear as coefficients, determining the exact distributions is difficult at best, and certain approximations have to be made. A number of available techniques are discussed, including the nonlinear case. The methods described are: (1) Liouville's equation; (2) perturbation methods; (3) mean-square approximate systems; and (4) approximation of nonlinear systems by linear systems.
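
    As a toy illustration of the perturbation idea (item 2), the sketch below compares first-order perturbation statistics of a single-degree-of-freedom natural frequency, ω = √(k/m), with Monte Carlo sampling of a random stiffness; all numerical values are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      m = 1.0
      k_mean, k_std = 100.0, 10.0          # random stiffness about its nominal value

      # Monte Carlo statistics of the natural frequency
      k = rng.normal(k_mean, k_std, 100_000)
      w = np.sqrt(np.clip(k, 1e-9, None) / m)
      print("Monte Carlo :", w.mean(), w.std())

      # First-order perturbation about the nominal stiffness:
      # mean ~ w(k_mean), std ~ |dw/dk| * std(k), with dw/dk = 1/(2*sqrt(k*m))
      w0 = np.sqrt(k_mean / m)
      dw_dk = 0.5 / np.sqrt(k_mean * m)
      print("perturbation:", w0, dw_dk * k_std)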

  14. Aeroacoustic directivity via wave-packet analysis of mean or base flows

    NASA Astrophysics Data System (ADS)

    Edstrand, Adam; Schmid, Peter; Cattafesta, Louis

    2017-11-01

    Noise pollution is an ever-increasing problem in society, and knowledge of the directivity patterns of the sound radiation is required for prediction and control. Directivity is frequently determined through costly numerical simulations of the flow field combined with an acoustic analogy. We introduce a new computationally efficient method of finding directivity for a given mean or base flow field using wave-packet analysis (Trefethen, PRSA 2005). Wave-packet analysis approximates the eigenvalue spectrum with spectral accuracy by modeling the eigenfunctions as wave packets. With the wave packets determined, we then follow the method of Obrist (JFM, 2009), which uses Lighthill's acoustic analogy to determine the far-field sound radiation and directivity of wave-packet modes. We apply this method to a canonical jet flow (Gudmundsson and Colonius, JFM 2011) and determine the directivity of potentially unstable wave packets. Furthermore, we generalize the method to consider a three-dimensional flow field of a trailing vortex wake. In summary, we approximate the disturbances as wave packets and extract the directivity from the wave-packet approximation in a fraction of the time of standard aeroacoustic solvers. ONR Grant N00014-15-1-2403.

  15. Partition resampling and extrapolation averaging: approximation methods for quantifying gene expression in large numbers of short oligonucleotide arrays.

    PubMed

    Goldstein, Darlene R

    2006-10-01

    Studies of gene expression using high-density short oligonucleotide arrays have become a standard in a variety of biological contexts. Of the expression measures that have been proposed to quantify expression in these arrays, multi-chip-based measures have been shown to perform well. As gene expression studies increase in size, however, utilizing multi-chip expression measures is more challenging in terms of computing memory requirements and time. A strategic alternative to exact multi-chip quantification on a full large chip set is to approximate expression values based on subsets of chips. This paper introduces an extrapolation method, Extrapolation Averaging (EA), and a resampling method, Partition Resampling (PR), to approximate expression in large studies. An examination of properties indicates that subset-based methods can perform well compared with exact expression quantification. The focus is on short oligonucleotide chips, but the same ideas apply equally well to any array type for which expression is quantified using an entire set of arrays, rather than for only a single array at a time. Software implementing Partition Resampling and Extrapolation Averaging is under development as an R package for the BioConductor project.

  16. Automated prediction of protein function and detection of functional sites from structure.

    PubMed

    Pazos, Florencio; Sternberg, Michael J E

    2004-10-12

    Current structural genomics projects are yielding structures for proteins whose functions are unknown. Accordingly, there is a pressing requirement for computational methods for function prediction. Here we present PHUNCTIONER, an automatic method for structure-based function prediction using automatically extracted functional sites (residues associated with functions). The method relates proteins with the same function through structural alignments and extracts 3D profiles of conserved residues. Functional features to train the method are extracted from the Gene Ontology (GO) database. The method extracts these features from the entire GO hierarchy and hence is applicable across the whole range of function specificity. 3D profiles associated with 121 GO annotations were extracted. We tested the power of the method both for the prediction of function and for the extraction of functional sites. The success of function prediction by our method was compared with that of the standard homology-based method. In the zone of low sequence similarity (approximately 15%), our method assigns the correct GO annotation in 90% of the protein structures considered, approximately 20% higher than inheritance of function from the closest homologue.

  17. Wavelet-based adaptation methodology combined with finite difference WENO to solve ideal magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Do, Seongju; Li, Haojun; Kang, Myungjoo

    2017-06-01

    In this paper, we present an accurate and efficient wavelet-based adaptive weighted essentially non-oscillatory (WENO) scheme for the hydrodynamics and ideal magnetohydrodynamics (MHD) equations arising from hyperbolic conservation systems. The proposed method works with the finite difference weighted essentially non-oscillatory (FD-WENO) method in space and the third-order total variation diminishing (TVD) Runge-Kutta (RK) method in time. The philosophy of this work is to use lifted interpolating wavelets not only as detectors of singularities but also as interpolators; in particular, flexible interpolation can be performed via the inverse wavelet transform. When the divergence cleaning method, which introduces an auxiliary scalar field ψ, is applied to the base numerical schemes to impose the divergence-free condition on the magnetic field in the MHD equations, the approximations to derivatives of ψ require neighboring points. Moreover, the fifth-order WENO interpolation requires a large stencil to reconstruct a high-order polynomial. In such cases, an efficient interpolation method is necessary. The adaptive spatial differentiation method is considered, as well as the adaptation of grid resolutions. In order to avoid the heavy computation of FD-WENO, fixed-stencil approximation without computing the nonlinear WENO weights is used in the smooth regions, and the characteristic decomposition method is replaced by a component-wise approach. Numerical results demonstrate that with the adaptive method we are able to resolve solutions that agree well with those computed on the corresponding fine grid.
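
    For reference, a minimal sketch of the generic fifth-order WENO-JS reconstruction that underlies such schemes (not the paper's specific adaptive variant): the inputs v0..v4 are the five upwind-biased cell averages, and in smooth regions the nonlinear weights approach the ideal linear weights (0.1, 0.6, 0.3), which is what the fixed-stencil shortcut above exploits.

      def weno5(v0, v1, v2, v3, v4, eps=1e-6):
          # Smoothness indicators of the three candidate stencils
          b0 = 13/12*(v0 - 2*v1 + v2)**2 + 0.25*(v0 - 4*v1 + 3*v2)**2
          b1 = 13/12*(v1 - 2*v2 + v3)**2 + 0.25*(v1 - v3)**2
          b2 = 13/12*(v2 - 2*v3 + v4)**2 + 0.25*(3*v2 - 4*v3 + v4)**2
          # Third-order candidate reconstructions at the cell interface
          p0 = (2*v0 - 7*v1 + 11*v2) / 6
          p1 = ( -v1 + 5*v2 +  2*v3) / 6
          p2 = (2*v2 + 5*v3 -   v4) / 6
          # Nonlinear weights built around the ideal linear weights
          a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
          s = a0 + a1 + a2
          return (a0*p0 + a1*p1 + a2*p2) / s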

  18. Transmutation approximations for the application of hybrid Monte Carlo/deterministic neutron transport to shutdown dose rate analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biondo, Elliott D.; Wilson, Paul P. H.

    In fusion energy systems (FES), neutrons born from the burning plasma activate system components. The photon dose rate after shutdown from the resulting radionuclides must be quantified. This shutdown dose rate (SDR) is calculated by coupling neutron transport, activation analysis, and photon transport. The size, complexity, and attenuating configuration of FES motivate the use of hybrid Monte Carlo (MC)/deterministic neutron transport. The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method can be used to optimize MC neutron transport for coupled multiphysics problems, including SDR analysis, using deterministic estimates of adjoint flux distributions. When used for SDR analysis, MS-CADIS requires the formulation of an adjoint neutron source that approximates the transmutation process. In this work, transmutation approximations are used to derive a solution for this adjoint neutron source. It is shown that these approximations are reasonably met for typical FES neutron spectra and materials over a range of irradiation scenarios. When these approximations are met, the Groupwise Transmutation (GT)-CADIS method, proposed here, can be used effectively. GT-CADIS is an implementation of the MS-CADIS method for SDR analysis that uses a series of single-energy-group irradiations to calculate the adjoint neutron source. For a simple SDR problem, GT-CADIS provides speedups of 200 ± 100 relative to global variance reduction with the Forward-Weighted (FW)-CADIS method and (9 ± 5) x 10^4 relative to analog MC. This work shows that GT-CADIS is broadly applicable to FES problems and will significantly reduce the computational resources necessary for SDR analysis.

  19. A hybrid continuous-discrete method for stochastic reaction–diffusion processes

    PubMed Central

    Zheng, Likun; Nie, Qing

    2016-01-01

    Stochastic fluctuations in reaction–diffusion processes often have substantial effects on the spatial and temporal dynamics of signal transduction in complex biological systems. One popular approach for simulating these processes is to divide the system into small spatial compartments, assuming that molecules react only within the same compartment and jump between adjacent compartments driven by diffusion. While this approach is convenient to implement, its computational cost may become prohibitive when diffusive jumps occur significantly more frequently than reactions, as in the case of rapid diffusion. Here, we present a hybrid continuous-discrete method in which diffusion is simulated using a continuous approximation while reactions are handled with the Gillespie algorithm. Specifically, the diffusive jumps are approximated as continuous Gaussian random vectors with time-dependent means and covariances, allowing use of a large time step even for rapid diffusion. By considering the correlation among diffusive jumps, the approximation is accurate to the second moment of the diffusion process. In addition, a criterion is obtained for identifying the regions in which such a diffusion approximation is required, enabling adaptive calculations for better accuracy. Applications to a linear diffusion system and two nonlinear systems of morphogens demonstrate the effectiveness and benefits of the new hybrid method. PMID:27703710

  1. An improved multilevel Monte Carlo method for estimating probability distribution functions in stochastic oil reservoir simulations

    DOE PAGES

    Lu, Dan; Zhang, Guannan; Webster, Clayton G.; ...

    2016-12-30

    In this paper, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest coming from the numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, which require a very large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost through the use of multifidelity approximations. The performance of MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficulty, we approximate the indicator using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function so as to balance the computational gain and the approximation error. The combined techniques are integrated into a general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as the fine-grid oil reservoir model considered in this effort. The numerical results reveal that, with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
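
    The core trick is replacing the discontinuous indicator 1{Q <= q} with a smooth surrogate. A minimal single-level sketch, assuming a logistic smoothing function of bandwidth delta (an illustrative choice, not the calibrated function from the paper):

      import numpy as np

      def smoothed_cdf_estimate(samples, q, delta):
          # MC estimate of P(Q <= q) with the indicator replaced by a smooth
          # sigmoid; in the MLMC setting this smoothing is what restores the
          # variance decay across levels.
          return np.mean(1.0 / (1.0 + np.exp(-(q - samples) / delta)))

      rng = np.random.default_rng(1)
      x = rng.normal(size=10_000)                 # stand-in for model outputs
      print(smoothed_cdf_estimate(x, 0.5, 0.05))  # compare: exact Phi(0.5) ~ 0.691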

  2. Experiences on p-Version Time-Discontinuous Galerkin's Method for Nonlinear Heat Transfer Analysis and Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Hou, Gene

    2004-01-01

    The focus of this research is the development of analysis and sensitivity-analysis equations for nonlinear, transient heat transfer problems modeled by a p-version, time-discontinuous finite element approximation. The resulting matrix equation of the state equation is simply of the form A(x)x = c, representing a single-step, time-marching scheme, and the Newton-Raphson method is used to solve this nonlinear equation. Examples are first provided to demonstrate the accuracy characteristics of the resultant finite element approximation. A direct differentiation approach is then used to compute the thermal sensitivities of a nonlinear heat transfer problem. The report shows that only minimal coding effort is required to enhance the analysis code with the sensitivity analysis capability.
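
    A minimal sketch of a Newton-Raphson iteration for a system of the form A(x)x = c, with a finite-difference Jacobian standing in for the problem-specific tangent matrix (the report's direct differentiation approach would supply exact derivatives instead):

      import numpy as np

      def newton_solve(A, c, x0, tol=1e-10, max_iter=50):
          # Solve the nonlinear system R(x) = A(x) x - c = 0 by Newton iteration.
          x = x0.astype(float).copy()
          for _ in range(max_iter):
              R = A(x) @ x - c
              if np.linalg.norm(R) < tol:
                  break
              n, h = x.size, 1e-7
              J = np.empty((n, n))
              for j in range(n):              # finite-difference Jacobian, column by column
                  xp = x.copy()
                  xp[j] += h
                  J[:, j] = (A(xp) @ xp - c - R) / h
              x -= np.linalg.solve(J, R)
          return x

      # Example with an illustrative nonlinearity: A(x) = I + diag(x**2)/10
      A = lambda x: np.eye(2) + np.diag(x**2) / 10
      print(newton_solve(A, np.array([1.0, 2.0]), np.zeros(2)))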

  3. Fuzzy and neural control

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1992-01-01

    Fuzzy logic and neural networks provide new methods for designing control systems. Fuzzy logic controllers do not require a complete analytical model of a dynamic system and can provide knowledge-based heuristic controllers for ill-defined and complex systems. Neural networks can be used for learning control. In this chapter, we discuss hybrid methods using fuzzy logic and neural networks which can start with an approximate control knowledge base and refine it through reinforcement learning.

  4. Simulation of Simple Controlled Processes with Dead-Time.

    ERIC Educational Resources Information Center

    Watson, Keith R.; And Others

    1985-01-01

    The determination of closed-loop response of processes containing dead-time is typically not covered in undergraduate process control, possibly because the solution by Laplace transforms requires the use of Pade approximation for dead-time, which makes the procedure lengthy and tedious. A computer-aided method is described which simplifies the…
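
    For reference, the first-order Padé approximation typically used for the dead-time element replaces the irrational transfer function with a rational one (in LaTeX notation; θ is the dead time and s the Laplace variable):

      e^{-\theta s} \approx \frac{1 - \theta s / 2}{1 + \theta s / 2}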

  5. Path Planning For A Class Of Cutting Operations

    NASA Astrophysics Data System (ADS)

    Tavora, Jose

    1989-03-01

    Optimizing processing time in some contour-cutting operations requires solving the so-called no-load path problem. This problem is formulated and an approximate resolution method (based on heuristic search techniques) is described. Results for real-life instances (clothing layouts in the apparel industry) are presented and evaluated.

  6. Multiconfigurational short-range density-functional theory for open-shell systems

    NASA Astrophysics Data System (ADS)

    Hedegård, Erik Donovan; Toulouse, Julien; Jensen, Hans Jørgen Aagaard

    2018-06-01

    Many chemical systems cannot be described by quantum chemistry methods based on a single-reference wave function. Accurate predictions of energetic and spectroscopic properties require a delicate balance between describing the most important configurations (static correlation) and obtaining dynamical correlation efficiently. The former is most naturally done through a multiconfigurational (MC) wave function, whereas the latter can be done by, e.g., perturbation theory. We have employed a different strategy, namely, a hybrid between multiconfigurational wave functions and density-functional theory (DFT) based on range separation. The method is denoted by MC short-range DFT (MC-srDFT) and is more efficient than perturbative approaches as it capitalizes on the efficient treatment of the (short-range) dynamical correlation by DFT approximations. In turn, the method also improves DFT with standard approximations through the ability of multiconfigurational wave functions to recover large parts of the static correlation. Until now, our implementation was restricted to closed-shell systems, and to lift this restriction, we present here the generalization of MC-srDFT to open-shell cases. The additional terms required to treat open-shell systems are derived and implemented in the DALTON program. This new method for open-shell systems is illustrated on dioxygen and [Fe(H2O)6]3+.
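
    The range separation underlying MC-srDFT splits the two-electron Coulomb interaction with the error function (the standard choice; the parameter μ controls where the split occurs), assigning the long-range part to the multiconfigurational wave function and the short-range part to a density functional (in LaTeX notation):

      \frac{1}{r_{12}} = \frac{\operatorname{erf}(\mu r_{12})}{r_{12}} + \frac{\operatorname{erfc}(\mu r_{12})}{r_{12}}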

  7. Improving the efficiency of configurational-bias Monte Carlo: A density-guided method for generating bending angle trials for linear and branched molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sepehri, Aliasghar; Loeffler, Troy D.; Chen, Bin, E-mail: binchen@lsu.edu

    2014-08-21

    A new method has been developed to generate bending-angle trials to improve the acceptance rate and the speed of configurational-bias Monte Carlo. Whereas traditionally the trial geometries are generated from a uniform distribution, in this method we attempt to use the exact probability density function, so that each geometry generated is likely to be accepted. In actual practice, due to the complexity of this probability density function, a numerical representation of the distribution is required; this numerical table can be generated a priori from the distribution function. The method has been tested on a united-atom model of alkanes, including propane, 2-methylpropane, and 2,2-dimethylpropane, which are good representatives of both linear and branched molecules. These test cases show that reasonable approximations can be made, especially for the highly branched molecules, to drastically reduce the dimensionality and correspondingly the amount of tabulated data that needs to be stored. Despite these approximations, the dependencies between the various geometrical variables can still be well accounted for, as evident from the nearly perfect acceptance rate achieved. For all cases, the bending angles were shown to be sampled correctly by this method, with an acceptance rate of at least 96% for 2,2-dimethylpropane and more than 99% for propane. Since only one trial is required for each bending angle (instead of the thousands of trials required by the conventional algorithm), this method can dramatically reduce the simulation time. Profiling of our Monte Carlo simulation code shows that trial generation, which used to be the most time-consuming step, is no longer the dominant component of the simulation.
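
    A minimal sketch of the tabulated-density idea: draw trials by inverse-CDF interpolation over a precomputed numerical table rather than from a uniform distribution. The grid, the toy bending density, and the function names are illustrative, not the paper's actual tables.

      import numpy as np

      def sample_bending_angle(theta_grid, pdf_table, n, rng):
          # Inverse-CDF sampling from a tabulated probability density:
          # accumulate the table into a CDF, then invert by interpolation.
          cdf = np.cumsum(pdf_table)
          cdf = cdf / cdf[-1]
          u = rng.random(n)
          return np.interp(u, cdf, theta_grid)

      rng = np.random.default_rng(2)
      theta = np.linspace(0.5, np.pi, 512)
      # Illustrative Boltzmann-like bending density, peaked near ~112 degrees
      pdf = np.sin(theta) * np.exp(-60.0 * (theta - 1.96) ** 2)
      trials = sample_bending_angle(theta, pdf, 10, rng)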

  8. Calculation of water equivalent thickness of materials of arbitrary density, elemental composition and thickness in proton beam irradiation

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Newhauser, Wayne D.

    2009-03-01

    In proton therapy, the radiological thickness of a material is commonly expressed in terms of water equivalent thickness (WET) or water equivalent ratio (WER). However, WET calculations have required either iterative numerical methods or approximate methods of unknown accuracy. The objective of this study was to develop a simple deterministic formula to calculate WET values with an accuracy of 1 mm for materials commonly used in proton radiation therapy. Several alternative formulas were derived in which the energy loss was calculated based on the Bragg-Kleeman rule (BK), the Bethe-Bloch equation (BB) or an empirical version of the Bethe-Bloch equation (EBB). Alternative approaches were developed for targets that were 'radiologically thin' or 'thick'. The accuracy of these methods was assessed by comparison to values from an iterative numerical method that utilized evaluated stopping power tables. In addition, we also tested the approximate formula given in the International Atomic Energy Agency's dosimetry code of practice (Technical Report Series No 398, 2000, IAEA, Vienna) and the stopping-power-ratio approximation. These comparisons revealed that most methods were accurate for cases involving thin or low-Z targets; however, only the thick-target formulas provided accurate WET values for targets that were radiologically thick and contained high-Z material.
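
    A minimal sketch of the thin-target approximation, in which the physical thickness is scaled by the ratio of density times mass stopping power of the material to that of water, both evaluated at the incident energy. The numerical values below are illustrative placeholders, not the paper's data.

      def wet_thin_target(t_m, rho_m, S_m, rho_w=1.0, S_w=1.0):
          # Thin-target water-equivalent thickness: t_w = t_m * (rho_m*S_m)/(rho_w*S_w),
          # where S_m, S_w are mass stopping powers (e.g., from evaluated tables).
          return t_m * (rho_m * S_m) / (rho_w * S_w)

      # Example: 5 mm of aluminum for ~200 MeV protons (illustrative stopping powers)
      print(wet_thin_target(t_m=5.0, rho_m=2.70, S_m=3.4, S_w=4.5))  # ~10 mm of water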

  9. A high-order time-parallel scheme for solving wave propagation problems via the direct construction of an approximate time-evolution operator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haut, T. S.; Babb, T.; Martinsson, P. G.

    2015-06-16

    Our manuscript demonstrates a technique for efficiently solving the classical wave equation, the shallow water equations, and, more generally, equations of the form ∂u/∂t = Lu, where L is a skew-Hermitian differential operator. The idea is to explicitly construct an approximation to the time-evolution operator exp(τL) for a relatively large time-step τ. Recently developed techniques for approximating oscillatory scalar functions by rational functions, and accelerated algorithms for computing functions of discretized differential operators, are exploited. Principal advantages of the proposed method include stability even for large time-steps, the possibility to parallelize in time over many characteristic wavelengths, and large speed-ups over existing methods in situations where simulation over long times is required. Numerical examples involving the 2D rotating shallow water equations and the 2D wave equation in an inhomogeneous medium are presented, and the method is compared to the 4th-order Runge-Kutta (RK4) method and to the use of Chebyshev polynomials. The new method achieved high accuracy over long time intervals, at speeds that are orders of magnitude faster than both RK4 and the use of Chebyshev polynomials.
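
    A minimal sketch of the large-time-step idea on a toy problem, using a dense matrix exponential as a stand-in for the paper's rational-approximation construction. The periodic central-difference advection operator below is skew-symmetric, the real analogue of a skew-Hermitian L.

      import numpy as np
      from scipy.linalg import expm

      # Periodic 1-D advection u_t = -c u_x discretized with central differences
      n, c = 128, 1.0
      x = np.linspace(0, 2 * np.pi, n, endpoint=False)
      dx = x[1] - x[0]
      L = np.zeros((n, n))
      for i in range(n):
          L[i, (i + 1) % n] = -c / (2 * dx)
          L[i, (i - 1) % n] = c / (2 * dx)

      u0 = np.exp(-20 * (x - np.pi) ** 2)
      tau = 1.0                              # one large time step
      u_big = expm(tau * L) @ u0             # direct application of exp(tau L)

      def rk4(u, dt, steps):                 # reference: many small RK4 steps
          for _ in range(steps):
              k1 = L @ u
              k2 = L @ (u + dt / 2 * k1)
              k3 = L @ (u + dt / 2 * k2)
              k4 = L @ (u + dt * k3)
              u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
          return u

      print(np.abs(u_big - rk4(u0, tau / 2000, 2000)).max())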

  10. CyClus: a fast, comprehensive cylindrical interface approximation clustering/reranking method for rigid-body protein-protein docking decoys.

    PubMed

    Omori, Satoshi; Kitao, Akio

    2013-06-01

    We propose a fast clustering and reranking method, CyClus, for protein-protein docking decoys. The method enables comprehensive clustering of whole decoy sets generated by rigid-body docking, using a cylindrical approximation of the protein-protein interface and hierarchical clustering procedures. We demonstrate the clustering and reranking of 54,000 decoy structures generated by ZDOCK for each complex within a few minutes. After parameter tuning on the test set in ZDOCK benchmark 2.0 with the ZDOCK and ZRANK scoring functions, blind tests on the incremental data in ZDOCK benchmarks 3.0 and 4.0 were conducted. CyClus successfully generated smaller subsets of decoys containing near-native decoys. For example, the number of decoys required to create subsets containing near-native decoys with 80% probability was reduced to 22%-50% of the number required by the original ZDOCK ranking. Although specific ZDOCK and ZRANK results were demonstrated, the CyClus algorithm was designed to be more general and can be applied to a wide range of decoys and scoring functions by adjusting just two parameters, p and T. CyClus results were also compared to those from ClusPro. Copyright © 2013 Wiley Periodicals, Inc.

  11. A thin-shock-layer solution for nonequilibrium, inviscid hypersonic flows in earth, Martian, and Venusian atmospheres

    NASA Technical Reports Server (NTRS)

    Grose, W. L.

    1971-01-01

    An approximate inverse solution is presented for the nonequilibrium flow in the inviscid shock layer about a vehicle in hypersonic flight. The method is based upon a thin-shock-layer approximation and has the advantage of being applicable to both subsonic and supersonic regions of the shock layer. The relative simplicity of the method makes it ideally suited for programming on a digital computer with a significant reduction in storage capacity and computing time required by other more exact methods. Comparison of nonequilibrium solutions for an air mixture obtained by the present method is made with solutions obtained by two other methods. Additional cases are presented for entry of spherical nose cones into representative Venusian and Martian atmospheres. A digital computer program written in FORTRAN language is presented that permits an arbitrary gas mixture to be employed in the solution. The effects of vibration, dissociation, recombination, electronic excitation, and ionization are included in the program.

  12. Extended Finite Element Method with Simplified Spherical Harmonics Approximation for the Forward Model of Optical Molecular Imaging

    PubMed Central

    Li, Wei; Yi, Huangjian; Zhang, Qitan; Chen, Duofang; Liang, Jimin

    2012-01-01

    An extended finite element method (XFEM) for the forward model of 3D optical molecular imaging is developed with a simplified spherical harmonics approximation (SPN). In the XFEM scheme of the SPN equations, the signed distance function is employed to represent the internal tissue boundary accurately, and it is then used to construct the enriched basis functions of the finite element scheme. The finite element calculation can therefore be carried out without time-consuming internal boundary mesh generation. Moreover, the overly fine mesh that would otherwise be required to conform to the complex tissue boundary, at excessive time cost, can be avoided. XFEM thus lends itself to tissues with complex internal structure and improves the computational efficiency. Phantom and digital mouse experiments were carried out to validate the efficiency of the proposed method. Compared with the standard finite element method and the classical Monte Carlo (MC) method, the validation results show the merits and potential of XFEM for optical imaging. PMID:23227108

  13. Influence of parameter changes to stability behavior of rotors

    NASA Technical Reports Server (NTRS)

    Fritzen, C. P.; Nordmann, R.

    1982-01-01

    The occurrence of unstable vibrations in rotating machinery requires corrective measures to improve the stability behavior. A simple approximate method is presented for determining the influence of parameter changes on the stability behavior. The method is based on an expansion of the eigenvalues in terms of the system parameters, and influence coefficients show the effect of structural modifications. The method was first applied to simple nonconservative rotor models and was then verified for an unsymmetric rotor on a test rig.
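
    A minimal sketch of the underlying first-order eigenvalue expansion: the sensitivity of each eigenvalue of a state matrix A to a parameter p is w_i (dA/dp) x_i, with x_i the right eigenvectors and w_i the matching left eigenvectors (rows of the inverse eigenvector matrix). The 2x2 example is an illustrative stand-in for a rotor model.

      import numpy as np

      def eigenvalue_sensitivities(A, dA_dp):
          # First-order sensitivities dlam_i/dp = w_i (dA/dp) x_i; the real
          # parts show how a design change shifts the stability margins.
          lam, X = np.linalg.eig(A)
          W = np.linalg.inv(X)           # rows are left eigenvectors, w_i x_i = 1
          sens = np.einsum('ij,jk,ki->i', W, dA_dp, X)
          return lam, sens

      # Example: sensitivity of a damped oscillator to its damping coefficient
      A = np.array([[0.0, 1.0], [-4.0, -0.2]])
      dA_dp = np.array([[0.0, 0.0], [0.0, -1.0]])   # d A / d (damping)
      lam, dlam = eigenvalue_sensitivities(A, dA_dp)
      print(lam, dlam)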

  14. Determination of Absolute Configuration of Secondary Alcohols Using Thin-Layer Chromatography

    PubMed Central

    Wagner, Alexander J.; Rychnovsky, Scott D.

    2013-01-01

    A new implementation of the Competing Enantioselective Conversion (CEC) method was developed to qualitatively determine the absolute configuration of enantioenriched secondary alcohols using thin-layer chromatography. The entire process for the method requires approximately 60 min and utilizes micromole quantities of the secondary alcohol being tested. A number of synthetically relevant secondary alcohols are presented. Additionally, 1H NMR spectroscopy was conducted on all samples to provide evidence of reaction conversion that supports the qualitative method presented herein. PMID:23593963

  15. Structural system reliability calculation using a probabilistic fault tree analysis method

    NASA Technical Reports Server (NTRS)

    Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.

    1992-01-01

    The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computationally intensive calculations. A computer program has been developed to implement the PFTA.

  16. Convergence analysis of surrogate-based methods for Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Yan, Liang; Zhang, Yuan-Xiang

    2017-12-01

    The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations performed in an offline phase. Although such approaches can be quite effective at reducing computational cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least twice as fast in the KL sense. An error bound on the Hellinger distance is also provided. To provide concrete examples of surrogate-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate Bayesian inference. Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.

  17. Adaptive hybrid simulations for multiscale stochastic reaction networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa

    2015-01-21

    The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods includes hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of an SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.
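
    For reference, a minimal sketch of the SSA direct method that such hybrid schemes retain for the discrete partition: draw the waiting time from the total propensity, pick a reaction in proportion to its propensity, and apply its stoichiometry. The birth-death network at the end is an illustrative stand-in.

      import numpy as np

      def ssa(x0, stoich, propensity, t_end, rng):
          # Gillespie direct method for an exact sample path of the SRN.
          t, x = 0.0, np.array(x0, dtype=int)
          path = [(t, x.copy())]
          while t < t_end:
              a = propensity(x)
              a0 = a.sum()
              if a0 <= 0.0:          # no reaction can fire
                  break
              t += rng.exponential(1.0 / a0)
              j = rng.choice(len(a), p=a / a0)
              x = x + stoich[j]
              path.append((t, x.copy()))
          return path

      # Illustrative birth-death network: 0 -> X at rate 5, X -> 0 at rate 0.1*X
      stoich = np.array([[+1], [-1]])
      propensity = lambda x: np.array([5.0, 0.1 * x[0]])
      path = ssa([0], stoich, propensity, 50.0, np.random.default_rng(7))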

  18. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric: the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability of observations deviating from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow likelihoods to be generated directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from Approximate Bayesian Computation (ABC), another commonly used method for generating simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
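
    For contrast with the parametric likelihood approximation described above, here is a minimal sketch of plain ABC rejection sampling on a toy simulator (the toy model, tolerance, and names are all illustrative):

      import numpy as np

      def abc_rejection(simulate, summary, observed, prior_draw, n, eps, rng):
          # Keep parameter draws whose simulated summary statistics land
          # within eps of the observed summaries; the accepted draws
          # approximate the posterior distribution.
          s_obs = summary(observed)
          kept = []
          for _ in range(n):
              theta = prior_draw(rng)
              if np.linalg.norm(summary(simulate(theta, rng)) - s_obs) < eps:
                  kept.append(theta)
          return np.array(kept)

      # Toy stand-in for a stochastic simulator: normal data with unknown mean
      simulate = lambda th, rng: rng.normal(th, 1.0, size=50)
      summary = lambda y: np.array([y.mean(), y.std()])
      observed = np.random.default_rng(8).normal(1.3, 1.0, size=50)
      posterior = abc_rejection(simulate, summary, observed,
                                lambda r: r.uniform(-5.0, 5.0),
                                20000, 0.3, np.random.default_rng(9))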

  1. Use of an OSSE to Evaluate Background Error Covariances Estimated by the 'NMC Method'

    NASA Technical Reports Server (NTRS)

    Errico, Ronald M.; Prive, Nikki C.; Gu, Wei

    2014-01-01

    The NMC method has proven utility for prescribing approximate background-error covariances required by variational data assimilation systems. Here, untuned NMC-method estimates are compared with explicitly determined error covariances produced within an OSSE context by exploiting the availability of the true simulated states. Such a comparison provides insights into what kind of rescaling is required to render the NMC method estimates usable. It is shown that rescaling of variances and directional correlation lengths depends greatly on both pressure and latitude. In particular, some scaling coefficients appropriate in the Tropics are the reciprocal of those in the Extratropics. Also, the degree of dynamic balance is grossly overestimated by the NMC method. These results agree with previous examinations of the NMC method which used ensembles as an alternative for estimating background-error statistics.

  2. Adaptation of the Carter-Tracy water influx calculation to groundwater flow simulation

    USGS Publications Warehouse

    Kipp, Kenneth L.

    1986-01-01

    The Carter-Tracy calculation for water influx is adapted to groundwater flow simulation, with additional clarifying explanation not present in the original papers. The Van Everdingen and Hurst aquifer-influence functions for radial flow from an outer aquifer region are employed. This technique, based on convolution of unit-step response functions, offers a simple but approximate method for embedding an inner region of groundwater flow simulation within a much larger aquifer region where flow can be treated in an approximate fashion. The use of aquifer-influence functions in groundwater flow modeling reduces the size of the computational grid, with a corresponding reduction in computer storage and execution time. The Carter-Tracy approximation to the convolution integral enables the aquifer-influence-function calculation to be made with an additional storage requirement of only twice the number of boundary nodes beyond that required for the inner-region simulation. It is a good approximation for constant flow rates but is poor for time-varying flow rates where the variation is large relative to the mean. A variety of outer aquifer region geometries, exterior boundary conditions, and flow rate versus potentiometric head relations can be used; the radial, transient-flow case presented is representative. An analytical approximation to the functions of Van Everdingen and Hurst for the dimensionless potentiometric head versus dimensionless time is given.
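
    A minimal sketch of the underlying superposition (Duhamel convolution) that Carter-Tracy approximates: cumulative influx is the sum of unit-step responses scaled by the history of boundary-head increments. The square-root response below is an illustrative stand-in, not the Van Everdingen and Hurst function.

      import numpy as np

      def superposed_influx(dt, head_increments, step_response):
          # Discrete convolution of the unit-step response with the history
          # of boundary-head changes; this is the full-history calculation
          # whose storage Carter-Tracy's approximation avoids.
          n = len(head_increments)
          W = np.zeros(n)
          for k in range(n):
              W[k] = sum(head_increments[j] * step_response((k - j) * dt)
                         for j in range(k + 1))
          return W

      response = lambda t: np.sqrt(t)          # illustrative influence function
      print(superposed_influx(1.0, [2.0, 0.5, -0.3], response))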

  3. Approximate matching of regular expressions.

    PubMed

    Myers, E W; Miller, W

    1989-01-01

    Given a sequence A and regular expression R, the approximate regular expression matching problem is to find a sequence matching R whose optimal alignment with A is the highest scoring of all such sequences. This paper develops an algorithm to solve the problem in time O(MN), where M and N are the lengths of A and R. Thus, the time requirement is asymptotically no worse than for the simpler problem of aligning two fixed sequences. Our method is superior to an earlier algorithm by Wagner and Seiferas in several ways. First, it treats real-valued costs, in addition to integer costs, with no loss of asymptotic efficiency. Second, it requires only O(N) space to deliver just the score of the best alignment. Finally, its structure permits implementation techniques that make it extremely fast in practice. We extend the method to accommodate gap penalties, as required for typical applications in molecular biology, and further refine it to search for substrings of A that strongly align with a sequence in R, as required for typical database searches. We also show how to deliver an optimal alignment between A and R in only O(N + log M) space using O(MN log M) time. Finally, an O(MN(M + N) + N^2 log N) time algorithm is presented for alignment scoring schemes where the cost of a gap is an arbitrary increasing function of its length.
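
    For reference, the simpler fixed-sequence baseline mentioned above: an O(MN)-time, O(N)-space dynamic program for the optimal global alignment score of two sequences (the scoring values are illustrative).

      def align_score(a, b, match=1, mismatch=-1, gap=-1):
          # Needleman-Wunsch-style global alignment score, keeping only the
          # previous row of the dynamic-programming table (O(N) space).
          prev = [j * gap for j in range(len(b) + 1)]
          for i in range(1, len(a) + 1):
              cur = [i * gap] + [0] * len(b)
              for j in range(1, len(b) + 1):
                  sub = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                  cur[j] = max(sub, prev[j] + gap, cur[j - 1] + gap)
              prev = cur
          return prev[-1]

      print(align_score("ACGTT", "AGTT"))   # 3: four matches, one gap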

  4. Modeling pattern in collections of parameters

    USGS Publications Warehouse

    Link, W.A.

    1999-01-01

    Wildlife management is increasingly guided by analyses of large and complex datasets. The description of such datasets often requires a large number of parameters, among which certain patterns might be discernible. For example, one may consider a long-term study producing estimates of annual survival rates; of interest is the question whether these rates have declined through time. Several statistical methods exist for examining pattern in collections of parameters. Here, I argue for the superiority of 'random effects models' in which parameters are regarded as random variables, with distributions governed by 'hyperparameters' describing the patterns of interest. Unfortunately, implementation of random effects models is sometimes difficult. Ultrastructural models, in which the postulated pattern is built into the parameter structure of the original data analysis, are approximations to random effects models. However, this approximation is not completely satisfactory: failure to account for natural variation among parameters can lead to overstatement of the evidence for pattern among parameters. I describe quasi-likelihood methods that can be used to improve the approximation of random effects models by ultrastructural models.

  5. A high-order staggered meshless method for elliptic problems

    DOE PAGES

    Trask, Nathaniel; Perego, Mauro; Bochev, Pavel Blagoveston

    2017-03-21

    Here, we present a new meshless method for scalar diffusion equations, which is motivated by their compatible discretizations on primal-dual grids. Unlike the latter, though, our approach is truly meshless because it only requires the graph of nearby-neighbor connectivity of the discretization points. This graph defines a local primal-dual grid complex with a virtual dual grid, in the sense that specification of the dual metric attributes is implicit in the method's construction. Our method combines a topological gradient operator on the local primal grid with a generalized moving least squares approximation of the divergence on the local dual grid. We show that the resulting approximation of the div-grad operator maintains polynomial reproduction to arbitrary orders and yields a meshless method which attains O(h^m) convergence in both the L^2 and H^1 norms, similar to mixed finite element methods. We demonstrate this convergence on curvilinear domains using manufactured solutions in two and three dimensions. Application of the new method to problems with discontinuous coefficients reveals solutions that are qualitatively similar to those of compatible mesh-based discretizations.

  6. Modified harmonic balance method for the solution of nonlinear jerk equations

    NASA Astrophysics Data System (ADS)

    Rahman, M. Saifur; Hasan, A. S. M. Z.

    2018-03-01

    In this paper, a second approximate solution of nonlinear jerk equations (third-order differential equations) is obtained by using a modified harmonic balance method. The method is simpler and easier to apply than the classical harmonic balance method because fewer nonlinear algebraic equations need to be solved. The results obtained from this method are compared with those obtained from other analytical methods available in the literature and with a numerical method. The solution shows good agreement with the numerical solution as well as with the analytical methods of the available literature.

  7. Helicopter rotor and engine sizing for preliminary performance estimation

    NASA Technical Reports Server (NTRS)

    Talbot, P. D.; Bowles, J. V.; Lee, H. C.

    1986-01-01

    Methods are presented for estimating some of the more fundamental design variables of single-rotor helicopters (tip speed, blade area, disk loading, and installed power) based on design requirements (speed, weight, fuselage drag, and design hover ceiling). The well-known constraints of advancing-blade compressibility and retreating-blade stall are incorporated into the estimation process, based on an empirical interpretation of rotor performance data from large-scale wind-tunnel tests. Engine performance data are presented and correlated with a simple model usable for preliminary design. When approximate results are required quickly, these methods may be more convenient to use and provide more insight than large digital computer programs.

  8. Using Stochastic Approximation Techniques to Efficiently Construct Confidence Intervals for Heritability.

    PubMed

    Schweiger, Regev; Fisher, Eyal; Rahmani, Elior; Shenhav, Liat; Rosset, Saharon; Halperin, Eran

    2018-06-22

    Estimation of heritability is an important task in genetics. The use of linear mixed models (LMMs) to determine narrow-sense single-nucleotide polymorphism (SNP) heritability and related quantities has received much recent attention, due to its ability to account for variants with small effect sizes. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. The common way to report the uncertainty in REML estimation uses standard errors (SEs), which rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals (CIs). In addition, for larger data sets (e.g., tens of thousands of individuals), the construction of SEs itself may require considerable time, as it requires expensive matrix inversions and multiplications. Here, we present FIESTA (Fast confidence IntErvals using STochastic Approximation), a method for constructing accurate CIs. FIESTA is based on parametric bootstrap sampling and therefore avoids unjustified assumptions on the distribution of the heritability estimator. FIESTA uses stochastic approximation techniques, which accelerate the construction of CIs by several orders of magnitude compared with previous approaches as well as with the analytical approximation used by SEs. FIESTA builds accurate CIs rapidly, requiring only several seconds for data sets of tens of thousands of individuals, making FIESTA a very fast solution to the problem of building accurate CIs for heritability for all data set sizes.
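
    A minimal sketch of the plain parametric-bootstrap CI that FIESTA accelerates (FIESTA's stochastic-approximation scheme is considerably more sophisticated than this percentile version; the toy variance example and names are illustrative):

      import numpy as np

      def parametric_bootstrap_ci(estimate, simulate, estimator, n_boot, alpha, rng):
          # Resimulate data at the point estimate, re-estimate on each
          # replicate, and take empirical quantiles; no asymptotic-normality
          # assumption on the estimator is needed.
          reps = np.array([estimator(simulate(estimate, rng)) for _ in range(n_boot)])
          return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

      # Toy example: CI for a variance-like parameter
      rng = np.random.default_rng(6)
      data = rng.normal(0.0, 1.5, size=200)
      sigma2_hat = data.var(ddof=1)
      ci = parametric_bootstrap_ci(sigma2_hat,
                                   lambda s2, r: r.normal(0.0, np.sqrt(s2), size=200),
                                   lambda y: y.var(ddof=1), 2000, 0.05, rng)
      print(sigma2_hat, ci)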

  9. The dexmedetomidine concentration required after remifentanil anesthesia is three-fold higher than that after fentanyl anesthesia or that for general sedation in the ICU

    PubMed Central

    Kunisawa, Takayuki; Fujimoto, Kazuhiro; Kurosawa, Atsushi; Nagashima, Michio; Matsui, Koji; Hayashi, Dai; Yamamoto, Kunihiko; Goto, Yuya; Akutsu, Hiroaki; Iwasaki, Hiroshi

    2014-01-01

    Purpose The general dexmedetomidine (DEX) concentration required for sedation of intensive care unit patients is considered to be approximately 0.7 ng/mL. However, higher DEX concentrations are considered to be required for sedation and/or pain management after major surgery using remifentanil. We determined the DEX concentration required after major surgery by using a target-controlled infusion (TCI) system for DEX. Methods Fourteen patients undergoing surgery for abdominal aortic aneurysms (AAA) were randomly, double-blindly assigned to two groups and underwent fentanyl- or remifentanil-based anesthetic management. DEX TCI was started at the time of closing the peritoneum and continued for 12 hours after stopping propofol administration (M0); DEX TCI was adjusted according to the sedation score and complaints of pain. The doses and concentrations of all anesthetics and postoperative conditions were investigated. Results Throughout the observation period, the predicted plasma concentration of DEX in the fentanyl group was stable at approximately 0.7 ng/mL. In contrast, the predicted plasma concentration of DEX in the remifentanil group rapidly increased and stabilized at approximately 2 ng/mL. The actual DEX concentration at 540 minutes after M0 showed a similar trend (0.54±0.14 [fentanyl] versus 1.57±0.39 ng/mL [remifentanil]). In the remifentanil group, the dopamine dose required and the duration of intubation decreased, and urine output increased; however, no other outcomes improved. Conclusion The DEX concentration required after AAA surgery with remifentanil was three-fold higher than that required after AAA surgery with fentanyl or the conventional DEX concentration for sedation. High DEX concentration after remifentanil affords some benefits in anesthetic management. PMID:25328395

  10. Aeroservoelastic modeling and applications using minimum-state approximations of the unsteady aerodynamics

    NASA Technical Reports Server (NTRS)

    Tiffany, Sherwood H.; Karpel, Mordechay

    1989-01-01

    Various control analysis, design, and simulation techniques for aeroelastic applications require the equations of motion to be cast in a linear time-invariant state-space form. Unsteady aerodynamic forces have to be approximated as rational functions of the Laplace variable in order to put them in this framework. The minimum-state method minimizes the number of denominator roots, and hence the number of augmenting states, in the rational approximation. Results are shown of applying various approximation enhancements (including optimization, frequency dependent weighting of the tabular data, and constraint selection) with the minimum-state formulation to the active flexible wing wind-tunnel model. The results demonstrate that good models can be developed which have an order of magnitude fewer augmenting aerodynamic equations than traditional approaches. This reduction facilitates the design of lower order control systems, analysis of control system performance, and near real-time simulation of aeroservoelastic phenomena.
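
    The flavor of such rational approximations can be sketched with the simpler Roger form, in which the lag roots are preassigned and the coefficient matrices follow from linear least squares; the minimum-state formulation of the paper is a more compact variant of the same idea. The tabulated data and lag roots below are synthetic placeholders.

      import numpy as np

      ks = np.linspace(0.05, 1.0, 20)              # reduced frequencies of the tabular data
      Q_tab = (1.0 / (1j * ks + 0.3)) + 0.5j * ks  # synthetic "tabulated" aero data (assumed)

      lags = np.array([0.2, 0.6])                  # preassigned aerodynamic lag roots b_j
      s = 1j * ks
      # Roger form: Q(ik) ~ A0 + A1*(ik) + A2*(ik)^2 + sum_j Aj*(ik)/(ik + b_j).
      cols = [np.ones_like(s), s, s**2] + [s / (s + b) for b in lags]
      M = np.column_stack(cols)

      # Solve the complex least-squares problem by stacking real and imaginary parts.
      A_big = np.vstack([M.real, M.imag])
      b_big = np.concatenate([Q_tab.real, Q_tab.imag])
      coef, *_ = np.linalg.lstsq(A_big, b_big, rcond=None)
      fit = M @ coef
      print("max fit error:", np.abs(fit - Q_tab).max())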

  11. 77 FR 60114 - Agency Information Collection Activities Under OMB Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-02

    ... approximately 100 entities on a daily basis. The recordkeeping requirement of section 22.5 is expected to apply to approximately 100 entities on an approximately annual basis. Based on experience with analogous... required by section 22.2(g) is expected to require about 100 hours annually per entity, for a total burden...

  12. Model-Free Adaptive Control for Unknown Nonlinear Zero-Sum Differential Game.

    PubMed

    Zhong, Xiangnan; He, Haibo; Wang, Ding; Ni, Zhen

    2018-05-01

    In this paper, we present a new model-free globalized dual heuristic dynamic programming (GDHP) approach for discrete-time nonlinear zero-sum game problems. First, an online learning algorithm is proposed based on the GDHP method to solve the Hamilton-Jacobi-Isaacs equation associated with the optimal regulation control problem. By shifting the definition of the performance index backward one step, the proposed method relaxes the requirement for knowledge of the system dynamics or for an identifier. Then, three neural networks are established to approximate the optimal saddle point feedback control law, the disturbance law, and the performance index, respectively. The explicit updating rules for these three neural networks are provided based on the data generated during the online learning along the system trajectories. The stability analysis in terms of the neural network approximation errors is discussed based on the Lyapunov approach. Finally, two simulation examples are provided to show the effectiveness of the proposed method.

  13. Supplement Analysis for the Transmission System Vegetation Management Program FEIS (DOE/EIS-0285/SA-108), Satsop-Aberdeen #2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tippetts, Greg P.

    2002-09-05

    Vegetation Management along the Satsop-Aberdeen #2 230kV transmission line corridor from structure 1/1 through structure 11/5. BPA proposes to remove unwanted vegetation along the right-of-way, access roads, and around tower structures along the subject transmission line corridors. Approximately 11 miles of right-of-way will be treated and approximately 0.8 miles of access roads will be cleared using selective and non-selective methods that include hand cutting, mowing, and herbicide treatments; tower sites will be treated using the same methods. Vegetation management is required for unimpeded operation and maintenance of the subject transmission line. See Section 1 of the attached checklist for a complete description of the proposal.

  14. Nonparametric estimation of stochastic differential equations with sparse Gaussian processes.

    PubMed

    García, Constantino A; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G

    2017-08-01

    The application of stochastic differential equations (SDEs) to the analysis of temporal data has attracted increasing attention, due to their ability to describe complex dynamics with physically interpretable equations. In this paper, we introduce a nonparametric method for estimating the drift and diffusion terms of SDEs from a densely observed discrete time series. The use of Gaussian processes as priors permits working directly in a function-space view, so that inference takes place directly in that space. To cope with the computational complexity that the use of Gaussian processes entails, a sparse Gaussian process approximation is provided. This approximation permits the efficient computation of predictions for the drift and diffusion terms by using a distribution over a small subset of pseudosamples. The proposed method has been validated using both simulated data and real data from economics and paleoclimatology. The application of the method to real data demonstrates its ability to capture the behavior of complex systems.
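
    The sparse idea can be sketched with a subset-of-regressors (Nystrom-type) predictor, in which predictions involve only a small set of pseudo-inputs; this is a generic stand-in for the paper's scheme, with an RBF kernel and toy data as assumptions.

      import numpy as np

      def rbf(a, b, ell=0.5):
          """Squared-exponential kernel between two 1-D point sets."""
          return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

      rng = np.random.default_rng(0)
      X = rng.uniform(-3.0, 3.0, 400)            # densely observed inputs
      y = np.sin(X) + 0.1 * rng.normal(size=X.size)
      Z = np.linspace(-3.0, 3.0, 15)             # m = 15 pseudo-inputs
      Xs = np.linspace(-3.0, 3.0, 5)             # test points

      sigma2 = 0.01                              # assumed observation noise variance
      Kuu, Kuf, Ksu = rbf(Z, Z), rbf(Z, X), rbf(Xs, Z)
      # Subset-of-regressors predictive mean:
      # mu = K_*u (K_uf K_fu + sigma^2 K_uu)^{-1} K_uf y
      mu = Ksu @ np.linalg.solve(Kuf @ Kuf.T + sigma2 * Kuu, Kuf @ y)
      print(np.c_[Xs, mu, np.sin(Xs)])           # prediction vs true function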

  15. Modular thermal analyzer routine, volume 1

    NASA Technical Reports Server (NTRS)

    Oren, J. A.; Phillips, M. A.; Williams, D. R.

    1972-01-01

    The Modular Thermal Analyzer Routine (MOTAR) is a general thermal analysis routine with strong capabilities for performing thermal analysis of systems containing flowing fluids, fluid system controls (valves, heat exchangers, etc.), life support systems, and thermal radiation situations. Its modular organization permits the analysis of a very wide range of thermal problems, from simple problems containing a few conduction nodes to those requiring complicated flow and radiation analysis, with each problem type being analyzed with peak computational efficiency and maximum ease of use. The organization and programming methods applied to MOTAR achieved a high degree of computer utilization efficiency in terms of the execution time and storage space required for a given problem. The computer time required to solve a given problem with MOTAR is approximately 40 to 50 percent of that required by the currently existing, widely used routines. The computer storage requirement for MOTAR is approximately 25 percent more than that of the most commonly used routines for the simplest problems, but the data storage techniques for the more complicated options should save a considerable amount of space.

  16. An Iterative Solver in the Presence and Absence of Multiplicity for Nonlinear Equations

    PubMed Central

    Özkum, Gülcan

    2013-01-01

    We develop a high-order fixed point type method to approximate a multiple root. By using three functional evaluations per full cycle, a new class of fourth-order methods for this purpose is suggested and established. The methods from the class require knowledge of the multiplicity. We also present a method in the absence of multiplicity for nonlinear equations. To demonstrate the efficiency of the obtained methods, we employ numerical comparisons alongside basins of attraction, comparing the methods in the complex plane according to their convergence speed and chaotic behavior. PMID:24453914
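
    For comparison, the classical way multiplicity enters such iterations is the modified Newton step, sketched below; this is not the authors' fourth-order class, but it shows why knowing the multiplicity m restores fast convergence at a multiple root.

      def modified_newton(f, fprime, x0, m, tol=1e-12, max_iter=50):
          """Modified Newton iteration x <- x - m*f(x)/f'(x) for a root of multiplicity m."""
          x = x0
          for _ in range(max_iter):
              step = m * f(x) / fprime(x)
              x -= step
              if abs(step) < tol:
                  break
          return x

      # Example: f(x) = (x - 2)^3 * (x + 1) has a root of multiplicity m = 3 at x = 2.
      f  = lambda x: (x - 2.0)**3 * (x + 1.0)
      fp = lambda x: 3.0 * (x - 2.0)**2 * (x + 1.0) + (x - 2.0)**3
      print(modified_newton(f, fp, x0=3.0, m=3))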

  17. A simplified flight-test method for determining aircraft takeoff performance that includes effects of pilot technique

    NASA Technical Reports Server (NTRS)

    Larson, T. J.; Schweikhard, W. G.

    1974-01-01

    A method for evaluating aircraft takeoff performance from brake release to air-phase height that requires fewer tests than conventionally required is evaluated with data for the XB-70 airplane. The method defines the effects of pilot technique on takeoff performance quantitatively, including the decrease in acceleration from drag due to lift. For a given takeoff weight and throttle setting, a single takeoff provides enough data to establish a standardizing relationship for the distance from brake release to any point where velocity is appropriate to rotation. The lower rotation rates penalized takeoff performance in terms of ground roll distance; the lowest observed rotation rate required a ground roll distance that was 19 percent longer than the highest. Rotations at the minimum rate also resulted in lift-off velocities that were approximately 5 knots lower than the highest rotation rate at any given lift-off distance.

  18. Using radiance predicted by the P3 approximation in a spherical geometry to predict tissue optical properties

    NASA Astrophysics Data System (ADS)

    Dickey, Dwayne J.; Moore, Ronald B.; Tulip, John

    2001-01-01

    For photodynamic therapy of solid tumors, such as prostatic carcinoma, to be effective, an accurate model to predict tissue parameters and light dose must be found. Presently, most analytical light dosimetry models are fluence based and are not clinically viable for tissue characterization. Other methods of predicting optical properties, such as Monte Carlo simulation, are accurate but far too time consuming for clinical application. However, radiance predicted by the P3 approximation, an analytical solution to the transport equation, may be a viable and accurate alternative. The P3 approximation accurately predicts optical parameters in intralipid/methylene blue based phantoms in a spherical geometry. The optical parameters furnished by the radiance, when introduced into the fluence predicted by both the P3 approximation and Grosjean theory, correlate well with experimental data. The P3 approximation also predicts the optical properties of prostate tissue, agreeing with documented optical parameters. The P3 approximation could be the clinical tool necessary to facilitate PDT of solid tumors because of the limited number of invasive measurements required and the speed with which accurate calculations can be performed.
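
    The lower-order cousin of this model is easy to state. The sketch below evaluates the P1 (diffusion) approximation for the fluence of an isotropic point source in an infinite homogeneous medium; the P3 radiance model of the paper is a higher-order refinement, and the optical properties used are merely tissue-like placeholder values.

      import numpy as np

      mua, musp = 0.03, 1.0                 # absorption, reduced scattering [1/mm] (assumed)
      P = 1.0                               # source power [W]

      D = 1.0 / (3.0 * (mua + musp))        # diffusion coefficient [mm]
      mu_eff = np.sqrt(mua / D)             # effective attenuation [1/mm]

      def fluence(r_mm):
          """Diffusion-theory fluence [W/mm^2] at distance r from the point source."""
          return P * np.exp(-mu_eff * r_mm) / (4.0 * np.pi * D * r_mm)

      for r in (1.0, 5.0, 10.0):
          print(f"r = {r:4.1f} mm  phi = {fluence(r):.3e}")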

  19. Approximation methods in relativistic eigenvalue perturbation theory

    NASA Astrophysics Data System (ADS)

    Noble, Jonathan Howard

    In this dissertation, three questions concerning approximation methods for the eigenvalues of quantum mechanical systems are investigated: (i) What is a pseudo-Hermitian Hamiltonian, and how can its eigenvalues be approximated via numerical calculations? This is a fairly broad topic, and the scope of the investigation is narrowed by focusing on a subgroup of pseudo-Hermitian operators, namely, PT-symmetric operators. Within a numerical approach, one projects a PT-symmetric Hamiltonian onto an appropriate basis, and uses a straightforward two-step algorithm to diagonalize the resulting matrix, leading to numerically approximated eigenvalues. (ii) Within an analytic ansatz, how can a relativistic Dirac Hamiltonian be decoupled into particle and antiparticle degrees of freedom, in appropriate kinematic limits? One possible answer is the Foldy-Wouthuysen transform; however, there are alternative methods which seem to have some advantages over the time-tested approach. One such method is investigated by applying both the traditional Foldy-Wouthuysen transform and the "chiral" Foldy-Wouthuysen transform to a number of Dirac Hamiltonians, including the central-field Hamiltonian for a gravitationally bound system, namely, the Dirac-(Einstein-)Schwarzschild Hamiltonian, which requires the formalism of general relativity. (iii) Are there pseudo-Hermitian variants of Dirac Hamiltonians that can be approximated using a decoupling transformation? The tachyonic Dirac Hamiltonian, which describes faster-than-light spin-1/2 particles, is gamma5-Hermitian, i.e., pseudo-Hermitian. Superluminal particles remain faster than light upon a Lorentz transformation, and hence the Foldy-Wouthuysen program is unsuited for this case. Thus, inspired by the Foldy-Wouthuysen program, a decoupling transform in the ultrarelativistic limit is proposed, which is applicable to both sub- and superluminal particles.

  20. Computational aspects of sensitivity calculations in linear transient structural analysis. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1990-01-01

    A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of the number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed-mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity, and for small numbers of modes the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
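
    The first, overall finite difference technique is simple enough to sketch directly: repeat the analysis at perturbed designs and difference the responses. The stand-in "analysis" below is a toy function; the paper's point is that in reduced-basis transient analyses the approximation vectors must be handled carefully when doing this.

      import numpy as np

      def analysis(p):
          """Stand-in structural analysis: returns a response vector for design p."""
          return np.array([p[0]**2 + p[1], np.sin(p[0]) * p[1]])

      def central_diff_sensitivity(p, h=1e-6):
          """Jacobian of the analysis responses with respect to the design variables."""
          p = np.asarray(p, dtype=float)
          J = np.zeros((analysis(p).size, p.size))
          for j in range(p.size):
              dp = np.zeros_like(p)
              dp[j] = h
              J[:, j] = (analysis(p + dp) - analysis(p - dp)) / (2.0 * h)
          return J

      print(central_diff_sensitivity([1.0, 2.0]))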

  1. Development and application of accurate analytical models for single active electron potentials

    NASA Astrophysics Data System (ADS)

    Miller, Michelle; Jaron-Becker, Agnieszka; Becker, Andreas

    2015-05-01

    The single active electron (SAE) approximation is a theoretical model frequently employed to study scenarios in which inner-shell electrons may productively be treated as frozen spectators to a physical process of interest, and accurate analytical approximations for these potentials are sought as a useful simulation tool. Density functional theory is often used to construct a SAE potential, requiring that a further approximation for the exchange-correlation functional be made. In this study, we employ the Krieger, Li, and Iafrate (KLI) modification to the optimized-effective-potential (OEP) method to reduce the complexity of the problem to the straightforward solution of a system of linear equations, through simple arguments regarding the behavior of the exchange-correlation potential in regions where a single orbital dominates. We employ this method for the solution of atomic and molecular potentials, and use the resultant curves to devise a systematic construction of highly accurate and useful analytical approximations for several systems. Supported by the U.S. Department of Energy (Grant No. DE-FG02-09ER16103), and the U.S. National Science Foundation (Graduate Research Fellowship, Grants No. PHY-1125844 and No. PHY-1068706).

  2. Body surface detection method for photoacoustic image data using cloth-simulation technique

    NASA Astrophysics Data System (ADS)

    Sekiguchi, H.; Yoshikawa, A.; Matsumoto, Y.; Asao, Y.; Yagi, T.; Togashi, K.; Toi, M.

    2018-02-01

    Photoacoustic tomography (PAT) is a novel modality that can visualize blood vessels without contrast agents. It clearly shows blood vessels near the body surface; however, these vessels obstruct the observation of deep blood vessels. Since the depth range of each vessel is determined by its distance from the body surface, vessels can be separated if the position of the skin is known. However, skin tissue, which does not contain hemoglobin, does not appear in PAT results; therefore, manual estimation is required. As this task is very labor-intensive, its automation is highly desirable. We therefore developed a method to estimate the body surface using the cloth-simulation technique, a method commonly used to create computer graphics (CG) animations that has not yet been employed for medical image processing. In cloth simulations, the virtual cloth is represented by a two-dimensional array of mass nodes connected to each other by springs. Once the cloth is released from a position away from the body, each node begins to move downwards under the effects of gravity, spring forces, and other forces; some of the nodes hit the superficial vessels and stop. The cloth position in the stationary state represents the body surface. The body surface estimation, which required approximately 1 h with the manual method, is automated and takes only approximately 10 s with the proposed method. The proposed method could facilitate the practical use of PAT.
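
    A one-dimensional caricature of the simulation, under assumed parameters, is sketched below: a chain of unit masses joined by springs drops under gravity onto a mock height field and nodes freeze where they land, so the settled chain traces the surface. The real method uses a two-dimensional node array and actual PAT vessel data.

      import numpy as np

      n = 50
      x = np.linspace(0.0, 10.0, n)             # horizontal node positions (fixed)
      z = np.full(n, 5.0)                       # node heights, released from above
      v = np.zeros(n)
      surface = 1.0 + 0.5 * np.sin(1.3 * x)     # mock "vessel" height field (assumed)
      settled = np.zeros(n, dtype=bool)
      k, g, damp, dt = 40.0, 9.8, 2.0, 0.005    # spring, gravity, damping, time step

      for _ in range(4000):
          f = np.zeros(n)
          dz = np.diff(z)                       # vertical stretch to the neighbours
          f[:-1] += k * dz                      # pull toward the right neighbour
          f[1:]  -= k * dz                      # pull toward the left neighbour
          f += -g - damp * v                    # gravity and damping (unit masses)
          v = np.where(settled, 0.0, v + dt * f)
          z = np.where(settled, z, z + dt * v)
          hit = z <= surface                    # node has reached the mock surface
          z = np.where(hit, surface, z)
          settled |= hit

      print("estimated surface height at mid-span:", z[n // 2])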

  3. Easy Implementation of Internet-Based Whiteboard Physics Tutorials

    ERIC Educational Resources Information Center

    Robinson, Andrew

    2008-01-01

    The requirement for a method of capturing problem solving on a whiteboard for later replay stems from my teaching load, which includes two classes of first-year university general physics, each with relatively large class sizes of approximately 80-100 students. Most university-level teachers value one-to-one interaction with the students and find…

  4. High-yield production of graphene by liquid-phase exfoliation of graphite.

    PubMed

    Hernandez, Yenny; Nicolosi, Valeria; Lotya, Mustafa; Blighe, Fiona M; Sun, Zhenyu; De, Sukanta; McGovern, I T; Holland, Brendan; Byrne, Michele; Gun'Ko, Yurii K; Boland, John J; Niraj, Peter; Duesberg, Georg; Krishnamurthy, Satheesh; Goodhue, Robbie; Hutchison, John; Scardaci, Vittorio; Ferrari, Andrea C; Coleman, Jonathan N

    2008-09-01

    Fully exploiting the properties of graphene will require a method for the mass production of this remarkable material. Two main routes are possible: large-scale growth or large-scale exfoliation. Here, we demonstrate graphene dispersions with concentrations up to approximately 0.01 mg ml(-1), produced by dispersion and exfoliation of graphite in organic solvents such as N-methyl-pyrrolidone. This is possible because the energy required to exfoliate graphene is balanced by the solvent-graphene interaction for solvents whose surface energies match that of graphene. We confirm the presence of individual graphene sheets by Raman spectroscopy, transmission electron microscopy and electron diffraction. Our method results in a monolayer yield of approximately 1 wt%, which could potentially be improved to 7-12 wt% with further processing. The absence of defects or oxides is confirmed by X-ray photoelectron, infrared and Raman spectroscopies. We are able to produce semi-transparent conducting films and conducting composites. Solution processing of graphene opens up a range of potential large-area applications, from device and sensor fabrication to liquid-phase chemistry.

  5. Optimal sixteenth order convergent method based on quasi-Hermite interpolation for computing roots.

    PubMed

    Zafar, Fiza; Hussain, Nawab; Fatimah, Zirwah; Kharal, Athar

    2014-01-01

    We have given a four-step, multipoint iterative method without memory for solving nonlinear equations. The method is constructed by using quasi-Hermite interpolation and has order of convergence sixteen. As this method requires four function evaluations and one derivative evaluation at each step, it is optimal in the sense of the Kung and Traub conjecture. Comparisons are given with some other newly developed sixteenth-order methods. Interval Newton's method is also used to find sufficiently accurate initial approximations. Some figures show the enclosure of finitely many zeros of nonlinear equations in an interval. Basins of attraction show the effectiveness of the method.
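
    An optimal multipoint method in the same Kung-Traub sense, small enough to sketch here, is Ostrowski's classical fourth-order scheme (order four from three evaluations); it is shown below only to illustrate the notion of optimality, not the authors' sixteenth-order construction.

      def ostrowski(f, fprime, x, tol=1e-14, max_iter=30):
          """Ostrowski's fourth-order method: one Newton predictor plus a corrector."""
          for _ in range(max_iter):
              fx = f(x)
              y = x - fx / fprime(x)                          # Newton predictor
              fy = f(y)
              x_new = y - fy * fx / ((fx - 2.0 * fy) * fprime(x))
              if abs(x_new - x) < tol:
                  return x_new
              x = x_new
          return x

      # Cube root of 2 from the starting guess x = 1.
      print(ostrowski(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, 1.0))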

  6. Wigner phase space distribution via classical adiabatic switching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, Amartya; Makri, Nancy; Department of Physics, University of Illinois, 1110 W. Green Street, Urbana, Illinois 61801

    2015-09-21

    Evaluation of the Wigner phase space density for systems of many degrees of freedom presents an extremely demanding task because of the oscillatory nature of the Fourier-type integral. We propose a simple and efficient, approximate procedure for generating the Wigner distribution that avoids the computational difficulties associated with the Wigner transform. Starting from a suitable zeroth-order Hamiltonian, for which the Wigner density is available (either analytically or numerically), the phase space distribution is propagated in time via classical trajectories, while the perturbation is gradually switched on. According to the classical adiabatic theorem, each trajectory maintains a constant action if the perturbation is switched on infinitely slowly. We show that the adiabatic switching procedure produces the exact Wigner density for harmonic oscillator eigenstates and also for eigenstates of anharmonic Hamiltonians within the Wentzel-Kramers-Brillouin (WKB) approximation. We generalize the approach to finite temperature by introducing a density rescaling factor that depends on the energy of each trajectory. Time-dependent properties are obtained simply by continuing the integration of each trajectory under the full target Hamiltonian. Further, by construction, the generated approximate Wigner distribution is invariant under classical propagation, and thus, thermodynamic properties are strictly preserved. Numerical tests on one-dimensional and dissipative systems indicate that the method produces results in very good agreement with those obtained by full quantum mechanical methods over a wide temperature range. The method is simple and efficient, as it requires no input besides the force fields required for classical trajectory integration, and is ideal for use in quasiclassical trajectory calculations.

  7. Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator

    NASA Astrophysics Data System (ADS)

    Li, Qianxiao; Dietrich, Felix; Bollt, Erik M.; Kevrekidis, Ioannis G.

    2017-10-01

    Numerical approximation methods for the Koopman operator have advanced considerably in the last few years. In particular, data-driven approaches such as dynamic mode decomposition (DMD) and its generalization, the extended-DMD (EDMD), are becoming increasingly popular in practical applications. The EDMD improves upon the classical DMD by the inclusion of a flexible choice of dictionary of observables which spans a finite dimensional subspace on which the Koopman operator can be approximated. This enhances the accuracy of the solution reconstruction and broadens the applicability of the Koopman formalism. Although the convergence of the EDMD has been established, applying the method in practice requires a careful choice of the observables to improve convergence with just a finite number of terms. This is especially difficult for high dimensional and highly nonlinear systems. In this paper, we employ ideas from machine learning to improve upon the EDMD method. We develop an iterative approximation algorithm which couples the EDMD with a trainable dictionary represented by an artificial neural network. Using the Duffing oscillator and the Kuramoto-Sivashinsky partial differential equation as examples, we show that our algorithm can effectively and efficiently adapt the trainable dictionary to the problem at hand to achieve good reconstruction accuracy without the need to choose a fixed dictionary a priori. Furthermore, to obtain a given accuracy, we require fewer dictionary terms than EDMD with fixed dictionaries. This alleviates an important shortcoming of the EDMD algorithm and enhances the applicability of the Koopman framework to practical problems.
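
    The fixed-dictionary EDMD baseline that the trainable dictionary replaces can be sketched compactly. The example below uses monomial observables on a toy linear map, for which the dictionary happens to be Koopman-invariant, so the recovered eigenvalues are exact.

      import numpy as np

      rng = np.random.default_rng(1)
      X = rng.uniform(-1.0, 1.0, size=400)       # snapshots x_m
      Y = 0.9 * X                                # successor snapshots x_{m+1}

      def psi(x):
          """Fixed dictionary of observables: 1, x, x^2, x^3."""
          return np.stack([np.ones_like(x), x, x**2, x**3], axis=1)

      PX, PY = psi(X), psi(Y)
      G = PX.T @ PX / len(X)                     # Gram matrix
      A = PX.T @ PY / len(X)                     # cross matrix
      K = np.linalg.pinv(G) @ A                  # finite-dimensional Koopman estimate

      # For x' = 0.9*x the monomials are Koopman-invariant, so the eigenvalues
      # recover the powers 0.729, 0.81, 0.9, and 1 of the map's multiplier.
      print("Koopman eigenvalues:", np.sort(np.linalg.eigvals(K).real))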

  8. Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator.

    PubMed

    Li, Qianxiao; Dietrich, Felix; Bollt, Erik M; Kevrekidis, Ioannis G

    2017-10-01

    Numerical approximation methods for the Koopman operator have advanced considerably in the last few years. In particular, data-driven approaches such as dynamic mode decomposition (DMD) and its generalization, the extended-DMD (EDMD), are becoming increasingly popular in practical applications. The EDMD improves upon the classical DMD by the inclusion of a flexible choice of dictionary of observables which spans a finite dimensional subspace on which the Koopman operator can be approximated. This enhances the accuracy of the solution reconstruction and broadens the applicability of the Koopman formalism. Although the convergence of the EDMD has been established, applying the method in practice requires a careful choice of the observables to improve convergence with just a finite number of terms. This is especially difficult for high dimensional and highly nonlinear systems. In this paper, we employ ideas from machine learning to improve upon the EDMD method. We develop an iterative approximation algorithm which couples the EDMD with a trainable dictionary represented by an artificial neural network. Using the Duffing oscillator and the Kuramoto-Sivashinsky partial differential equation as examples, we show that our algorithm can effectively and efficiently adapt the trainable dictionary to the problem at hand to achieve good reconstruction accuracy without the need to choose a fixed dictionary a priori. Furthermore, to obtain a given accuracy, we require fewer dictionary terms than EDMD with fixed dictionaries. This alleviates an important shortcoming of the EDMD algorithm and enhances the applicability of the Koopman framework to practical problems.

  9. Accurate and Efficient Parallel Implementation of an Effective Linear-Scaling Direct Random Phase Approximation Method.

    PubMed

    Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian

    2018-05-08

    An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations ( Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016 , 144 , 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017 , 13 , 1647 - 1655 ) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.

  10. An efficient and flexible Abel-inversion method for noisy data

    NASA Astrophysics Data System (ADS)

    Antokhin, Igor I.

    2016-12-01

    We propose an efficient and flexible method for solving the Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization in itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.
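
    The regularization step can be sketched in its textbook form. Below, the Abel operator is discretized exactly for a piecewise-constant profile (the cell integrals have closed form), and a Tikhonov solution with a second-difference penalty is computed; the compact-set constraints of the paper's full method are not reproduced, and the profile, noise level, and regularization weight are illustrative.

      import numpy as np

      n, R = 80, 1.0
      edges = np.linspace(0.0, R, n + 1)
      y = edges[:-1]                                   # observation radii

      # Forward model g(y) = 2*Int_y^R f(r)*r dr / sqrt(r^2 - y^2); for
      # piecewise-constant f, Int r dr/sqrt(r^2 - y^2) = sqrt(r^2 - y^2) per cell.
      A = np.zeros((n, n))
      for i, yi in enumerate(y):
          lo = np.maximum(edges[:-1], yi)
          hi = np.maximum(edges[1:], yi)
          A[i] = 2.0 * (np.sqrt(hi**2 - yi**2) - np.sqrt(lo**2 - yi**2))

      mid = 0.5 * (edges[:-1] + edges[1:])
      f_true = np.exp(-(mid / 0.3)**2)                 # assumed radial profile
      g = A @ f_true + 1e-3 * np.random.default_rng(0).normal(size=n)  # noisy data

      # Tikhonov: minimize ||A f - g||^2 + lam*||L f||^2, L = second difference.
      L = np.diff(np.eye(n), 2, axis=0)
      lam = 1e-4
      f_rec = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ g)
      print("max reconstruction error:", np.abs(f_rec - f_true).max())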

  11. Airfoil Shape Optimization based on Surrogate Model

    NASA Astrophysics Data System (ADS)

    Mukesh, R.; Lingadurai, K.; Selvakumar, U.

    2018-02-01

    Engineering design problems always require an enormous amount of real-time experiments and computational simulations in order to assess and ensure the design objectives of the problems subject to various constraints. In most cases, the computational resources and time required per simulation are large. In certain cases, like sensitivity analysis and design optimisation, where thousands or millions of simulations have to be carried out, this becomes prohibitively burdensome for designers. Nowadays approximation models, otherwise called surrogate models (SM), are widely employed in order to reduce the requirement of computational resources and time in analysing various engineering systems. Various approaches such as Kriging, neural networks, polynomials, Gaussian processes, etc. are used to construct the approximation models. The primary intention of this work is to employ the k-fold cross validation approach to study and evaluate the influence of various theoretical variogram models on the accuracy of the surrogate model construction. Ordinary Kriging and design of experiments (DOE) approaches are used to construct the SMs by approximating panel and viscous solution algorithms which are primarily used to solve the flow around airfoils and aircraft wings. The method of coupling the SMs with a suitable optimisation scheme to carry out an aerodynamic design optimisation process for airfoil shapes is also discussed.
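
    The basic workflow can be sketched with off-the-shelf tools: fit a Gaussian-process (Kriging-style) surrogate on a DOE sample and score it by k-fold cross validation. The expensive panel/viscous solver is replaced by a cheap analytic stand-in, and the Matern kernel plays the role of a chosen variogram model.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import Matern
      from sklearn.model_selection import KFold, cross_val_score

      rng = np.random.default_rng(0)
      X = rng.uniform(-2.0, 2.0, size=(60, 2))          # DOE sample of design variables
      y = np.sin(X[:, 0]) * np.cos(X[:, 1])             # stand-in for the solver output

      gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
      scores = cross_val_score(gp, X, y,
                               cv=KFold(n_splits=5, shuffle=True, random_state=0),
                               scoring="r2")
      print("5-fold R^2:", scores.round(3))
      gp.fit(X, y)                                      # final surrogate on all data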

  12. Comparative assessment of water use and environmental implications of coal slurry pipelines

    USGS Publications Warehouse

    Palmer, Richard N.; James II, I. C.; Hirsch, R.M.

    1977-01-01

    Together with other studies conducted by the U.S. Geological Survey of water use in the conversion and transportation of the West's coal, an analysis of water use and environmental implications of coal-slurry pipeline transport is presented. Simulations of a hypothetical slurry pipeline of 1000-mile length transporting 12.5 million tons per year indicate that pipeline costs and energy requirements are quite sensitive to the coal-to-water ratio. For realistic water prices, the optimal ratio will not vary far from the 50/50 ratio by weight. In comparison to other methods of energy conversion and transport, coal-slurry pipelines utilize about one-third the amount of water required for coal gasification, and about one-fifth the amount required for on-site electrical generation. An analysis of net energy output from operating alternative energy transportation systems for the assumed conditions indicates that both slurry pipeline and rail shipment require approximately 4.5 percent of the potential electrical energy output of the coal transported, and high-voltage, direct-current transmission requires approximately 6.5 percent. The environmental impacts of the different transport options are so substantially different that a common basis for comparison does not exist. (Woodard-USGS)

  13. Spline methods for approximating quantile functions and generating random samples

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Matthews, C. G.

    1985-01-01

    Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
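
    The sampling use of such a representation can be sketched as follows: fit a monotone spline to the empirical quantile function, then evaluate it at uniform variates (inverse-transform sampling). The monotone PCHIP interpolant below is a stand-in for the paper's B-spline and rational spline fits.

      import numpy as np
      from scipy.interpolate import PchipInterpolator

      rng = np.random.default_rng(0)
      data = rng.gamma(shape=2.0, scale=1.0, size=1000)   # a skewed sample

      xs = np.sort(data)
      p = (np.arange(1, xs.size + 1) - 0.5) / xs.size     # plotting positions
      qf = PchipInterpolator(p, xs)                       # monotone spline quantile function

      u = rng.uniform(p[0], p[-1], size=5000)             # stay inside the fitted range
      new_sample = qf(u)                                  # inverse-transform sampling
      print("original mean %.3f vs generated mean %.3f" % (data.mean(), new_sample.mean()))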

  14. Kurtosis Approach for Nonlinear Blind Source Separation

    NASA Technical Reports Server (NTRS)

    Duong, Vu A.; Stubberud, Allen R.

    2005-01-01

    In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximated polynomials are estimated by the gradient descent method conditional on the higher order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation.

  15. Time-stable boundary conditions for finite-difference schemes solving hyperbolic systems: Methodology and application to high-order compact schemes

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Gottlieb, David; Abarbanel, Saul

    1993-01-01

    We present a systematic method for constructing boundary conditions (numerical and physical) of the required accuracy, for compact (Pade-like) high-order finite-difference schemes for hyperbolic systems. First, a proper summation-by-parts formula is found for the approximate derivative. A 'simultaneous approximation term' (SAT) is then introduced to treat the boundary conditions. This procedure leads to time-stable schemes even in the system case. An explicit construction of the fourth-order compact case is given. Numerical studies are presented to verify the efficacy of the approach.
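
    A minimal instance of the SBP-SAT recipe, on the simplest possible model problem, is sketched below: a second-order summation-by-parts operator for the advection equation with the inflow condition imposed weakly by a penalty term. The paper's compact fourth-order construction is more elaborate but follows the same pattern.

      import numpy as np
      from scipy.integrate import solve_ivp

      n, a = 101, 1.0
      h = 1.0 / (n - 1)
      x = np.linspace(0.0, 1.0, n)

      # Second-order SBP pair: D = H^{-1} Q with Q + Q^T = diag(-1, 0, ..., 0, 1).
      Hinv = np.full(n, 1.0 / h)
      Hinv[0] = Hinv[-1] = 2.0 / h
      Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
      Q[0, 0], Q[-1, -1] = -0.5, 0.5
      D = Hinv[:, None] * Q

      g = lambda t: np.sin(-2.0 * np.pi * a * t)     # inflow data for u = sin(2*pi*(x - a*t))
      tau = 1.0                                      # SAT penalty strength (tau >= 1/2)

      def rhs(t, u):
          du = -a * (D @ u)
          du[0] -= tau * Hinv[0] * a * (u[0] - g(t)) # weak (SAT) inflow condition
          return du

      sol = solve_ivp(rhs, [0.0, 1.0], np.sin(2.0 * np.pi * x), rtol=1e-8, atol=1e-10)
      err = np.abs(sol.y[:, -1] - np.sin(2.0 * np.pi * (x - a))).max()
      print("max error at t = 1:", err)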

  16. Demonstration of wetland vegetation mapping in Florida from computer-processed satellite and aircraft multispectral scanner data

    NASA Technical Reports Server (NTRS)

    Butera, M. K.

    1979-01-01

    The success of remotely mapping wetland vegetation of the southwestern coast of Florida is examined. A computerized technique to process aircraft and LANDSAT multispectral scanner data into vegetation classification maps was used. The cost effectiveness of this mapping technique was evaluated in terms of user requirements, accuracy, and cost. Results indicate that mangrove communities are classified most cost effectively by the LANDSAT technique, with an accuracy of approximately 87 percent and a cost of approximately 3 cents per hectare, compared to $46.50 per hectare for conventional ground survey methods.

  17. Neural networks for function approximation in nonlinear control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis J.; Stengel, Robert F.

    1990-01-01

    Two neural network architectures are compared with a classical spline interpolation technique for the approximation of functions useful in a nonlinear control system. A standard back-propagation feedforward neural network and a cerebellar model articulation controller (CMAC) neural network are presented, and their results are compared with a B-spline interpolation procedure that is updated using recursive least-squares parameter identification. Each method is able to accurately represent a one-dimensional test function. Tradeoffs between size requirements, speed of operation, and speed of learning indicate that neural networks may be practical for identification and adaptation in a nonlinear control environment.

  18. Relaxation and approximate factorization methods for the unsteady full potential equation

    NASA Technical Reports Server (NTRS)

    Shankar, V.; Ide, H.; Gorski, J.

    1984-01-01

    The unsteady form of the full potential equation is solved in conservation form, using implicit methods based on approximate factorization and relaxation schemes. A local time linearization for density is introduced to enable solution to the equation in terms of phi, the velocity potential. A novel flux-biasing technique is applied to generate proper forms of the artificial viscosity, to treat hyperbolic regions with shocks and sonic lines present. The wake is properly modeled by accounting not only for jumps in phi, but also for jumps in higher derivatives of phi obtained from requirements of density continuity. The far field is modeled using the Riemann invariants to simulate nonreflecting boundary conditions. Results are presented for flows over airfoils, cylinders, and spheres. Comparisons are made with available Euler and full potential results.

  19. Shape functions for velocity interpolation in general hexahedral cells

    USGS Publications Warehouse

    Naff, R.L.; Russell, T.F.; Wilson, J.D.

    2002-01-01

    Numerical methods for grids with irregular cells require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element (CVMFE) methods, vector shape functions approximate velocities and vector test functions enforce a discrete form of Darcy's law. In this paper, a new vector shape function is developed for use with irregular, hexahedral cells (trilinear images of cubes). It interpolates velocities and fluxes quadratically, because as shown here, the usual Piola-transformed shape functions, which interpolate linearly, cannot match uniform flow on general hexahedral cells. Truncation-error estimates for the shape function are demonstrated. CVMFE simulations of uniform and non-uniform flow with irregular meshes show first- and second-order convergence of fluxes in the L2 norm in the presence and absence of singularities, respectively.

  20. History matching by spline approximation and regularization in single-phase areal reservoirs

    NASA Technical Reports Server (NTRS)

    Lee, T. Y.; Kravaris, C.; Seinfeld, J.

    1986-01-01

    An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasioptimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.

  1. Euler/Navier-Stokes calculations of transonic flow past fixed- and rotary-wing aircraft configurations

    NASA Technical Reports Server (NTRS)

    Deese, J. E.; Agarwal, R. K.

    1989-01-01

    Computational fluid dynamics has an increasingly important role in the design and analysis of aircraft as computer hardware becomes faster and algorithms become more efficient. Progress is being made in two directions: more complex and realistic configurations are being treated and algorithms based on higher approximations to the complete Navier-Stokes equations are being developed. The literature indicates that linear panel methods can model detailed, realistic aircraft geometries in flow regimes where this approximation is valid. As algorithms including higher approximations to the Navier-Stokes equations are developed, computer resource requirements increase rapidly. Generation of suitable grids become more difficult and the number of grid points required to resolve flow features of interest increases. Recently, the development of large vector computers has enabled researchers to attempt more complex geometries with Euler and Navier-Stokes algorithms. The results of calculations for transonic flow about a typical transport and fighter wing-body configuration using thin layer Navier-Stokes equations are described along with flow about helicopter rotor blades using both Euler/Navier-Stokes equations.

  2. Families of FPGA-Based Accelerators for Approximate String Matching

    PubMed Central

    Van Court, Tom; Herbordt, Martin C.

    2011-01-01

    Dynamic programming for approximate string matching is a large family of different algorithms, which vary significantly in purpose, complexity, and hardware utilization. Many implementations have reported impressive speed-ups, but have typically been point solutions – highly specialized and addressing only one or a few of the many possible options. The problem to be solved is creating a hardware description that implements a broad range of behavioral options without losing efficiency due to feature bloat. We report a set of three component types that address different parts of the approximate string matching problem. This allows each application to choose the feature set required, then make maximum use of the FPGA fabric according to that application’s specific resource requirements. Multiple, interchangeable implementations are available for each component type. We show that these methods allow the efficient generation of a large, if not complete, family of accelerators for this application. This flexibility was obtained while retaining high performance: We have evaluated a sample against serial reference codes and found speed-ups of from 150× to 400× over a high-end PC. PMID:21603598
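
    The serial reference computation that such accelerators speed up is the classic dynamic-programming recurrence, sketched below for plain edit distance; scoring schemes and traceback are among the options the component families make configurable.

      def edit_distance(s, t):
          """Row-by-row dynamic-programming edit (Levenshtein) distance."""
          prev = list(range(len(t) + 1))          # DP row for the empty prefix of s
          for i, cs in enumerate(s, start=1):
              curr = [i]
              for j, ct in enumerate(t, start=1):
                  curr.append(min(prev[j] + 1,                 # deletion
                                  curr[j - 1] + 1,             # insertion
                                  prev[j - 1] + (cs != ct)))   # substitution/match
              prev = curr
          return prev[-1]

      print(edit_distance("kitten", "sitting"))   # -> 3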

  3. Measuring (subglacial) bedform orientation, length, and longitudinal asymmetry - Method assessment.

    PubMed

    Jorge, Marco G; Brennand, Tracy A

    2017-01-01

    Geospatial analysis software provides a range of tools that can be used to measure landform morphometry. Often, a metric can be computed with different techniques that may give different results. This study is an assessment of 5 different methods for measuring longitudinal, or streamlined, subglacial bedform morphometry: orientation, length and longitudinal asymmetry, all of which require defining a longitudinal axis. The methods use the standard deviational ellipse (not previously applied in this context), the longest straight line fitting inside the bedform footprint (2 approaches), the minimum-size footprint-bounding rectangle, and Euler's approximation. We assess how well these methods replicate morphometric data derived from a manually mapped (visually interpreted) longitudinal axis, which, though subjective, is the most typically used reference. A dataset of 100 subglacial bedforms covering the size and shape range of those in the Puget Lowland, Washington, USA is used. For bedforms with elongation > 5, deviations from the reference values are negligible for all methods but Euler's approximation (length). For bedforms with elongation < 5, most methods had small mean absolute error (MAE) and median absolute deviation (MAD) for all morphometrics and thus can be confidently used to characterize the central tendencies of their distributions. However, some methods are better than others. The least precise methods are the ones based on the longest straight line and Euler's approximation; using these for statistical dispersion analysis is discouraged. Because the standard deviational ellipse method is relatively shape invariant and closely replicates the reference values, it is the recommended method. Speculatively, this study may also apply to negative-relief, and fluvial and aeolian bedforms.

  4. On a method for generating inequalities for the zeros of certain functions

    NASA Astrophysics Data System (ADS)

    Gatteschi, Luigi; Giordano, Carla

    2007-10-01

    In this paper we describe a general procedure which yields inequalities satisfied by the zeros of a given function. The method requires the knowledge of a two-term approximation of the function with bound for the error term. The method was successfully applied many years ago [L. Gatteschi, On the zeros of certain functions with application to Bessel functions, Nederl. Akad. Wetensch. Proc. Ser. 55(3)(1952), Indag. Math. 14(1952) 224-229] and more recently too [L. Gatteschi and C. Giordano, Error bounds for McMahon's asymptotic approximations of the zeros of the Bessel functions, Integral Transform Special Functions, 10(2000) 41-56], to the zeros of the Bessel functions of the first kind. Here, we present the results of the application of the method to get inequalities satisfied by the zeros of the derivative of the function . This function plays an important role in the asymptotic study of the stationary points of the solutions of certain differential equations.

  5. Reconstruction of fluorophore concentration variation in dynamic fluorescence molecular tomography.

    PubMed

    Zhang, Xuanxuan; Liu, Fei; Zuo, Simin; Shi, Junwei; Zhang, Guanglei; Bai, Jing; Luo, Jianwen

    2015-01-01

    Dynamic fluorescence molecular tomography (DFMT) is a potential approach for drug delivery, tumor detection, diagnosis, and staging. The purpose of DFMT is to quantify the changes of fluorescent agents in the body, which offer important information about the underlying physiological processes. However, the conventional method requires that the fluorophore concentrations to be reconstructed remain stationary during the data collection period; thus, it cannot offer dynamic information about fluorophore concentration variation within that period. In this paper, a method is proposed to reconstruct the fluorophore concentration variation instead of the fluorophore concentration through a linear approximation. The fluorophore concentration variation rate is introduced by the linear approximation as a new unknown term to be reconstructed and is used to obtain the time courses of fluorophore concentration. Simulation and phantom studies are performed to validate the proposed method. The results show that the method is able to reconstruct the fluorophore concentration variation rates and the time courses of fluorophore concentration with relative errors less than 0.0218.

  6. Practical approximation method for firing-rate models of coupled neural networks with correlated inputs

    NASA Astrophysics Data System (ADS)

    Barreiro, Andrea K.; Ly, Cheng

    2017-08-01

    Rapid experimental advances now enable simultaneous electrophysiological recording of neural activity at single-cell resolution across large regions of the nervous system. Models of this neural network activity will necessarily increase in size and complexity, thus increasing the computational cost of simulating them and the challenge of analyzing them. Here we present a method to approximate the activity and firing statistics of a general firing rate network model (of the Wilson-Cowan type) subject to noisy correlated background inputs. The method requires solving a system of transcendental equations and is fast compared to Monte Carlo simulations of coupled stochastic differential equations. We implement the method with several examples of coupled neural networks and show that the results are quantitatively accurate even with moderate coupling strengths and an appreciable amount of heterogeneity in many parameters. This work should be useful for investigating how various neural attributes qualitatively affect the spiking statistics of coupled neural networks.
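
    The "solve equations instead of simulating" step can be sketched for a toy two-population rate model of the Wilson-Cowan type: the fixed point of r = f(W r + I) is found with a standard root solver. The coupling matrix and inputs are illustrative, and the paper's treatment of correlated noise is not reproduced.

      import numpy as np
      from scipy.optimize import fsolve

      W = np.array([[ 1.5, -1.0],      # E->E, I->E coupling (assumed)
                    [ 1.2, -0.5]])     # E->I, I->I coupling (assumed)
      I_ext = np.array([0.6, 0.3])     # external drive
      f = lambda x: 1.0 / (1.0 + np.exp(-x))     # sigmoidal rate function

      def residual(r):
          """Zero at a self-consistent fixed point r = f(W r + I_ext)."""
          return r - f(W @ r + I_ext)

      r_star = fsolve(residual, x0=np.array([0.5, 0.5]))
      print("fixed-point rates:", r_star, " residual:", residual(r_star))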

  7. Evaluation of the eigenvalue method in the solution of transient heat conduction problems

    NASA Astrophysics Data System (ADS)

    Landry, D. W.

    1985-01-01

    The eigenvalue method is evaluated to determine the advantages and disadvantages of the method as compared to fully explicit, fully implicit, and Crank-Nicolson methods. Time comparisons and accuracy comparisons are made in an effort to rank the eigenvalue method in relation to the comparison schemes. The eigenvalue method is used to solve the parabolic heat equation in multidimensions with transient temperatures. Extensions into three dimensions are made to determine the method's feasibility in handling large geometry problems requiring great numbers of internal mesh points. The eigenvalue method proves to be slightly better in accuracy than the comparison routines because of an exact treatment, as opposed to a numerical approximation, of the time derivative in the heat equation. It has the potential of being a very powerful routine in solving long transient type problems. The method is not well suited to finely meshed grid arrays or large regions because of the time and memory requirements necessary for calculating large sets of eigenvalues and eigenvectors.
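
    The core of the eigenvalue method is easy to sketch for the 1-D heat equation: diagonalize the discrete Laplacian once, and the time dependence exp(lambda*t) is then exact, with no stability limit on the time step. The grid size and diffusivity below are illustrative.

      import numpy as np

      n, alpha = 50, 1.0e-4
      h = 1.0 / (n + 1)
      x = np.linspace(h, 1.0 - h, n)                 # interior points, u = 0 at the walls
      A = alpha / h**2 * (np.diag(np.ones(n - 1), 1)
                          - 2.0 * np.diag(np.ones(n))
                          + np.diag(np.ones(n - 1), -1))

      lam, V = np.linalg.eigh(A)                     # symmetric: real eigenpairs
      u0 = np.sin(np.pi * x)                         # initial temperature profile
      c = V.T @ u0                                   # expand u0 in the eigenvectors

      def u_at(t):
          """Temperature on the mesh at time t, exact in time for the semi-discrete system."""
          return V @ (np.exp(lam * t) * c)

      print("u near mid-span at t = 1000 s:", u_at(1000.0)[n // 2])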

  8. Biomathematical modeling of pulsatile hormone secretion: a historical perspective.

    PubMed

    Evans, William S; Farhy, Leon S; Johnson, Michael L

    2009-01-01

    Shortly after the recognition of the profound physiological significance of the pulsatile nature of hormone secretion, computer-based modeling techniques were introduced for the identification and characterization of such pulses. Whereas these earlier approaches defined perturbations in hormone concentration-time series, deconvolution procedures were subsequently employed to separate such pulses into their secretion event and clearance components. Stochastic differential equation modeling was also used to define basal and pulsatile hormone secretion. To assess the regulation of individual components within a hormone network, a method that quantitated approximate entropy within hormone concentration-time series was described. To define relationships within coupled hormone systems, methods including cross-correlation and cross-approximate entropy were utilized. To address some of the inherent limitations of these methods, modeling techniques with which to appraise the strength of feedback signaling between and among hormone-secreting components of a network have been developed. Techniques such as dynamic modeling have been utilized to reconstruct dose-response interactions between hormones within coupled systems. A logical extension of these advances will require the development of mathematical methods with which to approximate endocrine networks exhibiting multiple feedback interactions and subsequently reconstruct their parameters based on experimental data for the purpose of testing regulatory hypotheses and estimating alterations in hormone release control mechanisms.
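
    Approximate entropy itself is compact enough to sketch. The block below follows the usual Pincus definition with embedding dimension m and tolerance r expressed as a fraction of the series standard deviation; the series are synthetic stand-ins for hormone concentration-time data.

      import numpy as np

      def apen(x, m=2, r_frac=0.2):
          """Approximate entropy ApEn(m, r) of a 1-D series (self-matches included)."""
          x = np.asarray(x, dtype=float)
          r = r_frac * x.std()
          def phi(mm):
              nwin = len(x) - mm + 1
              windows = np.array([x[i:i + mm] for i in range(nwin)])
              # C_i: fraction of windows within tolerance r (Chebyshev distance).
              dist = np.abs(windows[:, None, :] - windows[None, :, :]).max(axis=2)
              C = (dist <= r).mean(axis=1)
              return np.log(C).mean()
          return phi(m) - phi(m + 1)

      rng = np.random.default_rng(0)
      print("regular series :", apen(np.sin(0.5 * np.arange(300))))   # low ApEn
      print("random series  :", apen(rng.normal(size=300)))           # high ApEn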

  9. An approximate solution for interlaminar stresses in laminated composites: Applied mechanics program

    NASA Technical Reports Server (NTRS)

    Rose, Cheryl A.; Herakovich, Carl T.

    1992-01-01

    An approximate solution for interlaminar stresses in finite width, laminated composites subjected to uniform extensional, and bending loads is presented. The solution is based upon the principle of minimum complementary energy and an assumed, statically admissible stress state, derived by considering local material mismatch effects and global equilibrium requirements. The stresses in each layer are approximated by polynomial functions of the thickness coordinate, multiplied by combinations of exponential functions of the in-plane coordinate, expressed in terms of fourteen unknown decay parameters. Imposing the stationary condition of the laminate complementary energy with respect to the unknown variables yields a system of fourteen non-linear algebraic equations for the parameters. Newton's method is implemented to solve this system. Once the parameters are known, the stresses can be easily determined at any point in the laminate. Results are presented for through-thickness and interlaminar stress distributions for angle-ply, cross-ply (symmetric and unsymmetric laminates), and quasi-isotropic laminates subjected to uniform extension and bending. It is shown that the solution compares well with existing finite element solutions and represents an improved approximate solution for interlaminar stresses, primarily at interfaces where global equilibrium is satisfied by the in-plane stresses, but large local mismatch in properties requires the presence of interlaminar stresses.

  10. Application of Reduced Order Transonic Aerodynamic Influence Coefficient Matrix for Design Optimization

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley W.

    2009-01-01

    Supporting the Aeronautics Research Mission Directorate guidelines, the National Aeronautics and Space Administration [NASA] Dryden Flight Research Center is developing a multidisciplinary design, analysis, and optimization [MDAO] tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Today's modern aircraft designs in the transonic speed regime are a challenging task due to the computation time required for unsteady aeroelastic analysis using a Computational Fluid Dynamics [CFD] code. Design approaches in this speed regime are mainly based on manual trial and error. Because of the time required for unsteady CFD computations in the time domain, this considerably slows down the whole design process. These analyses are usually performed repeatedly to optimize the final design. As a result, there is considerable motivation to be able to perform aeroelastic calculations more quickly and inexpensively. This paper describes the development of an unsteady transonic aeroelastic design methodology for design optimization using a reduced modeling method and unsteady aerodynamic approximation. The method requires that the unsteady transonic aerodynamics be represented in the frequency or Laplace domain. A dynamically linear assumption is used for creating Aerodynamic Influence Coefficient [AIC] matrices in the transonic speed regime. Unsteady CFD computations are needed for the important columns of an AIC matrix, which correspond to the primary flutter modes. Order reduction techniques, such as Guyan reduction and the improved reduction system, are used to reduce the size of the problem; transonic flutter can then be found by classic methods, such as rational function approximation, p-k, p, and root-locus. Such a methodology could be incorporated into the MDAO tool for design optimization at a reasonable computational cost. The proposed technique is verified using the Aerostructures Test Wing 2 actually designed, built, and tested at NASA Dryden Flight Research Center. The results from the full order model and the approximate reduced order model are analyzed and compared.

  11. Radiative Transfer Model for Operational Retrieval of Cloud Parameters from DSCOVR-EPIC Measurements

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Molina Garcia, V.; Doicu, A.; Loyola, D. G.

    2016-12-01

    The Earth Polychromatic Imaging Camera (EPIC) onboard the Deep Space Climate Observatory (DSCOVR) measures the radiance in the backscattering region. To make sure that all details in the backward glory are covered, a large number of streams is required by a standard radiative transfer model based on the discrete ordinates method. Even the use of the delta-M scaling and the TMS correction does not substantially reduce the number of streams. The aim of this work is to analyze the capability of a fast radiative transfer model to operationally retrieve cloud parameters from EPIC measurements. The radiative transfer model combines the discrete ordinates method with the matrix exponential for the computation of radiances, and the matrix operator method for the calculation of the reflection and transmission matrices. Standard acceleration techniques, such as the use of normalized right and left eigenvectors, the telescoping technique, the Padé approximation, and the successive-order-of-scattering approximation, are implemented. In addition, the model may compute the reflection matrix of the cloud by means of asymptotic theory, and may use the equivalent Lambertian cloud model. The various approximations are analyzed from the point of view of efficiency and accuracy.

  12. An Implicit Characteristic Based Method for Electromagnetics

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Briley, W. Roger

    2001-01-01

    An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.
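    Since the explicit FDTD method serves as the reference algorithm here, a minimal one-dimensional Yee-style sketch may be useful for orientation; it uses normalized units (c = 1) and invented grid parameters, and is not the LU/AF scheme of the paper.

    ```python
    import numpy as np

    # One-dimensional Yee-style FDTD in free space, normalized so c = 1 and the
    # Courant number is the 1D "magic" value of 1.0; grid sizes are invented.
    nx, nt = 400, 250
    ez = np.zeros(nx)        # E field at integer grid points
    hy = np.zeros(nx - 1)    # H field at half-integer points
    courant = 1.0

    for n in range(nt):
        # Leapfrog updates of the two curl equations.
        hy += courant * (ez[1:] - ez[:-1])
        ez[1:-1] += courant * (hy[1:] - hy[:-1])
        # Additive Gaussian pulse source near the left boundary.
        ez[50] += np.exp(-((n - 60) / 20.0) ** 2)

    print("pulse peak now near cell", int(np.abs(ez).argmax()))
    ```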

  13. Micromechanical potentiometric sensors

    DOEpatents

    Thundat, Thomas G.

    2000-01-01

    A microcantilever potentiometric sensor utilized for detecting and measuring physical and chemical parameters in a sample of media is described. The microcantilevered spring element includes at least one chemical coating on a coated region that accumulates a surface charge in response to hydrogen ions, redox potential, or ion concentrations in a sample of the media being monitored. The accumulation of surface charge on one surface of the microcantilever, with a differing surface charge on an opposing surface, creates a mechanical stress and a deflection of the spring element. One of a multitude of deflection detection methods may include the use of a laser light source focused on the microcantilever, with a photo-sensitive detector receiving reflected laser impulses. The microcantilevered spring element is approximately 1 to 100 µm long, approximately 1 to 50 µm wide, and approximately 0.3 to 3.0 µm thick. Deflections of the cantilever can be detected with an accuracy on the order of 0.01 nanometers. The microcantilever apparatus and method of detection require only microliters of a sample to be placed on, or near, the spring element surface. The method is extremely sensitive to the detection of the parameters to be measured.

  14. Structural design using equilibrium programming formulations

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.

    1995-01-01

    Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.

  15. Evaluating significance in linear mixed-effects models in R.

    PubMed

    Luke, Steven G

    2017-08-01

    Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
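    As a rough Python analogue of the likelihood ratio test evaluated in the paper (the paper itself works in R with lme4), the sketch below fits a random-intercept model with statsmodels on simulated data; the data and effect sizes are invented, and the models are fitted by maximum likelihood (reml=False) because REML likelihoods are not comparable across different fixed-effects structures.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy import stats

    rng = np.random.default_rng(0)
    n_subj, n_obs = 30, 20
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n_subj), n_obs),
        "x": np.tile(np.linspace(0, 1, n_obs), n_subj),
    })
    # Simulated random-intercept data with a small fixed effect of x.
    u = rng.normal(0.0, 1.0, n_subj)
    df["y"] = 0.5 * df["x"] + u[df["subject"]] + rng.normal(0.0, 1.0, len(df))

    # Likelihood ratio test for the fixed effect of x.
    full = smf.mixedlm("y ~ x", df, groups=df["subject"]).fit(reml=False)
    null = smf.mixedlm("y ~ 1", df, groups=df["subject"]).fit(reml=False)
    lr = 2.0 * (full.llf - null.llf)
    print("LR statistic:", lr, "p =", stats.chi2.sf(lr, df=1))
    ```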

  16. Correcting AUC for Measurement Error.

    PubMed

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which require the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
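    A small simulation makes the bias concrete: the nonparametric (Mann-Whitney) AUC below is computed for normally distributed biomarkers, first error-free and then with added measurement error, which attenuates the AUC toward 0.5. The distributions and noise level are invented for illustration, and no correction is applied here.

    ```python
    import numpy as np

    def auc(cases, controls):
        """Nonparametric AUC: probability that a random case exceeds a random
        control (Mann-Whitney form, ties counted as one half)."""
        diff = cases[:, None] - controls[None, :]
        return (diff > 0).mean() + 0.5 * (diff == 0).mean()

    rng = np.random.default_rng(1)
    n = 2000
    controls_true = rng.normal(0.0, 1.0, n)
    cases_true = rng.normal(1.0, 1.0, n)
    print("true-biomarker AUC:", auc(cases_true, controls_true))

    # Measurement error attenuates the observed AUC toward 0.5 -- the bias
    # that correction methods are designed to remove.
    noise = 1.0
    print("error-prone AUC:", auc(cases_true + rng.normal(0, noise, n),
                                  controls_true + rng.normal(0, noise, n)))
    ```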

  17. Effectiveness of “Thin-Layer” and “Effective Medium” Approximations in Numerical Simulation of Dielectric Spectra of Biological Cell Suspensions

    NASA Astrophysics Data System (ADS)

    Asami, Koji

    2010-12-01

    Dielectric modeling of biological cells by the finite-element method (FEM) to simulate their dielectric spectra raises a few concerns. Cells possess thin plasma membranes and membrane-bound intracellular organelles, requiring extra-fine meshes and considerable computational effort in the simulation. To address these problems, the “thin-layer” approximation (TLA) and the “effective medium” approximation (EMA) were adopted. TLA treats the membrane as an interface with a specific membrane impedance, so it is not necessary to mesh the membrane region. EMA regards the composite cytoplasm as an effective homogeneous phase whose dielectric properties are calculated separately. It was shown that TLA and EMA were both useful for greatly reducing the computational burden while yielding results that accurately coincide with analytical solutions.

  18. High resolution frequency analysis techniques with application to the redshift experiment

    NASA Technical Reports Server (NTRS)

    Decher, R.; Teuber, D.

    1975-01-01

    High resolution frequency analysis methods, with application to the gravitational probe redshift experiment, are discussed. For this experiment a resolution of 0.00001 Hz is required to measure a slowly varying, low-frequency signal of approximately 1 Hz. Major building blocks include the fast Fourier transform, discrete Fourier transform, Lagrange interpolation, golden section search, and adaptive matched filter techniques. Accuracy, resolution, and computer effort of these methods are investigated, including test runs on an IBM 360/65 computer.
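    Of the building blocks listed, golden section search is the easiest to sketch; the implementation below locates the maximizer of a unimodal function, with the bracket and test function invented for illustration.

    ```python
    import math

    def golden_section_max(f, a, b, tol=1e-10):
        """Locate the maximizer of a unimodal function on [a, b]
        by golden section search."""
        invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi, about 0.618
        c, d = b - invphi * (b - a), a + invphi * (b - a)
        while (b - a) > tol:
            if f(c) > f(d):
                b, d = d, c                      # maximum lies in [a, d]
                c = b - invphi * (b - a)
            else:
                a, c = c, d                      # maximum lies in [c, b]
                d = a + invphi * (b - a)
        return 0.5 * (a + b)

    # Example: peak of a spectral-line-like profile near 1 Hz.
    peak = golden_section_max(lambda x: -(x - 1.0) ** 2, 0.5, 1.5)
    print(peak)  # approximately 1.0
    ```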

  19. Ray Tracing Methods in Seismic Emission Tomography

    NASA Astrophysics Data System (ADS)

    Chebotareva, I. Ya.

    2018-03-01

    Highly efficient approximate ray tracing techniques which can be used in seismic emission tomography and in other methods requiring a large number of raypaths are described. The techniques are applicable for the gradient and plane-layered velocity sections of the medium and for the models with a complicated geometry of contrasting boundaries. The empirical results obtained with the use of the discussed ray tracing technologies and seismic emission tomography results, as well as the results of numerical modeling, are presented.

  20. Measurement Uncertainty of Dew-Point Temperature in a Two-Pressure Humidity Generator

    NASA Astrophysics Data System (ADS)

    Martins, L. Lages; Ribeiro, A. Silva; Alves e Sousa, J.; Forbes, Alistair B.

    2012-09-01

    This article describes the measurement uncertainty evaluation of the dew-point temperature when using a two-pressure humidity generator as a reference standard. The estimation of the dew-point temperature involves the solution of a non-linear equation for which iterative solution techniques, such as the Newton-Raphson method, are required. Previous studies have already been carried out using the GUM method and the Monte Carlo method but have not discussed the impact of the approximate numerical method used to provide the temperature estimation. One of the aims of this article is to take this approximation into account. Following the guidelines presented in the GUM Supplement 1, two alternative approaches can be developed: the forward measurement uncertainty propagation by the Monte Carlo method when using the Newton-Raphson numerical procedure; and the inverse measurement uncertainty propagation by Bayesian inference, based on prior available information regarding the usual dispersion of values obtained by the calibration process. The measurement uncertainties obtained using these two methods can be compared with previous results. Other relevant issues concerning this research are the broad application to measurements that require hygrometric conditions obtained from two-pressure humidity generators and, also, the ability to provide a solution that can be applied to similar iterative models. The research also studied the factors influencing both the use of the Monte Carlo method (such as the seed value and the convergence parameter) and the inverse uncertainty propagation using Bayesian inference (such as the pre-assigned tolerance, prior estimate, and standard deviation) in terms of their accuracy and adequacy.
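    The propagation pattern can be sketched compactly: draw samples from an input distribution, push each through the Newton-Raphson inversion, and summarize the output. The Magnus-type saturation-pressure formula below is a simplified stand-in for the two-pressure generator model, and all numbers are invented.

    ```python
    import numpy as np

    def e_sat(t):
        """Hypothetical Magnus-type saturation vapor pressure (hPa), t in deg C."""
        return 6.112 * np.exp(17.62 * t / (243.12 + t))

    def dew_point(e_target, t0=10.0, tol=1e-12, max_iter=100):
        """Invert e_sat(t) = e_target by Newton-Raphson (numerical derivative)."""
        t, h = t0, 1e-6
        for _ in range(max_iter):
            f = e_sat(t) - e_target
            if abs(f) < tol:
                break
            t -= f / ((e_sat(t + h) - e_sat(t - h)) / (2 * h))
        return t

    # Forward Monte Carlo propagation in the spirit of GUM Supplement 1:
    # sample the input quantity, push each draw through the iterative solver.
    rng = np.random.default_rng(2)
    e_samples = rng.normal(12.27, 0.05, 20_000)   # assumed input pdf, hPa
    td = np.array([dew_point(e) for e in e_samples])
    print(f"dew point = {td.mean():.4f} deg C, u = {td.std(ddof=1):.4f} deg C")
    ```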

  1. Adaptive Filtering Using Recurrent Neural Networks

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.

  2. High-Order Entropy Stable Finite Difference Schemes for Nonlinear Conservation Laws: Finite Domains

    NASA Technical Reports Server (NTRS)

    Fisher, Travis C.; Carpenter, Mark H.

    2013-01-01

    Developing stable and robust high-order finite difference schemes requires mathematical formalism and appropriate methods of analysis. In this work, nonlinear entropy stability is used to derive provably stable high-order finite difference methods with formal boundary closures for conservation laws. Particular emphasis is placed on the entropy stability of the compressible Navier-Stokes equations. A newly derived entropy stable weighted essentially non-oscillatory finite difference method is used to simulate problems with shocks and a conservative, entropy stable, narrow-stencil finite difference approach is used to approximate viscous terms.

  3. A new method for inferring carbon monoxide concentrations from gas filter radiometer data

    NASA Technical Reports Server (NTRS)

    Wallio, H. A.; Reichle, H. G., Jr.; Casas, J. C.; Gormsen, B. B.

    1981-01-01

    A method for inferring carbon monoxide concentrations from gas filter radiometer data is presented. The technique can closely approximate the results of more costly line-by-line radiative transfer calculations over a wide range of altitudes, ground temperatures, and carbon monoxide concentrations. The technique can also be used over a larger range of conditions than those used for the regression analysis. Because inference of the carbon monoxide mixing ratio requires only addition, multiplication, and a minimum of logic, the method can be implemented on very small computers or microprocessors.

  4. Reduced-rank approximations to the far-field transform in the gridded fast multipole method

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2011-05-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.

  5. Reduced-Rank Approximations to the Far-Field Transform in the Gridded Fast Multipole Method.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2011-05-10

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.

  6. Reduced-Rank Approximations to the Far-Field Transform in the Gridded Fast Multipole Method

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2011-01-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly. PMID:21552350
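    The recompression strategy can be sketched in a few lines of Python: run an ACA to low-rank factors with a stringent tolerance, then recompress via QR factorizations and a truncated SVD of the small core. The kernel and tolerances below are invented, and the demo uses fully pivoted ACA on an explicit matrix for simplicity (production codes pivot partially to avoid forming the matrix).

    ```python
    import numpy as np

    def aca(A, tol=1e-12, max_rank=50):
        """Fully pivoted adaptive cross approximation: A is approximated by U @ V."""
        R = A.astype(float).copy()            # explicit residual; fine for a demo
        U, V = [], []
        for _ in range(max_rank):
            i, j = np.unravel_index(np.abs(R).argmax(), R.shape)
            if abs(R[i, j]) < tol:
                break
            u = R[:, j] / R[i, j]
            v = R[i, :].copy()
            R -= np.outer(u, v)
            U.append(u); V.append(v)
        return np.array(U).T, np.array(V)

    # Smooth far-field-like kernel: numerically low rank.
    x, y = np.linspace(0, 1, 200), np.linspace(3, 4, 200)
    A = 1.0 / (x[:, None] + y[None, :])

    # ACA with stringent accuracy, then QR + truncated SVD recompression.
    U, V = aca(A)
    Qu, Ru = np.linalg.qr(U)
    Qv, Rv = np.linalg.qr(V.T)
    w, s, zt = np.linalg.svd(Ru @ Rv.T)
    k = int(np.searchsorted(-s, -1e-8 * s[0]))    # keep values above the cutoff
    Uc = (Qu @ w[:, :k]) * s[:k]
    Vc = zt[:k] @ Qv.T
    print("ACA rank:", U.shape[1], "-> recompressed rank:", k,
          "max error:", np.abs(A - Uc @ Vc).max())
    ```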

  7. Approximate tensor-product preconditioners for very high order discontinuous Galerkin methods

    NASA Astrophysics Data System (ADS)

    Pazner, Will; Persson, Per-Olof

    2018-02-01

    In this paper, we develop a new tensor-product based preconditioner for discontinuous Galerkin methods with polynomial degrees higher than those typically employed. This preconditioner uses an automatic, purely algebraic method to approximate the exact block Jacobi preconditioner by Kronecker products of several small, one-dimensional matrices. Traditional matrix-based preconditioners require O(p^(2d)) storage and O(p^(3d)) computational work, where p is the degree of the basis polynomials used and d is the spatial dimension. Our SVD-based tensor-product preconditioner requires O(p^(d+1)) storage, O(p^(d+1)) work in two spatial dimensions, and O(p^(d+2)) work in three spatial dimensions. Combined with a matrix-free Newton-Krylov solver, these preconditioners allow for the solution of DG systems in linear time in p per degree of freedom in 2D, and reduce the computational complexity from O(p^9) to O(p^5) in 3D. Numerical results are shown in 2D and 3D for the advection, Euler, and Navier-Stokes equations, using polynomials of degree up to p = 30. For many test cases, the preconditioner results in similar iteration counts when compared with the exact block Jacobi preconditioner, and performance is significantly improved for high polynomial degrees p.
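    The storage and work savings come from never forming the Kronecker product explicitly. The sketch below applies the inverse of a two-dimensional Kronecker preconditioner M = A ⊗ B through small one-dimensional solves; the matrices are random invented examples, not the paper's algebraically constructed factors.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    p = 30
    A = np.eye(p) + 0.01 * rng.standard_normal((p, p))
    B = np.eye(p) + 0.01 * rng.standard_normal((p, p))
    x = rng.standard_normal(p * p)

    # Explicit application: forms the p^2 x p^2 matrix (O(p^4) storage).
    y_direct = np.linalg.solve(np.kron(A, B), x)

    # Tensor-product application: with row-major vec, (A kron B) vec(X) equals
    # vec(A X B^T), so the inverse needs only two small p x p solves.
    X = x.reshape(p, p)
    Z = np.linalg.solve(A, X)
    y_tensor = np.linalg.solve(B, Z.T).T.ravel()
    print("agree:", np.allclose(y_direct, y_tensor))
    ```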

  8. Direct application of Padé approximant for solving nonlinear differential equations.

    PubMed

    Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Garcia-Gervacio, Jose Luis; Huerta-Chua, Jesus; Morales-Mendoza, Luis Javier; Gonzalez-Lee, Mario

    2014-01-01

    This work presents a direct procedure for applying the Padé method to find approximate solutions of nonlinear differential equations. Moreover, we present some case studies showing the strength of the method in generating highly accurate rational approximate solutions compared to other semi-analytical methods. The types of nonlinear equations tested are: a highly nonlinear boundary value problem, a differential-algebraic oscillator problem, and an asymptotic problem. The highly accurate, handy approximations obtained by the direct application of the Padé method show the high potential of the proposed scheme to approximate a wide variety of problems. What is more, direct application of the Padé approximant avoids the prior application of an approximative method, such as the Taylor series method, homotopy perturbation method, Adomian decomposition method, homotopy analysis method, or variational iteration method, as a tool for obtaining a power series solution to post-treat with the Padé approximant.
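    As a miniature of the idea (not the paper's procedure), SciPy can construct a Padé approximant directly from Taylor coefficients; even the [2/2] approximant of exp(x) outperforms the degree-4 Taylor polynomial it was built from once x moves away from the expansion point.

    ```python
    import numpy as np
    from scipy.interpolate import pade

    # Taylor coefficients of exp(x) about 0: 1, 1, 1/2!, 1/3!, 1/4!
    an = [1.0, 1.0, 1 / 2, 1 / 6, 1 / 24]
    p, q = pade(an, 2)          # [2/2] Pade approximant p(x)/q(x)

    x = 1.5
    print("Pade [2/2]: ", p(x) / q(x))                            # ~4.429
    print("Taylor deg 4:", sum(c * x**k for k, c in enumerate(an)))  # ~4.398
    print("exact:       ", np.exp(x))                             # ~4.482
    ```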

  9. Multi-hybrid method for investigation of EM scattering from inhomogeneous object above a dielectric rough surface

    NASA Astrophysics Data System (ADS)

    Li, Jie; Guo, LiXin; He, Qiong; Wei, Bing

    2012-10-01

    An iterative strategy combining the Kirchhoff approximation (KA) with the hybrid finite element-boundary integral (FE-BI) method is presented in this paper to study the interactions between an inhomogeneous object and the underlying rough surface. KA is applied to study scattering from the underlying rough surface, whereas FE-BI deals with scattering from the target above it. Both methods use updated excitation sources. Huygens' equivalence principle and an iterative strategy are employed to account for the multi-scattering effects. This hybrid FE-BI-KA scheme is an improved and generalized version of the previous hybrid Kirchhoff approximation-method of moments (KA-MoM). The newly presented hybrid method has the following advantages: (1) the feasibility of modeling multi-scale scattering problems (a large-scale underlying surface and a small-scale target); (2) low memory requirements, as in hybrid KA-MoM; (3) the ability to deal with scattering from inhomogeneous (including coated or layered) scatterers above rough surfaces. Numerical results are given to evaluate the accuracy of the multi-hybrid technique; the computing time and memory requirements of specific numerical simulations of FE-BI-KA are compared with those of MoM. The convergence performance is analyzed by studying how the iteration number varies with related parameters. Then bistatic scattering from inhomogeneous objects of different configurations above a dielectric Gaussian rough surface is calculated, and the influences of dielectric composition and surface roughness on the scattering pattern are discussed.

  10. Zonal methods for the parallel execution of range-limited N-body simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowers, Kevin J.; Dror, Ron O.; Shaw, David E.

    2007-01-20

    Particle simulations in fields ranging from biochemistry to astrophysics require the evaluation of interactions between all pairs of particles separated by less than some fixed interaction radius. The applicability of such simulations is often limited by the time required for calculation, but the use of massive parallelism to accelerate these computations is typically limited by inter-processor communication requirements. Recently, Snir [M. Snir, A note on N-body computations with cutoffs, Theor. Comput. Syst. 37 (2004) 295-318] and Shaw [D.E. Shaw, A fast, scalable method for the parallel evaluation of distance-limited pairwise particle interactions, J. Comput. Chem. 26 (2005) 1318-1328] independently introduced two distinct methods that offer asymptotic reductions in the amount of data transferred between processors. In the present paper, we show that these schemes represent special cases of a more general class of methods, and introduce several new algorithms in this class that offer practical advantages over all previously described methods for a wide range of problem parameters. We also show that several of these algorithms approach an approximate lower bound on inter-processor data transfer.
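    On a single processor, the underlying range-limited pair search is typically organized with a cell list, which the parallel schemes above decompose across processors. The serial sketch below finds all pairs within a cutoff in a periodic cubic box; the box size, cutoff, and particle count are invented.

    ```python
    import numpy as np
    from collections import defaultdict
    from itertools import product

    def cutoff_pairs(pos, box, rc):
        """All pairs closer than rc in a periodic cubic box, via a cell list
        (cells at least rc wide, so only the 27 neighbor cells are searched)."""
        ncell = max(3, int(box // rc))
        size = box / ncell
        cells = defaultdict(list)
        for idx, p in enumerate(pos):
            cells[tuple((p // size).astype(int) % ncell)].append(idx)
        pairs = []
        for c, members in list(cells.items()):
            for d in product((-1, 0, 1), repeat=3):     # 27 neighboring cells
                nc = tuple((np.array(c) + d) % ncell)
                for i in members:
                    for j in cells.get(nc, ()):
                        if i < j:
                            r = pos[i] - pos[j]
                            r -= box * np.round(r / box)  # minimum image
                            if np.dot(r, r) < rc * rc:
                                pairs.append((i, j))
        return pairs

    rng = np.random.default_rng(4)
    pos = rng.uniform(0.0, 10.0, (500, 3))
    print(len(cutoff_pairs(pos, 10.0, 1.2)), "pairs within cutoff")
    ```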

  11. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.

  12. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
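    A toy example (not the paper's beam model) shows the flavor of the approach: for a rectangular section, bending stress scales as sigma(h) = sigma0*(h0/h)^2 with section height h, and reading the sensitivity d(sigma)/dh = -2*sigma/h as a differential equation in h recovers that closed form exactly, whereas a linear Taylor step keeps only the local slope.

    ```python
    import numpy as np

    sigma0, h0 = 100.0, 1.0            # stress at the baseline design (invented)
    h = np.linspace(0.7, 1.4, 8)       # perturbed section heights

    # Closed form obtained by integrating the sensitivity equation in h,
    # versus the linear Taylor series step about h0.
    exact = sigma0 * (h0 / h) ** 2
    taylor = sigma0 + (-2.0 * sigma0 / h0) * (h - h0)

    for hi, e, t in zip(h, exact, taylor):
        print(f"h={hi:.2f}  closed-form={e:7.2f}  taylor={t:7.2f}")
    ```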

  13. Approximate analysis of thermal convection in a crystal-growth cell for Spacelab 3

    NASA Technical Reports Server (NTRS)

    Dressler, R. F.

    1982-01-01

    The transient and steady thermal convection in microgravity is described. The approach is applicable to many three dimensional flows in containers of various shapes with various thermal gradients imposed. The method employs known analytical solutions to two dimensional thermal flows in simpler geometries, and does not require recourse to numerical calculations by computer.

  14. An electric-analog simulation of elliptic partial differential equations using finite element theory

    USGS Publications Warehouse

    Franke, O.L.; Pinder, G.F.; Patten, E.P.

    1982-01-01

    Elliptic partial differential equations can be solved using the Galerkin-finite element method to generate the approximating algebraic equations, and an electrical network to solve the resulting matrices. Some element configurations require the use of networks containing negative resistances which, while physically realizable, are more expensive and time-consuming to construct.

  15. High-order cyclo-difference techniques: An alternative to finite differences

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Otto, John C.

    1993-01-01

    The summation-by-parts energy norm is used to establish a new class of high-order finite-difference techniques referred to here as 'cyclo-difference' techniques. These techniques are constructed cyclically from stable subelements, and require no numerical boundary conditions; when coupled with the simultaneous approximation term (SAT) boundary treatment, they are time asymptotically stable for an arbitrary hyperbolic system. These techniques are similar to spectral element techniques and are ideally suited for parallel implementation, but do not require special collocation points or orthogonal basis functions. The principal focus is on methods of sixth-order formal accuracy or less; however, these methods could be extended in principle to any arbitrary order of accuracy.

  16. Defining and quantifying the social phenotype in autism.

    PubMed

    Klin, Ami; Jones, Warren; Schultz, Robert; Volkmar, Fred; Cohen, Donald

    2002-06-01

    Genetic and neurofunctional research in autism has highlighted the need for improved characterization of the core social disorder defining the broad spectrum of syndrome manifestations. This article reviews the advantages and limitations of current methods for the refinement and quantification of this highly heterogeneous social phenotype. The study of social visual pursuit by use of eye-tracking technology is offered as a paradigm for novel tools incorporating these requirements and as a research effort that builds on the emerging synergy of different branches of social neuroscience. Advances in the area will require increased consideration of processes underlying experimental results and a closer approximation of experimental methods to the naturalistic demands inherent in real-life social situations.

  17. Simultaneous quaternion estimation (QUEST) and bias determination

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1989-01-01

    Tests of a new method for the simultaneous estimation of spacecraft attitude and sensor biases, based on a quaternion estimation algorithm that minimizes Wahba's loss function, are presented. The new method is compared with a conventional batch least-squares differential correction algorithm. The estimates are based on data from strapdown gyros and star trackers, simulated with varying levels of Gaussian noise for both inertially fixed and Earth-pointing reference attitudes. Both algorithms solve for the spacecraft attitude and the gyro drift rate biases. They converge to the same estimates at the same rate for inertially fixed attitude, but the new algorithm converges more slowly than the differential correction for Earth-pointing attitude. The slower convergence of the new method for non-zero attitude rates is believed to be due to the use of an inadequate approximation for a partial derivative matrix. The new method requires about twice the computational effort of the differential correction. Improving the approximation for the partial derivative matrix in the new method is expected to improve its convergence at the cost of increased computational effort.
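    Wahba's loss function has a classical closed-form minimizer via Davenport's q-method, which also underlies QUEST; a minimal sketch follows. Conventions vary between formulations: here the quaternion is vector-first with scalar last, the attitude matrix maps reference vectors into the body frame, and q and -q represent the same attitude. This illustrates the loss function being minimized, not the paper's bias-augmented estimator.

    ```python
    import numpy as np

    def davenport_q(body, ref, weights):
        """Davenport q-method: quaternion whose attitude matrix best maps
        reference vectors into body-frame observations (Wahba's problem)."""
        B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body, ref))
        z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
        K = np.zeros((4, 4))
        K[:3, :3] = B + B.T - np.trace(B) * np.eye(3)
        K[:3, 3] = K[3, :3] = z
        K[3, 3] = np.trace(B)
        vals, vecs = np.linalg.eigh(K)
        q = vecs[:, -1]              # eigenvector of the largest eigenvalue
        return q / np.linalg.norm(q)

    # Check against a known 60-degree rotation about z (two noiseless vectors).
    th = np.radians(60.0)
    A = np.array([[np.cos(th), np.sin(th), 0.0],
                  [-np.sin(th), np.cos(th), 0.0],
                  [0.0, 0.0, 1.0]])   # reference-to-body attitude matrix
    ref = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
    body = [A @ r for r in ref]
    print(davenport_q(body, ref, [1.0, 1.0]))  # ~ +/-[0, 0, sin(30deg), cos(30deg)]
    ```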

  18. Differential computation method used to calibrate the angle-centroid relationship in coaxial reverse Hartmann test

    NASA Astrophysics Data System (ADS)

    Li, Xinji; Hui, Mei; Zhao, Zhu; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin

    2018-05-01

    A differential computation method is presented to improve the precision of calibration for the coaxial reverse Hartmann test (RHT). In the calibration, the accuracy of the distance measurement greatly influences the surface shape test, as demonstrated in the mathematical analyses. However, high-precision absolute distance measurement is difficult in the calibration. Thus, a differential computation method that only requires the relative distance was developed. In the proposed method, a liquid crystal display screen successively displayed two regular dot matrix patterns with different dot spacing. In a special case, images on the detector exhibited similar centroid distributions during the reflector translation. Thus, the critical value of the relative displacement distance and the centroid distributions of the dots on the detector were utilized to establish the relationship between the rays at certain angles and the detector coordinates. Experiments revealed the approximately linear behavior of the centroid variation with the relative displacement distance. With the differential computation method, the precision of the traditional calibration was improved to the level of 10^-5 rad root mean square, and the precision of the RHT was increased by approximately 100 nm.

  19. Density-to-Potential Inversions to Guide Development of Exchange-Correlation Approximations at Finite Temperature

    NASA Astrophysics Data System (ADS)

    Jensen, Daniel; Wasserman, Adam; Baczewski, Andrew

    The construction of approximations to the exchange-correlation potential for warm dense matter (WDM) is a topic of significant recent interest. In this work, we study the inverse problem of Kohn-Sham (KS) DFT as a means of guiding functional design at zero temperature and in WDM. Whereas the forward problem solves the KS equations to produce a density from a specified exchange-correlation potential, the inverse problem seeks to construct the exchange-correlation potential from specified densities. These two problems require different computational methods and convergence criteria despite sharing the same mathematical equations. We present two new inversion methods based on constrained variational and PDE-constrained optimization methods. We adapt these methods to finite temperature calculations to reveal the exchange-correlation potential's temperature dependence in WDM-relevant conditions. The different inversion methods presented are applied to both non-interacting and interacting model systems for comparison. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Security Administration under contract DE-AC04-94.
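    A crude sketch of the forward/inverse distinction, with all details invented: the forward problem diagonalizes a one-dimensional finite-difference Hamiltonian to get a density, while the inverse iteration nudges the potential up wherever the density is too high. The simple additive update and step size below are loosely tuned and may need adjustment; the constrained-optimization formulations described above are far more robust.

    ```python
    import numpy as np

    n = 200
    x = np.linspace(-6.0, 6.0, n)
    dx = x[1] - x[0]

    def density(v, nocc=1):
        """Forward problem: ground-state density of a 1D finite-difference
        Kohn-Sham-like Hamiltonian with potential v (2 electrons per orbital)."""
        H = (np.diag(1.0 / dx**2 + v)
             + np.diag(-0.5 / dx**2 * np.ones(n - 1), 1)
             + np.diag(-0.5 / dx**2 * np.ones(n - 1), -1))
        _, psi = np.linalg.eigh(H)
        psi /= np.sqrt(dx)                        # grid normalization
        return 2.0 * (psi[:, :nocc] ** 2).sum(axis=1)

    # Inverse problem: recover the potential that produced a target density,
    # starting from a perturbed guess, via v <- v + alpha*(n_v - n_target).
    v_true = 0.5 * x**2
    n_target = density(v_true)
    v = v_true + 0.3 * np.exp(-x**2)              # perturbed starting potential

    alpha = 10.0                                  # loosely tuned step size
    for it in range(600):
        dn = density(v) - n_target
        v += alpha * dn
        if it % 200 == 0:
            print(f"iter {it}: max density residual = {np.abs(dn).max():.2e}")
    ```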

  20. MIST - MINIMUM-STATE METHOD FOR RATIONAL APPROXIMATION OF UNSTEADY AERODYNAMIC FORCE COEFFICIENT MATRICES

    NASA Technical Reports Server (NTRS)

    Karpel, M.

    1994-01-01

    Various control analysis, design, and simulation techniques of aeroservoelastic systems require the equations of motion to be cast in a linear, time-invariant state-space form. In order to account for unsteady aerodynamics, rational function approximations must be obtained to represent them in the first-order equations of the state-space formulation. A computer program, MIST, has been developed which determines minimum-state approximations of the coefficient matrices of the unsteady aerodynamic forces. The Minimum-State Method facilitates the design of lower-order control systems, analysis of control system performance, and near real-time simulation of aeroservoelastic phenomena such as the outboard-wing acceleration response to gust velocity. Engineers using this program will be able to calculate minimum-state rational approximations of the generalized unsteady aerodynamic forces. Using the Minimum-State formulation of the state-space equations, they will be able to obtain state-space models with good open-loop characteristics while reducing the number of aerodynamic equations by an order of magnitude more than traditional approaches. These low-order state-space mathematical models are good for design and simulation of aeroservoelastic systems. The computer program, MIST, accepts tabular values of the generalized aerodynamic forces over a set of reduced frequencies. It then determines approximations to these tabular data in the Laplace domain using rational functions. MIST provides the capability to select the denominator coefficients in the rational approximations, to selectably constrain the approximations without increasing the problem size, and to determine and emphasize critical frequency ranges in determining the approximations. MIST has been written to allow two types of data weighting options. The first weighting is a traditional normalization of the aerodynamic data to the maximum unit value of each aerodynamic coefficient. The second allows weighting the importance of different tabular values in determining the approximations based upon physical characteristics of the system. Specifically, the physical weighting capability is such that each tabulated aerodynamic coefficient, at each reduced frequency value, is weighted according to the effect of an incremental error of this coefficient on aeroelastic characteristics of the system. In both cases, the resulting approximations yield a relatively low number of aerodynamic lag states in the subsequent state-space model. MIST is written in ANSI FORTRAN 77 for DEC VAX series computers running VMS. It requires approximately 1 MB of RAM for execution. The standard distribution medium for this package is a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. It is also available on a TK50 tape cartridge in DEC VAX BACKUP format. MIST was developed in 1991. DEC VAX and VMS are trademarks of Digital Equipment Corporation. FORTRAN 77 is a registered trademark of Lahey Computer Systems, Inc.
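    The minimum-state formulation itself is involved, but the underlying task of fitting tabular aerodynamic data with rational functions of the Laplace variable can be sketched with a simpler Roger-type least-squares fit, shown below; the reduced frequencies, lag roots (the preselected denominator coefficients), and synthetic tabular data are all invented.

    ```python
    import numpy as np

    # Fit Q(ik) ~ A0 + A1*(ik) + A2*(ik)^2 + sum_j Aj * ik/(ik + b_j)
    # to tabular data over reduced frequencies k, for fixed lag roots b_j.
    k = np.linspace(0.05, 1.0, 20)            # reduced frequencies
    s = 1j * k
    lags = np.array([0.2, 0.6])               # assumed lag roots

    # Synthetic "tabular" data for one generalized-force coefficient.
    q_tab = 1.0 + 0.8 * s + 0.3 * s**2 + 0.5 * s / (s + 0.35)

    # Real-valued least-squares system: stack real and imaginary parts.
    cols = [np.ones_like(s), s, s**2] + [s / (s + b) for b in lags]
    M = np.column_stack(cols)
    M_ri = np.vstack([M.real, M.imag])
    rhs = np.concatenate([q_tab.real, q_tab.imag])
    coef, *_ = np.linalg.lstsq(M_ri, rhs, rcond=None)
    print("A0, A1, A2, lag coefficients:", np.round(coef, 4))
    ```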

  1. Contour-based image warping

    NASA Astrophysics Data System (ADS)

    Chan, Kwai H.; Lau, Rynson W.

    1996-09-01

    Image warping is concerned with transforming an image from one spatial coordinate system to another. It is widely used for the visual effect of deforming and morphing images in the film industry. A number of warping techniques have been introduced, which are mainly based on the corresponding-pair mapping of feature points, feature vectors or feature patches (mostly triangular or quadrilateral). However, warping of an image object with an arbitrary shape is often required. This calls for a warping technique based on the boundary contour instead of feature points or feature line-vectors. In addition, when feature-point or feature-vector based techniques are used, approximation of the object boundary by points or vectors is required; in this case, the matching process for the corresponding pairs will be very time consuming if a fine approximation is needed. In this paper, we propose a contour-based warping technique for warping image objects with arbitrary shapes. The novel idea of the new method is the introduction of mathematical morphology to allow more flexible control of image warping. Two morphological operators are used as contour determinators: the erosion operator is used to warp image contents inside a user-specified contour, while the dilation operator is used to warp image contents outside the contour. This new method is proposed to assist further development of a semi-automatic motion morphing system when accompanied by robust feature extractors such as deformable templates or active contour models.
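    The two contour determinators are standard morphological operators; the sketch below applies them to a binary region mask with SciPy to separate the interior, exterior, and contour band. The mask is invented, and this shows only the morphology, not the warping itself.

    ```python
    import numpy as np
    from scipy import ndimage

    mask = np.zeros((9, 9), dtype=bool)
    mask[3:6, 2:7] = True                     # a user-specified region

    inside = ndimage.binary_erosion(mask)     # pixels safely inside the contour
    outside = ndimage.binary_dilation(mask)   # region grown past the contour
    band = outside & ~inside                  # the contour band itself

    print("region:", mask.sum(), "eroded:", inside.sum(),
          "dilated:", outside.sum(), "contour band:", band.sum())
    ```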

  2. Solution of axisymmetric and two-dimensional inviscid flow over blunt bodies by the method of lines

    NASA Technical Reports Server (NTRS)

    Hamilton, H. H., II

    1978-01-01

    Comparisons with experimental data and the results of other computational methods demonstrated that very accurate solutions can be obtained by using relatively few lines with the method of lines approach. This method is semidiscrete and has relatively low core storage requirements compared with fully discrete methods, since very little data are stored across the shock layer. This feature is very attractive for three-dimensional problems because it enables computer storage requirements to be reduced by approximately an order of magnitude. In the present study it was found that nine lines was a practical upper limit for two-dimensional and axisymmetric problems. This condition limits application of the method to smooth body geometries where relatively few lines would be adequate to describe changes in the flow variables around the body. Extension of the method to three dimensions was conceptually straightforward; however, three-dimensional applications would also be limited to smooth body geometries, although not necessarily to a total of nine lines.

  3. Survey of meshless and generalized finite element methods: A unified approach

    NASA Astrophysics Data System (ADS)

    Babuška, Ivo; Banerjee, Uday; Osborn, John E.

    In the past few years meshless methods for numerically solving partial differential equations have come into the focus of interest, especially in the engineering community. This class of methods was essentially stimulated by difficulties related to mesh generation. Mesh generation is delicate in many situations, for instance, when the domain has complicated geometry; when the mesh changes with time, as in crack propagation, and remeshing is required at each time step; when a Lagrangian formulation is employed, especially with nonlinear PDEs. In addition, the need for flexibility in the selection of approximating functions (e.g., the flexibility to use non-polynomial approximating functions), has played a significant role in the development of meshless methods. There are many recent papers, and two books, on meshless methods; most of them are of an engineering character, without any mathematical analysis. In this paper we address meshless methods and the closely related generalized finite element methods for solving linear elliptic equations, using variational principles. We give a unified mathematical theory with proofs, briefly address implementational aspects, present illustrative numerical examples, and provide a list of references to the current literature. The aim of the paper is to provide a survey of a part of this new field, with emphasis on mathematics. We present proofs of essential theorems because we feel these proofs are essential for the understanding of the mathematical aspects of meshless methods, which has approximation theory as a major ingredient. As always, any new field is stimulated by and related to older ideas. This will be visible in our paper.

  4. A time-dependent neutron transport method of characteristics formulation with time derivative propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, Adam J., E-mail: adamhoff@umich.edu; Lee, John C., E-mail: jcl@umich.edu

    2016-02-15

    A new time-dependent Method of Characteristics (MOC) formulation for nuclear reactor kinetics was developed utilizing angular flux time-derivative propagation. This method avoids the requirement of storing the angular flux at previous points in time to represent a discretized time derivative; instead, an equation for the angular flux time derivative along 1D spatial characteristics is derived and solved concurrently with the 1D transport characteristic equation. This approach allows the angular flux time derivative to be recast principally in terms of the neutron source time derivatives, which are approximated to high-order accuracy using the backward differentiation formula (BDF). This approach, called Source Derivative Propagation (SDP), drastically reduces the memory requirements of time-dependent MOC relative to methods that require storing the angular flux. An SDP method was developed for 2D and 3D applications and implemented in the computer code DeCART in 2D. DeCART was used to model two reactor transient benchmarks: a modified TWIGL problem and a C5G7 transient. The SDP method accurately and efficiently replicated the solution of the conventional time-dependent MOC method using two orders of magnitude less memory.

  5. Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets while a sum of squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the uncertainty model assumed (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
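    For a univariate polynomial on the unit interval, the Bernstein expansion gives guaranteed range bounds: converting power-basis coefficients to Bernstein coefficients and taking their min and max brackets the polynomial. The sketch below illustrates this enclosure property on an invented requirement polynomial; the paper's multivariate hyper-rectangle machinery generalizes the same idea.

    ```python
    from math import comb

    def bernstein_bounds(a):
        """Range bounds on p(x) = sum a[i] x^i over [0, 1]: the min and max of
        the Bernstein coefficients bracket the polynomial on the interval."""
        n = len(a) - 1
        b = [sum(comb(k, i) / comb(n, i) * a[i] for i in range(k + 1))
             for k in range(n + 1)]
        return min(b), max(b)

    # Invented requirement polynomial g(x) = 1 - 4x + 3x^2 on [0, 1]:
    lo, hi = bernstein_bounds([1.0, -4.0, 3.0])
    print(lo, hi)   # bounds [-1, 1] enclose the true range [-1/3, 1]
    ```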

  6. Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.

  7. Higher-Order Extended Lagrangian Born–Oppenheimer Molecular Dynamics for Classical Polarizable Models

    DOE PAGES

    Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M. N.

    2018-01-09

    Generalized extended Lagrangian Born−Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate “shadow” potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.

  8. Understanding the Sun-Earth Libration Point Orbit Formation Flying Challenges For WFIRST and Starshade

    NASA Technical Reports Server (NTRS)

    Webster, Cassandra M.; Folta, David C.

    2017-01-01

    In order to fly an occulter in formation with a telescope at the Sun-Earth L2 (SEL2) libration point, one must have a detailed understanding of the dynamics that govern the restricted three-body system. For initial purposes, a linear approximation is satisfactory, but operations will require a high-fidelity modeling tool along with strategic targeting methods in order to be successful. This paper focuses on the challenging dynamics of the transfer trajectories required to achieve the relative positioning of two spacecraft flying in formation at SEL2, in our case, the Wide-Field Infrared Survey Telescope (WFIRST) and a proposed Starshade. By modeling the formation transfers using a high-fidelity tool, an accurate delta-V approximation can be made to assist with the development of the subsystem design required for a WFIRST and Starshade formation flight mission.

  9. Higher-Order Extended Lagrangian Born-Oppenheimer Molecular Dynamics for Classical Polarizable Models.

    PubMed

    Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M N

    2018-02-13

    Generalized extended Lagrangian Born-Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate "shadow" potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.

  10. Higher-Order Extended Lagrangian Born–Oppenheimer Molecular Dynamics for Classical Polarizable Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M. N.

    Generalized extended Lagrangian Born−Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate “shadow” potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.

  11. Development status of a high cooling capacity single stage pulse tube cryocooler

    NASA Astrophysics Data System (ADS)

    Hirayama, T.; Li, R.; Y Xu, M.; Zhu, S. W.

    2017-12-01

    High temperature superconducting (HTS) applications require high-capacity and high-reliability cooling solutions to keep HTS materials at temperatures of approximately 80 K. In order to meet such requirements, Sumitomo Heavy Industries, Ltd. (SHI) has been developing a high-cooling-capacity GM-type active-buffer pulse tube cryocooler. An experimental unit was designed, built and tested. A cooling capacity of 390.5 W at 80 K with a COP of 0.042 was achieved with an input power of approximately 9 kW. The cold stage usually reaches a stable temperature of about 25 K within one hour starting from room temperature. Also, a simplified analysis was carried out to better understand the experimental unit. In the analysis, the regenerator, thermal conduction, heat exchanger and radiation losses were calculated. The net cooling capacity was about 80% of the PV work. The experimental results, the analysis method and results are reported in this paper.

  12. Application of the conjugate-gradient method to ground-water models

    USGS Publications Warehouse

    Manteuffel, T.A.; Grove, D.B.; Konikow, Leonard F.

    1984-01-01

    The conjugate-gradient method can solve efficiently and accurately finite-difference approximations to the ground-water flow equation. An aquifer-simulation model using the conjugate-gradient method was applied to a problem of ground-water flow in an alluvial aquifer at the Rocky Mountain Arsenal, Denver, Colorado. For this application, the accuracy and efficiency of the conjugate-gradient method compared favorably with other available methods for steady-state flow. However, its efficiency relative to other available methods depends on the nature of the specific problem. The main advantage of the conjugate-gradient method is that it does not require the use of iteration parameters, thereby eliminating this partly subjective procedure. (USGS)
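    A bare-bones conjugate-gradient solver shows the property noted above, that no user-supplied iteration parameters are needed; the matrix below is a one-dimensional finite-difference stand-in, not the aquifer model.

    ```python
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
        """Conjugate-gradient solution of A x = b for symmetric positive-definite A."""
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter or len(b)):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    # 1D finite-difference analogue of a steady flow equation: -h'' = f.
    n = 100
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x = conjugate_gradient(A, b)
    print("residual:", np.linalg.norm(A @ x - b))
    ```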

  13. Numerical methods for stochastic differential equations

    NASA Astrophysics Data System (ADS)

    Kloeden, Peter; Platen, Eckhard

    1991-06-01

    The numerical analysis of stochastic differential equations differs significantly from that of ordinary differential equations due to the peculiarities of stochastic calculus. This book provides an introduction to stochastic calculus and stochastic differential equations, both theory and applications. The main emphasis is placed on the numerical methods needed to solve such equations. It assumes an undergraduate background in mathematical methods typical of engineers and physicists, though many chapters begin with a descriptive summary which may be accessible to others who only require numerical recipes. To help the reader develop an intuitive understanding of the underlying mathematics and hands-on numerical skills, exercises and over 100 PC exercises (PC = personal computer) are included. The stochastic Taylor expansion provides the key tool for the systematic derivation and investigation of discrete time numerical methods for stochastic differential equations. The book presents many new results on higher order methods for strong sample path approximations and for weak functional approximations, including implicit, predictor-corrector, extrapolation and variance-reduction methods. Besides serving as a basic text on such methods, the book offers the reader ready access to a large number of potential research problems in a field that is just beginning to expand rapidly and is widely applicable.
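
    To make the idea concrete, the simplest scheme obtained by truncating the stochastic Taylor expansion is the Euler-Maruyama method (strong order 0.5). A minimal sketch, not taken from the book:

    ```python
    import numpy as np

    def euler_maruyama(f, g, x0, t_end, n_steps, rng=None):
        """Integrate dX = f(X) dt + g(X) dW, the lowest-order truncation
        of the stochastic Taylor expansion."""
        rng = rng or np.random.default_rng()
        dt = t_end / n_steps
        x = np.empty(n_steps + 1)
        x[0] = x0
        for k in range(n_steps):
            dw = rng.normal(0.0, np.sqrt(dt))   # Brownian increment
            x[k + 1] = x[k] + f(x[k]) * dt + g(x[k]) * dw
        return x

    # Example: Ornstein-Uhlenbeck process dX = -X dt + 0.5 dW
    path = euler_maruyama(lambda x: -x, lambda x: 0.5, x0=1.0,
                          t_end=5.0, n_steps=1000)
    ```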

  14. Local Laplacian Coding From Theoretical Analysis of Local Coding Schemes for Locally Linear Classification.

    PubMed

    Pang, Junbiao; Qin, Lei; Zhang, Chunjie; Zhang, Weigang; Huang, Qingming; Yin, Baocai

    2015-12-01

    Local coordinate coding (LCC) is a framework to approximate a Lipschitz smooth function by combining linear functions into a nonlinear one. For locally linear classification, LCC requires a coding scheme that heavily determines the nonlinear approximation ability, posing two main challenges: 1) locality, so that faraway anchors have a smaller influence on the current datum, and 2) flexibility, balancing the reconstruction of the current datum against locality. In this paper, we address the problem from the theoretical analysis of the simplest local coding schemes, i.e., local Gaussian coding and local Student coding, and propose local Laplacian coding (LPC) to achieve both the locality and the flexibility. We apply LPC to locally linear classifiers to solve diverse classification tasks. Performance comparable to, or exceeding, that of state-of-the-art methods demonstrates the effectiveness of the proposed method.

  15. Response surface method in geotechnical/structural analysis, phase 1

    NASA Astrophysics Data System (ADS)

    Wong, F. S.

    1981-02-01

    In the response surface approach, an approximating function is fit to a long-running computer code based on a limited number of code calculations. The approximating function, called the response surface, is then used to replace the code in the subsequent repetitive computations required in a statistical analysis. The procedure of response surface development and the feasibility of the method are shown using a sample problem in slope stability which is based on data from centrifuge experiments of model soil slopes and involves five random soil parameters. It is shown that a response surface can be constructed based on as few as four code calculations and that the response surface is computationally extremely efficient compared to the code calculation. Potential applications of this research include probabilistic analysis of dynamic, complex, nonlinear soil/structure systems such as slope stability, liquefaction, and nuclear reactor safety.
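
    A minimal sketch of the idea (our illustration; the report's surface and variables differ): with four code runs one can fit a four-term bilinear surface in two parameters by least squares and then evaluate the surface in place of the code:

    ```python
    import numpy as np

    def fit_response_surface(X, y):
        """Fit y ~ c0 + c1*x1 + c2*x2 + c3*x1*x2 to code runs.
        X: (n_runs, 2) input parameters, y: (n_runs,) code outputs;
        four runs suffice to determine the four coefficients."""
        x1, x2 = X[:, 0], X[:, 1]
        design = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
        coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
        return coeffs

    def evaluate_surface(coeffs, x1, x2):
        """Cheap surrogate evaluation used in the repeated statistical runs."""
        return coeffs @ np.array([1.0, x1, x2, x1 * x2])
    ```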

  16. An efficient nonlinear finite-difference approach in the computational modeling of the dynamics of a nonlinear diffusion-reaction equation in microbial ecology.

    PubMed

    Macías-Díaz, J E; Macías, Siegfried; Medina-Ramírez, I E

    2013-12-01

    In this manuscript, we present a computational model to approximate the solutions of a partial differential equation which describes the growth dynamics of microbial films. The numerical technique reported in this work is an explicit, nonlinear finite-difference methodology which is computationally implemented using Newton's method. Our scheme is compared numerically against an implicit, linear finite-difference discretization of the same partial differential equation, whose computer coding requires an implementation of the stabilized bi-conjugate gradient method. Our numerical results evince that the nonlinear approach results in a more efficient approximation to the solutions of the biofilm model considered, and demands less computer memory. Moreover, the positivity of initial profiles is preserved in practice by the nonlinear scheme proposed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Costs of measuring leaf area index of corn

    NASA Technical Reports Server (NTRS)

    Daughtry, C. S. T.; Hollinger, S. E.

    1984-01-01

    The magnitude of plant-to-plant variability of leaf area of corn plants selected from uniform plots was examined and four representative methods for measuring leaf area index (LAI) were evaluated. The number of plants required and the relative costs for each sampling method were calculated to detect 10, 20, and 50% differences in LAI using 0.05 and 0.01 tests of significance and a 90% probability of success (beta = 0.1). The natural variability of leaf area per corn plant was nearly 10%. Additional variability or experimental error may be introduced by the measurement technique employed and by nonuniformity within the plot. Direct measurement of leaf area with an electronic area meter had the lowest CV and required the fewest plants to be sampled, but required approximately the same amount of time as the leaf area/weight ratio method to detect comparable differences. Indirect methods based on measurements of length and width of leaves required more plants but less total time than the direct method. Unless the coefficients for converting length and width to area are verified frequently, the indirect methods may be biased. When true differences in LAI among treatments exceed 50% of the mean, all four methods are equal. The method of choice depends on the resources available, the differences to be detected, and what additional information, such as leaf weight or stalk weight, is also desired.
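
    The kind of sample-size calculation the study performed can be sketched with a standard normal-approximation power formula (our illustration; the paper's exact procedure may differ):

    ```python
    from scipy.stats import norm

    def plants_required(cv, diff, alpha=0.05, beta=0.10):
        """Approximate plants per treatment to detect a relative difference
        `diff` between two means, given a coefficient of variation `cv`
        (two-sided test at level alpha, power 1 - beta)."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(1 - beta)
        return 2 * ((z_a + z_b) * cv / diff) ** 2

    # e.g. CV = 10% natural variability, detect a 10% difference in LAI
    print(plants_required(0.10, 0.10))   # ~21 plants per treatment
    ```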

  18. Krylov subspace methods for computing hydrodynamic interactions in Brownian dynamics simulations

    PubMed Central

    Ando, Tadashi; Chow, Edmond; Saad, Yousef; Skolnick, Jeffrey

    2012-01-01

    Hydrodynamic interactions play an important role in the dynamics of macromolecules. The most common way to take into account hydrodynamic effects in molecular simulations is in the context of a Brownian dynamics simulation. However, the calculation of correlated Brownian noise vectors in these simulations is computationally very demanding and alternative methods are desirable. This paper studies methods based on Krylov subspaces for computing Brownian noise vectors. These methods are related to Chebyshev polynomial approximations, but do not require eigenvalue estimates. We show that only low accuracy is required in the Brownian noise vectors to accurately compute values of dynamic and static properties of polymer and monodisperse suspension models. With this level of accuracy, the computational time of Krylov subspace methods scales very nearly as O(N^2) for the number of particles N up to 10 000, which was the limit tested. The performance of the Krylov subspace methods, especially the “block” version, is slightly better than that of the Chebyshev method, even without taking into account the additional cost of eigenvalue estimates required by the latter. Furthermore, at N = 10 000, the Krylov subspace method is 13 times faster than the exact Cholesky method. Thus, Krylov subspace methods are recommended for performing large-scale Brownian dynamics simulations with hydrodynamic interactions. PMID:22897254
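
    The core task is computing D^{1/2} z for the diffusion tensor D without factorizing or diagonalizing it. A bare-bones Lanczos sketch of that idea (our illustration, with no reorthogonalization, breakdown handling, or error control, unlike the paper's block variant):

    ```python
    import numpy as np
    from scipy.linalg import sqrtm

    def krylov_sqrt_times_vector(D, z, m=30):
        """Approximate D**0.5 @ z using m Lanczos steps; D is symmetric
        positive definite (e.g. a diffusion tensor). Only matrix-vector
        products with D are required."""
        beta0 = np.linalg.norm(z)
        n = len(z)
        Q = np.zeros((n, m))
        alpha, beta = np.zeros(m), np.zeros(m - 1)
        Q[:, 0] = z / beta0
        for j in range(m):
            w = D @ Q[:, j]
            if j > 0:
                w -= beta[j - 1] * Q[:, j - 1]
            alpha[j] = Q[:, j] @ w
            w -= alpha[j] * Q[:, j]
            if j < m - 1:
                beta[j] = np.linalg.norm(w)
                Q[:, j + 1] = w / beta[j]
        # Project the square root onto the small tridiagonal matrix T.
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        e1 = np.zeros(m)
        e1[0] = 1.0
        return beta0 * np.real(Q @ (sqrtm(T) @ e1))
    ```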

  19. A finite difference Davidson procedure to sidestep full ab initio hessian calculation: Application to characterization of stationary points and transition state searches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharada, Shaama Mallikarjun; Bell, Alexis T.; Head-Gordon, Martin

    2014-04-28

    The cost of calculating nuclear hessians, either analytically or by finite difference methods, during the course of quantum chemical analyses can be prohibitive for systems containing hundreds of atoms. In many applications, though, only a few eigenvalues and eigenvectors, and not the full hessian, are required. For instance, the lowest one or two eigenvalues of the full hessian are sufficient to characterize a stationary point as a minimum or a transition state (TS), respectively. We describe here a method that can eliminate the need for hessian calculations for both the characterization of stationary points as well as searches for saddle points. A finite differences implementation of the Davidson method that uses only first derivatives of the energy to calculate the lowest eigenvalues and eigenvectors of the hessian is discussed. This method can be implemented in conjunction with geometry optimization methods such as partitioned-rational function optimization (P-RFO) to characterize stationary points on the potential energy surface. With equal ease, it can be combined with interpolation methods that determine TS guess structures, such as the freezing string method, to generate approximate hessian matrices in lieu of full hessians as input to P-RFO for TS optimization. This approach is shown to achieve significant cost savings relative to exact hessian calculation when applied to both stationary point characterization as well as TS optimization. The basic reason is that the present approach scales one power of system size lower, since the rate of convergence is approximately independent of the size of the system. Therefore, the finite-difference Davidson method is a viable alternative to full hessian calculation for stationary point characterization and TS search, particularly when analytical hessians are not available or require substantial computational effort.
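
    The kernel of such a scheme is forming Hessian-vector products from gradients alone; a Davidson or Lanczos eigensolver then recovers the lowest eigenpairs from a handful of these products. A minimal sketch (our notation, not the authors' code):

    ```python
    import numpy as np

    def hessian_vector_fd(grad, x, v, eps=1e-4):
        """Central finite-difference Hessian-vector product H @ v using
        only two gradient evaluations; `grad` is the analytic gradient
        of the energy and `v` is normalized before differencing."""
        v = v / np.linalg.norm(v)
        return (grad(x + eps * v) - grad(x - eps * v)) / (2 * eps)
    ```

    Each Davidson iteration consumes one such product, so the cost scales with the number of gradient calls rather than with building the full 3N x 3N hessian.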

  20. RES: Regularized Stochastic BFGS Algorithm

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.

  1. Online adaptive optimal control for continuous-time nonlinear systems with completely unknown dynamics

    NASA Astrophysics Data System (ADS)

    Lv, Yongfeng; Na, Jing; Yang, Qinmin; Wu, Xing; Guo, Yu

    2016-01-01

    An online adaptive optimal control is proposed for continuous-time nonlinear systems with completely unknown dynamics, which is achieved by developing a novel identifier-critic-based approximate dynamic programming algorithm with a dual neural network (NN) approximation structure. First, an adaptive NN identifier is designed to obviate the requirement of complete knowledge of system dynamics, and a critic NN is employed to approximate the optimal value function. Then, the optimal control law is computed based on the information from the identifier NN and the critic NN, so that the actor NN is not needed. In particular, a novel adaptive law design method with the parameter estimation error is proposed to online update the weights of both identifier NN and critic NN simultaneously, which converge to small neighbourhoods around their ideal values. The closed-loop system stability and the convergence to small vicinity around the optimal solution are all proved by means of the Lyapunov theory. The proposed adaptation algorithm is also improved to achieve finite-time convergence of the NN weights. Finally, simulation results are provided to exemplify the efficacy of the proposed methods.

  2. Spectral Analysis and Metastable Absorption Measurements of High Pressure Capacitively and Inductively Coupled Radio-Frequency Argon-Helium Discharges

    DTIC Science & Technology

    2013-06-01

    density of the s5 and s3 metastable states for different discharge parameters. The absorption data were fit to an approximated Voigt profile from which ... pressures are required in order to have enough spin-orbit relaxation to maintain CW lasing without significant bottlenecking. There are many methods to ... for just that [(5),(12)]. This method allows for a wide study of energy levels since the limiting factor is the sensitivity of the detector and modern

  3. The Polygon-Ellipse Method of Data Compression of Weather Maps

    DTIC Science & Technology

    1994-03-28

    Report No. DOT/FAA/RD-9416, Project Report AD-A278 958, ATC-213. The Polygon-Ellipse Method of Data Compression of Weather Maps, J.L. Gertz, 28 March 1994. ... a means must be found to compress this image. The Polygon-Ellipse (PE) encoding algorithm developed in this report represents weather regions ... severely compress the image. For example, Mode S would require approximately a 10-fold compression. In addition, the algorithms used to perform the

  4. A rapid method for the determination of some antihypertensive and antipyretic drugs by thermometric titrimetry.

    PubMed

    Abbasi, U M; Chand, F; Bhanger, M I; Memon, S A

    1986-02-01

    A simple and rapid method is described for the direct thermometric determination of milligram amounts of methyl dopa, propranolol hydrochloride, 1-phenyl-3-methylpyrazolone (MPP) and 2,3-dimethyl-1-phenylpyrazol-5-one (phenazone) in the presence of excipients. The compounds are reacted with N'-bromosuccinimide and the heat of reaction is used to determine the end-point of the titration. The time required is approximately 2 min, and the accuracy is analytically acceptable.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Hao; Ashkar, Rana; Steinke, Nina

    A method dubbed grating-based holography was recently used to determine the structure of colloidal fluids in the rectangular grooves of a diffraction grating from X-ray scattering measurements. Similar grating-based measurements have also been recently made with neutrons using a technique called spin-echo small-angle neutron scattering. The analysis of the X-ray diffraction data was done using an approximation that treats the X-ray phase change caused by the colloidal structure as a small perturbation to the overall phase pattern generated by the grating. In this paper, the adequacy of this weak phase approximation is explored for both X-ray and neutron grating holography. Additionally, it is found that there are several approximations hidden within the weak phase approximation that can lead to incorrect conclusions from experiments. In particular, the phase contrast for the empty grating is a critical parameter. Finally, while the approximation is found to be perfectly adequate for X-ray grating holography experiments performed to date, it cannot be applied to similar neutron experiments because the latter technique requires much deeper grating channels.

  6. Binarized cross-approximate entropy in crowdsensing environment.

    PubMed

    Skoric, Tamara; Mohamoud, Omer; Milovanovic, Branislav; Japundzic-Zigon, Nina; Bajic, Dragana

    2017-01-01

    Personalised monitoring in health applications has been recognised as part of the mobile crowdsensing concept, where subjects equipped with sensors extract information and share it for personal or common benefit. Limited transmission resources impose the use of local analyses methodology, but this approach is incompatible with analytical tools that require stationary and artefact-free data. This paper proposes a computationally efficient binarised cross-approximate entropy, referred to as (X)BinEn, for unsupervised cardiovascular signal processing in environments where energy and processor resources are limited. The proposed method is a descendant of the cross-approximate entropy ((X)ApEn). It operates on binary, differentially encoded data series split into m-sized vectors. The Hamming distance is used as a distance measure, while a search for similarities is performed on the vector sets. The procedure is tested on rats under shaker and restraint stress, and compared to the existing (X)ApEn results. The number of processing operations is reduced. (X)BinEn captures entropy changes in a similar manner to (X)ApEn. The coding coarseness yields an adverse effect of reduced sensitivity, but it attenuates parameter inconsistency and binary bias. A special case of (X)BinEn is equivalent to Shannon's entropy. A binary conditional entropy for m = 1 vectors is embedded into the (X)BinEn procedure. (X)BinEn can be applied to a single time series as an auto-entropy method, or to a pair of time series, as a cross-entropy method. Its low processing requirements make it suitable for mobile, battery operated, self-attached sensing devices, with limited power and processor resources. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Kurtosis Approach Nonlinear Blind Source Separation

    NASA Technical Reports Server (NTRS)

    Duong, Vu A.; Stubberud, Allen R.

    2005-01-01

    In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximating polynomials are estimated by the gradient descent method conditional on the higher order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation. Keywords: Independent Component Analysis, Kurtosis, Higher order statistics.

  8. A strategy for improved computational efficiency of the method of anchored distributions

    NASA Astrophysics Data System (ADS)

    Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram

    2013-06-01

    This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations (a "bundle") replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.

  9. A Truncated Nuclear Norm Regularization Method Based on Weighted Residual Error for Matrix Completion.

    PubMed

    Qing Liu; Zhihui Lai; Zongwei Zhou; Fangjun Kuang; Zhong Jin

    2016-01-01

    Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. The truncated nuclear norm has recently been proposed as a better approximation to the rank of a matrix than the nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than the nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extension model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. The ETNNR-WRE is much more robust to the number of subtracted singular values than the TNNR-WRE, TNNR alternating direction method of multipliers, and TNNR accelerated proximal gradient with line search methods. Experimental results using both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than the TNNR and Iteratively Reweighted Nuclear Norm (IRNN) methods.
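
    For orientation (a standard definition, not the paper's solver): the truncated nuclear norm of a matrix is the sum of its singular values beyond the r largest, so minimizing it penalizes everything except the dominant rank-r structure:

    ```python
    import numpy as np

    def truncated_nuclear_norm(A, r):
        """Sum of the singular values beyond the r largest, used as a
        rank surrogate; the TNNR solvers in the paper minimize this
        within an ADMM / accelerated proximal gradient loop."""
        s = np.linalg.svd(A, compute_uv=False)   # descending order
        return s[r:].sum()
    ```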

  10. Temporal resolution improvement using PICCS in MDCT cardiac imaging.

    PubMed

    Chen, Guang-Hong; Tang, Jie; Hsieh, Jiang

    2009-06-01

    The current paradigm for temporal resolution improvement is to add more source-detector units and/or increase the gantry rotation speed. The purpose of this article is to present an innovative alternative method to potentially improve temporal resolution by approximately a factor of 2 for all MDCT scanners without requiring hardware modification. The central enabling technology is a most recently developed image reconstruction method: Prior image constrained compressed sensing (PICCS). Using the method, cardiac CT images can be accurately reconstructed using the projection data acquired in an angular range of about 120 degrees, which is roughly 50% of the standard short-scan angular range (approximately 240 degrees for an MDCT scanner). As a result, the temporal resolution of MDCT cardiac imaging can be universally improved by approximately a factor of 2. In order to validate the proposed method, two in vivo animal experiments were conducted using a state-of-the-art 64-slice CT scanner (GE Healthcare, Waukesha, WI) at different gantry rotation times and different heart rates. One animal was scanned at heart rate of 83 beats per minute (bpm) using 400 ms gantry rotation time and the second animal was scanned at 94 bpm using 350 ms gantry rotation time, respectively. Cardiac coronary CT imaging can be successfully performed at high heart rates using a single-source MDCT scanner and projection data from a single heart beat with gantry rotation times of 400 and 350 ms. Using the proposed PICCS method, the temporal resolution of cardiac CT imaging can be effectively improved by approximately a factor of 2 without modifying any scanner hardware. This potentially provides a new method for single-source MDCT scanners to achieve reliable coronary CT imaging for patients at higher heart rates than the current heart rate limit of 70 bpm without using the well-known multisegment FBP reconstruction algorithm. This method also enables dual-source MDCT scanner to achieve higher temporal resolution without further hardware modifications.

  11. Moment method analysis of linearly tapered slot antennas: Low loss components for switched beam radiometers

    NASA Technical Reports Server (NTRS)

    Koeksal, Adnan; Trew, Robert J.; Kauffman, J. Frank

    1992-01-01

    A Moment Method Model for the radiation pattern characterization of single Linearly Tapered Slot Antennas (LTSA) in air or on a dielectric substrate is developed. This characterization consists of: (1) finding the radiated far-fields of the antenna; (2) determining the E-Plane and H-Plane beamwidths and sidelobe levels; and (3) determining the D-Plane beamwidth and cross polarization levels, as antenna parameters length, height, taper angle, substrate thickness, and the relative substrate permittivity vary. The LTSA geometry does not lend itself to analytical solution with the given parameter ranges. Therefore, a computer modeling scheme and a code are necessary to analyze the problem. This necessity imposes some further objectives or requirements on the solution method (modeling) and tool (computer code). These may be listed as follows: (1) a good approximation to the real antenna geometry; and (2) feasible computer storage and time requirements. According to these requirements, the work is concentrated on the development of efficient modeling schemes for these type of problems and on reducing the central processing unit (CPU) time required from the computer code. A Method of Moments (MoM) code is developed for the analysis of LTSA's within the parameter ranges given.

  12. Statistical Attitude Determination

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    2010-01-01

    All spacecraft require attitude determination at some level of accuracy. This can be a very coarse requirement of tens of degrees, in order to point solar arrays at the sun, or a very fine requirement in the milliarcsecond range, as required by the Hubble Space Telescope. A toolbox of attitude determination methods, applicable across this wide range, has been developed over the years. There have been many advances in the thirty years since the publication of the reference text, but the fundamentals remain the same. One significant change is that onboard attitude determination has largely superseded ground-based attitude determination, due to the greatly increased power of onboard computers. The availability of relatively inexpensive radiation-hardened microprocessors has led to the development of "smart" sensors, with autonomous star trackers being the first spacecraft application. Another new development is attitude determination using interferometry of radio signals from the Global Positioning System (GPS) constellation. This article reviews both the classic material and these newer developments at approximately the same level, with emphasis on methods suitable for use onboard a spacecraft. We discuss both "single frame" methods that are based on measurements taken at a single point in time, and sequential methods that use information about spacecraft dynamics to combine the information from a time series of measurements.

  13. 2.097μ Cth:YAG flashlamp pumped high energy high efficiency laser operation (patent pending)

    NASA Astrophysics Data System (ADS)

    Bar-Joseph, Dan

    2018-02-01

    Flashlamp pumped Cth:YAG lasers are mainly used in medical applications (urology). The main laser transition is at 2.13μ and is called quasi-three-level, having an emission cross-section of 7x10-21 cm2 and a ground state absorption of approximately 5%/cm. Because of the relatively low absorption, combined with a modest emission cross-section, the laser requires high reflectivity output coupling, and therefore high intra-cavity energy density, which limits the output to approximately 4 J/pulse for reliable operation. This paper will describe a method of efficiently generating high output energy at low intra-cavity energy density by using an alternative 2.097μ transition having an emission cross-section of 5x10-21 cm2 and a ground level absorption of approximately 14%/cm.

  14. James Webb Space Telescope (JWST) Integrated Science Instruments Module (ISIM) Electronics Compartment (IEC) Conformal Shields Composite Bond Structure Qualification Test Method

    NASA Technical Reports Server (NTRS)

    Yew, Calinda; Stephens, Matt

    2015-01-01

    The JWST IEC conformal shields are mounted onto a composite frame structure that must undergo qualification testing to satisfy mission assurance requirements. The composite frame segments are bonded together at the joints using epoxy, EA 9394. The development of a test method to verify the integrity of the bonded structure in its operating environment introduces challenges in terms of requirements definition and the attainment of success criteria. Even though protoflight thermal requirements were not achieved, the first attempt at exposing the structure to cryogenic operating conditions in a thermal vacuum environment resulted in the failure of approximately one bonded joint during mechanical pull tests performed at 1.25 times the flight loads. Failure analysis concluded that the failure mode was due to adhesive cracks that formed and propagated along stress-concentrated fillets as a result of poor bond squeeze-out control during fabrication. Bond repairs were made and the structure was successfully re-tested with an improved LN2 immersion test method to achieve protoflight thermal requirements.

  15. An approximate Riemann solver for thermal and chemical nonequilibrium flows

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.

    1994-01-01

    Among the many methods available for the determination of inviscid fluxes across a surface of discontinuity, the flux-difference-splitting technique that employs Roe-averaged variables has been used extensively by the CFD community because of its simplicity and its ability to capture shocks exactly. This method, originally developed for perfect gas flows, has since been extended to equilibrium as well as nonequilibrium flows. Determination of the Roe-averaged variables for the case of a perfect gas flow is a simple task; however, for thermal and chemical nonequilibrium flows, some of the variables are not uniquely defined. Methods available in the literature to determine these variables seem to lack sound bases. The present paper describes a simple, yet accurate, method to determine all the variables for nonequilibrium flows in the Roe-average state. The basis for this method is the requirement that the Roe-averaged variables form a consistent set of thermodynamic variables. The present method satisfies the requirement that the square of the speed of sound be positive.
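
    For context, the standard perfect-gas Roe averages (textbook formulas, quoted here for orientation rather than from this paper) are density-weighted means of the left and right states:

    ```latex
    \tilde{u} = \frac{\sqrt{\rho_L}\,u_L + \sqrt{\rho_R}\,u_R}{\sqrt{\rho_L} + \sqrt{\rho_R}}, \qquad
    \tilde{H} = \frac{\sqrt{\rho_L}\,H_L + \sqrt{\rho_R}\,H_R}{\sqrt{\rho_L} + \sqrt{\rho_R}}, \qquad
    \tilde{a}^2 = (\gamma - 1)\left(\tilde{H} - \tfrac{1}{2}\tilde{u}^2\right)
    ```

    For nonequilibrium flows no such closed form fixes every averaged quantity uniquely, which is the gap the present method addresses by requiring thermodynamic consistency of the averaged set.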

  16. Trafficking of excitatory amino acid transporter 2-laden vesicles in cultured astrocytes: a comparison between approximate and exact determination of trajectory angles

    PubMed Central

    Cavender, Chapin E.; Gottipati, Manoj K.; Parpura, Vladimir

    2014-01-01

    A clear consensus concerning the mechanisms of intracellular secretory vesicle trafficking in astrocytes is lacking in the physiological literature. A good characterization of vesicle trafficking that may assist researchers in achieving that goal is the trajectory angle, defined as the angle between the trajectory of a vesicle and a line radial to the cell’s nucleus. In this study, we provide a precise definition of the trajectory angle, describe and compare two methods for its calculation in terms of measurable trafficking parameters, and give recommendations for the appropriate use of each method. We investigated the trafficking of excitatory amino acid transporter 2 (EAAT2) fluorescently tagged with enhanced green fluorescent protein (EGFP) to quantify and validate the usefulness of each method. The motion of fluorescent puncta—taken to represent vesicles containing EAAT2-EGFP—was found to be typical of secretory vesicle trafficking. An exact method for calculating the trajectory angle of these puncta produced no error but required a large computation time. An approximate method reduced the requisite computation time but produced an error that depended on the inverse of the ratio of the punctum’s initial distance from the nucleus centroid to its maximal displacement. Fitting this dependence to a power function allowed us to establish an exclusion distance from the centroid, beyond which the approximate method is much less likely to produce an error above an acceptable 5%. We recommend that the exact method be used to calculate the trajectory angle for puncta closer to the nucleus centroid than this exclusion distance. PMID:25408463
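
    A direct implementation of the stated definition takes only a few lines (our sketch; the function name and coordinate conventions are ours, not the paper's):

    ```python
    import numpy as np

    def trajectory_angle(nucleus, start, end):
        """Angle (degrees) between a punctum's trajectory (start -> end)
        and the line radial from the nucleus centroid through its
        starting position."""
        radial = np.asarray(start) - np.asarray(nucleus)
        motion = np.asarray(end) - np.asarray(start)
        cos_theta = (radial @ motion) / (np.linalg.norm(radial) *
                                         np.linalg.norm(motion))
        return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    ```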

  17. Extrapolation of rotating sound fields.

    PubMed

    Carley, Michael

    2018-03-01

    A method is presented for the computation of the acoustic field around a tonal circular source, such as a rotor or propeller, based on an exact formulation which is valid in the near and far fields. The only input data required are the pressure field sampled on a cylindrical surface surrounding the source, with no requirement for acoustic velocity or pressure gradient information. The formulation is approximated with exponentially small errors and appears to require input data at a theoretically minimal number of points. The approach is tested numerically, with and without added noise, and demonstrates excellent performance, especially when compared to extrapolation using a far-field assumption.

  18. Methods for computing color anaglyphs

    NASA Astrophysics Data System (ADS)

    McAllister, David F.; Zhou, Ya; Sullivan, Sophia

    2010-02-01

    A new computation technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares program for each pixel in a stereo pair and is based on minimizing color distances in the CIE L*a*b* uniform color space. The method is compared with several techniques for computing anaglyphs including approximation in CIE space using the Euclidean and Uniform metrics, the Photoshop method and its variants, and a method proposed by Peter Wimmer. We also discuss the methods of desaturation and gamma correction for reducing retinal rivalry.

  19. A phase space approach to wave propagation with dispersion.

    PubMed

    Ben-Benjamin, Jonathan S; Cohen, Leon; Loughlin, Patrick J

    2015-08-01

    A phase space approximation method for linear dispersive wave propagation with arbitrary initial conditions is developed. The results expand on a previous approximation in terms of the Wigner distribution of a single mode. In contrast to this previously considered single-mode case, the approximation presented here is for the full wave and is obtained by a different approach. This solution requires one to obtain (i) the initial modal functions from the given initial wave, and (ii) the initial cross-Wigner distribution between different modal functions. The full wave is the sum of modal functions. The approximation is obtained for general linear wave equations by transforming the equations to phase space, and then solving in the new domain. It is shown that each modal function of the wave satisfies a Schrödinger-type equation where the equivalent "Hamiltonian" operator is the dispersion relation corresponding to the mode and where the wavenumber is replaced by the wavenumber operator. Application to the beam equation is considered to illustrate the approach.

  20. Optics of Water Microdroplets with Soot Inclusions: Exact Versus Approximate Results

    NASA Technical Reports Server (NTRS)

    Liu, Li; Mishchenko, Michael I.

    2016-01-01

    We use the recently generalized version of the multi-sphere superposition T-matrix method (STMM) to compute the scattering and absorption properties of microscopic water droplets contaminated by black carbon. The soot material is assumed to be randomly distributed throughout the droplet interior in the form of numerous small spherical inclusions. Our numerically-exact STMM results are compared with approximate ones obtained using the Maxwell-Garnett effective-medium approximation (MGA) and the Monte Carlo ray-tracing approximation (MCRTA). We show that the popular MGA can be used to calculate the droplet optical cross sections, single-scattering albedo, and asymmetry parameter provided that the soot inclusions are quasi-uniformly distributed throughout the droplet interior, but can fail in computations of the elements of the scattering matrix depending on the volume fraction of soot inclusions. The integral radiative characteristics computed with the MCRTA can deviate more significantly from their exact STMM counterparts, while accurate MCRTA computations of the phase function require droplet size parameters substantially exceeding 60.

  1. Satisfying positivity requirement in the Beyond Complex Langevin approach

    NASA Astrophysics Data System (ADS)

    Wyrzykowski, Adam; Ruba, Błażej

    2018-03-01

    The problem of finding a positive distribution which corresponds to a given complex density is studied. By the requirement that the moments of the positive distribution and of the complex density are equal, one can reduce the problem to solving the matching conditions. These conditions are a set of quadratic equations, and the Gröbner basis method was used to find their solutions when the problem is restricted to a few of the lowest-order moments. For a Gaussian complex density, these approximate solutions are compared with the exact solution, which is known in this special case.
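
    As a deliberately simplified real-valued toy of the moment-matching step (our construction; the paper works with complex densities and complexified supports): represent the sought distribution by two weighted point masses and hand the resulting polynomial matching conditions to a Gröbner basis routine:

    ```python
    import sympy as sp

    w, x1, x2 = sp.symbols('w x1 x2')

    # Target moments m_n that the two-point distribution
    # P = w*delta(x - x1) + (1 - w)*delta(x - x2) must reproduce.
    m1, m2, m3 = 0, 1, 0   # toy values
    eqs = [w*x1 + (1 - w)*x2 - m1,
           w*x1**2 + (1 - w)*x2**2 - m2,
           w*x1**3 + (1 - w)*x2**3 - m3]

    # A Groebner basis triangularizes the polynomial system ...
    G = sp.groebner(eqs, w, x1, x2, order='lex')
    # ... after which the solutions can be read off or solved directly,
    # e.g. w = 1/2, x1 = 1, x2 = -1 for these toy moments.
    print(sp.solve(eqs, [w, x1, x2], dict=True))
    ```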

  2. Test functions for three-dimensional control-volume mixed finite-element methods on irregular grids

    USGS Publications Warehouse

    Naff, R.L.; Russell, T.F.; Wilson, J.D.; ,; ,; ,; ,; ,

    2000-01-01

    Numerical methods based on unstructured grids, with irregular cells, usually require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element methods, vector shape functions are used to approximate the distribution of velocities across cells and vector test functions are used to minimize the error associated with the numerical approximation scheme. For a logically cubic mesh, the lowest-order shape functions are chosen in a natural way to conserve intercell fluxes that vary linearly in logical space. Vector test functions, while somewhat restricted by the mapping into the logical reference cube, admit a wider class of possibilities. Ideally, an error minimization procedure to select the test function from an acceptable class of candidates would be the best procedure. Lacking such a procedure, we first investigate the effect of possible test functions on the pressure distribution over the control volume; specifically, we look for test functions that allow for the elimination of intermediate pressures on cell faces. From these results, we select three forms for the test function for use in a control-volume mixed method code and subject them to an error analysis for different forms of grid irregularity; errors are reported in terms of the discrete L2 norm of the velocity error. Of these three forms, one appears to produce optimal results for most forms of grid irregularity.

  3. A class of renormalised meshless Laplacians for boundary value problems

    NASA Astrophysics Data System (ADS)

    Basic, Josip; Degiuli, Nastia; Ban, Dario

    2018-02-01

    A meshless approach to approximating spatial derivatives on scattered point arrangements is presented in this paper. Three various derivations of approximate discrete Laplace operator formulations are produced using the Taylor series expansion and renormalised least-squares correction of the first spatial derivatives. Numerical analyses are performed for the introduced Laplacian formulations, and their convergence rate and computational efficiency are examined. The tests are conducted on regular and highly irregular scattered point arrangements. The results are compared to those obtained by the smoothed particle hydrodynamics method and the finite differences method on a regular grid. Finally, the strong form of various Poisson and diffusion equations with Dirichlet or Robin boundary conditions are solved in two and three dimensions by making use of the introduced operators in order to examine their stability and accuracy for boundary value problems. The introduced Laplacian operators perform well for highly irregular point distribution and offer adequate accuracy for mesh and mesh-free numerical methods that require frequent movement of the grid or point cloud.

  4. A spectral mimetic least-squares method for the Stokes equations with no-slip boundary condition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerritsma, Marc; Bochev, Pavel

    Formulation of locally conservative least-squares finite element methods (LSFEMs) for the Stokes equations with the no-slip boundary condition has been a long-standing problem. Existing LSFEMs that yield exactly divergence-free velocities require non-standard boundary conditions (Bochev and Gunzburger, 2009 [3]), while methods that admit the no-slip condition satisfy the incompressibility equation only approximately (Bochev and Gunzburger, 2009 [4, Chapter 7]). Here we address this problem by proving a new non-standard stability bound for the velocity–vorticity–pressure Stokes system augmented with a no-slip boundary condition. This bound gives rise to a norm-equivalent least-squares functional in which the velocity can be approximated by div-conforming finite element spaces, thereby enabling locally conservative approximation of this variable. Here, we also provide a practical realization of the new LSFEM using high-order spectral mimetic finite element spaces (Kreeft et al., 2011) and report several numerical tests, which confirm its mimetic properties.

  5. Eye gaze tracking using correlation filters

    NASA Astrophysics Data System (ADS)

    Karakaya, Mahmut; Bolme, David; Boehnen, Chris

    2014-03-01

    In this paper, we studied a method for eye gaze tracking that provides gaze estimation from a standard webcam with a zoom lens and reduces the setup and calibration requirements for new users. Specifically, we have developed a gaze estimation method based on the relative locations of points on the top of the eyelid and the eye corners. The gaze estimation method in this paper is based on the distances between the top point of the eyelid and the eye corners detected by correlation filters. Advanced correlation filters were found to provide facial landmark detections that are accurate enough to determine the subject's gaze direction to within approximately 4-5 degrees, although calibration errors often produce a larger overall shift in the estimates. This is approximately a circle of diameter 2 inches for a screen that is at arm's length from the subject. At this accuracy it is possible to figure out what regions of text or images the subject is looking at, but it falls short of being able to determine which word the subject has looked at.
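
    The two accuracy figures are mutually consistent (our arithmetic, assuming a nominal 24-inch arm's-length viewing distance L): a cone of full angle 4.5 degrees subtends roughly a 2-inch circle on the screen,

    ```latex
    d \approx 2L \tan\!\left(\frac{\theta}{2}\right)
      \approx 2 \times 24\,\mathrm{in} \times \tan(2.25^{\circ}) \approx 1.9\,\mathrm{in}.
    ```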

  6. Mission and system optimization of nuclear electric propulsion vehicles for lunar and Mars missions

    NASA Technical Reports Server (NTRS)

    Gilland, James H.

    1991-01-01

    The detailed mission and system optimization of low thrust electric propulsion missions is a complex, iterative process involving interaction between orbital mechanics and system performance. Through the use of appropriate approximations, initial system optimization and analysis can be performed for a range of missions. The intent of these calculations is to provide system and mission designers with simple methods to assess system design without requiring access or detailed knowledge of numerical calculus of variations optimizations codes and methods. Approximations for the mission/system optimization of Earth orbital transfer and Mars mission have been derived. Analyses include the variation of thruster efficiency with specific impulse. Optimum specific impulse, payload fraction, and power/payload ratios are calculated. The accuracy of these methods is tested and found to be reasonable for initial scoping studies. Results of optimization for Space Exploration Initiative lunar cargo and Mars missions are presented for a range of power system and thruster options.

  7. A spectral mimetic least-squares method for the Stokes equations with no-slip boundary condition

    DOE PAGES

    Gerritsma, Marc; Bochev, Pavel

    2016-03-22

    Formulation of locally conservative least-squares finite element methods (LSFEMs) for the Stokes equations with the no-slip boundary condition has been a long-standing problem. Existing LSFEMs that yield exactly divergence-free velocities require non-standard boundary conditions (Bochev and Gunzburger, 2009 [3]), while methods that admit the no-slip condition satisfy the incompressibility equation only approximately (Bochev and Gunzburger, 2009 [4, Chapter 7]). Here we address this problem by proving a new non-standard stability bound for the velocity–vorticity–pressure Stokes system augmented with a no-slip boundary condition. This bound gives rise to a norm-equivalent least-squares functional in which the velocity can be approximated by div-conforming finite element spaces, thereby enabling locally conservative approximation of this variable. Here, we also provide a practical realization of the new LSFEM using high-order spectral mimetic finite element spaces (Kreeft et al., 2011) and report several numerical tests, which confirm its mimetic properties.

  8. Robust approximation of image illumination direction in a segmentation-based crater detection algorithm for spacecraft navigation

    NASA Astrophysics Data System (ADS)

    Maass, Bolko

    2016-12-01

    This paper describes an efficient and easily implemented algorithmic approach to extracting an approximation to an image's dominant projected illumination direction, based on intermediary results from a segmentation-based crater detection algorithm (CDA), at a computational cost that is negligible in comparison to that of the prior stages of the CDA. Most contemporary CDAs built for spacecraft navigation use this illumination direction as a means of improving performance or even require it to function at all. Deducing the illumination vector from the image alone reduces the reliance on external information such as the accurate knowledge of the spacecraft inertial state, accurate time base and solar system ephemerides. Therefore, a method such as the one described in this paper is a prerequisite for true "Lost in Space" operation of a purely segmentation-based crater detecting and matching method for spacecraft navigation. The proposed method is verified using ray-traced lunar elevation model data, asteroid image data, and in a laboratory setting with a camera in the loop.

  9. Attitude maneuvers of a solar-powered electric orbital transfer vehicle

    NASA Astrophysics Data System (ADS)

    Jenkin, Alan B.

    1992-08-01

    Attitude maneuver requirements of a solar-powered electric orbital transfer vehicle have been studied in detail. This involved evaluation of the yaw, pitch, and roll profiles and associated angular accelerations needed to simultaneously steer the vehicle thrust vector and maintain the solar array pointed toward the sun. Maintaining the solar array pointed exactly at the sun leads to snap roll maneuvers which have very high (theoretically unbounded) accelerations, thereby imposing large torque requirements. The problem is exacerbated by the large solar arrays which are needed to generate the high levels of power needed by electric propulsion devices. A method of eliminating the snap roll maneuvers is presented. The method involves the determination of relaxed roll profiles which approximate a forced transition between alternate exact roll profiles and incur only small errors in solar array pointing. The method makes it feasible to perform the required maneuvers using currently available attitude control technology such as reaction wheels, hot gas jets, or gimballed main engines.

  10. Identification of Mold and Dampness-Associated Respiratory Morbidity in 2 Schools: Comparison of Questionnaire Survey Responses to National Data

    ERIC Educational Resources Information Center

    Sahakian, Nancy M.; White, Sandra K.; Park, Ju-Hyeong; Cox-Ganser, Jean M.; Kreiss, Kathleen

    2008-01-01

    Background: Dampness and mold problems are frequently encountered in schools. Approximately one third of US public schools require extensive repairs or need at least 1 building replaced. This study illustrates how national data can be used to identify building-related health risks in school employees and students. Methods: School employees (n =…

  11. 76 FR 51317 - Waiver of Citizenship Requirements for Crewmembers on Commercial Fishing Vessels

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-18

    ... preferred methods are by e-mail to [email protected] (include the docket number and ``Attention ..., mates, and pilots of water vessels.[3] We apply a load factor of 1.48 to this wage to account for benefits, which makes the hourly wage for a captain, mate, or pilot approximately $50.33.[4] At a cost of ...

  12. Thiophilic paramagnetic particles as a batch separation medium for the purification of antibodies from various source materials.

    PubMed

    Dawes, Clive C; Jewess, Philip J; Murray, Deborah A

    2005-03-15

    A preparation of thiophilic agarose-based paramagnetic particles (T-Gel) has been developed with physical characteristics (particle size and particle density) that facilitate its use as a batch separation medium suitable for the large-scale purification and isolation of immunoglobulins. The medium was used to extract immunoglobulins from a wide range of starting materials, including sera, ascites fluid, tissue culture medium, and whole blood. None of these starting materials required pretreatment such as clarification by centrifugation or filtration prior to antibody extraction. The antibody purity obtained using T-Gel compared well with that obtained using protein A agarose column chromatography. Yields were approximately 30 mg of immunoglobulins per milliliter of T-Gel, and little was required in the way of specialist equipment. The method is uncomplicated and involves a roll mix extraction overnight, followed by magnetic separation to facilitate supernatant removal and subsequent washing of the particles. Elution of bound antibodies was carried out at neutral pH to yield a concentration of immunoglobulins that was approximately 7 mg/ml. The method was found to be applicable to antibody purification from the blood serum of seven different mammalian species and for all immunoglobulin classes.

  13. An adaptive sampling method for variable-fidelity surrogate models using improved hierarchical kriging

    NASA Astrophysics Data System (ADS)

    Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli

    2018-01-01

    Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the improved VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of a high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.

  14. Performance of FFT methods in local gravity field modelling

    NASA Technical Reports Server (NTRS)

    Forsberg, Rene; Solheim, Dag

    1989-01-01

    Fast Fourier transform (FFT) methods provide a fast and efficient means of processing large amounts of gravity or geoid data in local gravity field modelling. The FFT methods, however, have a number of theoretical and practical limitations, especially the use of the flat-earth approximation and the requirement for gridded data. In spite of this, the method often yields excellent results in practice when compared to other more rigorous (and computationally expensive) methods, such as least-squares collocation. The good performance of the FFT methods illustrates that the theoretical approximations are offset by the capability of taking into account more data in larger areas, which is especially important for geoid predictions. For best results good data gridding algorithms are essential. In practice truncated collocation approaches may be used. For large areas at high latitudes the gridding must be done using suitable map projections such as UTM, to avoid trivial errors caused by the meridian convergence. The FFT methods are compared to ground truth data in New Mexico (xi, eta from delta g), Scandinavia (N from delta g, the geoid fits to 15 cm over 2000 km), and areas of the Atlantic (delta g from satellite altimetry using Wiener filtering). In all cases the FFT methods yield results comparable or superior to other methods.
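
    In the flat-earth approximation the geoid-from-gravity operator reduces to a multiplication in the frequency domain, which is what makes the FFT approach fast. A bare sketch of the idea (our illustration, omitting the tapering, zero padding, and kernel modifications a production implementation needs):

    ```python
    import numpy as np

    GAMMA = 9.81  # nominal normal gravity, m/s^2

    def geoid_from_gravity_fft(dg, dx, dy):
        """Planar (flat-earth) geoid undulations N [m] from gridded
        gravity anomalies dg [mGal] with spacings dx, dy [m], using
        the spectral relation N(k) = dg(k) / (gamma * |k|)."""
        ny, nx = dg.shape
        kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
        ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
        k = np.hypot(*np.meshgrid(kx, ky))
        k[0, 0] = np.inf                 # drop the (unrecoverable) mean
        dg_si = dg * 1e-5                # mGal -> m/s^2
        return np.real(np.fft.ifft2(np.fft.fft2(dg_si) / (GAMMA * k)))
    ```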

  15. GAPPARD: a computationally efficient method of approximating gap-scale disturbance in vegetation models

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Thürig, E.; Lischke, H.

    2013-09-01

    Models of vegetation dynamics that are designed for application at spatial scales larger than individual forest gaps suffer from several limitations. Typically, either a population average approximation is used that results in unrealistic tree allometry and forest stand structure, or models have a high computational demand because they need to simulate both a series of age-based cohorts and a number of replicate patches to account for stochastic gap-scale disturbances. The detail required by the latter method increases the number of calculations by two to three orders of magnitude compared to the less realistic population average approach. In an effort to increase the efficiency of dynamic vegetation models without sacrificing realism, we developed a new method for simulating stand-replacing disturbances that is both accurate and faster than approaches that use replicate patches. The GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) method works by postprocessing the output of deterministic, undisturbed simulations of a cohort-based vegetation model by deriving the distribution of patch ages at any point in time on the basis of a disturbance probability. With this distribution, the expected value of any output variable can be calculated from the output values of the deterministic undisturbed run at the time corresponding to the patch age. To account for temporal changes in model forcing (e.g., as a result of climate change), GAPPARD performs a series of deterministic simulations and interpolates between the results in the postprocessing step. We integrated the GAPPARD method in the vegetation model LPJ-GUESS, and evaluated it in a series of simulations along an altitudinal transect of an inner-Alpine valley. We obtained results very similar to the output of the original LPJ-GUESS model that uses 100 replicate patches, but simulation time was reduced by approximately a factor of 10. Our new method is therefore highly suited for rapidly approximating LPJ-GUESS results, and provides the opportunity for future studies over large spatial domains, allowing easier parameterization of tree species, faster identification of areas with interesting simulation results, and comparisons with large-scale datasets and results of other forest models.
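
    The postprocessing step can be sketched in a few lines (our illustration, under the simplifying assumption of a stationary disturbance regime, which makes the patch-age distribution geometric; GAPPARD itself also interpolates between runs under changing forcing):

    ```python
    import numpy as np

    def expected_output(undisturbed, p_dist):
        """Landscape expectation of a model output under stand-replacing
        disturbances with annual probability `p_dist`.
        `undisturbed[a]` is the output of an undisturbed patch of age a."""
        ages = np.arange(len(undisturbed))
        weights = p_dist * (1 - p_dist) ** ages   # P(patch age = a)
        weights /= weights.sum()                  # renormalize the cut tail
        return weights @ undisturbed
    ```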

  16. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Walsh, Jonathan A.

    We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.

  17. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE PAGES

    Romano, Paul K.; Walsh, Jonathan A.

    2018-02-03

    We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.
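
    For context, the sketch below implements the conventional constant-cross-section free-gas target sampling that such schemes refine: the target speed is drawn from a Maxwellian weighted by the relative speed, with a single rejection step against the bound v_r <= v_n + V. This is the textbook baseline, not the improved algorithm of the paper, and it assumes a cross section independent of energy.

      import math, random

      def sample_target_speed(v_n, kT_over_m):
          """Classic free-gas target sampling (constant cross section): draw
          target speed V and angle cosine mu from a Maxwellian weighted by the
          relative speed, using one rejection step."""
          beta = 1.0 / math.sqrt(2.0 * kT_over_m)   # x = beta*V is dimensionless
          y = beta * v_n
          while True:
              if random.random() < 2.0 / (2.0 + math.sqrt(math.pi) * y):
                  # sample x from 2*x^3*exp(-x^2): x^2 ~ Gamma(2, 1)
                  x = math.sqrt(-math.log(random.random() * random.random()))
              else:
                  # sample x from the Maxwellian speed density x^2*exp(-x^2)
                  x = math.sqrt(-math.log(random.random())
                                - math.log(random.random())
                                * math.cos(math.pi * random.random() / 2.0) ** 2)
              mu = 2.0 * random.random() - 1.0
              V = x / beta
              v_r = math.sqrt(v_n * v_n + V * V - 2.0 * v_n * V * mu)
              if random.random() < v_r / (v_n + V):   # the single rejection step
                  return V, mu

    The improved algorithm replaces this bound-and-reject construction with direct sampling of the relative velocity, which is what removes the extra rejection loop otherwise needed when the cross section varies with energy.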

  18. Stochastic system identification in structural dynamics

    USGS Publications Warehouse

    Safak, Erdal

    1988-01-01

    Recently, new identification methods have been developed by using the concepts of optimal recursive filtering and stochastic approximation. These methods, known as stochastic identification, are based on the statistical properties of the signal and noise, and do not require the assumptions of current methods. The criterion for stochastic system identification is that the residual of the identification (i.e., the difference between the recorded output and the output from the identified system) should be white noise. In this paper, first a brief review of the theory is given. Then, an application of the method is presented by using ambient vibration data from a nine-story building.
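
    The whiteness criterion is easy to check in practice: under the null hypothesis of a white residual, the normalized sample autocorrelation at nonzero lags stays within roughly +/-1.96/sqrt(N) at the 95% level. A minimal sketch:

      import numpy as np

      def is_white(residual, n_lags=20, band=1.96):
          """Whiteness test for an identification residual: the normalized
          sample autocorrelation at lags 1..n_lags should lie inside the
          +/- band/sqrt(N) interval expected for white noise."""
          e = np.asarray(residual, float)
          e = e - e.mean()
          n = len(e)
          r0 = np.dot(e, e) / n
          rho = np.array([np.dot(e[:-k], e[k:]) / (n * r0)
                          for k in range(1, n_lags + 1)])
          return bool(np.all(np.abs(rho) < band / np.sqrt(n))), rho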

  19. Gaussian theory for spatially distributed self-propelled particles

    NASA Astrophysics Data System (ADS)

    Seyed-Allaei, Hamid; Schimansky-Geier, Lutz; Ejtehadi, Mohammad Reza

    2016-12-01

    Obtaining a reduced description in terms of particle and momentum flux densities from the microscopic equations of motion of the particles requires approximations. The usual method, which we refer to as the truncation method, is to set the Fourier modes of the orientation distribution to zero above a given number. Here we propose another method to derive continuum equations for interacting self-propelled particles. The derivation is based on a Gaussian approximation (GA) of the distribution of the direction of particles. First, by means of simulation of the microscopic model, we justify that the distribution of individual directions fits well to a wrapped Gaussian distribution. Second, we numerically integrate the continuum equations derived in the GA in order to compare with results of simulations. We find that the global polarization in the GA exhibits a hysteresis in dependence on the noise intensity, showing qualitatively the same behavior as we find in particle simulations. Moreover, both global polarizations agree perfectly for low noise intensities. The spatiotemporal structures of the GA are also in agreement with simulations. We conclude that the GA shows qualitative agreement for a wide range of noise intensities; in particular, for low noise intensities the agreement with simulations is better than that of other approximations, making the GA an acceptable candidate for describing spatially distributed self-propelled particles.

  20. An approximate viscous shock layer technique for calculating chemically reacting hypersonic flows about blunt-nosed bodies

    NASA Technical Reports Server (NTRS)

    Cheatwood, F. Mcneil; Dejarnette, Fred R.

    1991-01-01

    An approximate axisymmetric method was developed which can reliably calculate fully viscous hypersonic flows over blunt-nosed bodies. By substituting Maslen's second-order pressure expression for the normal momentum equation, a simplified form of the viscous shock layer (VSL) equations is obtained. This approach can solve both the subsonic and supersonic regions of the shock layer without a starting solution for the shock shape. The approach is applicable to perfect gas, equilibrium, and nonequilibrium flowfields. Since the method is fully viscous, the problems associated with matching a boundary-layer solution to an inviscid-layer solution are avoided. This procedure is significantly faster than the parabolized Navier-Stokes (PNS) or VSL solvers and would be useful in a preliminary design environment. Problems associated with a previously developed approximate VSL technique are addressed before extending the method to nonequilibrium calculations. Perfect gas (laminar and turbulent), equilibrium, and nonequilibrium solutions were generated for airflows over several analytic body shapes. Surface heat transfer, skin friction, and pressure predictions are comparable to VSL results. In addition, computed heating rates are in good agreement with experimental data. The present technique generates its own shock shape as part of its solution, and therefore could be used to provide more accurate initial shock shapes for higher order procedures which require starting solutions.

  1. Bloodstain Pattern Analysis: implementation of a fluid dynamic model for position determination of victims

    PubMed Central

    Laan, Nick; de Bruin, Karla G.; Slenter, Denise; Wilhelm, Julie; Jermy, Mark; Bonn, Daniel

    2015-01-01

    Bloodstain Pattern Analysis is a forensic discipline in which, among others, the position of victims can be determined at crime scenes on which blood has been shed. To determine where the blood source was, investigators use a straight-line approximation for the trajectory, ignoring effects of gravity and drag and thus overestimating the height of the source. We determined how accurately the location of the origin can be estimated when including gravity and drag into the trajectory reconstruction. We created eight bloodstain patterns at one meter distance from the wall. The origin’s location was determined for each pattern with: the straight-line approximation, our method including gravity, and our method including both gravity and drag. The latter two methods require the volume and impact velocity of each bloodstain, which we are able to determine with a 3D scanner and advanced fluid dynamics, respectively. We conclude that by including gravity and drag in the trajectory calculation, the origin’s location can be determined roughly four times more accurately than with the straight-line approximation. Our study enables investigators to determine if the victim was sitting or standing, or it might be possible to connect wounds on the body to specific patterns, which is important for crime scene reconstruction. PMID:26099070
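
    The difference between the two reconstructions can be illustrated with a minimal forward integrator for a droplet under gravity and quadratic drag; the sphere drag coefficient and fluid properties below are nominal assumptions, not the paper's calibrated values. Tracing such trajectories back from each stain (with the impact speed inferred from stain volume) and intersecting them yields the origin estimate; the straight-line method corresponds to dropping the drag and gravity terms.

      import numpy as np

      RHO_AIR, RHO_BLOOD, G, CD = 1.2, 1060.0, 9.81, 0.47   # SI units, nominal

      def fly(pos, vel, d, dt=1e-4, t_max=0.5):
          """Explicit-Euler trajectory of a spherical droplet of diameter d [m]
          under gravity and quadratic aerodynamic drag."""
          m = RHO_BLOOD * np.pi * d**3 / 6.0    # droplet mass
          area = np.pi * d**2 / 4.0             # frontal area
          pos = np.asarray(pos, float).copy()
          vel = np.asarray(vel, float).copy()
          traj = [pos.copy()]
          for _ in range(int(t_max / dt)):
              speed = np.linalg.norm(vel)
              acc = -0.5 * RHO_AIR * CD * area * speed * vel / m   # drag
              acc = acc + np.array([0.0, 0.0, -G])                 # gravity
              vel += acc * dt
              pos += vel * dt
              traj.append(pos.copy())
          return np.array(traj)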

  2. Bloodstain Pattern Analysis: implementation of a fluid dynamic model for position determination of victims.

    PubMed

    Laan, Nick; de Bruin, Karla G; Slenter, Denise; Wilhelm, Julie; Jermy, Mark; Bonn, Daniel

    2015-06-22

    Bloodstain Pattern Analysis is a forensic discipline in which, among others, the position of victims can be determined at crime scenes on which blood has been shed. To determine where the blood source was, investigators use a straight-line approximation for the trajectory, ignoring effects of gravity and drag and thus overestimating the height of the source. We determined how accurately the location of the origin can be estimated when including gravity and drag into the trajectory reconstruction. We created eight bloodstain patterns at one meter distance from the wall. The origin's location was determined for each pattern with: the straight-line approximation, our method including gravity, and our method including both gravity and drag. The latter two methods require the volume and impact velocity of each bloodstain, which we are able to determine with a 3D scanner and advanced fluid dynamics, respectively. We conclude that by including gravity and drag in the trajectory calculation, the origin's location can be determined roughly four times more accurately than with the straight-line approximation. Our study enables investigators to determine if the victim was sitting or standing, or it might be possible to connect wounds on the body to specific patterns, which is important for crime scene reconstruction.

  3. Bloodstain Pattern Analysis: implementation of a fluid dynamic model for position determination of victims

    NASA Astrophysics Data System (ADS)

    Laan, Nick; de Bruin, Karla G.; Slenter, Denise; Wilhelm, Julie; Jermy, Mark; Bonn, Daniel

    2015-06-01

    Bloodstain Pattern Analysis is a forensic discipline in which, among others, the position of victims can be determined at crime scenes on which blood has been shed. To determine where the blood source was, investigators use a straight-line approximation for the trajectory, ignoring effects of gravity and drag and thus overestimating the height of the source. We determined how accurately the location of the origin can be estimated when including gravity and drag into the trajectory reconstruction. We created eight bloodstain patterns at one meter distance from the wall. The origin’s location was determined for each pattern with: the straight-line approximation, our method including gravity, and our method including both gravity and drag. The latter two methods require the volume and impact velocity of each bloodstain, which we are able to determine with a 3D scanner and advanced fluid dynamics, respectively. We conclude that by including gravity and drag in the trajectory calculation, the origin’s location can be determined roughly four times more accurately than with the straight-line approximation. Our study enables investigators to determine if the victim was sitting or standing, or it might be possible to connect wounds on the body to specific patterns, which is important for crime scene reconstruction.

  4. Fabrication of highly textured lithium cobalt oxide films by rapid thermal annealing

    DOEpatents

    Bates, John B.

    2003-04-29

    Systems and methods are described for fabrication of highly textured lithium cobalt oxide films by rapid thermal annealing. A method of forming a lithium cobalt oxide film includes depositing a film of lithium cobalt oxide on a substrate; rapidly heating the film of lithium cobalt oxide to a target temperature; and maintaining the film of lithium cobalt oxide at the target temperature for a target annealing time of, at most, approximately 60 minutes. The systems and methods provide advantages because they require less time to implement and are therefore less costly than previous techniques.

  5. Fabrication of highly textured lithium cobalt oxide films by rapid thermal annealing

    DOEpatents

    Bates, John B.

    2002-01-01

    Systems and methods are described for fabrication of highly textured lithium cobalt oxide films by rapid thermal annealing. A method of forming a lithium cobalt oxide film includes depositing a film of lithium cobalt oxide on a substrate; rapidly heating the film of lithium cobalt oxide to a target temperature; and maintaining the film of lithium cobalt oxide at the target temperature for a target annealing time of, at most, approximately 60 minutes. The systems and methods provide advantages because they require less time to implement and are therefore less costly than previous techniques.

  6. Fabrication of highly textured lithium cobalt oxide films by rapid thermal annealing

    DOEpatents

    Bates, John B.

    2003-05-13

    Systems and methods are described for fabrication of highly textured lithium cobalt oxide films by rapid thermal annealing. A method of forming a lithium cobalt oxide film includes depositing a film of lithium cobalt oxide on a substrate; rapidly heating the film of lithium cobalt oxide to a target temperature; and maintaining the film of lithium cobalt oxide at the target temperature for a target annealing time of, at most, approximately 60 minutes. The systems and methods provide advantages because they require less time to implement and are therefore less costly than previous techniques.

  7. Asymptotic (h tending to infinity) absolute stability for BDFs applied to stiff differential equations [Backward Differentiation Formulas]

    NASA Technical Reports Server (NTRS)

    Krogh, F. T.; Stewart, K.

    1984-01-01

    Methods based on backward differentiation formulas (BDFs) for solving stiff differential equations require iterating to approximate the solution of the corrector equation on each step. One hope for reducing the cost of this is to make do with iteration matrices that are known to have errors and to do no more iterations than are necessary to maintain the stability of the method. This paper, following work by Klopfenstein, examines the effect of errors in the iteration matrix on the stability of the method. Application of the results to an algorithm is discussed briefly.
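
    In standard notation (a sketch, with signs per a common convention), the k-step BDF corrector for y' = f(t, y) and the Newton-type iteration with an approximate iteration matrix take the form

      \[
      y_n = h\beta_0 f(t_n, y_n) + \sum_{j=1}^{k} \alpha_j y_{n-j},
      \qquad
      \bigl(I - h\beta_0 \tilde{J}\bigr)\,\Delta y^{(m)}
        = -\Bigl[\, y_n^{(m)} - h\beta_0 f\bigl(t_n, y_n^{(m)}\bigr)
          - \sum_{j=1}^{k} \alpha_j y_{n-j} \Bigr],
      \]

    where \tilde{J} approximates the Jacobian J = \partial f/\partial y. The iteration error is amplified per sweep by (I - h\beta_0\tilde{J})^{-1} h\beta_0 (J - \tilde{J}), so the iteration remains convergent while the spectral radius of this matrix stays below one, which is the kind of condition the stability analysis quantifies.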

  8. Speeding Fermat's factoring method

    NASA Astrophysics Data System (ADS)

    McKee, James

    A factoring method is presented which, heuristically, splits composite n in O(n^{1/4+epsilon}) steps. There are two ideas: an integer approximation to sqrt(q/p) provides an O(n^{1/2+epsilon}) algorithm in which n is represented as the difference of two rational squares; observing that if a prime m divides a square, then m^2 divides that square, a heuristic speed-up to O(n^{1/4+epsilon}) steps is achieved. The method is well-suited for use with small computers: the storage required is negligible, and one never needs to work with numbers larger than n itself.
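
    The O(n^{1/2+epsilon}) baseline that the paper accelerates is the classical difference-of-squares search; a minimal sketch (the heuristic speed-ups are not shown):

      import math

      def fermat_factor(n):
          """Basic Fermat factorisation: search for x with x^2 - n a perfect
          square, so that n = (x - y)(x + y).  Requires odd composite n."""
          assert n > 1 and n % 2 == 1
          x = math.isqrt(n)
          if x * x < n:
              x += 1
          while True:
              y2 = x * x - n
              y = math.isqrt(y2)
              if y * y == y2:
                  return x - y, x + y
              x += 1

      # Example: fermat_factor(5959) returns (59, 101).

    As the abstract notes, the storage cost of this search is negligible and no intermediate value exceeds n, which is what makes the approach attractive on small machines.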

  9. A simple algorithm to estimate the effective regional atmospheric parameters for thermal-inertia mapping

    USGS Publications Warehouse

    Watson, K.; Hummer-Miller, S.

    1981-01-01

    A method based solely on remote sensing data has been developed to estimate those meteorological effects which are required for thermal-inertia mapping. It assumes that the atmospheric fluxes are spatially invariant and that the solar, sky, and sensible heat fluxes can be approximated by a simple mathematical form. Coefficients are determined from least-squares method by fitting observational data to our thermal model. A comparison between field measurements and the model-derived flux shows the type of agreement which can be achieved. An analysis of the limitations of the method is also provided. ?? 1981.

  10. Fisher Scoring Method for Parameter Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    NASA Astrophysics Data System (ADS)

    Widyaningsih, Purnami; Retno Sari Saputro, Dewi; Nugrahani Putri, Aulia

    2017-06-01

    The GWOLR model combines the geographically weighted regression (GWR) and ordinal logistic regression (OLR) models. Its parameter estimation employs maximum likelihood estimation. Such parameter estimation, however, yields a difficult-to-solve system of nonlinear equations, so a numerical approximation approach is required. The iterative approximation approach, in general, uses the Newton-Raphson (NR) method. The NR method has a disadvantage: its Hessian matrix of second derivatives must be recomputed at every iteration, and the iteration does not always converge. With regard to this matter, the NR method is modified by substituting the Fisher information matrix for its Hessian matrix, which is termed Fisher scoring (FS). The present research seeks to determine GWOLR model parameter estimation using the Fisher scoring method and to apply the estimation to data on the level of vulnerability to Dengue Hemorrhagic Fever (DHF) in Semarang. The research concludes that health facilities make the greatest contribution to the probability of the number of DHF sufferers in both villages. Based on the number of sufferers, the IR category of DHF in both villages can be determined.
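
    The substitution described above can be written compactly in standard form, with ℓ the log-likelihood:

      \[
      \text{NR:}\quad \theta^{(t+1)} = \theta^{(t)} - \bigl[H(\theta^{(t)})\bigr]^{-1} \nabla\ell(\theta^{(t)}),
      \qquad
      \text{FS:}\quad \theta^{(t+1)} = \theta^{(t)} + \bigl[I(\theta^{(t)})\bigr]^{-1} \nabla\ell(\theta^{(t)}),
      \quad
      I(\theta) = -\,\mathrm{E}\bigl[H(\theta)\bigr].
      \]

    Because the Fisher information I(θ) is positive semidefinite by construction, each Fisher scoring step is an ascent direction, which is the convergence advantage over raw Newton-Raphson.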

  11. The Gaussian CLs method for searches of new physics

    DOE PAGES

    Qian, X.; Tan, A.; Ling, J. J.; ...

    2016-04-23

    Here we describe a method based on the CLs approach to present results in searches of new physics, under the condition that the relevant parameter space is continuous. Our method relies on a class of test statistics developed for non-nested hypotheses testing problems, denoted by ΔT, which has a Gaussian approximation to its parent distribution when the sample size is large. This leads to a simple procedure of forming exclusion sets for the parameters of interest, which we call the Gaussian CLs method. Our work provides a self-contained mathematical proof for the Gaussian CLs method that explicitly outlines the required conditions. These conditions are milder than those required by Wilks' theorem to set confidence intervals (CIs). We illustrate the Gaussian CLs method in an example of searching for a sterile neutrino, where the CLs approach was rarely used before. We also compare data analysis results produced by the Gaussian CLs method and various CI methods to showcase their differences.

  12. A compatible high-order meshless method for the Stokes equations with applications to suspension flows

    NASA Astrophysics Data System (ADS)

    Trask, Nathaniel; Maxey, Martin; Hu, Xiaozhe

    2018-02-01

    A stable numerical solution of the steady Stokes problem requires compatibility between the choice of velocity and pressure approximation that has traditionally proven problematic for meshless methods. In this work, we present a discretization that couples a staggered scheme for pressure approximation with a divergence-free velocity reconstruction to obtain an adaptive, high-order, finite difference-like discretization that can be efficiently solved with conventional algebraic multigrid techniques. We use analytic benchmarks to demonstrate equal-order convergence for both velocity and pressure when solving problems with curvilinear geometries. In order to study problems in dense suspensions, we couple the solution for the flow to the equations of motion for freely suspended particles in an implicit monolithic scheme. The combination of high-order accuracy with fully-implicit schemes allows the accurate resolution of stiff lubrication forces directly from the solution of the Stokes problem without the need to introduce sub-grid lubrication models.

  13. Semiparametric Identification of Human Arm Dynamics for Flexible Control of a Functional Electrical Stimulation Neuroprosthesis

    PubMed Central

    Schearer, Eric M.; Liao, Yu-Wei; Perreault, Eric J.; Tresch, Matthew C.; Memberg, William D.; Kirsch, Robert F.; Lynch, Kevin M.

    2016-01-01

    We present a method to identify the dynamics of a human arm controlled by an implanted functional electrical stimulation neuroprosthesis. The method uses Gaussian process regression to predict shoulder and elbow torques given the shoulder and elbow joint positions and velocities and the electrical stimulation inputs to muscles. We compare the accuracy of torque predictions of nonparametric, semiparametric, and parametric model types. The most accurate of the three model types is a semiparametric Gaussian process model that combines the flexibility of a black box function approximator with the generalization power of a parameterized model. The semiparametric model predicted torques during stimulation of multiple muscles with errors less than 20% of the total muscle torque and passive torque needed to drive the arm. The identified model allows us to define an arbitrary reaching trajectory and approximately determine the muscle stimulations required to drive the arm along that trajectory. PMID:26955041
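
    The semiparametric construction can be sketched as a parametric mean model plus a zero-mean GP on its residuals; the squared-exponential kernel and the basis Phi below are illustrative stand-ins for the rigid-body and muscle terms of the identified arm model, not the paper's actual choices.

      import numpy as np

      def fit_semiparametric_gp(X, y, Phi, sigma_n=0.1, ell=1.0, sf=1.0):
          """Semiparametric GP regression: y = Phi(x)^T w + g(x), g ~ GP(0, k).
          w is estimated by generalized least squares; its posterior
          uncertainty is ignored here for brevity."""
          def k(A, B):   # squared-exponential kernel
              d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
              return sf**2 * np.exp(-0.5 * d2 / ell**2)
          K = k(X, X) + sigma_n**2 * np.eye(len(X))
          H = Phi(X)                      # n x p parametric basis, e.g.
          Ki = np.linalg.inv(K)           # Phi(X) = column_stack([ones, X])
          w = np.linalg.solve(H.T @ Ki @ H, H.T @ Ki @ y)
          alpha = Ki @ (y - H @ w)
          def predict(Xs):
              return Phi(Xs) @ w + k(Xs, X) @ alpha   # mean + GP correction
          return predict

    The parametric part generalizes where stimulation data are sparse, while the GP part absorbs unmodeled dynamics near the data, which is the flexibility/generalization trade-off the abstract describes.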

  14. A trust region approach with multivariate Padé model for optimal circuit design

    NASA Astrophysics Data System (ADS)

    Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.

    2017-11-01

    Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using data points of ?, where ? is the number of parameters. The model is updated over a sequence of trust regions. This model avoids the slow convergence of linear models of ? and has features of quadratic models that need interpolation data points of ?. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. Minimax solution leads to a suitable initial point to carry out the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.

  15. Dual Key Speech Encryption Algorithm Based Underdetermined BSS

    PubMed Central

    Zhao, Huan; Chen, Zuo; Zhang, Xixiang

    2014-01-01

    When the number of the mixed signals is less than that of the source signals, the underdetermined blind source separation (BSS) is a significant difficult problem. Due to the fact that the great amount data of speech communications and real-time communication has been required, we utilize the intractability of the underdetermined BSS problem to present a dual key speech encryption method. The original speech is mixed with dual key signals which consist of random key signals (one-time pad) generated by secret seed and chaotic signals generated from chaotic system. In the decryption process, approximate calculation is used to recover the original speech signals. The proposed algorithm for speech signals encryption can resist traditional attacks against the encryption system, and owing to approximate calculation, decryption becomes faster and more accurate. It is demonstrated that the proposed method has high level of security and can recover the original signals quickly and efficiently yet maintaining excellent audio quality. PMID:24955430

  16. The difference between LSMC and replicating portfolio in insurance liability modeling.

    PubMed

    Pelsser, Antoon; Schweizer, Janina

    2016-01-01

    Solvency II requires insurers to calculate the 1-year value at risk of their balance sheet. This involves the valuation of the balance sheet in 1 year's time. As for insurance liabilities, closed-form solutions to their value are generally not available, insurers turn to estimation procedures. While pure Monte Carlo simulation set-ups are theoretically sound, they are often infeasible in practice. Therefore, approximation methods are exploited. Among these, least squares Monte Carlo (LSMC) and portfolio replication are prominent and widely applied in practice. In this paper, we show that, while both are variants of regression-based Monte Carlo methods, they differ in one significant aspect. While the replicating portfolio approach only contains an approximation error, which converges to zero in the limit, in LSMC a projection error is additionally present, which cannot be eliminated. It is revealed that the replicating portfolio technique enjoys numerous advantages and is therefore an attractive model choice.

  17. Environmental and human monitoring of Americium-241 utilizing extraction chromatography and alpha-spectrometry.

    PubMed

    Goldstein, S J; Hensley, C A; Armenta, C E; Peters, R J

    1997-03-01

    Recent developments in extraction chromatography have simplified the separation of americium from complex matrices in preparation for alpha-spectroscopy relative to traditional methods. Here we present results of procedures developed/adapted for water, air, and bioassay samples with less than 1 g of inorganic residue. Prior analytical methods required the use of a complex, multistage procedure for separation of americium from these matrices. The newer, simplified procedure requires only a single 2 mL extraction chromatographic separation for isolation of Am and lanthanides from other components of the sample. This method has been implemented on an extensive variety of "real" environmental and bioassay samples from the Los Alamos area, and consistently reliable and accurate results with appropriate detection limits have been obtained. The new method increases analytical throughput by a factor of approximately 2 and decreases environmental hazards from acid and mixed-waste generation relative to the prior technique. Analytical accuracy, reproducibility, and reliability are also significantly improved over the more complex and laborious method used previously.

  18. Comparison of three newton-like nonlinear least-squares methods for estimating parameters of ground-water flow models

    USGS Publications Warehouse

    Cooley, R.L.; Hill, M.C.

    1992-01-01

    Three methods of solving nonlinear least-squares problems were compared for robustness and efficiency using a series of hypothetical and field problems. A modified Gauss-Newton/full Newton hybrid method (MGN/FN) and an analogous method for which part of the Hessian matrix was replaced by a quasi-Newton approximation (MGN/QN) solved some of the problems with appreciably fewer iterations than required using only a modified Gauss-Newton (MGN) method. In these problems, model nonlinearity and a large variance for the observed data apparently caused MGN to converge more slowly than MGN/FN or MGN/QN after the sum of squared errors had almost stabilized. Other problems were solved as efficiently with MGN as with MGN/FN or MGN/QN. Because MGN/FN can require significantly more computer time per iteration and more computer storage for transient problems, it is less attractive for a general purpose algorithm than MGN/QN.
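
    In standard least-squares notation (a sketch: S(θ) = ½‖e(θ)‖² with residual vector e and Jacobian J), the three methods differ in how they treat the Hessian

      \[
      \nabla^2 S(\theta) \;=\; J^{\top} J \;+\; \sum_i e_i \,\nabla^2 e_i .
      \]

    MGN keeps only JᵀJ; MGN/FN retains the full second term; MGN/QN replaces the sum with a quasi-Newton (secant) approximation built up from successive gradients, avoiding second derivatives while still capturing the term that matters when residuals are large, as in the nonlinear, high-variance problems described above.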

  19. Efficient calculation of the polarizability: a simplified effective-energy technique

    NASA Astrophysics Data System (ADS)

    Berger, J. A.; Reining, L.; Sottile, F.

    2012-09-01

    In a recent publication [J.A. Berger, L. Reining, F. Sottile, Phys. Rev. B 82, 041103(R) (2010)] we introduced the effective-energy technique to calculate in an accurate and numerically efficient manner the GW self-energy as well as the polarizability, which is required to evaluate the screened Coulomb interaction W. In this work we show that the effective-energy technique can be used to further simplify the expression for the polarizability without a significant loss of accuracy. In contrast to standard sum-over-state methods where huge summations over empty states are required, our approach only requires summations over occupied states. The three simplest approximations we obtain for the polarizability are explicit functionals of an independent- or quasi-particle one-body reduced density matrix. We provide evidence of the numerical accuracy of this simplified effective-energy technique as well as an analysis of our method.

  20. Bayesian alternative to the ISO-GUM's use of the Welch-Satterthwaite formula

    NASA Astrophysics Data System (ADS)

    Kacker, Raghu N.

    2006-02-01

    In certain disciplines, uncertainty is traditionally expressed as an interval about an estimate for the value of the measurand. Development of such uncertainty intervals with a stated coverage probability based on the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) requires a description of the probability distribution for the value of the measurand. The ISO-GUM propagates the estimates and their associated standard uncertainties for various input quantities through a linear approximation of the measurement equation to determine an estimate and its associated standard uncertainty for the value of the measurand. This procedure does not yield a probability distribution for the value of the measurand. The ISO-GUM suggests that under certain conditions motivated by the central limit theorem the distribution for the value of the measurand may be approximated by a scaled-and-shifted t-distribution with effective degrees of freedom obtained from the Welch-Satterthwaite (W-S) formula. The approximate t-distribution may then be used to develop an uncertainty interval with a stated coverage probability for the value of the measurand. We propose an approximate normal distribution based on a Bayesian uncertainty as an alternative to the t-distribution based on the W-S formula. A benefit of the approximate normal distribution based on a Bayesian uncertainty is that it greatly simplifies the expression of uncertainty by eliminating altogether the need for calculating effective degrees of freedom from the W-S formula. In the special case where the measurand is the difference between two means, each evaluated from statistical analyses of independent normally distributed measurements with unknown and possibly unequal variances, the probability distribution for the value of the measurand is known to be a Behrens-Fisher distribution. We compare the performance of the approximate normal distribution based on a Bayesian uncertainty and the approximate t-distribution based on the W-S formula with respect to the Behrens-Fisher distribution. The approximate normal distribution is simpler and better in this case. A thorough investigation of the relative performance of the two approximate distributions would require comparison for a range of measurement equations by numerical methods.
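
    For reference, the W-S effective degrees of freedom that the proposed Bayesian alternative eliminates are computed from the combined standard uncertainty as

      \[
      \nu_{\mathrm{eff}} \;=\; \frac{u_c^4(y)}{\displaystyle\sum_{i=1}^{N} \frac{c_i^4\, u^4(x_i)}{\nu_i}},
      \qquad
      u_c^2(y) \;=\; \sum_{i=1}^{N} c_i^2\, u^2(x_i),
      \]

    where the c_i = \partial f/\partial x_i are sensitivity coefficients of the measurement equation y = f(x_1, ..., x_N) and \nu_i are the degrees of freedom attached to each u(x_i).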

  1. Variationally consistent approximation scheme for charge transfer

    NASA Technical Reports Server (NTRS)

    Halpern, A. M.

    1978-01-01

    The author has developed a technique for testing various charge-transfer approximation schemes for consistency with the requirements of the Kohn variational principle, which guarantees that the amplitude is correct to second order in the scattering wave functions. Applied to Born-type approximations for charge transfer, it allows the selection of particular groups of first-, second-, and higher-Born-type terms that obey the consistency requirement and hence yield more reliable approximations to the amplitude.

  2. Semiclassical evaluation of quantum fidelity

    NASA Astrophysics Data System (ADS)

    Vaníček, Jiří; Heller, Eric J.

    2003-11-01

    We present a numerically feasible semiclassical (SC) method to evaluate quantum fidelity decay (Loschmidt echo) in a classically chaotic system. It was thought that such evaluation would be intractable, but instead we show that a uniform SC expression not only is tractable but it also gives remarkably accurate numerical results for the standard map in both the Fermi-golden-rule and Lyapunov regimes. Because it allows Monte Carlo evaluation, the uniform expression is accurate at times when there are 10^70 semiclassical contributions. Remarkably, it also explicitly contains the “building blocks” of analytical theories of recent literature, and thus permits a direct test of the approximations made by other authors in these regimes, rather than an a posteriori comparison with numerical results. We explain in more detail the extended validity of the classical perturbation approximation and show that within this approximation, the so-called “diagonal approximation” is automatic and does not require ensemble averaging.

  3. Approximating the Qualitative Vickrey Auction by a Negotiation Protocol

    NASA Astrophysics Data System (ADS)

    Hindriks, Koen V.; Tykhonov, Dmytro; de Weerdt, Mathijs

    A result of Bulow and Klemperer suggests that auctions may be a better tool than negotiation for obtaining an efficient outcome. For example, some auction mechanisms can be shown to be efficient and strategy-proof. However, they generally also require that the preferences of at least one side of the auction are publicly known, and sometimes it is very costly, impossible, or undesirable to publicly announce such preferences. It is thus interesting to find methods that do not impose this constraint but still approximate the outcome of the auction. In this paper we show that a multi-round multi-party negotiation protocol may be used to this end if the negotiating agents are capable of learning opponent preferences. The latter condition can be met by current state-of-the-art negotiation technology. We show that this protocol approximates the theoretical outcome predicted by a so-called Qualitative Vickrey auction mechanism (even) on a complex multi-issue domain.

  4. A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn; Lin, Guang, E-mail: lin491@purdue.edu; Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352

    2015-09-01

    In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges: First, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach global minima. This is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus renders an extremely high computational demand on the simulation of each sample in MLPSO. We overcome this difficulty in three steps: First, we use FGA to compute high-frequency wave propagation based on asymptotic analysis on the phase plane; then we design a constrained full waveform inversion problem to prevent the optimization search from entering regions of velocity where FGA is not accurate; last, we solve the constrained optimization problem by MLPSO employing FGA solvers with different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example of the smoothed Marmousi model.

  5. An accurate and efficient method for evaluating the kernel of the integral equation relating pressure to normalwash in unsteady potential flow

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.

    1982-01-01

    This paper describes an accurate, economical method for generating approximations to the kernel of the integral equation relating unsteady pressure to normalwash in nonplanar flow. The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential approximations and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. Coefficients for 8, 12, 24, and 72 term approximations are tabulated in the report. Also, since the method is automated, it can be used to generate approximations that attain any desired trade-off between accuracy and computing cost.
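
    Because the exponents form a geometric sequence, fitting the coefficients for a given exponent multiplier is a linear least-squares problem; a minimal sketch (the target function and parameter values are illustrative, not the report's tabulated ones, and the report additionally tunes the multiplier itself by least squares):

      import numpy as np

      def exp_approx(f, n_terms, b0, ratio, x):
          """Fit f(x) ~ sum_k a_k * exp(-b0 * ratio**k * x) on sample points x.
          With the exponents fixed as a geometric sequence, the coefficients
          a_k follow from ordinary least squares."""
          exps = b0 * ratio ** np.arange(n_terms)     # geometric exponent spacing
          A = np.exp(-np.outer(x, exps))              # design matrix
          a, *_ = np.linalg.lstsq(A, f(x), rcond=None)
          approx = lambda t: np.exp(-np.outer(np.atleast_1d(t), exps)) @ a
          return a, exps, approx

      # Example: a, b, g = exp_approx(lambda u: 1.0 / np.sqrt(1.0 + u * u),
      #                               12, 0.05, 1.6, np.linspace(0.0, 30.0, 3000))

    Termwise integration of each exponential then yields the closed-form kernel approximation, which is why the exponential form is chosen in the first place.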

  6. Flight Crew Training: Multi-Crew Pilot License Training versus Traditional Training and Its Relationship with Job Performance

    ERIC Educational Resources Information Center

    Cushing, Thomas S.

    2013-01-01

    In 2006, the International Civil Aviation Organization promulgated requirements for a Multi-Crew Pilot License for First Officers, in which the candidate attends approximately two years of ground school and trains as part of a two-person crew in a simulator of a Boeing 737 or an Airbus 320 airliner. In the traditional method, a candidate qualifies…

  7. The General Necessary Condition for the Validity of Dirac's Transition Perturbation Theory

    NASA Technical Reports Server (NTRS)

    Quang, Nguyen Vinh

    1996-01-01

    For the first time, from the natural requirements for the successive approximation, the general necessary condition for the validity of Dirac's method is explicitly established. It is proved that the conception of 'the transition probability per unit time' is not valid. The 'super-platinium rules' for calculating the transition probability are derived for the arbitrarily strong time-independent perturbation case.

  8. A method for modeling contact dynamics for automated capture mechanisms

    NASA Technical Reports Server (NTRS)

    Williams, Philip J.

    1991-01-01

    Logicon Control Dynamics develops contact dynamics models for space-based docking and berthing vehicles. The models compute contact forces for the physical contact between mating capture mechanism surfaces. Realistic simulation requires that the proportionality constants used to calculate contact forces approximate the surface stiffness of the contacting bodies. For rigid metallic bodies this proportionality becomes quite large, so small penetrations of surface boundaries can produce large contact forces.
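
    A common penalty-type form for such contact forces (an illustrative sketch, not necessarily the specific Logicon model) is a spring-damper law in the penetration depth δ:

      \[
      F_c \;=\; k\,\delta^{\,n} + c\,\dot{\delta}, \qquad \delta > 0,
      \]

    with n = 1 for a linear spring or n = 3/2 for Hertzian contact of curved surfaces. Because k must be large to mimic stiff metallic surfaces, the resulting equations of motion are numerically stiff, which constrains the integration step size of the simulation.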

  9. On the calibration of continuous, high-precision delta18O and delta2H measurements using an off-axis integrated cavity output spectrometer.

    PubMed

    Wang, Lixin; Caylor, Kelly K; Dragoni, Danilo

    2009-02-01

    The (18)O and (2)H of water vapor serve as powerful tracers of hydrological processes. The typical method for determining water vapor delta(18)O and delta(2)H involves cryogenic trapping and isotope ratio mass spectrometry. Even with recent technical advances, these methods cannot resolve vapor composition at high temporal resolutions. In recent years, a few groups have developed continuous laser absorption spectroscopy (LAS) approaches for measuring delta(18)O and delta(2)H which achieve accuracy levels similar to those of lab-based mass spectrometry methods. Unfortunately, most LAS systems need cryogenic cooling and constant calibration to a reference gas, and have substantial power requirements, making them unsuitable for long-term field deployment at remote field sites. A new method called Off-Axis Integrated Cavity Output Spectroscopy (OA-ICOS) has been developed which requires extremely low-energy consumption and neither reference gas nor cryogenic cooling. In this report, we develop a relatively simple pumping system coupled to a dew point generator to calibrate an ICOS-based instrument (Los Gatos Research Water Vapor Isotope Analyzer (WVIA) DLT-100) under various pressures using liquid water with known isotopic signatures. Results show that the WVIA can be successfully calibrated using this customized system for different pressure settings, which ensure that this instrument can be combined with other gas-sampling systems. The precisions of this instrument and the associated calibration method can reach approximately 0.08 per thousand for delta(18)O and approximately 0.4 per thousand for delta(2)H. Compared with conventional mass spectrometry and other LAS-based methods, the OA-ICOS technique provides a promising alternative tool for continuous water vapor isotopic measurements in field deployments. Copyright 2009 John Wiley & Sons, Ltd.

  10. The Orbital precession around oblate spheroids

    NASA Astrophysics Data System (ADS)

    Montanus, J. M. C.

    2006-07-01

    An exact series will be given for the gravitational potential generated by an oblate gravitating source. To this end the corresponding Epstein-Hubbell type elliptic integral is evaluated. The procedure is based on the Legendre polynomial expansion method and on combinatorial techniques. The result is of interest for gravitational models based on the linearity of the gravitational potential. The series approximation for such potentials is of use for the analysis of orbital motions around a nonspherical source. It can be considered advantageous that the analysis is purely algebraic. Numerical approximations are not required. As an important example, the expression for the orbital precession will be derived for an object orbiting around an oblate homogeneous spheroid.

  11. A cubic extended interior penalty function for structural optimization

    NASA Technical Reports Server (NTRS)

    Prasad, B.; Haftka, R. T.

    1979-01-01

    This paper describes an optimization procedure for the minimum weight design of complex structures. The procedure is based on a new cubic extended interior penalty function (CEIPF) used with the sequence of unconstrained minimization technique (SUMT) and Newton's method. The Hessian matrix of the penalty function is approximated using only constraints and their derivatives. The CEIPF is designed to minimize the error in the approximation of the Hessian matrix, and as a result the number of structural analyses required is small and independent of the number of design variables. Three example problems are reported. The number of structural analyses is reduced by as much as 50 per cent below previously reported results.

  12. A General Method for Solving Systems of Non-Linear Equations

    NASA Technical Reports Server (NTRS)

    Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)

    1995-01-01

    The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points. The terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root finding method not infrequently diverges if the starting point is far from the root. However, the current method in these regions merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root finding method since they both share the ability to converge from a starting point far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for coefficients of linear equations. The current method which does not require the solution of linear equations requires more time for additional function and gradient evaluations. The classic trade off of time for space separates the two methods.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broda, Jill Terese

    The neutron flux across the nuclear reactor core is of interest to reactor designers and others. The diffusion equation, an integro-differential equation in space and energy, is commonly used to determine the flux level. However, the solution of a simplified version of this equation when automated is very time consuming. Since the flux level changes with time, in general, this calculation must be made repeatedly. Therefore solution techniques that speed the calculation while maintaining accuracy are desirable. One factor that contributes to the solution time is the spatial flux shape approximation used. It is common practice to use the same order flux shape approximation in each energy group even though this method may not be the most efficient. The one-dimensional, two-energy group diffusion equation was solved, for the node average flux and core k-effective, using two sets of spatial shape approximations for each of three reactor types. A fourth-order approximation in both energy groups forms the first set of approximations used. The second set used combines a second-order approximation with a fourth-order approximation in energy group two. Comparison of the results from the two approximation sets shows that the use of a different order spatial flux shape approximation results in considerable loss in accuracy for the pressurized water reactor modeled. However, the loss in accuracy is small for the heavy water and graphite reactors modeled. The use of different order approximations in each energy group produces mixed results. Further investigation into the accuracy and computing time is required before any quantitative advantage of the use of the second-order approximation in energy group one and the fourth-order approximation in energy group two can be determined.

  14. Electrospray ionization and time-of-flight mass spectrometric method for simultaneous determination of spermidine and spermine.

    PubMed

    Samejima, Keijiro; Otani, Masahiro; Murakami, Yasuko; Oka, Takami; Kasai, Misao; Tsumoto, Hiroki; Kohda, Kohfuku

    2007-10-01

    A sensitive method for the determination of polyamines in mammalian cells is described using an electrospray ionization, time-of-flight mass spectrometer. This method was 50-fold more sensitive than the previous method using an ionspray ionization, quadrupole mass spectrometer. The method employed partial purification and derivatization of the polyamines, but allowed measurement of multiple samples containing picomole amounts of polyamines. The time required for data acquisition of one sample was approximately 2 min. The method was successfully applied to the determination of reduced spermidine and spermine contents in cultured cells under inhibition of aminopropyltransferases. In addition, a new internal standard was proposed for tracer experiments using (15)N-labeled polyamines.

  15. Strategies for Efficient Computation of the Expected Value of Partial Perfect Information

    PubMed Central

    Madan, Jason; Ades, Anthony E.; Price, Malcolm; Maitland, Kathryn; Jemutai, Julie; Revill, Paul; Welton, Nicky J.

    2014-01-01

    Expected value of information methods evaluate the potential health benefits that can be obtained from conducting new research to reduce uncertainty in the parameters of a cost-effectiveness analysis model, hence reducing decision uncertainty. Expected value of partial perfect information (EVPPI) provides an upper limit to the health gains that can be obtained from conducting a new study on a subset of parameters in the cost-effectiveness analysis and can therefore be used as a sensitivity analysis to identify parameters that most contribute to decision uncertainty and to help guide decisions around which types of study are of most value to prioritize for funding. A common general approach is to use nested Monte Carlo simulation to obtain an estimate of EVPPI. This approach is computationally intensive, can lead to significant sampling bias if an inadequate number of inner samples are obtained, and incorrect results can be obtained if correlations between parameters are not dealt with appropriately. In this article, we set out a range of methods for estimating EVPPI that avoid the need for nested simulation: reparameterization of the net benefit function, Taylor series approximations, and restricted cubic spline estimation of conditional expectations. For each method, we set out the generalized functional form that net benefit must take for the method to be valid. By specifying this functional form, our methods are able to focus on components of the model in which approximation is required, avoiding the complexities involved in developing statistical approximations for the model as a whole. Our methods also allow for any correlations that might exist between model parameters. We illustrate the methods using an example of fluid resuscitation in African children with severe malaria. PMID:24449434
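
    The nested Monte Carlo estimator that these strategies are designed to avoid looks as follows; the two-decision net-benefit function and the parameter distributions are toy stand-ins for a real cost-effectiveness model.

      import numpy as np

      rng = np.random.default_rng(0)

      def nb(d, theta, phi):
          """Toy net benefit for two decisions; stands in for the CEA model."""
          return np.where(d == 0, 0.0, 1000.0 * theta - 500.0 * phi)

      def evppi_nested(n_outer=2000, n_inner=2000):
          """Nested Monte Carlo EVPPI for theta:
          E_theta[max_d E_{phi|theta} NB] - max_d E NB."""
          theta = rng.normal(0.6, 0.2, n_outer)
          inner_best = np.empty(n_outer)
          for i, th in enumerate(theta):              # outer loop over theta
              phi = rng.normal(1.0, 0.5, n_inner)     # inner loop over phi
              inner_best[i] = max(nb(0, th, phi).mean(), nb(1, th, phi).mean())
          # baseline: best decision under current (full) uncertainty
          phi_all = rng.normal(1.0, 0.5, n_outer)
          baseline = max(nb(0, theta, phi_all).mean(), nb(1, theta, phi_all).mean())
          return inner_best.mean() - baseline

    With n_outer x n_inner model evaluations per parameter subset the cost explodes for expensive models, and too small an inner sample biases the inner maximum upward; the reparameterization, Taylor series, and spline approaches in the article each replace the inner expectation with a cheap deterministic approximation.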

  16. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions.

    PubMed

    Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E

    2018-03-14

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  17. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions

    NASA Astrophysics Data System (ADS)

    Cendagorta, Joseph R.; Bačić, Zlatko; Tuckerman, Mark E.

    2018-03-01

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  18. Results of Ponseti Brasil Program: Multicentric Study in 1621 Feet: Preliminary Results.

    PubMed

    Nogueira, Monica P; Queiroz, Ana C D B F; Melanda, Alessandro G; Tedesco, Ana P; Brandão, Antonio L G; Beling, Claudio; Violante, Francisco H; Brandão, Gilberto F; Ferreira, Laura F A; Brambila, Leandro S; Leite, Leopoldina M; Zabeu, Jose L; Kim, Jung H; Fernandes, Kalyana E; Arima, Marcia A S; Aguilar, Maria D P Q; Farias Filho, Orlando C D; Oliveira Filho, Oscar B D A; Pinho, Solange D S; Moulin, Paulo; Volpi, Reinaldo; Fox, Mark; Greenwald, Miles F; Lyle, Brandon; Morcuende, Jose A

    The Ponseti method has been shown to be the most effective treatment for congenital clubfoot. The current challenge is to establish sustainable national clubfoot treatment programs that utilize the Ponseti method and integrate it within a nation's governmental health system. The Brazilian Ponseti Program (Programa Ponseti Brasil) has increased awareness of the utility of the Ponseti method and has trained >500 Brazilian orthopaedic surgeons in it. A group of 18 of those surgeons were able to reproduce the Ponseti clubfoot treatment and compiled their initial results through a structured spreadsheet. The study compiled 1040 patients for a total of 1621 feet. The average follow-up time was 2.3 years, with an average correction time of approximately 3 months. Patients required an average of 6.40 casts to achieve correction. This study demonstrates that good initial correction rates are reproducible after training: of the 1040 patients, only 1.4% required a posteromedial release. Level IV.

  19. On simulation of no-slip condition in the method of discrete vortices

    NASA Astrophysics Data System (ADS)

    Shmagunov, O. A.

    2017-10-01

    When modeling flows of an incompressible fluid, it is convenient sometimes to use the method of discrete vortices (MDV), where the continuous vorticity field is approximated by a set of discrete vortex elements moving in the velocity field. The vortex elements have a clear physical interpretation, they do not require the construction of grids and are automatically adaptive, since they concentrate in the regions of greatest interest and successfully describe the flows of a non-viscous fluid. The possibility of using MDV in simulating flows of a viscous fluid was considered in the previous papers using the examples of flows past bodies with sharp edges with the no-penetration condition at solid boundaries. However, the appearance of vorticity on smooth boundaries requires the no-slip condition to be met when MDV is realized, which substantially complicates the initially simple method. In this connection, an approach is considered that allows solving the problem by simple means.

  20. Tensor hypercontraction density fitting. I. Quartic scaling second- and third-order Møller-Plesset perturbation theory

    NASA Astrophysics Data System (ADS)

    Hohenstein, Edward G.; Parrish, Robert M.; Martínez, Todd J.

    2012-07-01

    Many approximations have been developed to help deal with the O(N^4) growth of the electron repulsion integral (ERI) tensor, where N is the number of one-electron basis functions used to represent the electronic wavefunction. Of these, the density fitting (DF) approximation is currently the most widely used despite the fact that it is often incapable of altering the underlying scaling of computational effort with respect to molecular size. We present a method for exploiting sparsity in three-center overlap integrals through tensor decomposition to obtain a low-rank approximation to density fitting (tensor hypercontraction density fitting or THC-DF). This new approximation reduces the fourth-order ERI tensor to a product of five matrices, simultaneously reducing the storage requirement as well as increasing the flexibility to regroup terms and reduce scaling behavior. As an example, we demonstrate such a scaling reduction for second- and third-order perturbation theory (MP2 and MP3), showing that both can be carried out in O(N^4) operations. This should be compared to the usual scaling behavior of O(N^5) and O(N^6) for MP2 and MP3, respectively. The THC-DF technique can also be applied to other methods in electronic structure theory, such as coupled-cluster and configuration interaction, promising significant gains in computational efficiency and storage reduction.
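
    In standard notation, the generic tensor hypercontraction factorization replaces the fourth-order ERI tensor with a product of five matrices:

      \[
      (pq|rs) \;\approx\; \sum_{P,Q} X_p^{P}\, X_q^{P}\; Z^{PQ}\; X_r^{Q}\, X_s^{Q},
      \]

    so that contractions with amplitudes can be regrouped one index at a time; this regrouping is the source of the O(N^5) to O(N^4) and O(N^6) to O(N^4) reductions quoted for MP2 and MP3.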

  1. New approximate orientation averaging of the water molecule interacting with the thermal neutron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markovic, M.I.; Minic, D.M.; Rakic, A.D.

    1992-02-01

    This paper reports that, in exactly describing thermal neutron collisions with water molecules, orientation averaging is performed by an exact method (EOA_k) and by four approximate methods (two well known and two less known). Expressions for the microscopic scattering kernel are developed. The two well-known approximate orientation averaging methods are Krieger-Nelkin (K-N) and Koppel-Young (K-Y). The results obtained by one of the two proposed approximate orientation averaging methods agree best with the corresponding results obtained by EOA_k. The largest discrepancies between the EOA_k results and the results of the approximate methods are obtained using the well-known K-N approximate orientation averaging method.

  2. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-Hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and the reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-Hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared, and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed, based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of an appropriate approximation technique as a function of the matrix size, the number of design variables, the number of eigenvalues of interest, and the number of design points at which the approximation is sought.
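
    As one concrete instance of the reanalysis idea, the sketch below performs a first-order eigenvalue reanalysis of a perturbed non-Hermitian matrix using matched left and right eigenvectors, so that a modified design needs no full eigensolution. It is a minimal illustration with assumed matrix data, not the paper's generalized Rayleigh quotient or trace-theorem formulas.

    ```python
    import numpy as np
    from scipy.linalg import eig, eigvals

    rng = np.random.default_rng(1)
    n = 50
    A = rng.standard_normal((n, n))           # baseline (non-Hermitian) system matrix
    dA = 1e-3 * rng.standard_normal((n, n))   # small design modification

    w, vl, vr = eig(A, left=True, right=True)

    # First-order perturbation: lambda_i(A+dA) ~ w_i + (vl_i^H dA vr_i)/(vl_i^H vr_i)
    num = np.einsum('ni,nm,mi->i', vl.conj(), dA, vr)
    den = np.einsum('ni,ni->i', vl.conj(), vr)
    w_approx = w + num / den

    w_exact = eigvals(A + dA)
    # Compare each approximate eigenvalue with the closest exact one.
    err = [np.abs(w_exact - wa).min() for wa in w_approx]
    print(max(err))   # small: the linear reanalysis tracks the exact spectrum
    ```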

  3. Overcoming computational uncertainties to reveal chemical sensitivity in single molecule conduction calculations.

    PubMed

    Solomon, Gemma C; Reimers, Jeffrey R; Hush, Noel S

    2005-06-08

    In the calculation of conduction through single molecules, approximations about the geometry and electronic structure of the system are usually made in order to simplify the problem. Previously [G. C. Solomon, J. R. Reimers, and N. S. Hush, J. Chem. Phys. 121, 6615 (2004)], we have shown that, in calculations employing cluster models for the electrodes, proper treatment of the open-shell nature of the clusters is the most important computational feature required to make the results sensitive to variations in the structural and chemical features of the system. Here, we expand this and establish a general hierarchy of requirements involving treatment of geometrical approximations. These approximations are categorized into two classes: those associated with finite-dimensional methods for representing the semi-infinite electrodes, and those associated with the chemisorption topology. We show that ca. 100 unique atoms are required in order to properly characterize each electrode: using fewer atoms leads to nonsystematic variations in conductivity that can overwhelm the subtler changes. The choice of binding site is shown to be the next most important feature, while some effects that are difficult to control experimentally concerning the orientations at each binding site are actually shown to be insignificant. Verification of this result provides a general test for the precision of computational procedures for molecular conductivity. Predictions concerning the dependence of conduction on substituent and other effects on the central molecule are found to be meaningful only when they exceed the uncertainties of the effects associated with binding-site variation.

  4. Overcoming computational uncertainties to reveal chemical sensitivity in single molecule conduction calculations

    NASA Astrophysics Data System (ADS)

    Solomon, Gemma C.; Reimers, Jeffrey R.; Hush, Noel S.

    2005-06-01

    In the calculation of conduction through single molecules, approximations about the geometry and electronic structure of the system are usually made in order to simplify the problem. Previously [G. C. Solomon, J. R. Reimers, and N. S. Hush, J. Chem. Phys. 121, 6615 (2004)], we have shown that, in calculations employing cluster models for the electrodes, proper treatment of the open-shell nature of the clusters is the most important computational feature required to make the results sensitive to variations in the structural and chemical features of the system. Here, we expand this and establish a general hierarchy of requirements involving treatment of geometrical approximations. These approximations are categorized into two classes: those associated with finite-dimensional methods for representing the semi-infinite electrodes, and those associated with the chemisorption topology. We show that ca. 100 unique atoms are required in order to properly characterize each electrode: using fewer atoms leads to nonsystematic variations in conductivity that can overwhelm the subtler changes. The choice of binding site is shown to be the next most important feature, while some effects that are difficult to control experimentally concerning the orientations at each binding site are actually shown to be insignificant. Verification of this result provides a general test for the precision of computational procedures for molecular conductivity. Predictions concerning the dependence of conduction on substituent and other effects on the central molecule are found to be meaningful only when they exceed the uncertainties of the effects associated with binding-site variation.

  5. Including screening in van der Waals corrected density functional theory calculations: the case of atoms and small molecules physisorbed on graphene.

    PubMed

    Silvestrelli, Pier Luigi; Ambrosetti, Alberto

    2014-03-28

    The Density Functional Theory (DFT)/van der Waals-Quantum Harmonic Oscillator-Wannier function (vdW-QHO-WF) method, recently developed to include vdW interactions in approximate DFT by combining the quantum harmonic oscillator model with the maximally localized Wannier function technique, is applied to the cases of atoms and small molecules (X = Ar, CO, H2, H2O) weakly interacting with benzene and with the ideal planar graphene surface. Comparison is also presented with the results obtained by other vdW-corrected DFT schemes, including PBE+D, vdW-DF, vdW-DF2, rVV10, and by the simpler Local Density Approximation (LDA) and semilocal generalized gradient approximation approaches. While for the X-benzene systems all the considered vdW-corrected schemes perform reasonably well, it turns out that an accurate description of the X-graphene interaction requires a proper treatment of many-body contributions and of short-range screening effects, as demonstrated by adopting an improved version of the DFT/vdW-QHO-WF method. We also comment on the widespread attitude of relying on LDA to get a rough description of weakly interacting systems.

  6. Disease control in hatchery fish

    USGS Publications Warehouse

    Fish, F.F.

    1947-01-01

    The method described herein has been extensively tested, both in the laboratory and at the producing hatchery, over a period of several years. Once familiarity with the details of application has been mastered, the reduction in effort required to treat fish is amazing. For example, two men have treated 20 large ponds containing several million fish in one morning with no significant increase in mortality of the fish, whereas a crew of eight men required a full day to treat a single similar pond by hand-dipping the fish, with a subsequent loss approximating 50 percent of the stock.

  7. Combining qualitative and quantitative spatial and temporal information in a hierarchical structure: Approximate reasoning for plan execution monitoring

    NASA Technical Reports Server (NTRS)

    Hoebel, Louis J.

    1993-01-01

    The problem of plan generation (PG) and the problem of plan execution monitoring (PEM), including updating, queries, and resource-bounded replanning, have different reasoning and representation requirements. PEM requires the integration of qualitative and quantitative information. PEM involves receiving data about the world in which a plan or agent is executing. The problem is to quickly determine the relevance of the data, the consistency of the data with respect to the expected effects, and whether execution should continue. Only the spatial and temporal aspects of the plan are addressed for relevance in this work. Current temporal reasoning systems are deficient in computational aspects or expressiveness. This work presents a hybrid qualitative and quantitative system that is fully expressive in its assertion language while offering certain computational efficiencies. In order to proceed, methods incorporating approximate reasoning using hierarchies, notions of locality, constraint expansion, and absolute parameters need to be used, and these are shown to be useful for the anytime nature of PEM.

  8. Mechanical energy flow models of rods and beams

    NASA Technical Reports Server (NTRS)

    Wohlever, J. C.; Bernhard, R. J.

    1992-01-01

    It has been proposed that the flow of mechanical energy through a structural/acoustic system may be modeled in a manner similar to the flow of thermal energy in a heat conduction problem. If this hypothesis is true, it would result in relatively efficient numerical models of structure-borne energy in large built-up structures. Fewer parameters are required to approximate the energy solution than are required to model the characteristic wave behavior of structural vibration using traditional displacement formulations. The energy flow hypothesis is tested in this investigation for both longitudinal vibration in rods and transverse flexural vibrations of beams. The rod is shown to behave approximately according to the thermal energy flow analogy. However, the beam solutions behave significantly differently than predicted by the thermal analogy unless locally space-averaged energy and power are considered. Several techniques for coupling dissimilar rods and beams are also discussed. Illustrations of the solution accuracy of the methods are included.

  9. Visualizing the deep end of sound: plotting multi-parameter results from infrasound data analysis

    NASA Astrophysics Data System (ADS)

    Perttu, A. B.; Taisne, B.

    2016-12-01

    Infrasound is sound below the threshold of human hearing: approximately 20 Hz. The field of infrasound research, like other waveform-based fields, relies on several standard processing methods and data visualizations, including waveform plots and spectrograms. The installation of the International Monitoring System (IMS) global network of infrasound arrays contributed to the resurgence of infrasound research. Array processing is an important method used in infrasound research; however, it produces data sets with a large number of parameters and requires innovative plotting techniques. The goal in designing new figures is to present easily comprehensible, information-rich plots through careful selection of data density and plotting methods.

  10. A unified, multifidelity quasi-Newton optimization method with application to aero-structural design

    NASA Astrophysics Data System (ADS)

    Bryson, Dean Edward

    A model's level of fidelity may be defined as its accuracy in faithfully reproducing a quantity or behavior of interest of a real system. Increasing the fidelity of a model often goes hand in hand with increasing its cost in terms of time, money, or computing resources. The traditional aircraft design process relies upon low-fidelity models for expedience and resource savings. However, the reduced accuracy and reliability of low-fidelity tools often lead to the discovery of design defects or inadequacies late in the design process. These deficiencies result either in costly changes or in the acceptance of a configuration that does not meet expectations. The unknown opportunity cost is the discovery of superior vehicles that leverage phenomena unknown to the designer and not illuminated by low-fidelity tools. Multifidelity methods attempt to blend the increased accuracy and reliability of high-fidelity models with the reduced cost of low-fidelity models. In building surrogate models, where mathematical expressions are used to cheaply approximate the behavior of costly data, low-fidelity models may be sampled extensively to resolve the underlying trend, while high-fidelity data are reserved to correct inaccuracies at key locations. Similarly, in design optimization a low-fidelity model may be queried many times in the search for new, better designs, with a high-fidelity model being exercised only once per iteration to evaluate the candidate design. In this dissertation, a new multifidelity, gradient-based optimization algorithm is proposed. It differs from the standard trust region approach in several ways, stemming from the new method maintaining an approximation of the inverse Hessian, that is, the underlying curvature of the design problem. Whereas the typical trust region approach performs a full sub-optimization using the low-fidelity model at every iteration, the new technique finds a suitable descent direction and focuses the search along it, reducing the number of low-fidelity evaluations required. This narrowing of the search domain also alleviates the burden on the surrogate model corrections between the low- and high-fidelity data. Rather than requiring the surrogate to be accurate in a hyper-volume bounded by the trust region, the model needs only to be accurate along the forward-looking search direction. Maintaining the approximate inverse Hessian also allows the multifidelity algorithm to revert to high-fidelity optimization at any time. In contrast, the standard approach has no memory of the previously computed high-fidelity data. The primary disadvantage of the proposed algorithm is that it may require modifications to the optimization software, whereas standard optimizers may be used as black-box drivers in the typical trust region method. A multifidelity, multidisciplinary simulation of aeroelastic vehicle performance is developed to demonstrate the optimization method. The numerical physics models include body-fitted Euler computational fluid dynamics; linear, panel aerodynamics; linear, finite-element computational structural mechanics; and reduced, modal structural bases. A central element of the multifidelity, multidisciplinary framework is a shared parametric, attributed geometric representation that ensures the analysis inputs are consistent between disciplines and fidelities. The attributed geometry also enables the transfer of data between disciplines.
The new optimization algorithm, a standard trust region approach, and a single-fidelity quasi-Newton method are compared for a series of analytic test functions, using both polynomial chaos expansions and kriging to correct discrepancies between fidelity levels of data. In the aggregate, the new method requires fewer high-fidelity evaluations than the trust region approach in 51% of cases, and the same number of evaluations in 18%. The new approach also requires fewer low-fidelity evaluations, by up to an order of magnitude, in almost all cases. The efficacy of both multifidelity methods compared to single-fidelity optimization depends significantly on the behavior of the high-fidelity model and the quality of the low-fidelity approximation, though savings are realized in a large number of cases. The multifidelity algorithm is also compared to the single-fidelity quasi-Newton method for complex aeroelastic simulations. The vehicle design problem includes variables for planform shape, structural sizing, and cruise condition with constraints on trim and structural stresses. Considering the objective function reduction versus computational expenditure, the multifidelity process performs better in three of four cases in early iterations. However, the enforcement of a contracting trust region slows the multifidelity progress. Even so, leveraging the approximate inverse Hessian, the optimization can be seamlessly continued using high-fidelity data alone. Ultimately, the proposed new algorithm produced better designs in all four cases. Investigating the return on investment in terms of design improvement per computational hour confirms that the multifidelity advantage is greatest in early iterations, and managing the transition to high-fidelity optimization is critical.
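
    A heavily simplified sketch of the core mechanics described above, under assumed details: a BFGS inverse-Hessian estimate is maintained from high-fidelity gradients, while the one-dimensional search along each descent direction uses only cheap low-fidelity evaluations. The two analytic functions stand in for the high- and low-fidelity analyses; this is not the dissertation's algorithm, only an illustration of the idea.

    ```python
    import numpy as np

    def f_hi(x):   # stand-in "high-fidelity" analysis (e.g., Euler CFD)
        return (x[0] - 1.0)**2 + 10.0*(x[1] - x[0]**2)**2

    def f_lo(x):   # stand-in "low-fidelity" analysis (e.g., panel method)
        return (x[0] - 1.1)**2 + 9.0*(x[1] - x[0]**2)**2

    def grad(f, x, h=1e-6):
        """Central-difference gradient of f at x."""
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x); e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2.0*h)
        return g

    x = np.array([-1.0, 1.0])
    H = np.eye(2)                       # approximate inverse Hessian
    g = grad(f_hi, x)
    for _ in range(40):
        d = -H @ g                      # quasi-Newton descent direction
        # Focus the 1-D search along d using only low-fidelity evaluations:
        steps = np.linspace(0.05, 1.5, 30)
        a = steps[np.argmin([f_lo(x + s*d) for s in steps])]
        x_new = x + a*d
        g_new = grad(f_hi, x_new)       # one high-fidelity gradient per iteration
        s_vec, y = x_new - x, g_new - g
        if s_vec @ y > 1e-12:           # standard BFGS inverse-Hessian update
            rho = 1.0 / (s_vec @ y)
            I = np.eye(2)
            H = (I - rho*np.outer(s_vec, y)) @ H @ (I - rho*np.outer(y, s_vec)) \
                + rho*np.outer(s_vec, s_vec)
        x, g = x_new, g_new
    print(x)                            # approaches the high-fidelity optimum near [1, 1]
    ```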

  11. Solving radiative transfer with line overlaps using Gauss-Seidel algorithms

    NASA Astrophysics Data System (ADS)

    Daniel, F.; Cernicharo, J.

    2008-09-01

    Context: The improvement in observational facilities requires refining the modelling of the geometrical structures of astrophysical objects. Nevertheless, for complex problems such as line overlap in molecules showing hyperfine structure, a detailed analysis still requires a large amount of computing time, and thus misinterpretation due to an undersampling of the whole space of parameters cannot be dismissed. Aims: We extend the discussion of the implementation of the Gauss-Seidel algorithm in spherical geometry and include the case of hyperfine line overlap. Methods: We first review the basics of the short characteristics method that is used to solve the radiative transfer equations. Details are given on the determination of the Lambda operator in spherical geometry. The Gauss-Seidel algorithm is then described and, by analogy to the plane-parallel case, we show how to introduce it in spherical geometry. Doing so requires some approximations in order to keep the algorithm competitive. Finally, line overlap effects are included. Results: The convergence speed of the algorithm is compared to the usual Jacobi iterative schemes. The gain in the number of iterations is typically a factor of 2 and 4 for the two implementations of the Gauss-Seidel algorithm. This is obtained despite the introduction of approximations in the algorithm. A comparison of results obtained with and without line overlap for N2H+, HCN, and HNC shows that the J=3-2 line intensities are significantly underestimated in models where line overlap is neglected.
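
    The iteration-count gain reported above can be illustrated on a generic linear system: Gauss-Seidel reuses freshly updated unknowns within a sweep, whereas Jacobi uses only the previous iterate. The sketch below is a toy comparison on an assumed diagonally dominant system, not the radiative-transfer implementation itself.

    ```python
    import numpy as np

    n = 100
    rng = np.random.default_rng(2)
    A = rng.random((n, n)) + n * np.eye(n)   # diagonally dominant test matrix
    b = rng.random(n)
    x_ref = np.linalg.solve(A, b)
    D = np.diag(A)

    def iterate(gauss_seidel, tol=1e-10, max_it=1000):
        """Return the number of sweeps needed to reach the reference solution."""
        x = np.zeros(n)
        for k in range(max_it):
            if gauss_seidel:
                for i in range(n):               # uses freshly updated entries
                    x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / D[i]
            else:
                x = (b - A @ x + D * x) / D      # Jacobi: all updates from old iterate
            if np.linalg.norm(x - x_ref) < tol:
                return k + 1
        return max_it

    print(iterate(False), iterate(True))  # Gauss-Seidel typically needs ~half the sweeps
    ```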

  12. Fusion reaction cross-sections using the Wong model within Skyrme energy density based semiclassical extended Thomas Fermi approach

    NASA Astrophysics Data System (ADS)

    Kumar, Raj; Sharma, Manoj K.; Gupta, Raj K.

    2011-11-01

    First, the nuclear proximity potential, obtained by using the semiclassical extended Thomas Fermi (ETF) approach in the Skyrme energy density formalism (SEDF), is shown to give more realistic barriers in the frozen density approximation, as compared to the sudden approximation. Then, taking advantage of the fact that, in the ETF method, different Skyrme forces give different barriers (height, position and curvature), we use the ℓ-summed extended-Wong model of Gupta and collaborators (2009) [1] under the frozen densities approximation for calculating the cross-sections, where the Skyrme force is chosen with proper barrier characteristics, not requiring additional "barrier modification" effects (lowering or narrowing, etc.), for a best fit to data at sub-barrier energies. The method is applied to capture cross-section data from 48Ca + 238U, 244Pu, and 248Cm reactions and to fusion-evaporation cross-sections from 58Ni + 58Ni, 64Ni + 64Ni, and 64Ni + 100Mo reactions, with the effects of deformations and orientations of nuclei included wherever required. Interestingly, whereas the capture cross-sections in Ca-induced reactions could be fitted with any force, such as SIII, SV and GSkI, by allowing a small change of a couple of units in the deduced ℓ-values at below-barrier energies, the near-barrier data point of the 48Ca + 248Cm reaction could not be fitted with the ℓ-values deduced for below-barrier energies, calling for a check of the data. On the other hand, the fusion-evaporation cross-sections in Ni-induced reactions at sub-barrier energies required different Skyrme forces, representing "modifications of the barrier", for the best fit to data at all incident center-of-mass energies, displaying a kind of fusion hindrance at sub-barrier energies. This barrier modification effect is taken care of here by using different Skyrme forces for reactions belonging to different regions of the periodic table. Note that more than one Skyrme force (with identical barrier characteristics) could equally well fit the same data.

  13. Rapid phenotypic antimicrobial susceptibility testing using nanoliter arrays.

    PubMed

    Avesar, Jonathan; Rosenfeld, Dekel; Truman-Rosentsvit, Marianna; Ben-Arye, Tom; Geffen, Yuval; Bercovici, Moran; Levenberg, Shulamit

    2017-07-18

    Antibiotic resistance is a major global health concern that requires action across all sectors of society. In particular, to allow conservative and effective use of antibiotics, clinical settings require better diagnostic tools that provide rapid determination of antimicrobial susceptibility. We present a method for rapid and scalable antimicrobial susceptibility testing using stationary nanoliter droplet arrays that is capable of delivering results in approximately half the time of conventional methods, allowing its results to be used the same working day. In addition, we present an algorithm for automated data analysis and a multiplexing system promoting practicality and translatability for clinical settings. We test the efficacy of our approach on numerous clinical isolates and demonstrate a 2-day reduction in diagnostic time when testing bacteria isolated directly from urine samples.

  14. An accurate method for evaluating the kernel of the integral equation relating lift to downwash in unsteady potential flow

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.

    1982-01-01

    The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation currently most often used for this purpose. The method can be used to generate approximations attaining any desired trade-off between accuracy and computing cost.
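
    A minimal sketch of the fitting idea, with an invented stand-in for the algebraic kernel factor: the exponents form a geometric sequence, the coefficients come from linear least squares, and the exponent multiplier is chosen here by a simple 1-D scan (the paper computes it by least squares as well). All numerical choices are illustrative assumptions.

    ```python
    import numpy as np

    f = lambda x: 1.0 / np.sqrt(1.0 + x**2)      # stand-in algebraic kernel part
    x = np.linspace(0.0, 10.0, 400)
    y = f(x)
    r, n_terms = 2.0, 8                          # geometric ratio and term count

    def fit(b):
        """Least-squares coefficients for sum_i a_i * exp(-b * r**i * x)."""
        E = np.exp(-b * r**np.arange(n_terms)[None, :] * x[:, None])
        a, *_ = np.linalg.lstsq(E, y, rcond=None)
        return a, np.max(np.abs(E @ a - y))

    bs = np.linspace(0.01, 1.0, 100)             # scan the exponent multiplier
    errs = [fit(b)[1] for b in bs]
    b_best = bs[int(np.argmin(errs))]
    a_best, err = fit(b_best)
    print(b_best, err)                           # multiplier and max fit error
    ```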

  15. Three-Dimensional Wiring for Extensible Quantum Computing: The Quantum Socket

    NASA Astrophysics Data System (ADS)

    Béjanin, J. H.; McConkey, T. G.; Rinehart, J. R.; Earnest, C. T.; McRae, C. R. H.; Shiri, D.; Bateman, J. D.; Rohanizadegan, Y.; Penava, B.; Breul, P.; Royak, S.; Zapatka, M.; Fowler, A. G.; Mariantoni, M.

    2016-10-01

    Quantum computing architectures are on the verge of scalability, a key requirement for the implementation of a universal quantum computer. The next stage in this quest is the realization of quantum error-correction codes, which will mitigate the impact of faulty quantum information on a quantum computer. Architectures with ten or more quantum bits (qubits) have been realized using trapped ions and superconducting circuits. While these implementations are potentially scalable, true scalability will require systems engineering to combine quantum and classical hardware. One technology demanding imminent efforts is the realization of a suitable wiring method for the control and the measurement of a large number of qubits. In this work, we introduce an interconnect solution for solid-state qubits: the quantum socket. The quantum socket fully exploits the third dimension to connect classical electronics to qubits with higher density and better performance than two-dimensional methods based on wire bonding. The quantum socket is based on spring-mounted microwires—the three-dimensional wires—that push directly on a microfabricated chip, making electrical contact. A small wire cross section (approximately 1 mm), nearly nonmagnetic components, and functionality at low temperatures make the quantum socket ideal for operating solid-state qubits. The wires have a coaxial geometry and operate over a frequency range from dc to 8 GHz, with a contact resistance of approximately 150 mΩ, an impedance mismatch of approximately 10 Ω, and minimal cross talk. As a proof of principle, we fabricate and use a quantum socket to measure high-quality superconducting resonators at a temperature of approximately 10 mK. Quantum error-correction codes such as the surface code will largely benefit from the quantum socket, which will make it possible to address qubits located on a two-dimensional lattice. The present implementation of the socket could be readily extended to accommodate a quantum processor with a (10 × 10)-qubit lattice, which would allow for the realization of a simple quantum memory.

  16. A Density Perturbation Method to Study the Eigenstructure of Two-Phase Flow Equation Systems

    NASA Astrophysics Data System (ADS)

    Cortes, J.; Debussche, A.; Toumi, I.

    1998-12-01

    Many interesting and challenging physical mechanisms are concerned with the mathematical notion of eigenstructure. In two-fluid models, complex phasic interactions yield a complex eigenstructure which may raise numerous problems in numerical simulations. In this paper, we develop a perturbation method to examine the eigenvalues and eigenvectors of two-fluid models. This original method, based on the stiffness of the density ratio, provides a convenient tool to study the relevance of pressure-momentum interactions and allows us to obtain precise approximations of the whole flow eigendecomposition at minor computational cost. The Roe scheme is successfully implemented and some numerical tests are presented.

  17. Forebody and base region real gas flow in severe planetary entry by a factored implicit numerical method. II - Equilibrium reactive gas

    NASA Technical Reports Server (NTRS)

    Davy, W. C.; Green, M. J.; Lombard, C. K.

    1981-01-01

    The factored-implicit, gas-dynamic algorithm has been adapted to the numerical simulation of equilibrium reactive flows. Changes required in the perfect gas version of the algorithm are developed, and the method of coupling gas-dynamic and chemistry variables is discussed. A flow-field solution that approximates a Jovian entry case was obtained by this method and compared with the same solution obtained by HYVIS, a computer program much used for the study of planetary entry. Comparison of surface pressure distribution and stagnation line shock-layer profiles indicates that the two solutions agree well.

  18. Application of shifted Jacobi pseudospectral method for solving (in)finite-horizon min-max optimal control problems with uncertainty

    NASA Astrophysics Data System (ADS)

    Nikooeinejad, Z.; Delavarkhalafi, A.; Heydari, M.

    2018-03-01

    The difficulty of solving min-max optimal control problems (M-MOCPs) with uncertainty using generalised Euler-Lagrange equations is caused by the combination of split boundary conditions, nonlinear differential equations, and the manner in which the final time is treated. In this investigation, the shifted Jacobi pseudospectral method (SJPM) is proposed as a numerical technique for solving two-point boundary value problems (TPBVPs) in M-MOCPs for several boundary states. At first, a novel framework of approximate solutions which satisfy the split boundary conditions automatically for various boundary states is presented. Then, by applying the generalised Euler-Lagrange equations and expanding the required approximate solutions as elements of shifted Jacobi polynomials, finding a solution of TPBVPs in nonlinear M-MOCPs with uncertainty is reduced to the solution of a system of algebraic equations. Moreover, the Jacobi polynomials are particularly useful for boundary value problems in unbounded domains, which allows us to solve infinite- as well as finite- and free-final-time problems by the domain truncation method. Some numerical examples are given to demonstrate the accuracy and efficiency of the proposed method. A comparative study between the proposed method and other existing methods shows that the SJPM is simple and accurate.
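
    The core approximation step, expanding a trajectory in shifted Jacobi polynomials, can be sketched as follows. The Jacobi parameters, interval, and test function are assumptions for illustration only; the paper's full TPBVP machinery is not reproduced.

    ```python
    import numpy as np
    from scipy.special import eval_jacobi

    alpha, beta, L, N = 0.5, 0.5, 2.0, 12        # assumed Jacobi parameters and degree
    t = np.linspace(0.0, L, 200)                 # collocation grid (uniform here)

    # Shifted Jacobi basis on [0, L]: P_n^(alpha,beta)(2t/L - 1).
    basis = np.column_stack(
        [eval_jacobi(n, alpha, beta, 2.0*t/L - 1.0) for n in range(N + 1)]
    )

    u = np.exp(-t) * np.sin(3.0 * t)             # stand-in "state trajectory"
    c, *_ = np.linalg.lstsq(basis, u, rcond=None)
    print(np.max(np.abs(basis @ c - u)))         # spectral-type accuracy for smooth u
    ```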

  19. Probabilistic learning of nonlinear dynamical systems using sequential Monte Carlo

    NASA Astrophysics Data System (ADS)

    Schön, Thomas B.; Svensson, Andreas; Murray, Lawrence; Lindsten, Fredrik

    2018-05-01

    Probabilistic modeling provides the capability to represent and manipulate uncertainty in data, models, predictions and decisions. We are concerned with the problem of learning probabilistic models of dynamical systems from measured data. Specifically, we consider learning of probabilistic nonlinear state-space models. There is no closed-form solution available for this problem, implying that we are forced to use approximations. In this tutorial we provide a self-contained introduction to one of the state-of-the-art methods, the particle Metropolis-Hastings algorithm, which has proven to offer a practical approximation. This is a Monte Carlo-based method in which the particle filter is used to guide a Markov chain Monte Carlo method through the parameter space. One of the key merits of the particle Metropolis-Hastings algorithm is that it is guaranteed to converge to the "true solution" under mild assumptions, despite being based on a particle filter with only a finite number of particles. We also provide a motivating numerical example illustrating the method using a modeling language tailored for sequential Monte Carlo methods. The intention of modeling languages of this kind is to open up the power of sophisticated Monte Carlo methods, including particle Metropolis-Hastings, to a large group of users without requiring them to know all the underlying mathematical details.
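
    A compact sketch of the algorithm for a scalar linear-Gaussian state-space model (chosen so the example stays short): a bootstrap particle filter estimates the log-likelihood, which drives a random-walk Metropolis-Hastings chain over one parameter. Model and tuning values are illustrative, not from the tutorial.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    T, Np = 100, 200
    phi_true, q, r = 0.8, 0.5, 1.0               # x_t = phi x_{t-1} + v_t, y_t = x_t + e_t

    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi_true * x[t-1] + np.sqrt(q) * rng.standard_normal()
    y = x + np.sqrt(r) * rng.standard_normal(T)  # simulated measurements

    def log_lik(phi):
        """Bootstrap particle filter estimate of log p(y | phi)."""
        parts = np.zeros(Np)
        ll = 0.0
        for t in range(T):
            parts = phi * parts + np.sqrt(q) * rng.standard_normal(Np)
            logw = -0.5 * ((y[t] - parts)**2 / r + np.log(2*np.pi*r))
            m = logw.max()                        # log-sum-exp for stability
            w = np.exp(logw - m)
            ll += m + np.log(w.mean())
            parts = parts[rng.choice(Np, Np, p=w / w.sum())]   # resample
        return ll

    phi, ll = 0.5, log_lik(0.5)
    chain = []
    for _ in range(500):                          # random-walk MH over phi (flat prior)
        phi_prop = phi + 0.05 * rng.standard_normal()
        ll_prop = log_lik(phi_prop)
        if np.log(rng.random()) < ll_prop - ll:
            phi, ll = phi_prop, ll_prop
        chain.append(phi)
    print(np.mean(chain[100:]))                   # posterior mean near phi_true
    ```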

  20. A new concept for airship mooring and ground handling

    NASA Technical Reports Server (NTRS)

    Vaughan, J. C.

    1975-01-01

    Calculations were made to determine the feasibility of applying the negative air cushion (NAC) principle to the mooring of airships. Pressures required for the inflation of the flexible trunks are not excessive, and the maintenance of sufficient hold-down force is possible in winds up to 50 knots. Fabric strength requirements for a typical NAC sized for a 10-million-cubic-foot airship were found to be approximately 200 lbs./in. Corresponding power requirements range between 66 HP and 5600 HP. No consideration was given to the internal airship loads caused by the use of a NAC, and further analysis in much greater detail is required before this method could be applied to an actual design; however, the basic concept appears to be sound and no problem areas of a fundamental nature are apparent.

  1. A cohort study on elderly individuals newly certified as requiring long-term care: comparison of rates of care-needs certifications between basic checklist respondents/specific health examinees and non-respondents/non-examinees of 37,000 elderlies in a city.

    PubMed

    Fujimoto, Megumi; Katsura, Toshiki; Hoshino, Akiko; Shizawa, Miho; Usui, Kanae; Yokoyama, Eri; Hara, Mayumi

    2018-05-01

    Objective: The rates of care-needs certification were mainly compared between two cohorts: 7,820 specific health checkup examinees/basic checklist respondents and 29,234 non-examinees/non-respondents. Subjects and Methods: Among approximately 37,000 elderly citizens of X City, the number of individuals newly certified as requiring long-term care was observed from the date of the first specific health checkup in 2008 to March 31, 2013. The aggregated totals of these individuals and associated factors were evaluated. Results: 1. Support Required 1, Support Required 2, and Long-term Care Required (level 1) certified individuals accounted for approximately 80% of newly certified individuals aged 65-74 years. Newly certified individuals aged 75 years and over had similar results, with 37.2% of them being certified Support Required 1, 19.4% certified Support Required 2, and 22.9% certified Long-term Care Required (level 1). 2. The primary factors for care-needs certification in individuals aged 65-74 years were arthritic disorder in 27.6%, falls and bone fractures in 11.3%, and malignant neoplasm and cerebrovascular disease, among others. This was similar for individuals aged 75 years or over. 3. Of the 7,820 specific health checkup examinees/basic checklist respondents, 1,280 were newly certified as requiring long-term care (16.4%), compared to 7,878 (26.9%) of the 29,234 non-examinees/non-respondents. Therefore, the latter cohort had a significantly higher rate of individuals who were newly certified as requiring long-term care. Conclusion: Both specific health checkups and basic checklists are effective health policies to protect against frailty in community-dwelling elderly people.

  2. A novel condition for stable nonlinear sampled-data models using higher-order discretized approximations with zero dynamics.

    PubMed

    Zeng, Cheng; Liang, Shan; Xiang, Shuwen

    2017-05-01

    Continuous-time systems are usually modelled in the form of ordinary differential equations arising from physical laws. However, to use these models in practice, and to utilize, analyze, or transmit the data from such systems, the models must first be discretized. More importantly, for digital control of a continuous-time nonlinear system, a good sampled-data model is required. This paper investigates a new consistency condition which is weaker than previously presented similar results. Moreover, given the stability of the high-order approximate model with stable zero dynamics, the novel condition presented stabilizes the exact sampled-data model of the nonlinear system for sufficiently small sampling periods. An insightful interpretation of the obtained results can be made in terms of the stable sampling zero dynamics, and the new consistency condition is surprisingly associated with the relative degree of the nonlinear continuous-time system. Our controller design, based on the higher-order approximate discretized model, extends existing methods, which mainly deal with the Euler approximation. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Comparison of three methods for wind turbine capacity factor estimation.

    PubMed

    Ditkovich, Y; Kuperman, A

    2014-01-01

    Three approaches to calculating the capacity factor of fixed-speed wind turbines are reviewed and compared using a case study. The first, "quasiexact" approach utilizes discrete raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. On the other hand, the second, "analytic" approach employs a continuous probability distribution function fitted to the wind data, as well as a continuous turbine power curve resulting from double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while being an approximation, can be solved analytically, thus providing valuable insight into the aspects affecting the capacity factor. Moreover, several other merits of wind turbine performance may be derived from the analytical approach. The third, "approximate" approach, valid for Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, requiring only the rated power and rotor diameter of the turbine. It is shown that the results obtained by the three approaches are very close, reinforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation.
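
    A short sketch of the "quasiexact" numerical calculation for Rayleigh winds, with an invented cubic power curve; the turbine ratings and the site's average wind speed below are assumptions for illustration, not the case-study data.

    ```python
    import numpy as np

    v = np.linspace(0.0, 30.0, 601)                # wind-speed grid [m/s]
    dv = v[1] - v[0]
    v_in, v_rated, v_out, P_rated = 3.0, 12.0, 25.0, 2.0e6   # assumed turbine data [m/s, W]

    # Piecewise power curve: cubic rise between cut-in and rated, flat to cut-out.
    P = np.where((v >= v_in) & (v < v_rated),
                 P_rated * (v**3 - v_in**3) / (v_rated**3 - v_in**3), 0.0)
    P = np.where((v >= v_rated) & (v <= v_out), P_rated, P)

    v_avg = 7.0                                    # assumed site mean wind speed [m/s]
    c = 2.0 * v_avg / np.sqrt(np.pi)               # Rayleigh scale parameter from the mean
    pdf = (2.0 * v / c**2) * np.exp(-(v / c)**2)   # Rayleigh probability density

    cf = np.sum(P * pdf) * dv / P_rated            # capacity factor CF = E[P] / P_rated
    print(round(cf, 3))
    ```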

  4. Multi-Component Diffusion with Application To Computational Aerothermodynamics

    NASA Technical Reports Server (NTRS)

    Sutton, Kenneth; Gnoffo, Peter A.

    1998-01-01

    The accuracy and complexity of solving multicomponent gaseous diffusion using the detailed multicomponent equations, the Stefan-Maxwell equations, and two commonly used approximate equations have been examined in a two-part study. Part I examined the equations in a basic study with specified inputs in which the results are applicable to many applications. Part II addressed the application of the equations in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) computational code for high-speed entries into Earth's atmosphere. The results showed that the presented iterative scheme for solving the Stefan-Maxwell equations is an accurate and effective method as compared with solutions of the detailed equations. In general, good accuracy with the approximate equations cannot be guaranteed for a species or all species in a multicomponent mixture. "Corrected" forms of the approximate equations that ensured the diffusion mass fluxes sum to zero, as required, were more accurate than the uncorrected forms. Good accuracy, as compared with the Stefan-Maxwell results, was obtained with the "corrected" approximate equations in defining the heating rates for the three Earth entries considered in Part II.

  5. An implicit spatial and high-order temporal finite difference scheme for 2D acoustic modelling

    NASA Astrophysics Data System (ADS)

    Wang, Enjiang; Liu, Yang

    2018-01-01

    The finite difference (FD) method exhibits great superiority over other numerical methods due to its easy implementation and small computational requirements. We propose an effective FD method, characterised by implicit spatial and high-order temporal schemes, to reduce both the temporal and spatial dispersions simultaneously. For the temporal derivative, apart from the conventional second-order FD approximation, a special rhombus FD scheme is included to reach high-order accuracy in time. Compared with the Lax-Wendroff FD scheme, this scheme can achieve nearly the same temporal accuracy but requires fewer floating-point operations and thus less computational cost when the same operator length is adopted. For the spatial derivatives, we adopt the implicit FD scheme to improve the spatial accuracy. Apart from the existing Taylor-series-expansion-based FD coefficients, we derive implicit spatial FD coefficients based on least-squares optimisation. Dispersion analysis and modelling examples demonstrate that our proposed method can effectively decrease both the temporal and spatial dispersions and thus can provide more accurate wavefields.
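
    For reference, the Taylor-series-based FD coefficients that serve as the baseline here can be generated by solving a small Vandermonde system, as in the sketch below; the paper's least-squares-optimised implicit coefficients are not reproduced.

    ```python
    import numpy as np
    from math import factorial

    def fd_coeffs(offsets, deriv):
        """Weights w with sum_j w_j f(x + offsets_j*h) ~ h**deriv * f^(deriv)(x)."""
        offsets = np.asarray(offsets, dtype=float)
        n = offsets.size
        V = np.vander(offsets, n, increasing=True).T   # V[k, j] = offsets_j**k
        rhs = np.zeros(n)
        rhs[deriv] = factorial(deriv)                  # match the Taylor term k = deriv
        return np.linalg.solve(V, rhs)

    # 4th-order-accurate central stencil for the second derivative:
    print(fd_coeffs([-2, -1, 0, 1, 2], 2))             # [-1/12, 4/3, -5/2, 4/3, -1/12]
    ```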

  6. Novel surgical performance evaluation approximates Standardized Incidence Ratio with high accuracy at simple means.

    PubMed

    Gabbay, Itay E; Gabbay, Uri

    2013-01-01

    Excess adverse events may be attributable to poor surgical performance but also to case-mix, which is controlled for through the Standardized Incidence Ratio (SIR). SIR calculations can be complicated, resource consuming, and unfeasible in some settings. This article suggests a novel method for SIR approximation. In order to evaluate a potential SIR surrogate measure we predefined acceptance criteria. We developed a new measure, the Approximate Risk Index (ARI). "Number Needed for Event" (NNE) is the theoretical number of patients needed "to produce" one adverse event. ARI is defined as the quotient of G_e, the number of patients needed for no observed events, by G_a, the total number of patients treated. Our evaluation compared 2500 surgical units and over 3 million heterogeneous-risk surgical patients induced through a computerized simulation. Each surgical unit's data were computed for SIR and ARI to evaluate compliance with the predefined criteria. Approximation was evaluated by correlation analysis and performance-prediction capability by Receiver Operating Characteristic (ROC) analysis. ARI strongly correlates with SIR (r² = 0.87, p < 0.05). ARI prediction of excessive risk revealed excellent ROC performance (Area Under the Curve > 0.9), with 87% sensitivity and 91% specificity. ARI provides a good approximation of SIR and excellent prediction capability. ARI is simple and cost-effective, as it requires thorough risk evaluation of only the adverse-event patients. ARI can provide a crucial screening and performance-evaluation quality-control tool. The ARI method may suit other clinical and epidemiological settings where a relatively small fraction of the entire population is affected. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  7. Diabat Interpolation for Polymorph Free-Energy Differences.

    PubMed

    Kamat, Kartik; Peters, Baron

    2017-02-02

    Existing methods to compute free-energy differences between polymorphs use harmonic approximations, advanced non-Boltzmann bias sampling techniques, and/or multistage free-energy perturbations. This work demonstrates how Bennett's diabat interpolation method (J. Comput. Phys. 1976, 22, 245) can be combined with energy gaps from lattice-switch Monte Carlo techniques (Phys. Rev. E 2000, 61, 906) to swiftly estimate polymorph free-energy differences. The new method requires only two unbiased molecular dynamics simulations, one for each polymorph. To illustrate the new method, we compute the free-energy difference between face-centered cubic and body-centered cubic polymorphs for a Gaussian core solid. We discuss the justification for parabolic models of the free-energy diabats and similarities to methods that have been used in studies of electron transfer.

  8. Retrieving Land Surface Temperature from Hyperspectral Thermal Infrared Data Using a Multi-Channel Method

    PubMed Central

    Zhong, Xinke; Huo, Xing; Ren, Chao; Labed, Jelila; Li, Zhao-Liang

    2016-01-01

    Land Surface Temperature (LST) is a key parameter in climate systems. The methods for retrieving LST from hyperspectral thermal infrared data either require accurate atmospheric profile data or require thousands of continuous channels. We aim to retrieve LST for natural land surfaces from hyperspectral thermal infrared data using an adapted multi-channel method that takes Land Surface Emissivity (LSE) properly into consideration. In the adapted method, LST can be retrieved as a linear function of 36 brightness temperatures at the Top of Atmosphere (TOA), using channels where LSE has high values. We evaluated the adapted method using simulation data at nadir and satellite data near nadir. The Root Mean Square Error (RMSE) of the LST retrieved from the simulation data is 0.90 K. Compared with an LST product from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on Meteosat, the error in the LST retrieved from the Infrared Atmospheric Sounding Interferometer (IASI) is approximately 1.6 K. The adapted method can be used for near-real-time production of an LST product and to provide the physical method to simultaneously retrieve atmospheric profiles, LST, and LSE with a first-guess LST value. The limitations of the adapted method are that it requires the minimum LSE in the spectral interval of 800–950 cm⁻¹ to be larger than 0.95 and that it has not been extended to off-nadir measurements. PMID:27187408

  9. Integrated System-Level Optimization for Concurrent Engineering With Parametric Subsystem Modeling

    NASA Technical Reports Server (NTRS)

    Schuman, Todd; DeWeck, Oliver L.; Sobieski, Jaroslaw

    2005-01-01

    The introduction of concurrent design practices to the aerospace industry has greatly increased the productivity of engineers and teams during design sessions as demonstrated by JPL's Team X. Simultaneously, advances in computing power have given rise to a host of potent numerical optimization methods capable of solving complex multidisciplinary optimization problems containing hundreds of variables, constraints, and governing equations. Unfortunately, such methods are tedious to set up and require significant amounts of time and processor power to execute, thus making them unsuitable for rapid concurrent engineering use. This paper proposes a framework for Integration of System-Level Optimization with Concurrent Engineering (ISLOCE). It uses parametric neural-network approximations of the subsystem models. These approximations are then linked to a system-level optimizer that is capable of reaching a solution quickly due to the reduced complexity of the approximations. The integration structure is described in detail and applied to the multiobjective design of a simplified Space Shuttle external fuel tank model. Further, a comparison is made between the new framework and traditional concurrent engineering (without system optimization) through an experimental trial with two groups of engineers. Each method is evaluated in terms of optimizer accuracy, time to solution, and ease of use. The results suggest that system-level optimization, running as a background process during integrated concurrent engineering sessions, is potentially advantageous as long as it is judiciously implemented.

  10. Calculation of the detection limit in radiation measurements with systematic uncertainties

    NASA Astrophysics Data System (ADS)

    Kirkpatrick, J. M.; Russ, W.; Venkataraman, R.; Young, B. M.

    2015-06-01

    The detection limit (LD) or Minimum Detectable Activity (MDA) is an a priori evaluation of assay sensitivity intended to quantify the suitability of an instrument or measurement arrangement for the needs of a given application. Traditional approaches, as pioneered by Currie, rely on Gaussian approximations to yield simple, closed-form solutions, and neglect the effects of systematic uncertainties in the instrument calibration. These approximations are applicable over a wide range of applications, but are of limited use in low-count applications, when high confidence values are required, or when systematic uncertainties are significant. One proposed modification to the Currie formulation attempts to account for systematic uncertainties within a Gaussian framework. We have previously shown that this approach results in an approximation formula that works best only for small values of the relative systematic uncertainty, for which the modification of Currie's method is least necessary, and that it significantly overestimates the detection limit or gives infinite or otherwise non-physical results for larger systematic uncertainties, where such a correction would be the most useful. We have developed an alternative approach for calculating detection limits based on realistic statistical modeling of the counting distributions, which accurately represents statistical and systematic uncertainties. Instead of a closed-form solution, numerical and iterative methods are used to evaluate the result. Accurate detection limits can be obtained by this method for the general case.
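
    The contrast drawn above can be made concrete: Currie's well-known Gaussian closed form next to a numerical Poisson computation of the detection limit that avoids the normal approximation. The background level and confidence choices below are illustrative, and this sketch omits the systematic uncertainties that the authors' full model treats.

    ```python
    import numpy as np
    from scipy.stats import poisson

    B = 9.0                          # expected background counts (illustrative)
    alpha = beta = 0.05              # false-positive and false-negative rates

    # Currie's Gaussian closed form (paired blank, alpha = beta = 0.05):
    L_D_currie = 2.71 + 4.65 * np.sqrt(B)

    # Numerical Poisson version: the critical level L_C is the smallest threshold
    # whose false-positive rate under background alone does not exceed alpha...
    L_C = int(poisson.ppf(1.0 - alpha, B))
    # ...and L_D is the smallest mean signal S whose counts exceed L_C with
    # probability at least 1 - beta, found by bisection on the survival function.
    lo, hi = 0.0, 100.0
    for _ in range(60):
        S = 0.5 * (lo + hi)
        if poisson.sf(L_C, B + S) >= 1.0 - beta:
            hi = S
        else:
            lo = S
    print(L_D_currie, hi)            # the two agree when B is large enough for Gaussian statistics
    ```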

  11. Reliable before-fabrication forecasting of normal and touch mode MEMS capacitive pressure sensor: modeling and simulation

    NASA Astrophysics Data System (ADS)

    Jindal, Sumit Kumar; Mahajan, Ankush; Raghuwanshi, Sanjeev Kumar

    2017-10-01

    An analytical model and numerical simulation of the performance of MEMS capacitive pressure sensors in both normal and touch modes are required to predict the expected behavior of the sensor prior to fabrication. Obtaining such information should be based on a complete analysis of performance parameters such as the deflection of the diaphragm, the change of capacitance when the diaphragm deflects, and the sensitivity of the sensor. In the literature, limited work has been carried out on the above-stated issue; moreover, due to the approximation factors of polynomials, tolerance errors cannot be overlooked. Reliable before-fabrication forecasting requires exact mathematical calculation of the parameters involved. A second-order polynomial equation is calculated mathematically for the key performance parameters of both modes. This eliminates the approximation factor, and an exact result can be studied while maintaining high accuracy. The elimination of approximation factors and the approach to exact results are based on a new design parameter (δ) that we propose. The design parameter gives the designers an initial hint of how the sensor will behave once it is fabricated. The complete work is aided by extensive mathematical detailing of all the parameters involved. Next, we verified our claims using MATLAB® simulation. Since MATLAB® effectively provides the simulation theory for the design approach, the more complicated finite element method is not used.

  12. Uniform Foam Crush Testing for Multi-Mission Earth Entry Vehicle Impact Attenuation

    NASA Technical Reports Server (NTRS)

    Patterson, Byron W.; Glaab, Louis J.

    2012-01-01

    Multi-Mission Earth Entry Vehicles (MMEEVs) are blunt-body vehicles designed with the purpose of transporting payloads from outer space to the surface of the Earth. To achieve high reliability and minimum weight, MMEEVs avoid use of limited-reliability systems, such as parachutes and retro-rockets, instead using built-in impact attenuators to absorb energy remaining at impact to meet landing loads requirements. The Multi-Mission Systems Analysis for Planetary Entry (M-SAPE) parametric design tool is used to facilitate the design of MMEEVs and develop the trade space. Testing was conducted to characterize the material properties of several candidate impact foam attenuators to enhance M-SAPE analysis. In the current effort, four different Rohacell foams are tested at three different uniform strain rates (approximately 0.17%/s, 100%/s, and 13,600%/s). The primary data analysis method uses a global data smoothing technique in the frequency domain to remove noise and system natural frequencies. The results from the data indicate that the filter and smoothing technique are successful in identifying the foam crush event and removing aberrations. The effect of strain rate increases with increasing foam density. The 71-WF-HT foam may support Mars Sample Return requirements. Several recommendations to improve the drop tower test technique are identified.

  13. Visual Tracking Using 3D Data and Region-Based Active Contours

    DTIC Science & Technology

    2016-09-28

    adaptive control strategies which explicitly take uncertainty into account. Filtering methods ranging from the classical Kalman filters valid for...linear systems to the much more general particle filters also fit into this framework in a very natural manner. In particular, the particle filtering ...the number of samples required for accurate filtering increases with the dimension of the system noise. In our approach, we approximate curve

  14. Decentralized Network Interdiction Games

    DTIC Science & Technology

    2015-12-31

    approach is termed the sample average approximation (SAA) method, and theories on the asymptotic convergence to the original problem's optimal...used in the SAA method's convergence. While we provided detailed proof of such convergence in [P3], a side benefit of the proof is that it weakens the...conditions required when applying the general SAA approach to the block-structured stochastic programming problem 17. As the conditions known in the

  15. An Investigation of a Photographic Technique of Measuring High Surface Temperatures

    NASA Technical Reports Server (NTRS)

    Siviter, James H., Jr.; Strass, H. Kurt

    1960-01-01

    A photographic method of temperature determination has been developed to measure elevated temperatures of surfaces. The technique presented herein minimizes calibration procedures and permits wide variation in emulsion developing techniques. The present work indicates that the lower limit of applicability is approximately 1,400 F when conventional cameras, emulsions, and moderate exposures are used. The upper limit is determined by the calibration technique and the accuracy required.

  16. Isolation of High-Molecular-Weight DNA from Monolayer Cultures of Mammalian Cells Using Proteinase K and Phenol.

    PubMed

    Green, Michael R; Sambrook, Joseph

    2017-07-05

    This procedure is the method of choice for purification of mammalian genomic DNA from monolayer cultures when large amounts of DNA are required, for example, for Southern blotting. Approximately 200 µg of mammalian DNA, 100-150 kb in length, is obtained from 5 × 10⁷ cultured aneuploid cells (e.g., HeLa cells). © 2017 Cold Spring Harbor Laboratory Press.

  17. Mind as Space

    NASA Astrophysics Data System (ADS)

    McKinstry, Chris

    The present article describes a possible method for the automatic discovery of a universal human semantic-affective hyperspatial approximation of the human subcognitive substrate - the associative network which French (1990) asserts is the ultimate foundation of the human ability to pass the Turing Test - that does not require a machine to have direct human experience or a physical human body. This method involves automatic programming - such as Koza's genetic programming (1992) - guided in the discovery of the proposed universal hypergeometry by feedback from a Minimum Intelligent Signal Test or MIST (McKinstry, 1997) constructed from a very large number of human validated probabilistic propositions collected from a large population of Internet users. It will be argued that though a lifetime of human experience is required to pass a rigorous Turing Test, a probabilistic propositional approximation of this experience can be constructed via public participation on the Internet, and then used as a fitness function to direct the artificial evolution of a universal hypergeometry capable of classifying arbitrary propositions. A model of this hypergeometry will be presented; it predicts Miller's "Magical Number Seven" (1956) as the size of human short-term memory from fundamental hypergeometric properties. A system that can lead to the generation of novel propositions or "artificial thoughts" will also be described.

  18. A Taylor Expansion-Based Adaptive Design Strategy for Global Surrogate Modeling With Applications in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun

    2017-12-01

    Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost of surrogate construction and consequently to improve the efficiency of the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search for informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity, in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.

  19. Clinical implementation and rapid commissioning of an EPID based in-vivo dosimetry system.

    PubMed

    Hanson, Ian M; Hansen, Vibeke N; Olaciregui-Ruiz, Igor; van Herk, Marcel

    2014-10-07

    Using an Electronic Portal Imaging Device (EPID) to perform in-vivo dosimetry is one of the most effective and efficient methods of verifying the safe delivery of complex radiotherapy treatments. Previous work has detailed the development of an EPID based in-vivo dosimetry system that was subsequently used to replace pre-treatment dose verification of IMRT and VMAT plans. Here we show that this system can be readily implemented on a commercial megavoltage imaging platform without modification to EPID hardware and without impacting standard imaging procedures. The accuracy and practicality of the EPID in-vivo dosimetry system was confirmed through a comparison with traditional TLD in-vivo measurements performed on five prostate patients. The commissioning time required for the EPID in-vivo dosimetry system was initially prohibitive at approximately 10 h per linac. Here we present a method of calculating linac specific EPID dosimetry correction factors that allow a single energy specific commissioning model to be applied to EPID data from multiple linacs. Using this method reduced the required per linac commissioning time to approximately 30 min. The validity of this commissioning method has been tested by analysing in-vivo dosimetry results of 1220 patients acquired on seven linacs over a period of 5 years. The average deviation between EPID based isocentre dose and expected isocentre dose for these patients was (-0.7 ± 3.2)%. EPID based in-vivo dosimetry is now the primary in-vivo dosimetry tool used at our centre and has replaced nearly all pre-treatment dose verification of IMRT treatments.

  20. Clinical implementation and rapid commissioning of an EPID based in-vivo dosimetry system

    NASA Astrophysics Data System (ADS)

    Hanson, Ian M.; Hansen, Vibeke N.; Olaciregui-Ruiz, Igor; van Herk, Marcel

    2014-10-01

    Using an Electronic Portal Imaging Device (EPID) to perform in-vivo dosimetry is one of the most effective and efficient methods of verifying the safe delivery of complex radiotherapy treatments. Previous work has detailed the development of an EPID based in-vivo dosimetry system that was subsequently used to replace pre-treatment dose verification of IMRT and VMAT plans. Here we show that this system can be readily implemented on a commercial megavoltage imaging platform without modification to EPID hardware and without impacting standard imaging procedures. The accuracy and practicality of the EPID in-vivo dosimetry system was confirmed through a comparison with traditional TLD in-vivo measurements performed on five prostate patients. The commissioning time required for the EPID in-vivo dosimetry system was initially prohibitive at approximately 10 h per linac. Here we present a method of calculating linac specific EPID dosimetry correction factors that allow a single energy specific commissioning model to be applied to EPID data from multiple linacs. Using this method reduced the required per linac commissioning time to approximately 30 min. The validity of this commissioning method has been tested by analysing in-vivo dosimetry results of 1220 patients acquired on seven linacs over a period of 5 years. The average deviation between EPID based isocentre dose and expected isocentre dose for these patients was (-0.7 ± 3.2)%. EPID based in-vivo dosimetry is now the primary in-vivo dosimetry tool used at our centre and has replaced nearly all pre-treatment dose verification of IMRT treatments.

  1. A novel method for identifying a graph-based representation of 3-D microvascular networks from fluorescence microscopy image stacks.

    PubMed

    Almasi, Sepideh; Xu, Xiaoyin; Ben-Zvi, Ayal; Lacoste, Baptiste; Gu, Chenghua; Miller, Eric L

    2015-02-01

    A novel approach to determine the global topological structure of a microvasculature network from noisy and low-resolution fluorescence microscopy data that does not require the detailed segmentation of the vessel structure is proposed here. The method is most appropriate for problems where the tortuosity of the network is relatively low and proceeds by directly computing a piecewise linear approximation to the vasculature skeleton through the construction of a graph in three dimensions whose edges represent the skeletal approximation and whose vertices are located at Critical Points (CPs) on the microvasculature. The CPs are defined as vessel junctions or locations of relatively large curvature along the centerline of a vessel. Our method consists of two phases. First, we provide a CP detection technique that, for junctions in particular, does not require any a priori geometric information such as direction or degree. Second, connectivity between detected nodes is determined via the solution of a Binary Integer Program (BIP) whose variables determine whether a potential edge between nodes is or is not included in the final graph. The utility function in this problem reflects both intensity-based and structural information along the path connecting the two nodes. Qualitative and quantitative results confirm the usefulness and accuracy of this method. This approach provides a means of correctly capturing the connectivity patterns in vessels that are missed by more traditional segmentation and binarization schemes because of imperfections in the images which manifest as dim or broken vessels.
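
    The connectivity step can be illustrated with a toy Binary Integer Program: binary variables select candidate edges between detected critical points so as to maximize a utility, subject to plausibility constraints. The sketch below uses scipy.optimize.milp (SciPy >= 1.9) with made-up utilities and a hypothetical per-node degree cap; the paper's actual utility combines intensity and structural information along candidate paths.

        import numpy as np
        from scipy.optimize import milp, LinearConstraint, Bounds

        # Hypothetical candidate edges among 4 critical points, with utilities
        # standing in for the paper's intensity/structure-based score.
        edges = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
        util = np.array([0.9, 0.8, 0.7, 0.2, 0.3])

        A = np.zeros((4, len(edges)))                  # node-edge incidence
        for j, (a, b) in enumerate(edges):
            A[a, j] = A[b, j] = 1.0

        res = milp(c=-util,                            # maximize total utility
                   constraints=LinearConstraint(A, ub=np.full(4, 2.0)),
                   integrality=np.ones(len(edges)),    # binary edge variables
                   bounds=Bounds(0, 1))
        print("selected edges:", [e for e, x in zip(edges, res.x) if round(x) == 1])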

  2. Reliable Real-Time Solution of Parametrized Partial Differential Equations: Reduced-Basis Output Bound Methods. Appendix 2

    NASA Technical Reports Server (NTRS)

    Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)

    2002-01-01

    We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W_N spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.
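
    The offline/online split can be sketched in a few lines for an affinely parametrized operator A(mu) = A0 + mu*A1; the matrices below are synthetic stand-ins, and the a posteriori error bounds are omitted. Offline work (snapshots, projections) scales with the fine dimension; the online solve is only N x N.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200                                        # fine ("truth") dimension
        A0 = np.diag(np.arange(1.0, n + 1.0))          # synthetic affine terms
        A1 = np.diag(np.sqrt(np.arange(n)) + 0.5)
        fvec = rng.standard_normal(n)                  # load vector
        ell = rng.standard_normal(n)                   # output functional

        # Offline: snapshots at selected parameter points, orthonormalized.
        mus = [0.1, 1.0, 5.0, 20.0]
        snaps = np.column_stack([np.linalg.solve(A0 + m * A1, fvec) for m in mus])
        W, _ = np.linalg.qr(snaps)                     # reduced basis W_N
        A0r, A1r = W.T @ A0 @ W, W.T @ A1 @ W          # project affine terms once
        fr, lr = W.T @ fvec, W.T @ ell

        # Online: for each new parameter value, only an N x N system is solved.
        mu = 3.7
        s_rb = lr @ np.linalg.solve(A0r + mu * A1r, fr)
        s_ex = ell @ np.linalg.solve(A0 + mu * A1, fvec)
        print(f"reduced-basis output {s_rb:.6f} vs exact {s_ex:.6f}")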

  3. Metallic lithium by quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sugiyama, G.; Zerah, G.; Alder, B.J.

    Lithium was chosen as the simplest known metal for the first application of quantum Monte Carlo methods in order to evaluate the accuracy of conventional one-electron band theories. Lithium has been extensively studied using such techniques. Band theory calculations have certain limitations in general and specifically in their application to lithium. Results depend on such factors as charge shape approximations (muffin tins), pseudopotentials (a special problem for lithium, where the lack of p core states requires a strong pseudopotential), and the form and parameters chosen for the exchange potential. The calculations are all one-electron methods in which the correlation effects are included in an ad hoc manner. This approximation may be particularly poor in the high compression regime, where the core states become delocalized. Furthermore, band theory provides only self-consistent results rather than strict limits on the energies. The quantum Monte Carlo method is a totally different technique using a many-body rather than a mean field approach which yields an upper bound on the energies. 18 refs., 4 figs., 1 tab.

  4. Eye Gaze Tracking using Correlation Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karakaya, Mahmut; Boehnen, Chris Bensing; Bolme, David S

    In this paper, we study a method for eye gaze tracking that provides gaze estimation from a standard webcam with a zoom lens and reduces the setup and calibration requirements for new users. Specifically, we have developed a gaze estimation method based on the relative locations of points on the top of the eyelid and the eye corners; the gaze estimate is derived from the distances between the top point of the eyelid and the eye corners detected by correlation filters. Advanced correlation filters were found to provide facial landmark detections that are accurate enough to determine the subject's gaze direction to within approximately 4-5 degrees, although calibration errors often produce a larger overall shift in the estimates. This is approximately a circle of diameter 2 inches for a screen that is at arm's length from the subject. At this accuracy it is possible to determine what regions of text or images the subject is looking at, but it falls short of being able to determine which word the subject has looked at.

  5. An Improved Mathematical Scheme for LTE-Advanced Coexistence with FM Broadcasting Service

    PubMed Central

    Al-hetar, Abdulaziz M.

    2016-01-01

    Power spectral density (PSD) overlapping analysis is considered the surest approach for evaluating the feasibility of compatibility between wireless communication systems. In this paper, a new closed form for the Interference Signal Power Attenuation (ISPA) is mathematically derived to evaluate the interference caused by Orthogonal Frequency Division Multiplexing (OFDM)-based Long Term Evolution (LTE)-Advanced to the Frequency Modulation (FM) broadcasting service. In this scheme, the ISPA loss due to the PSD overlapping of OFDM-based LTE-Advanced and the FM broadcasting service is computed. The proposed model can estimate the power attenuation loss more precisely than the Advanced Minimum Coupling Loss (A-MCL) and approximate-ISPA methods. Numerical results demonstrate that the interference power is less than that obtained using the A-MCL and approximate-ISPA methods by 2.8 and 1.5 dB at the co-channel and by 5.2 and 2.2 dB at the adjacent channel with null guard band, respectively. The superior performance of this scheme relative to the other methods translates into a smaller required physical separation distance between the two systems, which ultimately supports efficient use of the radio frequency spectrum. PMID:27855216

  6. An Improved Mathematical Scheme for LTE-Advanced Coexistence with FM Broadcasting Service.

    PubMed

    Shamsan, Zaid Ahmed; Al-Hetar, Abdulaziz M

    2016-01-01

    Power spectral density (PSD) overlapping analysis is considered the surest approach for evaluating the feasibility of compatibility between wireless communication systems. In this paper, a new closed form for the Interference Signal Power Attenuation (ISPA) is mathematically derived to evaluate the interference caused by Orthogonal Frequency Division Multiplexing (OFDM)-based Long Term Evolution (LTE)-Advanced to the Frequency Modulation (FM) broadcasting service. In this scheme, the ISPA loss due to the PSD overlapping of OFDM-based LTE-Advanced and the FM broadcasting service is computed. The proposed model can estimate the power attenuation loss more precisely than the Advanced Minimum Coupling Loss (A-MCL) and approximate-ISPA methods. Numerical results demonstrate that the interference power is less than that obtained using the A-MCL and approximate-ISPA methods by 2.8 and 1.5 dB at the co-channel and by 5.2 and 2.2 dB at the adjacent channel with null guard band, respectively. The superior performance of this scheme relative to the other methods translates into a smaller required physical separation distance between the two systems, which ultimately supports efficient use of the radio frequency spectrum.
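
    The quantity being derived can be illustrated numerically: the ISPA loss measures how little of the interferer's power actually falls inside the victim band. The sketch below integrates a sinc^2-shaped OFDM-like PSD over a victim band; the PSD shape, bandwidths, and band edges are stand-ins, not the paper's closed form.

        import numpy as np

        f = np.linspace(-5e6, 5e6, 40_001)         # frequency grid (Hz)
        df = f[1] - f[0]
        T = 1.0 / 15e3                             # symbol time for 15 kHz spacing
        sub = np.arange(-25, 26) * 15e3            # 51 subcarrier offsets
        psd = sum(np.sinc((f - fc) * T) ** 2 for fc in sub)
        psd /= psd.sum() * df                      # normalize to unit total power

        def ispa_db(f_lo, f_hi):
            """Attenuation (dB) of interference power due to partial PSD
            overlap with a victim band [f_lo, f_hi]."""
            mask = (f >= f_lo) & (f <= f_hi)
            return -10.0 * np.log10(psd[mask].sum() * df)

        print(f"co-channel 200 kHz victim band: {ispa_db(-100e3, 100e3):.1f} dB")
        print(f"adjacent band (null guard):     {ispa_db(100e3, 300e3):.1f} dB")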

  7. A Simple Method for High-Lift Propeller Conceptual Design

    NASA Technical Reports Server (NTRS)

    Patterson, Michael; Borer, Nick; German, Brian

    2016-01-01

    In this paper, we present a simple method for designing propellers that are placed upstream of the leading edge of a wing in order to augment lift. Because the primary purpose of these "high-lift propellers" is to increase lift rather than produce thrust, these props are best viewed as a form of high-lift device; consequently, they should be designed differently than traditional propellers. We present a theory that describes how these props can be designed to provide a relatively uniform axial velocity increase, which is hypothesized to be advantageous for lift augmentation based on a literature survey. Computational modeling indicates that such propellers can generate the same average induced axial velocity while consuming less power and producing less thrust than conventional propeller designs. For an example problem based on specifications for NASA's Scalable Convergent Electric Propulsion Technology and Operations Research (SCEPTOR) flight demonstrator, a propeller designed with the new method requires approximately 15% less power and produces approximately 11% less thrust than one designed for minimum induced loss. Higher-order modeling and/or wind tunnel testing are needed to verify the predicted performance.

  8. Time-Dependent Parabolic Finite Difference Formulation for Harmonic Sound Propagation in a Two-Dimensional Duct with Flow

    NASA Technical Reports Server (NTRS)

    Kreider, Kevin L.; Baumeister, Kenneth J.

    1996-01-01

    An explicit finite difference real time iteration scheme is developed to study harmonic sound propagation in aircraft engine nacelles. To reduce storage requirements for future large 3D problems, the time dependent potential form of the acoustic wave equation is used. To ensure that the finite difference scheme is both explicit and stable for a harmonic monochromatic sound field, a parabolic (in time) approximation is introduced to reduce the order of the governing equation. The analysis begins with a harmonic sound source radiating into a quiescent duct. This fully explicit iteration method then calculates stepwise in time to obtain the 'steady state' harmonic solutions of the acoustic field. For stability, application of conventional impedance boundary conditions requires coupling to explicit hyperbolic difference equations at the boundary. The introduction of the time parameter eliminates the large matrix storage requirements normally associated with frequency domain solutions, and time marching attains the steady state quickly enough to make the method favorable when compared to frequency domain methods. For validation, this transient-frequency domain method is applied to sound propagation in a 2D hard wall duct with plug flow.

  9. Calculus domains modelled using an original bool algebra based on polygons

    NASA Astrophysics Data System (ADS)

    Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.

    2016-08-01

    Analytical and numerical computer-based models require analytical definitions of the calculus domains. The paper presents a method to model a calculus domain based on a bool algebra which uses solid and hollow polygons. The general calculus relations for the geometrical characteristics that are widely used in mechanical engineering are tested on several shapes of the calculus domain in order to draw conclusions regarding the most effective ways to discretize the domain. The paper also benchmarks several commercial CAD applications that are able to compute the geometrical characteristics, from which interesting conclusions are drawn. The tests also targeted the accuracy of the results vs. the number of nodes on the curved boundary of the cross section. The study required the development of an original software application consisting of more than 1700 lines of computer code. In comparison with other calculus methods, discretization using convex polygons is a simpler approach. Moreover, unlike the spline approximation, this method does not produce very large numbers, which in that case required special software packages offering multiple, arbitrary precision. The knowledge resulting from this study may be used to develop complex computer-based models in engineering.
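
    The bool-algebra idea is easy to reproduce for the most common geometrical characteristics: a hollow polygon simply enters the sums with a negative sign. A minimal sketch using the shoelace formulas follows (area and centroid only; second moments extend the same way).

        def area_centroid(poly):
            """Signed area and centroid of a polygon [(x, y), ...] given in
            counter-clockwise order (shoelace formulas)."""
            a = cx = cy = 0.0
            for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
                w = x0 * y1 - x1 * y0
                a += w; cx += (x0 + x1) * w; cy += (y0 + y1) * w
            a *= 0.5
            return a, (cx / (6.0 * a), cy / (6.0 * a))

        def composite(solids, hollows):
            """Area and centroid of a domain modeled as solid polygons minus
            hollow polygons, in the spirit of the paper's bool algebra."""
            A = Sx = Sy = 0.0
            for sign, polys in ((1.0, solids), (-1.0, hollows)):
                for p in polys:
                    a, (x, y) = area_centroid(p)
                    A += sign * a; Sx += sign * a * x; Sy += sign * a * y
            return A, (Sx / A, Sy / A)

        # 4 x 4 plate with a 2 x 2 hole: area 12.0, centroid (2.0, 2.0)
        plate = [(0, 0), (4, 0), (4, 4), (0, 4)]
        hole = [(1, 1), (3, 1), (3, 3), (1, 3)]
        print(composite([plate], [hole]))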

  10. Fully decoupled monolithic projection method for natural convection problems

    NASA Astrophysics Data System (ADS)

    Pan, Xiaomin; Kim, Kyoungyoun; Lee, Changhoon; Choi, Jung-Il

    2017-04-01

    To solve time-dependent natural convection problems, we propose a fully decoupled monolithic projection method. The proposed method applies the Crank-Nicolson scheme in time and the second-order central finite difference in space. To obtain a non-iterative monolithic method from the fully discretized nonlinear system, we first adopt linearizations of the nonlinear convection terms and the general buoyancy term with incurring second-order errors in time. Approximate block lower-upper decompositions, along with an approximate factorization technique, are additionally employed to a global linearly coupled system, which leads to several decoupled subsystems, i.e., a fully decoupled monolithic procedure. We establish global error estimates to verify the second-order temporal accuracy of the proposed method for velocity, pressure, and temperature in terms of a discrete l2-norm. Moreover, according to the energy evolution, the proposed method is proved to be stable if the time step is less than or equal to a constant. In addition, we provide numerical simulations of two-dimensional Rayleigh-Bénard convection and periodic forced flow. The results demonstrate that the proposed method significantly mitigates the time step limitation, reduces the computational cost because only one Poisson equation is required to be solved, and preserves the second-order temporal accuracy for velocity, pressure, and temperature. Finally, the proposed method reasonably predicts a three-dimensional Rayleigh-Bénard convection for different Rayleigh numbers.

  11. GAPPARD: a computationally efficient method of approximating gap-scale disturbance in vegetation models

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Thürig, E.; Lischke, H.

    2013-02-01

    Models of vegetation dynamics that are designed for application at spatial scales larger than individual forest gaps suffer from several limitations. Typically, either a population average approximation is used that results in unrealistic tree allometry and forest stand structure, or models have a high computational demand because they need to simulate both a series of age-based cohorts and a number of replicate patches to account for stochastic gap-scale disturbances. The detail required by the latter method increases the number of calculations by two to three orders of magnitude compared to the less realistic population average approach. In an effort to increase the efficiency of dynamic vegetation models without sacrificing realism, and to explore patterns of spatial scaling in forests, we developed a new method for simulating stand-replacing disturbances that is both accurate and 10-50 times faster than approaches that use replicate patches. The GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) method works by postprocessing the output of deterministic, undisturbed simulations of a cohort-based vegetation model, deriving the distribution of patch ages at any point in time on the basis of a disturbance probability. With this distribution, the expected value of any output variable can be calculated from the output values of the deterministic undisturbed run at the times corresponding to the patch ages. To account for temporal changes in model forcing, e.g., as a result of climate change, GAPPARD performs a series of deterministic simulations and interpolates between the results in the postprocessing step. We integrated the GAPPARD method into the forest models LPJ-GUESS and TreeM-LPJ, and evaluated these in a series of simulations along an altitudinal transect of an inner-alpine valley. With GAPPARD applied to LPJ-GUESS, results were not significantly different from the output of the original LPJ-GUESS model using 100 replicate patches, but the simulation time was reduced by approximately a factor of 10. Our new method is therefore highly suited to rapidly approximating LPJ-GUESS results; it provides the opportunity for future studies over large spatial domains and allows easier parameterization of tree species, faster identification of areas with interesting simulation results, and comparisons with large-scale datasets and forest models.
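
    The core of the postprocessing step fits in a few lines: with a constant annual disturbance probability p, patch age follows a geometric distribution, and the landscape expectation of any output is a weighted sum of one undisturbed run evaluated at each age. The biomass curve below is a hypothetical stand-in for model output.

        import numpy as np

        p = 0.01                        # annual stand-replacing disturbance probability
        ages = np.arange(600)           # patch ages considered (years)
        w = p * (1.0 - p) ** ages       # geometric patch-age distribution
        w /= w.sum()                    # renormalize after truncation

        # Output of ONE deterministic undisturbed run as a function of stand
        # age (hypothetical saturating biomass curve).
        biomass = 250.0 * (1.0 - np.exp(-ages / 80.0))

        # GAPPARD-style expectation: no replicate stochastic patches needed.
        print(f"landscape-mean biomass: {np.sum(w * biomass):.1f} (max 250)")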

  12. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1993-01-01

    In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form), together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
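
    The incremental (delta) form can be sketched generically: the residual is always computed with the full operator, while the correction is obtained from a cheap approximate operator, so the converged answer is exact even though the solver is approximate. Below, a diagonal stand-in plays the role of the approximate factorization.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100
        A = np.eye(n) * 4.0 + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
        b = rng.standard_normal(n)

        # Incremental form: M dx = b - A x, then x <- x + dx, where M is a
        # cheap approximation of A (here simply its diagonal).
        M_diag = np.diag(A)
        x = np.zeros(n)
        for k in range(200):
            r = b - A @ x                      # residual uses the exact operator
            if np.linalg.norm(r) < 1e-12 * np.linalg.norm(b):
                break
            x += r / M_diag                    # correction uses the approximation
        print(k, np.linalg.norm(b - A @ x))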

  13. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions. II. A simplified implementation.

    PubMed

    Tao, Guohua; Miller, William H

    2012-09-28

    An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both time-evolved phase points as well as their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor, which is computationally expensive, especially for large systems, is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H2 system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.

  14. Overset grid implementation of the complex Kohn variational method for electron-polyatomic molecule scattering

    NASA Astrophysics Data System (ADS)

    McCurdy, C. William; Lucchese, Robert L.; Greenman, Loren

    2017-04-01

    The complex Kohn variational method, which represents the continuum wave function in each channel using a combination of Gaussians and Bessel or Coulomb functions, has been successful in numerous applications to electron-polyatomic molecule scattering and molecular photoionization. The hybrid basis representation limits it to relatively low energies (<50 eV), requires an approximation to exchange matrix elements involving continuum functions, and hampers its coupling to modern electronic structure codes for the description of correlated target states. We describe a successful implementation of the method using completely adaptive overset grids to describe continuum functions, in which spherical subgrids are placed on every atomic center to complement a spherical master grid that describes the behavior at large distances. An accurate method for applying the free-particle Green's function on the grid eliminates the need to operate explicitly with the kinetic energy, enabling a rapidly convergent Arnoldi algorithm for solving linear equations on the grid, and no approximations to exchange operators are made. Results for electron scattering from several polyatomic molecules will be presented. Army Research Office, MURI, WN911NF-14-1-0383 and U. S. DOE DE-SC0012198 (at Texas A&M).

  15. Fast radiative transfer models for retrieval of cloud properties in the back-scattering region: application to DSCOVR-EPIC sensor

    NASA Astrophysics Data System (ADS)

    Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego

    2017-04-01

    In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.

  16. On a Solar Origin for the Cosmogenic Nuclide Event of 775 A.D.

    NASA Technical Reports Server (NTRS)

    Cliver, E. W.; Tylka, A. J.; Dietrich, W. F.; Ling, A. G.

    2014-01-01

    We explore requirements for a solar particle event (SPE) and flare capable of producing the cosmogenic nuclide event of 775 A.D., and review solar circumstances at that time. A solar source for 775 would require a greater than 1 GV spectrum approximately 45 times stronger than that of the intense high-energy SPE of 1956 February 23. This implies a greater than 30 MeV proton fluence (F_30) of approximately 8 × 10^10 protons cm^-2, approximately 10 times larger than that of the strongest 3 month interval of SPE activity in the modern era. This inferred F_30 value for the 775 SPE is inconsistent with the occurrence probability distribution for greater than 30 MeV solar proton events. The best guess value for the soft X-ray classification (total energy) of an associated flare is approximately X230 (approximately 9 × 10^33 erg). For comparison, the flares on 2003 November 4 and 1859 September 1 had observed/inferred values of approximately X35 (approximately 10^33 erg) and approximately X45 (approximately 2 × 10^33 erg), respectively. The estimated size of the source active region for an approximately 10^34 erg flare is approximately 2.5 times that of the largest region yet recorded. The 775 event occurred during a period of relatively low solar activity, with a peak smoothed amplitude about half that of the second half of the 20th century. The approximately 1945-1995 interval, the most active of the last approximately 2000 yr, failed to witness an SPE comparable to that required for the proposed solar event in 775. These considerations challenge a recent suggestion that the 775 event is likely of solar origin.

  17. Stratified Diffractive Optic Approach for Creating High Efficiency Gratings

    NASA Technical Reports Server (NTRS)

    Chambers, Diana M.; Nordin, Gregory P.

    1998-01-01

    Gratings with high efficiency in a single diffracted order can be realized with both volume holographic and diffractive optical elements. However, each method has limitations that restrict the applications in which they can be used. For example, high efficiency volume holographic gratings require an appropriate combination of thickness and permittivity modulation throughout the bulk of the material. Possible combinations of those two characteristics are limited by properties of currently available materials, thus restricting the range of applications for volume holographic gratings. Efficiency of a diffractive optic grating is dependent on its approximation of an ideal analog profile using discrete features. The size of constituent features and, consequently, the number that can be used within a required grating period restricts the applications in which diffractive optic gratings can be used. These limitations imply that there are applications which cannot be addressed by either technology. In this paper we propose to address a number of applications in this category with a new method of creating high efficiency gratings which we call stratified diffractive optic gratings. In this approach diffractive optic techniques are used to create an optical structure that emulates volume grating behavior. To illustrate the stratified diffractive optic grating concept we consider a specific application, a scanner for a space-based coherent wind lidar, with requirements that would be difficult to meet by either volume holographic or diffractive optic methods. The lidar instrument design specifies a transmissive scanner element with the input beam normally incident and the exiting beam deflected at a fixed angle from the optical axis. The element will be rotated about the optical axis to produce a conical scan pattern. The wavelength of the incident beam is 2.06 microns and the required deflection angle is 30 degrees, implying a grating period of approximately 4 microns. Creating a high efficiency volume grating with these parameters would require a grating thickness that cannot be attained with current photosensitive materials. For a diffractive optic grating, the number of binary steps necessary to produce high efficiency combined with the grating period requires feature sizes and alignment tolerances that are also unattainable with current techniques. Rotation of the grating and integration into a space-based lidar system impose the additional requirements that it be insensitive to polarization orientation, that its mass be minimized and that it be able to withstand launch and space environments.
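
    For the record, the quoted period follows from the first-order grating equation, as the quick check below shows (a restatement of the numbers already given in the abstract).

        import math
        # First-order grating equation: period = wavelength / sin(deflection)
        lam_um, theta_deg = 2.06, 30.0
        print(f"{lam_um / math.sin(math.radians(theta_deg)):.2f} um")  # 4.12 um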

  18. An analytic technique for statistically modeling random atomic clock errors in estimation

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1981-01-01

    Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
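
    A first-order Markov (Gauss-Markov) process is trivial to simulate, and summing a few of them with staggered correlation times shapes the spectrum; the sketch below uses illustrative time constants and weights, not a fitted Allan-variance model.

        import numpy as np

        rng = np.random.default_rng(2)
        dt, n = 1.0, 20_000                          # sample interval (s), samples
        taus = np.array([1e1, 1e2, 1e3, 1e4, 1e5])   # five correlation times (s)
        amps = np.array([1.0, 0.7, 0.5, 0.35, 0.25]) # illustrative weights

        phi = np.exp(-dt / taus)
        q = amps * np.sqrt(1.0 - phi**2)             # keeps each process stationary
        x = np.zeros((5, n))
        for k in range(1, n):
            x[:, k] = phi * x[:, k - 1] + q * rng.standard_normal(5)

        freq_error = x.sum(axis=0)                   # sum of five Markov processes
        phase_error = np.cumsum(freq_error) * dt     # integrate to phase/range error
        print(f"final phase error: {phase_error[-1]:.2f}")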

  19. Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.

    PubMed

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2007-05-01

    Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than that given by the conventional formulas. Moreover, given a specified sample size calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.
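
    Yuen's test itself is available off the shelf (scipy.stats.ttest_ind with the trim argument, SciPy >= 1.7), so a candidate sample size can be checked empirically; the Monte Carlo power estimate below is an illustrative stand-in, not the authors' formulas.

        import numpy as np
        from scipy import stats

        def empirical_power(n1, n2, reps=2000, alpha=0.05, seed=3):
            rng = np.random.default_rng(seed)
            hits = 0
            for _ in range(reps):
                a = rng.normal(0.0, 1.0, n1)   # unequal variances and
                b = rng.normal(0.8, 2.0, n2)   # unequal sample sizes
                # Yuen's two-sample trimmed-mean test (20% trimming)
                res = stats.ttest_ind(a, b, equal_var=False, trim=0.2)
                hits += res.pvalue < alpha
            return hits / reps

        for n in (20, 40, 60, 80):
            print(n, f"power ~ {empirical_power(n, int(1.5 * n)):.3f}")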

  20. Apparatus for and method of monitoring for breached fuel elements

    DOEpatents

    Gross, Kenny C.; Strain, Robert V.

    1983-01-01

    This invention teaches improved apparatus for, and a method of, detecting a breach in cladded fuel used in a nuclear reactor. The detector apparatus uses a separate bypass loop for conveying part of the reactor coolant away from the core, and at least three separate delayed-neutron detectors mounted proximate this detector loop. The detectors are spaced apart so that the coolant flow time from the core to each detector is different, and these differences are known. The delayed-neutron activity at the detectors is a function of the delay time after the reaction in the fuel until the coolant carrying the delayed-neutron emitter passes the respective detector. This time delay is broken down into separate components, including an isotopic holdup time required for the emitter to move through the fuel from the reaction to the coolant at the breach, and two transit times required for the emitter, now in the coolant, to flow from the breach to the detector loop and then via the loop to the detector. At least two of these time components are determined during calibrated operation of the reactor. Thereafter, during normal reactor operation, repeated comparisons are made by the method of regression approximation of the third time component for the best-fit line correlating measured delayed-neutron activity against activity that is approximated according to specific equations. The equations use these time-delay components and known parameter values of the fuel and of the parent and emitting daughter isotopes.

  1. Evaluation of nonlinear structural dynamic responses using a fast-running spring-mass formulation

    NASA Astrophysics Data System (ADS)

    Benjamin, A. S.; Altman, B. S.; Gruda, J. D.

    In today's world, accurate finite-element simulations of large nonlinear systems may require meshes composed of hundreds of thousands of degrees of freedom. Even with today's fast computers and the promise of ever-faster ones in the future, central processing unit (CPU) expenditures for such problems could be measured in days. Many contemporary engineering problems, such as those found in risk assessment, probabilistic structural analysis, and structural design optimization, cannot tolerate the cost or turnaround time for such CPU-intensive analyses, because these applications require a large number of cases to be run with different inputs. For many risk assessment applications, analysts would prefer running times to be measurable in minutes. There is therefore a need for approximation methods which can solve such problems far more efficiently than the very detailed methods and yet maintain an acceptable degree of accuracy. For this purpose, we have been working on two methods of approximation: neural networks and spring-mass models. This paper presents our work and results to date for spring-mass modeling and analysis, since we are further along in this area than in the neural network formulation. It describes the physical and numerical models contained in a code we developed called STRESS, which stands for 'Spring-mass Transient Response Evaluation for structural Systems'. The paper also presents results for a demonstration problem, and compares these with results obtained for the same problem using PRONTO3D, a state-of-the-art finite element code which was also developed at Sandia.
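
    Part of the appeal of reduced spring-mass models is that explicit transient integration is nearly free; the sketch below integrates a 2-DOF chain with central differences. It is a generic illustration under made-up stiffness and forcing values, not the STRESS code itself.

        import numpy as np

        # 2-DOF spring-mass chain, M x'' + K x = f(t), central differences.
        M = np.diag([1.0, 1.0])
        k1, k2 = 400.0, 200.0
        K = np.array([[k1 + k2, -k2], [-k2, k2]])
        dt, steps = 1e-3, 5000                 # dt below the stability limit
        force = lambda t: np.array([0.0, 50.0 if t < 0.01 else 0.0])  # pulse

        x_prev = np.zeros(2); x = np.zeros(2); peak = 0.0
        for i in range(steps):
            a = np.linalg.solve(M, force(i * dt) - K @ x)
            x_prev, x = x, 2.0 * x - x_prev + dt**2 * a
            peak = max(peak, abs(x[1]))
        print(f"peak tip displacement: {peak:.4f}")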

  2. Microwave analog experiments on optically soft spheroidal scatterers with weak electromagnetic signature

    NASA Astrophysics Data System (ADS)

    Saleh, H.; Charon, J.; Dauchet, J.; Tortel, H.; Geffrin, J.-M.

    2017-07-01

    Light scattering by optically soft particles is being theoretically investigated in many radiative studies, and interest is growing in developing approximate methods for cases where the resolution of Maxwell's equations is impractical due to time and/or memory problems with objects of complex geometries. The participation of experimental studies is important for assessing novel approximations when no reference solution is available, and the microwave analogy is an efficient way to perform such electromagnetic measurements under controlled conditions. In this paper, we take advantage of the particular features of our microwave device to present an extensive experimental study of the electromagnetic scattering by spheroidal particle analogs with low refractive indices, as a first step toward the assessment of micro-organisms with low refractive index and heterogeneities. The spheroidal analogs are machined from a low-density material and mimic soft particles of interest to the light scattering community. The measurements are compared with simulations obtained with the Finite Element Method and the T-Matrix method, and good agreement is obtained even for a refractive index as low as 1.13. Scattered signals of low intensity are correctly measured and the position of the targets is precisely controlled. The forward scattering measurements show high sensitivity to noise and require careful extraction, and the configuration of the measurement device reveals different technical requirements between the forward and backward scattering directions. The results open interesting perspectives for novel measurement procedures as well as for the use of rapid prototyping technologies to manufacture analogs of precise refractive indices and shapes.

  3. Total maximum allocated load calculation of nitrogen pollutants by linking a 3D biogeochemical-hydrodynamic model with a programming model in Bohai Sea

    NASA Astrophysics Data System (ADS)

    Dai, Aiquan; Li, Keqiang; Ding, Dongsheng; Li, Yan; Liang, Shengkang; Li, Yanbin; Su, Ying; Wang, Xiulin

    2015-12-01

    The equal percent removal (EPR) method, in which the pollutant reduction ratio is set to the same value in all administrative regions, failed to satisfy the requirement for water quality improvement in the Bohai Sea imposed by the newly developed Coastal Pollution Total Load Control Management. The total maximum allocated load (TMAL) of nitrogen pollutants in the sea-sink source regions (SSRs) around the Bohai Rim, which is the maximum pollutant load of every outlet under the limitation of water quality criteria, was estimated by the optimization-simulation method (OSM) combined with a loop approximation calculation. In OSM, water quality is simulated with a water quality model and the pollutant load is calculated with a programming model. The effect of changes in pollutant loads on TMAL is also discussed. Results showed that the TMAL of nitrogen pollutants in 34 SSRs was 1.49×10^5 ton/year. The highest TMAL was observed in summer and the lowest in winter. TMAL was also higher in the Bohai Strait and central Bohai Sea and lower in the inner areas of the Liaodong Bay, Bohai Bay and Laizhou Bay. In the loop approximation calculation, the TMAL obtained was considered satisfactory with respect to the water quality criteria once the fluctuation of the concentration response matrix with pollutant loads was eliminated. Results of a numerical experiment further showed that water quality improved faster, and more evidently, under the TMAL input than under the EPR method.
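
    The programming-model half of OSM is, at its core, a linear program: maximize the total allocated load subject to the concentration response matrix keeping every control point below its water quality criterion. A minimal sketch with toy numbers follows; the real response matrix comes from the 3D biogeochemical-hydrodynamic model.

        import numpy as np
        from scipy.optimize import linprog

        # Toy response matrix R[i, j]: concentration increase at control
        # point i per unit load from source region j.
        R = np.array([[0.8, 0.2, 0.1],
                      [0.3, 0.9, 0.2],
                      [0.1, 0.3, 0.7]])
        criteria = np.array([10.0, 8.0, 12.0])   # allowed increments

        # Maximize total load  ->  minimize -(sum of loads), s.t. R x <= c.
        res = linprog(c=-np.ones(3), A_ub=R, b_ub=criteria,
                      bounds=[(0, None)] * 3)
        print("loads per region:", res.x.round(2), "| total:", -res.fun.round(2))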

  4. Leapfrog variants of iterative methods for linear algebra equations

    NASA Technical Reports Server (NTRS)

    Saylor, Paul E.

    1988-01-01

    Two iterative methods are considered, Richardson's method and a general second order method. For both methods, a variant of the method is derived for which only even numbered iterates are computed. The variant is called a leapfrog method. Comparisons between the conventional form of the methods and the leapfrog form are made under the assumption that the number of unknowns is large. In the case of Richardson's method, it is possible to express the final iterate in terms of only the initial approximation, a variant of the iteration called the grand-leap method. In the case of the grand-leap variant, a set of parameters is required. An algorithm is presented to compute these parameters that is related to algorithms to compute the weights and abscissas for Gaussian quadrature. General algorithms to implement the leapfrog and grand-leap methods are presented. Algorithms for the important special case of the Chebyshev method are also given.
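
    For Richardson's method the leapfrog composition is one line of algebra: applying two steps at once gives x_{k+2} = x_k + omega*(2I - omega*A) r_k, so only even-numbered iterates are formed. The sketch below verifies the equivalence on a synthetic SPD system with a fixed parameter.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 50
        Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
        A = Q @ np.diag(np.linspace(1.0, 10.0, n)) @ Q.T   # SPD test matrix
        b = rng.standard_normal(n)
        omega = 2.0 / (1.0 + 10.0)                         # fixed parameter

        x = np.zeros(n)
        for _ in range(100):                               # conventional Richardson
            x += omega * (b - A @ x)

        y = np.zeros(n)
        for _ in range(50):                                # leapfrog: even iterates
            r = b - A @ y
            y += omega * (2.0 * r - omega * (A @ r))       # two steps fused
        print(np.linalg.norm(x - y))                       # agree to rounding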

  5. Selection of active spaces for multiconfigurational wavefunctions

    NASA Astrophysics Data System (ADS)

    Keller, Sebastian; Boguslawski, Katharina; Janowski, Tomasz; Reiher, Markus; Pulay, Peter

    2015-06-01

    The efficient and accurate description of the electronic structure of strongly correlated systems is still a largely unsolved problem. The usual procedures start with a multiconfigurational (usually a Complete Active Space, CAS) wavefunction which accounts for static correlation and add dynamical correlation by perturbation theory, configuration interaction, or coupled cluster expansion. This procedure requires the correct selection of the active space. Intuitive methods are unreliable for complex systems. The inexpensive black-box unrestricted natural orbital (UNO) criterion postulates that the Unrestricted Hartree-Fock (UHF) charge natural orbitals with fractional occupancy (e.g., between 0.02 and 1.98) constitute the active space. UNOs generally approximate the CAS orbitals so well that the orbital optimization in CAS Self-Consistent Field (CASSCF) may be omitted, resulting in the inexpensive UNO-CAS method. A rigorous testing of the UNO criterion requires comparison with approximate full configuration interaction wavefunctions. This became feasible with the advent of Density Matrix Renormalization Group (DMRG) methods which can approximate highly correlated wavefunctions at affordable cost. We have compared active orbital occupancies in UNO-CAS and CASSCF calculations with DMRG in a number of strongly correlated molecules: compounds of electronegative atoms (F2, ozone, and NO2), polyenes, aromatic molecules (naphthalene, azulene, anthracene, and nitrobenzene), radicals (phenoxy and benzyl), diradicals (o-, m-, and p-benzyne), and transition metal compounds (nickel-acetylene and Cr2). The UNO criterion works well in these cases. Other symmetry breaking solutions, with the possible exception of spatial symmetry, do not appear to be essential to generate the correct active space. In the case of multiple UHF solutions, the natural orbitals of the average UHF density should be used. The problems of the UNO criterion and their potential solutions are discussed: finding the UHF solutions, discontinuities on potential energy surfaces, and inclusion of dynamical electron correlation and generalization to excited states.
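
    In code terms the UNO criterion is a single diagonalization and a window test on the occupations; the density matrix below is a synthetic stand-in for a converged UHF total (alpha + beta) density in an orthonormal basis.

        import numpy as np

        rng = np.random.default_rng(5)
        # Synthetic total UHF density: occupations near 2 and 0 plus a few
        # fractional ones, rotated by a random orthogonal matrix.
        occ_true = np.array([2.0, 1.99, 1.97, 1.5, 1.0, 0.5, 0.03, 0.01, 0.0, 0.0])
        C, _ = np.linalg.qr(rng.standard_normal((10, 10)))
        P = C @ np.diag(occ_true) @ C.T

        occ, nat_orbs = np.linalg.eigh(P)              # UHF natural orbitals
        occ, nat_orbs = occ[::-1], nat_orbs[:, ::-1]   # descending occupations

        active = (occ > 0.02) & (occ < 1.98)           # the UNO window
        print("occupations:", occ.round(3))
        print("active-space size:", active.sum())      # -> 5 fractional orbitals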

  6. A comparison of transport algorithms for premixed, laminar steady state flames

    NASA Technical Reports Server (NTRS)

    Coffee, T. P.; Heimerl, J. M.

    1980-01-01

    The effects of different methods of approximating multispecies transport phenomena in models of premixed, laminar, steady state flames were studied. Five approximation methods that span a wide range of computational complexity were developed, using identical data for the individual species properties in each case. Each approximation method was employed in the numerical solution of a set of five H2-O2-N2 flames. For each flame, the computed species and temperature profiles, as well as the computed flame speeds, were found to be very nearly independent of the approximation method used. This does not indicate that transport phenomena are unimportant, but rather that the selection of the input values for the individual species transport properties is more important than the selection of the method used to approximate the multispecies transport. Based on these results, a sixth approximation method was developed that is computationally efficient and provides results extremely close to those of the most sophisticated and precise method used.

  7. Limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method for the parameter estimation on geographically weighted ordinal logistic regression model (GWOLR)

    NASA Astrophysics Data System (ADS)

    Saputro, Dewi Retno Sari; Widyaningsih, Purnami

    2017-08-01

    In general, the parameter estimation of the GWOLR model uses the maximum likelihood method, but this leads to a system of nonlinear equations whose solution is difficult to find, so an approximate solution is needed. There are two popular families of numerical methods: Newton's method and Quasi-Newton (QN) methods. Newton's method requires considerable computation time since it involves the Jacobian (derivative) matrix. QN methods overcome this drawback by replacing the derivative computation with direct function evaluations. A QN method maintains an approximation of the Hessian matrix, for example via the Davidon-Fletcher-Powell (DFP) formula. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is a QN method that, like DFP, keeps the Hessian approximation positive definite. The BFGS method requires a large amount of memory, so an algorithm with lower memory usage is needed, namely limited-memory BFGS (L-BFGS). The purpose of this research is to assess the efficiency of the L-BFGS method in the iterative and recursive computation of the Hessian matrix and its inverse for GWOLR parameter estimation. From our findings, the BFGS and L-BFGS methods have arithmetic operation counts of O(n^2) and O(nm), respectively.
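
    L-BFGS is available off the shelf, so the optimization step can be sketched directly; the objective below is a plain (unweighted) logistic-regression likelihood standing in for the geographically weighted ordinal likelihood of GWOLR.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(6)
        n, p = 500, 3
        X = rng.standard_normal((n, p))
        beta_true = np.array([1.0, -2.0, 0.5])
        y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

        def nll(beta):
            """Negative log-likelihood of plain logistic regression (stand-in
            for the GWOLR likelihood, which adds geographic weights)."""
            z = X @ beta
            return np.sum(np.logaddexp(0.0, z) - y * z)

        res = minimize(nll, np.zeros(p), method="L-BFGS-B")  # limited-memory BFGS
        print(res.x.round(2), "vs true", beta_true)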

  8. A QR accelerated volume-to-surface boundary condition for finite element solution of eddy current problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, D; Fasenfest, B; Rieben, R

    2006-09-08

    We are concerned with the solution of time-dependent electromagnetic eddy current problems using a finite element formulation on three-dimensional unstructured meshes. We allow for multiple conducting regions, and our goal is to develop an efficient computational method that does not require a computational mesh of the air/vacuum regions. This requires a sophisticated global boundary condition specifying the total fields on the conductor boundaries. We propose a Biot-Savart law based volume-to-surface boundary condition to meet this requirement. This Biot-Savart approach is demonstrated to be very accurate. In addition, this approach can be accelerated via a low-rank QR approximation of the discretized Biot-Savart law.
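
    The acceleration rests on the discretized coupling matrix being numerically low-rank for well-separated source and observation sets; a column-pivoted QR truncated at rank k then replaces an O(nm) apply by O((n+m)k). A sketch on a synthetic smooth kernel, not the actual Biot-Savart discretization:

        import numpy as np
        from scipy.linalg import qr

        # Synthetic smooth 1/r-type coupling between two well-separated
        # point sets (a stand-in for the discretized Biot-Savart operator).
        src = np.linspace(0.0, 1.0, 300)
        obs = np.linspace(5.0, 6.0, 200)
        B = 1.0 / np.abs(obs[:, None] - src[None, :])

        Q, R, piv = qr(B, pivoting=True)        # column-pivoted QR
        k = 12                                  # truncation rank
        B_k = np.empty_like(B)
        B_k[:, piv] = Q[:, :k] @ R[:k, :]       # rank-k reconstruction
        print("relative error:", np.linalg.norm(B - B_k) / np.linalg.norm(B))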

  9. 76 FR 3680 - Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-20

    ... requirements to provide customers with account information (approximately 683,969 hours) and requirements to update customer account information (approximately 777,436 hours). In addition, Rule 17a-3 contains... customers with account information, and costs for equipment and systems development. The Commission...

  10. Extraction of Carbon Dioxide from Seawater by an Electrochemical Acidification Cell. Part 1 - Initial Feasibility Studies

    DTIC Science & Technology

    2010-07-23

    approximately 142 ppm (0.0023 M), therefore approximately 23 mL of 0.100 M hydrochloric acid (HCl) is required per liter of seawater where Cl- is...deionized water to a total volume of 140 liters, and pH adjusted to 7.6 using hydrochloric acid (HCl); approximately 20 mL of diluted HCl (5 mL of... hydrochloric acid was required to reduce the pH in a 20 mL sample of Key West seawater to 6.0. This required 4.05E-05 moles of hydrogen ions. Based on

  11. [Theory, method and application of method R on estimation of (co)variance components].

    PubMed

    Liu, Wen-Zhong

    2004-07-01

    The theory, method and application of Method R for the estimation of (co)variance components are reviewed so that the method can be used appropriately. Estimation requires R values, which are obtained by regressing predicted random effects calculated from the complete dataset on predicted random effects calculated from random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method to solve the mixed model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of the estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data, and biased estimates in small datasets. Method R is therefore best used as an alternative method on larger datasets. It is necessary to study its theoretical properties further and to broaden its range of application.

  12. Comparison of traditional gas chromatography (GC), headspace GC, and the microbial identification library GC system for the identification of Clostridium difficile.

    PubMed Central

    Cundy, K V; Willard, K E; Valeri, L J; Shanholtzer, C J; Singh, J; Peterson, L R

    1991-01-01

    Three gas chromatography (GC) methods were compared for the identification of 52 clinical Clostridium difficile isolates, as well as 17 non-C. difficile Clostridium isolates. Headspace GC and Microbial Identification System (MIS) GC, an automated system which utilizes a software library developed at the Virginia Polytechnic Institute to identify organisms based on the fatty acids extracted from the bacterial cell wall, were compared against the reference method of traditional GC. Headspace GC and MIS were of approximately equivalent accuracy in identifying the 52 C. difficile isolates (52 of 52 versus 51 of 52, respectively). However, 7 of 52 organisms required repeated sample preparation before an identification was achieved by the MIS method. Both systems effectively differentiated C. difficile from non-C. difficile clostridia, although the MIS method correctly identified only 9 of 17. We conclude that the headspace GC system is an accurate method of C. difficile identification, which requires only one-fifth of the sample preparation time of MIS GC and one-half of the sample preparation time of traditional GC. PMID:2007632

  13. An efficient method for isolation of representative and contamination-free population of blood platelets for proteomic studies.

    PubMed

    Wrzyszcz, Aneta; Urbaniak, Joanna; Sapa, Agnieszka; Woźniak, Mieczysław

    2017-01-01

    To date, there has been no ideal method for blood platelet isolation which allows one to obtain a preparation free of contaminants while reflecting the activation status and morphological features of circulating platelets. To address these requirements, we have developed a method which combines continuous density gradient centrifugation with washing from PGI2-supplemented platelet-rich plasma (PRP). We have assessed the degree of erythrocyte and leukocyte contamination, the recovery of platelets, and the morphological features, activation status, and reactivity of the isolated platelets. Using our protocol, we were able to obtain a preparation free from contaminants that represents well the platelet population prior to isolation in terms of size and activity. In addition, we obtained approximately 2 times more platelets from the same volume of blood compared to the most widely used method; from 10 ml of whole citrated blood we were able to obtain on average 2.7 mg of platelet-derived protein. The method of platelet isolation presented in this paper can be successfully applied to tests requiring very pure platelets, reflecting the circulating platelet state, from a small volume of blood.

  14. Response Functions for Neutron Skyshine Analyses

    NASA Astrophysics Data System (ADS)

    Gui, Ah Auu

    Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources and related conical line-beam response functions (CBRFs) for azimuthally symmetric neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analyses employing the integral line-beam and integral conical-beam methods. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 degrees. The CBRFs are evaluated at 13 neutron source energies in the same energy range and at 13 source polar angles (1 to 89 degrees). The response functions are approximated by a three-parameter formula that is continuous in source energy and angle using a double linear interpolation scheme. These response function approximations are available for source-to-detector ranges up to 2450 m and, for the first time, give dose equivalent responses, which are required for modern radiological assessments. For the CBRF, ground correction factors for neutrons and photons are calculated and approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, a simple correction procedure for humidity effects on the neutron skyshine dose is also proposed. The approximate LBRFs are used with the integral line-beam method to analyze four neutron skyshine problems with simple geometries: (1) an open silo, (2) an infinite wall, (3) a roofless rectangular building, and (4) an infinite air medium. In addition, two simple neutron skyshine problems involving an open source silo are analyzed using the integral conical-beam method. The results obtained using the LBRFs and the CBRFs are then compared with MCNP results and results of previous studies.

  15. Analytical approximate solutions for a general class of nonlinear delay differential equations.

    PubMed

    Căruntu, Bogdan; Bota, Constantin

    2014-01-01

    We use the polynomial least squares method (PLSM), which allows us to compute analytical approximate polynomial solutions for a very general class of strongly nonlinear delay differential equations. The method is tested by computing approximate solutions for several applications including the pantograph equations and a nonlinear time-delay model from biology. The accuracy of the method is illustrated by a comparison with approximate solutions previously computed using other methods.
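
    For a linear delay equation the polynomial least squares approach collapses to one linear least-squares solve, since the residual is linear in the polynomial coefficients; the sketch below treats y'(t) = -y(t-1) with history y = 1 on t <= 0 (on [0, 1] the exact solution is 1 - t, so y(1) should be near 0). The equation and weights are illustrative choices, not taken from the paper.

        import numpy as np

        deg, T = 8, 2.0
        t = np.linspace(0.0, T, 201)
        pw = np.arange(deg + 1)

        # Rows of the least-squares system: y'(t_j) + y(t_j - 1) = 0, with
        # y(t) = sum_i c_i t^i and the lag term switched off where the
        # constant history applies (there the right-hand side is -1).
        Phi_d = np.where(pw > 0, pw, 0) * t[:, None] ** np.clip(pw - 1, 0, None)
        Phi_lag = np.where(t[:, None] >= 1.0, (t[:, None] - 1.0) ** pw, 0.0)
        A = Phi_d + Phi_lag
        rhs = np.where(t < 1.0, -1.0, 0.0)

        A = np.vstack([A, (0.0 ** pw)[None, :] * 1e3])   # weighted IC row y(0)=1
        rhs = np.append(rhs, 1e3)
        c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

        y = (t[:, None] ** pw) @ c
        print("approximate y(1):", y[100])                # exact value is 0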

  16. A Fast Approximate Algorithm for Mapping Long Reads to Large Reference Databases.

    PubMed

    Jain, Chirag; Dilthey, Alexander; Koren, Sergey; Aluru, Srinivas; Phillippy, Adam M

    2018-04-30

    Emerging single-molecule sequencing technologies from Pacific Biosciences and Oxford Nanopore have revived interest in long-read mapping algorithms. Alignment-based seed-and-extend methods demonstrate good accuracy, but face limited scalability, while faster alignment-free methods typically trade decreased precision for efficiency. In this article, we combine a fast approximate read mapping algorithm based on minimizers with a novel MinHash identity estimation technique to achieve both scalability and precision. In contrast to prior methods, we develop a mathematical framework that defines the types of mapping targets we uncover, establish probabilistic estimates of p-value and sensitivity, and demonstrate tolerance for alignment error rates up to 20%. With this framework, our algorithm automatically adapts to different minimum length and identity requirements and provides both positional and identity estimates for each mapping reported. For mapping human PacBio reads to the hg38 reference, our method is 290 × faster than Burrows-Wheeler Aligner-MEM with a lower memory footprint and recall rate of 96%. We further demonstrate the scalability of our method by mapping noisy PacBio reads (each ≥5 kbp in length) to the complete NCBI RefSeq database containing 838 Gbp of sequence and >60,000 genomes.
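
    The identity-estimation step can be sketched as follows (in the style of Mash-like MinHash methods, not the paper's exact algorithm): keep the smallest k-mer hashes of each sequence, estimate the Jaccard similarity j from the merged sketches, and convert it to a per-base identity via the Poisson model d = -(1/k) ln(2j/(1+j)). The hash function, k, and sketch size below are illustrative choices.

    ```python
    import hashlib
    import math
    import random

    def sketch(seq, k=16, h=512):
        """Keep the h smallest 64-bit hashes over all k-mers (the MinHash sketch)."""
        hashes = {int.from_bytes(hashlib.blake2b(seq[i:i + k].encode(),
                                                 digest_size=8).digest(), "big")
                  for i in range(len(seq) - k + 1)}
        return set(sorted(hashes)[:h])

    def identity_estimate(s1, s2, k=16):
        """Jaccard from the merged sketch, then identity = 1 + ln(2j/(1+j))/k."""
        merged = sorted(s1 | s2)[:max(len(s1), len(s2))]
        shared = sum(1 for v in merged if v in s1 and v in s2)
        j = shared / len(merged)
        return 0.0 if j == 0.0 else 1.0 + math.log(2.0 * j / (1.0 + j)) / k

    random.seed(0)
    a = "".join(random.choice("ACGT") for _ in range(2000))
    b = "".join(c if random.random() > 0.05 else random.choice("ACGT") for c in a)
    print(identity_estimate(sketch(a), sketch(b)))   # should be near 0.95
    ```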

  17. Using block pulse functions for seismic vibration semi-active control of structures with MR dampers

    NASA Astrophysics Data System (ADS)

    Rahimi Gendeshmin, Saeed; Davarnia, Daniel

    2018-03-01

    This article applies the idea of block pulse (BP) functions to the semi-active control of structures. BP functions provide an effective tool for approximating complex problems. The control algorithm applied has a major effect on the performance of the controlled system and on the requirements of the control devices, and in control problems it is important to devise an accurate analytical technique with low computational cost. BP functions have proved to be fundamental tools in approximation problems and have been applied in disparate areas over recent decades. This study focuses on employing BP functions in the control algorithm to reduce the computational cost. Magneto-rheological (MR) dampers are well-known semi-active devices that can be used to control the response of civil structures during earthquakes. For validation purposes, numerical simulations of a 5-story shear building frame with MR dampers are presented, and the results of the suggested method are compared with those obtained by controlling the frame with an optimal control method based on linear quadratic regulator theory. The simulation results show that the suggested method can be helpful in reducing seismic structural responses; it has acceptable accuracy and agrees with the optimal control method at lower computational cost.
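
    For readers unfamiliar with the basis, the sketch below shows the core BP machinery: a signal on [0, T] is represented by its midpoint samples on m disjoint intervals, and integration reduces to multiplication by the operational matrix P = h(0.5 I + U), with U the strictly upper triangular matrix of ones. The test signal is an arbitrary choice; the structural control formulation itself is more involved.

    ```python
    import numpy as np

    T, m = 1.0, 32
    h = T / m
    t_mid = (np.arange(m) + 0.5) * h

    c = np.sin(2 * np.pi * t_mid)       # BP coefficients: midpoint samples of f
    # Operational matrix of integration for block pulse functions.
    P = h * (0.5 * np.eye(m) + np.triu(np.ones((m, m)), k=1))

    c_int = P.T @ c                     # BP coefficients of the running integral
    exact = (1.0 - np.cos(2 * np.pi * t_mid)) / (2.0 * np.pi)
    print("max BP integration error:", np.abs(c_int - exact).max())
    ```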

  18. Design of A Cyclone Separator Using Approximation Method

    NASA Astrophysics Data System (ADS)

    Sin, Bong-Su; Choi, Ji-Won; Lee, Kwon-Hee

    2017-12-01

    A separator is a device installed in industrial applications to separate mixed substances; the separator of interest in this research is a cyclone type, used to separate a steam-brine mixture in a geothermal plant. The most important performance measure of the cyclone separator is the collection efficiency, which in this study is predicted by CFD (Computational Fluid Dynamics) analysis. This research defines six shape design variables and sets the collection efficiency as the objective function to be maximized in the optimization process. Since the CFD analysis requires a great deal of computation time, obtaining the optimal solution by directly coupling a gradient-based optimization algorithm is impractical, so two approximation methods are introduced to obtain an optimum design. In this process, an L18 orthogonal array is adopted as the DOE method, and the kriging interpolation method is adopted to generate a metamodel for the collection efficiency. Based on the 18 analysis results, the relative importance of each variable to the collection efficiency is obtained through ANOVA (analysis of variance). The final design is suggested considering the results obtained from the two optimization methods. The fluid flow analysis of the cyclone separator is conducted using the commercial CFD software ANSYS-CFX.
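
    The surrogate step can be sketched as follows: fit a kriging (Gaussian process) model to a small DOE of solver results, then search the cheap metamodel instead of rerunning CFD. The two design variables and the mock efficiency function below are stand-ins for the six shape variables and CFD-computed collection efficiencies used in the study.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(18, 2))          # DOE samples (stand-in for L18)
    y = -(X[:, 0] - 0.6)**2 - (X[:, 1] - 0.3)**2     # mock "collection efficiency"

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                                  normalize_y=True).fit(X, y)

    # Search the cheap metamodel instead of the expensive solver.
    g = np.linspace(0.0, 1.0, 101)
    grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
    print("metamodel optimum near:", grid[np.argmax(gp.predict(grid))])
    ```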

  19. Aerodynamic influence coefficient method using singularity splines.

    NASA Technical Reports Server (NTRS)

    Mercer, J. E.; Weber, J. A.; Lesferd, E. P.

    1973-01-01

    A new numerical formulation, with computed results, is presented. This formulation combines the adaptability to complex shapes offered by paneling schemes with the smoothness and accuracy of loading-function methods. The formulation employs a continuous distribution of singularity strength over a set of panels on a paneled wing. The basic distributions are independent, and each satisfies all of the continuity conditions required of the final solution. These distributions are overlapped both spanwise and chordwise (hence the term 'spline'). Boundary conditions are satisfied in a least-squares error sense over the surface using a finite summing technique to approximate the integral.

  20. Shoulder Injuries in US Astronauts Related to EVA Suit Design

    NASA Technical Reports Server (NTRS)

    Scheuring, R. A.; McCulloch, P.; Van Baalen, Mary; Minard, Charles; Watson, Richard; Blatt, T.

    2011-01-01

    Introduction: For every one hour spent performing extravehicular activity (EVA) in space, astronauts in the US space program spend approximately six to ten hours training in the EVA spacesuit at NASA Johnson Space Center's Neutral Buoyancy Lab (NBL). In 1997, NASA introduced the planar hard upper torso (HUT) EVA spacesuit, which subsequently replaced the existing pivoted HUT. An extra joint in the pivoted shoulder allows increased mobility but also increased complexity. Over the next decade a number of astronauts developed shoulder problems requiring surgical intervention, many of whom performed EVA training in the NBL. This study investigated whether changing HUT designs led to shoulder injuries requiring surgical repair. Methods: US astronaut EVA training data and the spacesuit design employed were analyzed from the NBL data. Shoulder surgery data were acquired from the medical record database, and causal mechanisms were obtained from personal interviews. Analysis of the individual HUT designs was performed as it related to normal shoulder biomechanics. Results: To date, 23 US astronauts have required 25 shoulder surgeries. Approximately 48% (11/23) directly attributed their injury to training in the planar HUT, whereas none attributed their injury to training in the pivoted HUT. The planar HUT design limits shoulder abduction to 90 degrees, compared to approximately 120 degrees in the pivoted HUT. The planar HUT also forces the shoulder into a forward flexed position, requiring active retraction and extension to increase abduction beyond 90 degrees. Discussion: Multiple factors are associated with mechanisms leading to shoulder injury requiring surgical repair. Limitations to normal shoulder mechanics, suit fit, donning/doffing, body position, pre-existing injury, tool weight and configuration, age, in-suit activity, and HUT design have all been identified as potential sources of injury. Conclusion: Crewmembers with pre-existing or current shoulder injuries or certain anthropometric body types should conduct NBL EVA training in the pivoted HUT.

  1. Flow Cytometry Techniques in Radiation Biology

    DTIC Science & Technology

    1988-06-01

    SUMMARY: Hematopoietic stem cells (HSC) are present in the marrow at a concentration of approximately 2-3 HSC per 1000 nucleated marrow... cells. In the past, only clonogenic assays requiring 8-13 days and ten irradiated recipient rodents were available for assaying HSC. Because of the... importance of HSC in the postirradiation syndrome, we have developed a new rapid method based on flow cytometry not only to assay but also to purify and...

  2. Bounded Linear Stability Margin Analysis of Nonlinear Hybrid Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Boskovic, Jovan D.

    2008-01-01

    This paper presents a bounded linear stability analysis for a hybrid adaptive control that blends both direct and indirect adaptive control. Stability and convergence of nonlinear adaptive control are analyzed using an approximate linear equivalent system. A stability margin analysis shows that a large adaptive gain can lead to a reduced phase margin. This method can enable metrics-driven adaptive control whereby the adaptive gain is adjusted to meet stability margin requirements.

  3. Thermal Desorption Capability Development for Enhanced On-site Health Risk Assessment: HAPSITE (registered trademark) ER Passive Sampling in the Field

    DTIC Science & Technology

    2015-06-07

    Field-Portable Gas Chromatograph-Mass Spectrometer." Forensic Toxicol, 2006, 24, 17-22. Smith, P. "Person-Portable Gas Chromatography: Rapid Temperature... bench-top Gas Chromatograph-Mass Spectrometer (GC-MS) system (ISQ). Nine sites were sampled and analyzed for compounds using Environmental Protection... extraction methods for Liquid Chromatography-MS (LC-MS). Additionally, TD is approximately 1000X more sensitive and requires minimal sample preparation...

  4. The absolute radiometric calibration of the advanced very high resolution radiometer

    NASA Technical Reports Server (NTRS)

    Slater, P. N.; Teillet, P. M.; Ding, Y.

    1988-01-01

    The need for independent, redundant absolute radiometric calibration methods is discussed with reference to the Thematic Mapper. Uncertainty requirements for absolute calibration of between 0.5 and 4 percent are defined based on the accuracy of reflectance retrievals at an agricultural site. It is shown that even very approximate atmospheric corrections can reduce the error in reflectance retrieval to 0.02 over the reflectance range 0 to 0.4.

  5. 26 CFR 1.985-3 - United States dollar approximate separate transactions method.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... transactions method. 1.985-3 Section 1.985-3 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE... dollar approximate separate transactions method. (a) Scope and effective date—(1) Scope. This section describes the United States dollar (dollar) approximate separate transactions method of accounting (DASTM...

  6. Numerical method of lines for the relaxational dynamics of nematic liquid crystals.

    PubMed

    Bhattacharjee, A K; Menon, Gautam I; Adhikari, R

    2008-08-01

    We propose an efficient numerical scheme, based on the method of lines, for solving the Landau-de Gennes equations describing the relaxational dynamics of nematic liquid crystals. Our method is computationally easy to implement, balancing requirements of efficiency and accuracy. We benchmark our method through the study of the following problems: the isotropic-nematic interface, growth of nematic droplets in the isotropic phase, and the kinetics of coarsening following a quench into the nematic phase. Our results, obtained through solutions of the full coarse-grained equations of motion with no approximations, provide a stringent test of the de Gennes ansatz for the isotropic-nematic interface, illustrate the anisotropic character of droplets in the nucleation regime, and validate dynamical scaling in the coarsening regime.
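
    The structure of such a method-of-lines scheme can be sketched on a simpler relaxational (model A) equation, phi_t = phi - phi^3 + phi_xx: discretize in space only, then hand the resulting ODE system to a stiff integrator. The full Landau-de Gennes model evolves a tensor order parameter, but the scheme has the same shape.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    N, L = 200, 40.0
    x = np.linspace(0.0, L, N)
    dx = x[1] - x[0]
    phi0 = np.tanh(x - L / 2)                  # interface-like initial kink

    def rhs(t, phi):
        # second-order central difference for phi_xx; ends held fixed
        lap = np.zeros_like(phi)
        lap[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx**2
        return phi - phi**3 + lap

    sol = solve_ivp(rhs, (0.0, 20.0), phi0, method="BDF", rtol=1e-6)
    print("relaxed profile computed up to t =", sol.t[-1])
    ```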

  7. Numerical analysis of spectral properties of coupled oscillator Schroedinger operators. I - Single and double well anharmonic oscillators

    NASA Technical Reports Server (NTRS)

    Isaacson, D.; Isaacson, E. L.; Paes-Leme, P. J.; Marchesin, D.

    1981-01-01

    Several methods for computing many eigenvalues and eigenfunctions of a single anharmonic oscillator Schroedinger operator whose potential may have one or two minima are described. One of the methods requires the solution of an ill-conditioned generalized eigenvalue problem. This method has the virtue of using a bounded amount of work to achieve a given accuracy in both the single and double well regions. Rigorous bounds are given, and it is proved that the approximations converge faster than any inverse power of the size of the matrices needed to compute them. The results of computations for the g:phi(4):1 theory are presented. These results indicate that the methods actually converge exponentially fast.
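
    For comparison, the low-lying spectrum of such a double-well anharmonic oscillator, here H = -(1/2) d^2/dx^2 - x^2/2 + g x^4, can be sketched with a simple finite-difference grid and a tridiagonal eigensolver. This is a generic numerical check, not the basis method or the generalized eigenvalue formulation used in the paper.

    ```python
    import numpy as np
    from scipy.linalg import eigh_tridiagonal

    g, N, L = 0.1, 2000, 20.0
    x = np.linspace(-L / 2, L / 2, N)
    h = x[1] - x[0]

    # -(1/2) d^2/dx^2 discretized: diagonal 1/h^2, off-diagonal -1/(2 h^2)
    diag = 1.0 / h**2 + (-0.5 * x**2 + g * x**4)   # kinetic + double-well potential
    off = -0.5 / h**2 * np.ones(N - 1)

    vals = eigh_tridiagonal(diag, off, select="i", select_range=(0, 5))[0]
    print("lowest levels:", vals)
    ```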

  8. Born iterative reconstruction using perturbed-phase field estimates.

    PubMed

    Astheimer, Jeffrey P; Waag, Robert C

    2008-10-01

    A method of image reconstruction from scattering measurements for use in ultrasonic imaging is presented. The method employs distorted-wave Born iteration but does not require using a forward-problem solver or solving large systems of equations. These calculations are avoided by limiting intermediate estimates of medium variations to smooth functions in which the propagated fields can be approximated by phase perturbations derived from variations in a geometric path along rays. The reconstruction itself is formed by a modification of the filtered-backpropagation formula that includes correction terms to account for propagation through an estimated background. Numerical studies that validate the method for parameter ranges of interest in medical applications are presented. The efficiency of this method offers the possibility of real-time imaging from scattering measurements.

  9. A Formal Valuation Framework for Emotions and Their Control.

    PubMed

    Huys, Quentin J M; Renz, Daniel

    2017-09-15

    Computational psychiatry aims to apply mathematical and computational techniques to help improve psychiatric care. To achieve this, the phenomena under scrutiny should be within the scope of formal methods. As emotions play an important role across many psychiatric disorders, such computational methods must encompass emotions. Here, we consider formal valuation accounts of emotions. We focus on the fact that the flexibility of emotional responses and the nature of appraisals suggest the need for a model-based valuation framework for emotions. However, resource limitations make plain model-based valuation impossible and require metareasoning strategies to apportion cognitive resources adaptively. We argue that emotions may implement such metareasoning approximations by restricting the range of behaviors and states considered. We consider the processes that guide the deployment of the approximations, discerning between innate, model-free, heuristic, and model-based controllers. A formal valuation and metareasoning framework may thus provide a principled approach to examining emotions.

  10. Improving the Accuracy of the Chebyshev Rational Approximation Method Using Substeps

    DOE PAGES

    Isotalo, Aarno; Pusa, Maria

    2016-05-01

    The Chebyshev Rational Approximation Method (CRAM) for solving the decay and depletion of nuclides is shown to have a remarkable decrease in error when advancing the system with the same time step and microscopic reaction rates as the previous step. This property is exploited here to achieve high accuracy in any end-of-step solution by dividing a step into equidistant substeps. The computational cost of identical substeps can be reduced significantly below that of an equal number of regular steps, as the LU decompositions for the linear solves required in CRAM only need to be formed on the first substep. The improved accuracy provided by substeps is most relevant in decay calculations, where there have previously been concerns about the accuracy and generality of CRAM. Lastly, with substeps, CRAM can solve any decay or depletion problem with constant microscopic reaction rates to an extremely high accuracy for all nuclides with concentrations above an arbitrary limit.
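
    The reason identical substeps are cheap can be sketched directly: each substep applies the same rational function r(Ah) = alpha0*I + 2*Re sum_i alpha_i (Ah - theta_i I)^(-1) of the burnup matrix, so the sparse LU factorizations of (Ah - theta_i I) from the first substep can be reused verbatim. The poles, residues, and matrix below are placeholders, not the published CRAM constants.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n_nuc, h = 100, 0.1
    A = sp.csc_matrix(sp.random(n_nuc, n_nuc, density=0.05, random_state=0)
                      - 5.0 * sp.identity(n_nuc))      # stand-in burnup matrix

    theta = np.array([-2.0 + 1.0j, -1.0 + 3.0j])       # placeholder poles
    alpha = np.array([0.5 - 0.2j, 0.1 + 0.4j])         # placeholder residues
    alpha0 = 1e-3                                      # placeholder limit value

    # Factorize (A*h - theta_i*I) once; every identical substep reuses these LUs.
    lus = [spla.splu(sp.csc_matrix(A.astype(complex) * h
                                   - th * sp.identity(n_nuc, dtype=complex)))
           for th in theta]

    def substep(n0):
        acc = alpha0 * n0.astype(complex)
        for lu, al in zip(lus, alpha):
            acc += 2.0 * al * lu.solve(n0.astype(complex))
        return acc.real                                # 2*Re(sum) via final real part

    n = np.ones(n_nuc)
    for _ in range(8):                                 # eight equidistant substeps
        n = substep(n)
    ```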

  11. Resolution of the 1D regularized Burgers equation using a spatial wavelet approximation

    NASA Technical Reports Server (NTRS)

    Liandrat, J.; Tchamitchian, PH.

    1990-01-01

    The Burgers equation with a small viscosity term and with initial and periodic boundary conditions is solved using a spatial approximation constructed from an orthonormal basis of wavelets. The algorithm is directly derived from the notions of multiresolution analysis and tree algorithms, which are first recalled before the numerical algorithm is described. The method makes extensive use of the localization properties of the wavelets in the physical and Fourier spaces, and the authors take advantage of the fact that the linear operators involved have constant coefficients. The algorithm can be considered a time-marching version of the tree algorithm. Most importantly, an adaptive version of the algorithm exists: it allows one to significantly reduce the number of degrees of freedom required for a good computation of the solution. Numerical results and a description of the different elements of the algorithm are provided, together with mathematical comments on the method and some comparisons with more classical numerical algorithms.

  12. Calculating the Optimum Angle of Filament-Wound Pipes in Natural Gas Transmission Pipelines Using Approximation Methods.

    PubMed

    Reza Khoshravan Azar, Mohammad; Emami Satellou, Ali Akbar; Shishesaz, Mohammad; Salavati, Bahram

    2013-04-01

    Given the increasing use of composite materials in various industries, the oil and gas industry also requires that more attention be paid to these materials. Because of the wide variation in candidate materials, they are assessed with respect to mechanical strength, resistance in critical situations such as fire, cost, and other analysis priorities, and the most suitable choices for achieving particular goals are identified. In this study, we introduce an appropriate choice for use in natural gas transmission composite pipelines. A four-layered filament-wound (FW) composite pipe is analyzed under internal pressure, with results calculated for winding-angle combinations of 15 deg, 30 deg, 45 deg, 55 deg, 60 deg, 75 deg, and 80 deg. Finally, we compare the calculated values, and the optimal angle is obtained using the approximation methods. The layup is symmetrical.

  13. Simple, stable and reliable modeling of gas properties of organic working fluids in aerodynamic designs of turbomachinery for ORC and VCC

    NASA Astrophysics Data System (ADS)

    Kawakubo, T.

    2016-05-01

    A simple, stable, and reliable model of the real-gas behavior of the working fluid is required for the aerodynamic design of the turbine in the Organic Rankine Cycle and of the compressor in the Vapor Compression Cycle. Although many modern Computational Fluid Dynamics (CFD) tools can incorporate real-gas models, simulations with such models tend to be more time-consuming than those with a perfect-gas model and can even become unstable near the saturation boundary. A perfect-gas approximation therefore remains an attractive option for conducting design simulations stably and swiftly. In this paper, an effective method for CFD simulation with a perfect-gas approximation is discussed. A method is presented for representing the performance of the centrifugal compressor or the radial-inflow turbine by sets of non-dimensional performance parameters and translating the fictitious perfect-gas result into the actual real-gas performance.

  14. Asymptotically free theory with scale invariant thermodynamics

    NASA Astrophysics Data System (ADS)

    Ferrari, Gabriel N.; Kneur, Jean-Loïc; Pinto, Marcus Benghi; Ramos, Rudnei O.

    2017-12-01

    A recently developed variational resummation technique, incorporating renormalization group properties consistently, has been shown to solve the scale dependence problem that plagues the evaluation of thermodynamical quantities, e.g., within the framework of approximations such as in the hard-thermal-loop resummed perturbation theory. This method is used in the present work to evaluate thermodynamical quantities within the two-dimensional nonlinear sigma model, which, apart from providing a technically simpler testing ground, shares some common features with Yang-Mills theories, like asymptotic freedom, trace anomaly and the nonperturbative generation of a mass gap. The present application confirms that nonperturbative results can be readily generated solely by considering the lowest-order (quasiparticle) contribution to the thermodynamic effective potential, when this quantity is required to be renormalization group invariant. We also show that when the next-to-leading correction from the method is accounted for, the results indicate convergence, apart from optimally preserving, within the approximations here considered, the sought-after scale invariance.

  15. Multi-reference approach to the calculation of photoelectron spectra including spin-orbit coupling.

    PubMed

    Grell, Gilbert; Bokarev, Sergey I; Winter, Bernd; Seidel, Robert; Aziz, Emad F; Aziz, Saadullah G; Kühn, Oliver

    2015-08-21

    X-ray photoelectron spectra provide a wealth of information on the electronic structure. The extraction of molecular details requires adequate theoretical methods, which in case of transition metal complexes has to account for effects due to the multi-configurational and spin-mixed nature of the many-electron wave function. Here, the restricted active space self-consistent field method including spin-orbit coupling is used to cope with this challenge and to calculate valence- and core-level photoelectron spectra. The intensities are estimated within the frameworks of the Dyson orbital formalism and the sudden approximation. Thereby, we utilize an efficient computational algorithm that is based on a biorthonormal basis transformation. The approach is applied to the valence photoionization of the gas phase water molecule and to the core ionization spectrum of the [Fe(H2O)6](2+) complex. The results show good agreement with the experimental data obtained in this work, whereas the sudden approximation demonstrates distinct deviations from experiments.

  16. Improved key-rate bounds for practical decoy-state quantum-key-distribution systems

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Zhao, Qi; Razavi, Mohsen; Ma, Xiongfeng

    2017-01-01

    The decoy-state scheme is the most widely implemented quantum-key-distribution protocol in practice. In order to account for the finite-size key effects on the achievable secret key generation rate, a rigorous statistical fluctuation analysis is required. Originally, a heuristic Gaussian-approximation technique was used for this purpose, which, despite its analytical convenience, was not sufficiently rigorous. The fluctuation analysis has recently been made rigorous by using the Chernoff bound. There is a considerable gap, however, between the key-rate bounds obtained from these techniques and that obtained from the Gaussian assumption. Here we develop a tighter bound for the decoy-state method, which yields a smaller failure probability. This improvement results in a higher key rate and increases the maximum distance over which secure key exchange is possible. By optimizing the system parameters, our simulation results show that our method almost closes the gap between the two previously proposed techniques and achieves a performance similar to that of conventional Gaussian approximations.
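
    The gap being closed can be illustrated numerically: for the same failure probability, compare the one-sided deviation a Gaussian tail assigns to a photon count with the deviation obtained from the multiplicative Chernoff bound P[X >= (1+delta)*mu] <= exp(-delta^2 * mu / (2 + delta)). The sketch below evaluates both; the paper's protocol-level key-rate optimization is not reproduced.

    ```python
    import numpy as np
    from scipy.stats import norm

    eps = 1e-10                      # failure probability per estimate
    mu = np.array([1e2, 1e4, 1e6])   # expected counts

    # Gaussian tail: deviation = z(1 - eps) * sqrt(mu)
    gauss_dev = norm.ppf(1.0 - eps) * np.sqrt(mu)

    # Chernoff tail set equal to eps and solved for delta:
    # mu*delta^2 - t*delta - 2t = 0 with t = ln(1/eps)
    t = np.log(1.0 / eps)
    delta = (t + np.sqrt(t**2 + 8.0 * mu * t)) / (2.0 * mu)
    chernoff_dev = delta * mu

    for m, g, c in zip(mu, gauss_dev, chernoff_dev):
        print(f"mu={m:10.0f}  Gaussian dev={g:10.1f}  Chernoff dev={c:10.1f}")
    ```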

  17. Size response of an SMPS-APS system to commercial multi-walled carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Bok; Lee, Jun-Hyun; Bae, Gwi-Nam

    2010-02-01

    Carbon nanotubes (CNTs) are representative engineered nanomaterials with unique properties. The safe production of CNTs urgently requires reliable tools to assess inhalation exposure. In this study, on-line aerosol instruments were employed to detect the release of multi-walled CNTs (MWCNTs) in workplace environments. The size responses of aerosol instruments consisting of both a scanning mobility particle sizer (SMPS) and an aerodynamic particle sizer (APS) were examined using five types of commercial MWCNTs. A MWCNT solution and powder were aerosolized using atomizing and shaking methods, respectively. Regardless of phase and purity, the aerosolized MWCNTs showed consistent size distributions with both the SMPS and the APS. The SMPS and APS measurements revealed a dominant broad peak at approximately 200-400 nm and a distinct narrow peak at approximately 2 μm, respectively. Based on field application of the two aerosol instruments, the APS response could serve as a fingerprint of MWCNTs in a real workplace environment. A modification of the atomizing method is recommended for long-term inhalation toxicity studies.

  18. Kinetic description of large-scale low pressure glow discharges

    NASA Astrophysics Data System (ADS)

    Kortshagen, Uwe; Heil, Brian

    1997-10-01

    In recent years the so-called "nonlocal approximation" to the solution of the electron Boltzmann equation has attracted considerable attention as an extremely efficient method for the kinetic modeling of low pressure discharges. However, it appears that modern discharges, which are optimized to provide large-scale plasma uniformity, are explicitly designed to work in a regime in which the nonlocal approximation is no longer strictly valid. In this presentation we discuss results of a hybrid model based on the natural division of the electron distribution function into a nonlocal body, determined by elastic collisions only, and a high-energy part that requires a more complete treatment due to the action of inelastic collisions and wall losses of electrons. The method is applied to an inductively coupled low pressure discharge. We discuss the transition from plasma density profiles maximal on the discharge axis to plasma density profiles with off-center maxima, which has been observed in experiments. A positive feedback mechanism involved in this transition is pointed out.

  19. Hierarchical matrices implemented into the boundary integral approaches for gravity field modelling

    NASA Astrophysics Data System (ADS)

    Čunderlík, Róbert; Vipiana, Francesca

    2017-04-01

    Boundary integral approaches applied to gravity field modelling have recently been developed to solve the geodetic boundary value problems numerically, or to process satellite observations, e.g. from the GOCE satellite mission. In order to obtain numerical solutions of "cm-level" accuracy, such approaches require a very refined level of discretization or resolution, which leads to enormous memory requirements that need to be reduced. An implementation of Hierarchical Matrices (H-matrices) can significantly reduce the numerical complexity of these approaches. The main idea of H-matrices is to approximate the entire system matrix by splitting it into a family of submatrices: large submatrices are stored in factorized representation, while small submatrices are stored in standard representation. This reduces memory requirements significantly while improving efficiency. The poster presents our preliminary results of implementing H-matrices into the existing boundary integral approaches based on the boundary element method or the method of fundamental solutions.
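
    The storage idea can be sketched with a truncated SVD: an admissible (well-separated) block of a boundary-integral matrix is numerically low rank, so it can be held as a rank-k outer product instead of densely. Production H-matrix codes use ACA or similar rather than a full SVD; the 1/r kernel and point clouds below are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    src = rng.uniform(0.0, 1.0, size=(400, 3))           # source points
    tgt = rng.uniform(0.0, 1.0, size=(400, 3)) + 5.0     # well-separated targets

    # Admissible block of a 1/r kernel matrix.
    block = 1.0 / np.linalg.norm(tgt[:, None, :] - src[None, :, :], axis=2)

    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    k = max(1, int(np.sum(s > 1e-8 * s[0])))             # numerical rank at 1e-8
    Uk, Vk = U[:, :k] * s[:k], Vt[:k, :]

    print(f"rank {k}: dense {block.size} entries -> {Uk.size + Vk.size} factorized")
    print("relative error:", np.linalg.norm(Uk @ Vk - block) / np.linalg.norm(block))
    ```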

  20. Spacelab Mission Implementation Cost Assessment (SMICA)

    NASA Technical Reports Server (NTRS)

    Guynes, B. V.

    1984-01-01

    A total savings of approximately 20 percent is attainable if: (1) mission management and ground processing schedules are compressed; (2) the equipping, staffing, and operating of the Payload Operations Control Center are revised; and (3) methods of working with experiment developers are changed. The development of a new mission implementation technique, which includes mission definition, experiment development, and mission integration/operations, is examined. The Payload Operations Control Center is to be relocated and to utilize new computer equipment to produce cost savings. Methods of reducing costs by minimizing the Spacelab and payload processing time during pre- and post-mission operations at KSC are analyzed. The changes required to reduce costs in the analytical integration process are studied. The influence of time, requirements accountability, and risk on costs is discussed. Recommendations for cost reductions developed by the Spacelab Mission Implementation Cost Assessment study are listed.

  1. Preliminary design of a long-endurance Mars aircraft

    NASA Technical Reports Server (NTRS)

    Colozza, Anthony J.

    1990-01-01

    The preliminary design requirements of a long-endurance aircraft capable of flight within the Martian environment were determined. Both radioisotope/heat-engine and photovoltaic solar array power production systems were considered. Various cases for each power system were analyzed to determine the necessary size, weight, and power requirements of the aircraft. The analysis method used was an adaptation of the method developed by Youngblood and Talay of NASA Langley to design a high-altitude Earth-based aircraft. The analysis is set up to design an aircraft that, for the given conditions, has a minimum wingspan and maximum endurance parameter. The results showed that, to a first approximation, a long-endurance aircraft is feasible within the Martian environment. The size and weight of the most efficient solar aircraft were comparable to those of the radioisotope-powered one.

  2. Simulation of water-table aquifers using specified saturated thickness

    USGS Publications Warehouse

    Sheets, Rodney A.; Hill, Mary C.; Haitjema, Henk M.; Provost, Alden M.; Masterson, John P.

    2014-01-01

    Simulating groundwater flow in a water-table (unconfined) aquifer can be difficult because the saturated thickness available for flow depends on model-calculated hydraulic heads. It is often possible to realize substantial time savings and still obtain accurate head and flow solutions by specifying an approximate saturated thickness a priori, thus linearizing this aspect of the model. This specified-thickness approximation often relies on the use of the “confined” option in numerical models, which has led to confusion and criticism of the method. This article reviews the theoretical basis for the specified-thickness approximation, derives an error analysis for relatively ideal problems, and illustrates the utility of the approximation with a complex test problem. In the transient version of our complex test problem, the specified-thickness approximation produced maximum errors in computed drawdown of about 4% of initial aquifer saturated thickness even when maximum drawdowns were nearly 20% of initial saturated thickness. In the final steady-state version, the approximation produced maximum errors in computed drawdown of about 20% of initial aquifer saturated thickness (mean errors of about 5%) when maximum drawdowns were about 35% of initial saturated thickness. In early phases of model development, such as during initial model calibration efforts, the specified-thickness approximation can be a very effective tool to facilitate convergence. The reduced execution time and increased stability obtained through the approximation can be especially useful when many model runs are required, such as during inverse model calibration, sensitivity and uncertainty analyses, multimodel analysis, and development of optimal resource management scenarios.

  3. The cost-constrained traveling salesman problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokkappa, P.R.

    1990-10-01

    The Cost-Constrained Traveling Salesman Problem (CCTSP) is a variant of the well-known Traveling Salesman Problem (TSP). In the TSP, the goal is to find a tour of a given set of cities such that the total cost of the tour is minimized. In the CCTSP, each city is given a value, and a fixed cost-constraint is specified. The objective is to find a subtour of the cities that achieves maximum value without exceeding the cost-constraint. Thus, unlike the TSP, the CCTSP requires both selection and sequencing. As a consequence, most results for the TSP cannot be extended to the CCTSP. We show that the CCTSP is NP-hard and that no K-approximation algorithm or fully polynomial approximation scheme exists, unless P = NP. We also show that several special cases are polynomially solvable. Algorithms for the CCTSP, which outperform previous methods, are developed in three areas: upper bounding methods, exact algorithms, and heuristics. We found that a bounding strategy based on the knapsack problem performs better, both in speed and in the quality of the bounds, than methods based on the assignment problem. Likewise, we found that a branch-and-bound approach using the knapsack bound was superior to a method based on a common branch-and-bound method for the TSP. In our study of heuristic algorithms, we found that, when selecting nodes for inclusion in the subtour, it is important to consider the "neighborhood" of the nodes: a node with low value that brings the subtour near many other nodes may be more desirable than an isolated node of high value. We found two types of repetition to be desirable: repetitions based on randomization in the subtour building process, and repetitions encouraging the inclusion of different subsets of the nodes. By varying the number and type of repetitions, we can adjust the computation time required by our method to obtain algorithms that outperform previous methods.

  4. MMOC- MODIFIED METHOD OF CHARACTERISTICS SONIC BOOM EXTRAPOLATION

    NASA Technical Reports Server (NTRS)

    Darden, C. M.

    1994-01-01

    The Modified Method of Characteristics Sonic Boom Extrapolation program (MMOC) is a sonic boom propagation method which includes shock coalescence and incorporates the effects of asymmetry due to volume and lift. MMOC numerically integrates nonlinear equations from data at a finite distance from an airplane configuration at flight altitude to yield the sonic boom pressure signature at ground level. MMOC accounts for variations in entropy, enthalpy, and gravity for nonlinear effects near the aircraft, allowing extrapolation to begin nearer the body than in previous methods. This feature permits wind tunnel sonic boom models of up to three feet in length, enabling more detailed, realistic models than the previous six-inch sizes. It has been shown that elongated airplanes flying at high altitude and high Mach numbers can produce an acceptably low sonic boom. Shock coalescence in MMOC includes three-dimensional effects. The method is based on an axisymmetric solution with asymmetric effects determined by circumferential derivatives of the standard shock equations. Bow shocks and embedded shocks can be included in the near-field. The method of characteristics approach in MMOC allows large computational steps in the radial direction without loss of accuracy. MMOC is a propagation method rather than a predictive program. Thus input data (the flow field on a cylindrical surface at approximately one body length from the axis) must be supplied from calculations or experimental results. The MMOC package contains a uniform atmosphere pressure field program and interpolation routines for computing the required flow field data. Other user supplied input to MMOC includes Mach number, flow angles, and temperature. MMOC output tabulates locations of bow shocks and embedded shocks. When the calculations reach ground level, the overpressure and distance are printed, allowing the user to plot the pressure signature. MMOC is written in FORTRAN IV for batch execution and has been implemented on a CDC 170 series computer operating under NOS with a central memory requirement of approximately 223K of 60 bit words. This program was developed in 1983.

  5. A practical material decomposition method for x-ray dual spectral computed tomography.

    PubMed

    Hu, Jingjing; Zhao, Xing

    2016-03-17

    X-ray dual spectral CT (DSCT) scans the measured object with two different x-ray spectra, and the acquired rawdata can be used to perform material decomposition of the object. Direct calibration methods allow a faster material decomposition for DSCT and can be separated into two groups: image-based and rawdata-based. The image-based method is approximative, and beam hardening artifacts remain in the resulting material-selective images. The rawdata-based method generally obtains better image quality than the image-based method, but it requires geometrically consistent rawdata. However, today's clinical dual energy CT scanners usually measure different rays for different energy spectra and acquire geometrically inconsistent rawdata sets, and thus cannot meet this requirement. This paper proposes a practical material decomposition method to perform rawdata-based material decomposition in the case of inconsistent measurement. The method first derives the desired consistent rawdata sets from the measured inconsistent rawdata sets, and then employs the rawdata-based technique to perform material decomposition and reconstruct material-selective images. The proposed method was evaluated using simulated FORBILD thorax phantom rawdata and dental CT rawdata, and simulation results indicate that it can produce highly quantitative DSCT images in the case of inconsistent DSCT measurements.

  6. Mean-field approximation for spacing distribution functions in classical systems

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2012-01-01

    We propose a mean-field method to calculate approximately the spacing distribution functions p(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed.

  7. A method for modeling oxygen diffusion in an agent-based model with application to host-pathogen infection

    DOE PAGES

    Plimpton, Steven J.; Sershen, Cheryl L.; May, Elebeoba E.

    2015-01-01

    This paper describes a method for incorporating a diffusion field modeling oxygen usage and dispersion in a multi-scale model of Mycobacterium tuberculosis (Mtb) infection mediated granuloma formation. We implemented this method over a floating-point field to model oxygen dynamics in host tissue during chronic phase response and Mtb persistence. The method avoids the requirement of satisfying the Courant-Friedrichs-Lewy (CFL) condition, which is necessary in implementing the explicit version of the finite-difference method, but imposes an impractical bound on the time step. Instead, diffusion is modeled by a matrix-based, steady state approximate solution to the diffusion equation. Moreover, presented in figure 1 is the evolution of the diffusion profiles of a containment granuloma over time.
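
    A minimal sketch of the CFL-free step, under assumed grid sizes and rates: the oxygen field is obtained from a steady-state finite-difference diffusion equation assembled as a sparse linear system and solved directly, rather than by explicit time stepping.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n, D = 50, 1.0                      # grid size and diffusivity (illustrative)
    h = 1.0
    lap1d = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / h**2
    # 5-point Laplacian on an n x n grid; the stencil encodes u = 0 at the walls.
    lap = sp.kron(sp.eye(n), lap1d) + sp.kron(lap1d, sp.eye(n))

    q = np.zeros(n * n)
    cells = np.random.default_rng(2).choice(n * n, size=60, replace=False)
    q[cells] = 0.5                      # oxygen consumption at cell sites

    # Solve D*lap(u) = q; u is the deviation from the ambient boundary
    # concentration (negative where oxygen is depleted).
    u = spla.spsolve((D * lap).tocsc(), q)
    print("deepest depletion:", u.min())
    ```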

  8. Comparison between iteration schemes for three-dimensional coordinate-transformed saturated-unsaturated flow model

    NASA Astrophysics Data System (ADS)

    An, Hyunuk; Ichikawa, Yutaka; Tachikawa, Yasuto; Shiiba, Michiharu

    2012-11-01

    Summary: Three different iteration methods for a three-dimensional coordinate-transformed saturated-unsaturated flow model are compared in this study. The Picard and Newton iteration methods are the common approaches for solving Richards' equation. The Picard method is simple to implement and cost-efficient (on an individual iteration basis); however, it converges more slowly than the Newton method. On the other hand, although the Newton method converges faster, it is more complex to implement and consumes more CPU resources per iteration than the Picard method. The comparison of the two methods in finite-element models (FEM) for saturated-unsaturated flow has been well evaluated in previous studies. However, the two iteration methods might exhibit different behavior in the coordinate-transformed finite-difference model (FDM). In addition, the Newton-Krylov method could be a suitable alternative for the coordinate-transformed FDM because the latter requires the evaluation of a 19-point stencil matrix, and the formation of a 19-point stencil is quite a complex and laborious procedure. Instead, the Newton-Krylov method calculates the matrix-vector product, which can be easily approximated by differencing the original nonlinear function. In this respect, the Newton-Krylov method might be the most appropriate iteration method for the coordinate-transformed FDM; however, it involves the additional cost of taking an approximation at each Krylov iteration. In this paper, we evaluated the efficiency and robustness of three iteration methods (the Picard, Newton, and Newton-Krylov methods) for simulating saturated-unsaturated flow through porous media using a three-dimensional coordinate-transformed FDM.
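
    The appeal of the Newton-Krylov variant noted above is that the Jacobian-vector product is approximated by differencing the residual, so the 19-point stencil matrix never has to be assembled. scipy.optimize.newton_krylov does exactly this; the toy steady nonlinear diffusion problem below, d/dx(k(u) du/dx) = 0 with fixed ends, merely stands in for the saturated-unsaturated flow equations.

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    n = 101
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]

    def residual(u):
        v = np.concatenate(([0.0], u, [1.0]))    # Dirichlet ends: u(0)=0, u(1)=1
        k = 1.0 + v**2                           # nonlinear conductivity k(u)
        kh = 0.5 * (k[1:] + k[:-1])              # interface conductivities
        flux = kh * np.diff(v) / h
        return np.diff(flux) / h                 # interior residuals only

    # Jacobian-free Newton-Krylov: J*v is approximated by finite differences.
    u = newton_krylov(residual, 0.5 * np.ones(n - 2), method="lgmres", f_tol=1e-10)
    print("max residual:", np.abs(residual(u)).max())
    ```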

  9. Density estimation in wildlife surveys

    USGS Publications Warehouse

    Bart, Jonathan; Droege, Sam; Geissler, Paul E.; Peterjohn, Bruce G.; Ralph, C. John

    2004-01-01

    Several authors have recently discussed the problems with using index methods to estimate trends in population size. Some have expressed the view that index methods should virtually never be used. Others have responded by defending index methods and questioning whether better alternatives exist. We suggest that index methods are often a cost-effective component of valid wildlife monitoring but that double-sampling or another procedure that corrects for bias or establishes bounds on bias is essential. The common assertion that index methods require constant detection rates for trend estimation is mathematically incorrect; the requirement is no long-term trend in detection "ratios" (index result/parameter of interest), a requirement that is probably approximately met by many well-designed index surveys. We urge that more attention be given to defining bird density rigorously and in ways useful to managers. Once this is done, 4 sources of bias in density estimates may be distinguished: coverage, closure, surplus birds, and detection rates. Distance, double-observer, and removal methods do not reduce bias due to coverage, closure, or surplus birds. These methods may yield unbiased estimates of the number of birds present at the time of the survey, but only if their required assumptions are met, which we doubt occurs very often in practice. Double-sampling, in contrast, produces unbiased density estimates if the plots are randomly selected and estimates on the intensive surveys are unbiased. More work is needed, however, to determine the feasibility of double-sampling in different populations and habitats. We believe the tension that has developed over appropriate survey methods can best be resolved through increased appreciation of the mathematical aspects of indices, especially the effects of bias, and through studies in which candidate methods are evaluated against known numbers determined through intensive surveys.

  10. Receive Mode Analysis and Design of Microstrip Reflectarrays

    NASA Technical Reports Server (NTRS)

    Rengarajan, Sembiam

    2011-01-01

    Traditionally microstrip or printed reflectarrays are designed using the transmit mode technique. In this method, the size of each printed element is chosen so as to provide the required value of the reflection phase such that a collimated beam results along a given direction. The reflection phase of each printed element is approximated using an infinite array model. The infinite array model is an excellent engineering approximation for a large microstrip array since the size or orientation of elements exhibits a slow spatial variation. In this model, the reflection phase from a given printed element is approximated by that of an infinite array of elements of the same size and orientation when illuminated by a local plane wave. Thus the reflection phase is a function of the size (or orientation) of the element, the elevation and azimuth angles of incidence of a local plane wave, and polarization. Typically, one computes the reflection phase of the infinite array as a function of several parameters such as size/orientation, elevation and azimuth angles of incidence, and in some cases for vertical and horizontal polarization. The design requires the selection of the size/orientation of the printed element to realize the required phase by interpolating or curve fitting all the computed data. This is a substantially complicated problem, especially in applications requiring a computationally intensive commercial code to determine the reflection phase. In dual polarization applications requiring rectangular patches, one needs to determine the reflection phase as a function of five parameters (dimensions of the rectangular patch, elevation and azimuth angles of incidence, and polarization). This is an extremely complex problem. The new method employs the reciprocity principle and reaction concept, two well-known concepts in electromagnetics to derive the receive mode analysis and design techniques. In the "receive mode design" technique, the reflection phase is computed for a plane wave incident on the reflectarray from the direction of the beam peak. In antenna applications with a single collimated beam, this method is extremely simple since all printed elements see the same angles of incidence. Thus the number of parameters is reduced by two when compared to the transmit mode design. The reflection phase computation as a function of five parameters in the rectangular patch array discussed previously is reduced to a computational problem with three parameters in the receive mode. Furthermore, if the beam peak is in the broadside direction, the receive mode design is polarization independent and the reflection phase computation is a function of two parameters only. For a square patch array, it is a function of the size, one parameter only, thus making it extremely simple.

  11. A harmonic adiabatic approximation to calculate highly excited vibrational levels of "floppy molecules"

    NASA Astrophysics Data System (ADS)

    Lauvergnat, David; Nauts, André; Justum, Yves; Chapuisat, Xavier

    2001-04-01

    The harmonic adiabatic approximation (HADA), an efficient and accurate quantum method to calculate highly excited vibrational levels of molecular systems, is presented. It is well-suited to applications to "floppy molecules" with a rather large number of atoms (N>3). A clever choice of internal coordinates naturally suggests their separation into active, slow, or large amplitude coordinates q', and inactive, fast, or small amplitude coordinates q″, which leads to an adiabatic (or Born-Oppenheimer-type) approximation (ADA), i.e., the total wave function is expressed as a product of active and inactive total wave functions. However, within the framework of the ADA, potential energy data concerning the inactive coordinates q″ are required. To reduce this need, a minimum energy domain (MED) is defined by minimizing the potential energy surface (PES) for each value of the active variables q', and a quadratic or harmonic expansion of the PES, based on the MED, is used (MED harmonic potential). In other words, the overall picture is that of a harmonic valley about the MED. In the case of only one active variable, we have a minimum energy path (MEP) and a MEP harmonic potential. The combination of the MED harmonic potential and the adiabatic approximation (harmonic adiabatic approximation: HADA) greatly reduces the size of the numerical computations, so that rather large molecules can be studied. In the present article however, the HADA is applied to our benchmark molecule HCN/CNH, to test the validity of the method. Thus, the HADA vibrational energy levels are compared and are in excellent agreement with the ADA calculations (adiabatic approximation with the full PES) of Light and Bačić [J. Chem. Phys. 87, 4008 (1987)]. Furthermore, the exact harmonic results (exact calculations without the adiabatic approximation but with the MEP harmonic potential) are compared to the exact calculations (without any sort of approximation). In addition, we compare the densities of the bending motion during the HCN/CNH isomerization, computed with the HADA and the exact wave function.

  12. Transfer Learning to Accelerate Interface Structure Searches

    NASA Astrophysics Data System (ADS)

    Oda, Hiromi; Kiyohara, Shin; Tsuda, Koji; Mizoguchi, Teruyasu

    2017-12-01

    Interfaces have atomic structures that are significantly different from those in the bulk and play crucial roles in material properties. The central structures at the interfaces that provide these properties have been extensively investigated. However, determining even one interface structure requires searching for the stable configuration among many thousands of candidates. Here, a powerful combination of machine learning techniques based on kriging and transfer learning (TL) is proposed as a method for unveiling interface structures. Using the kriging+TL method, thirty-three grain boundaries were systematically determined from 1,650,660 candidates in only 462 calculations, an increase in efficiency over conventional all-candidate calculation methods by a factor of approximately 3,600.

  13. An evaluation of HEMT potential for millimeter-wave signal sources using interpolation and harmonic balance techniques

    NASA Technical Reports Server (NTRS)

    Kwon, Youngwoo; Pavlidis, Dimitris; Tutt, Marcel N.

    1991-01-01

    A large-signal analysis method based on a harmonic balance technique and a 2-D cubic spline interpolation function has been developed and applied to the prediction of InP-based HEMT oscillator performance for frequencies extending up to the submillimeter-wave range. The large-signal analysis method uses a limited number of DC and small-signal S-parameter data and allows accurate characterization of HEMT large-signal behavior. The method has been validated experimentally using load-pull measurements. Oscillation frequency, power performance, and load requirements are discussed, with an operation capability of 300 GHz predicted using state-of-the-art devices (fmax approximately equal to 450 GHz).

  14. Detecting Edges in Images by Use of Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steve

    2003-01-01

    A method of processing digital image data to detect edges includes the use of fuzzy reasoning. The method is completely adaptive and does not require any advance knowledge of an image. During initial processing of image data at a low level of abstraction, the nature of the data is indeterminate. Fuzzy reasoning is used in the present method because it affords the ability to construct useful abstractions from approximate, incomplete, and otherwise imperfect sets of data. Humans are able to make some sense of even unfamiliar objects that have imperfect high-level representations; it appears that to perceive unfamiliar objects, or familiar objects in imperfect images, humans apply heuristic algorithms to understand the images.

  15. Plasma in-liquid method for reduction of zinc oxide in zinc nanoparticle synthesis

    NASA Astrophysics Data System (ADS)

    Amaliyah, Novriany; Mukasa, Shinobu; Nomura, Shinfuku; Toyota, Hiromichi; Kitamae, Tomohide

    2015-02-01

    Metal air-batteries with high-energy density are expected to be increasingly applied in electric vehicles. This will require a method of recycling air batteries, and reduction of metal oxide by generating plasma in liquid has been proposed as a possible method. Microwave-induced plasma is generated in ethanol as a reducing agent in which zinc oxide is dispersed. Analysis by energy-dispersive x-ray spectrometry (EDS) and x-ray diffraction (XRD) reveals the reduction of zinc oxide. According to images by transmission electron microscopy (TEM), cubic and hexagonal metallic zinc particles are formed in sizes of 30 to 200 nm. Additionally, spherical fiber flocculates approximately 180 nm in diameter are present.

  16. Window-based method for approximating the Hausdorff in three-dimensional range imagery

    DOEpatents

    Koch, Mark W [Albuquerque, NM

    2009-06-02

    One approach to pattern recognition is to use a template from a database of objects and match it to a probe image containing the unknown. Accordingly, the Hausdorff distance can be used to measure the similarity of two sets of points. In particular, the Hausdorff can measure the goodness of a match in the presence of occlusion, clutter, and noise. However, existing 3D algorithms for calculating the Hausdorff are computationally intensive, making them impractical for pattern recognition that requires scanning of large databases. The present invention is directed to a new method that can efficiently, in time and memory, compute the Hausdorff for 3D range imagery. The method uses a window-based approach.
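
    A generic approximation in the spirit of the patent (not its exact algorithm): the directed Hausdorff distance is the maximum over template points of the nearest-neighbor distance into the probe cloud, and restricting each nearest-neighbor search to a local window (a distance upper bound) keeps time and memory small for 3D range imagery.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(3)
    template = rng.uniform(0.0, 1.0, size=(5000, 3))
    probe = template + rng.normal(scale=0.01, size=template.shape)  # noisy copy

    tree = cKDTree(probe)
    window = 0.05                                    # local search window
    # Nearest-neighbor distances, capped by the window; misses come back as inf.
    d, _ = tree.query(template, k=1, distance_upper_bound=window)
    d[np.isinf(d)] = window                          # unmatched points saturate
    print("windowed directed Hausdorff ~", d.max())
    ```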

  17. Finite area method for nonlinear supersonic conical flows

    NASA Technical Reports Server (NTRS)

    Sritharan, S. S.; Seebass, A. R.

    1983-01-01

    A fully conservative numerical method for the computation of steady inviscid supersonic flow about general conical bodies at incidence is described. The procedure utilizes the potential approximation and implements a body conforming mesh generator. The conical potential is assumed to have its best linear variation inside each mesh cell; a secondary interlocking cell system is used to establish the flux balance required to conserve mass. In the supersonic regions the scheme is symmetrized by adding artificial viscosity in conservation form. The algorithm is nearly an order of a magnitude faster than present Euler methods and predicts known results accurately and qualitative features such as nodal point lift off correctly. Results are compared with those of other investigators.

  18. Finite area method for nonlinear conical flows

    NASA Technical Reports Server (NTRS)

    Sritharan, S. S.; Seebass, A. R.

    1982-01-01

    A fully conservative finite area method for the computation of steady inviscid flow about general conical bodies at incidence is described. The procedure utilizes the potential approximation and implements a body conforming mesh generator. The conical potential is assumed to have its best linear variation inside each mesh cell and a secondary interlocking cell system is used to establish the flux balance required to conserve mass. In the supersonic regions the scheme is desymmetrized by adding appropriate artificial viscosity in conservation form. The algorithm is nearly an order of a magnitude faster than present Euler methods and predicts known results accurately and qualitative features such as nodal point lift off correctly. Results are compared with those of other investigations.

  19. A miniature Marine Aerosol Reference Tank (miniMART) as a compact breaking wave analogue

    NASA Astrophysics Data System (ADS)

    Stokes, M. Dale; Deane, Grant; Collins, Douglas B.; Cappa, Christopher; Bertram, Timothy; Dommer, Abigail; Schill, Steven; Forestieri, Sara; Survilo, Mathew

    2016-09-01

    In order to understand the processes governing the production of marine aerosols, repeatable, controlled methods for their generation are required. A new system, the miniature Marine Aerosol Reference Tank (miniMART), has been designed after the success of the original MART system, to approximate a small oceanic spilling breaker by producing an evolving bubble plume and surface foam patch. The smaller tank utilizes an intermittently plunging jet of water produced by a rotating water wheel, into an approximately 6 L reservoir to simulate bubble plume and foam formation and generate aerosols. This system produces bubble plumes characteristic of small whitecaps without the large external pump inherent in the original MART design. Without the pump it is possible to easily culture delicate planktonic and microbial communities in the bulk water during experiments while continuously producing aerosols for study. However, due to the reduced volume and smaller plunging jet, the absolute numbers of particles generated are approximately an order of magnitude less than in the original MART design.

  20. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P(sub b) for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P(sub b) is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P(sub b) ≈ (d(sub H)/N) P(sub s), where P(sub s) represents the block error probability, holds for systematic encoding only. Systematic encoding also provides the minimum P(sub b) when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with randomly generated generator matrices, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods that require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
