Sample records for traditional equating methods

  1. A Comparison of Kernel Equating and Traditional Equipercentile Equating Methods and the Parametric Bootstrap Methods for Estimating Standard Errors in Equipercentile Equating

    ERIC Educational Resources Information Center

    Choi, Sae Il

    2009-01-01

    This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…

  2. A Comparison of the Kernel Equating Method with Traditional Equating Methods Using SAT[R] Data

    ERIC Educational Resources Information Center

    Liu, Jinghua; Low, Albert C.

    2008-01-01

    This study applied kernel equating (KE) in two scenarios: equating to a very similar population and equating to a very different population, referred to as a distant population, using SAT[R] data. The KE results were compared to the results obtained from analogous traditional equating methods in both scenarios. The results indicate that KE results…

  3. Comparison of Kernel Equating and Item Response Theory Equating Methods

    ERIC Educational Resources Information Center

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…

  4. Local Linear Observed-Score Equating

    ERIC Educational Resources Information Center

    Wiberg, Marie; van der Linden, Wim J.

    2011-01-01

    Two methods of local linear observed-score equating for use with anchor-test and single-group designs are introduced. In an empirical study, the two methods were compared with the current traditional linear methods for observed-score equating. As a criterion, the bias in the equated scores relative to true equating based on Lord's (1980)…

  5. Tensor-GMRES method for large sparse systems of nonlinear equations

    NASA Technical Reports Server (NTRS)

    Feng, Dan; Pulliam, Thomas H.

    1994-01-01

    This paper introduces a tensor-Krylov method, the tensor-GMRES method, for large sparse systems of nonlinear equations. This method is a coupling of tensor model formation and solution techniques for nonlinear equations with Krylov subspace projection techniques for unsymmetric systems of linear equations. Traditional tensor methods for nonlinear equations are based on a quadratic model of the nonlinear function, a standard linear model augmented by a simple second order term. These methods are shown to be significantly more efficient than standard methods both on nonsingular problems and on problems where the Jacobian matrix at the solution is singular. A major disadvantage of the traditional tensor methods is that the solution of the tensor model requires the factorization of the Jacobian matrix, which may not be suitable for problems where the Jacobian matrix is large and has a 'bad' sparsity structure for an efficient factorization. We overcome this difficulty by forming and solving the tensor model using an extension of a Newton-GMRES scheme. Like traditional tensor methods, we show that the new tensor method has significant computational advantages over the analogous Newton counterpart. Consistent with Krylov subspace based methods, the new tensor method does not depend on the factorization of the Jacobian matrix. As a matter of fact, the Jacobian matrix is never needed explicitly.
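
The matrix-free Newton-Krylov idea described above can be sketched in a few lines. This is a plain Jacobian-free Newton iteration with a small least-squares (GMRES-like) subspace solve on a toy two-equation system, not the paper's tensor model (which augments the local model with a second-order term); the function `F` and all parameters are illustrative.

```python
import numpy as np

def F(u):
    # toy nonlinear system F(u) = 0 with a root at u = (1, 1)
    return np.array([u[0]**2 + u[1]**2 - 2.0,
                     u[0] - u[1]])

def jv(u, v, eps=1e-7):
    # matrix-free Jacobian-vector product J(u) v via finite differences;
    # as in Krylov-based methods, J is never formed explicitly
    return (F(u + eps * v) - F(u)) / eps

def newton_krylov(u, outer=20, m=2):
    for _ in range(outer):
        r = -F(u)
        if np.linalg.norm(r) < 1e-9:
            break
        V = [r]                                  # small Krylov basis {r, Jr, ...}
        for _ in range(m - 1):
            V.append(jv(u, V[-1]))
        V = np.array(V).T
        JV = np.column_stack([jv(u, V[:, j]) for j in range(V.shape[1])])
        # least-squares (GMRES-like) solve of J d = r over the subspace
        y, *_ = np.linalg.lstsq(JV, r, rcond=None)
        u = u + V @ y
    return u

root = newton_krylov(np.array([2.0, 0.5]))
```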

  6. Equating Scores from Adaptive to Linear Tests

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    2006-01-01

    Two local methods for observed-score equating are applied to the problem of equating an adaptive test to a linear test. In an empirical study, the methods were evaluated against a method based on the test characteristic function (TCF) of the linear test and traditional equipercentile equating applied to the ability estimates on the adaptive test…

  7. Kernel and Traditional Equipercentile Equating with Degrees of Presmoothing. Research Report. ETS RR-07-15

    ERIC Educational Resources Information Center

    Moses, Tim; Holland, Paul

    2007-01-01

    The purpose of this study was to empirically evaluate the impact of loglinear presmoothing accuracy on equating bias and variability across chained and post-stratification equating methods, kernel and percentile-rank continuization methods, and sample sizes. The results of evaluating presmoothing on equating accuracy generally agreed with those of…

  8. A New Factorisation of a General Second Order Differential Equation

    ERIC Educational Resources Information Center

    Clegg, Janet

    2006-01-01

    A factorisation of a general second order ordinary differential equation is introduced from which the full solution to the equation can be obtained by performing two integrations. The method is compared with traditional methods for solving this type of equation. It is shown how the Green's function can be derived directly from the factorisation…

  9. Soft tissue deformation estimation by spatio-temporal Kalman filter finite element method.

    PubMed

    Yarahmadian, Mehran; Zhong, Yongmin; Gu, Chengfan; Shin, Jaehyun

    2018-01-01

    Soft tissue modeling plays an important role in the development of surgical training simulators as well as in robot-assisted minimally invasive surgeries. It is well known that while the traditional Finite Element Method (FEM) promises accurate modeling of soft tissue deformation, it suffers from a slow computational process. This paper presents a Kalman filter finite element method (KF-FEM) to model soft tissue deformation in real time without sacrificing the traditional FEM's accuracy. The proposed method employs the FEM equilibrium equation and formulates it as a filtering process to estimate soft tissue behavior using real-time measurement data. The model is temporally discretized using the Newmark method and further formulated as the system state equation. Simulation results demonstrate that the computational time of KF-FEM is approximately one-tenth that of the traditional FEM while remaining just as accurate. The normalized root-mean-square error of the proposed KF-FEM with reference to the traditional FEM is computed as 0.0116. It is concluded that the proposed method significantly improves the computational performance of the traditional FEM without sacrificing accuracy. The proposed method also filters noise in the system state and measurement data.
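
The filtering step at the heart of this approach follows the standard Kalman recursion. The sketch below runs the textbook predict/update cycle on a hypothetical one-degree-of-freedom "node displacement" with made-up noise levels; it is not the paper's Newmark-discretized FEM state-space model.

```python
import numpy as np

def kalman_step(x, P, z, A, Q, H, R):
    # predict with the discretized dynamics x_k = A x_{k-1} + w,  w ~ N(0, Q)
    x_pred = A * x
    P_pred = A * P * A + Q
    # update with the measurement z = H x + v,  v ~ N(0, R)
    K = P_pred * H / (H * P_pred * H + R)      # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

rng = np.random.default_rng(0)
true_disp = 1.0                                # steady displacement (illustrative)
x, P = 0.0, 1.0                                # initial estimate and variance
for _ in range(200):
    z = true_disp + 0.1 * rng.standard_normal()   # noisy measurement
    x, P = kalman_step(x, P, z, A=1.0, Q=1e-6, H=1.0, R=0.01)
```

The filter converges to the steady displacement while averaging out the measurement noise, which is the noise-filtering property the abstract notes.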

  10. The method of Ritz applied to the equation of Hamilton [for pendulum systems]

    NASA Technical Reports Server (NTRS)

    Bailey, C. D.

    1976-01-01

    Without any reference to the theory of differential equations, the initial value problem of the nonlinear, nonconservative double pendulum system is solved by the application of the method of Ritz to the equation of Hamilton. Also shown is an example of the reduction of the traditional eigenvalue problem of linear, homogeneous, differential equations of motion to the solution of a set of nonhomogeneous algebraic equations. No theory of differential equations is used. Solution of the time-space path of the linear oscillator is demonstrated and compared to the exact solution.

  11. Denoising by coupled partial differential equations and extracting phase by backpropagation neural networks for electronic speckle pattern interferometry.

    PubMed

    Tang, Chen; Lu, Wenjing; Chen, Song; Zhang, Zhen; Li, Botao; Wang, Wenping; Han, Lin

    2007-10-20

    We extend and refine previous work [Appl. Opt. 46, 2907 (2007)]. Combining the coupled nonlinear partial differential equations (PDEs) denoising model with the ordinary differential equations enhancement method, we propose a new denoising and enhancing model for electronic speckle pattern interferometry (ESPI) fringe patterns. Meanwhile, we propose the backpropagation neural networks (BPNN) method to obtain unwrapped phase values based on a skeleton map instead of traditional interpolations. We test the introduced methods on computer-simulated ESPI fringe patterns and an experimentally obtained fringe pattern, respectively. The experimental results show that the coupled nonlinear PDEs denoising model is capable of effectively removing noise, and the unwrapped phase values obtained by the BPNN method are much more accurate than those obtained by the well-known traditional interpolation. In addition, the accuracy of the BPNN method is adjustable by changing the parameters of the networks, such as the number of neurons.

  12. Score Equating and Item Response Theory: Some Practical Considerations.

    ERIC Educational Resources Information Center

    Cook, Linda L.; Eignor, Daniel R.

    The purposes of this paper are five-fold: to discuss (1) when item response theory (IRT) equating methods should provide better results than traditional methods; (2) which IRT model, the three-parameter logistic or the one-parameter logistic (Rasch), is the most reasonable to use; (3) what unique contributions IRT methods can offer the equating…

  13. Application of the Discrete Regularization Method to the Inverse of the Chord Vibration Equation

    NASA Astrophysics Data System (ADS)

    Wang, Linjun; Han, Xu; Wei, Zhouchao

    The inverse problem of recovering the initial condition from boundary values of the chord vibration equation is ill-posed. First, we transform it into a Fredholm integral equation. Second, we discretize it by the trapezoidal rule and obtain a severely ill-conditioned linear system that is sensitive to disturbances in the data: a tiny error in the right-hand-side data causes a huge oscillation in the solution, and good results cannot be obtained by traditional methods. In this paper, we solve this problem by the Tikhonov regularization method, and numerical simulations demonstrate that this method is feasible and effective.
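
The regularization step can be illustrated independently of the chord-vibration data. The sketch below builds a made-up smoothing-kernel matrix (severely ill-conditioned, like a discretized Fredholm operator), adds a little noise to the right-hand side, and solves the Tikhonov normal equations; the matrix, noise level, and regularization parameter are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
t = np.linspace(0.0, 1.0, n)
A = np.exp(-50.0 * (t[:, None] - t[None, :])**2)   # ill-conditioned kernel matrix
x_true = np.sin(np.pi * t)
b = A @ x_true + 1e-6 * rng.standard_normal(n)     # slightly perturbed data

# Tikhonov: x_lam = argmin ||A x - b||^2 + lam ||x||^2
#         = (A^T A + lam I)^{-1} A^T b, damping the noise amplification
lam = 1e-8
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

A naive `np.linalg.solve(A, b)` would amplify the data perturbation enormously; the regularized solve recovers the smooth solution.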

  14. A note on the generation of phase plane plots on a digital computer [for solution of nonlinear differential equations]

    NASA Technical Reports Server (NTRS)

    Simon, M. K.

    1980-01-01

    A technique is presented for generating phase plane plots on a digital computer which circumvents the difficulties associated with more traditional methods of numerically solving nonlinear differential equations. In particular, the nonlinear differential equation of operation is formulated.

  15. Notes on a General Framework for Observed Score Equating. Research Report. ETS RR-08-59

    ERIC Educational Resources Information Center

    Moses, Tim; Holland, Paul

    2008-01-01

    The purpose of this paper is to extend von Davier, Holland, and Thayer's (2004b) framework of kernel equating so that it can incorporate raw data and traditional equipercentile equating methods. One result of this more general framework is that previous equating methodology research can be viewed more comprehensively. Another result is that the…

  16. High Frequency Acoustic Propagation using Level Set Methods

    DTIC Science & Technology

    2007-01-01

    solution of the high frequency approximation to the wave equation. Traditional solutions to the Eikonal equation in high frequency acoustics are...the Eikonal equation derived from the high frequency approximation to the wave equation, H(x, ∇u) = ±c(x)|∇u|, with the nonnegative function c(x...For simplicity, we only consider the case H(x, ∇u) = +c(x)|∇u|. Two difficulties must be addressed when solving the Eikonal equation in a fixed

  17. Improving traditional balancing methods for high-speed rotors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ling, J.; Cao, Y.

    1996-01-01

    This paper introduces frequency response functions, analyzes the relationships between the frequency response functions and influence coefficients theoretically, and derives corresponding mathematical equations for high-speed rotor balancing. The relationships between the imbalance masses on the rotor and frequency response functions are also analyzed based upon the modal balancing method, and the equations related to the static and dynamic imbalance masses and the frequency response function are obtained. Experiments on a high-speed rotor balancing rig were performed to verify the theory, and the experimental data agree satisfactorily with the analytical solutions. The improvement on the traditional balancing method proposed in this paper will substantially reduce the number of rotor startups required during the balancing process of rotating machinery.

  18. Optimization of GM(1,1) power model

    NASA Astrophysics Data System (ADS)

    Luo, Dang; Sun, Yu-ling; Song, Bo

    2013-10-01

    The GM(1,1) power model is an extension of the traditional GM(1,1) model and the Grey Verhulst model. Compared with the traditional models, the GM(1,1) power model has the following advantage: the power exponent that best matches the actual data values can be found by a suitable technique, so the model can reflect nonlinear features of the data and simulate and forecast with high accuracy. Determining the best power exponent is therefore a key step of the modeling process. In this paper, noting that the whitenization equation of the GM(1,1) power model is a Bernoulli equation, we turn it by variable substitution into the linear whitenization-equation form of the GM(1,1) model, construct the corresponding grey differential equation to establish the GM(1,1) power model, and solve for its parameters with a pattern search method. Finally, we illustrate the effectiveness of the new method with the example of simulating and forecasting the promotion rates from senior secondary schools to higher education in China.
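
For orientation, the base GM(1,1) recursion (without the power exponent) can be sketched as follows; the series and all numbers are toy values, and the paper's GM(1,1) power model additionally raises the background value to a power and is fitted via its Bernoulli-type whitenization equation.

```python
import numpy as np

def gm11(x0):
    x1 = np.cumsum(x0)                        # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background values
    # least-squares fit of the grey differential equation x0(k) = -a z1(k) + b
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    # time response of the whitenization equation, then inverse accumulation
    k = np.arange(len(x0))
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([x0[:1], np.diff(x1_hat)]), a, b

x0 = np.array([10.0, 12.0, 14.4, 17.28, 20.736])  # toy geometric series
x0_hat, a, b = gm11(x0)
```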

  19. A new modification in the exponential rational function method for nonlinear fractional differential equations

    NASA Astrophysics Data System (ADS)

    Ahmed, Naveed; Bibi, Sadaf; Khan, Umar; Mohyud-Din, Syed Tauseef

    2018-02-01

    We have modified the traditional exponential rational function method (ERFM) and have used it to find the exact solutions of two different fractional partial differential equations, one is the time fractional Boussinesq equation and the other is the (2+1)-dimensional time fractional Zoomeron equation. In both the cases it is observed that the modified scheme provides more types of solutions than the traditional one. Moreover, a comparison of the recent solutions is made with some already existing solutions. We can confidently conclude that the modified scheme works better and provides more types of solutions with almost similar computational cost. Our generalized solutions include periodic, soliton-like, singular soliton and kink solutions. A graphical simulation of all types of solutions is provided and the correctness of the solution is verified by direct substitution. The extended version of the solutions is expected to provide more flexibility to scientists working in the relevant field to test their simulation data.

  20. Lattice Boltzmann model for simulation of magnetohydrodynamics

    NASA Technical Reports Server (NTRS)

    Chen, Shiyi; Chen, Hudong; Martinez, Daniel; Matthaeus, William

    1991-01-01

    A numerical method, based on a discrete Boltzmann equation, is presented for solving the equations of magnetohydrodynamics (MHD). The algorithm provides advantages similar to the cellular automaton method in that it is local and easily adapted to parallel computing environments. Because of much lower noise levels and less stringent requirements on lattice size, the method appears to be more competitive with traditional solution methods. Examples show that the model accurately reproduces both linear and nonlinear MHD phenomena.

  1. Standard Errors of Equating Differences: Prior Developments, Extensions, and Simulations

    ERIC Educational Resources Information Center

    Moses, Tim; Zhang, Wenmin

    2011-01-01

    The purpose of this article was to extend the use of standard errors for equated score differences (SEEDs) to traditional equating functions. The SEEDs are described in terms of their original proposal for kernel equating functions and extended so that SEEDs for traditional linear and traditional equipercentile equating functions can be computed.…

  2. The Heat Is on: An Inquiry-Based Investigation for Specific Heat

    ERIC Educational Resources Information Center

    Herrington, Deborah G.

    2011-01-01

    A substantial number of upper-level science students and practicing physical science teachers demonstrate confusion about thermal equilibrium, heat transfer, heat capacity, and specific heat capacity. The traditional method of instruction, which involves learning the related definitions and equations, using equations to solve heat transfer…

  3. Choosing the Best Method to Introduce Accounting.

    ERIC Educational Resources Information Center

    Guerrieri, Donald J.

    1988-01-01

    Of the traditional approaches to teaching accounting--single entry, journal, "T" account, balance sheet, and accounting equation--the author recommends the accounting equation approach. It is the foundation of the double entry system, new material is easy to introduce, and it provides students with a rationale for understanding basic concepts.…

  4. Using a topographic index to distribute variable source area runoff predicted with the SCS curve-number equation

    NASA Astrophysics Data System (ADS)

    Lyon, Steve W.; Walter, M. Todd; Gérard-Marchant, Pierre; Steenhuis, Tammo S.

    2004-10-01

    Because the traditional Soil Conservation Service curve-number (SCS-CN) approach continues to be used ubiquitously in water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed and tested a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Predicting the location of source areas is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point-source pollution. The method presented here used the traditional SCS-CN approach to predict runoff volume and spatial extent of saturated areas and a topographic index, like that used in TOPMODEL, to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was applied to two subwatersheds of the Delaware basin in the Catskill Mountains region of New York State and one watershed in south-eastern Australia to produce runoff-probability maps. Observed saturated area locations in the watersheds agreed with the distributed CN-VSA method. Results showed good agreement with those obtained from the previously validated soil moisture routing (SMR) model. When compared with the traditional SCS-CN method, the distributed CN-VSA method predicted a similar total volume of runoff, but vastly different locations of runoff generation. Thus, the distributed CN-VSA approach provides a physically based method that is simple enough to be incorporated into water quality models, and other tools that currently use the traditional SCS-CN method, while still adhering to the principles of VSA hydrology.
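
The traditional SCS-CN runoff volume that the distributed CN-VSA method starts from is the textbook curve-number equation; a minimal sketch with illustrative rainfall and curve-number values:

```python
def scs_cn_runoff(P, CN, ia_ratio=0.2):
    """Event runoff depth Q (mm) from rainfall P (mm) and curve number CN."""
    S = 25400.0 / CN - 254.0        # potential maximum retention (mm)
    Ia = ia_ratio * S               # initial abstraction
    if P <= Ia:
        return 0.0                  # all rainfall abstracted, no runoff
    return (P - Ia) ** 2 / (P - Ia + S)

q = scs_cn_runoff(P=50.0, CN=75.0)  # ~9.3 mm of runoff
```

The CN-VSA variant keeps this total volume but redistributes where it is generated using a topographic wetness index.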

  5. Numerical solution to generalized Burgers'-Fisher equation using Exp-function method hybridized with heuristic computation.

    PubMed

    Malik, Suheel Abdullah; Qureshi, Ijaz Mansoor; Amir, Muhammad; Malik, Aqdas Naveed; Haq, Ihsanul

    2015-01-01

    In this paper, a new heuristic scheme for the approximate solution of the generalized Burgers'-Fisher equation is proposed. The scheme is based on the hybridization of the Exp-function method with a nature-inspired algorithm. The given nonlinear partial differential equation (NPDE) is converted through substitution into a nonlinear ordinary differential equation (NODE). The travelling wave solution is approximated by the Exp-function method with unknown parameters, which are estimated by transforming the NODE into an equivalent global error minimization problem by using a fitness function. The popular genetic algorithm (GA) is used to solve the minimization problem and to obtain the unknown parameters. The proposed scheme is successfully implemented to solve the generalized Burgers'-Fisher equation. The comparison of numerical results with the exact solutions, and with the solutions obtained using some traditional methods, including the Adomian decomposition method (ADM), homotopy perturbation method (HPM), and optimal homotopy asymptotic method (OHAM), shows that the suggested scheme is fairly accurate and viable for solving such problems.
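
The "global error minimization by GA" step can be sketched generically. The toy below evolves parameter vectors to minimize a stand-in quadratic fitness function, not the actual Exp-function residual of the paper; the population size, mutation scale, and target are illustrative.

```python
import numpy as np

def fitness(p):
    # stand-in residual: minimized at the hypothetical parameters (1, -2)
    return np.sum((p - np.array([1.0, -2.0])) ** 2)

rng = np.random.default_rng(2)
pop = rng.uniform(-5, 5, size=(40, 2))          # random initial population
for _ in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[:20]]      # truncation selection
    # crossover: average random parent pairs, then mutate
    i, j = rng.integers(0, 20, size=(2, 40))
    pop = 0.5 * (parents[i] + parents[j]) + 0.1 * rng.standard_normal((40, 2))
    pop[0] = parents[0]                         # elitism: keep the best unchanged
best = pop[np.argmin([fitness(p) for p in pop])]
```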

  6. Numerical Solution to Generalized Burgers'-Fisher Equation Using Exp-Function Method Hybridized with Heuristic Computation

    PubMed Central

    Malik, Suheel Abdullah; Qureshi, Ijaz Mansoor; Amir, Muhammad; Malik, Aqdas Naveed; Haq, Ihsanul

    2015-01-01

    In this paper, a new heuristic scheme for the approximate solution of the generalized Burgers'-Fisher equation is proposed. The scheme is based on the hybridization of the Exp-function method with a nature-inspired algorithm. The given nonlinear partial differential equation (NPDE) is converted through substitution into a nonlinear ordinary differential equation (NODE). The travelling wave solution is approximated by the Exp-function method with unknown parameters, which are estimated by transforming the NODE into an equivalent global error minimization problem by using a fitness function. The popular genetic algorithm (GA) is used to solve the minimization problem and to obtain the unknown parameters. The proposed scheme is successfully implemented to solve the generalized Burgers'-Fisher equation. The comparison of numerical results with the exact solutions, and with the solutions obtained using some traditional methods, including the Adomian decomposition method (ADM), homotopy perturbation method (HPM), and optimal homotopy asymptotic method (OHAM), shows that the suggested scheme is fairly accurate and viable for solving such problems. PMID:25811858

  7. Are traditional body fat equations and anthropometry valid to estimate body fat in children and adolescents living with HIV?

    PubMed

    Lima, Luiz Rodrigo Augustemak de; Martins, Priscila Custódio; Junior, Carlos Alencar Souza Alves; Castro, João Antônio Chula de; Silva, Diego Augusto Santos; Petroski, Edio Luiz

    The aim of this study was to assess the validity of traditional anthropometric equations and to develop predictive equations of total body and trunk fat for children and adolescents living with HIV based on anthropometric measurements. Forty-eight children and adolescents of both sexes (24 boys) aged 7-17 years, living in Santa Catarina, Brazil, participated in the study. Dual-energy X-ray absorptiometry was used as the reference method to evaluate total body and trunk fat. Height, body weight, circumferences and triceps, subscapular, abdominal and calf skinfolds were measured. The traditional equations of Lohman and Slaughter were used to estimate body fat. Multiple regression models were fitted to predict total body fat (Model 1) and trunk fat (Model 2) using a backward selection procedure. Model 1 had an R² = 0.85 and a standard error of the estimate of 1.43. Model 2 had an R² = 0.80 and a standard error of the estimate of 0.49. The traditional equations of Lohman and Slaughter showed poor performance in estimating body fat in children and adolescents living with HIV. The prediction models using anthropometry provided reliable estimates and can be used by clinicians and healthcare professionals to monitor total body and trunk fat in children and adolescents living with HIV. Copyright © 2017 Sociedade Brasileira de Infectologia. Published by Elsevier Editora Ltda. All rights reserved.

  8. Polynomial mixture method of solving ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Shahrir, Mohammad Shazri; Nallasamy, Kumaresan; Ratnavelu, Kuru; Kamali, M. Z. M.

    2017-11-01

    In this paper, a numerical solution of the fuzzy quadratic Riccati differential equation is estimated using a proposed new approach that iteratively generates the right mixture of polynomials. This mixture provides a generalized formalism of traditional Neural Networks (NN). Previous works have shown reliable results using the Runge-Kutta 4th-order method (RK4), obtained by solving the 1st-order nonlinear ordinary differential equation (ODE) found commonly in the Riccati differential equation. Research has shown improved results relative to the RK4 method. It can be said that the Polynomial Mixture Method (PMM) shows promising results, with the advantage of continuous estimation and improved accuracy over Mabood et al., RK4, Multi-Agent NN and the Neuro Method (NM).
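
The RK4 baseline mentioned above is easy to sketch on a toy Riccati equation y' = 1 + y², y(0) = 0, whose exact solution is y = tan t; the paper's fuzzy quadratic Riccati problem itself is more involved.

```python
def rk4(f, y0, t0, t1, n):
    # classical 4th-order Runge-Kutta integration of y' = f(t, y)
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Riccati-type ODE y' = 1 + y^2 integrated to t = 1 (exact value tan 1)
y1 = rk4(lambda t, y: 1.0 + y * y, 0.0, 0.0, 1.0, 100)
```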

  9. Khater method for the nonlinear Sharma-Tasso-Olver (STO) equation of fractional order

    NASA Astrophysics Data System (ADS)

    Bibi, Sadaf; Mohyud-Din, Syed Tauseef; Khan, Umar; Ahmed, Naveed

    In this work, we have implemented a direct method, known as the Khater method, to establish exact solutions of nonlinear partial differential equations of fractional order. The number of solutions provided by this method is greater than that of other traditional methods. Exact solutions of the nonlinear fractional-order Sharma-Tasso-Olver (STO) equation are expressed in terms of kink, travelling wave, periodic and solitary wave solutions. The modified Riemann-Liouville derivative and the fractional complex transform have been used for compatibility with the fractional-order setting. Solutions have been graphically simulated for understanding the physical aspects and the importance of the method. A comparative discussion between our established results and those obtained by existing methods is also presented. Our results clearly reveal that the proposed method is an effective, powerful and straightforward technique to work out new solutions of various types of differential equations of non-integer order in the fields of applied sciences and engineering.

  10. The Multifaceted Variable Approach: Selection of Method in Solving Simple Linear Equations

    ERIC Educational Resources Information Center

    Tahir, Salma; Cavanagh, Michael

    2010-01-01

    This paper presents a comparison of the solution strategies used by two groups of Year 8 students as they solved linear equations. The experimental group studied algebra following a multifaceted variable approach, while the comparison group used a traditional approach. Students in the experimental group employed different solution strategies,…

  11. An Extension of IRT-Based Equating to the Dichotomous Testlet Response Theory Model

    ERIC Educational Resources Information Center

    Tao, Wei; Cao, Yi

    2016-01-01

    Current procedures for equating number-correct scores using traditional item response theory (IRT) methods assume local independence. However, when tests are constructed using testlets, one concern is the violation of the local item independence assumption. The testlet response theory (TRT) model is one way to accommodate local item dependence.…

  12. Exploring Students' Understanding of Ordinary Differential Equations Using Computer Algebraic System (CAS)

    ERIC Educational Resources Information Center

    Maat, Siti Mistima; Zakaria, Effandi

    2011-01-01

    Ordinary differential equations (ODEs) are one of the important topics in engineering mathematics that lead to the understanding of technical concepts among students. This study was conducted to explore the students' understanding of ODEs when they solve ODE questions using a traditional method as well as a computer algebraic system, particularly…

  13. Intraocular lens power estimation by accurate ray tracing for eyes that underwent previous refractive surgeries

    NASA Astrophysics Data System (ADS)

    Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong

    2015-08-01

    For normal eyes without a history of any ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK-T, Holladay, Haigis and SRK-II, are all relatively accurate. However, for eyes that underwent refractive surgeries such as LASIK, or eyes diagnosed with keratoconus, these equations may produce significant postoperative refractive error and thus poor satisfaction after cataract surgery. Although some methods have been proposed to address this problem, such as the Haigis-L equation[1] or using preoperative data (from before LASIK) to estimate the K value[2], no precise equations are available for these eyes. Here, we introduce a novel intraocular lens power estimation method by accurate ray tracing with the optical design software ZEMAX. Instead of using a traditional regression formula, we adopt the exact measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens power for a patient with keratoconus and another post-LASIK patient agreed very well with their visual outcomes after cataract surgery.
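
As a point of reference for the "traditional regression formula" approach the paper replaces, the classic SRK formula predicts IOL power linearly from axial length and keratometry; the A-constant and biometry values below are illustrative.

```python
def srk_iol_power(A, axial_length_mm, mean_K_diopters):
    """Classic SRK regression: predicted IOL power (D) for emmetropia."""
    return A - 2.5 * axial_length_mm - 0.9 * mean_K_diopters

# hypothetical lens constant and average biometry
p = srk_iol_power(A=118.4, axial_length_mm=23.5, mean_K_diopters=43.5)
```

This regression form is what breaks down after corneal refractive surgery, since the measured K no longer reflects true corneal power, motivating the ray-tracing approach above.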

  14. An enriched finite element method to fractional advection-diffusion equation

    NASA Astrophysics Data System (ADS)

    Luan, Shengzhi; Lian, Yanping; Ying, Yuping; Tang, Shaoqiang; Wagner, Gregory J.; Liu, Wing Kam

    2017-08-01

    In this paper, an enriched finite element method with fractional basis [1, x^α] for spatial fractional partial differential equations is proposed to obtain more stable and accurate numerical solutions. For pure fractional diffusion equation without advection, the enriched Galerkin finite element method formulation is demonstrated to simulate the exact solution successfully without any numerical oscillation, which is advantageous compared to the traditional Galerkin finite element method with integer basis [1, x]. For fractional advection-diffusion equation, the oscillatory behavior becomes complex due to the introduction of the advection term which can be characterized by a fractional element Peclet number. For the purpose of addressing the more complex numerical oscillation, an enriched Petrov-Galerkin finite element method is developed by using a dimensionless fractional stabilization parameter, which is formulated through a minimization of the residual of the nodal solution. The effectiveness and accuracy of the enriched finite element method are demonstrated by a series of numerical examples of fractional diffusion equation and fractional advection-diffusion equation, including both one-dimensional and two-dimensional, steady-state and time-dependent cases.

  15. Determination of the transmission coefficients for quantum structures using FDTD method.

    PubMed

    Peng, Yangyang; Wang, Xiaoying; Sui, Wenquan

    2011-12-01

    The purpose of this work is to develop a simple method to incorporate quantum effects in traditional finite-difference time-domain (FDTD) simulators, which could make it possible to co-simulate systems that include quantum structures and traditional components. In this paper, the tunneling transmission coefficient is calculated by solving the time-domain Schrödinger equation with a developed FDTD technique, called the FDTD-S method. To validate the feasibility of the method, a simple resonant tunneling diode (RTD) structure model has been simulated using the proposed method. The good agreement between the numerical and analytical results proves its accuracy. The effectiveness and accuracy of this approach make it a potential method for the analysis and design of hybrid systems that include quantum structures and traditional components.

  16. The use of spectral methods in bidomain studies.

    PubMed

    Trayanova, N; Pilkington, T

    1992-01-01

    A Fourier transform method is developed for solving the bidomain coupled differential equations governing the intracellular and extracellular potentials on a finite sheet of cardiac cells undergoing stimulation. The spectral formulation converts the system of differential equations into a "diagonal" system of algebraic equations. Solving the algebraic equations directly and taking the inverse transform of the potentials proved numerically less expensive than solving the coupled differential equations by means of traditional numerical techniques, such as finite differences; the comparison between the computer execution times showed that the Fourier transform method was about 40 times faster than the finite difference method. By application of the Fourier transform method, transmembrane potential distributions in the two-dimensional myocardial slice were calculated. For a tissue characterized by a ratio of the intra- to extracellular conductivities that is different in all principal directions, the transmembrane potential distribution exhibits a rather complicated geometrical pattern. The influence of the different anisotropy ratios, the finite tissue size, and the stimuli configuration on the pattern of membrane polarization is investigated.
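    The core trick above, diagonalizing a constant-coefficient differential operator by Fourier transform, can be sketched on a toy problem. The snippet below is a hedged illustration, not the bidomain system itself: it solves the periodic model equation -u'' + u = f with a single division per Fourier mode, which is what "converting the differential equations into a diagonal algebraic system" means in practice.

```python
import numpy as np

# Toy periodic problem  -u'' + u = f.  In Fourier space the operator is
# diagonal: (k^2 + 1) * u_hat = f_hat, so the "solve" is one division per mode.
n = 64
x = 2 * np.pi * np.arange(n) / n
f = 2 * np.cos(x)                      # manufactured so the exact u = cos(x)
k = np.fft.fftfreq(n, d=1.0 / n)       # integer wavenumbers on the periodic grid
u_hat = np.fft.fft(f) / (k**2 + 1.0)   # diagonal algebraic solve
u = np.fft.ifft(u_hat).real
print(np.max(np.abs(u - np.cos(x))))   # spectral accuracy: error near machine eps
```

The same idea applied to the coupled bidomain equations diagonalizes a small block system per wavenumber instead of a scalar.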

  17. Direct simulation of groundwater age

    USGS Publications Warehouse

    Goode, Daniel J.

    1996-01-01

    A new method is proposed to simulate groundwater age directly, by use of an advection-dispersion transport equation with a distributed zero-order source of unit (1) strength, corresponding to the rate of aging. The dependent variable in the governing equation is the mean age, a mass-weighted average age. The governing equation is derived from residence-time-distribution concepts for the case of steady flow. For the more general case of transient flow, a transient governing equation for age is derived from mass-conservation principles applied to conceptual “age mass.” The age mass is the product of the water mass and its age, and age mass is assumed to be conserved during mixing. Boundary conditions include zero age mass flux across all no-flow and inflow boundaries and no age mass dispersive flux across outflow boundaries. For transient-flow conditions, the initial distribution of age must be known. The solution of the governing transport equation yields the spatial distribution of the mean groundwater age and includes diffusion, dispersion, mixing, and exchange processes that typically are considered only through tracer-specific solute transport simulation. Traditional methods have relied on advective transport to predict point values of groundwater travel time and age. The proposed method retains the simplicity and tracer-independence of advection-only models, but incorporates the effects of dispersion and mixing on volume-averaged age. Example simulations of age in two idealized regional aquifer systems, one homogeneous and the other layered, demonstrate the agreement between the proposed method and traditional particle-tracking approaches and illustrate use of the proposed method to determine the effects of diffusion, dispersion, and mixing on groundwater age.
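    As a hedged sketch of the direct-age idea (not Goode's exact discretization; grid and boundary handling below are illustrative), the snippet solves the 1D steady mean-age equation v dA/dx - D d²A/dx² = 1 on a column, with A = 0 at the inflow and zero dispersive age flux at the outflow. Away from the outlet the mean age reduces to the advective travel time x/v, which is the check performed at the end.

```python
import numpy as np

# 1D steady-flow column: mean age A satisfies an advection-dispersion equation
# with a unit zero-order source (the "rate of aging").
n, L = 200, 100.0          # cells, column length [m]
v, D = 1.0, 1e-3           # velocity [m/d], dispersion [m^2/d]
dx = L / n
x = dx * (np.arange(n) + 1)

# Tridiagonal system: upwind advection + centered dispersion, unit source.
a = np.full(n, -v / dx - D / dx**2)    # coefficient of A[i-1]  (A=0 at inflow)
b = np.full(n, v / dx + 2 * D / dx**2) # coefficient of A[i]
c = np.full(n, -D / dx**2)             # coefficient of A[i+1]
r = np.ones(n)                         # unit aging source
b[-1] -= D / dx**2                     # outflow: zero dispersive age flux (A'=0 ghost)

# Thomas algorithm for the tridiagonal solve.
for i in range(1, n):
    w = a[i] / b[i - 1]
    b[i] -= w * c[i - 1]
    r[i] -= w * r[i - 1]
A = np.zeros(n)
A[-1] = r[-1] / b[-1]
for i in range(n - 2, -1, -1):
    A[i] = (r[i] - c[i] * A[i + 1]) / b[i]

# With weak dispersion the mean age approaches the advective travel time x/v.
print(abs(A[n // 2] - x[n // 2] / v))
```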

  18. Insights into the School Mathematics Tradition from Solving Linear Equations

    ERIC Educational Resources Information Center

    Buchbinder, Orly; Chazan, Daniel; Fleming, Elizabeth

    2015-01-01

    In this article, we explore how the solving of linear equations is represented in English-language algebra textbooks from the early nineteenth century, when schooling was becoming institutionalized, and then survey contemporary teachers. In the textbooks, we identify the increasing presence of a prescribed order of steps (a canonical method) for…

  19. Divergence preserving discrete surface integral methods for Maxwell's curl equations using non-orthogonal unstructured grids

    NASA Technical Reports Server (NTRS)

    Madsen, Niel K.

    1992-01-01

    Several new discrete surface integral (DSI) methods for solving Maxwell's equations in the time domain are presented. These methods, which allow the use of general nonorthogonal mixed-polyhedral unstructured grids, are direct generalizations of the canonical staggered-grid finite difference method. These methods are conservative in that they locally preserve divergence or charge. Employing mixed polyhedral cells (hexahedral, tetrahedral, etc.), these methods allow more accurate modeling of non-rectangular structures and objects because the traditional stair-stepped boundary approximations associated with the orthogonal-grid-based finite difference methods can be avoided. Numerical results demonstrating the accuracy of these new methods are presented.

  20. Unsplit complex frequency shifted perfectly matched layer for second-order wave equation using auxiliary differential equations.

    PubMed

    Gao, Yingjie; Zhang, Jinhai; Yao, Zhenxing

    2015-12-01

    The complex frequency shifted perfectly matched layer (CFS-PML) can improve the absorbing performance of PML for nearly grazing incident waves. However, traditional PML and CFS-PML are based on first-order wave equations; thus, they are not suitable for second-order wave equation. In this paper, an implementation of CFS-PML for second-order wave equation is presented using auxiliary differential equations. This method is free of both convolution calculations and third-order temporal derivatives. As an unsplit CFS-PML, it can reduce the nearly grazing incidence. Numerical experiments show that it has better absorption than typical PML implementations based on second-order wave equation.

  1. Advanced Monte Carlo methods for thermal radiation transport

    NASA Astrophysics Data System (ADS)

    Wollaber, Allan B.

    During the past 35 years, the Implicit Monte Carlo (IMC) method proposed by Fleck and Cummings has been the standard Monte Carlo approach to solving the thermal radiative transfer (TRT) equations. However, the IMC equations are known to have accuracy limitations that can produce unphysical solutions. In this thesis, we explicitly provide the IMC equations with a Monte Carlo interpretation by including particle weight as one of its arguments. We also develop and test a stability theory for the 1-D, gray IMC equations applied to a nonlinear problem. We demonstrate that the worst case occurs for 0-D problems, and we extend the results to a stability algorithm that may be used for general linearizations of the TRT equations. We derive gray, Quasidiffusion equations that may be deterministically solved in conjunction with IMC to obtain an inexpensive, accurate estimate of the temperature at the end of the time step. We then define an average temperature T* to evaluate the temperature-dependent problem data in IMC, and we demonstrate that using T* is more accurate than using the (traditional) beginning-of-time-step temperature. We also propose an accuracy enhancement to the IMC equations: the use of a time-dependent "Fleck factor". This Fleck factor can be considered an automatic tuning of the traditionally defined user parameter alpha, and it generally provides more accurate solutions at an increased cost relative to traditional IMC. We also introduce a global weight window that is proportional to the forward scalar intensity calculated by the Quasidiffusion method. This weight window improves the efficiency of the IMC calculation while conserving energy. All of the proposed enhancements are tested in 1-D gray and frequency-dependent problems. These enhancements do not unconditionally eliminate the unphysical behavior that can be seen in IMC calculations; however, for fixed spatial and temporal grids, they suppress it and clearly make the solution more accurate. Overall, the work presented represents first steps along several paths that can be taken to improve Monte Carlo simulations of TRT problems.
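    For context on the baseline that the thesis modifies, the traditional gray Fleck factor has the standard form f = 1/(1 + alpha·beta·c·Δt·sigma); the thesis's contribution is, in part, making this factor time dependent. A minimal sketch with illustrative values (the numbers are assumptions, not the thesis's data):

```python
def fleck_factor(alpha, beta, c, dt, sigma):
    """Traditional gray IMC Fleck factor: f = 1 / (1 + alpha*beta*c*dt*sigma).

    alpha: user time-centering parameter, beta: 4*a*T^3 / (rho*c_v),
    c: speed of light, dt: time step, sigma: absorption opacity.
    """
    return 1.0 / (1.0 + alpha * beta * c * dt * sigma)

# fully implicit centering (alpha = 1), illustrative problem data
print(fleck_factor(1.0, 4.0, 3e10, 1e-10, 100.0))
```

Small f means a large fraction of absorbed energy is effectively re-emitted within the step, which is where the accuracy limitations discussed above enter.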

  2. Boundary element modelling of dynamic behavior of piecewise homogeneous anisotropic elastic solids

    NASA Astrophysics Data System (ADS)

    Igumnov, L. A.; Markov, I. P.; Litvinchuk, S. Yu

    2018-04-01

    A traditional direct boundary integral equations method is applied to solve three-dimensional dynamic problems of piecewise homogeneous linear elastic solids. The materials of the homogeneous parts are considered to be generally anisotropic. The technique used to solve the boundary integral equations is based on the boundary element method applied together with the Radau IIA convolution quadrature method. A numerical example of a suddenly loaded 3D prismatic rod consisting of two subdomains with different anisotropic elastic properties is presented to verify the accuracy of the proposed formulation.

  3. Two modified symplectic partitioned Runge-Kutta methods for solving the elastic wave equation

    NASA Astrophysics Data System (ADS)

    Su, Bo; Tuo, Xianguo; Xu, Ling

    2017-08-01

    Based on a modified strategy, two modified symplectic partitioned Runge-Kutta (PRK) methods are proposed for the temporal discretization of the elastic wave equation. The two symplectic schemes are similar in form but different in nature. After the spatial discretization of the elastic wave equation, the ordinary Hamiltonian formulation for the elastic wave equation is presented. The PRK scheme is then applied for time integration. An additional term associated with spatial discretization is inserted into the different stages of the PRK scheme. Theoretical analyses are conducted to evaluate the numerical dispersion and stability of the two novel PRK methods. A finite difference method is used to approximate the spatial derivatives, since the two schemes are independent of the spatial discretization technique used. The numerical solutions computed by the two new schemes are compared with those computed by a conventional symplectic PRK scheme. The numerical results verify the new methods and are superior to those generated by conventional schemes in seismic wave modeling.
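    A minimal sketch of the method family involved: the classic Störmer-Verlet integrator (the simplest symplectic partitioned Runge-Kutta scheme, not the authors' modified PRK methods) applied to the semi-discrete 1D wave equation u_tt = c²u_xx on a periodic grid. The hallmark checked at the end is the symplectic one: energy error stays bounded over long integrations instead of drifting.

```python
import numpy as np

n, c, dt, steps = 128, 1.0, 1e-3, 5000
dx = 2 * np.pi / n
x = dx * np.arange(n)
q = np.sin(x)                  # displacement
p = np.zeros(n)                # velocity

def accel(q):
    # second-order centered Laplacian on the periodic grid
    return c**2 * (np.roll(q, -1) - 2 * q + np.roll(q, 1)) / dx**2

def energy(q, p):
    dq = (np.roll(q, -1) - q) / dx
    return 0.5 * np.sum(p**2 + c**2 * dq**2) * dx

e0 = energy(q, p)
for _ in range(steps):
    p = p + 0.5 * dt * accel(q)    # half kick
    q = q + dt * p                 # drift
    p = p + 0.5 * dt * accel(q)    # half kick
print(abs(energy(q, p) - e0) / e0) # bounded, O(dt^2) relative energy error
```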

  4. General Rotorcraft Aeromechanical Stability Program (GRASP): Theory manual

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Hopkins, A. Stewart; Kunz, Donald L.; Hinnant, Howard E.

    1990-01-01

    The general rotorcraft aeromechanical stability program (GRASP) was developed to calculate aeroelastic stability for rotorcraft in hovering flight, vertical flight, and ground contact conditions. GRASP is described in terms of its capabilities and its philosophy of modeling. The equations of motion that govern the physical system are described, as well as the analytical approximations used to derive them. The equations include the kinematical equation, the element equations, and the constraint equations. In addition, the solution procedures used by GRASP are described. GRASP is capable of treating the nonlinear static and linearized dynamic behavior of structures represented by arbitrary collections of rigid-body and beam elements. These elements may be connected in an arbitrary fashion, and are permitted to have large relative motions. The main limitation of this analysis is that periodic coefficient effects are not treated, restricting rotorcraft flight conditions to hover, axial flight, and ground contact. Instead of following the methods employed in other rotorcraft programs, GRASP is designed to be a hybrid of the finite-element method and the multibody methods used in spacecraft analysis. GRASP differs from traditional finite-element programs by allowing multiple levels of substructure in which the substructures can move and/or rotate relative to others with no small-angle approximations. This capability facilitates the modeling of rotorcraft structures, including the rotating/nonrotating interface and the details of the blade/root kinematics for various types. GRASP differs from traditional multibody programs by considering aeroelastic effects, including inflow dynamics (simple unsteady aerodynamics) and nonlinear aerodynamic coefficients.

  5. Efficient Iterative Methods Applied to the Solution of Transonic Flows

    NASA Astrophysics Data System (ADS)

    Wissink, Andrew M.; Lyrintzis, Anastasios S.; Chronopoulos, Anthony T.

    1996-02-01

    We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian with preconditioned conjugate gradient-like iterative solvers for solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin) and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. Efficiency of the Newton-iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems.
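    The Newton-iterative idea can be sketched on a toy algebraic system (an illustrative stand-in, not the transonic solver): an exact analytic Jacobian, with GMRES solving the linear system at each Newton step. The 2×2 system and all names below are assumptions for illustration.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def F(x):
    # toy nonlinear system with root x0 = x1 = sqrt(2)
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])

def J(x):
    # exact analytic Jacobian of F
    return np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])

x = np.array([1.0, 2.0])
for _ in range(20):
    dx, info = gmres(J(x), -F(x))    # inner Krylov solve of the Newton system
    x = x + dx
    if np.linalg.norm(F(x)) < 1e-12:
        break
print(x)   # both components converge to sqrt(2)
```

In the paper the inner solver (OSOmin or GMRES) is preconditioned with ILU; that layer is omitted here for brevity.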

  6. A Reconstructed Discontinuous Galerkin Method for the Compressible Navier-Stokes Equations on Arbitrary Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Luo; Luqing Luo; Robert Nourgaliev

    2010-09-01

    A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier–Stokes equations on arbitrary grids. The RDG method, originally developed for the compressible Euler equations, is extended to discretize viscous and heat fluxes in the Navier–Stokes equations using a so-called inter-cell reconstruction, where a smooth solution is locally reconstructed using a least-squares method from the underlying discontinuous DG solution. Similar to the recovery-based DG (rDG) methods, this reconstructed DG method eliminates the introduction of ad hoc penalty or coupling terms commonly found in traditional DG methods. Unlike rDG methods, this RDG method does not need to judiciously choose a proper form of a recovered polynomial, thus is simple, flexible, and robust, and can be used on arbitrary grids. The developed RDG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results indicate that this RDG method is able to deliver the same accuracy as the well-known Bassi–Rebay II scheme, at a half of its computing costs for the discretization of the viscous fluxes in the Navier–Stokes equations, clearly demonstrating its superior performance over the existing DG methods for solving the compressible Navier–Stokes equations.

  7. A Reconstructed Discontinuous Galerkin Method for the Compressible Navier-Stokes Equations on Arbitrary Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Luo; Luqing Luo; Robert Nourgaliev

    2010-01-01

    A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier-Stokes equations on arbitrary grids. The RDG method, originally developed for the compressible Euler equations, is extended to discretize viscous and heat fluxes in the Navier-Stokes equations using a so-called inter-cell reconstruction, where a smooth solution is locally reconstructed using a least-squares method from the underlying discontinuous DG solution. Similar to the recovery-based DG (rDG) methods, this reconstructed DG method eliminates the introduction of ad hoc penalty or coupling terms commonly found in traditional DG methods. Unlike rDG methods, this RDG method does not need to judiciously choose a proper form of a recovered polynomial, thus is simple, flexible, and robust, and can be used on arbitrary grids. The developed RDG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results indicate that this RDG method is able to deliver the same accuracy as the well-known Bassi-Rebay II scheme, at a half of its computing costs for the discretization of the viscous fluxes in the Navier-Stokes equations, clearly demonstrating its superior performance over the existing DG methods for solving the compressible Navier-Stokes equations.

  8. The method of space-time and conservation element and solution element: A new approach for solving the Navier-Stokes and Euler equations

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung

    1995-01-01

    A new numerical framework for solving conservation laws is being developed. This new framework differs substantially in both concept and methodology from the well-established methods, i.e., finite difference, finite volume, finite element, and spectral methods. It is conceptually simple and designed to overcome several key limitations of the above traditional methods. A two-level scheme for solving the convection-diffusion equation is constructed and used to illuminate the major differences between the present method and those previously mentioned. This explicit scheme, referred to as the a-mu scheme, has two independent marching variables.

  9. Immersed boundary method for Boltzmann model kinetic equations

    NASA Astrophysics Data System (ADS)

    Pekardan, Cem; Chigullapalli, Sruti; Sun, Lin; Alexeenko, Alina

    2012-11-01

    Three different immersed boundary method formulations are presented for Boltzmann model kinetic equations such as the Bhatnagar-Gross-Krook (BGK) and ellipsoidal statistical Bhatnagar-Gross-Krook (ESBGK) model equations. A 1D unsteady IBM solution for a moving piston is compared with DSMC results, and 2D quasi-steady microscale gas damping solutions are verified by a conformal finite volume method solver. Transient analysis for a sinusoidally moving beam is also carried out for different pressure conditions (1 atm, 0.1 atm, and 0.01 atm) corresponding to Kn = 0.05, 0.5, and 5. The interrelaxation method (Method 2) is shown to provide faster convergence as compared to the traditional interpolation scheme used in continuum IBM formulations. Unsteady damping in the rarefied regime is characterized by a significant phase lag which is not captured by quasi-steady approximations.

  10. Three-phase Power Flow Calculation of Low Voltage Distribution Network Considering Characteristics of Residents Load

    NASA Astrophysics Data System (ADS)

    Wang, Yaping; Lin, Shunjiang; Yang, Zhibin

    2017-05-01

    In the traditional three-phase power flow calculation of the low voltage distribution network, the load model is described as constant power. Since this model cannot reflect the characteristics of actual loads, the result of the traditional calculation always differs from the actual situation. In this paper, a load model in which dynamic load, represented by air conditioners, is paralleled with static load, represented by lighting, is used to describe the characteristics of residential load, and a three-phase power flow calculation model is proposed. The power flow calculation model includes the power balance equations of the three phases (A, B, C), the current balance equations of the neutral (phase 0), and the torque balance equations of the induction motors in the air conditioners. An alternating iterative algorithm that solves the induction motor torque balance equations together with the nodal balance equations is then proposed to solve the three-phase power flow model. This method is applied to an actual low voltage distribution network of residential load, and calculations for three different operating states of the air conditioners demonstrate the effectiveness of the proposed model and algorithm.

  11. Benchmark solution for the Spencer-Lewis equation of electron transport theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapol, B.D.

    As integrated circuits become smaller, the shielding of these sensitive components against penetrating electrons becomes extremely critical. Monte Carlo methods have traditionally been the method of choice in shielding evaluations primarily because they can incorporate a wide variety of relevant physical processes. Recently, however, as a result of a more accurate numerical representation of the highly forward peaked scattering process, S/sub n/ methods for one-dimensional problems have been shown to be at least as cost-effective in comparison with Monte Carlo methods. With the development of these deterministic methods for electron transport, a need has arisen to assess the accuracy of proposed numerical algorithms and to ensure their proper coding. It is the purpose of this presentation to develop a benchmark to the Spencer-Lewis equation describing the transport of energetic electrons in solids. The solution will take advantage of the correspondence between the Spencer-Lewis equation and the transport equation describing one-group time-dependent neutron transport.

  12. An Alternative Method to Gauss-Jordan Elimination: Minimizing Fraction Arithmetic

    ERIC Educational Resources Information Center

    Smith, Luke; Powell, Joan

    2011-01-01

    When solving systems of equations by using matrices, many teachers present a Gauss-Jordan elimination approach to row reducing matrices that can involve painfully tedious operations with fractions (which I will call the traditional method). In this essay, I present an alternative method to row reduce matrices that does not introduce additional…
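    One way such an alternative can work, sketched here as an assumption (the essay's exact sequence of row operations may differ), is to cross-multiply rows so every elimination step stays in integer arithmetic, dividing only once at the end:

```python
from fractions import Fraction

def fraction_free_solve(M):
    """Solve an augmented integer system [A|b] using integer-only elimination.

    Illustrative sketch: assumes the diagonal pivots stay nonzero
    (no pivoting is performed).
    """
    M = [row[:] for row in M]
    n = len(M)
    for k in range(n):
        for i in range(n):
            if i != k:
                p, q = M[k][k], M[i][k]
                # integer row op: row_i <- p*row_i - q*row_k (no fractions yet)
                M[i] = [p * a - q * b for a, b in zip(M[i], M[k])]
    # a single exact division per unknown at the very end
    return [Fraction(M[i][-1], M[i][i]) for i in range(n)]

# x + 2y = 5, 3x + 4y = 11  ->  x = 1, y = 2
print(fraction_free_solve([[1, 2, 5], [3, 4, 11]]))
```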

  13. A Lagrangian meshfree method applied to linear and nonlinear elasticity.

    PubMed

    Walker, Wade A

    2017-01-01

    The repeated replacement method (RRM) is a Lagrangian meshfree method which we have previously applied to the Euler equations for compressible fluid flow. In this paper we present new enhancements to RRM, and we apply the enhanced method to both linear and nonlinear elasticity. We compare the results of ten test problems to those of analytic solvers, to demonstrate that RRM can successfully simulate these elastic systems without many of the requirements of traditional numerical methods such as numerical derivatives, equation system solvers, or Riemann solvers. We also show the relationship between error and computational effort for RRM on these systems, and compare RRM to other methods to highlight its strengths and weaknesses. And to further explain the two elastic equations used in the paper, we demonstrate the mathematical procedure used to create Riemann and Sedov-Taylor solvers for them, and detail the numerical techniques needed to embody those solvers in code.

  14. A Lagrangian meshfree method applied to linear and nonlinear elasticity

    PubMed Central

    2017-01-01

    The repeated replacement method (RRM) is a Lagrangian meshfree method which we have previously applied to the Euler equations for compressible fluid flow. In this paper we present new enhancements to RRM, and we apply the enhanced method to both linear and nonlinear elasticity. We compare the results of ten test problems to those of analytic solvers, to demonstrate that RRM can successfully simulate these elastic systems without many of the requirements of traditional numerical methods such as numerical derivatives, equation system solvers, or Riemann solvers. We also show the relationship between error and computational effort for RRM on these systems, and compare RRM to other methods to highlight its strengths and weaknesses. And to further explain the two elastic equations used in the paper, we demonstrate the mathematical procedure used to create Riemann and Sedov-Taylor solvers for them, and detail the numerical techniques needed to embody those solvers in code. PMID:29045443

  15. A solution to neural field equations by a recurrent neural network method

    NASA Astrophysics Data System (ADS)

    Alharbi, Abir

    2012-09-01

    Neural field equations (NFE) are used to model the activity of neurons in the brain; they are derived starting from a single-neuron 'integrate-and-fire' model. The neural continuum is spatially discretized for numerical studies, and the governing equations are modeled as a system of ordinary differential equations. In this article the recurrent neural network approach is used to solve this system of ODEs. This consists of a technique developed by combining the standard numerical method of finite differences with the Hopfield neural network. The architecture of the net, energy function, updating equations, and algorithms are developed for the NFE model. A Hopfield neural network is then designed to minimize the energy function modeling the NFE. Results obtained from the Hopfield-finite-differences net show excellent performance in terms of accuracy and speed. The parallel nature of the Hopfield approach may make it easier to implement on fast parallel computers and give it a speed advantage over traditional methods.

  16. Simplified Design Method for Tension Fasteners

    NASA Astrophysics Data System (ADS)

    Olmstead, Jim; Barker, Paul; Vandersluis, Jonathan

    2012-07-01

    The design of tension-fastened joints has traditionally been an iterative tradeoff between separation and strength requirements. This paper presents equations for the maximum external load that a fastened joint can support and the optimal preload to achieve this load. The equations, based on linear joint theory, account for separation and strength safety factors and variations in joint geometry, materials, preload, load-plane factor, and thermal loading. The strength-normalized versions of the equations are applicable to any fastener and can be plotted to create a "Fastener Design Space" (FDS). Any combination of preload and tension that falls within the FDS represents a safe joint design. The equation for the FDS apex represents the optimal preload and load capacity of a set of joints. The method can be used for preliminary design or to evaluate multiple pre-existing joints.
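    A hedged sketch of the linear joint theory behind such a "Fastener Design Space": the bolt carries F_b = F_pre + C·P of an external load P (C is the joint stiffness fraction), and the joint separates when (1 - C)·P exceeds the preload. The safety-factor placement and every number below are illustrative assumptions, not the paper's data.

```python
def max_external_load(F_pre, C, F_allow, sf_sep=1.2, sf_str=1.25):
    """Largest external load P a joint can carry for a given preload F_pre."""
    P_sep = F_pre / ((1.0 - C) * sf_sep)       # separation-limited load
    P_str = (F_allow - F_pre) / (C * sf_str)   # bolt-strength-limited load
    return min(P_sep, P_str)

def optimal_preload(C, F_allow, sf_sep=1.2, sf_str=1.25):
    """Preload at the FDS apex, where the two limits intersect."""
    return F_allow * (1.0 - C) * sf_sep / (C * sf_str + (1.0 - C) * sf_sep)

F_opt = optimal_preload(C=0.25, F_allow=10000.0)
print(F_opt, max_external_load(F_opt, 0.25, 10000.0))
```

Below the apex preload the joint is separation-limited; above it, strength-limited, which is why the apex maximizes the load capacity.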

  17. Efficient solution of the simplified P N equations

    DOE PAGES

    Hamilton, Steven P.; Evans, Thomas M.

    2014-12-23

    We show new solver strategies for the multigroup SPN equations for nuclear reactor analysis. By forming the complete matrix over space, moments, and energy, a robust set of solution strategies may be applied. Power iteration, shifted power iteration, Rayleigh quotient iteration, Arnoldi's method, and a generalized Davidson method, each using algebraic and physics-based multigrid preconditioners, have been compared on the C5G7 MOX test problem as well as an operational PWR model. These results show that the most efficient approach is the generalized Davidson method, which is 30-40 times faster than traditional power iteration and 6-10 times faster than Arnoldi's method.

  18. Vortex methods for separated flows

    NASA Technical Reports Server (NTRS)

    Spalart, Philippe R.

    1988-01-01

    The numerical solution of the Euler or Navier-Stokes equations by Lagrangian vortex methods is discussed. The mathematical background is presented and includes the relationship with traditional point-vortex studies, convergence to smooth solutions of the Euler equations, and the essential differences between two and three-dimensional cases. The difficulties in extending the method to viscous or compressible flows are explained. Two-dimensional flows around bluff bodies are emphasized. Robustness of the method and the assessment of accuracy, vortex-core profiles, time-marching schemes, numerical dissipation, and efficient programming are treated. Operation counts for unbounded and periodic flows are given, and two algorithms designed to speed up the calculations are described.

  19. Factorization and the synthesis of optimal feedback kernels for differential-delay systems

    NASA Technical Reports Server (NTRS)

    Milman, Mark M.; Scheid, Robert E.

    1987-01-01

    A combination of ideas from the theories of operator Riccati equations and Volterra factorizations leads to the derivation of a novel, relatively simple set of hyperbolic equations which characterize the optimal feedback kernel for the finite-time regulator problem for autonomous differential-delay systems. Analysis of these equations elucidates the underlying structure of the feedback kernel and leads to the development of fast and accurate numerical methods for its computation. Unlike traditional formulations based on the operator Riccati equation, the gain is characterized by means of classical solutions of the derived set of equations. This leads to the development of approximation schemes which are analogous to what has been accomplished for systems of ordinary differential equations with given initial conditions.

  20. Boundary regularized integral equation formulation of the Helmholtz equation in acoustics.

    PubMed

    Sun, Qiang; Klaseboer, Evert; Khoo, Boo-Cheong; Chan, Derek Y C

    2015-01-01

    A boundary integral formulation for the solution of the Helmholtz equation is developed in which all traditional singular behaviour in the boundary integrals is removed analytically. The numerical precision of this approach is illustrated with calculation of the pressure field owing to radiating bodies in acoustic wave problems. This method facilitates the use of higher order surface elements to represent boundaries, resulting in a significant reduction in the problem size with improved precision. Problems with extreme geometric aspect ratios can also be handled without diminished precision. When combined with the CHIEF method, uniqueness of the solution of the exterior acoustic problem is assured without the need to solve hypersingular integrals.

  1. Boundary regularized integral equation formulation of the Helmholtz equation in acoustics

    PubMed Central

    Sun, Qiang; Klaseboer, Evert; Khoo, Boo-Cheong; Chan, Derek Y. C.

    2015-01-01

    A boundary integral formulation for the solution of the Helmholtz equation is developed in which all traditional singular behaviour in the boundary integrals is removed analytically. The numerical precision of this approach is illustrated with calculation of the pressure field owing to radiating bodies in acoustic wave problems. This method facilitates the use of higher order surface elements to represent boundaries, resulting in a significant reduction in the problem size with improved precision. Problems with extreme geometric aspect ratios can also be handled without diminished precision. When combined with the CHIEF method, uniqueness of the solution of the exterior acoustic problem is assured without the need to solve hypersingular integrals. PMID:26064591

  2. Review of Railgun Modeling Techniques: The Computation of Railgun Force and Other Key Factors

    NASA Astrophysics Data System (ADS)

    Eckert, Nathan James

    Currently, railgun force modeling either uses the simple "railgun force equation" or finite element methods. It is proposed here that a middle ground exists that does not require the solution of partial differential equations, is more readily implemented than finite element methods, and is more accurate than the traditional force equation. To develop this method, it is necessary to examine the core railgun factors: power supply mechanisms, the distribution of current in the rails and in the projectile which slides between them (called the armature), the magnetic field created by the current flowing through these rails, the inductance gradient (a key factor in simplifying railgun analysis, referred to as L'), the resultant Lorentz force, and the heating which accompanies this action. Common power supply technologies are investigated, and the shape of their current pulses are modeled. The main causes of current concentration are described, and a rudimentary method for computing current distribution in solid rails and a rectangular armature is shown to have promising accuracy with respect to outside finite element results. The magnetic field is modeled with two methods using the Biot-Savart law, and generally good agreement is obtained with respect to finite element methods (5.8% error on average). To get this agreement, a factor of 2 is added to the original formulation after seeing a reliable offset with FEM results. Three inductance gradient calculations are assessed, and though all agree with FEM results, the Kerrisk method and a regression analysis method developed by Murugan et al. (referred to as the LRM here) perform the best. Six railgun force computation methods are investigated, including the traditional railgun force equation, an equation produced by Waindok and Piekielny, and four methods inspired by the work of Xu et al. 
Overall, good agreement between the models and outside data is found, but each model's accuracy varies significantly between comparisons. Lastly, an approximation of the temperature profile in railgun rails originally presented by McCorkle and Bahder is replicated. In total, this work describes railgun technology and moderately complex railgun modeling methods, but is inconclusive about the presence of a middle-ground modeling method.
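    For concreteness, the "traditional railgun force equation" that this review benchmarks against is a one-liner; the inductance gradient and current below are hypothetical example values, not figures from the review:

```python
# Simple railgun force equation: F = 0.5 * L' * I^2, where L' is the
# inductance gradient (H/m) and I is the rail current (A).
def railgun_force(l_prime, current):
    return 0.5 * l_prime * current**2

# Hypothetical example values: L' on the order of 0.5 uH/m, a 1 MA current pulse.
force = railgun_force(0.5e-6, 1.0e6)  # newtons
print(force)
```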

  3. A direct method of extracting surface recombination velocity from an electron beam induced current line scan

    NASA Astrophysics Data System (ADS)

    Ong, Vincent K. S.

    1998-04-01

    The extraction of diffusion length and surface recombination velocity in a semiconductor with the use of an electron beam induced current line scan has traditionally been done by fitting the line scan to complicated theoretical equations. It was recently shown that a much simpler equation is sufficient for the extraction of diffusion length. The linearization coefficient is the only variable that needs to be adjusted in the curve-fitting process. However, complicated equations are still necessary for the extraction of surface recombination velocity. It is shown in this article that it is indeed possible to extract surface recombination velocity with a simple equation, using only one variable, the linearization coefficient. An intuitive explanation of the reasoning behind the method is given. The accuracy of the method was verified with the use of three-dimensional computer simulation, and was found to be even slightly better than that of the best existing method.
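    As a sketch of the classical fitting idea this article simplifies, the exponential decay of a synthetic EBIC line scan far from the junction can be fitted in log space to recover the diffusion length (all numbers below are made up; this is not the article's linearization-coefficient method):

```python
import numpy as np

# Synthetic EBIC line scan: I(x) ~ I0 * exp(-x / L) far from the junction,
# with a true diffusion length of 2 um and 1% multiplicative noise.
rng = np.random.default_rng(0)
L_true = 2.0e-6                       # m (synthetic)
x = np.linspace(1e-6, 10e-6, 50)      # distances from the junction (m)
signal = np.exp(-x / L_true) * (1 + 0.01 * rng.standard_normal(x.size))

# Linear least squares on log(I) = log(I0) - x / L
slope, intercept = np.polyfit(x, np.log(signal), 1)
L_est = -1.0 / slope
print(f"estimated diffusion length: {L_est:.2e} m")
```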

  4. Efficient iterative methods applied to the solution of transonic flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wissink, A.M.; Lyrintzis, A.S.; Chronopoulos, A.T.

    1996-02-01

    We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian, with preconditioned conjugate gradient-like iterative solvers for the solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin) and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. Efficiency of the Newton-iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems. 38 refs., 14 figs., 7 tabs.
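    The Newton-iterative structure described above (an analytic Jacobian with a Krylov solve at each Newton step) can be sketched on a toy 2-D nonlinear system, with SciPy's GMRES standing in for the paper's solvers (an illustration only, not the transonic code):

```python
import numpy as np
from scipy.sparse.linalg import gmres

# Toy nonlinear system F(u) = 0 with analytic Jacobian J(u);
# each Newton step solves the linear system J(u) du = -F(u) with GMRES.
def F(u):
    x, y = u
    return np.array([x**2 + y**2 - 4.0, x - y])

def J(u):
    x, y = u
    return np.array([[2.0 * x, 2.0 * y], [1.0, -1.0]])

u = np.array([1.0, 0.5])
for _ in range(20):
    r = F(u)
    if np.linalg.norm(r) < 1e-12:
        break
    du, info = gmres(J(u), -r, atol=1e-12)
    u = u + du
print(u)  # converges to (sqrt(2), sqrt(2))
```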

  5. Comparison of Fully-Compressible Equation Sets for Atmospheric Dynamics

    NASA Technical Reports Server (NTRS)

    Ahmad, Nashat N.

    2016-01-01

    Traditionally, the equation for the conservation of energy used in atmospheric models is based on potential temperature and is used in place of total energy conservation. This paper compares the application of the two equation sets for both the Euler and the Navier-Stokes solutions using several benchmark test cases. A high-resolution wave-propagation method which accurately takes into account the source term due to gravity is used for computing the non-hydrostatic atmospheric flows. It is demonstrated that there is little to no difference between the results obtained using the two different equation sets for the Euler as well as the Navier-Stokes solutions.

  6. Establish Effective Lower Bounds of Watershed Slope for Traditional Hydrologic Methods

    DOT National Transportation Integrated Search

    2012-06-01

    Equations to estimate timing parameters for a watershed contain watershed slope as a principal parameter, and estimates are usually inversely proportional to topographic slope. Hence, as slope vanishes, the estimates approach infinity. The research...

  7. AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation

    DOE PAGES

    Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; ...

    2016-04-19

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Here, simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.

  10. Numerical simulations of microwave heating of liquids: enhancements using Krylov subspace methods

    NASA Astrophysics Data System (ADS)

    Lollchund, M. R.; Dookhitram, K.; Sunhaloo, M. S.; Boojhawon, R.

    2013-04-01

    In this paper, we compare the performance of three iterative solvers for large sparse linear systems arising in the numerical computation of the incompressible Navier-Stokes (NS) equations. These equations are employed mainly in the simulation of microwave heating of liquids. The emphasis of this work is on the application of Krylov projection techniques such as the Generalized Minimal Residual (GMRES) method to solve the pressure Poisson equations that result from discretisation of the NS equations. The performance of the GMRES method is compared with the traditional Gauss-Seidel (GS) and point successive over-relaxation (PSOR) techniques through their application to simulate the dynamics of water housed inside a vertical cylindrical vessel which is subjected to microwave radiation. It is found that as the mesh size increases, GMRES gives the fastest convergence rate in terms of computational time and number of iterations.
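    The kind of comparison reported above can be illustrated on a small stand-in problem: a 1-D Poisson system solved once with GMRES and once with plain Gauss-Seidel sweeps. This is a hedged sketch; the paper's systems come from the 3-D pressure Poisson equation, not this toy:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# 1-D Poisson system -u'' = f on n interior points (a small stand-in
# for a pressure Poisson equation), Dirichlet boundaries.
n = 50
h = 1.0 / (n + 1)
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2
b = np.ones(n)

# Krylov solve
x_gmres, info = gmres(A, b, atol=1e-10 * np.linalg.norm(b))

# Plain Gauss-Seidel: forward sweeps on the tridiagonal stencil
x = np.zeros(n)
sweeps = 0
target = 1e-10 * np.linalg.norm(b)
while np.linalg.norm(b - A @ x) > target:
    for i in range(n):
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i < n - 1 else 0.0
        x[i] = (b[i] * h**2 + left + right) / 2.0
    sweeps += 1

print(sweeps)  # GS needs thousands of sweeps on this system
```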

  11. Enriched reproducing kernel particle method for fractional advection-diffusion equation

    NASA Astrophysics Data System (ADS)

    Ying, Yuping; Lian, Yanping; Tang, Shaoqiang; Liu, Wing Kam

    2018-06-01

    The reproducing kernel particle method (RKPM) has been efficiently applied to problems with large deformations, high gradients and high modal density. In this paper, it is extended to solve a nonlocal problem modeled by a fractional advection-diffusion equation (FADE), which exhibits a boundary layer with low regularity. We formulate this method based on a moving least-squares approach. By enriching the traditional integer-order basis for RKPM with fractional-order power functions, leading terms of the solution to the FADE can be exactly reproduced, which guarantees a good approximation to the boundary layer. Numerical tests are performed to verify the proposed approach.

  12. Neural network method for lossless two-conductor transmission line equations based on the IELM algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Yunlei; Hou, Muzhou; Luo, Jianshu; Liu, Taohua

    2018-06-01

    With the increasing demands for vast amounts of data and high-speed signal transmission, the use of multi-conductor transmission lines is becoming more common. The impact of transmission lines on signal transmission is thus a key issue affecting the performance of high-speed digital systems. To solve the problem of lossless two-conductor transmission line equations (LTTLEs), a neural network model and algorithm are explored in this paper. By selecting the product of two triangular basis functions as the activation function of the hidden-layer neurons, we can guarantee the separation of time, space, and phase orthogonality. By adding the initial condition to the neural network, an improved extreme learning machine (IELM) algorithm for solving the network weights is obtained. This differs from the traditional approach of converting the initial condition into an iterative constraint condition. Calculation software for solving the LTTLEs based on the IELM algorithm is developed. Numerical experiments show that the results are consistent with those of the traditional method. The proposed neural network algorithm can find the terminal voltage of the transmission line as well as the voltage at any observation point; it is possible to compute the value at any given point by using the neural network model to solve the transmission line equation.
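    The extreme-learning-machine idea underlying the IELM algorithm, random fixed hidden weights with the output weights obtained in one linear least-squares solve, can be sketched on a generic 1-D function fit (this is not the paper's transmission-line model or its triangular basis functions):

```python
import numpy as np

# Extreme learning machine sketch: hidden-layer weights are fixed at random,
# and only the output weights are solved in one shot by least squares
# (no gradient iterations).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)[:, None]
y = np.sin(2 * np.pi * x).ravel()          # stand-in target function

n_hidden = 40
W = rng.normal(size=(1, n_hidden)) * 5.0   # random input weights (fixed)
b = rng.normal(size=n_hidden)              # random biases (fixed)
H = np.tanh(x @ W + b)                     # hidden-layer outputs
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights, linear solve

err = np.max(np.abs(H @ beta - y))
print(f"max training error: {err:.2e}")
```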

  13. A purely Lagrangian method for simulating the shallow water equations on a sphere using smooth particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Capecelatro, Jesse

    2018-03-01

    It has long been suggested that a purely Lagrangian solution to global-scale atmospheric/oceanic flows can potentially outperform traditional Eulerian schemes. Meanwhile, a demonstration of a scalable and practical framework remains elusive. Motivated by recent progress in particle-based methods when applied to convection-dominated flows, this work presents a fully Lagrangian method for solving the inviscid shallow water equations on a rotating sphere in a smooth particle hydrodynamics framework. To avoid singularities at the poles, the governing equations are solved in Cartesian coordinates, augmented with a Lagrange multiplier to ensure that fluid particles are constrained to the surface of the sphere. An underlying grid in spherical coordinates is used to facilitate efficient neighbor detection and parallelization. The method is applied to a suite of canonical test cases, and conservation, accuracy, and parallel performance are assessed.
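    One ingredient of the abstract, keeping Cartesian particles on the sphere, can be illustrated by a projection step after each unconstrained update; this plays the role of the Lagrange multiplier in spirit only and is not the paper's exact scheme:

```python
import numpy as np

# Advance particles in Cartesian coordinates, then enforce |x| = R by
# projecting each particle back onto the sphere.
R = 6.371e6                                   # sphere radius (Earth-like, m)
rng = np.random.default_rng(2)
x = rng.normal(size=(1000, 3))
x *= R / np.linalg.norm(x, axis=1, keepdims=True)   # particles on the sphere

v = rng.normal(size=(1000, 3)) * 10.0         # arbitrary velocities (m/s)
dt = 60.0
x_new = x + dt * v                            # unconstrained Cartesian step
x_new *= R / np.linalg.norm(x_new, axis=1, keepdims=True)  # project back

drift = np.abs(np.linalg.norm(x_new, axis=1) - R).max()
print(drift)  # ~0 after projection
```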

  14. Wavelets and distributed approximating functionals

    NASA Astrophysics Data System (ADS)

    Wei, G. W.; Kouri, D. J.; Hoffman, D. K.

    1998-07-01

    A general procedure is proposed for constructing father and mother wavelets that have excellent time-frequency localization and can be used to generate entire wavelet families for use as wavelet transforms. One interesting feature of our father wavelets (scaling functions) is that they belong to a class of generalized delta sequences, which we refer to as distributed approximating functionals (DAFs). We indicate this by the notation wavelet-DAFs. Correspondingly, the mother wavelets generated from these wavelet-DAFs are appropriately called DAF-wavelets. Wavelet-DAFs can be regarded as providing a pointwise (localized) spectral method, which furnishes a bridge between the traditional global methods and local methods for solving partial differential equations. They are shown to provide extremely accurate numerical solutions for a number of nonlinear partial differential equations, including the Korteweg-de Vries (KdV) equation, for which a previous method has encountered difficulties (J. Comput. Phys. 132 (1997) 233).

  15. Theoretical analysis for double-liquid variable focus lens

    NASA Astrophysics Data System (ADS)

    Peng, Runling; Chen, Jiabi; Zhuang, Songlin

    2007-09-01

    In this paper, various structures for double-liquid variable-focus lenses are introduced. Based on an energy-minimization method, explicit calculations and detailed analyses of an extended Young-type equation are given for double-liquid lenses with a cylindrical electrode. Such an equation is especially applicable to liquid-liquid-solid tri-phase systems, and differs slightly from the traditional Young equation, which was derived for vapor-liquid-solid tri-phase systems. The electrowetting effect caused by an external voltage changes the interface shape between the two liquids as well as the focal length of the lens. Based on the extended Young-type equation, the relationship between the focal length and the external voltage can also be derived. Corresponding equations and simulation results are presented.

  16. Implicit Kalman filtering

    NASA Technical Reports Server (NTRS)

    Skliar, M.; Ramirez, W. F.

    1997-01-01

    For an implicitly defined discrete system, a new algorithm for Kalman filtering is developed and an efficient numerical implementation scheme is proposed. Unlike the traditional explicit approach, the implicit filter can be readily applied to ill-conditioned systems and allows for generalization to descriptor systems. The implementation of the implicit filter depends on the solution of the congruence matrix equation A_1 P_x A_1^T = P_y. We develop a general iterative method for the solution of this equation, and prove necessary and sufficient conditions for convergence. It is shown that when the system matrices of an implicit system are sparse, the implicit Kalman filter requires significantly less computer time and storage to implement as compared to the traditional explicit Kalman filter. Simulation results are presented to illustrate and substantiate the theoretical developments.
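    For intuition about the congruence equation above: when A_1 is square and well-conditioned, it can be solved directly with two triangular-style solves, as this numerical sketch verifies (the paper's iterative method targets the general sparse or ill-conditioned case instead):

```python
import numpy as np

# Congruence matrix equation A1 Px A1^T = Py. For nonsingular A1 the closed
# form is Px = A1^{-1} Py A1^{-T}, computed here with two linear solves.
rng = np.random.default_rng(3)
n = 5
A1 = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned test matrix
M = rng.normal(size=(n, n))
Py = M @ M.T + np.eye(n)                       # symmetric positive definite

inner = np.linalg.solve(A1, Py)                # A1^{-1} Py
Px = np.linalg.solve(A1, inner.T)              # A1^{-1} (A1^{-1} Py)^T = A1^{-1} Py A1^{-T}

residual = np.linalg.norm(A1 @ Px @ A1.T - Py)
print(residual)  # near machine precision
```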

  17. Lagrangian averages, averaged Lagrangians, and the mean effects of fluctuations in fluid dynamics.

    PubMed

    Holm, Darryl D.

    2002-06-01

    We begin by placing the generalized Lagrangian mean (GLM) equations for a compressible adiabatic fluid into the Euler-Poincare (EP) variational framework of fluid dynamics, for an averaged Lagrangian. This is the Lagrangian averaged Euler-Poincare (LAEP) theorem. Next, we derive a set of approximate small amplitude GLM equations (glm equations) at second order in the fluctuating displacement of a Lagrangian trajectory from its mean position. These equations express the linear and nonlinear back-reaction effects on the Eulerian mean fluid quantities by the fluctuating displacements of the Lagrangian trajectories in terms of their Eulerian second moments. The derivation of the glm equations uses the linearized relations between Eulerian and Lagrangian fluctuations, in the tradition of Lagrangian stability analysis for fluids. The glm derivation also uses the method of averaged Lagrangians, in the tradition of wave, mean flow interaction. Next, the new glm EP motion equations for incompressible ideal fluids are compared with the Euler-alpha turbulence closure equations. An alpha model is a GLM (or glm) fluid theory with a Taylor hypothesis closure. Such closures are based on the linearized fluctuation relations that determine the dynamics of the Lagrangian statistical quantities in the Euler-alpha equations. Thus, by using the LAEP theorem, we bridge between the GLM equations and the Euler-alpha closure equations, through the small-amplitude glm approximation in the EP variational framework. We conclude by highlighting a new application of the GLM, glm, and alpha-model results for Lagrangian averaged ideal magnetohydrodynamics. (c) 2002 American Institute of Physics.

  18. Coarse-grained computation for particle coagulation and sintering processes by linking Quadrature Method of Moments with Monte-Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou Yu, E-mail: yzou@Princeton.ED; Kavousanakis, Michail E., E-mail: mkavousa@Princeton.ED; Kevrekidis, Ioannis G., E-mail: yannis@Princeton.ED

    2010-07-20

    The study of particle coagulation and sintering processes is important in a variety of research studies ranging from cell fusion and dust motion to aerosol formation applications. These processes are traditionally simulated using either Monte-Carlo methods or integro-differential equations for particle number density functions. In this paper, we present a computational technique for cases where we believe that accurate closed evolution equations for a finite number of moments of the density function exist in principle, but are not explicitly available. The so-called equation-free computational framework is then employed to numerically obtain the solution of these unavailable closed moment equations by exploiting (through intelligent design of computational experiments) the corresponding fine-scale (here, Monte-Carlo) simulation. We illustrate the use of this method by accelerating the computation of evolving moments of uni- and bivariate particle coagulation and sintering through short simulation bursts of a constant-number Monte-Carlo scheme.

  19. Black hole evolution by spectral methods

    NASA Astrophysics Data System (ADS)

    Kidder, Lawrence E.; Scheel, Mark A.; Teukolsky, Saul A.; Carlson, Eric D.; Cook, Gregory B.

    2000-10-01

    Current methods of evolving a spacetime containing one or more black holes are plagued by instabilities that prohibit long-term evolution. Some of these instabilities may be due to the numerical method used, traditionally finite differencing. In this paper, we explore the use of a pseudospectral collocation (PSC) method for the evolution of a spherically symmetric black hole spacetime in one dimension using a hyperbolic formulation of Einstein's equations. We demonstrate that our PSC method is able to evolve a spherically symmetric black hole spacetime forever without enforcing constraints, even if we add dynamics via a Klein-Gordon scalar field. We find that, in contrast with finite-differencing methods, black hole excision is a trivial operation using PSC applied to a hyperbolic formulation of Einstein's equations. We discuss the extension of this method to three spatial dimensions.

  20. The study of the Boltzmann equation of solid-gas two-phase flow with three-dimensional BGK model

    NASA Astrophysics Data System (ADS)

    Liu, Chang-jiang; Pang, Song; Xu, Qiang; He, Ling; Yang, Shao-peng; Qing, Yun-jie

    2018-06-01

    The motion of many solid-gas two-phase flows can be described by the Boltzmann equation. In order to simplify the Boltzmann equation, the convective-diffusion term is retained and the collision term is replaced by the three-dimensional Bhatnagar-Gross-Krook (BGK) model. The simplified Boltzmann equation is then solved by the homotopy perturbation method (HPM), and its approximate analytical solution is obtained. The analysis proves that the analytical solution satisfies all the constraint conditions, and that its form is in accord with that of the solution obtained by the traditional Chapman-Enskog method, while the solving process of HPM is much simpler and more convenient. This preliminarily shows the effectiveness and speed of HPM in solving the Boltzmann equation. The results obtained herein provide some theoretical basis for further study of the dynamic models of solid-gas two-phase flows, such as the sturzstrom of high-speed long-runout landslides triggered by microseisms and sandstorms driven by strong winds.

  1. An Assessment of the Impact of Implementing Innovative Teaching Methods on Teaching Loads at Golden West College.

    ERIC Educational Resources Information Center

    Parsons, Gary L.

    This study examines the faculty workload policy of a community college that makes extensive use of non-traditional, innovative teaching methods. To measure workload, a mathematical equation whose sum was expressed as 100% was designed to include five factors: instructional hours, number of preparations, weekly student contact hours (WSCH), outside…

  2. Integral-equation based methods for parameter estimation in output pulses of radiation detectors: Application in nuclear medicine and spectroscopy

    NASA Astrophysics Data System (ADS)

    Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar

    2018-04-01

    Model based analysis methods are relatively new approaches for processing the output data of radiation detectors in nuclear medicine imaging and spectroscopy. A class of such methods requires fast algorithms for fitting pulse models to experimental data. In order to apply integral-equation based methods for processing the preamplifier output pulses, this article proposes a fast and simple method for estimating the parameters of the well-known bi-exponential pulse model by solving an integral equation. The proposed method needs samples from only three points of the recorded pulse as well as its first and second order integrals. After optimizing the sampling points, the estimation results were calculated and compared with two traditional integration-based methods. Different noise levels (signal-to-noise ratios from 10 to 3000) were simulated for testing the functionality of the proposed method, then it was applied to a set of experimental pulses. Finally, the effect of quantization noise was assessed by studying different sampling rates. Promising results by the proposed method endorse it for future real-time applications.
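    The bi-exponential pulse model named in the abstract can be illustrated with a generic nonlinear least-squares fit on synthetic data. This sketch uses SciPy's curve_fit, not the authors' three-sample integral-equation estimator, and all numbers (times in microseconds) are made up:

```python
import numpy as np
from scipy.optimize import curve_fit

# Bi-exponential pulse model: a * (exp(-t/tau_fall) - exp(-t/tau_rise)).
def pulse(t, a, tau_rise, tau_fall):
    return a * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))

t = np.linspace(0.0, 10.0, 200)          # microseconds
true_params = (1.0, 0.2, 2.0)            # amplitude, rise and fall constants
rng = np.random.default_rng(4)
y = pulse(t, *true_params) + 0.001 * rng.standard_normal(t.size)

# Generic nonlinear least squares from a rough initial guess.
popt, _ = curve_fit(pulse, t, y, p0=(0.5, 0.1, 1.0))
print(popt)  # approximately (1.0, 0.2, 2.0)
```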

  3. ML 3.0 smoothed aggregation user's guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sala, Marzio; Hu, Jonathan Joseph; Tuminaro, Raymond Stephen

    2004-05-01

    ML is a multigrid preconditioning package intended to solve linear systems of equations Ax = b where A is a user-supplied n x n sparse matrix, b is a user-supplied vector of length n and x is a vector of length n to be computed. ML should be used on large sparse linear systems arising from partial differential equation (PDE) discretizations. While technically any linear system can be considered, ML should be used on linear systems that correspond to things that work well with multigrid methods (e.g. elliptic PDEs). ML can be used as a stand-alone package or to generate preconditioners for a traditional iterative solver package (e.g. Krylov methods). We have supplied support for working with the AZTEC 2.1 and AZTECOO iterative package [15]. However, other solvers can be used by supplying a few functions. This document describes one specific algebraic multigrid approach: smoothed aggregation. This approach is used within several specialized multigrid methods: one for the eddy current formulation for Maxwell's equations, and a multilevel and domain decomposition method for symmetric and non-symmetric systems of equations (like elliptic equations, or compressible and incompressible fluid dynamics problems). Other methods exist within ML but are not described in this document. Examples are given illustrating the problem definition and exercising multigrid options.
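    The usage pattern described, building a preconditioner once and handing it to a Krylov solver, can be sketched with SciPy, where an incomplete-LU factorization stands in for ML's smoothed-aggregation preconditioner (ML itself is a C++ package; this only illustrates the pattern):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# A sparse elliptic-style test system (1-D Laplacian stencil).
n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spilu(A)                                   # build the preconditioner once
M = LinearOperator((n, n), matvec=ilu.solve)     # wrap it for the Krylov solver
x, info = gmres(A, b, M=M, atol=1e-10)
print(info, np.linalg.norm(A @ x - b))           # info == 0 means converged
```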

  4. ML 3.1 smoothed aggregation user's guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sala, Marzio; Hu, Jonathan Joseph; Tuminaro, Raymond Stephen

    2004-10-01

    ML is a multigrid preconditioning package intended to solve linear systems of equations Ax = b where A is a user-supplied n x n sparse matrix, b is a user-supplied vector of length n and x is a vector of length n to be computed. ML should be used on large sparse linear systems arising from partial differential equation (PDE) discretizations. While technically any linear system can be considered, ML should be used on linear systems that correspond to things that work well with multigrid methods (e.g. elliptic PDEs). ML can be used as a stand-alone package or to generate preconditioners for a traditional iterative solver package (e.g. Krylov methods). We have supplied support for working with the Aztec 2.1 and AztecOO iterative package [16]. However, other solvers can be used by supplying a few functions. This document describes one specific algebraic multigrid approach: smoothed aggregation. This approach is used within several specialized multigrid methods: one for the eddy current formulation for Maxwell's equations, and a multilevel and domain decomposition method for symmetric and nonsymmetric systems of equations (like elliptic equations, or compressible and incompressible fluid dynamics problems). Other methods exist within ML but are not described in this document. Examples are given illustrating the problem definition and exercising multigrid options.

  5. Divergence correction schemes in finite difference method for 3D tensor CSAMT in axial anisotropic media

    NASA Astrophysics Data System (ADS)

    Wang, Kunpeng; Tan, Handong; Zhang, Zhiyong; Li, Zhiqiang; Cao, Meng

    2017-05-01

    Resistivity anisotropy and full-tensor controlled-source audio-frequency magnetotellurics (CSAMT) have gradually become hot research topics. However, much of the current anisotropy research for tensor CSAMT focuses only on the one-dimensional (1D) solution. As the subsurface is rarely 1D, it is necessary to study the three-dimensional (3D) model response. The staggered-grid finite difference method is an effective simulation method for 3D electromagnetic forward modelling. Previous studies have suggested using the divergence correction to constrain the iterative process when using a staggered-grid finite difference model, so as to accelerate 3D forward modelling and enhance its computational accuracy. However, the traditional divergence correction method was developed assuming an isotropic medium. This paper improves the traditional isotropic divergence correction method and its derivation to meet the tensor CSAMT requirements for anisotropy, using the volume integral of the divergence equation. This method is more intuitive, enabling a simple derivation of a discrete equation and then calculation of the coefficients related to the anisotropic divergence correction equation. We validate our 3D computational results by comparing them to results computed using an anisotropic, controlled-source 2.5D program. The 3D resistivity anisotropy model allows us to evaluate the consequences of using the divergence correction at different frequencies and for two orthogonal finite-length sources. Our results show that the divergence correction plays an important role in 3D tensor CSAMT resistivity anisotropy research and offers a solid foundation for inversion of CSAMT data collected over an anisotropic body.

  6. Multi-Dimensional Asymptotically Stable 4th Order Accurate Schemes for the Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Abarbanel, Saul; Ditkowski, Adi

    1996-01-01

    An algorithm is presented which solves the multi-dimensional diffusion equation on complex shapes to 4th-order accuracy and is asymptotically stable in time. This bounded-error result is achieved by constructing, on a rectangular grid, a differentiation matrix whose symmetric part is negative definite. The differentiation matrix accounts for the Dirichlet boundary condition by imposing penalty-like terms. Numerical examples in 2-D show that the method is effective even where standard schemes, stable by traditional definitions, fail.

  7. Dissipation-preserving spectral element method for damped seismic wave equations

    NASA Astrophysics Data System (ADS)

    Cai, Wenjun; Zhang, Huai; Wang, Yushun

    2017-12-01

    This article describes the extension of the conformal symplectic method to solve the damped acoustic wave equation and the elastic wave equations in the framework of the spectral element method. The conformal symplectic method is a variation of conventional symplectic methods for treating non-conservative time evolution problems, with superior behavior in long-time stability and dissipation preservation. To reveal the intrinsic dissipative properties of the model equations, we first reformulate the original systems in their equivalent conformal multi-symplectic structures and derive the corresponding conformal symplectic conservation laws. We thereafter separate each system into a conservative Hamiltonian system and a purely dissipative ordinary differential equation system. Based on the splitting methodology, we solve the two subsystems respectively. The dissipative one is cheaply solved by its analytic solution, while for the conservative system we combine a fourth-order symplectic Nyström method in time and the spectral element method in space to cover the circumstances in realistic geological structures involving complex free-surface topography. The Strang composition method is then adopted to concatenate the two parts of the solution and generate the complete conformal symplectic method. A relatively larger Courant number than that of the traditional Newmark scheme is found in the numerical experiments, in conjunction with a spatial sampling of approximately 5 points per wavelength. A benchmark test for the damped acoustic wave equation validates the effectiveness of our proposed method in precisely capturing the dissipation rate. The classical Lamb problem is used to demonstrate the ability to model Rayleigh waves in elastic wave propagation. 
More comprehensive numerical experiments are presented to investigate the long-time simulation, low dispersion and energy conservation properties of the conformal symplectic methods in both the attenuating homogeneous and heterogeneous media.
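The splitting idea described in this record can be sketched on the simplest damped system. The code below (a hedged toy, not the paper's spectral-element code, with assumed frequency and damping values) advances the conservative part of a damped harmonic oscillator with a symplectic velocity-Verlet step, solves the purely dissipative part exactly, and glues them with Strang composition:

```python
import math

# Damped harmonic oscillator: q' = p, p' = -w**2 * q - 2*g*p.
# Split into a conservative Hamiltonian part (symplectic Verlet step)
# and a dissipative part p' = -2*g*p (exact exponential decay),
# composed in Strang (half-dissipation, full Verlet, half-dissipation).
w, g = 2.0, 0.1          # frequency and damping (assumed values)
dt, steps = 0.01, 500
q, p = 1.0, 0.0

for _ in range(steps):
    p *= math.exp(-g * dt)          # half step of exact dissipation
    p -= 0.5 * dt * w**2 * q        # velocity-Verlet (symplectic) step
    q += dt * p
    p -= 0.5 * dt * w**2 * q
    p *= math.exp(-g * dt)          # second half of exact dissipation

# Compare against the exact underdamped solution for q(0)=1, p(0)=0.
t = steps * dt
wd = math.sqrt(w**2 - g**2)
q_exact = math.exp(-g * t) * (math.cos(wd * t) + (g / wd) * math.sin(wd * t))
print(abs(q - q_exact))             # second-order accurate: small error
```

The exact treatment of the dissipative subsystem is what preserves the dissipation rate, mirroring the "cheaply solved by its analytic solution" step in the abstract.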

  8. A Summary of the Space-Time Conservation Element and Solution Element (CESE) Method

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen J.

    2015-01-01

    The space-time Conservation Element and Solution Element (CESE) method for solving conservation laws is examined for its development motivation and design requirements. The characteristics of the resulting scheme are discussed. The discretization of the Euler equations is presented to show readers how to construct a scheme based on the CESE method. The differences and similarities between the CESE method and other traditional methods are discussed. The strengths and weaknesses of the method are also addressed.

  9. Seismic modeling with radial basis function-generated finite differences (RBF-FD) – a simplified treatment of interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Bradley, E-mail: brma7253@colorado.edu; Fornberg, Bengt, E-mail: Fornberg@colorado.edu

    In a previous study of seismic modeling with radial basis function-generated finite differences (RBF-FD), we outlined a numerical method for solving 2-D wave equations in domains with material interfaces between different regions. The method was applicable on a mesh-free set of data nodes. It included all information about interfaces within the weights of the stencils (allowing the use of traditional time integrators), and was shown to solve problems of the 2-D elastic wave equation to 3rd-order accuracy. In the present paper, we discuss a refinement of that method that makes it simpler to implement. It can also improve accuracy for the case of smoothly-variable model parameter values near interfaces. We give several test cases that demonstrate the method solving 2-D elastic wave equation problems to 4th-order accuracy, even in the presence of smoothly-curved interfaces with jump discontinuities in the model parameters.

  10. Seismic modeling with radial basis function-generated finite differences (RBF-FD) - a simplified treatment of interfaces

    NASA Astrophysics Data System (ADS)

    Martin, Bradley; Fornberg, Bengt

    2017-04-01

    In a previous study of seismic modeling with radial basis function-generated finite differences (RBF-FD), we outlined a numerical method for solving 2-D wave equations in domains with material interfaces between different regions. The method was applicable on a mesh-free set of data nodes. It included all information about interfaces within the weights of the stencils (allowing the use of traditional time integrators), and was shown to solve problems of the 2-D elastic wave equation to 3rd-order accuracy. In the present paper, we discuss a refinement of that method that makes it simpler to implement. It can also improve accuracy for the case of smoothly-variable model parameter values near interfaces. We give several test cases that demonstrate the method solving 2-D elastic wave equation problems to 4th-order accuracy, even in the presence of smoothly-curved interfaces with jump discontinuities in the model parameters.
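The core of RBF-FD is computing differentiation weights on scattered nodes. The sketch below is a generic, bare-bones version of that technique in 1-D (not the authors' interface treatment): polyharmonic-spline RBFs augmented with polynomials yield weights w such that sum_j w_j*f(x_j) approximates f''(0):

```python
import numpy as np

# RBF-FD weights for d^2/dx^2 at x = 0 on a scattered (mesh-free)
# stencil, using phi(r) = r**3 plus polynomial terms up to x**3.
xj = np.array([-0.2, -0.05, 0.0, 0.1, 0.25])      # scattered stencil nodes
n = len(xj)

A = np.abs(xj[:, None] - xj[None, :]) ** 3        # phi(|x_i - x_j|)
P = np.column_stack([xj**k for k in range(4)])    # 1, x, x^2, x^3
M = np.block([[A, P], [P.T, np.zeros((4, 4))]])   # saddle-point system

# Right-hand side: d^2/dx^2 at x = 0 applied to each basis function.
rhs = np.concatenate([6.0 * np.abs(xj),           # (|x - x_j|^3)'' = 6|x - x_j|
                      [0.0, 0.0, 2.0, 0.0]])      # second derivatives of 1, x, x^2, x^3
w = np.linalg.solve(M, rhs)[:n]

print(w @ xj**2)        # = 2: polynomials are differentiated exactly
print(w @ np.exp(xj))   # close to exp''(0) = 1
```

The polynomial constraints guarantee exactness on low-degree polynomials; the RBF part supplies the extra degrees of freedom on irregular node sets.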

  11. An efficient method for model refinement in diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Zirak, A. R.; Khademi, M.

    2007-11-01

    Diffuse optical tomography (DOT) is a non-linear, ill-posed boundary-value and optimization problem that necessitates regularization. Bayesian methods are also suitable because the measurement data are sparse and correlated. In such problems, which are solved with iterative methods, the solution space must be kept small for stabilization and better convergence. These constraints lead to an extensive and overdetermined system of equations, so model-retrieval criteria, especially total least squares (TLS), must be used to refine the model error. However, TLS is limited to linear systems, which is not the case when applying traditional Bayesian methods. This paper presents an efficient method for model refinement using regularized total least squares (RTLS) applied to the linearized DOT problem, with a maximum a posteriori (MAP) estimator and a Tikhonov regularizer. This is done by combining Bayesian and regularization tools as preconditioner matrices, applying them to the equations, and then applying RTLS to the resulting linear equations. The preconditioning matrices are guided by patient-specific information as well as a priori knowledge gained from the training set. Simulation results illustrate that the proposed method improves the image reconstruction performance and localizes the abnormality well.

  12. Analysis of a renormalization group method and normal form theory for perturbed ordinary differential equations

    NASA Astrophysics Data System (ADS)

    DeVille, R. E. Lee; Harkin, Anthony; Holzer, Matt; Josić, Krešimir; Kaper, Tasso J.

    2008-06-01

    For singular perturbation problems, the renormalization group (RG) method of Chen, Goldenfeld, and Oono [Phys. Rev. E. 49 (1994) 4502-4511] has been shown to be an effective general approach for deriving reduced or amplitude equations that govern the long time dynamics of the system. It has been applied to a variety of problems traditionally analyzed using disparate methods, including the method of multiple scales, boundary layer theory, the WKBJ method, the Poincaré-Lindstedt method, the method of averaging, and others. In this article, we show how the RG method may be used to generate normal forms for large classes of ordinary differential equations. First, we apply the RG method to systems with autonomous perturbations, and we show that the reduced or amplitude equations generated by the RG method are equivalent to the classical Poincaré-Birkhoff normal forms for these systems up to and including terms of O(ε²), where ε is the perturbation parameter. This analysis establishes our approach and generalizes to higher order. Second, we apply the RG method to systems with nonautonomous perturbations, and we show that the reduced or amplitude equations so generated constitute time-asymptotic normal forms, which are based on KBM averages. Moreover, for both classes of problems, we show that the main coordinate changes are equivalent, up to translations between the spaces in which they are defined. In this manner, our results show that the RG method offers a new approach for deriving normal forms for nonautonomous systems, and it offers advantages since one can typically more readily identify resonant terms from naive perturbation expansions than from the nonautonomous vector fields themselves. Finally, we establish how well the solution to the RG equations approximates the solution of the original equations on time scales of O(1/ε).

  13. A new method for reconstruction of solar irradiance

    NASA Astrophysics Data System (ADS)

    Privalsky, Victor

    2018-07-01

    The purpose of this research is to show how time series should be reconstructed, using an example with data on the total solar irradiance (TSI) of the Earth and on sunspot numbers (SSN) since 1749. The traditional approach through regression equation(s) is designed for time-invariant vectors of random variables and is not applicable to time series, which are random functions of time. The autoregressive reconstruction (ARR) method suggested here requires fitting a multivariate stochastic difference equation to the target/proxy time series. The reconstruction is done through the scalar equation for the target time series with the white noise term excluded. The time series approach is shown to provide a better reconstruction of TSI than the correlation/regression method. A reconstruction criterion is introduced which allows one to define in advance the achievable level of success in the reconstruction. The conclusion is that time series, including the total solar irradiance, cannot be reconstructed properly if the data are not treated as sample records of random processes and analyzed in both the time and frequency domains.
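The reconstruction scheme in this record can be illustrated on synthetic data. The toy below (not the author's code; coefficients and noise level are assumed) fits a stochastic difference equation x[t] = a*x[t-1] + b*y[t] to a target/proxy pair by least squares, then reconstructs the target by running the fitted equation with the white-noise term excluded:

```python
import numpy as np

# Synthetic target x driven by its own past and a proxy y, plus noise.
rng = np.random.default_rng(0)
n = 2000
y = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + 0.5 * y[t] + 0.1 * rng.standard_normal()

# Least-squares fit of the difference-equation coefficients.
A = np.column_stack([x[:-1], y[1:]])
coef, *_ = np.linalg.lstsq(A, x[1:], rcond=None)
a_hat, b_hat = coef

# Reconstruction: run the fitted equation without the noise term.
x_rec = np.zeros(n)
for t in range(1, n):
    x_rec[t] = a_hat * x_rec[t - 1] + b_hat * y[t]

print(a_hat, b_hat)  # close to the true coefficients (0.7, 0.5)
```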

  14. Strain gage selection in loads equations using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Traditionally, structural loads are measured using strain gages. A loads calibration test must be done before loads can be accurately measured. In one measurement method, a series of point loads is applied to the structure, and loads equations are derived via the least-squares curve-fitting algorithm using the strain gage responses to the applied point loads. However, many research structures are highly instrumented with strain gages, and the number and selection of gages used in a loads equation can be problematic. This paper presents an improved technique that uses a genetic algorithm to choose the strain gages used in the loads equations. Also presented is a comparison of the genetic algorithm's performance with the current T-value technique and a variant known as the Best Step-down technique. Examples are shown using aerospace vehicle wings of high and low aspect ratio. In addition, a significant limitation in the current methods is revealed. The genetic algorithm arrived at a comparable or superior set of gages with significantly less human effort, and could be applied in instances when the current methods could not.
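The gage-selection problem lends itself to a compact genetic-algorithm sketch. The code below is a minimal, hypothetical version (synthetic calibration data, not the paper's implementation): it picks k of the available channels so the least-squares loads equation has the smallest residual:

```python
import numpy as np

# Synthetic calibration: the load is a linear combination of 4 of the
# 12 simulated gage channels; the GA must rediscover which 4.
rng = np.random.default_rng(1)
n_cases, n_gages, k = 60, 12, 4
R = rng.standard_normal((n_cases, n_gages))               # gage responses
load = R[:, [0, 3, 7, 9]] @ np.array([2.0, -1.0, 0.5, 1.5])

def fitness(cols):
    # residual of the least-squares loads equation using only these gages
    coef, *_ = np.linalg.lstsq(R[:, cols], load, rcond=None)
    return np.linalg.norm(R[:, cols] @ coef - load)

pop = [sorted(rng.choice(n_gages, k, replace=False)) for _ in range(30)]
for _ in range(40):
    pop.sort(key=fitness)
    survivors = pop[:10]
    children = []
    for _ in range(20):
        a, b = rng.choice(10, 2, replace=False)
        genes = set(survivors[a]) | set(survivors[b])      # crossover
        child = list(rng.choice(sorted(genes), k, replace=False))
        if rng.random() < 0.3:                             # mutation
            child[rng.integers(k)] = rng.integers(n_gages)
        if len(set(child)) == k:
            children.append(sorted(set(child)))
        else:  # mutation duplicated a gage; replace with a fresh subset
            children.append(sorted(rng.choice(n_gages, k, replace=False)))
    pop = survivors + children

best = min(pop, key=fitness)
print(best, fitness(best))  # with luck, the zero-residual subset [0, 3, 7, 9]
```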

  15. Proposed solution methodology for the dynamically coupled nonlinear geared rotor mechanics equations

    NASA Technical Reports Server (NTRS)

    Mitchell, L. D.; David, J. W.

    1983-01-01

    The equations which describe the three-dimensional motion of an unbalanced rigid disk in a shaft system are nonlinear and contain dynamic-coupling terms. Traditionally, investigators have used an order analysis to justify ignoring the nonlinear terms in the equations of motion, producing a set of linear equations. This paper will show that, when gears are included in such a rotor system, the nonlinear dynamic-coupling terms are potentially as large as the linear terms. Because of this, one must attempt to solve the nonlinear rotor mechanics equations. A solution methodology is investigated to obtain approximate steady-state solutions to these equations. As an example of the use of the technique, a simpler set of equations is solved and the results compared to numerical simulations. These equations represent the forced, steady-state response of a spring-supported pendulum. These equations were chosen because they contain the type of nonlinear terms found in the dynamically-coupled nonlinear rotor equations. The numerical simulations indicate this method is reasonably accurate even when the nonlinearities are large.

  16. Three Dimensional Time Dependent Stochastic Method for Cosmic-ray Modulation

    NASA Astrophysics Data System (ADS)

    Pei, C.; Bieber, J. W.; Burger, R. A.; Clem, J. M.

    2009-12-01

    A proper understanding of the different behavior of galactic cosmic-ray intensities in different solar cycle phases requires solving the modulation equation with time dependence. We present a detailed description of our newly developed stochastic approach for cosmic-ray modulation, which we believe is the first attempt to solve the time-dependent Parker equation in 3D, evolving from our 3D steady-state stochastic approach, which has been benchmarked extensively against the finite difference method. Our 3D stochastic method differs from other stochastic approaches in the literature (Ball et al. 2005; Miyake et al. 2005; Florinski 2008) in several ways. For example, we employ spherical coordinates, which makes the code much more efficient by reducing coordinate transformations. Moreover, our stochastic differential equations are different from others because our map from Parker's original equation to the Fokker-Planck equation extends the method used by Jokipii and Levy (1977), although all 3D stochastic methods are essentially based on the Itô formula. The advantage of the stochastic approach is that it also gives probability information on the travel times and path lengths of cosmic rays, in addition to the intensities. We show that excellent agreement exists between solutions obtained by our steady-state stochastic method and by the traditional finite difference method. We also show time-dependent solutions for an idealized heliosphere which has a Parker magnetic field, a planar current sheet, and a simple initial condition.
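The stochastic (Itô SDE) approach to a transport equation can be sketched in the simplest setting (not the full 3-D Parker equation): the heat equation u_t = D*u_xx has the probabilistic representation u(x,t) = E[u0(x + sqrt(2D)*W_t)], so averaging the initial condition over random walkers solves the PDE:

```python
import numpy as np

# Monte Carlo solution of u_t = D*u_xx via stochastic paths, compared
# against the exact Gaussian-convolution solution at one point.
rng = np.random.default_rng(2)
D, t, x0 = 0.5, 1.0, 0.3
u0 = lambda x: np.exp(-x**2)          # Gaussian initial condition

# Sample endpoints of Brownian paths started at x0 and average u0.
paths = x0 + np.sqrt(2.0 * D * t) * rng.standard_normal(200_000)
u_mc = u0(paths).mean()

# Exact solution: convolution of the two Gaussians (variances 1/2 and 2Dt).
s2 = 0.5 + 2.0 * D * t
u_exact = np.sqrt(0.5 / s2) * np.exp(-x0**2 / (2.0 * s2))
print(u_mc, u_exact)                  # agree to Monte Carlo accuracy
```

As the abstract notes, a bonus of this formulation is that the sampled paths themselves carry physical information (travel times, path lengths) that a grid solver does not provide.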

  17. Comparisons of estimates of annual exceedance-probability discharges for small drainage basins in Iowa, based on data through water year 2013.

    DOT National Transportation Integrated Search

    2015-01-01

    Traditionally, the Iowa Department of Transportation : has used the Iowa Runoff Chart and single-variable regional-regression equations (RREs) from a U.S. Geological Survey : report (published in 1987) as the primary methods to estimate : annual exce...

  18. Comparisons of estimates of annual exceedance-probability discharges for small drainage basins in Iowa, based on data through water year 2013 : [summary].

    DOT National Transportation Integrated Search

    2015-01-01

    Traditionally, the Iowa DOT has used the Iowa Runoff Chart and single-variable regional regression equations (RREs) from a USGS report : (published in 1987) as the primary methods to estimate annual exceedance-probability discharge : (AEPD) for small...

  19. Variance Estimation Using Replication Methods in Structural Equation Modeling with Complex Sample Data

    ERIC Educational Resources Information Center

    Stapleton, Laura M.

    2008-01-01

    This article discusses replication sampling variance estimation techniques that are often applied in analyses using data from complex sampling designs: jackknife repeated replication, balanced repeated replication, and bootstrapping. These techniques are used with traditional analyses such as regression, but are currently not used with structural…

  20. Element-by-element Solution Procedures for Nonlinear Structural Analysis

    NASA Technical Reports Server (NTRS)

    Hughes, T. J. R.; Winget, J. M.; Levit, I.

    1984-01-01

    Element-by-element approximate factorization procedures are proposed for solving the large finite element equation systems which arise in nonlinear structural mechanics. Architectural and data base advantages of the present algorithms over traditional direct elimination schemes are noted. Results of calculations suggest considerable potential for the methods described.

  1. A comparison of numerical solutions of partial differential equations with probabilistic and possibilistic parameters for the quantification of uncertainty in subsurface solute transport.

    PubMed

    Zhang, Kejiang; Achari, Gopal; Li, Hua

    2009-11-03

    Traditionally, uncertainty in parameters is represented by probabilistic distributions and incorporated into groundwater flow and contaminant transport models. With the advent of newer uncertainty theories, it is now understood that stochastic methods cannot properly represent non-random uncertainties. In the groundwater flow and contaminant transport equations, uncertainty in some parameters may be random, whereas that of others may be non-random. The objective of this paper is to develop a fuzzy-stochastic partial differential equation (FSPDE) model to simulate conditions where both random and non-random uncertainties are involved in groundwater flow and solute transport. Three potential solution techniques, namely (a) transforming a probability distribution to a possibility distribution (Method I), so that the FSPDE becomes a fuzzy partial differential equation (FPDE); (b) transforming a possibility distribution to a probability distribution (Method II), so that the FSPDE becomes a stochastic partial differential equation (SPDE); and (c) combining Monte Carlo methods with FPDE solution techniques (Method III), are proposed and compared. The effects of these three methods on the predictive results are investigated in two case studies. The results show that the predictions obtained from Method II are a specific case of those obtained from Method I. When an exact probabilistic result is needed, Method II is suggested. As the loss or gain of information during a probability-possibility (or vice versa) transformation cannot be quantified, its influence on the predictive results is not known. Thus, Method III should probably be preferred for risk assessments.

  2. Test Score Equating Using a Mini-Version Anchor and a Midi Anchor: A Case Study Using SAT[R] Data

    ERIC Educational Resources Information Center

    Liu, Jinghua; Sinharay, Sandip; Holland, Paul W.; Curley, Edward; Feigenbaum, Miriam

    2011-01-01

    This study explores an anchor that is different from the traditional miniature anchor in test score equating. In contrast to a traditional "mini" anchor that has the same spread of item difficulties as the tests to be equated, the studied anchor, referred to as a "midi" anchor (Sinharay & Holland), has a smaller spread of…

  3. An automatic step adjustment method for average power analysis technique used in fiber amplifiers

    NASA Astrophysics Data System (ADS)

    Liu, Xue-Ming

    2006-04-01

    An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two unique merits, higher-order accuracy and an ASA mechanism, so that it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared to the APA technique, the proposed method increases the computing speed by more than a hundredfold for the same error. By computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude for the same number of amplifying sections. The proposed method has the capacity to rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers.
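The general idea of automatic step adjustment can be sketched independently of the amplifier equations. The toy below (hypothetical, not the paper's ASA scheme) integrates dy/dt = -y with classical RK4, estimates the local error by step doubling, and grows or shrinks the step to hold the error near a tolerance:

```python
import math

def rk4(f, y, h):
    # one classical fourth-order Runge-Kutta step
    k1 = f(y); k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2); k4 = f(y + h * k3)
    return y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda y: -y
y, t, h, tol = 1.0, 0.0, 0.1, 1e-8
n_steps = 0
while t < 5.0:
    h = min(h, 5.0 - t)                 # do not overshoot the endpoint
    big = rk4(f, y, h)                  # one full step
    small = rk4(f, rk4(f, y, 0.5 * h), 0.5 * h)   # two half steps
    err = abs(big - small)              # local error estimate
    if err < tol:                       # accept, maybe enlarge the step
        y, t = small, t + h
        n_steps += 1
        h *= 1.5 if err < tol / 10 else 1.0
    else:                               # reject and retry with smaller step
        h *= 0.5
print(y, math.exp(-5.0), n_steps)       # y close to e^-5 in few adaptive steps
```

The accept/reject logic is the "automatic" part: the step size adapts to the local behavior of the solution rather than being fixed in advance.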

  4. Rapid analysis of scattering from periodic dielectric structures using accelerated Cartesian expansions.

    PubMed

    Baczewski, Andrew D; Miller, Nicholas C; Shanker, Balasubramaniam

    2012-04-01

    The analysis of fields in periodic dielectric structures arises in numerous applications of recent interest, ranging from photonic bandgap structures and plasmonically active nanostructures to metamaterials. To achieve an accurate representation of the fields in these structures using numerical methods, dense spatial discretization is required. This, in turn, affects the cost of analysis, particularly for integral-equation-based methods, for which traditional iterative methods require O(N²) operations, N being the number of spatial degrees of freedom. In this paper, we introduce a method for the rapid solution of volumetric electric field integral equations used in the analysis of doubly periodic dielectric structures. The crux of our method is the accelerated Cartesian expansion algorithm, which is used to evaluate the requisite potentials at O(N) cost. Results are provided that corroborate our claims of acceleration without compromising accuracy, as well as the application of our method to a number of compelling photonics applications.

  5. A New Ghost Cell/Level Set Method for Moving Boundary Problems: Application to Tumor Growth

    PubMed Central

    Macklin, Paul

    2011-01-01

    In this paper, we present a ghost cell/level set method for the evolution of interfaces whose normal velocity depends upon the solutions of linear and nonlinear quasi-steady reaction-diffusion equations with curvature-dependent boundary conditions. Our technique includes a ghost cell method that accurately discretizes normal derivative jump boundary conditions without smearing jumps in the tangential derivative; a new iterative method for solving linear and nonlinear quasi-steady reaction-diffusion equations; an adaptive discretization to compute the curvature and normal vectors; and a new discrete approximation to the Heaviside function. We present numerical examples that demonstrate better than 1.5-order convergence for problems where traditional ghost cell methods either fail to converge or attain at best sub-linear accuracy. We apply our techniques to a model of tumor growth in complex, heterogeneous tissues that consists of a nonlinear nutrient equation and a pressure equation with geometry-dependent jump boundary conditions. We simulate the growth of glioblastoma (an aggressive brain tumor) into a large, 1 cm square of brain tissue that includes heterogeneous nutrient delivery and varied biomechanical characteristics (white matter, gray matter, cerebrospinal fluid, and bone), and we observe growth morphologies that are highly dependent upon the variations of the tissue characteristics, an effect observed in real tumor growth. PMID:21331304

  6. A reference equation for maximal aerobic power for treadmill and cycle ergometer exercise testing: Analysis from the FRIEND registry.

    PubMed

    de Souza E Silva, Christina G; Kaminsky, Leonard A; Arena, Ross; Christle, Jeffrey W; Araújo, Claudio Gil S; Lima, Ricardo M; Ashley, Euan A; Myers, Jonathan

    2018-05-01

    Background: Maximal oxygen uptake (VO2max) is a powerful predictor of health outcomes. Valid and portable reference values are integral to interpreting measured VO2max; however, available reference standards lack validation and are specific to exercise mode. This study was undertaken to develop and validate a single equation for normal standards for VO2max for the treadmill or cycle ergometer in men and women. Methods: Healthy individuals (N = 10,881; 67.8% men, 20-85 years) who performed a maximal cardiopulmonary exercise test on either a treadmill or a cycle ergometer were studied. Of these, 7617 and 3264 individuals were randomly selected for development and validation of the equation, respectively. A Brazilian sample (1619 individuals) constituted a second validation cohort. The prediction equation was determined using multiple regression analysis, and comparisons were made with the widely-used Wasserman and European equations. Results: Age, sex, weight, height and exercise mode were significant predictors of VO2max. The regression equation was: VO2max (ml·kg⁻¹·min⁻¹) = 45.2 - 0.35*Age - 10.9*Sex (male = 1; female = 2) - 0.15*Weight (pounds) + 0.68*Height (inches) - 0.46*Exercise Mode (treadmill = 1; bike = 2) (R = 0.79, R² = 0.62, standard error of the estimate = 6.6 ml·kg⁻¹·min⁻¹). Percentage predicted VO2max for the US and Brazilian validation cohorts were 102.8% and 95.8%, respectively. The new equation performed better than traditional equations, particularly among women and individuals ≥60 years old. Conclusion: A combined equation was developed for normal standards for VO2max for different exercise modes derived from a US national registry. The equation provided a lower average error between measured and predicted VO2max than traditional equations even when applied to an independent cohort. Additional studies are needed to determine its portability.
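For readers who want to apply the quoted regression equation, a direct transcription follows (the example inputs are illustrative, not from the study):

```python
# FRIEND-registry prediction equation as quoted in the abstract.
def vo2max_pred(age, sex, weight_lb, height_in, mode):
    """sex: 1 = male, 2 = female; mode: 1 = treadmill, 2 = cycle."""
    return (45.2 - 0.35 * age - 10.9 * sex
            - 0.15 * weight_lb + 0.68 * height_in - 0.46 * mode)

# Example: 40-year-old man, 170 lb, 70 in, treadmill test.
print(vo2max_pred(40, 1, 170, 70, 1))  # about 41.9 ml/kg/min
```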

  7. Prediction of Unsteady Flows in Turbomachinery Using the Linearized Euler Equations on Deforming Grids

    NASA Technical Reports Server (NTRS)

    Clark, William S.; Hall, Kenneth C.

    1994-01-01

    A linearized Euler solver for calculating unsteady flows in turbomachinery blade rows due to both incident gusts and blade motion is presented. The model accounts for blade loading, blade geometry, shock motion, and wake motion. Assuming that the unsteadiness in the flow is small relative to the nonlinear mean solution, the unsteady Euler equations can be linearized about the mean flow. This yields a set of linear variable coefficient equations that describe the small amplitude harmonic motion of the fluid. These linear equations are then discretized on a computational grid and solved using standard numerical techniques. For transonic flows, however, one must use a linear discretization which is a conservative linearization of the non-linear discretized Euler equations to ensure that shock impulse loads are accurately captured. Other important features of this analysis include a continuously deforming grid which eliminates extrapolation errors and hence, increases accuracy, and a new numerically exact, nonreflecting far-field boundary condition treatment based on an eigenanalysis of the discretized equations. Computational results are presented which demonstrate the computational accuracy and efficiency of the method and demonstrate the effectiveness of the deforming grid, far-field nonreflecting boundary conditions, and shock capturing techniques. A comparison of the present unsteady flow predictions to other numerical, semi-analytical, and experimental methods shows excellent agreement. In addition, the linearized Euler method presented requires one or two orders-of-magnitude less computational time than traditional time marching techniques making the present method a viable design tool for aeroelastic analyses.

  8. A collocation-Galerkin finite element model of cardiac action potential propagation.

    PubMed

    Rogers, J M; McCulloch, A D

    1994-08-01

    A new computational method was developed for modeling the effects of the geometric complexity, nonuniform muscle fiber orientation, and material inhomogeneity of the ventricular wall on cardiac impulse propagation. The method was used to solve a modification to the FitzHugh-Nagumo system of equations. The geometry, local muscle fiber orientation, and material parameters of the domain were defined using linear Lagrange or cubic Hermite finite element interpolation. Spatial variations of time-dependent excitation and recovery variables were approximated using cubic Hermite finite element interpolation, and the governing finite element equations were assembled using the collocation method. To overcome the deficiencies of conventional collocation methods on irregular domains, Galerkin equations for the no-flux boundary conditions were used instead of collocation equations for the boundary degrees-of-freedom. The resulting system was evolved using an adaptive Runge-Kutta method. Converged two-dimensional simulations of normal propagation showed that this method requires less CPU time than a traditional finite difference discretization. The model also reproduced several other physiologic phenomena known to be important in arrhythmogenesis including: Wenckebach periodicity, slowed propagation and unidirectional block due to wavefront curvature, reentry around a fixed obstacle, and spiral wave reentry. In a new result, we observed wavespeed variations and block due to nonuniform muscle fiber orientation. The findings suggest that the finite element method is suitable for studying normal and pathological cardiac activation and has significant advantages over existing techniques.

  9. An extension of stochastic hierarchy equations of motion for the equilibrium correlation functions

    NASA Astrophysics Data System (ADS)

    Ke, Yaling; Zhao, Yi

    2017-06-01

    In this paper, the traditional stochastic hierarchy equations of motion method is extended to correlated real-time and imaginary-time propagations for application to the calculation of equilibrium correlation functions. The central idea is based on the combined use of stochastic unravelling and hierarchical techniques for the temperature-dependent and temperature-free parts of the influence functional, respectively, in the path integral formalism of open quantum systems coupled to a harmonic bath. The feasibility and validity of the proposed method are demonstrated for the emission spectra of a homodimer, compared with those obtained through the deterministic hierarchy equations of motion. Moreover, it is interesting to find that the complex noises generated from a small portion of real-time and imaginary-time cross terms can be safely dropped to produce stable and accurate position and flux correlation functions over a broad parameter regime.

  10. A Nonlinear Diffusion Equation-Based Model for Ultrasound Speckle Noise Removal

    NASA Astrophysics Data System (ADS)

    Zhou, Zhenyu; Guo, Zhichang; Zhang, Dazhi; Wu, Boying

    2018-04-01

    Ultrasound images are contaminated by speckle noise, which brings difficulties in further image analysis and clinical diagnosis. In this paper, we address this problem in the view of nonlinear diffusion equation theories. We develop a nonlinear diffusion equation-based model by taking into account not only the gradient information of the image, but also the information of the gray levels of the image. By utilizing the region indicator as the variable exponent, we can adaptively control the diffusion type which alternates between the Perona-Malik diffusion and the Charbonnier diffusion according to the image gray levels. Furthermore, we analyze the proposed model with respect to the theoretical and numerical properties. Experiments show that the proposed method achieves much better speckle suppression and edge preservation when compared with the traditional despeckling methods, especially in the low gray level and low-contrast regions.
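One of the two diffusion types the model above alternates between is Perona-Malik diffusion. The sketch below is a generic explicit Perona-Malik step applied to a noisy synthetic image (an illustration of that building block, not the authors' variable-exponent scheme):

```python
import numpy as np

def perona_malik_step(u, dt=0.1, kappa=0.2):
    # neighbor differences in the four grid directions (periodic wrap)
    gn = np.roll(u, -1, 0) - u
    gs = np.roll(u, 1, 0) - u
    ge = np.roll(u, -1, 1) - u
    gw = np.roll(u, 1, 1) - u
    c = lambda g: 1.0 / (1.0 + (g / kappa) ** 2)   # edge-stopping function
    return u + dt * (c(gn) * gn + c(gs) * gs + c(ge) * ge + c(gw) * gw)

# Synthetic test: a bright square corrupted by additive noise.
rng = np.random.default_rng(3)
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
noisy = img + 0.1 * rng.standard_normal(img.shape)
den = noisy
for _ in range(20):
    den = perona_malik_step(den)
print(np.abs(den - img).mean(), np.abs(noisy - img).mean())
```

The edge-stopping function c is what keeps diffusion strong in flat noisy regions and weak across high-gradient edges, which is the behavior the abstract's gray-level indicator modulates.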

  11. Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions

    NASA Astrophysics Data System (ADS)

    Chen, Nan; Majda, Andrew J.

    2018-02-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. 
It is shown in a stringent set of test problems that the method requires only O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
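A 1-D toy can illustrate the mixture idea (this is not the authors' high-dimensional algorithm): a mixture of Gaussians, here a plain Gaussian kernel density estimate built from only O(100) samples, can represent a fat-tailed PDF that a single fitted Gaussian misses:

```python
import numpy as np
from math import gamma, pi, sqrt

rng = np.random.default_rng(4)
samples = rng.standard_t(df=3, size=200)          # fat-tailed target

def gaussian_mixture_pdf(x, centers, h=0.5):
    # equal-weight mixture of Gaussians centered at the sample points
    z = (x - centers[:, None]) / h
    return np.exp(-0.5 * z**2).mean(axis=0) / (h * sqrt(2 * pi))

x = np.linspace(-8.0, 8.0, 400)
p_mix = gaussian_mixture_pdf(x, samples)

# Single-Gaussian fit for comparison.
mu, sd = samples.mean(), samples.std()
p_gauss = np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))

# Exact Student-t (df = 3) density for reference.
p_true = gamma(2.0) / (gamma(1.5) * sqrt(3 * pi)) * (1.0 + x**2 / 3.0) ** -2

print(np.abs(p_mix - p_true).max(), np.abs(p_gauss - p_true).max())
```

The mixture inherits the tails and the peak from the samples themselves, which is the sense in which a small ensemble can carry a full non-Gaussian PDF.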

  12. Bioelectric impedance and hydrostatic weighing with and without head submersion in persons who are morbidly obese.

    PubMed

    Heath, E M; Adams, T D; Daines, M M; Hunt, S C

    1998-08-01

    To compare hydrostatic weighing with and without head submersion and bioelectric impedance analysis (BIA) for measurement of body composition of persons who are morbidly obese. Body composition was determined using 3 methods: hydrostatic weighing with and without head submersion and BIA. Residual volume for the hydrostatic weighing calculation was determined by body plethysmography. Subjects were 16 morbidly obese men (142.5 kg mean body weight) and 30 morbidly obese women (125.9 kg mean body weight) living in the Salt Lake County, Utah, area. Morbid obesity was defined as 40 kg or more over ideal weight. One-way, repeated-measures analysis of variance was followed by Scheffé post hoc tests; body-fat measurement method served as the repeated variable and percentage of body fat as the dependent variable. Men and women were analyzed separately. In addition, degree of agreement between the 3 methods of determining body composition was determined. A regression equation was used to calculate body density for hydrostatic weighing without head submersion. Two new BIA regression equations were developed from the data of the 16 men and 30 women. Values for percentage body fat from hydrostatic weighing with and without head submersion (41.8% vs 41.7%, respectively) were the same for men but differed for women (52.2% vs 49.4%, respectively, P < .0001). Values for body fat percentage measured by BIA were significantly lower for men (36.1%) and women (43.1%) (for both, P < .0001) compared with values from hydrostatic weighing methods. BIA underpredicted percentage body fat by a mean of 5.7% in men and 9.1% in women compared with the traditional hydrostatic weighing method. BIA tended to underpredict the measurement of percentage body fat in male and female subjects who were morbidly obese. 
Hydrostatic weighing without head submersion provides an accurate, acceptable, and convenient alternative method for body composition assessment of the morbidly obese population in comparison with the traditional hydrostatic weighing method. In population screening or other settings where underwater weighing is impractical, population-specific BIA regression equations should be used because general BIA equations lead to consistent underprediction of percentage body fat compared with hydrostatic weighing.

  13. Quantum mechanical generalized phase-shift approach to atom-surface scattering: a Feshbach projection approach to dealing with closed channel effects.

    PubMed

    Maji, Kaushik; Kouri, Donald J

    2011-03-28

    We have developed a new method for solving quantum dynamical scattering problems, using the time-independent Schrödinger equation (TISE), based on a novel method to generalize a "one-way" quantum mechanical wave equation, impose correct boundary conditions, and eliminate exponentially growing closed channel solutions. The approach is readily parallelized to achieve approximate N² scaling, where N is the number of coupled equations. The full two-way nature of the TISE is included while propagating the wave function in the scattering variable and the full S-matrix is obtained. The new algorithm is based on a "Modified Cayley" operator splitting approach, generalizing earlier work where the method was applied to the time-dependent Schrödinger equation. All scattering variable propagation approaches to solving the TISE involve solving a Helmholtz-type equation, and for more than one degree of freedom, these are notoriously ill-behaved, due to the unavoidable presence of exponentially growing contributions to the numerical solution. Traditionally, the method used to eliminate exponential growth has posed a major obstacle to the full parallelization of such propagation algorithms. We stabilize by using the Feshbach projection operator technique to remove all the nonphysical exponentially growing closed channels, while retaining all of the propagating open channel components, as well as exponentially decaying closed channel components.

  14. Self-referencing Taper Curves for Loblolly Pine

    Treesearch

    Mike Strub; Chris Cieszewski; David Hyink

    2005-01-01

    We compare the traditional fitting of relative diameter over relative height with methods based on self-referencing functions and stochastic parameter estimation using data collected by the Virginia Polytechnic Institute and State University Growth and Yield Cooperative. Two sets of self-referencing equations assume known diameter at 4.5 feet inside (dib) and outside (...

  15. Implementation of the SPH Procedure Within the MOOSE Finite Element Framework

    NASA Astrophysics Data System (ADS)

    Laurier, Alexandre

    The goal of this thesis was to implement the SPH homogenization procedure within the MOOSE finite element framework at INL. Before this project, INL relied on DRAGON to do its SPH homogenization, which was not flexible enough for its needs. As such, the SPH procedure was implemented for the neutron diffusion equation with the traditional, Selengut and true Selengut normalizations. Another aspect of this research was to derive the SPH corrected neutron transport equations and implement them in the same framework. Following in the footsteps of other articles, this feature was implemented and tested successfully with both the PN and SN transport calculation schemes. Although the results obtained for the power distribution in PWR assemblies show no advantages over the use of the SPH diffusion equation, we believe the inclusion of this transport correction will allow for better results in cases where either PN or SN is required. An additional aspect of this research was the implementation of a novel way of solving the non-linear SPH problem. Traditionally, this was done through a Picard, fixed-point iterative process, whereas the new implementation relies on MOOSE's Preconditioned Jacobian-Free Newton Krylov (PJFNK) method to solve the non-linear problem directly. This novel implementation reduced calculation time by a factor of up to 50 and generated SPH factors that correspond to those obtained through a fixed-point iterative process with a very tight convergence criterion (epsilon < 10⁻⁸). The PJFNK SPH procedure also reaches convergence in problems containing important reflector regions and void boundary conditions, something that the traditional SPH method has never been able to achieve.
When the PJFNK method cannot converge on the SPH problem, a hybrid method is used whereby the traditional SPH iteration forces the initial condition into the radius of convergence of the Newton method. This new method was tested with great success on a simplified model of INL's TREAT reactor, a problem that includes very important graphite reflector regions as well as vacuum boundary conditions. To demonstrate the power of PJFNK SPH on a more common case, the correction was applied to a simplified PWR reactor core from the BEAVRS benchmark, including 15 assemblies and the water reflector, with very good results. This opens up the possibility of applying the SPH correction to full reactor cores in order to reduce homogenization errors for use in transient or multi-physics calculations.
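The hybrid strategy described above — a few cheap fixed-point (Picard) sweeps to bring the iterate inside Newton's radius of convergence, then Newton for fast convergence — can be sketched on a scalar toy problem (x = cos x; this is only an illustration of the iteration pattern, not the SPH equations):

```python
import math

def picard(g, x0, n):
    """A few fixed-point (Picard) sweeps: x <- g(x)."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method on the residual form f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

g = math.cos                      # fixed-point form: x = cos(x)
f = lambda x: x - math.cos(x)     # residual form:    f(x) = 0
df = lambda x: 1.0 + math.sin(x)

x_init = picard(g, 1.0, 5)        # slow but robust Picard sweeps first
root = newton(f, df, x_init)      # then quadratic Newton convergence
print(root)                       # ~0.7390851332151607
```

Picard alone would need dozens of iterations for this accuracy; Newton from the Picard-improved starting point finishes in a handful of steps, mirroring the speed-up reported for PJFNK over pure fixed-point iteration.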

  16. The electromagnetic pendulum in quickly changing magnetic field of constant intensity

    NASA Astrophysics Data System (ADS)

    Rodyukov, F. F.; Shepeljavyi, A. I.

    2018-05-01

    The Lagrange-Maxwell equations for the pendulum in the form of a conductive frame, which is suspended in a uniform sinusoidal electromagnetic field of constant intensity, are obtained. Simplified mathematical models are derived by the traditional method of separating fast and slow motions with subsequent averaging over the fast time. It is shown that this traditional approach may lead to inappropriate mathematical models, and ways to avoid this in the case considered are suggested. The main statements are illustrated by numerical experiments.

  17. Solution of elliptic partial differential equations by fast Poisson solvers using a local relaxation factor. 1: One-step method

    NASA Technical Reports Server (NTRS)

    Chang, S. C.

    1986-01-01

    An algorithm for solving a large class of two- and three-dimensional nonseparable elliptic partial differential equations (PDE's) is developed and tested. It uses a modified D'Yakonov-Gunn iterative procedure in which the relaxation factor is grid-point dependent. It is easy to implement and applicable to a variety of boundary conditions. It is also computationally efficient, as indicated by the results of numerical comparisons with other established methods. Furthermore, the current algorithm has the advantage of possessing two important properties which the traditional iterative methods lack: (1) the convergence rate is relatively insensitive to grid-cell size and aspect ratio, and (2) the convergence rate can be easily estimated by using the coefficient of the PDE being solved.
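A minimal sketch of an iterative relaxation solver in which the relaxation factor is stored per grid point, here applied to the 2-D Laplace equation. For simplicity the factor is constant in this toy (the paper's local factor is derived from the PDE coefficients at each point, which this sketch does not attempt):

```python
import numpy as np

def sor_local(u, omega, tol=1e-10, max_sweeps=5000):
    """Gauss-Seidel relaxation for the 2-D Laplace equation with a
    grid-point-dependent relaxation factor omega[i, j]."""
    n, m = u.shape
    for _ in range(max_sweeps):
        diff = 0.0
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                new = 0.25 * (u[i+1, j] + u[i-1, j] + u[i, j+1] + u[i, j-1])
                delta = omega[i, j] * (new - u[i, j])
                u[i, j] += delta
                diff = max(diff, abs(delta))
        if diff < tol:
            break
    return u

n = 17
x = np.linspace(0.0, 1.0, n)
u = np.zeros((n, n))
u[0, :], u[-1, :] = x, x          # boundary data u = x (a harmonic function)
u[:, 0], u[:, -1] = 0.0, 1.0
omega = np.full((n, n), 1.5)      # constant here; per-point in general
u = sor_local(u, omega)
print(np.max(np.abs(u - x[None, :])))   # interior converges to u(x, y) = x
```

Since the boundary data comes from the harmonic function u = x, the converged interior should reproduce it to within the iteration tolerance.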

  18. Transient simulation of hydropower station with consideration of three-dimensional unsteady flow in turbine

    NASA Astrophysics Data System (ADS)

    Huang, W. D.; Fan, H. G.; Chen, N. X.

    2012-11-01

    To study the interaction between the transient flow in pipe and the unsteady turbulent flow in turbine, a coupled model of the transient flow in the pipe and three-dimensional unsteady flow in the turbine is developed based on the method of characteristics and the fluid governing equation in the accelerated rotational relative coordinate. The load-rejection process under the closing of guide vanes of the hydraulic power plant is simulated by the coupled method, the traditional transient simulation method and traditional three-dimensional unsteady flow calculation method respectively and the results are compared. The pressure, unit flux and rotation speed calculated by three methods show a similar change trend. However, because the elastic water hammer in the pipe and the pressure fluctuation in the turbine have been considered in the coupled method, the increase of pressure at spiral inlet is higher and the pressure fluctuation in turbine is stronger.

  19. New distributed activation energy model: numerical solution and application to pyrolysis kinetics of some types of biomass.

    PubMed

    Cai, Junmeng; Liu, Ronghou

    2008-05-01

    In the present paper, a new distributed activation energy model (DAEM) has been developed that considers the reaction order and the dependence of the frequency factor on temperature. The proposed DAEM cannot be solved directly in closed form, so a numerical method was used to solve the new DAEM equation. Two numerical examples illustrating the proposed method are presented. The traditional DAEM and the new DAEM have been used to simulate the pyrolysis of some types of biomass. The new DAEM fitted the experimental data much better than the traditional DAEM because the dependence of the frequency factor on temperature is taken into account.
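For orientation, the classical first-order DAEM that such models extend gives the unreacted fraction as a double integral over a distribution of activation energies, 1 − α(T) = ∫ exp[−(A/β) ∫ exp(−E/RT′) dT′] f(E) dE, which must be evaluated numerically. A sketch with a Gaussian f(E) and illustrative kinetic parameters (not values from the paper):

```python
import numpy as np

R = 8.314            # J/(mol K)
A = 1.0e13           # 1/s, frequency factor (constant in the classical DAEM)
beta = 10.0 / 60.0   # heating rate: 10 K/min in K/s
E0, sigma = 200e3, 20e3   # Gaussian f(E): mean and spread, J/mol

def unreacted_fraction(T_end, T0=300.0):
    """1 - alpha at T_end for the classical first-order DAEM with
    Gaussian activation-energy distribution, by nested quadrature."""
    T = np.linspace(T0, T_end, 2000)
    E = np.linspace(E0 - 5 * sigma, E0 + 5 * sigma, 400)
    # inner temperature integral, vectorised over all E at once
    inner = np.trapz(np.exp(-E[:, None] / (R * T[None, :])), T, axis=1)
    f_E = np.exp(-(E - E0) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    return np.trapz(np.exp(-(A / beta) * inner) * f_E, E)

for T in (600.0, 800.0, 1000.0):
    print(T, unreacted_fraction(T))   # decreases monotonically with T
```

The paper's new DAEM would additionally carry a reaction order and a temperature-dependent frequency factor A(T) inside the inner integral; the quadrature structure stays the same.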

  20. Application of mathematical model methods for optimization tasks in construction materials technology

    NASA Astrophysics Data System (ADS)

    Fomina, E. V.; Kozhukhova, N. I.; Sverguzova, S. V.; Fomin, A. E.

    2018-05-01

    In this paper, the regression equations method for design of construction material was studied. Regression and polynomial equations representing the correlation between the studied parameters were proposed. The logic design and software interface of the regression equations method focused on parameter optimization to provide the energy saving effect at the stage of autoclave aerated concrete design considering the replacement of traditionally used quartz sand by coal mining by-product such as argillite. The mathematical model represented by a quadric polynomial for the design of experiment was obtained using calculated and experimental data. This allowed the estimation of relationship between the composition and final properties of the aerated concrete. The surface response graphically presented in a nomogram allowed the estimation of concrete properties in response to variation of composition within the x-space. The optimal range of argillite content was obtained leading to a reduction of raw materials demand, development of target plastic strength of aerated concrete as well as a reduction of curing time before autoclave treatment. Generally, this method allows the design of autoclave aerated concrete with required performance without additional resource and time costs.
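A quadratic-polynomial response surface of the kind described can be fitted by ordinary least squares. The two factors, the coefficients, and the data below are all hypothetical stand-ins (e.g. x1 as an argillite replacement fraction), used only to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x1 = rng.uniform(0.0, 1.0, n)     # hypothetical factor 1
x2 = rng.uniform(0.0, 1.0, n)     # hypothetical factor 2

# Quadratic model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
true = np.array([2.0, 1.5, -0.8, -3.0, 0.5, 1.2])
X = np.column_stack([np.ones(n), x1, x2, x1**2, x2**2, x1 * x2])
y = X @ true + rng.normal(0.0, 0.001, n)   # responses with small noise

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)   # recovers the quadratic-polynomial coefficients
```

The fitted polynomial can then be evaluated on a grid of (x1, x2) to draw the response-surface nomogram and read off the optimal factor range, as described above.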

  1. Structure-preserving spectral element method in attenuating seismic wave modeling

    NASA Astrophysics Data System (ADS)

    Cai, Wenjun; Zhang, Huai

    2016-04-01

    This work describes the extension of the conformal symplectic method to solve the damped acoustic wave equation and the elastic wave equations in the framework of the spectral element method. The conformal symplectic method is a variation of conventional symplectic methods for non-conservative time evolution problems, with superior behavior in long-time stability and dissipation preservation. To construct the conformal symplectic method, we first reformulate the damped acoustic wave equation and the elastic wave equations in their equivalent conformal multi-symplectic structures, which naturally reveal the intrinsic properties of the original systems, especially the dissipation laws. We thereafter separate each structure into a conservative Hamiltonian system and a purely dissipative ordinary differential equation system. Based on this splitting methodology, we solve the two subsystems separately. The dissipative one is solved cheaply by its analytic solution, while for the conservative system we combine a fourth-order symplectic Nyström method in time with the spectral element method in space to cover realistic geological structures involving complex free-surface topography. The Strang composition method is then adopted to concatenate the two parts of the solution into the complete numerical scheme, which is conformal symplectic and can therefore guarantee numerical stability and dissipation preservation over long modeling times. Additionally, a larger Courant number than that of the traditional Newmark scheme is found in the numerical experiments, in conjunction with a spatial sampling of approximately 5 points per wavelength. A benchmark test for the damped acoustic wave equation validates the effectiveness of our proposed method in precisely capturing the dissipation rate. The classical Lamb problem is used to demonstrate the ability to model Rayleigh-wave propagation.
More comprehensive numerical experiments investigate the long-time simulation, low dispersion and energy conservation properties of the conformal symplectic method in both attenuating homogeneous and heterogeneous media.
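The splitting idea described above — the dissipative part solved exactly by its analytic decay, the conservative part advanced by a symplectic integrator, the two concatenated by Strang composition — can be illustrated on a damped harmonic oscillator, a scalar stand-in for the wave equations (a second-order Verlet step replaces the paper's fourth-order Nyström method):

```python
import math

def step(q, p, dt, omega, gamma):
    """One Strang-split step for q'' + 2*gamma*q' + omega^2*q = 0:
    exact dissipative half-step, symplectic (velocity-Verlet) step of
    the conservative oscillator, exact dissipative half-step."""
    p *= math.exp(-gamma * dt)            # half-step of p' = -2*gamma*p
    p -= 0.5 * dt * omega**2 * q          # symplectic conservative step
    q += dt * p
    p -= 0.5 * dt * omega**2 * q
    p *= math.exp(-gamma * dt)            # second dissipative half-step
    return q, p

omega, gamma = 2.0, 0.1
q, p, dt, T = 1.0, 0.0, 0.01, 10.0
for _ in range(int(T / dt)):
    q, p = step(q, p, dt, omega, gamma)

# exact underdamped solution for comparison
wd = math.sqrt(omega**2 - gamma**2)
q_exact = math.exp(-gamma * T) * (math.cos(wd * T) + (gamma / wd) * math.sin(wd * T))
print(q, q_exact)
```

The numerical amplitude decays at exactly the analytic rate e^(−γt) because the dissipative flow is applied exactly, which is the dissipation-preservation property the abstract emphasizes.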

  2. A general time-dependent stochastic method for solving Parker's transport equation in spherical coordinates

    NASA Astrophysics Data System (ADS)

    Pei, C.; Bieber, J. W.; Burger, R. A.; Clem, J.

    2010-12-01

    We present a detailed description of our newly developed stochastic approach for solving Parker's transport equation, which we believe is the first attempt to solve it with time dependence in 3-D, evolving from our 3-D steady state stochastic approach. Our formulation of this method is general and is valid for any type of heliospheric magnetic field, although we choose the standard Parker field as an example to illustrate the steps to calculate the transport of galactic cosmic rays. Our 3-D stochastic method is different from other stochastic approaches in the literature in several ways. For example, we employ spherical coordinates to integrate directly, which makes the code much more efficient by reducing coordinate transformations. What is more, the equivalence between our stochastic differential equations and Parker's transport equation is guaranteed by Ito's theorem in contrast to some other approaches. We generalize the technique for calculating particle flux based on the pseudoparticle trajectories for steady state solutions and for time-dependent solutions in 3-D. To validate our code, first we show that good agreement exists between solutions obtained by our steady state stochastic method and a traditional finite difference method. Then we show that good agreement also exists for our time-dependent method for an idealized and simplified heliosphere which has a Parker magnetic field and a simple initial condition for two different inner boundary conditions.
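The underlying principle — representing a parabolic transport equation by an equivalent Itô SDE and averaging over pseudoparticle trajectories — can be shown on a plain 1-D diffusion equation (the full Parker equation adds convection, drifts, adiabatic cooling, and an anisotropic diffusion tensor in spherical coordinates, none of which this toy includes):

```python
import numpy as np

def solve_diffusion_stochastic(x0, t, D, f, n_paths=200000, n_steps=100, seed=0):
    """Monte-Carlo solution of u_t = D u_xx with u(x, 0) = f(x):
    by Feynman-Kac, u(x0, t) = E[f(X_t)] where dX = sqrt(2D) dW, X_0 = x0.
    Trajectories are integrated with the Euler-Maruyama scheme."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n_paths)
    return f(x).mean()

D, t, x0 = 0.5, 1.0, 1.5
u_mc = solve_diffusion_stochastic(x0, t, D, lambda x: x**2)
u_exact = x0**2 + 2.0 * D * t    # E[(x0 + sqrt(2Dt) Z)^2] analytically
print(u_mc, u_exact)             # agree to Monte-Carlo accuracy
```

As in the abstract, the PDE solution at a point is recovered by statistics over many pseudoparticle trajectories rather than by a grid-based finite difference sweep.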

  3. Numerical solution of the exact cavity equations of motion for an unstable optical resonator.

    PubMed

    Bowers, M S; Moody, S E

    1990-09-20

    We solve numerically, we believe for the first time, the exact cavity equations of motion for a realistic unstable resonator with a simple gain saturation model. The cavity equations of motion, first formulated by Siegman ["Exact Cavity Equations for Lasers with Large Output Coupling," Appl. Phys. Lett. 36, 412-414 (1980)], and which we term the dynamic coupled modes (DCM) method of solution, solve for the full 3-D time dependent electric field inside the optical cavity by expanding the field in terms of the actual diffractive transverse eigenmodes of the bare (gain free) cavity with time varying coefficients. The spatially varying gain serves to couple the bare cavity transverse modes and to scatter power from mode to mode. We show that the DCM method numerically converges with respect to the number of eigenmodes in the basis set. The intracavity intensity in the numerical example shown reaches a steady state, and this steady state distribution is compared with that computed from the traditional Fox and Li approach using a fast Fourier transform propagation algorithm. The output wavefronts from both methods are quite similar, and the computed output powers agree to within 10%. The usefulness and advantages of using this method for predicting the output of a laser, especially pulsed lasers used for coherent detection, are discussed.

  4. A direct Primitive Variable Recovery Scheme for hyperbolic conservative equations: The case of relativistic hydrodynamics.

    PubMed

    Aguayo-Ortiz, A; Mendoza, S; Olvera, D

    2018-01-01

    In this article we develop a Primitive Variable Recovery Scheme (PVRS) to solve any system of coupled conservative differential equations. The method obtains the primitive variables directly by applying the chain rule to the time term of the conservative equations. A traditional finite volume method is then applied to the flux in order to avoid violating both the entropy condition and the Rankine-Hugoniot jump conditions. The time evolution is computed using a forward finite difference scheme. This numerical technique avoids recovering the primitive vector by solving an algebraic system of equations, as is often done, and so generalises standard techniques for solving these kinds of coupled systems. The article is written with special relativistic hydrodynamic numerical schemes in mind, with a pedagogical appendix to make the PVRS easy to follow. We present the convergence of the method for standard shock-tube problems of special relativistic hydrodynamics and a graphical visualisation of the errors using the fluctuations of the numerical values with respect to exact analytic solutions. The PVRS circumvents the sometimes arduous computation that arises in standard numerical techniques, which obtain the desired primitive vector through an algebraic polynomial of the charges.

  5. A direct Primitive Variable Recovery Scheme for hyperbolic conservative equations: The case of relativistic hydrodynamics

    PubMed Central

    Mendoza, S.; Olvera, D.

    2018-01-01

    In this article we develop a Primitive Variable Recovery Scheme (PVRS) to solve any system of coupled conservative differential equations. The method obtains the primitive variables directly by applying the chain rule to the time term of the conservative equations. A traditional finite volume method is then applied to the flux in order to avoid violating both the entropy condition and the Rankine-Hugoniot jump conditions. The time evolution is computed using a forward finite difference scheme. This numerical technique avoids recovering the primitive vector by solving an algebraic system of equations, as is often done, and so generalises standard techniques for solving these kinds of coupled systems. The article is written with special relativistic hydrodynamic numerical schemes in mind, with a pedagogical appendix to make the PVRS easy to follow. We present the convergence of the method for standard shock-tube problems of special relativistic hydrodynamics and a graphical visualisation of the errors using the fluctuations of the numerical values with respect to exact analytic solutions. The PVRS circumvents the sometimes arduous computation that arises in standard numerical techniques, which obtain the desired primitive vector through an algebraic polynomial of the charges. PMID:29659602

  6. An improved local radial point interpolation method for transient heat conduction analysis

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang

    2013-06-01

    The smoothing thin plate spline (STPS) interpolation, using the penalty function method from optimization theory, is presented to deal with transient heat conduction problems. The smoothness conditions on the shape functions and their derivatives can be satisfied so that distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented as in the finite element method (FEM) because the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented to demonstrate the applicability and accuracy of the present approach compared with the traditional thin plate spline (TPS) radial basis functions.

  7. Information-Velocity Metric for the Flow of Information through an Organization: Application to Decision Support

    DTIC Science & Technology

    2009-06-17

    pyramid. Hh represents the amount of human-to-human communication that limits v(info). Hh represents a traditional but inefficient, unscalable, and...Equation (20) weights evenly improved efficiency of sharing information (by moving away from traditional human-to-human communication methods and...the right time. The second line of equation (20) implies that human-to-human communication methods are inefficient and unscalable. For example, an

  8. Examining Potential Boundary Bias Effects in Kernel Smoothing on Equating: An Introduction for the Adaptive and Epanechnikov Kernels.

    PubMed

    Cid, Jaime A; von Davier, Alina A

    2015-05-01

    Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
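A minimal sketch of the continuization step using the Epanechnikov kernel discussed above: the discrete score distribution is replaced by a weighted sum of kernels, one per score point. The score distribution below is synthetic; operational KE also standardizes the result to preserve the mean and variance, which this sketch omits:

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: K(u) = 0.75 * (1 - u^2) on |u| <= 1, else 0."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def continuize(scores, probs, h, x):
    """Continuized density of a discrete score distribution:
    f(x) = sum_j p_j * K((x - s_j) / h) / h."""
    u = (x[:, None] - scores[None, :]) / h
    return (epanechnikov(u) @ probs) / h

scores = np.arange(0, 21)                          # raw scores 0..20
probs = np.exp(-0.5 * ((scores - 12) / 4.0) ** 2)  # a synthetic discrete shape
probs /= probs.sum()

x = np.linspace(-2, 23, 2501)
f = continuize(scores, probs, h=0.8, x=x)
print(np.trapz(f, x))   # ~1: a proper continuous density
```

Because the Epanechnikov kernel has compact support, score points near the boundaries spread no mass far outside the score range, which is the motivation for considering it against the Gaussian kernel in Study II.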

  9. New explicit equations for the accurate calculation of the growth and evaporation of hydrometeors by the diffusion of water vapor

    NASA Technical Reports Server (NTRS)

    Srivastava, R. C.; Coen, J. L.

    1992-01-01

    The traditional explicit growth equation has been widely used to calculate the growth and evaporation of hydrometeors by the diffusion of water vapor. This paper reexamines the assumptions underlying the traditional equation and shows that large errors (10-30 percent in some cases) result if it is used carelessly. More accurate explicit equations are derived by approximating the saturation vapor-density difference as a quadratic rather than a linear function of the temperature difference between the particle and ambient air. These new equations, which reduce the error to less than a few percent, merit inclusion in a broad range of atmospheric models.

  10. Frequentist Model Averaging in Structural Equation Modelling.

    PubMed

    Jin, Shaobo; Ankargren, Sebastian

    2018-06-04

    Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.

  11. A Simple Method to Calculate the Temperature Dependence of the Gibbs Energy and Chemical Equilibrium Constants

    ERIC Educational Resources Information Center

    Vargas, Francisco M.

    2014-01-01

    The temperature dependence of the Gibbs energy and important quantities such as Henry's law constants, activity coefficients, and chemical equilibrium constants is usually calculated by using the Gibbs-Helmholtz equation. Although this is a well-known approach, traditionally covered as part of any physical chemistry course, the required…
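For reference, if the reaction enthalpy ΔH° is treated as temperature-independent, integrating the Gibbs-Helmholtz equation gives the van't Hoff form ln(K₂/K₁) = −(ΔH°/R)(1/T₂ − 1/T₁), which is straightforward to evaluate (the reaction values below are illustrative):

```python
import math

R = 8.314  # J/(mol K)

def K_at_T(K1, T1, T2, dH):
    """Equilibrium constant at T2 given its value K1 at T1, assuming a
    temperature-independent reaction enthalpy dH (integrated
    Gibbs-Helmholtz / van't Hoff equation)."""
    return K1 * math.exp(-(dH / R) * (1.0 / T2 - 1.0 / T1))

# Exothermic example (dH < 0): K decreases as temperature rises
K1 = 10.0
K2 = K_at_T(K1, 298.15, 350.0, dH=-50e3)
print(K2)   # well below K1, as Le Chatelier's principle predicts
```

A temperature-dependent ΔH°(T) (via heat capacities) requires carrying the full integral instead of this closed form.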

  12. A parametric finite element method for solid-state dewetting problems with anisotropic surface energies

    NASA Astrophysics Data System (ADS)

    Bao, Weizhu; Jiang, Wei; Wang, Yan; Zhao, Quan

    2017-02-01

    We propose an efficient and accurate parametric finite element method (PFEM) for solving sharp-interface continuum models for solid-state dewetting of thin films with anisotropic surface energies. The governing equations of the sharp-interface models belong to a new type of high-order (4th- or 6th-order) geometric evolution partial differential equations about open curve/surface interface tracking problems which include anisotropic surface diffusion flow and contact line migration. Compared to the traditional methods (e.g., marker-particle methods), the proposed PFEM not only has very good accuracy, but also poses very mild restrictions on the numerical stability, and thus it has significant advantages for solving this type of open curve evolution problems with applications in the simulation of solid-state dewetting. Extensive numerical results are reported to demonstrate the accuracy and high efficiency of the proposed PFEM.

  13. Direct modeling for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Xu, Kun

    2015-06-01

    All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no governing equation that makes a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics has to be modeled at the mesh-size and time-step scales.
Here, the CFD is more or less a direct construction of discrete numerical evolution equations, where the mesh size and time step will play dynamic roles in the modeling process. With the variation of the ratio between mesh size and local particle mean free path, the scheme will capture flow physics from the kinetic particle transport and collision to the hydrodynamic wave propagation. Based on the direct modeling, a continuous dynamics of flow motion will be captured in the unified gas-kinetic scheme. This scheme can be faithfully used to study the unexplored non-equilibrium flow physics in the transition regime.

  14. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    NASA Astrophysics Data System (ADS)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed, within an integrated GIS modeling environment, a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and the spatial extent of saturated areas and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provided a physically-based method that gives realistic results for watersheds with VSA hydrology.
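The traditional SCS-CN equation that the distributed method redistributes over the landscape takes the standard form Q = (P − Ia)² / (P − Ia + S) for P > Ia, with Ia = 0.2 S and S = 1000/CN − 10 in US customary units (inches):

```python
def scs_runoff(P, CN):
    """Traditional SCS Curve Number runoff depth in inches:
    S = 1000/CN - 10, Ia = 0.2*S, and
    Q = (P - Ia)^2 / (P - Ia + S) when P exceeds Ia, else 0."""
    S = 1000.0 / CN - 10.0     # potential maximum retention
    Ia = 0.2 * S               # initial abstraction
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

print(scs_runoff(3.0, 80))   # 1.25 in of runoff from a 3 in storm at CN = 80
```

In the distributed CN-VSA application, the storage S effectively varies across the watershed (via the topographic index) instead of being a single lumped value.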

  15. Analysis of Wien filter spectra from Hall thruster plumes.

    PubMed

    Huang, Wensheng; Shastry, Rohit

    2015-07-01

    A method for analyzing the Wien filter spectra obtained from the plumes of Hall thrusters is derived and presented. The new method extends upon prior work by deriving the integration equations for the current and species fractions. Wien filter spectra from the plume of the NASA-300M Hall thruster are analyzed with the presented method and the results are used to examine key trends. The new integration method is found to produce results slightly different from the traditional area-under-the-curve method. The use of different velocity distribution forms when performing curve-fits to the peaks in the spectra is compared. Additional comparison is made with the scenario where the current fractions are assumed to be proportional to the heights of peaks. The comparison suggests that the calculated current fractions are not sensitive to the choice of form as long as both the height and width of the peaks are accounted for. Conversely, forms that only account for the height of the peaks produce inaccurate results. Also presented are the equations for estimating the uncertainty associated with applying curve fits and charge-exchange corrections. These uncertainty equations can be used to plan the geometry of the experimental setup.

  16. An overview of longitudinal data analysis methods for neurological research.

    PubMed

    Locascio, Joseph J; Atri, Alireza

    2011-01-01

The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, intended as a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models.

  17. First-Order Hyperbolic System Method for Time-Dependent Advection-Diffusion Problems

    NASA Technical Reports Server (NTRS)

    Mazaheri, Alireza; Nishikawa, Hiroaki

    2014-01-01

    A time-dependent extension of the first-order hyperbolic system method for advection-diffusion problems is introduced. Diffusive/viscous terms are written and discretized as a hyperbolic system, which recovers the original equation in the steady state. The resulting scheme offers advantages over traditional schemes: a dramatic simplification in the discretization, high-order accuracy in the solution gradients, and orders-of-magnitude convergence acceleration. The hyperbolic advection-diffusion system is discretized by the second-order upwind residual-distribution scheme in a unified manner, and the system of implicit-residual-equations is solved by Newton's method over every physical time step. The numerical results are presented for linear and nonlinear advection-diffusion problems, demonstrating solutions and gradients produced to the same order of accuracy, with rapid convergence over each physical time step, typically less than five Newton iterations.

  18. Nonlocal Symmetry and Interaction Solutions of a Generalized Kadomtsev—Petviashvili Equation

    NASA Astrophysics Data System (ADS)

    Huang, Li-Li; Chen, Yong; Ma, Zheng-Yi

    2016-08-01

A generalized Kadomtsev—Petviashvili equation is studied by the nonlocal symmetry method and the consistent Riccati expansion (CRE) method in this paper. Applying the truncated Painlevé analysis to the generalized Kadomtsev—Petviashvili equation, some Bäcklund transformations (BTs) including auto-BT and non-auto-BT are obtained. The auto-BT leads to a nonlocal symmetry which corresponds to the residual of the truncated Painlevé expansion. Then the nonlocal symmetry is localized to the corresponding nonlocal group by introducing two new variables. Further, by applying the Lie point symmetry method to the prolonged system, a new type of finite symmetry transformation is derived. In addition, the generalized Kadomtsev—Petviashvili equation is proved to be CRE solvable. As a result, the soliton-cnoidal wave interaction solutions of the equation are explicitly given, which are difficult to find by traditional methods. Moreover, figures are given to show the properties of the explicit analytic interaction solutions. Supported by the Global Change Research Program of China under Grant No. 2015CB953904, the National Natural Science Foundation of China under Grant Nos. 11275072 and 11435005, the Doctoral Program of Higher Education of China under Grant No. 20120076110024, the Network Information Physics Calculation of Basic Research Innovation Research Group of China under Grant No. 61321064, the Shanghai Collaborative Innovation Center of Trustworthy Software for Internet of Things under Grant No. ZF1213, and the Zhejiang Provincial Natural Science Foundation of China under Grant No. LY14A010005

  19. Results of including geometric nonlinearities in an aeroelastic model of an F/A-18

    NASA Technical Reports Server (NTRS)

    Buttrill, Carey S.

    1989-01-01

    An integrated, nonlinear simulation model suitable for aeroelastic modeling of fixed-wing aircraft has been developed. While the author realizes that the subject of modeling rotating, elastic structures is not closed, it is believed that the equations of motion developed and applied herein are correct to second order and are suitable for use with typical aircraft structures. The equations are not suitable for large elastic deformation. In addition, the modeling framework generalizes both the methods and terminology of non-linear rigid-body airplane simulation and traditional linear aeroelastic modeling. Concerning the importance of angular/elastic inertial coupling in the dynamic analysis of fixed-wing aircraft, the following may be said. The rigorous inclusion of said coupling is not without peril and must be approached with care. In keeping with the same engineering judgment that guided the development of the traditional aeroelastic equations, the effect of non-linear inertial effects for most airplane applications is expected to be small. A parameter does not tell the whole story, however, and modes flagged by the parameter as significant also need to be checked to see if the coupling is not a one-way path, i.e., the inertially affected modes can influence other modes.

  20. Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.

    PubMed

    Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L

    2017-06-13

λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the advantage of RBE is that it is unbiased and its variance is usually smaller than that of the empirical estimator. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.
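The variance advantage of Rao-Blackwellization can be seen in a toy Monte Carlo setting far simpler than the paper's λ-dynamics: averaging exact conditional means E[X | Z] instead of noisy draws of X removes the conditional noise. All names below are hypothetical; this is an illustration of the principle, not the authors' estimator:

```python
import random

def crude_and_rb_estimates(n, mu=2.0, seed=0):
    """Estimate E[X] where Z ~ N(mu, 1) and X | Z ~ N(Z, 1).

    Crude estimator: average of the noisy draws X.
    Rao-Blackwellized estimator: average of E[X | Z] = Z, which is
    unbiased for the same target but has strictly smaller variance.
    """
    rng = random.Random(seed)
    zs = [rng.gauss(mu, 1.0) for _ in range(n)]
    xs = [rng.gauss(z, 1.0) for z in zs]
    crude = sum(xs) / n
    rao_blackwell = sum(zs) / n
    return crude, rao_blackwell
```

Repeating the experiment over many seeds shows both estimators centered on E[X] = mu, with the Rao-Blackwellized one visibly tighter.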

  1. Two-dimensional coupled mathematical modeling of fluvial processes with intense sediment transport and rapid bed evolution

    NASA Astrophysics Data System (ADS)

    Yue, Zhiyuan; Cao, Zhixian; Li, Xin; Che, Tao

    2008-09-01

Alluvial rivers may experience intense sediment transport and rapid bed evolution under a high flow regime, for which traditional decoupled mathematical river models based on simplified conservation equations are not applicable. A two-dimensional coupled mathematical model is presented, which is generally applicable to fluvial processes with either intense or weak sediment transport. The governing equations of the model comprise the complete shallow water hydrodynamic equations closed with Manning roughness for boundary resistance and empirical relationships for sediment exchange with the erodible bed. The second-order Total-Variation-Diminishing version of the Weighted-Average-Flux method, along with the HLLC approximate Riemann solver, is adapted to solve the governing equations, which can properly resolve shock waves and contact discontinuities. The model is applied to a pilot study of the flooding due to a sudden outburst of a real glacial lake.
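The approximate-Riemann-solver flux evaluation at the heart of such shock-capturing schemes can be sketched in its simplest HLL form (no contact wave, one space dimension, fixed bed) for the shallow water system U = (h, hu). This is a hypothetical illustration of the flux machinery, not the authors' coupled HLLC/WAF solver:

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def swe_flux(h, q):
    """Physical flux of the 1-D shallow water equations, U = (h, q = h*u)."""
    u = q / h
    return q, q * u + 0.5 * G * h * h

def hll_flux(hl, ql, hr, qr):
    """HLL approximate Riemann flux between left and right states.

    Wave-speed bounds use the simple u +/- sqrt(g*h) estimates; the flux
    is upwinded when all waves move one way, else the HLL average is used.
    """
    ul, ur = ql / hl, qr / hr
    cl, cr = math.sqrt(G * hl), math.sqrt(G * hr)
    sl = min(ul - cl, ur - cr)
    sr = max(ul + cl, ur + cr)
    fl, fr = swe_flux(hl, ql), swe_flux(hr, qr)
    if sl >= 0.0:
        return fl
    if sr <= 0.0:
        return fr
    return tuple((sr * a - sl * b + sl * sr * (uR - uL)) / (sr - sl)
                 for a, b, uL, uR in zip(fl, fr, (hl, ql), (hr, qr)))
```

For identical left and right states the HLL flux reduces to the physical flux, and for supercritical flow it upwinds fully.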

  2. Trajectory control method of stratospheric airship based on the sliding mode control and prediction in wind field

    NASA Astrophysics Data System (ADS)

    Zhang, Jia-shi; Yang, Xi-xiang

    2017-11-01

The stratospheric airship is characterized by large inertia, long time delays and large wind-field disturbances, which make trajectory control difficult. A lateral three-degree-of-freedom dynamic model that accounts for wind interference is built, and the dynamics equations are linearized using small-perturbation theory. A trajectory control method combining sliding mode control with prediction is proposed, a trajectory controller is designed, and the HAA airship is taken as the reference vehicle for simulation analysis. Results show that the improved sliding mode control with feed-forward not only solves the airship trajectory control problem in a wind field, but also effectively improves the control accuracy of the traditional sliding mode control method. This work provides a useful reference for the dynamic modeling and trajectory control of stratospheric airships.
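A minimal sketch of the sliding-mode idea the abstract relies on, for a double integrator with a bounded disturbance standing in for the wind (gains, plant, and disturbance are illustrative assumptions, not the airship model):

```python
import math

def simulate_smc(x0=1.0, v0=0.0, c=1.0, k=2.0, dt=1e-3, T=10.0):
    """Sliding-mode regulation of a double integrator x'' = u + d(t).

    Sliding surface s = c*x + v; control u = -c*v - k*sign(s).
    As long as k exceeds the disturbance bound, s is driven to zero in
    finite time, after which x decays like exp(-c*t) despite d(t).
    """
    x, v, t = x0, v0, 0.0
    while t < T:
        d = 0.5 * math.sin(2.0 * t)   # bounded disturbance, |d| <= 0.5 < k
        s = c * x + v
        u = -c * v - k * (1.0 if s > 0 else -1.0 if s < 0 else 0.0)
        v += (u + d) * dt
        x += v * dt
        t += dt
    return x, v
```

The discontinuous sign term is what gives the method its robustness to the unmodeled disturbance; the "chattering" it causes is the usual price, which boundary-layer or feed-forward refinements mitigate.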

  3. Krylov Deferred Correction Accelerated Method of Lines Transpose for Parabolic Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jia, Jun; Jingfang, Huang

    2008-01-01

In this paper, a new class of numerical methods for the accurate and efficient solution of parabolic partial differential equations is presented. Unlike the traditional method of lines (MoL), the new Krylov deferred correction (KDC) accelerated method of lines transpose (MoL^T) first discretizes the temporal direction using Gaussian-type nodes and spectral integration, and symbolically applies low-order time marching schemes to form a preconditioned elliptic system, which is then solved iteratively using Newton-Krylov techniques such as the Newton-GMRES or Newton-BiCGStab method. Each function evaluation in the Newton-Krylov method is simply one low-order time-stepping approximation of the error, obtained by solving a decoupled system using available fast elliptic equation solvers. Preliminary numerical experiments show that the KDC accelerated MoL^T technique is unconditionally stable, can be spectrally accurate in both temporal and spatial directions, and allows optimal time-step sizes in long-time simulations.

  4. Multigrid Acceleration of Time-Accurate DNS of Compressible Turbulent Flow

    NASA Technical Reports Server (NTRS)

    Broeze, Jan; Geurts, Bernard; Kuerten, Hans; Streng, Martin

    1996-01-01

    An efficient scheme for the direct numerical simulation of 3D transitional and developed turbulent flow is presented. Explicit and implicit time integration schemes for the compressible Navier-Stokes equations are compared. The nonlinear system resulting from the implicit time discretization is solved with an iterative method and accelerated by the application of a multigrid technique. Since we use central spatial discretizations and no artificial dissipation is added to the equations, the smoothing method is less effective than in the more traditional use of multigrid in steady-state calculations. Therefore, a special prolongation method is needed in order to obtain an effective multigrid method. This simulation scheme was studied in detail for compressible flow over a flat plate. In the laminar regime and in the first stages of turbulent flow the implicit method provides a speed-up of a factor 2 relative to the explicit method on a relatively coarse grid. At increased resolution this speed-up is enhanced correspondingly.

  5. Rapid analysis of scattering from periodic dielectric structures using accelerated Cartesian expansions

    DOE PAGES

    Baczewski, Andrew David; Miller, Nicholas C.; Shanker, Balasubramaniam

    2012-03-22

Here, the analysis of fields in periodic dielectric structures arises in numerous applications of recent interest, ranging from photonic bandgap structures and plasmonically active nanostructures to metamaterials. To achieve an accurate representation of the fields in these structures using numerical methods, dense spatial discretization is required. This, in turn, affects the cost of analysis, particularly for integral-equation-based methods, for which traditional iterative methods require O(N^2) operations, N being the number of spatial degrees of freedom. In this paper, we introduce a method for the rapid solution of volumetric electric field integral equations used in the analysis of doubly periodic dielectric structures. The crux of our method is the accelerated Cartesian expansion algorithm, which is used to evaluate the requisite potentials in O(N) cost. Results are provided that corroborate our claims of acceleration without compromising accuracy, as well as the application of our method to a number of compelling photonics applications.

  6. Electric potential calculation in molecular simulation of electric double layer capacitors

    NASA Astrophysics Data System (ADS)

    Wang, Zhenxing; Olmsted, David L.; Asta, Mark; Laird, Brian B.

    2016-11-01

For the molecular simulation of electric double layer capacitors (EDLCs), a number of methods have been proposed and implemented to determine the one-dimensional electric potential profile between the two electrodes at a fixed potential difference. In this work, we compare several of these methods for a model LiClO4-acetonitrile/graphite EDLC simulated using both the traditional fixed-charge method (FCM), in which a fixed charge is assigned a priori to the electrode atoms, and the recently developed constant potential method (CPM) (2007 J. Chem. Phys. 126 084704), where the electrode charges are allowed to fluctuate to keep the potential fixed. Based on an analysis of the full three-dimensional electric potential field, we suggest a method for determining the averaged one-dimensional electric potential profile that can be applied to both the FCM and CPM simulations. Compared to traditional methods based on numerically solving the one-dimensional Poisson's equation, this method yields better accuracy and requires no supplemental assumptions.
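The traditional route mentioned at the end, solving the one-dimensional Poisson equation for the potential profile from a charge-density profile, can be sketched by double cumulative (trapezoidal) integration plus a linear term fixing both electrode potentials. Grid, units, and the function name are illustrative assumptions:

```python
def potential_profile(rho, dx, phi_left=0.0, phi_right=0.0, eps=1.0):
    """Solve d^2(phi)/dz^2 = -rho(z)/eps on a uniform grid.

    First integral gives the field-like term E(z) = int(rho/eps) dz,
    second integral gives phi up to a linear term, which is then fixed
    so that phi matches the prescribed electrode potentials.
    """
    n = len(rho)
    # first cumulative (trapezoidal) integral: E(z), taking E(0) = 0
    e = [0.0] * n
    for i in range(1, n):
        e[i] = e[i - 1] + 0.5 * (rho[i - 1] + rho[i]) * dx / eps
    # second cumulative integral: phi(z) up to an additive linear term
    phi = [0.0] * n
    for i in range(1, n):
        phi[i] = phi[i - 1] - 0.5 * (e[i - 1] + e[i]) * dx
    # linear correction enforcing both boundary potentials
    L = (n - 1) * dx
    corr0, corrL = phi[0], phi[n - 1]
    for i in range(n):
        z = i * dx
        phi[i] = phi[i] - corr0 + phi_left + \
            (phi_right - phi_left - (corrL - corr0)) * z / L
    return phi
```

With zero charge the profile is linear between the electrodes; a uniform charge density gives the familiar parabolic profile.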

  7. Alternative methods for ray tracing in uniaxial media. Application to negative refraction

    NASA Astrophysics Data System (ADS)

    Bellver-Cebreros, Consuelo; Rodriguez-Danta, Marcelo

    2007-03-01

In previous papers [C. Bellver-Cebreros, M. Rodriguez-Danta, Eikonal equation, alternative expression of Fresnel's equation and Mohr's construction in optical anisotropic media, Opt. Commun. 189 (2001) 193; C. Bellver-Cebreros, M. Rodriguez-Danta, Internal conical refraction in biaxial media and graphical plane constructions deduced from Mohr's method, Opt. Commun. 212 (2002) 199; C. Bellver-Cebreros, M. Rodriguez-Danta, Refracción cónica externa en medios biáxicos a partir de la construcción de Mohr, Opt. Pura Apl. 36 (2003) 33], the authors have developed a method based on the local properties of the dielectric permittivity tensor and on Mohr's plane graphical construction in order to study the behaviour of locally plane light waves in anisotropic media. In this paper, this alternative methodology is compared with the traditional one, emphasizing the simplicity of the former when studying ray propagation through uniaxial media (comparison is possible since, in this case, the traditional construction also becomes plane). An original and simple graphical method is proposed in order to determine the direction of propagation given by the wave vector from the knowledge of the extraordinary ray direction (given by the Poynting vector). Some properties of light rays in these media not described in the literature are obtained. Finally, two applications are considered: a description of optical birefringence under normal incidence and the study of negative refraction in uniaxial media.
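For the uniaxial case discussed above, the standard wave-normal relations can be written down directly; the sketch below encodes the textbook index ellipsoid and ray/wave-normal walk-off formulas, not the paper's Mohr-construction method:

```python
import math

def extraordinary_index(theta, n_o, n_e):
    """Effective index of the extraordinary wave in a uniaxial medium,
    for a wave normal at angle theta to the optic axis:
        1/n(theta)^2 = cos^2(theta)/n_o^2 + sin^2(theta)/n_e^2
    """
    inv_n2 = math.cos(theta) ** 2 / n_o ** 2 + math.sin(theta) ** 2 / n_e ** 2
    return 1.0 / math.sqrt(inv_n2)

def walkoff_angle(theta, n_o, n_e):
    """Angle between the extraordinary ray (Poynting vector) and the
    wave normal, from tan(theta_ray) = (n_o/n_e)^2 * tan(theta)."""
    theta_ray = math.atan((n_o / n_e) ** 2 * math.tan(theta))
    return theta_ray - theta
```

Inverting the second relation is exactly the task the abstract's graphical method addresses: recovering the wave-vector direction from a known extraordinary ray direction.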

  8. Strain and grain size of TiO2 nanoparticles from TEM, Raman spectroscopy and XRD: The revisiting of the Williamson-Hall plot method

    NASA Astrophysics Data System (ADS)

    Kibasomba, Pierre M.; Dhlamini, Simon; Maaza, Malik; Liu, Chuan-Pu; Rashad, Mohamed M.; Rayan, Diaa A.; Mwakikunga, Bonex W.

    2018-06-01

The Williamson-Hall (W-H) equation, which has been used since 1962 to obtain relative crystallite sizes and strains between samples, is revisited. A modified W-H equation is derived which takes into account the Scherrer equation, first published in 1918 (which traditionally gives a more absolute crystallite size prediction), and strain prediction from Raman spectra. It is found that W-H crystallite sizes are on average 2.11 ± 0.01 times smaller than the sizes from the Scherrer equation. Furthermore, the strain from the W-H plots, when compared to the strain obtained from Raman spectral red-shifts, yields factors whose values depend on the phases in the materials - whether anatase, rutile or brookite. Two main phases are identified in the annealing temperatures (350 °C-700 °C) chosen herein - anatase and brookite. A transition temperature of 550 °C has been found for nano-TiO2 to irreversibly transform from brookite to anatase by plotting the Raman peak shifts against the annealing temperatures. The W-H underestimation of the strain in the brookite phase gives a W-H/Raman factor of 3.10 ± 0.05, whereas for the anatase phase one gets 2.46 ± 0.03. The new βtot²cos²θ versus sinθ plot, when fitted with a polynomial, yields less strain but much better agreement with the experimental TEM crystallite and agglomerate sizes than both the traditional Williamson-Hall and Scherrer methods. The improvement is greater when the model is linearized, that is, the βtot·cos²θ versus sinθ plot rather than the βtot²·cos²θ versus sinθ plot.
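The classical Williamson-Hall analysis that the paper revisits fits β·cosθ = K·λ/D + 4·ε·sinθ by least squares over the measured peak widths. A sketch with synthetic data (the Cu Kα wavelength in nm and shape factor K = 0.9 are conventional assumptions, not the paper's values):

```python
import math

def williamson_hall_fit(two_thetas_deg, betas_rad, wavelength=0.15406, K=0.9):
    """Classical W-H fit: beta*cos(theta) = K*lambda/D + 4*eps*sin(theta).

    two_thetas_deg : Bragg angles 2-theta in degrees
    betas_rad      : corresponding integral breadths in radians
    Returns (D, eps): crystallite size (same unit as wavelength, here nm
    for Cu K-alpha) and microstrain.
    """
    xs = [4.0 * math.sin(math.radians(tt / 2.0)) for tt in two_thetas_deg]
    ys = [b * math.cos(math.radians(tt / 2.0))
          for tt, b in zip(two_thetas_deg, betas_rad)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return K * wavelength / intercept, slope  # D from intercept, strain from slope
```

Feeding the fit peak widths generated from a known size and strain recovers both, which is a useful sanity check before applying it to real diffractograms.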

  9. Hermite WENO limiting for multi-moment finite-volume methods using the ADER-DT time discretization for 1-D systems of conservation laws

    DOE PAGES

    Norman, Matthew R.

    2014-11-24

New Hermite Weighted Essentially Non-Oscillatory (HWENO) interpolants are developed and investigated within the Multi-Moment Finite-Volume (MMFV) formulation using the ADER-DT time discretization. Whereas traditional WENO methods interpolate pointwise, function-based WENO methods explicitly form a non-oscillatory, high-order polynomial over the cell in question. This study chooses a function-based approach and details how fast convergence to optimal weights for smooth flow is ensured. Methods of sixth-, eighth-, and tenth-order accuracy are developed. We compare these against traditional single-moment WENO methods of fifth-, seventh-, ninth-, and eleventh-order accuracy, providing a comparison against more familiar methods from the literature. The new HWENO methods improve upon existing HWENO methods (1) by giving a better resolution of unreinforced contact discontinuities and (2) by only needing a single HWENO polynomial to update both the cell mean value and cell mean derivative. Test cases to validate and assess these methods include 1-D linear transport, the 1-D inviscid Burgers' equation, and the 1-D inviscid Euler equations. Smooth and non-smooth flows are used for evaluation. These HWENO methods performed better than comparable literature-standard WENO methods for all regimes of discontinuity and smoothness in all tests herein. They exhibit improved optimal accuracy due to the use of derivatives, and they collapse to solutions similar to typical WENO methods when limiting is required. The study concludes that the new HWENO methods are robust and effective when used in the ADER-DT MMFV framework. Finally, these results are intended to demonstrate capability rather than exhaust all possible implementations.

  10. A Simple and Accurate Rate-Driven Infiltration Model

    NASA Astrophysics Data System (ADS)

    Cui, G.; Zhu, J.

    2017-12-01

In this study, we develop a novel Rate-Driven Infiltration Model (RDIMOD) for simulating infiltration into soils. Unlike traditional methods, RDIMOD avoids numerically solving the highly non-linear Richards equation or simply modeling with empirical parameters. RDIMOD employs the infiltration rate as model input to simulate the one-dimensional infiltration process by solving an ordinary differential equation. The model can simulate the evolution of the wetting front, infiltration rate, and cumulative infiltration on any surface slope, including the vertical and horizontal directions. Compared to the results from the Richards equation for both vertical and horizontal infiltration, RDIMOD simply and accurately predicts infiltration processes for any type of soil and soil hydraulic model without numerical difficulty. Given its accuracy, capability, and computational effectiveness and stability, RDIMOD can be used in large-scale hydrologic and land-atmosphere modeling.

  11. The numerical dynamic for highly nonlinear partial differential equations

    NASA Technical Reports Server (NTRS)

    Lafon, A.; Yee, H. C.

    1992-01-01

    Problems associated with the numerical computation of highly nonlinear equations in computational fluid dynamics are set forth and analyzed in terms of the potential ranges of spurious behaviors. A reaction-convection equation with a nonlinear source term is employed to evaluate the effects related to spatial and temporal discretizations. The discretization of the source term is described according to several methods, and the various techniques are shown to have a significant effect on the stability of the spurious solutions. Traditional linearized stability analyses cannot provide the level of confidence required for accurate fluid dynamics computations, and the incorporation of nonlinear analysis is proposed. Nonlinear analysis based on nonlinear dynamical systems complements the conventional linear approach and is valuable in the analysis of hypersonic aerodynamics and combustion phenomena.

  12. Thermodynamic Analysis of Chemically Reacting Mixtures-Comparison of First and Second Order Models.

    PubMed

    Pekař, Miloslav

    2018-01-01

Recently, a method based on non-equilibrium continuum thermodynamics, which derives thermodynamically consistent reaction rate models together with thermodynamic constraints on their parameters, was analyzed using a triangular reaction scheme. The scheme was kinetically of the first order. Here, the analysis is further developed for several first and second order schemes to gain a deeper insight into the thermodynamic consistency of rate equations and the relationships between chemical thermodynamics and kinetics. It is shown that the thermodynamic constraints on the so-called proper rate coefficients are usually simple sign restrictions consistent with the supposed reaction directions. Constraints on the so-called coupling rate coefficients are more complex and weaker. This means more freedom in kinetic coupling between reaction steps in a scheme, i.e., in the kinetic effects of other reactions on the rate of some reaction in a reacting system. When compared with traditional mass-action rate equations, the method allows a reduction in the number of traditional rate constants to be evaluated from data, i.e., a reduction in the dimensionality of the parameter estimation problem. This is due to identifying relationships between mass-action rate constants (relationships which also include thermodynamic equilibrium constants) which have so far been unknown.
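The first-order triangular scheme the analysis starts from can be integrated with ordinary mass-action kinetics; the sketch below also records the detailed-balance (Wegscheider) condition that links the six rate constants, which is the kind of thermodynamic relationship the paper exploits. Rate-constant values are illustrative assumptions:

```python
def simulate_triangle(k, c0=(1.0, 0.0, 0.0), dt=1e-3, steps=20000):
    """Mass-action kinetics of the triangular scheme A<->B, B<->C, C<->A.

    k = (k1f, k1b, k2f, k2b, k3f, k3b). Thermodynamic consistency
    (detailed balance / Wegscheider condition) requires
        (k1f * k2f * k3f) / (k1b * k2b * k3b) == 1,
    so only five of the six constants are independent.
    Simple forward-Euler integration; returns (A, B, C) at the end.
    """
    a, b, c = c0
    k1f, k1b, k2f, k2b, k3f, k3b = k
    for _ in range(steps):
        r1 = k1f * a - k1b * b   # A -> B net rate
        r2 = k2f * b - k2b * c   # B -> C net rate
        r3 = k3f * c - k3b * a   # C -> A net rate
        a += (-r1 + r3) * dt
        b += (r1 - r2) * dt
        c += (r2 - r3) * dt
    return a, b, c
```

With a consistent rate-constant set the long-time concentrations reproduce the equilibrium constants of each step, e.g. B/A -> k1f/k1b.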

  13. Solution Methods for Certain Evolution Equations

    NASA Astrophysics Data System (ADS)

    Vega-Guzman, Jose Manuel

Solution methods for certain linear and nonlinear evolution equations are presented in this dissertation. Emphasis is placed mainly on the analytical treatment of nonautonomous differential equations, which are challenging to solve despite the numerical and symbolic computational software programs available. Ideas from transformation theory are adopted, allowing one to solve the problems under consideration from a non-traditional perspective. First, the Cauchy initial value problem is considered for a class of nonautonomous and inhomogeneous linear diffusion-type equations on the entire real line. Explicit transformations are used to reduce the equations under study to their corresponding standard forms, emphasizing natural relations with certain Riccati- (and/or Ermakov-) type systems. These relations give solvability results for the Cauchy problem of the parabolic equation considered. The superposition principle allows one to solve this problem formally from an unconventional point of view. An eigenfunction expansion approach is also considered for this general evolution equation. Examples considered to corroborate the efficacy of the proposed solution methods include the Fokker-Planck equation, the Black-Scholes model and the one-factor Gaussian Hull-White model. The results obtained in the first part are used to solve the Cauchy initial value problem for a certain inhomogeneous Burgers-type equation. The connection between linear (diffusion-type) and nonlinear (Burgers-type) parabolic equations is stressed in order to establish a strong commutative relation. Traveling wave solutions of a nonautonomous Burgers equation are also investigated. Finally, the minimum-uncertainty squeezed states for quantum harmonic oscillators are constructed explicitly. They are derived by the action of the corresponding maximal kinematical invariance group on the standard ground state solution.
It is shown that the product of the variances attains the required minimum value only at the instances when one variance is a minimum and the other is a maximum, i.e., when the squeezing of one of the variances occurs. Such an explicit construction is possible due to the relation between the diffusion-type equation studied in the first part and the time-dependent Schrödinger equation. A modification of the radiation field operators for squeezed photons in a perfect cavity is also suggested with the help of a nonstandard solution of Heisenberg's equation of motion.

  14. A stochastic hybrid systems based framework for modeling dependent failure processes

    PubMed Central

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods. PMID:28231313
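The two reliability estimates named in the abstract reduce, in the scalar case, to a FOSM estimate from the first two moments and a Markov-inequality lower bound. This is a simplified sketch (Gaussian assumption for the FOSM step; function names are hypothetical, not the authors' SHS implementation):

```python
import math

def fosm_reliability(mean, var, threshold):
    """First Order Second Moment estimate of P(X < threshold).

    The degradation variable X is summarized by its first two moments
    and treated as Gaussian; beta is the reliability index.
    """
    beta = (threshold - mean) / math.sqrt(var)
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))

def markov_lower_bound(mean, threshold):
    """Markov-inequality lower bound on reliability for nonnegative X:
       P(X >= t) <= E[X]/t   =>   P(X < t) >= 1 - E[X]/t."""
    return max(0.0, 1.0 - mean / threshold)
```

The Markov bound needs only the mean, so it is weaker but distribution-free, which is why the paper uses it as a lower bound alongside the FOSM estimate.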

  16. Discrete Variational Approach for Modeling Laser-Plasma Interactions

    NASA Astrophysics Data System (ADS)

    Reyes, J. Paxon; Shadwick, B. A.

    2014-10-01

    The traditional approach for fluid models of laser-plasma interactions begins by approximating fields and derivatives on a grid in space and time, leading to difference equations that are manipulated to create a time-advance algorithm. In contrast, by introducing the spatial discretization at the level of the action, the resulting Euler-Lagrange equations have particular differencing approximations that will exactly satisfy discrete versions of the relevant conservation laws. For example, applying a spatial discretization in the Lagrangian density leads to continuous-time, discrete-space equations and exact energy conservation regardless of the spatial grid resolution. We compare the results of two discrete variational methods using the variational principles from Chen and Sudan and Brizard. Since the fluid system conserves energy and momentum, the relative errors in these conserved quantities are well-motivated physically as figures of merit for a particular method. This work was supported by the U. S. Department of Energy under Contract No. DE-SC0008382 and by the National Science Foundation under Contract No. PHY-1104683.

  17. Probabilistic analysis of wind-induced vibration mitigation of structures by fluid viscous dampers

    NASA Astrophysics Data System (ADS)

    Chen, Jianbing; Zeng, Xiaoshu; Peng, Yongbo

    2017-11-01

    High-rise buildings often suffer from excessively large wind-induced vibrations, and thus vibration control systems may be necessary. Fluid viscous dampers (FVDs) with a nonlinear power-law dependence on velocity are widely employed. With the transition of design methods from traditional frequency-domain approaches to more refined direct time-domain approaches, difficulties in the time integration of these systems sometimes occur. In the present paper, the underlying reason for the difficulty is first revealed by identifying that the equations of motion of high-rise buildings installed with FVDs are sometimes stiff differential equations. An approach effective for stiff differential systems, the backward difference formula (BDF), is then introduced and verified to be effective for the equations of motion of wind-induced vibration-controlled systems. Comparative studies are performed among several methods, including the Newmark method, the KR-alpha method, the energy-based linearization method and the statistical linearization method. Based on the above results, a 20-story steel frame structure is taken as a practical example. In particular, the randomness of the structural parameters and of the wind loading input is emphasized. The extreme values of the responses are examined, showing the effectiveness of the proposed approach and the need for refined probabilistic analysis in the design of wind-induced vibration mitigation systems.
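    The first member of the BDF family is the backward Euler method. A minimal illustration of why such implicit formulas matter for stiff equations of motion, using a scalar linear test equation rather than a damped building model, is:

```python
# stiff linear test equation x'(t) = -lam * x(t); the explicit Euler scheme is
# stable only for h < 2/lam, while backward Euler (BDF1) is stable for any h
lam = 1000.0
h, n = 0.01, 100      # h is 5x the explicit stability limit of 0.002
x_exp = 1.0
x_imp = 1.0
for _ in range(n):
    x_exp = x_exp + h * (-lam * x_exp)   # forward (explicit) Euler: diverges
    x_imp = x_imp / (1.0 + lam * h)      # backward Euler: solve x_new = x + h*(-lam*x_new)
print(abs(x_exp))     # grows by a factor |1 - lam*h| = 9 per step
print(x_imp)          # decays toward 0, like the true solution e^(-lam*t)
```

    The same stability property is what lets BDF integrators take reasonable step sizes on stiff FVD-equipped structural models.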

  18. Using instrumental variables to estimate a Cox's proportional hazards regression subject to additive confounding

    PubMed Central

    Tosteson, Tor D.; Morden, Nancy E.; Stukel, Therese A.; O'Malley, A. James

    2014-01-01

    The estimation of treatment effects is one of the primary goals of statistics in medicine. Estimation based on observational studies is subject to confounding. Statistical methods for controlling bias due to confounding include regression adjustment, propensity scores and inverse probability weighted estimators. These methods require that all confounders are recorded in the data. The method of instrumental variables (IVs) can eliminate bias in observational studies even in the absence of information on confounders. We propose a method for integrating IVs within the framework of Cox's proportional hazards model and demonstrate the conditions under which it recovers the causal effect of treatment. The methodology is based on the approximate orthogonality of an instrument with unobserved confounders among those at risk. We derive an estimator as the solution to an estimating equation that resembles the score equation of the partial likelihood in much the same way as the traditional IV estimator resembles the normal equations. To justify this IV estimator for a Cox model we perform simulations to evaluate its operating characteristics. Finally, we apply the estimator to an observational study of the effect of coronary catheterization on survival. PMID:25506259

  19. Using instrumental variables to estimate a Cox's proportional hazards regression subject to additive confounding.

    PubMed

    MacKenzie, Todd A; Tosteson, Tor D; Morden, Nancy E; Stukel, Therese A; O'Malley, A James

    2014-06-01

    The estimation of treatment effects is one of the primary goals of statistics in medicine. Estimation based on observational studies is subject to confounding. Statistical methods for controlling bias due to confounding include regression adjustment, propensity scores and inverse probability weighted estimators. These methods require that all confounders are recorded in the data. The method of instrumental variables (IVs) can eliminate bias in observational studies even in the absence of information on confounders. We propose a method for integrating IVs within the framework of Cox's proportional hazards model and demonstrate the conditions under which it recovers the causal effect of treatment. The methodology is based on the approximate orthogonality of an instrument with unobserved confounders among those at risk. We derive an estimator as the solution to an estimating equation that resembles the score equation of the partial likelihood in much the same way as the traditional IV estimator resembles the normal equations. To justify this IV estimator for a Cox model we perform simulations to evaluate its operating characteristics. Finally, we apply the estimator to an observational study of the effect of coronary catheterization on survival.
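    The analogy drawn above between the proposed estimating equation and the classical IV normal equations can be illustrated in the simpler linear-model setting (not the Cox model of the paper) with a short simulation; the data-generating process below is purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
u = rng.normal(size=n)                  # unobserved confounder
z = rng.normal(size=n)                  # instrument: drives x, independent of u
x = z + u + rng.normal(size=n)          # "treatment", confounded by u
y = 2.0 * x + u + rng.normal(size=n)    # outcome; true treatment effect is 2

beta_ols = (x @ y) / (x @ x)    # ordinary normal equations: biased upward here
beta_iv = (z @ y) / (z @ x)     # IV analogue of the normal equations: consistent
print(beta_ols, beta_iv)
```

    The OLS slope absorbs the confounder's contribution, while replacing x by the instrument z on one side of the normal equations removes the bias, which is the structural idea the paper carries over to the partial-likelihood score equation.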

  20. Non-intrusive reduced order modeling of nonlinear problems using neural networks

    NASA Astrophysics Data System (ADS)

    Hesthaven, J. S.; Ubbiali, S.

    2018-06-01

    We develop a non-intrusive reduced basis (RB) method for parametrized steady-state partial differential equations (PDEs). The method extracts a reduced basis from a collection of high-fidelity solutions via a proper orthogonal decomposition (POD) and employs artificial neural networks (ANNs), particularly multi-layer perceptrons (MLPs), to accurately approximate the coefficients of the reduced model. The search for the optimal number of neurons and the minimum number of training samples to avoid overfitting is carried out in the offline phase through an automatic routine, relying upon a joint use of Latin hypercube sampling (LHS) and the Levenberg-Marquardt (LM) training algorithm. This guarantees a complete offline-online decoupling, leading to an efficient RB method - referred to as POD-NN - suitable also for general nonlinear problems with a non-affine parametric dependence. Numerical studies are presented for the nonlinear Poisson equation and for driven cavity viscous flows, modeled through the steady incompressible Navier-Stokes equations. Both physical and geometrical parametrizations are considered. Several results confirm the accuracy of the POD-NN method and show the substantial speed-up enabled at the online stage as compared to a traditional RB strategy.
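    A minimal sketch of the offline POD step (reduced basis extraction via SVD of a snapshot matrix) is given below; the parametrized snapshot family is invented for illustration, and the ANN regression of the coefficients is only indicated in a comment, not implemented:

```python
import numpy as np

# hypothetical snapshot matrix: each column is a "high-fidelity" solution of a
# parametrized 1D problem for one parameter value mu
x = np.linspace(0.0, 1.0, 200)
params = np.linspace(0.5, 2.0, 30)
S = np.column_stack([mu * np.sin(np.pi * mu * x) for mu in params])

# offline POD step: SVD of the snapshots, truncated by an energy criterion
U, s, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 1.0 - 1e-10)) + 1
V = U[:, :r]                     # reduced basis

# online step: in POD-NN an MLP would map a new parameter to the coefficients;
# here we simply project an unseen solution to check the basis quality
u_new = 1.23 * np.sin(np.pi * 1.23 * x)
coeffs = V.T @ u_new
err = np.linalg.norm(V @ coeffs - u_new) / np.linalg.norm(u_new)
print(r, err)                    # small basis, small relative projection error
```

    The non-intrusive character comes from the fact that only solution snapshots are needed, never the assembled PDE operators.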

  1. A new method to calculate unsteady particle kinematics and drag coefficient in a subsonic post-shock flow

    NASA Astrophysics Data System (ADS)

    Bordoloi, Ankur D.; Ding, Liuyang; Martinez, Adam A.; Prestridge, Katherine; Adrian, Ronald J.

    2018-07-01

    We introduce a new method (piecewise integrated dynamics equation fit, PIDEF) that uses the particle dynamics equation to determine unsteady kinematics and the drag coefficient (C_D) for a particle in subsonic post-shock flow. The uncertainty of this method is assessed based on simulated trajectories for both quasi-steady and unsteady flow conditions. Traditional piecewise polynomial fitting (PPF) shows high sensitivity to measurement error and to the function used to describe C_D, creating high levels of relative error when applied to unsteady shock-accelerated flows. The PIDEF method provides reduced uncertainty in calculations of unsteady acceleration and drag coefficient for both quasi-steady and unsteady flows. This makes PIDEF preferable to PPF for complex flows where the temporal response of C_D is unknown. We apply PIDEF to experimental measurements of particle trajectories from 8-pulse particle tracking and determine the effect of incident Mach number on the relaxation kinematics and drag coefficient of micron-sized particles.

  2. Gaussian representation of high-intensity focused ultrasound beams.

    PubMed

    Soneson, Joshua E; Myers, Matthew R

    2007-11-01

    A method for fast numerical simulation of high-intensity focused ultrasound beams is derived. The method is based on the frequency-domain representation of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and assumes for each harmonic a Gaussian transverse pressure distribution at all distances from the transducer face. The beamwidths of the harmonics are constrained to vary inversely with the square root of the harmonic number, and as such this method may be viewed as an extension of a quasilinear approximation. The technique is capable of determining pressure or intensity fields of moderately nonlinear high-intensity focused ultrasound beams in water or biological tissue, usually requiring less than a minute of computer time on a modern workstation. Moreover, this method is particularly well suited to high-gain simulations since, unlike traditional finite-difference methods, it is not subject to resolution limitations in the transverse direction. Results are shown to be in reasonable agreement with numerical solutions of the full KZK equation in both tissue and water for moderately nonlinear beams.

  3. Radiation Heat Transfer Between Diffuse-Gray Surfaces Using Higher Order Finite Elements

    NASA Technical Reports Server (NTRS)

    Gould, Dana C.

    2000-01-01

    This paper presents recent work on developing methods for analyzing radiation heat transfer between diffuse-gray surfaces using p-version finite elements. The work was motivated by a thermal analysis of a High Speed Civil Transport (HSCT) wing structure, which showed the importance of radiation heat transfer throughout the structure. The analysis also showed that refining the finite element mesh to accurately capture the temperature distribution on the internal structure led to very large meshes with unacceptably long execution times. Traditional methods for calculating surface-to-surface radiation are based on assumptions that are not appropriate for p-version finite elements. Two methods for determining internal radiation heat transfer are developed for one- and two-dimensional p-version finite elements. In the first method, higher-order elements are divided into a number of sub-elements; traditional methods are used to determine the radiation heat flux along each sub-element, and the result is then mapped back to the parent element. In the second method, the radiation heat transfer equations are numerically integrated over the higher-order element. Comparisons with analytical solutions show that the integration scheme is generally more accurate than the sub-element method. Comparison to results from traditional finite elements shows that a significant reduction in the number of elements in the mesh is possible using higher-order (p-version) finite elements.
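    The two integration strategies described above can be caricatured in one dimension: subdividing an element and applying a low-order rule on each piece, versus applying a higher-order quadrature over the whole element. The integrand below is a smooth stand-in, not an actual radiation view-factor kernel:

```python
import numpy as np

def f(x):
    """Smooth stand-in for a radiation-exchange integrand along an element."""
    return 1.0 / (1.0 + x) ** 2

exact = 0.5    # integral of f over [0, 1]

# strategy 1: divide the element into sub-elements, low-order (midpoint) rule each
n_sub = 4
edges = np.linspace(0.0, 1.0, n_sub + 1)
mids = 0.5 * (edges[:-1] + edges[1:])
sub_val = np.sum(f(mids)) / n_sub

# strategy 2: higher-order Gauss-Legendre quadrature over the whole element
pts, wts = np.polynomial.legendre.leggauss(6)
x = 0.5 * (pts + 1.0)            # map the reference interval [-1, 1] to [0, 1]
gauss_val = 0.5 * np.sum(wts * f(x))

print(abs(sub_val - exact), abs(gauss_val - exact))
```

    For smooth integrands the single higher-order rule is far more accurate per evaluation, mirroring the paper's finding that direct integration over the higher-order element beats the sub-element approach.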

  4. An Overview of Longitudinal Data Analysis Methods for Neurological Research

    PubMed Central

    Locascio, Joseph J.; Atri, Alireza

    2011-01-01

    The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, intended as a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject-level number that indexes changes for each subject and (3) a general linear model approach with a fixed subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed random- and fixed-effects regression models for analyses of most longitudinal clinical studies. Under restrictive situations, or to provide validation, we recommend: (1) repeated-measures analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models. PMID:22203825

  5. First-principles studies on the equation-of-state, thermal-conductivity, and opacity of deuterium-tritium and polystyrene (CH) for inertial confinement fusion applications

    DOE PAGES

    Hu, Suxing; Collins, Lee A.; Goncharov, V. N.; ...

    2016-05-26

    Using first-principles (FP) methods, we have performed ab initio computations of the equation of state (EOS), thermal conductivity, and opacity of deuterium-tritium (DT) over a wide range of densities and temperatures for inertial confinement fusion (ICF) applications. These systematic investigations have recently been expanded to accurately compute the plasma properties of CH ablators under extreme conditions. In particular, the first-principles EOS and thermal-conductivity tables of CH are self-consistently built from such FP calculations, which are benchmarked by experimental measurements. When compared with the traditional models used for these plasma properties in hydrocodes, significant differences have been identified in the warm dense plasma regime. When these FP-calculated properties of DT and CH were used in our hydrodynamic simulations of ICF implosions, we found that the target performance in terms of neutron yield and energy gain can vary by a factor of 2 to 3, relative to traditional model simulations.

  6. Kernel Equating Under the Non-Equivalent Groups With Covariates Design

    PubMed Central

    Bränberg, Kenny

    2015-01-01

    When equating two tests, the traditional approach is to use common test takers and/or common items. Here, the idea is to use variables correlated with the test scores (e.g., school grades and other test scores) as a substitute for common items in a non-equivalent groups with covariates (NEC) design. This is performed in the framework of kernel equating and with an extension of the method developed for post-stratification equating in the non-equivalent groups with anchor test design. Real data from a college admissions test were used to illustrate the use of the design. The equated scores from the NEC design were compared with equated scores from the equivalent group (EG) design, that is, equating with no covariates as well as with equated scores when a constructed anchor test was used. The results indicate that the NEC design can produce lower standard errors compared with an EG design. When covariates were used together with an anchor test, the smallest standard errors were obtained over a large range of test scores. The results obtained, that an EG design equating can be improved by adjusting for differences in test score distributions caused by differences in the distribution of covariates, are useful in practice because not all standardized tests have anchor tests. PMID:29881012

  7. Kernel Equating Under the Non-Equivalent Groups With Covariates Design.

    PubMed

    Wiberg, Marie; Bränberg, Kenny

    2015-07-01

    When equating two tests, the traditional approach is to use common test takers and/or common items. Here, the idea is to use variables correlated with the test scores (e.g., school grades and other test scores) as a substitute for common items in a non-equivalent groups with covariates (NEC) design. This is performed in the framework of kernel equating and with an extension of the method developed for post-stratification equating in the non-equivalent groups with anchor test design. Real data from a college admissions test were used to illustrate the use of the design. The equated scores from the NEC design were compared with equated scores from the equivalent group (EG) design, that is, equating with no covariates as well as with equated scores when a constructed anchor test was used. The results indicate that the NEC design can produce lower standard errors compared with an EG design. When covariates were used together with an anchor test, the smallest standard errors were obtained over a large range of test scores. The results obtained, that an EG design equating can be improved by adjusting for differences in test score distributions caused by differences in the distribution of covariates, are useful in practice because not all standardized tests have anchor tests.
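    A hedged sketch of the kernel equating idea itself (Gaussian continuization of two discrete score distributions followed by equipercentile mapping) is shown below; the score distributions and bandwidth are invented, and none of the NEC-design covariate machinery is included:

```python
import numpy as np
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def continuized_cdf(score_points, probs, h):
    """Gaussian-kernel continuization of a discrete score distribution:
    F(x) = sum_j p_j * Phi((x - x_j) / h)."""
    def F(x):
        return sum(p * phi((x - xj) / h) for xj, p in zip(score_points, probs))
    return F

def equate(x, F_x, F_y, lo, hi, tol=1e-8):
    """Equipercentile equating e(x) = F_y^{-1}(F_x(x)), inverted by bisection."""
    target = F_x(x)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F_y(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# hypothetical score distributions on two 0-5 point test forms
pts = np.arange(6)
px = np.array([0.05, 0.15, 0.30, 0.30, 0.15, 0.05])   # form X
py = np.array([0.10, 0.20, 0.30, 0.25, 0.10, 0.05])   # form Y, slightly harder
Fx = continuized_cdf(pts, px, h=0.6)
Fy = continuized_cdf(pts, py, h=0.6)
y3 = equate(3.0, Fx, Fy, -5.0, 10.0)   # form-Y equivalent of an X score of 3
print(y3)
```

    Because form Y is harder, the same percentile rank corresponds to a lower Y score, so the equated value falls below 3.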

  8. KINETIC-J: A computational kernel for solving the linearized Vlasov equation applied to calculations of the kinetic, configuration space plasma current for time harmonic wave electric fields

    NASA Astrophysics Data System (ADS)

    Green, David L.; Berry, Lee A.; Simpson, Adam B.; Younkin, Timothy R.

    2018-04-01

    We present the KINETIC-J code, a computational kernel for evaluating the linearized Vlasov equation with application to calculating the kinetic plasma response (current) to an applied time-harmonic wave electric field. This code addresses the need for a configuration-space evaluation of the plasma current to enable kinetic full-wave solvers for waves in hot plasmas to move beyond the limitations of the traditional Fourier spectral methods. We benchmark the kernel via comparison with the standard k-space forms of the hot plasma conductivity tensor.

  9. Classical Coset Hamiltonian for the Electronic Motion and its Application to Anderson Localization and Hammett Equation

    NASA Astrophysics Data System (ADS)

    Xing, Guan; Wu, Guo-Zhen

    2001-02-01

    A classical coset Hamiltonian is introduced for the system of one electron in multiple sites. With this Hamiltonian, the dynamical behaviour of the electronic motion can be readily simulated. The simulation reproduces the retardation of the electron density decay in a lattice with randomly distributed site energies, an analogy with Anderson localization. This algorithm is also applied to reproduce the Hammett equation, which relates the reaction rate to the properties of the substituents in organic chemical reactions. The advantages and shortcomings of this algorithm, as contrasted with traditional quantum methods such as molecular orbital theory, are also discussed.

  10. Numerical realization of the variational method for generating self-trapped beams

    NASA Astrophysics Data System (ADS)

    Duque, Erick I.; Lopez-Aguayo, Servando; Malomed, Boris A.

    2018-03-01

    We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.
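    A one-dimensional analogue of the numerical Rayleigh-Ritz idea, with the variational integrals evaluated numerically instead of analytically, can be sketched as follows; it uses a Gaussian trial profile for the 1D focusing nonlinear Schrödinger equation under a fixed-norm constraint (all values hypothetical):

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
N = 2.0                       # fixed L2 norm (beam power); hypothetical value

def energy(w):
    """Hamiltonian of the 1D focusing NLS, H = int(|u'|^2/2 - |u|^4/2) dx,
    for a Gaussian trial profile of width w with amplitude fixed by the norm;
    the integrals are evaluated numerically instead of analytically."""
    A = np.sqrt(N / (w * np.sqrt(np.pi)))
    u = A * np.exp(-x**2 / (2.0 * w**2))
    du = np.gradient(u, x)
    return np.sum(0.5 * du**2 - 0.5 * u**4) * dx

widths = np.linspace(0.2, 5.0, 200)
H = np.array([energy(w) for w in widths])
w_opt = widths[np.argmin(H)]
print(w_opt)                  # a finite optimal width: a self-trapped profile
```

    For this ansatz the continuum minimizer sits at w = sqrt(2*pi)/N, about 1.25 here; numerical evaluation of the integrals is exactly what frees the method from ansätze whose Lagrangian can be integrated in closed form.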

  11. Time-domain least-squares migration using the Gaussian beam summation method

    NASA Astrophysics Data System (ADS)

    Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo

    2018-04-01

    With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modeling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modeling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a preconditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.

  12. Time-domain least-squares migration using the Gaussian beam summation method

    NASA Astrophysics Data System (ADS)

    Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo

    2018-07-01

    With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modelling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modelling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a pre-conditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.

  13. Density Weighted FDF Equations for Simulations of Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2011-01-01

    In this report, we briefly revisit the formulation of the density-weighted filtered density function (DW-FDF) for large eddy simulation (LES) of turbulent reacting flows, which was proposed by Jaberi et al. (Jaberi, F.A., Colucci, P.J., James, S., Givi, P. and Pope, S.B., Filtered mass density function for large-eddy simulation of turbulent reacting flows, J. Fluid Mech., vol. 401, pp. 85-121, 1999). First, we follow the traditional derivation of the DW-FDF equations by using the fine-grained probability density function (FG-PDF); then we explore another way of constructing the DW-FDF equations by starting directly from the compressible Navier-Stokes equations. We observe that the terms which are unclosed in the traditional DW-FDF equations are closed in the newly constructed DW-FDF equations. This significant difference and its practical impact on computational simulations may deserve further study.

  14. A full vectorial generalized discontinuous Galerkin beam propagation method (GDG-BPM) for nonsmooth electromagnetic fields in waveguides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan Kai; Cai Wei; Ji Xia

    2008-07-20

    In this paper, we propose a new full vectorial generalized discontinuous Galerkin beam propagation method (GDG-BPM) to accurately handle the discontinuities in electromagnetic fields associated with wave propagations in inhomogeneous optical waveguides. The numerical method is a combination of the traditional beam propagation method (BPM) with a newly developed generalized discontinuous Galerkin (GDG) method [K. Fan, W. Cai, X. Ji, A generalized discontinuous Galerkin method (GDG) for Schroedinger equations with nonsmooth solutions, J. Comput. Phys. 227 (2008) 2387-2410]. The GDG method is based on a reformulation, using distributional variables to account for solution jumps across material interfaces, of Schroedinger equations resulting from paraxial approximations of vector Helmholtz equations. Four versions of the GDG-BPM are obtained for either the electric or magnetic field components. Modeling of wave propagations in various optical fibers using the full vectorial GDG-BPM is included. Numerical results validate the high order accuracy and the flexibility of the method for various types of interface jump conditions.

  15. Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions

    NASA Astrophysics Data System (ADS)

    Chen, N.; Majda, A.

    2017-12-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulty capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace, obtained via an extremely efficient parametric method, is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. In particular, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. Therefore, it is computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Unlike traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF, so a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, which allows the algorithms to accurately describe intermittency and extreme events in complex turbulent systems. In a stringent set of test problems, the method requires only O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
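    The Gaussian-mixture ingredient of the hybrid strategy can be caricatured in one dimension: with only O(100) ensemble members, an equal-weight Gaussian mixture centered on the members already recovers a normalized, fat-tailed PDF. This is only a kernel-density sketch, not the conditional-Gaussian data assimilation framework of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
members = rng.standard_t(df=3, size=100)   # fat-tailed "ensemble" of O(100) members

def mixture_pdf(x, centers, sigma):
    """Equal-weight Gaussian mixture over the ensemble members; each Gaussian
    covers a finite portion of the support, so few members suffice in 1D."""
    z = (x[:, None] - centers[None, :]) / sigma
    return np.mean(np.exp(-0.5 * z**2), axis=1) / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-15.0, 15.0, 1501)
pdf = mixture_pdf(x, members, sigma=0.5)
total = np.sum(pdf) * (x[1] - x[0])
print(total)   # close to 1: a normalized, fat-tailed density estimate
```

    In the paper's setting the mixture components are not fixed-bandwidth kernels but conditional Gaussians supplied by closed-form data assimilation formulae, which is what makes the approach scale to high dimensions.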

  16. Supporting second grade lower secondary school students’ understanding of linear equation system in two variables using ethnomathematics

    NASA Astrophysics Data System (ADS)

    Nursyahidah, F.; Saputro, B. A.; Rubowo, M. R.

    2018-03-01

    The aim of this research is to investigate students' understanding of systems of linear equations in two variables using Ethnomathematics, and to obtain a learning trajectory for this topic for second-grade lower secondary school students. This research used the design research methodology, which consists of three phases: preliminary design, teaching experiment, and retrospective analysis. The subjects of this study were 28 second-grade students of Sekolah Menengah Pertama (SMP) 37 Semarang. The results show that students' understanding of systems of linear equations in two variables can be stimulated by using Ethnomathematics, with the buying and selling tradition of the Peterongan traditional market in Central Java as a context. The strategies and models applied by the students, together with their discussions of results, show how students' own constructions and contributions help them understand the concept. The activities carried out by the students produce a learning trajectory toward the learning goal, and each step of the trajectory plays an important role in moving understanding from the informal to the formal level. The resulting Ethnomathematics-based learning trajectory consists of watching a video of buying and selling activity in the Peterongan traditional market to construct a linear equation in two variables, determining the solution of a linear equation in two variables, constructing a model of a system of linear equations in two variables from a contextual problem, and solving a contextual problem related to systems of linear equations in two variables.

  17. Population density equations for stochastic processes with memory kernels

    NASA Astrophysics Data System (ADS)

    Lai, Yi Ming; de Kamps, Marc

    2017-06-01

    We present a method for solving population density equations (PDEs), a mean-field technique describing homogeneous populations of uncoupled neurons, where the populations can be subject to non-Markov noise for arbitrary distributions of jump sizes. The method combines recent developments in two different disciplines that traditionally have had limited interaction: computational neuroscience and the theory of random networks. The method uses a geometric binning scheme, based on the method of characteristics, to capture the deterministic neurodynamics of the population, separating the deterministic and stochastic processes cleanly. We can independently vary the choice of the deterministic model and the model for the stochastic process, leading to a highly modular numerical solution strategy. We demonstrate this by replacing the master equation implicit in many formulations of the PDE formalism by a generalization called the generalized Montroll-Weiss equation, a recent result from random network theory describing a random walker subject to transitions realized by a non-Markovian process. We demonstrate the method for leaky and quadratic integrate-and-fire neurons subject to spike trains with Poisson and gamma-distributed interspike intervals. We are able to model jump responses for both models accurately for both excitatory and inhibitory input under the assumption that all inputs are generated by one renewal process.
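    The two input statistics mentioned above, Poisson versus gamma-distributed interspike intervals, differ in the regularity of the resulting renewal process; a quick simulated check of their coefficients of variation (with hypothetical rate and shape values) is:

```python
import numpy as np

rng = np.random.default_rng(2)
rate = 10.0        # hypothetical firing rate (spikes/s)
n = 50000          # number of interspike intervals per train

# Poisson spike train: exponential interspike intervals (ISIs), CV = 1
isi_poisson = rng.exponential(1.0 / rate, size=n)

# gamma renewal process, shape k: same mean rate but more regular, CV = 1/sqrt(k)
k = 4.0
isi_gamma = rng.gamma(k, 1.0 / (k * rate), size=n)

cv_poisson = isi_poisson.std() / isi_poisson.mean()
cv_gamma = isi_gamma.std() / isi_gamma.mean()
print(cv_poisson, cv_gamma)   # near 1.0 and 0.5
```

    Only the exponential case is memoryless; the gamma case is exactly the kind of non-Markovian renewal input the generalized Montroll-Weiss formulation is designed to handle.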

  18. Delay differential equations via the matrix Lambert W function and bifurcation analysis: application to machine tool chatter.

    PubMed

    Yi, Sun; Nelson, Patrick W; Ulsoy, A Galip

    2007-04-01

    In a turning process modeled using delay differential equations (DDEs), we investigate the stability of the regenerative machine tool chatter problem. An approach using the matrix Lambert W function for the analytical solution to systems of delay differential equations is applied to this problem and compared with the result obtained using a bifurcation analysis. The Lambert W function, known to be useful for solving scalar first-order DDEs, has recently been extended to a matrix Lambert W function approach to solve systems of DDEs. The essential advantages of the matrix Lambert W approach are not only the similarity to the concept of the state transition matrix in linear ordinary differential equations, enabling its use for general classes of linear delay differential equations, but also the observation that we need only the principal branch among an infinite number of roots to determine the stability of a system of DDEs. The bifurcation method combined with Sturm sequences provides an algorithm for determining the stability of DDEs without restrictive geometric analysis. With this approach, one can obtain the critical values of delay, which determine the stability of a system and hence the preferred operating spindle speed without chatter. We apply both the matrix Lambert W function and the bifurcation analysis approach to the problem of chatter stability in turning, and compare the results obtained to existing methods. The two new approaches show excellent accuracy and certain other advantages, when compared to traditional graphical, computational and approximate methods.
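    For the scalar first-order DDE x'(t) = a x(t) + b x(t - tau), the characteristic roots can be written with the Lambert W function, and, as noted above, the principal branch determines stability. A hedged sketch using SciPy's `lambertw` with hypothetical coefficients:

```python
import numpy as np
from scipy.special import lambertw

def rightmost_root(a, b, tau):
    """Characteristic root of x'(t) = a x(t) + b x(t - tau) from the principal
    (k = 0) branch of the Lambert W function; for scalar DDEs this branch
    gives the rightmost root, which determines stability."""
    return a + lambertw(b * tau * np.exp(-a * tau), k=0) / tau

s_stable = rightmost_root(a=-1.0, b=-0.5, tau=1.0)    # hypothetical coefficients
s_unstable = rightmost_root(a=0.5, b=0.2, tau=1.0)
print(s_stable.real, s_unstable.real)   # negative -> stable, positive -> unstable
```

    Each returned root satisfies the characteristic equation s = a + b e^(-s tau); the matrix extension used in the paper replaces the scalar W evaluation with a matrix-valued one.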

  19. Technological pedagogical content knowledge of junior high school mathematics teachers in teaching linear equation

    NASA Astrophysics Data System (ADS)

    Wati, S.; Fitriana, L.; Mardiyana

    2018-04-01

    Linear equations are one of the topics in mathematics that are considered difficult. Students' difficulties in understanding linear equations can be caused by a lack of understanding of the concept and by the way teachers teach. TPACK is a way to understand the complex relationships between teaching and the content taught through the use of specific teaching approaches, supported by the right technology tools. This study aims to identify the TPACK of junior high school mathematics teachers in teaching linear equations. The method used in the study was descriptive. In the first phase, a survey using a questionnaire was carried out on 45 junior high school mathematics teachers; in the second phase, three teachers were interviewed. The data were analyzed with quantitative and qualitative techniques. The PCK results revealed that teachers emphasized developing procedural and conceptual knowledge through reliance on traditional approaches to teaching linear equations. The TPK results revealed the teachers' limited capacity to address general information and communications technology goals across the curriculum. The TCK results indicated that PowerPoint is the teachers' main technological capability for teaching linear equations. Overall, the TPACK results suggest a low level of technological skill across a variety of mathematics education goals, meaning that teachers' TPACK in teaching linear equations still needs to be improved.

  20. Research on Standard Errors of Equating Differences. Research Report. ETS RR-10-25

    ERIC Educational Resources Information Center

    Moses, Tim; Zhang, Wenmin

    2010-01-01

    In this paper, the "standard error of equating difference" (SEED) is described in terms of originally proposed kernel equating functions (von Davier, Holland, & Thayer, 2004) and extended to incorporate traditional linear and equipercentile functions. These derivations expand on prior developments of SEEDs and standard errors of equating and…

  1. Electromagnetic field computation at fractal dimensions

    NASA Astrophysics Data System (ADS)

    Zubair, M.; Ang, Y. S.; Ang, L. K.

    According to Mandelbrot's work on fractals, many objects have fractional dimensions for which traditional calculus and differential equations are not sufficient. Thus fractional models solving the relevant differential equations are critical to understanding the physical dynamics of such objects. In this work, we develop computational electromagnetics, i.e., Maxwell's equations, in fractional dimensions. For a given degree of imperfection, impurity, roughness, anisotropy or inhomogeneity, we consider that the complicated object can be formulated as a fractional-dimensional continuous object characterized by an effective fractional dimension D, which can be calculated from a self-developed algorithm. With this non-integer value of D, we develop computational methods to design and analyze EM scattering problems involving rough surfaces or irregularities in an efficient framework. The fractional-dimensional electromagnetic model can be extended to other key differential equations such as the Schrodinger or Dirac equations, which will be useful for the design of novel 2D materials stacked in complicated device configurations for applications in electronics and photonics. This work is supported by Singapore Temasek Laboratories (TL) Seed Grant (IGDS S16 02 05 1).

  2. Assimilating concentration observations for transport and dispersion modeling in a meandering wind field

    NASA Astrophysics Data System (ADS)

    Haupt, Sue Ellen; Beyer-Lout, Anke; Long, Kerrie J.; Young, George S.

    Assimilating concentration data into an atmospheric transport and dispersion model can provide information to improve downwind concentration forecasts. The forecast model is typically a one-way coupled set of equations: the meteorological equations impact the concentration, but the concentration does not generally affect the meteorological field. Thus, indirect methods of using concentration data to influence the meteorological variables are required. The problem studied here involves a simple wind field forcing Gaussian dispersion. Two methods of assimilating concentration data to infer the wind direction are demonstrated. The first method is Lagrangian in nature and treats the puff as an entity using feature extraction coupled with nudging. The second method is an Eulerian field approach akin to traditional variational approaches, but minimizes the error by using a genetic algorithm (GA) to directly optimize the match between observations and predictions. Both methods show success at inferring the wind field. The GA-variational method, however, is more accurate but requires more computational time. Dynamic assimilation of a continuous release modeled by a Gaussian plume is also demonstrated using the genetic algorithm approach.
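
    The second (variational-style) assimilation idea can be sketched as a search over wind direction that minimizes the mismatch between observed and predicted plume concentrations. In this toy sketch a deterministic grid search stands in for the paper's genetic algorithm, and the plume model, receptor locations, and spread rate are illustrative assumptions:

```python
import numpy as np

def plume(theta, pts, spread=0.1):
    """Ground-level concentration of a toy Gaussian plume blowing in direction theta."""
    d = pts[:, 0] * np.cos(theta) + pts[:, 1] * np.sin(theta)   # downwind distance
    c = -pts[:, 0] * np.sin(theta) + pts[:, 1] * np.cos(theta)  # crosswind offset
    dd = np.where(d > 0, d, np.inf)       # mask receptors upwind of the source
    sigma = spread * dd                   # plume spread grows with downwind distance
    return np.exp(-c**2 / (2 * sigma**2)) / dd

pts = np.array([[1.0, 0.1], [2.0, -0.2], [3.0, 0.3], [1.5, 0.0]])  # receptor locations
theta_true = 0.3
obs = plume(theta_true, pts)             # synthetic concentration observations

# Variational-style inversion: choose the direction minimizing the data mismatch
grid = np.linspace(-np.pi, np.pi, 2001)
costs = [np.sum((plume(t, pts) - obs) ** 2) for t in grid]
theta_est = grid[int(np.argmin(costs))]
```

    With noise-free synthetic observations the mismatch vanishes at the true direction, so the search recovers it to within the grid resolution; the paper's GA performs the same minimization without an exhaustive grid.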

  3. Fast 3D elastic micro-seismic source location using new GPU features

    NASA Astrophysics Data System (ADS)

    Xue, Qingfeng; Wang, Yibo; Chang, Xu

    2016-12-01

    In this paper, we describe new GPU features and their applications in passive seismic (micro-seismic) event location. Locating micro-seismic events is quite important in seismic exploration, especially when searching for unconventional oil and gas resources. Unlike traditional ray-based methods, wave-equation methods, such as the one used in this paper, have a remarkable advantage in adapting to low signal-to-noise-ratio conditions and do not require manual data selection. However, because of their high computational cost, these methods are not widely used in industry. To make the method practical, we implement imaging-like wave-equation micro-seismic location in 3D elastic media and use GPUs to accelerate our algorithm. We also introduce some new GPU features into the implementation to solve the data-transfer and GPU-utilization problems. Numerical and field data experiments show that our method can achieve more than a 30% performance improvement in the GPU implementation just by using these new features.

  4. Discretization of the induced-charge boundary integral equation.

    PubMed

    Bardhan, Jaydeep P; Eisenberg, Robert S; Gillespie, Dirk

    2009-07-01

    Boundary-element methods (BEMs) for solving integral equations numerically have been used in many fields to compute the induced charges at dielectric boundaries. In this paper, we consider a more accurate implementation of BEM in the context of ions in aqueous solution near proteins, but our results are applicable more generally. The ions that modulate protein function are often within a few angstroms of the protein, which leads to the significant accumulation of polarization charge at the protein-solvent interface. Computing the induced charge accurately and quickly poses a numerical challenge in solving a popular integral equation using BEM. In particular, the accuracy of simulations can depend strongly on seemingly minor details of how the entries of the BEM matrix are calculated. We demonstrate that when the dielectric interface is discretized into flat tiles, the qualocation method of Tausch [IEEE Trans. Comput.-Aided Des. 20, 1398 (2001)] to compute the BEM matrix elements is always more accurate than the traditional centroid-collocation method. Qualocation is not more expensive to implement than collocation and can save significant computational time by reducing the number of boundary elements needed to discretize the dielectric interfaces.

  5. Discretization of the induced-charge boundary integral equation

    NASA Astrophysics Data System (ADS)

    Bardhan, Jaydeep P.; Eisenberg, Robert S.; Gillespie, Dirk

    2009-07-01

    Boundary-element methods (BEMs) for solving integral equations numerically have been used in many fields to compute the induced charges at dielectric boundaries. In this paper, we consider a more accurate implementation of BEM in the context of ions in aqueous solution near proteins, but our results are applicable more generally. The ions that modulate protein function are often within a few angstroms of the protein, which leads to the significant accumulation of polarization charge at the protein-solvent interface. Computing the induced charge accurately and quickly poses a numerical challenge in solving a popular integral equation using BEM. In particular, the accuracy of simulations can depend strongly on seemingly minor details of how the entries of the BEM matrix are calculated. We demonstrate that when the dielectric interface is discretized into flat tiles, the qualocation method of Tausch [IEEE Trans. Comput.-Aided Des. 20, 1398 (2001)] to compute the BEM matrix elements is always more accurate than the traditional centroid-collocation method. Qualocation is not more expensive to implement than collocation and can save significant computational time by reducing the number of boundary elements needed to discretize the dielectric interfaces.

  6. A mathematical method for precisely calculating the radiographic angles of the cup after total hip arthroplasty.

    PubMed

    Zhao, Jing-Xin; Su, Xiu-Yun; Xiao, Ruo-Xiu; Zhao, Zhe; Zhang, Li-Hai; Zhang, Li-Cheng; Tang, Pei-Fu

    2016-11-01

    We established a mathematical method to precisely calculate the radiographic anteversion (RA) and radiographic inclination (RI) angles of the acetabular cup based on anterior-posterior (AP) pelvic radiographs after total hip arthroplasty. Using Mathematica software, a mathematical model of an oblique cone was established to simulate how AP pelvic radiographs are obtained and to address the relationship between the two-dimensional and three-dimensional geometry of the opening circle of the cup. In this model, the vertex was the X-ray beam source, and the generatrix was the ellipse in the radiograph projected from the opening circle of the acetabular cup. Using this model, we established a series of mathematical formulas to reveal the differences between the true RA and RI cup angles and the measurement results achieved with traditional methods on AP pelvic radiographs, and to precisely calculate the RA and RI cup angles based on post-operative AP pelvic radiographs. Statistical analysis indicated that traditional measurement methods should be used with caution when calculating the RA and RI cup angles from AP pelvic radiographs. The entire calculation process could be performed by an orthopedic surgeon with mathematical knowledge of basic matrix and vector equations. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
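
    For contrast with the paper's oblique-cone model, the traditional planar approximation reads the cup angles directly off the projected ellipse: RI is taken as the tilt of the ellipse's long axis relative to the inter-teardrop line, and RA ≈ arcsin(short axis / long axis). A minimal sketch of that traditional calculation (illustrative values; this is the approximation the authors correct, not their method):

```python
import math

def traditional_cup_angles(long_axis, short_axis, long_axis_tilt_deg):
    """Traditional planar estimate: RA from the ellipse axis ratio, RI from the axis tilt."""
    ra = math.degrees(math.asin(short_axis / long_axis))
    ri = long_axis_tilt_deg
    return ra, ri

# Ellipse axes in mm, tilt in degrees (hypothetical measurements)
ra, ri = traditional_cup_angles(50.0, 25.0, 45.0)  # axis ratio 0.5 -> RA = 30 degrees
```

    The paper's point is that this planar reading ignores the divergence of the X-ray beam (the oblique cone), which biases both angles.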

  7. Bukhvostov-Lipatov model and quantum-classical duality

    NASA Astrophysics Data System (ADS)

    Bazhanov, Vladimir V.; Lukyanov, Sergei L.; Runov, Boris A.

    2018-02-01

    The Bukhvostov-Lipatov model is an exactly soluble model of two interacting Dirac fermions in 1 + 1 dimensions. The model describes weakly interacting instantons and anti-instantons in the O (3) non-linear sigma model. In our previous work [arXiv:1607.04839] we have proposed an exact formula for the vacuum energy of the Bukhvostov-Lipatov model in terms of special solutions of the classical sinh-Gordon equation, which can be viewed as an example of a remarkable duality between integrable quantum field theories and integrable classical field theories in two dimensions. Here we present a complete derivation of this duality based on the classical inverse scattering transform method, traditional Bethe ansatz techniques and analytic theory of ordinary differential equations. In particular, we show that the Bethe ansatz equations defining the vacuum state of the quantum theory also define connection coefficients of an auxiliary linear problem for the classical sinh-Gordon equation. Moreover, we also present details of the derivation of the non-linear integral equations determining the vacuum energy and other spectral characteristics of the model in the case when the vacuum state is filled by 2-string solutions of the Bethe ansatz equations.

  8. A combined finite element-boundary integral formulation for solution of two-dimensional scattering problems via CGFFT. [Conjugate Gradient Fast Fourier Transformation

    NASA Technical Reports Server (NTRS)

    Collins, Jeffery D.; Volakis, John L.; Jin, Jian-Ming

    1990-01-01

    A new technique is presented for computing the scattering by 2-D structures of arbitrary composition. The proposed solution approach combines the usual finite element method with the boundary-integral equation to formulate a discrete system. This is subsequently solved via the conjugate gradient (CG) algorithm. A particular characteristic of the method is the use of rectangular boundaries to enclose the scatterer. Several of the resulting boundary integrals are therefore convolutions and may be evaluated via the fast Fourier transform (FFT) in the implementation of the CG algorithm. The solution approach offers the principal advantage of having O(N) memory demand and employs a 1-D FFT versus a 2-D FFT as required with a traditional implementation of the CGFFT algorithm. The speed of the proposed solution method is compared with that of the traditional CGFFT algorithm, and results for rectangular bodies are given and shown to be in excellent agreement with the moment method.
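
    The reason the rectangular enclosure pays off is that convolution-type boundary integrals can be applied through FFTs inside each CG iteration instead of through dense matrix-vector products. A minimal sketch of that core identity (generic random data, not the paper's integral kernels):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
kernel = rng.standard_normal(n)   # discretized convolution kernel (illustrative)
x = rng.standard_normal(n)        # current CG iterate (illustrative)

# Direct circular convolution: O(n^2) work
direct = np.array([sum(kernel[(i - j) % n] * x[j] for j in range(n)) for i in range(n)])

# Same matrix-vector product via the FFT: O(n log n) work
via_fft = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(x)))
```

    Both routes give the same vector, which is why a 1-D FFT per CG iteration suffices once the boundary integrals take convolutional form.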

  9. A combined finite element and boundary integral formulation for solution via CGFFT of 2-dimensional scattering problems

    NASA Technical Reports Server (NTRS)

    Collins, Jeffery D.; Volakis, John L.

    1989-01-01

    A new technique is presented for computing the scattering by 2-D structures of arbitrary composition. The proposed solution approach combines the usual finite element method with the boundary integral equation to formulate a discrete system. This is subsequently solved via the conjugate gradient (CG) algorithm. A particular characteristic of the method is the use of rectangular boundaries to enclose the scatterer. Several of the resulting boundary integrals are therefore convolutions and may be evaluated via the fast Fourier transform (FFT) in the implementation of the CG algorithm. The solution approach offers the principal advantage of having O(N) memory demand and employs a 1-D FFT versus a 2-D FFT as required with a traditional implementation of the CGFFT algorithm. The speed of the proposed solution method is compared with that of the traditional CGFFT algorithm, and results for rectangular bodies are given and shown to be in excellent agreement with the moment method.

  10. Energy/dissipation-preserving Birkhoffian multi-symplectic methods for Maxwell's equations with dissipation terms

    DOE PAGES

    Su, Hongling; Li, Shengtai

    2016-02-03

    In this study, we propose two new energy/dissipation-preserving Birkhoffian multi-symplectic methods (Birkhoffian and Birkhoffian box) for Maxwell's equations with dissipation terms. After investigating the non-autonomous and autonomous Birkhoffian formalism for Maxwell's equations with dissipation terms, we first apply a novel generating functional theory to the non-autonomous Birkhoffian formalism to propose our Birkhoffian scheme, and then implement a central box method to the autonomous Birkhoffian formalism to derive the Birkhoffian box scheme. We have obtained four formal local conservation laws and three formal energy global conservation laws. We have also proved that both of our derived schemes preserve the discrete version of the global/local conservation laws. Furthermore, the stability, dissipation and dispersion relations are also investigated for the schemes. Theoretical analysis shows that the schemes are unconditionally stable, dissipation-preserving for Maxwell's equations in a perfectly matched layer (PML) medium and have second order accuracy in both time and space. Numerical experiments for problems with exact theoretical results are given to demonstrate that the Birkhoffian multi-symplectic schemes are much more accurate in preserving energy than both the exponential finite-difference time-domain (FDTD) method and traditional Hamiltonian scheme. Finally, we also solve the electromagnetic pulse (EMP) propagation problem and the numerical results show that the Birkhoffian scheme recovers the magnitude of the current source and reaction history very well even after long time propagation.

  11. Energy/dissipation-preserving Birkhoffian multi-symplectic methods for Maxwell's equations with dissipation terms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Hongling; Li, Shengtai

    In this study, we propose two new energy/dissipation-preserving Birkhoffian multi-symplectic methods (Birkhoffian and Birkhoffian box) for Maxwell's equations with dissipation terms. After investigating the non-autonomous and autonomous Birkhoffian formalism for Maxwell's equations with dissipation terms, we first apply a novel generating functional theory to the non-autonomous Birkhoffian formalism to propose our Birkhoffian scheme, and then implement a central box method to the autonomous Birkhoffian formalism to derive the Birkhoffian box scheme. We have obtained four formal local conservation laws and three formal energy global conservation laws. We have also proved that both of our derived schemes preserve the discrete version of the global/local conservation laws. Furthermore, the stability, dissipation and dispersion relations are also investigated for the schemes. Theoretical analysis shows that the schemes are unconditionally stable, dissipation-preserving for Maxwell's equations in a perfectly matched layer (PML) medium and have second order accuracy in both time and space. Numerical experiments for problems with exact theoretical results are given to demonstrate that the Birkhoffian multi-symplectic schemes are much more accurate in preserving energy than both the exponential finite-difference time-domain (FDTD) method and traditional Hamiltonian scheme. Finally, we also solve the electromagnetic pulse (EMP) propagation problem and the numerical results show that the Birkhoffian scheme recovers the magnitude of the current source and reaction history very well even after long time propagation.

  12. Cardiovascular risk assessment: addition of CKD and race to the Framingham equation

    PubMed Central

    Drawz, Paul E.; Baraniuk, Sarah; Davis, Barry R.; Brown, Clinton D.; Colon, Pedro J.; Cujyet, Aloysius B.; Dart, Richard A.; Graumlich, James F.; Henriquez, Mario A.; Moloo, Jamaluddin; Sakalayen, Mohammed G.; Simmons, Debra L.; Stanford, Carol; Sweeney, Mary Ellen; Wong, Nathan D.; Rahman, Mahboob

    2012-01-01

    Background/Aims The value of the Framingham equation in predicting cardiovascular risk in African Americans and patients with chronic kidney disease (CKD) is unclear. The purpose of the study was to evaluate whether the addition of CKD and race to the Framingham equation improves risk stratification in hypertensive patients. Methods Participants in the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT) were studied. Those randomized to doxazosin, age greater than 74 years, and those with a history of coronary heart disease (CHD) were excluded. Two risk stratification models were developed using Cox proportional hazards models in a two-thirds developmental sample. The first model included the traditional Framingham risk factors. The second model included the traditional risk factors plus CKD, defined by eGFR categories, and stratification by race (Black vs. Non-Black). The primary outcome was a composite of fatal CHD, nonfatal MI, coronary revascularization, and hospitalized angina. Results There were a total of 19,811 eligible subjects. In the validation cohort, there was no difference in C-statistics between the Framingham equation and the ALLHAT model including CKD and race. This was consistent across subgroups by race and gender and among those with CKD. One exception was among Non-Black women where the C-statistic was higher for the Framingham equation (0.68 vs 0.65, P=0.02). Additionally, net reclassification improvement was not significant for any subgroup based on race and gender, ranging from −5.5% to 4.4%. Conclusion The addition of CKD status and stratification by race does not improve risk prediction in high-risk hypertensive patients. PMID:23194494

  13. Brownian dynamics without Green's functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delong, Steven; Donev, Aleksandar, E-mail: donev@courant.nyu.edu; Usabiaga, Florencio Balboa

    2014-04-07

    We develop a Fluctuating Immersed Boundary (FIB) method for performing Brownian dynamics simulations of confined particle suspensions. Unlike traditional methods which employ analytical Green's functions for Stokes flow in the confined geometry, the FIB method uses a fluctuating finite-volume Stokes solver to generate the action of the response functions “on the fly.” Importantly, we demonstrate that both the deterministic terms necessary to capture the hydrodynamic interactions among the suspended particles, as well as the stochastic terms necessary to generate the hydrodynamically correlated Brownian motion, can be generated by solving the steady Stokes equations numerically only once per time step. This is accomplished by including a stochastic contribution to the stress tensor in the fluid equations consistent with fluctuating hydrodynamics. We develop novel temporal integrators that account for the multiplicative nature of the noise in the equations of Brownian dynamics and the strong dependence of the mobility on the configuration for confined systems. Notably, we propose a random finite difference approach to approximating the stochastic drift proportional to the divergence of the configuration-dependent mobility matrix. Through comparisons with analytical and existing computational results, we numerically demonstrate the ability of the FIB method to accurately capture both the static (equilibrium) and dynamic properties of interacting particles in flow.
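
    The random finite difference (RFD) idea can be illustrated in one dimension: for a configuration-dependent mobility M(q), the drift term proportional to dM/dq is recovered as the average of (M(q + δW) − M(q))·W/δ over standard normal increments W, without ever forming the derivative analytically. A small Monte Carlo sketch with an assumed toy mobility M(q) = q² (not the paper's Stokes-solver mobility):

```python
import numpy as np

rng = np.random.default_rng(0)

def rfd_drift(M, q, delta=1e-4, n=200_000):
    """Random-finite-difference estimate of dM/dq at q (1-D toy version)."""
    W = rng.standard_normal(n)
    return np.mean((M(q + delta * W) - M(q)) * W) / delta

q = 1.5
est = rfd_drift(lambda x: x**2, q)   # exact derivative: 2*q = 3.0
```

    Expanding M(q + δW) shows the estimator's mean is M'(q)·E[W²] = M'(q) up to O(δ) bias, which is the property the FIB integrators exploit in matrix form.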

  14. A T Matrix Method Based upon Scalar Basis Functions

    NASA Technical Reports Server (NTRS)

    Mackowski, D.W.; Kahnert, F. M.; Mishchenko, Michael I.

    2013-01-01

    A surface integral formulation is developed for the T matrix of a homogeneous and isotropic particle of arbitrary shape, which employs scalar basis functions represented by the translation matrix elements of the vector spherical wave functions. The formulation begins with the volume integral equation for scattering by the particle, which is transformed so that the vector and dyadic components in the equation are replaced with associated dipole and multipole level scalar harmonic wave functions. The approach leads to a volume integral formulation for the T matrix, which can be extended, by use of Green's identities, to the surface integral formulation. The result is shown to be equivalent to the traditional surface integral formulas based on the VSWF basis.

  15. A Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) Determined from Phased Microphone Arrays

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Humphreys, William M.

    2006-01-01

    Current processing of acoustic array data is burdened with considerable uncertainty. This study reports an original methodology that serves to demystify array results, reduce misinterpretation, and accurately quantify position and strength of acoustic sources. Traditional array results represent noise sources that are convolved with array beamform response functions, which depend on array geometry, size (with respect to source position and distributions), and frequency. The Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) method removes beamforming characteristics from output presentations. A unique linear system of equations accounts for reciprocal influence at different locations over the array survey region. It makes no assumption beyond the traditional processing assumption of statistically independent noise sources. The full rank equations are solved with a new robust iterative method. DAMAS is quantitatively validated using archival data from a variety of prior high-lift airframe component noise studies, including flap edge/cove, trailing edge, leading edge, slat, and calibration sources. Presentations are explicit and straightforward, as the noise radiated from a region of interest is determined by simply summing the mean-squared values over that region. DAMAS can fully replace existing array processing and presentations methodology in most applications. It appears to dramatically increase the value of arrays to the field of experimental acoustics.
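
    The linear system the abstract describes is Y = A X, where Y is the beamform map, A is the array point-spread matrix, and X holds the true source strengths; DAMAS solves it with a Gauss-Seidel-style sweep under a non-negativity constraint. A toy sketch with an assumed 3×3 point-spread matrix, not data from the paper:

```python
import numpy as np

A = np.array([[1.0, 0.2, 0.1],      # assumed point-spread matrix: row n is the
              [0.2, 1.0, 0.2],      # beamform response at location n to a unit
              [0.1, 0.2, 1.0]])     # source at location m
x_true = np.array([2.0, 0.0, 1.0])  # true mean-squared source strengths
Y = A @ x_true                      # simulated beamform map

x = np.zeros_like(x_true)
for _ in range(200):                # DAMAS-style non-negative Gauss-Seidel sweeps
    for n in range(len(x)):
        residual = Y[n] - A[n] @ x + A[n, n] * x[n]  # remove other sources' leakage
        x[n] = max(0.0, residual / A[n, n])
```

    Once converged, summing x over a region of interest gives the radiated noise from that region, which is the straightforward presentation the abstract highlights.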

  16. A consistent hierarchy of generalized kinetic equation approximations to the master equation applied to surface catalysis.

    PubMed

    Herschlag, Gregory J; Mitran, Sorin; Lin, Guang

    2015-06-21

    We develop a hierarchy of approximations to the master equation for systems that exhibit translational invariance and finite-range spatial correlation. Each approximation within the hierarchy is a set of ordinary differential equations that considers spatial correlations of varying lattice distance; the assumption is that the full system will have finite spatial correlations and thus the behavior of the models within the hierarchy will approach that of the full system. We provide evidence of this convergence in the context of one- and two-dimensional numerical examples. Lower levels within the hierarchy that consider shorter spatial correlations are shown to be up to three orders of magnitude faster than traditional kinetic Monte Carlo methods (KMC) for one-dimensional systems, while predicting similar system dynamics and steady states as KMC methods. We then test the hierarchy on a two-dimensional model for the oxidation of CO on RuO2(110), showing that low-order truncations of the hierarchy efficiently capture the essential system dynamics. By considering sequences of models in the hierarchy that account for longer spatial correlations, successive model predictions may be used to establish empirical error estimates. The hierarchy may be thought of as a class of generalized phenomenological kinetic models, since each element of the hierarchy approximates the master equation and the lowest level in the hierarchy is identical to a simple existing phenomenological kinetic model.

  17. Improved Linear Algebra Methods for Redshift Computation from Limited Spectrum Data - II

    NASA Technical Reports Server (NTRS)

    Foster, Leslie; Waagen, Alex; Aijaz, Nabella; Hurley, Michael; Luis, Apolo; Rinsky, Joel; Satyavolu, Chandrika; Gazis, Paul; Srivastava, Ashok; Way, Michael

    2008-01-01

    Given photometric broadband measurements of a galaxy, Gaussian processes may be used with a training set to solve the regression problem of approximating the redshift of this galaxy. However, in practice solving the traditional Gaussian processes equation is too slow and requires too much memory. We employed several methods to avoid this difficulty using algebraic manipulation and low-rank approximation, and were able to quickly approximate the redshifts in our testing data within 17 percent of the known true values using limited computational resources. The accuracy of one method, the V Formulation, is comparable to the accuracy of the best methods currently used for this problem.
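
    The underlying regression problem is standard Gaussian-process prediction: given training inputs X with targets y and a covariance kernel k, the predictive mean at x* is k(x*, X) K⁻¹ y, and it is the K⁻¹ solve on a large training set that becomes too slow and memory-hungry. A tiny dense-solve sketch (toy 1-D data and an RBF kernel; the paper's V Formulation and low-rank tricks replace this direct solve):

```python
import numpy as np

def rbf(X1, X2, ell=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

X = np.linspace(0.0, 5.0, 50)            # toy training inputs
y = np.sin(X)                            # toy training targets
K = rbf(X, X) + 1e-6 * np.eye(len(X))    # kernel matrix plus jitter for conditioning

Xs = np.array([2.5])                     # query point
mu = rbf(Xs, X) @ np.linalg.solve(K, y)  # predictive mean: an O(n^3) dense solve
```

    The dense solve costs O(n³) time and O(n²) memory in the training-set size n, which is exactly the bottleneck the low-rank approximations in the paper are designed to avoid.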

  18. Final Report - Subcontract B623760

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bank, R.

    2017-11-17

    During my visit to LLNL during July 17-27, 2017, I worked on linear system solvers. The two-level hierarchical solver that initiated our study was developed to solve linear systems arising from hp-adaptive finite element calculations, and is implemented in the PLTMG software package, version 12. This preconditioner typically requires 3-20% of the space used by the stiffness matrix for higher order elements. It has multigrid-like convergence rates for a wide variety of PDEs (self-adjoint positive definite elliptic equations, convection-dominated convection-diffusion equations, and highly indefinite Helmholtz equations, among others). The convergence rate is not independent of the polynomial degree p as p → ∞, but remains strong for p ≤ 9, which is the highest polynomial degree allowed in PLTMG, due to limitations of the numerical quadrature rules implemented in the software package. A more complete description of the method and some numerical experiments illustrating its effectiveness appear in. Like traditional geometric multilevel methods, this scheme relies on knowledge of the underlying finite element space in order to construct the smoother and the coarse grid correction.

  19. Numerical realization of the variational method for generating self-trapped beams.

    PubMed

    Duque, Erick I; Lopez-Aguayo, Servando; Malomed, Boris A

    2018-03-19

    We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating the robustness of beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.

  20. Groundwater Source Identification Using Backward Fractional-Derivative Models

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Sun, H.; Zheng, C.

    2017-12-01

    The forward Fractional Advection Dispersion Equation (FADE) provides a useful model for non-Fickian transport in heterogeneous porous media. This presentation introduces the corresponding backward FADE model to identify groundwater source location and release time. The backward method is developed from the theory of inverse problems, and the resultant backward FADE differs significantly from the traditional backward ADE because the fractional derivative is not self-adjoint and the probability density function for backward locations is highly skewed. Finally, the method is validated using tracer data from well-known field experiments.

  1. A new method to calculate unsteady particle kinematics and drag coefficient in a subsonic post-shock flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bordoloi, Ankur D.; Ding, Liuyang; Martinez, Adam A.

    In this paper, we introduce a new method (piecewise integrated dynamics equation fit, PIDEF) that uses the particle dynamics equation to determine unsteady kinematics and drag coefficient (C_D) for a particle in subsonic post-shock flow. The uncertainty of this method is assessed based on simulated trajectories for both quasi-steady and unsteady flow conditions. Traditional piecewise polynomial fitting (PPF) shows high sensitivity to measurement error and the function used to describe C_D, creating high levels of relative error (>>1) when applied to unsteady shock-accelerated flows. The PIDEF method provides reduced uncertainty in calculations of unsteady acceleration and drag coefficient for both quasi-steady and unsteady flows. This makes PIDEF a preferable method over PPF for complex flows where the temporal response of C_D is unknown. Finally, we apply PIDEF to experimental measurements of particle trajectories from 8-pulse particle tracking and determine the effect of incident Mach number on relaxation kinematics and drag coefficient of micron-sized particles.
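
    The piecewise-polynomial-fitting (PPF) baseline that PIDEF improves on can be sketched as fitting a polynomial to a tracked position segment and differentiating it twice for acceleration; with clean data the recovery is exact, and the paper's point is that measurement noise amplifies badly through this double differentiation. A minimal sketch with an assumed quadratic toy trajectory (not the experimental data):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 20)       # frame times of a tracked particle (toy)
x = 1.0 + 2.0 * t + 3.0 * t**2      # assumed trajectory: true acceleration = 6.0

coeffs = np.polyfit(t, x, 2)        # polynomial fit over one trajectory segment
accel = 2.0 * coeffs[0]             # second derivative of the fitted polynomial
```

    PIDEF instead fits the particle dynamics equation itself to the trajectory, so the drag coefficient enters the fit directly rather than through a differentiated polynomial.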

  2. A new method to calculate unsteady particle kinematics and drag coefficient in a subsonic post-shock flow

    DOE PAGES

    Bordoloi, Ankur D.; Ding, Liuyang; Martinez, Adam A.; ...

    2018-04-26

    In this paper, we introduce a new method (piecewise integrated dynamics equation fit, PIDEF) that uses the particle dynamics equation to determine unsteady kinematics and drag coefficient (C_D) for a particle in subsonic post-shock flow. The uncertainty of this method is assessed based on simulated trajectories for both quasi-steady and unsteady flow conditions. Traditional piecewise polynomial fitting (PPF) shows high sensitivity to measurement error and the function used to describe C_D, creating high levels of relative error (>>1) when applied to unsteady shock-accelerated flows. The PIDEF method provides reduced uncertainty in calculations of unsteady acceleration and drag coefficient for both quasi-steady and unsteady flows. This makes PIDEF a preferable method over PPF for complex flows where the temporal response of C_D is unknown. Finally, we apply PIDEF to experimental measurements of particle trajectories from 8-pulse particle tracking and determine the effect of incident Mach number on relaxation kinematics and drag coefficient of micron-sized particles.

  3. Local polynomial chaos expansion for linear differential equations with high dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yi; Jakeman, John; Gittelson, Claude

    2015-01-08

    In this paper we present a localized polynomial chaos expansion for partial differential equations (PDEs) with random inputs. In particular, we focus on time-independent linear stochastic problems with high-dimensional random inputs, where traditional polynomial chaos methods, and most existing methods, incur prohibitively high simulation cost. The local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problems are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low-dimensional local problems and can be highly efficient. We present the general mathematical framework of the methodology and use numerical examples to demonstrate the properties of the method.

  4. Analyzing Axial Stress and Deformation of Tubular for Steam Injection Process in Deviated Wells Based on the Varied (T, P) Fields

    PubMed Central

    Liu, Yunqiang; Xu, Jiuping; Wang, Shize; Qi, Bin

    2013-01-01

    The axial stress and deformation of tubular strings in high-temperature, high-pressure deviated gas wells are studied. A new model, a system of multiple nonlinear equations, is built by comprehensively considering the axial load of the tubular string, the internal and external fluid pressure, the normal pressure between the tubing and the well wall, and the friction and viscous drag of the flowing fluid. The varied temperature and pressure fields are determined from coupled differential equations for mass, momentum, and energy rather than by traditional methods. The axial load, normal pressure, friction, and four deformation lengths of the tubular string are obtained by means of a dimensionless iterative interpolation algorithm. The basic data of the X Well, 1300 meters deep, are used for case-history calculations. The results and conclusions can provide technical support for designing well tests in oil and gas wells. PMID:24163623

  5. Some aspects of steam-water flow simulation in geothermal wells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shulyupin, Alexander N.

    1996-01-24

    Key aspects of steam-water flow simulation in geothermal wells are considered: the required quality of a simulator, flow regimes, the mass conservation equation, the momentum conservation equation, the energy conservation equation, and the condition equations. Shortcomings of the traditional hydraulic approach are noted, the main questions of simulator development under the hydraulic approach are considered, and the new possibilities opened by employing the structure approach are outlined.

  6. Effect of Differential Item Functioning on Test Equating

    ERIC Educational Resources Information Center

    Kabasakal, Kübra Atalay; Kelecioglu, Hülya

    2015-01-01

    This study examines the effect of differential item functioning (DIF) items on test equating through multilevel item response models (MIRMs) and traditional IRMs. The performances of three different equating models were investigated under 24 different simulation conditions, and the variables whose effects were examined included sample size, test…

  7. [Fast optimization of stepwise gradient conditions for ternary mobile phase in reversed-phase high performance liquid chromatography].

    PubMed

    Shan, Yi-chu; Zhang, Yu-kui; Zhao, Rui-huan

    2002-07-01

    In high performance liquid chromatography, multi-composition gradient elution is necessary for the separation of complex samples such as environmental and biological samples. Multivariate stepwise gradient elution is one of the most efficient elution modes, because it combines the high selectivity of a multi-composition mobile phase with the shorter analysis time of gradient elution. In practice, the separation selectivity can be adjusted effectively by using a ternary mobile phase. To optimize these parameters, the retention equation of each solute must first be obtained. Traditionally, several isocratic experiments are used to determine the retention equation, but this is time-consuming, especially for complex samples with a wide range of polarity. A new method for the fast optimization of ternary stepwise gradient elution was proposed based on the migration rule of the solute in the column. First, the coefficients of the retention equation of each solute are obtained by running several linear gradient experiments; then the optimal separation conditions are searched according to a hierarchical chromatography response function, which serves as the optimization criterion. For each kind of organic modifier, two initial linear gradient experiments are used to obtain the primary coefficients of the retention equation of each solute, so for a ternary mobile phase only four linear gradient runs are needed. The retention times of the solutes under an arbitrary mobile phase composition can then be predicted, and the initial optimal mobile phase composition is obtained by resolution mapping over all solutes. In subsequent optimization, the migrating distance of the solute in the column is used to decide the mobile phase composition and duration of the later steps until all the solutes are eluted, yielding the first predicted stepwise gradient elution conditions. If the resolution under the predicted conditions is satisfactory, the optimization procedure stops; otherwise, the coefficients of the retention equation are adjusted according to the experimental results under the previously predicted conditions, and new stepwise gradient conditions are predicted repeatedly until satisfactory resolution is obtained. Normally, satisfactory separation conditions can be found after only six experiments with the proposed method, greatly reducing the time needed for the optimization procedure compared with the traditional method. The method has been validated by application to the separation of samples such as amino acid derivatives and aromatic amines, for which satisfactory separations were obtained with the predicted resolution.
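
    The retention-equation step above can be illustrated with the common linear solvent-strength model, log10(k) = log10(kw) − S·φ, where φ is the organic-modifier fraction. The sketch below is only schematic: the numbers are invented, and the paper estimates the coefficients from gradient runs rather than the isocratic data used here.

```python
import numpy as np

# Linear solvent-strength (LSS) retention model: log10(k) = log10(kw) - S*phi.
# Two measurements suffice to solve for the two coefficients per solute.
phi = np.array([0.3, 0.5])           # modifier fractions (hypothetical)
logk = np.array([1.4, 0.6])          # measured log retention factors (hypothetical)

# Least-squares solve of log_kw - S*phi = logk
A = np.column_stack([np.ones_like(phi), -phi])
log_kw, S = np.linalg.lstsq(A, logk, rcond=None)[0]

def predict_logk(phi_new):
    """Predict retention at an arbitrary mobile phase composition."""
    return log_kw - S * phi_new
```

    With the coefficients in hand, retention (and hence resolution) can be predicted for any candidate step of the gradient, which is what makes the optimization search cheap.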

  8. Character expansion methods for matrix models of dually weighted graphs

    NASA Astrophysics Data System (ADS)

    Kazakov, Vladimir A.; Staudacher, Matthias; Wynter, Thomas

    1996-04-01

    We consider generalized one-matrix models in which external fields allow control over the coordination numbers on both the original and dual lattices. We rederive in a simple fashion a character expansion formula for these models originally due to Itzykson and Di Francesco, and then demonstrate how to take the large N limit of this expansion. The relationship to the usual matrix model resolvent is elucidated. As a by-product, our methods give an extremely simple derivation of the Migdal integral equation describing the large N limit of the Itzykson-Zuber formula. We illustrate and check our methods by analysing a number of models solvable by traditional means. We then proceed to solve a new model: a sum over planar graphs possessing even coordination numbers on both the original and the dual lattice. We conclude by formulating the equations for the case of arbitrary sets of even, self-dual coupling constants. This opens the way for studying the deep problems of phase transitions from random to flat lattices.

  9. Constraint reasoning in deep biomedical models.

    PubMed

    Cruz, Jorge; Barahona, Pedro

    2005-05-01

    Deep biomedical models are often expressed by means of differential equations. Despite their expressive power, such models are difficult to reason about and to base decisions on, given their non-linearity and the important effects that uncertainty in the data may cause. The objective of this work is to propose a constraint reasoning framework to support safe decisions based on deep biomedical models. Our approach combines generic constraint propagation techniques, which reduce the bounds of uncertainty of the numerical variables, with new constraint reasoning techniques that we developed to handle differential equations. The results of our approach are illustrated on biomedical models for the diagnosis of diabetes, the tuning of drug design, and epidemiology, where it proved a valuable decision-support tool notwithstanding the uncertainty in the data. The main conclusion is that, in biomedical decision support, constraint reasoning may be a worthwhile alternative to traditional simulation methods, especially when safe decisions are required.
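
    A toy version of the bound-narrowing idea (not the authors' framework, which handles differential equations): propagating a single algebraic constraint x + y = s over interval bounds tightens each variable's range from the others'.

```python
def narrow_sum(x, y, s):
    """Tighten interval bounds (lo, hi) under the constraint x + y = s.
    A toy instance of constraint propagation over numeric intervals."""
    (xl, xh), (yl, yh), (sl, sh) = x, y, s
    # x must lie in [sl - yh, sh - yl]; intersect with its current bounds
    xl, xh = max(xl, sl - yh), min(xh, sh - yl)
    # then narrow y using the updated x bounds
    yl, yh = max(yl, sl - xh), min(yh, sh - xl)
    return (xl, xh), (yl, yh)

# x in [0, 10], y in [4, 6], constrained so that x + y = 7 exactly
x, y = narrow_sum((0.0, 10.0), (4.0, 6.0), (7.0, 7.0))
```

    Repeating such narrowing steps to a fixed point is the generic propagation the abstract refers to; the paper's contribution is extending it to differential-equation constraints.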

  10. Structural equation modeling in pediatric psychology: overview and review of applications.

    PubMed

    Nelson, Timothy D; Aylward, Brandon S; Steele, Ric G

    2008-08-01

    To describe the use of structural equation modeling (SEM) in the Journal of Pediatric Psychology (JPP) and to discuss the usefulness of SEM applications in pediatric psychology research. The use of SEM in JPP between 1997 and 2006 was examined and compared to leading journals in clinical psychology, clinical child psychology, and child development. SEM techniques were used in <4% of the empirical articles appearing in JPP between 1997 and 2006. SEM was used less frequently in JPP than in other clinically relevant journals over the past 10 years. However, results indicated a recent increase in JPP studies employing SEM techniques. SEM is an under-utilized class of techniques within pediatric psychology research, although investigations employing these methods are becoming more prevalent. Despite its infrequent use to date, SEM is a potentially useful tool for advancing pediatric psychology research with a number of advantages over traditional statistical methods.

  11. A compatible high-order meshless method for the Stokes equations with applications to suspension flows

    NASA Astrophysics Data System (ADS)

    Trask, Nathaniel; Maxey, Martin; Hu, Xiaozhe

    2018-02-01

    A stable numerical solution of the steady Stokes problem requires compatibility between the choice of velocity and pressure approximation that has traditionally proven problematic for meshless methods. In this work, we present a discretization that couples a staggered scheme for pressure approximation with a divergence-free velocity reconstruction to obtain an adaptive, high-order, finite difference-like discretization that can be efficiently solved with conventional algebraic multigrid techniques. We use analytic benchmarks to demonstrate equal-order convergence for both velocity and pressure when solving problems with curvilinear geometries. In order to study problems in dense suspensions, we couple the solution for the flow to the equations of motion for freely suspended particles in an implicit monolithic scheme. The combination of high-order accuracy with fully-implicit schemes allows the accurate resolution of stiff lubrication forces directly from the solution of the Stokes problem without the need to introduce sub-grid lubrication models.

  12. The solution of radiative transfer problems in molecular bands without the LTE assumption by accelerated lambda iteration methods

    NASA Technical Reports Server (NTRS)

    Kutepov, A. A.; Kunze, D.; Hummer, D. G.; Rybicki, G. B.

    1991-01-01

    An iterative method based on the use of approximate transfer operators, which was designed initially to solve multilevel NLTE line formation problems in stellar atmospheres, is adapted and applied to the solution of the NLTE molecular band radiative transfer in planetary atmospheres. The matrices to be constructed and inverted are much smaller than those used in the traditional Curtis matrix technique, which makes possible the treatment of more realistic problems using relatively small computers. This technique converges much more rapidly than straightforward iteration between the transfer equation and the equations of statistical equilibrium. A test application of this new technique to the solution of NLTE radiative transfer problems for optically thick and thin bands (the 4.3 micron CO2 band in the Venusian atmosphere and the 4.7 and 2.3 micron CO bands in the earth's atmosphere) is described.

  13. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    NASA Astrophysics Data System (ADS)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-07-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
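
    The parallelism argument can be seen in a toy setting. Below, a damped-Jacobi sweep (the simplest degree-1 polynomial smoother) needs only matrix-vector products, while a Gauss-Seidel sweep updates entries sequentially within the sweep; the 1-D Poisson matrix and all parameters are illustrative, not from the paper.

```python
import numpy as np

# 1-D Poisson problem A u = f on n interior points
n = 31
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.zeros(n)   # zero RHS, so u itself is the error

def jacobi_poly_smooth(u, nsweeps=10, omega=2.0 / 3.0):
    """Damped Jacobi, a degree-1 polynomial smoother: each sweep is one
    matrix-vector product, so it parallelizes trivially."""
    Dinv = 1.0 / np.diag(A)
    for _ in range(nsweeps):
        u = u + omega * Dinv * (f - A @ u)
    return u

def gauss_seidel_smooth(u, nsweeps=10):
    """Gauss-Seidel: each update uses already-updated neighbors, making the
    sweep inherently sequential (the parallel-efficiency problem)."""
    u = u.copy()
    for _ in range(nsweeps):
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            u[i] = (f[i] + left + right) / 2.0
    return u

rng = np.random.default_rng(0)
u0 = rng.standard_normal(n)      # rough (high-frequency-rich) initial error
err_jac = np.linalg.norm(jacobi_poly_smooth(u0))
err_gs = np.linalg.norm(gauss_seidel_smooth(u0))
```

    Both sweeps damp the error, but only the polynomial smoother does so purely with matrix-vector products, which is the property the paper exploits at scale.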

  14. Score Estimating Equations from Embedded Likelihood Functions under Accelerated Failure Time Model

    PubMed Central

    NING, JING; QIN, JING; SHEN, YU

    2014-01-01

    The semiparametric accelerated failure time (AFT) model is one of the most popular models for analyzing time-to-event outcomes. One appealing feature of the AFT model is that the observed failure time data can be transformed to independent and identically distributed random variables without covariate effects. We describe a class of estimating equations based on the score functions for the transformed data, which are derived from the full likelihood function under commonly used semiparametric models such as the proportional hazards or proportional odds model. The methods of estimating regression parameters under the AFT model can be applied to traditional right-censored survival data as well as more complex time-to-event data subject to length-biased sampling. We establish the asymptotic properties and evaluate the small-sample performance of the proposed estimators, and we illustrate the proposed methods through two example applications. PMID:25663727

  15. Improving runoff risk estimates: Formulating runoff as a bivariate process using the SCS curve number method

    NASA Astrophysics Data System (ADS)

    Shaw, Stephen B.; Walter, M. Todd

    2009-03-01

    The Soil Conservation Service curve number (SCS-CN) method is widely used to predict storm runoff for hydraulic design purposes, such as sizing culverts and detention basins. As traditionally used, the probability of calculated runoff is equated to the probability of the causative rainfall event, an assumption that fails to account for the influence of variations in soil moisture on runoff generation. We propose a modification to the SCS-CN method that explicitly incorporates rainfall return periods and the frequency of different soil moisture states to quantify storm runoff risks. Soil moisture status is assumed to be correlated to stream base flow. Fundamentally, this approach treats runoff as the outcome of a bivariate process instead of dictating a 1:1 relationship between causative rainfall and resulting runoff volumes. Using data from the Fall Creek watershed in western New York and the headwaters of the French Broad River in the mountains of North Carolina, we show that our modified SCS-CN method improves frequency discharge predictions in medium-sized watersheds in the eastern United States in comparison to the traditional application of the method.
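
    For reference, the traditional computation the authors modify is the standard curve number equation, Q = (P − Ia)² / (P − Ia + S) with S = 1000/CN − 10 and Ia = 0.2·S (all in inches). The sketch below shows how antecedent wetness, proxied here simply by choosing a higher CN, changes runoff for the same storm; the numeric values are illustrative only.

```python
def scs_runoff(P, CN, ia_ratio=0.2):
    """Storm runoff Q (inches) from rainfall P (inches) via the SCS
    curve number equation: S = 1000/CN - 10, Ia = ia_ratio * S,
    Q = (P - Ia)^2 / (P - Ia + S) when P > Ia, else 0."""
    S = 1000.0 / CN - 10.0
    Ia = ia_ratio * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

# A wetter antecedent state acts like a higher CN (smaller S), so the same
# 3-inch storm yields more runoff -- the soil-moisture dependence the
# bivariate formulation treats as a second random variable.
q_dry = scs_runoff(3.0, CN=65)
q_wet = scs_runoff(3.0, CN=83)
```

    Treating the pair (rainfall, soil-moisture state) as jointly random, rather than mapping the rainfall return period directly onto a runoff return period, is the modification the abstract describes.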

  16. Determination of arsenic in traditional Chinese medicine by microwave digestion with flow injection-inductively coupled plasma mass spectrometry (FI-ICP-MS).

    PubMed

    Ong, E S; Yong, Y L; Woo, S O

    1999-01-01

    A simple, rapid, and sensitive method with high sample throughput was developed for determining arsenic in traditional Chinese medicine (TCM) in the form of uncoated tablets, sugar-coated tablets, black pills, capsules, powders, and syrups. The method involves microwave digestion with flow injection-inductively coupled plasma mass spectrometry (FI-ICP-MS). Method precision was 2.7-10.1% (relative standard deviation, n = 6) for different concentrations of arsenic in different TCM samples analyzed by different analysts on different days. Method accuracy was checked with a certified reference material (sea lettuce, Ulva lactuca, BCR CRM 279) for external calibration and by spiking arsenic standard into different TCMs. Recoveries of 89-92% were obtained for the certified reference material and higher than 95% for spiked TCMs. Matrix interference was insignificant for samples analyzed by the method of standard addition. Hence, no correction equation was used in the analysis of arsenic in the samples studied. Sample preparation using microwave digestion gave results that were very similar to those obtained by conventional wet acid digestion using nitric acid.

  17. Chemical Continuous Time Random Walks

    NASA Astrophysics Data System (ADS)

    Aquino, T.; Dentz, M.

    2017-12-01

    Traditional methods for modeling solute transport through heterogeneous media employ Eulerian schemes to solve for solute concentration. More recently, Lagrangian methods have removed the need for spatial discretization through the use of Monte Carlo implementations of Langevin equations for solute particle motions. While there have been recent advances in modeling chemically reactive transport with recourse to Lagrangian methods, these remain less developed than their Eulerian counterparts, and many open problems such as efficient convergence and reconstruction of the concentration field remain. We explore a different avenue and consider the question: In heterogeneous chemically reactive systems, is it possible to describe the evolution of macroscopic reactant concentrations without explicitly resolving the spatial transport? Traditional Kinetic Monte Carlo methods, such as the Gillespie algorithm, model chemical reactions as random walks in particle number space, without the introduction of spatial coordinates. The inter-reaction times are exponentially distributed under the assumption that the system is well mixed. In real systems, transport limitations lead to incomplete mixing and decreased reaction efficiency. We introduce an arbitrary inter-reaction time distribution, which may account for the impact of incomplete mixing. This process defines an inhomogeneous continuous time random walk in particle number space, from which we derive a generalized chemical Master equation and formulate a generalized Gillespie algorithm. We then determine the modified chemical rate laws for different inter-reaction time distributions. We trace Michaelis-Menten-type kinetics back to finite-mean delay times, and predict time-nonlocal macroscopic reaction kinetics as a consequence of broadly distributed delays. Non-Markovian kinetics exhibit weak ergodicity breaking and show key features of reactions under local non-equilibrium.
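
    A minimal sketch of the generalization, assuming a single decay reaction A → B: the classical Gillespie algorithm draws exponential inter-reaction times, and the only change needed is to swap in an arbitrary waiting-time sampler. The `delayed` sampler below is a made-up example standing in for an incomplete-mixing delay, not the paper's distribution.

```python
import random

def generalized_gillespie(n_a, waiting_time, t_max):
    """Kinetic Monte Carlo for A -> B with a pluggable inter-reaction time
    sampler waiting_time(rate), instead of the classical exponential draw."""
    t, history = 0.0, [(0.0, n_a)]
    while n_a > 0:
        rate = 1.0 * n_a                 # propensity with unit rate constant
        t += waiting_time(rate)
        if t > t_max:
            break
        n_a -= 1
        history.append((t, n_a))
    return history

exponential = lambda rate: random.expovariate(rate)        # classical Gillespie
delayed = lambda rate: random.expovariate(rate) + 0.1 / rate  # hypothetical delay

random.seed(1)
hist = generalized_gillespie(100, exponential, t_max=50.0)
hist_delayed = generalized_gillespie(100, delayed, t_max=50.0)
```

    Replacing the exponential by a broadly distributed delay is what produces the time-nonlocal, Michaelis-Menten-like kinetics discussed in the abstract.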

  18. Fast-forward Langevin dynamics with momentum flips

    NASA Astrophysics Data System (ADS)

    Hijazi, Mahdi; Wilkins, David M.; Ceriotti, Michele

    2018-05-01

    Stochastic thermostats based on the Langevin equation, in which a system is coupled to an external heat bath, are popular methods for temperature control in molecular dynamics simulations due to their ergodicity and their ease of implementation. Traditionally, these thermostats suffer from sluggish behavior in the limit of high friction, unlike thermostats of the Nosé-Hoover family whose performance degrades more gently in the strong coupling regime. We propose a simple and easy-to-implement modification to the integration scheme of the Langevin algorithm that addresses the fundamental source of the overdamped behavior of high-friction Langevin dynamics: if the action of the thermostat causes the momentum of a particle to change direction, it is flipped back. This fast-forward Langevin equation preserves the momentum distribution and so guarantees the correct equilibrium sampling. It mimics the quadratic behavior of Nosé-Hoover thermostats and displays similarly good performance in the strong coupling limit. We test the efficiency of this scheme by applying it to a 1-dimensional harmonic oscillator, as well as to water and Lennard-Jones polymers. The sampling efficiency of the fast-forward Langevin equation thermostat, measured by the correlation time of relevant system variables, is at least as good as the traditional Langevin thermostat, and in the overdamped regime, the fast-forward thermostat performs much better, improving the efficiency by an order of magnitude at the highest frictions we considered.
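
    The momentum-flip modification acts only on the stochastic part of the Langevin update. Below is a sketch of that step alone, using the exact Ornstein-Uhlenbeck update for the noise; parameter values are arbitrary and the rest of the integrator (the deterministic drift steps) is omitted.

```python
import numpy as np

def ffl_o_step(p, friction, dt, kT, mass, rng):
    """Stochastic (thermostat) part of a Langevin step with the fast-forward
    modification: if the random kick reverses a momentum's sign, flip it
    back. Magnitudes are untouched, so the Maxwell-Boltzmann momentum
    distribution is preserved."""
    c1 = np.exp(-friction * dt)
    c2 = np.sqrt((1.0 - c1 ** 2) * kT * mass)
    p_new = c1 * p + c2 * rng.standard_normal(p.shape)
    flipped = np.sign(p_new) != np.sign(p)   # did the noise reverse the direction?
    return np.where(flipped, -p_new, p_new)

rng = np.random.default_rng(0)
p = np.ones(100000)                          # all momenta initially positive
p_out = ffl_o_step(p, friction=50.0, dt=0.1, kT=1.0, mass=1.0, rng=rng)
```

    Even in this strongly overdamped regime (friction·dt = 5), no momentum changes direction, which is what lets the particle keep "fast-forwarding" instead of diffusing.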

  19. Bidirectional Elastic Image Registration Using B-Spline Affine Transformation

    PubMed Central

    Gu, Suicheng; Meng, Xin; Sciurba, Frank C.; Wang, Chen; Kaminski, Naftali; Pu, Jiantao

    2014-01-01

    A registration scheme termed as B-spline affine transformation (BSAT) is presented in this study to elastically align two images. We define an affine transformation instead of the traditional translation at each control point. Mathematically, BSAT is a generalized form of the affine transformation and the traditional B-Spline transformation (BST). In order to improve the performance of the iterative closest point (ICP) method in registering two homologous shapes but with large deformation, a bi-directional instead of the traditional unidirectional objective / cost function is proposed. In implementation, the objective function is formulated as a sparse linear equation problem, and a sub-division strategy is used to achieve a reasonable efficiency in registration. The performance of the developed scheme was assessed using both two-dimensional (2D) synthesized dataset and three-dimensional (3D) volumetric computed tomography (CT) data. Our experiments showed that the proposed B-spline affine model could obtain reasonable registration accuracy. PMID:24530210

  20. Multigrid methods with space–time concurrency

    DOE PAGES

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.; ...

    2017-10-06

    Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.

  2. Complex basis functions for molecular resonances: Methodology and applications

    NASA Astrophysics Data System (ADS)

    White, Alec; McCurdy, C. William; Head-Gordon, Martin

    The computation of positions and widths of metastable electronic states is a challenge for molecular electronic structure theory because, in addition to the difficulty of the many-body problem, such states obey scattering boundary conditions. These resonances cannot be addressed with naïve application of traditional bound state electronic structure theory. Non-Hermitian electronic structure methods employing complex basis functions are one way that we may rigorously treat resonances within the framework of traditional electronic structure theory. In this talk, I will discuss our recent work in this area including the methodological extension from single determinant SCF-based approaches to highly correlated levels of wavefunction-based theory such as equation of motion coupled cluster and many-body perturbation theory. These approaches provide a hierarchy of theoretical methods for the computation of positions and widths of molecular resonances. Within this framework, we may also examine properties of resonances including the dependence of these parameters on molecular geometry. Some applications of these methods to temporary anions and dianions will also be discussed.

  3. General Equation Set Solver for Compressible and Incompressible Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Sondak, Douglas L.; Dorney, Daniel J.

    2002-01-01

    Turbomachines for propulsion applications operate with many different working fluids and flow conditions. The flow may be incompressible, such as in the liquid hydrogen pump in a rocket engine, or supersonic, such as in the turbine which may drive the hydrogen pump. Separate codes have traditionally been used for incompressible and compressible flow solvers. The General Equation Set (GES) method can be used to solve both incompressible and compressible flows, and it is not restricted to perfect gases, as are many compressible-flow turbomachinery solvers. An unsteady GES turbomachinery flow solver has been developed and applied to both air and water flows through turbines. It has been shown to be an excellent alternative to maintaining two separate codes.

  4. Development of Advanced Methods of Structural and Trajectory Analysis for Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.; Windhorst, Robert; Phillips, James

    1998-01-01

    This paper develops a near-optimal guidance law for generating minimum fuel, time, or cost fixed-range trajectories for supersonic transport aircraft. The approach uses a choice of new state variables along with singular perturbation techniques to time-scale decouple the dynamic equations into multiple equations of single order (second order for the fast dynamics). Application of the maximum principle to each of the decoupled equations, as opposed to application to the original coupled equations, avoids the two-point boundary-value problem and transforms the problem from one functional optimization into multiple function optimizations. It is shown that such an approach produces well-known aircraft performance results such as maximizing the Breguet factor for minimum fuel consumption and the energy climb path. Furthermore, the new state variables produce a consistent calculation of flight path angle along the trajectory, eliminating one of the deficiencies in the traditional energy state approximation. In addition, jumps in the energy climb path are smoothed out by integration of the original dynamic equations at constant load factor. Numerical results performed for a supersonic transport design show that a pushover dive followed by a pullout at nominal load factors are sufficient maneuvers to smooth the jump.
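
    The Breguet factor mentioned above comes from the Breguet range equation for cruise, R = (V/sfc)·(L/D)·ln(Wi/Wf); maximizing V·(L/D)/sfc minimizes fuel burned over a fixed range. A quick sketch with invented numbers:

```python
import math

def breguet_range(V, sfc, LD, W_initial, W_final):
    """Breguet range equation for cruise: R = (V / sfc) * (L/D) * ln(Wi/Wf).
    V: cruise speed, sfc: thrust-specific fuel consumption (units consistent
    with V), LD: lift-to-drag ratio, W: aircraft weights. The factor
    V * (L/D) / sfc is the 'Breguet factor' the guidance law maximizes."""
    return (V / sfc) * LD * math.log(W_initial / W_final)

# Hypothetical supersonic-cruise numbers, for illustration only
R = breguet_range(V=590.0, sfc=1.2, LD=9.0, W_initial=300000.0, W_final=240000.0)
```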

  5. Symmetric and arbitrarily high-order Birkhoff-Hermite time integrators and their long-time behaviour for solving nonlinear Klein-Gordon equations

    NASA Astrophysics Data System (ADS)

    Liu, Changying; Iserles, Arieh; Wu, Xinyuan

    2018-03-01

    The Klein-Gordon equation with nonlinear potential occurs in a wide range of application areas in science and engineering. Its computation represents a major challenge. The main theme of this paper is the construction of symmetric and arbitrarily high-order time integrators for the nonlinear Klein-Gordon equation by integrating Birkhoff-Hermite interpolation polynomials. To this end, under the assumption of periodic boundary conditions, we begin with the formulation of the nonlinear Klein-Gordon equation as an abstract second-order ordinary differential equation (ODE) and its operator-variation-of-constants formula. We then derive a symmetric and arbitrarily high-order Birkhoff-Hermite time integration formula for the nonlinear abstract ODE. Accordingly, the stability, convergence and long-time behaviour are rigorously analysed once the spatial differential operator is approximated by an appropriate positive semi-definite matrix, subject to suitable temporal and spatial smoothness. A remarkable characteristic of this new approach is that the requirement of temporal smoothness is reduced compared with the traditional numerical methods for PDEs in the literature. Numerical results demonstrate the advantage and efficiency of our time integrators in comparison with the existing numerical approaches.

  6. An iterative particle filter approach for coupled hydro-geophysical inversion of a controlled infiltration experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manoli, Gabriele, E-mail: manoli@dmsa.unipd.it; Nicholas School of the Environment, Duke University, Durham, NC 27708; Rossi, Matteo

    The modeling of unsaturated groundwater flow is affected by a high degree of uncertainty related to both measurement and model errors. Geophysical methods such as Electrical Resistivity Tomography (ERT) can provide useful indirect information on the hydrological processes occurring in the vadose zone. In this paper, we propose and test an iterated particle filter method to solve the coupled hydrogeophysical inverse problem. We focus on an infiltration test monitored by time-lapse ERT and modeled using Richards equation. The goal is to identify hydrological model parameters from ERT electrical potential measurements. Traditional uncoupled inversion relies on the solution of two sequential inverse problems, the first applied to the ERT measurements and the second to Richards equation. This approach does not ensure an accurate quantitative description of the physical state, typically violating mass balance. To avoid one of these two inversions and incorporate in the process more physical simulation constraints, we cast the problem within the framework of a SIR (Sequential Importance Resampling) data assimilation approach that uses a Richards equation solver to model the hydrological dynamics and a forward ERT simulator combined with Archie's law to serve as measurement model. ERT observations are then used to update the state of the system as well as to estimate the model parameters and their posterior distribution. The limitations of the traditional sequential Bayesian approach are investigated and an innovative iterative approach is proposed to estimate the model parameters with high accuracy. The numerical properties of the developed algorithm are verified on both homogeneous and heterogeneous synthetic test cases based on a real-world field experiment.
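
    A schematic of one SIR assimilation step, with the coupled Richards/ERT forward simulators replaced by a trivial identity model; everything here (the one-parameter setup, the Gaussian likelihood, the numbers) is illustrative, not the paper's implementation.

```python
import numpy as np

def sir_update(particles, weights, observation, forward_model, obs_std, rng):
    """One Sequential Importance Resampling step: weight each parameter
    particle by the likelihood of the observation under its forward
    prediction, then resample with replacement and reset the weights."""
    predictions = np.array([forward_model(p) for p in particles])
    w = weights * np.exp(-0.5 * ((observation - predictions) / obs_std) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)  # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

rng = np.random.default_rng(0)
particles = rng.uniform(0.0, 4.0, size=500)     # prior on one hydraulic parameter
weights = np.full(500, 1.0 / 500)
obs = 2.5                                       # noise-free synthetic datum
particles, weights = sir_update(particles, weights, obs,
                                forward_model=lambda p: p, obs_std=0.3, rng=rng)
```

    In the paper's coupled setting, `forward_model` would run Richards equation and the ERT simulator with Archie's law; iterating such updates is the basis of the proposed iterative scheme.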

  7. New Developments in the Method of Space-Time Conservation Element and Solution Element-Applications to Two-Dimensional Time-Marching Problems

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Wang, Xiao-Yen; Chow, Chuen-Yen

    1994-01-01

    A new numerical discretization method for solving conservation laws is being developed. This new approach differs substantially in both concept and methodology from the well-established methods, i.e., finite difference, finite volume, finite element, and spectral methods. It is motivated by several important physical/numerical considerations and designed to avoid several key limitations of the above traditional methods. As a result of the above considerations, a set of key principles for the design of numerical schemes was put forth in a previous report. These principles were used to construct several numerical schemes that model a 1-D time-dependent convection-diffusion equation. These schemes were then extended to solve the time-dependent Euler and Navier-Stokes equations of a perfect gas. It was shown that the above schemes compared favorably with the traditional schemes in simplicity, generality, and accuracy. In this report, the 2-D versions of the above schemes, except the Navier-Stokes solver, are constructed using the same set of design principles. Their constructions are simplified greatly by the use of a nontraditional space-time mesh. Its use results in the simplest stencil possible, i.e., a tetrahedron in a 3-D space-time with a vertex at the upper time level and the other three at the lower time level. Because of the similarity in their design, each of the present 2-D solvers virtually shares with its 1-D counterpart the same fundamental characteristics. Moreover, it is shown that the present Euler solver is capable of generating highly accurate solutions for a famous 2-D shock reflection problem. Specifically, both the incident and the reflected shocks can be resolved by a single data point without the presence of numerical oscillations near the discontinuity.

  8. Inferring Gene Regulatory Networks by Singular Value Decomposition and Gravitation Field Algorithm

    PubMed Central

    Zheng, Ming; Wu, Jia-nan; Huang, Yan-xin; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang

    2012-01-01

    Reconstruction of gene regulatory networks (GRNs) is of utmost interest and has become a challenging computational problem in systems biology. However, every existing inference algorithm based on gene expression profiles has its own advantages and disadvantages; in particular, the effectiveness and efficiency of previous algorithms remain limited. In this work, we propose a novel inference algorithm for gene expression data based on a differential equation model. The algorithm combines two methods for inferring GRNs. Before reconstructing GRNs, the singular value decomposition method is used to decompose the gene expression data, determine the algorithm's solution space, and obtain all candidate solutions of GRNs. Within this generated family of candidate solutions, a modified gravitation field algorithm is used to infer GRNs by optimizing the criteria of the differential equation model and searching for the best network structure. The proposed algorithm is validated on both a simulated scale-free network and a real benchmark gene regulatory network from a network database. Both a Bayesian method and the traditional differential equation model were also used to infer GRNs, and their results were compared with those of the proposed algorithm; a genetic algorithm and simulated annealing were likewise used to evaluate the gravitation field algorithm. The cross-validation results confirmed the effectiveness of our algorithm, which significantly outperforms the previous algorithms. PMID:23226565
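
    The role of the SVD here (determining the solution space of a linear differential-equation model from expression snapshots) can be illustrated on a toy linear network; the 3-gene matrix `A_true` and the Euler time course below are hypothetical stand-ins for real expression data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-gene linear model dx/dt = A x; the entries of A encode
# regulatory interactions. This is a stand-in, not the paper's model.
A_true = np.array([[-1.0, 0.5, 0.0],
                   [0.0, -1.0, 0.8],
                   [0.3, 0.0, -1.0]])
dt, steps = 0.01, 500
X = np.empty((3, steps + 1))
X[:, 0] = rng.uniform(0.5, 1.5, 3)
for k in range(steps):                       # simple Euler "expression time course"
    X[:, k + 1] = X[:, k] + dt * A_true @ X[:, k]

# Finite-difference derivatives, then solve dXdt = A X via the SVD-based
# pseudoinverse; the SVD also exposes any null space, i.e. the undetermined
# part of the candidate-solution family.
dXdt = (X[:, 1:] - X[:, :-1]) / dt
A_est = dXdt @ np.linalg.pinv(X[:, :-1])
```

    With full-rank, noise-free data the pseudoinverse recovers `A_true`; with noisy or rank-deficient data the SVD null space yields the family of candidate networks among which a global optimizer (here, the gravitation field algorithm) must search.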

  9. An optimization-based approach for solving a time-harmonic multiphysical wave problem with higher-order schemes

    NASA Astrophysics Data System (ADS)

    Mönkölä, Sanna

    2013-06-01

    This study considers developing numerical solution techniques for the computer simulations of time-harmonic fluid-structure interaction between acoustic and elastic waves. The focus is on the efficiency of an iterative solution method based on a controllability approach and spectral elements. We concentrate on the model, in which the acoustic waves in the fluid domain are modeled by using the velocity potential and the elastic waves in the structure domain are modeled by using displacement. Traditionally, the complex-valued time-harmonic equations are used for solving the time-harmonic problems. Instead, we focus on finding periodic solutions without solving the time-harmonic problems directly. The time-dependent equations can be simulated with respect to time until a time-harmonic solution is reached, but the approach suffers from poor convergence. To overcome this challenge, we follow the approach first suggested and developed for the acoustic wave equations by Bristeau, Glowinski, and Périaux. Thus, we accelerate the convergence rate by employing a controllability method. The problem is formulated as a least-squares optimization problem, which is solved with the conjugate gradient (CG) algorithm. Computation of the gradient of the functional is done directly for the discretized problem. A graph-based multigrid method is used for preconditioning the CG algorithm.

  10. Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.

    PubMed

    Beentjes, Casper H L; Baker, Ruth E

    2018-05-25

    Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from typical slow O(N^(-1/2)) convergence rates as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely tau-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
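
    The core claim, that a low-discrepancy input stream beats a pseudo-random one for smooth quadrature, can be checked with a plain (unrandomised) Halton sequence in place of the randomised sequences the paper studies; the integrand and point count below are illustrative choices.

```python
import numpy as np

# Halton low-discrepancy points (bases 2, 3, 5, 7) vs. pseudo-random points
# for estimating the integral of x1*x2*x3*x4 over [0,1]^4, exact value 1/16.
def radical_inverse(i, base):
    x, f = 0.0, 1.0 / base
    while i > 0:
        x += (i % base) * f
        i //= base
        f /= base
    return x

n, bases = 4096, (2, 3, 5, 7)
halton = np.array([[radical_inverse(i, b) for b in bases]
                   for i in range(1, n + 1)])

rng = np.random.default_rng(2)
mc = rng.random((n, 4))

exact = 1.0 / 16.0
qmc_err = abs(halton.prod(axis=1).mean() - exact)   # low-discrepancy stream
mc_err = abs(mc.prod(axis=1).mean() - exact)        # plain Monte Carlo stream
```

    At this sample size the quasi-Monte Carlo error is typically one to two orders of magnitude smaller than the roughly O(N^(-1/2)) Monte Carlo error, which is the effect the paper exploits for tau-leaping.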

  11. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e. gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis of the demonstrative example is compared with the experimental data. It is shown that the method is more efficient than the traditional methods.
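
    The contrast between a quasi-analytical gradient and the brute-force finite-difference approach can be shown on a scalar stand-in for the flow solve; the model `J(b)` below is purely illustrative, not the scramjet-afterbody objective.

```python
import numpy as np

# Scalar stand-in for the CFD problem: "flow solve" u(b) = f / b and
# objective J = u^2; both are illustrative, not the paper's model.
f = 3.0
solve_flow = lambda b: f / b
J = lambda b: solve_flow(b) ** 2

b0 = 1.5
u = solve_flow(b0)

# Quasi-analytical sensitivity: differentiate through the solve once,
# dJ/db = 2 u du/db with du/db = -f / b^2 (no extra flow solves needed).
dJ_analytic = 2.0 * u * (-f / b0**2)

# Brute-force finite difference: one extra flow solve per design variable.
h = 1e-6
dJ_fd = (J(b0 + h) - J(b0)) / h
```

    The two gradients agree to truncation error, but the finite-difference route costs an additional full flow solve for every design variable, which is what the quasi-analytical method avoids.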

  12. Projection methods for the numerical solution of Markov chain models

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1989-01-01

    Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method is a method which seeks an approximation from a subspace of small dimension to the original problem. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods, which utilize subspaces of the form span(v, Av, ..., A^(m-1)v). These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi, ...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or with incomplete factorization methods to enhance convergence.
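
    The simplest member of this family, power iteration, already illustrates how repeated application of the transition operator projects onto the stationary distribution; the 3-state chain below is illustrative (full Krylov methods such as Arnoldi build and project onto the subspace explicitly).

```python
import numpy as np

# Hypothetical 3-state ergodic chain; rows of P sum to 1 and the stationary
# distribution satisfies pi P = pi (the eigenvalue-1 left eigenvector).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Power iteration: repeated application of P projects any starting
# distribution onto the dominant left eigenvector, i.e. the stationary pi.
pi = np.full(3, 1.0 / 3.0)
for _ in range(200):
    pi = pi @ P
    pi /= pi.sum()        # keep pi a probability vector

residual = np.abs(pi @ P - pi).max()
```

    A Krylov method would instead form span(v, Pv, ..., P^(m-1)v) and solve a small m-dimensional projected problem, which converges much faster on large, slowly mixing chains.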

  13. Vortex methods for separated flows

    NASA Technical Reports Server (NTRS)

    Spalart, Philippe R.

    1988-01-01

    The numerical solution of the Euler or Navier-Stokes equations by Lagrangian vortex methods is discussed. The mathematical background is presented in an elementary fashion and includes the relationship with traditional point-vortex studies, the convergence to smooth solutions of the Euler equations, and the essential differences between two- and three-dimensional cases. The difficulties in extending the method to viscous or compressible flows are explained. The overlap with the excellent review articles available is kept to a minimum and more emphasis is placed on the area of expertise, namely two-dimensional flows around bluff bodies. When solid walls are present, complete mathematical models are not available and a more heuristic attitude must be adopted. The imposition of inviscid and viscous boundary conditions without conformal mappings or image vortices and the creation of vorticity along solid walls are examined in detail. Methods for boundary-layer treatment and the question of the Kutta condition are discussed. Practical aspects and tips helpful in creating a method that really works are explained. The topics include the robustness of the method and the assessment of accuracy, vortex-core profiles, time-marching schemes, numerical dissipation, and efficient programming. Calculations of flows past streamlined or bluff bodies are used as examples when appropriate.
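
    The point-vortex formulation the review builds on can be sketched for two co-rotating vortices using the 2-D Biot-Savart law in complex form; the circulations and time step below are illustrative choices.

```python
import numpy as np

# Two point vortices of equal circulation co-rotate about their centroid.
# In complex notation the induced (conjugate) velocity at vortex j is
#   dz_j*/dt = sum_{k != j} gamma_k / (2 pi i (z_j - z_k)).
gamma = np.array([1.0, 1.0])                 # circulations (illustrative)
z = np.array([-0.5 + 0.0j, 0.5 + 0.0j])     # initial positions

def velocities(z):
    w = np.zeros_like(z)
    for j in range(len(z)):
        for k in range(len(z)):
            if j != k:
                w[j] += gamma[k] / (2j * np.pi * (z[j] - z[k]))
    return np.conj(w)

dt = 1e-3
centroid0 = (gamma * z).sum() / gamma.sum()
for _ in range(1000):
    z = z + dt * velocities(z)               # forward-Euler time march

drift = abs((gamma * z).sum() / gamma.sum() - centroid0)
```

    The circulation-weighted centroid is an invariant of the point-vortex system, so its drift is a cheap accuracy check of the kind the review recommends; real methods replace the singular kernel with a vortex-core (blob) profile near walls and other vortices.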

  14. Extension of the KLI approximation toward the exact optimized effective potential.

    PubMed

    Iafrate, G J; Krieger, J B

    2013-03-07

    The integral equation for the optimized effective potential (OEP) is utilized in a compact form from which an accurate OEP solution for the spin-unrestricted exchange-correlation potential, Vxcσ, is obtained for any assumed orbital-dependent exchange-correlation energy functional. The method extends beyond the Krieger-Li-Iafrate (KLI) approximation toward the exact OEP result. The compact nature of the OEP equation arises by replacing the integrals involving the Green's function terms in the traditional OEP equation by an equivalent first-order perturbation theory wavefunction often referred to as the "orbital shift" function. Significant progress is then obtained by solving the equation for the first order perturbation theory wavefunction by use of Dalgarno functions which are determined from well known methods of partial differential equations. The use of Dalgarno functions circumvents the need to explicitly address the Green's functions and the associated problems with "sum over states" numerics; as well, the Dalgarno functions provide ease in dealing with inherent singularities arising from the origin and the zeros of the occupied orbital wavefunctions. The Dalgarno approach for finding a solution to the OEP equation is described herein, and a detailed illustrative example is presented for the special case of a spherically symmetric exchange-correlation potential. For the case of spherical symmetry, the relevant Dalgarno function is derived by direct integration of the appropriate radial equation while utilizing a user friendly method which explicitly treats the singular behavior at the origin and at the nodal singularities arising from the zeros of the occupied states. 
The derived Dalgarno function is shown to be an explicit integral functional of the exact OEP Vxcσ, thus allowing for the reduction of the OEP equation to a self-consistent integral equation for the exact exchange-correlation potential; the exact solution to this integral equation can be determined by iteration with the natural zeroth order correction given by the KLI exchange-correlation potential. Explicit analytic results are provided to illustrate the first order iterative correction beyond the KLI approximation. The derived correction term to the KLI potential explicitly involves spatially weighted products of occupied orbital densities in any assumed orbital-dependent exchange-correlation energy functional; as well, the correction term is obtained with no adjustable parameters. Moreover, if the equation for the exact optimized effective potential is further iterated, one can obtain the OEP as accurately as desired.

  15. Extension of the KLI approximation toward the exact optimized effective potential

    NASA Astrophysics Data System (ADS)

    Iafrate, G. J.; Krieger, J. B.

    2013-03-01

    The integral equation for the optimized effective potential (OEP) is utilized in a compact form from which an accurate OEP solution for the spin-unrestricted exchange-correlation potential, Vxcσ, is obtained for any assumed orbital-dependent exchange-correlation energy functional. The method extends beyond the Krieger-Li-Iafrate (KLI) approximation toward the exact OEP result. The compact nature of the OEP equation arises by replacing the integrals involving the Green's function terms in the traditional OEP equation by an equivalent first-order perturbation theory wavefunction often referred to as the "orbital shift" function. Significant progress is then obtained by solving the equation for the first order perturbation theory wavefunction by use of Dalgarno functions which are determined from well known methods of partial differential equations. The use of Dalgarno functions circumvents the need to explicitly address the Green's functions and the associated problems with "sum over states" numerics; as well, the Dalgarno functions provide ease in dealing with inherent singularities arising from the origin and the zeros of the occupied orbital wavefunctions. The Dalgarno approach for finding a solution to the OEP equation is described herein, and a detailed illustrative example is presented for the special case of a spherically symmetric exchange-correlation potential. For the case of spherical symmetry, the relevant Dalgarno function is derived by direct integration of the appropriate radial equation while utilizing a user friendly method which explicitly treats the singular behavior at the origin and at the nodal singularities arising from the zeros of the occupied states. 
The derived Dalgarno function is shown to be an explicit integral functional of the exact OEP Vxcσ, thus allowing for the reduction of the OEP equation to a self-consistent integral equation for the exact exchange-correlation potential; the exact solution to this integral equation can be determined by iteration with the natural zeroth order correction given by the KLI exchange-correlation potential. Explicit analytic results are provided to illustrate the first order iterative correction beyond the KLI approximation. The derived correction term to the KLI potential explicitly involves spatially weighted products of occupied orbital densities in any assumed orbital-dependent exchange-correlation energy functional; as well, the correction term is obtained with no adjustable parameters. Moreover, if the equation for the exact optimized effective potential is further iterated, one can obtain the OEP as accurately as desired.

  16. Nonlinear acoustic wave equations with fractional loss operators.

    PubMed

    Prieur, Fabrice; Holm, Sverre

    2011-09-01

    Fractional derivatives are well suited to describe wave propagation in complex media. When introduced in classical wave equations, they allow a modeling of attenuation and dispersion that better describes sound propagation in biological tissues. Traditional constitutive equations from solid mechanics and heat conduction are modified using fractional derivatives. They are used to derive a nonlinear wave equation which describes attenuation and dispersion laws that match observations. This wave equation is a generalization of the Westervelt equation, and also leads to a fractional version of the Khokhlov-Zabolotskaya-Kuznetsov and Burgers' equations. © 2011 Acoustical Society of America
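
    A fractional derivative of the kind used in these loss operators can be approximated numerically with the standard Grünwald-Letnikov scheme; the test function and step size below are illustrative, and the result is checked against the known closed form for the half-derivative of f(t) = t.

```python
import math
import numpy as np

# Grünwald-Letnikov approximation of the fractional derivative D^alpha f(t):
#   D^alpha f(t) ~ h^(-alpha) * sum_k (-1)^k C(alpha, k) f(t - k h).
def gl_derivative(f, t, alpha, h):
    n = int(t / h)
    # binomial weights w_k = (-1)^k C(alpha, k), built by recurrence
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return (w * f(t - h * np.arange(n + 1))).sum() / h**alpha

alpha, t = 0.5, 1.0
approx = gl_derivative(lambda s: s, t, alpha, 0.0005)
exact = t**(1 - alpha) / math.gamma(2 - alpha)   # D^0.5 of f(s) = s
```

    The growing sum over past values is what makes fractional loss terms "memory" operators, and it is the practical cost of the attenuation and dispersion laws they model.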

  17. Remote sensing as a source of land cover information utilized in the universal soil loss equation

    NASA Technical Reports Server (NTRS)

    Morris-Jones, D. R.; Morgan, K. M.; Kiefer, R. W.; Scarpace, F. L.

    1979-01-01

    In this study, methods for gathering the land use/land cover information required by the USLE were investigated with medium altitude, multi-date color and color infrared 70-mm positive transparencies using human and computer-based interpretation techniques. Successful results, which compare favorably with traditional field study methods, were obtained within the test site watershed with airphoto data sources and human airphoto interpretation techniques. Computer-based interpretation techniques were not capable of identifying soil conservation practices but were successful to varying degrees in gathering other types of desired land use/land cover information.

  18. Experimental study on water content detection of traditional masonry based on infrared thermal image

    NASA Astrophysics Data System (ADS)

    Zhang, Baoqing; Lei, Zukang

    2017-10-01

    Infrared thermal imaging was used in seepage tests on two kinds of traditional brick masonry. The relationship between the one-dimensional surface temperature distribution and the one-dimensional surface moisture content was determined, and regression equations were established relating the minimum-temperature zone after seepage to the moisture content at its highest point. These relationships quantify the link between surface temperature and the moisture content of brick masonry, providing an initial method for analysing moisture-related deterioration in masonry buildings and allowing infrared technology to be applied to the protection of historic buildings.

  19. Soft tissue modelling with conical springs.

    PubMed

    Omar, Nadzeri; Zhong, Yongmin; Jazar, Reza N; Subic, Aleksandar; Smith, Julian; Shirinzadeh, Bijan

    2015-01-01

    This paper presents a new method for real-time modelling of soft tissue deformation. It improves the traditional mass-spring model with conical springs to deal with the nonlinear mechanical behaviours of soft tissues. A conical spring model is developed to predict soft tissue deformation with reference to deformation patterns. The model parameters are formulated according to tissue deformation patterns, and the nonlinear behaviours of soft tissues are modelled with the stiffness variation of the conical spring. Experimental results show that the proposed method can describe different tissue deformation patterns using one single equation and also exhibits the typical mechanical behaviours of soft tissues.
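
    One way to read the conical-spring idea is a deflection-dependent stiffness k(x), so the force law stiffens with deformation; the specific stiffening law and parameters below are assumed for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical conical-spring force law: stiffness grows with deflection,
# giving the strain-hardening response typical of soft tissue.
def conical_spring_force(x, k0=1.0, beta=5.0):
    """Force for deflection x with deflection-dependent stiffness k(x)."""
    k = k0 * (1.0 + beta * x**2)     # assumed stiffening law (illustrative)
    return k * x

x = np.linspace(0.0, 0.5, 6)         # deflections
f = conical_spring_force(x)

# tangent stiffness df/dx increases with deflection -> nonlinear hardening
stiffness = np.gradient(f, x)
```

    A linear mass-spring model has constant tangent stiffness; here the tangent stiffness rises monotonically with deflection, which is the nonlinear behaviour the single-equation model is meant to capture.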

  20. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    NASA Astrophysics Data System (ADS)

    Kabanov, Dmitry I.; Kasimov, Aslan R.

    2018-03-01

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.
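
    The DMD step, extracting a spectrum from snapshot data of a linear evolution, can be sketched in its exact-DMD form; the 2x2 map below is a hypothetical stand-in for the shock-fitted linearized reactive Euler solution.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal exact-DMD sketch: recover the eigenvalues of a linear map
# x_{k+1} = A x_k from snapshot data alone.
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])           # hypothetical map, eigenvalues 0.9, 0.8
steps = 20
X = np.empty((2, steps + 1))
X[:, 0] = rng.standard_normal(2)
for k in range(steps):
    X[:, k + 1] = A @ X[:, k]

X1, X2 = X[:, :-1], X[:, 1:]         # paired snapshot matrices
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
Atilde = U.T @ X2 @ Vh.T @ np.diag(1.0 / s)   # operator projected onto POD modes
eigs = np.sort(np.linalg.eigvals(Atilde).real)
```

    In the paper the recovered eigenvalues play the role of the stability spectrum: modes with growth factors outside the unit circle (or growth rates in the right half-plane for continuous time) flag instability, replacing the traditional normal-mode analysis.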

  1. Polynomial probability distribution estimation using the method of moments

    PubMed Central

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949
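
    Matching the integral moments of a polynomial PDF reduces to a small linear (Hilbert-type) system; as an illustration, three moments of a Beta(2,2) distribution recover its quadratic density exactly (the target distribution is our choice, not one of the paper's test cases).

```python
import numpy as np

# Method-of-moments fit of a quadratic PDF p(x) = c0 + c1 x + c2 x^2 on [0,1].
# Matching moments gives a linear system for the coefficients c_j:
#   int_0^1 x^k p(x) dx = sum_j c_j / (k + j + 1) = m_k,  k = 0, 1, 2.
m = np.array([1.0, 0.5, 0.3])    # moments of Beta(2,2), whose PDF is 6x(1-x)
H = np.array([[1.0 / (k + j + 1) for j in range(3)] for k in range(3)])
c = np.linalg.solve(H, m)        # expect (0, 6, -6), i.e. p(x) = 6x - 6x^2
```

    Because the target density happens to be a polynomial, the fit is exact; for non-polynomial targets the same system yields the best moment-matching polynomial approximation, and convolutions of such fits stay polynomial, which is the practical advantage noted in the abstract.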

  2. Polynomial probability distribution estimation using the method of moments.

    PubMed

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.

  3. The analysis of thin walled composite laminated helicopter rotor with hierarchical warping functions and finite element method

    NASA Astrophysics Data System (ADS)

    Zhu, Dechao; Deng, Zhongmin; Wang, Xingwei

    2001-08-01

    In the present paper, a series of hierarchical warping functions is developed to analyze the static and dynamic problems of thin walled composite laminated helicopter rotors composed of several layers with a single closed cell. This method is a development and extension of the traditional constrained warping theory of thin walled metallic beams, which has proved very successful since the 1940s. The warping distribution along the perimeter of each layer is expanded into a series of successively corrective warping functions, with the traditional warping function caused by free torsion or free bending as the first term, and is assumed to be piecewise linear along the thickness direction of the layers. The governing equations are derived based upon the variational principle of minimum potential energy for static analysis and the Rayleigh quotient for free vibration analysis. The hierarchical finite element method is then introduced to form a numerical algorithm. Both static and natural vibration problems of sample box beams are analyzed with the present method to show the main mechanical behavior of the thin walled composite laminated helicopter rotor.

  4. Scalable Preconditioners for Structure Preserving Discretizations of Maxwell Equations in First Order Form

    DOE PAGES

    Phillips, Edward Geoffrey; Shadid, John N.; Cyr, Eric C.

    2018-05-01

    Here, we report that multiple physical time-scales can arise in electromagnetic simulations when dissipative effects are introduced through boundary conditions, when currents follow external time-scales, and when material parameters vary spatially. In such scenarios, the time-scales of interest may be much slower than the fastest time-scales supported by the Maxwell equations, therefore making implicit time integration an efficient approach. The use of implicit temporal discretizations results in linear systems in which fast time-scales, which severely constrain the stability of an explicit method, can manifest as so-called stiff modes. This study proposes a new block preconditioner for structure preserving (also termed physics compatible) discretizations of the Maxwell equations in first order form. The intent of the preconditioner is to enable the efficient solution of multiple-time-scale Maxwell type systems. An additional benefit of the developed preconditioner is that it requires only a traditional multigrid method for its subsolves and compares well against alternative approaches that rely on specialized edge-based multigrid routines that may not be readily available. Lastly, results demonstrate parallel scalability at large electromagnetic wave CFL numbers on a variety of test problems.

  5. Scalable Preconditioners for Structure Preserving Discretizations of Maxwell Equations in First Order Form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, Edward Geoffrey; Shadid, John N.; Cyr, Eric C.

    Here, we report that multiple physical time-scales can arise in electromagnetic simulations when dissipative effects are introduced through boundary conditions, when currents follow external time-scales, and when material parameters vary spatially. In such scenarios, the time-scales of interest may be much slower than the fastest time-scales supported by the Maxwell equations, therefore making implicit time integration an efficient approach. The use of implicit temporal discretizations results in linear systems in which fast time-scales, which severely constrain the stability of an explicit method, can manifest as so-called stiff modes. This study proposes a new block preconditioner for structure preserving (also termed physics compatible) discretizations of the Maxwell equations in first order form. The intent of the preconditioner is to enable the efficient solution of multiple-time-scale Maxwell type systems. An additional benefit of the developed preconditioner is that it requires only a traditional multigrid method for its subsolves and compares well against alternative approaches that rely on specialized edge-based multigrid routines that may not be readily available. Lastly, results demonstrate parallel scalability at large electromagnetic wave CFL numbers on a variety of test problems.

  6. A Study of Shuttlecock's Trajectory in Badminton.

    PubMed

    Chen, Lung-Ming; Pan, Yi-Hsiang; Chen, Yung-Jen

    2009-01-01

    The main purpose of this study was to construct and validate a motion equation for the flight of a badminton shuttlecock and to find the relationship between the air resistance force and the shuttlecock's speed. The research method was based on the laws of aerodynamics: aerodynamic theory was applied to construct the motion equation of a shuttlecock's flight trajectory under the combined effects of gravitational force and air resistance. The results showed that the motion equation of a shuttlecock's flight trajectory could be constructed by determining the terminal velocity, and the predicted shuttlecock trajectory fitted the measured data fairly well. The results also revealed that the drag force is proportional to the square of the shuttlecock's velocity, and that the angle and strength of a stroke influence the trajectory. Finally, this study suggested that a scientific approach could be used to measure a shuttlecock's velocity objectively when testing the quality of shuttlecocks, replacing the traditional subjective method of the Badminton World Federation based on players striking shuttlecocks, and that the findings could be applied to improve the professional knowledge used in badminton player training. Key points: The motion equation of a shuttlecock's flight trajectory can be constructed by determining the terminal velocity in aerodynamics. Air drag force is proportional to the square of the shuttlecock velocity, and the angle and strength of a stroke influence the trajectory.
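
    A trajectory model with drag proportional to speed squared, parameterised by the terminal velocity as described in the abstract, can be integrated directly; the terminal-speed value and time step below are assumptions for illustration, not the study's measured data.

```python
import numpy as np

# Motion under gravity with quadratic drag, parameterised by the terminal
# speed v_T: acceleration = g_vec - g * |v| * v / v_T**2, so that in a pure
# drop the speed approaches v_T. v_T = 6.8 m/s is an assumed value.
g, v_T = 9.81, 6.8
dt = 1e-3

v = np.array([0.0, 0.0])          # released from rest
for _ in range(20000):            # 20 s of simulated fall
    speed = np.linalg.norm(v)
    accel = np.array([0.0, -g]) - g * speed / v_T**2 * v
    v = v + dt * accel            # forward-Euler time step

terminal_speed = -v[1]            # downward speed after a long drop
```

    At v = v_T the drag term exactly cancels gravity, which is why measuring the terminal velocity fixes the single free parameter of the whole trajectory equation; the same integrator with a non-zero launch velocity produces the stroke-dependent trajectories the study analyses.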

  7. Finite-Strain Fractional-Order Viscoelastic (FOV) Material Models and Numerical Methods for Solving Them

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.; Diethelm, Kai; Gray, Hugh R. (Technical Monitor)

    2002-01-01

    Fractional-order viscoelastic (FOV) material models have been proposed and studied in 1D since the 1930's, and were extended into three dimensions in the 1970's under the assumption of infinitesimal straining. It was not until 1997 that Drozdov introduced the first finite-strain FOV constitutive equations. In our presentation, we shall continue in this tradition by extending the standard FOV fluid and solid material models introduced in 1971 by Caputo and Mainardi into 3D constitutive formulae applicable for finite-strain analyses. To achieve this, we generalize both the convected and co-rotational derivatives of tensor fields to fractional order. This is accomplished by defining them first as body tensor fields and then mapping them into space as objective Cartesian tensor fields. Constitutive equations are constructed using both variants for the fractional rate, and their responses are contrasted in simple shear. After five years of research and development, we now possess a basic suite of numerical tools necessary to study finite-strain FOV constitutive equations and their iterative refinement into a mature collection of material models. Numerical methods still need to be developed for efficiently solving fractional-order integrals, derivatives, and differential equations in a finite element setting, where such constitutive formulae would need to be solved at each Gauss point in each element of a finite element model, which can number into the millions in today's analyses.

  8. Equation solvers for distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    A large number of scientific and engineering problems require the rapid solution of large systems of simultaneous equations. The performance of parallel computers in this area now dwarfs traditional vector computers by nearly an order of magnitude. This talk describes the major issues involved in parallel equation solvers with particular emphasis on the Intel Paragon, IBM SP-1 and SP-2 processors.

  9. [The in vitro dissolution of total composition of the tablet of rhizomes of Ligusticum chuanxiong components and in vitro-in vivo correlation by the method of area under the absorbance-wavelength curve].

    PubMed

    Lai, Hong-qiang; Hu, Yue; Li, Xiao-dong

    2015-06-01

    To assess the feasibility of evaluating the dissolution of multiple components in traditional Chinese medicine, the in vitro dissolution of the total composition of the tablet of rhizomes of Ligusticum chuanxiong components and its correlation with in vivo behavior were studied by the method of area under the absorbance-wavelength curve (AUAWC). Taking the tablet of rhizomes of Ligusticum chuanxiong components, composed of sodium ferulate and ligustrazine hydrochloride, as the model formulation, dissolution tests were carried out with the basket method. Plasma concentrations in rats were determined by AUAWC at successive time points. The in vivo absorption percentage was calculated with the Wagner-Nelson equation to evaluate the in vitro-in vivo correlation. According to the results, the cumulative in vitro dissolution of the total composition of the tablets at 60 min was 90.65% in water by AUAWC. The in vivo pharmacokinetics fitted a one-compartment model. The linear equation relating the cumulative dissolution rate (fr) and the absorption percentage (fa) at 5, 10, 20, 30 and 60 min was fa = 0.8197fr + 0.183 with a correlation coefficient of 0.9595, which showed a good correlation between in vitro dissolution and in vivo absorption. The AUAWC method can therefore be used accurately, feasibly and conveniently to evaluate the in vitro-in vivo correlation of the total composition of these tablets, and should provide guidance for studying the in vitro-in vivo correlation of sustained-release preparations and other complex traditional Chinese medicine systems in the future.
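
    The Wagner-Nelson calculation mentioned above can be sketched for a one-compartment model as follows; the plasma data and elimination constant here are hypothetical, not the study's values.

```python
import numpy as np

def wagner_nelson(t, c, ke):
    """Fraction absorbed by the Wagner-Nelson method for a one-compartment
    model: fa(t) = (C(t) + ke*AUC_0..t) / (ke*AUC_0..inf)."""
    # Trapezoidal cumulative AUC from time zero to each sample time
    auc_t = np.concatenate(([0.0], np.cumsum(np.diff(t) * (c[:-1] + c[1:]) / 2.0)))
    auc_inf = auc_t[-1] + c[-1] / ke      # tail extrapolated from the last point
    return (c + ke * auc_t) / (ke * auc_inf)

# Hypothetical sampling times (min) and plasma concentrations (mg/L)
t = np.array([0, 5, 10, 20, 30, 60, 120, 240], dtype=float)
c = np.array([0.0, 1.2, 2.0, 2.8, 3.0, 2.6, 1.6, 0.5])
fa = wagner_nelson(t, c, ke=0.01)         # fraction absorbed at each time
```

    The resulting fa values at the early sample times are what would be regressed against the in vitro cumulative dissolution fr to obtain a correlation line like the one reported above.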

  10. Astronomical Characteristics of Cheonsang-yeolcha-bunyajido from the Perspective of Manufacturing Methods

    NASA Astrophysics Data System (ADS)

    Ahn, Sang-Hyeon

    2015-03-01

    I investigated the method used to draw the star chart in the planisphere Cheonsang-yeolcha-bunyajido. The outline of the star chart can be constructed by considering the astronomical information given in the planisphere alone together with the drawing method described in Xin-Tangshu; the chart can then be completed using additional information on the shape and linking of asterisms taken from an inherited star chart. The circles of perpetual visibility, the equator, and the circle of perpetual invisibility are concentric, and their common center lies at Tianshu-xing, which was defined as the pole star in the Han dynasty. The radius of the circle of perpetual visibility was modified in accordance with the latitude of Seoul, whereas the other circles were drawn for a latitude of 35°, which had been the reference latitude in ancient Chinese astronomy. The ecliptic was drawn as an exact circle by parallel transference of the equator circle, fixing the location of the equinoxes at the positions recorded in the epitaph of the planisphere; these positions originated in the Han dynasty. The 365 ticks around the boundary of the circle of perpetual invisibility were possibly drawn by segmenting the circumference with an arc length rather than a chord length, presuming a value of the ratio of a circle's circumference to its diameter as accurate as 3.14. The 12 equatorial sectors were drawn on the boundary of the star chart in accordance with the beginning and ending lodge angles given in the epitaph, which also originated in the Han dynasty. The determinative lines for the 28 lunar lodges were drawn to intersect their determinative stars, but seven determinative lines deviate from their stars. According to the treatises of the Tang dynasty, these anomalies were inherited from charts of the period earlier than the Tang dynasty. Thus, the star chart in Cheonsang-yeolcha-bunyajido preserves an old tradition that predates the present Chinese tradition, which was reformed in approximately 700 CE. In conclusion, the star chart in Cheonsang-yeolcha-bunyajido shows the sky of the Former Han dynasty with the equator modified to the latitude of Seoul.

  11. Determination of minimum enzymatic decolorization time of reactive dye solution by spectroscopic & mathematical approach.

    PubMed

    Celebi, Mithat; Ozdemir, Zafer Omer; Eroglu, Emre; Altikatoglu, Melda; Guney, Ibrahim

    2015-02-01

    Synthetic dyes are very important for textile dyeing, paper printing, color photography and petroleum products. Traditional methods of dye removal include biodegradation, precipitation, adsorption, chemical degradation, photodegradation, and chemical coagulation. Dye decolorization by enzymatic reaction is an important issue in several research fields (chemistry, environment). In this study, the minimum decolorization time of Remazol Brilliant Blue R dye with horseradish peroxidase enzyme was calculated using a mathematical equation fitted to experimental data. Dye decolorization was determined by monitoring the decrease in absorbance at the dye's specific maximum wavelength. All experiments were carried out with different initial concentrations of Remazol Brilliant Blue R at a constant temperature of 25 degrees C for 30 minutes. The development of least squares estimators for a nonlinear model brings complications not encountered in the case of the linear model. Decolorization times for complete removal of the dye were calculated according to the equation. The fitted equation conformed to an exponential decay curve for dye degradation.
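
    A minimal sketch of estimating a decolorization time from an exponential absorbance decay, assuming first-order kinetics; the rate constant and target removal fraction below are illustrative, not the study's fitted values.

```python
import numpy as np

def decolorization_time(t, absorbance, removal=0.95):
    """Fit A(t) = A0*exp(-k*t) by log-linear least squares and return
    the time needed to remove the given fraction of the dye."""
    slope, _ = np.polyfit(t, np.log(absorbance), 1)
    k = -slope                               # decay constant (1/min)
    return -np.log(1.0 - removal) / k

# Hypothetical absorbance readings at the dye's lambda_max over a 30-min run
t = np.array([0, 5, 10, 15, 20, 25, 30], dtype=float)
a = 0.8 * np.exp(-0.12 * t)
t95 = decolorization_time(t, a)              # time for ~95% removal, min
```

    A nonlinear solver fitted directly to A(t) would avoid the log-transform's reweighting of the residuals, which is one of the complications the abstract alludes to.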

  12. Stefan blowing effects on MHD bioconvection flow of a nanofluid in the presence of gyrotactic microorganisms with active and passive nanoparticles flux

    NASA Astrophysics Data System (ADS)

    Giri, Shib Sankar; Das, Kalidas; Kundu, Prabir Kumar

    2017-02-01

    The present paper investigates the effect of Stefan blowing on the hydro-magnetic bioconvection of a water-based nanofluid flow containing gyrotactic microorganisms through a permeable surface. We also consider both actively and passively controlled nanoparticle fluxes and the effect of a surface slip at the wall. We adopt a similarity approach to reduce the governing partial differential equations to ordinary differential equations with two separate sets of boundary conditions (active and passive), and solve the resulting equations numerically by the RK-4 method with a shooting technique. The effects of the emerging flow parameters on the flow characteristics are discussed through graphs and charts. We observe that the effects of the traditional Lewis number and the suction/blowing parameter on the temperature distribution and microorganism concentration are converse to each other. A comparison of the present results with previously published ones is also given.
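
    The RK-4/shooting procedure used above can be illustrated on a toy two-point boundary-value problem (y'' = 6x with y(0) = 0, y(1) = 1, exact solution y = x^3); this is a sketch of the numerical technique only, not the paper's nanofluid system.

```python
import numpy as np

def rk4(f, y0, xs):
    """Classical 4th-order Runge-Kutta integration of y' = f(x, y)."""
    y = np.array(y0, dtype=float)
    for x0, x1 in zip(xs[:-1], xs[1:]):
        h = x1 - x0
        k1 = f(x0, y)
        k2 = f(x0 + h / 2, y + h / 2 * k1)
        k3 = f(x0 + h / 2, y + h / 2 * k2)
        k4 = f(x1, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

def shoot(s):
    """Integrate y'' = 6x from x = 0 with y(0) = 0, y'(0) = s; return y(1)."""
    rhs = lambda x, y: np.array([y[1], 6.0 * x])
    return rk4(rhs, [0.0, s], np.linspace(0.0, 1.0, 101))[0]

# Secant iteration on the unknown initial slope s so that y(1) = 1
s0, s1, target = 1.0, 2.0, 1.0
for _ in range(20):
    f0, f1 = shoot(s0) - target, shoot(s1) - target
    if abs(f1) < 1e-9:
        break
    s0, s1 = s1, s1 - f1 * (s1 - s0) / (f1 - f0)
# The exact solution y = x^3 has y'(0) = 0, so s1 converges to 0
```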

  13. Validity and reproducibility of a novel method for time-course evaluation of diet-induced thermogenesis in a respiratory chamber.

    PubMed

    Usui, Chiyoko; Ando, Takafumi; Ohkawara, Kazunori; Miyake, Rieko; Oshima, Yoshitake; Hibi, Masanobu; Oishi, Sachiko; Tokuyama, Kumpei; Tanaka, Shigeho

    2015-05-01

    We developed a novel method for computing diet-induced thermogenesis (DIT) in a respiratory chamber and evaluated the validity and reproducibility of the method. We hypothesized that DIT may be calculated as the difference between postprandial energy expenditure (EE) and estimated EE (the sum of basal metabolic rate and physical activity (PA)-related EE). The estimated EE was derived from the regression equation between EE from respiration and PA intensity in the fasting state. It may be possible to evaluate the time course of DIT using this novel technique. In a validity study, we examined whether DIT became zero (the theoretical value) over 6 h of fasting in 11 subjects. The mean value of DIT calculated by the novel and traditional methods was 22.4 ± 13.4 and 3.4 ± 31.8 kcal/6 h, respectively. In the reproducibility study, 15 adult subjects lived in the respiratory chamber for over 24 h on two occasions. The DIT over 15 h of postprandial wake time was calculated. There were no significant differences in the mean values of DIT between the two test days. The within-subject day-to-day coefficient of variation for calculated DIT with the novel and traditional methods was approximately 35% and 25%, respectively. The novel method did not have superior reproducibility compared with that of the traditional method. However, given its smaller deviation in the fasting state from the theoretical value (zero), the novel method may be better than the traditional method for evaluating interindividual differences in DIT, and it also permits evaluation of the time course. © 2015 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of the American Physiological Society and The Physiological Society.
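
    The novel method's arithmetic can be sketched as a fasting-state regression of EE on PA intensity whose residual, in the postprandial state, is attributed to DIT; all numbers below are hypothetical.

```python
import numpy as np

# Hypothetical fasting-state data: PA intensity vs measured EE (kcal/min)
pa_fast = np.array([1.0, 1.2, 1.5, 2.0, 2.5, 3.0])
ee_fast = np.array([1.1, 1.3, 1.6, 2.1, 2.7, 3.2])

# Regress fasting EE on PA intensity; the intercept absorbs the basal rate
slope, intercept = np.polyfit(pa_fast, ee_fast, 1)

# Postprandial observations: EE predicted from the fasting regression is
# subtracted, and the residual is attributed to diet-induced thermogenesis
pa_post = np.array([1.1, 1.6, 2.2])
ee_post = np.array([1.5, 2.1, 2.8])
dit = ee_post - (slope * pa_post + intercept)   # kcal/min in each window
```

    Because the residual is available at every time point, the same subtraction evaluated along the whole postprandial record yields the time course of DIT that the abstract emphasizes.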

  14. A New Continuous Rotation IMU Alignment Algorithm Based on Stochastic Modeling for Cost Effective North-Finding Applications

    PubMed Central

    Li, Yun; Wu, Wenqi; Jiang, Qingan; Wang, Jinling

    2016-01-01

    Based on stochastic modeling of Coriolis vibration gyros by the Allan variance technique, this paper discusses Angle Random Walk (ARW), Rate Random Walk (RRW) and Markov process gyroscope noises which have significant impacts on the North-finding accuracy. A new continuous rotation alignment algorithm for a Coriolis vibration gyroscope Inertial Measurement Unit (IMU) is proposed in this paper, in which the extended observation equations are used for the Kalman filter to enhance the estimation of gyro drift errors, thus improving the north-finding accuracy. Theoretical and numerical comparisons between the proposed algorithm and the traditional ones are presented. The experimental results show that the new continuous rotation alignment algorithm using the extended observation equations in the Kalman filter is more efficient than the traditional two-position alignment method. Using Coriolis vibration gyros with bias instability of 0.1°/h, a north-finding accuracy of 0.1° (1σ) is achieved by the new continuous rotation alignment algorithm, compared with 0.6° (1σ) north-finding accuracy for the two-position alignment and 1° (1σ) for the fixed-position alignment. PMID:27983585
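
    A minimal sketch of the Allan variance computation used in such stochastic gyro modeling; for white (ARW-like) noise the Allan variance falls as 1/tau, which the example verifies.

```python
import numpy as np

def allan_variance(rate, fs, m):
    """Non-overlapped Allan variance of a rate signal at cluster size m.
    rate: samples (e.g., deg/h), fs: sample rate (Hz), m: samples per cluster;
    the averaging time is tau = m / fs."""
    n = len(rate) // m
    means = rate[:n * m].reshape(n, m).mean(axis=1)   # cluster averages
    return 0.5 * np.mean(np.diff(means) ** 2)

# White, angle-random-walk-like noise: AVAR should scale as 1/tau
rng = np.random.default_rng(0)
rate = rng.normal(0.0, 1.0, 200_000)
av1 = allan_variance(rate, fs=100.0, m=1)
av100 = allan_variance(rate, fs=100.0, m=100)
```

    Rate random walk and Markov noise show up as different slopes on the log-log Allan deviation plot, which is how the paper separates the terms of its stochastic model.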

  15. An improved algorithm for the determination of the system parameters of a visual binary by least squares

    NASA Astrophysics Data System (ADS)

    Xu, Yu-Lin

    The problem of computing the orbit of a visual binary from a set of observed positions is reconsidered. It is a least squares adjustment problem if the observational errors follow a bias-free multivariate Gaussian distribution and the covariance matrix of the observations is assumed to be known. The condition equations are constructed to satisfy both the conic section equation and the area theorem, and they are nonlinear in both the observations and the adjustment parameters. The traditional least squares algorithm, which employs condition equations that are solved with respect to the uncorrelated observations and are either linear in the adjustment parameters or linearized by first-order Taylor expansion, is inadequate for our orbit problem. D. C. Brown proposed an algorithm solving a more general least squares adjustment problem, in which the scalar residual function, however, is still constructed by first-order approximation. Not long ago, a completely general solution was published by W. H. Jefferys, who proposed a rigorous adjustment algorithm for models in which the observations appear nonlinearly in the condition equations and may be correlated, and in which construction of the normal equations and the residual function involves no approximation. This method was successfully applied to our problem. The normal equations were first solved by Newton's scheme. Practical examples show that this converges quickly if the observational errors are sufficiently small and the initial approximate solution is sufficiently accurate, and that it fails otherwise. Newton's method was modified to yield a definitive solution in cases where the normal approach fails, by combining it with the method of steepest descent and other sophisticated algorithms. Practical examples show that the modified Newton scheme can always lead to a final solution. 
The weighting of observations, the orthogonal parameters and the efficiency of a set of adjustment parameters are also considered. The definition of efficiency is revised.

  16. Primary Multi-frequency Data Analysis in Electrical Impedance Scanning.

    PubMed

    Liu, Ruigang; Dong, Xiuzhen; Fu, Feng; Shi, Xuetao; You, Fusheng; Ji, Zhenyu

    2005-01-01

    This paper derives the Cole-Cole arc equation in the form of admittance from the traditional Cole-Cole equation in the form of impedance. Compared to the impedance form, the admittance form is better suited to electrical impedance scanning, which uses a lower frequency region. When using our own electrical impedance scanning device at 50-5000 Hz, the measured data spread along the arc of the admittance form, whereas they clustered near the direct-current resistance on the arc of the impedance form. The four parameters of the admittance form can be estimated by the least squares method, and the frequency at which the imaginary part of the admittance reaches its maximum can be calculated from the Cole-Cole parameters. In conclusion, the Cole-Cole arc in admittance form is more effective for multi-frequency data analysis in the lower frequency region, as in EIS.
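
    A sketch of the Cole-Cole arc written in admittance form; the parameter values are illustrative, not the device's fitted values. For this symmetric form, the susceptance magnitude peaks at w*tau = 1, the frequency the authors compute from the fitted parameters.

```python
import numpy as np

def cole_cole(w, y0, yinf, tau, alpha):
    """Cole-Cole arc written for admittance:
    Y(w) = Yinf + (Y0 - Yinf) / (1 + (j*w*tau)**alpha)."""
    return yinf + (y0 - yinf) / (1.0 + (1j * w * tau) ** alpha)

# Illustrative parameters (S): low- and high-frequency admittance limits
y0, yinf, tau, alpha = 1e-4, 5e-4, 1e-3, 0.8
w = np.logspace(0, 6, 2001)                  # angular frequency, rad/s
y = cole_cole(w, y0, yinf, tau, alpha)

# |Im Y| peaks where w*tau = 1, i.e., at the characteristic frequency
w_peak = w[np.argmax(np.abs(y.imag))]
```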

  17. A computational approach for hypersonic nonequilibrium radiation utilizing space partition algorithm and Gauss quadrature

    NASA Astrophysics Data System (ADS)

    Shang, J. S.; Andrienko, D. A.; Huang, P. G.; Surzhikov, S. T.

    2014-06-01

    An efficient computational capability for nonequilibrium radiation simulation via the ray tracing technique has been accomplished. The radiative rate equation is iteratively coupled with the aerodynamic conservation laws, including nonequilibrium chemical and chemical-physical kinetic models. The spectral properties along tracing rays are determined by a space partition algorithm based on a nearest neighbor search, and the numerical accuracy is further enhanced by local resolution refinement using the Gauss-Lobatto polynomial. The interdisciplinary governing equations are solved by an implicit delta formulation through the diminishing residual approach. The axisymmetric radiating flow fields over the reentry RAM-CII probe have been simulated and verified against flight data and previous solutions by traditional methods. A computational efficiency gain of nearly forty times is realized over existing simulation procedures.
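
    The Gauss-Lobatto rule underlying such local resolution refinement can be constructed as below (a generic quadrature sketch, not the paper's ray-tracing code): the nodes are the interval endpoints plus the roots of the derivative of the Legendre polynomial P_{n-1}.

```python
import numpy as np
from numpy.polynomial import legendre

def gauss_lobatto(n):
    """Nodes and weights of the n-point Gauss-Lobatto rule on [-1, 1]:
    the endpoints plus the roots of P'_{n-1}; exact for degree <= 2n - 3."""
    c = np.zeros(n)
    c[-1] = 1.0                                  # Legendre series for P_{n-1}
    interior = legendre.legroots(legendre.legder(c))
    x = np.concatenate(([-1.0], interior, [1.0]))
    pn1 = legendre.legval(x, c)                  # P_{n-1} at the nodes
    w = 2.0 / (n * (n - 1) * pn1 ** 2)
    return x, w

x, w = gauss_lobatto(4)          # nodes -1, -1/sqrt(5), 1/sqrt(5), 1
integral = np.dot(w, x ** 4)     # exact: integral of x^4 over [-1, 1] = 2/5
```

    Including the endpoints is what makes Lobatto rules convenient for piecewise refinement along a ray: adjacent segments share nodes.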

  18. Modelling the effect of structural QSAR parameters on skin penetration using genetic programming

    NASA Astrophysics Data System (ADS)

    Chung, K. K.; Do, D. Q.

    2010-09-01

    In order to model relationships between chemical structures and biological effects in quantitative structure-activity relationship (QSAR) data, an alternative artificial intelligence technique, genetic programming (GP), was investigated and compared with the traditional statistical method. GP, whose primary advantage is the generation of explicit mathematical equations, was employed to model QSAR data and to identify the most important molecular descriptors. The models produced by GP agreed with the statistical results, and the most predictive GP models were significantly better than the statistical models according to ANOVA. Artificial intelligence techniques have recently been applied widely to analyse QSAR data; with its capability of generating mathematical equations, GP can be considered an effective and efficient method for modelling such data.

  19. Smart Utilization of Tertiary Instructional Modes

    ERIC Educational Resources Information Center

    Hamilton, John; Tee, Singwhat

    2010-01-01

    This empirical research surveys first year tertiary business students across different campuses regarding their perceived views concerning traditional, blended and flexible instructional approaches. A structural equation modeling approach shows traditional instructional modes deliver lower levels of student-perceived learning quality, learning…

  20. A neural net-based approach to software metrics

    NASA Technical Reports Server (NTRS)

    Boetticher, G.; Srinivas, Kankanahalli; Eichmann, David A.

    1992-01-01

    Software metrics provide an effective method for characterizing software. Metrics have traditionally been composed through the definition of an equation. This approach is limited by the requirement that all the interrelationships among all the parameters be fully understood. This paper explores an alternative, neural network approach to modeling metrics. Experiments performed on two widely accepted metrics, McCabe and Halstead, indicate that the approach is sound, thus serving as the groundwork for further exploration into the analysis and design of software metrics.
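
    The equation-defined metrics mentioned above are simple; McCabe's cyclomatic complexity, for example, is V(G) = E - N + 2P for a control-flow graph with E edges, N nodes and P connected components:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's cyclomatic complexity of a control-flow graph:
    V(G) = E - N + 2P, the number of linearly independent paths."""
    return edges - nodes + 2 * components

# A function with a single if/else: 4 nodes (entry, two branches, exit)
# and 4 edges, so V(G) = 2, i.e., two independent paths to test
v = cyclomatic_complexity(edges=4, nodes=4)
```

    The neural-network alternative explored in the paper sidesteps exactly this step of writing the equation down by learning the mapping from program features instead.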

  1. Double-resolution electron holography with simple Fourier transform of fringe-shifted holograms.

    PubMed

    Volkov, V V; Han, M G; Zhu, Y

    2013-11-01

    We propose a fringe-shifting holographic method with an appropriate image-wave recovery algorithm leading to an exact solution of the holographic equations. With this new method, the complex object image wave recovered from holograms shows far fewer of the traditional artifacts caused by the autocorrelation band present in practically all Fourier-transformed holograms. The new analytical solutions make possible double-resolution electron holography free from autocorrelation-band artifacts and thus push the limits of phase resolution. The new image-wave recovery algorithm uses the popular Fourier solution of the side-band-pass filter technique, while the fringe-shifting holographic method is simple to implement in practice. Published by Elsevier B.V.

  2. The Boltzmann equation in the difference formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szoke, Abraham; Brooks III, Eugene D.

    2015-05-06

    First we recall the assumptions that are needed for the validity of the Boltzmann equation and for the validity of the compressible Euler equations. We then present the difference formulation of these equations and make a connection with the time-honored Chapman-Enskog expansion. We discuss the hydrodynamic limit and calculate the thermal conductivity of a monatomic gas, using a simplified approximation for the collision term. Our formulation is more consistent and simpler than the traditional derivation.

  3. A third-order gas-kinetic CPR method for the Euler and Navier-Stokes equations on triangular meshes

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Li, Qibing; Fu, Song; Wang, Z. J.

    2018-06-01

    A third-order accurate gas-kinetic scheme based on the correction procedure via reconstruction (CPR) framework is developed for the Euler and Navier-Stokes equations on triangular meshes. The scheme combines the accuracy and efficiency of the CPR formulation with the multidimensional characteristics and robustness of the gas-kinetic flux solver. Compared with high-order finite volume gas-kinetic methods, the current scheme is more compact and efficient because it avoids wide stencils on unstructured meshes. Unlike the traditional CPR method, where the inviscid and viscous terms are treated differently, the inviscid and viscous fluxes in the current scheme are coupled and computed uniformly through the kinetic evolution model. In addition, the present scheme adopts a fully coupled spatial and temporal gas distribution function for the flux evaluation, achieving high-order accuracy in both space and time within a single step. Numerical tests with a wide range of flow problems, from nearly incompressible to supersonic flows with strong shocks, for both inviscid and viscous problems, demonstrate the high accuracy and efficiency of the present scheme.

  4. SPH-based numerical simulations of flow slides in municipal solid waste landfills.

    PubMed

    Huang, Yu; Dai, Zili; Zhang, Weijie; Huang, Maosong

    2013-03-01

    Most municipal solid waste (MSW) is disposed of in landfills. Over the past few decades, catastrophic flow slides have occurred in MSW landfills around the world, causing substantial economic damage and occasionally resulting in human victims. It is therefore important to predict the run-out, velocity and depth of such slides in order to provide adequate mitigation and protection measures. To overcome the limitations of traditional numerical methods for modelling flow slides, a mesh-free particle method known as smoothed particle hydrodynamics (SPH) is introduced in this paper. The Navier-Stokes equations were adopted as the governing equations and a Bingham model was adopted to analyse the relationship between material stress rates and particle motion velocity. The accuracy of the model is assessed using a series of verifications, and then flow slides that occurred in landfills located in Sarajevo and Bandung were simulated to extend its applications. The simulated results match the field data well and highlight the capability of the proposed SPH modelling method to simulate such complex phenomena as flow slides in MSW landfills.

  5. Intrinsic noise analyzer: a software package for the exploration of stochastic biochemical kinetics using the system size expansion.

    PubMed

    Thomas, Philipp; Matuschek, Hannes; Grima, Ramon

    2012-01-01

    The accepted stochastic descriptions of biochemical dynamics under well-mixed conditions are given by the Chemical Master Equation and the Stochastic Simulation Algorithm, which are equivalent. The latter is a Monte-Carlo method, which, despite enjoying broad availability in a large number of existing software packages, is computationally expensive due to the huge amounts of ensemble averaging required for obtaining accurate statistical information. The former is a set of coupled differential-difference equations for the probability of the system being in any one of the possible mesoscopic states; these equations are typically computationally intractable because of the inherently large state space. Here we introduce the software package intrinsic Noise Analyzer (iNA), which allows for systematic analysis of stochastic biochemical kinetics by means of van Kampen's system size expansion of the Chemical Master Equation. iNA is platform independent and supports the popular SBML format natively. The present implementation is the first to adopt a complementary approach that combines state-of-the-art analysis tools using the computer algebra system Ginac with traditional methods of stochastic simulation. iNA integrates two approximation methods based on the system size expansion, the Linear Noise Approximation and effective mesoscopic rate equations, which to date have not been available to non-expert users, into an easy-to-use graphical user interface. In particular, the present methods allow for quick approximate analysis of time-dependent mean concentrations, variances, covariances and correlation coefficients, which typically outperforms stochastic simulations. These analytical tools are complemented by automated multi-core stochastic simulations with direct statistical evaluation and visualization. 
We showcase iNA's performance by using it to explore the stochastic properties of cooperative and non-cooperative enzyme kinetics and a gene network associated with circadian rhythms. The software iNA is freely available as executable binaries for Linux, MacOSX and Microsoft Windows, as well as the full source code under an open source license.
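
    The Stochastic Simulation Algorithm referred to above can be sketched for the simplest birth-death process; this toy example (not iNA's implementation) shows the SSA's time-averaged mean agreeing with the deterministic rate-equation steady state.

```python
import numpy as np

def ssa_birth_death(k, gamma, t_end, rng):
    """Gillespie SSA for 0 -> X at rate k and X -> 0 at rate gamma*x.
    Returns the time-averaged copy number over [0, t_end]."""
    t, x, acc = 0.0, 0, 0.0
    while True:
        a0 = k + gamma * x               # total propensity
        dt = rng.exponential(1.0 / a0)   # waiting time to the next reaction
        if t + dt >= t_end:
            acc += x * (t_end - t)       # close out the averaging window
            break
        acc += x * dt
        t += dt
        x += 1 if rng.random() * a0 < k else -1

    return acc / t_end

rng = np.random.default_rng(1)
mean_x = ssa_birth_death(k=50.0, gamma=1.0, t_end=2000.0, rng=rng)
# Deterministic rate equation dx/dt = k - gamma*x has steady state k/gamma = 50
```

    The long run time needed just to pin down a single mean illustrates the ensemble-averaging cost that motivates the system size expansion methods iNA provides.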

  6. Intrinsic Noise Analyzer: A Software Package for the Exploration of Stochastic Biochemical Kinetics Using the System Size Expansion

    PubMed Central

    Grima, Ramon

    2012-01-01

    The accepted stochastic descriptions of biochemical dynamics under well-mixed conditions are given by the Chemical Master Equation and the Stochastic Simulation Algorithm, which are equivalent. The latter is a Monte-Carlo method, which, despite enjoying broad availability in a large number of existing software packages, is computationally expensive due to the huge amounts of ensemble averaging required for obtaining accurate statistical information. The former is a set of coupled differential-difference equations for the probability of the system being in any one of the possible mesoscopic states; these equations are typically computationally intractable because of the inherently large state space. Here we introduce the software package intrinsic Noise Analyzer (iNA), which allows for systematic analysis of stochastic biochemical kinetics by means of van Kampen's system size expansion of the Chemical Master Equation. iNA is platform independent and supports the popular SBML format natively. The present implementation is the first to adopt a complementary approach that combines state-of-the-art analysis tools using the computer algebra system Ginac with traditional methods of stochastic simulation. iNA integrates two approximation methods based on the system size expansion, the Linear Noise Approximation and effective mesoscopic rate equations, which to date have not been available to non-expert users, into an easy-to-use graphical user interface. In particular, the present methods allow for quick approximate analysis of time-dependent mean concentrations, variances, covariances and correlation coefficients, which typically outperforms stochastic simulations. These analytical tools are complemented by automated multi-core stochastic simulations with direct statistical evaluation and visualization. 
We showcase iNA’s performance by using it to explore the stochastic properties of cooperative and non-cooperative enzyme kinetics and a gene network associated with circadian rhythms. The software iNA is freely available as executable binaries for Linux, MacOSX and Microsoft Windows, as well as the full source code under an open source license. PMID:22723865

  7. Conservational PDF Equations of Turbulence

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2010-01-01

    Recently we have revisited the traditional probability density function (PDF) equations for the velocity and species in turbulent incompressible flows. They are all unclosed due to the appearance of various conditional means, which are modeled empirically. However, we have observed that it is possible to establish a closed velocity PDF equation and a closed joint velocity-species PDF equation through conditions derived from the integral form of the Navier-Stokes equations. Although, in theory, the resulting PDF equations are neither general nor unique, they nevertheless lead to the exact transport equations for the first moment as well as all higher-order moments. We refer to these PDF equations as the conservational PDF equations. This observation is worth further exploration for its validity and for CFD applications.

  8. Shelf Life Prediction for Canned Gudeg using Accelerated Shelf Life Testing (ASLT) Based on Arrhenius Method

    NASA Astrophysics Data System (ADS)

    Nurhayati, R.; Rahayu NH, E.; Susanto, A.; Khasanah, Y.

    2017-04-01

    Gudeg is a traditional food from Yogyakarta. It consists of jackfruit, chicken, egg and coconut milk. Gudeg generally has a short shelf life. Canning, or commercial sterilization, is one way to extend the shelf life of gudeg. The aim of this research is to predict the shelf life of Andrawinaloka canned gudeg with the Accelerated Shelf Life Testing (ASLT) method based on the Arrhenius model. Canned gudeg was stored at three different temperatures (37, 50 and 60°C) for two months. The thiobarbituric acid (TBA) value, taken as the critical quality attribute, was measured every 7 days. The Arrhenius model was applied with both zero-order and first-order kinetic equations. The analysis showed that the zero-order equation can be used to estimate the shelf life of canned gudeg. The shelf life of Andrawinaloka canned gudeg is predicted to be 21 months at 30°C and 24 months at 25°C.
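
    The ASLT arithmetic can be sketched as follows, assuming zero-order TBA growth and hypothetical rate constants (not the study's fitted values): fit ln k against 1/T, then extrapolate the rate to the storage temperature.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_fit(temps_c, k_values):
    """Fit ln k = ln A - Ea/(R*T); return (A, Ea in J/mol)."""
    inv_t = 1.0 / (np.array(temps_c) + 273.15)
    slope, intercept = np.polyfit(inv_t, np.log(k_values), 1)
    return np.exp(intercept), -slope * R

def shelf_life(a, ea, temp_c, critical_change):
    """Zero-order kinetics: days for the quality index (e.g., TBA)
    to change by critical_change at storage temperature temp_c."""
    k = a * np.exp(-ea / (R * (temp_c + 273.15)))
    return critical_change / k

# Hypothetical zero-order TBA rate constants from 37, 50 and 60 C storage
temps = [37.0, 50.0, 60.0]
ks = [0.010, 0.028, 0.060]       # (TBA units)/day, illustrative only
a, ea = arrhenius_fit(temps, ks)
months_25 = shelf_life(a, ea, 25.0, critical_change=3.0) / 30.0
```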

  9. Flow Equation Approach to the Statistics of Nonlinear Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Marston, J. B.; Hastings, M. B.

    2005-03-01

    The probability distribution function of non-linear dynamical systems is governed by a linear framework that resembles quantum many-body theory, in which stochastic forcing and/or averaging over initial conditions play the role of a non-zero ℏ. Besides the well-known Fokker-Planck approach, there is a related Hopf functional method (Uriel Frisch, Turbulence: The Legacy of A. N. Kolmogorov, Cambridge University Press, 1995, chapter 9.5); in both formalisms, zero modes of linear operators describe the stationary non-equilibrium statistics. To access the statistics, we investigate the method of continuous unitary transformations (S. D. Glazek and K. G. Wilson, Phys. Rev. D 48, 5863 (1993); Phys. Rev. D 49, 4214 (1994)), also known as the flow equation approach (F. Wegner, Ann. Phys. 3, 77 (1994)), suitably generalized to the diagonalization of non-Hermitian matrices. Comparison to the more traditional cumulant expansion method is illustrated with low-dimensional attractors. The treatment of high-dimensional dynamical systems is also discussed.

  10. Extending the capability of GYRE to calculate tidally forced stellar oscillations

    NASA Astrophysics Data System (ADS)

    Guo, Zhao; Gies, Douglas R.

    2016-01-01

    Tidally forced oscillations have been observed in many eccentric binary systems, such as KOI-54 and many other 'heartbeat stars'. The tidal response of the star can be calculated by solving a revised set of stellar oscillation equations. The open-source stellar oscillation code GYRE (Townsend & Teitler 2013) can be used to solve the free stellar oscillation equations in both adiabatic and non-adiabatic cases. It uses a novel matrix exponential method which avoids many difficulties of the classical shooting and relaxation methods. The new version also includes the effect of rotation in the traditional approximation. After showing the code flow of GYRE, we revise its subroutines and extend its capability to calculate tidally forced oscillations in both adiabatic and non-adiabatic cases, following the procedure in the CAFein code (Valsecchi et al. 2013). Finally, we compare the tidal eigenfunctions with those calculated from CAFein. More details of the revision and a simple version of the code in MATLAB can be obtained upon request.

  11. Evolutionary design of a generalized polynomial neural network for modelling sediment transport in clean pipes

    NASA Astrophysics Data System (ADS)

    Ebtehaj, Isa; Bonakdari, Hossein; Khoshbin, Fatemeh

    2016-10-01

    To determine the minimum velocity required to prevent sedimentation, six different models were proposed to estimate the densimetric Froude number (Fr). The dimensionless parameters of the models were applied along with a combination of the group method of data handling (GMDH) and the multi-target genetic algorithm. Therefore, an evolutionary design of the generalized GMDH was developed using a genetic algorithm with a specific coding scheme so as not to restrict connectivity configurations to abutting layers only. In addition, a new preserving mechanism by the multi-target genetic algorithm was utilized for the Pareto optimization of GMDH. The results indicated that the most accurate model was the one that used the volumetric concentration of sediment (CV), relative hydraulic radius (d/R), dimensionless particle number (Dgr) and overall sediment friction factor (λs) in estimating Fr. Furthermore, the comparison between the proposed method and traditional equations indicated that GMDH is more accurate than existing equations.
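
    The densimetric Froude number estimated by these models has the standard definition Fr = V / sqrt(g(s - 1)d); the sketch below uses illustrative values, not the paper's data.

```python
import numpy as np

def densimetric_froude(v, d, s=2.65, g=9.81):
    """Densimetric Froude number Fr = V / sqrt(g*(s - 1)*d), where
    V is the flow velocity (m/s), d the particle diameter (m) and
    s the sediment specific gravity."""
    return v / np.sqrt(g * (s - 1.0) * d)

# Self-cleansing check: 0.6 m/s flow over 0.5 mm sand (illustrative)
fr = densimetric_froude(v=0.6, d=0.5e-3)
```

    In the paper's models the limiting Fr additionally depends on parameters such as the volumetric sediment concentration CV and relative hydraulic radius d/R, which the GMDH network learns from data.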

  12. Statistics of Macroturbulence from Flow Equations

    NASA Astrophysics Data System (ADS)

    Marston, Brad; Iadecola, Thomas; Qi, Wanming

    2012-02-01

Probability distribution functions of stochastically driven and frictionally damped fluids are governed by a linear framework that resembles quantum many-body theory. Besides the Fokker-Planck approach, there is a closely related Hopf functional method [Ookie Ma and J. B. Marston, J. Stat. Phys. Th. Exp. P10007 (2005)]; in both formalisms, zero modes of linear operators describe the stationary non-equilibrium statistics. To access the statistics, we generalize the flow equation approach [F. Wegner, Ann. Phys. 3, 77 (1994)] (also known as the method of continuous unitary transformations [S. D. Glazek and K. G. Wilson, Phys. Rev. D 48, 5863 (1993); Phys. Rev. D 49, 4214 (1994)]) to find the zero mode. We test the approach using a prototypical model of geophysical and astrophysical flows on a rotating sphere that spontaneously organizes into a coherent jet. Good agreement is found with low-order equal-time statistics accumulated by direct numerical simulation, the traditional method. Different choices for the generators of the continuous transformations, and for closure approximations of the operator algebra, are discussed.

  13. Methods of Attenuation Correction for Dual-Wavelength and Dual-Polarization Weather Radar Data

    NASA Technical Reports Server (NTRS)

    Meneghini, R.; Liao, L.

    2007-01-01

In writing the integral equations for the median mass diameter and number concentration, or comparable parameters of the raindrop size distribution, it is apparent that the forms of the equations for dual-polarization and dual-wavelength radar data are identical when attenuation effects are included. The differential backscattering and extinction coefficients appear in both sets of equations: for the dual-polarization equations, the differences are taken with respect to polarization at a fixed frequency, while for the dual-wavelength equations, the differences are taken with respect to frequency at a fixed polarization. An alternative to the integral equation formulation is one based on the k-Z (attenuation coefficient-radar reflectivity factor) parameterization. This technique was originally developed for attenuating single-wavelength radars, and a variation of it has been applied to the TRMM Precipitation Radar (PR) data. Extensions of this method have also been applied to dual-polarization data. In fact, it is not difficult to show that nearly identical equations are applicable as well to dual-wavelength radar data. In this case, the equations for median mass diameter and number concentration take the form of coupled, but non-integral, equations. Differences between this and the integral equation formulation are a consequence of the different ways in which attenuation correction is performed under the two formulations. For both techniques, the equations can be solved either forward from the radar outward or backward from the final range gate toward the radar. Although the forward-going solutions tend to be unstable as the attenuation out to the range of interest becomes large in some sense, an independent estimate of path attenuation is not required. This is analogous to the case of an attenuating single-wavelength radar, where the forward solution to the Hitschfeld-Bordan equation becomes unstable as the attenuation increases.
To circumvent this problem, the equations can be expressed in the form of a final-value problem so that the recursion begins at the far range gate and proceeds inward towards the radar. Solving the problem in this way traditionally requires estimates of path attenuation to the final gate: in the case of orthogonal linear polarizations, the attenuations at horizontal and vertical polarizations (same frequency) are required while in the dual-wavelength case, attenuations at the two frequencies (same polarization) are required.
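The k-Z parameterization can be illustrated with a minimal gate-by-gate forward correction, solved from the radar outward. The power-law coefficients below are placeholders rather than values from the paper, and the instability of forward solutions at large path attenuation noted in the abstract applies to this form.

```python
import numpy as np

def forward_attenuation_correction(z_meas_db, dr_km, alpha=3.4e-4, beta=0.78):
    """Forward (radar-outward) correction using a k-Z power law.

    k = alpha * Z**beta is the one-way specific attenuation in dB/km, with Z
    in linear units (mm^6 m^-3). alpha and beta are illustrative only; real
    coefficients depend on frequency and the drop size distribution.
    """
    z_corr_db = np.empty_like(z_meas_db)
    path_atten_db = 0.0                      # two-way path-integrated attenuation so far
    for i, z_db in enumerate(z_meas_db):
        z_corr_db[i] = z_db + path_atten_db  # restore attenuation up to this gate
        z_lin = 10.0 ** (z_corr_db[i] / 10.0)
        k = alpha * z_lin ** beta            # one-way attenuation at this gate
        path_atten_db += 2.0 * k * dr_km     # two-way increment for gates beyond
    return z_corr_db

z = np.array([30.0, 35.0, 40.0, 38.0])       # measured reflectivity, dBZ
zc = forward_attenuation_correction(z, dr_km=0.25)
```

The backward (final-value) form mentioned in the abstract instead anchors the recursion at the far gate using an independent path-attenuation estimate, trading the need for that estimate for numerical stability.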

  14. Gastric residual volume (GRV) and gastric contents measurement by refractometry.

    PubMed

    Chang, Wei-Kuo; McClave, Stephen A; Hsieh, Chung-Bao; Chao, You-Chen

    2007-01-01

    Traditional use of gastric residual volumes (GRVs), obtained by aspiration from a nasogastric tube, is inaccurate and cannot differentiate components of the gastric contents (gastric secretion vs delivered formula). The use of refractometry and 3 mathematical equations has been proposed as a method to calculate the formula concentration, GRV, and formula volume. In this paper, we have validated these mathematical equations so that they can be implemented in clinical practice. Each of 16 patients receiving a nasogastric tube had 50 mL of water followed by 100 mL of dietary formula (Osmolite HN, Abbott Laboratories, Columbus, OH) infused into the stomach. After mixing, gastric content was aspirated for the first Brix value (BV) measurement by refractometry. Then, 50 mL of water was infused into the stomach and a second BV was measured. The procedure of infusion of dietary formula (100 mL) and then water (50 mL) was repeated and followed by subsequent BV measurement. The same procedure was performed in an in vitro experiment. Formula concentration, GRV, and formula volume were calculated from the derived mathematical equations. The formula concentrations, GRVs, and formula volumes calculated by using refractometry and the mathematical equations were close to the true values obtained from both in vivo and in vitro validation experiments. Using this method, measurement of the BV of gastric contents is simple, reproducible, and inexpensive. Refractometry and the derived mathematical equations may be used to measure formula concentration, GRV, and formula volume, and also to serve as a tool for monitoring the gastric contents of patients receiving nasogastric feeding.
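The paper's exact equations are not reproduced in the abstract, but the underlying dilution principle can be sketched. Assuming complete mixing, that water has a Brix value of zero, and that gastric secretions contribute negligibly to the Brix value, infusing a known water volume dilutes the Brix reading in proportion to the unknown residual volume. The function names and these zero-Brix assumptions are illustrative, not the validated equations themselves.

```python
def gastric_volume_from_brix(bv_before, bv_after, water_ml):
    """Estimate gastric residual volume from the Brix-value dilution caused by
    infusing a known water volume: bv_after = bv_before * V / (V + water_ml),
    assuming complete mixing and zero Brix for water (sketch only)."""
    if bv_after >= bv_before:
        raise ValueError("Brix value must decrease after water infusion")
    return water_ml * bv_after / (bv_before - bv_after)

def formula_volume(grv_ml, bv_gastric, bv_formula):
    """Portion of the residual volume attributable to formula, assuming
    secretions have ~zero Brix and the full-strength formula Brix is known."""
    return grv_ml * bv_gastric / bv_formula

# If a 50 mL water infusion drops the Brix value from 10 to 8, the
# pre-infusion residual volume under these assumptions is 200 mL.
grv = gastric_volume_from_brix(bv_before=10.0, bv_after=8.0, water_ml=50.0)
```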

  15. Peridynamic Multiscale Finite Element Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Timothy; Bond, Stephen D.; Littlewood, David John

The problem of computing quantum-accurate design-scale solutions to mechanics problems is rich with applications and serves as the background to modern multiscale science research. The problem can be broken into component problems comprised of communicating across adjacent scales, which when strung together create a pipeline for information to travel from quantum scales to design scales. Traditionally, this involves connections between a) quantum electronic structure calculations and molecular dynamics and between b) molecular dynamics and local partial differential equation models at the design scale. The second step, b), is particularly challenging since the appropriate scales of molecular dynamics and local partial differential equation models do not overlap. The peridynamic model for continuum mechanics provides an advantage in this endeavor, as the basic equations of peridynamics are valid at a wide range of scales limiting from the classical partial differential equation models valid at the design scale to the scale of molecular dynamics. In this work we focus on the development of multiscale finite element methods for the peridynamic model, in an effort to create a mathematically consistent channel for microscale information to travel from the upper limits of the molecular dynamics scale to the design scale. In particular, we first develop a Nonlocal Multiscale Finite Element Method which solves the peridynamic model at multiple scales to include microscale information at the coarse scale. We then consider a method that solves a fine-scale peridynamic model to build element-support basis functions for a coarse-scale local partial differential equation model, called the Mixed Locality Multiscale Finite Element Method. 
Given decades of research and development into finite element codes for the local partial differential equation models of continuum mechanics, there is a strong desire to couple local and nonlocal models to leverage the speed and state of the art of local models with the flexibility and accuracy of the nonlocal peridynamic model. In the mixed locality method this coupling occurs across scales, so that the nonlocal model can be used to communicate material heterogeneity at scales inappropriate to local partial differential equation models. Additionally, the computational burden of the weak form of the peridynamic model is reduced dramatically by only requiring that the model be solved on local patches of the simulation domain, which may be computed in parallel, taking advantage of the heterogeneous nature of next generation computing platforms. Additionally, we present a novel Galerkin framework, the 'Ambulant Galerkin Method', which represents a first step towards a unified mathematical analysis of local and nonlocal multiscale finite element methods, and whose future extension will allow the analysis of multiscale finite element methods that mix models across scales under certain assumptions of the consistency of those models.

  16. An evaluation of collision models in the Method of Moments for rarefied gas problems

    NASA Astrophysics Data System (ADS)

    Emerson, David; Gu, Xiao-Jun

    2014-11-01

The Method of Moments offers an attractive approach for solving gaseous transport problems that are beyond the limit of validity of the Navier-Stokes-Fourier equations. Recent work has demonstrated the capability of the regularized 13 and 26 moment equations for solving problems when the Knudsen number, Kn (the ratio of the mean free path of the gas to a typical length scale of interest), is in the range 0.1 to 1.0, the so-called transition regime. In comparison to numerical solutions of the Boltzmann equation, the Method of Moments has captured, both qualitatively and quantitatively, the results of classical test problems in kinetic theory, e.g. velocity slip in Kramers' problem, temperature jump in Knudsen layers, the Knudsen minimum, etc. However, most of these results have been obtained for Maxwell molecules, where molecules repel each other according to an inverse fifth-power rule. Recent work has incorporated more traditional collision models such as BGK, the S-model, and ES-BGK, the latter being important for thermal problems where the Prandtl number can vary. We are currently investigating the impact of these collision models on fundamental low-speed problems of particular interest to micro-scale flows; these will be discussed and evaluated in the presentation. Supported by the Engineering and Physical Sciences Research Council under Grant EP/I011927/1 and CCP12.
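The BGK model mentioned above replaces the full Boltzmann collision integral with relaxation toward a local equilibrium at a single rate, which is why it fixes the Prandtl number (ES-BGK was introduced to relax that constraint). A minimal explicit relaxation step, with illustrative discrete-velocity data:

```python
import numpy as np

def bgk_relaxation_step(f, f_eq, dt, tau):
    """One explicit time step of the BGK collision model,
    df/dt = -(f - f_eq) / tau, relaxing the velocity distribution f
    toward the local equilibrium f_eq on the collision time scale tau."""
    return f + (dt / tau) * (f_eq - f)

# The BGK operator conserves mass: when f and f_eq carry the same density
# (zeroth moment), the collision step leaves that moment unchanged.
f = np.array([0.1, 0.4, 0.3, 0.2])          # illustrative distribution
f_eq = np.array([0.25, 0.25, 0.25, 0.25])   # illustrative equilibrium
f_new = bgk_relaxation_step(f, f_eq, dt=0.1, tau=0.5)
```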

  17. Interpreting the Coulomb-field approximation for generalized-Born electrostatics using boundary-integral equation theory.

    PubMed

    Bardhan, Jaydeep P

    2008-10-14

    The importance of molecular electrostatic interactions in aqueous solution has motivated extensive research into physical models and numerical methods for their estimation. The computational costs associated with simulations that include many explicit water molecules have driven the development of implicit-solvent models, with generalized-Born (GB) models among the most popular of these. In this paper, we analyze a boundary-integral equation interpretation for the Coulomb-field approximation (CFA), which plays a central role in most GB models. This interpretation offers new insights into the nature of the CFA, which traditionally has been assessed using only a single point charge in the solute. The boundary-integral interpretation of the CFA allows the use of multiple point charges, or even continuous charge distributions, leading naturally to methods that eliminate the interpolation inaccuracies associated with the Still equation. This approach, which we call boundary-integral-based electrostatic estimation by the CFA (BIBEE/CFA), is most accurate when the molecular charge distribution generates a smooth normal displacement field at the solute-solvent boundary, and CFA-based GB methods perform similarly. Conversely, both methods are least accurate for charge distributions that give rise to rapidly varying or highly localized normal displacement fields. Supporting this analysis are comparisons of the reaction-potential matrices calculated using GB methods and boundary-element-method (BEM) simulations. An approximation similar to BIBEE/CFA exhibits complementary behavior, with superior accuracy for charge distributions that generate rapidly varying normal fields and poorer accuracy for distributions that produce smooth fields. This approximation, BIBEE by preconditioning (BIBEE/P), essentially generates initial guesses for preconditioned Krylov-subspace iterative BEMs. 
Thus, iterative refinement of the BIBEE/P results recovers the BEM solution; excellent agreement is obtained in only a few iterations. The boundary-integral-equation framework may also provide a means to derive rigorous results explaining how the empirical correction terms in many modern GB models significantly improve accuracy despite their simple analytical forms.

  18. An analysis code for the Rapid Engineering Estimation of Momentum and Energy Losses (REMEL)

    NASA Technical Reports Server (NTRS)

    Dechant, Lawrence J.

    1994-01-01

Nonideal behavior has traditionally been modeled by defining an efficiency (a comparison between actual and isentropic processes) and specifying it by empirical or heuristic methods. With the increasing complexity of aeropropulsion system designs, the reliability of these more traditional methods is uncertain. Computational fluid dynamics (CFD) and experimental methods can provide this information but are expensive in terms of human resources, cost, and time. This report discusses an alternative to empirical and CFD methods, applying classical analytical techniques and a simplified flow model to provide rapid engineering estimates of these losses based on the steady, quasi-one-dimensional governing equations, including viscous and heat transfer terms (estimated by Reynolds' analogy). A preliminary verification of REMEL has been compared with full Navier-Stokes (FNS) and CFD boundary layer computations for several high-speed inlet and forebody designs. The current methods compare quite well with results from the more complex methods, and solutions compare very well with simple degenerate and asymptotic results such as Fanno flow, isentropic variable-area flow, and a newly developed solution for combined variable-area duct flow with friction. These comparisons may offer an alternative to traditional and CFD-intensive methods for the rapid estimation of viscous and heat transfer losses in aeropropulsion systems.
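One of the degenerate solutions mentioned, isentropic variable-area flow, reduces to the classical area-Mach relation, which makes a convenient check case. This is the textbook relation, not REMEL's implementation:

```python
def area_ratio(mach, gamma=1.4):
    """Isentropic area-Mach relation for quasi-one-dimensional flow:
    A/A* = (1/M) * [(2/(gamma+1)) * (1 + (gamma-1)/2 * M^2)]^((gamma+1)/(2*(gamma-1)))."""
    term = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * mach**2)
    return term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / mach

# Classical tabulated value for air (gamma = 1.4): A/A* = 1.6875 at M = 2.
ratio = area_ratio(2.0)
```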

  19. System identification of a small low-cost unmanned aerial vehicle using flight data from low-cost sensors

    NASA Astrophysics Data System (ADS)

    Hoffer, Nathan Von

Remote sensing has traditionally been done with satellites and manned aircraft. While these methods can yield useful scientific data, satellites and manned aircraft have limitations in data frequency, processing time, and real-time re-tasking. Small low-cost unmanned aerial vehicles (UAVs) provide greater possibilities for personal scientific research than traditional remote sensing platforms. Precision aerial data requires an accurate vehicle dynamics model for controller development, robust flight characteristics, and fault tolerance. One method of developing a model is system identification (system ID). In this thesis, system ID of a small low-cost fixed-wing T-tail UAV is conducted. The linearized longitudinal equations of motion are derived from first principles. Foundations of Recursive Least Squares (RLS) are presented, along with RLS with an Error Filtering Online Learning scheme (EFOL). Sensors, data collection, data consistency checking, and data processing are described. Batch least squares (BLS) and BLS with EFOL are used to identify aerodynamic coefficients of the UAV. Results of these two methods with flight data are discussed.
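The batch least squares step can be sketched with the standard estimator theta_hat = (X^T X)^{-1} X^T y; the regressor interpretation in the comment is illustrative, and this is not the thesis code.

```python
import numpy as np

def batch_least_squares(X, y):
    """Batch least-squares estimate of theta in y = X @ theta + noise.
    np.linalg.lstsq solves the same normal equations but more stably
    than forming (X^T X)^{-1} explicitly."""
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta

# Synthetic check: recover known parameters from noiseless "flight data"
# (columns might stand for angle of attack, elevator deflection, pitch rate).
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
theta_true = np.array([0.8, -1.2, 0.05])
y = X @ theta_true
theta_hat = batch_least_squares(X, y)
err = float(np.max(np.abs(theta_hat - theta_true)))
```

With real flight data the residuals are not zero, and the recursive (RLS) form updates the same estimate one sample at a time instead of in one batch.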

  20. Improving the evaluation of therapeutic interventions in multiple sclerosis: the role of new psychometric methods.

    PubMed

    Hobart, J; Cano, S

    2009-02-01

    In this monograph we examine the added value of new psychometric methods (Rasch measurement and Item Response Theory) over traditional psychometric approaches by comparing and contrasting their psychometric evaluations of existing sets of rating scale data. We have concentrated on Rasch measurement rather than Item Response Theory because we believe that it is the more advantageous method for health measurement from a conceptual, theoretical and practical perspective. Our intention is to provide an authoritative document that describes the principles of Rasch measurement and the practice of Rasch analysis in a clear, detailed, non-technical form that is accurate and accessible to clinicians and researchers in health measurement. A comparison was undertaken of traditional and new psychometric methods in five large sets of rating scale data: (1) evaluation of the Rivermead Mobility Index (RMI) in data from 666 participants in the Cannabis in Multiple Sclerosis (CAMS) study; (2) evaluation of the Multiple Sclerosis Impact Scale (MSIS-29) in data from 1725 people with multiple sclerosis; (3) evaluation of test-retest reliability of MSIS-29 in data from 150 people with multiple sclerosis; (4) examination of the use of Rasch analysis to equate scales purporting to measure the same health construct in 585 people with multiple sclerosis; and (5) comparison of relative responsiveness of the Barthel Index and Functional Independence Measure in data from 1400 people undergoing neurorehabilitation. Both Rasch measurement and Item Response Theory are conceptually and theoretically superior to traditional psychometric methods. Findings from each of the five studies show that Rasch analysis is empirically superior to traditional psychometric methods for evaluating rating scales, developing rating scales, analysing rating scale data, understanding and measuring stability and change, and understanding the health constructs we seek to quantify. 
There is considerable added value in using Rasch analysis rather than traditional psychometric methods in health measurement. Future research directions include the need to reproduce our findings in a range of clinical populations, detailed head-to-head comparisons of Rasch analysis and Item Response Theory, and the application of Rasch analysis to clinical practice.

  1. A Non-Incompressible Non-Boussinesq (NINB) framework for studying atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Yan, C.; Archer, C. L.; Xie, S.; Ghaisas, N.

    2015-12-01

    The incompressible assumption is widely used for studying the turbulent atmospheric boundary layer (ABL) and is generally accepted when the Mach number < ~0.3 (velocity < ~100 m/s). Since the tips of modern wind turbine blades can reach and exceed this threshold, neglecting air compressibility will introduce errors. In addition, if air incompressibility does not hold, then the Boussinesq approximation, by which air density is treated as a constant except in the gravity term of the Navier-Stokes equation, is also invalidated. Here, we propose a new theoretical framework, called NINB for Non-Incompressible Non-Boussinesq, in which air is not considered incompressible and air density is treated as a non-turbulent 4D variable. First, the NINB mass, momentum, and energy conservation equations are developed using Reynolds averaging. Second, numerical simulations of the NINB equations, coupled with a k-epsilon turbulence model, are performed with the finite-volume method. Wind turbines are modeled with the actuator-line model using SOWFA (Software for Offshore/onshore Wind Farm Applications). Third, NINB results are compared with the traditional incompressible buoyant simulations performed by SOWFA with the same set up. The results show differences between NINB and traditional simulations in the neutral atmosphere with a wind turbine. The largest differences in wind speed (up to 1 m/s), turbulent kinetic energy (~10%), dissipation rate (~5%), and shear stress (~10%) occur near the turbine tip region. The power generation differences are 5-15% (depending on setup). These preliminary results suggest that compressibility effects are non-negligible around wind turbines and should be taken into account when forecasting wind power. Since only a few extra terms are introduced, the NINB framework may be an alternative to the traditional incompressible Boussinesq framework for studying the turbulent ABL in general (i.e., without turbines) in the absence of shock waves.

  2. New Generalized Equation for Predicting Maximal Oxygen Uptake (from the Fitness Registry and the Importance of Exercise National Database).

    PubMed

    Kokkinos, Peter; Kaminsky, Leonard A; Arena, Ross; Zhang, Jiajia; Myers, Jonathan

    2017-08-15

Impaired cardiorespiratory fitness (CRF) is closely linked to chronic illness and associated with adverse events. The American College of Sports Medicine (ACSM) regression equations (ACSM equations) developed to estimate oxygen uptake have known limitations leading to well-documented overestimation of CRF, especially at higher work rates. Thus, there is a need to explore alternative equations to more accurately predict CRF. We assessed maximal oxygen uptake (VO2max) obtained directly by open-circuit spirometry in 7,983 apparently healthy subjects who participated in the Fitness Registry and the Importance of Exercise National Database (FRIEND). We randomly sampled 70% of the participants from each of the following age categories: <40, 40 to 50, 50 to 70, and ≥70, and used the remaining 30% for validation. Multivariable linear regression analysis was applied to identify the most relevant variables and construct the best prediction model for VO2max. Treadmill speed and treadmill speed × grade were retained in the final model as predictors of measured VO2max, and the following equation was generated: VO2max in mL O2/kg/min = speed (m/min) × (0.17 + fractional grade × 0.79) + 3.5. The FRIEND equation predicted VO2max with an overall error more than 4 times lower than the error associated with the traditional ACSM equations (5.1 ± 18.3% vs 21.4 ± 24.9%, respectively). Overestimation associated with the ACSM equations was accentuated when different protocols were considered separately. In conclusion, the FRIEND equation predicts VO2max more precisely than the traditional ACSM equations. Published by Elsevier Inc.
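The FRIEND equation as stated in the abstract is straightforward to apply; the only assumptions here are the function name and that grade is entered as a fraction (e.g. 0.10 for a 10% grade).

```python
def friend_vo2max(speed_m_per_min, fractional_grade):
    """FRIEND equation from the abstract:
    VO2max (mL O2/kg/min) = speed * (0.17 + fractional_grade * 0.79) + 3.5,
    with treadmill speed in m/min."""
    return speed_m_per_min * (0.17 + fractional_grade * 0.79) + 3.5

# Example: 100 m/min at a 10% grade.
vo2 = friend_vo2max(100.0, 0.10)   # 100 * (0.17 + 0.079) + 3.5 = 28.4
```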

  3. Helicopter time-domain electromagnetic numerical simulation based on Leapfrog ADI-FDTD

    NASA Astrophysics Data System (ADS)

    Guan, S.; Ji, Y.; Li, D.; Wu, Y.; Wang, A.

    2017-12-01

We present a three-dimensional (3D) leapfrog alternating direction implicit finite-difference time-domain (leapfrog ADI-FDTD) method for the simulation of helicopter time-domain electromagnetic (HTEM) detection. This method differs from both the traditional explicit FDTD and ADI-FDTD. Compared with explicit FDTD, the leapfrog ADI-FDTD algorithm is no longer limited by the Courant-Friedrichs-Lewy (CFL) condition, so the time step can be longer. Compared with ADI-FDTD, we reduce the number of equations from 12 to 6, and the leapfrog ADI-FDTD method is easier to apply in general simulations. First, we determine the initial conditions, which are adopted from the existing method presented by Wang and Tripp (1993). Second, we discretize the Maxwell equations with a new finite-difference scheme using the leapfrog ADI-FDTD method; the purpose is to eliminate the sub-time step while retaining unconditional stability. Third, we add the convolutional perfectly matched layer (CPML) absorbing boundary condition to the leapfrog ADI-FDTD simulation and study the absorbing effect of different parameters; since different parameters yield different absorbing ability, we found suitable values after many numerical experiments. Fourth, we compare the response with a 1D numerical result for a homogeneous half-space to verify the correctness of our algorithm. For a model containing 107*107*53 grid points with a conductivity of 0.05 S/m, the results show that leapfrog ADI-FDTD needs less simulation time and computer storage than ADI-FDTD: the computation time decreases nearly four times and memory occupation decreases by about 32.53%. Thus, this algorithm is more efficient than the conventional ADI-FDTD method for HTEM detection, and more precise than explicit FDTD at late times.
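The CFL limit that constrains explicit FDTD, and that leapfrog ADI-FDTD escapes through unconditional stability, is the standard 3D stability bound; the cell size below is illustrative.

```python
import math

def cfl_max_timestep(dx, dy, dz, c=3.0e8):
    """Maximum stable time step of the conventional explicit FDTD scheme:
    dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2)), with c the wave speed.
    An unconditionally stable scheme such as leapfrog ADI-FDTD may exceed
    this bound (at the cost of accuracy, not stability)."""
    return 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

# For cubic 10 m cells in free space: dt_max = 10 / (c * sqrt(3)) ~ 19.2 ns.
dt_max = cfl_max_timestep(10.0, 10.0, 10.0)
```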

  4. Spirometric Reference Equations for Elderly Chinese in Jinan Aged 60–84 Years

    PubMed Central

    Tian, Xin-Yu; Liu, Chun-Hong; Wang, De-Xiang; Ji, Xiu-Li; Shi, Hui; Zheng, Chun-Yan; Xie, Meng-Shuang; Xiao, Wei

    2018-01-01

Background: The interpretation of spirometry depends on the reference values used, and older people are usually underrepresented in published predictive values. This study aimed at developing spirometric reference equations for elderly Chinese in Jinan aged 60–84 years and comparing them to previous equations. Methods: The project covered all of Jinan city, and the recruitment period lasted 9 months, from January 1, 2017 to September 30, 2017. A total of 434 healthy people aged 60–84 years who had never smoked (226 females and 208 males) were recruited to undergo spirometry. Vital capacity (VC), forced VC (FVC), forced expiratory volume in 1 s (FEV1), FEV1/FVC, FEV1/VC, FEV6, peak expiratory flow, and forced expiratory flow at 25%, 50%, 75%, and 25–75% of FVC exhaled (FEF25%, FEF50%, FEF75%, and FEF25–75%) were analyzed. Reference equations for the mean and the lower limit of normal (LLN) were derived using the lambda-mu-sigma method. Comparisons between the new and previous equations were performed by paired t-test. Results: New reference equations were developed from the sample. The LLN of FEV1/FVC and FEF25–75% computed using the 2012-Global Lung Function Initiative (GLI) and 2006-Hong Kong equations were both lower than those from the new equations. The biggest degree of difference for FEV1/FVC was 19% (70.46% vs. 59.29%, t = 33.954, P < 0.01) and for maximal midexpiratory flow (MMEF, equal to FEF25–75%) was 22% (0.82 vs. 0.67, t = 21.303, P < 0.01). The 1990-North China and 2009-North China equations predicted higher mean values of FEV1/FVC and FEF25–75% than the present model. The biggest degrees of difference were −4% (78.31% vs. 81.27%, t = −85.359, P < 0.01) and −60% (2.11 vs. 4.68, t = −170.287, P < 0.01), respectively. Conclusions: The newly developed spirometric reference equations are applicable to elderly Chinese in Jinan. The 2012-GLI and 2006-Hong Kong equations may lead to missed diagnoses of obstructive ventilatory defects and small airway dysfunction, while traditional linear equations for all ages may lead to overdiagnosis. PMID:29553052
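The lambda-mu-sigma (LMS) method used to derive the LLN maps a standard-normal score z onto a predicted centile via X = M(1 + L·S·z)^(1/L); the LLN is conventionally the 5th percentile, z = -1.645. The coefficients below are illustrative, not the study's fitted values.

```python
import math

def lms_centile(L, M, S, z):
    """Centile at standard-normal score z under the LMS method:
    X = M * (1 + L*S*z)**(1/L) for L != 0, and M * exp(S*z) in the
    limit L -> 0 (log-normal case)."""
    if abs(L) < 1e-12:
        return M * math.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)

# Illustrative coefficients: the LLN sits below the predicted mean M.
lln = lms_centile(L=1.0, M=2.5, S=0.15, z=-1.645)
```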

  5. Why history matters: Ab initio rederivation of Fresnel equations confirms microscopic theory of refractive index

    NASA Astrophysics Data System (ADS)

    Starke, R.; Schober, G. A. H.

    2018-03-01

    We provide a systematic theoretical, experimental, and historical critique of the standard derivation of Fresnel's equations, which shows in particular that these well-established equations actually contradict the traditional, macroscopic approach to electrodynamics in media. Subsequently, we give a rederivation of Fresnel's equations which is exclusively based on the microscopic Maxwell equations and hence in accordance with modern first-principles materials physics. In particular, as a main outcome of this analysis being of a more general interest, we propose the most general boundary conditions on electric and magnetic fields which are valid on the microscopic level.
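For reference, the Fresnel amplitude reflection coefficients under the standard derivation (one common sign convention, lossless non-magnetic media) can be evaluated directly; this sketch reflects the textbook equations being critiqued and rederived, not the microscopic rederivation itself.

```python
import math

def fresnel_rs_rp(n1, n2, theta_i):
    """Amplitude reflection coefficients for s- and p-polarized light at a
    planar interface between lossless, non-magnetic media (one common
    sign convention):
        r_s = (n1*cos(ti) - n2*cos(tt)) / (n1*cos(ti) + n2*cos(tt))
        r_p = (n2*cos(ti) - n1*cos(tt)) / (n2*cos(ti) + n1*cos(tt))
    with the transmitted angle tt from Snell's law, n1*sin(ti) = n2*sin(tt)."""
    sin_t = n1 * math.sin(theta_i) / n2
    if abs(sin_t) > 1.0:
        raise ValueError("total internal reflection")
    theta_t = math.asin(sin_t)
    ci, ct = math.cos(theta_i), math.cos(theta_t)
    r_s = (n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)
    r_p = (n2 * ci - n1 * ct) / (n2 * ci + n1 * ct)
    return r_s, r_p

# Normal incidence from air into glass (n = 1.5): |r| = 0.2, so 4% of the
# power is reflected.
r_s, r_p = fresnel_rs_rp(1.0, 1.5, 0.0)
```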

  6. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.

  7. Forced cubic Schrödinger equation with Robin boundary data: large-time asymptotics

    PubMed Central

    Kaikina, Elena I.

    2013-01-01

    We consider the initial-boundary-value problem for the cubic nonlinear Schrödinger equation, formulated on a half-line with inhomogeneous Robin boundary data. We study traditionally important problems of the theory of nonlinear partial differential equations, such as the global-in-time existence of solutions to the initial-boundary-value problem and the asymptotic behaviour of solutions for large time. PMID:24204185

  8. Microstructure Images Restoration of Metallic Materials Based upon KSVD and Smoothing Penalty Sparse Representation Approach.

    PubMed

    Li, Qing; Liang, Steven Y

    2018-04-20

Microstructure images of metallic materials play a significant role in industrial applications. To address the image degradation problem of metallic materials, a novel image restoration technique based on K-means singular value decomposition (KSVD) and a smoothing penalty sparse representation (SPSR) algorithm is proposed in this work; microstructure images of aluminum alloy 7075 (AA7075) are used as examples. To begin with, to capture the detailed structural characteristics of the damaged image, the KSVD dictionary is introduced to substitute for the traditional sparse transform basis (TSTB) in the sparse representation. Then, because image restoration modeling is a highly underdetermined problem, traditional sparse reconstruction methods may cause instability and obvious artifacts in the reconstructed images, especially when the image contains many smooth regions and the noise level is strong; thus the SPSR (here, q = 0.5) algorithm is designed to reconstruct the damaged image. The results of simulation and two practical cases demonstrate that the proposed method has superior performance compared with some state-of-the-art methods in terms of restoration performance factors and visual quality. Meanwhile, the grain size parameters and grain boundaries of the microstructure images are discussed before and after restoration by the proposed method.

  9. Arbitrary Lagrangian-Eulerian Method with Local Structured Adaptive Mesh Refinement for Modeling Shock Hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, R W; Pember, R B; Elliott, N S

    2001-10-22

A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. This method facilitates the solution of problems at and beyond the boundary of what is soluble by traditional ALE methods by focusing computational resources where they are required through dynamic adaption. Many of the core issues involved in the development of the combined ALEAMR method hinge upon the integration of AMR with a staggered grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.

  10. Linear approximations of global behaviors in nonlinear systems with moderate or strong noise

    NASA Astrophysics Data System (ADS)

    Liang, Junhao; Din, Anwarud; Zhou, Tianshou

    2018-03-01

While many physical or chemical systems can be modeled by nonlinear Langevin equations (LEs), dynamical analysis of these systems is challenging in the cases of moderate and strong noise. Here we develop a linear approximation scheme, which can transform an often intractable LE into a linear set of binomial moment equations (BMEs). This scheme provides a feasible way to capture nonlinear behaviors in the sense of probability distribution and is effective even when the noise is moderate or strong. Based on BMEs, we further develop a noise reduction technique, which can effectively handle tough cases where traditional small-noise theories are inapplicable. The overall method not only provides an approximation-based paradigm for analysis of the local and global behaviors of nonlinear noisy systems but also has a wide range of applications.

  11. Remote sensing as a tool for estimating soil erosion potential

    NASA Technical Reports Server (NTRS)

    Morris-Jones, D. R.; Morgan, K. M.; Kiefer, R. W.

    1979-01-01

    The Universal Soil Loss Equation is a frequently used methodology for estimating soil erosion potential. The Universal Soil Loss Equation requires a variety of types of geographic information (e.g. topographic slope, soil erodibility, land use, crop type, and soil conservation practice) in order to function. This information is traditionally gathered from topographic maps, soil surveys, field surveys, and interviews with farmers. Remote sensing data sources and interpretation techniques provide an alternative method for collecting information regarding land use, crop type, and soil conservation practice. Airphoto interpretation techniques and medium altitude, multi-date color and color infrared positive transparencies (70mm) were utilized in this study to determine their effectiveness for gathering the desired land use/land cover data. Successful results were obtained within the test site, a 6136 hectare watershed in Dane County, Wisconsin.
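The product form of the Universal Soil Loss Equation described above is simple enough to sketch directly. The factor values below are illustrative placeholders, not data from the Dane County study:

```python
# Universal Soil Loss Equation (USLE): A = R * K * LS * C * P
# A  - predicted average annual soil loss (tons/acre/year)
# R  - rainfall-runoff erosivity factor
# K  - soil erodibility factor
# LS - topographic (slope length and steepness) factor
# C  - cover-management factor (land use / crop type)
# P  - support practice factor (soil conservation practice)

def usle_soil_loss(R, K, LS, C, P):
    """Predicted annual soil loss as the product of the USLE factors."""
    return R * K * LS * C * P

# Illustrative (not field-measured) factor values:
loss = usle_soil_loss(R=170.0, K=0.28, LS=1.2, C=0.25, P=0.5)
print(round(loss, 3))
```

Remote sensing, as the abstract notes, supplies the land-use-dependent inputs (C and P in particular); the slope and soil factors still come from topographic maps and soil surveys.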

  12. MOOSE: A parallel computational framework for coupled systems of nonlinear equations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derek Gaston; Chris Newman; Glen Hansen

Systems of coupled, nonlinear partial differential equations (PDEs) often arise in simulation of nuclear processes. MOOSE: Multiphysics Object Oriented Simulation Environment, a parallel computational framework targeted at the solution of such systems, is presented. As opposed to traditional data-flow oriented computational frameworks, MOOSE is instead founded on the mathematical principle of Jacobian-free Newton-Krylov (JFNK) solution methods. Utilizing the mathematical structure present in JFNK, physics expressions are modularized into "Kernels," allowing for rapid production of new simulation tools. In addition, systems are solved implicitly and fully coupled, employing physics-based preconditioning, which provides great flexibility even with large variance in time scales. A summary of the mathematics, an overview of the structure of MOOSE, and several representative solutions from applications built on the framework are presented.
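The JFNK idea the abstract refers to can be sketched in a few lines: Newton's method in which the Krylov solver never sees an assembled Jacobian, only finite-difference directional derivatives. This is a minimal illustration, not MOOSE's implementation (MOOSE is a C++ framework built on PETSc); the toy system, tolerances, and perturbation size below are assumptions:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_solve(F, u0, tol=1e-10, max_newton=20, eps=1e-7):
    """Minimal Jacobian-free Newton-Krylov sketch: at each Newton step,
    GMRES sees the Jacobian only through the finite-difference
    directional derivative J(u) v ~ (F(u + eps*v) - F(u)) / eps."""
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(max_newton):
        Fu = F(u)
        if np.linalg.norm(Fu) < tol:
            break
        def matvec(v, u=u, Fu=Fu):
            return (F(u + eps * v) - Fu) / eps
        J = LinearOperator((u.size, u.size), matvec=matvec, dtype=float)
        du, info = gmres(J, -Fu)   # solve the Newton update J du = -F(u)
        u += du
    return u

# Toy fully coupled nonlinear system with root (1, 1):
F = lambda u: np.array([u[0]**2 + u[1] - 2.0, u[0] + u[1]**2 - 2.0])
root = jfnk_solve(F, [2.0, 2.0])
print(root)
```

The matrix is never formed; only residual evaluations are needed, which is the property that lets new physics "Kernels" plug into the solve without touching the solver.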

  13. Differential form representation of stochastic electromagnetic fields

    NASA Astrophysics Data System (ADS)

    Haider, Michael; Russer, Johannes A.

    2017-09-01

In this work, we revisit the theory of stochastic electromagnetic fields using exterior differential forms. We present a short overview as well as a brief introduction to the application of differential forms in electromagnetic theory. Within the framework of exterior calculus, we derive equations for the second order moments, describing stochastic electromagnetic fields. Since the resulting objects are continuous quantities in space, a discretization scheme based on the Method of Moments (MoM) is introduced for numerical treatment. The MoM is applied in such a way that the notation of exterior calculus is maintained while we still arrive at the same set of algebraic equations as obtained for the case of formulating the theory using the traditional notation of vector calculus. We conclude with an analytic calculation of the radiated electric field of two Hertzian dipoles, excited by uncorrelated random currents.

  14. A finite element: Boundary integral method for electromagnetic scattering. Ph.D. Thesis Technical Report, Feb. - Sep. 1992

    NASA Technical Reports Server (NTRS)

    Collins, J. D.; Volakis, John L.

    1992-01-01

    A method that combines the finite element and boundary integral techniques for the numerical solution of electromagnetic scattering problems is presented. The finite element method is well known for requiring a low order storage and for its capability to model inhomogeneous structures. Of particular emphasis in this work is the reduction of the storage requirement by terminating the finite element mesh on a boundary in a fashion which renders the boundary integrals in convolutional form. The fast Fourier transform is then used to evaluate these integrals in a conjugate gradient solver, without a need to generate the actual matrix. This method has a marked advantage over traditional integral equation approaches with respect to the storage requirement of highly inhomogeneous structures. Rectangular, circular, and ogival mesh termination boundaries are examined for two-dimensional scattering. In the case of axially symmetric structures, the boundary integral matrix storage is reduced by exploiting matrix symmetries and solving the resulting system via the conjugate gradient method. In each case several results are presented for various scatterers aimed at validating the method and providing an assessment of its capabilities. Important in methods incorporating boundary integral equations is the issue of internal resonance. A method is implemented for their removal, and is shown to be effective in the two-dimensional and three-dimensional applications.
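The storage-saving trick described above, evaluating a convolutional boundary-integral operator with FFTs inside an iterative solver instead of generating the matrix, can be illustrated generically with a circulant operator. The kernel here is random, standing in for a discretized boundary-integral kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
kernel = rng.standard_normal(n)   # stand-in for the discretized boundary-integral kernel
x = rng.standard_normal(n)        # stand-in for the boundary unknowns

# Dense approach: build the full n-by-n circulant matrix and multiply
# (O(n^2) work and O(n^2) storage -- what the method avoids)
C = np.array([[kernel[(i - j) % n] for j in range(n)] for i in range(n)])
y_dense = C @ x

# FFT approach: the DFT diagonalizes a circulant (convolutional) operator,
# so the matrix-vector product costs O(n log n) and O(n) storage
y_fft = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(x)))

print(np.allclose(y_dense, y_fft))
```

Inside a conjugate gradient solver, only this matrix-vector product is ever needed, which is why terminating the mesh so the boundary integrals become convolutional pays off for highly inhomogeneous structures.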

  15. Conjugate-gradient optimization method for orbital-free density functional calculations.

    PubMed

    Jiang, Hong; Yang, Weitao

    2004-08-01

Orbital-free density functional theory as an extension of traditional Thomas-Fermi theory has attracted a lot of interest in the past decade because of developments in both more accurate kinetic energy functionals and highly efficient numerical methodology. In this paper, we develop a conjugate-gradient method for the numerical solution of the spin-dependent extended Thomas-Fermi equation by incorporating techniques previously used in Kohn-Sham calculations. The key ingredients of the method are an approximate line-search scheme and a collective treatment of the two spin densities in the case of the spin-dependent extended Thomas-Fermi problem. Test calculations for a quartic two-dimensional quantum dot system and a three-dimensional sodium cluster Na216 with a local pseudopotential demonstrate that the method is accurate and efficient. (c) 2004 American Institute of Physics.
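The two ingredients named above, a conjugate-gradient iteration combined with an approximate line search, can be sketched generically. This is not the paper's spin-dependent Thomas-Fermi solver; the Fletcher-Reeves update, backtracking Armijo search, and toy quartic "energy" below are illustrative assumptions:

```python
import numpy as np

def nonlinear_cg(f, grad, x0, tol=1e-8, max_iter=500):
    """Fletcher-Reeves nonlinear conjugate gradient with an approximate
    (backtracking Armijo) line search."""
    x = np.asarray(x0, dtype=float).copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:          # safeguard: restart with steepest descent
            d = -g
        # Approximate line search: backtrack until the Armijo condition holds
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves update
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Toy quartic "energy": f(x) = sum(x^4) - sum(x), minimized where 4 x^3 = 1
f = lambda x: np.sum(x**4) - np.sum(x)
grad = lambda x: 4 * x**3 - 1.0
xmin = nonlinear_cg(f, grad, np.full(5, 2.0))
print(xmin)
```

In the actual application the unknown would be a discretized density (or two spin densities treated collectively) and f a kinetic-plus-potential energy functional, but the iteration skeleton is the same.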

  16. The FLAME-slab method for electromagnetic wave scattering in aperiodic slabs

    NASA Astrophysics Data System (ADS)

    Mansha, Shampy; Tsukerman, Igor; Chong, Y. D.

    2017-12-01

    The proposed numerical method, "FLAME-slab," solves electromagnetic wave scattering problems for aperiodic slab structures by exploiting short-range regularities in these structures. The computational procedure involves special difference schemes with high accuracy even on coarse grids. These schemes are based on Trefftz approximations, utilizing functions that locally satisfy the governing differential equations, as is done in the Flexible Local Approximation Method (FLAME). Radiation boundary conditions are implemented via Fourier expansions in the air surrounding the slab. When applied to ensembles of slab structures with identical short-range features, such as amorphous or quasicrystalline lattices, the method is significantly more efficient, both in runtime and in memory consumption, than traditional approaches. This efficiency is due to the fact that the Trefftz functions need to be computed only once for the whole ensemble.

  17. A novel data reduction technique for single slanted hot-wire measurements used to study incompressible compressor tip leakage flows

    NASA Astrophysics Data System (ADS)

    Berdanier, Reid A.; Key, Nicole L.

    2016-03-01

    The single slanted hot-wire technique has been used extensively as a method for measuring three velocity components in turbomachinery applications. The cross-flow orientation of probes with respect to the mean flow in rotating machinery results in detrimental prong interference effects when using multi-wire probes. As a result, the single slanted hot-wire technique is often preferred. Typical data reduction techniques solve a set of nonlinear equations determined by curve fits to calibration data. A new method is proposed which utilizes a look-up table method applied to a simulated triple-wire sensor with application to turbomachinery environments having subsonic, incompressible flows. Specific discussion regarding corrections for temperature and density changes present in a multistage compressor application is included, and additional consideration is given to the experimental error which accompanies each data reduction process. Hot-wire data collected from a three-stage research compressor with two rotor tip clearances are used to compare the look-up table technique with the traditional nonlinear equation method. The look-up table approach yields velocity errors of less than 5 % for test conditions deviating by more than 20 °C from calibration conditions (on par with the nonlinear solver method), while requiring less than 10 % of the computational processing time.
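Why a look-up table can replace per-sample nonlinear solves can be shown with a 1-D stand-in: invert a King's-law-style calibration once into a table, then answer every measurement by interpolation. The constants A, B, n below are hypothetical, and this omits the simulated triple-wire sensor and the temperature/density corrections of the actual technique:

```python
import numpy as np

# Hypothetical King's-law-style calibration: E^2 = A + B * U^n
A, B, n = 1.2, 2.0, 0.45

def voltage(U):
    return np.sqrt(A + B * U**n)

# Nonlinear-equation route: invert the calibration for every sample
def velocity_exact(E):
    return ((E**2 - A) / B) ** (1.0 / n)

# Look-up-table route: tabulate (E, U) once, then interpolate per sample
U_table = np.linspace(0.5, 60.0, 5000)
E_table = voltage(U_table)          # monotonically increasing, so np.interp is valid

def velocity_lut(E):
    return np.interp(E, E_table, U_table)

E_meas = voltage(np.array([2.0, 10.0, 35.0]))   # synthetic "measurements"
print(velocity_exact(E_meas))   # recovers 2, 10, 35
print(velocity_lut(E_meas))     # close to the exact inversion, no solver calls
```

The table is built once per calibration, so the per-sample cost collapses to an interpolation lookup, which is the source of the roughly tenfold processing-time reduction the abstract reports for the full three-component case.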

  18. New developments in the method of space-time conservation element and solution element: Applications to the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung

    1993-01-01

    A new numerical framework for solving conservation laws is being developed. This new approach differs substantially in both concept and methodology from the well-established methods--i.e., finite difference, finite volume, finite element, and spectral methods. It is conceptually simple and designed to avoid several key limitations to the above traditional methods. An explicit model scheme for solving a simple 1-D unsteady convection-diffusion equation is constructed and used to illuminate major differences between the current method and those mentioned above. Unexpectedly, its amplification factors for the pure convection and pure diffusion cases are identical to those of the Leapfrog and the DuFort-Frankel schemes, respectively. Also, this explicit scheme and its Navier-Stokes extension have the unusual property that their stabilities are limited only by the CFL condition. Moreover, despite the fact that it does not use any flux-limiter or slope-limiter, the Navier-Stokes solver is capable of generating highly accurate shock tube solutions with shock discontinuities being resolved within one mesh interval. An accurate Euler solver also is constructed through another extension. It has many unusual properties, e.g., numerical diffusion at all mesh points can be controlled by a set of local parameters.

  19. Anti-Transgender Prejudice: A Structural Equation Model of Associated Constructs

    ERIC Educational Resources Information Center

    Tebbe, Esther N.; Moradi, Bonnie

    2012-01-01

    This study aimed to identify theoretically relevant key correlates of anti-transgender prejudice. Specifically, structural equation modeling was used to test the unique relations of anti-lesbian, gay, and bisexual (LGB) prejudice; traditional gender role attitudes; need for closure; and social dominance orientation with anti-transgender prejudice.…

  20. The Educational Worth of Zimbabwean Stone Sculpture.

    ERIC Educational Resources Information Center

    Moyo, Daniel

    A study analyzed the worth of one traditional craft, Zimbabwean stone sculpture, in terms of its educational and employment values. The study was prompted by general attitudes that tend to equate "good" education with education that is modeled on Western industrial technology, whereas traditional education, which includes crafts…

  1. Equations of state for crystalline zirconium iodide: The role of dispersion

    NASA Astrophysics Data System (ADS)

    Rossi, Matthew L.; Taylor, Christopher D.

    2013-02-01

    We present the first-principle equations of state of several zirconium iodides, ZrI2, ZrI3, and ZrI4, computed using density functional theory methods that apply various methods for introducing the dispersion correction. Iodides formed due to reaction of molecular or atomic iodine with zirconium and zircaloys are of particular interest due to their application to the cladding material used in the fabrication of nuclear fuel rods. Stress corrosion cracking (SCC), associated with fission product chemistry with the clad material, is a major concern in the life cycle of nuclear fuels, as many of the observed rod failures have occurred due to pellet-cladding chemical interactions (PCCI) [A. Atrens, G. Dannhäuser, G. Bäro, Stress-corrosion-cracking of zircaloy-4 cladding tubes, Journal of Nuclear Materials 126 (1984) 91-102; P. Rudling, R. Adamson, B. Cox, F. Garzarolli, A. Strasser, High burn-up fuel issues, Nuclear Engineering and Technology 40 (2008) 1-8]. A proper understanding of the physical properties of the corrosion products is, therefore, required for the development of a comprehensive SCC model. In this particular work, we emphasize that, while existing modeling techniques include methods to compute crystal structures and associated properties, it is important to capture intermolecular forces not traditionally included, such as van der Waals (dispersion) correction. Furthermore, crystal structures with stoichiometries favoring a high I:Zr ratio are found to be particularly sensitive, such that traditional density functional theory approaches that do not incorporate dispersion incorrectly predict significantly larger volumes of the lattice. This latter point is related to the diffuse nature of the iodide electron cloud.

  2. Characterization of the Bell-Shaped Vibratory Angular Rate Gyro

    PubMed Central

    Liu, Ning; Su, Zhong; Li, Qing; Fu, MengYin; Liu, Hong; Fan, JunFang

    2013-01-01

The bell-shaped vibratory angular rate gyro (abbreviated as BVG) is a novel shell vibratory gyroscope, inspired by the traditional Chinese bell. It senses angular velocity through the standing wave precession effect. The bell-shaped resonator is the core component of the BVG and resembles a millimeter-grade traditional Chinese bell, such as the QianLong Bell and the Yongle Bell. It is made of Ni43CrTi, which is a constant modulus alloy. The exciting element, control element and detection element are uniformly distributed and attached to the resonator. This work presents the design, analysis and experimentation on the BVG. It is most important to analyze the vibratory character of the bell-shaped resonator. The strain equation, internal force and the resonator's equilibrium differential equation are derived in the orthogonal curvilinear coordinate system. When an input angular velocity is present on the sensitive axis, an analysis of the vibratory character is performed using the theory of thin shells. On this basis, the mode shape function and the simplified second-order normal vibration mode dynamical equation are obtained. The Coriolis coupling relationship between the primary mode and secondary mode is established. The methods of signal processing and the control loop are presented. Analyzing the impact resistance property of the bell-shaped resonator, compared with other shell resonators using the Finite Element Method, demonstrates that the BVG has the advantage of a better impact resistance property. A reasonable means of installation and a prototype gyro are designed. The gyroscopic effect of the BVG is characterized through experiments. Experimental results show that the BVG has not only the advantages of low cost, low power, long working life and high sensitivity, but also a simple structure and better impact resistance for low and medium angular velocity measurements. PMID:23966183

  3. Characterization of the bell-shaped vibratory angular rate gyro.

    PubMed

    Liu, Ning; Su, Zhong; Li, Qing; Fu, MengYin; Liu, Hong; Fan, JunFang

    2013-08-07

The bell-shaped vibratory angular rate gyro (abbreviated as BVG) is a novel shell vibratory gyroscope, inspired by the traditional Chinese bell. It senses angular velocity through the standing wave precession effect. The bell-shaped resonator is the core component of the BVG and resembles a millimeter-grade traditional Chinese bell, such as the QianLong Bell and the Yongle Bell. It is made of Ni43CrTi, which is a constant modulus alloy. The exciting element, control element and detection element are uniformly distributed and attached to the resonator. This work presents the design, analysis and experimentation on the BVG. It is most important to analyze the vibratory character of the bell-shaped resonator. The strain equation, internal force and the resonator's equilibrium differential equation are derived in the orthogonal curvilinear coordinate system. When an input angular velocity is present on the sensitive axis, an analysis of the vibratory character is performed using the theory of thin shells. On this basis, the mode shape function and the simplified second-order normal vibration mode dynamical equation are obtained. The Coriolis coupling relationship between the primary mode and secondary mode is established. The methods of signal processing and the control loop are presented. Analyzing the impact resistance property of the bell-shaped resonator, compared with other shell resonators using the Finite Element Method, demonstrates that the BVG has the advantage of a better impact resistance property. A reasonable means of installation and a prototype gyro are designed. The gyroscopic effect of the BVG is characterized through experiments. Experimental results show that the BVG has not only the advantages of low cost, low power, long working life and high sensitivity, but also a simple structure and better impact resistance for low and medium angular velocity measurements.

  4. New Examination of the Traditional Raman Lidar Technique II: Temperature Dependence Aerosol Scattering Ratio and Water Vapor Mixing Ratio Equations

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Abshire, James B. (Technical Monitor)

    2002-01-01

    In a companion paper, the temperature dependence of Raman scattering and its influence on the Raman water vapor signal and the lidar equations was examined. New forms of the lidar equation were developed to account for this temperature sensitivity. Here we use those results to derive the temperature dependent forms of the equations for the aerosol scattering ratio, aerosol backscatter coefficient, extinction to backscatter ratio and water vapor mixing ratio. Pertinent analysis examples are presented to illustrate each calculation.

  5. Development of advanced methods for analysis of experimental data in diffusion

    NASA Astrophysics Data System (ADS)

    Jaques, Alonso V.

There are numerous experimental configurations and data analysis techniques for the characterization of diffusion phenomena. However, the mathematical methods for estimating diffusivities traditionally do not take into account the effects of experimental errors in the data, and often require smooth, noiseless data sets to perform the necessary analysis steps. The current methods used for data smoothing require strong assumptions which can introduce numerical "artifacts" into the data, affecting confidence in the estimated parameters. The Boltzmann-Matano method is used extensively in the determination of concentration-dependent diffusivities, D(C), in alloys. In the course of analyzing experimental data, numerical integrations and differentiations of the concentration profile are performed. These methods require smoothing of the data prior to analysis. We present here an approach to the Boltzmann-Matano method that is based on a regularization method for estimating a differentiation operation on the data, i.e., estimating the concentration gradient term, which is important in the analysis process for determining the diffusivity. This approach, therefore, has the potential to be less subjective, and in numerical simulations shows an increased accuracy in the estimated diffusion coefficients. We present a regression approach to estimate linear multicomponent diffusion coefficients that eliminates the need to pre-treat or pre-condition the concentration profile. This approach fits the data to a functional form of the mathematical expression for the concentration profile, and allows us to determine the diffusivity matrix directly from the fitted parameters. The equation for the analytical solution is reformulated in order to reduce the size of the problem and accelerate the convergence. The objective function for the regression can incorporate point estimations for error in the concentration, improving the statistical confidence in the estimated diffusivity matrix. Case studies are presented to demonstrate the reliability and the stability of the method. To the best of our knowledge there is no published analysis of the effects of experimental errors on the reliability of the estimates for the diffusivities. For the case of linear multicomponent diffusion, we analyze the effects of the instrument analytical spot size, positioning uncertainty, and concentration uncertainty on the resulting values of the diffusivities. These effects are studied using a Monte Carlo method on simulated experimental data. Several useful scaling relationships were identified which allow more rigorous and quantitative estimates of the errors in the measured data, and are valuable for experimental design. To further analyze anomalous diffusion processes, where traditional diffusional transport equations do not hold, we explore the use of fractional calculus to analytically represent these processes. We use the fractional calculus approach for anomalous diffusion processes occurring through a finite plane sheet with one face held at a fixed concentration, the other held at zero, and the initial concentration within the sheet equal to zero. This problem is related to cases in nature where diffusion is enhanced relative to the classical process, and the governing equation is not necessarily a second-order differential equation; that is, differentiation is of fractional order alpha, where 1 ≤ alpha < 2. For alpha = 2, the presented solutions reduce to the classical second-order diffusion solution for the conditions studied. The solution obtained allows the analysis of permeation experiments. Frequently, hydrogen diffusion is analyzed using electrochemical permeation methods and the traditional, Fickian-based theory. Experimental evidence shows the latter analytical approach is not always appropriate, because reported data show qualitative (and quantitative) deviations from its theoretical scaling predictions. Preliminary analysis of data shows better agreement with fractional diffusion analysis when compared to traditional square-root scaling. Although there is a large amount of work on the estimation of the diffusivity from experimental data, reported studies typically present only the analytical description for the diffusivity, without scatter. However, because these studies do not consider effects produced by instrument analysis, their direct applicability is limited. We propose alternatives to address these limitations, and evaluate their influence on the final resulting diffusivity values.
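The classical Boltzmann-Matano analysis that the work above builds on can be sketched on a synthetic constant-D profile, where the recovered diffusivity should equal the value used to generate the data. This omits the regularization and error analysis that are the thesis's contribution; the grid, time, and D values are illustrative:

```python
import math
import numpy as np

D_true, t = 1.0, 1.0
x = np.linspace(-8.0, 8.0, 4001)     # distance from the Matano plane (at x = 0 here)
# Constant-D solution of the diffusion couple: C(x) = (C0/2) erfc(x / (2 sqrt(D t))), C0 = 1
C = 0.5 * np.array([math.erfc(xi / (2.0 * math.sqrt(D_true * t))) for xi in x])

# Boltzmann-Matano: D(C*) = -1/(2t) * (dx/dC)|_{C*} * integral_0^{C*} x dC
dCdx = np.gradient(C, x)             # numerical differentiation (noise-sensitive in practice)

def matano_D(C_star):
    i = int(np.argmin(np.abs(C - C_star)))     # profile point closest to C*
    Cr, xr = C[i:][::-1], x[i:][::-1]          # reorder so C runs from ~0 up to C*
    integral = np.sum(0.5 * (xr[1:] + xr[:-1]) * np.diff(Cr))   # trapezoid rule for int x dC
    return -integral / (2.0 * t * dCdx[i])

print(matano_D(0.5))   # should recover D_true = 1.0
```

On noiseless synthetic data this recovers D almost exactly; with real, noisy profiles the gradient term dx/dC is exactly where smoothing or regularization becomes necessary, which motivates the approach described above.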

  6. A Novel Non-Intrusive Method to Resolve the Thermal-Dome-Effect of Pyranometers: Radiometric Calibration and Implications

    NASA Technical Reports Server (NTRS)

    Ji, Qiang; Tsay, Si-Chee; Lau, K. M.; Hansell, R. A.; Butler, J. J.; Cooper, J. W.

    2011-01-01

Traditionally the calibration equation for pyranometers assumes that the measured solar irradiance is solely proportional to the thermopile's output voltage; therefore only a single calibration factor is derived. This causes additional measurement uncertainties because it does not capture sufficient information to correctly account for a pyranometer's thermal effect. In our updated calibration equation, temperatures from the pyranometer's dome and case are incorporated to describe the instrument's thermal behavior, and a new set of calibration constants are determined, thereby reducing measurement uncertainties. In this paper, we demonstrate why a pyranometer's uncertainty using the traditional calibration equation is always larger than a few percent, but with the new approach can become much less than 1% after the thermal issue is resolved. The highlighted calibration results are based on NIST-traceable light sources under controlled laboratory conditions. The significance of the new approach lends itself to not only avoiding the uncertainty caused by a pyranometer's thermal effect but also the opportunity to better isolate and characterize other instrumental artifacts, such as angular response and non-linearity of the thermopile, to further reduce additional uncertainties. We also discuss some of the implications, including an example of how the thermal issue can potentially impact climate studies by evaluating aerosol's direct-radiative effect using field measurements with and without considering the pyranometer's thermal effect. The results of radiative transfer model simulation show that a pyranometer's thermal effect on solar irradiance measurements at the surface can be translated into a significant alteration of the calculated distribution of solar energy inside the column atmosphere.
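The gist of the updated calibration, regressing irradiance on the thermopile voltage plus a dome/case thermal term rather than on voltage alone, can be sketched on synthetic data. The functional form (a `T_dome**4 - T_case**4` term) and every coefficient below are hypothetical illustrations, not the constants or exact equation of the cited work:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200

# Synthetic "truth": irradiance with a thermal-offset term (hypothetical model)
V = rng.uniform(2.0, 10.0, N)                 # thermopile voltage, mV
T_dome = rng.uniform(280.0, 310.0, N)         # dome temperature, K
T_case = T_dome + rng.uniform(-3.0, 3.0, N)   # case temperature, K
thermal = T_dome**4 - T_case**4
E = 100.0 * V + 2e-7 * thermal + rng.normal(0.0, 1.0, N)   # irradiance, W/m^2

# Traditional single-factor calibration: E ~ c1 * V
c1, *_ = np.linalg.lstsq(V[:, None], E, rcond=None)
res_single = np.std(E - V[:, None] @ c1)

# Updated calibration: E ~ c1 * V + c2 * (T_dome^4 - T_case^4)
A = np.column_stack([V, thermal])
c, *_ = np.linalg.lstsq(A, E, rcond=None)
res_thermal = np.std(E - A @ c)

print(res_single, res_thermal)   # the thermal-aware fit leaves a much smaller residual
```

The single-factor fit lumps the thermal offset into its residual, which is the mechanism behind the few-percent floor described above; adding the thermal regressor pulls the residual down toward the measurement noise.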

  7. A Novel Nonintrusive Method to Resolve the Thermal Dome Effect of Pyranometers: Radiometric Calibration and Implications

    NASA Technical Reports Server (NTRS)

Ji, Q.; Tsay, S.-C.; Lau, K. M.; Hansell, R. A.; Butler, J. J.; Cooper, J. W.

    2011-01-01

Traditionally the calibration equation for pyranometers assumes that the measured solar irradiance is solely proportional to the thermopile's output voltage; therefore, only a single calibration factor is derived. This causes additional measurement uncertainties because it does not capture sufficient information to correctly account for a pyranometer's thermal effect. In our updated calibration equation, temperatures from the pyranometer's dome and case are incorporated to describe the instrument's thermal behavior, and a new set of calibration constants are determined, thereby reducing measurement uncertainties. In this paper, we demonstrate why a pyranometer's uncertainty using the traditional calibration equation is always larger than a few percent, but with the new approach can become much less than 1% after the thermal issue is resolved. The highlighted calibration results are based on NIST-traceable light sources under controlled laboratory conditions. The significance of the new approach lends itself to not only avoiding the uncertainty caused by a pyranometer's thermal effect but also the opportunity to better isolate and characterize other instrumental artifacts, such as angular response and nonlinearity of the thermopile, to further reduce additional uncertainties. We also discuss some of the implications, including an example of how the thermal issue can potentially impact climate studies by evaluating aerosol's direct radiative effect using field measurements with and without considering the pyranometer's thermal effect. The results of radiative transfer model simulation show that a pyranometer's thermal effect on solar irradiance measurements at the surface can be translated into a significant alteration of the calculated distribution of solar energy inside the column atmosphere.

  8. An evaluation of solution algorithms and numerical approximation methods for modeling an ion exchange process

    NASA Astrophysics Data System (ADS)

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

  9. On-demand Reporting of Risk-adjusted and Smoothed Rates for Quality Profiling in ACS NSQIP.

    PubMed

    Cohen, Mark E; Liu, Yaoming; Huffman, Kristopher M; Ko, Clifford Y; Hall, Bruce L

    2016-12-01

    Surgical quality improvement depends on hospitals having accurate and timely information about comparative performance. Profiling accuracy is improved by risk adjustment and shrinkage adjustment to stabilize estimates. These adjustments are included in ACS NSQIP reports, where hospital odds ratios (OR) are estimated using hierarchical models built on contemporaneous data. However, the timeliness of feedback remains an issue. We describe an alternative, nonhierarchical approach, which yields risk- and shrinkage-adjusted rates. In contrast to our "Traditional" NSQIP method, this approach uses preexisting equations, built on historical data, which permits hospitals to have near immediate access to profiling results. We compared our traditional method to this new "on-demand" approach with respect to outlier determinations, kappa statistics, and correlations between logged OR and standardized rates, for 12 models (4 surgical groups by 3 outcomes). When both methods used the same contemporaneous data, there were similar numbers of hospital outliers and correlations between logged OR and standardized rates were high. However, larger differences were observed when the effect of contemporaneous versus historical data was added to differences in statistical methodology. The on-demand, nonhierarchical approach provides results similar to the traditional hierarchical method and offers immediacy, an "over-time" perspective, application to a broader range of models and data subsets, and reporting of more easily understood rates. Although the nonhierarchical method results are now available "on-demand" in a web-based application, the hierarchical approach has advantages, which support its continued periodic publication as the gold standard for hospital profiling in the program.

  10. Fast and robust standard-deviation-based method for bulk motion compensation in phase-based functional OCT.

    PubMed

    Wei, Xiang; Camino, Acner; Pi, Shaohua; Cepurna, William; Huang, David; Morrison, John C; Jia, Yali

    2018-05-01

    Phase-based optical coherence tomography (OCT), such as OCT angiography (OCTA) and Doppler OCT, is sensitive to the confounding phase shift introduced by subject bulk motion. Traditional bulk motion compensation methods are limited in both accuracy and computational cost. In this Letter, we present what is, to the best of our knowledge, a novel bulk motion compensation method for phase-based functional OCT. The bulk motion associated phase shift can be derived directly by solving its equation using the standard deviation of phase-based OCTA and Doppler OCT flow signals. This method was evaluated on rodent retinal images acquired by a prototype visible-light OCT system and on human retinal images acquired by a commercial system. Image quality and computational speed were significantly improved compared with two conventional phase compensation methods.
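
    For context, conventional bulk-motion correction can be sketched as removing the intensity-weighted circular mean of the measured phase differences; the Letter's contribution is a standard-deviation-based solver for the same bulk phase term, which is not reproduced here. Below is only the generic weighted-mean estimator:

```python
import numpy as np

# Generic bulk-motion phase compensation for phase-based OCT: estimate the
# bulk phase shift of an A-line pair as the argument of the intensity-weighted
# complex mean of the phase differences, then subtract it. This is the
# conventional weighted-mean estimator, shown for context only -- it is not
# the SD-based solver proposed in the Letter.
def compensate_bulk_phase(dphi, intensity):
    """dphi: measured phase differences (rad); intensity: per-pixel weights."""
    bulk = np.angle(np.sum(intensity * np.exp(1j * dphi)))
    corrected = np.angle(np.exp(1j * (dphi - bulk)))  # re-wrap to (-pi, pi]
    return corrected, bulk
```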

  11. A novel navigation method used in a ballistic missile

    NASA Astrophysics Data System (ADS)

    Qian, Hua-ming; Sun, Long; Cai, Jia-nan; Peng, Yu

    2013-10-01

    The traditional strapdown inertial/celestial integrated navigation method used in a ballistic missile cannot accurately estimate the accelerometer bias, which can cause navigation errors to diverge. To solve this problem, a new navigation method, strapdown inertial/starlight-refraction celestial integrated navigation, is proposed. To apply this method to a ballistic missile, a novel measurement equation based on stellar refraction was developed, and a method to calculate the number of refraction stars observed by the star sensor was given. To verify the feasibility of the proposed method, a simulation program for a ballistic missile was developed. The simulation results indicate that, when multiple refraction stars are used, the proposed method can accurately estimate the accelerometer bias and completely suppress the divergence of navigation errors. Finally, the relationship between the number of refraction stars used and the navigation accuracy was analysed.

  12. A Time-Regularized, Multiple Gravity-Assist Low-Thrust, Bounded-Impulse Model for Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Ellison, Donald H.; Englander, Jacob A.; Conway, Bruce A.

    2017-01-01

    The multiple gravity-assist low-thrust (MGALT) trajectory model combines the medium-fidelity Sims-Flanagan bounded-impulse transcription with a patched-conics flyby model and is an important tool for preliminary trajectory design. While this model features fast state propagation via Kepler's equation and provides a pleasingly accurate estimate of the total mass budget for the eventual flight-suitable integrated trajectory, it does suffer from one major drawback, namely the temporal spacing of its control nodes. We introduce a variant of the MGALT transcription that utilizes the generalized anomaly from the universal formulation of Kepler's equation as a decision variable in addition to the trajectory phase propagation time. This results in two improvements over the traditional model. The first is that the maneuver locations are equally spaced in generalized anomaly about the orbit rather than in time. The second is that the Kepler propagator now has the generalized anomaly as its independent variable instead of time and thus becomes an iteration-free propagation method. The new algorithm is outlined, including the impact this has on the computation of Jacobian entries for numerical optimization, and a motivating application problem is presented that illustrates the improvements this model offers over the traditional MGALT transcription.
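
    The second improvement can be made concrete: once the generalized anomaly chi is the independent variable, both the elapsed time and the new state follow in closed form from Stumpff functions, with no iterative solution of Kepler's equation. A minimal sketch in canonical units (mu = 1), not the MGALT flight code:

```python
import math

# Iteration-free two-body propagation with the generalized (universal) anomaly
# chi as the independent variable: given chi, both the elapsed time and the
# state follow in closed form via the Stumpff functions C(z) and S(z).
def stumpff_C(z):
    if z > 1e-8:
        return (1 - math.cos(math.sqrt(z))) / z
    if z < -1e-8:
        return (math.cosh(math.sqrt(-z)) - 1) / (-z)
    return 0.5

def stumpff_S(z):
    if z > 1e-8:
        sz = math.sqrt(z)
        return (sz - math.sin(sz)) / sz**3
    if z < -1e-8:
        sz = math.sqrt(-z)
        return (math.sinh(sz) - sz) / sz**3
    return 1.0 / 6.0

def propagate_chi(r0, v0, chi, mu=1.0):
    """Propagate state (r0, v0) by generalized anomaly chi; returns (r, t)."""
    r0n = math.sqrt(sum(x * x for x in r0))
    rv = sum(a * b for a, b in zip(r0, v0))        # r0 . v0
    alpha = 2.0 / r0n - sum(x * x for x in v0) / mu  # reciprocal semimajor axis
    z = alpha * chi**2
    C, S = stumpff_C(z), stumpff_S(z)
    # Universal Kepler equation, evaluated (not solved) for the elapsed time:
    t = (rv / math.sqrt(mu) * chi**2 * C
         + (1 - alpha * r0n) * chi**3 * S + r0n * chi) / math.sqrt(mu)
    f = 1 - chi**2 * C / r0n                       # Lagrange f coefficient
    g = t - chi**3 * S / math.sqrt(mu)             # Lagrange g coefficient
    r = [f * a + g * b for a, b in zip(r0, v0)]
    return r, t
```

    For a circular orbit with r0 = v0 = mu = 1, chi equals the elapsed time, so chi = pi/2 advances the state a quarter orbit.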

  14. 6Li in a three-body model with realistic Forces: Separable versus nonseparable approach

    NASA Astrophysics Data System (ADS)

    Hlophe, L.; Lei, Jin; Elster, Ch.; Nogga, A.; Nunes, F. M.

    2017-12-01

    Background: Deuteron-induced reactions are widely used to probe nuclear structure and astrophysical information. Those (d,p) reactions may be viewed as three-body reactions and described with Faddeev techniques. Purpose: Faddeev equations in momentum space have a long tradition of utilizing separable interactions in order to arrive at sets of coupled integral equations in one variable. However, it needs to be demonstrated that their solution based on separable interactions agrees exactly with solutions based on nonseparable forces. Methods: Momentum-space Faddeev equations are solved with nonseparable and separable forces as coupled integral equations. Results: The ground state of 6Li is calculated via momentum-space Faddeev equations using the CD-Bonn neutron-proton force and a Woods-Saxon type neutron(proton)-4He force. For the latter the Pauli-forbidden S-wave bound state is projected out. This result is compared to a calculation in which the interactions in the two-body subsystems are represented by separable interactions derived in the Ernst-Shakin-Thaler (EST) framework. Conclusions: We find that calculations based on the separable representation of the interactions and the original interactions give results that agree to four significant figures for the binding energy, provided that the energy and momentum support points of the EST expansion are chosen independently. The momentum distributions computed in both approaches also fully agree with each other.
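
    The notion of a separable representation can be illustrated with its generic low-rank analogue: a smooth but nonseparable kernel matrix is approximated by a finite sum of products of one-variable factors, here obtained by truncated SVD. (The EST scheme of the paper instead constructs the factors from scattering solutions; this sketch shows only the general idea.)

```python
import numpy as np

# Separable (finite-rank) approximation of a discretized two-body kernel
# V(p, p') as a sum of products u_i(p) * s_i * w_i(p'), via truncated SVD.
# Generic low-rank analogue of a separable expansion, not the EST scheme.
def separable_approx(V, rank):
    U, s, Wt = np.linalg.svd(V, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Wt[:rank, :]

p = np.linspace(0.1, 2.0, 40)
V = np.exp(-np.subtract.outer(p, p) ** 2)   # smooth, nonseparable model kernel
err = [np.linalg.norm(V - separable_approx(V, r)) / np.linalg.norm(V)
       for r in (1, 4, 8)]                  # error drops fast with rank
```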

  15. Derivation of a Multiparameter Gamma Model for Analyzing the Residence-Time Distribution Function for Nonideal Flow Systems as an Alternative to the Advection-Dispersion Equation

    DOE PAGES

    Embry, Irucka; Roland, Victor; Agbaje, Oluropo; ...

    2013-01-01

    A new residence-time distribution (RTD) function has been developed and applied to quantitative dye studies as an alternative to the traditional advection-dispersion equation (AdDE). The new method is based on a jointly combined four-parameter gamma probability density function (PDF). The gamma residence-time distribution (RTD) function and its first and second moments are derived from the individual two-parameter gamma distributions of randomly distributed variables, tracer travel distance and linear velocity, which are based on their relationship with time. The gamma RTD function was used on a steady-state, nonideal system modeled as a plug-flow reactor (PFR) in the laboratory to validate the effectiveness of the model. The normalized forms of the gamma RTD and the advection-dispersion equation RTD were compared with the normalized tracer RTD. The normalized gamma RTD had a lower mean-absolute deviation (MAD) (0.16) than the normalized form of the advection-dispersion equation (0.26) when compared to the normalized tracer RTD. The gamma RTD function is tied back to the actual physical site through its randomly distributed variables. The results validate the gamma RTD as a suitable alternative to the advection-dispersion equation for quantitative tracer studies of nonideal flow systems.
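
    The two-parameter gamma PDF from which the four-parameter model is built can be checked numerically: it integrates to one and has mean residence time k*theta. A stdlib-only sketch with illustrative parameters, not the study's fitted values:

```python
import math

# Two-parameter gamma RTD: E(t) = t^(k-1) exp(-t/theta) / (Gamma(k) theta^k),
# with mean residence time k * theta. Checked by simple trapezoid quadrature.
def gamma_rtd(t, k, theta):
    return t ** (k - 1) * math.exp(-t / theta) / (math.gamma(k) * theta ** k)

def trapz(f, a, b, n=20000):
    h = (b - a) / n
    return h * (sum(f(a + i * h) for i in range(1, n)) + 0.5 * (f(a) + f(b)))

k, theta = 3.0, 2.0                                   # illustrative shape/scale
area = trapz(lambda t: gamma_rtd(t, k, theta), 1e-9, 80.0)      # should be ~1
mean = trapz(lambda t: t * gamma_rtd(t, k, theta), 1e-9, 80.0)  # ~ k * theta
```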

  16. Quantum limited performance of optical receivers

    NASA Astrophysics Data System (ADS)

    Farrell, Thomas C.

    2018-05-01

    While the fundamental performance limit for traditional radio frequency (RF) communications is often set by background noise on the channel, the fundamental limit for optical communications is set by the quantum nature of light. Both types of systems are based on electromagnetic waves, differing only in carrier frequency, and it is in fact the carrier frequency that determines which of these limits dominates. We explore this in the first part of this paper. This leads to a difference in the methods of analysis for the two types of systems. While equations predicting the probability of bit error for RF systems are usually based on the signal-to-background-noise ratio, analogous equations for optical systems are often based on the physics of the quantum limit and are simply a function of the detected signal energy received per bit. These equations are derived in the second part of this paper for several frequently used modulation schemes: on-off keying (OOK), pulse position modulation (PPM), and binary differential phase shift keying (DPSK). While these equations ignore the effects of background noise and non-quantum internal noise sources in the detector and receiver electronics, they provide a useful bound on the obtainable performance of optical communication systems. For example, these equations may be used in initial link budgets to assess the feasibility of system architectures, even before specific receiver designs are considered.
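
    The flavor of these quantum-limit expressions can be shown for OOK with an ideal photon-counting receiver: a bit error occurs only when a "1" slot (mean 2n photons, for an average of n photons per bit) yields zero detections, giving Pb = 0.5*exp(-2n). This is the textbook form, sketched here rather than taken from the paper's derivations:

```python
import math

# Quantum-limited BER for OOK with ideal photon counting. An error occurs only
# when a "1" (Poisson mean 2n photons, for an average of n photons/bit over an
# equiprobable bit stream) produces zero detected photons, so
#   Pb = 0.5 * exp(-2 * n).
def ook_quantum_ber(photons_per_bit):
    return 0.5 * math.exp(-2.0 * photons_per_bit)
```

    This reproduces the familiar quantum limit of roughly 10 photons/bit (average) for a 1e-9 bit error rate.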

  17. Thermodynamics of Highly Concentrated Aqueous Electrolytes: Based on Boltzmann's eponymous equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ally, Moonis Raza

    This sharply focused book invites the reader to explore the chemical thermodynamics of highly concentrated aqueous electrolytes from a different vantage point than traditional methods. The book's foundation is deeply rooted in Ludwig Boltzmann's eponymous equation. The pathway from micro to macro thermodynamics is explained heuristically, in a step-by-step approach. Concepts and mathematical formalism are explained in detail to captivate and maintain interest as the algebra twists and turns. Every significant result is derived in a lucid and piecemeal fashion. Application of the theory is illustrated with worked examples. It is amazing to realize that Boltzmann's simple equation contains sufficient information from which such an elaborate theory can emerge. This book is suitable for undergraduate- and graduate-level classes in chemical engineering, chemistry, geochemistry, environmental sciences, and for those studying aerosol particles in the troposphere. Students interested in understanding how thermodynamic theories may be developed will be inspired by the methodology. The author wishes that readers get as much excitement from reading this book as he did from writing it.

  18. Reduced Stress Tensor and Dissipation and the Transport of Lamb Vector

    NASA Technical Reports Server (NTRS)

    Wu, Jie-Zhi; Zhou, Ye; Wu, Jian-Ming

    1996-01-01

    We develop a methodology to ensure that the stress tensor, regardless of its number of independent components, can be reduced to an exactly equivalent one which has the same number of independent components as the surface force. It is applicable to the momentum balance if the shear viscosity is constant. A direct application of this method to the energy balance also leads to a reduction of the dissipation rate of kinetic energy. Following this procedure, significant savings in analysis and computation may be achieved. For turbulent flows, this strategy immediately implies that a given Reynolds stress model can always be replaced by a reduced one before putting it into computation. Furthermore, we show how the modeling of the Reynolds stress tensor can be reduced to that of the mean turbulent Lamb vector alone, which is much simpler. As a first step of this alternative modeling development, we derive the governing equations for the Lamb vector and its square. These equations form a basis for new second-order closure schemes and, we believe, should compare favorably with the traditional Reynolds stress transport equation.
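
    The central object is easy to compute: the Lamb vector is the cross product of vorticity and velocity, l = omega x u. A tiny sketch using solid-body rotation, an assumed toy field rather than one of the paper's turbulent flows, where the result reduces to a purely radial term:

```python
# Lamb vector l = omega x u (vorticity cross velocity), the quantity whose
# mean the authors propose to model in place of the full Reynolds stresses.
# For solid-body rotation u = (-y, x, 0) the vorticity is (0, 0, 2) and
# l = (-2x, -2y, 0), a purely radial (centripetal-type) term.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def lamb_vector(u, omega):
    return cross(omega, u)

x, y = 0.3, -0.5                      # sample point in the toy flow
u = (-y, x, 0.0)                      # solid-body rotation velocity
l = lamb_vector(u, (0.0, 0.0, 2.0))   # expected: (-2x, -2y, 0)
```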

  19. Zeroth Law, Entropy, Equilibrium, and All That

    NASA Astrophysics Data System (ADS)

    Canagaratna, Sebastian G.

    2008-05-01

    The place of the zeroth law in the teaching of thermodynamics is examined in the context of the recent discussion by Gislason and Craig of some problems involving the establishment of thermal equilibrium. The concept of thermal equilibrium is introduced through the zeroth law. The relation between the zeroth law and the second law in the traditional approach to thermodynamics is discussed. It is shown that the traditional approach does not need to appeal to the second law to solve with rigor the type of problems discussed by Gislason and Craig: in problems not involving chemical reaction, the zeroth law and the condition for mechanical equilibrium, complemented by the first law and any necessary equations of state, are sufficient to determine the final state. We have to invoke the second law only if we wish to calculate the change of entropy. Since most students are exposed to a traditional approach to thermodynamics, the examples of Gislason and Craig are re-examined in terms of the traditional formulation. The maximization of the entropy in the final state can be verified in the traditional approach quite directly by the use of the fundamental equations of thermodynamics. This approach uses relatively simple mathematics in as general a setting as possible.
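
    The claim about the traditional approach can be made concrete with a standard worked example (generic numbers, not Gislason and Craig's): two rigid samples of monatomic ideal gas in thermal contact. The zeroth law supplies a common final temperature, the first law fixes its value, and the second law is invoked only afterwards to evaluate the entropy change:

```python
import math

# Traditional route to the final state: zeroth law (common final T) plus
# first law (energy conservation) alone determine Tf for two rigid ideal-gas
# samples; the second law is needed only for the entropy change.
# Generic numbers, not the specific Gislason-Craig examples.
R = 8.314            # J/(mol K)
n1, n2 = 1.0, 2.0    # moles of each sample
Cv = 1.5 * R         # monatomic ideal gas, constant volume
T1, T2 = 300.0, 400.0  # initial temperatures, K

# First law: n1*Cv*(Tf - T1) + n2*Cv*(Tf - T2) = 0
Tf = (n1 * Cv * T1 + n2 * Cv * T2) / ((n1 + n2) * Cv)

# Second law, used only now, to evaluate the entropy change:
dS = n1 * Cv * math.log(Tf / T1) + n2 * Cv * math.log(Tf / T2)
```

    Here Tf = 1100/3 ≈ 366.7 K and dS ≈ +0.33 J/K, confirming that entropy increases without the second law having been needed to find the final state.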

  20. Artificial Intelligence Procedures for Tree Taper Estimation within a Complex Vegetation Mosaic in Brazil

    PubMed Central

    Nunes, Matheus Henrique

    2016-01-01

    Tree stem form in native tropical forests is very irregular, posing a challenge to establishing taper equations that can accurately predict the diameter at any height along the stem and subsequently merchantable volume. Artificial intelligence approaches can be useful techniques in minimizing estimation errors within complex variations of vegetation. We evaluated the performance of Random Forest® regression tree and Artificial Neural Network procedures in modelling stem taper. Diameters and volume outside bark were compared to a traditional taper-based equation across a tropical Brazilian savanna, a seasonal semi-deciduous forest and a rainforest. Neural network models were found to be more accurate than the traditional taper equation. Random forest showed trends in the residuals from the diameter prediction and provided the least precise and accurate estimations for all forest types. This study provides insights into the superiority of a neural network, which provided advantages regarding the handling of local effects. PMID:27187074
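
    A "traditional taper equation" of the simplest kind can be sketched as a polynomial in relative height fit by least squares; the study's comparison pits models of this family against random forests and neural networks. Synthetic data and assumed coefficients, for illustration only:

```python
import numpy as np

# A simple polynomial taper equation: relative diameter (d/DBH) as a
# polynomial in relative height (h/H), fit by least squares. The "true"
# coefficients and the data below are assumed, purely for illustration --
# not the paper's fitted model or measurements.
rng = np.random.default_rng(0)
rel_h = rng.uniform(0.0, 1.0, 200)                 # relative heights
true = 1.2 - 1.6 * rel_h + 0.4 * rel_h**2          # assumed "true" taper
d_rel = true + rng.normal(0.0, 0.01, rel_h.size)   # add measurement noise

X = np.vander(rel_h, 3, increasing=True)           # columns [1, h, h^2]
coef, *_ = np.linalg.lstsq(X, d_rel, rcond=None)   # least-squares fit
pred = X @ coef
rmse = float(np.sqrt(np.mean((pred - d_rel) ** 2)))
```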

  2. Population stochastic modelling (PSM)--an R package for mixed-effects models based on stochastic differential equations.

    PubMed

    Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode; Overgaard, Rune Viig; Madsen, Henrik

    2009-06-01

    The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornøe, R.V. Overgaard, H. Agersø, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8)) (2005) 1247-1258; R.V. Overgaard, N. Jonsson, C.W. Tornøe, H. Madsen, Non-linear mixed-effects models with stochastic differential equations: implementation of an estimation algorithm, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 85-107; U. Picchini, S. Ditlevsen, A. De Gaetano, Maximum likelihood estimation of a time-inhomogeneous stochastic differential model of glucose dynamics, Math. Med. Biol. 25 (June(2)) (2008) 141-155]. PK/PD models are traditionally based on ordinary differential equations (ODEs) with an observation link that incorporates noise. This state-space formulation only allows for observation noise and not for system noise. Extending to SDEs allows for a Wiener noise component in the system equations. This additional noise component enables handling of autocorrelated residuals originating from natural variation or systematic model error. Autocorrelated residuals are often partly ignored in PK/PD modelling, although they violate the assumptions of many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE(1) approximation to the population likelihood, which is generated from the individual likelihoods that are approximated using the Extended Kalman Filter's one-step predictions.
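
    The distinction between observation noise and system noise is easy to demonstrate with an Euler-Maruyama simulation of a one-compartment elimination model, dX = -kX dt + sigma dW: setting sigma = 0 recovers the ODE solution, while sigma > 0 perturbs the state trajectory itself, which is what produces autocorrelated residuals. A sketch with assumed parameters, not the PSM package's FOCE/EKF machinery:

```python
import numpy as np

# Euler-Maruyama simulation of dX = -k*X dt + sigma dW. With sigma = 0 the
# scheme reduces to a forward-Euler ODE solve of X' = -k*X; with sigma > 0
# the Wiener term perturbs the state itself (system noise), unlike an
# observation-noise model where only the measurements are noisy.
def euler_maruyama(x0, k, sigma, dt, n_steps, rng):
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))          # Wiener increment
        x[i + 1] = x[i] - k * x[i] * dt + sigma * dw
    return x

rng = np.random.default_rng(1)
ode_path = euler_maruyama(10.0, 0.5, 0.0, 0.01, 1000, rng)  # deterministic
sde_path = euler_maruyama(10.0, 0.5, 0.3, 0.01, 1000, rng)  # system noise
```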

  3. Linear-scaling implementation of molecular response theory in self-consistent field electronic-structure theory.

    PubMed

    Coriani, Sonia; Høst, Stinne; Jansík, Branislav; Thøgersen, Lea; Olsen, Jeppe; Jørgensen, Poul; Reine, Simen; Pawłowski, Filip; Helgaker, Trygve; Sałek, Paweł

    2007-04-21

    A linear-scaling implementation of Hartree-Fock and Kohn-Sham self-consistent field theories for the calculation of frequency-dependent molecular response properties and excitation energies is presented, based on a nonredundant exponential parametrization of the one-electron density matrix in the atomic-orbital basis, avoiding the use of canonical orbitals. The response equations are solved iteratively, by an atomic-orbital subspace method equivalent to that of molecular-orbital theory. Important features of the subspace method are the use of paired trial vectors (to preserve the algebraic structure of the response equations), a nondiagonal preconditioner (for rapid convergence), and the generation of good initial guesses (for robust solution). As a result, the performance of the iterative method is the same as in canonical molecular-orbital theory, with five to ten iterations needed for convergence. As in traditional direct Hartree-Fock and Kohn-Sham theories, the calculations are dominated by the construction of the effective Fock/Kohn-Sham matrix, once in each iteration. Linear complexity is achieved by using sparse-matrix algebra, as illustrated in calculations of excitation energies and frequency-dependent polarizabilities of polyalanine peptides containing up to 1400 atoms.

  4. Visualization of atomic-scale phenomena in superconductors: application to FeSe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choubey, Peayush; Berlijn, Tom; Kreisel, Andreas

    Here we propose a simple method of calculating inhomogeneous, atomic-scale phenomena in superconductors which makes use of the wave function information traditionally discarded in the construction of tight-binding models used in the Bogoliubov-de Gennes equations. The method uses symmetry-based first-principles Wannier functions to visualize the effects of superconducting pairing on the distribution of electronic states over atoms within a crystal unit cell. Local symmetries lower than the global lattice symmetry can thus be exhibited as well, rendering theoretical comparisons with scanning tunneling spectroscopy data much more useful. As a simple example, we discuss the geometric dimer states observed near defects in superconducting FeSe.

  6. Using Structural Equation Models with Latent Variables to Study Student Growth and Development.

    ERIC Educational Resources Information Center

    Pike, Gary R.

    1991-01-01

    Analysis of data on freshman-to-senior developmental gains in 722 University of Tennessee-Knoxville students provides evidence of the advantages of structural equation modeling with latent variables and suggests that the group differences identified by traditional analysis of variance and covariance techniques may be an artifact of measurement…

  7. Short term evaluation of harvesting systems for ecosystem management

    Treesearch

    Michael D. Erickson; Penn Peters; Curt Hassler

    1995-01-01

    Continuous time/motion studies have traditionally been the basis for productivity estimates of timber harvesting systems. The detailed data from such studies permit the researcher or analyst to develop mathematical relationships, based on stand, system, and stem attributes, for describing machine cycle times. The resulting equation(s) allow the analyst to estimate...

  8. An Evaluation of Three Approximate Item Response Theory Models for Equating Test Scores.

    ERIC Educational Resources Information Center

    Marco, Gary L.; And Others

    Three item response models were evaluated for estimating item parameters and equating test scores. The models, which approximated the traditional three-parameter model, included: (1) the Rasch one-parameter model, operationalized in the BICAL computer program; (2) an approximate three-parameter logistic model based on coarse group data divided…
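
    The relationship among the models being compared is easy to state in code: the Rasch (one-parameter) model is the three-parameter logistic model with discrimination a = 1 and guessing c = 0. A minimal sketch of the item characteristic curves:

```python
import math

# Item characteristic curves for the models under comparison. The 3PL model
# gives P(correct | theta) = c + (1 - c) / (1 + exp(-a (theta - b)));
# the Rasch (1PL) model is the special case a = 1, c = 0.
def p_3pl(theta, a, b, c):
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def p_rasch(theta, b):
    return p_3pl(theta, 1.0, b, 0.0)
```

    At theta = b the Rasch probability is exactly 0.5, while a nonzero guessing parameter c raises the lower asymptote of the 3PL curve.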

  9. Prescriptive Statements and Educational Practice: What Can Structural Equation Modeling (SEM) Offer?

    ERIC Educational Resources Information Center

    Martin, Andrew J.

    2011-01-01

    Longitudinal structural equation modeling (SEM) can be a basis for making prescriptive statements on educational practice and offers yields over "traditional" statistical techniques under the general linear model. The extent to which prescriptive statements can be made will rely on the appropriate accommodation of key elements of research design,…

  10. Quantification of Cardiorespiratory Fitness in Healthy Nonobese and Obese Men and Women

    PubMed Central

    Lorenzo, Santiago

    2012-01-01

    Background: The quantification and interpretation of cardiorespiratory fitness (CRF) in obesity is important for adequately assessing cardiovascular conditioning, underlying comorbidities, and properly evaluating disease risk. We retrospectively compared peak oxygen uptake (V̇O2peak) (i.e., CRF) in absolute terms and in relative terms (% predicted) using three currently suggested prediction equations (Equations R, W, and G). Methods: There were 19 nonobese and 66 obese participants. Subjects underwent hydrostatic weighing and incremental cycling to exhaustion. Subject characteristics were analyzed by independent t test, and % predicted V̇O2peak by a two-way analysis of variance (group and equation) with repeated measures on one factor (equation). Results: V̇O2peak (L/min) was not different between nonobese and obese adults (2.35 ± 0.80 [SD] vs 2.39 ± 0.68 L/min). V̇O2peak was higher (P < .02) in the nonobese relative to body mass and to lean body mass (34 ± 8 vs 22 ± 5 mL/min/kg; 42 ± 9 vs 37 ± 6 mL/min/kg lean body mass). Cardiorespiratory fitness assessed as % predicted was not different in the nonobese and obese (91% ± 17% vs 95% ± 15% predicted) using Equation R, while using Equations W and G, CRF was lower (P < .05) but within normal limits in the obese (94 ± 15 vs 87 ± 11; 101% ± 17% vs 90% ± 12% predicted, respectively), depending somewhat on sex. Conclusions: Traditional methods of reporting V̇O2peak do not allow adequate assessment and quantification of CRF in obese adults. Predicted V̇O2peak does allow a normalized evaluation of CRF in the obese, although care must be taken in selecting the most appropriate prediction equation, especially in women. In general, the otherwise healthy obese are not grossly deconditioned as is commonly believed, although CRF may be slightly higher in nonobese subjects depending on the uniqueness of the prediction equation. PMID:21940772

  11. Identifying model error in metabolic flux analysis - a generalized least squares approach.

    PubMed

    Sokolenko, Stanislav; Quattrociocchi, Marco; Aucoin, Marc G

    2016-09-13

    The estimation of intracellular flux through traditional metabolic flux analysis (MFA) using an overdetermined system of equations is a well-established practice in metabolic engineering. Despite the continued evolution of the methodology since its introduction, there has been little focus on validation and identification of poor model fit beyond identifying "gross measurement error". The growing complexity of metabolic models, which are increasingly generated from genome-level data, has necessitated robust validation that can directly assess model fit. In this work, MFA calculation is framed as a generalized least squares (GLS) problem, highlighting the applicability of the common t-test for model validation. To differentiate between measurement and model error, we simulate ideal flux profiles directly from the model, perturb them with estimated measurement error, and compare their validation to real data. Application of this strategy to an established Chinese Hamster Ovary (CHO) cell model shows how fluxes validated by traditional means may be largely non-significant due to a lack of model fit. With further simulation, we explore how t-test significance relates to calculation error and show that fluxes found to be non-significant have 2- to 4-fold larger error (if measurement uncertainty is in the 5-10% range). The proposed validation method goes beyond traditional detection of "gross measurement error" to identify lack of fit between model and data. Although the focus of this work is on t-test validation and traditional MFA, the presented framework is readily applicable to other regression analysis methods and MFA formulations.
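
    The GLS framing can be sketched directly: for measurements m = S v + e with error covariance Sigma, the flux estimate is v_hat = (S' Sigma^-1 S)^-1 S' Sigma^-1 m, and the same normal-equations matrix supplies the parameter covariance used for t-tests. A toy overdetermined system with assumed numbers, not the paper's CHO model:

```python
import numpy as np

# MFA as generalized least squares: m = S v + e, e ~ N(0, Sigma), so
#   v_hat = (S' Sigma^-1 S)^-1 S' Sigma^-1 m.
# Toy overdetermined "network" (4 measured rates, 2 free fluxes); the
# stoichiometry and variances are assumed for illustration.
S = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
v_true = np.array([2.0, 3.0])
Sigma = np.diag([0.1, 0.1, 0.2, 0.2])   # measurement error covariance

m = S @ v_true                           # noiseless measurements for the check
W = np.linalg.inv(Sigma)                 # GLS weight matrix
v_hat = np.linalg.solve(S.T @ W @ S, S.T @ W @ m)
cov = np.linalg.inv(S.T @ W @ S)         # parameter covariance for t-tests
```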

  12. [Study of spatial stratified sampling strategy of Oncomelania hupensis snail survey based on plant abundance].

    PubMed

    Xun-Ping, W; An, Z

    2017-07-27

    Objective: To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods: A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored in an experimental study in a 50 m × 50 m plot in a marshland in the Poyang Lake region. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required numbers of optimal sampling points for each layer were calculated through the Hammond-McCullagh equation; thirdly, every sampling point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, a comparison was performed among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA. Results: SOPA, the method proposed in this study, had the minimal absolute error (0.2138); the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion: The snail sampling strategy (SOPA) proposed in this study obtains higher estimation accuracy than the other four methods.

  13. Khokhlov Zabolotskaya Kuznetsov type equation: nonlinear acoustics in heterogeneous media

    NASA Astrophysics Data System (ADS)

    Kostin, Ilya; Panasenko, Grigory

    2006-04-01

    The KZK type equation introduced in this Note differs from the traditional form of the KZK model known in acoustics by the assumptions on the nonlinear term. For this modified form, a global existence and uniqueness result is established for the case of non-constant coefficients. Afterwards the asymptotic behaviour of the solution of the KZK type equation with rapidly oscillating coefficients is studied. To cite this article: I. Kostin, G. Panasenko, C. R. Mecanique 334 (2006).

  14. Hybrid ODE/SSA methods and the cell cycle model

    NASA Astrophysics Data System (ADS)

    Wang, S.; Chen, M.; Cao, Y.

    2017-07-01

    Stochastic effect in cellular systems has been an important topic in systems biology. Stochastic modeling and simulation methods are important tools to study stochastic effect. Given the low efficiency of stochastic simulation algorithms, the hybrid method, which combines an ordinary differential equation (ODE) system with a stochastic chemically reacting system, shows its unique advantages in the modeling and simulation of biochemical systems. The efficiency of hybrid method is usually limited by reactions in the stochastic subsystem, which are modeled and simulated using Gillespie's framework and frequently interrupt the integration of the ODE subsystem. In this paper we develop an efficient implementation approach for the hybrid method coupled with traditional ODE solvers. We also compare the efficiency of hybrid methods with three widely used ODE solvers RADAU5, DASSL, and DLSODAR. Numerical experiments with three biochemical models are presented. A detailed discussion is presented for the performances of three ODE solvers.
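
    The stochastic side of such a hybrid scheme is Gillespie's direct method; a minimal SSA for a birth-death process shows the event-by-event stepping that repeatedly interrupts an ODE integrator in the hybrid setting (toy rates, not one of the paper's cell-cycle reactions):

```python
import numpy as np

# Minimal Gillespie SSA (direct method) for a birth-death process:
#   0 --kb--> X,   X --kd--> 0.
# Each iteration draws an exponential waiting time from the total propensity
# and then picks which reaction fired -- the event-by-event stepping that
# interrupts the ODE subsystem in a hybrid ODE/SSA scheme.
def ssa_birth_death(kb, kd, x0, t_end, rng):
    t, x = 0.0, x0
    while True:
        birth, death = kb, kd * x          # reaction propensities
        total = birth + death
        if total <= 0.0:
            return x
        t += rng.exponential(1.0 / total)  # time to next event
        if t >= t_end:
            return x
        if rng.random() * total < birth:
            x += 1
        else:
            x -= 1

rng = np.random.default_rng(7)
samples = [ssa_birth_death(10.0, 1.0, 0, 20.0, rng) for _ in range(300)]
mean_x = sum(samples) / len(samples)   # stationary mean is kb/kd = 10
```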

  15. Estimation of Newborn Risk for Child or Adolescent Obesity: Lessons from Longitudinal Birth Cohorts

    PubMed Central

    Morandi, Anita; Meyre, David; Lobbens, Stéphane; Kleinman, Ken; Kaakinen, Marika; Rifas-Shiman, Sheryl L.; Vatin, Vincent; Gaget, Stefan; Pouta, Anneli; Hartikainen, Anna-Liisa; Laitinen, Jaana; Ruokonen, Aimo; Das, Shikta; Khan, Anokhi Ali; Elliott, Paul; Maffeis, Claudio; Gillman, Matthew W.

    2012-01-01

Objectives Prevention of obesity should start as early as possible after birth. We aimed to build clinically useful equations estimating the risk of later obesity in newborns, as a first step towards focused early prevention against the global obesity epidemic. Methods We analyzed the lifetime Northern Finland Birth Cohort 1986 (NFBC1986) (N = 4,032) to draw predictive equations for childhood and adolescent obesity from traditional risk factors (parental BMI, birth weight, maternal gestational weight gain, behaviour and social indicators), and a genetic score built from 39 BMI/obesity-associated polymorphisms. We performed validation analyses in a retrospective cohort of 1,503 Italian children and in a prospective cohort of 1,032 U.S. children. Results In the NFBC1986, the cumulative accuracy of traditional risk factors predicting childhood obesity, adolescent obesity, and childhood obesity persistent into adolescence was good: AUROC = 0.78 [0.74–0.82], 0.75 [0.71–0.79] and 0.85 [0.80–0.90], respectively (all p < 0.001). Adding the genetic score produced discrimination improvements ≤1%. The NFBC1986 equation for childhood obesity remained acceptably accurate when applied to the Italian and the U.S. cohorts (AUROC = 0.70 [0.63–0.77] and 0.73 [0.67–0.80], respectively), and the two additional equations for childhood obesity newly drawn from the Italian and the U.S. datasets showed good accuracy in their respective cohorts (AUROC = 0.74 [0.69–0.79] and 0.79 [0.73–0.84]) (all p < 0.001). The three equations for childhood obesity were converted into simple Excel risk calculators for potential clinical use. Conclusion This study provides the first example of handy tools for predicting childhood obesity in newborns by means of easily recorded information, while it shows that currently known genetic variants have very little usefulness for such prediction. PMID:23209618
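The AUROC values reported above can be computed directly from predicted risks via the Mann-Whitney statistic; a short sketch with invented scores and outcome labels:

```python
def auroc(scores, labels):
    """AUROC equals the Mann-Whitney statistic: the probability that a
    randomly chosen positive scores higher than a randomly chosen negative
    (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted risks and observed obesity outcomes (invented data).
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3]
labels = [1,   1,   0,   1,   0,    0,   1,   0]
print(auroc(scores, labels))  # → 0.75
```

An AUROC of 0.75 means a randomly chosen obese child outranks a randomly chosen non-obese child 75% of the time, the same scale used for the equations in the abstract.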

  16. Numerical simulation of the vortical flow around a pitching airfoil

    NASA Astrophysics Data System (ADS)

    Fu, Xiang; Li, Gaohua; Wang, Fuxin

    2017-04-01

In order to study the dynamic behavior of a flapping wing, the vortical flow around a pitching NACA0012 airfoil is investigated. The unsteady flow field is obtained by a very efficient zonal procedure based on the velocity-vorticity formulation, with the Reynolds number based on the chord length of the airfoil set to 1 million. The zonal procedure divides the whole computational domain into three zones: a potential flow zone, a boundary layer zone and a Navier-Stokes zone. Since vorticity is absent in the potential flow zone, the vorticity transport equation need only be solved in the boundary layer and Navier-Stokes zones; moreover, the boundary layer equations are solved in the boundary layer zone. This arrangement drastically reduces the computation time compared with traditional numerical methods. After the flow field computation, the evolution of the vortices around the airfoil is analyzed in detail.

  17. Aeroservoelastic modeling and applications using minimum-state approximations of the unsteady aerodynamics

    NASA Technical Reports Server (NTRS)

    Tiffany, Sherwood H.; Karpel, Mordechay

    1989-01-01

Various control analysis, design, and simulation techniques for aeroelastic applications require the equations of motion to be cast in linear time-invariant state-space form. Unsteady aerodynamic forces have to be approximated as rational functions of the Laplace variable in order to fit this framework. For the minimum-state method, the number of denominator roots in the rational approximation determines the number of augmenting aerodynamic states. Results are shown from applying various approximation enhancements (including optimization, frequency-dependent weighting of the tabular data, and constraint selection) with the minimum-state formulation to the active flexible wing wind-tunnel model. The results demonstrate that good models can be developed with an order of magnitude fewer augmenting aerodynamic equations than traditional approaches. This reduction facilitates the design of lower-order control systems, analysis of control system performance, and near-real-time simulation of aeroservoelastic phenomena.
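A rational-function fit of the kind described can be sketched with a Roger-type approximation with fixed lag roots, which reduces to linear least squares (the synthetic tabular data, lag roots, and scalar setting are illustrative; the minimum-state method goes further by sharing denominator dynamics across all force terms):

```python
import numpy as np

# Synthetic "tabular" scalar aerodynamic data Q(ik) at reduced frequencies k,
# an illustrative stand-in for tabulated unsteady aerodynamic coefficients.
k = np.linspace(0.05, 1.0, 20)
s = 1j * k
target = 1.0 + 0.5 * s + s / (s + 0.3)   # pretend these are the tabulated values

# Roger-type rational approximation with fixed lag roots b_j:
#   Q(s) ~ A0 + A1*s + sum_j A_j * s / (s + b_j)
# With the roots fixed, the coefficients follow from complex linear least squares.
lags = np.array([0.2, 0.6])
cols = [np.ones_like(s), s] + [s / (s + b) for b in lags]
M = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(M, target, rcond=None)
fit = M @ coef
err = np.max(np.abs(fit - target))
print("coefficients:", np.round(coef.real, 3), "max fit error:", err)
```

Each lag root adds one augmenting aerodynamic state per force term in the resulting state-space model, which is why reducing the number of denominator roots reduces the model order.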

  18. KEEN Wave Simulations: Comparing various PIC to various fixed grid Vlasov to Phase-Space Adaptive Sparse Tiling & Effective Lagrangian (PASTEL) Techniques

    NASA Astrophysics Data System (ADS)

    Afeyan, Bedros; Larson, David; Shadwick, Bradley; Sydora, Richard

    2017-10-01

We compare various ways of solving the Vlasov-Poisson and Vlasov-Maxwell equations on rather demanding nonlinear kinetic phenomena associated with KEEN and KEEPN waves. KEEN stands for Kinetic, Electrostatic, Electron Nonlinear; KEEPN is the electron-positron (pair-plasma) analog. Because these self-organized phase-space structures are not steady state, single mode, fluid, or low-order-moment limited, typical techniques with low resolution or too much noise will distort the answer too much, too soon, and fail. This will be shown via Penrose-criterion triggers for instability at the formation stage, as well as particle orbit statistics in fully formed KEEN waves and in KEEN-KEEN and KEEN-EPW interacting states. We will argue that PASTEL is a viable alternative to traditional methods, with reasonable chances of success in higher dimensions. Work supported by a grant from AFOSR PEEP.

  19. [Quantitative estimation of vegetation cover and management factor in USLE and RUSLE models by using remote sensing data: a review].

    PubMed

    Wu, Chang-Guang; Li, Sheng; Ren, Hua-Dong; Yao, Xiao-Hua; Huang, Zi-Jie

    2012-06-01

Soil loss prediction models such as the universal soil loss equation (USLE) and the revised universal soil loss equation (RUSLE) are useful tools for risk assessment of soil erosion and planning of soil conservation at the regional scale. A rational estimate of the vegetation cover and management factor, one of the most important parameters in USLE and RUSLE, is particularly important for accurate prediction of soil erosion. Traditional estimation based on field survey and measurement is time-consuming, laborious, and costly, and cannot rapidly extract the vegetation cover and management factor at the macro-scale. In recent years, the development of remote sensing technology has provided both data and methods for estimating the vegetation cover and management factor over broad geographic areas. This paper summarizes research findings on the quantitative estimation of the vegetation cover and management factor from remote sensing data and analyzes the advantages and disadvantages of the various methods, aiming to provide a reference for further research and for quantitative estimation of the factor at large scales.

  20. A Systematic Approach to Determining the Identifiability of Multistage Carcinogenesis Models.

    PubMed

    Brouwer, Andrew F; Meza, Rafael; Eisenberg, Marisa C

    2017-07-01

Multistage clonal expansion (MSCE) models of carcinogenesis are continuous-time Markov process models often used to relate cancer incidence to biological mechanism. Identifiability analysis determines what model parameter combinations can, theoretically, be estimated from given data. We use a systematic approach, based on differential algebra methods traditionally used for deterministic ordinary differential equation (ODE) models, to determine identifiable combinations for a generalized subclass of MSCE models with any number of preinitiation stages and one clonal expansion. Additionally, we determine the identifiable combinations of the generalized MSCE model with up to four clonal expansion stages, and conjecture the results for any number of clonal expansion stages. The results improve upon previous work in a number of ways and provide a framework to find the identifiable combinations for further variations on the MSCE models. Finally, our approach, which takes advantage of the Kolmogorov backward equations for the probability generating functions of the Markov process, demonstrates that identifiability methods used in engineering and mathematics for systems of ODEs can be applied to continuous-time Markov processes. © 2016 Society for Risk Analysis.

1. [Effect of temperature on the aerobic degradation of vitamin C in citrus fruit juices].

    PubMed

    Alvarado, J D; Palacios Viteri, N

    1989-12-01

By means of the 2,4-dinitrophenylhydrazine method, the total ascorbic acid content of lime, lemon, tangerine and grapefruit juices, fresh and stored at four temperatures for different times, was determined. It was confirmed that in all cases the aerobic degradation of ascorbic acid follows first-order kinetics and that the reaction rate constants differ between species, and even between varieties of lemon and tangerine. The values of the equation terms are reported and examples of application given. Within the range from 20 °C to 92 °C, the effect of temperature on the rate of ascorbic acid degradation is described satisfactorily by the Arrhenius equation, from which the corresponding activation energies were calculated and compared with other published values. With a simple two-step application of the method, and considering that L-ascorbic acid and L-dehydroascorbic acid predominate, the results can be used to calculate vitamin C losses in citrus fruit juices when they are processed by traditional thermal treatments.
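The two-step calculation the abstract describes (first-order decay combined with Arrhenius temperature dependence) can be sketched as follows; the activation energy and pre-exponential factor below are assumed round numbers for illustration, not the paper's fitted values:

```python
import math

# First-order aerobic degradation: C(t) = C0 * exp(-k t), with the rate
# constant following Arrhenius behaviour, k(T) = A * exp(-Ea / (R T)).
R = 8.314          # gas constant, J/(mol K)
Ea = 60_000.0      # activation energy, J/mol (assumed, illustrative)
A = 4.0e7          # pre-exponential factor, 1/h (assumed, illustrative)

def rate_constant(T_celsius):
    """Arrhenius rate constant at a given temperature."""
    T = T_celsius + 273.15
    return A * math.exp(-Ea / (R * T))

def vitamin_c_remaining(C0, T_celsius, hours):
    """Ascorbic acid left after first-order degradation at temperature T."""
    return C0 * math.exp(-rate_constant(T_celsius) * hours)

for T in (20, 60, 92):
    k = rate_constant(T)
    left = vitamin_c_remaining(50.0, T, 10)
    print(f"{T:2d} °C: k = {k:.4g} 1/h, 50 mg/100 mL after 10 h -> {left:.1f}")
```

Fitting ln k against 1/T over the measured temperatures is exactly how the activation energies reported in the abstract would be obtained.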

  2. Advancing the Theory of Nuclear Reactions with Rare Isotopes: From the Laboratory to the Cosmos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elster, Charlotte

    2015-06-01

The mission of the TORUS Topical Collaboration is to develop new methods that will advance nuclear reaction theory for unstable isotopes by using three-body techniques to improve direct-reaction calculations, and, by using a new partial-fusion theory, to integrate descriptions of direct and compound-nucleus reactions. Ohio University concentrates its efforts on the first part of the mission. Since direct measurements are often not feasible, indirect methods, e.g. (d,p) reactions, should be used. Those (d,p) reactions may be viewed as three-body reactions and described with Faddeev techniques. Faddeev equations in momentum space have a long tradition of utilizing separable interactions in order to arrive at sets of coupled integral equations in one variable. While there exist several separable representations for the nucleon-nucleon interaction, the optical potential between a neutron (proton) and a nucleus is not readily available in separable form. For this reason we first embarked on introducing a separable representation for complex phenomenological optical potentials of Woods-Saxon type.

  3. Modeling and optimization of dough recipe for breadsticks

    NASA Astrophysics Data System (ADS)

    Krivosheev, A. Yu; Ponomareva, E. I.; Zhuravlev, A. A.; Lukina, S. I.; Alekhina, N. N.

    2018-05-01

During this work, the authors studied the combined effect of non-traditional raw materials on quality indicators of breadsticks, applying mathematical methods of experiment planning. The main factors chosen were the dosages of flaxseed flour and grape seed oil; the output parameters were the swelling factor of the products and their strength. The formulation of the dough for breadsticks was optimized by experimental-statistical methods. As a result of the experiment, mathematical models in the form of regression equations, adequately describing the process under study, were constructed. Statistical processing of the experimental data was carried out with the Student, Cochran and Fisher criteria (at a confidence probability of 0.95), and a mathematical interpretation of the regression equations was given. The formulation of the dough was then optimized by the method of undetermined Lagrange multipliers. The rational values of the factors were determined: a flaxseed flour dosage of 14.22% and a grape seed oil dosage of 7.8%, ensuring products with the best combination of swelling ratio and strength. On the basis of the data obtained, a recipe and a production method for "Idea" breadsticks were proposed (TU (Russian Technical Specifications) 9117-443-02068106-2017).

  4. Proxy-equation paradigm: A strategy for massively parallel asynchronous computations

    NASA Astrophysics Data System (ADS)

    Mittal, Ankita; Girimaji, Sharath

    2017-09-01

Massively parallel simulations of transport equation systems call for a paradigm change in algorithm development to achieve efficient scalability. Traditional approaches require time synchronization of processing elements (PEs), which severely restricts scalability. Relaxing the synchronization requirement introduces error and slows down convergence. In this paper, we propose and develop a novel "proxy equation" concept for a general transport equation that (i) tolerates asynchrony with minimal added error, (ii) preserves convergence order, and thus (iii) is expected to scale efficiently on massively parallel machines. The central idea is to modify a priori the transport equation at the PE boundaries to offset asynchrony errors. Proof-of-concept computations are performed using a one-dimensional advection (convection) diffusion equation. The results demonstrate the promise and advantages of the present strategy.
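For reference, the proof-of-concept model problem, a 1-D advection-diffusion equation, can be advanced with a plain synchronous explicit scheme like the sketch below; the proxy-equation modification at PE boundaries is not reproduced here:

```python
import numpy as np

# Explicit finite-difference step for the 1-D advection-diffusion equation
#   u_t + c u_x = nu u_xx   (the model problem named in the abstract).
# This is an ordinary synchronous single-domain update, shown only to make
# the model problem concrete.
nx, c, nu = 200, 1.0, 0.01
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = 0.2 * min(dx / c, dx * dx / (2 * nu))   # respect advective and diffusive limits

u = np.sin(2 * np.pi * x)                    # periodic initial condition
for _ in range(500):
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)        # central gradient
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2  # discrete Laplacian
    u = u + dt * (-c * ux + nu * uxx)

print(f"dt = {dt:.2e}, amplitude after 500 steps: {np.abs(u).max():.4f}")
```

In a parallel run, each `np.roll` neighbour access at a PE boundary is where synchronization (or, in the paper's approach, a proxy modification of the equation) would enter.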

  5. [Correlation between physical characteristics of sticks and quality of traditional Chinese medicine pills prepared by plastic molded method].

    PubMed

    Wang, Ling; Xian, Jiechen; Hong, Yanlong; Lin, Xiao; Feng, Yi

    2012-05-01

To quantify the physical characteristics of the sticks from which traditional Chinese medicine (TCM) honeyed pills are prepared by the plastic molded method, and to correlate the adhesiveness- and plasticity-related parameters of the sticks with pill quality, in order to find the major parameters affecting pill quality and their appropriate ranges. Sticks were measured with a texture analyzer for physical characteristic parameters such as hardness and compression behavior, and pill quality was assessed by visual evaluation; the correlation between the two data sets was determined by stepwise discriminant analysis. The stick parameter l(CD) exactly depicts adhesiveness, with the discriminant equation Y0 - Y1 = 6.415 - 41.594l(CD): when Y0 < Y1, pills separated well; when Y0 > Y1, pills adhered to each other. The stick parameters l(CD), l(AC), Ar and Tr exactly depict the smoothness of pills, with the discriminant equation Z0 - Z1 = -195.318 + 78.79l(AC) - 3258.982Ar + 3437.935Tr: when Z0 < Z1, pills were smooth on the surface; when Z0 > Z1, pills were rough. The stepwise discriminant analysis thus shows a clear correlation between the key stick parameters l(CD), l(AC), Ar and Tr and the appearance quality of the pills, defining the molding process for preparing pills by the plastic molded method and qualifying the ranges of the key physical parameters of the intermediate sticks, so as to provide a theoretical basis for prescription screening and for adjusting technical parameters.

  6. Humidity and Gravimetric Equivalency Adjustments for Nephelometer-Based Particulate Matter Measurements of Emissions from Solid Biomass Fuel Use in Cookstoves

    PubMed Central

    Soneja, Sutyajeet; Chen, Chen; Tielsch, James M.; Katz, Joanne; Zeger, Scott L.; Checkley, William; Curriero, Frank C.; Breysse, Patrick N.

    2014-01-01

Great uncertainty exists around indoor biomass burning exposure-disease relationships due to lack of detailed exposure data in large health outcome studies. Passive nephelometers can be used to estimate high particulate matter (PM) concentrations during cooking in low resource environments. Since passive nephelometers do not have a collection filter, they are not subject to sampler overload. Nephelometric concentration readings can be biased due to particle growth in highly humid environments and differences in compositional and size-dependent aerosol characteristics. This paper explores relative humidity (RH) and gravimetric equivalency adjustment approaches for the pDR-1000 nephelometer used to assess indoor PM concentrations for a cookstove intervention trial in Nepal. Three approaches to humidity adjustment performed equivalently (similar root mean squared error). For gravimetric conversion, a new linear regression equation with log-transformed variables performed better than the traditional linear equation. In addition, gravimetric conversion equations utilizing a spline or quadratic term were examined. We propose a humidity adjustment equation encompassing the entire RH range instead of adjusting for RH above an arbitrary 60% threshold. Furthermore, we propose new integrated RH and gravimetric conversion methods because they have one response variable (gravimetric PM2.5 concentration), do not contain an RH threshold, and are straightforward. PMID:24950062

  7. Humidity and gravimetric equivalency adjustments for nephelometer-based particulate matter measurements of emissions from solid biomass fuel use in cookstoves.

    PubMed

    Soneja, Sutyajeet; Chen, Chen; Tielsch, James M; Katz, Joanne; Zeger, Scott L; Checkley, William; Curriero, Frank C; Breysse, Patrick N

    2014-06-19

Great uncertainty exists around indoor biomass burning exposure-disease relationships due to lack of detailed exposure data in large health outcome studies. Passive nephelometers can be used to estimate high particulate matter (PM) concentrations during cooking in low resource environments. Since passive nephelometers do not have a collection filter, they are not subject to sampler overload. Nephelometric concentration readings can be biased due to particle growth in highly humid environments and differences in compositional and size-dependent aerosol characteristics. This paper explores relative humidity (RH) and gravimetric equivalency adjustment approaches for the pDR-1000 nephelometer used to assess indoor PM concentrations for a cookstove intervention trial in Nepal. Three approaches to humidity adjustment performed equivalently (similar root mean squared error). For gravimetric conversion, a new linear regression equation with log-transformed variables performed better than the traditional linear equation. In addition, gravimetric conversion equations utilizing a spline or quadratic term were examined. We propose a humidity adjustment equation encompassing the entire RH range instead of adjusting for RH above an arbitrary 60% threshold. Furthermore, we propose new integrated RH and gravimetric conversion methods because they have one response variable (gravimetric PM2.5 concentration), do not contain an RH threshold, and are straightforward.
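A minimal sketch of a gravimetric conversion fit on log-transformed variables, using invented nephelometer/gravimetric pairs (the pDR-1000 study's actual data and coefficients are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic paired data (illustrative, not the study's): gravimetric PM2.5
# related to the nephelometer reading by a power law with multiplicative
# error -- the situation in which a regression on log-transformed variables
# is the natural model.
neph = rng.uniform(20, 2000, size=300)                   # nephelometer reading
grav = 0.8 * neph**0.9 * rng.lognormal(0.0, 0.25, 300)   # gravimetric PM2.5

# Log-transformed linear equation: log(grav) = a + b * log(neph)
b, a = np.polyfit(np.log(neph), np.log(grav), 1)
pred = np.exp(a + b * np.log(neph))                      # back-transformed

rel_rmse = np.sqrt(np.mean((np.log(pred) - np.log(grav)) ** 2))
print(f"fitted power {b:.3f}, scale {np.exp(a):.3f}, log-scale RMSE {rel_rmse:.3f}")
```

Because the error here is multiplicative, ordinary least squares on the log-transformed variables recovers the assumed power law, the same reason the log-transformed equation outperformed the plain linear one in the study.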

  8. Applying Boundary Conditions Using a Time-Dependent Lagrangian for Modeling Laser-Plasma Interactions

    NASA Astrophysics Data System (ADS)

    Reyes, Jonathan; Shadwick, B. A.

    2016-10-01

Modeling the evolution of a short, intense laser pulse propagating through an underdense plasma is of particular interest in the physics of laser-plasma interactions. Numerical models are typically created by first discretizing the equations of motion and then imposing boundary conditions. Using the variational principle of Chen and Sudan, we spatially discretize the Lagrangian density to obtain discrete equations of motion and a discrete energy conservation law which is exactly satisfied regardless of the spatial grid resolution. Modifying the derived equations of motion (e.g., enforcing boundary conditions) generally ruins energy conservation. However, time-dependent terms can be added to the Lagrangian which force the equations of motion to have the desired boundary conditions. Although some foresight is needed to choose these time-dependent terms, this approach provides a mechanism for energy to exit the closed system while allowing the conservation law to account for the loss. An appropriate time discretization scheme is selected based on stability analysis and resolution requirements. We present results using this variational approach in a co-moving coordinate system and compare such results to those using traditional second-order methods. This work was supported by the U. S. Department of Energy under Contract No. DE-SC0008382 and by the National Science Foundation under Contract No. PHY-1104683.

  9. Modelling Evolutionary Algorithms with Stochastic Differential Equations.

    PubMed

    Heredia, Jorge Pérez

    2017-11-20

    There has been renewed interest in modelling the behaviour of evolutionary algorithms (EAs) by more traditional mathematical objects, such as ordinary differential equations or Markov chains. The advantage is that the analysis becomes greatly facilitated due to the existence of well established methods. However, this typically comes at the cost of disregarding information about the process. Here, we introduce the use of stochastic differential equations (SDEs) for the study of EAs. SDEs can produce simple analytical results for the dynamics of stochastic processes, unlike Markov chains which can produce rigorous but unwieldy expressions about the dynamics. On the other hand, unlike ordinary differential equations (ODEs), they do not discard information about the stochasticity of the process. We show that these are especially suitable for the analysis of fixed budget scenarios and present analogues of the additive and multiplicative drift theorems from runtime analysis. In addition, we derive a new more general multiplicative drift theorem that also covers non-elitist EAs. This theorem simultaneously allows for positive and negative results, providing information on the algorithm's progress even when the problem cannot be optimised efficiently. Finally, we provide results for some well-known heuristics namely Random Walk (RW), Random Local Search (RLS), the (1+1) EA, the Metropolis Algorithm (MA), and the Strong Selection Weak Mutation (SSWM) algorithm.
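The multiplicative drift setting mentioned above is commonly illustrated with the (1+1) EA on OneMax, where the distance to the optimum shrinks multiplicatively in expectation; a small simulation sketch (problem size, seeds, and run count chosen arbitrarily):

```python
import math
import random

def one_plus_one_ea_onemax(n, seed):
    """(1+1) EA on OneMax: flip each bit with probability 1/n, accept if not
    worse.  Multiplicative drift: the number of zero bits shrinks in
    expectation by a constant factor of itself per step, which yields the
    classical O(n log n) expected runtime."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    steps = 0
    while sum(x) < n:
        y = [b ^ (rng.random() < 1.0 / n) for b in x]  # mutate each bit w.p. 1/n
        if sum(y) >= sum(x):                           # elitist acceptance
            x = y
        steps += 1
    return steps

n = 50
runs = [one_plus_one_ea_onemax(n, s) for s in range(20)]
avg = sum(runs) / len(runs)
print(f"mean runtime {avg:.0f}, e*n*ln(n) = {math.e * n * math.log(n):.0f}")
```

The empirical mean runtime tracks the e·n·ln n scale that drift arguments predict; the SDE framework in the abstract additionally describes fixed-budget progress, not just hitting times.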

  10. Stress Formulation in Three-Dimensional Elasticity

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.

    2001-01-01

The theory of elasticity evolved over centuries through the contributions of eminent scientists like Cauchy, Navier, Hooke, Saint-Venant, and others. It was deemed complete when Saint-Venant provided the strain formulation in 1860. However, unlike Cauchy, who addressed equilibrium both in the field and on the boundary, the strain formulation was confined to the field alone: Saint-Venant overlooked compatibility on the boundary. Because of this deficiency, a direct stress formulation could not be developed, and stress in traditional methods must be recovered by back-calculation: differentiating either the displacement or the stress function. We have addressed the compatibility on the boundary. Augmenting these conditions has completed the stress formulation in elasticity, opening up a way for a direct determination of stress without the intermediate step of calculating the displacement or the stress function. This Completed Beltrami-Michell Formulation (CBMF) can be specialized to derive the traditional methods, but the reverse is not possible. Elasticity solutions must be verified for compliance with the new equation because the boundary compatibility conditions expressed in terms of displacement are not trivially satisfied. This paper presents the variational derivation of the stress formulation, illustrates the method, examines attributes and benefits, and outlines the future course of research.

  11. Acceleration of stable TTI P-wave reverse-time migration with GPUs

    NASA Astrophysics Data System (ADS)

    Kim, Youngseo; Cho, Yongchae; Jang, Ugeun; Shin, Changsoo

    2013-03-01

When a pseudo-acoustic TTI (tilted transversely isotropic) coupled wave equation is used to implement reverse-time migration (RTM), shear-wave energy is significantly included in the migration image. Because anisotropy has intrinsic elastic characteristics, coupling of the P-wave and S-wave modes in the pseudo-acoustic wave equation is inevitable. In RTM with only primary energy, or the P-wave mode in seismic data, the S-wave energy is regarded as noise in the migration image. To solve this problem, we derive a pure P-wave equation for TTI media that excludes the S-wave energy. Additionally, we apply the rapid expansion method (REM), based on a Chebyshev expansion, together with a pseudo-spectral method (PSM) to calculate spatial derivatives in the wave equation. When REM is combined with the PSM for the spatial derivatives, wavefields with high numerical accuracy can be obtained without grid dispersion in numerical wave modeling. Another problem in the implementation of TTI RTM is that wavefields in areas with high gradients of dip or azimuth angle can blow up during the forward and backward steps of the RTM. We stabilize the wavefields by applying a spatial-frequency-domain high-cut filter when calculating the spatial derivatives using the PSM. In addition, to increase speed, the graphic processing unit (GPU) architecture is used instead of traditional CPU architecture. To quantify the acceleration of our RTM relative to the CPU version, we analyze performance measurements as the number of GPUs employed varies.

  12. The differential equation of an arbitrary reflecting surface

    NASA Astrophysics Data System (ADS)

    Melka, Richard F.; Berrettini, Vincent D.; Yousif, Hashim A.

    2018-05-01

A differential equation describing the reflection of a light ray incident upon an arbitrary reflecting surface is obtained using the law of reflection. The derived equation is written in terms of a parameter, and the value of this parameter determines the nature of the reflecting surface. Under various parametric constraints, the solution of the differential equation leads to the various conic surfaces, although the equation is not solvable in general. In addition, the dynamics of light reflections from the conic surfaces are demonstrated in the Mathematica software. Our derivation is the converse of the traditional approach: our analysis assumes a relation between the object distance and the image distance, and this leads to the differential equation of the reflecting surface.
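The classical direction can be checked numerically: for a parabolic reflector, applying the law of reflection to axis-parallel rays sends each of them through the focus. A sketch (the focal length is chosen arbitrarily):

```python
import numpy as np

# Law-of-reflection check for a parabolic mirror y = x^2 / (4 f): every ray
# arriving parallel to the axis reflects through the focus (0, f).  This is
# the classical special case of the surface ODE discussed above.
f = 2.0                               # focal length (illustrative)

def reflected_ray_hits_focus(x0):
    """Trace a downward ray hitting the parabola at x0; return the height at
    which the reflected ray crosses the optical axis x = 0."""
    y0 = x0 * x0 / (4 * f)
    slope = x0 / (2 * f)              # dy/dx of the parabola at x0
    n = np.array([-slope, 1.0])
    n /= np.linalg.norm(n)            # unit surface normal
    d = np.array([0.0, -1.0])         # incoming ray direction
    r = d - 2 * np.dot(d, n) * n      # law of reflection: r = d - 2 (d.n) n
    t = -x0 / r[0]                    # parameter where the ray crosses x = 0
    return y0 + t * r[1]

for x0 in (0.5, 1.0, 3.0):
    print(f"x0 = {x0}: reflected ray crosses the axis at y = "
          f"{reflected_ray_hits_focus(x0):.6f}")
```

Each crossing height equals the focal length f, independent of where the ray strikes the mirror, which is the focal property the conic solutions of the surface ODE encode.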

  13. Algebraic Manipulation as Motion within a Landscape

    ERIC Educational Resources Information Center

    Wittmann, Michael C.; Flood, Virginia J.; Black, Katrina E.

    2013-01-01

    We show that students rearranging the terms of a mathematical equation in order to separate variables prior to integration use gestures and speech to manipulate the mathematical terms on the page. They treat the terms of the equation as physical objects in a landscape, capable of being moved around. We analyze our results within the tradition of…

  14. Three Measures of Returns to Education: An Illustration for the Case of Spain

    ERIC Educational Resources Information Center

    Arrazola, Maria; de Hevia, Jose

    2008-01-01

    In this article, in a context of wage equations with sample selection, we propose a novel interpretation of the partial effects linked to education as additional measures of returns to education that complement the traditional one, which is directly obtained from the estimation of the wage offer equation. Using European Household Panel data for…

  15. Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2011-01-01

A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain-gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain-gage outputs. The iteration equations used by the iterative analysis method have the limitation that the numbers of independent and dependent variables must match; the new method circumvents this limitation. It simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example discussed in the paper illustrates the application of the new method to a realistic simulation of a temperature-dependent calibration data set for a six-component balance.
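The augmentation idea, using an extra independent variable also as an extra dependent variable without disturbing the original fit, can be sketched with ordinary least squares (the three-variable, two-output calibration below is entirely invented; a real balance calibration uses the iterative scheme described in the abstract):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical calibration: 3 independent variables (two loads plus
# temperature) but only 2 measured gage outputs -- more independents than
# dependents, the situation the abstract addresses.
X = np.column_stack([rng.uniform(-1, 1, 50),    # load 1
                     rng.uniform(-1, 1, 50),    # load 2
                     rng.uniform(20, 40, 50)])  # temperature
true_C = np.array([[1.0, 0.2], [0.3, 1.1], [0.01, -0.02]])
Y = X @ true_C + 1e-4 * rng.normal(size=(50, 2))  # simulated gage outputs

# The trick: append one independent variable (temperature) to the dependent
# set so the counts match, without disturbing the original fit.
Y_aug = np.column_stack([Y, X[:, 2]])
C_aug, *_ = np.linalg.lstsq(X, Y_aug, rcond=None)

# The first two coefficient columns equal the fit obtained from Y alone...
C_orig, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.allclose(C_aug[:, :2], C_orig))
# ...and the added column simply reproduces temperature: coefficients ~ (0, 0, 1).
print(np.round(C_aug[:, 2], 6))
```

Because least squares solves each output column independently, the augmented column changes nothing about the original gage-output fits, which is exactly why the data reduction matrix coefficients survive the augmentation.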

  16. Efficient Fluid Dynamic Design Optimization Using Cartesian Grids

    NASA Technical Reports Server (NTRS)

    Dadone, A.; Grossman, B.; Sellers, Bill (Technical Monitor)

    2004-01-01

This report is subdivided into three parts. The first reviews a new approach to the computation of inviscid flows using Cartesian grid methods. The crux of the method is the curvature-corrected symmetry technique (CCST) developed by the present authors for body-fitted grids. The method introduces ghost cells near the boundaries whose values are developed from an assumed flow-field model in the vicinity of the wall consisting of a vortex flow, which satisfies the normal momentum equation and the non-penetration condition. The CCST boundary condition was shown to be substantially more accurate than traditional boundary condition approaches. This improved boundary condition is adapted to a Cartesian mesh formulation, which we call the Ghost Body-Cell Method (GBCM). In this approach, all cell centers exterior to the body are computed with fluxes at the four surrounding cell edges. There is no need for the special treatment of cut cells that complicates other Cartesian mesh methods.

  17. Numerical Modeling of Ablation Heat Transfer

    NASA Technical Reports Server (NTRS)

    Ewing, Mark E.; Laker, Travis S.; Walker, David T.

    2013-01-01

    A unique numerical method has been developed for solving one-dimensional ablation heat transfer problems. This paper provides a comprehensive description of the method, along with detailed derivations of the governing equations. This methodology supports solutions for traditional ablation modeling including such effects as heat transfer, material decomposition, pyrolysis gas permeation and heat exchange, and thermochemical surface erosion. The numerical scheme utilizes a control-volume approach with a variable grid to account for surface movement. This method directly supports implementation of nontraditional models such as material swelling and mechanical erosion, extending capabilities for modeling complex ablation phenomena. Verifications of the numerical implementation are provided using analytical solutions, code comparisons, and the method of manufactured solutions. These verifications are used to demonstrate solution accuracy and proper error convergence rates. A simple demonstration of a mechanical erosion (spallation) model is also provided to illustrate the unique capabilities of the method.

  18. A Galerkin discretisation-based identification for parameters in nonlinear mechanical systems

    NASA Astrophysics Data System (ADS)

    Liu, Zuolin; Xu, Jian

    2018-04-01

    In this paper, a new parameter identification method is proposed for mechanical systems. Based on the idea of the Galerkin finite-element method, the displacement time history is approximated by piecewise linear functions, and the second-order terms in the model equation are eliminated by integration by parts. In this way, a loss function in integral form is derived. Unlike existing methods, this loss function is a quadratic sum of integrals over the whole time history. The loss function can then be minimized with the traditional least-squares algorithm for linear systems, or with its iterative counterpart for nonlinear ones. The method can effectively identify parameters in linear and arbitrary nonlinear mechanical systems. Simulation results show that even with sparse data or a low sampling frequency, the method still guarantees high accuracy in identifying linear and nonlinear parameters.

  19. Monitoring scale scores over time via quality control charts, model-based approaches, and time series techniques.

    PubMed

    Lee, Yi-Hsuan; von Davier, Alina A

    2013-07-01

    Maintaining a stable score scale over time is critical for all standardized educational assessments. Traditional quality control tools and approaches for assessing scale drift either require special equating designs, or may be too time-consuming to be considered on a regular basis with an operational test that has a short time window between an administration and its score reporting. Thus, the traditional methods are not sufficient to catch unusual testing outcomes in a timely manner. This paper presents a new approach for score monitoring and assessment of scale drift. It involves quality control charts, model-based approaches, and time series techniques to accommodate the following needs of monitoring scale scores: continuous monitoring, adjustment of customary variations, identification of abrupt shifts, and assessment of autocorrelation. Performance of the methodologies is evaluated using manipulated data based on real responses from 71 administrations of a large-scale high-stakes language assessment.

  20. Genetic reinforcement learning through symbiotic evolution for fuzzy controller design.

    PubMed

    Juang, C F; Lin, J Y; Lin, C T

    2000-01-01

    An efficient genetic reinforcement learning algorithm for designing fuzzy controllers is proposed in this paper. The genetic algorithm (GA) adopted in this paper is based upon symbiotic evolution which, when applied to fuzzy controller design, complements the local mapping property of a fuzzy rule. Using this Symbiotic-Evolution-based Fuzzy Controller (SEFC) design method, the number of control trials and the consumed CPU time are considerably reduced when compared to traditional GA-based fuzzy controller design methods and other types of genetic reinforcement learning schemes. Moreover, unlike traditional fuzzy controllers, which partition the input space into a grid, SEFC partitions the input space in a flexible way, thus creating fewer fuzzy rules. In SEFC, different types of fuzzy rules, whose consequent parts are singletons, fuzzy sets, or linear equations (TSK-type fuzzy rules), are allowed. Further, the free parameters (e.g., centers and widths of membership functions) and fuzzy rules are all tuned automatically. In particular, for the TSK-type fuzzy rule, to which the proposed learning algorithm is applied, only the significant input variables are selected to participate in the consequent of a rule. The proposed SEFC design method has been applied to different simulated control problems, including the cart-pole balancing system, a magnetic levitation system, and a water bath temperature control system. On these control problems, and in comparisons with some traditional GA-based fuzzy systems, the proposed SEFC is verified to be efficient and superior.

  1. Comparative study on thermodynamic characteristics of AgCuZnSn brazing alloys

    NASA Astrophysics Data System (ADS)

    Wang, Xingxing; Li, Shuai; Peng, Jin

    2018-01-01

    AgCuZnSn brazing alloys were prepared from the BAg50CuZn filler metal by an electroplating-diffusion process and by a melt-alloying method. The phase-transformation thermodynamics of these fillers were analyzed by non-isothermal differential and integral methods of thermal analysis kinetics. This study demonstrated that as the Sn content increased, the reaction-fraction integral curves of the AgCuZnSn fillers from solid to liquid became straighter at the endothermic peak. At the same Sn content, the reaction-fraction integral curve of the Sn-plated filler metal was straighter, and its phase-transformation activation energy was higher, compared with the traditional silver filler metal. At 7.2 wt% Sn, the activation energies and pre-exponential factors of the two fillers reached their maxima, and the phase-transformation rate equations of the Sn-plated silver filler and the traditional filler were determined as k = 1.41 × 10^32 exp(-5.56 × 10^5/RT) and k = 7.29 × 10^20 exp(-3.64 × 10^5/RT), respectively.
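The quoted rate equations are Arrhenius laws, so evaluating them at a given temperature is direct; the temperature below is an assumed value in the melting range, chosen only for illustration:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius(A, Ea, T):
    """Phase-transformation rate constant k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

# Coefficients quoted in the abstract for 7.2 wt% Sn:
k_plated      = lambda T: arrhenius(1.41e32, 5.56e5, T)  # Sn-plated silver filler
k_traditional = lambda T: arrhenius(7.29e20, 3.64e5, T)  # traditional filler

T = 950.0  # assumed temperature in kelvin, for illustration only
print(k_plated(T), k_traditional(T))
```

The higher activation energy of the Sn-plated filler makes its rate constant far more sensitive to temperature, which is the point of comparing the two fits.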

  2. Model creation of moving redox reaction boundary in agarose gel electrophoresis by traditional potassium permanganate method.

    PubMed

    Xie, Hai-Yang; Liu, Qian; Li, Jia-Hao; Fan, Liu-Yin; Cao, Cheng-Xi

    2013-02-21

    A novel moving redox reaction boundary (MRRB) model was developed for studying the electrophoretic behavior of analytes involved in redox reactions, on the principle of the moving reaction boundary (MRB). The traditional potassium permanganate method was used to create the boundary model in agarose gel electrophoresis because of the rapid reaction between MnO4^- ions and Fe^2+ ions. An MRB velocity equation was proposed to describe the general functional relationship between the velocity of the moving redox reaction boundary (V(MRRB)) and the concentration of reactant, and it can be extrapolated to similar MRB techniques. Parameters affecting the redox reaction boundary were investigated in detail. Under the selected conditions, a good linear relationship between boundary movement distance and time was obtained. The potential application of the MRRB to electromigration redox reaction titration was demonstrated at two different concentration levels. The precision of V(MRRB) was studied, and the relative standard deviations were below 8.1%, illustrating the good repeatability achieved in this experiment. The proposed MRRB model enriches MRB theory and also provides a feasible means of manually controlling the redox reaction process in electrophoretic analysis.

  3. Microstructure Images Restoration of Metallic Materials Based upon KSVD and Smoothing Penalty Sparse Representation Approach

    PubMed Central

    Liang, Steven Y.

    2018-01-01

    Microstructure images of metallic materials play a significant role in industrial applications. To address the image degradation problem of metallic materials, a novel image restoration technique based on K-means singular value decomposition (KSVD) and a smoothing penalty sparse representation (SPSR) algorithm is proposed in this work; microstructure images of aluminum alloy 7075 (AA7075) are used as examples. First, to capture the detailed structural characteristics of the damaged image, the KSVD dictionary is introduced to substitute for the traditional sparse transform basis (TSTB) in the sparse representation. Then, because image restoration modeling is a highly underdetermined problem, traditional sparse reconstruction methods may cause instability and obvious artifacts in the reconstructed images, especially when the image has many smooth regions and the noise level is high; thus the SPSR algorithm (here, q = 0.5) is designed to reconstruct the damaged image. The results of simulations and two practical cases demonstrate that the proposed method has superior performance compared with some state-of-the-art methods in terms of restoration performance factors and visual quality. Meanwhile, the grain size parameters and grain boundaries of the microstructure images are discussed before and after restoration by the proposed method. PMID:29677163

  4. Additive Runge-Kutta Schemes for Convection-Diffusion-Reaction Equations

    NASA Technical Reports Server (NTRS)

    Kennedy, Christopher A.; Carpenter, Mark H.

    2001-01-01

    Additive Runge-Kutta (ARK) methods are investigated for application to the spatially discretized one-dimensional convection-diffusion-reaction (CDR) equations. First, accuracy, stability, conservation, and dense output are considered for the general case when N different Runge-Kutta methods are grouped into a single composite method. Then, implicit-explicit, N = 2, additive Runge-Kutta (ARK2) methods from third to fifth order are presented that allow for integration of stiff terms by an L-stable, stiffly-accurate, explicit, singly diagonally implicit Runge-Kutta (ESDIRK) method while the nonstiff terms are integrated with a traditional explicit Runge-Kutta (ERK) method. Coupling error terms are of equal order to those of the elemental methods. The derived ARK2 methods have vanishing stability functions for very large values of the stiff scaled eigenvalue, z^[I] -> infinity, and retain high stability efficiency in the absence of stiffness, z^[I] -> 0. Extrapolation-type stage-value predictors are provided based on dense-output formulae. Optimized methods minimize both leading-order ARK2 error terms and Butcher coefficient magnitudes, and maximize conservation properties. Numerical tests of the new schemes on a CDR problem show negligible stiffness leakage and near classical order convergence rates. However, tests on three simple singular-perturbation problems reveal generally predictable order reduction. Error control is best managed with a PID controller. While results for the fifth-order method are disappointing, both the new third- and fourth-order methods are at least as efficient as existing ARK2 methods while offering error control and stage-value predictors.
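The additive implicit-explicit idea can be sketched at its simplest with a one-stage IMEX Euler split (far below the third- to fifth-order ARK2 schemes of the paper): the stiff linear term is taken implicitly, the nonstiff term explicitly. The test equation and step size below are illustrative assumptions:

```python
import math

def imex_euler(u0, lam, f, dt, nsteps):
    """One-stage additive (IMEX) splitting for u' = lam*u + f(u):
       (u_{n+1} - u_n)/dt = lam*u_{n+1} + f(u_n),
    i.e. the stiff linear part is implicit, the nonstiff part explicit."""
    u = u0
    for _ in range(nsteps):
        u = (u + dt * f(u)) / (1.0 - dt * lam)
    return u

# Stiff decay with a mild nonstiff forcing: u' = -1000*u + cos(u).
# Explicit Euler would be unstable at dt = 0.01 (|1 + dt*lam| = 9);
# the IMEX step remains stable and relaxes to the steady state.
u = imex_euler(1.0, -1000.0, math.cos, 0.01, 200)
```

The steady state solves u = 0.001*cos(u), i.e. u is approximately 0.001, and the IMEX iteration reaches it despite a step size far beyond the explicit stability limit.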

  5. Rotating non-Boussinesq Rayleigh-Benard convection

    NASA Astrophysics Data System (ADS)

    Moroz, Vadim Vladimir

    This thesis makes quantitative predictions about the formation and stability of hexagonal and roll patterns in a convecting system unbounded in the horizontal direction. Starting from the Navier-Stokes, heat and continuity equations, the convection problem is reduced to normal form equations using equivariant bifurcation theory. The relative stabilities of patterns lying on a hexagonal lattice in Fourier space are then determined using appropriate amplitude equations, with coefficients obtained via asymptotic expansion of the governing partial differential equations, with the conducting state being the base state, and the control parameter and the non-Boussinesq effects being small. The software package Mathematica was used to calculate amplitude coefficients of the appropriate coupled Ginzburg-Landau equations for the rigid-rigid and free-free cases. A Galerkin code (the initial version of which was written by W. Pesch et al.) is used to determine pattern stability further from onset and for strongly non-Boussinesq fluids. Specific predictions about the stability of hexagon and roll patterns under realistic experimental conditions are made. The dependence of the stability of the convective patterns on the Rayleigh number, planform wavenumber and the rotation rate is studied. Long- and shortwave instabilities, both steady and oscillatory, are identified. For small Prandtl numbers, oscillatory sideband instabilities are found already very close to onset. A resonant mode interaction in hexagonal patterns arising in non-Boussinesq Rayleigh-Benard convection is studied using symmetry group methods. The lowest-order coupling terms for interacting patterns are identified. A bifurcation analysis of the resulting system of equations shows that the bifurcation is transcritical. Stability properties of the resulting patterns are discussed. It is found that for some fluid properties the traditional hexagon convection solution does not exist.
Analytical results are supported by numerical solutions of the convection equations using the Galerkin procedure and a Floquet analysis.

  6. Overview of Icing Physics Relevant to Scaling

    NASA Technical Reports Server (NTRS)

    Anderson, David N.; Tsao, Jen-Ching

    2005-01-01

    An understanding of icing physics is required for the development of both scaling methods and ice-accretion prediction codes. This paper gives an overview of our present understanding of the important physical processes and the associated similarity parameters that determine the shape of Appendix C ice accretions. For many years it has been recognized that ice accretion processes depend on flow effects over the model, on droplet trajectories, on the rate of water collection and time of exposure, and, for glaze ice, on a heat balance. For scaling applications, equations describing these events have been based on analyses at the stagnation line of the model and have resulted in the identification of several non-dimensional similarity parameters. The parameters include the modified inertia parameter of the water drop, the accumulation parameter and the freezing fraction. Other parameters dealing with the leading-edge heat balance have also been used for convenience. By equating scale expressions for these parameters to the values to be simulated, a set of equations is produced which can be solved for the scale test conditions. Studies in the past few years have shown that at least one parameter in addition to those mentioned above is needed to describe surface-water effects, and some of the traditional parameters may not be as significant as once thought. Insight into the importance of each parameter, and the physical processes it represents, can be gained by observing whether ice shapes change, and the extent of the change, when each parameter is varied. Experimental evidence is presented to establish the importance of each of the traditionally used parameters and to identify the possible form of a new similarity parameter to be used for scaling.

  7. Gap-filling methods to impute eddy covariance flux data by preserving variance.

    NASA Astrophysics Data System (ADS)

    Kunwor, S.; Staudhammer, C. L.; Starr, G.; Loescher, H. W.

    2015-12-01

    To represent carbon dynamics, in terms of the exchange of CO2 between terrestrial ecosystems and the atmosphere, eddy covariance (EC) data have been collected from eddy flux towers at various sites across the globe for more than two decades. However, EC measurements are missing for various reasons: precipitation, routine maintenance, or lack of vertical turbulence. In order to estimate net ecosystem exchange of carbon dioxide (NEE) with high precision and accuracy, robust gap-filling methods to impute missing data are required. While the methods used so far have provided robust estimates of the mean value of NEE, little attention has been paid to preserving the variance structure embodied in the flux data. Preserving the variance of these data will provide unbiased and precise estimates of NEE over time that mimic natural fluctuations. We used a non-linear regression approach with moving windows of different lengths (15, 30, and 60 days) to estimate non-linear regression parameters for one year of flux data from a longleaf pine site at the Joseph Jones Ecological Research Center. We used the Michaelis-Menten and Van't Hoff functions as our base. We assessed the potential physiological drivers of these parameters with linear models using micrometeorological predictors. We then used a parameter prediction approach to refine the non-linear gap-filling equations based on micrometeorological conditions. This provides an opportunity to incorporate additional variables, such as vapor pressure deficit (VPD) and volumetric water content (VWC), into the equations. Our preliminary results indicate that improvements in gap-filling can be gained with a 30-day moving window and additional micrometeorological predictors (as indicated by lower root mean square error (RMSE) of the predicted values of NEE).
Our next steps are to use these parameter predictions from moving windows to gap-fill the data with and without the potential driver variables of the traditionally used parameters. Comparisons of the predicted values from these methods and 'traditional' gap-filling methods (using 12 fixed monthly windows) will then be made to show the extent to which variance is preserved. Further, the method will be applied to impute artificially created gaps to analyze whether variance is preserved.
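Fitting the Michaelis-Menten light response over one window can be sketched as below. The data are synthetic, the functional form is one common parameterization (not necessarily the study's exact equation), and a coarse grid search stands in for a proper nonlinear least-squares solver; all parameter values are assumptions:

```python
import numpy as np

def michaelis_menten(par, a, amax):
    """Rectangular-hyperbola light response: flux saturates at amax as
    photosynthetically active radiation (PAR) grows; a is the initial slope."""
    return (a * par * amax) / (a * par + amax)

# Synthetic half-hourly PAR and NEE for one moving window (assumed data;
# real use would slice 15/30/60-day windows from the flux record).
rng = np.random.default_rng(0)
par = rng.uniform(50, 2000, 200)
nee_obs = michaelis_menten(par, 0.05, 20.0) + rng.normal(0, 0.5, 200)

# Coarse grid search over (a, amax) stands in for nonlinear regression.
a_grid = np.linspace(0.01, 0.1, 40)
amax_grid = np.linspace(5, 40, 40)
best = min(((np.mean((michaelis_menten(par, a, m) - nee_obs) ** 2), a, m)
            for a in a_grid for m in amax_grid))
rmse, a_hat, amax_hat = best[0] ** 0.5, best[1], best[2]
```

Repeating this fit per moving window, then regressing the fitted (a, amax) on micrometeorological predictors such as VPD or VWC, is the parameter-prediction step the abstract describes.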

  8. Matrix Pseudospectral Method for (Visco)Elastic Tides Modeling of Planetary Bodies

    NASA Astrophysics Data System (ADS)

    Zabranova, Eliska; Hanyk, Ladislav; Matyska, Ctirad

    2010-05-01

    We deal with the equations and boundary conditions describing the deformation and gravitational potential of prestressed, spherically symmetric elastic bodies by decomposing the governing equations into a series of boundary value problems (BVPs) for ordinary differential equations (ODEs) of the second order. In contrast to traditional Runge-Kutta integration techniques, highly accurate pseudospectral schemes are employed to directly discretize the BVPs on Chebyshev grids, and a set of linear algebraic equations with an almost block diagonal matrix is derived. As a consequence of keeping the governing ODEs of the second order instead of the usual first-order equations, the resulting algebraic system is half-sized, but derivatives of the model parameters are required. Moreover, they can be easily evaluated for models whose structural parameters are piecewise polynomial. Both the accuracy and efficiency of the method are tested by evaluating the tidal Love numbers for the Earth's model PREM. Finally, we also derive complex Love numbers for models with the Maxwell viscoelastic rheology, where viscosity is a depth-dependent function. The method is applied to the evaluation of the tidal Love numbers for models of Mars and Venus. The Love numbers of the two Martian models - the former optimized to cosmochemical data and the latter to the moment of inertia (Sohl and Spohn, 1997) - are h2=0.172 (0.212) and k2=0.093 (0.113). For Venus, the value of k2=0.295 (Konopliv and Yoder, 1996), obtained from the gravity-field analysis, is consistent with the results for our model with the liquid-core radius of 3110 km (Zábranová et al., 2009). Together with rapid evaluation of free oscillation periods by an analogous method, this combined matrix approach could be employed as an efficient numerical tool in structural studies of planetary bodies. REFERENCES Konopliv, A. S. and Yoder, C. F., 1996. Venusian k2 tidal Love number from Magellan and PVO tracking data, Geophys. Res.
Lett., 23, 1857-1860. Sohl, F., and Spohn, T., 1997. The interior structure of Mars: Implications from SNC meteorites, J. Geophys. Res., 102, 1613-1635. Zabranova, E., Hanyk L. and Matyska, C.: Matrix Pseudospectral Method for Elastic Tides Modeling. In: Holota P. (Ed.): Mission and Passion: Science. A volume dedicated to Milan Bursa on the occasion of his 80th birthday. Published by the Czech National Committee of Geodesy and Geophysics. Prague, 2009, pp. 243-260.
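The core numerical idea, direct Chebyshev collocation of a second-order BVP kept in second-order form, can be sketched on a toy problem (a standard Trefethen-style differentiation matrix, not the planetary-tide system itself):

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and grid x on [-1, 1]
    (the classic construction from Trefethen's 'Spectral Methods in MATLAB')."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # negative-sum trick on diagonal
    return D, x

# Second-order BVP kept in second-order form, as in the abstract:
# u'' = exp(x), u(-1) = u(1) = 0, with a known closed-form solution.
n = 16
D, x = cheb(n)
D2 = D @ D
A, b = D2[1:-1, 1:-1], np.exp(x[1:-1])  # interior rows/cols enforce the BCs
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(A, b)
exact = np.exp(x) - x * np.sinh(1.0) - np.cosh(1.0)
```

Even with only 17 grid points the spectral solution matches the exact one to near machine precision, which is the accuracy advantage the abstract claims over shooting with Runge-Kutta integration.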

  9. Model-based traction force microscopy reveals differential tension in cellular actin bundles.

    PubMed

    Soiné, Jérôme R D; Brand, Christoph A; Stricker, Jonathan; Oakes, Patrick W; Gardel, Margaret L; Schwarz, Ulrich S

    2015-03-01

    Adherent cells use forces at the cell-substrate interface to sense and respond to the physical properties of their environment. These cell forces can be measured with traction force microscopy, which inverts the equations of elasticity theory to calculate them from the deformations of soft polymer substrates. We introduce a new type of traction force microscopy that, in contrast to traditional methods, uses additional image data for cytoskeleton and adhesion structures and a biophysical model to improve the robustness of the inverse procedure and abolish the need for regularization. We use this method to demonstrate that ventral stress fibers of U2OS cells are typically under higher mechanical tension than dorsal stress fibers or transverse arcs.

  10. Model-based Traction Force Microscopy Reveals Differential Tension in Cellular Actin Bundles

    PubMed Central

    Soiné, Jérôme R. D.; Brand, Christoph A.; Stricker, Jonathan; Oakes, Patrick W.; Gardel, Margaret L.; Schwarz, Ulrich S.

    2015-01-01

    Adherent cells use forces at the cell-substrate interface to sense and respond to the physical properties of their environment. These cell forces can be measured with traction force microscopy, which inverts the equations of elasticity theory to calculate them from the deformations of soft polymer substrates. We introduce a new type of traction force microscopy that, in contrast to traditional methods, uses additional image data for cytoskeleton and adhesion structures and a biophysical model to improve the robustness of the inverse procedure and abolish the need for regularization. We use this method to demonstrate that ventral stress fibers of U2OS cells are typically under higher mechanical tension than dorsal stress fibers or transverse arcs. PMID:25748431

  11. Variation objective analyses for cyclone studies

    NASA Technical Reports Server (NTRS)

    Achtemeier, G. L.; Kidder, S. Q.; Ochs, H. T.

    1985-01-01

    The objectives were to: (1) develop an objective analysis technique that will maximize the information content of data available from diverse sources, with particular emphasis on the incorporation of observations from satellites with those from more traditional immersion techniques; and (2) develop a diagnosis of the state of the synoptic-scale atmosphere on a much finer scale and over a much broader region than is presently possible, to permit studies of the interactions and energy transfers between global, synoptic and regional scale atmospheric processes. The variational objective analysis model consists of the two horizontal momentum equations, the hydrostatic equation, and the integrated continuity equation for a dry hydrostatic atmosphere. Preliminary tests of the model with the SESMAE I data set are underway for 12 GMT 10 April 1979. At this stage the purpose of the analysis is not the diagnosis of atmospheric structures but rather the validation of the model. Model runs with rawinsonde data, and with the precision modulus weights set to force most of the adjustment of the wind field to the mass field, have produced 90 to 95 percent reductions in the imbalance of the initial data after only four cycles through the Euler-Lagrange equations. Sensitivity tests for linear stability of the 11 Euler-Lagrange equations that make up the VASP Model 1 indicate that there will be a lower limit to the scales of motion that can be resolved by this method. Linear stability criteria are violated where there is large horizontal wind shear near the upper-tropospheric jet.

  12. The comparative analysis of rocks' resistance to forward-slanting disc cutters and traditionally installed disc cutters

    NASA Astrophysics Data System (ADS)

    Zhang, Zhao-Huang; Fei, Sun; Liang, Meng

    2016-08-01

    At present, the disc cutters of a full face rock tunnel boring machine are mostly mounted in the traditional way. Practical use in engineering projects reveals that this installation method not only shortens the operating life of the disc cutters, but also increases the energy consumption of the machine. To address this issue, a rock-breaking model is developed for disc cutter movement, building on research into the rock breaking of forward-slanting disc cutters. Equations for the displacement are established based on an analysis of the velocity vector of a disc cutter's rock-breaking point. Functional relations between the displacement parameters of a rock-breaking point and its coordinates are then derived through an analysis of the micro-displacement of the rock-breaking point. Thus, the geometric equations of rock deformation are derived for the forward-slanting installation of disc cutters. With a linear relationship remaining between the acting force and the deformation both before and after leap breaking, the constitutive relation of rock deformation can be expressed in the form of the generalized Hooke's law, enabling a comparative analysis of the resistance of rock to disc cutters mounted in the forward-slanting way versus the traditional way. It is found that, at the same penetration, the strain of the rock in contact with forward-slanting disc cutters clearly declines; in other words, the resistance of the rock to the disc cutters is reduced. The frictional wear of the disc cutters is thus lowered and the energy consumption correspondingly decreased. These results will be useful for the development of disc cutter installation and design theory, and significant for breakthroughs in the design of full face rock tunnel boring machines.

  13. Three-dimensional forward modeling and inversion of marine CSEM data in anisotropic conductivity structures

    NASA Astrophysics Data System (ADS)

    Han, B.; Li, Y.

    2016-12-01

    We present a three-dimensional (3D) forward and inverse modeling code for marine controlled-source electromagnetic (CSEM) surveys in anisotropic media. The forward solution is based on a primary/secondary field approach, in which secondary fields are solved using a staggered finite-volume (FV) method and primary fields are solved analytically for 1D isotropic background models. It is shown that it is rather straightforward to extend the isotropic 3D FV algorithm to a triaxial anisotropic one, although additional coefficients are required to account for full-tensor conductivity. To solve the linear system resulting from the FV discretization of Maxwell's equations, both iterative Krylov solvers (e.g., BiCGSTAB) and direct solvers (e.g., MUMPS) have been implemented, making the code flexible for different computing platforms and different problems. For iterative solutions, the linear system in terms of electromagnetic potentials (A-Phi) is used to precondition the original linear system, transforming the discretized curl-curl equations into discretized Laplace-like equations with much more favorable numerical properties. Numerical experiments suggest that this A-Phi preconditioner can dramatically improve the convergence rate of an iterative solver, and high accuracy can be achieved without divergence correction even at low frequencies. To efficiently calculate the sensitivities, i.e., the derivatives of CSEM data with respect to tensor conductivity, the adjoint method is employed. For inverse modeling, triaxial anisotropy is taken into account. Since the number of model parameters to be resolved for triaxial anisotropic media is two or three times that for isotropic media, the data-space version of the Gauss-Newton (GN) minimization method is preferred due to its lower computational cost compared with the traditional model-space GN method. We demonstrate the effectiveness of the code with synthetic examples.
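The model-space versus data-space trade-off can be illustrated for a simple damped GN step: the two forms give identical updates, but the data-space version solves only an (n_data x n_data) system, which is cheaper when anisotropy inflates the model size well beyond the data count. The Jacobian, residual, and damping below are random/assumed placeholders, not a CSEM computation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_data, n_model = 30, 90                  # few data, many model parameters
J = rng.normal(size=(n_data, n_model))    # Jacobian (adjoint method in practice)
d = rng.normal(size=n_data)               # data residual
lam = 0.1                                 # damping / regularization weight

# Model-space GN step: solve an (n_model x n_model) system.
dm_model = np.linalg.solve(J.T @ J + lam * np.eye(n_model), J.T @ d)

# Data-space GN step: solve only an (n_data x n_data) system.
# Uses the identity (J^T J + lam*I)^-1 J^T = J^T (J J^T + lam*I)^-1.
dm_data = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(n_data), d)
```

Because the two steps are algebraically identical, choosing between them is purely a matter of which system is smaller to factor.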

  14. An Exploration of a Quantitative Reasoning Instructional Approach to Linear Equations in Two Variables with Community College Students

    ERIC Educational Resources Information Center

    Belue, Paul T.; Cavey, Laurie Overman; Kinzel, Margaret T.

    2017-01-01

    In this exploratory study, we examined the effects of a quantitative reasoning instructional approach to linear equations in two variables on community college students' conceptual understanding, procedural fluency, and reasoning ability. This was done in comparison to the use of a traditional procedural approach for instruction on the same topic.…

  15. Propagating Qualitative Values Through Quantitative Equations

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak

    1992-01-01

    In most practical problems where traditional numeric simulation is not adequate, one needs to reason about a system with both qualitative and quantitative equations. In this paper, we address the problem of propagating qualitative values, represented as interval values, through quantitative equations. Previous research has produced exponential-time algorithms for approximate solution of the problem. These may not meet the stringent requirements of many real-time applications. This paper advances the state of the art by producing a linear-time algorithm that can propagate a qualitative value through a class of complex quantitative equations exactly, and through arbitrary algebraic expressions approximately. The algorithm was found applicable to the Space Shuttle Reaction Control System model.
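Propagation of interval-valued qualitative quantities can be sketched with a tiny interval class. Plain interval arithmetic costs constant time per operator, so a whole expression tree is evaluated in linear time; it is exact per operation but only approximate for expressions with repeated variables (the classic dependency problem), mirroring the exact/approximate distinction in the abstract. The example quantities are invented:

```python
class Interval:
    """Qualitative value as a closed interval [lo, hi]; each operator
    propagates bounds in O(1), so an expression of size n costs O(n)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        # Subtracting an interval swaps its endpoints' roles.
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        # Signs may flip the extremes, so take min/max over all products.
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))

# Hypothetical example: net force = thrust [10, 12] minus drag [1, 3],
# times a duration known exactly to be 2, giving an impulse interval.
impulse = (Interval(10, 12) - Interval(1, 3)) * Interval(2, 2)
```

Here the result is the interval [14, 22], every value consistent with the input bounds.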

  16. Direct Calculation of the Scattering Amplitude Without Partial Wave Decomposition. III; Inclusion of Correlation Effects

    NASA Technical Reports Server (NTRS)

    Shertzer, Janine; Temkin, Aaron

    2007-01-01

    In the first two papers in this series, we developed a method for studying electron-hydrogen scattering that does not use partial wave analysis. We constructed an ansatz for the wave function in both the static and static-exchange approximations and calculated the full scattering amplitude. Here we go beyond the static-exchange approximation and include correlation in the wave function via a modified polarized orbital. This correlation function provides a significant improvement over the static-exchange approximation: the resultant elastic scattering amplitudes are in very good agreement with fully converged partial wave calculations for electron-hydrogen scattering. A fully variational modification of this approach is discussed in the conclusion of the article. Popular summary (J. Shertzer and A. Temkin): In this paper we continue the development of a new approach, an alternative to the way in which researchers have traditionally calculated the scattering cross section of (low-energy) electrons from atoms. The basic mathematical problem is to solve the Schroedinger equation (SE) corresponding to the above physical process. Traditionally, the SE was reduced to a sequence of one-dimensional (ordinary) differential equations - called partial waves - which were solved; from the solutions, "phase shifts" were extracted, from which the scattering cross section was calculated.

  17. Formulation and Implementation of Nonlinear Integral Equations to Model Neural Dynamics Within the Vertebrate Retina.

    PubMed

    Eshraghian, Jason K; Baek, Seungbum; Kim, Jun-Ho; Iannella, Nicolangelo; Cho, Kyoungrok; Goo, Yong Sook; Iu, Herbert H C; Kang, Sung-Mo; Eshraghian, Kamran

    2018-02-13

    Existing computational models of the retina often compromise between biophysical accuracy and a hardware-adaptable methodology of implementation. When compared to the current modes of vision restoration, algorithmic models often contain a greater correlation between stimuli and the affected neural network, but lack physical hardware practicality. Thus, if the present processing methods are adapted to complement very-large-scale circuit design techniques, it is anticipated that this will engender a more feasible approach to the physical construction of the artificial retina. The computational model presented in this research serves to provide a fast and accurate predictive model of the retina, a deeper understanding of neural responses to visual stimulation, and an architecture that can realistically be transformed into a hardware device. Traditionally, implicit (or semi-implicit) ordinary differential equation (ODE) solvers have been used for optimal speed and accuracy. We present a novel approach that requires the effective integration of different dynamical time scales within a unified framework of neural responses, where the rod, cone, amacrine, bipolar, and ganglion cells correspond to the implemented pathways. Furthermore, we show that the adopted numerical integration scheme can both accelerate retinal pathway simulations by more than 50% in some cases when compared with traditional ODE solvers, and prove to be a more realizable solution for the hardware implementation of predictive retinal models.
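Why (semi-)implicit stepping matters for stiff, multi-time-scale neural dynamics can be illustrated with a backward-Euler update of a linear membrane equation; the time constant and target potential below are assumed values for illustration, not the paper's retinal model:

```python
def semi_implicit_step(v, dt, tau, v_inf):
    """Backward-Euler update for the linear membrane relaxation
       dv/dt = (v_inf - v)/tau,
    which is unconditionally stable for any step size dt."""
    return (v + dt * v_inf / tau) / (1.0 + dt / tau)

# A fast time scale (tau = 1 ms) stepped with a coarse dt = 5 ms:
# explicit Euler would oscillate and diverge (|1 - dt/tau| = 4 > 1),
# while the implicit update relaxes monotonically to v_inf.
v = 0.0
for _ in range(20):
    v = semi_implicit_step(v, 5.0, 1.0, -40.0)
```

This stability at large steps is what lets a unified solver span the very different time scales of rod, cone, amacrine, bipolar, and ganglion pathways without shrinking the step to the fastest one.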

  18. Analysis of the Effects of Thermal Environment on Optical Systems for Navigation Guidance and Control in Supersonic Aircraft Based on Empirical Equations

    PubMed Central

    Cheng, Xuemin; Yang, Yikang; Hao, Qun

    2016-01-01

    The thermal environment is an important factor in the design of optical systems. This study investigated the thermal analysis technology of optical systems for navigation guidance and control in supersonic aircraft by developing empirical equations for the front temperature gradient and rear thermal diffusion distance, and for basic factors such as flying parameters and the structure of the optical system. Finite element analysis (FEA) was used to study the relationship between flying and front dome parameters and the system temperature field. Systematic deduction was then conducted based on the effects of the temperature field on the physical geometry and ray tracing performance of the front dome and rear optical lenses, by deriving the relational expressions between the system temperature field and the spot size and positioning precision of the rear optical lens. The optical systems used for navigation guidance and control in supersonic aircraft when the flight speed is in the range of 1–5 Ma were analysed using the derived equations. Using this new method it was possible to control the precision within 10% when considering the light spot received by the four-quadrant detector, and computation time was reduced compared with the traditional method of separately analysing the temperature field of the front dome and rear optical lens using FEA. Thus, the method can effectively increase the efficiency of parameter analysis and computation in an airborne optical system, facilitating the systematic, effective and integrated thermal analysis of airborne optical systems for navigation guidance and control. PMID:27763515

  19. Analysis of the Effects of Thermal Environment on Optical Systems for Navigation Guidance and Control in Supersonic Aircraft Based on Empirical Equations.

    PubMed

    Cheng, Xuemin; Yang, Yikang; Hao, Qun

    2016-10-17

    The thermal environment is an important factor in the design of optical systems. This study investigated the thermal analysis technology of optical systems for navigation guidance and control in supersonic aircraft by developing empirical equations for the front temperature gradient and rear thermal diffusion distance, and for basic factors such as flying parameters and the structure of the optical system. Finite element analysis (FEA) was used to study the relationship between flying and front dome parameters and the system temperature field. Systematic deduction was then conducted based on the effects of the temperature field on the physical geometry and ray tracing performance of the front dome and rear optical lenses, by deriving the relational expressions between the system temperature field and the spot size and positioning precision of the rear optical lens. The optical systems used for navigation guidance and control in supersonic aircraft when the flight speed is in the range of 1-5 Ma were analysed using the derived equations. Using this new method it was possible to control the precision within 10% when considering the light spot received by the four-quadrant detector, and computation time was reduced compared with the traditional method of separately analysing the temperature field of the front dome and rear optical lens using FEA. Thus, the method can effectively increase the efficiency of parameter analysis and computation in an airborne optical system, facilitating the systematic, effective and integrated thermal analysis of airborne optical systems for navigation guidance and control.

  20. Self-optimizing Pitch Control for Large Scale Wind Turbine Based on ADRC

    NASA Astrophysics Data System (ADS)

    Xia, Anjun; Hu, Guoqing; Li, Zheng; Huang, Dongxiao; Wang, Fengxiang

    2018-01-01

    Since a wind turbine is a complex, nonlinear, and strongly coupled system, the traditional PI control method can hardly achieve good control performance. A self-optimizing pitch control method based on active-disturbance-rejection control theory is proposed in this paper. A linear model of the wind turbine is derived by linearizing the aerodynamic torque equation, and the dynamic response of the wind turbine is transformed into a first-order linear system. An expert system is designed to optimize the amplification coefficient according to the pitch rate and the speed deviation. The purpose of the proposed control method is to regulate the amplification coefficient automatically and keep the variations of pitch rate and rotor speed in proper ranges. Simulation results show that the proposed pitch control method can modify the amplification coefficient effectively when it becomes unsuitable, keeping the variations of pitch rate and rotor speed in proper ranges.

  1. An adaptive gridless methodology in one dimension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, N.T.; Hailey, C.E.

    1996-09-01

    Gridless numerical analysis offers great potential for accurately solving for flow about complex geometries or moving boundary problems. Because gridless methods do not require point connection, the mesh cannot twist or distort. The gridless method utilizes a Taylor series about each point to obtain the unknown derivative terms from the current field variable estimates. The governing equation is then numerically integrated to determine the field variables for the next iteration. Effects of point spacing and Taylor series order on accuracy are studied, and they follow similar trends to traditional numerical techniques. Introducing adaption by point movement using a spring analogy allows the solution method to track a moving boundary. The adaptive gridless method models linear, nonlinear, steady, and transient problems. Comparison with known analytic solutions is given for these examples. Although point movement adaption does not provide a significant increase in accuracy, it helps capture important features and provides an improved solution.
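    The "Taylor series about each point" idea can be sketched as a least-squares fit of a local Taylor polynomial to scattered neighbors (a minimal 1D illustration, not the paper's implementation; the sample points and test function are invented):

```python
import numpy as np

def local_derivatives(x0, xs, fs, order=2):
    """Estimate Taylor coefficients of f about x0 from scattered neighbors.

    Fits f(x) ~ sum_k c_k (x - x0)^k by least squares, so
    c_0 ~ f(x0), c_1 ~ f'(x0), 2*c_2 ~ f''(x0).
    No point connectivity is required: any nearby cloud of samples works,
    which is the essence of a gridless derivative estimate.
    """
    A = np.vander(np.asarray(xs) - x0, N=order + 1, increasing=True)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(fs), rcond=None)
    return coeffs

# Scattered (un-gridded) sample of f(x) = x**2 near x0 = 1.0
pts = np.array([0.7, 0.9, 1.05, 1.2, 1.4])
c = local_derivatives(1.0, pts, pts**2)
```

    For exactly quadratic data and a quadratic fit the recovery is exact; for general data, point spacing and Taylor order control the error, mirroring the trends reported in the abstract.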

  2. Von Neumann stability analysis of globally divergence-free RKDG schemes for the induction equation using multidimensional Riemann solvers

    NASA Astrophysics Data System (ADS)

    Balsara, Dinshaw S.; Käppeli, Roger

    2017-05-01

    In this paper we focus on the numerical solution of the induction equation using Runge-Kutta Discontinuous Galerkin (RKDG)-like schemes that are globally divergence-free. The induction equation plays a role in numerical MHD and other systems like it. It ensures that the magnetic field evolves in a divergence-free fashion; and that same property is shared by the numerical schemes presented here. The algorithms presented here are based on a novel DG-like method as it applies to the magnetic field components in the faces of a mesh. (I.e., this is not a conventional DG algorithm for conservation laws.) The other two novel building blocks of the method include divergence-free reconstruction of the magnetic field and multidimensional Riemann solvers; both of which have been developed in recent years by the first author. Since the method is linear, a von Neumann stability analysis is carried out in two-dimensions to understand its stability properties. The von Neumann stability analysis that we develop in this paper relies on transcribing from a modal to a nodal DG formulation in order to develop discrete evolutionary equations for the nodal values. These are then coupled to a suitable Runge-Kutta timestepping strategy so that one can analyze the stability of the entire scheme which is suitably high order in space and time. We show that our scheme permits CFL numbers that are comparable to those of traditional RKDG schemes. We also analyze the wave propagation characteristics of the method and show that with increasing order of accuracy the wave propagation becomes more isotropic and free of dissipation for a larger range of long wavelength modes. This makes a strong case for investing in higher order methods. We also use the von Neumann stability analysis to show that the divergence-free reconstruction and multidimensional Riemann solvers are essential algorithmic ingredients of a globally divergence-free RKDG-like scheme. 
Numerical accuracy analyses of the RKDG-like schemes are presented and compared with the accuracy of PNPM schemes. It is found that PNPM schemes retrieve much of the accuracy of the RKDG-like schemes while permitting a larger CFL number.

  3. Advances in visual representation of molecular potentials.

    PubMed

    Du, Qi-Shi; Huang, Ri-Bo; Chou, Kuo-Chen

    2010-06-01

    The recent advances in visual representations of molecular properties in 3D space are summarized, and their applications in molecular modeling study and rational drug design are introduced. The visual representation methods provide us with detailed insights into protein-ligand interactions, and hence can play a major role in elucidating the structure or reactivity of a biomolecular system. Three newly developed computation and visualization methods for studying the physical and chemical properties of molecules are introduced, including their electrostatic potential, lipophilicity potential and excess chemical potential. The newest application examples of visual representations in structure-based rational drug design are presented. The 3D electrostatic potentials, calculated using the empirical method (EM-ESP), in which the classical Coulomb equation and traditional atomic partial charges are discarded, are highly consistent with the results of higher-level quantum chemical methods. The 3D lipophilicity potentials, computed by the heuristic molecular lipophilicity potential method based on the principles of quantum mechanics and statistical mechanics, are more accurate and reliable than those obtained using the traditional empirical methods. The 3D excess chemical potentials, derived by the reference interaction site model-hypernetted chain theory, provide a new tool for computational chemistry and molecular modeling. For structure-based drug design, the visual representations of molecular properties will play a significant role in practical applications. It is anticipated that the new advances in computational chemistry will stimulate the development of molecular modeling methods, further enriching the visual representation techniques for rational drug design, as well as other relevant fields in life science.

  4. “Live” Formulations of International Association for the properties of Water and Steam (IAPWS)

    NASA Astrophysics Data System (ADS)

    Ochkov, V. F.; Orlov, K. A.; Gurke, S.

    2017-11-01

    Online publication of IAPWS formulations for calculation of the properties of water and steam is reviewed. The advantages of electronic delivery via the Internet over traditional publication on paper are examined. Online calculation can be used with or without formulas or equations printed in traditional publications. Online calculations should preferably be free of charge and compatible across multiple platforms (Windows, Android, Linux). Other requirements include availability of a multilingual interface, traditional math operators and functions, 2D and 3D graphic capabilities, animation, numerical and symbolic math, tools for solving equation systems, local functions, etc. The use of online visualization tools for verification of functions for calculating thermophysical properties of substances is reviewed. Specific examples are provided of tools for the modeling of the properties of chemical substances, including desktop and online calculation software, downloadable online calculations, and calculations that use server technologies such as Mathcad Calculation Server (see the site of National Research University "Moscow Power Engineering Institute") and SMath (see the site of Knovel, an Elsevier company).

  5. Comparison between results of solution of Burgers' equation and Laplace's equation by Galerkin and least-square finite element methods

    NASA Astrophysics Data System (ADS)

    Adib, Arash; Poorveis, Davood; Mehraban, Farid

    2018-03-01

    In this research, two equations are considered as examples of hyperbolic and elliptic equations, and two finite element methods are applied to solve them. The purpose of this research is the selection of a suitable method for solving each of the two equations. Burgers' equation is a hyperbolic equation. This equation is a pure advection (without diffusion) equation. It is one-dimensional and unsteady. A sudden shock wave is introduced to the model. This wave moves without deformation. In addition, Laplace's equation is an elliptic equation. This equation is steady and two-dimensional. The solution of Laplace's equation in an earth dam is considered. By solving Laplace's equation, the head pressure and the value of seepage in the X and Y directions are calculated at different points of the earth dam. At the end, the water table is shown in the earth dam. For Burgers' equation, the least-squares method can show the movement of the wave with oscillation, but the Galerkin method cannot show it correctly (the best approach for solving Burgers' equation is to discretize space by the least-squares finite element method and discretize time by a forward difference). For Laplace's equation, both the Galerkin and least-squares methods can show the water table correctly in the earth dam.
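    As a minimal illustration of the Galerkin finite element machinery discussed above, reduced to 1D with linear elements (the paper's seepage problem is 2D; the mesh size and boundary values below are arbitrary):

```python
import numpy as np

def galerkin_laplace_1d(n_elem=8, u_left=0.0, u_right=1.0):
    """Linear-element Galerkin FEM for u'' = 0 on [0, 1].

    A 1D analogue of the Laplace problem: assemble the global stiffness
    matrix from the element matrix (1/h) * [[1, -1], [-1, 1]] on a uniform
    mesh, then impose Dirichlet boundary conditions by row replacement.
    """
    h = 1.0 / n_elem
    n = n_elem + 1
    K = np.zeros((n, n))
    for e in range(n_elem):
        K[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    f = np.zeros(n)
    K[0, :] = 0.0;  K[0, 0] = 1.0;   f[0] = u_left
    K[-1, :] = 0.0; K[-1, -1] = 1.0; f[-1] = u_right
    return np.linalg.solve(K, f)

u = galerkin_laplace_1d()
```

    For this boundary-value problem the exact solution is the linear ramp between the boundary values, which the linear elements reproduce exactly at the nodes.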

  6. Minimal subspace rotation on the Stiefel manifold for stabilization and enhancement of projection-based reduced order models for the compressible Navier–Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balajewicz, Maciej; Tezaur, Irina; Dowell, Earl

    For a projection-based reduced order model (ROM) of a fluid flow to be stable and accurate, the dynamics of the truncated subspace must be taken into account. This paper proposes an approach for stabilizing and enhancing projection-based fluid ROMs in which truncated modes are accounted for a priori via a minimal rotation of the projection subspace. Attention is focused on the full non-linear compressible Navier–Stokes equations in specific volume form as a step toward a more general formulation for problems with generic non-linearities. Unlike traditional approaches, no empirical turbulence modeling terms are required, and consistency between the ROM and the Navier–Stokes equation from which the ROM is derived is maintained. Mathematically, the approach is formulated as a trace minimization problem on the Stiefel manifold. The reproductive as well as predictive capabilities of the method are evaluated on several compressible flow problems, including a problem involving laminar flow over an airfoil with a high angle of attack, and a channel-driven cavity flow problem.

  7. Minimal subspace rotation on the Stiefel manifold for stabilization and enhancement of projection-based reduced order models for the compressible Navier–Stokes equations

    DOE PAGES

    Balajewicz, Maciej; Tezaur, Irina; Dowell, Earl

    2016-05-25

    For a projection-based reduced order model (ROM) of a fluid flow to be stable and accurate, the dynamics of the truncated subspace must be taken into account. This paper proposes an approach for stabilizing and enhancing projection-based fluid ROMs in which truncated modes are accounted for a priori via a minimal rotation of the projection subspace. Attention is focused on the full non-linear compressible Navier–Stokes equations in specific volume form as a step toward a more general formulation for problems with generic non-linearities. Unlike traditional approaches, no empirical turbulence modeling terms are required, and consistency between the ROM and the Navier–Stokes equation from which the ROM is derived is maintained. Mathematically, the approach is formulated as a trace minimization problem on the Stiefel manifold. The reproductive as well as predictive capabilities of the method are evaluated on several compressible flow problems, including a problem involving laminar flow over an airfoil with a high angle of attack, and a channel-driven cavity flow problem.

  8. Multi-dimensional upwinding-based implicit LES for the vorticity transport equations

    NASA Astrophysics Data System (ADS)

    Foti, Daniel; Duraisamy, Karthik

    2017-11-01

    Complex turbulent flows such as rotorcraft and wind turbine wakes are characterized by the presence of strong coherent structures that can be compactly described by vorticity variables. The vorticity-velocity formulation of the incompressible Navier-Stokes equations is employed to increase numerical efficiency. Compared to the traditional velocity-pressure formulation, high order numerical methods and sub-grid scale models for the vorticity transport equation (VTE) have not been fully investigated. Consistent treatment of the convection and stretching terms also needs to be addressed. Our belief is that, by carefully designing sharp gradient-capturing numerical schemes, coherent structures can be more efficiently captured using the vorticity-velocity formulation. In this work, a multidimensional upwind approach for the VTE is developed using the generalized Riemann problem-based scheme devised by Parish et al. (Computers & Fluids, 2016). The algorithm obtains high resolution by augmenting the upwind fluxes with transverse and normal direction corrections. The approach is investigated with several canonical vortex-dominated flows including isolated and interacting vortices and turbulent flows. The capability of the technique to represent sub-grid scale effects is also assessed. This work was supported by a Navy contract titled "Turbulence Modelling Across Disparate Length Scales for Naval Computational Fluid Dynamics Applications," through Continuum Dynamics, Inc.

  9. Bounding solutions of geometrically nonlinear viscoelastic problems

    NASA Technical Reports Server (NTRS)

    Stubstad, J. M.; Simitses, G. J.

    1985-01-01

    Integral transform techniques, such as the Laplace transform, provide simple and direct methods for solving viscoelastic problems formulated within a context of linear material response and using linear measures for deformation. Application of the transform operator reduces the governing linear integro-differential equations to a set of algebraic relations between the transforms of the unknown functions, the viscoelastic operators, and the initial and boundary conditions. Inversion either directly or through the use of the appropriate convolution theorem, provides the time domain response once the unknown functions have been expressed in terms of sums, products or ratios of known transforms. When exact inversion is not possible approximate techniques may provide accurate results. The overall problem becomes substantially more complex when nonlinear effects must be included. Situations where a linear material constitutive law can still be productively employed but where the magnitude of the resulting time dependent deformations warrants the use of a nonlinear kinematic analysis are considered. The governing equations will be nonlinear integro-differential equations for this class of problems. Thus traditional as well as approximate techniques, such as cited above, cannot be employed since the transform of a nonlinear function is not explicitly expressible.

  10. Bounding solutions of geometrically nonlinear viscoelastic problems

    NASA Technical Reports Server (NTRS)

    Stubstad, J. M.; Simitses, G. J.

    1986-01-01

    Integral transform techniques, such as the Laplace transform, provide simple and direct methods for solving viscoelastic problems formulated within a context of linear material response and using linear measures for deformation. Application of the transform operator reduces the governing linear integro-differential equations to a set of algebraic relations between the transforms of the unknown functions, the viscoelastic operators, and the initial and boundary conditions. Inversion either directly or through the use of the appropriate convolution theorem, provides the time domain response once the unknown functions have been expressed in terms of sums, products or ratios of known transforms. When exact inversion is not possible approximate techniques may provide accurate results. The overall problem becomes substantially more complex when nonlinear effects must be included. Situations where a linear material constitutive law can still be productively employed but where the magnitude of the resulting time dependent deformations warrants the use of a nonlinear kinematic analysis are considered. The governing equations will be nonlinear integro-differential equations for this class of problems. Thus traditional as well as approximate techniques, such as cited above, cannot be employed since the transform of a nonlinear function is not explicitly expressible.

  11. Light manipulation with flat and conformal inhomogeneous dispersive impedance sheets: an efficient FDTD modeling.

    PubMed

    Jafar-Zanjani, Samad; Cheng, Jierong; Mosallaei, Hossein

    2016-04-10

    An efficient auxiliary differential equation method for incorporating 2D inhomogeneous dispersive impedance sheets in the finite-difference time-domain solver is presented. This unique proposed method can successfully solve optical problems of current interest involving 2D sheets. It eliminates the need for ultrafine meshing in the thickness direction, resulting in a significant reduction of computation time and memory requirements. We apply the method to characterize a novel broad-beam leaky-wave antenna created by cascading three sinusoidally modulated reactance surfaces and also to study the effect of curvature on the radiation characteristic of a conformal impedance sheet holographic antenna. Considerable improvement in the simulation time based on our technique in comparison with the traditional volumetric model is reported. Both applications are of great interest in the field of antennas and 2D sheets.

  12. An Efficient Multiblock Method for Aerodynamic Analysis and Design on Distributed Memory Systems

    NASA Technical Reports Server (NTRS)

    Reuther, James; Alonso, Juan Jose; Vassberg, John C.; Jameson, Antony; Martinelli, Luigi

    1997-01-01

    The work presented in this paper describes the application of a multiblock gridding strategy to the solution of aerodynamic design optimization problems involving complex configurations. The design process is parallelized using the MPI (Message Passing Interface) Standard such that it can be efficiently run on a variety of distributed memory systems ranging from traditional parallel computers to networks of workstations. Substantial improvements to the parallel performance of the baseline method are presented, with particular attention to their impact on the scalability of the program as a function of the mesh size. Drag minimization calculations at a fixed coefficient of lift are presented for a business jet configuration that includes the wing, body, pylon, aft-mounted nacelle, and vertical and horizontal tails. An aerodynamic design optimization is performed with both the Euler and Reynolds Averaged Navier-Stokes (RANS) equations governing the flow solution and the results are compared. These sample calculations establish the feasibility of efficient aerodynamic optimization of complete aircraft configurations using the RANS equations as the flow model. There still exists, however, the need for detailed studies of the importance of a true viscous adjoint method which holds the promise of tackling the minimization of not only the wave and induced components of drag, but also the viscous drag.

  13. Synchronized parameter optimization of the double freeform lenses illumination system used for the CF-LCoS pico-projectors

    NASA Astrophysics Data System (ADS)

    Chen, Enguo; Liu, Peng; Yu, Feihong

    2012-10-01

    A novel synchronized optimization method of multiple freeform surfaces is proposed and applied to the double-lens illumination system design of CF-LCoS pico-projectors. Based on Snell's law and the energy conservation law, a series of first-order partial differential equations are derived for the multiple freeform surfaces of the initial system. By assigning the light deflection angle to each freeform surface, multiple surfaces can be obtained simultaneously by solving the corresponding equations, while the restricted angle on the CF-LCoS is guaranteed. In order to improve the spatial uniformity, the multiple surfaces are synchronously optimized using a simplex algorithm for an extended LED source. A design example shows that the double-lens illumination system, which employs a single 2 mm × 2 mm LED chip and a CF-LCoS panel with a diagonal of 0.59 inches, satisfies the needs of a pico-projector. Moreover, the analytical results indicate that the design method represents a substantial improvement of practical significance over the traditional CF-LCoS projection system, offering outstanding performance with both portability and low cost. The synchronized optimization design method can not only realize collimating and uniform illumination, but can also be introduced to other specific lighting conditions.

  14. System Synthesis in Preliminary Aircraft Design using Statistical Methods

    NASA Technical Reports Server (NTRS)

    DeLaurentis, Daniel; Mavris, Dimitri N.; Schrage, Daniel P.

    1996-01-01

    This paper documents an approach to conceptual and preliminary aircraft design in which system synthesis is achieved using statistical methods, specifically design of experiments (DOE) and response surface methodology (RSM). These methods are employed in order to more efficiently search the design space for optimum configurations. In particular, a methodology incorporating three uses of these techniques is presented. First, response surface equations are formed which represent aerodynamic analyses, in the form of regression polynomials, which are more sophisticated than generally available in early design stages. Next, a regression equation for an overall evaluation criterion is constructed for the purpose of constrained optimization at the system level. This optimization, though achieved in an innovative way, is still traditional in that it is a point design solution. The methodology put forward here remedies this by introducing uncertainty into the problem, resulting in solutions which are probabilistic in nature. DOE/RSM is used for the third time in this setting. The process is demonstrated through a detailed aero-propulsion optimization of a high speed civil transport. Fundamental goals of the methodology, then, are to introduce higher fidelity disciplinary analyses to the conceptual aircraft synthesis and provide a roadmap for transitioning from point solutions to probabilistic designs (and eventually robust ones).
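    Response surface methodology amounts to fitting regression polynomials to DOE results; a minimal sketch for a full quadratic in two factors (the design table and response values below are toy placeholders, not the paper's aero-propulsion data):

```python
import numpy as np

def fit_response_surface(X, y):
    """Least-squares fit of a full quadratic response surface in two factors:

        y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2

    X is (n, 2) with one DOE run per row; returns the six coefficients.
    """
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

# 3x3 full-factorial "design of experiments" on a known quadratic response
grid = np.array([(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)], float)
y = 2.0 + 0.5 * grid[:, 0] - grid[:, 1] + 0.25 * grid[:, 0] * grid[:, 1]
coef = fit_response_surface(grid, y)
```

    Once fitted, the cheap polynomial stands in for the expensive disciplinary analysis during system-level optimization, which is the efficiency gain the abstract describes.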

  15. Optimal trajectory planning of free-floating space manipulator using differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Mingming; Luo, Jianjun; Fang, Jing; Yuan, Jianping

    2018-03-01

    The existence of path-dependent dynamic singularities limits the volume of the available workspace of a free-floating space robot and induces enormous joint velocities when such singularities are met. To overcome this demerit, this paper presents an optimal joint trajectory planning method using the forward kinematics equations of the free-floating space robot, while the joint motion laws are delineated with application of the concept of reaction null-space. Bézier curves, in conjunction with the null-space column vectors, are applied to describe the joint trajectories. Considering the forward kinematics equations of the free-floating space robot, the trajectory planning issue is consequently transformed into an optimization issue in which the control points used to construct the Bézier curve are the design variables. A constrained differential evolution (DE) scheme with a premature-handling strategy is implemented to find the optimal solution for the design variables while specific objectives and imposed constraints are satisfied. Differing from traditional methods, we synthesize the null-space and a specialized curve to provide a novel viewpoint for trajectory planning of free-floating space robots. Simulation results are presented for trajectory planning of a 7-degree-of-freedom (DOF) kinematically redundant manipulator mounted on a free-floating spacecraft and demonstrate the feasibility and effectiveness of the proposed method.

  16. A Comparison of QSAR Based Thermo and Water Solvation Property Prediction Tools and Experimental Data for Selected Traditional Chemical Warfare Agents and Simulants

    DTIC Science & Technology

    2014-07-01

    Labs uses parameterized Hammett-type equations to describe 1500 possible combinations of more than 650 ionizable functional groups. The change in... of the form

        Ypred = c0 + c1·X1 + ⋯ + cn·Xn        Equation (1)

    where Ypred is the predicted property, c0 is a constant, c1 to cn are coefficients from the... regression to the training set of measurements, X1 to Xn represent molecular, fragment, or field-based descriptors, and the final term in Equation 1
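    Equation (1) in the excerpt is ordinary multiple linear regression; a minimal sketch with toy descriptor data (not the cited tools' actual descriptors or training sets):

```python
import numpy as np

def fit_qsar(X, y):
    """Ordinary least squares for the excerpt's Equation (1):

        Ypred = c0 + c1*X1 + ... + cn*Xn

    X holds descriptor values (n_samples, n_descriptors); the descriptors
    themselves (molecular, fragment, or field-based) are assumed given.
    """
    A = np.column_stack([np.ones(len(X)), X])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return c

def predict(c, x_row):
    """Apply Equation (1) to one row of descriptors."""
    return c[0] + float(np.dot(c[1:], x_row))

# Toy training set whose property is exactly linear in two descriptors
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [2.0, 1.0]])
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1]
c = fit_qsar(X, y)
```

    With exactly linear toy data the regression recovers the coefficients, so predictions for unseen descriptor rows follow directly from Equation (1).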

  17. Key Technology of Real-Time Road Navigation Method Based on Intelligent Data Research

    PubMed Central

    Tang, Haijing; Liang, Yu; Huang, Zhongnan; Wang, Taoyi; He, Lin; Du, Yicong; Ding, Gangyi

    2016-01-01

    The effect of traffic flow prediction plays an important role in route selection. Traditional traffic flow forecasting methods mainly include linear, nonlinear, neural network, and time series analysis methods. However, all of them have some shortcomings. This paper analyzes the existing algorithms for traffic flow prediction and the characteristics of city traffic flow, and proposes a road traffic flow prediction method based on transfer probability. This method first analyzes the transfer probability of the roads upstream of the target road and then predicts the traffic flow at the next time step by using the traffic flow equation. The Newton interior-point method is used to obtain the optimal values of the parameters. Finally, the proposed model is used to predict the traffic flow at the next time step. Compared with the existing prediction methods, the proposed model has proven to have good performance: it obtains the optimal parameter values faster and has higher prediction accuracy, so it can be used for real-time traffic flow prediction. PMID:27872637
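    The one-step prediction described above reduces to weighting each upstream flow by its transfer probability; a minimal sketch with invented numbers (the paper fits the probabilities from data via a Newton interior-point optimization, which is omitted here):

```python
import numpy as np

def predict_next_flow(upstream_flows, transfer_probs):
    """One-step flow prediction for a target road.

    Each upstream link i contributes transfer_probs[i] of its current flow
    to the target road at the next time step, so the prediction is a dot
    product of the two vectors.
    """
    upstream_flows = np.asarray(upstream_flows, float)
    transfer_probs = np.asarray(transfer_probs, float)
    return float(np.dot(transfer_probs, upstream_flows))

# Three upstream roads (vehicles per interval) and their fitted probabilities
flow_next = predict_next_flow([120.0, 80.0, 40.0], [0.5, 0.25, 0.1])
```

    Here 0.5*120 + 0.25*80 + 0.1*40 = 84 vehicles are predicted for the next interval; in practice the probabilities would be re-estimated as new observations arrive.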

  18. Solution of a modified fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Langlands, T. A. M.

    2006-07-01

    Recently, a modified fractional diffusion equation has been proposed [I.M. Sokolov, J. Klafter, From diffusion to anomalous diffusion: a century after Einstein's Brownian motion, Chaos 15 (2005) 026103; A.V. Chechkin, R. Gorenflo, I.M. Sokolov, V.Yu. Gonchar, Distributed order time fractional diffusion equation, Frac. Calc. Appl. Anal. 6 (3) (2003) 259-279; I.M. Sokolov, A.V. Chechkin, J. Klafter, Distributed-order fractional kinetics, Acta Phys. Pol. B 35 (2004) 1323] for describing processes that become less anomalous as time progresses, by the inclusion of a second fractional time derivative acting on the diffusion term. In this letter we give the solution of the modified equation on an infinite domain. In contrast to the solution of the traditional fractional diffusion equation, the solution of the modified equation requires an infinite series of Fox functions instead of a single Fox function.
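    As a hedged sketch of the general shape of such a modified equation (the constants A, B and exponents are placeholders following the cited distributed-order literature, not transcribed from this letter):

```latex
\frac{\partial u(x,t)}{\partial t}
  = \left( A\,{}_{0}D_t^{\,1-\alpha} + B\,{}_{0}D_t^{\,1-\beta} \right)
    \frac{\partial^2 u(x,t)}{\partial x^2},
\qquad 0 < \alpha < \beta \le 1,
```

    where the ${}_{0}D_t$ operators denote Riemann-Liouville fractional time derivatives; the second fractional operator acting on the diffusion term is what makes the process "less anomalous" at late times.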

  19. Role of Turbulent Prandtl Number on Heat Flux at Hypersonic Mach Number

    NASA Technical Reports Server (NTRS)

    Xiao, X.; Edwards, J. R.; Hassan, H. A.

    2004-01-01

    Present simulation of turbulent flows involving shock wave/boundary layer interaction invariably overestimates heat flux by almost a factor of two. One possible reason for such performance is the fact that the turbulence models employed make use of Morkovin's hypothesis. This hypothesis is valid for non-hypersonic Mach numbers and moderate rates of heat transfer. At hypersonic Mach numbers, high rates of heat transfer exist in regions where shock wave/boundary layer interactions are important. As a result, one should not expect traditional turbulence models to yield accurate results. The goal of this investigation is to explore the role of a variable Prandtl number formulation in predicting heat flux in flows dominated by strong shock wave/boundary layer interactions. The intended applications involve external flows in the absence of combustion, such as those encountered in supersonic inlets. This can be achieved by adding equations for the temperature variance and its dissipation rate. Such equations can be derived from the exact Navier-Stokes equations. Traditionally, modeled equations are based on the low speed energy equation, where the pressure gradient term and the term responsible for energy dissipation are ignored. It is clear that such assumptions are not valid for hypersonic flows. The approach used here is based on the procedure used in deriving the k-zeta model, in which the exact equations that govern k, the variance of velocity, and zeta, the variance of vorticity, were derived and modeled. For the variable turbulent Prandtl number, the exact equations that govern the temperature variance and its dissipation rate are derived and modeled term by term. The resulting set of equations is free of damping and wall functions and is coordinate-system independent. Moreover, the modeled correlations are tensorially consistent and invariant under Galilean transformation. The final set of equations will be given in the paper.

  20. Finite Element analyses of soil bioengineered slopes

    NASA Astrophysics Data System (ADS)

    Tamagnini, Roberto; Switala, Barbara Maria; Sudan Acharya, Madhu; Wu, Wei; Graf, Frank; Auer, Michael; te Kamp, Lothar

    2014-05-01

    Soil bioengineering methods are not only effective from an economical point of view, but are also interesting as fully ecological solutions. The presented project aims to define a numerical model that includes the impact of vegetation on slope stability, considering both mechanical and hydrological effects. A constitutive model has been developed that accounts for the multi-phase nature of the soil, namely the partly saturated condition, and also includes the effects of a biological component. The constitutive equation is implemented in the Finite Element (FE) software Comes-Geo with an implicit integration scheme that accounts for the collapse of the soil structure due to wetting. The mathematical formulation of the constitutive equations is introduced by means of thermodynamics, and it simulates the growth of the biological system over time. The numerical code is then applied in the analysis of an idealized rainfall-induced landslide, with the slope analyzed for both vegetated and non-vegetated conditions. The final results allow the impact of vegetation on slope stability to be assessed quantitatively, supporting the decision of whether to use soil bioengineering methods in slope stabilization instead of traditional approaches. The application of the FE method shows some advantages over the commonly used limit equilibrium analyses, because it can account for the truly coupled strain-diffusion nature of the problem; the mechanical strength of roots is in fact influenced by the stress evolution within the slope. Moreover, the FE method does not need a pre-defined failure surface, and it can be used to monitor the progressive failure of the soil-bioengineered system, since it calculates the displacements and strains of the model slope.
    The preliminary results show that the formulated equations can be useful for the analysis and evaluation of different soil bioengineering methods of slope stabilization.

  1. Use of lignin extracted from different plant sources as standards in the spectrophotometric acetyl bromide lignin method.

    PubMed

    Fukushima, Romualdo S; Kerley, Monty S

    2011-04-27

    A nongravimetric acetyl bromide lignin (ABL) method was evaluated to quantify lignin concentration in a variety of plant materials. The traditional approach to lignin quantification required extracting lignin with acidic dioxane and isolating it from each plant sample to construct a standard curve via spectrophotometric analysis; lignin concentration was then measured in pre-extracted plant cell walls. However, this presented a methodological complexity, because the extraction and isolation procedures are lengthy and tedious, particularly when many samples are involved. This work aimed to simplify lignin quantification. Our hypothesis was that any lignin, regardless of its botanical origin, could be used to construct a standard curve for determining lignin concentration in a variety of plants. To test this hypothesis, lignins were isolated from a range of diverse plants and, along with three commercial lignins, used to build standard curves, which were then compared. The slopes and intercepts derived from these standard curves were close enough to allow a mean extinction coefficient to be used in the regression equation to estimate lignin concentration in any plant, independent of its botanical origin. Lignin quantification with a common regression equation obviates the steps of lignin extraction, isolation, and standard curve construction, which substantially expedites the ABL method. The acetyl bromide lignin method is a fast, convenient analytical procedure that may routinely be used to quantify lignin.
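    In practice, the common-regression-equation idea reduces to a single Beer-Lambert-style calibration shared across species: fit absorbance against known concentration once, then invert the line for unknowns. A minimal sketch, with entirely hypothetical absorbance and concentration values (not the paper's data):

```python
import numpy as np

# Hypothetical calibration data for an acetyl bromide lignin standard curve:
# absorbance readings for known lignin concentrations (mg/mL).
conc = np.array([0.0, 0.1, 0.2, 0.3, 0.4])          # mg/mL
absorbance = np.array([0.00, 0.23, 0.46, 0.70, 0.92])

# Fit absorbance = slope * concentration + intercept (Beer-Lambert form);
# the slope plays the role of the mean extinction coefficient.
slope, intercept = np.polyfit(conc, absorbance, 1)

def lignin_concentration(a):
    """Invert the standard curve to estimate concentration from absorbance."""
    return (a - intercept) / slope

est = lignin_concentration(0.58)  # concentration of an unknown sample
```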

  2. A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling

    NASA Astrophysics Data System (ADS)

    Shapiro, B.; Jin, Q.

    2015-12-01

    Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale, constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. It requires prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation, which accounts for both the kinetics and the thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and the COBRA Toolbox. We applied the method to acetotrophic methanogenesis by Methanosarcina barkeri and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and it predicted the previous experimental observations well. In comparison, traditional dynamic-FBA methods constrain acetate uptake on the basis of enzyme kinetics alone, and they failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
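    A "revised Monod equation" of this kind typically multiplies the usual Monod kinetic term by a thermodynamic driving-force factor of the form 1 - exp(ΔG/χRT). The sketch below follows that published functional form; all parameter values are illustrative, not those used for M. barkeri in the study:

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol*K)

def respiration_rate(S, k_max=1.0, K_S=0.5, dG=-30.0, dG_min=-15.0,
                     chi=2, T=298.15):
    """Revised Monod rate: a kinetic (Monod) term times a thermodynamic
    factor F_T = 1 - exp((dG - dG_min) / (chi * R * T)).  S is substrate
    concentration; dG the catabolic free energy (kJ/mol); dG_min the
    minimum energy the cell must conserve.  The factor is clipped at zero
    when the reaction sits too close to equilibrium to proceed."""
    kinetic = S / (K_S + S)
    F_T = 1.0 - math.exp((dG - dG_min) / (chi * R * T))
    return k_max * kinetic * max(F_T, 0.0)
```

    Unlike a purely kinetic Monod law, the rate here shuts off as dG approaches dG_min, which is the behavior the abstract credits for the improved acetate-uptake constraint.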

  3. Modeling underwater noise propagation from marine hydrokinetic power devices through a time-domain, velocity-pressure solution

    DOE PAGES

    Hafla, Erin; Johnson, Erick; Johnson, C. Nathan; ...

    2018-06-01

    Marine hydrokinetic (MHK) devices generate electricity from the motion of tidal and ocean currents, as well as ocean waves, to provide an additional source of renewable energy available to the United States. These devices are a source of anthropogenic noise in the marine ecosystem and must meet regulatory guidelines that mandate a maximum amount of noise that may be generated. In the absence of measured levels from in situ deployments, a model for predicting the propagation of sound from an array of MHK sources in a real environment is essential. A set of coupled, linearized velocity-pressure equations in the time domain is derived and presented in this paper, as an alternative to the Helmholtz and wave equation methods traditionally employed. Discretizing these equations on a three-dimensional (3D), finite-difference grid ultimately permits a finite number of complex sources and spatially varying sound speeds, bathymetry, and bed composition. The solution to this system of equations has been parallelized in an acoustic-wave propagation package developed at Sandia National Labs, called Paracousti. This work presents the broadband sound pressure levels from a single source in two-dimensional (2D) ideal and Pekeris wave-guides and in a 3D domain with a sloping boundary. The paper concludes with a demonstration of Paracousti for an array of MHK sources in a simple wave-guide.
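    The coupled first-order velocity-pressure system lends itself to staggered-grid, finite-difference time stepping. A minimal 1-D sketch (homogeneous water-like medium, rigid ends, illustrative parameters; Paracousti itself solves the heterogeneous 3-D case):

```python
import numpy as np

c, rho = 1500.0, 1000.0          # sound speed (m/s), density (kg/m^3)
nx, dx = 200, 1.0                # grid cells, spacing (m)
dt = 0.5 * dx / c                # CFL-stable time step
p = np.zeros(nx)                 # pressure at cell centers
v = np.zeros(nx + 1)             # velocity at cell faces (staggered)

for n in range(400):
    # Momentum equation: dv/dt = -(1/rho) dp/dx  (interior faces; rigid ends)
    v[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
    # Continuity/state equation: dp/dt = -rho c^2 dv/dx
    p -= dt * rho * c**2 / dx * (v[1:] - v[:-1])
    # Gaussian pressure source injected at the domain center
    p[nx // 2] += np.exp(-((n * dt - 0.02) / 0.005) ** 2)
```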

  5. Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows

    NASA Technical Reports Server (NTRS)

    Sjoegreen, Bjoern; Yee, Helen C.; Mansour, Nagi (Technical Monitor)

    2002-01-01

    Accurate numerical simulations of complex multiscale compressible viscous flows, especially high-speed turbulence, combustion, and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high-resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinement and small time steps. An integrated approach for the control of numerical dissipation in high order schemes for the compressible Euler and Navier-Stokes equations has been developed and verified by the authors and collaborators, and these schemes are suitable for the problems in question. Basically, the scheme consists of a sixth-order or higher non-dissipative spatial difference operator as the base scheme. To control the amount of numerical dissipation, multiresolution wavelets are used as sensors to adaptively limit the amount, and to aid the selection and/or blending, of the appropriate types of numerical dissipation. Magnetohydrodynamic (MHD) waves play a key role in drag reduction in highly maneuverable high speed combat aircraft, in space weather forecasting, and in understanding the dynamics of the evolution of our solar system and of main sequence stars. Although a few well-studied second- and third-order high-resolution shock-capturing schemes exist for MHD in the literature, these schemes are too diffusive and not practical for turbulence/combustion MHD flows. On the other hand, the extension of higher than third-order high-resolution schemes to the MHD system of equations is not straightforward. Unlike the hydrodynamic equations, the inviscid MHD system is non-strictly hyperbolic with non-convex fluxes; the wave structures and shock types differ from their hydrodynamic counterparts, and many of these non-traditional shock types are not fully understood. Consequently, reliable and highly accurate numerical schemes for the multiscale MHD equations pose a great challenge to algorithm development. In addition, controlling the numerical error in the divergence-free condition of the magnetic field has been a stumbling block for high order methods, and lower order methods are not practical for the astrophysical problems in question. We propose to extend our hydrodynamics schemes to the MHD equations with several desired properties over commonly used MHD schemes.

  6. An experimental detrending approach to attributing change of pan evaporation in comparison with the traditional partial differential method

    NASA Astrophysics Data System (ADS)

    Wang, Tingting; Sun, Fubao; Xia, Jun; Liu, Wenbin; Sang, Yanfang

    2017-04-01

    In predicting how droughts and hydrological cycles will change in a warming climate, change in atmospheric evaporative demand, as measured by pan evaporation (Epan), is one crucial element to be understood. Over the last decade, the partial differential (PD) form of the PenPan equation has been the prevailing approach to attributing changes in Epan worldwide. However, the independence among climatic variables required by the PD approach cannot be met with long-term observations. Here we designed a series of numerical experiments that attribute changes in Epan over China by detrending each climatic variable in turn, i.e., an experimental detrending approach, to address the inter-correlation among climate variables, and we compared it with the traditional PD method. The results show that the detrending approach is superior to the traditional PD method, not only for composite quantities with mixed algorithms, such as the aerodynamic component (Ep,A) and Epan itself, but also for a simple case such as the radiative component (Ep,R). The major reason is the strong and significant inter-correlation of the input meteorological forcing. Very similar attribution results were achieved by the detrending approach and the PD method after the inter-correlation of the input was eliminated through randomization. The contributions of Rh and Ta to net radiation, and thus to Ep,R, which are overlooked by the PD method but successfully detected by the detrending approach, provide some explanation for these comparisons. We then adopted the control run from the detrending approach and used it to adjust the PD method; the adjustment yielded much improvement, proving it an effective way of attributing changes in Epan. Hence, the detrending approach and the adjusted PD method are recommended for attributing changes in hydrological models, to better understand and predict the water and energy cycles.
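    The experimental detrending idea can be sketched directly: rerun the model with one forcing's trend removed, and take the change in the output trend as that forcing's contribution. The toy linear "pan evaporation" model and the forcing trends below are invented for illustration (the study uses the PenPan formulation):

```python
import numpy as np

def linear_trend(y):
    """Slope of the least-squares linear fit of y against time index."""
    t = np.arange(y.size)
    return np.polyfit(t, y, 1)[0]

def detrend(x):
    """Remove the linear trend from a series while keeping its mean."""
    t = np.arange(x.size)
    slope = np.polyfit(t, x, 1)[0]
    return x - slope * (t - t.mean())

def attribute(model, forcings):
    """Contribution of each forcing = trend of the control run minus the
    trend of a run in which only that forcing has been detrended."""
    control = linear_trend(model(**forcings))
    out = {}
    for name in forcings:
        perturbed = dict(forcings, **{name: detrend(forcings[name])})
        out[name] = control - linear_trend(model(**perturbed))
    return out

# Toy forcings with prescribed trends plus noise (120 time steps).
rng = np.random.default_rng(0)
t = np.arange(120.0)
forcings = {"Ta": 0.05 * t + rng.normal(0, 0.1, t.size),
            "Rn": 0.02 * t + rng.normal(0, 0.1, t.size),
            "Rh": -0.03 * t + rng.normal(0, 0.1, t.size)}
# Hypothetical linear stand-in for the PenPan equation.
model = lambda Ta, Rn, Rh: 0.6 * Ta + 0.3 * Rn - 0.2 * Rh
contrib = attribute(model, forcings)
```

    Because the toy model is linear, the recovered contributions match the prescribed ones (0.6*0.05, 0.3*0.02, -0.2*(-0.03)); with a nonlinear model and correlated inputs, this is exactly where the detrending and PD approaches start to diverge.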

  7. One Solution of the Forward Problem of DC Resistivity Well Logging by the Method of Volume Integral Equations with Allowance for Induced Polarization

    NASA Astrophysics Data System (ADS)

    Kevorkyants, S. S.

    2018-03-01

    To study theoretically how strongly the polarization of rocks influences the results of direct current (DC) well logging, a solution is suggested for the direct inner problem of DC electric logging in a polarizable, plane-layered model medium containing a heterogeneity, using a three-layer model of the hosting medium as an example. Initially, the solution is presented in the form of a traditional vector volume-integral equation of the second kind (IE2) for the electric current density vector. The vector IE2 is solved by the modified iteration-dissipation method. Through transformations, the initial IE2 is reduced to an equation with a contraction integral operator for an axisymmetric model of electrical well logging in the three-layer polarizable medium intersected by an infinitely long circular cylinder. The latter simulates the borehole with a zone of penetration, where the sought vector consists of the radial J_r and axial J_z components (relative to the cylinder's axis). Decomposing the obtained vector IE2 into scalar components and discretizing in the coordinates r and z leads to an inhomogeneous system of linear algebraic equations with a block matrix of coefficients consisting of 2×2 matrices, whose elements are triple integrals of the mixed second-order derivatives of the Green's function with respect to the parameters r, z, r', and z'. With the use of analytical transformations and standard integrals, the integrals over the partition cells and the azimuthal coordinate are reduced to single integrals (with respect to the variable t = cos ϕ on the interval [-1, 1]) calculated by the Gauss method of numerical integration. For estimating the effective polarization coefficient of the complex medium, the Siegel-Komarov formula is suggested.
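    The final quadrature step (single integrals in t = cos ϕ over [-1, 1] by the Gauss method) is standard Gauss-Legendre quadrature. A sketch with a stand-in integrand that has a known value (the paper's actual integrands involve derivatives of the Green's function):

```python
import numpy as np

# Gauss-Legendre nodes and weights on [-1, 1].
nodes, weights = np.polynomial.legendre.leggauss(16)

def gauss_integrate(f):
    """Approximate the integral of f over t in [-1, 1]."""
    return float(np.sum(weights * f(nodes)))

# Stand-in integrand with a closed-form check:
# integral of 1/(2 + t) over [-1, 1] equals ln(3).
approx = gauss_integrate(lambda t: 1.0 / (2.0 + t))
```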

  8. Quantification of cardiorespiratory fitness in healthy nonobese and obese men and women.

    PubMed

    Lorenzo, Santiago; Babb, Tony G

    2012-04-01

    The quantification and interpretation of cardiorespiratory fitness (CRF) in obesity is important for adequately assessing cardiovascular conditioning, underlying comorbidities, and properly evaluating disease risk. We retrospectively compared peak oxygen uptake (VO(2)peak) (ie, CRF) in absolute terms and in relative terms (% predicted) using three currently suggested prediction equations (Equations R, W, and G). There were 19 nonobese and 66 obese participants. Subjects underwent hydrostatic weighing and incremental cycling to exhaustion. Subject characteristics were analyzed by independent t test, and % predicted VO(2)peak by a two-way analysis of variance (group and equation) with repeated measures on one factor (equation). VO(2)peak (L/min) was not different between nonobese and obese adults (2.35 ± 0.80 [SD] vs 2.39 ± 0.68 L/min). VO(2)peak was higher (P < .02) in the nonobese relative to body mass and to lean body mass (34 ± 8 vs 22 ± 5 mL/min/kg; 42 ± 9 vs 37 ± 6 mL/min/kg lean body mass). Cardiorespiratory fitness assessed as % predicted was not different between the nonobese and obese (91% ± 17% vs 95% ± 15% predicted) using Equation R, while using Equations W and G, CRF was lower (P < .05) but within normal limits in the obese (94 ± 15 vs 87 ± 11; 101% ± 17% vs 90% ± 12% predicted, respectively), depending somewhat on sex. Traditional methods of reporting VO(2)peak do not allow adequate assessment and quantification of CRF in obese adults. Predicted VO(2)peak does allow a normalized evaluation of CRF in the obese, although care must be taken in selecting the most appropriate prediction equation, especially in women. In general, otherwise healthy obese adults are not grossly deconditioned as is commonly believed, although CRF may be slightly higher in nonobese subjects depending on the uniqueness of the prediction equation.
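    The competing normalizations in the abstract (absolute L/min; mL/min per kg of body or lean mass; percent of a predicted value) are simple arithmetic. A sketch with entirely hypothetical numbers (the predicted value below stands in for Equations R/W/G, which are not reproduced here):

```python
# All values are hypothetical; none are from the study.
weight_kg, lean_mass_kg = 95.0, 62.0   # an obese subject's body composition
vo2peak_l_min = 2.4                    # measured peak oxygen uptake (L/min)

per_kg = vo2peak_l_min * 1000 / weight_kg        # mL/min/kg body mass
per_lean = vo2peak_l_min * 1000 / lean_mass_kg   # mL/min/kg lean body mass
predicted_l_min = 3.0                  # output of some prediction equation
pct_predicted = 100 * vo2peak_l_min / predicted_l_min
```

    The same absolute VO(2)peak can thus look "low" per kg of total mass yet near-normal per kg of lean mass or as % predicted, which is the abstract's central point.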

  9. The novel application of artificial neural network on bioelectrical impedance analysis to assess the body composition in elderly

    PubMed Central

    2013-01-01

    Background This study aims to improve the accuracy of Bioelectrical Impedance Analysis (BIA) prediction equations for estimating fat-free mass (FFM) in the elderly by using a non-linear Back Propagation Artificial Neural Network (BP-ANN) model, and to compare its predictive accuracy with that of the linear regression model, using dual-energy X-ray absorptiometry (DXA) as the reference method. Methods A total of 88 Taiwanese elderly adults were recruited as subjects. Linear regression equations and a BP-ANN prediction equation were developed using impedances and other anthropometrics for predicting the reference FFM measured by DXA (FFMDXA) in 36 male and 26 female Taiwanese elderly adults. The FFM estimated by the BIA prediction equations using the traditional linear regression model (FFMLR) and the BP-ANN model (FFMANN) were compared to FFMDXA. The measurements of an additional 26 elderly adults were used to validate the accuracy of the predictive models. Results The significant predictors in the developed linear model (LR) for predicting FFM were impedance, gender, age, height, and weight (coefficient of determination r2 = 0.940; standard error of estimate (SEE) = 2.729 kg; root mean square error (RMSE) = 2.571 kg, P < 0.001). The same predictors were set as the variables of the input layer, with five neurons, in the BP-ANN model (r2 = 0.987 with SD = 1.192 kg and a relatively lower RMSE = 1.183 kg), which had greater accuracy in estimating FFM than the linear model. The results showed better agreement between FFMANN and FFMDXA than between FFMLR and FFMDXA. Conclusion When the performance of the developed prediction equations for estimating the reference FFMDXA is compared, the linear model has a lower r2 and a larger SD in its predictions than the BP-ANN model, which indicates that the ANN model is more suitable for estimating FFM. PMID:23388042
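    The comparison of a linear BIA equation with a small backpropagation network can be sketched on synthetic data. Everything below is invented: a mildly nonlinear target stands in for FFM, the predictors and network size are arbitrary, and a hand-rolled network replaces whatever ANN software the study used:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: three "anthropometric" predictors and a target
# with an interaction term a linear model cannot capture.
X = rng.normal(0.0, 1.0, (200, 3))
y = (2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.8 * X[:, 0] * X[:, 2]
     + rng.normal(0.0, 0.1, 200))

# Linear-regression baseline: least squares with an intercept column.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
rmse_lr = np.sqrt(np.mean((A @ coef - y) ** 2))

# One-hidden-layer tanh network trained by full-batch backpropagation.
W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, 8);      b2 = 0.0
lr = 0.05
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)                  # hidden activations
    err = H @ W2 + b2 - y                     # gradient of 0.5*MSE at output
    gW2 = H.T @ err / len(X)
    gb2 = err.mean()
    dH = np.outer(err, W2) * (1.0 - H ** 2)   # backprop through tanh
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
rmse_ann = np.sqrt(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

    On data with this kind of nonlinearity the network's RMSE drops below the linear baseline, mirroring (in spirit only) the study's RMSE comparison.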

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechant, Lawrence J.

    Wave packet analysis provides a connection between linear small-disturbance theory and subsequent nonlinear turbulent spot flow behavior. The traditional association between linear stability analysis and nonlinear wave form is developed via the method of stationary phase, whereby asymptotic (simplified) mean flow solutions are used to estimate dispersion behavior and the stationary phase approximation is used to invert the associated Fourier transform. The resulting process typically requires inversion of nonlinear algebraic equations that is best performed numerically, which partially mitigates the value of the approximation as compared to more complete approaches, e.g., DNS or linear/nonlinear adjoint methods. To obtain a simpler, closed-form analytical result, the complete packet solution is modeled via approximate amplitude (linear convected kinematic wave initial value problem) and local sinusoidal (wave equation) expressions. Significantly, the initial value for the kinematic wave transport expression follows from a separable, variable-coefficient approximation to the linearized pressure fluctuation Poisson expression. The resulting amplitude solution, while approximate in nature, nonetheless appears to mimic many of the global features, e.g., transitional flow intermittency and pressure fluctuation magnitude behavior. A low-wavenumber wave packet model also recovers meaningful auto-correlation and low-frequency spectral behaviors.

  11. Broadening of polymer chromatographic signals: Analysis, quantification and correction through effective diffusion coefficients.

    PubMed

    Suárez, Inmaculada; Coto, Baudilio

    2015-08-14

    Average molecular weights and polydispersity indexes are among the most important parameters considered in polymer characterization. Usually, gel permeation chromatography (GPC) and multi-angle light scattering (MALS) are used for this determination, but GPC values are overestimated due to the dispersion introduced by the column separation. Several procedures have been proposed to correct this effect, usually involving more complex calibration processes. In this work, a new method of calculation that includes diffusion effects has been considered. The concentration profile due to diffusion effects along the GPC column was described by a Fickian function, and polystyrene narrow standards were used to determine effective diffusion coefficients. The molecular weight distribution function of mono- and polydisperse polymers was interpreted as a sum of several Fickian functions, representing a sample formed by only a few kinds of polymer chains with specific molecular weights and diffusion coefficients. The proposed model accurately fits the concentration profile along the whole elution time range, as checked by the computed standard deviation. Molecular weights obtained by this new method are similar to those obtained by MALS or traditional GPC, while polydispersity index values are intermediate between those obtained by traditional GPC combined with the Universal Calibration method and those from the MALS method. Pearson and Lin coefficients show an improvement in the correlation between polydispersity index values determined by the GPC and MALS methods when diffusion coefficients and the new method are used.
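    The core of the method treats each elution band as a Fickian (Gaussian) profile whose width yields an effective diffusion coefficient. A sketch for a single narrow standard, recovering D from the band's second moment; the peak position, D, dwell time, and time grid are all illustrative:

```python
import numpy as np

# A narrow standard eluting as a Fickian band:
# c(t) ~ exp(-(t - t0)^2 / (4 * D_eff * tau)), so variance = 2 * D_eff * tau.
t = np.linspace(0.0, 30.0, 3001)       # elution time grid (min)
t0, D_eff, tau = 15.0, 0.05, 10.0      # illustrative values
c = np.exp(-(t - t0) ** 2 / (4.0 * D_eff * tau))

# Recover the effective diffusion coefficient from the band's moments
# (simple Riemann sums on the uniform grid).
dt_ = t[1] - t[0]
area = c.sum() * dt_
mean = (t * c).sum() * dt_ / area
var = ((t - mean) ** 2 * c).sum() * dt_ / area
D_est = var / (2.0 * tau)
```

    A polydisperse chromatogram is then modeled as a sum of such bands, one per chain population, which is the decomposition the paper fits to the measured profile.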

  12. Damage Based Analysis (DBA) - Theory, Derivation and Practical Application Using Both an Acceleration and Pseudo Velocity Approach

    NASA Technical Reports Server (NTRS)

    Grillo, Vince

    2017-01-01

    The objective of this presentation is to give a brief overview of the theory behind the DBA method, an overview of the derivation, and a practical application of the theory using the Python computer language. The theory and derivation use both acceleration and pseudo velocity methods to derive a series of equations for processing in Python. We take the results, compare both the acceleration and pseudo velocity methods, and discuss the implementation of the Python functions. We also discuss the efficiency of the methods and the amount of computer time required for the solution. In conclusion, DBA offers a powerful method to evaluate the amount of energy imparted into a system, in the form of both amplitude and duration, during qualification testing and flight environments. Many forms of steady-state and transient vibratory motion can be characterized using this technique. DBA provides a more robust alternative to traditional methods such as Power Spectral Density (PSD) with a maximax approach.
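    As a point of reference for the PSD-based approach that DBA is set against, a Welch-style PSD estimate takes only a few lines of numpy. This is generic signal processing, not the presentation's code; the signal and parameters are illustrative:

```python
import numpy as np

fs = 1000.0                            # sample rate, Hz
t = np.arange(0.0, 8.0, 1.0 / fs)
# Illustrative "acceleration" record: 50 Hz tone plus broadband noise.
accel = (np.sin(2 * np.pi * 50.0 * t)
         + 0.5 * np.random.default_rng(4).normal(size=t.size))

nseg, step = 1024, 512                 # segment length, 50% overlap
window = np.hanning(nseg)
scale = fs * (window ** 2).sum()       # one-sided density normalization
segs = [accel[i:i + nseg] * window
        for i in range(0, accel.size - nseg + 1, step)]
psd = np.mean([np.abs(np.fft.rfft(s)) ** 2 / scale for s in segs], axis=0)
psd[1:-1] *= 2                         # fold in negative frequencies
freqs = np.fft.rfftfreq(nseg, 1.0 / fs)
peak_hz = freqs[psd.argmax()]          # dominant frequency of the record
```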

  13. Damage Based Analysis (DBA): Theory, Derivation and Practical Application - Using Both an Acceleration and Pseudo-Velocity Approach

    NASA Technical Reports Server (NTRS)

    Grillo, Vince

    2016-01-01

    The objective of this presentation is to give a brief overview of the theory behind the DBA method, an overview of the derivation, and a practical application of the theory using the Python computer language. The theory and derivation use both acceleration and pseudo velocity methods to derive a series of equations for processing in Python. We take the results, compare both the acceleration and pseudo velocity methods, and discuss the implementation of the Python functions. We also discuss the efficiency of the methods and the amount of computer time required for the solution. In conclusion, DBA offers a powerful method to evaluate the amount of energy imparted into a system, in the form of both amplitude and duration, during qualification testing and flight environments. Many forms of steady-state and transient vibratory motion can be characterized using this technique. DBA provides a more robust alternative to traditional methods such as Power Spectral Density (PSD) with a maximax approach.

  14. A stable 1D multigroup high-order low-order method

    DOE PAGES

    Yee, Ben Chung; Wollaber, Allan Benton; Haut, Terry Scot; ...

    2016-07-13

    The high-order low-order (HOLO) method is a recently developed moment-based acceleration scheme for solving time-dependent thermal radiative transfer problems, and it has been shown to exhibit orders-of-magnitude speedups over traditional time-stepping schemes. However, a linear stability analysis (Haut, T. S., Lowrie, R. B., Park, H., Rauenzahn, R. M., Wollaber, A. B. (2015). A linear stability analysis of the multigroup High-Order Low-Order (HOLO) method. In Proceedings of the Joint International Conference on Mathematics and Computation (M&C), Supercomputing in Nuclear Applications (SNA) and the Monte Carlo (MC) Method; Nashville, TN, April 19-23, 2015. American Nuclear Society.) revealed that the current formulation of the multigroup HOLO method was unstable in certain parameter regions. Since then, we have replaced the intensity-weighted opacity in the first angular moment equation of the low-order (LO) system with the Rosseland opacity. This results in a modified HOLO method (HOLO-R) that is significantly more stable.

  15. TSS concentration in sewers estimated from turbidity measurements by means of linear regression accounting for uncertainties in both variables.

    PubMed

    Bertrand-Krajewski, J L

    2004-01-01

    In order to replace traditional sampling and analysis techniques, turbidimeters can be used to estimate TSS concentration in sewers, by means of sensor- and site-specific empirical equations established by linear regression of on-site turbidity values T against TSS concentrations C measured in corresponding samples. As the ordinary least-squares method cannot account for measurement uncertainties in both the T and C variables, an appropriate regression method is used to resolve this difficulty and to correctly evaluate the uncertainty in TSS concentrations estimated from measured turbidity. The regression method is described, including detailed calculations of the variances and covariance of the regression parameters. An example application is given for a calibrated turbidimeter used in a combined sewer system, with data collected during three dry weather days. To show how the established regression can be used, an independent 24-hour dry weather turbidity series, recorded at 2 min intervals, is transformed into estimated TSS concentrations and compared to TSS concentrations measured in samples. The comparison is satisfactory and suggests that turbidity measurements could replace traditional samples. Further developments, including wet weather periods and other types of sensors, are suggested.
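    The simplest regression that admits error in both variables is Deming regression with a known error-variance ratio; the paper's method additionally propagates full variances and covariances of the parameters, which this sketch omits. The data below are simulated, not the study's sewer measurements:

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Deming regression: best-fit line when both x (turbidity) and
    y (TSS) carry measurement error; delta is the ratio of the y- to
    x-error variances, assumed known.  Returns (slope, intercept)."""
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    syy = np.sum((y - ym) ** 2)
    sxy = np.sum((x - xm) * (y - ym))
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2
                       + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
    return slope, ym - slope * xm

# Simulated calibration data: true line C = 1.8 T + 10, with equal
# measurement noise on both variables (hence delta = 1).
rng = np.random.default_rng(2)
T_true = rng.uniform(50.0, 400.0, 60)      # turbidity (NTU)
C_true = 1.8 * T_true + 10.0               # TSS (mg/L)
T = T_true + rng.normal(0.0, 5.0, 60)
C = C_true + rng.normal(0.0, 5.0, 60)
slope, intercept = deming(T, C, delta=1.0)
```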

  16. 3-D breast anthropometry of plus-sized women in South Africa.

    PubMed

    Pandarum, Reena; Yu, Winnie; Hunter, Lawrance

    2011-09-01

    Exploratory retail studies in South Africa indicate that plus-sized women experience problems and dissatisfaction with poorly fitting bras. The lack of 3-D anthropometric studies for the plus-size women's bra market initiated this research. 3-D body torso measurements were collected from a convenience sample of 176 plus-sized women in South Africa. 3-D breast measurements extracted from the TC(2) NX12-3-D body scanner 'breast module' software were compared with traditional tape measurements. Regression equations show that the two methods of measurement were highly correlated, although, on average, the bra cup size determining factor 'bust minus underbust' obtained from the 3-D method is approximately 11% smaller than that of the manual method. It was concluded that total bust volume, which correlated with quadrant volume (r = 0.81), cup length, bust length, and bust prominence, should be selected as the overall measure of bust size, rather than the traditional bust girth and underbust measurements. STATEMENT OF RELEVANCE: This study contributes new data and adds to the knowledge base of anthropometry and consumer ergonomics on bra fit and support, published in this, the Ergonomics journal, by Chen et al. (2010) on bra fit and White et al. (2009) on breast support during overground running.

  17. A fast and efficient method for device level layout analysis

    NASA Astrophysics Data System (ADS)

    Dong, YaoQi; Zou, Elaine; Pang, Jenny; Huang, Lucas; Yang, Legender; Zhang, Chunlei; Du, Chunshan; Hu, Xinyi; Wan, Qijian

    2017-03-01

    There is an increasing demand for device level layout analysis, especially as technology advances. The analysis studies standard cells by extracting and classifying critical dimension parameters. There are several parameters to extract, such as channel width, channel length, gate-to-active distance, and active-to-adjacent-active distance; for 14 nm technology, additional parameters are also of interest. On the one hand, these parameters are very important for studying standard cell structures and for SPICE model development, with the goal of improving standard cell manufacturing yield and optimizing circuit performance; on the other hand, a full-chip device statistics analysis can provide useful information for diagnosing yield issues. Device analysis is essential for standard cell customization and enhancement and for manufacturability failure diagnosis. A traditional parasitic parameter extraction tool like Calibre xRC is powerful, but it is not sufficient for this device level layout analysis application, as engineers would like to review, classify, and filter the data more easily. This paper presents a fast and efficient method based on Calibre equation-based DRC (eqDRC). Equation-based DRC extends traditional DRC technology to provide a flexible, programmable modeling engine that allows the end user to define grouped multi-dimensional feature measurements using flexible mathematical expressions. This paper demonstrates how such an engine and its programming language can be used to implement critical device parameter extraction. The device parameters are extracted and stored in a DFM database, which can be processed by Calibre YieldServer, a data processing tool that lets engineers query, manipulate, modify, and create data in a DFM database. These parameters, known as properties in the eqDRC language, can be annotated back onto the layout for easy review.
    Calibre DesignRev can create an HTML-formatted report of the results displayed in Calibre RVE, which makes it easy to share results among groups. This method has been proven and used by the SMIC PDE and SPICE teams.

  18. Estimation of premorbid general fluid intelligence using traditional Chinese reading performance in Taiwanese samples.

    PubMed

    Chen, Ying-Jen; Ho, Meng-Yang; Chen, Kwan-Ju; Hsu, Chia-Fen; Ryu, Shan-Jin

    2009-08-01

    The aims of the present study were (i) to investigate whether traditional Chinese word reading ability can be used to estimate premorbid general intelligence; and (ii) to provide multiple regression equations for estimating premorbid performance on Raven's Standard Progressive Matrices (RSPM), using age, years of education and Chinese Graded Word Reading Test (CGWRT) scores as predictor variables. Four hundred and twenty-six healthy volunteers (201 male, 225 female), aged 16-93 years (mean ± SD, 41.92 ± 18.19 years) undertook the tests individually under supervised conditions. Seventy percent of subjects were randomly allocated to the derivation group (n = 296), and the rest to the validation group (n = 130). RSPM score was positively correlated with CGWRT score and years of education. RSPM and CGWRT scores and years of education were also inversely correlated with age, but the declining trend for RSPM performance against age was steeper than that for CGWRT performance. Separate multiple regression equations were derived for estimating RSPM scores using different combinations of age, years of education, and CGWRT score for both groups. The multiple regression coefficient of each equation ranged from 0.71 to 0.80, with the standard error of estimate between 7 and 8 RSPM points. When fitting the data of one group to the equations derived from its counterpart group, the cross-validation multiple regression coefficients ranged from 0.71 to 0.79. There were no significant differences in the 'predicted-obtained' RSPM discrepancies between any equations. The regression equations derived in the present study may provide a basis for estimating premorbid RSPM performance.
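
    The kind of multiple regression equation described above can be sketched as ordinary least squares on age, education, and reading score. The data below are synthetic stand-ins (the coefficients and noise levels are hypothetical, not the study's derivation sample):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data standing in for the derivation sample:
# age (years), years of education, CGWRT score, and RSPM score.
n = 300
age = rng.uniform(16, 93, n)
edu = rng.uniform(0, 18, n)
cgwrt = 60 - 0.1 * age + 1.5 * edu + rng.normal(0, 5, n)
rspm = 40 - 0.25 * age + 0.5 * edu + 0.3 * cgwrt + rng.normal(0, 7, n)

# Ordinary least squares: RSPM ~ intercept + age + education + CGWRT.
X = np.column_stack([np.ones(n), age, edu, cgwrt])
beta, *_ = np.linalg.lstsq(X, rspm, rcond=None)

pred = X @ beta
residuals = rspm - pred
see = np.sqrt(residuals @ residuals / (n - X.shape[1]))  # standard error of estimate
r = np.corrcoef(pred, rspm)[0, 1]                        # multiple correlation R

print(f"coefficients: {beta}")
print(f"multiple R = {r:.2f}, SEE = {see:.2f} RSPM points")
```

Cross-validation as in the abstract would refit on one group and correlate predicted with obtained scores in the held-out group.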

  19. The Arrow of Time in the Collapse of Collisionless Self-gravitating Systems: Non-validity of the Vlasov-Poisson Equation during Violent Relaxation

    NASA Astrophysics Data System (ADS)

    Beraldo e Silva, Leandro; de Siqueira Pedra, Walter; Sodré, Laerte; Perico, Eder L. D.; Lima, Marcos

    2017-09-01

    The collapse of a collisionless self-gravitating system, with the fast achievement of a quasi-stationary state, is driven by violent relaxation, with a typical particle interacting with the time-changing collective potential. It is traditionally assumed that this evolution is governed by the Vlasov-Poisson equation, in which case entropy must be conserved. We run N-body simulations of isolated self-gravitating systems, using three simulation codes, NBODY-6 (direct summation without softening), NBODY-2 (direct summation with softening), and GADGET-2 (tree code with softening), for different numbers of particles and initial conditions. At each snapshot, we estimate the Shannon entropy of the distribution function with three different techniques: Kernel, Nearest Neighbor, and EnBiD. For all simulation codes and estimators, the entropy evolution converges to the same limit as N increases. During violent relaxation, the entropy has a fast increase followed by damping oscillations, indicating that violent relaxation must be described by a kinetic equation other than the Vlasov-Poisson equation, even for N as large as that of astronomical structures. This indicates that violent relaxation cannot be described by a time-reversible equation, shedding some light on the so-called “fundamental paradox of stellar dynamics.” The long-term evolution is well-described by the orbit-averaged Fokker-Planck model, with Coulomb logarithm values in the expected range 10-12. By means of NBODY-2, we also study the dependence of the two-body relaxation timescale on the softening length. The approach presented in the current work can potentially provide a general method for testing any kinetic equation intended to describe the macroscopic evolution of N-body systems.
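
    The nearest-neighbor entropy estimation mentioned above can be illustrated with a minimal Kozachenko-Leonenko estimator (k = 1) on a 1D Gaussian sample, where the differential entropy is known analytically. This is a generic sketch, not the simulation data or the Kernel/EnBiD estimators of the paper:

```python
import math
import numpy as np

def knn_entropy(samples):
    """Kozachenko-Leonenko (k=1) estimate of differential entropy in nats.

    samples: (n, d) array. A brute-force O(n^2) distance search keeps the
    sketch dependency-free; real analyses would use a k-d tree.
    """
    x = np.atleast_2d(samples)
    n, d = x.shape
    # Distance of each point to its nearest neighbor.
    dists = np.sqrt(((x[:, None, :] - x[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)
    r1 = dists.min(axis=1)
    v_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)  # unit-ball volume
    # H ~ (d/n) * sum(ln r_i) + ln V_d + ln(n-1) + Euler-Mascheroni gamma.
    return d * np.mean(np.log(r1)) + math.log(v_d) + math.log(n - 1) + np.euler_gamma

rng = np.random.default_rng(1)
sigma = 2.0
sample = rng.normal(0.0, sigma, size=(2000, 1))
h_est = knn_entropy(sample)
h_true = 0.5 * math.log(2 * math.pi * math.e * sigma**2)  # Gaussian entropy
print(f"estimated H = {h_est:.3f}, analytic H = {h_true:.3f}")
```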

  20. Gas-kinetic theory and Boltzmann equation of share price within an equilibrium market hypothesis and ad hoc strategy

    NASA Astrophysics Data System (ADS)

    Ausloos, M.

    2000-09-01

    Recent observations have indicated that the traditional equilibrium market hypothesis (EMH; also known as Efficient Market Hypothesis) is unrealistic. It is shown here that it is the analog of a Boltzmann equation in physics, thus having some bad properties of mean-field approximations like a Gaussian distribution of price fluctuations. A kinetic theory for prices can be simply derived, considering in a first approach that market actors have all identical relaxation times, and solved within a Chapman-Enskog like formalism. In closing the set of equations, (i) an equation of state with a pressure and (ii) the equilibrium (isothermal) equation for the price (taken as the order parameter) of a stock as a function of the volume of money available are obtained.

  1. Algorithms Bridging Quantum Computation and Chemistry

    NASA Astrophysics Data System (ADS)

    McClean, Jarrod Ryan

    The design of new materials and chemicals derived entirely from computation has long been a goal of computational chemistry, and the governing equation whose solution would permit this dream is known. Unfortunately, the exact solution to this equation has been far too expensive and clever approximations fail in critical situations. Quantum computers offer a novel solution to this problem. In this work, we develop not only new algorithms to use quantum computers to study hard problems in chemistry, but also explore how such algorithms can help us to better understand and improve our traditional approaches. In particular, we first introduce a new method, the variational quantum eigensolver, which is designed to maximally utilize the quantum resources available in a device to solve chemical problems. We apply this method in a real quantum photonic device in the lab to study the dissociation of the helium hydride (HeH+) molecule. We also enhance this methodology with architecture specific optimizations on ion trap computers and show how linear-scaling techniques from traditional quantum chemistry can be used to improve the outlook of similar algorithms on quantum computers. We then show how studying quantum algorithms such as these can be used to understand and enhance the development of classical algorithms. In particular we use a tool from adiabatic quantum computation, Feynman's Clock, to develop a new discrete time variational principle and further establish a connection between real-time quantum dynamics and ground state eigenvalue problems. We use these tools to develop two novel parallel-in-time quantum algorithms that outperform competitive algorithms as well as offer new insights into the connection between the fermion sign problem of ground states and the dynamical sign problem of quantum dynamics. Finally we use insights gained in the study of quantum circuits to explore a general notion of sparsity in many-body quantum systems. 
In particular we use developments from the field of compressed sensing to find compact representations of ground states. As an application we study electronic systems and find solutions dramatically more compact than traditional configuration interaction expansions, offering hope to extend this methodology to challenging systems in chemical and material design.
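
    The variational quantum eigensolver loop described above can be caricatured classically: a parameterized trial state, a measured energy expectation, and a classical outer optimization. The Hamiltonian below is a toy 2x2 matrix with made-up entries, not the HeH+ problem from the lab experiment:

```python
import numpy as np

# Toy Hamiltonian standing in for a molecular Hamiltonian in a minimal basis
# (hypothetical numbers): an off-diagonal coupling plus a diagonal splitting.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def ansatz(theta):
    """One-parameter variational state |psi> = cos(theta)|0> + sin(theta)|1>."""
    return np.array([np.cos(theta), np.sin(theta)])

def energy(theta):
    """Expectation <psi|H|psi> -- the quantity a quantum device would estimate."""
    psi = ansatz(theta)
    return psi @ H @ psi

# Classical outer loop: crude grid search over the single parameter.
thetas = np.linspace(0, np.pi, 2001)
e_vqe = min(energy(t) for t in thetas)

e_exact = np.linalg.eigvalsh(H)[0]  # exact ground-state energy for comparison
print(f"VQE energy = {e_vqe:.6f}, exact = {e_exact:.6f}")
```

On hardware the expectation value is estimated from repeated measurements rather than computed exactly, and the optimizer must tolerate that shot noise.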

  2. 3D superwide-angle one-way propagator and its application in seismic modeling and imaging

    NASA Astrophysics Data System (ADS)

    Jia, Xiaofeng; Jiang, Yunong; Wu, Ru-Shan

    2018-07-01

    Traditional one-way wave-equation based propagators have been widely used in past decades. Compared to two-way propagators, one-way methods have higher efficiency and lower memory demands. These two features are especially important in solving large-scale 3D problems. However, regular one-way propagators cannot simulate waves propagating at large angles approaching 90° because of their inherent wide-angle limitation. Traditional one-way propagators can only march the wavefield along a predetermined direction (e.g., the z-direction), so simulation of turning waves is beyond the ability of one-way methods. We develop a 3D superwide-angle one-way propagator to overcome the angle limitation and to simulate turning waves with propagation angles beyond 90° for modeling and imaging complex geological structures. Wavefields propagating along the vertical and horizontal directions are combined using a stacking scheme. A weight function related to the propagation angle is used for combining and updating the wavefields in each propagation step. In the implementation, we use graphics processing units (GPUs) to accelerate the process. The workflow is designed to exploit the advantages of the GPU architecture. Numerical examples show that the method achieves higher accuracy in modeling and imaging steep structures than regular one-way propagators. The superwide-angle one-way propagator can be built on any one-way method to improve seismic modeling and imaging.

  3. Mathematical-Artificial Neural Network Hybrid Model to Predict Roll Force during Hot Rolling of Steel

    NASA Astrophysics Data System (ADS)

    Rath, S.; Sengupta, P. P.; Singh, A. P.; Marik, A. K.; Talukdar, P.

    2013-07-01

    Accurate prediction of roll force during hot strip rolling is essential for model-based operation of hot strip mills. Traditionally, mathematical models based on the theory of plastic deformation have been used for prediction of roll force. In the last decade, data-driven models like artificial neural networks have been tried for prediction of roll force. Pure mathematical models have accuracy limitations, whereas data-driven models have difficulty in convergence when applied to industrial conditions. Hybrid models integrating traditional mathematical formulations and data-driven methods are being developed in different parts of the world. This paper discusses the methodology of development of an innovative hybrid mathematical-artificial neural network model. In the mathematical model, the most important factor influencing accuracy is the flow stress of steel. Coefficients of a standard flow stress equation, calculated by a parameter estimation technique, have been used in the model. The hybrid model has been trained and validated with input and output data collected from the finishing stands of the Hot Strip Mill, Bokaro Steel Plant, India. It has been found that the hybrid model improves prediction accuracy over the traditional mathematical model.
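
    The parameter-estimation step mentioned above can be sketched for a hypothetical power-law flow-stress equation, sigma = A * strain^n * rate^m, fitted by log-linear least squares. The "measured" data and true coefficients below are illustrative, not mill data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical flow-stress measurements generated from a power law with
# multiplicative noise (A_true, n_true, m_true are illustrative values).
A_true, n_true, m_true = 150.0, 0.20, 0.10
strain = rng.uniform(0.05, 0.5, 200)
rate = rng.uniform(1.0, 50.0, 200)          # strain rate, 1/s
sigma = A_true * strain**n_true * rate**m_true * np.exp(rng.normal(0, 0.02, 200))

# Log-linearize: ln(sigma) = ln(A) + n*ln(strain) + m*ln(rate), then least squares.
X = np.column_stack([np.ones_like(strain), np.log(strain), np.log(rate)])
coef, *_ = np.linalg.lstsq(X, np.log(sigma), rcond=None)
A_fit, n_fit, m_fit = np.exp(coef[0]), coef[1], coef[2]
print(f"A = {A_fit:.1f}, n = {n_fit:.3f}, m = {m_fit:.3f}")
```

In a hybrid scheme, a neural network would then be trained on the residuals between this physics-based prediction and measured roll forces.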

  4. Non-linear eigensolver-based alternative to traditional SCF methods

    NASA Astrophysics Data System (ADS)

    Gavin, Brendan; Polizzi, Eric

    2013-03-01

    The self-consistent iterative procedure in Density Functional Theory calculations is revisited using a new, highly efficient and robust algorithm for solving the non-linear eigenvector problem (i.e., H(X)X = EX) of the Kohn-Sham equations. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm, and provides a fundamental and practical numerical solution for addressing the non-linearity of the Hamiltonian with the occupied eigenvectors. In contrast to SCF techniques, the traditional outer iterations are replaced by subspace iterations that are intrinsic to the FEAST algorithm, while the non-linearity is handled at the level of a projected reduced system which is orders of magnitude smaller than the original one. Using a series of numerical examples, it will be shown that our approach can outperform the traditional SCF mixing techniques such as Pulay-DIIS by providing a high convergence rate and by converging to the correct solution regardless of the choice of the initial guess. We also discuss a practical implementation of the technique that can be achieved effectively using the FEAST solver package. This research is supported by NSF under Grant #ECCS-0846457 and Intel Corporation.

  5. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    PubMed

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.
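
    FIML maximizes the observed-data likelihood case by case rather than discarding incomplete cases. For a toy bivariate normal with values missing at random, the same maximum-likelihood estimate can be obtained with a small EM loop; this is a stand-in sketch (synthetic data, no MSEM machinery) showing why complete-case analysis is biased while direct ML is not:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy bivariate-normal data: y2 is missing for roughly half the cases, with
# missingness depending on the observed y1 (MAR), which biases complete-case
# analysis but not maximum likelihood.
n = 5000
mu_true = np.array([0.0, 1.0])
cov_true = np.array([[1.0, 0.6], [0.6, 1.5]])
y = rng.multivariate_normal(mu_true, cov_true, n)
miss = rng.random(n) < 1.0 / (1.0 + np.exp(-y[:, 0]))
y2_obs = np.where(miss, np.nan, y[:, 1])

# EM iterations for the ML (FIML-equivalent) estimate of mean and covariance.
mu = np.array([0.0, 0.0])
cov = np.eye(2)
for _ in range(200):
    # E-step: fill in E[y2 | y1] and E[y2^2 | y1] for missing cases.
    b = cov[0, 1] / cov[0, 0]
    cond_mean = mu[1] + b * (y[:, 0] - mu[0])
    cond_var = cov[1, 1] - b * cov[0, 1]
    e_y2 = np.where(np.isnan(y2_obs), cond_mean, y2_obs)
    e_y2sq = np.where(np.isnan(y2_obs), cond_mean**2 + cond_var, y2_obs**2)
    # M-step: update mean and covariance from completed sufficient statistics.
    mu = np.array([y[:, 0].mean(), e_y2.mean()])
    cov = np.array([
        [y[:, 0].var(), np.mean(y[:, 0] * e_y2) - mu[0] * mu[1]],
        [0.0, np.mean(e_y2sq) - mu[1] ** 2],
    ])
    cov[1, 0] = cov[0, 1]

cc_mean = np.nanmean(y2_obs)  # complete-case estimate, biased under MAR
print(f"ML mean of y2 = {mu[1]:.3f}, complete-case mean = {cc_mean:.3f}, true = 1.0")
```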

  6. The Importance of the Numerical Resolution of the Laplace Equation in the optimization of a Neuronal Stimulation Technique

    NASA Astrophysics Data System (ADS)

    Faria, Paula

    2010-09-01

    For the past few years, the potential of transcranial direct current stimulation (tDCS) for the treatment of several pathologies has been investigated. Knowledge of the current density distribution is an important factor in optimizing such applications of tDCS. To this end, we used the finite element method to solve the Laplace equation in a spherical head model in order to investigate the three-dimensional distribution of the current density and the variation of its intensity with depth for different electrode montages: the traditional montage with two sponge electrodes, and new montages combining sponge and EEG electrodes or using EEG electrodes alone with varying numbers of electrodes. The simulation results confirm the effectiveness of the mixed system, which may allow the use of tDCS and EEG recording concomitantly and may help to optimize this neuronal stimulation technique. The numerical results were used in a promising application of tDCS in epilepsy.
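
    The core numerical task above, solving the Laplace equation with fixed electrode potentials, can be sketched with a finite-difference Jacobi iteration on a unit square (a deliberately crude stand-in for the finite-element spherical head model; the boundary values are hypothetical electrode potentials):

```python
import numpy as np

# Jacobi iteration for the Laplace equation on a square grid.
# Dirichlet data: potential 1 on the top edge (an "electrode"), 0 elsewhere.
n = 41
u = np.zeros((n, n))
u[0, :] = 1.0  # top boundary held at unit potential

for _ in range(5000):
    u_new = u.copy()
    # Each interior point becomes the average of its four neighbors.
    u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
    if np.max(np.abs(u_new - u)) < 1e-7:
        u = u_new
        break
    u = u_new

# Discrete Laplace residual: interior points should equal the neighbor mean.
residual = np.max(np.abs(u[1:-1, 1:-1] - 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                                 + u[1:-1, :-2] + u[1:-1, 2:])))
center = u[n // 2, n // 2]  # by symmetry the exact center value is 0.25
print(f"max residual = {residual:.2e}, center potential = {center:.3f}")
```

Current density then follows from the gradient of the potential, J = -sigma * grad(u).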

  7. Continuous-time adaptive critics.

    PubMed

    Hanselmann, Thomas; Noakes, Lyle; Zaknich, Anthony

    2007-05-01

    A continuous-time formulation of an adaptive critic design (ACD) is investigated. Connections to the discrete case are made, where backpropagation through time (BPTT) and real-time recurrent learning (RTRL) are prevalent. Practical benefits are that this framework fits in well with plant descriptions given by differential equations and that any standard integration routine with adaptive step-size does adaptive sampling for free. A second-order actor adaptation using Newton's method is established for fast actor convergence for a general plant and critic. Also, a fast critic update for concurrent actor-critic training is introduced to immediately apply necessary adjustments of critic parameters induced by actor updates, keeping the Bellman optimality correct to first-order approximation after actor changes. Thus, critic and actor updates may be performed at the same time until substantial error builds up in the Bellman optimality or temporal difference equation, when a traditional critic training needs to be performed and then another interval of concurrent actor-critic training may resume.

  8. The importance of the boundary condition in the transport of intensity equation based phase measurement

    NASA Astrophysics Data System (ADS)

    Zhang, Jialin; Chen, Qian; Li, Jiaji; Zuo, Chao

    2017-02-01

    The transport of intensity equation (TIE) is a powerful tool for direct quantitative phase retrieval in microscopy imaging. However, problems can arise when dealing with the boundary condition of the TIE. Previous work introduced a hard-edged aperture at the camera port of a traditional bright field microscope to generate the boundary signal for the TIE solver. Under this Neumann boundary condition, the quantitative phase can be obtained without any assumption or prior knowledge about the test object and the setup. In this paper, we demonstrate the effectiveness of this method with practical experiments. A microlens array is used to compare the results of TIE solvers with and without the aperture, and this accurate quantitative phase imaging technique allows measuring cell dry mass, which is used in biology to follow the cell cycle, to investigate cell metabolism, or to address the effects of drugs.
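
    For uniform in-focus intensity, the TIE reduces to a Poisson equation for the phase, and the common FFT solver implicitly assumes periodic boundaries; the aperture method above exists precisely because real data do not satisfy that assumption. A minimal sketch of the periodic-boundary FFT inversion on synthetic data:

```python
import numpy as np

# FFT-based TIE inversion under a *periodic* boundary assumption.
# All quantities below are synthetic and illustrative.
n = 128
k = 2 * np.pi / 0.5e-6            # wavenumber for 500 nm light
I0 = 1.0                          # uniform in-focus intensity
f = np.fft.fftfreq(n, d=1e-6)     # spatial frequencies for a 1 um pixel pitch
kx, ky = np.meshgrid(2 * np.pi * f, 2 * np.pi * f, indexing="ij")
k2 = kx**2 + ky**2

# A smooth periodic "true" phase built from a few Fourier modes.
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
phi_true = np.sin(t)[:, None] * np.cos(2 * t)[None, :]

# Forward TIE with uniform intensity: dI/dz = -(I0 / k) * laplacian(phi).
lap_phi = np.fft.ifft2(-k2 * np.fft.fft2(phi_true)).real
dIdz = -(I0 / k) * lap_phi

# Inverse: phi = inverse-laplacian of -(k / I0) * dI/dz, zeroing the DC term.
rhs = np.fft.fft2(-(k / I0) * dIdz)
inv_k2 = np.zeros_like(k2)
inv_k2[k2 > 0] = 1.0 / k2[k2 > 0]
phi_rec = np.fft.ifft2(-rhs * inv_k2).real
phi_rec -= phi_rec.mean()  # phase is recovered only up to a constant

err = np.max(np.abs(phi_rec - (phi_true - phi_true.mean())))
print(f"max reconstruction error = {err:.2e} rad")
```

With non-periodic fields, this inversion produces boundary artifacts, which is what the measured aperture boundary signal corrects.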

  9. Quantitative measurement for the microstructural parameters of nano-precipitates in Al-Mg-Si-Cu alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Kai

    Size, number density and volume fraction of nano-precipitates are important microstructural parameters controlling the strengthening of materials. In this work a widely accessible, convenient, moderately time-efficient method with acceptable accuracy and precision is provided for measurement of the volume fraction of nano-precipitates in crystalline materials. The method is based on the traditional but highly accurate technique of measuring foil thickness via convergent beam electron diffraction (CBED). A new equation is proposed and verified with the aid of 3-dimensional atom probe (3DAP) analysis, to compensate for the additional error resulting from the barely distinguishable contrast of short incomplete precipitates cut by the foil surface. The method can be performed on a regular foil specimen with a modern LaB6 or field-emission-gun transmission electron microscope. Precisions around ± 16% have been obtained for precipitate volume fractions of needle-like β″/C and Q precipitates in an aged Al-Mg-Si-Cu alloy. The measured number density is close to that directly obtained using 3DAP analysis, with a misfit of 4.5%, and the estimated precision for number density measurement is about ± 11%. The limitations of the method are also discussed. - Highlights: •A facile method for measuring volume fraction of nano-precipitates based on CBED •An equation to compensate for small invisible precipitates, with 3DAP verification •Precisions around ± 16% for volume fraction and ± 11% for number density.

  10. Solution of weakly compressible isothermal flow in landfill gas collection networks

    NASA Astrophysics Data System (ADS)

    Nec, Y.; Huculak, G.

    2017-12-01

    Pipe networks collecting gas in sanitary landfills operate under the regime of a weakly compressible isothermal flow of ideal gas. The effect of compressibility has been traditionally neglected in this application in favour of simplicity, thereby creating a conceptual incongruity between the flow equations and thermodynamic equation of state. Here the flow is solved by generalisation of the classic Darcy-Weisbach equation for an incompressible steady flow in a pipe to an ordinary differential equation, permitting continuous variation of density, viscosity and related fluid parameters, as well as head loss or gain due to gravity, in isothermal flow. The differential equation is solved analytically in the case of ideal gas for a single edge in the network. Thereafter the solution is used in an algorithm developed to construct the flow equations automatically for a network characterised by an incidence matrix, and determine pressure distribution, flow rates and all associated parameters therein.
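
    The single-edge solution described above can be sketched for a horizontal pipe: with the ideal-gas law rho = p/(RT), the Darcy-Weisbach relation becomes the ODE dp/dx = -(f/2D) G^2 RT/p, whose analytic solution is p_out^2 = p_in^2 - f G^2 RT L / D. The numbers below are illustrative (air properties, not landfill-gas design data):

```python
import math

# Weakly compressible isothermal ideal-gas pipe flow, gravity neglected,
# constant Darcy friction factor. All parameter values are illustrative.
f = 0.02          # Darcy friction factor
D = 0.1           # pipe diameter, m
L = 500.0         # pipe length, m
R = 287.0         # specific gas constant, J/(kg K)
T = 293.0         # temperature, K
G = 2.0           # mass flux, kg/(m^2 s)
p_in = 101325.0   # inlet pressure, Pa

def dpdx(p):
    """dp/dx = -(f / 2D) * G^2 / rho with rho = p / (R T)."""
    return -f * G**2 * R * T / (2 * D * p)

# March the ODE along the pipe with small Euler steps.
steps = 100000
dx = L / steps
p = p_in
for _ in range(steps):
    p += dpdx(p) * dx

# Closed-form isothermal solution for comparison.
p_exact = math.sqrt(p_in**2 - f * G**2 * R * T * L / D)
print(f"numerical p_out = {p:.1f} Pa, analytic p_out = {p_exact:.1f} Pa")
```

The network solve then couples many such edge solutions through mass balance at the junctions encoded in the incidence matrix.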

  11. Nano-Scale Characterization of Al-Mg Nanocrystalline Alloys

    NASA Astrophysics Data System (ADS)

    Harvey, Evan; Ladani, Leila

    Materials with nano-scale microstructure have become increasingly popular due to their substantially increased strengths. The increase in strength with decreasing grain size is described by the Hall-Petch equation. With increased interest in the miniaturization of components, methods of mechanical characterization of small volumes of material are necessary because traditional means such as tensile testing become increasingly difficult with such small test specimens. This study seeks to characterize the elastic-plastic properties of nanocrystalline Al-5083 through nanoindentation and related data analysis techniques. By using nanoindentation, accurate predictions of the elastic modulus and hardness of the alloy were attained. Also, the employed data analysis model provided reasonable estimates of the plastic properties (strain-hardening exponent and yield stress), lending credibility to this procedure as an accurate, full mechanical characterization method.
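
    The Hall-Petch relation referenced above, sigma_y = sigma_0 + k / sqrt(d), can be illustrated directly. The constants below are hypothetical placeholders, not measured values for Al-5083:

```python
import math

# Hall-Petch: yield strength rises as grain size d shrinks.
sigma_0 = 100.0   # friction stress, MPa (hypothetical)
k_hp = 0.15       # Hall-Petch coefficient, MPa * m^0.5 (hypothetical)

def yield_strength(d_m):
    """Yield strength in MPa for grain size d_m in metres."""
    return sigma_0 + k_hp / math.sqrt(d_m)

coarse = yield_strength(10e-6)   # 10 um grains
nano = yield_strength(100e-9)    # 100 nm grains
print(f"10 um grains: {coarse:.0f} MPa; 100 nm grains: {nano:.0f} MPa")
```

Note that at very small grain sizes (roughly below ~20 nm) real materials often deviate from, or even invert, this trend.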

  12. Structures of rotating traditional neutron stars and hyperon stars in the relativistic σ-ω model

    NASA Astrophysics Data System (ADS)

    Wen, De-hua; Chen, Wei; Wang, Xian-ju; Ai, Bao-quan; Liu, Guo-tao; Dong, Dong-qiao; Liu, Liang-gang

    The influence of rotation on the total masses and radii of neutron stars is calculated by Hartle's slow-rotation formalism, while the equation of state is considered in a relativistic σ-ω model. As the changes of the mass and radius of a real neutron star caused by rotation are very small in comparison with the total mass and radius, Hartle's approximate method is rational for dealing with rotating neutron stars. If three properties, mass, radius and period, are observed for the same neutron star, then the EOS of this neutron star could be determined entirely.

  13. Flutter and forced response of mistuned rotors using standing wave analysis

    NASA Technical Reports Server (NTRS)

    Dugundji, J.; Bundas, D. J.

    1983-01-01

    A standing wave approach is applied to the analysis of the flutter and forced response of tuned and mistuned rotors. The traditional traveling wave cascade airforces are recast into standing wave arbitrary motion form using Pade approximants, and the resulting equations of motion are written in the matrix form. Applications for vibration modes, flutter, and forced response are discussed. It is noted that the standing wave methods may prove to be more versatile for dealing with certain applications, such as coupling flutter with forced response and dynamic shaft problems, transient impulses on the rotor, low-order engine excitation, bearing motions, and mistuning effects in rotors.

  14. Flutter and forced response of mistuned rotors using standing wave analysis

    NASA Technical Reports Server (NTRS)

    Bundas, D. J.; Dugundji, J.

    1983-01-01

    A standing wave approach is applied to the analysis of the flutter and forced response of tuned and mistuned rotors. The traditional traveling wave cascade airforces are recast into standing wave arbitrary motion form using Pade approximants, and the resulting equations of motion are written in the matrix form. Applications for vibration modes, flutter, and forced response are discussed. It is noted that the standing wave methods may prove to be more versatile for dealing with certain applications, such as coupling flutter with forced response and dynamic shaft problems, transient impulses on the rotor, low-order engine excitation, bearing motion, and mistuning effects in rotors.

  15. Meros Preconditioner Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2004-04-01

    Meros uses the compositional, aggregation, and operator overloading capabilities of TSF to provide an object-oriented package of segregated/block preconditioners for linear systems arising from fully-coupled Navier-Stokes problems. This class of preconditioners exploits the special properties of these problems to segregate the equations and use multi-level preconditioners (through ML) on the matrix sub-blocks. Several preconditioners are provided, including the Fp and BFB preconditioners of Kay & Loghin and of Silvester, Elman, Kay & Wathen. The overall performance and scalability of these preconditioners approaches that of multigrid for certain types of problems. Meros also provides more traditional pressure projection methods, including SIMPLE and SIMPLEC.

  16. Analytic descriptions of cylindrical electromagnetic waves in a nonlinear medium

    PubMed Central

    Xiong, Hao; Si, Liu-Gang; Yang, Xiaoxue; Wu, Ying

    2015-01-01

    A simple but highly efficient approach for dealing with the problem of cylindrical electromagnetic wave propagation in a nonlinear medium is proposed, based on a recently reported exact solution. We derive an analytical explicit formula, which exhibits rich and interesting nonlinear effects, to describe the propagation of any number of cylindrical electromagnetic waves in a nonlinear medium. The results obtained by using the present method agree closely with the results of using traditional coupled-wave equations. As an example of application, we discuss how a third wave affects the sum- and difference-frequency generation of two waves propagating in the nonlinear medium. PMID:26073066

  17. The research of radar target tracking observed information linear filter method

    NASA Astrophysics Data System (ADS)

    Chen, Zheng; Zhao, Xuanzhi; Zhang, Wen

    2018-05-01

    To address the problem of low precision, or even divergence, caused by the nonlinear observation equation in radar target tracking, a new filtering algorithm is proposed in this paper. In this algorithm, local linearization is carried out on the observed distance and angle data respectively. The Kalman filter is then applied to the linearized data. After the data are filtered, a mapping operation provides the posterior estimate of the target state. A large number of simulation results show that this algorithm can solve the above problems effectively, and its performance is better than the traditional filtering algorithm for nonlinear dynamic systems.
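
    After the local linearization step, the measurement model is linear and the standard Kalman recursion applies. A minimal 1D constant-velocity sketch on synthetic data (illustrative noise levels, not the paper's radar scenario):

```python
import numpy as np

rng = np.random.default_rng(4)

# Constant-velocity Kalman filter on linear(ized) position measurements.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [position, velocity]
Hm = np.array([[1.0, 0.0]])             # we observe position only
Q = 1e-3 * np.eye(2)                    # process noise covariance
Rm = np.array([[0.5**2]])               # measurement noise covariance

truth = np.array([0.0, 5.0])            # true target: 5 m/s
x = np.array([0.0, 0.0])                # filter state estimate
P = 10.0 * np.eye(2)                    # state covariance

errors = []
for _ in range(300):
    truth = F @ truth
    z = Hm @ truth + rng.normal(0, 0.5, 1)
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update.
    S = Hm @ P @ Hm.T + Rm
    K = P @ Hm.T @ np.linalg.inv(S)
    x = x + K @ (z - Hm @ x)
    P = (np.eye(2) - K @ Hm) @ P
    errors.append(abs(x[0] - truth[0]))

print(f"final position error = {errors[-1]:.3f} m, velocity estimate = {x[1]:.2f} m/s")
```

In the radar setting, range and bearing channels would each be linearized and filtered, and the mapping step converts the filtered measurements back to a Cartesian state estimate.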

  18. Comparison of local grid refinement methods for MODFLOW

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.; Leake, S.A.

    2006-01-01

    Many ground water modeling efforts use a finite-difference method to solve the ground water flow equation, and many of these models require a relatively fine-grid discretization to accurately represent the selected process in limited areas of interest. Use of a fine grid over the entire domain can be computationally prohibitive; using a variably spaced grid can lead to cells with a large aspect ratio and refinement in areas where detail is not needed. One solution is to use local-grid refinement (LGR) whereby the grid is only refined in the area of interest. This work reviews some LGR methods and identifies advantages and drawbacks in test cases using MODFLOW-2000. The first test case is two dimensional and heterogeneous; the second is three dimensional and includes interaction with a meandering river. Results include simulations using a uniform fine grid, a variably spaced grid, a traditional method of LGR without feedback, and a new shared node method with feedback. Discrepancies from the solution obtained with the uniform fine grid are investigated. For the models tested, the traditional one-way coupled approaches produced discrepancies in head up to 6.8% and discrepancies in cell-to-cell fluxes up to 7.1%, while the new method has head and cell-to-cell flux discrepancies of 0.089% and 0.14%, respectively. Additional results highlight the accuracy, flexibility, and CPU time trade-off of these methods and demonstrate how the new method can be successfully implemented to model surface water-ground water interactions. Copyright © 2006 The Author(s).

  19. [Series: Utilization of Differential Equations and Methods for Solving Them in Medical Physics (2)].

    PubMed

    Murase, Kenya

    2015-01-01

    In this issue, symbolic methods for solving differential equations were first introduced. Among the symbolic methods, the Laplace transform method was introduced together with some examples in which it was applied to solving the differential equations derived from a two-compartment kinetic model and an equivalent circuit model for membrane potential. Second, series expansion methods for solving differential equations were introduced together with some examples in which these methods were used to solve Bessel's and Legendre's differential equations. In the next issue, simultaneous differential equations and various methods for solving them will be introduced together with some examples in medical physics.
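
    The two-compartment kinetic model mentioned above is a natural Laplace-transform example. For dC1/dt = -k1*C1 and dC2/dt = k1*C1 - k2*C2 with C1(0) = C0, C2(0) = 0, transforming gives C2(s) = k1*C0 / ((s + k1)(s + k2)), which inverts by partial fractions to C2(t) = k1*C0*(exp(-k1*t) - exp(-k2*t)) / (k2 - k1) for k1 != k2. A numeric cross-check with illustrative rate constants:

```python
import math

# Two-compartment kinetics: Laplace-transform solution vs. direct Euler
# integration. Rate constants are illustrative.
k1, k2, C0 = 0.5, 0.2, 1.0

def c2_laplace(t):
    """Closed-form C2(t) obtained by inverting the Laplace transform."""
    return k1 * C0 * (math.exp(-k1 * t) - math.exp(-k2 * t)) / (k2 - k1)

# Direct Euler integration of the ODE system as a cross-check.
dt = 1e-4
c1, c2 = C0, 0.0
for _ in range(50000):  # integrate to t = 5
    c1, c2 = c1 + dt * (-k1 * c1), c2 + dt * (k1 * c1 - k2 * c2)

print(f"Laplace C2(5) = {c2_laplace(5.0):.5f}, Euler C2(5) = {c2:.5f}")
```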

  20. Validation of vibration-dissociation coupling models in hypersonic non-equilibrium separated flows

    NASA Astrophysics Data System (ADS)

    Shoev, G.; Oblapenko, G.; Kunova, O.; Mekhonoshina, M.; Kustova, E.

    2018-03-01

    The validation of recently developed models of vibration-dissociation coupling is discussed in application to numerical solutions of the Navier-Stokes equations in a two-temperature approximation for a binary N2/N flow. Vibrational-translational relaxation rates are computed using the Landau-Teller formula generalized for strongly non-equilibrium flows, obtained in the framework of the Chapman-Enskog method. Dissociation rates are calculated using the modified Treanor-Marrone model, taking into account the dependence of the model parameter on the vibrational state. The solutions are compared to those obtained using the traditional Landau-Teller and Treanor-Marrone models, and it is shown that for high-enthalpy flows the traditional and recently developed models can give significantly different results. The computed heat flux and pressure on the surface of a double cone are in good agreement with experimental data available in the literature for low-enthalpy flow with strong thermal non-equilibrium. The computed heat flux on a double wedge qualitatively agrees with available data for high-enthalpy non-equilibrium flows. Different contributions to the heat flux calculated using rigorous kinetic theory methods are evaluated. The quantitative discrepancy between numerical and experimental data is discussed.
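
    The classical Landau-Teller model referenced above drives the vibrational energy toward its equilibrium value: dE_v/dt = (E_eq - E_v) / tau, with solution E_v(t) = E_eq + (E_v0 - E_eq)*exp(-t/tau). A minimal sketch with illustrative values (not N2/N flow data):

```python
import math

# Landau-Teller vibrational relaxation toward equilibrium.
tau = 2.0e-6      # relaxation time, s (illustrative)
E_eq = 1.0        # equilibrium vibrational energy (arbitrary units)
E_v0 = 0.1        # initial (frozen) vibrational energy

def e_v(t):
    """Analytic solution of dE_v/dt = (E_eq - E_v) / tau."""
    return E_eq + (E_v0 - E_eq) * math.exp(-t / tau)

# After one relaxation time the gap to equilibrium shrinks by a factor of e.
gap0 = E_eq - E_v0
gap_tau = E_eq - e_v(tau)
print(f"E_v(tau) = {e_v(tau):.4f}; gap ratio = {gap_tau / gap0:.4f}")
```

The generalized models in the paper replace the constant tau with state-dependent rates derived from the Chapman-Enskog treatment.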

  1. Maximum Likelihood Estimations and EM Algorithms with Length-biased Data

    PubMed Central

    Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu

    2012-01-01

    Length-biased sampling has been well recognized in economics, industrial reliability, etiology, epidemiology, genetics, and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimators are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semiparametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840

  2. Dynamics of a split torque helicopter transmission

    NASA Astrophysics Data System (ADS)

    Krantz, Timothy L.

    1994-06-01

    Split torque designs, proposed as alternatives to traditional planetary designs for helicopter main rotor transmissions, can save weight and be more reliable than traditional designs. This report presents the results of an analytical study of the system dynamics and performance of a split torque gearbox that uses a balance beam mechanism for load sharing. The Lagrange method was applied to develop a system of equations of motion. The mathematical model includes time-varying gear mesh stiffness, friction, and manufacturing errors. Cornell's method for calculating the stiffness of spur gear teeth was extended and applied to helical gears. The phenomenon of sidebands spaced at shaft frequencies about gear mesh fundamental frequencies was simulated by modeling total composite gear errors as sinusoid functions. Although the gearbox has symmetric geometry, the loads and motions of the two power paths differ. Friction must be considered to properly evaluate the balance beam mechanism. For the design studied, the balance beam is not an effective device for load sharing unless the coefficient of friction is less than 0.003. The complete system stiffness as represented by the stiffness matrix used in this analysis must be considered to precisely determine the optimal tooth indexing position.
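    As a sketch of the kind of system the Lagrange method produces here, the following reduces the idea to a two-inertia translational analogue with constant mesh stiffness and no friction (the paper's model, with time-varying stiffness, friction, and manufacturing errors, is far richer; all numbers are illustrative). Integrating with a fourth-order Runge-Kutta scheme and checking energy conservation is a standard sanity test for such equations of motion.

```python
import numpy as np

# Two masses coupled by a "mesh" spring: a translational stand-in for the
# torsional equations of motion derived via the Lagrange method.
m1, m2, k = 1.0, 2.0, 400.0

def deriv(y):
    x1, v1, x2, v2 = y
    f = k * (x1 - x2)            # spring force on mass 1 (reaction on mass 2)
    return np.array([v1, -f / m1, v2, f / m2])

def rk4_step(y, dt):
    k1 = deriv(y)
    k2 = deriv(y + 0.5 * dt * k1)
    k3 = deriv(y + 0.5 * dt * k2)
    k4 = deriv(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(y):
    x1, v1, x2, v2 = y
    return 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2 + 0.5 * k * (x1 - x2)**2

y = np.array([0.01, 0.0, 0.0, 0.0])   # small initial mesh deflection
E0 = energy(y)
dt = 1e-4
for _ in range(20_000):               # 2 s of motion
    y = rk4_step(y, dt)
drift = abs(energy(y) - E0) / E0      # relative energy drift of the integrator
```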

  3. Dynamics of a split torque helicopter transmission. M.S. Thesis - Cleveland State Univ.

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    1994-01-01

    Split torque designs, proposed as alternatives to traditional planetary designs for helicopter main rotor transmissions, can save weight and be more reliable than traditional designs. This report presents the results of an analytical study of the system dynamics and performance of a split torque gearbox that uses a balance beam mechanism for load sharing. The Lagrange method was applied to develop a system of equations of motion. The mathematical model includes time-varying gear mesh stiffness, friction, and manufacturing errors. Cornell's method for calculating the stiffness of spur gear teeth was extended and applied to helical gears. The phenomenon of sidebands spaced at shaft frequencies about gear mesh fundamental frequencies was simulated by modeling total composite gear errors as sinusoid functions. Although the gearbox has symmetric geometry, the loads and motions of the two power paths differ. Friction must be considered to properly evaluate the balance beam mechanism. For the design studied, the balance beam is not an effective device for load sharing unless the coefficient of friction is less than 0.003. The complete system stiffness as represented by the stiffness matrix used in this analysis must be considered to precisely determine the optimal tooth indexing position.

  4. Analysis and optimization of cyclic methods in orbit computation

    NASA Technical Reports Server (NTRS)

    Pierce, S.

    1973-01-01

    The mathematical analysis and computation of the K=3, order 4; K=4, order 6; and K=5, order 7 cyclic methods and the K=5, order 6 Cowell method and some results of optimizing the 3 backpoint cyclic multistep methods for solving ordinary differential equations are presented. Cyclic methods have the advantage over traditional methods of having higher order for a given number of backpoints while at the same time having more free parameters. After considering several error sources the primary source for the cyclic methods has been isolated. The free parameters for three backpoint methods were used to minimize the effects of some of these error sources. They now yield more accuracy with the same computing time as Cowell's method on selected problems. This work is being extended to the five backpoint methods. The analysis and optimization are more difficult here since the matrices are larger and the dimension of the optimizing space is larger. Indications are that the primary error source can be reduced. This will still leave several parameters free to minimize other sources.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yee, Ben Chung; Wollaber, Allan Benton; Haut, Terry Scot

    The high-order low-order (HOLO) method is a recently developed moment-based acceleration scheme for solving time-dependent thermal radiative transfer problems, and has been shown to exhibit orders-of-magnitude speedups over traditional time-stepping schemes. However, a linear stability analysis by Haut et al. (Haut, T. S., Lowrie, R. B., Park, H., Rauenzahn, R. M., Wollaber, A. B. (2015). A linear stability analysis of the multigroup High-Order Low-Order (HOLO) method. In Proceedings of the Joint International Conference on Mathematics and Computation (M&C), Supercomputing in Nuclear Applications (SNA) and the Monte Carlo (MC) Method; Nashville, TN, April 19–23, 2015. American Nuclear Society.) revealed that the current formulation of the multigroup HOLO method was unstable in certain parameter regions. Since then, we have replaced the intensity-weighted opacity in the first angular moment equation of the low-order (LO) system with the Rosseland opacity. This results in a modified HOLO method (HOLO-R) that is significantly more stable.

  6. Application of Grey Model GM(1, 1) to Ultra Short-Term Predictions of Universal Time

    NASA Astrophysics Data System (ADS)

    Lei, Yu; Guo, Min; Zhao, Danning; Cai, Hongbing; Hu, Dandan

    2016-03-01

    A mathematical model known as the one-order, one-variable grey differential equation model GM(1, 1) has been employed successfully for ultra-short-term (<10 days) predictions of universal time (UT1-UTC). The results are analyzed and compared with those obtained by other methods, and the prediction accuracy is shown to be comparable. The proposed method can yield an accurate prediction even when only a few observations are provided, which makes it very valuable for small datasets, since traditional methods, e.g., least-squares (LS) extrapolation, require a longer data span to make a good forecast. In addition, these results are obtained without making any assumptions about the original dataset, and are thus highly reliable. Another advantage is that the developed method is easy to use. All of this reveals the great potential of the GM(1, 1) model for UT1-UTC predictions.
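    The GM(1, 1) recursion itself is compact: accumulate the series, fit the grey differential equation x0(k) + a*z1(k) = b by least squares, and forecast from the exponential solution of the corresponding whitened equation. A minimal sketch (not the authors' code; the test series is a synthetic near-geometric sequence, not UT1-UTC data):

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """Fit GM(1, 1) to a positive series x0 and forecast `horizon` steps ahead."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    x1 = np.cumsum(x0)                         # accumulated generating operation (AGO)
    z = 0.5 * (x1[1:] + x1[:-1])               # background (mean) sequence
    B = np.column_stack([-z, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=0.0)      # inverse AGO
    return x0_hat[n:]                          # the forecast points

x0 = 10.0 * 1.05 ** np.arange(8)               # illustrative geometric series
forecast = gm11_forecast(x0, 3)
```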

  7. Numerical simulation of overflow at vertical weirs using a hybrid level set/VOF method

    NASA Astrophysics Data System (ADS)

    Lv, Xin; Zou, Qingping; Reeve, Dominic

    2011-10-01

    This paper presents the application of a newly developed free-surface flow model to the practical yet challenging problem of overflow at weirs. Since the model takes advantage of the strengths of both the level set and volume-of-fluid methods and solves the Navier-Stokes equations on an unstructured mesh, it is capable of resolving the time evolution of the very complex vortical motions, air entrainment, and pressure variations caused by violent deformations following overflow of the weir crest. In the present study, two different types of vertical weir, namely broad-crested and sharp-crested, are considered for validation purposes. The calculated overflow parameters, such as pressure head distributions, velocity distributions, and water surface profiles, are compared against experimental data as well as numerical results available in the literature. Very good quantitative agreement has been obtained. The numerical model thus offers a good alternative to traditional experimental methods in the study of weir problems.

  8. Variational coarse-graining procedure for dynamic homogenization

    NASA Astrophysics Data System (ADS)

    Liu, Chenchen; Reina, Celia

    2017-07-01

    We present a variational coarse-graining framework for heterogeneous media in the spirit of FE2 methods that allows for a seamless transition from the traditional static scenario to dynamic loading conditions, while being applicable to general material behavior as well as to discrete or continuous representations of the material and its deformation, e.g., finite element discretizations or atomistic systems. The method automatically delivers the macroscopic equations of motion together with the generalization of Hill's averaging relations to the dynamic setting. These include the expression of the macroscopic stresses and linear momentum as functions of the microscopic fields. We further demonstrate, with a proof-of-concept example, that the proposed theoretical framework can be used to perform multiscale numerical simulations. The results are compared with standard single-scale finite element simulations, showcasing the capability of the method to capture the dispersive nature of the medium in the range of frequencies permitted by the multiscale strategy.

  9. Krylov subspace methods on supercomputers

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1988-01-01

    A short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers is presented. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Polynomial preconditioning as an alternative to standard incomplete factorization techniques is also discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these and other ideas and their effectiveness or potential for different types of architectures is given.
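    The baseline Krylov method the survey builds on, the conjugate gradient algorithm, fits in a few lines. The sketch below is the textbook scalar version (none of the vector/parallel reorderings or preconditionings the survey covers), applied to a 1-D Laplacian test system of the kind that arises from discretized models:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=500):
    """Textbook conjugate gradient for a symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x                      # initial residual
    p = r.copy()                       # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)          # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # new A-conjugate direction
        rs = rs_new
    return x

# SPD test matrix: tridiagonal 1-D Laplacian.
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
```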

  10. The establishment and application of direct coupled electrostatic-structural field model in electrostatically controlled deployable membrane antenna

    NASA Astrophysics Data System (ADS)

    Gu, Yongzhen; Duan, Baoyan; Du, Jingli

    2018-05-01

    The electrostatically controlled deployable membrane antenna (ECDMA) is a promising space structure because of its low weight, large aperture, and high precision. However, accurately describing the coupled field between the electrostatic field and the membrane structure is extremely challenging. A direct coupled method is applied to solve this coupled problem in this paper. Firstly, the membrane structure and the electrostatic field are uniformly described in terms of energy, since the coupled problem is an energy-conservation phenomenon. The governing equilibrium equations of the direct coupled electrostatic-structural field are then obtained by an energy variation approach. Numerical results show that the direct coupled method improves computing efficiency by 36% compared with the traditional indirect coupled method at the same level of accuracy. Finally, a prototype was manufactured and tested, and the ECDMA finite element simulations show good agreement with the experimental results, with a maximum surface error difference of 6%.

  11. The spectral cell method in nonlinear earthquake modeling

    NASA Astrophysics Data System (ADS)

    Giraldo, Daniel; Restrepo, Doriam

    2017-12-01

    This study examines the applicability of the spectral cell method (SCM) to computing the nonlinear earthquake response of complex basins. SCM combines fictitious-domain concepts with the spectral version of the finite element method to solve the wave equations in heterogeneous geophysical domains. Nonlinear behavior is considered by implementing the Mohr-Coulomb and Drucker-Prager yield criteria. We illustrate the performance of SCM with numerical examples of nonlinear basins exhibiting physically and computationally challenging conditions. The numerical experiments are benchmarked against overkill solutions and against MIDAS GTS NX, a finite element software package for geotechnical applications. Our findings show good agreement between the two sets of results. Traditional spectral element implementations allow points-per-wavelength values as low as PPW = 4.5 for high-order polynomials. Our findings show that in the presence of nonlinearity, high-order polynomials (p ≥ 3) require mesh resolutions of PPW ≥ 10 to ensure displacement errors below 10%.

  12. Methods for Equating Mental Tests.

    DTIC Science & Technology

    1984-11-01

    1983) compared conventional and IRT methods for equating the Test of English as a Foreign Language (TOEFL) after chaining. Three conventional and...three IRT equating methods were examined in this study; two sections of TOEFL were each (separately) equated. The IRT methods included the following: (a...group. A separate base form was established for each of the six equating methods. Instead of equating the base-form TOEFL to itself, the last (eighth

  13. Design Oriented Structural Modeling for Airplane Conceptual Design Optimization

    NASA Technical Reports Server (NTRS)

    Livne, Eli

    1999-01-01

    The main goal of the research conducted with the support of this grant was to develop design-oriented structural optimization methods for the conceptual design of airplanes. Traditionally in conceptual design, airframe weight is estimated from statistical equations developed over years of fitting airplane weight data in databases of similar existing airplanes. Such regression equations can be justified for the design of new airplanes only if the new airplanes use structural technology similar to that of the airplanes in those weight databases. If any new structural technology is to be pursued, or any new unconventional configurations designed, the statistical weight equations cannot be used. In such cases any structural weight estimation must be based on rigorous "physics based" structural analysis and optimization of the airframes under consideration. Work under this grant explored airframe design-oriented structural optimization techniques along two lines of research: methods based on "fast" design-oriented finite element technology, and methods based on equivalent plate / equivalent shell models of airframes, in which the vehicle is modeled as an assembly of plate and shell components, each simulating a lifting surface or nacelle / fuselage pieces. Since responses to changes in geometry are essential in the conceptual design of airplanes, as well as the capability to optimize the shape itself, research supported by this grant sought to develop efficient techniques for parametrization of airplane shape and sensitivity analysis with respect to shape design variables. Toward the end of the grant period, a prototype automated structural analysis code designed to work with the NASA Aircraft Synthesis (ACSYNT) conceptual design code was delivered to NASA Ames.

  14. Application of Hill's equation for estimating area under the concentration-time curve (AUC) and use of time to AUC 90% for expressing kinetics of drug disposition.

    PubMed

    Cheng, Hsien C

    2009-01-01

    Half-life and the pharmacokinetic (PK) parameters derived from it are calculated on the assumption that the terminal phase of drug disposition proceeds at a constant rate. In reality, this assumption may not hold, and a new method is needed for analyzing PK parameters when disposition does not follow first-order kinetics. Cumulative area under the concentration-time curve (AUC) is plotted against time to yield a hyperbolic (or sigmoidal) AUC-time relationship curve, which is then analyzed by Hill's equation to yield AUC(inf), the time to achieving AUC50% (T(AUC50%)) or AUC90% (T(AUC90%)), and the Hill slope. From these parameters, the AUC-time relationship curve can be reconstructed, and the projected plasma concentration can be calculated for any time point. The time at which cumulative AUC reaches 90% (T(AUC90%)) can be used as an indicator of how fast a drug is cleared. Clearance is calculated in the traditional manner (i.v. dose/AUC(inf)), and the volume of distribution is proposed to be calculated at T(AUC50%) (0.5 i.v. dose/plasma concentration at T(AUC50%)). This method of estimating AUC is applicable to both i.v. and oral data. It is concluded that Hill's equation can be used as an alternative method for estimating AUC and analyzing PK parameters when disposition does not follow first-order kinetics, and that T(AUC90%) can serve as an indicator of how fast a drug is cleared from the system.
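    The fitting step described above can be sketched directly. If cumulative AUC follows the Hill form AUC(t) = AUC_inf * t^g / (T50^g + t^g), then ln(y/(1-y)) with y = AUC/AUC_inf is linear in ln t, and T(AUC90%) = T50 * 9^(1/g). The parameter values and sampling times below are illustrative, and AUC_inf is taken as known for the linearization (in practice it would be estimated as part of the fit):

```python
import numpy as np

# Assumed Hill parameters for a synthetic AUC-time curve (illustrative values).
auc_inf, t50, gamma = 120.0, 3.0, 1.8

t = np.linspace(0.5, 24.0, 40)
auc = auc_inf * t**gamma / (t50**gamma + t**gamma)   # cumulative AUC "data"

# Linearize the Hill equation: ln(y/(1-y)) = gamma*ln(t) - gamma*ln(T50).
y = auc / auc_inf
slope, intercept = np.polyfit(np.log(t), np.log(y / (1.0 - y)), 1)
gamma_hat = slope
t50_hat = np.exp(-intercept / slope)

# Time to 90% of total exposure follows directly from y = 0.9.
t90 = t50_hat * 9.0 ** (1.0 / gamma_hat)
```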

  15. Prediction of android and gynoid body adiposity via a three-dimensional stereovision body imaging system and dual-energy x-ray absorptiometry

    PubMed Central

    Lee, Jane J.; Freeland-Graves, Jeanne H.; Pepper, M. Reese; Stanforth, Philip R.; Xu, Bugao

    2017-01-01

    Objective Current methods for measuring regional body fat are expensive and inconvenient compared to the relative cost-effectiveness and ease of use of a stereovision body imaging (SBI) system. The primary goal of this research was to develop prediction models for android and gynoid fat from body measurements assessed via SBI and dual-energy x-ray absorptiometry (DXA). Mathematical equations for prediction of total and regional (trunk, leg) body adiposity were then established from parameters measured by SBI and DXA. Methods A total of 121 participants were randomly assigned to primary and cross-validation groups. Body measurements were obtained via traditional anthropometrics, SBI, and DXA. Multiple regression analysis was conducted to develop mathematical equations with demographics and SBI-assessed body measurements as independent variables and body adiposity (fat mass and percent fat) as dependent variables. The validity of the prediction models was evaluated by a split-sample method and Bland-Altman analysis. Results The R2 values of the prediction equations for fat mass and percent body fat were 93.2% and 76.4% for android, and 91.4% and 66.5% for gynoid, respectively. The limits of agreement for fat mass and percent fat were −0.06 ± 0.87 kg and −0.11 ± 1.97% for android, and −0.04 ± 1.58 kg and −0.19 ± 4.27% for gynoid. The corresponding values for fat mass and percent fat were 94.6% and 88.9% for total body, 93.9% and 71.0% for trunk, and 92.4% and 64.1% for leg, respectively. Conclusions The three-dimensional (3D) SBI produces reliable parameters that can predict android and gynoid, as well as total and regional (trunk, leg), fat mass. PMID:25915106
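    The Bland-Altman analysis used for validation above reduces to the mean difference between paired measurements plus or minus 1.96 standard deviations of the differences. A sketch on synthetic paired data (the distributions and bias below are invented for illustration and are not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic paired android fat-mass readings (kg) from two methods:
# a reference method and a second method with a small bias and extra noise.
reference = rng.normal(2.5, 0.8, 200)
candidate = reference + rng.normal(-0.06, 0.45, 200)

diff = candidate - reference
bias = diff.mean()                      # mean difference between methods
loa = 1.96 * diff.std(ddof=1)           # half-width of the 95% limits
lower, upper = bias - loa, bias + loa   # Bland-Altman limits of agreement
```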

  16. Passive Resistor Temperature Compensation for a High-Temperature Piezoresistive Pressure Sensor.

    PubMed

    Yao, Zong; Liang, Ting; Jia, Pinggang; Hong, Yingping; Qi, Lei; Lei, Cheng; Zhang, Bin; Li, Wangwang; Zhang, Diya; Xiong, Jijun

    2016-07-22

    The main limitation of high-temperature piezoresistive pressure sensors is the variation of output voltage with operating temperature, which seriously reduces their measurement accuracy. This paper presents a passive resistor temperature compensation technique whose parameters are calculated using differential equations. Unlike traditional experiential arithmetic, the differential equations are independent of the parameter deviation among the piezoresistors of the microelectromechanical pressure sensor and the residual stress caused by the fabrication process or a mismatch in the thermal expansion coefficients. The differential equations are solved using calibration data from uncompensated high-temperature piezoresistive pressure sensors. Tests conducted on the calibrated equipment at various temperatures and pressures show that the passive resistor temperature compensation produces a remarkable effect. Additionally, a high-temperature signal-conditioning circuit is used to improve the output sensitivity of the sensor, which can be reduced by the temperature compensation. Compared to traditional experiential arithmetic, the proposed passive resistor temperature compensation technique exhibits less temperature drift and is expected to be highly applicable for pressure measurements in harsh environments with large temperature variations.

  17. Passive Resistor Temperature Compensation for a High-Temperature Piezoresistive Pressure Sensor

    PubMed Central

    Yao, Zong; Liang, Ting; Jia, Pinggang; Hong, Yingping; Qi, Lei; Lei, Cheng; Zhang, Bin; Li, Wangwang; Zhang, Diya; Xiong, Jijun

    2016-01-01

    The main limitation of high-temperature piezoresistive pressure sensors is the variation of output voltage with operating temperature, which seriously reduces their measurement accuracy. This paper presents a passive resistor temperature compensation technique whose parameters are calculated using differential equations. Unlike traditional experiential arithmetic, the differential equations are independent of the parameter deviation among the piezoresistors of the microelectromechanical pressure sensor and the residual stress caused by the fabrication process or a mismatch in the thermal expansion coefficients. The differential equations are solved using calibration data from uncompensated high-temperature piezoresistive pressure sensors. Tests conducted on the calibrated equipment at various temperatures and pressures show that the passive resistor temperature compensation produces a remarkable effect. Additionally, a high-temperature signal-conditioning circuit is used to improve the output sensitivity of the sensor, which can be reduced by the temperature compensation. Compared to traditional experiential arithmetic, the proposed passive resistor temperature compensation technique exhibits less temperature drift and is expected to be highly applicable for pressure measurements in harsh environments with large temperature variations. PMID:27455271

  18. The stability of locus equation slopes across stop consonant voicing/aspiration

    NASA Astrophysics Data System (ADS)

    Sussman, Harvey M.; Modarresi, Golnaz

    2004-05-01

    The consistency of locus equation slopes as phonetic descriptors of stop place in CV sequences across voiced and voiceless aspirated stops was explored in the speech of five male speakers of American English and two male speakers of Persian. Using traditional locus equation measurement sites for F2 onsets, voiceless labial and coronal stops had significantly lower locus equation slopes relative to their voiced counterparts, whereas velars failed to show voicing differences. When locus equations were derived using F2 onsets for voiced stops that were measured closer to the stop release burst, comparable to the protocol for measuring voiceless aspirated stops, no significant effects of voicing/aspiration on locus equation slopes were observed. This methodological factor, rather than an underlying phonetic-based explanation, provides a reasonable account for the observed flatter locus equation slopes of voiceless labial and coronal stops relative to voiced cognates reported in previous studies [Molis et al., J. Acoust. Soc. Am. 95, 2925 (1994); O. Engstrand and B. Lindblom, PHONUM 4, 101-104]. [Work supported by NIH.]
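    A locus equation is an ordinary linear regression of F2 at vowel onset on F2 at vowel midpoint across vowel contexts, with the fitted slope serving as the descriptor of stop place. A sketch on synthetic data (the slope, intercept, and noise level are hypothetical values for a coronal-like stop, not measurements from this study):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic F2 midpoints (Hz) for 60 CV tokens spanning the vowel space,
# and F2 onsets generated from an assumed linear locus relation plus noise.
f2_vowel = rng.uniform(900.0, 2400.0, 60)
true_slope, true_intercept = 0.45, 980.0      # hypothetical values
f2_onset = true_slope * f2_vowel + true_intercept + rng.normal(0.0, 40.0, 60)

# The locus equation: F2_onset = slope * F2_vowel + intercept.
slope, intercept = np.polyfit(f2_vowel, f2_onset, 1)
```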

  19. Constitutive Equation with Varying Parameters for Superplastic Flow Behavior

    NASA Astrophysics Data System (ADS)

    Guan, Zhiping; Ren, Mingwen; Jia, Hongjie; Zhao, Po; Ma, Pinkui

    2014-03-01

    In this study, constitutive equations for superplastic materials with extra-large elongations were investigated through mechanical analysis. From the viewpoint of phenomenology, some traditional empirical constitutive relations were first standardized by restricting certain strain paths and parameter conditions, and the coefficients in these relations were given strict new mechanical definitions. Subsequently, a new general constitutive equation with varying parameters was theoretically deduced based on the general mechanical equation of state. Superplastic tension test data for Zn-5%Al alloy at 340 °C under various strain rates, velocities, and loads were employed to build the new constitutive equation and examine its validity. The analysis indicates that the constitutive equation with varying parameters characterizes superplastic flow behavior in practical superplastic forming with high prediction accuracy and without any restriction on strain path or deformation condition, and is thus of genuine industrial and scientific interest. In contrast, the empirical equations have low predictive capability because of their constant parameters, and poor applicability because they are restricted, on strictly phenomenological grounds, to particular strain paths or parameter conditions.

  20. A Bivariate Chebyshev Spectral Collocation Quasilinearization Method for Nonlinear Evolution Parabolic Equations

    PubMed Central

    Motsa, S. S.; Magagula, V. M.; Sibanda, P.

    2014-01-01

    This paper presents a new method for solving higher-order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearization, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. We use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, the highly nonlinear modified KdV equation, Fisher's equation, the Burgers-Fisher equation, the Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from the literature to confirm the accuracy, convergence, and effectiveness of the method; the numerical results agree with the exact solutions to a high order of accuracy. Tables are presented to show the order of accuracy of the method, convergence graphs to verify its convergence, and error graphs to show the excellent agreement between the results of this study and known results from the literature. PMID:25254252
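    The Chebyshev spectral collocation building block used by the method can be illustrated on a linear two-point boundary-value problem. The sketch below constructs the standard Chebyshev differentiation matrix (Trefethen's construction) and solves u'' = -pi^2 sin(pi x) with homogeneous Dirichlet conditions, whose exact solution sin(pi x) is resolved to near machine precision at modest N; the quasilinearization and bivariate-interpolation layers of the paper's method are not shown.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and points x (Trefethen's construction)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))          # negative-sum trick for the diagonal
    return D, x

# Solve u'' = -pi^2 sin(pi x), u(-1) = u(1) = 0; exact solution u = sin(pi x).
n = 16
D, x = cheb(n)
D2 = D @ D
f = -np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(D2[1:-1, 1:-1], f[1:-1])   # strip rows/cols for the BCs
err = np.max(np.abs(u - np.sin(np.pi * x)))
```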

  1. A bivariate Chebyshev spectral collocation quasilinearization method for nonlinear evolution parabolic equations.

    PubMed

    Motsa, S S; Magagula, V M; Sibanda, P

    2014-01-01

    This paper presents a new method for solving higher-order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearization, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. We use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, the highly nonlinear modified KdV equation, Fisher's equation, the Burgers-Fisher equation, the Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from the literature to confirm the accuracy, convergence, and effectiveness of the method; the numerical results agree with the exact solutions to a high order of accuracy. Tables are presented to show the order of accuracy of the method, convergence graphs to verify its convergence, and error graphs to show the excellent agreement between the results of this study and known results from the literature.

  2. Chemical application of diffusion quantum Monte Carlo

    NASA Technical Reports Server (NTRS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1984-01-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the singlet-triplet energy splitting of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX, is discussed. The dependence of computational time on the number of basis functions is also discussed and compared with that of traditional quantum chemistry codes on traditional computer architectures.
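    The stochastic character of the method can be seen in a deliberately crude sketch: diffusion Monte Carlo without importance sampling for the 1-D harmonic oscillator (hbar = m = omega = 1, exact ground-state energy 0.5). Walkers diffuse, branch according to the potential, and feedback on the reference energy keeps the population stable; all parameters are illustrative, and production QMC codes add importance sampling and many refinements.

```python
import numpy as np

rng = np.random.default_rng(7)

def dmc_harmonic(n_walkers=2000, n_steps=3000, dt=0.01):
    """Crude diffusion Monte Carlo (no importance sampling) for V(x) = x^2/2."""
    x = rng.normal(0.0, 1.0, n_walkers)
    e_ref = 0.5
    energies = []
    for step in range(n_steps):
        # Diffusion move (kinetic term).
        x = x + rng.normal(0.0, np.sqrt(dt), x.size)
        # Branching weights from the potential (stochastic birth/death).
        w = np.exp(-dt * (0.5 * x**2 - e_ref))
        n_copies = (w + rng.uniform(0.0, 1.0, x.size)).astype(int)
        x = np.repeat(x, n_copies)
        # Population control: steer e_ref toward the target walker count.
        e_ref += 0.1 * np.log(n_walkers / max(x.size, 1))
        if step >= n_steps // 2:          # average after equilibration
            energies.append(e_ref)
    return np.mean(energies)

e0 = dmc_harmonic()                       # should approach 0.5
```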

  3. Comparing three pedagogical approaches to psychomotor skills acquisition.

    PubMed

    Willis, Ross E; Richa, Jacqueline; Oppeltz, Richard; Nguyen, Patrick; Wagner, Kelly; Van Sickle, Kent R; Dent, Daniel L

    2012-01-01

    We compared traditional pedagogical approaches such as time- and repetition-based methods with proficiency-based training. Laparoscopic novices were assigned randomly to 1 of 3 training conditions. In experiment 1, participants in the time condition practiced for 60 minutes, participants in the repetition condition performed 5 practice trials, and participants in the proficiency condition trained until reaching a predetermined proficiency goal. In experiment 2, practice time and number of trials were equated across conditions. In experiment 1, participants in the proficiency-based training conditions outperformed participants in the other 2 conditions (P < .014); however, these participants trained longer (P < .001) and performed more repetitions (P < .001). In experiment 2, despite training for similar amounts of time and number of repetitions, participants in the proficiency condition outperformed their counterparts (P < .038). In both experiments, the standard deviations for the proficiency condition were smaller than the other conditions. Proficiency-based training results in trainees who perform uniformly and at a higher level than traditional training methodologies. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Conceptual problem solving in high school physics

    NASA Astrophysics Data System (ADS)

    Docktor, Jennifer L.; Strand, Natalie E.; Mestre, José P.; Ross, Brian H.

    2015-12-01

    Problem solving is a critical element of learning physics. However, traditional instruction often emphasizes the quantitative aspects of problem solving such as equations and mathematical procedures rather than qualitative analysis for selecting appropriate concepts and principles. This study describes the development and evaluation of an instructional approach called Conceptual Problem Solving (CPS) which guides students to identify principles, justify their use, and plan their solution in writing before solving a problem. The CPS approach was implemented by high school physics teachers at three schools for major theorems and conservation laws in mechanics and CPS-taught classes were compared to control classes taught using traditional problem solving methods. Information about the teachers' implementation of the approach was gathered from classroom observations and interviews, and the effectiveness of the approach was evaluated from a series of written assessments. Results indicated that teachers found CPS easy to integrate into their curricula, students engaged in classroom discussions and produced problem solutions of a higher quality than before, and students scored higher on conceptual and problem solving measures.

  5. A generalized simplest equation method and its application to the Boussinesq-Burgers equation.

    PubMed

    Sudao, Bilige; Wang, Xiaomin

    2015-01-01

    In this paper, a generalized simplest equation method is proposed to seek exact solutions of nonlinear evolution equations (NLEEs). In this method, we choose a solution expression with a variable coefficient and a variable-coefficient ordinary differential equation as the auxiliary equation. The method yields a Bäcklund transformation between the NLEEs and a related constraint equation, and by dealing with the constraint equation we can derive an infinite number of exact solutions for the NLEEs. These solutions include traveling wave solutions, non-traveling wave solutions, multi-soliton solutions, rational solutions, and other types of solutions. As an application, we obtain wide classes of exact solutions for the Boussinesq-Burgers equation using the generalized simplest equation method.

  7. Fractional analysis for nonlinear electrical transmission line and nonlinear Schroedinger equations with incomplete sub-equation

    NASA Astrophysics Data System (ADS)

    Fendzi-Donfack, Emmanuel; Nguenang, Jean Pierre; Nana, Laurent

    2018-02-01

    We use the fractional complex transform with the modified Riemann-Liouville derivative operator to establish the exact and generalized solutions of two fractional partial differential equations. We determine the solutions of fractional nonlinear electrical transmission lines (NETL) and the perturbed nonlinear Schroedinger (NLS) equation with the Kerr law nonlinearity term. The solutions are obtained for the parameters in the range (0 < α ≤ 1) of the derivative operator, and we recover the traditional solutions in the limiting case α = 1. We show that, according to the modified Riemann-Liouville derivative, the solutions found can describe physical systems with memory effects, transient effects in electrical systems and nonlinear transmission lines, and other systems such as optical fibers.

  8. An Evaluation of Kernel Equating: Parallel Equating with Classical Methods in the SAT Subject Tests[TM] Program. Research Report. ETS RR-09-06

    ERIC Educational Resources Information Center

    Grant, Mary C.; Zhang, Lilly; Damiano, Michele

    2009-01-01

    This study investigated kernel equating methods by comparing these methods to operational equatings for two tests in the SAT Subject Tests[TM] program. GENASYS (ETS, 2007) was used for all equating methods and scaled score kernel equating results were compared to Tucker, Levine observed score, chained linear, and chained equipercentile equating…

  9. Statistical Estimation of Heterogeneities: A New Frontier in Well Testing

    NASA Astrophysics Data System (ADS)

    Neuman, S. P.; Guadagnini, A.; Illman, W. A.; Riva, M.; Vesselinov, V. V.

    2001-12-01

    Well-testing methods have traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. Geostatistical inverse interpretation of cross-hole tests yields a smoothed but detailed "tomographic" image of how parameters actually vary in three-dimensional space, together with corresponding measures of estimation uncertainty. Moment solutions may soon allow one to interpret well tests in terms of statistical parameters such as the mean and variance of log permeability, its spatial autocorrelation and statistical anisotropy. The idea of geostatistical cross-hole tomography is illustrated through pneumatic injection tests conducted in unsaturated fractured tuff at the Apache Leap Research Site near Superior, Arizona. The idea of using moment equations to interpret well-tests statistically is illustrated through a recently developed three-dimensional solution for steady state flow to a well in a bounded, randomly heterogeneous, statistically anisotropic aquifer.

  10. Magnus integrators on multicore CPUs and GPUs

    NASA Astrophysics Data System (ADS)

    Auer, N.; Einkemmer, L.; Kandolf, P.; Ostermann, A.

    2018-07-01

    In the present paper we consider numerical methods to solve the discrete Schrödinger equation with a time dependent Hamiltonian (motivated by problems encountered in the study of spin systems). We will consider both short-range interactions, which lead to evolution equations involving sparse matrices, and long-range interactions, which lead to dense matrices. Both of these settings show very different computational characteristics. We use Magnus integrators for time integration and employ a framework based on Leja interpolation to compute the resulting action of the matrix exponential. We consider both traditional Magnus integrators (which are extensively used for these types of problems in the literature) as well as the recently developed commutator-free Magnus integrators and implement them on modern CPU and GPU (graphics processing unit) based systems. We find that GPUs can yield a significant speed-up (up to a factor of 10 in the dense case) for these types of problems. In the sparse case GPUs are only advantageous for large problem sizes and the achieved speed-ups are more modest. In most cases the commutator-free variant is superior but especially on the GPU this advantage is rather small. In fact, none of the advantage of commutator-free methods on GPUs (and on multi-core CPUs) is due to the elimination of commutators. This has important consequences for the design of more efficient numerical methods.
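A minimal sketch of the kind of integrator discussed: a second-order Magnus method, which evaluates the Hamiltonian at the interval midpoint and applies one matrix exponential per step. The 2x2 spin Hamiltonian here is a hypothetical toy, not one of the paper's test systems, and `scipy.linalg.expm` stands in for the Leja-interpolation framework the authors use:

```python
import numpy as np
from scipy.linalg import expm

# toy time-dependent Hamiltonian (hypothetical 2x2 spin-1/2 example)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):
    return sz + 0.5*np.cos(t)*sx

def magnus2_step(psi, t, h):
    # second-order Magnus: one midpoint evaluation, one matrix exponential
    return expm(-1j*h*H(t + h/2)) @ psi

psi = np.array([1.0, 0.0], dtype=complex)
t, h = 0.0, 0.01
for _ in range(500):
    psi = magnus2_step(psi, t, h)
    t += h
print(abs(np.vdot(psi, psi)))  # stays at 1: each step is unitary
```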

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott Stewart, D., E-mail: dss@illinois.edu; Hernández, Alberto; Lee, Kibaek

    The estimation of pressure and temperature histories, which are required to understand chemical pathways in condensed phase explosives during detonation, is discussed. We argue that estimates made from continuum models, calibrated by macroscopic experiments, are essential to inform modern, atomistic-based reactive chemistry simulations at detonation pressures and temperatures. We present easy-to-implement methods for general equation of state and arbitrarily complex chemical reaction schemes that can be used to compute reactive flow histories for the constant volume, the energy process, and the expansion process on the Rayleigh line of a steady Chapman-Jouguet detonation. A brief review of the state of the art of two-component reactive flow models is given that highlights the Ignition and Growth model of Lee and Tarver [Phys. Fluids 23, 2362 (1980)] and the Wide-Ranging Equation of State model of Wescott, Stewart, and Davis [J. Appl. Phys. 98, 053514 (2005)]. We discuss evidence from experiments and reactive molecular dynamics simulations that motivate models that have several components, instead of the two that have traditionally been used to describe the results of macroscopic detonation experiments. We present simplified examples of a formulation for a hypothetical explosive that uses simple (ideal) equation of state forms and detailed comparisons. Then, we estimate pathways computed from two-component models of real explosive materials that have been calibrated with macroscopic experiments.

  12. Implicit Space-Time Conservation Element and Solution Element Schemes

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Himansu, Ananda; Wang, Xiao-Yen

    1999-01-01

    Artificial numerical dissipation is an important issue in large Reynolds number computations. In such computations, the artificial dissipation inherent in traditional numerical schemes can overwhelm the physical dissipation and yield inaccurate results on meshes of practical size. In the present work, the space-time conservation element and solution element method is used to construct new and accurate implicit numerical schemes such that artificial numerical dissipation will not overwhelm physical dissipation. Specifically, these schemes have the property that numerical dissipation vanishes when the physical viscosity goes to zero. These new schemes therefore accurately model the physical dissipation even when it is extremely small. The new schemes presented are two highly accurate implicit solvers for a convection-diffusion equation. The two schemes become identical in the pure convection case, and in the pure diffusion case. The implicit schemes are applicable over the whole Reynolds number range, from purely diffusive equations to convection-dominated equations with very small viscosity. The stability and consistency of the schemes are analysed, and some numerical results are presented. It is shown that, in the inviscid case, the new schemes become explicit and their amplification factors are identical to those of the Leapfrog scheme. On the other hand, in the pure diffusion case, their principal amplification factor becomes the amplification factor of the Crank-Nicolson scheme.

  13. Numerical Simulations of STOVL Hot Gas Ingestion in Ground Proximity Using a Multigrid Solution Procedure

    NASA Technical Reports Server (NTRS)

    Wang, Gang

    2003-01-01

    A multigrid solution procedure for the numerical simulation of turbulent flows in complex geometries has been developed. A Full Multigrid-Full Approximation Scheme (FMG-FAS) is incorporated into the continuity and momentum equations, while the scalars are decoupled from the multigrid V-cycle. A standard k-epsilon turbulence model with wall functions has been used to close the governing equations. The numerical solution is accomplished by solving for the Cartesian velocity components either with a traditional grid staggering arrangement or with a multiple velocity grid staggering arrangement. The two solution methodologies are evaluated for relative computational efficiency. The solution procedure with traditional staggering arrangement is subsequently applied to calculate the flow and temperature fields around a model Short Take-off and Vertical Landing (STOVL) aircraft hovering in ground proximity.

  14. An efficient computational method for the approximate solution of nonlinear Lane-Emden type equations arising in astrophysics

    NASA Astrophysics Data System (ADS)

    Singh, Harendra

    2018-04-01

    The key purpose of this article is to introduce an efficient computational method for the approximate solution of homogeneous as well as non-homogeneous nonlinear Lane-Emden type equations. Using the proposed method, the given nonlinear equation is converted into a set of nonlinear algebraic equations whose solution yields the approximate solution of the Lane-Emden type equation. Various nonlinear cases of Lane-Emden type equations, such as the standard Lane-Emden equation, the isothermal gas sphere equation, and the white-dwarf equation, are discussed. Results are compared with those of some well-known numerical methods, and our results are observed to be more accurate.
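The standard Lane-Emden equation mentioned above, θ'' + (2/ξ)θ' + θ^n = 0 with θ(0) = 1, θ'(0) = 0, has the closed form θ = sin(ξ)/ξ for n = 1, which makes a convenient accuracy check for any numerical scheme. A reference integration (an off-the-shelf Runge-Kutta solver, not the method proposed in the paper) might look like:

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 1  # polytropic index; n = 1 admits the analytic solution sin(xi)/xi

def lane_emden(xi, y):
    theta, dtheta = y
    return [dtheta, -max(theta, 0.0)**n - 2.0/xi*dtheta]

# step off the singular point xi = 0 using the series theta ~ 1 - xi^2/6
xi0 = 1e-6
y0 = [1 - xi0**2/6, -xi0/3]
sol = solve_ivp(lane_emden, [xi0, 3.0], y0,
                rtol=1e-10, atol=1e-12, dense_output=True)
print(sol.sol(2.0)[0], np.sin(2.0)/2.0)  # numerical vs analytic at xi = 2
```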

  15. Composite scheme using localized relaxation with non-standard finite difference method for hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Kumar, Vivek; Raghurama Rao, S. V.

    2008-04-01

    Non-standard finite difference methods (NSFDM) introduced by Mickens [ Non-standard Finite Difference Models of Differential Equations, World Scientific, Singapore, 1994] are interesting alternatives to the traditional finite difference and finite volume methods. When applied to linear hyperbolic conservation laws, these methods reproduce exact solutions. In this paper, the NSFDM is first extended to hyperbolic systems of conservation laws, by a novel utilization of the decoupled equations using characteristic variables. In the second part of this paper, the NSFDM is studied for its efficacy in application to nonlinear scalar hyperbolic conservation laws. The original NSFDMs introduced by Mickens (1994) were not in conservation form, which is an important feature in capturing discontinuities at the right locations. Mickens [Construction and analysis of a non-standard finite difference scheme for the Burgers-Fisher equations, Journal of Sound and Vibration 257 (4) (2002) 791-797] recently introduced a NSFDM in conservative form. This method captures the shock waves exactly, without any numerical dissipation. In this paper, this algorithm is tested for the case of expansion waves with sonic points and is found to generate unphysical expansion shocks. As a remedy to this defect, we use the strategy of composite schemes [R. Liska, B. Wendroff, Composite schemes for conservation laws, SIAM Journal of Numerical Analysis 35 (6) (1998) 2250-2271] in which the accurate NSFDM is used as the basic scheme and localized relaxation NSFDM is used as the supporting scheme which acts like a filter. Relaxation schemes introduced by Jin and Xin [The relaxation schemes for systems of conservation laws in arbitrary space dimensions, Communications in Pure and Applied Mathematics 48 (1995) 235-276] are based on relaxation systems which replace the nonlinear hyperbolic conservation laws by a semi-linear system with a stiff relaxation term. 
The relaxation parameter (λ) is chosen locally on the three-point stencil of the grid, which makes the proposed method more efficient. This composite scheme overcomes the problem of unphysical expansion shocks and captures the shock waves with an accuracy better than the upwind relaxation scheme, as demonstrated by the test cases, together with comparisons with popular numerical methods like the Roe scheme and ENO schemes.
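For orientation, the conservation-form property the paper emphasizes is easy to demonstrate with a generic baseline scheme (not the NSFDM itself): a conservative Lax-Friedrichs update for the inviscid Burgers equation places the shock at the correct location, though with the numerical smearing that the exact shock-capturing NSFDM avoids:

```python
import numpy as np

# conservative Lax-Friedrichs step for u_t + (u^2/2)_x = 0 on a periodic grid
def lax_friedrichs_step(u, dx, dt):
    f = 0.5*u**2
    up, um = np.roll(u, -1), np.roll(u, 1)
    fp, fm = np.roll(f, -1), np.roll(f, 1)
    return 0.5*(up + um) - 0.5*dt/dx*(fp - fm)

# Riemann data: uL = 1, uR = 0 gives a shock moving at speed (uL + uR)/2 = 0.5
N = 400
x = np.linspace(-1.0, 1.0, N, endpoint=False)
u = np.where(x < 0, 1.0, 0.0)
dx = x[1] - x[0]
dt = 0.4*dx                      # CFL number 0.4
for _ in range(200):             # integrate to t = 0.4; shock reaches x = 0.2
    u = lax_friedrichs_step(u, dx, dt)
```

Because the update is in conservation form and the grid is periodic, the discrete integral of u is preserved exactly, and the smeared shock front sits at the analytically correct position.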

  16. Time-stable overset grid method for hyperbolic problems using summation-by-parts operators

    NASA Astrophysics Data System (ADS)

    Sharan, Nek; Pantano, Carlos; Bodony, Daniel J.

    2018-05-01

    A provably time-stable method for solving hyperbolic partial differential equations arising in fluid dynamics on overset grids is presented in this paper. The method uses interface treatments based on the simultaneous approximation term (SAT) penalty method and derivative approximations that satisfy the summation-by-parts (SBP) property. Time-stability is proven using energy arguments in a norm that naturally relaxes to the standard diagonal norm when the overlap reduces to a traditional multiblock arrangement. The proposed overset interface closures are time-stable for arbitrary overlap arrangements. The information between grids is transferred using Lagrangian interpolation applied to the incoming characteristics, although other interpolation schemes could also be used. The conservation properties of the method are analyzed. Several one-, two-, and three-dimensional, linear and non-linear numerical examples are presented to confirm the stability and accuracy of the method. A performance comparison between the proposed SAT-based interface treatment and the commonly-used approach of injecting the interpolated data onto each grid is performed to highlight the efficacy of the SAT method.

  17. Sensitivity of control-augmented structure obtained by a system decomposition method

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Bloebaum, Christina L.; Hajela, Prabhat

    1988-01-01

    The verification of a method for computing sensitivity derivatives of a coupled system is presented. The method deals with a system whose analysis can be partitioned into subsets that correspond to disciplines and/or physical subsystems that exchange input-output data with each other. The method uses the partial sensitivity derivatives of the output with respect to input obtained for each subset separately to assemble a set of linear, simultaneous, algebraic equations that are solved for the derivatives of the coupled system response. This sensitivity analysis is verified using an example of a cantilever beam augmented with an active control system to limit the beam's dynamic displacements under an excitation force. The verification shows good agreement of the method with reference data obtained by a finite difference technique involving entire system analysis. The usefulness of a system sensitivity method in optimization applications by employing a piecewise-linear approach to the same numerical example is demonstrated. The method's principal merits are its intrinsically superior accuracy in comparison with the finite difference technique, and its compatibility with the traditional division of work in complex engineering tasks among specialty groups.
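The assembled linear system for coupled-system derivatives can be sketched on a hypothetical two-subsystem example (made-up partial derivatives, not the beam/control problem of the paper): with y1 = f1(x, y2) and y2 = f2(x, y1), the total derivatives satisfy (I - J) dy/dx = ∂f/∂x, where J holds the cross-coupling partials:

```python
import numpy as np

# hypothetical partial sensitivities of each subsystem, obtained separately
df1_dx, df1_dy2 = 0.5, 0.3     # subsystem 1: y1 = f1(x, y2)
df2_dx, df2_dy1 = 2.0, -0.4    # subsystem 2: y2 = f2(x, y1)

# assemble (I - J) dy/dx = df/dx and solve for the coupled-system derivatives
A = np.array([[1.0,     -df1_dy2],
              [-df2_dy1, 1.0    ]])
b = np.array([df1_dx, df2_dx])
dy_dx = np.linalg.solve(A, b)
print(dy_dx)
```

For this linear example the coupled fixed point can also be eliminated by hand, giving dy1/dx = 1.1/1.12, which the solve reproduces; that hand check plays the role of the finite-difference verification in the paper.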

  18. A time-spectral approach to numerical weather prediction

    NASA Astrophysics Data System (ADS)

    Scheffel, Jan; Lindvall, Kristoffer; Yik, Hiu Fai

    2018-05-01

    Finite difference methods are traditionally used for modelling the time domain in numerical weather prediction (NWP). Time-spectral solution is an attractive alternative for reasons of accuracy and efficiency and because time step limitations associated with causal CFL-like criteria, typical for explicit finite difference methods, are avoided. In this work, the Lorenz 1984 chaotic equations are solved using the time-spectral algorithm GWRM (Generalized Weighted Residual Method). Comparisons of accuracy and efficiency are carried out for both explicit and implicit time-stepping algorithms. It is found that the efficiency of the GWRM compares well with these methods, in particular at high accuracy. For perturbative scenarios, the GWRM was found to be as much as four times faster than the finite difference methods. A primary reason is that the GWRM time intervals typically are two orders of magnitude larger than those of the finite difference methods. The GWRM has the additional advantage to produce analytical solutions in the form of Chebyshev series expansions. The results are encouraging for pursuing further studies, including spatial dependence, of the relevance of time-spectral methods for NWP modelling.
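For reference, the Lorenz 1984 system solved in the paper, integrated here with a standard explicit RK4 step (i.e., one of the finite-difference-style baselines, not the GWRM itself); the parameter values are the commonly used ones:

```python
import numpy as np

# Lorenz (1984) low-order atmospheric model, standard parameters
a, b, F, G = 0.25, 4.0, 8.0, 1.0

def rhs(u):
    x, y, z = u
    return np.array([-y*y - z*z - a*x + a*F,
                     x*y - b*x*z - y + G,
                     b*x*y + x*z - z])

def rk4_step(u, h):
    k1 = rhs(u)
    k2 = rhs(u + 0.5*h*k1)
    k3 = rhs(u + 0.5*h*k2)
    k4 = rhs(u + h*k3)
    return u + h/6.0*(k1 + 2*k2 + 2*k3 + k4)

u = np.array([1.0, 1.0, 1.0])
for _ in range(5000):   # 50 time units with h = 0.01
    u = rk4_step(u, 0.01)
print(u)                # a point on the (bounded) chaotic attractor
```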

  19. The Examination of the Classification of Students into Performance Categories by Two Different Equating Methods

    ERIC Educational Resources Information Center

    Keller, Lisa A.; Keller, Robert R.; Parker, Pauline A.

    2011-01-01

    This study investigates the comparability of two item response theory based equating methods: true score equating (TSE), and estimated true equating (ETE). Additionally, six scaling methods were implemented within each equating method: mean-sigma, mean-mean, two versions of fixed common item parameter, Stocking and Lord, and Haebara. Empirical…
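As a sketch of the mean-sigma idea (applied here directly to synthetic observed scores as a simple linear equating; the study applies it to IRT item parameters, which this example does not attempt), form-X scores are mapped onto the form-Y scale by matching means and standard deviations:

```python
import numpy as np

# synthetic score distributions for two hypothetical test forms
rng = np.random.default_rng(1)
x_scores = rng.normal(50, 10, size=2000)   # form X
y_scores = rng.normal(55, 12, size=2000)   # form Y

# mean-sigma linear transformation: slope matches SDs, intercept matches means
a = y_scores.std(ddof=1) / x_scores.std(ddof=1)
b = y_scores.mean() - a * x_scores.mean()
equated = a * x_scores + b

print(round(equated.mean(), 2), round(equated.std(ddof=1), 2))
```

By construction, the equated scores reproduce form Y's sample mean and standard deviation exactly.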

  20. Local Discontinuous Galerkin Methods for the Cahn-Hilliard Type Equations

    DTIC Science & Technology

    2007-01-01

    Kuramoto-Sivashinsky equations, the Ito-type coupled KdV equations, the Kadomtsev-Petviashvili equation, and the Zakharov-Kuznetsov equation. A common... Local discontinuous Galerkin methods for the Cahn-Hilliard type equations. Yinhua Xia, Yan Xu, and Chi-Wang Shu. Abstract: In this paper we develop... local discontinuous Galerkin (LDG) methods for the fourth-order nonlinear Cahn-Hilliard equation and system. The energy stability of the LDG methods is...

  1. Dose equations for tube current modulation in CT scanning and the interpretation of the associated CTDI{sub vol}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dixon, Robert L.; Boone, John M.

    2013-11-15

    Purpose: The scanner-reported CTDI{sub vol} for automatic tube current modulation (TCM) has a different physical meaning from the traditional CTDI{sub vol} at constant mA, resulting in the dichotomy “CTDI{sub vol} of the first and second kinds” for which a physical interpretation is sought in hopes of establishing some commonality between the two. Methods: Rigorous equations are derived to describe the accumulated dose distributions for TCM. A comparison with formulae for scanner-reported CTDI{sub vol} clearly identifies the source of their differences. Graphical dose simulations are also provided for a variety of TCM tube current distributions (including constant mA), all having the same scanner-reported CTDI{sub vol}. Results: These convolution equations and simulations show that the local dose at z depends only weakly on the local tube current i(z) due to the strong influence of scatter from all other locations along z, and that the “local CTDI{sub vol}(z)” does not represent a local dose but rather only a relative i(z) ≡ mA(z). TCM is a shift-variant technique to which the CTDI-paradigm does not apply and its application to TCM leads to a CTDI{sub vol} of the second kind which lacks relevance. Conclusions: While the traditional CTDI{sub vol} at constant mA conveys useful information (the peak dose at the center of the scan length), CTDI{sub vol} of the second kind conveys no useful information about the associated TCM dose distribution it purportedly represents and its physical interpretation remains elusive. On the other hand, the total energy absorbed E (“integral dose”) as well as its surrogate DLP remain robust between variable i(z) TCM and constant current i{sub 0} techniques, both depending only on the total mAs = i{sub 0}t{sub 0} during the beam-on time t{sub 0}.

  2. An objective analysis of the dynamic nature of field capacity

    NASA Astrophysics Data System (ADS)

    Twarakavi, Navin K. C.; Sakai, Masaru; Šimůnek, Jirka

    2009-10-01

    Field capacity is one of the most commonly used, and yet poorly defined, soil hydraulic properties. Traditionally, field capacity has been defined as the amount of soil moisture after excess water has drained away and the rate of downward movement has materially decreased. Unfortunately, this qualitative definition does not lend itself to an unambiguous quantitative approach for estimation. Because of the vagueness in defining what constitutes "drainage of excess water" from a soil, the estimation of field capacity has often been based upon empirical guidelines. These empirical guidelines are either time, pressure, or flux based. In this paper, we developed a numerical approach to estimate field capacity using a flux-based definition. The resulting approach was implemented on the soil parameter data set used by Schaap et al. (2001), and the estimated field capacity was compared to traditional definitions of field capacity. The developed modeling approach was implemented using the HYDRUS-1D software with the capability of simultaneously estimating field capacity for multiple soils with soil hydraulic parameter data. The Richards equation was used in conjunction with the van Genuchten-Mualem model to simulate variably saturated flow in a soil. Using the modeling approach to estimate field capacity also resulted in additional information such as (1) the pressure head, at which field capacity is attained, and (2) the drainage time needed to reach field capacity from saturated conditions under nonevaporative conditions. We analyzed the applicability of the modeling-based approach to estimate field capacity on real-world soils data. We also used the developed method to create contour diagrams showing the variation of field capacity with texture. It was found that using benchmark pressure heads to estimate field capacity from the retention curve leads to inaccurate results. 
Finally, a simple analytical equation was developed to predict field capacity from soil hydraulic parameter information. The analytical equation was found to be effective in its ability to predict field capacities.
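The retention-curve route the authors test (reading field capacity off the van Genuchten curve at a benchmark pressure head) can be sketched as follows. The parameters are typical loam-like values; the paper's point is precisely that this benchmark-head shortcut can be inaccurate relative to the flux-based definition:

```python
def van_genuchten_theta(h_cm, theta_r, theta_s, alpha, n):
    # van Genuchten (1980) water retention curve; h_cm = suction head [cm]
    m = 1.0 - 1.0/n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha*h_cm)**n)**m

# water content at the conventional 330 cm (1/3 bar) benchmark suction,
# using loam-like hydraulic parameters (illustrative values)
theta_fc = van_genuchten_theta(330.0, theta_r=0.078, theta_s=0.43,
                               alpha=0.036, n=1.56)
print(round(theta_fc, 3))
```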

  3. A Moving Mesh Finite Element Algorithm for Singular Problems in Two and Three Space Dimensions

    NASA Astrophysics Data System (ADS)

    Li, Ruo; Tang, Tao; Zhang, Pingwen

    2002-04-01

    A framework for adaptive meshes based on the Hamilton-Schoen-Yau theory was proposed by Dvinsky. In a recent work (2001, J. Comput. Phys.170, 562-588), we extended Dvinsky's method to provide an efficient moving mesh algorithm which compared favorably with the previously proposed schemes in terms of simplicity and reliability. In this work, we will further extend the moving mesh methods based on harmonic maps to deal with mesh adaptation in three space dimensions. In obtaining the variational mesh, we will solve an optimization problem with some appropriate constraints, which is in contrast to the traditional method of solving the Euler-Lagrange equation directly. The key idea of this approach is to update the interior and boundary grids simultaneously, rather than considering them separately. Application of the proposed moving mesh scheme is illustrated with some two- and three-dimensional problems with large solution gradients. The numerical experiments show that our methods can accurately resolve detail features of singular problems in 3D.

  4. Development of Boundary Condition Independent Reduced Order Thermal Models using Proper Orthogonal Decomposition

    NASA Astrophysics Data System (ADS)

    Raghupathy, Arun; Ghia, Karman; Ghia, Urmila

    2008-11-01

    Compact Thermal Models (CTMs) representing IC packages have traditionally been developed using the DELPHI-based (DEvelopment of Libraries of PHysical models for an Integrated design) methodology. The drawbacks of this method are presented, and an alternative method is proposed. A reduced-order model that provides the complete thermal information accurately with fewer computational resources can be effectively used in system-level simulations. Proper Orthogonal Decomposition (POD), a statistical method, can be used to reduce the number of degrees of freedom (variables) in such computations. POD along with the Galerkin projection allows us to create reduced-order models that reproduce the characteristics of the system with a considerable reduction in computational resources while maintaining a high level of accuracy. The goal of this work is to show that this method can be applied to obtain a boundary condition independent reduced-order thermal model for complex components. The methodology is applied to the 1D transient heat equation.
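The core POD computation reduces to a singular value decomposition of a snapshot matrix. A self-contained sketch on synthetic rank-2 data (not the thermal model of the paper):

```python
import numpy as np

# snapshot POD via SVD: columns are "snapshots" of a 1D field in time
x = np.linspace(0.0, 1.0, 100)
snapshots = np.column_stack([
    np.sin(np.pi*x)*np.cos(0.1*k) + 0.5*np.sin(2*np.pi*x)*np.sin(0.1*k)
    for k in range(50)
])

# left singular vectors are the POD modes; singular values rank their energy
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
print(energy[:3])  # the synthetic field is rank 2, so two modes suffice
```

A Galerkin projection of the governing equations onto the leading columns of `U` then yields the reduced-order model.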

  5. Control Theory based Shape Design for the Incompressible Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Cowles, G.; Martinelli, L.

    2003-12-01

    A design method for shape optimization in incompressible turbulent viscous flow has been developed and validated for inverse design. The gradient information is determined using a control theory based algorithm. With such an approach, the cost of computing the gradient is negligible. An additional adjoint system must be solved which requires the cost of a single steady state flow solution. Thus, this method has an enormous advantage over traditional finite-difference based algorithms. The method of artificial compressibility is utilized to solve both the flow and adjoint systems. An algebraic turbulence model is used to compute the eddy viscosity. The method is validated using several inverse wing design test cases. In each case, the program must modify the shape of the initial wing such that its pressure distribution matches that of the target wing. Results are shown for the inversion of both finite thickness wings as well as zero thickness wings which can be considered a model of yacht sails.

  6. Fresnel Lens Solar Concentrator Design Based on Geometric Optics and Blackbody Radiation Equations

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Jayroe, Robert, Jr.

    1999-01-01

    Fresnel lenses have been used for years as solar concentrators in a variety of applications. Several variables affect the final design of these lenses, including lens diameter, image spot distance from the lens, and bandwidth focused in the image spot. Defining the image spot as the geometrical optics circle of least confusion and applying blackbody radiation equations, the spot energy distribution can be determined. These equations are used to design a Fresnel lens to produce maximum flux for a given spot size, lens diameter, and image distance. This approach results in significant increases in solar efficiency over traditional single-wavelength designs.
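The blackbody part of such a design calculation is straightforward: Planck's law gives the spectral radiance, and integrating it over a band estimates the fraction of solar energy available to a lens focused for that band. A rough numerical sketch (trapezoidal quadrature; finite integration limits stand in for the full spectrum):

```python
import math

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(wl, T):
    # Planck's law: blackbody spectral radiance [W / (sr * m^3)]
    return (2*H*C**2 / wl**5) / math.expm1(H*C / (wl*KB*T))

def band_fraction(lo, hi, T, steps=2000):
    # fraction of blackbody radiance in [lo, hi], by trapezoidal integration
    def integral(a, b):
        w = [a + (b - a)*i/steps for i in range(steps + 1)]
        vals = [planck(wi, T) for wi in w]
        return (b - a)/steps * (sum(vals) - 0.5*(vals[0] + vals[-1]))
    return integral(lo, hi) / integral(50e-9, 20e-6)  # denominator ~ total

# fraction of solar (T ~ 5778 K) energy in a 400-1100 nm design band
print(round(band_fraction(400e-9, 1100e-9, 5778.0), 3))
```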

  7. Optimal Power Flow Pursuit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Simonetto, Andrea

    This paper considers distribution networks featuring inverter-interfaced distributed energy resources, and develops distributed feedback controllers that continuously drive the inverter output powers to solutions of AC optimal power flow (OPF) problems. Particularly, the controllers update the power setpoints based on voltage measurements as well as given (time-varying) OPF targets, and entail elementary operations implementable onto low-cost microcontrollers that accompany power-electronics interfaces of gateways and inverters. The design of the control framework is based on suitable linear approximations of the AC power-flow equations as well as Lagrangian regularization methods. Convergence and OPF-target tracking capabilities of the controllers are analytically established. Overall, the proposed method makes it possible to bypass traditional hierarchical setups where feedback control and optimization operate at distinct time scales, and to enable real-time optimization of distribution systems.

  8. The Missing Data Assumptions of the Nonequivalent Groups with Anchor Test (NEAT) Design and Their Implications for Test Equating. Research Report. ETS RR-09-16

    ERIC Educational Resources Information Center

    Sinharay, Sandip; Holland, Paul W.

    2008-01-01

    The nonequivalent groups with anchor test (NEAT) design involves missing data that are missing by design. Three popular equating methods that can be used with a NEAT design are the poststratification equating method, the chain equipercentile equating method, and the item-response-theory observed-score-equating method. These three methods each…

  9. Colonel Blotto Games and Lanchester's Equations: A Novel Military Modeling Combination

    NASA Technical Reports Server (NTRS)

    Collins, Andrew J.; Hester, Patrick T.

    2012-01-01

    Military strategists face a difficult task when engaged in a battle against an adversarial force. They have to predict both what tactics their opponent will employ and the outcomes of any resultant conflicts in order to make the best decision about their actions. Game theory has been the dominant technique used by analysts to investigate the possible actions that an enemy will employ. Traditional game theory can be augmented by use of Lanchester equations, a set of differential equations used to determine the outcome of a conflict. This paper demonstrates a novel combination of game theory and Lanchester equations using Colonel Blotto games. Colonel Blotto games, which are one of the oldest applications of game theory to the military domain, look at the allocation of troops and resources when fighting across multiple areas of operation. This paper demonstrates that employing Lanchester equations within a game overcomes some of the practical problems faced when applying game theory.
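For reference, the aimed-fire Lanchester model referred to above is the ODE pair dR/dt = -bB, dB/dt = -rR, whose invariant rR² - bB² yields the famous square law. A minimal simulation (forward Euler, illustrative parameter values):

```python
import math

# Lanchester aimed-fire (square-law) model: dR/dt = -b*B, dB/dt = -r*R
def fight(R0, B0, r, b, dt=1e-4):
    R, B = float(R0), float(B0)
    while R > 0 and B > 0:
        R, B = R - b*B*dt, B - r*R*dt
    return R, B

R, B = fight(R0=100, B0=80, r=1.0, b=1.0)
# square law: with equal effectiveness the stronger side wins with
# sqrt(R0^2 - B0^2) survivors
print(round(R, 1), math.sqrt(100**2 - 80**2))
```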

  10. Equation-free multiscale computation: algorithms and applications.

    PubMed

    Kevrekidis, Ioannis G; Samaey, Giovanni

    2009-01-01

    In traditional physicochemical modeling, one derives evolution equations at the (macroscopic, coarse) scale of interest; these are used to perform a variety of tasks (simulation, bifurcation analysis, optimization) using an arsenal of analytical and numerical techniques. For many complex systems, however, although one observes evolution at a macroscopic scale of interest, accurate models are only given at a more detailed (fine-scale, microscopic) level of description (e.g., lattice Boltzmann, kinetic Monte Carlo, molecular dynamics). Here, we review a framework for computer-aided multiscale analysis, which enables macroscopic computational tasks (over extended spatiotemporal scales) using only appropriately initialized microscopic simulation on short time and length scales. The methodology bypasses the derivation of macroscopic evolution equations when these equations conceptually exist but are not available in closed form-hence the term equation-free. We selectively discuss basic algorithms and underlying principles and illustrate the approach through representative applications. We also discuss potential difficulties and outline areas for future research.
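The simplest equation-free building block is projective integration: take a few fine-scale steps, estimate the coarse time derivative from them, and leap forward. A stylized sketch, with a trivial stand-in for the microscopic simulator (which in practice would be e.g. kinetic Monte Carlo or molecular dynamics):

```python
# projective integration sketch: inner fine steps, then one coarse leap
def fine_step(u, h):
    # stand-in fine-scale simulator: explicit Euler on du/dt = -u
    return u + h*(-u)

def projective_step(u, h_fine, k_inner, H_outer):
    for _ in range(k_inner):
        u_prev, u = u, fine_step(u, h_fine)
    dudt = (u - u_prev)/h_fine      # coarse derivative from the last fine step
    return u + H_outer*dudt         # projective leap over the large step

u, t = 1.0, 0.0
h_fine, k, H = 0.01, 5, 0.2
for _ in range(10):
    u = projective_step(u, h_fine, k, H)
    t += k*h_fine + H               # total simulated time per outer step
print(u, t)                         # decays toward 0, tracking exp(-t) roughly
```

The leap amortizes the cost of the fine simulator: here each outer step advances 0.25 time units while paying for only 0.05 units of fine-scale simulation.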

  11. Finite elements and finite differences for transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Hafez, M. M.; Murman, E. M.; Wellford, L. C.

    1978-01-01

    The paper reviews the chief finite difference and finite element techniques used for the numerical solution of the nonlinear mixed elliptic-hyperbolic equations governing transonic flow. The forms of the governing equations for unsteady two-dimensional transonic flow considered are the Euler equations, the full potential equation in both conservative and nonconservative form, the transonic small-disturbance equation in both conservative and nonconservative form, and the hodograph equations for the small-disturbance and full-potential cases. Finite difference methods considered include time-dependent methods, relaxation methods, semidirect methods, and hybrid methods. Finite element methods include finite element Lax-Wendroff schemes, the implicit Galerkin method, mixed variational principles, dual iterative procedures, optimal control methods, and least-squares methods.

  12. Comparison of SUL values in oncological 18F-FDG PET/CT; The effect of new LBM formulas.

    PubMed

    Halsne, Trygve; Müller, Ebba Glørsen; Spiten, Ann-Eli; Sherwani, Alexander Gul; Mikalsen, Lars Tore Gyland; Rootwelt-Revheim, Mona-Elisabeth; Stokke, Caroline

    2018-03-29

    Due to better precision and intercompatibility, the use of lean body mass (LBM) as the mass estimate in the calculation of standardized uptake values (SUV) has become more common in research and clinical studies. The equations determining this quantity therefore have to be verified in order to choose the ones that best represent the actual body composition. Methods: LBM was calculated for 44 patients examined with 18F-FDG PET/CT scans by means of James' and Janmahasatian's sex-specific predictive equations, and the results were validated using a CT-based method. The latter makes use of the eyes-to-thighs CT from the PET/CT acquisition protocol and segments the voxels according to Hounsfield units. Intraclass correlation coefficients (ICC) and Bland-Altman plots were used to assess agreement between the various methods. Results: A mean difference of 6.3 kg (-15.1 kg to 2.5 kg LOA) between LBM_James and LBM_CT1 was found. This is higher than the observed mean difference of 3.8 kg (-12.5 kg to 4.9 kg LOA) between LBM_Jan and LBM_CT1. In addition, LBM_Jan had a higher ICC with LBM_CT1 (r_I = 0.87; r_L = 0.60, r_U = 0.94) than LBM_James (r_I = 0.77; r_L = 0.11, r_U = 0.91). Thus, we obtained better agreement between LBM_Jan and LBM_CT1. Although there were exceptions, the overall effect on SUL values was that SUL_James values were greater than SUL_Jan values. Conclusion: Our results verify the reliability of the Janmahasatian formulas against a CT-derived reference standard. Compared with the more traditional and widely available James equations, the Janmahasatian formulas tend to yield better agreement. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
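    The abstract does not reproduce the two predictive equations it compares; the sketch below uses the forms of the James and Janmahasatian LBM equations as commonly published (an assumption, so coefficients should be checked against the original papers before clinical use).

    ```python
    def lbm_james(weight_kg, height_cm, male):
        """James predictive LBM equations (commonly cited form; verify before use)."""
        if male:
            return 1.10 * weight_kg - 128.0 * (weight_kg / height_cm) ** 2
        return 1.07 * weight_kg - 148.0 * (weight_kg / height_cm) ** 2

    def lbm_janmahasatian(weight_kg, height_cm, male):
        """Janmahasatian BMI-based LBM equations (commonly cited form)."""
        bmi = weight_kg / (height_cm / 100.0) ** 2
        if male:
            return 9270.0 * weight_kg / (6680.0 + 216.0 * bmi)
        return 9270.0 * weight_kg / (8780.0 + 244.0 * bmi)

    # SUL is the usual SUV with LBM substituted for total body weight, so a
    # larger LBM estimate directly yields a larger SUL for the same scan.
    m_james = lbm_james(80.0, 180.0, male=True)
    m_jan = lbm_janmahasatian(80.0, 180.0, male=True)
    ```

    For this hypothetical 80 kg, 180 cm male the James estimate exceeds the Janmahasatian one, consistent with the abstract's observation that SUL_James values were generally greater than SUL_Jan values.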

  13. Modified Chebyshev Picard Iteration for Efficient Numerical Integration of Ordinary Differential Equations

    NASA Astrophysics Data System (ADS)

    Macomber, B.; Woollands, R. M.; Probe, A.; Younes, A.; Bai, X.; Junkins, J.

    2013-09-01

    Modified Chebyshev Picard Iteration (MCPI) is an iterative numerical method for approximating solutions of linear or non-linear Ordinary Differential Equations (ODEs) to obtain time histories of system state trajectories. Unlike step-by-step differential equation solvers, the Runge-Kutta family of numerical integrators for example, MCPI approximates long arcs of the state trajectory with an iterative path approximation approach, and is ideally suited to parallel computation. Orthogonal Chebyshev polynomials are used as basis functions during each path iteration; the integrations of the Picard iteration are then done analytically. Due to the orthogonality of the Chebyshev basis functions, the least-squares approximations are computed without matrix inversion; the coefficients are computed robustly from discrete inner products. As a consequence of the discrete sampling and weighting adopted for the inner product definition, Runge-phenomenon errors are minimized near the ends of the approximation intervals. The MCPI algorithm utilizes a vector-matrix framework for computational efficiency. Additionally, all Chebyshev coefficients and integrand function evaluations are independent, meaning they can be computed simultaneously in parallel for further decreased computational cost. Over an order of magnitude speedup from traditional methods is achieved in serial processing, and an additional order of magnitude is achievable in parallel architectures. This paper presents a new MCPI library, a modular toolset designed to allow MCPI to be easily applied to a wide variety of ODE systems. Library users will not have to concern themselves with the underlying mathematics behind the MCPI method. Inputs are the boundary conditions of the dynamical system, the integrand function governing system behavior, and the desired time interval of integration, and the output is a time history of the system states over the interval of interest. Examples from the field of astrodynamics are presented to compare the output from the MCPI library to current state-of-practice numerical integration methods. It is shown that MCPI is capable of out-performing the state-of-practice in terms of computational cost and accuracy.
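    The core idea, Picard iteration y_{k+1}(t) = y0 + ∫ f(t, y_k) dt with each iterate represented in a Chebyshev basis, can be sketched with NumPy's Chebyshev utilities. This is a bare illustration on a scalar test ODE, not the paper's MCPI library or its vector-matrix formulation.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    # Solve dy/dt = -y, y(0) = 1 on [0, 2] by Chebyshev-Picard iteration.
    a, b = 0.0, 2.0
    N = 32
    x = np.cos(np.pi * np.arange(N + 1) / N)     # Chebyshev-Lobatto nodes on [-1, 1]
    t = 0.5 * (b - a) * (x + 1.0) + a            # mapped to [a, b]

    y = np.ones_like(t)                          # initial guess: y_0(t) = y(0)
    for _ in range(60):
        f = -y                                   # integrand f(t, y) = -y at the nodes
        c = C.chebfit(x, f, N)                   # Chebyshev coefficients of f
        ci = C.chebint(c) * 0.5 * (b - a)        # antiderivative, scaled for the mapping
        # Picard update: y0 + integral of f from a to t (constant cancels below)
        y_new = 1.0 + C.chebval(x, ci) - C.chebval(-1.0, ci)
        if np.max(np.abs(y_new - y)) < 1e-13:
            break
        y = y_new

    err = np.max(np.abs(y - np.exp(-t)))         # compare with the exact solution
    ```

    Note that the integration is done on the coefficients (no linear solve), which is the property MCPI exploits, and each node's integrand evaluation is independent, which is what makes the method parallelizable.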

  14. A new multi-domain method based on an analytical control surface for linear and second-order mean drift wave loads on floating bodies

    NASA Astrophysics Data System (ADS)

    Liang, Hui; Chen, Xiaobo

    2017-10-01

    A novel multi-domain method based on an analytical control surface is proposed by combining the use of the free-surface Green function and the Rankine source function. A cylindrical control surface is introduced to subdivide the fluid domain into external and internal domains. Unlike the traditional domain decomposition strategy or multi-block method, the control surface here is not panelized; on it, the velocity potential and normal velocity components are analytically expressed as a series of base functions composed of Laguerre functions in the vertical coordinate and Fourier series in the circumferential direction. The free-surface Green function is applied in the external domain, and the boundary integral equation is constructed on the control surface in the sense of Galerkin collocation by integrating test functions orthogonal to the base functions over the control surface. The external solution gives rise to the so-called Dirichlet-to-Neumann [DN2] and Neumann-to-Dirichlet [ND2] relations on the control surface. Irregular frequencies, which depend only on the radius of the control surface, are present in the external solution; they are removed by extending the boundary integral equation to the interior free surface (circular disc), on which a null normal derivative of the potential is imposed, and the dipole distribution is expressed as a Fourier-Bessel expansion on the disc. In the internal domain, where the Rankine source function is adopted, new boundary integral equations are formulated. Point collocation is imposed over the body surface and free surface, while collocation of the Galerkin type is applied on the control surface. The present method is valid for the computation of both linear and second-order mean drift wave loads. Furthermore, the second-order mean drift force based on the middle-field formulation can be calculated analytically by using the coefficients of the Fourier-Laguerre expansion.

  15. Autofocusing in digital holography using deep learning

    NASA Astrophysics Data System (ADS)

    Ren, Zhenbo; Xu, Zhimin; Lam, Edmund Y.

    2018-02-01

    In digital holography, it is critical to know the distance in order to reconstruct the multi-sectional object. Autofocusing is traditionally solved by reconstructing a stack of in-focus and out-of-focus images and using some focus metric, such as entropy or variance, to calculate the sharpness of each reconstructed image. The distance corresponding to the sharpest image is then taken as the focal position. This method is effective but computationally demanding and time-consuming: to get an accurate estimate, one has to reconstruct many images, and sometimes a coarse search must be followed by a refinement. To overcome this problem, we propose to use deep learning, i.e., a convolutional neural network (CNN). Autofocusing is viewed as a classification problem in which the true distance is encoded as a label, so estimating the distance amounts to labeling a hologram correctly. To train such an algorithm, a total of 1000 holograms are captured under identical conditions (exposure time, incident angle, object) except for the distance. There are 5 labels corresponding to 5 distances. These data are randomly split into three datasets to train, validate and test a CNN. Experimental results show that the trained network is capable of predicting the distance without reconstructing any images or knowing any physical parameters of the setup. The prediction time of this method is far less than that of traditional autofocusing methods.

  16. Constant fields and constant gradients in open ionic channels.

    PubMed Central

    Chen, D P; Barcilon, V; Eisenberg, R S

    1992-01-01

    Ions enter cells through pores in proteins that are holes in dielectrics. The energy of interaction between an ion and the charge induced on the dielectric is many kT, and so the dielectric properties of channel and pore are important. We describe ionic movement by (three-dimensional) Nernst-Planck equations (including flux and net charge). Potential is described by Poisson's equation in the pore and Laplace's equation in the channel wall, allowing induced but not permanent charge. Asymptotic expansions are constructed exploiting the long narrow shape of the pore and the relatively high dielectric constant of the pore's contents. The resulting one-dimensional equations can be integrated numerically; they can be analyzed when channels are short or long (compared with the Debye length). Traditional constant field equations are derived if the induced charge is small, e.g., if the channel is short or if the total concentration gradient is zero. A constant gradient of concentration is derived if the channel is long. Plots directly comparable to experiments are given of current vs. voltage, reversal potential vs. concentration, and slope conductance vs. concentration. This dielectric theory can easily be tested: its parameters can be determined by traditional constant field measurements. The dielectric theory then predicts current-voltage relations quite different from constant field, usually more linear, when gradients of total concentration are imposed. Numerical analysis shows that the interaction of ion and channel can be described by a mean potential if, but only if, the induced charge is negligible, that is to say, the electric field is spatially constant. PMID: 1376159
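    The "traditional constant field equations" the abstract refers to are the Goldman-Hodgkin-Katz relations; as a concrete reference point, the GHK voltage equation gives the reversal potential from relative permeabilities and concentrations. The squid-axon numbers below are standard textbook values, not data from this paper.

    ```python
    import math

    def ghk_reversal_mV(P, c_out, c_in, T=293.15):
        """Goldman-Hodgkin-Katz (constant-field) voltage equation for K+, Na+, Cl-.
        P: relative permeabilities; c_out, c_in: concentrations in mM.
        Note Cl- is an anion, so its inside/outside concentrations swap roles."""
        RT_F = 8.314 * T / 96485.0 * 1000.0   # thermal voltage in mV
        num = P['K'] * c_out['K'] + P['Na'] * c_out['Na'] + P['Cl'] * c_in['Cl']
        den = P['K'] * c_in['K'] + P['Na'] * c_in['Na'] + P['Cl'] * c_out['Cl']
        return RT_F * math.log(num / den)

    # Illustrative squid-axon resting values at 20 C:
    V = ghk_reversal_mV(P={'K': 1.0, 'Na': 0.04, 'Cl': 0.45},
                        c_out={'K': 20.0, 'Na': 440.0, 'Cl': 560.0},
                        c_in={'K': 400.0, 'Na': 50.0, 'Cl': 52.0})
    ```

    These are the measurements whose fitted parameters, per the abstract, suffice to calibrate the dielectric theory and test its different predictions under imposed total-concentration gradients.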

  17. [Series: Utilization of Differential Equations and Methods for Solving Them in Medical Physics (1)].

    PubMed

    Murase, Kenya

    2014-01-01

    Utilization of differential equations and methods for solving them in medical physics is presented. First, the basic concepts and types of differential equations are reviewed. Second, separable differential equations and well-known first-order and second-order differential equations are introduced, and the methods for solving them are described together with several examples. In the next issue, the symbolic and series-expansion methods for solving differential equations will be introduced.
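    A canonical medical-physics example of the separable first-order equations the series covers is radioactive decay, dN/dt = -lam*N, which separates to dN/N = -lam*dt and integrates to N(t) = N0*exp(-lam*t). The sketch below (half-life chosen to resemble Tc-99m, purely for illustration) checks the analytic solution against a crude numerical one.

    ```python
    import math

    def decay_analytic(N0, lam, t):
        """Closed-form solution of the separable ODE dN/dt = -lam*N."""
        return N0 * math.exp(-lam * t)

    def decay_euler(N0, lam, t, steps=100000):
        """Forward-Euler integration of the same ODE, as a sanity check."""
        dt = t / steps
        N = N0
        for _ in range(steps):
            N -= lam * N * dt
        return N

    half_life = 6.0                       # hours (illustrative)
    lam = math.log(2.0) / half_life       # decay constant from the half-life
    N = decay_analytic(1.0, lam, 6.0)     # after one half-life: 0.5
    ```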

  18. Recent Advances in Laplace Transform Analytic Element Method (LT-AEM) Theory and Application to Transient Groundwater Flow

    NASA Astrophysics Data System (ADS)

    Kuhlman, K. L.; Neuman, S. P.

    2006-12-01

    Furman and Neuman (2003) proposed a Laplace Transform Analytic Element Method (LT-AEM) for transient groundwater flow. LT-AEM applies the traditionally steady-state AEM to the Laplace-transformed groundwater flow equation, and back-transforms the resulting solution to the time domain using a Fourier series numerical inverse Laplace transform method (de Hoog et al., 1982). We have extended the method so it can compute hydraulic head and flow velocity distributions due to any two-dimensional combination and arrangement of point, line, circular and elliptical area sinks and sources, nested circular or elliptical regions having different hydraulic properties, and areas of specified head, flux or initial condition. The strengths of all sinks and sources, and the specified head and flux values, can all vary in both space and time in an independent and arbitrary fashion. Initial conditions may vary from one area element to another. A solution is obtained by matching heads and normal fluxes along the boundary of each element. The effect that each element has on the total flow is expressed in terms of generalized Fourier series which converge rapidly (<20 terms) in most cases. As there are more matching points than unknown Fourier terms, the matching is accomplished in Laplace space using least squares. The method is illustrated by calculating the resulting transient head and flow velocities due to an arrangement of elements in both finite and infinite domains. The 2D LT-AEM elements already developed and implemented are currently being extended to solve the 3D groundwater flow equation.
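    The back-transform step of LT-AEM, recovering f(t) from samples of F(s), can be illustrated with the Gaver-Stehfest algorithm, a simpler alternative to the de Hoog method the authors actually use (shown here only because it fits in a few lines; it is not their algorithm).

    ```python
    import math

    def stehfest_coeffs(N):
        """Gaver-Stehfest weights V_k (N must be even)."""
        V = []
        for k in range(1, N + 1):
            s = 0.0
            for j in range((k + 1) // 2, min(k, N // 2) + 1):
                s += (j ** (N // 2) * math.factorial(2 * j) /
                      (math.factorial(N // 2 - j) * math.factorial(j) *
                       math.factorial(j - 1) * math.factorial(k - j) *
                       math.factorial(2 * j - k)))
            V.append((-1) ** (k + N // 2) * s)
        return V

    def invert_laplace(F, t, N=12):
        """Approximate f(t) = L^{-1}[F](t) from real samples of F(s)."""
        V = stehfest_coeffs(N)
        ln2_t = math.log(2.0) / t
        return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

    # Example: F(s) = 1/(s+1) is the Laplace transform of f(t) = exp(-t).
    f1 = invert_laplace(lambda s: 1.0 / (s + 1.0), 1.0)
    ```

    Stehfest only needs F on the real axis but is less robust for oscillatory solutions, one reason Fourier-series methods like de Hoog's are preferred in LT-AEM.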

  19. Entropy-limited hydrodynamics: a novel approach to relativistic hydrodynamics

    NASA Astrophysics Data System (ADS)

    Guercilena, Federico; Radice, David; Rezzolla, Luciano

    2017-07-01

    We present entropy-limited hydrodynamics (ELH): a new approach for the computation of numerical fluxes arising in the discretization of hyperbolic equations in conservation form. ELH is based on the hybridisation of an unfiltered high-order scheme with the first-order Lax-Friedrichs method. The activation of the low-order part of the scheme is driven by a measure of the locally generated entropy inspired by the artificial-viscosity method proposed by Guermond et al. (J. Comput. Phys. 230(11):4248-4267, 2011, doi: 10.1016/j.jcp.2010.11.043). Here, we present ELH in the context of high-order finite-differencing methods and of the equations of general-relativistic hydrodynamics. We study the performance of ELH in a series of classical astrophysical tests in general relativity involving isolated, rotating and nonrotating neutron stars, and including a case of gravitational collapse to a black hole. We present a detailed comparison of ELH with the fifth-order monotonicity preserving method MP5 (Suresh and Huynh in J. Comput. Phys. 136(1):83-99, 1997, doi: 10.1006/jcph.1997.5745), one of the most common high-order schemes currently employed in numerical-relativity simulations. We find that ELH achieves comparable and, in many of the cases studied here, better accuracy than more traditional methods at a fraction of the computational cost (up to ~50% speedup). Given its accuracy and its simplicity of implementation, ELH is a promising framework for the development of new special- and general-relativistic hydrodynamics codes well adapted for massively parallel supercomputers.
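    The robust low-order component that ELH falls back to, the Lax-Friedrichs scheme, is simple enough to sketch on linear advection with periodic boundaries. This is only the generic textbook scheme, not the relativistic ELH hybrid itself.

    ```python
    import numpy as np

    def lax_friedrichs(u0, a, dx, dt, n_steps):
        """First-order Lax-Friedrichs scheme for u_t + a*u_x = 0, periodic BCs:
        u_i^{n+1} = 0.5*(u_{i+1} + u_{i-1}) - a*dt/(2*dx)*(u_{i+1} - u_{i-1})."""
        u = u0.copy()
        c = a * dt / (2.0 * dx)
        for _ in range(n_steps):
            up = np.roll(u, -1)   # u_{i+1}
            um = np.roll(u, 1)    # u_{i-1}
            u = 0.5 * (up + um) - c * (up - um)
        return u

    nx = 200
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    u0 = np.sin(2.0 * np.pi * x)
    dx = 1.0 / nx
    dt = 0.4 * dx                 # CFL number 0.4 for advection speed a = 1
    u = lax_friedrichs(u0, 1.0, dx, dt, n_steps=int(1.0 / dt))
    ```

    The scheme is exactly conservative but strongly diffusive (the sine wave's amplitude decays visibly after one period), which is precisely why ELH activates it only where the local entropy measure flags trouble, keeping the high-order flux elsewhere.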

  20. Singular boundary method for global gravity field modelling

    NASA Astrophysics Data System (ADS)

    Cunderlik, Robert

    2014-05-01

    The singular boundary method (SBM) and the method of fundamental solutions (MFS) are meshless boundary collocation techniques that use the fundamental solution of a governing partial differential equation (e.g., the Laplace equation) as their basis functions. They have been developed to avoid singular numerical integration as well as mesh generation in the traditional boundary element method (BEM). SBM has been proposed to overcome a main drawback of MFS: its controversial fictitious boundary outside the domain. The key idea of SBM is to introduce origin intensity factors that isolate the singularities of the fundamental solution and its derivatives using appropriate regularization techniques. Consequently, the source points can be placed directly on the real boundary and coincide with the collocation nodes. In this study we deal with SBM applied to high-resolution global gravity field modelling. The first numerical experiment presents a numerical solution to the fixed gravimetric boundary value problem. The achieved results are compared with the numerical solutions obtained by MFS or the direct BEM, indicating the efficiency of all methods. In the second numerical experiment, SBM is used to derive the geopotential and its first derivatives from the Tzz components of the gravity disturbing tensor observed by the GOCE satellite mission. A determination of the origin intensity factors makes it possible to evaluate the disturbing potential and gravity disturbances directly on the Earth's surface, where the source points are located. To achieve high-resolution numerical solutions, large-scale parallel computations are performed on a cluster with 1 TB of distributed memory, and an iterative elimination of far zones' contributions is applied.
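    The MFS half of the comparison is easy to sketch: for an interior Laplace Dirichlet problem, superpose fundamental solutions centered on a fictitious circle outside the domain (the very feature SBM eliminates by moving sources onto the real boundary) and fit the coefficients by collocation. The geometry and boundary data below are a toy assumption, not the gravimetric problem of the paper.

    ```python
    import numpy as np

    # MFS for Laplace's equation on the unit disk with boundary data u = x.
    M, N, R = 60, 40, 1.8                             # collocation pts, sources, fictitious radius
    theta_c = 2.0 * np.pi * np.arange(M) / M
    theta_s = 2.0 * np.pi * np.arange(N) / N
    col = np.column_stack([np.cos(theta_c), np.sin(theta_c)])      # on the real boundary
    src = R * np.column_stack([np.cos(theta_s), np.sin(theta_s)])  # fictitious boundary

    def G(p, q):
        """Fundamental solution of the 2-D Laplace equation, -ln|p-q|/(2*pi)."""
        return -np.log(np.linalg.norm(p - q)) / (2.0 * np.pi)

    A = np.array([[G(c, s) for s in src] for c in col])
    g = col[:, 0]                                      # Dirichlet data: u = x on the circle
    coef, *_ = np.linalg.lstsq(A, g, rcond=None)       # least-squares collocation fit

    p = np.array([0.3, 0.2])
    u = sum(coef[j] * G(p, src[j]) for j in range(N))  # exact harmonic solution: u(p) = x = 0.3
    ```

    Because the sources never coincide with collocation points, no singular integration arises, at the cost of the ill-conditioned, R-dependent system; SBM's origin intensity factors regularize the diagonal so that sources and collocation nodes can coincide on the real boundary.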

Top