Numerical study of rotating detonation engine with an array of injection holes
NASA Astrophysics Data System (ADS)
Yao, S.; Han, X.; Liu, Y.; Wang, J.
2017-05-01
This paper adopts injection via an array of holes in three-dimensional numerical simulations of a rotating detonation engine (RDE). The calculation is based on the Euler equations coupled with a one-step Arrhenius chemistry model, and a premixed stoichiometric hydrogen-air mixture is used. This injection model is more practical than the relatively simple full-area injection usually adopted in previous conventional simulations. The computational results capture some important experimental observations, including a transient period after initiation; these phenomena are usually absent in conventional RDE simulations because of the idealized injection approximation. The results are compared with those obtained from other numerical studies and experiments with RDEs.
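The one-step Arrhenius model mentioned above reduces the chemistry to a single global reaction rate. As a minimal sketch of such a source term (our illustration with placeholder parameter values, not the paper's code or constants):

```python
import numpy as np

def arrhenius_rate(rho, Y_fuel, T, A=1.0e10, Ea=1.255e5, R=8.314):
    """One-step Arrhenius fuel consumption rate (illustrative values only).

    rho    : mixture density [kg/m^3]
    Y_fuel : fuel mass fraction [-]
    T      : temperature [K]
    A, Ea  : pre-exponential factor and activation energy (placeholders)
    """
    # omega_dot = -A * rho * Y * exp(-Ea / (R*T)), the usual single-step form
    return -A * rho * Y_fuel * np.exp(-Ea / (R * T))

# Example: reaction rate in a hot pocket of partially burned gas
print(arrhenius_rate(rho=1.2, Y_fuel=0.02, T=1800.0))
```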
Numerical simulation study on rolling-chemical milling process of aluminum-lithium alloy skin panel
NASA Astrophysics Data System (ADS)
Huang, Z. B.; Sun, Z. G.; Sun, X. F.; Li, X. Q.
2017-09-01
Single-curvature parts such as aircraft fuselage skin panels are usually manufactured by a rolling-chemical milling process, which commonly suffers from geometric inaccuracy caused by springback. In most cases, manual adjustment and multiple roll bending are used to control or eliminate the springback. However, these methods increase product cost and cycle time and degrade material performance. It is therefore important to control springback in the rolling-chemical milling process precisely. In this paper, combining experiments and numerical simulation, a simulation model for the rolling-chemical milling process of 2060-T8 aluminum-lithium alloy skin was established and validated by comparing simulation and experimental results. Based on the simulation model, the process parameters that influence the curvature of the skin panel were then analyzed. Finally, springback prediction and compensation can be realized by controlling the process parameters.
NASA Astrophysics Data System (ADS)
Wang, Dongling; Xiao, Aiguo; Li, Xueyang
2013-02-01
Based on W-transformation, some parametric symplectic partitioned Runge-Kutta (PRK) methods depending on a real parameter α are developed. For α=0, the corresponding methods become the usual PRK methods, including Radau IA-IA¯ and Lobatto IIIA-IIIB methods as examples. For any α≠0, the corresponding methods are symplectic and there exists a value α∗ such that energy is preserved in the numerical solution at each step. The existence of the parameter and the order of the numerical methods are discussed. Some numerical examples are presented to illustrate these results.
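The W-transformation construction itself is not reproduced here, but the paper's key phenomenon, a one-parameter family of integrators containing an energy-preserving member, can be mimicked with a toy θ-family applied to the harmonic oscillator; bisection on the parameter recovers the energy-preserving value θ = 1/2 (the implicit midpoint rule). This is an analogy only, not the parametric symplectic PRK methods of the paper:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # harmonic oscillator u' = A u
h = 0.1

def theta_step(u, theta):
    """One step of the theta-method: u1 = u + h*A*(theta*u1 + (1-theta)*u)."""
    I = np.eye(2)
    return np.linalg.solve(I - h * theta * A, (I + h * (1.0 - theta) * A) @ u)

def energy_drift(theta, u0=np.array([1.0, 0.0])):
    """Change of the quadratic energy 0.5*||u||^2 over a single step."""
    u1 = theta_step(u0, theta)
    return 0.5 * (u1 @ u1 - u0 @ u0)

# Bisection on the parameter: drift > 0 at theta = 0, drift < 0 at theta = 1
a, b = 0.0, 1.0
for _ in range(60):
    m = 0.5 * (a + b)
    if energy_drift(m) > 0.0:
        a = m
    else:
        b = m
print(m)   # -> 0.5, i.e. the implicit midpoint rule conserves this energy
```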
Effective numerical method of spectral analysis of quantum graphs
NASA Astrophysics Data System (ADS)
Barrera-Figueroa, Víctor; Rabinovich, Vladimir S.
2017-05-01
We present an effective numerical method for determining the spectra of periodic metric graphs equipped with Schrödinger operators, with real-valued periodic electric potentials as Hamiltonians and with Kirchhoff and Neumann conditions at the vertices. Our method is based on the spectral parameter power series method, which leads to a series representation of the dispersion equation that is suitable for both analytical and numerical calculations. Several important examples demonstrate the effectiveness of our method for periodic graphs of interest whose potentials are common in quantum mechanics.
Optimal least-squares finite element method for elliptic problems
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Povinelli, Louis A.
1991-01-01
An optimal least-squares finite element method is proposed for two-dimensional and three-dimensional elliptic problems, and its advantages over the mixed Galerkin method and the usual least-squares finite element method are discussed. In the usual least-squares finite element method, the second-order equation −∇·(∇u) + u = f is recast as a first-order system: −∇·p + u = f, ∇u − p = 0. Error analysis and numerical experiments show that, in this usual least-squares finite element method, the rate of convergence for the flux p is one order lower than optimal. To obtain an optimal least-squares method, the irrotationality condition ∇×p = 0 should be included in the first-order system.
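In this notation, the optimal method can be read as minimizing a least-squares functional over the augmented first-order system; a sketch of that functional (our paraphrase of the construction, not the paper's exact statement) is

```latex
\mathcal{J}(u,\mathbf{p}) =
\left\| -\nabla\cdot\mathbf{p} + u - f \right\|_0^2
+ \left\| \nabla u - \mathbf{p} \right\|_0^2
+ \left\| \nabla\times\mathbf{p} \right\|_0^2 ,
```

where the last term enforces the irrotationality constraint that restores the optimal convergence rate for p.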
Excel spreadsheet in teaching numerical methods
NASA Astrophysics Data System (ADS)
Djamila, Harimi
2017-09-01
One important objective in teaching numerical methods to undergraduates is to bring them to a working comprehension of numerical algorithms. Although manual calculation is important for understanding a procedure, it is time-consuming and error-prone, particularly for the iteration loops used in many numerical methods. Many commercial programs, such as Matlab, Maple, and Mathematica, are useful for teaching numerical methods, but they are usually not user-friendly for the uninitiated. An Excel spreadsheet offers an entry level of programming that can be used on or off campus, without distracting students with writing code. It must be emphasized that general commercial software should still be introduced later for more elaborate problems. This article reports on a strategy for teaching numerical methods in undergraduate engineering programs. It is directed at students, lecturers and researchers in the engineering field.
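The spreadsheet approach amounts to laying out one iteration per row so that students can watch convergence happen. A minimal Python analogue of such an iteration table, here for Newton's method on x² − 2 = 0 (our example, not one from the article), is:

```python
def newton_table(f, df, x0, n=8):
    """Print an iteration table, one row per iteration, spreadsheet style."""
    x = x0
    print(f"{'k':>2} {'x_k':>18} {'f(x_k)':>12}")
    for k in range(n):
        print(f"{k:>2} {x:>18.12f} {f(x):>12.2e}")
        x = x - f(x) / df(x)   # Newton update computed from the previous row

newton_table(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
```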
NASA Astrophysics Data System (ADS)
Lukyanenko, D. V.; Shishlenin, M. A.; Volkov, V. T.
2018-01-01
We propose a numerical method for solving a coefficient inverse problem for a nonlinear singularly perturbed reaction-diffusion-advection equation with final-time observation data, based on asymptotic analysis and the gradient method. Asymptotic analysis allows us to extract a priori information about the interior layer (moving front) that appears in the direct problem and the boundary layers that appear in the conjugate problem. We describe and implement a method for constructing a dynamically adapted mesh based on this a priori information. The dynamically adapted mesh significantly reduces the computational cost and improves numerical stability in comparison with the usual approaches. A numerical example shows the effectiveness of the proposed method.
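One simple way to realize such a dynamically adapted mesh is to concentrate nodes around the front location predicted by the asymptotic analysis. The sketch below is our one-dimensional illustration of the idea; the paper's construction is more elaborate:

```python
import numpy as np

def adapted_mesh(x_front, n=101, width=0.05, frac=0.6):
    """1-D mesh on [0,1] with a fraction `frac` of the nodes packed into a
    layer of half-width `width` around the predicted front position."""
    n_layer = int(frac * n)
    layer = np.linspace(x_front - width, x_front + width, n_layer)
    outer = np.linspace(0.0, 1.0, n - n_layer)
    # merge, clip the layer to the domain, and drop duplicates (sorted result)
    return np.union1d(np.clip(layer, 0.0, 1.0), outer)

mesh = adapted_mesh(x_front=0.37)
print(len(mesh), mesh.min(), mesh.max())
```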
Dynamic one-dimensional modeling of secondary settling tanks and system robustness evaluation.
Li, Ben; Stenstrom, M K
2014-01-01
One-dimensional secondary settling tank models are widely used in current engineering practice for design and optimization, and can usually be expressed as a nonlinear hyperbolic or nonlinear strongly degenerate parabolic partial differential equation (PDE). Reliable numerical methods are needed to produce approximate solutions that converge to the exact analytical solutions. In this study, we introduced a reliable numerical technique, the Yee-Roe-Davis (YRD) method, as the governing PDE solver, and compared its reliability with the prevalent Stenstrom-Vitasovic-Takács (SVT) method by assessing their simulation results at various operating conditions. The YRD method also produced solutions similar to the previously developed Method G and the Engquist-Osher method. The YRD and SVT methods were also used for a time-to-failure evaluation, and the results show that the choice of numerical method can greatly impact the solution. Reliable numerical methods, such as the YRD method, are strongly recommended.
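Godunov-type solvers of this kind differ mainly in the numerical flux. As an illustration of the Engquist-Osher flux mentioned above, applied to a Vesilind-type batch settling flux f(u) = v0·u·exp(−ru) (a common choice in the settling literature; the parameter values here are placeholders):

```python
import numpy as np

v0, r = 10.0, 0.5
f  = lambda u: v0 * u * np.exp(-r * u)            # Vesilind batch settling flux
df = lambda u: v0 * np.exp(-r * u) * (1.0 - r * u)

def engquist_osher_flux(a, b, nq=200):
    """EO flux: F(a,b) = f(0) + int_0^a max(f',0) + int_0^b min(f',0)."""
    ua = np.linspace(0.0, a, nq)
    ub = np.linspace(0.0, b, nq)
    fp = np.trapz(np.maximum(df(ua), 0.0), ua)    # upwind part from the left state
    fm = np.trapz(np.minimum(df(ub), 0.0), ub)    # upwind part from the right state
    return f(0.0) + fp + fm

print(engquist_osher_flux(1.0, 3.0))
```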
An adaptive finite element method for the inequality-constrained Reynolds equation
NASA Astrophysics Data System (ADS)
Gustafsson, Tom; Rajagopal, Kumbakonam R.; Stenberg, Rolf; Videman, Juha
2018-07-01
We present a stabilized finite element method for the numerical solution of cavitation in lubrication, modeled as an inequality-constrained Reynolds equation. The cavitation model is written as a variable coefficient saddle-point problem and approximated by a residual-based stabilized method. Based on our recent results on the classical obstacle problem, we present optimal a priori estimates and derive novel a posteriori error estimators. The method is implemented as a Nitsche-type finite element technique and shown in numerical computations to be superior to the usually applied penalty methods.
Spectral flux from low-density photospheres - Numerical results
NASA Technical Reports Server (NTRS)
Hershkowitz, S.; Linder, E.; Wagoner, R. V.
1986-01-01
Radiative transfer through sharp, quasi-static atmospheres whose opacity is dominated by hydrogen is considered at densities low enough that scattering usually dominates absorption and radiative excitations usually dominate collisional excitations. Numerical results for the continuum spectral flux are obtained for effective temperatures T_e = 6000-16,000 K and scale heights ΔR = 10^10 - 10^14 cm. The spectra are significantly different from those obtained if LTE level populations are assumed. Comparison with observations of the Type II supernova 1980K tends to increase the value of the Hubble constant previously obtained by the Baade (1926) method.
NASA Astrophysics Data System (ADS)
Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten
2018-06-01
This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined from additional boundary conditions. Unlike the existing methods in the literature, which usually employ a first-order in time gradient-like system (such as steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamically selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
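A second-order in time gradient-like system has the generic form u'' + ηu' = −∇J(u). A minimal damped semi-implicit (symplectic-type) Euler sketch on a toy quadratic objective, not the paper's regularized inverse problem or its dynamical parameter selection, looks like this:

```python
import numpy as np

def damped_flow(grad, u0, eta=1.0, h=0.1, steps=200):
    """Semi-implicit Euler for  u'' + eta*u' = -grad(u)  (implicit damping)."""
    u, v = u0.copy(), np.zeros_like(u0)
    for _ in range(steps):
        v = (v - h * grad(u)) / (1.0 + h * eta)   # velocity update
        u = u + h * v                             # position update
    return u

# Toy objective J(u) = 0.5*||A u - b||^2  ->  grad J = A^T (A u - b)
A = np.array([[2.0, 0.0], [0.0, 0.5]])
b = np.array([1.0, 1.0])
grad = lambda u: A.T @ (A @ u - b)
print(damped_flow(grad, np.zeros(2)))   # -> approx. the minimizer [0.5, 2.0]
```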
S-matrix method for the numerical determination of bound states.
NASA Technical Reports Server (NTRS)
Bhatia, A. K.; Madan, R. N.
1973-01-01
A rapid numerical technique for the determination of bound states of a partial-wave-projected Schroedinger equation is presented. First, one needs to integrate the equation only outwards, as in the scattering case; second, the number of trials necessary to determine the eigenenergy and the corresponding eigenfunction is considerably smaller than in the usual method. As a nontrivial example of the technique, bound states are calculated in the exchange approximation for the e-/He+ system and the l = 1 partial wave.
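For contrast, the usual outward-integration approach that the S-matrix method improves upon can be sketched as a shooting scheme: integrate the radial equation outward and bisect on the energy until the solution decays at large r. The toy example below uses a finite spherical well, not the e-/He+ exchange calculation of the paper:

```python
import numpy as np

V0, a = 5.0, 1.0                      # well depth and radius (units 2m/hbar^2 = 1)
V = lambda r: -V0 if r < a else 0.0

def u_at_R(E, R=10.0, n=2000):
    """Integrate u'' = (V(r) - E) u outward from u(0)=0, u'(0)=1 with RK4."""
    h = R / n
    rhs = lambda r, y: np.array([y[1], (V(r) - E) * y[0]])
    r, y = 0.0, np.array([0.0, 1.0])
    for _ in range(n):
        k1 = rhs(r, y)
        k2 = rhs(r + h / 2, y + h / 2 * k1)
        k3 = rhs(r + h / 2, y + h / 2 * k2)
        k4 = rhs(r + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        r += h
    return y[0]                       # sign of u(R) flips as E crosses an eigenvalue

# Bisection on the energy (each trial is one outward integration)
Elo, Ehi = -V0 + 1e-6, -1e-6
for _ in range(50):
    Em = 0.5 * (Elo + Ehi)
    if u_at_R(Elo) * u_at_R(Em) < 0.0:
        Ehi = Em
    else:
        Elo = Em
print(Em)                             # ground-state energy of the toy well
```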
Newton's method: A link between continuous and discrete solutions of nonlinear problems
NASA Technical Reports Server (NTRS)
Thurston, G. A.
1980-01-01
Newton's method for nonlinear mechanics problems replaces the governing nonlinear equations by an iterative sequence of linear equations. When the linear equations are linear differential equations, the equations are usually solved by numerical methods. The iterative sequence in Newton's method can exhibit poor convergence properties when the nonlinear problem has multiple solutions for a fixed set of parameters, unless the iterative sequences are aimed at solving for each solution separately. The theory of the linear differential operators is often a better guide for solution strategies in applying Newton's method than the theory of linear algebra associated with the numerical analogs of the differential operators. In fact, the theory for the differential operators can suggest the choice of numerical linear operators. In this paper the method of variation of parameters from the theory of linear ordinary differential equations is examined in detail in the context of Newton's method to demonstrate how it might be used as a guide for numerical solutions.
Numerical simulation of stress amplification induced by crack interaction in human femur bone
NASA Astrophysics Data System (ADS)
Alia, Noor; Daud, Ruslizam; Ramli, Mohammad Fadzli; Azman, Wan Zuki; Faizal, Ahmad; Aisyah, Siti
2015-05-01
This research concerns a computational study of stress amplification induced by crack interaction in human femur bone. Cracks in the femur usually occur under large loads or stresses, and the resulting fractures take a long time to heal. At present, crack interaction is still not well understood because of the complexity of bone, so its brittle fracture behavior may be underestimated or predicted inaccurately. This study investigates the geometrical effect of double co-planar edge cracks on the stress intensity factor (K) in femur bone, focusing on the amplification effect of the interacting cracks. A numerical model is developed using the concepts of fracture mechanics and the finite element method (FEM) to solve the interacting-crack problem within linear elastic fracture mechanics (LEFM) theory. As a result, this study identifies the crack interaction limit (CIL) and the crack unification limit (CUL) in the femur model developed. Future work will include varying the load, adding thickness to the model, and using different theories or methods for calculating the stress intensity factor (K).
An efficient numerical algorithm for transverse impact problems
NASA Technical Reports Server (NTRS)
Sankar, B. V.; Sun, C. T.
1985-01-01
Transverse impact problems in which elastic and plastic indentation effects are considered involve a nonlinear integral equation for the contact force, which, in practice, is usually solved by an iterative scheme with small time increments. In this paper, a numerical method is proposed in which the iterations of the nonlinear problem are separated from the structural response computations. This makes the numerical procedure both simpler and more efficient. The proposed method is applied to impact problems for which solutions are available, and good agreement is found. The effect of the size of the time increment on the results is also discussed.
Local lubrication model for spherical particles within incompressible Navier-Stokes flows.
Lambert, B; Weynans, L; Bergmann, M
2018-03-01
Lubrication forces are short-range hydrodynamic interactions essential for describing particle suspensions. They are usually underestimated in direct numerical simulations of particle-laden flows. In this paper, we propose a lubrication model for a coupled volume penalization method and discrete element method solver that estimates the unresolved hydrodynamic forces and torques in an incompressible Navier-Stokes flow. Corrections are made locally on the surface of the interacting particles without any assumption on the global particle shape. The numerical model has been validated against experimental data and performs as well as existing numerical models that are limited to spherical particles.
Numerical modeling of an enhanced very early time electromagnetic (VETEM) prototype system
Cui, T.J.; Chew, W.C.; Aydiner, A.A.; Wright, D.L.; Smith, D.V.; Abraham, J.D.
2000-01-01
In this paper, two numerical models are presented to simulate an enhanced very early time electromagnetic (VETEM) prototype system used for buried-object detection and environmental problems. The VETEM system usually contains a transmitting loop antenna and a receiving loop antenna, which are run over a lossy ground to detect buried objects. In the first numerical model, the loop antennas are accurately analyzed using the method of moments (MoM) for wire antennas above or buried in lossy ground. Conjugate gradient (CG) methods, with the use of the fast Fourier transform (FFT) or MoM, are then applied to investigate the scattering from buried objects. Reflected and scattered magnetic fields are evaluated at the receiving loop to calculate the output electric current. However, because the working frequency of the VETEM system is usually low, two magnetic dipoles are used to replace the transmitter and receiver in the second numerical model. Comparing the two models, the second is simpler but valid only at low frequency or for small loops, while the first is more general. All computations are performed in the frequency domain, and the FFT is used to obtain the time-domain responses. Numerical examples show that the simulation results from the two models agree very well over the frequency range from 10 kHz to 10 MHz, and both are close to the measured data.
NASA Astrophysics Data System (ADS)
Batailly, Alain; Magnain, Benoît; Chevaugeon, Nicolas
2013-05-01
The numerical simulation of contact problems is still a delicate matter, especially when large transformations are involved. In that case, relatively large sliding can occur between contact surfaces, and the discretization error induced by usual finite elements may not be satisfactory. In particular, usual elements lead to a facetization of the contact surface, meaning an unavoidable discontinuity of the normal vector to this surface. Uncertainty over the precision of the results, irregularity of the displacement of the contact nodes and even numerical oscillations of the contact reaction force may result from such discontinuity. Among the existing methods for tackling such an issue, one may consider mortar elements (Fischer and Wriggers, Comput Methods Appl Mech Eng 195:5020-5036, 2006; McDevitt and Laursen, Int J Numer Methods Eng 48:1525-1547, 2000; Puso and Laursen, Comput Methods Appl Mech Eng 93:601-629, 2004), smoothing of the contact surfaces with an additional geometrical entity (B-splines or NURBS) (Belytschko et al., Int J Numer Methods Eng 55:101-125, 2002; Kikuchi, Penalty/finite element approximations of a class of unilateral contact problems. Penalty method and finite element method, ASME, New York, 1982; Legrand, Modèles de prediction de l'interaction rotor/stator dans un moteur d'avion. PhD thesis, École Centrale de Nantes, Nantes, 2005; Muñoz, Comput Methods Appl Mech Eng 197:979-993, 2008; Wriggers and Krstulovic-Opara, J Appl Math Mech (ZAMM) 80:77-80, 2000), and the use of isogeometric analysis (Temizer et al., Comput Methods Appl Mech Eng 200:1100-1112, 2011; Hughes et al., Comput Methods Appl Mech Eng 194:4135-4195, 2005; de Lorenzis et al., Int J Numer Meth Eng, in press, 2011). In the present paper, we focus on the last two methods, which are combined with a finite element code using the bi-potential method for contact management (Feng et al., Comput Mech 36:375-383, 2005). A comparative study focusing on the pros and cons of each method regarding geometrical precision and numerical stability of the contact solution is proposed. The scope of this study is limited to 2D contact problems, for which we consider several types of finite elements. Test cases are given in order to illustrate this comparative study.
Numerical techniques in radiative heat transfer for general, scattering, plane-parallel media
NASA Technical Reports Server (NTRS)
Sharma, A.; Cogley, A. C.
1982-01-01
The study of radiative heat transfer with scattering usually leads to singular Fredholm integral equations. This paper presents an accurate and efficient numerical method for solving the integral equations that govern radiative equilibrium problems in plane-parallel geometry for both grey and nongrey, anisotropically scattering media. In particular, the nongrey problem is represented by a spectral integral of a system of nonlinear integral equations in space, which has not been solved previously. The numerical technique is constructed to handle this unique nongrey governing equation as well as the difficulties caused by singular kernels. Example problems are solved, and the method's accuracy and computational speed are analyzed.
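For a nonsingular Fredholm equation of the second kind, y(x) = g(x) + λ∫K(x,t)y(t)dt, the standard Nyström discretization reduces the problem to a linear system. Singular kernels such as those treated in the paper require the specialized handling the authors construct, but the regular-kernel skeleton (our generic sketch) looks like this:

```python
import numpy as np

def nystrom(g, K, lam, a=0.0, b=1.0, n=64):
    """Solve y(x) = g(x) + lam * int_a^b K(x,t) y(t) dt on Gauss-Legendre nodes."""
    t, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * t + 0.5 * (b + a)       # map nodes from [-1,1] to [a,b]
    w = 0.5 * (b - a) * w
    A = np.eye(n) - lam * K(x[:, None], x[None, :]) * w[None, :]
    return x, np.linalg.solve(A, g(x))

# Smooth-kernel example chosen so that the exact solution is y(x) = 1:
# 1 = g(x) + int_0^1 x*t dt  with  g(x) = 1 - x/2
K = lambda x, t: x * t
g = lambda x: 1.0 - 0.5 * x
x, y = nystrom(g, K, lam=1.0)
print(np.max(np.abs(y - 1.0)))                  # error near machine precision
```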
NASA Astrophysics Data System (ADS)
Wang, Wei; Shen, Jianqi
2018-06-01
The use of a shaped beam in applications relying on light scattering depends strongly on the ability to evaluate the beam shape coefficients (BSCs) efficiently. Numerical techniques for evaluating the BSCs of a shaped beam, such as quadrature, localized approximation (LA) and integral localized approximation (ILA), have been developed within the framework of generalized Lorenz-Mie theory (GLMT). Quadrature methods usually employ two- or three-dimensional integrations. In this work, the expressions for the BSCs of an elliptical Gaussian beam (EGB) are simplified into one-dimensional integrals so as to speed up the numerical computation. Numerical results for the BSCs are used to reconstruct the beam field, and the fidelity of the reconstructed field to the given beam field is estimated. It is demonstrated that the proposed method is much faster than two-dimensional integration and acquires more accurate results than the LA method. Limitations of the quadrature method and of the LA method in numerical calculations are analyzed in detail.
Numerical methods in Markov chain modeling
NASA Technical Reports Server (NTRS)
Philippe, Bernard; Saad, Youcef; Stewart, William J.
1989-01-01
Several methods for computing stationary probability distributions of Markov chains are described and compared. The main linear algebra problem consists of computing an eigenvector of a sparse, usually nonsymmetric, matrix associated with a known eigenvalue. It can also be cast as the problem of solving a homogeneous singular linear system. Several methods based on combinations of Krylov subspace techniques are presented. The performance of these methods on some realistic problems is compared.
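The core linear-algebra task described here can be illustrated on a small dense chain; a production code would use sparse Krylov solvers as the paper discusses. The transition matrix below is an assumed toy example:

```python
import numpy as np

# Row-stochastic transition matrix of a small Markov chain (toy example)
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

# Stationary distribution: left eigenvector pi P = pi for the known eigenvalue 1
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()                       # normalize to a probability distribution
print(pi, pi @ P)                    # pi @ P equals pi up to round-off
```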
Dekkers, A L M; Slob, W
2012-10-01
In dietary exposure assessment, statistical methods exist for estimating the usual intake distribution from daily intake data. These methods transform the dietary intake data to normal observations, eliminate the within-person variance, and then back-transform the data to the original scale. We propose Gaussian Quadrature (GQ), a numerical integration method, as an efficient way of performing the back-transformation. We compare GQ with six published methods. One method uses a log-transformation, while the other methods, including GQ, use a Box-Cox transformation. This study shows that, for various parameter choices, the methods with a Box-Cox transformation estimate the theoretical usual intake distributions quite well, although one method, a Taylor approximation, is less accurate. Two applications, on folate intake and fruit consumption, confirmed these results. In one extreme case, some methods, including GQ, could not be applied for low percentiles; we solved this problem by modifying GQ. One method is based on the assumption that the daily intakes are log-normally distributed, and even if this condition is not fulfilled, the log-transformation performs well as long as the within-individual variance is small compared to the mean. We conclude that the modified GQ is an efficient, fast and accurate method for estimating the usual intake distribution.
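The back-transformation can be written as an expectation over the within-person distribution on the transformed scale, which Gauss-Hermite quadrature evaluates with a handful of nodes. The sketch below treats the log-transformation case, where a closed form exists for checking; the paper's GQ targets the Box-Cox case:

```python
import numpy as np

def usual_intake(m, sigma_within, n_nodes=9):
    """Back-transform from the log scale: E[exp(m + e)], e ~ N(0, sigma^2),
    evaluated by Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    return np.sum(w * np.exp(m + np.sqrt(2.0) * sigma_within * x)) / np.sqrt(np.pi)

# Closed form for the log-normal case is exp(m + sigma^2 / 2)
m, s = np.log(200.0), 0.4
print(usual_intake(m, s), np.exp(m + s**2 / 2))   # the two values agree
```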
NASA Astrophysics Data System (ADS)
Ohsaku, Tadafumi
2002-08-01
We solve numerically various types of gap equations developed in the relativistic BCS and generalized BCS framework presented in Part I of this paper. We apply the method not only to the usual solid metal but also to other physical systems, using a homogeneous fermion gas approximation. We examine the relativistic effects on the thermal properties and the Meissner effect of BCS and generalized BCS superconductivity in various cases.
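For orientation, the simplest (non-relativistic, zero-temperature) member of this family of gap equations, Δ = λ∫₀^{ω_D} Δ/√(ξ² + Δ²) dξ, is already solvable by plain fixed-point iteration; the relativistic and generalized equations of the paper follow the same numerical pattern. A sketch with toy parameter values:

```python
import numpy as np

lam, omega_D = 0.3, 1.0              # toy coupling constant and Debye cutoff

def gap_rhs(delta, n=4000):
    """Right-hand side of Delta = lam * int_0^omega_D Delta/sqrt(xi^2+Delta^2) dxi."""
    xi = np.linspace(0.0, omega_D, n)
    return lam * np.trapz(delta / np.sqrt(xi**2 + delta**2), xi)

delta = 0.1
for _ in range(300):                 # fixed-point iteration converges linearly here
    delta = gap_rhs(delta)

print(delta, omega_D / np.sinh(1.0 / lam))   # numerical vs closed-form solution
```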
Richter, Christiane; Kotz, Frederik; Giselbrecht, Stefan; Helmer, Dorothea; Rapp, Bastian E
2016-06-01
The fluid mechanics of microfluidics is distinctly simpler than the fluid mechanics of macroscopic systems. In macroscopic systems, effects such as non-laminar flow, convection and gravity need to be accounted for, all of which can usually be neglected in microfluidic systems. Still, there exists only a very limited selection of channel cross-sections for which the Navier-Stokes equation for pressure-driven Poiseuille flow can be solved analytically; from these solutions, velocity profiles as well as flow rates can be calculated. Whenever a cross-section is not highly symmetric (rectangular, elliptical or circular), the Navier-Stokes equation usually cannot be solved analytically, and numerical methods are required. In many instances, however, it is not necessary to turn to complex numerical solver packages to derive, e.g., the velocity profile of a more complex microfluidic channel cross-section. In this paper, a simple spreadsheet analysis tool (here: Microsoft Excel) is used to implement a simple numerical scheme that allows solving the Navier-Stokes equation for arbitrary channel cross-sections.
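The numerical scheme implemented in the spreadsheet is essentially a relaxation of the Poisson problem μ∇²w = dp/dx on the cross-section with no-slip walls. A compact Python equivalent for an arbitrary cross-section mask (our re-implementation of the idea, not the authors' spreadsheet) is:

```python
import numpy as np

def poiseuille_velocity(mask, dpdx=-1.0, mu=1e-3, h=1e-5, iters=20000):
    """Jacobi iteration for mu * laplacian(w) = dpdx on a masked cross-section.

    mask : boolean array, True inside the channel (w = 0 enforced elsewhere)
    h    : grid spacing [m]
    """
    w = np.zeros(mask.shape)
    src = -dpdx * h * h / (4.0 * mu)               # constant source per sweep
    for _ in range(iters):
        w_new = 0.25 * (np.roll(w, 1, 0) + np.roll(w, -1, 0) +
                        np.roll(w, 1, 1) + np.roll(w, -1, 1)) + src
        w = np.where(mask, w_new, 0.0)             # no-slip outside the mask
    return w

# L-shaped cross-section as an example of a non-symmetric geometry
mask = np.zeros((40, 40), dtype=bool)
mask[1:-1, 1:20] = True
mask[1:20, 1:-1] = True
w = poiseuille_velocity(mask)
print(w.max(), w.sum() * (1e-5)**2)                # peak velocity and flow rate
```

Each Jacobi sweep is exactly what one spreadsheet recalculation pass performs when every interior cell averages its four neighbours and adds a constant source term.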
NASA Astrophysics Data System (ADS)
Sirikham, Adisorn; Zhao, Yifan; Mehnen, Jörn
2017-11-01
Thermography is a promising method for detecting subsurface defects, but accurate measurement of defect depth remains a major challenge because thermographic signals are typically corrupted by imaging noise and affected by 3-D heat conduction. Existing methods based on numerical models are susceptible to signal noise, and methods based on analytical models require rigorous assumptions that usually cannot be satisfied in practical applications. This paper presents a new method that improves the measurement accuracy of subsurface defect depth by determining the thermal wave reflection coefficient, usually assumed to be known in advance, directly from the observed data. This is achieved by introducing a new heat transfer model with multiple physical parameters that better describes the observed thermal behaviour in pulsed thermographic inspection. Numerical simulations are used to evaluate the performance of the proposed method against four selected state-of-the-art methods. Results show that the accuracy of depth measurement is improved by up to 10% when the noise level is high and the thermal wave reflection coefficient is low. The feasibility of the proposed method on real data is validated through a case study on characterising flat-bottom holes in carbon fibre reinforced polymer (CFRP) laminates, which have wide application in various sectors of industry.
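For context, the classical 1-D analytical model that such methods build on gives the front-surface temperature rise of a plate of thickness L after a Dirac heat pulse as (standard pulsed-thermography result; R is the thermal wave reflection coefficient that the proposed method estimates from data rather than assuming):

```latex
\Delta T(t) = \frac{Q}{e\sqrt{\pi t}}
\left[ 1 + 2\sum_{n=1}^{\infty} R^{n}
\exp\!\left( -\frac{n^{2}L^{2}}{\alpha t} \right) \right],
```

where Q is the absorbed energy density, e the thermal effusivity and α the thermal diffusivity; R = 1 recovers the ideal adiabatic plate.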
NASA Astrophysics Data System (ADS)
Kahnert, Michael
2016-07-01
Numerical solution methods for electromagnetic scattering by non-spherical particles comprise a variety of different techniques, which can be traced back to different assumptions and solution strategies applied to the macroscopic Maxwell equations. One can distinguish between time- and frequency-domain methods; further, one can divide numerical techniques into finite-difference methods (which are based on approximating the differential operators), separation-of-variables methods (which are based on expanding the solution in a complete set of functions, thus approximating the fields), and volume integral-equation methods (which are usually solved by discretisation of the target volume and invoking the long-wave approximation in each volume cell). While existing reviews of the topic often tend to have a target audience of program developers and expert users, this tutorial review is intended to accommodate the needs of practitioners as well as novices to the field. The required conciseness is achieved by limiting the presentation to a selection of illustrative methods, and by omitting many technical details that are not essential at a first exposure to the subject. On the other hand, the theoretical basis of numerical methods is explained with little compromises in mathematical rigour; the rationale is that a good grasp of numerical light scattering methods is best achieved by understanding their foundation in Maxwell's theory.
NASA Astrophysics Data System (ADS)
Lutsenko, N. A.; Fetsov, S. S.
2017-10-01
A mathematical model and a numerical method are proposed for investigating one-dimensional time-dependent gas flows through a packed bed of encapsulated phase change material (PCM). The model is based on the assumption of interacting interpenetrating continua and includes equations of state, continuity, momentum conservation and energy for the PCM and the gas. The advantage of the method is that it does not require predicting the location of the phase transition zone; it captures this zone automatically, as in a usual shock-capturing method. One application of the developed numerical model is the simulation of a novel Adiabatic Compressed Air Energy Storage (A-CAES) system with a Thermal Energy Storage (TES) subsystem based on encapsulated PCM in a packed bed. Preliminary test calculations give hope that the method can be effectively applied in the future for modelling the charge and discharge processes in such TES with PCM.
New insight in spiral drawing analysis methods - Application to action tremor quantification.
Legrand, André Pierre; Rivals, Isabelle; Richard, Aliénor; Apartis, Emmanuelle; Roze, Emmanuel; Vidailhet, Marie; Meunier, Sabine; Hainque, Elodie
2017-10-01
Spiral drawing is one of the standard tests used to assess tremor severity in the clinical evaluation of medical treatments. Tremor severity is usually estimated through visual rating of the drawings by movement disorders experts. Several approaches based on mathematical signal analysis of recorded spiral drawings have been proposed to replace this rater-dependent estimate. The objective of the present study is to propose new numerical methods and to evaluate their agreement with visual rating and their reproducibility. Series of spiral drawings by patients with essential tremor were visually rated by a board of experts. In addition to the usual velocity analysis, three new numerical methods were tested and compared: static unraveling, dynamic unraveling, and empirical mode decomposition. The reproducibility of both visual and numerical ratings was estimated, and their agreement was evaluated. The statistical analysis demonstrated excellent agreement between visual and numerical ratings, and more reproducible results with numerical methods than with visual ratings. The velocity method and the new numerical methods are in good agreement; among the latter, static and dynamic unraveling both display smaller dispersion and are easier to automate. The reliable scores obtained with the proposed numerical methods suggest that their implementation on a digitizing tablet, whether connected to a computer or standalone, provides an efficient automatic tool for tremor severity assessment.
Applying the method of fundamental solutions to harmonic problems with singular boundary conditions
NASA Astrophysics Data System (ADS)
Valtchev, Svilen S.; Alves, Carlos J. S.
2017-07-01
The method of fundamental solutions (MFS) is known to produce highly accurate numerical results for elliptic boundary value problems (BVPs) with smooth boundary conditions posed in analytic domains. However, because of the analyticity of the shape functions in its approximation basis, the MFS is usually disregarded when the boundary functions possess singularities. In this work we present a modification of the classical MFS that can be applied to the numerical solution of the Laplace BVP with Dirichlet boundary conditions exhibiting jump discontinuities. In particular, a set of harmonic functions with discontinuous boundary traces is added to the MFS basis. The accuracy of the proposed method is compared with the results from the classical MFS.
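For readers unfamiliar with the baseline, a classical MFS solution of a Dirichlet Laplace problem on the unit disk places source points on an exterior circle and collocates the boundary data; the paper augments exactly this kind of basis with discontinuous harmonic functions. A minimal sketch (our choice of geometry and data):

```python
import numpy as np

n, Rsrc = 64, 2.0                          # collocation count and source radius
th = 2 * np.pi * np.arange(n) / n
xb = np.c_[np.cos(th), np.sin(th)]         # boundary points of the unit disk
xs = Rsrc * xb                             # source points, outside the domain

g = lambda x: x[:, 0] * x[:, 1]            # smooth Dirichlet data; u = x*y is harmonic
phi = lambda d: -np.log(np.linalg.norm(d, axis=-1)) / (2 * np.pi)  # fundamental solution

A = phi(xb[:, None, :] - xs[None, :, :])   # collocation matrix (n x n)
c = np.linalg.lstsq(A, g(xb), rcond=None)[0]

# Evaluate the MFS solution at an interior point and compare with the exact u = x*y
x0 = np.array([[0.3, 0.2]])
u0 = phi(x0[:, None, :] - xs[None, :, :]) @ c
print(u0, 0.3 * 0.2)
```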
Artificial Boundary Conditions Based on the Difference Potentials Method
NASA Technical Reports Server (NTRS)
Tsynkov, Semyon V.
1996-01-01
While numerically solving a problem initially formulated on an unbounded domain, one typically truncates this domain, which necessitates setting artificial boundary conditions (ABC's) at the newly formed external boundary. The issue of setting the ABC's appears to be most significant in many areas of scientific computing, for example, in problems originating from acoustics, electrodynamics, solid mechanics, and fluid dynamics. In particular, in computational fluid dynamics (where external problems present a wide class of practically important formulations) the proper treatment of external boundaries may have a profound impact on the overall quality and performance of numerical algorithms. Most of the currently used techniques for setting the ABC's can basically be classified into two groups. The methods from the first group (global ABC's) usually provide high accuracy and robustness of the numerical procedure but often appear to be fairly cumbersome and (computationally) expensive. The methods from the second group (local ABC's) are, as a rule, algorithmically simple, numerically cheap, and geometrically universal; however, they usually lack accuracy. In this paper we first present a survey and provide a comparative assessment of different existing methods for constructing the ABC's. Then, we describe a relatively new ABC technique of ours and review the corresponding results. This new technique, in our opinion, is currently one of the most promising in the field. It enables one to construct ABC's that combine the advantages of the two aforementioned classes of existing methods. Our approach is based on application of the difference potentials method attributable to V. S. Ryaben'kii. This approach allows us to obtain highly accurate ABC's in the form of certain (nonlocal) boundary operator equations. The operators involved are analogous to the pseudodifferential boundary projections first introduced by A. P. Calderon and then also studied by R. T. Seeley. The apparatus of boundary pseudodifferential equations, which has formerly been used mostly in the qualitative theory of integral equations and PDEs, is now effectively employed for developing numerical methods in different fields of scientific computing.
Fast large scale structure perturbation theory using one-dimensional fast Fourier transforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmittfull, Marcel; Vlah, Zvonimir; McDonald, Patrick
2016-05-01
The usual fluid equations describing the large-scale evolution of mass density in the universe can be written as local in the density, velocity divergence, and velocity potential fields. As a result, the perturbative expansion in small density fluctuations, usually written in terms of convolutions in Fourier space, can be written as a series of products of these fields evaluated at the same location in configuration space. Based on this, we establish a new method to numerically evaluate the 1-loop power spectrum (i.e., the Fourier transform of the 2-point correlation function) with one-dimensional fast Fourier transforms. This is exact and a few orders of magnitude faster than previously used numerical approaches. Numerical results of the new method are in excellent agreement with the standard quadrature integration method. This fast model evaluation can in principle be extended to higher loop order, where existing codes become painfully slow. Our approach proceeds by writing higher-order corrections to the 2-point correlation function as, e.g., the correlation between two second-order fields or the correlation between a linear and a third-order field. These are then decomposed into products of correlations of linear fields and derivatives of linear fields. The method can also be viewed as evaluating three-dimensional Fourier-space convolutions using products in configuration space, which may also be useful in other contexts where similar integrals appear.
A quasi-spectral method for the Cauchy problem of the 2-D Laplace equation on an annulus
NASA Astrophysics Data System (ADS)
Saito, Katsuyoshi; Nakada, Manabu; Iijima, Kentaro; Onishi, Kazuei
2005-01-01
Real numbers are usually represented in the computer as floating-point numbers with a finite number of digits. Accordingly, numerical analysis often suffers from rounding errors, which particularly degrade the precision of numerical solutions of inverse and ill-posed problems. We use multi-precision arithmetic to reduce these rounding errors. The multi-precision arithmetic system is used by courtesy of Dr Fujiwara of Kyoto University. In this paper we demonstrate the effectiveness of multi-precision arithmetic on two typical examples: the Cauchy problem of the Laplace equation in two dimensions and the shape identification problem by inverse scattering in three dimensions. It is concluded from a few numerical examples that multi-precision arithmetic resolves those numerical solutions well when combined with a high-order finite difference method for the Cauchy problem and with the eigenfunction expansion method for the inverse scattering problem.
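The effect the authors exploit can be reproduced with any multi-precision library. As an illustration, ours using mpmath rather than the system of Dr Fujiwara, compare an ill-conditioned Hilbert-matrix solve in double precision and in 50-digit arithmetic:

```python
import numpy as np
from mpmath import mp, matrix, lu_solve

n = 12   # Hilbert matrix H_ij = 1/(i+j+1), condition number ~ 1e16

# Double precision: rounding errors dominate for this ill-conditioned system
H = 1.0 / (np.add.outer(np.arange(n), np.arange(n)) + 1.0)
x_exact = np.ones(n)
x_double = np.linalg.solve(H, H @ x_exact)
print(np.max(np.abs(x_double - 1.0)))        # large error in double precision

# 50 significant digits: the error drops by many orders of magnitude
mp.dps = 50
Hmp = matrix(n, n)
for i in range(n):
    for j in range(n):
        Hmp[i, j] = mp.mpf(1) / (i + j + 1)
b = Hmp * matrix([1] * n)
x_mp = lu_solve(Hmp, b)
print(max(abs(xi - 1) for xi in x_mp))
```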
Numerical simulation of artificial hip joint motion based on human age factor
NASA Astrophysics Data System (ADS)
Ramdhani, Safarudin; Saputra, Eko; Jamari, J.
2018-05-01
An artificial hip joint is a prosthesis (synthetic body part) that usually consists of two or more components. Hip joints are commonly replaced because of arthritis, typically in older patients. Numerical simulation models are used to observe the range of motion of the artificial hip joint, with the prescribed range of motion based on patient age. Finite element analysis (FEA) is used to calculate the von Mises stress during motion and to assess the probability of prosthetic impingement. The FEA uses a three-dimensional nonlinear model and considers variations in the position of the acetabular liner cup. The numerical simulations show that the FEA-based performance calculation of the artificial hip joint is more accurate than the conventional method.
Large deformation frictional contact analysis with immersed boundary method
NASA Astrophysics Data System (ADS)
Navarro-Jiménez, José Manuel; Tur, Manuel; Albelda, José; Ródenas, Juan José
2018-01-01
This paper proposes a method for solving 3D large deformation frictional contact problems with the Cartesian grid finite element method. A stabilized augmented Lagrangian contact formulation is developed using a smooth stress field as the stabilizing term, calculated by the Zienkiewicz-Zhu superconvergent patch recovery. The parametric definition of the CAD surfaces (usually NURBS) is considered in the definition of the contact kinematics in order to obtain an enhanced measure of the contact gap. Numerical examples show the performance of the method.
Rapid calculation method for Frenkel-type two-exciton states in one to three dimensions
NASA Astrophysics Data System (ADS)
Ajiki, Hiroshi
2014-07-01
Biexciton and two-exciton dissociated states of Frenkel-type excitons are well described by a tight-binding model with a nearest-neighbor approximation. Such two-exciton states in a finite-size lattice are usually calculated by numerical diagonalization of the Hamiltonian, which requires an increasing amount of computational time and memory as the lattice size increases. I develop here a rapid, memory-saving method to calculate the energies and wave functions of two-exciton states by employing a bisection method. In addition, an attractive interaction between two excitons in the tight-binding model can be obtained directly so that the biexciton energy agrees with the observed energy, without the need for the trial-and-error procedure implemented in the numerical diagonalization method.
Detection of Orbital Debris Collision Risks for the Automated Transfer Vehicle
NASA Technical Reports Server (NTRS)
Peret, L.; Legendre, P.; Delavault, S.; Martin, T.
2007-01-01
In this paper, we present a general collision risk assessment method, which has been applied through numerical simulations to the Automated Transfer Vehicle (ATV) case. During ATV ascent towards the International Space Station, close approaches between the ATV and objects of the USSTRATCOM catalog will be monitored through collision risk assessment. Usually, collision risk assessment relies on an exclusion-volume or a probability-threshold method. Probability methods are more effective than exclusion volumes but require accurate covariance data. In this work, we propose a criterion defined by an adaptive exclusion area. This criterion does not require any probability calculation but is more effective than exclusion-volume methods, as demonstrated by our numerical experiments. The results of these studies, when confirmed and finalized, will be used for ATV operations.
Integrated Reconfigurable Intelligent Systems (IRIS) for Complex Naval Systems
2010-02-21
Mavris, Dimitri N.; Li, Yongchang
…RKF45 and Adams variable step-size predictor-corrector methods). While such algorithms are usually used to numerically solve differential… verified by yet another function call. Due to this nature, such methods are referred to as predictor-corrector methods. While computationally expensive…
Roughness Effects on Fretting Fatigue
NASA Astrophysics Data System (ADS)
Yue, Tongyan; Abdel Wahab, Magd
2017-05-01
Fretting is a small oscillatory relative motion between two normally loaded contact surfaces. It may cause fretting fatigue, fretting wear and/or fretting corrosion damage, depending on the fretting couple and the working conditions. Fretting fatigue usually occurs under partial slip conditions and results in catastrophic failure at stress levels below the fatigue limit of the material. Many parameters may affect fretting behaviour, including the applied normal load and displacement, material properties, roughness of the contact surfaces, frequency, etc. Since fretting damage at contacting surfaces is undesirable, the effect of rough contact surfaces on fretting damage has been studied by many researchers. Experimental work on this topic usually focuses on the effects of surfaces roughened by finishing treatments and of random surface roughness, with the aim of increasing fretting fatigue life. However, most numerical models of roughness are based on random surfaces. This paper reviews both experimental and numerical methodologies concerning rough-surface effects on fretting fatigue.
The arbitrary order mixed mimetic finite difference method for the diffusion equation
Gyrya, Vitaliy; Lipnikov, Konstantin; Manzini, Gianmarco
2016-05-01
Here, we propose an arbitrary-order accurate mimetic finite difference (MFD) method for the approximation of diffusion problems in mixed form on unstructured polygonal and polyhedral meshes. As usual in the mimetic numerical technology, the method satisfies local consistency and stability conditions, which determine the accuracy and the well-posedness of the resulting approximation. The method also requires the definition of a high-order discrete divergence operator that is the discrete analog of the divergence operator and acts on the degrees of freedom. The new family of mimetic methods is proved theoretically to be convergent, and optimal error estimates for the flux and scalar variable are derived from the convergence analysis. A numerical experiment confirms the high-order accuracy of the method in solving diffusion problems with a variable diffusion tensor. It is worth mentioning that the approximation of the scalar variable presents a superconvergence effect.
NASA Astrophysics Data System (ADS)
Schanz, Martin; Ye, Wenjing; Xiao, Jinyou
2016-04-01
Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations but a finer mesh than the convolution quadrature method to obtain the same level of accuracy. If fast methods such as the fast multipole method are additionally used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies necessary for the calculation, which improves the conditioning of the system matrix.
NASA Astrophysics Data System (ADS)
Ding, Zhe; Li, Li; Hu, Yujin
2018-01-01
Sophisticated engineering systems are usually assembled from subcomponents with significantly different levels of energy dissipation. Such systems therefore often contain multiple damping models, which makes them difficult to analyze. This paper develops a time integration method for structural systems with multiple damping models. The dynamical system is first represented by a generally damped model. Based on this, a new extended state-space method for the damped system is derived. A modified precise integration method with Gauss-Legendre quadrature is then proposed. The numerical stability and accuracy of the proposed integration method are discussed in detail. It is verified that the method is conditionally stable and has inherent algorithmic damping, period error and amplitude decay. Numerical examples are provided to assess the performance of the proposed method compared with other methods. It is demonstrated that the method is more accurate than other methods with rather good efficiency, and that the stability condition is easy to satisfy in practice.
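The precise integration idea underlying the proposed scheme is to propagate the state with the matrix exponential and evaluate the forcing integral by Gauss-Legendre quadrature. A generic state-space sketch for x' = Ax + f(t), not the paper's extended state-space formulation, is:

```python
import numpy as np
from scipy.linalg import expm

def pim_step(A, f, x, t, h, nq=3):
    """One step of x' = A x + f(t): exact homogeneous propagation plus
    Gauss-Legendre quadrature of the variation-of-constants integral."""
    s, w = np.polynomial.legendre.leggauss(nq)
    tau = 0.5 * h * (s + 1.0)                  # quadrature nodes on [0, h]
    x_new = expm(A * h) @ x
    for wi, ti in zip(0.5 * h * w, tau):
        x_new = x_new + wi * (expm(A * (h - ti)) @ f(t + ti))
    return x_new

# Damped oscillator with harmonic forcing, state x = (u, u')
A = np.array([[0.0, 1.0], [-4.0, -0.4]])
f = lambda t: np.array([0.0, np.sin(t)])
x, t, h = np.zeros(2), 0.0, 0.05
for _ in range(400):
    x = pim_step(A, f, x, t, h)
    t += h
print(t, x)
```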
Improving size estimates of open animal populations by incorporating information on age
Manly, Bryan F.J.; McDonald, Trent L.; Amstrup, Steven C.; Regehr, Eric V.
2003-01-01
Around the world, a great deal of effort is expended each year to estimate the sizes of wild animal populations. Unfortunately, population size has proven to be one of the most intractable parameters to estimate. The capture-recapture estimation models most commonly used (of the Jolly-Seber type) are complicated and require numerous, sometimes questionable, assumptions. The derived estimates usually have large variances and lack consistency over time. In capture-recapture studies of long-lived animals, the ages of captured animals can often be determined with great accuracy and relative ease. We show how to incorporate age information into size estimates for open populations, where the size changes through births, deaths, immigration, and emigration. The proposed method allows more precise estimates of population size than the usual models, and it can provide these estimates from two sample occasions rather than the three usually required. Moreover, this method does not require specialized programs for capture-recapture data; researchers can derive their estimates using the logistic regression module in any standard statistical package.
Five-equation and robust three-equation methods for solution verification of large eddy simulation
NASA Astrophysics Data System (ADS)
Dutta, Rabijit; Xing, Tao
2018-02-01
This study evaluates the recently developed general framework of solution verification methods for large eddy simulation (LES), using implicitly filtered LES of periodic channel flow at a friction Reynolds number of 395 on eight systematically refined grids. The seven-equation method shows that the coupling error based on Hypothesis I is much smaller than the numerical and modeling errors and can therefore be neglected. The authors recommend the five-equation method based on Hypothesis II, which shows monotonic convergence of the predicted numerical benchmark (S_C) and provides realistic error estimates without the need to fix the orders of accuracy of either the numerical or the modeling errors. Based on the results from the seven-equation and five-equation methods, less expensive three- and four-equation methods for practical LES applications were derived. The new three-equation method is robust, as it can be applied to any convergence type and reasonably predicts the error trends. It was also observed that the numerical and modeling errors usually have opposite signs, which suggests that error cancellation plays an essential role in LES. When a Reynolds-averaged Navier-Stokes (RANS) based error estimation method is applied, it shows significant error in the prediction of S_C on coarse meshes; however, it predicts reasonable S_C when the grids resolve at least 80% of the total turbulent kinetic energy.
A Method for Large Eddy Simulation of Acoustic Combustion Instabilities
NASA Astrophysics Data System (ADS)
Wall, Clifton; Pierce, Charles; Moin, Parviz
2002-11-01
A method for performing large eddy simulation of acoustic combustion instabilities is presented. By extending the low-Mach-number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number, and it avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustic combustion instabilities, since these flows are typically at low Mach number and the acoustic frequencies of interest are usually low; both characteristics suggest the use of larger time steps than those allowed by an acoustic CFL condition. The turbulent combustion model used is the combined conserved scalar/level set flamelet model of Duchamp de Lageneste and Pitsch for partially premixed combustion. Comparison of LES results with the experiments of Besson et al. will be presented.
Li, Yan
2017-05-25
Efficiency evaluation of an integrated energy system involves many influencing factors whose attribute values are heterogeneous and non-deterministic: specific numerical values or accurate probability distributions usually cannot be given, which biases the final evaluation result. According to the characteristics of the integrated energy system, a hybrid multi-attribute decision-making model is constructed that takes the decision maker's risk preference into account. In the evaluation of the efficiency of an integrated energy system, the values of some evaluation indexes are linguistic, or the assessments of the evaluation experts are inconsistent; both lead to ambiguity in the decision information, usually in the form of uncertain linguistic values and numerical interval values. Accordingly, an interval-valued multiple-attribute decision-making method and a fuzzy linguistic multiple-attribute decision-making model are proposed. Finally, the mathematical model for the efficiency evaluation of an integrated energy system is constructed.
Wavelet-based identification of rotor blades in passage-through-resonance tests
NASA Astrophysics Data System (ADS)
Carassale, Luigi; Marrè-Brunenghi, Michela; Patrone, Stefano
2018-01-01
Turbine blades are critical components of turbo engines, and their design process usually includes experimental tests to validate and/or update numerical models. These tests are generally carried out on full-scale rotors with some blades instrumented with strain gauges, and they usually involve a run-up or a run-down phase. Quantifying damping in these conditions is challenging for several reasons. In this work, we show through numerical simulations that the usual identification procedures lead to a systematic overestimation of damping, due both to the finite sweep velocity and to the variation of the blade natural frequencies with rotation speed. To overcome these problems, an identification procedure based on the continuous wavelet transform is proposed and validated through numerical simulation.
Yan, Chenguang; Hao, Zhiguo; Zhang, Song; Zhang, Baohui; Zheng, Tao
2015-01-01
Power transformer rupture and fire resulting from an arcing fault inside the tank usually lead to significant safety risks and serious economic loss. In order to reveal the essence of tank deformation or explosion, this paper presents a 3-D numerical computational tool to simulate the structural dynamic behavior due to overpressure inside a transformer tank. To illustrate the effectiveness of the proposed method, a 17.3 MJ and a 6.3 MJ arcing fault were simulated on a real full-scale 360 MVA/220 kV oil-immersed transformer model. By employing the finite element method, the internal overpressure distribution, wave propagation and von Mises stress were solved. The numerical results indicate that the increase of pressure and the mechanical stress distribution are non-uniform, and the stress tends to concentrate on connecting parts of the tank as the fault evolves. Given this feature, it becomes possible to reduce the risk of transformer tank rupture by limiting the fault energy and enhancing the mechanical strength of the local stress concentration areas. The theoretical model and numerical simulation method proposed in this paper can be used as a substitute for risky and costly field tests in fault overpressure analysis and tank mitigation design of transformers. PMID:26230392
Numerical solution of the quantum Lenard-Balescu equation for a non-degenerate one-component plasma
Scullard, Christian R.; Belt, Andrew P.; Fennell, Susan C.; ...
2016-09-01
We present a numerical solution of the quantum Lenard-Balescu equation using a spectral method, namely an expansion in Laguerre polynomials. This method exactly conserves both particles and kinetic energy and facilitates the integration over the dielectric function. To demonstrate the method, we solve the equilibration problem for a spatially homogeneous one-component plasma with various initial conditions. Unlike the more usual Landau/Fokker-Planck system, this method requires no input Coulomb logarithm; the logarithmic terms in the collision integral arise naturally from the equation along with the non-logarithmic order-unity terms. The spectral method can also be used to solve the Landau equation and a quantum version of the Landau equation in which the integration over the wavenumber requires only a lower cutoff. We solve these problems as well and compare them with the full Lenard-Balescu solution in the weak-coupling limit. Finally, we discuss the possible generalization of this method to include spatial inhomogeneity and velocity anisotropy.
Globally Convergent Numerical Methods for Coefficient Inverse Problems
2008-09-23
backgrounds. Probing radiations are usually thought of as electric and acoustic waves for the first two applications and light originated by lasers in... fundamental laws of physics. Electric, acoustic or light scattering properties of both unknown targets and the backgrounds are described by coefficients of... with the back-reflected data here, Army applications are quite feasible. The 2-D inverse problem of the determination of the unknown electric
The Contact Dynamics method: A nonsmooth story
NASA Astrophysics Data System (ADS)
Dubois, Frédéric; Acary, Vincent; Jean, Michel
2018-03-01
When velocity jumps occur, the dynamics is said to be nonsmooth. For instance, in collections of contacting rigid bodies, jumps are caused by shocks and dry friction. Without compliance at the interface, contact laws are not only non-differentiable in the usual sense but also multi-valued. Modeling contacting bodies is of interest for understanding the behavior of numerous mechanical systems such as flexible multi-body systems, granular materials or masonry. Granular materials behave puzzlingly either like a solid or a fluid, and a description in the framework of classical continuum mechanics would be welcome, though it remains far from satisfactory nowadays. Jean-Jacques Moreau contributed greatly to convex analysis, functions of bounded variation, differential measure theory and sweeping process theory, the definitive mathematical tools for dealing with nonsmooth dynamics. He converted all these underlying theoretical ideas into an original nonsmooth implicit numerical method called Contact Dynamics (CD), a robust and efficient method to simulate large collections of bodies with frictional contacts and impacts. The CD method offers a very interesting complementary alternative to the family of smoothed explicit numerical methods, often called Distinct Element Methods (DEM). In this paper, developments and improvements of the CD method are presented together with a critical comparative review of the advantages and drawbacks of both approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le Hardy, D.; Favennec, Y., E-mail: yann.favennec@univ-nantes.fr; Rousseau, B.
The contribution of this paper lies in the development of numerical algorithms for the mathematical treatment of specular reflection on borders when dealing with the numerical solution of radiative transfer problems. The radiative transfer equation being integro-differential, the discrete ordinates method allows one to write down a set of semi-discrete equations in which weights are to be calculated. The calculation of these weights is well known to be based either on a quadrature or on angular discretization, making the use of such a method straightforward for the state equation. Also, the diffuse contribution of reflection on borders is usually well taken into account. However, the calculation of accurate partition ratio coefficients is much trickier for the specular condition applied on arbitrary geometrical borders. This paper presents algorithms that analytically calculate the partition ratio coefficients needed in numerical treatments. The developed algorithms, combined with a decentered finite element scheme, are validated by comparison with analytical solutions before being applied to complex geometries.
Gaussian representation of high-intensity focused ultrasound beams.
Soneson, Joshua E; Myers, Matthew R
2007-11-01
A method for fast numerical simulation of high-intensity focused ultrasound beams is derived. The method is based on the frequency-domain representation of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and assumes for each harmonic a Gaussian transverse pressure distribution at all distances from the transducer face. The beamwidths of the harmonics are constrained to vary inversely with the square root of the harmonic number, and as such this method may be viewed as an extension of a quasilinear approximation. The technique is capable of determining pressure or intensity fields of moderately nonlinear high-intensity focused ultrasound beams in water or biological tissue, usually requiring less than a minute of computer time on a modern workstation. Moreover, this method is particularly well suited to high-gain simulations since, unlike traditional finite-difference methods, it is not subject to resolution limitations in the transverse direction. Results are shown to be in reasonable agreement with numerical solutions of the full KZK equation in both tissue and water for moderately nonlinear beams.
Thermal Property Measurement of Semiconductor Melt using Modified Laser Flash Method
NASA Technical Reports Server (NTRS)
Lin, Bochuan; Zhu, Shen; Ban, Heng; Li, Chao; Scripa, Rosalla N.; Su, Ching-Hua; Lehoczky, Sandor L.
2003-01-01
This study further developed the standard laser flash method to measure multiple thermal properties of semiconductor melts. The modified method can determine the thermal diffusivity, thermal conductivity, and specific heat capacity of the melt simultaneously. The transient heat transfer process in the melt and its quartz container was numerically studied in detail. A fitting procedure, based on numerical simulation results and least root-mean-square-error fitting to the experimental data, was used to extract the values of specific heat capacity, thermal conductivity and thermal diffusivity. This modified method is a step forward from the standard laser flash method, which is usually used to measure the thermal diffusivity of solids. The results for tellurium (Te) at 873 K (specific heat capacity 300.2 J/(kg·K), thermal conductivity 3.50 W/(m·K), thermal diffusivity 2.04 × 10⁻⁶ m²/s) are within the range reported in the literature. The uncertainty analysis showed the quantitative effect of the sample geometry, the measured transient temperature, and the energy of the laser pulse.
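The fitting step can be illustrated with the classic adiabatic rear-face temperature-rise series of the standard solid-sample laser flash method; a least-squares fit of the model to a noisy transient recovers the diffusivity. This is a sketch under idealized assumptions, not the paper's melt/quartz-cell model; the thickness and diffusivity values are hypothetical.

    import numpy as np
    from scipy.optimize import curve_fit

    L = 2e-3                                  # sample thickness (m), hypothetical
    def rear_face_rise(t, alpha, Tmax, nterms=50):
        # adiabatic Parker-type series: T(t)/Tmax = 1 + 2*sum_n (-1)^n exp(-n^2 pi^2 alpha t / L^2)
        n = np.arange(1, nterms + 1)[:, None]
        s = 1 + 2*np.sum((-1)**n * np.exp(-n**2 * np.pi**2 * alpha * t[None, :] / L**2), axis=0)
        return Tmax * s

    t = np.linspace(1e-3, 2.0, 400)
    data = rear_face_rise(t, 2.0e-6, 1.0) + 0.005*np.random.randn(t.size)  # synthetic "measurement"
    (alpha_fit, Tmax_fit), _ = curve_fit(rear_face_rise, t, data, p0=(1e-6, 0.9))
    print(alpha_fit)                          # ~2e-6 m^2/s, comparable to the Te value quoted above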
Local phase method for designing and optimizing metasurface devices.
Hsu, Liyi; Dupré, Matthieu; Ndao, Abdoulaye; Yellowhair, Julius; Kanté, Boubacar
2017-10-16
Metasurfaces have attracted significant attention due to their novel designs for flat optics. However, the approach usually used to engineer metasurface devices assumes either that neighboring elements are identical, by extracting the phase information from simulations with periodic boundaries, or that near-field coupling between particles is negligible, by extracting the phase from single-particle simulations. Neither assumption holds most of the time, and the approach thus prevents the optimization of devices that operate away from their optimum. Here, we propose a versatile numerical method to obtain the phase of each element within the metasurface (meta-atoms) while accounting for near-field coupling. Quantifying the phase error of each element of the metasurface with the proposed local phase method paves the way to the design of highly efficient metasurface devices including, but not limited to, deflectors, high numerical aperture metasurface concentrators, lenses, cloaks, and modulators.
Force sensing using 3D displacement measurements in linear elastic bodies
NASA Astrophysics Data System (ADS)
Feng, Xinzeng; Hui, Chung-Yuen
2016-07-01
In cell traction microscopy, the mechanical forces exerted by a cell on its environment are usually determined from experimentally measured displacements by solving an inverse problem in elasticity. In this paper, an innovative numerical method is proposed which finds the "optimal" traction for the inverse problem. When sufficient regularization is applied, we demonstrate that the proposed method significantly improves on the widely used approach based on Green's functions. Motivated by real cell experiments, the equilibrium condition of a slowly migrating cell is imposed as a set of equality constraints on the unknown traction. Our validation benchmarks demonstrate that the numerical solution of the constrained inverse problem recovers the actual traction well when the optimal regularization parameter is used. The proposed method can thus be applied to general force-sensing problems, which use displacement measurements to sense inaccessible forces in linear elastic bodies with a priori constraints.
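A toy illustration of the regularized inversion step only (not the authors' finite-element formulation): with a discrete linear map G from tractions to measured displacements, Tikhonov regularization turns the ill-posed inversion into an augmented least-squares problem. G below is a random symmetric stand-in for a Green's-function or FEM matrix.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 60
    G = rng.standard_normal((n, n))
    G = G @ G.T + 0.05*np.eye(n)              # stand-in for the displacement-from-traction operator
    f_true = np.zeros(n); f_true[10] = 1.0; f_true[40] = -1.0
    u = G @ f_true + 0.01*rng.standard_normal(n)   # noisy measured displacements

    lam = 0.5                                 # regularization parameter (L-curve/GCV in practice)
    A = np.vstack([G, lam*np.eye(n)])         # augmented system: min ||G f - u||^2 + lam^2 ||f||^2
    b = np.concatenate([u, np.zeros(n)])
    f_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

The paper's equality constraints (force balance of a slowly migrating cell) could be appended to such a least-squares problem and enforced exactly, e.g., via Lagrange multipliers.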
NASA Astrophysics Data System (ADS)
Liu, Quansheng; Jiang, Yalong; Wu, Zhijun; Xu, Xiangyu; Liu, Qi
2018-04-01
In this study, a two-dimensional Voronoi element-based numerical manifold method (VE-NMM) is developed to analyze the granite fragmentation process by a single tunnel boring machine (TBM) cutter under different confining stresses. A Voronoi tessellation technique is adopted to generate the polygonal grain assemblage to approximate the microstructure of granite sample from the Gubei colliery of Huainan mining area in China. A modified interface contact model with cohesion and tensile strength is embedded into the numerical manifold method (NMM) to interpret the interactions between the rock grains. Numerical uniaxial compression and Brazilian splitting tests are first conducted to calibrate and validate the VE-NMM models based on the laboratory experiment results using a trial-and-error method. On this basis, numerical simulations of rock fragmentation by a single TBM cutter are conducted. The simulated crack initiation and propagation process as well as the indentation load-penetration depth behaviors in the numerical models accurately predict the laboratory indentation test results. The influence of confining stress on rock fragmentation is also investigated. Simulation results show that radial tensile cracks are more likely to be generated under a low confining stress, eventually coalescing into a major fracture along the loading axis. However, with the increase in confining stress, more side cracks initiate and coalesce, resulting in the formation of rock chips at the upper surface of the model. In addition, the peak indentation load also increases with the increasing confining stress, indicating that a higher thrust force is usually needed during the TBM boring process in deep tunnels.
NASA Astrophysics Data System (ADS)
Jafari, Azadeh; Deville, Michel O.; Fiétier, Nicolas
2008-09-01
This study discusses the capability of the constitutive laws for the matrix logarithm of the conformation tensor (LCT model) within the framework of the spectral element method. High Weissenberg number problems (HWNP) usually produce a lack of convergence of the numerical algorithms. Even though the question whether the HWNP is a purely numerical problem or rather a breakdown of the constitutive law of the model has remained somewhat of a mystery, it has been recognized that the selection of an appropriate constitutive equation constitutes a very crucial step, although implementing a suitable numerical technique is still important for successful discrete modeling of non-Newtonian flows. The LCT formulation of the viscoelastic equations originally suggested by Fattal and Kupferman is applied to the two-dimensional (2D) FENE-CR model. Planar Poiseuille flow is considered as a benchmark problem to test this representation at high Weissenberg number. The numerical results are compared with the numerical solution of the standard constitutive equation.
Multilevel filtering elliptic preconditioners
NASA Technical Reports Server (NTRS)
Kuo, C. C. Jay; Chan, Tony F.; Tong, Charles
1989-01-01
A class of preconditioners for elliptic problems is presented, built on ideas borrowed from digital filtering theory and implemented on a multilevel grid structure. They are designed to be both rapidly convergent and highly parallelizable. The digital filtering viewpoint allows the use of filter design techniques for constructing elliptic preconditioners and also provides an alternative framework for understanding several other recently proposed multilevel preconditioners. Numerical results are presented to assess the convergence behavior of the new methods and to compare them with other preconditioners of multilevel type, including the usual multigrid method as preconditioner, the hierarchical basis method and a recent method proposed by Bramble-Pasciak-Xu.
A Method for Large Eddy Simulation of Acoustic Combustion Instabilities
NASA Astrophysics Data System (ADS)
Wall, Clifton; Moin, Parviz
2003-11-01
A method for performing Large Eddy Simulation of acoustic combustion instabilities is presented. By extending the low Mach number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number. The method also avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustic combustion instabilities, since these flows are typically at low Mach number and the acoustic frequencies of interest are usually low. Additionally, new boundary conditions based on the work of Poinsot and Lele have been developed to model the acoustic effect of a long channel upstream of the computational inlet, thus avoiding the need to include such a channel in the computational domain. The turbulent combustion model used is the Level Set model of Duchamp de Lageneste and Pitsch for premixed combustion. Comparisons of LES results with the reacting experiments of Besson et al. will be presented.
NASA Astrophysics Data System (ADS)
Xu, Xianfeng; Cai, Luzhong; Li, Dailin; Mao, Jieying
2010-04-01
In phase-shifting interferometry (PSI) the reference wave is usually assumed to be an on-axis plane wave. In practice, however, a slight tilt of the reference wave often occurs, and this tilt introduces unexpected errors into the reconstructed object wavefront. Usually the least-squares method with iterations, which is time-consuming, is employed to analyze the phase errors caused by the tilt of the reference wave. Here a simple and effective algorithm is suggested to detect and then correct this kind of error. In this method, only simple mathematical operations are used, avoiding the least-squares equations needed in most previously reported methods. It can be used for generalized phase-shifting interferometry with two or more frames, for both smooth and diffusing objects, and its excellent performance has been verified by computer simulations. The numerical simulations show that the wave reconstruction errors can be reduced by two orders of magnitude.
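For context, the textbook four-step phase-retrieval formula that such tilt-correction schemes build on (the paper's detection and correction of the reference tilt are extra steps on top of this; the object phase below is invented):

    import numpy as np

    x = np.linspace(-1, 1, 256)
    X, Y = np.meshgrid(x, x)
    phi = 2*np.pi*(X**2 + Y**2)               # hypothetical object phase
    a, b = 1.0, 0.8                           # background and modulation
    I0, I1, I2, I3 = [a + b*np.cos(phi + k*np.pi/2) for k in range(4)]   # phase steps 0, pi/2, pi, 3pi/2
    phi_rec = np.arctan2(I3 - I1, I0 - I2)    # standard 4-step formula, wrapped to (-pi, pi]

A tilted reference adds a linear phase ramp to phi_rec, which is what the proposed algorithm estimates and removes without least-squares iterations.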
Measurement of Antenna Bore-Sight Gain
NASA Technical Reports Server (NTRS)
Fortinberry, Jarrod; Shumpert, Thomas
2016-01-01
The absolute or free-field gain of a simple antenna can be approximated using standard antenna theory formulae or for a more accurate prediction, numerical methods may be employed to solve for antenna parameters including gain. Both of these methods will result in relatively reasonable estimates but in practice antenna gain is usually verified and documented via measurements and calibration. In this paper, a relatively simple and low-cost, yet effective means of determining the bore-sight free-field gain of a VHF/UHF antenna is proposed by using the Brewster angle relationship.
Numerical form-finding method for large mesh reflectors with elastic rim trusses
NASA Astrophysics Data System (ADS)
Yang, Dongwu; Zhang, Yiqun; Li, Peng; Du, Jingli
2018-06-01
Traditional methods for designing a mesh reflector usually treat the rim truss as rigid. Due to the large aperture, light weight and high accuracy required of spaceborne reflectors, the rim truss deformation is in fact not negligible. In order to design a cable net with asymmetric boundaries for the front and rear nets, a cable-net form-finding method is first introduced. Then, the form-finding method is embedded into an iterative approach for designing a mesh reflector that accounts for the elasticity of the supporting rim truss. By iterating the form-finding of the cable net with boundary conditions updated according to the rim truss deformation, a mesh reflector with a fairly uniform tension distribution in its equilibrium state can finally be designed. Applications to offset mesh reflectors with both circular and elliptical rim trusses are illustrated. The numerical results show the effectiveness of the proposed approach and that a circular rim truss is more stable than an elliptical rim truss.
An advanced probabilistic structural analysis method for implicit performance functions
NASA Technical Reports Server (NTRS)
Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.
1989-01-01
In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
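A sketch of the mean value (MV) estimate and the AMV correction on an explicit toy function standing in for an implicit finite-element response; the function, means, and standard deviations are invented for illustration:

    import numpy as np
    from scipy.stats import norm

    def g(x):                                 # explicit stand-in for an implicit FEM response
        return x[0]**3 + x[1] - 10.0

    mu  = np.array([2.0, 1.0])                # input means (hypothetical)
    sig = np.array([0.2, 0.5])                # input standard deviations (hypothetical)
    h = 1e-6
    grad = np.array([(g(mu + h*e) - g(mu - h*e)) / (2*h) for e in np.eye(2)])
    a = grad * sig
    alpha = a / np.linalg.norm(a)

    for p in (0.01, 0.1, 0.5, 0.9, 0.99):
        z = norm.ppf(p)
        g_mv = g(mu) + z*np.linalg.norm(a)    # mean-based, second-moment (linearized) estimate
        x_star = mu + z*sig*alpha             # most probable point of the linearized g at this level
        g_amv = g(x_star)                     # AMV: one extra exact evaluation per probability level
        print(p, g_mv, g_amv)

The single re-evaluation per level is what keeps AMV only slightly more expensive than the second-moment method while capturing the curvature of g.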
Gradient-based stochastic estimation of the density matrix
NASA Astrophysics Data System (ADS)
Wang, Zhentao; Chern, Gia-Wei; Batista, Cristian D.; Barros, Kipton
2018-03-01
Fast estimation of the single-particle density matrix is key to many applications in quantum chemistry and condensed matter physics. The best numerical methods leverage the fact that the density matrix elements f(H)_ij decay rapidly with distance r_ij between orbitals. This decay is usually exponential. However, for the special case of metals at zero temperature, algebraic decay of the density matrix appears and poses a significant numerical challenge. We introduce a gradient-based probing method to estimate all local density matrix elements at a computational cost that scales linearly with system size. For zero-temperature metals, the stochastic error scales like S^(-(d+2)/(2d)), where d is the dimension and S is a prefactor to the computational cost. The convergence becomes exponential if the system is at finite temperature or is insulating.
Profile fitting in crowded astronomical images
NASA Astrophysics Data System (ADS)
Manish, Raja
Around 18,000 known objects currently populate near-Earth space. These comprise active space assets as well as space debris. The tracking and cataloging of such objects relies on observations, most of which are ground-based. Because of the great distance to the objects, only non-resolved object images can be obtained from the observations. Optical systems consist of telescope optics and a detector; nowadays, CCD detectors are usually used. The information to be extracted from the frames is each object's astrometric position. In order to do so, the center of the object's image on the CCD frame has to be found. However, the observation frames read out of the detector are subject to noise from three different sources: celestial background sources, the object signal itself, and the sensor. The noise statistics are usually modeled as Gaussian or Poisson distributed, or as their combined distribution. In order to achieve near real-time processing, computationally fast and reliable methods for the so-called centroiding are desired; analytical methods are preferred over numerical ones of comparable accuracy. In this work, an analytic method for centroiding is investigated and compared to numerical methods. Though the work focuses mainly on astronomical images, the same principle could be applied to non-celestial images containing similar data. The method is based on minimizing the weighted least-squares (LS) error between the observed data and the theoretical model of point sources in a novel yet simple way. Synthetic image frames have been simulated. The newly developed method is tested in both crowded and non-crowded fields, where the former needs additional image-handling procedures to separate closely packed objects. Subsequent analysis on real celestial images corroborates the effectiveness of the approach.
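The baseline against which such analytic LS fits are usually compared is the intensity-weighted moment centroid; a minimal synthetic-frame sketch (Gaussian PSF plus Gaussian noise, all parameters invented, not the thesis's method):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 15
    y, x = np.mgrid[0:n, 0:n]
    xc, yc, s = 7.3, 6.8, 1.5                 # true sub-pixel center and PSF width
    img = 100*np.exp(-((x - xc)**2 + (y - yc)**2) / (2*s**2)) + rng.normal(0, 1, (n, n))

    w = np.clip(img, 0, None)                 # clip negative noise before weighting
    print(np.sum(x*w)/np.sum(w), np.sum(y*w)/np.sum(w))   # moment centroid, ~ (7.3, 6.8)

Moment centroids degrade quickly in crowded fields, which is where fitting an explicit point-source model to overlapping profiles pays off.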
Zheng, Jianwen; Lu, Jing; Chen, Kai
2013-07-01
Several methods have been proposed for the generation of a focused source, usually a virtual monopole source positioned between the loudspeaker array and the listener. The pre-echo problem of the common analytical methods has been noted, and the most concise method to cope with this problem is the angular weight method. In this paper, the interaural time and level differences, which are closely related to the localization cues of human auditory systems, are used to further investigate the effectiveness of focused-source generation methods. It is demonstrated that the combination of the angular weight method and the numerical pressure matching method has comparatively better performance in a given reconstructed area.
NASA Astrophysics Data System (ADS)
Akita, T.; Takaki, R.; Shima, E.
2012-04-01
An adaptive estimation method for spacecraft thermal mathematical models is presented. The method is based on the ensemble Kalman filter, which can effectively handle the nonlinearities contained in the thermal model. The state-space equations of the thermal mathematical model are derived, where both the temperatures and the uncertain thermal characteristic parameters are considered as state variables. In the method, the thermal characteristic parameters are automatically estimated as outputs of the filtered state variables, whereas in the usual thermal model correlation they are manually identified by experienced engineers using a trial-and-error approach. A numerical experiment on a simple small satellite is provided to verify the effectiveness of the presented method.
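A minimal ensemble-Kalman-filter sketch of the state-augmentation idea on a one-node lumped thermal model: the uncertain conductance k is appended to the state and is corrected purely through its ensemble correlation with the measured temperature. The model and all numbers are illustrative, not the paper's spacecraft model.

    import numpy as np

    rng = np.random.default_rng(0)
    k_true, T_env, dt, Ne = 0.3, 250.0, 0.1, 100
    ens = np.column_stack([rng.normal(300, 5, Ne),      # augmented state: [T, k]
                           rng.normal(0.5, 0.2, Ne)])
    ens[:, 1] = np.clip(ens[:, 1], 0.01, None)          # keep conductances physical
    T = 300.0
    for step in range(200):
        T += dt*(-k_true*(T - T_env))                   # "truth" run
        y = T + rng.normal(0, 0.5)                      # noisy temperature measurement
        ens[:, 0] += dt*(-ens[:, 1]*(ens[:, 0] - T_env))  # forecast each member
        ens[:, 1] += rng.normal(0, 0.002, Ne)           # small jitter keeps parameter spread alive
        H, R = np.array([1.0, 0.0]), 0.25               # only T is observed
        X = ens - ens.mean(0)
        P = X.T @ X / (Ne - 1)
        K = P @ H / (H @ P @ H + R)                     # Kalman gain (scalar observation)
        ens += np.outer(y + rng.normal(0, 0.5, Ne) - ens[:, 0], K)   # perturbed-observation update
        ens[:, 1] = np.clip(ens[:, 1], 0.01, None)
    print(ens[:, 1].mean())                             # drifts toward k_true ~ 0.3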
NASA Astrophysics Data System (ADS)
Hu, Zhan; Zheng, Gangtie
2016-08-01
A combined analysis method is developed in the present paper for studying the dynamic properties of a type of geometrically nonlinear vibration isolator composed of push-pull configuration rings. This method combines the geometrically nonlinear theory of curved beams with the Harmonic Balance Method to overcome the difficulty in calculating the vibration and vibration transmissibility under large deformations of the ring structure. Using the proposed method, nonlinear dynamic behaviors of this isolator, such as the lock situation due to Coulomb damping and the usual jump resulting from the nonlinear stiffness, can be investigated. Numerical solutions based on the primary harmonic balance are first verified by direct integration results. Then, the whole procedure of this combined analysis method is demonstrated and validated by slow sinusoidal sweep experiments with different amplitudes of the base excitation. Both numerical and experimental results indicate that this type of isolator behaves as a hardening spring with increasing amplitude of the base excitation, which makes it suitable for isolating both steady-state vibrations and transient shocks.
NASA Astrophysics Data System (ADS)
Lin, S. T.; Liou, T. S.
2017-12-01
Numerical simulation of groundwater flow in anisotropic aquifers usually suffers from inaccurate calculation of the groundwater flux across grid blocks. Conventional two-point flux approximation (TPFA) can only obtain the flux normal to the grid interface and completely neglects the one parallel to it. Furthermore, the hydraulic gradient in a grid block estimated from TPFA can only poorly represent the hydraulic condition near the intersection of grid blocks. These disadvantages are further exacerbated when the principal axes of hydraulic conductivity, the global coordinate system, and the grid boundary are not parallel to one another. In order to refine the estimation of the in-grid hydraulic gradient, several multiple-point flux approximation (MPFA) methods have been developed for two-dimensional groundwater flow simulations. For example, the MPFA-O method uses the hydraulic head at the junction node as an auxiliary variable which is then eliminated using the head and flux continuity conditions. In this study, a three-dimensional MPFA method is developed for numerical simulation of groundwater flow in three-dimensional and strongly anisotropic aquifers. This new MPFA method first discretizes the simulation domain into hexahedrons. Each hexahedron is further decomposed into a certain number of tetrahedrons. The 2D MPFA-O method is then extended to these tetrahedrons, using the unknown head at the intersection of hexahedrons as an auxiliary variable along with the head and flux continuity conditions to solve for the head at the center of each hexahedron. Numerical simulations using this new MPFA method have been successfully compared with those obtained from a modified version of TOUGH2.
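For reference, the conventional TPFA flux that MPFA improves upon: one transmissibility per interface, built from the harmonic average of the two half-block conductivities, yielding only the flux normal to the face (a generic sketch, not tied to TOUGH2):

    # TPFA flux across one interface between grid blocks i and j
    def tpfa_flux(h_i, h_j, K_i, K_j, d_i, d_j, area):
        # half-transmissibilities: conductivity over center-to-face distance
        t_i, t_j = K_i/d_i, K_j/d_j
        T = area * t_i*t_j / (t_i + t_j)      # harmonic average of the two halves
        return T * (h_i - h_j)                # flux from block i to block j

    print(tpfa_flux(10.0, 9.5, 1e-4, 5e-5, 0.5, 0.5, 1.0))

MPFA augments this with contributions from the surrounding cells so that the full conductivity tensor, including the flux component parallel to the face, is represented.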
RT DDA: A hybrid method for predicting the scattering properties by densely packed media
NASA Astrophysics Data System (ADS)
Ramezan Pour, B.; Mackowski, D.
2017-12-01
The most accurate approaches to predicting the scattering properties of particulate media are based on exact solutions of Maxwell's equations (MEs), such as the T-matrix and discrete dipole methods. Applying these techniques to optically thick targets is a challenging problem due to the large-scale computations involved, and they are usually replaced by phenomenological radiative transfer (RT) methods. On the other hand, the RT technique is of questionable validity in media with large particle packing densities. In recent works, we used numerically exact ME solvers to examine the effects of particle concentration on the polarized reflection properties of plane-parallel random media. The simulations were performed for plane-parallel layers of wavelength-sized spherical particles, and the results were compared with RT predictions. We have shown that RT results monotonically converge to the exact solution as the particle volume fraction becomes smaller, with a nearly perfect fit for packing densities of 2%-5%. This study describes a hybrid technique composed of an exact method and a numerical scalar RT method. The exact methodology in this work is the plane-parallel discrete dipole approximation, whereas the numerical method is based on the adding and doubling method. This approach not only decreases the computational time owing to the RT method but also includes interference and multiple-scattering effects, so it may be applicable to large particle density conditions.
On the superposition principle in interference experiments.
Sinha, Aninda; H Vijay, Aravind; Sinha, Urbasi
2015-05-14
The superposition principle is usually incorrectly applied in interference experiments. This has recently been investigated through numerics based on Finite Difference Time Domain (FDTD) methods as well as the Feynman path integral formalism. In the current work, we have derived an analytic formula for the Sorkin parameter which can be used to determine the deviation from the application of the principle. We have found excellent agreement between the analytic distribution and those that have been earlier estimated by numerical integration as well as resource intensive FDTD simulations. The analytic handle would be useful for comparing theory with future experiments. It is applicable both to physics based on classical wave equations as well as the non-relativistic Schrödinger equation.
Tensor numerical methods in quantum chemistry: from Hartree-Fock to excitation energies.
Khoromskaia, Venera; Khoromskij, Boris N
2015-12-21
We review the recent successes of the grid-based tensor numerical methods and discuss their prospects in real-space electronic structure calculations. These methods, based on the low-rank representation of multidimensional functions and integral operators, first appeared as an accurate tensor calculus for the 3D Hartree potential using 1D-complexity operations, and have evolved into an entirely grid-based tensor-structured 3D Hartree-Fock eigenvalue solver. It benefits from tensor calculation of the core Hamiltonian and two-electron integrals (TEI) in O(n log n) complexity using the rank-structured approximation of basis functions, electron densities and convolution integral operators, all represented on 3D n × n × n Cartesian grids. The algorithm for calculating the TEI tensor in the form of a Cholesky decomposition is based on multiple factorizations using an algebraic 1D "density fitting" scheme, which yields an almost irreducible number of product basis functions involved in the 3D convolution integrals, depending on a threshold ε > 0. The basis functions are not restricted to separable Gaussians, since the analytical integration is substituted by high-precision tensor-structured numerical quadratures. Tensor approaches to post-Hartree-Fock calculations for the MP2 energy correction and for the Bethe-Salpeter excitation energies, based on low-rank factorizations and the reduced basis method, were recently introduced. Another direction is towards a tensor-based Hartree-Fock numerical scheme for finite lattices, where one of the numerical challenges is the summation of the electrostatic potentials of a large number of nuclei. The 3D grid-based tensor method for calculating a potential sum on an L × L × L lattice manifests computational work linear in L, i.e., O(L), instead of the usual O(L^3 log L) scaling of Ewald-type approaches.
Equivalent linearization for fatigue life estimates of a nonlinear structure
NASA Technical Reports Server (NTRS)
Miles, R. N.
1989-01-01
An analysis is presented of the suitability of the method of equivalent linearization for estimating the fatigue life of a nonlinear structure. Comparisons are made of the fatigue life of a nonlinear plate as predicted using conventional equivalent linearization and three other more accurate methods. The excitation of the plate is assumed to be Gaussian white noise, and the plate response is modeled using a single resonant mode. The methods used for comparison consist of numerical simulation, a probabilistic formulation, and a modification of equivalent linearization which avoids the usual assumption that the response process is Gaussian. Remarkably close agreement is obtained between all four methods, even for cases where the response is significantly nonlinear.
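The conventional equivalent-linearization loop for a single Duffing-type mode under Gaussian white noise, with the Gaussian-response closure E[x^4] = 3*sigma^4 built in (the very assumption the paper's modified method relaxes); all numbers are illustrative:

    import numpy as np

    # x'' + 2*zeta*w*x' + w^2*(x + eps*x^3) = f(t), f white with two-sided PSD S0;
    # Gaussian closure replaces the cubic term by 3*eps*sigma^2 * x
    w, zeta, eps, S0 = 1.0, 0.05, 1.0, 0.0318
    sigma2 = 1.0
    for _ in range(200):                      # fixed-point iteration on the response variance
        w_eq2 = w**2 * (1 + 3*eps*sigma2)     # equivalent linear stiffness
        sigma2 = np.pi*S0 / (2*zeta*w*w_eq2)  # mean-square response of the linear SDOF
    print(sigma2)                             # ~0.43 here, vs. ~1.0 for the purely linear system

The converged sigma2 then feeds the usual Rayleigh peak-counting fatigue estimate, which is where the Gaussian assumption propagates into the predicted life.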
Estimation of geopotential from satellite-to-satellite range rate data: Numerical results
NASA Technical Reports Server (NTRS)
Thobe, Glenn E.; Bose, Sam C.
1987-01-01
A technique for high-resolution geopotential field estimation by recovering the harmonic coefficients from satellite-to-satellite range rate data is presented and tested against both a controlled analytical simulation of a one-day satellite mission (maximum degree and order 8) and then against a Cowell method simulation of a 32-day mission (maximum degree and order 180). Innovations include: (1) a new frequency-domain observation equation based on kinetic energy perturbations which avoids much of the complication of the usual Keplerian element perturbation approaches; (2) a new method for computing the normalized inclination functions which unlike previous methods is both efficient and numerically stable even for large harmonic degrees and orders; (3) the application of a mass storage FFT to the entire mission range rate history; (4) the exploitation of newly discovered symmetries in the block diagonal observation matrix which reduce each block to the product of (a) a real diagonal matrix factor, (b) a real trapezoidal factor with half the number of rows as before, and (c) a complex diagonal factor; (5) a block-by-block least-squares solution of the observation equation by means of a custom-designed Givens orthogonal rotation method which is both numerically stable and tailored to the trapezoidal matrix structure for fast execution.
Numerical method for computing Maass cusp forms on triply punctured two-sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, K. T.; Kamari, H. M.; Zainuddin, H.
2014-03-05
A quantum mechanical system on a punctured surface modeled on hyperbolic space has always been an important subject of research in mathematics and physics. The corresponding quantum system is governed by the Schrödinger equation, whose solutions are the Maass waveforms. Spectral studies of these Maass waveforms are known to contain both continuous and discrete eigenvalues. The discrete eigenfunctions are usually called Maass Cusp Forms (MCF), and their discrete eigenvalues are not known analytically. We introduce a numerical method, based on the Hejhal and Then algorithm and using GridMathematica, for computing MCF on a punctured surface with three cusps, namely the triply punctured two-sphere. We also report on a pullback algorithm for the punctured surface and a point-locater algorithm to facilitate the complete pullback, which are essential parts of the main algorithm.
NASA Astrophysics Data System (ADS)
Olajuwon, B. I.; Oyelakin, I. S.
2012-12-01
The paper investigates convective heat and mass transfer in power-law fluid flow with non relaxation time past a vertical porous plate in the presence of a chemical reaction, heat generation, thermo-diffusion and thermal diffusion. The non-linear partial differential equations governing the flow are transformed into ordinary differential equations using the usual similarity method. The resulting similarity equations are solved numerically using a Runge-Kutta shooting method. The results are presented as velocity, temperature and concentration profiles for pseudoplastic fluids and for different values of the parameters governing the problem. The skin friction, heat transfer and mass transfer rates are presented numerically in tabular form. The results show that these parameters have significant effects on the flow, heat transfer and mass transfer.
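The Runge-Kutta shooting idea in miniature, applied here to the classical Blasius boundary-layer problem as a stand-in for the paper's power-law similarity equations: integrate the ODE system for a guessed wall value and adjust the guess until the far-field condition is met.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    # Blasius: f''' + 0.5*f*f'' = 0, f(0) = f'(0) = 0, f'(inf) = 1
    def rhs(eta, y):
        return [y[1], y[2], -0.5*y[0]*y[2]]

    def miss(s):                              # shoot with f''(0) = s, measure the far-field error
        sol = solve_ivp(rhs, [0, 10], [0, 0, s], rtol=1e-8)
        return sol.y[1, -1] - 1.0

    s_star = brentq(miss, 0.1, 1.0)
    print(s_star)                             # ~0.3321, the classical Blasius wall value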
NASA Astrophysics Data System (ADS)
Mortensen, Mikael; Langtangen, Hans Petter; Wells, Garth N.
2011-09-01
Finding an appropriate turbulence model for a given flow case usually calls for extensive experimentation with both models and numerical solution methods. This work presents the design and implementation of a flexible, programmable software framework for assisting with numerical experiments in computational turbulence. The framework targets Reynolds-averaged Navier-Stokes models, discretized by finite element methods. The novel implementation makes use of Python and the FEniCS package, the combination of which leads to compact and reusable code, where model- and solver-specific code resemble closely the mathematical formulation of equations and algorithms. The presented ideas and programming techniques are also applicable to other fields that involve systems of nonlinear partial differential equations. We demonstrate the framework in two applications and investigate the impact of various linearizations on the convergence properties of nonlinear solvers for a Reynolds-averaged Navier-Stokes model.
Orientation of doubly rotated quartz plates.
Sherman, J R
1989-01-01
A derivation from classical spherical trigonometry of equations to compute the orientation of doubly-rotated quartz blanks from Bragg X-ray data is discussed. These are usually derived by compact and efficient vector methods, which are reviewed briefly. They are solved by generating a quadratic equation with numerical coefficients. Two methods exist for performing the computation from measurements against two planes: a direct solution by a quadratic equation and a process of convergent iteration. Both have a spurious solution. Measurement against three lattice planes yields a set of three linear equations the solution of which is an unambiguous result.
The Spatial-Numerical Congruity Effect in Preschoolers
ERIC Educational Resources Information Center
Patro, Katarzyna; Haman, Maciej
2012-01-01
Number-to-space mapping and its directionality are compelling topics in the study of numerical cognition. Usually, literacy and math education are thought to shape a left-to-right number line. We challenged this claim by analyzing performance of preliterate precounting preschoolers in a spatial-numerical task. In our experiment, children exhibited…
Stress analysis and damage evaluation of flawed composite laminates by hybrid-numerical methods
NASA Technical Reports Server (NTRS)
Yang, Yii-Ching
1992-01-01
Structural components in flight vehicles often inherit flaws such as microcracks, voids, holes, and delaminations. These defects degrade structures in the same way as damage incurred in service, such as impact, corrosion, and erosion. It is very important to know how a structural component can remain useful and survive with these flaws and damages. To understand the behavior and limitations of such structural components, researchers usually perform experimental tests or theoretical analyses on structures with simulated flaws. However, neither approach has been completely successful. As Durelli states, 'Seldom does one method give a complete solution, with the most efficiency'. An example of this principle is seen in photomechanics, where additional strain-gage testing can only average stresses at locations of high concentration. On the other hand, theoretical analyses, including numerical analyses, are implemented with simplified assumptions which may not reflect actual boundary conditions. Hybrid-numerical methods, which combine photomechanics and numerical analysis, have been used to correct this inefficiency since the 1950's. But their application was limited until the 1970's, when modern computer codes became available. In recent years, researchers have enhanced the data obtained from photoelasticity, laser speckle, holography and moiré interferometry for input to finite element analysis of metals. Nevertheless, there is little literature on composite laminates. Therefore, this research is dedicated to this highly anisotropic material.
Higher order explicit symmetric integrators for inseparable forms of coordinates and momenta
NASA Astrophysics Data System (ADS)
Liu, Lei; Wu, Xin; Huang, Guoqing; Liu, Fuyao
2016-06-01
Pihajoki proposed extended phase-space second-order explicit symmetric leapfrog methods for inseparable Hamiltonian systems. On the basis of this work, we investigate a critical problem: how to mix the variables in the extended phase space. Numerical tests show that sequent permutations of coordinates and momenta can make the leapfrog-like methods yield the most accurate results and the optimal long-term stabilized error behaviour. We also present a novel method to construct many fourth-order extended phase-space explicit symmetric integration schemes. Each scheme represents the symmetric product of six usual second-order leapfrogs without any permutations. This construction consists of four segments: the permuted coordinates, the triple product of the usual second-order leapfrog without permutations, the permuted momenta, and the triple product of the usual second-order leapfrog without permutations. Similarly, extended phase-space sixth-, eighth- and other higher-order explicit symmetric algorithms are available. We used several inseparable Hamiltonian examples, such as the post-Newtonian approach of non-spinning compact binaries, to show that one of the proposed fourth-order methods is more efficient than the existing methods; examples include the fourth-order explicit symplectic integrators of Chin and the fourth-order explicit and implicit mixed symplectic integrators of Zhong et al. Given a moderate choice for the related mixing and projection maps, the extended phase-space explicit symplectic-like methods are well suited to various inseparable Hamiltonian problems. Examples of such problems involve the algorithmic regularization of gravitational systems with velocity-dependent perturbations in the Solar system and post-Newtonian Hamiltonian formulations of spinning compact objects.
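A bare-bones version of the extended phase-space building block (the second-order Pihajoki-type leapfrog the paper composes into higher-order schemes): duplicate the variables into two copies (q1, p1) and (q2, p2) and leapfrog with the cross-coupled sub-Hamiltonians H(q1, p2) and H(q2, p1), each of which generates an exactly solvable flow. The toy Hamiltonian below is invented for illustration.

    # toy inseparable Hamiltonian H(q, p) = 0.5 * p^2 * q^2
    def dHdq(q, p): return p*p*q
    def dHdp(q, p): return p*q*q

    def leapfrog_ext(q1, p1, q2, p2, dt):
        # half step with H(q1, p2): q1, p2 stay constant, so the flow is exact
        p1 -= 0.5*dt*dHdq(q1, p2); q2 += 0.5*dt*dHdp(q1, p2)
        # full step with H(q2, p1): q2, p1 stay constant
        p2 -= dt*dHdq(q2, p1); q1 += dt*dHdp(q2, p1)
        # closing half step with H(q1, p2)
        p1 -= 0.5*dt*dHdq(q1, p2); q2 += 0.5*dt*dHdp(q1, p2)
        return q1, p1, q2, p2

    q1 = q2 = 1.0; p1 = p2 = 0.5
    for _ in range(1000):
        q1, p1, q2, p2 = leapfrog_ext(q1, p1, q2, p2, 1e-3)
    print(0.5*(q1 + q2), 0.5*(p1 + p2))       # project back by averaging the two copies

How the two copies are mixed and projected back is exactly the freedom the paper's permutation study exploits.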
Diffusion approximation-based simulation of stochastic ion channels: which method to use?
Pezo, Danilo; Soudry, Daniel; Orio, Patricio
2014-01-01
To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of a high number of channels. Many recent works aim to speed simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties—such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired in granular cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC—which is the fastest and most accurate method. For a high number of channels, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models—in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels. PMID:25404914
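For scale, Gillespie's exact MC scheme for a population of independent two-state channels takes only a few lines when the state is the open-channel count; the rates and counts below are illustrative:

    import numpy as np

    rng = np.random.default_rng(2)
    N, a, b = 1000, 0.5, 1.0                  # channels; opening/closing rates (1/ms, illustrative)
    n_open, t, T = 0, 0.0, 50.0
    while t < T:
        r_open, r_close = a*(N - n_open), b*n_open
        R = r_open + r_close                  # total transition rate
        t += rng.exponential(1/R)             # exponentially distributed waiting time
        n_open += 1 if rng.random() < r_open/R else -1
    print(n_open/N, a/(a + b))                # empirical vs. equilibrium open fraction, ~1/3

The cost per unit time grows with the total rate R, i.e., with the channel count, which is why diffusion approximations take over in the many-channel regime.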
Influence of the Roof Movement Control Method on the Stability of Remnant
NASA Astrophysics Data System (ADS)
Adach-Pawelus, Karolina
2017-12-01
In underground mines, there are geological and mining situations that necessitate leaving remnants behind in the mining field. Remnants, in the form of small, irregular parcels, are usually separated in the case of significant problems with maintaining roof stability, high rockburst hazard, or the occurrence of complex geological conditions, as well as for random reasons (ore remnants) or economic reasons (undisturbed rock remnants). Remnants left in the mining field become sites of high stress concentration and may affect the rock in their vicinity. The stress values inside the remnant and its vicinity, as well as the stability of the remnant, largely depend on the roof movement control method used in the mining field. The article presents the results of a numerical analysis of the influence of the roof movement control method on remnant stability and the geomechanical situation in the mining field. The numerical analysis was conducted for the geological and mining conditions characteristic of the Polish underground copper mines owned by KGHM Polska Miedz S.A. Numerical simulations were performed in a plane strain state by means of Phase 2 v. 8.0 software, based on the finite element method. The behaviour of the remnant and the rock mass in its vicinity was simulated in the subsequent steps of the room-and-pillar mining system for three types of roof movement control method: roof deflection, dry backfill and hydraulic backfill. The parameters of the rock mass accepted for numerical modelling were calculated by means of RocLab software on the basis of the Hoek-Brown classification. The Mohr-Coulomb strength criterion was applied.
A new PIC noise reduction technique
NASA Astrophysics Data System (ADS)
Barnes, D. C.
2014-10-01
Numerical solution of the Vlasov equation is considered in a general situation in which there is an underlying static solution (equilibrium). There are no further assumptions about dimensionality, smallness of orbits, or disparate time scales. The semi-characteristic (SC) method for Vlasov solution is described. The usual characteristics of the equation, which are the single-particle orbits, are modified in such a way that the equilibrium phase-space flow is removed. In this way, the shot noise introduced by the usual discrete-particle representation of the equilibrium is static in time and can be removed completely by subtraction. An almost exact algorithm for this is based on the observation that an (infinitesimal or) discrete time step of any equilibrium MC realization is again a realization of the equilibrium, building up strings of associated simulation particles. In this way, the only added discretization error arises from the need to extrapolate the chain end points backward in time by one step dt using a canonical transformation. Previously developed energy-conserving time-implicit methods are applied without modification. 1D electrostatic examples of Landau damping and velocity-space instability are given to illustrate the method.
Thornton, B S; Hung, W T; Irving, J
1991-01-01
The response decay data of living cells subject to electric polarization are associated with the cells' relaxation distribution function (RDF), which can be determined using the inverse Laplace transform method. A new polynomial, involving a series of associated Laguerre polynomials, has been used as the approximating function for evaluating the RDF, with the advantage of avoiding the usual arbitrary trial values of a particular parameter in the numerical computations. Some numerical examples are given, followed by an application to cervical tissue. It is found that the average relaxation time and the peak amplitude of the RDF exhibit higher values for tumorous cells than for normal cells, and might be used as parameters to differentiate them and their associated tissues.
Information processing using a single dynamical node as complex system
Appeltant, L.; Soriano, M.C.; Van der Sande, G.; Danckaert, J.; Massar, S.; Dambre, J.; Schrauwen, B.; Mirasso, C.R.; Fischer, I.
2011-01-01
Novel methods for information processing are highly desired in our information-driven society. Inspired by the brain's ability to process information, the recently introduced paradigm known as 'reservoir computing' shows that complex networks can efficiently perform computation. Here we introduce a novel architecture that reduces the usually required large number of elements to a single nonlinear node with delayed feedback. Through an electronic implementation, we experimentally and numerically demonstrate excellent performance in a speech recognition benchmark. Complementary numerical studies also show excellent performance for a time series prediction benchmark. These results prove that delay-dynamical systems, even in their simplest manifestation, can perform efficient information processing. This finding paves the way to feasible and resource-efficient technological implementations of reservoir computing. PMID:21915110
An Application of the Acoustic Similarity Law to the Numerical Analysis of Centrifugal Fan Noise
NASA Astrophysics Data System (ADS)
Jeon, Wan-Ho; Lee, Duck-Joo; Rhee, Huinam
Centrifugal fans, which are frequently used in daily life and in various industries, often cause severe noise problems. Generally, centrifugal fan noise consists of tones at the blade passing frequency and its higher harmonics. These tonal sounds come from the interaction between the flow discharged from the impeller and the cutoff in the casing. Prediction of the noise from a centrifugal fan is increasingly necessary to optimize designs that meet both performance and noise criteria. However, only a few studies on noise prediction methods exist, because of the difficulty of obtaining detailed information about the flow field and the casing effect on noise radiation. This paper aims to investigate the noise generation mechanism of a centrifugal fan and to develop a prediction method for the unsteady flow and acoustic pressure fields. To this end, a numerical analysis method using the acoustic similarity law is proposed, and it is verified that the method can predict the noise generation mechanism very well by comparing the predicted results with available experimental results.
OPC modeling by genetic algorithm
NASA Astrophysics Data System (ADS)
Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Tsay, C. S.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.; Lin, B. J.
2005-05-01
Optical proximity correction (OPC) is usually used to pre-distort mask layouts to make the printed patterns as close to the desired shapes as possible. For model-based OPC, a lithographic model that predicts critical dimensions after lithographic processing is needed. The model is usually obtained via a regression of parameters based on experimental data containing optical proximity effects. When the parameters involve a mix of continuous (optical and resist models) and discrete (kernel numbers) sets, traditional numerical optimization methods may have difficulty handling the model fitting. In this study, an artificial-intelligence optimization method was used to regress the parameters of the lithographic models for OPC. The implemented phenomenological models were constant-threshold models that combine diffused aerial image models with loading effects. Optical kernels decomposed from Hopkins' equation were used to calculate aerial images on the wafer. The numbers of optical kernels were likewise treated as regression parameters. This way, good regression results were obtained with different sets of optical proximity effect data.
Combination of experimental and numerical methods for mechanical characterization of Al-Si alloys
NASA Astrophysics Data System (ADS)
Kruglova, A.; Roland, M.; Diebels, S.; Mücklich, F.
2017-10-01
In general, the mechanical properties of Al-Si alloys strongly depend on the morphology and arrangement of the microconstituents, such as primary aluminium dendrites and silicon particles. Therefore, a detailed characterization of the morphological and mechanical properties of the alloys is necessary to better understand the relations between these properties and to tailor the material's microstructure to specific application needs. Mechanical characterization usually involves numerical simulations and mechanical tests, which allow the influence of different microstructural aspects to be investigated on different scales. In this study, uniaxial tension and compression tests have been carried out on Al-Si alloys having different microstructures. The mechanical behavior of the alloys has been interpreted with respect to the morphology of the microconstituents and has been correlated with the results of numerical simulations. The advantages and limitations of the experimental and numerical methods have been identified, and the importance of combining both techniques for interpreting the mechanical behavior of Al-Si alloys has been shown. The results suggest that the density of Si particles and the size of Al dendrites are more important for the strengthening of the alloys than the size and shape of the eutectic Si induced by the modification.
Color TV: total variation methods for restoration of vector-valued images.
Blomgren, P; Chan, T F
1998-01-01
We propose a new definition of the total variation (TV) norm for vector-valued functions that can be applied to restore color and other vector-valued images. The new TV norm has the desirable properties of 1) not penalizing discontinuities (edges) in the image, 2) being rotationally invariant in the image space, and 3) reducing to the usual TV norm in the scalar case. Some numerical experiments on denoising simple color images in red-green-blue (RGB) color space are presented.
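A discrete sketch of a vector-valued TV of this flavor, coupling the channels under a single square root so that an edge is penalized once rather than per channel (one common choice; the paper's precise norm may differ in detail):

    import numpy as np

    def vector_tv(u):                         # u: H x W x 3 RGB image
        dx = np.diff(u, axis=1, prepend=u[:, :1])    # horizontal forward differences
        dy = np.diff(u, axis=0, prepend=u[:1])       # vertical forward differences
        # channel-coupled gradient magnitude; reduces to scalar TV for one channel
        return np.sum(np.sqrt(np.sum(dx**2 + dy**2, axis=2)))

Denoising then minimizes vector_tv(u) plus a data-fidelity term, typically by gradient descent on the associated Euler-Lagrange equations.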
A Localized Ensemble Kalman Smoother
NASA Technical Reports Server (NTRS)
Butala, Mark D.
2012-01-01
Numerous geophysical inverse problems prove difficult because the available measurements are only indirectly related to the underlying unknown dynamic state, and the physics governing the system may involve imperfect models or unobserved parameters. Data assimilation addresses these difficulties by combining the measurements and physical knowledge. The main challenge in such problems usually involves their high dimensionality, for which standard statistical methods prove computationally intractable. This paper develops a new high-dimensional Monte-Carlo approach, the localized ensemble Kalman smoother, and addresses its theoretical convergence.
Projection of angular momentum via linear algebra
NASA Astrophysics Data System (ADS)
Johnson, Calvin W.; O'Mara, Kevin D.
2017-12-01
Projection of many-body states with good angular momentum from an initial state is usually accomplished by a three-dimensional integral. We show how projection can instead be done by solving a straightforward system of linear equations. We demonstrate the method and give sample applications to 48Cr and 60Fe in the p f shell. This new projection scheme, which is competitive against the standard numerical quadrature, should also be applicable to other quantum numbers such as isospin and particle number.
Data Analysis Approaches for the Risk-Informed Safety Margins Characterization Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Alfonsi, Andrea; Maljovec, Daniel P.
2016-09-01
In the past decades, several numerical simulation codes have been employed to simulate accident dynamics (e.g., RELAP5-3D, RELAP-7, MELCOR, MAAP). In order to evaluate the impact of uncertainties on accident dynamics, several stochastic methodologies have been coupled with these codes. These stochastic methods range from classical Monte-Carlo and Latin Hypercube sampling to stochastic polynomial methods. Similar approaches have been introduced into the risk and safety community, where stochastic methods (such as RAVEN, ADAPT, MCDET, ADS) have been coupled with safety analysis codes in order to evaluate the safety impact of the timing and sequencing of events. These approaches are usually called Dynamic PRA or simulation-based PRA methods. These uncertainty and safety methods usually generate a large number of simulation runs (database storage may be on the order of gigabytes or higher). The scope of this paper is to present a broad overview of methods and algorithms that can be used to analyze and extract information from large data sets containing time-dependent data. In this context, "extracting information" means constructing input-output correlations, finding commonalities, and identifying outliers. Some of the algorithms presented here have been developed or are under development within the RAVEN statistical framework.
NASA Astrophysics Data System (ADS)
Renko, Tanja; Ivušić, Sarah; Telišman Prtenjak, Maja; Šoljan, Vinko; Horvat, Igor
2018-03-01
In this study, a synoptic and mesoscale analysis was performed and Szilagyi's waterspout forecasting method was tested on ten waterspout events in the period of 2013-2016. Data regarding waterspout occurrences were collected from weather stations, an online survey at the official website of the National Meteorological and Hydrological Service of Croatia and eyewitness reports from newspapers and the internet. Synoptic weather conditions were analyzed using surface pressure fields, 500 hPa level synoptic charts, SYNOP reports and atmospheric soundings. For all observed waterspout events, a synoptic type was determined using the 500 hPa geopotential height chart. The occurrence of lightning activity was determined from the LINET lightning database, and waterspouts were divided into thunderstorm-related and "fair weather" ones. Mesoscale characteristics (with a focus on thermodynamic instability indices) were determined using the high-resolution (500 m grid length) mesoscale numerical weather model and model results were compared with the available observations. Because thermodynamic instability indices are usually insufficient for forecasting waterspout activity, the performance of the Szilagyi Waterspout Index (SWI) was tested using vertical atmospheric profiles provided by the mesoscale numerical model. The SWI successfully forecasted all waterspout events, even the winter events. This indicates that Szilagyi's waterspout prognostic method could be used as a valid prognostic tool for the eastern Adriatic.
A Parallel Compact Multi-Dimensional Numerical Algorithm with Aeroacoustics Applications
NASA Technical Reports Server (NTRS)
Povitsky, Alex; Morris, Philip J.
1999-01-01
In this study, we propose a novel method to parallelize high-order compact numerical algorithms for the solution of three-dimensional PDEs (Partial Differential Equations) in a space-time domain. For this numerical integration most of the computer time is spent in computation of spatial derivatives at each stage of the Runge-Kutta temporal update. The most efficient direct method to compute spatial derivatives on a serial computer is a version of Gaussian elimination for narrow linear banded systems known as the Thomas algorithm. In a straightforward pipelined implementation of the Thomas algorithm, processors are idle due to the forward and backward recurrences of the algorithm. To utilize processors during this time, we propose to use them for either non-local data-independent computations, solving lines in the next spatial direction, or local data-dependent computations by the Runge-Kutta method. To achieve this goal, control of processor communication and computations by a static schedule is adopted. Thus, our parallel code is driven by a communication and computation schedule instead of the usual "creative programming" approach. The obtained parallelization speed-up of the novel algorithm is about twice that of the standard pipelined algorithm and close to that of the explicit DRP algorithm.
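For reference, a textbook serial Thomas algorithm (not the authors' pipelined parallel version) is sketched below; the forward and backward loops are the two recurrences that leave processors idle in a naive pipeline.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main- and super-diagonals
    a (length n-1), b (length n), c (length n-1) and right-hand side d."""
    n = len(b)
    cp, dp = np.empty(n - 1), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward recurrence
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # backward recurrence
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Quick check against a dense solve.
n = 6
a = np.full(n - 1, 1.0); b = np.full(n, 4.0); c = np.full(n - 1, 1.0)
d = np.arange(1.0, n + 1)
A = np.diag(a, -1) + np.diag(b) + np.diag(c, 1)
assert np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d))
```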
Modeling of diatomic molecule using the Morse potential and the Verlet algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fidiani, Elok
Molecular modeling is usually performed with special software for Molecular Dynamics (MD) such as GROMACS, NAMD, or JMOL. Molecular dynamics is a computational method to calculate the time-dependent behavior of a molecular system. In this work, MATLAB was used as the numerical tool for a simple modeling of some diatomic molecules: HCl, H₂ and O₂. MATLAB is a matrix-based numerical software; in order to do numerical analysis, all the functions and equations describing properties of atoms and molecules must be developed manually in MATLAB. In this work, a Morse potential was generated to describe the bond interaction between the two atoms. In order to analyze the simultaneous motion of molecules, the Verlet algorithm derived from Newton's equations of motion (classical mechanics) was used. Both the Morse potential and the Verlet algorithm were integrated using MATLAB to derive physical properties and the trajectory of the molecules. The data computed by MATLAB is always in the form of a matrix. To visualize it, Visual Molecular Dynamics (VMD) was used. Such a method is useful for developing and testing some types of interaction on a molecular scale. Besides, this can be very helpful for describing some basic principles of molecular interaction for educational purposes.
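A hedged Python translation of the same exercise (the work itself used MATLAB; the reduced-unit Morse parameters below are illustrative, not the fitted HCl, H₂ or O₂ values): the force is the analytic derivative of the Morse potential, and velocity Verlet advances the bond coordinate.

```python
import numpy as np

# Illustrative Morse parameters in reduced units (not a real molecule):
D, a, r0, mu = 4.0, 1.0, 1.0, 1.0   # well depth, width, equilibrium bond, reduced mass

def force(r):
    u = np.exp(-a * (r - r0))
    return -2.0 * D * a * u * (1.0 - u)   # F = -dV/dr for V = D (1 - u)^2

dt, nsteps = 0.01, 2000
r, v = 1.3, 0.0                           # stretched bond, released from rest
traj = np.empty(nsteps)
f = force(r)
for i in range(nsteps):                   # velocity Verlet integration
    r += v * dt + 0.5 * (f / mu) * dt**2
    f_new = force(r)
    v += 0.5 * (f + f_new) / mu * dt
    f = f_new
    traj[i] = r

print(traj.min(), traj.max())             # bond oscillates about r0
```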
Optimal time points sampling in pathway modelling.
Hu, Shiyan
2004-01-01
Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models with only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the morass of selecting good initial values and becoming stuck in local optima that usually accompanies conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
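As a hedged illustration of the underlying design problem (a greedy Fisher-information criterion on a toy exponential-decay model, not the paper's quantum-inspired evolutionary algorithm): for a single parameter, minimizing the estimator variance amounts to maximizing the summed squared sensitivities of the model at the chosen time points.

```python
import numpy as np

k_guess, sigma = 0.5, 0.1                      # nominal decay rate, noise std
t_candidates = np.linspace(0.1, 10.0, 100)     # feasible sampling times

# Sensitivity of the model y(t) = exp(-k t) with respect to k.
sens = -t_candidates * np.exp(-k_guess * t_candidates)

# Fisher information contributed by a measurement at each candidate time.
info = sens**2 / sigma**2

idx = np.argsort(info)[-5:]                    # greedily keep the 5 best points
print("selected times:", np.sort(t_candidates[idx]))
# Cramer-Rao lower bound on Var(k_hat) for this design:
print("variance bound:", 1.0 / info[idx].sum())
```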
Consistent three-equation model for thin films
NASA Astrophysics Data System (ADS)
Richard, Gael; Gisclon, Marguerite; Ruyer-Quil, Christian; Vila, Jean-Paul
2017-11-01
Numerical simulations of thin films of Newtonian fluids down an inclined plane use reduced models for computational cost reasons. These models are usually derived by averaging the physical equations of fluid mechanics over the fluid depth with an asymptotic method in the long-wave limit. Two-equation models are based on the mass conservation equation and either on the momentum balance equation or on the work-energy theorem. We show that there is no two-equation model that is both consistent and theoretically coherent and that a third variable and a three-equation model are required to resolve all theoretical contradictions. The linear and nonlinear properties of two- and three-equation models are tested on various practical problems. We present a new consistent three-equation model with a simple mathematical structure which allows an easy and reliable numerical resolution. The numerical calculations agree fairly well with experimental measurements or with direct numerical resolutions for neutral stability curves, speeds of kinematic and solitary waves, and depth profiles of wavy films. The model can also predict the flow reversal at the first capillary trough ahead of the main wave hump.
NASA Astrophysics Data System (ADS)
Amarti, Z.; Nurkholipah, N. S.; Anggriani, N.; Supriatna, A. K.
2018-03-01
Predicting the future population number is among the important factors that affect the considerations in preparing good management for the population. This has been done by various known methods, one of which is developing a mathematical model describing the growth of the population. The model usually takes the form of a differential equation or a system of differential equations, depending on the complexity of the underlying properties of the population. The most widely used growth models currently are those having a sigmoid solution in time series, including the Verhulst logistic equation and the Gompertz equation. In this paper we consider the Allee effect in Verhulst's logistic population model. The Allee effect is a phenomenon in biology showing a high correlation between population size or density and the mean individual fitness of the population. The method used to derive the solution is the Runge-Kutta numerical scheme, since it is generally regarded as a good numerical scheme that is relatively easy to implement. Further exploration is done via the fuzzy theoretical approach to accommodate the impreciseness of the initial values and parameters in the model.
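A minimal sketch of the deterministic core with crisp (non-fuzzy) values (illustrative r and K, and a hypothetical strong Allee threshold A), using the classical fourth-order Runge-Kutta scheme; the fuzzy extension would propagate interval or alpha-cut values of the initial condition and parameters through the same stepper.

```python
import numpy as np

r, K, A = 0.5, 100.0, 10.0                 # growth rate, capacity, Allee threshold

def f(N):
    # Logistic growth with a strong Allee effect: populations below A decline.
    return r * N * (1.0 - N / K) * (N / A - 1.0)

def rk4_step(N, h):
    k1 = f(N)
    k2 = f(N + 0.5 * h * k1)
    k3 = f(N + 0.5 * h * k2)
    k4 = f(N + h * k3)
    return N + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

h, N = 0.1, 15.0                           # start just above the threshold
for _ in range(1000):
    N = rk4_step(N, h)
print(N)   # approaches K; a start below A would instead decay to extinction
```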
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jo, J.C.; Shin, W.K.; Choi, C.Y.
Transient heat transfer problems with phase changes (Stefan problems) occur in many engineering situations, including potential core melting and solidification during pressurized-water-reactor severe accidents, ablation of thermal shields, melting and solidification of alloys, and many others. This article addresses the numerical analysis of nonlinear transient heat transfer with melting or solidification. An effective and simple procedure is presented for the simulation of the motion of the boundary and the transient temperature field during the phase change process. To accomplish this purpose, an iterative implicit solution algorithm has been developed by employing the dual-reciprocity boundary-element method. The dual-reciprocity boundary-element approach provided in this article is much simpler than the usual boundary-element method in applying a reciprocity principle and an available technique for dealing with the domain integral of the boundary element formulation simultaneously. In this article, attention is focused on two-dimensional melting (ablation)/solidification problems for simplicity. The accuracy and effectiveness of the present analysis method have been illustrated through comparisons of the calculation results of some examples of one-phase ablation/solidification problems with their known semianalytical or numerical solutions where available.
A comparison of two closely-related approaches to aerodynamic design optimization
NASA Technical Reports Server (NTRS)
Shubin, G. R.; Frank, P. D.
1991-01-01
Two related methods for aerodynamic design optimization are compared. The methods, called the implicit gradient approach and the variational (or optimal control) approach, both attempt to obtain gradients necessary for numerical optimization at a cost significantly less than that of the usual black-box approach that employs finite difference gradients. While the two methods are seemingly quite different, they are shown to differ (essentially) in that the order of discretizing the continuous problem, and of applying calculus, is interchanged. Under certain circumstances, the two methods turn out to be identical. We explore the relationship between these methods by applying them to a model problem for duct flow that has many features in common with transonic flow over an airfoil. We find that the gradients computed by the variational method can sometimes be sufficiently inaccurate to cause the optimization to fail.
Quadratic String Method for Locating Instantons in Tunneling Splitting Calculations.
Cvitaš, Marko T
2018-03-13
The ring-polymer instanton (RPI) method is an efficient technique for calculating approximate tunneling splittings in high-dimensional molecular systems. In the RPI method, the tunneling splitting is evaluated from the properties of the minimum action path (MAP) connecting the symmetric wells, whereby the extensive sampling of the full potential energy surface required by exact quantum-dynamics methods is avoided. Nevertheless, the search for the MAP is usually the most time-consuming step in the standard numerical procedures. Recently, nudged elastic band (NEB) and string methods, originally developed for locating minimum energy paths (MEPs), were adapted for the purpose of MAP finding with great efficiency gains [J. Chem. Theory Comput. 2016, 12, 787]. In this work, we develop a new quadratic string method for locating instantons. The Euclidean action is minimized by propagating the initial guess (a path connecting two wells) over the quadratic potential energy surface approximated by means of updated Hessians. This allows the algorithm to take many minimization steps between the potential/gradient calls, with further reductions in the computational effort, exploiting the smoothness of the potential energy surface. The approach is general, as it uses Cartesian coordinates, and widely applicable, with the computational effort of finding the instanton usually lower than that of determining the MEP. It can be combined with expensive potential energy surfaces or on-the-fly electronic-structure methods to explore a wide variety of molecular systems.
FASOR - A second generation shell of revolution code
NASA Technical Reports Server (NTRS)
Cohen, G. A.
1978-01-01
An integrated computer program entitled Field Analysis of Shells of Revolution (FASOR) currently under development for NASA is described. When completed, this code will treat prebuckling, buckling, initial postbuckling and vibrations under axisymmetric static loads as well as linear response and bifurcation under asymmetric static loads. Although these modes of response are treated by existing programs, FASOR extends the class of problems treated to include general anisotropy and transverse shear deformations of stiffened laminated shells. At the same time, a primary goal is to develop a program which is free of the usual problems of modeling, numerical convergence and ill-conditioning, laborious problem setup, limitations on problem size and interpretation of output. The field method is briefly described, the shell differential equations are cast in a suitable form for solution by this method and essential aspects of the input format are presented. Numerical results are given for both unstiffened and stiffened anisotropic cylindrical shells and compared with previously published analytical solutions.
Modelling radionuclide transport in fractured media with a dynamic update of K d values
Trinchero, Paolo; Painter, Scott L.; Ebrahimi, Hedieh; ...
2015-10-13
Radionuclide transport in fractured crystalline rocks is a process of interest in evaluating long-term safety of potential disposal systems for radioactive wastes. Given their numerical efficiency and the absence of numerical dispersion, Lagrangian methods (e.g. particle tracking algorithms) are appealing approaches that are often used in safety assessment (SA) analyses. In these approaches, many complex geochemical retention processes are typically lumped into a single parameter: the distribution coefficient (Kd). Usually, the distribution coefficient is assumed to be constant over the time frame of interest. However, this assumption could be critical under long-term geochemical changes, as it is demonstrated that the distribution coefficient depends on the background chemical conditions (e.g. pH, Eh, and major chemistry). In this study, we provide a computational framework that combines the efficiency of Lagrangian methods with a sound and explicit description of the geochemical changes of the site and their influence on the radionuclide retention properties.
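A hedged one-dimensional sketch of the coupling idea (the Kd(pH) relation and the pH drift below are illustrative placeholders, not the site geochemistry of the study): each random-walk particle is retarded by a factor R = 1 + (rho_b/theta) Kd, and Kd is re-evaluated at every step from the evolving background chemistry instead of being held constant.

```python
import numpy as np

rng = np.random.default_rng(0)
v, D = 1.0, 0.01                  # advective velocity, dispersion coefficient
rho_b, theta = 2650.0, 0.1        # bulk density (kg/m3), kinematic porosity

def kd_of_ph(ph):
    # Hypothetical sorption model: Kd rises with pH (placeholder relation).
    return 1e-4 * 10.0 ** (0.3 * (ph - 7.0))

def ph_at(t):
    # Hypothetical long-term geochemical drift of the background water.
    return 7.0 + 1.5 * np.tanh(t / 50.0)

dt, nsteps, npart = 0.1, 500, 1000
x = np.zeros(npart)
for step in range(nsteps):
    # Dynamic retardation factor, updated from the current chemistry.
    R = 1.0 + (rho_b / theta) * kd_of_ph(ph_at(step * dt))
    x += (v / R) * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt / R), npart)

print("mean plume position:", x.mean())
```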
NASA Technical Reports Server (NTRS)
Thareja, R.; Haftka, R. T.
1986-01-01
There has been recent interest in multidisciplinary multilevel optimization applied to large engineering systems. The usual approach is to divide the system into a hierarchy of subsystems with ever increasing detail in the analysis focus. Equality constraints are usually placed on various design quantities at every successive level to ensure consistency between levels. In many previous applications these equality constraints were eliminated by reducing the number of design variables. In complex systems this may not be possible and these equality constraints may have to be retained in the optimization process. In this paper the impact of such a retention is examined for a simple portal frame problem. It is shown that the equality constraints introduce numerical difficulties, and that the numerical solution becomes very sensitive to optimization parameters for a wide range of optimization algorithms.
NASA Astrophysics Data System (ADS)
Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal
2013-01-01
A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate solution to predict various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the optimization of the core parameter U, which is usually uncertain, noisy, or even discontinuous, is calculated by the Nelder-Mead method of nonlinear unconstrained minimization, as it is an efficient and compact direct search method that does not need any derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, Petermann I and II spot sizes have been evaluated for triangular and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
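The optimization step maps directly onto a derivative-free minimizer; a sketch using SciPy's Nelder-Mead implementation (the objective below is a toy stand-in for the variational expression, and the parameter names are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def objective(params):
    # Placeholder for the variational functional of the trial modal field;
    # U is the core parameter, (p, q) stand in for the two extra shape parameters.
    U, p, q = params
    return (U - 1.8) ** 2 + 0.5 * (p - 0.3) ** 2 + 0.1 * np.abs(q)  # toy surface

res = minimize(objective, x0=[1.0, 0.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
print(res.x, res.fun)   # optimized (U, p, q) without any derivative information
```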
NASA Astrophysics Data System (ADS)
Zeng, Qinglei; Liu, Zhanli; Wang, Tao; Gao, Yue; Zhuang, Zhuo
2018-02-01
In the hydraulic fracturing process in shale rock, multiple fractures perpendicular to a horizontal wellbore are usually driven to propagate simultaneously by the pumping operation. In this paper, a numerical method is developed for the propagation of multiple hydraulic fractures (HFs) by fully coupling the deformation and fracturing of the solid formation, fluid flow in fractures, fluid partitioning through a horizontal wellbore, and the perforation entry loss effect. The extended finite element method (XFEM) is adopted to model arbitrary growth of the fractures. Newton's iteration is proposed to solve these fully coupled nonlinear equations, which is more efficient compared with the widely adopted fixed-point iteration in the literature and avoids the need to impose a fluid pressure boundary condition when solving the flow equations. A secant iterative method based on the stress intensity factor (SIF) is proposed to capture the different propagation velocities of multiple fractures. The numerical results are compared with theoretical solutions in the literature to verify the accuracy of the method. The simultaneous propagation of multiple HFs is simulated by the newly proposed algorithm. The coupled influences of propagation regime, stress interaction, wellbore pressure loss and perforation entry loss on the simultaneous propagation of multiple HFs are investigated.
NASA Technical Reports Server (NTRS)
Duffy, S. F.; Hu, J.; Hopkins, D. A.
1995-01-01
The article begins by examining the fundamentals of traditional deterministic design philosophy. The initial section outlines the concepts of failure criteria and limit state functions, two traditional notions that are embedded in deterministic design philosophy. This is followed by a discussion regarding safety factors (a possible limit state function) and the common utilization of statistical concepts in deterministic engineering design approaches. Next, the fundamental aspects of a probabilistic failure analysis are explored, and it is shown that the deterministic design concepts mentioned in the initial portion of the article are embedded in probabilistic design methods. For components fabricated from ceramic materials (and other similarly brittle materials), the probabilistic design approach yields the widely used Weibull analysis after suitable assumptions are incorporated. The authors point out that Weibull analysis provides the rare instance where closed-form solutions are available for a probabilistic failure analysis. Since numerical methods are usually required to evaluate component reliabilities, a section on Monte Carlo methods is included to introduce the concept. The article concludes with a presentation of the technical aspects that support the numerical method known as fast probability integration (FPI). This includes a discussion of the Hasofer-Lind and Rackwitz-Fiessler approximations.
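A minimal Monte Carlo illustration of the closing point (illustrative Weibull parameters, not a real ceramic dataset): sample component strengths from a two-parameter Weibull distribution, estimate the failure probability at a given applied stress, and compare with the closed-form Weibull result.

```python
import numpy as np

m, sigma0 = 10.0, 350.0           # Weibull modulus and characteristic strength (MPa)
applied = 300.0                   # applied stress (MPa)

rng = np.random.default_rng(42)
strengths = sigma0 * rng.weibull(m, size=1_000_000)   # sampled component strengths

pf_mc = np.mean(strengths < applied)                  # Monte Carlo failure probability
pf_exact = 1.0 - np.exp(-(applied / sigma0) ** m)     # closed-form Weibull result
print(f"MC: {pf_mc:.4f}  exact: {pf_exact:.4f}")
```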
A Gaussian beam method for ultrasonic non-destructive evaluation modeling
NASA Astrophysics Data System (ADS)
Jacquet, O.; Leymarie, N.; Cassereau, D.
2018-05-01
The propagation of high-frequency ultrasonic body waves can be efficiently estimated with a semi-analytic Dynamic Ray Tracing approach using the paraxial approximation. Although this asymptotic field estimation avoids the computational cost of numerical methods, it may encounter several limitations in reproducing identified highly interferential features. Nevertheless, some can be managed by allowing paraxial quantities to be complex-valued. This gives rise to localized solutions, known as paraxial Gaussian beams. Whereas their propagation and transmission/reflection laws are well-defined, the fact remains that the adopted complexification introduces additional initial conditions. While their choice is usually performed according to strategies specifically tailored to limited applications, a Gabor frame method has been implemented to indiscriminately initialize a reasonable number of paraxial Gaussian beams. Since this method can be applied for a usefully wide range of ultrasonic transducers, the typical case of the time-harmonic piston radiator is investigated. Compared to the commonly used Multi-Gaussian Beam model [1], a better agreement is obtained throughout the radiated field between the results of numerical integration (or analytical on-axis solution) and the resulting Gaussian beam superposition. Sparsity of the proposed solution is also discussed.
Fuzzy neural network methodology applied to medical diagnosis
NASA Technical Reports Server (NTRS)
Gorzalczany, Marian B.; Deutsch-Mcleish, Mary
1992-01-01
This paper presents a technique for building expert systems that combines the fuzzy-set approach with artificial neural network structures. This technique can effectively deal with two types of medical knowledge: a nonfuzzy one and a fuzzy one which usually contributes to the process of medical diagnosis. Nonfuzzy numerical data is obtained from medical tests. Fuzzy linguistic rules describing the diagnosis process are provided by a human expert. The proposed method has been successfully applied in veterinary medicine as a support system in the diagnosis of canine liver diseases.
An attribute-driven statistics generator for use in a G.I.S. environment
NASA Technical Reports Server (NTRS)
Thomas, R. W.; Ritter, P. R.; Kaugars, A.
1984-01-01
When performing research using digital geographic information it is often useful to produce quantitative characterizations of the data, usually within some constraints. In the research environment the different combinations of required data and constraints can often become quite complex. This paper describes a technique that gives the researcher a powerful and flexible way to set up many possible combinations of data and constraints without having to perform numerous intermediate steps or create temporary data bands. This method provides an efficient way to produce descriptive statistics in such situations.
Geng, Zhigeng; Wang, Sijian; Yu, Menggang; Monahan, Patrick O.; Champion, Victoria; Wahba, Grace
2017-01-01
In many scientific and engineering applications, covariates are naturally grouped. When group structures are available among covariates, people are usually interested in identifying both important groups and important variables within the selected groups. Among existing successful group variable selection methods, some fail to conduct within-group selection. Others are able to conduct both group and within-group selection, but the corresponding objective functions are non-convex. Such non-convexity may require extra numerical effort. In this article, we propose a novel Log-Exp-Sum (LES) penalty for group variable selection. The LES penalty is strictly convex. It can identify important groups as well as select important variables within the group. We develop an efficient group-level coordinate descent algorithm to fit the model. We also derive non-asymptotic error bounds and asymptotic group selection consistency for our method in the high-dimensional setting where the number of covariates can be much larger than the sample size. Numerical results demonstrate the good performance of our method in both variable selection and prediction. We applied the proposed method to an American Cancer Society breast cancer survivor dataset. The findings are clinically meaningful and may help design intervention programs to improve the quality of life for breast cancer survivors. PMID:25257196
Feature-based data assimilation in geophysics
NASA Astrophysics Data System (ADS)
Morzfeld, Matthias; Adams, Jesse; Lunderman, Spencer; Orozco, Rafael
2018-05-01
Many applications in science require that computational models and data be combined. In a Bayesian framework, this is usually done by defining likelihoods based on the mismatch of model outputs and data. However, matching model outputs and data in this way can be unnecessary or impossible. For example, using large amounts of steady state data is unnecessary because these data are redundant. It is numerically difficult to assimilate data in chaotic systems. It is often impossible to assimilate data of a complex system into a low-dimensional model. As a specific example, consider a low-dimensional stochastic model for the dipole of the Earth's magnetic field, while other field components are ignored in the model. The above issues can be addressed by selecting features of the data, and defining likelihoods based on the features, rather than by the usual mismatch of model output and data. Our goal is to contribute to a fundamental understanding of such a feature-based approach that allows us to assimilate selected aspects of data into models. We also explain how the feature-based approach can be interpreted as a method for reducing an effective dimension and derive new noise models, based on perturbed observations, that lead to computationally efficient solutions. Numerical implementations of our ideas are illustrated in four examples.
Generalized contact and improved frictional heating in the material point method
NASA Astrophysics Data System (ADS)
Nairn, J. A.; Bardenhagen, S. G.; Smith, G. D.
2017-09-01
The material point method (MPM) has proved to be an effective particle method for computational mechanics modeling of problems involving contact, but all prior applications have been limited to Coulomb friction. This paper generalizes the MPM approach for contact to handle any friction law with examples given for friction with adhesion or with a velocity-dependent coefficient of friction. Accounting for adhesion requires an extra calculation to evaluate contact area. Implementation of velocity-dependent laws usually needs numerical methods to find contacting forces. The friction process involves work which can be converted into heat. This paper provides a new method for calculating frictional heating that accounts for interfacial acceleration during the time step. The acceleration term is small for many problems, but temporal convergence of heating effects for problems involving vibrations and high contact forces is improved by the new method. Fortunately, the new method needs few extra calculations and therefore is recommended for all simulations.
Modelling the excitation field of an optical resonator
NASA Astrophysics Data System (ADS)
Romanini, Daniele
2014-06-01
Assuming the paraxial approximation, we derive efficient recursive expressions for the projection coefficients of a Gaussian beam over the Gauss-Hermite transverse electro-magnetic (TEM) modes of an optical cavity. While previous studies considered cavities with cylindrical symmetry, our derivation accounts for "simple" astigmatism and ellipticity, which allows us to deal with more realistic optical systems. The resulting expansion of the Gaussian beam over the cavity TEM modes provides accurate simulation of the excitation field distribution inside the cavity, in transmission, and in reflection. In particular, this requires including counter-propagating TEM modes, usually neglected in textbooks. As an illustrative application to a complex case, we simulate reentrant cavity configurations where Herriott spots are obtained at the cavity output. We show that the case of an astigmatic cavity is also easily modelled. To our knowledge, such relevant applications are usually treated under the simplified geometrical optics approximation, or using heavier numerical methods.
Zhao, F Y; Yang, X; Chen, D Y; Ma, W Y; Zheng, J G; Zhang, X M
2014-01-01
Many studies have suggested a link between the spatial organization of genomes and fundamental biological processes such as genome reprogramming, gene expression, and differentiation. Multicolor fluorescence in situ hybridization on three-dimensionally preserved nuclei (3D-FISH), in combination with confocal microscopy, has become an effective technique for analyzing 3D genome structure and spatial patterns of defined nucleus targets including entire chromosome territories and single gene loci. This technique usually requires the simultaneous visualization of numerous targets labeled with different colored fluorochromes. Thus, the number of channels and lasers must be sufficient for the commonly used labeling scheme of 3D-FISH, "one probe-one target". However, these channels and lasers are usually restricted by a given microscope system. This paper presents a method for simultaneously delineating multiple targets in 3D-FISH using limited channels, lasers, and fluorochromes. In contrast to other labeling schemes, this method is convenient and simple for multicolor 3D-FISH studies, which may result in widespread adoption of the technique. Lastly, as an application of the method, the nucleus locations of chromosome territory 18/21 and centromere 18/21/13 in normal human lymphocytes were analyzed, which might present evidence of a radial higher order chromatin arrangement.
Tail shortening by discrete hydrodynamics
NASA Astrophysics Data System (ADS)
Kiefer, J.; Visscher, P. B.
1982-02-01
A discrete formulation of hydrodynamics was recently introduced, whose most important feature is that it is exactly renormalizable. Previous numerical work has found that it provides a more efficient and rapidly convergent method for calculating transport coefficients than the usual Green-Kubo method. The latter's convergence difficulties are due to the well-known "long-time tail" of the time correlation function which must be integrated over time. The purpose of the present paper is to present additional evidence that these difficulties are really absent in the discrete equation of motion approach. The "memory" terms in the equation of motion are calculated accurately, and shown to decay much more rapidly with time than the equilibrium time correlations do.
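For contrast, the Green-Kubo route mentioned above integrates an equilibrium time correlation; a minimal sketch on a synthetic velocity trace (an Ornstein-Uhlenbeck process, whose exponential correlation tail is integrable, with expected D = kT*tau/m = 0.5 in these units):

```python
import numpy as np

# Synthetic 1D velocity trace: Ornstein-Uhlenbeck with relaxation time tau,
# so <v(0)v(t)> = (kT/m) exp(-t/tau) and D = kT*tau/m = 0.5 here.
rng = np.random.default_rng(1)
dt, n, tau, kT_over_m = 0.01, 100_000, 0.5, 1.0
v = np.empty(n)
v[0] = 0.0
for i in range(1, n):
    v[i] = v[i - 1] * (1.0 - dt / tau) \
           + np.sqrt(2.0 * kT_over_m * dt / tau) * rng.normal()

# Velocity autocorrelation function up to a lag cutoff of 20 tau.
nlag = 1000
vacf = np.array([np.mean(v[: n - k] * v[k:]) for k in range(nlag)])

# Green-Kubo: D = integral of <v(0)v(t)> dt, here a simple rectangle sum.
D = vacf.sum() * dt
print(D)   # close to 0.5, limited by the statistics of the synthetic trace
```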
NASA Astrophysics Data System (ADS)
Mulchrone, Kieran F.; Meere, Patrick A.
2015-09-01
Shape fabrics of elliptical objects in rocks are usually assumed to develop by passive behavior of inclusions with respect to the surrounding material, leading to shape-based strain analysis methods belonging to the Rf/ϕ family. A probability density function is derived for the orientational characteristics of populations of rigid ellipses deforming in a pure shear 2D deformation with both no-slip and slip boundary conditions. Using maximum likelihood, a numerical method is developed for estimating finite strain in natural populations deforming by both mechanisms. Application to a natural example indicates the importance of the slip mechanism in explaining clast shape fabrics in deformed sediments.
Optimization methods and silicon solar cell numerical models
NASA Technical Reports Server (NTRS)
Girardini, K.
1986-01-01
The goal of this project is the development of an optimization algorithm for use with a solar cell model. It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAPID). SCAPID uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical methods used in SCAPID require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem has been alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution. Adapting SCAPID so that it could be called iteratively by the optimization code provided another means of reducing the CPU time required to complete an optimization. Instead of calculating the entire I-V curve, as is usually done in SCAPID, only the efficiency is calculated (maximum power voltage and current), and the solution from previous calculations is used to initiate the next solution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, H.X.; Anh, B.V.; Dinh, T.N.
1999-07-01
This paper presents results of a numerical investigation on the behavior of melt drops falling in a gas (vapor) space and then penetrating into a liquid volume through the gas-liquid interface. The phenomenon studied here is usually observed when a liquid drop falls through air into a water pool and is especially of interest when a hypothetical severe reactor core meltdown accident is considered. The objective of this work is to study the effect of the gas-liquid interface on the dynamic evolution of the interaction area between the fragmenting melt drop and water. In the present study, the Navier-Stokes equations are solved for three phases (gas, liquid and melt-drop) using a higher-order, explicit, numerical method, called the Cubic-Interpolated Pseudo-Particle (CIP) method, which is employed in combination with an advanced front-capturing scheme, named the Level Set Algorithm (LSA). By using this method, reasonable physical pictures of droplet deformation and fragmentation during movement in a stationary uniform water pool, and in a gas-liquid two-layer volume, are simulated. The effect of the gas-liquid interface on the drop deformation and fragmentation is analyzed by comparing the simulation results obtained for the two cases. Effects of the drop geometry and of the flow conditions on the behavior of the melt drop are also analyzed.
Fractions--Concepts before Symbols.
ERIC Educational Resources Information Center
Bennett, Albert B., Jr.
The learning difficulties that students experience with fractions begin immediately when they are shown fraction symbols with one numeral written above the other and told that the "top number" is called the numerator and the "bottom number" is called the denominator. This introduction to fractions will usually include a few visual diagrams to help…
Drowning in Data: Sorting through CD ROM and Computer Databases.
ERIC Educational Resources Information Center
Cates, Carl M.; Kaye, Barbara K.
This paper identifies the bibliographic and numeric databases on CD-ROM and computer diskette that should be most useful for investigators in communication, marketing, and communication education. Bibliographic databases are usually found in three formats: citations only, citations and abstracts, and full-text articles. Numeric databases are…
NASA Astrophysics Data System (ADS)
Yang, L. M.; Shu, C.; Yang, W. M.; Wang, Y.; Wu, J.
2017-08-01
In this work, an immersed boundary-simplified sphere function-based gas kinetic scheme (SGKS) is presented for the simulation of 3D incompressible flows with curved and moving boundaries. First, the SGKS [Yang et al., "A three-dimensional explicit sphere function-based gas-kinetic flux solver for simulation of inviscid compressible flows," J. Comput. Phys. 295, 322 (2015) and Yang et al., "Development of discrete gas kinetic scheme for simulation of 3D viscous incompressible and compressible flows," J. Comput. Phys. 319, 129 (2016)], which is often applied for the simulation of compressible flows, is simplified to improve the computational efficiency for the simulation of incompressible flows. In the original SGKS, the integral domain along the spherical surface for computing conservative variables and numerical fluxes is usually not symmetric at the cell interface. This makes the expression for the numerical fluxes at the cell interface relatively complicated. For incompressible flows, the sphere at the cell interface can be approximately considered to be symmetric, as shown in this work. Besides that, the energy equation is usually not needed for the simulation of incompressible isothermal flows. With all these simplifications, simple and explicit formulations for the conservative variables and numerical fluxes at the cell interface can be obtained. Second, to effectively implement the no-slip boundary condition for fluid flow problems with complex geometry as well as moving boundaries, the implicit boundary condition-enforced immersed boundary method [Wu and Shu, "Implicit velocity correction-based immersed boundary-lattice Boltzmann method and its applications," J. Comput. Phys. 228, 1963 (2009)] is introduced into the simplified SGKS. That is, the flow field is solved by the simplified SGKS without considering the presence of an immersed body, and the no-slip boundary condition is implemented by the immersed boundary method. The accuracy and efficiency of the present scheme are validated by simulating the decaying vortex flow, flow past a stationary and rotating sphere, flow past a stationary torus, and flows over dragonfly flight.
Hybrid ODE/SSA methods and the cell cycle model
NASA Astrophysics Data System (ADS)
Wang, S.; Chen, M.; Cao, Y.
2017-07-01
Stochastic effects in cellular systems have been an important topic in systems biology. Stochastic modeling and simulation methods are important tools to study such effects. Given the low efficiency of stochastic simulation algorithms, the hybrid method, which combines an ordinary differential equation (ODE) system with a stochastic chemically reacting system, shows unique advantages in the modeling and simulation of biochemical systems. The efficiency of the hybrid method is usually limited by reactions in the stochastic subsystem, which are modeled and simulated using Gillespie's framework and frequently interrupt the integration of the ODE subsystem. In this paper we develop an efficient implementation approach for the hybrid method coupled with traditional ODE solvers. We also compare the efficiency of hybrid methods with three widely used ODE solvers: RADAU5, DASSL, and DLSODAR. Numerical experiments with three biochemical models are presented. A detailed discussion is presented for the performances of the three ODE solvers.
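The stochastic subsystem referred to above follows Gillespie's framework; a minimal direct-method sketch for a single reversible isomerization (illustrative rate constants, not the cell cycle model) shows the exponentially distributed event times that repeatedly interrupt the ODE integration in a hybrid scheme.

```python
import numpy as np

rng = np.random.default_rng(7)
k1, k2 = 1.0, 0.5                 # A -> B and B -> A rate constants (illustrative)
A, B, t, t_end = 100, 0, 0.0, 10.0

while t < t_end:
    props = np.array([k1 * A, k2 * B])        # reaction propensities
    a0 = props.sum()
    if a0 == 0.0:
        break
    t += rng.exponential(1.0 / a0)            # time to the next stochastic event
    if rng.random() < props[0] / a0:          # choose which reaction fires
        A, B = A - 1, B + 1
    else:
        A, B = A + 1, B - 1

print(A, B)   # fluctuates around the equilibrium where k1*A = k2*B
```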
NASA Astrophysics Data System (ADS)
Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian
2017-07-01
Full-waveform inversion (FWI) is an ill-posed optimization problem which is sensitive to noise and to the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method extends the widely used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of the ℓ1-regularized method and of the prior model information obtained from sonic logs and geological information, we implement the OWL-QN algorithm in ℓ1-regularized FWI with prior model information in this paper. Numerical experiments show that this method not only improves the inversion results but also has a strong anti-noise ability.
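OWL-QN itself is involved; as a hedged stand-in that shows the effect of the ℓ1 penalty toward a prior model, the sketch below solves min over m of 0.5*||Gm - d||^2 + lam*||m - m_prior||_1 with the simpler ISTA (proximal-gradient) iteration; its soft-thresholding step plays the role that orthant projection plays in OWL-QN. The operator and model sizes are toy values.

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.normal(size=(60, 40))              # toy linearized forward operator
m_true = np.zeros(40); m_true[[5, 18, 30]] = [1.0, -0.7, 0.4]
m_prior = np.zeros(40)                     # prior model (e.g., from sonic logs)
d = G @ m_true + 0.01 * rng.normal(size=60)

lam = 0.1
L = np.linalg.norm(G, 2) ** 2              # Lipschitz constant of the gradient
m = m_prior.copy()
for _ in range(500):                       # ISTA: gradient step + soft threshold
    g = G.T @ (G @ m - d)
    z = m - g / L
    u = z - m_prior                        # threshold the deviation from the prior
    m = m_prior + np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)

print(np.round(m[[5, 18, 30]], 3))         # sparse deviation from the prior model
```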
Total Variation with Overlapping Group Sparsity for Image Deblurring under Impulse Noise
Liu, Gang; Huang, Ting-Zhu; Liu, Jun; Lv, Xiao-Guang
2015-01-01
The total variation (TV) regularization method is an effective method for image deblurring that preserves edges. However, the TV-based solutions usually have some staircase effects. In order to alleviate the staircase effects, we propose a new model for restoring blurred images under impulse noise. The model consists of an ℓ1-fidelity term and a TV with overlapping group sparsity (OGS) regularization term. Moreover, we impose a box constraint on the proposed model for getting more accurate solutions. The solving algorithm for our model is developed under the framework of the alternating direction method of multipliers (ADMM). We use an inner loop which is nested inside the majorization minimization (MM) iteration for the subproblem of the proposed method. Compared with other TV-based methods, numerical results illustrate that the proposed method can significantly improve the restoration quality, both in terms of peak signal-to-noise ratio (PSNR) and relative error (ReE). PMID:25874860
A Robust and Efficient Method for Steady State Patterns in Reaction-Diffusion Systems
Lo, Wing-Cheong; Chen, Long; Wang, Ming; Nie, Qing
2012-01-01
An inhomogeneous steady state pattern of nonlinear reaction-diffusion equations with no-flux boundary conditions is usually computed by solving the corresponding time-dependent reaction-diffusion equations using temporal schemes. Nonlinear solvers (e.g., Newton's method) take less CPU time in direct computation for the steady state; however, their convergence is sensitive to the initial guess, often leading to divergence or convergence to a spatially homogeneous solution. Systematic numerical exploration of spatial patterns of reaction-diffusion equations under different parameter regimes requires that the numerical method be efficient and robust to the initial condition or initial guess, with better likelihood of convergence to an inhomogeneous pattern. Here, a new approach that combines the advantages of temporal schemes in robustness and Newton's method in fast convergence in solving steady states of reaction-diffusion equations is proposed. In particular, an adaptive implicit Euler with inexact solver (AIIE) method is found to be much more efficient than temporal schemes and more robust in convergence than typical nonlinear solvers (e.g., Newton's method) in finding the inhomogeneous pattern. Application of this new approach to two reaction-diffusion equations in one, two, and three spatial dimensions, along with direct comparisons to several other existing methods, demonstrates that AIIE is a more desirable method for searching inhomogeneous spatial patterns of reaction-diffusion equations in a large parameter space. PMID:22773849
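A hedged one-dimensional sketch of the core iteration (a fixed-step implicit Euler with a Newton inner solver, not the full adaptive AIIE scheme): each implicit step for u_t = D u_xx + u - u^3 with no-flux boundaries is solved by Newton's method. The step size is kept moderate here, since an overly large implicit step can make Newton land on a spurious root, the kind of pitfall the adaptive step selection in AIIE guards against.

```python
import numpy as np

n, Dc = 50, 1e-3                           # grid points, diffusion coefficient
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)

# Discrete Laplacian with no-flux (Neumann) boundaries via ghost nodes.
L = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n) + np.diag(np.ones(n - 1), 1)) / h**2
L[0, 1] *= 2.0
L[-1, -2] *= 2.0

f = lambda u: u - u**3                     # bistable reaction term (illustrative)
fp = lambda u: 1.0 - 3.0 * u**2

u = 0.1 * np.cos(np.pi * x)                # small perturbation as initial data
dt = 0.5                                   # moderate implicit step (see note above)
for _ in range(200):                       # implicit Euler steps toward steady state
    u_old = u.copy()
    for _ in range(20):                    # Newton iterations for the implicit step
        F = u - u_old - dt * (Dc * (L @ u) + f(u))
        J = np.eye(n) - dt * (Dc * L + np.diag(fp(u)))
        du = np.linalg.solve(J, -F)
        u += du
        if np.linalg.norm(du) < 1e-12:
            break

print(u.min(), u.max())   # settles onto an inhomogeneous, front-like profile
```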
A high-order multiscale finite-element method for time-domain acoustic-wave modeling
NASA Astrophysics Data System (ADS)
Gao, Kai; Fu, Shubin; Chung, Eric T.
2018-05-01
Accurate and efficient wave equation modeling is vital for many applications in fields such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grids in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on the multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss-Lobatto-Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.
NASA Astrophysics Data System (ADS)
Zhou, Yuan; Tang, Eric; Luo, Jianwen; Yao, Junjie
2018-01-01
Temperature mapping during thermotherapy can help precisely control the heating process, both temporally and spatially, to efficiently kill the tumor cells and prevent the healthy tissues from heating damage. Photoacoustic tomography (PAT) has been used for noninvasive temperature mapping with high sensitivity, based on the linear correlation between the tissue's Grüneisen parameter and temperature. However, limited by the tissue's unknown optical properties and thus the optical fluence at depths beyond the optical diffusion limit, the reported PAT thermometry usually takes a ratiometric measurement at different temperatures and thus cannot provide absolute measurements. Moreover, ratiometric measurement over time at different temperatures has to assume that the tissue's optical properties do not change with temperature, which is usually not valid due to the temperature-induced hemodynamic changes. We propose an optical-diffusion-model-enhanced PAT temperature mapping method that can obtain the absolute temperature distribution in deep tissue, without the need for multiple measurements at different temperatures. Based on the initial acoustic pressure reconstructed from multi-illumination photoacoustic signals, both the local optical fluence and the optical parameters including absorption and scattering coefficients are first estimated by the optical-diffusion model; then the temperature distribution is obtained from the reconstructed Grüneisen parameters. We have developed a mathematical model for the multi-illumination PAT of absolute temperatures, and our two-dimensional numerical simulations have shown the feasibility of this new method. The proposed absolute temperature mapping method may set the technical foundation for better temperature control in deep tissue in thermotherapy.
NASA Astrophysics Data System (ADS)
Zhao, G.; Liu, J.; Chen, B.; Guo, R.; Chen, L.
2017-12-01
Forward modeling of gravitational fields at large scale requires considering the curvature of the Earth and evaluating Newton's volume integral in spherical coordinates. To acquire fast and accurate gravitational effects for subsurface structures, the subsurface mass distribution is usually discretized into small spherical prisms (called tesseroids). The gravity fields of tesseroids are generally calculated numerically. One of the commonly used numerical methods is 3D Gauss-Legendre quadrature (GLQ). However, the traditional GLQ integration suffers from low computational efficiency and relatively poor accuracy when the observation surface is close to the source region. We developed a fast and high-accuracy 3D GLQ integration based on the equivalence of the kernel matrix, adaptive discretization, and parallelization using OpenMP. The equivalence of the kernel matrix strategy increases efficiency and reduces memory consumption by calculating and storing the same matrix elements in each kernel matrix just one time. In this method, the adaptive discretization strategy is used to improve the accuracy. The numerical investigations show that the executing time of the proposed method is reduced by two orders of magnitude compared with the traditional method without these optimized strategies. High-accuracy results can also be guaranteed no matter how close the computation points are to the source region. In addition, the algorithm dramatically reduces the memory requirement by N times compared with the traditional method, where N is the number of discretizations of the source region in the longitudinal direction. This makes large-scale gravity forward modeling and inversion with a fine discretization possible.
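A hedged sketch of the basic 3D GLQ evaluation for a single tesseroid (no adaptive subdivision, kernel-matrix reuse, or OpenMP parallelization, which are the paper's optimizations; the bounds, density, and observation point are illustrative): the Newton potential in spherical coordinates is approximated by a tensor product of Gauss-Legendre nodes.

```python
import numpy as np

G_const = 6.674e-11
rho = 2670.0                                   # density (kg/m^3)

# Tesseroid bounds: radius (m), latitude and longitude (rad).
r1, r2 = 6368e3, 6371e3
phi1, phi2 = np.radians(30.0), np.radians(31.0)
lam1, lam2 = np.radians(10.0), np.radians(11.0)

# Observation point in spherical coordinates.
r0, phi0, lam0 = 6381e3, np.radians(30.5), np.radians(10.5)

def glq_potential(order=8):
    t, w = np.polynomial.legendre.leggauss(order)     # nodes/weights on [-1, 1]
    # Map the reference nodes onto each integration interval.
    r  = 0.5 * (r2 - r1) * t + 0.5 * (r2 + r1)
    ph = 0.5 * (phi2 - phi1) * t + 0.5 * (phi2 + phi1)
    la = 0.5 * (lam2 - lam1) * t + 0.5 * (lam2 + lam1)
    jac = 0.125 * (r2 - r1) * (phi2 - phi1) * (lam2 - lam1)
    R, PH, LA = np.meshgrid(r, ph, la, indexing="ij")
    W = w[:, None, None] * w[None, :, None] * w[None, None, :]
    cospsi = (np.sin(phi0) * np.sin(PH)
              + np.cos(phi0) * np.cos(PH) * np.cos(lam0 - LA))
    ell = np.sqrt(r0**2 + R**2 - 2.0 * r0 * R * cospsi)   # distance to source point
    return G_const * rho * jac * np.sum(W * R**2 * np.cos(PH) / ell)

print(glq_potential())    # gravitational potential of the tesseroid (m^2/s^2)
```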
Functionalization of Probe Tips and Supports for Single-Molecule Recognition Force Microscopy
NASA Astrophysics Data System (ADS)
Ebner, Andreas; Wildling, Linda; Zhu, Rong; Rankl, Christian; Haselgrübler, Thomas; Hinterdorfer, Peter; Gruber, Hermann J.
The measuring tip of a force microscope can be converted into a monomolecular sensor if one or few "ligand" molecules are attached to the apex of the tip while maintaining ligand function. Functionalized tips are used to study fine details of receptor-ligand interaction by force spectroscopy or to map cognate "receptor" molecules on the sample surface. The receptor (or target) molecules can be present on the surface of a biological specimen; alternatively, soluble target molecules must be immobilized on ultraflat supports. This review describes the methods of tip functionalization, as well as target molecule immobilization. Silicon nitride tips, silicon chips, and mica have usually been functionalized in three steps: (1) aminofunctionalization, (2) crosslinker attachment, and (3) ligand/receptor coupling, whereby numerous crosslinkers are available to couple widely different ligand molecules. Gold-covered tips and/or supports have usually been coated with a self-assembled monolayer, on top of which the ligand/receptor molecule has been coupled either directly or via a crosslinker molecule. Apart from these general strategies, many simplified methods have been used for tip and/or support functionalization, even single-step methods such as adsorption or chemisorption being very efficient under suitable circumstances. All methods are described with the same explicitness and critical parameters are discussed. In conclusion, this review should help to find suitable methods for specific problems of tip and support functionalization.
Joint multifractal analysis based on wavelet leaders
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Qiang; Yang, Yan-Hong; Wang, Gang-Jin; Zhou, Wei-Xing
2017-12-01
Mutually interacting components form complex systems and these components usually have long-range cross-correlated outputs. Using wavelet leaders, we propose a method for characterizing the joint multifractal nature of these long-range cross correlations; we call this method joint multifractal analysis based on wavelet leaders (MF-X-WL). We test the validity of the MF-X-WL method by performing extensive numerical experiments on dual binomial measures with multifractal cross correlations and bivariate fractional Brownian motions (bFBMs) with monofractal cross correlations. Both experiments indicate that MF-X-WL is capable of detecting cross correlations in synthetic data with acceptable estimating errors. We also apply the MF-X-WL method to pairs of series from financial markets (returns and volatilities) and online worlds (online numbers of different genders and different societies) and determine intriguing joint multifractal behavior.
NASA Astrophysics Data System (ADS)
García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin
2014-10-01
Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has a second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allows increasing the interpolation quality by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes, to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefied zones of fluids while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement with a low computational overhead. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.
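A hedged sketch of the kernel family the scheme is built on (the normalization is computed numerically here; the paper gives exact coefficients): raising the sinc profile to a real exponent n sharpens the kernel, the single parameter used to tune interpolation quality.

```python
import numpy as np

def sinc_kernel(q, n, h=1.0):
    """3D sinc-family SPH kernel, W(q) proportional to sinc(pi q / 2)**n on the
    support q = r/h in [0, 2]; the normalization 4 pi * integral of W q^2 dq = 1
    over the support is enforced numerically below."""
    q = np.asarray(q, dtype=float)
    w = np.where(q < 2.0, np.sinc(q / 2.0) ** n, 0.0)   # np.sinc(x) = sin(pi x)/(pi x)
    qq = np.linspace(0.0, 2.0, 4001)
    norm = 4.0 * np.pi * np.sum(np.sinc(qq / 2.0) ** n * qq**2) * (qq[1] - qq[0])
    return w / (norm * h**3)

q = np.linspace(0.0, 2.0, 5)
for n in (3, 5, 7):   # a larger exponent gives a sharper, higher-quality kernel
    print(n, np.round(sinc_kernel(q, n), 4))
```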
Wiechert, W; de Graaf, A A
1997-07-05
The extension of metabolite balancing with carbon labeling experiments, as described by Marx et al. (Biotechnol. Bioeng. 49: 11-29), results in a much more detailed stationary metabolic flux analysis. As opposed to basic metabolite flux balancing alone, this method enables both flux directions of bidirectional reaction steps to be quantitated. However, the mathematical treatment of carbon labeling systems is much more complicated, because it requires the solution of numerous balance equations that are bilinear with respect to fluxes and fractional labeling. In this study, a universal modeling framework is presented for describing the metabolite and carbon atom flux in a metabolic network. Bidirectional reaction steps are extensively treated and their impact on the system's labeling state is investigated. Various kinds of modeling assumptions, as are usually made for metabolic fluxes, are expressed by linear constraint equations. A numerical algorithm for the solution of the resulting linearly constrained set of nonlinear equations is developed. The numerical stability problems caused by large bidirectional fluxes are solved by a specially developed transformation method. Finally, the simulation of carbon labeling experiments is facilitated by a flexible software tool for network synthesis. An illustrative simulation study on flux identifiability from available flux and labeling measurements in the cyclic pentose phosphate pathway of a recombinant strain of Zymomonas mobilis concludes this contribution.
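To see why these balances are bilinear, consider a hypothetical mini-network (not from the paper): a pool B fed by a bidirectional step A ⇌ B plus an unlabeled side influx. The product of flux and labeling fraction enters the labeling balance, and the exchange flux becomes identifiable from a labeling measurement:

```python
import numpy as np
from scipy.optimize import fsolve

# Assumed measured quantities (illustrative values)
v_u, v_out = 0.5, 1.5          # unlabeled side influx and efflux of pool B
x_A, x_B_meas = 1.0, 0.8       # fractional labeling of A and measured labeling of B

def residuals(p):
    v_f, v_b = p
    return [v_f + v_u - v_b - v_out,                   # metabolite balance (linear)
            v_f * x_A - (v_b + v_out) * x_B_meas]      # labeling balance (bilinear)

v_f, v_b = fsolve(residuals, x0=[2.0, 0.5])
print(v_f, v_b)    # -> 2.0, 1.0: both flux directions recovered from labeling data
```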
NASA Astrophysics Data System (ADS)
Mourid, Amina; El Alami, Mustapha
2018-05-01
In this paper, we present a comparative thermal study of the usual insulation materials used in buildings as well as of an innovative one, phase change materials (PCMs). Both an experimental study and a numerical approach were applied in this work for the summer season. In the experimental study, the PCM was installed on the outer surface of the ceiling of one of two full-scale rooms located at FSAC, Casablanca. A simulation model was developed with the TRNSYS 17 software. The internal temperatures were established as the criterion of comparison. An economic study has also been carried out. Based on the latter, the PCM proves to be the most efficient option.
Parameter Estimation of Partial Differential Equation Models.
Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab
2013-01-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
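As a schematic of the computational bottleneck the authors address (repeatedly solving the PDE for each candidate parameter value), here is a toy sketch that fits a diffusion coefficient by nested numerical solves; the equation, grid sizes, and optimizer are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def solve_heat(D, nx=41, nt=800, T=0.1):
    """Explicit FTCS solve of u_t = D*u_xx on [0,1] with u = 0 at both ends."""
    dx, dt = 1.0 / (nx - 1), T / nt
    u = np.sin(np.pi * np.linspace(0.0, 1.0, nx))   # initial condition
    for _ in range(nt):
        u[1:-1] += D * dt / dx ** 2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

rng = np.random.default_rng(0)
data = solve_heat(0.5) + 0.01 * rng.standard_normal(41)   # synthetic noisy data

# Every trial value of D requires a full PDE solve -- the cost the paper avoids
loss = lambda D: np.sum((solve_heat(D) - data) ** 2)
res = minimize_scalar(loss, bounds=(0.1, 2.0), method="bounded")
print(res.x)   # close to the true value 0.5
```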
van den Bergh, F
2018-03-01
The slanted-edge method of spatial frequency response (SFR) measurement is usually applied to grayscale images under the assumption that any distortion of the expected straight edge is negligible. By decoupling the edge orientation and position estimation step from the edge spread function construction step, it is shown in this paper that the slanted-edge method can be extended so that it can be applied to images suffering from significant geometric distortion, such as that produced by equiangular fisheye lenses. This same decoupling also allows the slanted-edge method to be applied directly to Bayer-mosaicked images so that the SFR of the color filter array subsets can be measured directly, without the unwanted influence of demosaicking artifacts. Numerical simulation results are presented to demonstrate the efficacy of the proposed deferred slanted-edge method in relation to existing methods.
NASA Astrophysics Data System (ADS)
Woldegiorgis, Befekadu Taddesse; van Griensven, Ann; Pereira, Fernando; Bauwens, Willy
2017-06-01
Most common numerical solutions used in CSTR-based in-stream water quality simulators are susceptible to instabilities and/or solution inconsistencies. Usually, they cope with instability problems by adopting computationally expensive small time steps. However, some simulators use fixed computation time steps and hence do not have the flexibility to do so. This paper presents a novel quasi-analytical solution for CSTR-based water quality simulators of an unsteady system. The robustness of the new method is compared with the commonly used fourth-order Runge-Kutta methods, the Euler method and three versions of the SWAT model (SWAT2012, SWAT-TCEQ, and ESWAT). The performance of each method is tested for different hypothetical experiments. Besides the hypothetical data, a real case study is used for comparison. The growth factors we derived as stability measures for the different methods and the R-factor—considered as a consistency measure—turned out to be very useful for determining the most robust method. The new method outperformed all the numerical methods used in the hypothetical comparisons. The application for the Zenne River (Belgium) shows that the new method provides stable and consistent BOD simulations whereas the SWAT2012 model is shown to be unstable for the standard daily computation time step. The new method unconditionally simulates robust solutions. Therefore, it is a reliable scheme for CSTR-based water quality simulators that use first-order reaction formulations.
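To illustrate the instability issue with fixed time steps (a generic demonstration, not the paper's scheme or data), consider a single CSTR with first-order decay: explicit Euler diverges once the step exceeds its stability limit, while the exact analytical solution remains available for comparison:

```python
import numpy as np

# Single CSTR with first-order decay: dC/dt = (Q/V)*(Cin - C) - k*C
Q_over_V, k, Cin, C0 = 2.0, 10.0, 5.0, 0.0
a = Q_over_V + k                       # total first-order rate
C_inf = Q_over_V * Cin / a             # steady-state concentration

def exact(t):                          # the analytical solution
    return C_inf + (C0 - C_inf) * np.exp(-a * t)

def euler(dt, nsteps):                 # explicit Euler with a fixed step
    C = C0
    for _ in range(nsteps):
        C += dt * (Q_over_V * (Cin - C) - k * C)
    return C

for dt in (0.01, 0.1, 0.2):            # stability requires dt < 2/a ~ 0.167
    print(dt, euler(dt, int(1.0 / dt)), exact(1.0))
```

With dt = 0.2 the Euler result diverges from the exact value, which is the kind of failure a fixed-time-step simulator cannot sidestep by refining the step.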
Benzi, Roberto; Ching, Emily S C; De Angelis, Elisabetta; Procaccia, Itamar
2008-04-01
Numerical simulations of turbulent channel flows, with or without additives, are limited in the extent of the Reynolds number (Re) and Deborah number (De). The comparison of such simulations to theories of drag reduction, which are usually derived for asymptotically high Re and De, calls for some care. In this paper we present a study of drag reduction by rodlike polymers in a turbulent channel flow using direct numerical simulation and illustrate how these numerical results should be related to the recently developed theory.
Quantification of human responses
NASA Technical Reports Server (NTRS)
Steinlage, R. C.; Gantner, T. E.; Lim, P. Y. W.
1992-01-01
Human perception is a complex phenomenon which is difficult to quantify with instruments. For this reason, large panels of people are often used to elicit and aggregate subjective judgments. Print quality, taste, smell, sound quality of a stereo system, softness, and the grading of Olympic divers and skaters are some examples of situations where subjective measurements or judgments are paramount. We usually express what is in our mind through language as a medium, but languages are limited in the available choices of vocabulary, and as a result, our verbalizations are only approximate expressions of what we really have in mind. For lack of better methods to quantify subjective judgments, it is customary to set up a numerical scale such as 1, 2, 3, 4, 5 or 1, 2, 3, ..., 9, 10 for characterizing human responses and subjective judgments, with no valid justification except that these scales are easy to understand and convenient to use. But these numerical scales are arbitrary simplifications of the complex human mind; the human mind is not restricted to such simple numerical variations. In fact, human responses and subjective judgments are psychophysical phenomena that are fuzzy entities and therefore difficult to handle by conventional mathematics and probability theory. The fuzzy mathematical approach provides a more realistic insight into understanding and quantifying human responses. This paper presents a method for quantifying human responses and subjective judgments without assuming a pattern of linear or numerical variation for human responses. In particular, the quantification and evaluation of linguistic judgments was investigated.
Robust and fast-converging level set method for side-scan sonar image segmentation
NASA Astrophysics Data System (ADS)
Liu, Yan; Li, Qingwu; Huo, Guanying
2017-11-01
A robust and fast-converging level set method is proposed for side-scan sonar (SSS) image segmentation. First, the noise in each sonar image is removed using the adaptive nonlinear complex diffusion filter. Second, k-means clustering is used to obtain the initial presegmentation image from the denoised image, and then the distance maps of the initial contours are reinitialized to guarantee the accuracy of the numerical calculation used in the level set evolution. Finally, the satisfactory segmentation is achieved using a robust variational level set model, where the evolution control parameters are generated by the presegmentation. The proposed method is successfully applied to both synthetic image with speckle noise and real SSS images. Experimental results show that the proposed method needs much less iteration and therefore is much faster than the fuzzy local information c-means clustering method, the level set method using a gamma observation model, and the enhanced region-scalable fitting method. Moreover, the proposed method can usually obtain more accurate segmentation results compared with other methods.
Topological Semimetals Studied by Ab Initio Calculations
NASA Astrophysics Data System (ADS)
Hirayama, Motoaki; Okugawa, Ryo; Murakami, Shuichi
2018-04-01
In topological semimetals such as Weyl, Dirac, and nodal-line semimetals, the band gap closes at points or along lines in k space which are not necessarily located at high-symmetry positions in the Brillouin zone. Therefore, it is not straightforward to find these topological semimetals by ab initio calculations because the band structure is usually calculated only along high-symmetry lines. In this paper, we review recent studies on topological semimetals by ab initio calculations. We explain theoretical frameworks which can be used for the search for topological semimetal materials, and some numerical methods used in the ab initio calculations.
Long-lived oscillons from asymmetric bubbles: Existence and stability
NASA Astrophysics Data System (ADS)
Adib, Artur B.; Gleiser, Marcelo; Almeida, Carlos A.
2002-10-01
The possibility that extremely long-lived, time-dependent, and localized field configurations (``oscillons'') arise during the collapse of asymmetrical bubbles in (2+1)-dimensional φ4 models is investigated. It is found that oscillons can develop from a large spectrum of elliptically deformed bubbles. Moreover, we provide numerical evidence that such oscillons are (a) circularly symmetric and (b) linearly stable against small arbitrary radial and angular perturbations. The latter is based on a dynamical approach designed to investigate the stability of nonintegrable time-dependent configurations that is capable of probing slowly growing instabilities not seen through the usual ``spectral'' method.
Ballistic performance of a Kevlar-29 woven fibre composite under varied temperatures
NASA Astrophysics Data System (ADS)
Soykasap, O.; Colakoglu, M.
2010-05-01
Armours are usually manufactured from polymer matrix composites and used for both military and non-military purposes in different seasons, climates, and regions. The mechanical properties of the composites depend on temperature, which also affects their ballistic characteristics. The armour is used to absorb the kinetic energy of a projectile without any major injury to the wearer. Therefore, besides high strength and lightness, a high damping capacity is required to absorb the impact energy transferred by the projectile. The ballistic properties of a Kevlar 29/polyvinyl butyral composite are investigated under varied temperatures in this study. The elastic modulus of the composite is determined from the natural frequency of composite specimens at different temperatures by using a damping monitoring method. Then, the backside deformation of composite plates is analysed experimentally and numerically employing the finite-element program Abaqus. The experimental and numerical results obtained are in good agreement.
Traffic Flow Density Distribution Based on FEM
NASA Astrophysics Data System (ADS)
Ma, Jing; Cui, Jianming
The analysis of normal traffic flow usually relies on static or dynamic models that are analyzed numerically on the basis of fluid mechanics. However, such an approach involves extensive modeling and data handling, and its accuracy is not high. The Finite Element Method (FEM) is a product of the combination of modern mathematics, mechanics, and computer technology, and it has been widely applied in various domains such as engineering. Based on the existing theory of traffic flow, ITS, and the development of FEM, a simulation theory of the FEM that solves the problems existing in traffic flow analysis is put forward. Based on this theory, and using existing Finite Element Analysis (FEA) software, the traffic flow is simulated and analyzed with fluid mechanics and dynamics. The massive data-processing problem of manual modeling and numerical analysis is thereby solved, and the authenticity of the simulation is enhanced.
NASA Astrophysics Data System (ADS)
Jiang, Q. F.; Zhuang, M.; Zhu, Z. G.; Zhang, Q. Y.; Sheng, L. H.
2017-12-01
Counter-flow plate-fin heat exchangers are commonly utilized in cryogenic applications due to their high effectiveness and compact size. For cryogenic heat exchangers in helium liquefaction/refrigeration systems, conventional design theory is no longer applicable, and they are usually sensitive to longitudinal heat conduction, heat in-leak from the surroundings, and variable fluid properties. Governing equations based on a distributed parameter method are developed to evaluate the performance deterioration caused by these effects. The numerical model can also be applied to many other recuperators with different structures and, hence, available experimental data are used to validate it. For the specific case of the multi-stream heat exchanger in the EAST helium refrigerator, the quantitative effects of these heat losses are further discussed, in comparison with design results obtained with common commercial software. The numerical model could be useful for evaluating and rating heat exchanger performance under actual cryogenic conditions.
The natural neighbor series manuals and source codes
NASA Astrophysics Data System (ADS)
Watson, Dave
1999-05-01
This software series is concerned with reconstruction of spatial functions by interpolating a set of discrete observations having two or three independent variables. There are three components in this series: (1) nngridr: an implementation of natural neighbor interpolation, 1994, (2) modemap: an implementation of natural neighbor interpolation on the sphere, 1998 and (3) orebody: an implementation of natural neighbor isosurface generation (publication incomplete). Interpolation is important to geologists because it can offer graphical insights into significant geological structure and behavior, which, although inherent in the data, may not be otherwise apparent. It also is the first step in numerical integration, which provides a primary avenue to detailed quantification of the observed spatial function. Interpolation is implemented by selecting a surface-generating rule that controls the form of a `bridge' built across the interstices between adjacent observations. The cataloging and classification of the many such rules that have been reported is a subject in itself ( Watson, 1992), and the merits of various approaches have been debated at length. However, for practical purposes, interpolation methods are usually judged on how satisfactorily they handle problematic data sets. Sparse scattered data or traverse data, especially if the functional values are highly variable, generally tests interpolation methods most severely; but one method, natural neighbor interpolation, usually does produce preferable results for such data.
Numerical integration techniques for curved-element discretizations of molecule-solvent interfaces.
Bardhan, Jaydeep P; Altman, Michael D; Willis, David J; Lippow, Shaun M; Tidor, Bruce; White, Jacob K
2007-07-07
Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, here methods were developed to model several important surface formulations using exact surface discretizations. Following and refining Zauhar's work [J. Comput.-Aided Mol. Des. 9, 149 (1995)], two classes of curved elements were defined that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. Numerical integration techniques are presented that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, a set of calculations are presented that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute-solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that the methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online as supplemental material.
NASA Astrophysics Data System (ADS)
Bervillier, C.; Boisseau, B.; Giacomini, H.
2008-02-01
The relation between the Wilson-Polchinski and the Litim optimized ERGEs in the local potential approximation is studied with high accuracy using two different analytical approaches based on a field expansion: a recently proposed genuine analytical approximation scheme to two-point boundary value problems of ordinary differential equations, and a new one based on approximating the solution by generalized hypergeometric functions. A comparison with the numerical results obtained with the shooting method is made. A similar accuracy is reached in each case. Both methods appear to be more efficient than the usual field expansions frequently used in current studies of ERGEs (in particular for the Wilson-Polchinski case, in the study of which they fail).
NASA Astrophysics Data System (ADS)
Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi
2018-06-01
Developed is a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial, and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a rational resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.
Dust Storm Monitoring Using Satellite Observatory and Numerical Modeling Analysis
NASA Astrophysics Data System (ADS)
Taghavi, Farahnaz
In recent years, the frequency of dust pollution events in southwestern Iran has increased, causing huge damage and imposing negative impacts on air quality, airport traffic, and people's daily lives in local areas. Dust storms in this area usually start with the formation of a low-pressure center over the Arabian Peninsula. The main objective of this study is to assess and monitor the movement of aerosols and pollution from the source of origin to local areas using satellite imagery and numerical modeling analysis. Observational analyses from NCEP, such as synoptic data (U-wind, V-wind, vorticity, and divergence fields), upper-air radiosonde data, measured visibility distributions, and land cover data, are also used in model comparisons to show differences in the occurrence of dust events. The evolution and dynamics of this phenomenon are studied on the basis of a method that modifies the initial state of NWP output using discrepancies between dynamic fields and WV imagery on a grid. Results show that satellite images offer a means to constrain the behavior of numerical models, and that using land cover data in the model improves wind-blown dust modeling.
Modeling of large amplitude plasma blobs in three-dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angus, Justin R.; Umansky, Maxim V.
2014-01-15
Fluctuations in fusion boundary and similar plasmas often have the form of filamentary structures, or blobs, that convectively propagate radially. This may lead to the degradation of plasma facing components as well as plasma confinement. Theoretical analysis of plasma blobs usually takes advantage of the so-called Boussinesq approximation of the potential vorticity equation, which greatly simplifies the treatment analytically and numerically. This approximation is only strictly justified when the blob density amplitude is small with respect to that of the background plasma. However, this is not the case for typical plasma blobs in the far scrape-off layer region, where the background density is small compared to that of the blob, and results obtained based on the Boussinesq approximation are questionable. In this report, the solution of the full vorticity equation, without the usual Boussinesq approximation, is proposed via a novel numerical approach. The method is used to solve for the evolution of 2D and 3D plasma blobs in a regime where the Boussinesq approximation is not valid. The Boussinesq solution underpredicts the cross field transport in 2D. However, in 3D, for parameters typical of current tokamaks, the disparity between the radial cross field transport from the Boussinesq approximation and the full solution is virtually non-existent due to the effects of the drift wave instability.
Computational ecology as an emerging science
Petrovskii, Sergei; Petrovskaya, Natalia
2012-01-01
It has long been recognized that numerical modelling and computer simulations can be used as a powerful research tool to understand, and sometimes to predict, the tendencies and peculiarities in the dynamics of populations and ecosystems. It has been, however, much less appreciated that the context of modelling and simulations in ecology is essentially different from those that normally exist in other natural sciences. In our paper, we review the computational challenges arising in modern ecology in the spirit of computational mathematics, i.e. with our main focus on the choice and use of adequate numerical methods. Somewhat paradoxically, the complexity of ecological problems does not always require the use of complex computational methods. This paradox, however, can be easily resolved if we recall that application of sophisticated computational methods usually requires clear and unambiguous mathematical problem statement as well as clearly defined benchmark information for model validation. At the same time, many ecological problems still do not have mathematically accurate and unambiguous description, and available field data are often very noisy, and hence it can be hard to understand how the results of computations should be interpreted from the ecological viewpoint. In this scientific context, computational ecology has to deal with a new paradigm: conventional issues of numerical modelling such as convergence and stability become less important than the qualitative analysis that can be provided with the help of computational techniques. We discuss this paradigm by considering computational challenges arising in several specific ecological applications. PMID:23565336
NASA Astrophysics Data System (ADS)
Liu, J. X.; Deng, S. C.; Liang, N. G.
2008-02-01
Concrete is heterogeneous and usually described as a three-phase material, where matrix, aggregate and interface are distinguished. To take this heterogeneity into consideration, the Generalized Beam (GB) lattice model is adopted. The GB lattice model is much more computationally efficient than the beam lattice model. Numerical procedures of both quasi-static method and dynamic method are developed to simulate fracture processes in uniaxial tensile tests conducted on a concrete panel. Cases of different loading rates are compared with the quasi-static case. It is found that the inertia effect due to load increasing becomes less important and can be ignored with the loading rate decreasing, but the inertia effect due to unstable crack propagation remains considerable no matter how low the loading rate is. Therefore, an unrealistic result will be obtained if a fracture process including unstable cracking is simulated by the quasi-static procedure.
NASA Technical Reports Server (NTRS)
Jin, Jian-Ming; Volakis, John L.; Collins, Jeffery D.
1991-01-01
A review of a hybrid finite element-boundary integral formulation for scattering and radiation by two- and three-dimensional composite structures is presented. In contrast to other hybrid techniques involving the finite element method, the proposed one is in principle exact and can be implemented using a low O(N) storage. This is of particular importance for large scale applications and is a characteristic of the boundary chosen to terminate the finite element mesh, usually as close to the structure as possible. A certain class of these boundaries lead to convolutional boundary integrals which can be evaluated via the fast Fourier transform (FFT) without a need to generate a matrix; thus, retaining the O(N) storage requirement. The paper begins with a general description of the method. A number of two- and three-dimensional applications are then given, including numerical computations which demonstrate the method's accuracy, efficiency, and capability.
Analytic uncertainty and sensitivity analysis of models with input correlations
NASA Astrophysics Data System (ADS)
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
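For the simplest linear case, the effect of input correlation on the response variance can be written down in closed form and checked by Monte Carlo; the coefficients below are illustrative numbers, not values from the paper:

```python
import numpy as np

# Y = a1*X1 + a2*X2 with correlated Gaussian inputs
a1, a2, s1, s2, rho = 2.0, -1.0, 1.0, 0.5, 0.6
var_indep = a1**2 * s1**2 + a2**2 * s2**2              # independence assumed
var_corr = var_indep + 2.0 * a1 * a2 * rho * s1 * s2   # correlation included
print(var_indep, var_corr)                             # 4.25 vs 3.05

# Monte Carlo check of the analytic result
rng = np.random.default_rng(1)
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
x = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
print(np.var(a1 * x[:, 0] + a2 * x[:, 1]))             # approximately 3.05
```

Ignoring the correlation here overstates the response variance by about 40%, which is the kind of discrepancy that motivates including input correlations in the analysis.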
NASA Astrophysics Data System (ADS)
Liu, Ke; Wang, Jiannian; Wang, Hai; Li, Yanqiu
2018-07-01
For the multi-lateral shearing interferometers (multi-LSIs), the measurement accuracy can be enhanced by estimating the wavefront under test with the multidirectional phase information encoded in the shearing interferogram. Usually the multi-LSIs reconstruct the test wavefront from the phase derivatives in multiple directions using the discrete Fourier transforms (DFT) method, which is only suitable to small shear ratios and relatively sensitive to noise. To improve the accuracy of multi-LSIs, wavefront reconstruction from the multidirectional phase differences using the difference Zernike polynomials fitting (DZPF) method is proposed in this paper. For the DZPF method applied in the quadriwave LSI, difference Zernike polynomials in only two orthogonal shear directions are required to represent the phase differences in multiple shear directions. In this way, the test wavefront can be reconstructed from the phase differences in multiple shear directions using a noise-variance weighted least-squares method with almost no extra computational burden, compared with the usual recovery from the phase differences in two orthogonal directions. Numerical simulation results show that the DZPF method can maintain high reconstruction accuracy in a wider range of shear ratios and has much better anti-noise performance than the DFT method. A null test experiment of the quadriwave LSI has been conducted and the experimental results show that the measurement accuracy of the quadriwave LSI can be improved from 0.0054 λ rms to 0.0029 λ rms (λ = 632.8 nm) by substituting the DFT method with the proposed DZPF method in the wavefront reconstruction process.
Prediction of the acoustic pressure above periodically uneven facings in industrial workplaces
NASA Astrophysics Data System (ADS)
Ducourneau, J.; Bos, L.; Planeau, V.; Faiz, Adil; Skali Lami, Salah; Nejade, A.
2010-05-01
The aim of this work is to predict the sound pressure in front of wall facings based on periodic sound-scattering surface profiles. The method involves investigating plane wave reflections randomly incident upon an uneven surface. The waveguide approach is well suited to the geometries usually encountered in industrial workplaces. This method simplifies the profile geometry by using elementary rectangular volumes. The acoustic field in the profile interstices can then be expressed as the superposition of waveguide modes. In past work, the walls considered were of infinite dimensions and had a periodic surface profile in only one direction. We therefore generalise this approach by extending its applicability to "double-periodic" wall facings. Free-field measurements have been taken, and the observed agreement between numerical and experimental results supports the validity of the waveguide method.
Simultaneous ocular and muscle artifact removal from EEG data by exploiting diverse statistics.
Chen, Xun; Liu, Aiping; Chen, Qiang; Liu, Yu; Zou, Liang; McKeown, Martin J
2017-09-01
Electroencephalography (EEG) recordings are frequently contaminated by both ocular and muscle artifacts. These are normally dealt with separately, by employing blind source separation (BSS) techniques relying on either second-order or higher-order statistics (SOS and HOS, respectively). When HOS-based methods are used, it is usually in the setting of assuming artifacts are statistically independent of the EEG. When SOS-based methods are used, it is assumed that artifacts have autocorrelation characteristics distinct from the EEG. In reality, ocular and muscle artifacts do not completely follow the assumptions of strict temporal independence from the EEG nor completely unique autocorrelation characteristics, suggesting that exploiting HOS or SOS alone may be insufficient to remove these artifacts. Here we employ a novel BSS technique, independent vector analysis (IVA), to exploit HOS and SOS simultaneously to remove ocular and muscle artifacts. Numerical simulations and application to real EEG recordings were used to explore the utility of the IVA approach. IVA was superior in isolating both ocular and muscle artifacts, especially for raw EEG data with a low signal-to-noise ratio, and also integrated the usually separate SOS and HOS steps into a single unified step.
Bearup, Daniel; Petrovskaya, Natalia; Petrovskii, Sergei
2015-05-01
Monitoring of pest insects is an important part of integrated pest management. It aims to provide information about pest insect abundance at a given location. This includes data collection, usually using traps, and their subsequent analysis and/or interpretation. However, interpretation of trap counts (the number of insects caught over a fixed time) remains a challenging problem. First, an increase in either the population density or insect activity can result in a similar increase in the number of insects trapped (the so-called "activity-density" problem). Second, a genuine increase of the local population density can be attributed to qualitatively different ecological mechanisms such as multiplication or immigration. Identification of the true factor causing an increase in trap count is important, as different mechanisms require different control strategies. In this paper, we consider a mean-field mathematical model of insect trapping based on the diffusion equation. Although the diffusion equation is a well-studied model, its analytical solution in closed form is actually available only for a few special cases, whilst in the more general case the problem has to be solved numerically. We choose finite differences as the baseline numerical method and show that numerical solution of the problem, especially in the realistic 2D case, is not at all straightforward as it requires a sufficiently accurate approximation of the diffusion fluxes. Once the numerical method is justified and tested, we apply it to the corresponding boundary problem where different types of boundary forcing describe different scenarios of pest insect immigration and reveal the corresponding patterns in the trap count growth.
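A one-dimensional caricature of this setup (not the paper's 2D model; geometry, parameters, and the first-order flux formula are assumptions) makes the roles of the diffusion solve and the flux approximation concrete:

```python
import numpy as np

# Insect density u diffuses towards an absorbing trap at x = 0;
# the trap count is the time-integrated diffusive flux into the trap.
D, L, nx, dt, T = 1.0, 10.0, 201, 1e-3, 2.0
dx = L / (nx - 1)
u = np.ones(nx)                       # initially uniform population
u[0] = 0.0                            # absorbing boundary (the trap)
count = 0.0
for _ in range(int(T / dt)):
    u[1:-1] += D * dt / dx ** 2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[0], u[-1] = 0.0, u[-2]          # trap / zero-flux far boundary
    count += dt * D * (u[1] - u[0]) / dx   # first-order flux approximation
print(count)
```

The one-sided difference used for the flux is only first-order accurate, which echoes the paper's point that the accuracy of the flux approximation, not the diffusion solve itself, limits the reliability of the predicted trap counts.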
Transport of Charged Particles in Turbulent Magnetic Fields
NASA Astrophysics Data System (ADS)
Parashar, T.; Subedi, P.; Sonsrettee, W.; Blasi, P.; Ruffolo, D. J.; Matthaeus, W. H.; Montgomery, D.; Chuychai, P.; Dmitruk, P.; Wan, M.; Chhiber, R.
2017-12-01
Magnetic fields permeate the Universe. They are found in planets, stars, galaxies, and the intergalactic medium. The magnetic fields found in these astrophysical systems are usually chaotic, disordered, and turbulent. The investigation of the transport of cosmic rays in magnetic turbulence is a subject of considerable interest. One of the important aspects of cosmic ray transport is to understand their diffusive behavior and to calculate the diffusion coefficient in the presence of these turbulent fields. Research has most frequently concentrated on determining the diffusion coefficient in the presence of a mean magnetic field. Here, we particularly focus on calculating diffusion coefficients of charged particles and magnetic field lines in a fully three-dimensional isotropic turbulent magnetic field with no mean field, which may be pertinent to many astrophysical situations. For charged particles in isotropic turbulence we identify different ranges of particle energy depending upon the ratio of the Larmor radius of the charged particle to the characteristic outer length scale of the turbulence. Different theoretical models are proposed to calculate the diffusion coefficient, each applicable to a distinct range of particle energies. The theoretical ideas are tested against results of detailed numerical experiments using Monte-Carlo simulations of particle propagation in stochastic magnetic fields. We also discuss two different methods of generating random magnetic fields to study charged particle propagation in numerical simulations. One method is the usual way of generating random fields with a specified power law in wavenumber space, using Gaussian random variables. Turbulence, however, is non-Gaussian, with variability that comes in bursts called intermittency. We therefore devise a way to generate synthetic intermittent fields which have many properties of realistic turbulence. Possible applications of such synthetically generated intermittent fields are discussed.
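The first (Gaussian) generation method can be sketched in a few lines; this is a generic 1D illustration of prescribing a power-law spectrum in wavenumber space, with the exponent and size as placeholder choices:

```python
import numpy as np

def gaussian_random_field(n=4096, slope=-5.0 / 3.0, seed=0):
    """1D random field with power-law spectrum E(k) ~ k**slope via FFT."""
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n) * n                 # integer wavenumbers 0..n/2
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (slope / 2.0)           # amplitude ~ sqrt(spectrum)
    phases = rng.uniform(0.0, 2.0 * np.pi, k.size)
    return np.fft.irfft(amp * np.exp(1j * phases), n)

b = gaussian_random_field()
```

Random phases with a deterministic amplitude produce an approximately Gaussian field; reproducing intermittency requires additional structure (for example, multiplicative amplitude modulation), which is precisely the gap the authors' synthetic intermittent fields are designed to fill.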
Faster modified protocol for first order reversal curve measurements
NASA Astrophysics Data System (ADS)
De Biasi, Emilio
2017-10-01
In this work we present a faster modified protocol for first-order reversal curve (FORC) measurements. The main idea of this procedure is to use the information of the ascending and descending branches constructed through successive sweeps of the magnetic field. The new method reduces the number of field sweeps to almost one half as compared to the traditional method. The length of each branch is reduced faster than in the usual FORC protocol. The new method implies not only a new measurement protocol but also a new recipe for the prior treatment of the data. After this pre-processing, the FORC diagram can be obtained by the conventional methods. In the present work we show that the new FORC procedure leads to results identical to the conventional method if the system under study follows the Stoner-Wohlfarth model with interactions that do not depend on the magnetic state (up or down) of the entities, as in the Preisach model; more specifically, if the coercive and interaction fields are not correlated and the hysteresis loops have a square shape. Some numerical examples show the comparison between the usual FORC procedure and the proposed one. We also discuss that it is possible to find some differences in the case of real systems, due to the magnetic interactions. There is no reason to prefer one FORC method over the other from the point of view of the information to be obtained. On the contrary, the use of both methods could open doors for a more accurate and deeper analysis.
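The conventional post-processing step mentioned above is the mixed second derivative of the magnetization surface; a minimal sketch (assuming magnetization data already gridded over reversal and applied fields, without the smoothing usually applied in practice) is:

```python
import numpy as np

def forc_distribution(M, Ha, Hb):
    """Conventional FORC diagram: rho = -0.5 * d2M/(dHa dHb).
    M[i, j] is the magnetization at reversal field Ha[i] and applied field Hb[j]."""
    dM_dHb = np.gradient(M, Hb, axis=1)       # derivative along the applied field
    return -0.5 * np.gradient(dM_dHb, Ha, axis=0)  # then along the reversal field
```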
A semi-analytical model of a time reversal cavity for high-amplitude focused ultrasound applications
NASA Astrophysics Data System (ADS)
Robin, J.; Tanter, M.; Pernot, M.
2017-09-01
Time reversal cavities (TRC) have been proposed as an efficient approach for 3D ultrasound therapy. They allow the precise spatio-temporal focusing of high-power ultrasound pulses within a large region of interest with a low number of transducers. Leaky TRCs are usually built by placing a multiple scattering medium, such as a random rod forest, in a reverberating cavity, and the final peak pressure gain of the device only depends on the temporal length of its impulse response. Such multiple scattering in a reverberating cavity is a complex phenomenon, and optimisation of the device's gain is usually a cumbersome, mostly empirical process requiring numerical simulations with extremely long computation times. In this paper, we present a semi-analytical model for the fast optimisation of a TRC. This model decouples ultrasound propagation in an empty cavity from multiple scattering in a multiple scattering medium. It was validated numerically and experimentally using a 2D-TRC, and numerically using a 3D-TRC. Finally, the model was used to rapidly determine the optimal parameters of the 3D-TRC, which were confirmed by numerical simulations.
Shape functions for velocity interpolation in general hexahedral cells
Naff, R.L.; Russell, T.F.; Wilson, J.D.
2002-01-01
Numerical methods for grids with irregular cells require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element (CVMFE) methods, vector shape functions approximate velocities and vector test functions enforce a discrete form of Darcy's law. In this paper, a new vector shape function is developed for use with irregular, hexahedral cells (trilinear images of cubes). It interpolates velocities and fluxes quadratically, because as shown here, the usual Piola-transformed shape functions, which interpolate linearly, cannot match uniform flow on general hexahedral cells. Truncation-error estimates for the shape function are demonstrated. CVMFE simulations of uniform and non-uniform flow with irregular meshes show first- and second-order convergence of fluxes in the L2 norm in the presence and absence of singularities, respectively.
Properties of wavelet discretization of Black-Scholes equation
NASA Astrophysics Data System (ADS)
Finěk, Václav
2017-07-01
Using wavelet methods, the continuous problem is transformed into a well-conditioned discrete problem. Once a non-symmetric problem is given, squaring yields a symmetric positive definite formulation; however, squaring usually makes the condition number of the discrete problem substantially worse. This note is concerned with a wavelet-based numerical solution of the Black-Scholes equation for pricing European options. We show here that in wavelet coordinates the symmetric part of the discretized equation dominates over the unsymmetric part in the standard economic environment with low interest rates. This provides some justification for using a fractional step method with implicit treatment of the symmetric part of the weak form of the Black-Scholes operator and with explicit treatment of its unsymmetric part. A well-conditioned discrete problem is then obtained.
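The penalty paid for squaring is easy to demonstrate numerically, since cond(AᵀA) scales like cond(A)² in the 2-norm; the random matrix below is a generic stand-in for a non-symmetric discrete operator:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))      # stand-in for a non-symmetric operator
print(np.linalg.cond(A))                 # kappa(A)
print(np.linalg.cond(A.T @ A))           # ~kappa(A)**2: squaring degrades conditioning
```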
More on approximations of Poisson probabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kao, C
1980-05-01
Calculation of Poisson probabilities frequently involves calculating high factorials, which becomes tedious and time-consuming with regular calculators. The usual way to overcome this difficulty has been to find approximations by making use of the table of the standard normal distribution. A new transformation proposed by Kao in 1978 appears to perform better for this purpose than traditional transformations. In the present paper several approximation methods are stated and compared numerically, including an approximation method that utilizes a modified version of Kao's transformation. An approximation based on a power transformation was found to outperform those based on the square-root type transformations proposed in the literature. The traditional Wilson-Hilferty approximation and Makabe-Morimura approximation are extremely poor compared with this approximation.
An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Erickson, Larry L.
1994-01-01
A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry-adaptive procedure is also incorporated.
Zheng, Nanfeng; Lu, Haiwei; Bu, Xianhui; Feng, Pingyun
2006-04-12
Chalcogenide II-VI nanoclusters are usually prepared as isolated clusters and have defied numerous efforts to join them into covalent open-framework architecture with conventional templating methods such as protonated amines or inorganic cations commonly used to direct the formation of porous frameworks. Herein, we report the first templated synthesis of II-VI covalent superlattices from large II-VI tetrahedral clusters (i.e., [Cd32S14(SPh)38]2-). Our method takes advantage of low charge density of metal-chelate dyes that is a unique match with three-dimensional II-VI semiconductor frameworks in charge density, surface hydrophilicity-hydrophobicity, and spatial organization. In addition, metal-chelate dyes also serve to tune the optical properties of resulting dye semiconductor composite materials.
Pallen, M.
1995-01-01
Electronic mail (email) has many advantages over other forms of communication: it is easy to use, free of charge, fast, and delivers information in a digital format. As a text only medium, email is usually less formal in style than conventional correspondence and may contain acronyms and other features, such as smileys, that are peculiar to the Internet. Email client programs that run on your own microcomputer render email powerful and easy to use. With suitable encoding methods, email can be used to send any kind of computer file, including pictures, sounds, programs, and movies. Numerous biomedical electronic mailing lists and other Internet services are accessible by email. PMID:8520343
Numerical solution of 3D Navier-Stokes equations with upwind implicit schemes
NASA Technical Reports Server (NTRS)
Marx, Yves P.
1990-01-01
An upwind MUSCL-type implicit scheme for the three-dimensional Navier-Stokes equations is presented. Comparisons between different approximate Riemann solvers (Roe and Osher) are performed, and the influence of the reconstruction schemes on the accuracy of the solution as well as on the convergence of the method is studied. A new limiter is introduced in order to remove the problems usually associated with non-linear upwind schemes. The implementation of a diagonal upwind implicit operator for the three-dimensional Navier-Stokes equations is also discussed. Finally, the turbulence modeling is assessed. Good predictions of separated flows are demonstrated if a non-equilibrium turbulence model is used.
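To make the MUSCL-plus-limiter idea concrete, here is a sketch for scalar linear advection using the standard minmod limiter (a textbook choice for illustration; the paper's new limiter is not specified here, and the periodic 1D setting is an assumption):

```python
import numpy as np

def minmod(a, b):
    """Standard minmod limiter (illustrative; not the paper's new limiter)."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_step(u, c, dt, dx):
    """One MUSCL step for periodic linear advection u_t + c*u_x = 0, c > 0."""
    du = minmod(np.diff(u, prepend=u[-1]), np.diff(u, append=u[0]))  # limited slopes
    uL = u + 0.5 * du                 # reconstructed state at the right face of cell i
    flux = c * uL                     # upwind flux for c > 0
    return u - dt / dx * (flux - np.roll(flux, 1))
```

The limiter reduces the reconstruction to first order near extrema and discontinuities, which is what suppresses the oscillations otherwise typical of non-linear upwind schemes.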
New data processing for multichannel FIR laser interferometer
NASA Astrophysics Data System (ADS)
Jun-Ben, Chen; Xiang, Gao
1989-10-01
Usually, both the probing and reference signals received by the LATGS detectors of a FIR interferometer pass through a hardware phase discriminator, and the output phase difference (and hence the electron line densities) is collected for analysis and display with a computerized data acquisition system (DAS). In this paper, a new numerical method for computing the phase difference in software has been developed in place of the hardware phase discriminator, and the temporal resolution and stability are improved. An asymmetrical Abel inversion is applied to processing the data from a seven-channel FIR HCN laser interferometer, and the space-time distributions of the plasma electron density in the HT-6M tokamak are derived.
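One common way to compute such a phase difference in software is via the analytic signal; the sketch below (a generic illustration with hypothetical sampling and beat frequencies, not the paper's algorithm) recovers an imposed phase modulation between two channels:

```python
import numpy as np
from scipy.signal import hilbert

def phase_difference(probe, ref):
    """Software phase comparison: instantaneous phase of each channel via the
    analytic signal, then an unwrapped difference (proportional to line density)."""
    dphi = np.angle(hilbert(probe)) - np.angle(hilbert(ref))
    return np.unwrap(dphi)

fs, f0 = 1.0e6, 1.0e5                  # hypothetical sampling and beat frequencies
t = np.arange(0.0, 1e-3, 1.0 / fs)
ref = np.cos(2.0 * np.pi * f0 * t)
probe = np.cos(2.0 * np.pi * f0 * t - 0.8 * np.sin(2.0 * np.pi * 1e3 * t))
dphi = phase_difference(probe, ref)    # recovers the imposed phase modulation
```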
[Ice application for reducing pain associated with goserelin acetate injection].
Ishii, Kaname; Nagata, Chika; Koshizaki, Eiko; Nishiuchi, Satoko
2013-10-01
We investigated the effectiveness of using an ice pack for reducing the pain associated with goserelin acetate injection. In this study, 39 patients with prostate cancer and 1 patient with breast cancer receiving hormonal therapy with goserelin acetate were enrolled. All patients completed a questionnaire regarding the use of ice application. We used the numerical rating scale (NRS) to assess the pain associated with injection. The NRS scores indicated that the pain was significantly less with ice application than with the usual method (p < 0.001). Further, ice application could decrease the duration of pain sensation. Ice application at the injection site is safe and effective for reducing pain.
Revised Chapman-Enskog analysis for a class of forcing schemes in the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Li, Q.; Zhou, P.; Yan, H. J.
2016-10-01
In the lattice Boltzmann (LB) method, the forcing scheme, which is used to incorporate an external or internal force into the LB equation, plays an important role. It determines whether the force of the system is correctly implemented in an LB model and affects the numerical accuracy. In this paper we aim to clarify a critical issue about the Chapman-Enskog analysis for a class of forcing schemes in the LB method in which the velocity in the equilibrium density distribution function is given by u = Σ_α e_α f_α / ρ, while the actual fluid velocity is defined as û = u + δt F / (2ρ). It is shown that the usual Chapman-Enskog analysis for this class of forcing schemes should be revised so as to derive the actual macroscopic equations recovered from these forcing schemes. Three forcing schemes belonging to the above class are analyzed, among which Wagner's forcing scheme [A. J. Wagner, Phys. Rev. E 74, 056703 (2006), 10.1103/PhysRevE.74.056703] is shown to be capable of reproducing the correct macroscopic equations. The theoretical analyses are examined and demonstrated with two numerical tests, including the simulation of Womersley flow and the modeling of flat and circular interfaces by the pseudopotential multiphase LB model.
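The two velocity definitions in question are simple moments of the distributions; a D2Q9 sketch (standard lattice vectors and weights, with a single-node layout chosen for illustration) makes the distinction explicit:

```python
import numpy as np

# Standard D2Q9 lattice vectors and weights
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def macroscopic(f, F, dt=1.0):
    """Moments at one node: f has shape (9,), F is the body force (2,)."""
    rho = f.sum()
    u = (f[:, None] * e).sum(axis=0) / rho   # velocity entering f_eq: sum_a e_a f_a / rho
    u_hat = u + 0.5 * dt * F / rho           # actual fluid velocity: u + dt*F/(2*rho)
    return rho, u, u_hat
```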
NASA Astrophysics Data System (ADS)
Sakaizawa, Ryosuke; Kawai, Takaya; Sato, Toru; Oyama, Hiroyuki; Tsumune, Daisuke; Tsubono, Takaki; Goto, Koichi
2018-03-01
The target seas of tidal-current models are usually semi-closed bays, minimally affected by ocean currents. For these models, tidal currents are simulated in computational domains with a spatial scale of a couple hundred kilometers or less, by setting tidal elevations at their open boundaries. However, when ocean currents cannot be ignored in the sea areas of interest, such as in open seas near coastlines, it is necessary to include ocean-current effects in these tidal-current models. In this study, we developed a numerical method to analyze tidal currents near coasts by incorporating pre-calculated ocean-current velocities. First, a large regional-scale simulation with a spatial scale of several thousand kilometers was conducted and temporal changes in the ocean-current velocity at each grid point were stored. Next, the spatially and temporally interpolated ocean-current velocity was incorporated as forcing into the cross terms of the convection term of a tidal-current model having computational domains with spatial scales of hundreds of kilometers or less. Then, we applied this method to the diffusion of dissolved CO2 in a sea area off Tomakomai, Japan, and compared the numerical results and measurements to validate the proposed method.
Steel Fibre Reinforced Concrete Simulation with the SPH Method
NASA Astrophysics Data System (ADS)
Hušek, Martin; Kala, Jiří; Král, Petr; Hokeš, Filip
2017-10-01
Steel fibre reinforced concrete (SFRC) is very popular in many branches of civil engineering. Thanks to its increased ductility, it is able to resist various types of loading. When designing a structure, the mechanical behaviour of SFRC can be described by currently available material models (with equivalent material for example) and therefore no problems arise with numerical simulations. But in many scenarios, e.g. high speed loading, it would be a mistake to use such an equivalent material. Physical modelling of the steel fibres used in concrete is usually problematic, though. It is necessary to consider the fact that mesh-based methods are very unsuitable for high-speed simulations with regard to the issues that occur due to the effect of excessive mesh deformation. So-called meshfree methods are much more suitable for this purpose. The Smoothed Particle Hydrodynamics (SPH) method is currently the best choice, thanks to its advantages. However, a numerical defect known as tensile instability may appear when the SPH method is used. It causes the development of numerical (false) cracks, making simulations of ductile types of failure significantly more difficult to perform. The contribution therefore deals with the description of a procedure for avoiding this defect and successfully simulating the behaviour of SFRC with the SPH method. The essence of the problem lies in the choice of coordinates and the description of the integration domain derived from them - spatial (Eulerian kernel) or material coordinates (Lagrangian kernel). The contribution describes the behaviour of both formulations. Conclusions are drawn from the fundamental tasks, and the contribution additionally demonstrates the functionality of SFRC simulations. The random generation of steel fibres and their inclusion in simulations are also discussed. The functionality of the method is supported by the results of pressure test simulations which compare various levels of fibre reinforcement of SFRC specimens.
A Coupled model for ERT monitoring of contaminated sites
NASA Astrophysics Data System (ADS)
Wang, Yuling; Zhang, Bo; Gong, Shulan; Xu, Ya
2018-02-01
The performance of electrical resistivity tomography (ERT) systems is usually investigated using a fixed resistivity distribution model in numerical simulation studies. In this paper, a method to construct a time-varying resistivity model by coupling water transport, solute transport, and the constant current field is proposed for ERT monitoring of contaminated sites. Using the proposed method, a monitoring model is constructed for a contaminated site with a pollution region on the surface, and ERT monitoring results at different times are calculated by the finite element method. The results show that ERT monitoring profiles can effectively reflect the increase of the pollution area caused by the diffusion of pollutants, but the inferred extent of the pollution is not exactly the same as the actual situation. The model can be extended to any other case and can be used for scheme design and result analysis in ERT monitoring.
A new computational method for reacting hypersonic flows
NASA Astrophysics Data System (ADS)
Niculescu, M. L.; Cojocaru, M. G.; Pricop, M. V.; Fadgyas, M. C.; Pepelea, D.; Stoican, M. G.
2017-07-01
Hypersonic gas dynamics computations are challenging due to the difficulty of having reliable and robust chemistry models coupled to the Navier-Stokes equations. From the numerical point of view, it is very difficult to integrate the Navier-Stokes equations and the chemistry model equations together, because these partial differential equations have very different characteristic time scales. For this reason, almost all known finite volume methods quickly fail to solve this second-order partial differential system. Unfortunately, the heating of Earth reentry vehicles such as space shuttles and capsules is very closely linked to endothermic chemical reactions. A better prediction of the wall heat flux leads to a smaller safety coefficient for the thermal shield of a space reentry vehicle; therefore, the size of the thermal shield decreases and the payload increases. For these reasons, the present paper proposes a new computational method based on chemical equilibrium, which gives accurate predictions of hypersonic heating in order to support Earth reentry capsule design.
Wierl, Judy A.; Giddings, Elise M.P.; Bannerman, Roger T.
1998-01-01
Control of phosphorus from rural nonpoint sources is a major focus of current efforts to improve and protect water resources in Wisconsin and is recommended in almost every priority watershed plan prepared for the State's Nonpoint Source (NPS) Program. Barnyards and croplands usually are identified as the primary rural sources of phosphorus. Numerous questions have arisen about which of these two sources to control and about the method currently being used by the NPS program to compare phosphorus loads from barnyards and croplands. To evaluate the method, the U.S. Geological Survey (USGS), in cooperation with the Wisconsin Department of Natural Resources, used phosphorus-load and sediment-load data from streams and phosphorus concentrations in soils from the Otter Creek Watershed (located in the Sheboygan River Basin; fig. 1) in conjunction with two computer-based models.
Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo
2016-04-01
Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three-element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values, while keeping the arterial resistance constant. This last value was obtained for each subject using the arterial flow, and was a necessary consideration in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated the Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line time-domain estimation of these parameters.
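A minimal sketch of the estimation loop, assuming a forward-Euler integration of the three-element Windkessel (characteristic impedance Zc in series with a parallel R-C pair) and a plain random search; the fixed quantity is taken here to be Zc purely for illustration, and the parameter ranges and units are illustrative, not those of the study:

```python
import numpy as np

def wk3_pressure(q, dt, zc, r, c, p0=80.0):
    """Aortic pressure for a 3-element Windkessel (Zc in series with R || C),
    integrated with forward Euler: C dPc/dt = Q - Pc/R, P = Zc*Q + Pc."""
    pc = np.empty_like(q)
    pc[0] = p0
    for k in range(len(q) - 1):
        pc[k + 1] = pc[k] + dt * (q[k] - pc[k] / r) / c
    return zc * q + pc

def monte_carlo_fit(q, p_meas, dt, zc=0.05, n_draws=20000, seed=0):
    """Random search over (R, C) keeping Zc fixed; keeps the draw with the
    smallest mean squared pressure error."""
    rng = np.random.default_rng(seed)
    best, best_err = None, np.inf
    for _ in range(n_draws):
        r = rng.uniform(0.5, 3.0)    # peripheral resistance (mmHg.s/mL)
        c = rng.uniform(0.2, 3.0)    # compliance (mL/mmHg)
        err = np.mean((wk3_pressure(q, dt, zc, r, c) - p_meas)**2)
        if err < best_err:
            best, best_err = (r, c), err
    return best, best_err
```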
ULTRA-SHARP solution of the Smith-Hutton problem
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Mokhtari, Simin
1992-01-01
Highly convective scalar transport involving near-discontinuities and strong streamline curvature was addressed in a paper by Smith and Hutton in 1982, comparing several different convection schemes applied to a specially devised test problem. First order methods showed significant artificial diffusion, whereas higher order methods gave less smearing but had a tendency to overshoot and oscillate. Perhaps because unphysical oscillations are more obvious than unphysical smearing, the intervening period has seen a rise in popularity of low order artificially diffusive schemes, especially in the numerical heat transfer industry. The present paper describes an alternate strategy of using non-artificially diffusive high order methods, while maintaining strictly monotonic transitions through the use of simple flux limited constraints. Limited third order upwinding is usually found to be the most cost effective basic convection scheme. Tighter resolution of discontinuities can be obtained at little additional cost by using automatic adaptive stencil expansion to higher order in local regions, as needed.
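The flux-limited idea can be sketched in a few lines: a third-order upwind-biased (QUICK-type) face value is computed and then clamped to local monotonic bounds. The clamp below is a simplified stand-in for the full universal limiter, and the grid and scheme details are illustrative rather than those of the paper:

```python
import numpy as np

def face_value_limited(phi_u, phi_c, phi_d):
    """Third-order upwind-biased (QUICK) face value with a simple monotonic
    clamp standing in for the full universal limiter."""
    phi_f = 0.75 * phi_c + 0.375 * phi_d - 0.125 * phi_u
    lo, hi = min(phi_c, phi_d), max(phi_c, phi_d)
    return min(max(phi_f, lo), hi)

def advect_step(phi, cfl):
    """One explicit step of 1D advection (u > 0, periodic grid)."""
    n = len(phi)
    face = np.array([face_value_limited(phi[i - 1], phi[i], phi[(i + 1) % n])
                     for i in range(n)])      # value at face i+1/2
    return phi - cfl * (face - np.roll(face, 1))
```

The clamp keeps each face value between its neighbouring cell values, which enforces strictly monotonic transitions at the cost of some local reduction to first-order accuracy near extrema.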
NASA Astrophysics Data System (ADS)
Buyadzhi, V. V.; Glushkov, A. V.; Khetselius, O. Yu; Bunyakova, Yu Ya; Florko, T. A.; Agayar, E. V.; Solyanikova, E. P.
2017-10-01
The present paper concerns the results of a computational study of the dynamics of atmospheric pollutant (nitrogen dioxide, sulphurous anhydride, etc.) concentrations in the atmosphere of industrial cities (Odessa) by using dynamical systems and chaos theory methods. A chaotic behaviour in the nitrogen dioxide and sulphurous anhydride concentration time series at several sites of the Odessa city is numerically investigated. As usual, to reconstruct the corresponding attractor, the time delay and embedding dimension are needed. The former is determined by the methods of autocorrelation function and average mutual information, and the latter is calculated by means of a correlation dimension method and the algorithm of false nearest neighbours. Further, the spectrum of Lyapunov exponents, the Kaplan-Yorke dimension and the Kolmogorov entropy are computed. The existence of low-dimensional chaos in the time series of the atmospheric pollutant concentrations has been found.
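For the time-delay selection step, a minimal sketch (histogram-based, with a synthetic signal assumed; not the authors' code) computes the average mutual information and picks its first local minimum as the embedding delay:

```python
import numpy as np

def average_mutual_information(x, lag, bins=16):
    """Histogram estimate of the mutual information between x(t) and x(t+lag)."""
    a, b = x[:-lag], x[lag:]
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

def first_minimum_delay(x, max_lag=50):
    """Pick the embedding delay as the first local minimum of the AMI curve."""
    ami = [average_mutual_information(x, k) for k in range(1, max_lag + 1)]
    for k in range(1, len(ami) - 1):
        if ami[k] < ami[k - 1] and ami[k] < ami[k + 1]:
            return k + 1
    return int(np.argmin(ami)) + 1
```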
Efficient harvesting methods for early-stage snake and turtle embryos.
Matsubara, Yoshiyuki; Kuroiwa, Atsushi; Suzuki, Takayuki
2016-04-01
Reptile development is an intriguing research target for understanding the unique morphogenesis of reptiles as well as the evolution of vertebrates. However, there are numerous difficulties associated with studying development in reptiles. The number of available reptile eggs is usually quite limited. In addition, the reptile embryo is tightly adhered to the eggshell, making it a challenge to isolate reptile embryos intact. Furthermore, there have been few reports describing efficient procedures for isolating intact embryos, especially prior to the pharyngula stage. Thus, the aim of this review is to present efficient procedures for obtaining early-stage reptilian embryos intact. We first describe the method for isolating early-stage embryos of the Japanese striped snake. This is the first detailed method for obtaining embryos prior to oviposition in an oviparous snake species. Second, we describe an efficient strategy for isolating early-stage embryos of the soft-shelled turtle. © 2016 Japanese Society of Developmental Biologists.
Test functions for three-dimensional control-volume mixed finite-element methods on irregular grids
Naff, R.L.; Russell, T.F.; Wilson, J.D.
2000-01-01
Numerical methods based on unstructured grids, with irregular cells, usually require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element methods, vector shape functions are used to approximate the distribution of velocities across cells and vector test functions are used to minimize the error associated with the numerical approximation scheme. For a logically cubic mesh, the lowest-order shape functions are chosen in a natural way to conserve intercell fluxes that vary linearly in logical space. Vector test functions, while somewhat restricted by the mapping into the logical reference cube, admit a wider class of possibilities. Ideally, the test function would be selected from an acceptable class of candidates by an error minimization procedure. Lacking such a procedure, we first investigate the effect of possible test functions on the pressure distribution over the control volume; specifically, we look for test functions that allow for the elimination of intermediate pressures on cell faces. From these results, we select three forms of the test function for use in a control-volume mixed method code and subject them to an error analysis for different forms of grid irregularity; errors are reported in terms of the discrete L2 norm of the velocity error. Of these three forms, one appears to produce optimal results for most forms of grid irregularity.
Ordering Unstructured Meshes for Sparse Matrix Computations on Leading Parallel Systems
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Li, Xiaoye; Heber, Gerd; Biswas, Rupak
2000-01-01
The ability of computers to solve hitherto intractable problems and simulate complex processes using mathematical models makes them an indispensable part of modern science and engineering. Computer simulations of large-scale realistic applications usually require solving a set of non-linear partial differential equations (PDEs) over a finite region. For example, one thrust area in the DOE Grand Challenge projects is to design future accelerators such as the Spallation Neutron Source (SNS). Our colleagues at SLAC need to model complex RFQ cavities with large aspect ratios. Unstructured grids are currently used to resolve the small features in a large computational domain; dynamic mesh adaptation will be added in the future for additional efficiency. The PDEs for electromagnetics are discretized by the finite element method (FEM), which leads to a generalized eigenvalue problem Kx = λMx, where K and M are the stiffness and mass matrices, and are very sparse. In a typical cavity model, the number of degrees of freedom is about one million. For such large eigenproblems, direct solution techniques quickly reach the memory limits. Instead, the most widely used methods are Krylov subspace methods, such as Lanczos or Jacobi-Davidson. In all the Krylov-based algorithms, sparse matrix-vector multiplication (SPMV) must be performed repeatedly. Therefore, the efficiency of SPMV usually determines the eigensolver speed. SPMV is also one of the most heavily used kernels in large-scale numerical simulations.
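Since SPMV dominates the Krylov iteration cost, its inner loop is worth showing; the sketch below is a generic compressed-sparse-row kernel in plain Python/NumPy (a production eigensolver would call an optimized library routine instead):

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x for a sparse matrix stored in compressed sparse row format;
    this kernel is executed repeatedly inside Lanczos/Jacobi-Davidson."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

# 3x3 example: [[4, 0, 1], [0, 3, 0], [1, 0, 2]]
vals = np.array([4.0, 1.0, 3.0, 1.0, 2.0])
cols = np.array([0, 2, 1, 0, 2])
rptr = np.array([0, 2, 3, 5])
print(spmv_csr(vals, cols, rptr, np.ones(3)))  # -> [5. 3. 3.]
```

The irregular, indirect access pattern through col_idx is exactly why mesh/matrix ordering matters for performance on parallel systems.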
Evaporation estimates from the Dead Sea and their implications on its water balance
NASA Astrophysics Data System (ADS)
Oroud, Ibrahim M.
2011-12-01
The Dead Sea (DS) is a terminal hypersaline water body situated in the deepest part of the Jordan Valley. There is a growing interest in linking the DS to the open seas due to severe water shortages in the area and the serious geological and environmental hazards to its vicinity caused by the rapid level drop of the DS. A key issue in linking the DS with the open seas would be an accurate determination of evaporation rates. Large uncertainties exist in evaporation estimates for the DS due to the complex feedback mechanisms between meteorological forcings and the thermophysical properties of hypersaline solutions. Numerous methods have been used to estimate current and historical (pre-1960) evaporation rates, with estimates differing by ~100%. Evaporation from the DS is usually deduced indirectly using energy-balance, water-balance, or pan methods, with uncertainty in many parameters. Accumulated errors resulting from these uncertainties are usually pooled into the estimates of evaporation rates. In this paper, a physically based method with minimal empirical parameters is used to evaluate historical and current evaporation estimates from the DS. The more likely figures for historical and current evaporation rates from the DS were 1,500-1,600 and 1,200-1,250 mm per annum, respectively. Results obtained are congruent with field observations and with more elaborate procedures.
Solutions to Kuessner's integral equation in unsteady flow using local basis functions
NASA Technical Reports Server (NTRS)
Fromme, J. A.; Halstead, D. W.
1975-01-01
The computational procedure and numerical results are presented for a new method to solve Kuessner's integral equation in the case of subsonic compressible flow about harmonically oscillating planar surfaces with controls. Kuessner's equation is a linear transformation from pressure to normalwash. The unknown pressure is expanded in terms of prescribed basis functions and the unknown basis function coefficients are determined in the usual manner by satisfying the given normalwash distribution either collocationally or in the complex least squares sense. The present method of solution differs from previous ones in that the basis functions are defined in a continuous fashion over a relatively small portion of the aerodynamic surface and are zero elsewhere. This method, termed the local basis function method, combines the smoothness and accuracy of distribution methods with the simplicity and versatility of panel methods. Predictions by the local basis function method for unsteady flow are shown to be in excellent agreement with other methods. Also, potential improvements to the present method and extensions to more general classes of solutions are discussed.
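As a hedged illustration of the coefficient solve (the matrix and right-hand side below are synthetic placeholders, not Kuessner kernel values), the complex least-squares step amounts to an overdetermined linear system for the basis-function coefficients:

```python
import numpy as np

# A[i, j]: normalwash induced at control point i by pressure basis function j
# (complex-valued for harmonic motion); w: prescribed normalwash distribution.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 12)) + 1j * rng.standard_normal((40, 12))
w = rng.standard_normal(40) + 1j * rng.standard_normal(40)

# Complex least-squares fit: minimizes ||A c - w||_2 over the coefficients c.
coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
```

Collocation corresponds to the square case (as many control points as basis functions), in which the same call reduces to an exact solve.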
Scaling properties of the aerodynamic noise generated by low-speed fans
NASA Astrophysics Data System (ADS)
Canepa, Edward; Cattanei, Andrea; Mazzocut Zecchin, Fabio
2017-11-01
The spectral decomposition algorithm presented in the paper may be applied to selected parts of the SPL spectrum, i.e. to specific noise generating mechanisms. It yields the propagation and generation functions, as well as the Mach number scaling exponent associated with each mechanism as a function of the Strouhal number. The input data are SPL spectra obtained from measurements taken during speed ramps. Firstly, the basic theory and the implemented algorithm are described. Then, the behaviour of the new method is analysed with reference to numerically generated spectral data, and the results are compared with those of an existing method based on the assumption that the scaling exponent is constant. Guidelines for the employment of both methods are provided. Finally, the method is applied to measurements taken on a cooling fan mounted on a test plenum designed following the ISO 10302 standards. The most common noise generating mechanisms are present, and attention is focused on the low-frequency part of the spectrum, where the mechanisms are superposed. Generally, both the propagation and generation functions are determined with better accuracy than the scaling exponent, whose values are usually consistent with expectations based on coherence and compactness of the acoustic sources. For periodic noise, the computed exponent is less accurate, as the related SPL data set usually has a limited size. The scaling exponent is very sensitive to the details of the experimental data, e.g. to slight inconsistencies or random errors.
NASA Astrophysics Data System (ADS)
Tatomir, Alexandru Bogdan A. C.; Flemisch, Bernd; Class, Holger; Helmig, Rainer; Sauter, Martin
2017-04-01
Geological storage of CO2 represents one viable solution to reduce greenhouse gas emissions into the atmosphere. Leakage of stored CO2 can potentially occur through networks of interconnected fractures. The geometrical complexity of these networks is often very high, involving fractures occurring at various scales and having hierarchical structures. Such multiphase flow systems are usually hard to solve with a discrete fracture modelling (DFM) approach. Therefore, continuum fracture models assuming average properties are usually preferred. The multiple interacting continua (MINC) model is an extension of the classic double porosity model (Warren and Root, 1963) which accounts for the non-linear behaviour of the matrix-fracture interactions. For CO2 storage applications the transient representation of the inter-porosity two-phase flow plays an important role. This study tests the accuracy and computational efficiency of the MINC method complemented with the multiple sub-region (MSR) upscaling procedure against the DFM. The two-phase flow MINC simulator is implemented in the free, open-source numerical toolbox DuMux (www.dumux.org). The MSR (Gong et al., 2009) determines the inter-porosity terms by solving simplified local single-phase flow problems. The DFM is considered as the reference solution. The numerical examples consider a quasi-1D reservoir with a quadratic fracture system, a five-spot radially symmetric reservoir, and a completely randomly generated fracture system. Keywords: MINC, upscaling, two-phase flow, fractured porous media, discrete fracture model, continuum fracture model
NASA Astrophysics Data System (ADS)
Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin; Chen, Ze-Peng; Luo, Wen-Feng
2018-01-01
Moving force identification (MFI) is an important inverse problem in the field of bridge structural health monitoring (SHM). Reasonable signal structures of moving forces are rarely considered in the existing MFI methods. Interaction forces are complex because they contain both slowly-varying harmonic and impact signals due to bridge vibration and bumps on the bridge deck, respectively. Therefore, the interaction forces are usually hard to express completely and sparsely using a single basis function set. Based on a redundant concatenated dictionary and a weighted l1-norm regularization method, a hybrid method is proposed for MFI in this study. The redundant dictionary consists of both trigonometric functions and rectangular functions, used for matching the harmonic and impact signal features of the unknown moving forces. The weighted l1-norm regularization method is introduced for the formulation of the MFI equation, so that the signal features of the moving forces can be accurately extracted. The fast iterative shrinkage-thresholding algorithm (FISTA) is used for solving the MFI problem. The optimal regularization parameter is chosen by the Bayesian information criterion (BIC) method. In order to assess the accuracy and feasibility of the proposed method, a simply-supported beam bridge subjected to a moving force is taken as an example for numerical simulations. Finally, a series of experimental studies on MFI of a steel beam are performed in the laboratory. Both numerical and experimental results show that the proposed method can accurately identify the moving forces with strong robustness, and it has a better performance than the Tikhonov regularization method. Some related issues are discussed as well.
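A minimal sketch of the solver named above: FISTA applied to a weighted l1-regularized least-squares problem, with the weighted soft-threshold as the proximal step. The dictionary A, data b and weights are placeholders, and the BIC selection of the regularization parameter is omitted:

```python
import numpy as np

def fista_weighted_l1(A, b, weights, lam, n_iter=500):
    """FISTA for min_x 0.5*||A x - b||^2 + lam * sum_i weights_i * |x_i|."""
    L = np.linalg.norm(A, 2)**2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        z = y - grad / L
        thresh = lam * weights / L       # per-coefficient threshold
        x_new = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

In the MFI setting, A would stack the trigonometric and rectangular dictionary atoms column-wise, so the recovered x is sparse across both signal families.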
Nonlinear unsteady convection on micro and nanofluids with Cattaneo-Christov heat flux
NASA Astrophysics Data System (ADS)
Mamatha Upadhya, S.; Raju, C. S. K.; Mahesha; Saleem, S.
2018-06-01
This is a theoretical study of unsteady nonlinear convection of a magnetohydrodynamic fluid in a suspension of dust and graphene nanoparticles. To boost the heat transport, the Cattaneo-Christov heat flux and thermal radiation are considered. Dispersal of graphene nanoparticles in dusty fluids finds applications in biocompatibility, bio-imaging, biosensors, cancer detection and treatment, monitoring of stem cell differentiation, etc. Initially, the simulation is performed by amalgamating dust (micron-size) and nanoparticles into the base fluid. The governing partial differential equations (PDEs) are first transformed into ordinary differential equations (ODEs) with the support of the usual similarity transformations. Consequently, the highly nonlinear ODEs are solved numerically by the Runge-Kutta method combined with a shooting technique. Computational results for the non-dimensional temperature and velocity profiles are presented through graphs for the cases ϕ = 0 and ϕ = 0.05. Additionally, the numerical values of the friction factor and heat transfer rate are tabulated for the various physical parameters. We also validated the current outcomes against a previously available study and found them to be in very good agreement. From this study we conclude that, in the presence of the nanofluid, the heat transfer rate and temperature distribution are higher compared with the micro fluid.
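The Runge-Kutta/shooting combination can be sketched on a classical boundary-layer problem; the Blasius equation below is a stand-in for the paper's (different) ODE system, chosen only to show how the unknown initial slope is adjusted until the far-field condition is met:

```python
import numpy as np

def rk4(f, y0, t):
    """Classical fourth-order Runge-Kutta integration of y' = f(y)."""
    y, out = np.array(y0, dtype=float), []
    out.append(y.copy())
    for k in range(len(t) - 1):
        h = t[k + 1] - t[k]
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y = y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        out.append(y.copy())
    return np.array(out)

def blasius(y):
    # y = [f, f', f'']; Blasius boundary layer: f''' + 0.5 f f'' = 0
    return np.array([y[1], y[2], -0.5 * y[0] * y[2]])

def shoot(slope, eta_max=10.0, n=2001):
    eta = np.linspace(0.0, eta_max, n)
    return rk4(blasius, [0.0, 0.0, slope], eta)[-1, 1]   # f'(eta_max)

# Bisect on the unknown wall slope f''(0) so that f' -> 1 far from the wall.
lo, hi = 0.1, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if shoot(mid) < 1.0 else (lo, mid)
print(round(0.5 * (lo + hi), 5))   # approx 0.33206
```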
Slump Flows inside Pipes: Numerical Results and Comparison with Experiments
NASA Astrophysics Data System (ADS)
Malekmohammadi, S.; Naccache, M. F.; Frigaard, I. A.; Martinez, D. M.
2008-07-01
In this work an analysis of the buoyancy-driven slumping flow inside a pipe is presented. This flow usually occurs when an oil well is sealed by a plug cementing process, where a cement plug is placed inside a pipe filled with a lower-density fluid, displacing it towards the upper cylinder wall. Both the cement and the surrounding fluid have non-Newtonian behavior: the cement is viscoplastic and the surrounding fluid is shear-thinning. A numerical analysis was performed to evaluate the effects of some governing parameters on the slump length development. The conservation equations of mass and momentum were solved via a finite volume technique, using the Fluent software (Ansys Inc.). The Volume of Fluid surface-tracking method was used to obtain the interface between the fluids and the slump length as a function of time. Results were obtained for different values of the fluid density difference, fluid rheology and pipe inclination. The effects of these parameters on the interface shape and on the slump length versus time curve were analyzed. Moreover, the numerical results were compared with experimental ones, but some differences were observed, possibly due to chemical effects at the interface.
Approximate Solutions for Ideal Dam-Break Sediment-Laden Flows on Uniform Slopes
NASA Astrophysics Data System (ADS)
Ni, Yufang; Cao, Zhixian; Borthwick, Alistair; Liu, Qingquan
2018-04-01
Shallow water hydro-sediment-morphodynamic (SHSM) models have been applied increasingly widely in hydraulic engineering and geomorphological studies over the past few decades. Analytical and approximate solutions are usually sought to verify such models and thereby confirm their credibility. Dam-break flows are often invoked because such flows normally feature shock waves and contact discontinuities that require refined numerical schemes to resolve. While analytical and approximate solutions to clear-water dam-break flows have been available for some time, such solutions are rare for sediment transport in dam-break flows. Here we aim to derive approximate solutions for ideal dam-break sediment-laden flows resulting from the sudden release of a finite volume of frictionless, incompressible water-sediment mixture on a uniform slope. The approximate solutions are presented for three typical sediment transport scenarios, i.e., pure advection, pure sedimentation, and concurrent entrainment and deposition. Although the cases considered in this paper are idealized, the approximate solutions derived provide suitable benchmark tests for evaluating SHSM models, especially now that shock waves can be resolved accurately with a suite of finite volume methods, while the accuracy of the numerical solutions of contact discontinuities in sediment transport generally remains poorer.
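For reference, the clear-water limit on a horizontal bed has the classical Ritter solution, a benchmark of the kind discussed; the sketch below implements it for an ideal dam break over a dry, frictionless bed (the sloped, sediment-laden cases of the paper lead to more elaborate expressions):

```python
import numpy as np

def ritter_depth(x, t, h0, g=9.81):
    """Ritter solution for an ideal dam break (dam at x = 0, reservoir depth
    h0 for x < 0) over a dry, horizontal, frictionless bed at time t > 0."""
    c0 = np.sqrt(g * h0)
    h = np.zeros_like(x, dtype=float)
    h[x <= -c0 * t] = h0                              # undisturbed reservoir
    fan = (x > -c0 * t) & (x < 2.0 * c0 * t)          # rarefaction fan
    h[fan] = (2.0 * c0 - x[fan] / t)**2 / (9.0 * g)
    return h                                          # dry bed ahead of the front

x = np.linspace(-5.0, 10.0, 301)
h = ritter_depth(x, t=1.0, h0=1.0)
```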
NASA Astrophysics Data System (ADS)
Zhou, Di; Lu, Zhiliang; Guo, Tongqing; Shen, Ennan
2016-06-01
In this paper, two types of unsteady flow problems in turbomachinery, blade flutter and rotor-stator interaction, are investigated by means of numerical simulation. For the former, the energy method is often used to predict aeroelastic stability by calculating the aerodynamic work per vibration cycle. The inter-blade phase angle (IBPA) is an important parameter in the computation and may have significant effects on the aeroelastic behavior. For the latter, the numbers of blades in each row are usually not equal and the unsteady rotor-stator interactions can be strong. An effective way to perform multi-row calculations is the domain scaling method (DSM). These two cases share a common point: the computational domain has to be extended to multiple passages (MP) considering their respective features. The present work is aimed at modeling these two issues with the developed MP model. Computational fluid dynamics (CFD) techniques are applied to resolve the unsteady Reynolds-averaged Navier-Stokes (RANS) equations and simulate the flow fields. With the parallel technique, the additional time cost of modeling more passages can be largely reduced. Results are presented for two test cases: a vibrating rotor blade and a turbine stage.
Numerical analysis of mixing enhancement for micro-electroosmotic flow
NASA Astrophysics Data System (ADS)
Tang, G. H.; He, Y. L.; Tao, W. Q.
2010-05-01
Micro-electroosmotic flow is usually slow with negligible inertial effects, and diffusion-based mixing can be problematic. To gain an improved understanding of electroosmotic mixing in microchannels, a numerical study has been carried out for channels patterned with wall blocks and channels patterned with heterogeneous surfaces. The lattice Boltzmann method has been employed to obtain the external electric field, the electric potential distribution in the electrolyte, the flow field, and the species concentration distribution within the same framework. The simulation results show that wall blocks and heterogeneous surfaces can significantly disturb the streamlines by fluid folding and stretching, leading to substantial improvements in mixing. However, the results show that the introduction of such features can substantially reduce the mass flow rate and thus effectively prolong the available mixing time as the flow passes through the channel. This is a non-negligible factor in the effectiveness of the observed improvements in mixing efficiency. Compared with the heterogeneous surface distribution, the wall block cases achieve more effective enhancement in the same mixing time. In addition, the field synergy theory is extended to analyze the mixing enhancement in electroosmotic flow. The distribution of the local synergy angle in the channel helps to evaluate the effectiveness of the enhancement method.
NASA Astrophysics Data System (ADS)
Lubieniecki, Michał; Roemer, Jakub; Martowicz, Adam; Wojciechowski, Krzysztof; Uhl, Tadeusz
2016-03-01
Gas foil bearings have become widespread, covering applications in micro-turbines, motors, compressors, and turbochargers, prevalently of small size. The specific construction of the bearing, despite all of its advantages, makes it vulnerable to local differences in heat generation rates that can be extremely detrimental. The developing thermal gradients may lead to thermal runaway or seizure that eventually causes bearing failure, usually abrupt in nature. The authors propose a method for thermal gradient removal with the use of current-controlled thermoelectric modules. To support the adoption of a control law, a numerical model of the heat distribution in the bearing has been built. Although sparse readings obtained experimentally with standard thermocouples are enough to determine thermal gradients successfully, validation of the bearing numerical model may be impeded. To improve the spatial resolution of the experimental measurements, the authors propose a matrix of customized thermocouples located on the top foil. The foil acts as a shared conductor for each thermocouple, which reduces the number of cable connections. The proof of concept of the control and measurement systems has been demonstrated on a still bearing heated by a cartridge heater.
Properties of atomic pairs produced in the collision of Bose-Einstein condensates
NASA Astrophysics Data System (ADS)
Ziń, Paweł; Wasak, Tomasz
2018-04-01
During a collision of Bose-Einstein condensates correlated pairs of atoms are emitted. The scattered massive particles, in analogy to photon pairs in quantum optics, might be used in the violation of Bell's inequalities, demonstration of Einstein-Podolsky-Rosen correlations, or sub-shot-noise atomic interferometry. Usually, a theoretical description of the collision relies either on stochastic numerical methods or on analytical treatments involving various approximations. Here, we investigate elastic scattering of atoms from colliding elongated Bose-Einstein condensates within the Bogoliubov method, carefully controlling performed approximations at every stage of the analysis. We derive expressions for the one- and two-particle correlation functions. The obtained formulas, which relate the correlation functions to the condensate wave function, are convenient for numerical calculations. We employ the variational approach for condensate wave functions to obtain analytical expressions for the correlation functions, whose properties we analyze in detail. We also present a useful semiclassical model of the process and compare its results with the quantum one. The results are relevant for recent experiments with excited helium atoms, as well as for planned experiments aimed at investigating the nonclassicality of the system.
Gómez-Velázquez, Fabiola R; Vélez-Pérez, Hugo; Espinoza-Valdez, Aurora; Romo-Vazquez, Rebeca; Salido-Ruiz, Ricardo A; Ruiz-Stovel, Vanessa; Gallardo-Moreno, Geisa B; González-Garrido, Andrés A; Berumen, Gustavo
2017-02-08
Children with mathematical difficulties usually have an impaired ability to process symbolic representations. Functional MRI methods have suggested that early frontoparietal connectivity can predict mathematic achievements; however, the study of brain connectivity during numerical processing remains unexplored. With the aim of evaluating this in children with different math proficiencies, we selected a sample of 40 children divided into two groups [high achievement (HA) and low achievement (LA)] according to their arithmetic scores in the Wide Range Achievement Test, 4th ed. Participants performed a symbolic magnitude comparison task (i.e. determining which of two numbers is numerically larger), with simultaneous electrophysiological recording. Partial directed coherence and graph theory methods were used to estimate and depict frontoparietal connectivity in both groups. The behavioral measures showed that children with LA performed significantly more slowly and less accurately than their peers in the HA group. Significantly higher frontocentral connectivity was found in LA compared with HA; however, when the connectivity analysis was restricted to parietal locations, no relevant group differences were observed. These findings seem to support the notion that LA children require greater memory and attentional efforts to meet task demands, probably affecting early stages of symbolic comparison.
NASA Astrophysics Data System (ADS)
Jin, Y.; Liang, Z.
2002-12-01
The vector radiative transfer (VRT) equation is an integro-differential equation describing the multiple scattering, absorption and transmission of the four Stokes parameters in random scattering media. From the integral formal solution of the VRT equation, the lower-order solutions, such as the first-order scattering for a layered medium or the second-order scattering for a half space, can be obtained. The lower-order solutions are usually good at low frequency, when high-order scattering is negligible. It is not feasible to continue the iteration to obtain high-order scattering solutions because too many folds of integration would be involved. In space-borne microwave remote sensing, for example, the DMSP (Defense Meteorological Satellite Program) SSM/I (Special Sensor Microwave/Imager) employs seven channels at 19, 22, 37 and 85 GHz. Multiple scattering from terrain surfaces such as snowpack cannot be neglected at these channels. The discrete ordinate and eigen-analysis method has been studied to take multiple scattering into account and applied to remote sensing of atmospheric precipitation, snowpack, etc. Snowpack was modeled as a layer of dense spherical particles, and the VRT for a layer of uniformly dense spherical particles has been numerically studied by the discrete ordinate method. However, due to surface melting and refrozen crusts, the snowpack undergoes stratification, forming inhomogeneous profiles of ice grain size, fractional volume, physical temperature, etc. It becomes necessary to study multiple scattering and emission from stratified snowpack of dense ice grains. But the discrete ordinate and eigen-analysis method cannot simply be applied to a multi-layer model, because numerically solving a set of multiple VRT equations is difficult. Stratifying the inhomogeneous media into multiple slabs and employing the first-order Mueller matrix of each thin slab, this paper develops an iterative method to derive high-order scattering solutions for the whole scattering medium. High-order scattering and emission from inhomogeneous stratified media of dense spherical particles are numerically obtained. The brightness temperature at low frequency, such as 5.3 GHz, without high-order scattering and at the SSM/I channels with high-order scattering is obtained. This approach is also compared with the conventional discrete ordinate method for a uniform layer model. Numerical simulation for inhomogeneous snowpack is also compared with microwave remote sensing measurements.
Low Mass-Damping Vortex-Induced Vibrations of a Single Cylinder at Moderate Reynolds Number.
Jus, Y; Longatte, E; Chassaing, J-C; Sagaut, P
2014-10-01
The feasibility and accuracy of large eddy simulation is investigated for the case of three-dimensional unsteady flows past an elastically mounted cylinder at moderate Reynolds number. Although these flow problems are unconfined, complex wake flow patterns may be observed depending on the elastic properties of the structure. An iterative procedure is used to solve the structural dynamic equation to be coupled with the Navier-Stokes system formulated in a pseudo-Eulerian way. A moving mesh method is involved to deform the computational domain according to the motion of the fluid structure interface. Numerical simulations of vortex-induced vibrations are performed for a freely vibrating cylinder at Reynolds number 3900 in the subcritical regime under two low mass-damping conditions. A detailed physical analysis is provided for a wide range of reduced velocities, and the typical three-branch response of the amplitude behavior usually reported in the experiments is exhibited and reproduced by numerical simulation.
NASA Astrophysics Data System (ADS)
Zhang, Tie-Yan; Zhao, Yan; Xie, Xiang-Peng
2012-12-01
This paper is concerned with the problem of stability analysis of nonlinear Roesser-type two-dimensional (2D) systems. Firstly, the fuzzy modeling method for the usual one-dimensional (1D) systems is extended to the 2D case so that the underlying nonlinear 2D system can be represented by the 2D Takagi-Sugeno (TS) fuzzy model, which is convenient for implementing the stability analysis. Secondly, a new kind of fuzzy Lyapunov function, which is homogeneously polynomially parameter-dependent on the fuzzy membership functions, is developed to derive less conservative stability conditions for the TS Roesser-type 2D system. In the process of stability analysis, the obtained stability conditions approach exactness in the sense of convergence by applying some novel relaxation techniques. Moreover, the obtained result is formulated in the form of linear matrix inequalities, which can be easily solved via standard numerical software. Finally, a numerical example is given to demonstrate the effectiveness of the proposed approach.
SPH simulation of free surface flow over a sharp-crested weir
NASA Astrophysics Data System (ADS)
Ferrari, Angela
2010-03-01
In this paper the numerical simulation of a free surface flow over a sharp-crested weir is presented. Since in this case the usual shallow water assumptions are not satisfied, we propose to solve the problem using the full weakly compressible Navier-Stokes equations with the Tait equation of state for water. The numerical method used consists of the new meshless Smoothed Particle Hydrodynamics (SPH) formulation proposed by Ferrari et al. (2009) [8], which accurately tracks the free surface profile and provides monotone pressure fields. Thus, the unsteady evolution of the complex moving material interface (free surface) can be properly resolved. The simulations, involving about half a million fluid particles, have been run in parallel on two of the most powerful High Performance Computing (HPC) facilities in Europe. The validation of the results has been carried out by analysing the pressure field and comparing the free surface profiles obtained with the SPH scheme against experimental measurements available in the literature [18]. A very good quantitative agreement has been obtained.
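The equation of state used in this weakly compressible setting is simple enough to state inline; the sketch below uses a parameter choice that is conventional in SPH practice (artificial sound speed c0 of order ten times the maximum flow speed, exponent 7) rather than values taken from the paper:

```python
import numpy as np

def tait_pressure(rho, rho0=1000.0, c0=10.0, gamma=7.0):
    """Tait equation of state for weakly compressible water:
    p = B * ((rho/rho0)**gamma - 1), with B = rho0 * c0**2 / gamma."""
    B = rho0 * c0**2 / gamma
    return B * ((rho / rho0)**gamma - 1.0)

print(tait_pressure(np.array([1000.0, 1005.0, 1010.0])))
```

The stiff exponent keeps density variations around 1%, which is what makes the artificial-compressibility approach a good approximation of incompressible water.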
Shock wave-free interface interaction
NASA Astrophysics Data System (ADS)
Frolov, Roman; Minev, Peter; Krechetnikov, Rouslan
2016-11-01
The problem of shock wave-free interface interaction has been widely studied in the context of compressible two-fluid flows using analytical, experimental, and numerical techniques. While various physical effects and possible interaction patterns for various geometries have been identified in the literature, the effects of viscosity and surface tension are usually neglected in such models. In our study, we apply a novel numerical algorithm for simulation of viscous compressible two-fluid flows with surface tension to investigate the influence of these effects on the shock-interface interaction. The method combines together the ideas from Finite Volume adaptation of invariant domains preserving algorithm for systems of hyperbolic conservation laws by Guermond and Popov and ADI parallel solver for viscous incompressible NSEs by Guermond and Minev. This combination has been further extended to a two-fluid flow case, including surface tension effects. Here we report on a quantitative study of how surface tension and viscosity affect the structure of the shock wave-free interface interaction region.
Banks, H T; Birch, Malcolm J; Brewin, Mark P; Greenwald, Stephen E; Hu, Shuhua; Kenz, Zackary R; Kruse, Carola; Maischak, Matthias; Shaw, Simon; Whiteman, John R
2014-04-13
We revisit a method originally introduced by Werder et al. (in Comput. Methods Appl. Mech. Engrg., 190:6685-6708, 2001) for temporally discontinuous Galerkin FEMs applied to a parabolic partial differential equation. In that approach, block systems arise because of the coupling of the spatial systems through inner products of the temporal basis functions. If the spatial finite element space is of dimension D and polynomials of degree r are used in time, the block system has dimension (r + 1)D and is usually regarded as being too large when r > 1. Werder et al. found that the space-time coupling matrices are diagonalizable over ℂ for r ⩽ 100, and this means that the time-coupled computations within a time step can actually be decoupled. By using either continuous Galerkin or spectral element methods in space, we apply this DG-in-time methodology, for the first time, to second-order wave equations including elastodynamics with and without Kelvin-Voigt and Maxwell-Zener viscoelasticity. An example set of numerical results is given to demonstrate the favourable effect on error and computational work of the moderately high-order (up to degree 7) temporal and spatio-temporal approximations, and we also touch on an application of this method to an ambitious problem related to the diagnosis of coronary artery disease. Copyright © 2014 The Authors. International Journal for Numerical Methods in Engineering published by John Wiley & Sons Ltd.
Numerical Integration Techniques for Curved-Element Discretizations of Molecule–Solvent Interfaces
Bardhan, Jaydeep P.; Altman, Michael D.; Willis, David J.; Lippow, Shaun M.; Tidor, Bruce; White, Jacob K.
2012-01-01
Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, we have developed methods to model several important surface formulations using exact surface discretizations. Following and refining Zauhar’s work (J. Comp.-Aid. Mol. Des. 9:149-159, 1995), we define two classes of curved elements that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. We then present numerical integration techniques that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, we present a set of calculations that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute–solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that our methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online at http://web.mit.edu/tidor. PMID:17627358
Measurement Uncertainty of Dew-Point Temperature in a Two-Pressure Humidity Generator
NASA Astrophysics Data System (ADS)
Martins, L. Lages; Ribeiro, A. Silva; Alves e Sousa, J.; Forbes, Alistair B.
2012-09-01
This article describes the measurement uncertainty evaluation of the dew-point temperature when using a two-pressure humidity generator as a reference standard. The estimation of the dew-point temperature involves the solution of a non-linear equation for which iterative solution techniques, such as the Newton-Raphson method, are required. Previous studies have already been carried out using the GUM method and the Monte Carlo method but have not discussed the impact of the approximate numerical method used to provide the temperature estimation. One of the aims of this article is to take this approximation into account. Following the guidelines presented in the GUM Supplement 1, two alternative approaches can be developed: the forward measurement uncertainty propagation by the Monte Carlo method when using the Newton-Raphson numerical procedure; and the inverse measurement uncertainty propagation by Bayesian inference, based on prior available information regarding the usual dispersion of values obtained by the calibration process. The measurement uncertainties obtained using these two methods can be compared with previous results. Other relevant issues concerning this research are the broad application to measurements that require hygrometric conditions obtained from two-pressure humidity generators and, also, the ability to provide a solution that can be applied to similar iterative models. The research also studied the factors influencing both the use of the Monte Carlo method (such as the seed value and the convergence parameter) and the inverse uncertainty propagation using Bayesian inference (such as the pre-assigned tolerance, prior estimate, and standard deviation) in terms of their accuracy and adequacy.
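A minimal sketch of the forward Monte Carlo approach: each draw of the input quantity is pushed through the Newton-Raphson solver, so the effect of the iterative method (tolerance, starting value) is propagated together with the metrological uncertainty. The Magnus-type saturation-vapour-pressure model and all numbers below are illustrative stand-ins for the two-pressure generator equations:

```python
import numpy as np

def newton_solve(g, dg, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration with an explicit convergence tolerance."""
    x = x0
    for _ in range(max_iter):
        step = g(x) / dg(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Hypothetical stand-in for the dew-point equation: solve e_s(T) = e for T,
# with a Magnus-type saturation pressure e_s(T) = a*exp(b*T/(c + T)) in Pa.
a, b, c = 611.2, 17.62, 243.12

def dew_point(e):
    g = lambda T: a * np.exp(b * T / (c + T)) - e
    dg = lambda T: a * np.exp(b * T / (c + T)) * b * c / (c + T)**2
    return newton_solve(g, dg, x0=10.0)

# Forward Monte Carlo propagation through the iterative solver.
rng = np.random.default_rng(42)
e_draws = rng.normal(1500.0, 5.0, 2000)        # Pa; illustrative uncertainty
t_draws = np.array([dew_point(e) for e in e_draws])
print(t_draws.mean(), t_draws.std(ddof=1))     # estimate and standard uncertainty
```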
Numerical Researches on Dynamical Systems with Relativistic Spin
NASA Astrophysics Data System (ADS)
Han, W. B.
2010-04-01
It is well known that spinning compact binaries are among the most important research objects in the universe. In particular, EMRIs (extreme mass ratio inspirals), in which stellar-mass compact objects orbit massive black holes, are considered to be primary sources of gravitational wave (GW) radiation detectable by the space-based interferometer LISA. GW signals from EMRIs can be used to test general relativity, measure the masses and spins of central black holes and study essential physics near horizons. Compared with the situation without spin, the complexity of extreme objects, most of which rotate very fast, is much higher. The dynamics of EMRI systems are therefore studied numerically and analytically. We focus on how spin affects the dynamics of these systems and the produced GW radiation. Firstly, an idealized model of spinning test particles around a Kerr black hole is considered. For equatorial orbits, we present the correct expression of the effective potential and analyze the stability of circular orbits. In particular, the gravitational binding energy and frame-dragging effect of an extreme Kerr black hole are much bigger than those without spin. For general orbits, spin can monotonically enlarge the orbital inclination and destroy the symmetry of orbits about the equatorial plane. Most importantly, extreme spin can produce orbital chaos. By carefully investigating the relations between chaos and orbital parameters, we point out that chaos usually appears for orbits with small pericenter, large eccentricity and large orbital inclination. It is emphasized that the Poincaré section method is invalid for detecting the chaos of spinning particles, and that the route of these systems toward chaos is the period-doubling bifurcation. Furthermore, we study how spin affects the GW radiation from spinning test particles orbiting Kerr black holes. It is found that spin can increase the orbital eccentricity and thus make the h+ component easier to detect. But for the h× component, because spin changes the orbital inclination in a complicated way, it is more difficult to build GW signal templates. Secondly, based on the scalar gravity theory, a numerical relativistic model of EMRIs is constructed to account for the self-gravity and radiation reaction of the low-mass object. Finally, we develop a new multi-step method for Hamiltonian systems to meet the needs of numerical research. This method can effectively maintain each conserved quantity of a separable Hamiltonian system. In addition, for constrained systems with a few first integrals, we present a new numerical stabilization method, named the adjustment-stabilization method, which can maintain all known conserved quantities of a given dynamical system and greatly improve the numerical accuracy. Our new method is the most complete stabilization method to date.
Multifractal Cross Wavelet Analysis
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Qiang; Gao, Xing-Lu; Zhou, Wei-Xing; Stanley, H. Eugene
Complex systems are composed of mutually interacting components and the output values of these components usually exhibit long-range cross-correlations. Using wavelet analysis, we propose a method of characterizing the joint multifractal nature of these long-range cross correlations, a method we call multifractal cross wavelet analysis (MFXWT). We assess the performance of the MFXWT method by performing extensive numerical experiments on the dual binomial measures with multifractal cross correlations and the bivariate fractional Brownian motions (bFBMs) with monofractal cross correlations. For binomial multifractal measures, we find the empirical joint multifractality of MFXWT to be in approximate agreement with the theoretical formula. For bFBMs, MFXWT may provide spurious multifractality because of the wide spanning range of the multifractal spectrum. We also apply the MFXWT method to stock market indices, and in pairs of index returns and volatilities we find an intriguing joint multifractal behavior. The tests on surrogate series also reveal that the cross correlation behavior, particularly the cross correlation with zero lag, is the main origin of cross multifractality.
Analytical N beam position monitor method
NASA Astrophysics Data System (ADS)
Wegscheider, A.; Langner, A.; Tomás, R.; Franchi, A.
2017-11-01
Measurement and correction of focusing errors is of great importance for the performance and machine protection of circular accelerators. Furthermore, the LHC needs to provide equal luminosities to the experiments ATLAS and CMS. High demands are also set on the speed of optics commissioning, as the foreseen operation with β*-leveling on luminosity will require many operational optics. A fast measurement of the β-function around a storage ring is usually done by using the measured phase advance between three consecutive beam position monitors (BPMs). A recent extension of this established technique, called the N-BPM method, was successfully applied for optics measurements at CERN, ALBA, and ESRF. We present here an improved algorithm that uses analytical calculations for both random and systematic errors and takes into account the presence of quadrupole, sextupole, and BPM misalignments, in addition to quadrupolar field errors. This new scheme, called the analytical N-BPM method, is much faster, further improves the measurement accuracy, and is applicable to very pushed beam optics where the existing numerical N-BPM method tends to fail.
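For context, the underlying three-BPM relation (as commonly written in the optics-measurement literature; the notation here is generic, not copied from the paper) infers the β-function at BPM 1 from the measured phase advances to the two downstream BPMs:

\[
\beta_1^{\mathrm{meas}} \;=\; \beta_1^{\mathrm{model}}\,
\frac{\cot\varphi_{12}^{\mathrm{meas}} - \cot\varphi_{13}^{\mathrm{meas}}}
     {\cot\varphi_{12}^{\mathrm{model}} - \cot\varphi_{13}^{\mathrm{model}}},
\]

where \(\varphi_{ij}\) is the betatron phase advance between BPMs \(i\) and \(j\). The accuracy of β thus depends on how phase-advance noise and lattice errors propagate through the cotangents; the N-BPM generalization combines many such three-BPM estimates together with their full covariance.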
Three-dimensional simulation of the free shear layer using the vortex-in-cell method
NASA Technical Reports Server (NTRS)
Couet, B.; Buneman, O.; Leonard, A.
1979-01-01
We present numerical simulations of the evolution of a mixing layer from an initial state of uniform vorticity with simple two- and three-dimensional small perturbations. A new method for tracing a large number of three-dimensional vortex filaments is used in the simulations. Vortex tracing by Biot-Savart interaction originally implied ideal (non-viscous) flow, but we use a 3-d mesh, Fourier transforms and filtering for vortex tracing, which implies 'modeling' of subgrid scale motion and hence some viscosity. Streamwise perturbations lead to the usual roll-up of vortex patterns with spanwise uniformity maintained. Remarkably, spanwise perturbations generate streamwise distortions of the vortex filaments, and the combination of both perturbations leads to patterns with interesting features discernible in the movies and in the records of enstrophy and energy for the three components of the flow.
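The filament induction step can be sketched with the regularized Biot-Savart law; the midpoint-rule discretization and the smoothing core radius below are generic choices, not the paper's mesh/FFT formulation (which replaces exactly this direct summation):

```python
import numpy as np

def biot_savart_velocity(x, nodes, gamma=1.0, delta=0.05):
    """Velocity induced at point x by a closed vortex filament discretized by
    `nodes`: u = gamma/(4 pi) * sum dl x r / (|r|^2 + delta^2)^(3/2)."""
    u = np.zeros(3)
    for i in range(len(nodes)):
        p0, p1 = nodes[i], nodes[(i + 1) % len(nodes)]
        dl = p1 - p0
        r = x - 0.5 * (p0 + p1)              # target minus segment midpoint
        u += np.cross(dl, r) / (r @ r + delta**2)**1.5
    return gamma / (4.0 * np.pi) * u

# A circular vortex ring of unit radius, evaluated on its axis:
theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
ring = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
print(biot_savart_velocity(np.array([0.0, 0.0, 0.5]), ring))
```

The direct sum costs O(N^2) per step for N segments, which is the motivation for the mesh/FFT treatment described in the abstract.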
Extraction of Children's Friendship Relation from Activity Level
NASA Astrophysics Data System (ADS)
Kono, Aki; Shintani, Kimio; Katsuki, Takuya; Kihara, Shin'ya; Ueda, Mari; Kaneda, Shigeo; Haga, Hirohide
Children learn to fit into society through living in a group, and this is greatly influenced by their friendships. Although preschool teachers need to observe children to assist the growth of their social skills and support the development of each child's personality, only experienced teachers can watch over children while providing high-quality guidance. To address this problem, this paper proposes a mathematical and objective method that assists teachers with observation. It uses numerical activity-level data recorded by pedometers, from which we build tree diagrams, called dendrograms, based on hierarchical clustering of the recorded activity levels. We also calculate the ``breadth'' and ``depth'' of children's friendships by using more than one dendrogram. We recorded children's activity levels in a kindergarten for two months and evaluated the proposed method; the results usually coincide with teachers' remarks about the children.
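A minimal sketch of the dendrogram construction with SciPy, using synthetic pedometer counts; the data shape, the correlation metric and the group count are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.cluster.hierarchy import dendrogram, fcluster, linkage
import matplotlib.pyplot as plt

# Rows: children; columns: activity counts per time slot (synthetic data).
rng = np.random.default_rng(7)
activity = rng.poisson(lam=60, size=(12, 48)).astype(float)

# Children with correlated activity patterns merge early in the tree,
# mirroring the friendship extraction described above.
Z = linkage(activity, method="average", metric="correlation")
dendrogram(Z, labels=[f"child{i}" for i in range(12)])
plt.tight_layout()
plt.show()

groups = fcluster(Z, t=3, criterion="maxclust")   # cut the tree into 3 groups
```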
Real-time optimal guidance for orbital maneuvering.
NASA Technical Reports Server (NTRS)
Cohen, A. O.; Brown, K. R.
1973-01-01
A new formulation for soft-constraint trajectory optimization is presented as a real-time optimal feedback guidance method for multiburn orbital maneuvers. Control is always chosen to minimize burn time plus a quadratic penalty for end condition errors, weighted so that early in the mission (when controllability is greatest) terminal errors are held negligible. Eventually, as controllability diminishes, the method partially relaxes but effectively still compensates perturbations in whatever subspace remains controllable. Although the soft-constraint concept is well-known in optimal control, the present formulation is novel in addressing the loss of controllability inherent in multiple burn orbital maneuvers. Moreover the necessary conditions usually obtained from a Bolza formulation are modified in this case so that the fully hard constraint formulation is a numerically well behaved subcase. As a result convergence properties have been greatly improved.
When and how do GPs record vital signs in children with acute infections? A cross-sectional study
Blacklock, Claire; Haj-Hassan, Tanya Ali; Thompson, Matthew J
2012-01-01
Background NICE recommendations and evidence from ambulatory settings promote the use of vital signs in identifying serious infections in children. This appears to differ from usual clinical practice, where GPs report measuring vital signs infrequently. Aim To identify the frequency of vital sign documentation by GPs in the assessment of children with acute infections in primary care. Design and setting Observational study in 15 general practice surgeries in Oxfordshire and Somerset, UK. Method A standardised proforma was used to extract consultation details, including documentation of numerical vital signs and words or phrases used by the GP in assessing vital signs, for 850 children aged 1 month to 16 years presenting with acute infection. Results One or more numerical vital signs were recorded for 269 (31.6%) of the children presenting with acute infections; however, the GP recording rate improved if free-text proxies were also considered: at least one vital sign was then recorded in over half (54.1%) of children. In those with recorded numerical values for vital signs, the most frequent was temperature (210, 24.7%), followed by heart rate (62, 7.3%), respiratory rate (58, 6.8%), and capillary refill time (36, 4.2%). Words or phrases for vital signs were documented infrequently (temperature 17.6%, respiratory rate 14.6%, capillary refill time 12.5%, and heart rate 0.5%). Text relating to global assessment was documented in 313/850 (36.8%) of consultations. Conclusion GPs record vital signs using words and phrases as well as numerical methods, although overall documentation of vital signs is infrequent in children presenting with acute infections. PMID:23265227
NASA Astrophysics Data System (ADS)
Roubinet, D.; Russian, A.; Dentz, M.; Gouze, P.
2017-12-01
Characterizing and modeling hydrodynamic reactive transport in fractured rock are critical challenges for various research fields and applications, including environmental remediation, geological storage, and energy production. To this end, we consider a recently developed time domain random walk (TDRW) approach, which is adapted to reproduce anomalous transport behaviors and capture heterogeneous structural and physical properties. This method is also very well suited to optimizing numerical simulations by shared-memory massive parallelization and provides numerical results at various scales. So far, the TDRW approach has been applied to modeling advective-diffusive transport with mass transfer between mobile and immobile regions and simple (theoretical) reactions in heterogeneous porous media represented as single continuum domains. We extend this approach to dual-continuum representations, considering a highly permeable fracture network embedded in a poorly permeable rock matrix with heterogeneous geochemical reactions occurring in both geological structures. The resulting numerical model enables us to extend the range of the modeled heterogeneity scales with an accurate representation of solute transport processes and no assumption on the Fickianity of these processes. The proposed model is compared with existing particle-based methods that are usually used to model reactive transport in fractured rocks assuming a homogeneous surrounding matrix, and is used to evaluate the impact of matrix heterogeneity on the apparent reaction rates for different 2D and 3D simple-to-complex fracture network configurations.
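The core of a TDRW step is compact enough to sketch; below, the network, weights and the exponential transition-time law are illustrative assumptions (heavy-tailed laws would be used to reproduce anomalous transport), not the dual-continuum implementation of the study:

```python
import numpy as np

def tdrw_path(adj, mean_time, start, t_end, rng):
    """Minimal time domain random walk on a network: jump to a neighbor with
    probability proportional to the link weight, then advance the clock by a
    random transition time. adj[i] maps node i to {neighbor: weight};
    mean_time[i] is the mean residence time at node i."""
    node, t, path = start, 0.0, [(0.0, start)]
    while t < t_end and adj[node]:
        nbrs = list(adj[node])
        w = np.array([adj[node][j] for j in nbrs], dtype=float)
        node = nbrs[rng.choice(len(nbrs), p=w / w.sum())]
        t += rng.exponential(mean_time[node])
        path.append((t, node))
    return path

# Tiny fracture-like chain with one slow 'matrix' node in the middle:
adj = {0: {1: 1.0}, 1: {0: 1.0, 2: 0.1}, 2: {1: 0.1, 3: 1.0}, 3: {2: 1.0}}
mean_time = {0: 1.0, 1: 1.0, 2: 50.0, 3: 1.0}   # node 2 mimics matrix trapping
print(tdrw_path(adj, mean_time, start=0, t_end=200.0,
                rng=np.random.default_rng(3))[:6])
```

Because the clock advances by sampled transition times rather than fixed steps, the walker crosses fast fracture links cheaply while long matrix residences are captured in a single event, which is what makes the method efficient across scales.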
A Generalized Technique in Numerical Integration
NASA Astrophysics Data System (ADS)
Safouhi, Hassan
2018-02-01
Integration by parts is one of the most popular techniques in the analysis of integrals and is one of the simplest methods to generate asymptotic expansions of integral representations. The product of the technique is usually a divergent series formed from evaluating boundary terms; however, sometimes the remaining integral is also evaluated. Due to the successive differentiation and anti-differentiation required to form the series or the remaining integral, the technique is difficult to apply to problems more complicated than the simplest. In this contribution, we explore a generalized and formalized integration by parts to create equivalent representations to some challenging integrals. As a demonstrative archetype, we examine Bessel integrals, Fresnel integrals and Airy functions.
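As a textbook illustration of this mechanism (not one of the paper's archetypes), repeated integration by parts of the exponential integral collects the boundary terms into the familiar divergent asymptotic series and leaves a well-defined remaining integral:

E_1(x) = \int_x^{\infty} \frac{e^{-t}}{t}\,dt = e^{-x} \sum_{n=0}^{N-1} \frac{(-1)^n\, n!}{x^{n+1}} + (-1)^N N! \int_x^{\infty} \frac{e^{-t}}{t^{N+1}}\,dt .

Truncating the sum near its smallest term gives the usual optimally truncated asymptotic approximation for large x; the successive differentiations of 1/t are what the generalized, formalized scheme of the paper is designed to tame for harder integrands.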
Using real options analysis to support strategic management decisions
NASA Astrophysics Data System (ADS)
Kabaivanov, Stanimir; Markovska, Veneta; Milev, Mariyan
2013-12-01
Decision making is a complex process that requires taking into consideration multiple heterogeneous sources of uncertainty. Standard valuation and financial analysis techniques often fail to properly account for all these sources of risk, as well as for all sources of additional flexibility. In this paper we explore applications of a modified binomial tree method for real options analysis (ROA) in an effort to improve the decision-making process. Typical use cases of real options are analyzed, with an elaborate study of the applications and advantages that company management can derive from them. Numerical results based on extending the simple binomial tree approach to multiple sources of uncertainty are provided to demonstrate the improvement effects on management decisions.
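A minimal sketch of the kind of lattice the abstract builds on: a plain Cox-Ross-Rubinstein binomial tree valuing the option to defer an investment (an American-style call on the project value). All parameters are illustrative, and this is the standard single-uncertainty tree, not the paper's modified multi-uncertainty version:

```python
import numpy as np

def deferral_option_value(V0=100.0, I=95.0, r=0.05, sigma=0.3, T=2.0, n=200):
    """Value the option to defer an investment as an American call on
    the project value V, using a Cox-Ross-Rubinstein binomial lattice.
    (Illustrative parameters; single source of uncertainty only.)"""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))        # up factor
    d = 1.0 / u                            # down factor
    p = (np.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    disc = np.exp(-r * dt)

    j = np.arange(n + 1)
    V = V0 * u**j * d**(n - j)             # project values at maturity
    option = np.maximum(V - I, 0.0)        # invest at T or never

    # backward induction with an early-investment (early-exercise) check
    for step in range(n - 1, -1, -1):
        j = np.arange(step + 1)
        V = V0 * u**j * d**(step - j)
        cont = disc * (p * option[1:step + 2] + (1 - p) * option[:step + 1])
        option = np.maximum(V - I, cont)
    return option[0]

print(f"value of waiting: {deferral_option_value():.2f}")
```

The early-exercise comparison at each node is where managerial flexibility enters; static NPV corresponds to evaluating max(V0 - I, 0) at the root only.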
Metrics for comparison of crystallographic maps
Urzhumtsev, Alexandre; Afonine, Pavel V.; Lunin, Vladimir Y.; ...
2014-10-01
Numerical comparison of crystallographic contour maps is used extensively in structure solution and model refinement, analysis and validation. However, traditional metrics such as the map correlation coefficient (map CC, real-space CC or RSCC) sometimes contradict the results of visual assessment of the corresponding maps. This article explains such apparent contradictions and suggests new metrics and tools to compare crystallographic contour maps. The key to the new methods is rank scaling of the Fourier syntheses. The new metrics are complementary to the usual map CC and can be more helpful in map comparison, in particular when only some of their aspects, such as regions of high density, are of interest.
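A sketch of our reading of the rank-scaling idea: replace each map's grid values by their ranks before computing the correlation (a Spearman-type CC), which makes the metric insensitive to monotone distortions of the density scale. The function names and toy maps are illustrative:

```python
import numpy as np
from scipy.stats import rankdata

def map_cc(a, b):
    """Plain real-space correlation coefficient between two maps."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def rank_scaled_cc(a, b):
    """CC after replacing each map's values by their ranks; our reading
    of the rank-scaling idea (a Spearman-type correlation of the grids)."""
    return np.corrcoef(rankdata(a.ravel()), rankdata(b.ravel()))[0, 1]

rng = np.random.default_rng(1)
m1 = rng.normal(size=(32, 32, 32))
m2 = m1**3 + 0.1 * rng.normal(size=m1.shape)   # same ranking, distorted scale
print(map_cc(m1, m2), rank_scaled_cc(m1, m2))  # rank-scaled CC is higher
```

The cubing in the toy example preserves which regions are "high density" while changing the value distribution, exactly the situation in which the plain CC and visual assessment disagree.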
Investigation of the Dynamic Contact Angle Using a Direct Numerical Simulation Method.
Zhu, Guangpu; Yao, Jun; Zhang, Lei; Sun, Hai; Li, Aifen; Shams, Bilal
2016-11-15
A large amount of residual oil, which exists as isolated oil slugs, remains trapped in reservoirs after water flooding. Numerous numerical studies have been performed to investigate the fundamental flow mechanism of oil slugs to improve flooding efficiency. Dynamic contact angle models are usually introduced to simulate an accurate contact angle and meniscus displacement of oil slugs under a high capillary number. Nevertheless, when the capillary number is small it is unnecessary to introduce the dynamic contact angle model, because the resulting change in meniscus displacement is negligible. Therefore, a critical capillary number should be introduced to judge whether the dynamic contact angle model should be incorporated into simulations. In this study, a direct numerical simulation method is employed to simulate the oil slug flow in a capillary tube at the pore scale. The position of the interface between water and the oil slug is determined using the phase-field method. The capacity and accuracy of the model are validated using a classical benchmark: a dynamic capillary filling process. Then, different dynamic contact angle models and the factors that affect the dynamic contact angle are analyzed. The meniscus displacements of oil slugs with a dynamic contact angle and a static contact angle (SCA) are obtained during simulations, and the relative error between them is calculated automatically. The relative error limit has been defined to be 5%, beyond which the dynamic contact angle model needs to be incorporated into the simulation to approach the realistic displacement. Thus, the desired critical capillary number can be determined. A three-dimensional universal chart of the critical capillary number, as a function of static contact angle and viscosity ratio, is given to provide a guideline for oil slug simulation. A fitting formula is also presented for ease of use.
NASA Astrophysics Data System (ADS)
Li, Xiaomin; Guo, Xueli; Guo, Haiyan
2018-06-01
Robust numerical models that describe the complex behaviors of risers are needed because these constitute dynamically sensitive systems. This paper presents a simple and efficient algorithm for the nonlinear static and dynamic analyses of marine risers. The proposed approach uses the vector form intrinsic finite element (VFIFE) method, which is based on vector mechanics theory and numerical calculation. In this method, the risers are described by a set of particles directly governed by Newton's second law and connected by weightless elements that can only resist internal forces. The method does not require the integration of the stiffness matrix, nor does it need iterations to solve the governing equations. Due to these advantages, the method can easily add or remove elements and change the boundary conditions, thus offering an innovative approach to solving nonlinear behaviors such as large deformation and large displacement. To prove the feasibility of the VFIFE method in the analysis of risers, rigid and flexible risers, belonging to two different categories of marine risers that usually differ in modeling and solution methods, are employed in the present study. In the analysis, the plane beam element is adopted to simulate the interaction forces between the particles, and the axial force, shear force, and bending moment are also considered. The results are compared with those of the conventional finite element method (FEM) and those reported in the related literature. The findings reveal that both rigid and flexible risers can be modeled within a similar unified analysis model and that the VFIFE method is feasible for solving problems related to the complex behaviors of marine risers.
ConvAn: a convergence analyzing tool for optimization of biochemical networks.
Kostromins, Andrejs; Mozga, Ivars; Stalidzans, Egils
2012-01-01
Dynamic models of biochemical networks are usually described as systems of nonlinear differential equations. For the optimization of models, for the purpose of parameter estimation or the design of new properties, mainly numerical methods are used. That causes problems of optimization predictability, as most numerical optimization methods have stochastic properties and the convergence of the objective function to the global optimum is hardly predictable. Determination of a suitable optimization method and the necessary duration of optimization becomes critical when evaluating a high number of combinations of adjustable parameters or in the case of large dynamic models. This task is complex due to the variety of optimization methods, software tools and nonlinearity features of models in different parameter spaces. The software tool ConvAn is developed to analyze statistical properties of convergence dynamics for optimization runs with a particular optimization method, model, software tool, set of optimization method parameters and number of adjustable parameters of the model. The convergence curves can be normalized automatically to enable comparison of different methods and models on the same scale. With the help of the biochemistry-adapted graphical user interface of ConvAn, it is possible to compare different optimization methods in terms of their ability to find the global optimum, or values close to it, as well as the computational time necessary to reach them. It is possible to estimate the optimization performance for different numbers of adjustable parameters. The functionality of ConvAn enables statistical assessment of the necessary optimization time depending on the required optimization accuracy. Optimization methods that are not suitable for a particular optimization task can be rejected if they have poor repeatability or convergence properties. The software ConvAn is freely available at www.biosystems.lv/convan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
1990-06-01
procedures resulted in varying degrees of vessel wall injuries that occurred at the site of arterial wall dilation. These injuries included intimal splitting, subintimal dissection, medial tears, and submedial dissection, as shown in Figure 3 (Duber et al., 1986). [Figure 3: arterial wall layers, tunica adventitia and tunica media.] ... of the leg and limbs, whereas fatty and fibrofatty plaques are usually deposited in the coronary arteries. Further numerical experiments were
Maier, Richard H; Maier, Christina J; Hintner, Helmut; Bauer, Johann W; Onder, Kamil
2012-12-01
Many functional proteomic experiments make use of high-throughput technologies such as mass spectrometry combined with two-dimensional polyacrylamide gel electrophoresis and the yeast two-hybrid (Y2H) system. Currently there are even automated versions of the Y2H system available that can be used for proteome-wide research. The Y2H system has the capacity to deliver a profusion of Y2H-positive colonies from a single library screen. However, subsequent analysis of these numerous primary candidates with complementary methods can be overwhelming. Therefore, a method to select the most promising candidates with strong interaction properties might be useful to reduce the number of candidates requiring further analysis. The method described here offers a new way of quantifying and rating the performance of positive Y2H candidates. The novelty lies in the detection and measurement of mRNA expression instead of proteins or conventional Y2H genetic reporters. This method correlates well with the direct genetic reporter readouts usually used in the Y2H system, and has greater sensitivity for detecting and quantifying protein-protein interactions (PPIs) than the conventional Y2H system, as demonstrated by detection of the Y2H false-negative PPI of RXR/PPARG. Approximately 20% of all proteins, the so-called autoactivators, are not suitable for the Y2H system. A further advantage of this method is the possibility of evaluating molecules that usually cannot be analyzed in the Y2H system, exemplified by a VDR-LXXLL motif peptide interaction. Copyright © 2012 Elsevier Inc. All rights reserved.
Jamzad, Amoon; Setarehdan, Seyed Kamaledin
2014-04-01
The twinkling artifact is an undesired phenomenon within color Doppler sonograms that usually appears at the site of internal calcifications. Since the appearance of the twinkling artifact is correlated with the roughness of the calculi, noninvasive roughness estimation of internal stones may be considered a potential twinkling artifact application. This article proposes a novel quantitative approach for the measurement and analysis of twinkling artifact data for roughness estimation. A phantom was developed with 7 quantified levels of roughness. The Doppler system was initially calibrated by the proposed procedure to facilitate the analysis. A total of 1050 twinkling artifact images were acquired from the phantom, and 32 novel numerical measures were introduced and computed for each image. The measures were then ranked on the basis of roughness quantification ability using different methods. The performance of the proposed twinkling artifact-based surface roughness quantification method was finally investigated for different combinations of features and classifiers. Eleven features were shown to be the most efficient numerical twinkling artifact measures in roughness characterization. The linear classifier outperformed other methods for twinkling artifact classification. The pixel count measures produced better results among the other categories. The sequential selection method showed higher accuracy than other individual rankings. The best roughness recognition average accuracy of 98.33% was obtained with the first 5 principal components and the linear classifier. The proposed twinkling artifact analysis method could recognize the phantom surface roughness with an average accuracy of 98.33%. This method may also be applicable for noninvasive calculi characterization in treatment management.
A new method of passive modifications for partial frequency assignment of general structures
NASA Astrophysics Data System (ADS)
Belotti, Roberto; Ouyang, Huajiang; Richiedei, Dario
2018-01-01
The assignment of a subset of natural frequencies to vibrating systems can be conveniently achieved by means of suitable structural modifications. It has been observed that such an approach usually leads to the undesired change of the unassigned natural frequencies, which is a phenomenon known as frequency spill-over. Such an issue has been dealt with in the literature only in simple specific cases. In this paper, a new and general method is proposed that aims to assign a subset of natural frequencies with low spill-over. The optimal structural modifications are determined through a three-step procedure that considers both the prescribed eigenvalues and the feasibility constraints, assuring that the obtained solution is physically realizable. The proposed method is therefore applicable to very general vibrating systems, such as those obtained through the finite element method. The numerical difficulties that may occur as a result of employing the method are also carefully addressed. Finally, the capabilities of the method are validated in three test-cases in which both lumped and distributed parameters are modified to obtain the desired eigenvalues.
A modified sparse reconstruction method for three-dimensional synthetic aperture radar image
NASA Astrophysics Data System (ADS)
Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin
2018-03-01
There is increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires long computing times and large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. Then the slices are reconstructed by the modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses the hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm, and uses the Newton direction instead of the steepest descent direction, which can speed up the convergence rate of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with the existing 3-D sparse imaging method, performs better in both reconstruction quality and reconstruction time.
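For orientation, here is a sketch of the baseline SL0 iteration (Gaussian kernel, steepest-descent step plus projection onto the data constraint); the paper's modification replaces the Gaussian with a hyperbolic tangent approximation and the descent step with a Newton direction, neither of which is reproduced here:

```python
import numpy as np

def sl0(A, y, sigma_min=1e-3, sigma_decrease=0.5, mu=2.0, inner_iters=3):
    """Baseline smoothed-l0 (SL0) sparse recovery: maximize a Gaussian
    smooth approximation of the l0 'norm' while staying on {x : Ax = y}.
    The paper's variant swaps the Gaussian for tanh and the steepest
    descent step for a Newton direction (not shown)."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                        # minimum-l2 starting point
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            delta = x * np.exp(-x**2 / (2 * sigma**2))  # gradient of smoothed l0
            x = x - mu * delta                          # steepest descent
            x = x - A_pinv @ (A @ x - y)                # project onto Ax = y
        sigma *= sigma_decrease                         # anneal the smoothing
    return x

rng = np.random.default_rng(2)
n, m, k = 40, 100, 5
A = rng.normal(size=(n, m))
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.normal(size=k)
x_hat = sl0(A, A @ x_true)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

Annealing sigma from large to small is what lets the smooth surrogate track the true l0 objective without getting trapped, and it is also where a faster-converging Newton step pays off.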
Covariant Conformal Decomposition of Einstein Equations
NASA Astrophysics Data System (ADS)
Gourgoulhon, E.; Novak, J.
It has been shown [1,2] that the usual 3+1 form of Einstein's equations may be ill-posed. This result has been previously observed in numerical simulations [3,4]. We present a 3+1 type formalism inspired by these works to decompose Einstein's equations. This decomposition is motivated by the aim of stable numerical implementation and resolution of the equations. We introduce the conformal 3-"metric" (scaled by the determinant of the usual 3-metric), which is a tensor density of weight -2/3. The Einstein equations are then derived in terms of this "metric", of the conformal extrinsic curvature and in terms of the associated derivative. We also introduce a flat 3-metric (the asymptotic metric for isolated systems) and the associated derivative. Finally, the generalized Dirac gauge (introduced by Smarr and York [5]) is used in this formalism and some examples of formulation of Einstein's equations are shown.
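For concreteness, the unit-determinant conformal metric consistent with the description above is (a standard choice, written in the obvious notation rather than quoted from the paper):

\tilde{\gamma}_{ij} := \gamma^{-1/3}\, \gamma_{ij}, \qquad \gamma := \det(\gamma_{ij}), \qquad \det(\tilde{\gamma}_{ij}) = 1,

so that under a relabelling of coordinates \tilde{\gamma}_{ij} transforms as a tensor density of weight -2/3, as stated in the abstract, and the determinant degree of freedom is carried separately by \gamma.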
Joining direct and indirect inverse calibration methods to characterize karst, coastal aquifers
NASA Astrophysics Data System (ADS)
De Filippis, Giovanna; Foglia, Laura; Giudici, Mauro; Mehl, Steffen; Margiotta, Stefano; Negri, Sergio
2016-04-01
Parameter estimation is extremely relevant for accurate simulation of groundwater flow. Parameter values for models of large-scale catchments are usually derived from a limited set of field observations, which can rarely be obtained in a straightforward way from field tests or laboratory measurements on samples, due to a number of factors, including measurement errors and inadequate sampling density. Indeed, a wide gap exists between the local scale, at which most of the observations are taken, and the regional or basin scale, at which the planning and management decisions are usually made. For this reason, the use of geologic information and field data is generally made by zoning the parameter fields. However, pure zoning does not perform well in the case of fairly complex aquifers and this is particularly true for karst aquifers. In fact, the support of the hydraulic conductivity measured in the field is normally much smaller than the cell size of the numerical model, so it should be upscaled to a scale consistent with that of the numerical model discretization. Automatic inverse calibration is a valuable procedure to identify model parameter values by conditioning on observed, available data, limiting the subjective evaluations introduced with the trial-and-error technique. Many approaches have been proposed to solve the inverse problem. Generally speaking, inverse methods fall into two groups: direct and indirect methods. Direct methods allow determination of hydraulic conductivities from the groundwater flow equations which relate the conductivity and head fields. Indirect methods, instead, can handle any type of parameters, independently from the mathematical equations that govern the process, and condition parameter values and model construction on measurements of model output quantities, compared with the available observation data, through the minimization of an objective function. Both approaches have pros and cons, depending also on model complexity. For this reason, a joint procedure is proposed by merging both direct and indirect approaches, thus taking advantage of their strengths, first among them the possibility to get a hydraulic head distribution all over the domain, instead of a zonation. Pros and cons of such an integrated methodology, so far unexplored to the authors' knowledge, are derived after application to a highly heterogeneous karst, coastal aquifer located in southern Italy.
Solving the hypersingular boundary integral equation for the Burton and Miller formulation.
Langrenne, Christophe; Garcia, Alexandre; Bonnet, Marc
2015-11-01
This paper presents an easy numerical implementation of the Burton and Miller (BM) formulation, where the hypersingular Helmholtz integral is regularized by identities from the associated Laplace equation, thus needing only the evaluation of weakly singular integrals. The Helmholtz equation and its normal derivative are combined directly, with combinations at edge or corner collocation nodes not used when the surface is not smooth. The hypersingular operators arising in this process are regularized and then evaluated by an indirect procedure based on discretized versions of the Calderón identities linking the integral operators for associated Laplace problems. The method is valid for acoustic radiation and scattering problems involving arbitrarily shaped three-dimensional bodies. Unlike other approaches using direct evaluation of hypersingular integrals, collocation points still coincide with mesh nodes, as is usual when using conforming elements. Using higher-order shape functions (with the boundary element method model size kept fixed) reduces the overall numerical integration effort while increasing the solution accuracy. To reduce the condition number of the resulting BM formulation at low frequencies, a regularized version α = ik/(k² + λ) of the classical BM coupling factor α = i/k is proposed. Comparisons with the Combined Helmholtz Integral Equation Formulation (CHIEF) method of Schenck are made for four example configurations, two of them featuring non-smooth surfaces.
Multigrid Methods for Fully Implicit Oil Reservoir Simulation
NASA Technical Reports Server (NTRS)
Molenaar, J.
1996-01-01
In this paper we consider the simultaneous flow of oil and water in reservoir rock. This displacement process is modeled by two basic equations: the material balance or continuity equations and the equation of motion (Darcy's law). For the numerical solution of this system of nonlinear partial differential equations there are two approaches: the fully implicit or simultaneous solution method and the sequential solution method. In the sequential solution method the system of partial differential equations is manipulated to give an elliptic pressure equation and a hyperbolic (or parabolic) saturation equation. In the IMPES approach the pressure equation is first solved, using values for the saturation from the previous time level. Next the saturations are updated by some explicit time stepping method; this implies that the method is only conditionally stable. For the numerical solution of the linear, elliptic pressure equation multigrid methods have become an accepted technique. On the other hand, the fully implicit method is unconditionally stable, but it has the disadvantage that in every time step a large system of nonlinear algebraic equations has to be solved. The most time-consuming part of any fully implicit reservoir simulator is the solution of this large system of equations. Usually this is done by Newton's method. The resulting systems of linear equations are then either solved by a direct method or by some conjugate gradient type method. In this paper we consider the possibility of applying multigrid methods to the iterative solution of these systems of nonlinear equations. There are two ways of using multigrid for this job: either we use a nonlinear multigrid method, or we use a linear multigrid method to deal with the linear systems that arise in Newton's method. So far only a few authors have reported on the use of multigrid methods for fully implicit simulations: a two-level FAS algorithm has been presented for the black-oil equations, and linear multigrid for two-phase flow problems with strong heterogeneities and anisotropies has been studied. Here we consider both possibilities. Moreover, we present a novel way of constructing the coarse grid correction operator in linear multigrid algorithms. This approach has the advantage that it preserves the sparsity pattern of the fine grid matrix and can be extended to systems of equations in a straightforward manner. We compare the linear and nonlinear multigrid algorithms by means of a numerical experiment.
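A compact illustration of the linear-multigrid building block mentioned above: a V-cycle with weighted-Jacobi smoothing for a 1D Poisson model problem, standing in for one Jacobian solve inside Newton's method. This is a generic sketch, not the paper's coarse-grid construction:

```python
import numpy as np

def smooth(u, f, h, sweeps=3, omega=2/3):
    """Weighted-Jacobi sweeps for -u'' = f with Dirichlet ends."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + \
                  omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
    return u

def v_cycle(u, f, h):
    """One multigrid V-cycle for the 1D Poisson model problem, the kind
    of linear solve a fully implicit simulator performs for each
    Jacobian system inside Newton's method (model problem only)."""
    u = smooth(u, f, h)
    if len(u) <= 3:                                   # coarsest grid reached
        return u
    r = np.zeros_like(u)                              # fine-grid residual
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / h**2
    rc = np.zeros((len(u) + 1) // 2)                  # full-weighting restriction
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2*r[2:-1:2] + r[3::2])
    ec = v_cycle(np.zeros_like(rc), rc, 2*h)          # coarse-grid correction
    e = np.zeros_like(u)                              # linear prolongation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, f, h)

n = 257                                               # 2**8 + 1 grid points
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)                      # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, x[1] - x[0])
print("max error:", np.abs(u - np.sin(np.pi * x)).max())  # ~ discretization error
```

The grid-independent convergence rate of such cycles is what makes multigrid attractive compared to conjugate-gradient-type solvers as the fine grid is refined.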
NASA Astrophysics Data System (ADS)
Doummar, Joanna; Kassem, Assaad
2017-04-01
In the framework of a three-year PEER (USAID/NSF) funded project, flow in a karst system in Lebanon (Assal) dominated by snow and semi-arid conditions was simulated and successfully calibrated using an integrated numerical model (MIKE SHE 2016) based on high-resolution input data and detailed catchment characterization. Point source infiltration and fast flow pathways were simulated by a bypass function and a highly conductive lens, respectively. The approach consisted of identifying all the factors used in qualitative vulnerability methods (COP, EPIK, PI, DRASTIC, GOD) applied in karst systems and assessing their influence on recharge signals in the different hydrological karst compartments (atmosphere, unsaturated zone and saturated zone) based on the integrated numerical model. These parameters are usually attributed different weights according to their estimated impact on groundwater vulnerability. The aim of this work is to quantify the importance of each of these parameters and to outline parameters that are not accounted for in standard methods but that might play a role in the vulnerability of a system. The spatial distribution of the detailed evapotranspiration, infiltration, and recharge signals from atmosphere to unsaturated zone to saturated zone was compared and contrasted among different surface settings and under varying flow conditions (e.g., varying slopes, land cover, precipitation intensity, and soil properties, as well as point source infiltration). Furthermore, a sensitivity analysis of individual or coupled major parameters allows quantifying their impact on recharge and, indirectly, on vulnerability. The preliminary analysis yields a new methodology that accounts for most of the factors influencing vulnerability while refining the weights attributed to each one of them, based on a quantitative approach.
F--Ray: A new algorithm for efficient transport of ionizing radiation
NASA Astrophysics Data System (ADS)
Mao, Yi; Zhang, J.; Wandelt, B. D.; Shapiro, P. R.; Iliev, I. T.
2014-04-01
We present a new algorithm for the 3D transport of ionizing radiation, called F-Ray.
Scope of Gradient and Genetic Algorithms in Multivariable Function Optimization
NASA Technical Reports Server (NTRS)
Shaykhian, Gholam Ali; Sen, S. K.
2007-01-01
Global optimization of a multivariable function - constrained by bounds specified on each variable and also unconstrained - is an important problem with several real world applications. Deterministic methods such as the gradient algorithms as well as randomized methods such as the genetic algorithms may be employed to solve these problems. In fact, there are optimization problems where a genetic algorithm/an evolutionary approach is preferable, at least from the point of view of the quality (accuracy) of the results. From the cost (complexity) point of view, both gradient and genetic approaches are usually polynomial-time; there are no serious differences in this regard, i.e., from the computational complexity point of view. However, for certain types of problems, such as those with unacceptably erroneous numerical partial derivatives and those with physically amplified analytical partial derivatives whose numerical evaluation involves undesirable errors and/or is messy, a genetic (stochastic) approach should be a better choice. We have presented here the pros and cons of both approaches so that the concerned reader/user can decide which approach is most suited for the problem at hand. Also, for a function which is known in tabular form instead of an analytical form, as is often the case in an experimental environment, we attempt to provide an insight into the approaches, focusing our attention on accuracy. Such an insight will help one to decide which method, out of several available methods, should be employed to obtain the best (least error) output.
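A toy side-by-side of the two families on a smooth two-variable objective, with the gradient obtained by central differences (the step where noisy or "messy" derivatives would favour the genetic route). Both routines are deliberately bare-bones sketches, not production optimizers:

```python
import numpy as np

rng = np.random.default_rng(3)
f = lambda p: (p[0] - 1)**2 + 5 * (p[1] + 2)**2      # toy objective, min at (1, -2)

def num_grad(f, p, eps=1e-6):
    """Central-difference gradient: the step where erroneous numerical
    partial derivatives would hurt a gradient method."""
    g = np.zeros_like(p)
    for i in range(len(p)):
        e = np.zeros_like(p)
        e[i] = eps
        g[i] = (f(p + e) - f(p - e)) / (2 * eps)
    return g

def gradient_descent(f, p0, lr=0.05, iters=500):
    p = np.array(p0, float)
    for _ in range(iters):
        p -= lr * num_grad(f, p)
    return p

def genetic(f, bounds, pop=60, gens=200, sigma=0.2):
    """Bare-bones evolutionary search within bounds: rank selection of
    the best half, Gaussian mutation, no crossover."""
    lo, hi = np.array(bounds).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        P = P[np.argsort([f(p) for p in P])][:pop // 2]   # selection
        children = P + sigma * rng.normal(size=P.shape)    # mutation
        P = np.clip(np.vstack([P, children]), lo, hi)
    return min(P, key=f)

print(gradient_descent(f, [0.0, 0.0]))      # converges near (1, -2)
print(genetic(f, [(-5, 5), (-5, 5)]))       # also near (1, -2), derivative-free
```

On this smooth objective gradient descent wins on accuracy per function evaluation; perturbing f with noise degrades num_grad long before it degrades the rank-based selection, which is the trade-off the abstract describes.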
DOE Office of Scientific and Technical Information (OSTI.GOV)
Souza, R.F. de; Yang, D.-Ke; Lenzi, E.K.
2014-07-15
An analytical expression for the relaxation time of a nematic liquid crystal is obtained for the first time by considering the influence of surface viscosity, anchoring energy strength and cell gap, validated numerically by using the so-called relaxation method. This general equation for the molecular response time (τ₀) was derived for a vertically aligned cell by solving an eigenvalue equation coming from the usual balance-of-torque equation in the Derzhanskii and Petrov formulation, recovering the usual equations in the appropriate limit. The results show that τ ∼ d^b, where b = 2 is observed only for strongly anchored cells, while for moderately to weakly anchored cells the exponent lies between 1 and 2, depending on both surface viscosity and anchoring strength. We found that the surface viscosity is important when calculating the response time, especially for thin cells, critical for liquid crystal devices. The surface viscosity's effect on the optical response time with pretilt is also explored. Our results bring new insights about the role of surface viscosity and its effects in applied physics. Highlights: • The relaxation of nematic liquid crystals is calculated by taking the surface viscosity into account. • An analytical expression for the relaxation time depending on surface viscosity, anchoring strength and cell gap is obtained. • The results are numerically verified. • Surface viscosity is crucial for thin and weakly anchored cells. • The effect on optical time and pretilt angle is also studied.
Earth Walk: Touring Our Planet's Inner Structure.
ERIC Educational Resources Information Center
Muller, Eric P.
1995-01-01
Describes an excursion that effectively helps students visualize the earth's immense size and numerous structures without the usual scale and ratio distortions found in most textbooks and allows students to compare their body's height to a scaled-down earth. (JRH)
Frequency-radial duality based photoacoustic image reconstruction.
Akramus Salehin, S M; Abhayapala, Thushara D
2012-07-01
Photoacoustic image reconstruction algorithms are usually slow due to the large sizes of the data that are processed. This paper proposes a method for exact photoacoustic reconstruction for the spherical geometry, in the limiting case of a continuous aperture and infinite measurement bandwidth, that is faster than existing methods, namely (1) the backprojection method and (2) the Norton-Linzer method [S. J. Norton and M. Linzer, "Ultrasonic reflectivity imaging in three dimensions: Exact inverse scattering solution for plane, cylindrical and spherical apertures," IEEE Trans. Biomed. Eng. BME-28, 202-220 (1981)]. The initial pressure distribution is expanded using a spherical Fourier-Bessel series. The proposed method estimates the Fourier-Bessel coefficients and subsequently recovers the pressure distribution. A concept of frequency-radial duality is introduced that separates the information from the different radial basis functions by using frequencies corresponding to the Bessel zeros. This approach provides a means to analyze the information obtained for a given measurement bandwidth. Using order analysis and numerical experiments, the proposed method is shown to be faster than both the backprojection and the Norton-Linzer methods. Furthermore, the images reconstructed using the proposed methodology were of similar quality to those of the Norton-Linzer method and were better than those of the approximate backprojection method.
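Our reading of the expansion referred to above, in standard notation (assumed, not quoted from the paper): the initial pressure inside the measurement sphere of radius R is written as

p_0(r,\theta,\phi) = \sum_{l,m} \sum_{n} c_{lmn}\, j_l(k_{ln} r)\, Y_{lm}(\theta,\phi), \qquad j_l(k_{ln} R) = 0,

so each radial basis function is tied to a scaled Bessel zero k_{ln}. Sampling the measured signals at the corresponding temporal frequencies \omega = c\, k_{ln} then isolates one radial order at a time, which is how the frequency axis becomes "dual" to the radial index in the method's title.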
Computational reacting gas dynamics
NASA Technical Reports Server (NTRS)
Lam, S. H.
1993-01-01
In the study of high speed flows at high altitudes, such as those encountered by re-entry spacecraft, the interaction of chemical reactions and other non-equilibrium processes in the flow field with the gas dynamics is crucial. Generally speaking, problems of this level of complexity must resort to numerical methods for solutions, using sophisticated computational fluid dynamics (CFD) codes. The difficulties introduced by reacting gas dynamics can be classified under three distinct headings: (1) the usually inadequate knowledge of the reaction rate coefficients in the non-equilibrium reaction system; (2) the vastly larger number of unknowns involved in the computation and the expected stiffness of the equations; and (3) the interpretation of the detailed reacting CFD numerical results. The research performed accepts the premise that reacting flows of practical interest in the future will in general be too complex or 'intractable' for traditional analytical developments. The power of modern computers must be exploited. However, instead of focusing solely on the construction of numerical solutions of full-model equations, attention is also directed to the 'derivation' of the simplified model from the given full-model. In other words, the present research aims to utilize computations to do tasks which have traditionally been done by skilled theoreticians: to reduce an originally complex full-model system into an approximate but otherwise equivalent simplified model system. The tacit assumption is that once the appropriate simplified model is derived, the interpretation of the detailed numerical reacting CFD results will become much easier. The approach of the research is called computational singular perturbation (CSP).
3D nozzle flow simulations including state-to-state kinetics calculation
NASA Astrophysics Data System (ADS)
Cutrone, L.; Tuttafesta, M.; Capitelli, M.; Schettino, A.; Pascazio, G.; Colonna, G.
2014-12-01
In supersonic and hypersonic flows, thermal and chemical non-equilibrium is one of the fundamental aspects that must be taken into account for the accurate characterization of the plasma. In this paper, we present an optimized methodology for approaching plasma numerical simulation by state-to-state kinetics calculations in a fully 3D Navier-Stokes CFD solver. Numerical simulations of an expanding flow are presented, aimed at comparing the behavior of state-to-state chemical kinetics models with that of the macroscopic thermochemical non-equilibrium models usually used in the numerical computation of high-temperature hypersonic flows. The comparison focuses both on the differences in the numerical results and on the computational effort associated with each approach.
Judgment under Uncertainty: Heuristics and Biases.
Tversky, A; Kahneman, D
1974-09-27
This article described three heuristics that are employed in making judgements under uncertainty: (i) representativeness, which is usually employed when people are asked to judge the probability that an object or event A belongs to class or process B; (ii) availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development; and (iii) adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available. These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors. A better understanding of these heuristics and of the biases to which they lead could improve judgements and decisions in situations of uncertainty.
Generalized Fourier analyses of the advection-diffusion equation - Part I: one-dimensional domains
NASA Astrophysics Data System (ADS)
Christon, Mark A.; Martinez, Mario J.; Voth, Thomas E.
2004-07-01
This paper presents a detailed multi-methods comparison of the spatial errors associated with finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. The errors are reported in terms of non-dimensional phase and group speed, discrete diffusivity, artificial diffusivity, and grid-induced anisotropy. It is demonstrated that Fourier analysis provides an automatic process for separating the discrete advective operator into its symmetric and skew-symmetric components and characterizing the spectral behaviour of each operator. For each of the numerical methods considered, asymptotic truncation error and resolution estimates are presented for the limiting cases of pure advection and pure diffusion. It is demonstrated that streamline upwind Petrov-Galerkin and its control-volume finite element analogue, the streamline upwind control-volume method, produce both an artificial diffusivity and a concomitant phase speed adjustment in addition to the usual semi-discrete artifacts observed in the phase speed, group speed and diffusivity. The Galerkin finite element method and its streamline upwind derivatives are shown to exhibit super-convergent behaviour in terms of phase and group speed when a consistent mass matrix is used in the formulation. In contrast, the CVFEM method and its streamline upwind derivatives yield strictly second-order behaviour. In Part II of this paper, we consider two-dimensional semi-discretizations of the advection-diffusion equation and also assess the effects of grid-induced anisotropy observed in the non-dimensional phase speed, and the discrete and artificial diffusivities. Although this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common analysis framework. Published in 2004 by John Wiley & Sons, Ltd.
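As a reminder of the type of result such an analysis produces, the semi-discrete second-order central difference applied to pure advection u_t + c u_x = 0 on a grid of spacing h yields the non-dimensional phase and group speeds

\frac{c^{*}}{c} = \frac{\sin kh}{kh}, \qquad \frac{c_g^{*}}{c} = \cos kh,

so well-resolved waves (kh \to 0) propagate correctly, marginally resolved waves lag, and at kh = \pi the group speed reverses sign. (A standard result, included here for orientation; the paper tabulates the analogous expressions for each scheme it compares.)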
NASA Astrophysics Data System (ADS)
Yousefzadeh, M.; Battiato, I.
2017-12-01
Flow and reactive transport problems in porous media often involve complex geometries with stationary or evolving boundaries due to absorption and dissolution processes. Grid-based methods (e.g. finite volume, finite element, etc.) are a vital tool for studying these problems. Yet, implementing these methods requires one to answer first the question of what type of grid to use. Among the possible answers, Cartesian grids are one of the most attractive options, as they possess simple discretization stencils and are usually straightforward to generate at almost no computational cost. The Immersed Boundary Method (IBM), a Cartesian-grid-based methodology, maintains most of the useful features of structured grids while exhibiting a high level of resilience in dealing with complex geometries. These features make it increasingly attractive for modeling transport in evolving porous media, as the cost of grid generation is greatly reduced. Yet, stability issues and severe time-step restrictions due to explicit-in-time implementations, combined with limited studies on the implementation of Neumann (constant flux) and linear and nonlinear Robin (e.g. reaction) boundary conditions (BCs), have significantly limited the applicability of IBMs to transport in porous media. We have developed an implicit IBM capable of handling all types of BCs and addressed several numerical issues, including unconditional stability criteria, compactness and reduction of spurious oscillations near the immersed boundary. We tested the method for several transport and flow scenarios, including dissolution processes in porous media, and demonstrate its capabilities. Successful validation against both experimental and numerical data has been carried out.
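A minimal explicit sketch of the Cartesian ghost-cell idea for a Dirichlet boundary that falls between grid nodes; the method of the abstract is implicit (hence unconditionally stable) and also handles Neumann and Robin conditions, none of which is reproduced here:

```python
import numpy as np

def heat_ibm_dirichlet(xb=0.643, ub=1.0, n=101, alpha=1.0, steps=20000):
    """1D heat equation on a Cartesian grid whose right boundary x = xb
    falls between grid nodes (an 'immersed' Dirichlet boundary). The
    ghost node just outside the fluid is extrapolated so the linearly
    interpolated solution equals ub exactly at xb. Explicit in time;
    small cut fractions (theta -> 0) would tighten the stability limit,
    the small-cell problem an implicit IBM avoids."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    dt = 0.4 * h**2 / alpha                       # explicit diffusion limit
    ib = int(np.searchsorted(x, xb))              # ghost node index, x[ib] >= xb
    theta = (xb - x[ib - 1]) / h                  # boundary cut fraction in (0, 1]
    u = np.zeros(n)
    for _ in range(steps):
        u[ib] = u[ib - 1] + (ub - u[ib - 1]) / theta   # ghost value
        lap = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
        u[1:ib] += alpha * dt * lap[:ib - 1]           # update fluid nodes only
        u[0] = 0.0                                     # regular Dirichlet end
    return x[:ib], u[:ib]

xs, us = heat_ibm_dirichlet()
# steady state should be the linear profile ub * x / xb
print("error vs linear profile:", np.abs(us - 1.0 * xs / 0.643).max())
```

The single extrapolation line is the entire boundary treatment: no body-fitted mesh is built, which is what makes the approach cheap when the boundary evolves by dissolution.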
Finite-difference modeling with variable grid-size and adaptive time-step in porous media
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yin, Xingyao; Wu, Guochen
2014-04-01
Forward modeling of elastic wave propagation in porous media is of great importance for understanding and interpreting the influences of rock properties on the characteristics of the seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with a global spatial grid-size and time-step; it incurs a large computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist underground. To overcome this handicap, this paper develops a staggered-grid finite-difference scheme for elastic wave modeling in porous media that combines variable grid-size and adaptive time-step. Variable finite-difference coefficients and wavefield interpolation are used to realize the transition of wave propagation between regions of different grid-size. The accuracy and efficiency of the algorithm are shown by numerical examples. The proposed method offers low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.
On the Maxwellian distribution, symmetric form, and entropy conservation for the Euler equations
NASA Technical Reports Server (NTRS)
Deshpande, S. M.
1986-01-01
The Euler equations of gas dynamics have some very interesting properties in that the flux vector is a homogeneous function of the unknowns and the equations can be cast in symmetric hyperbolic form and satisfy the entropy conservation. The Euler equations are the moments of the Boltzmann equation of the kinetic theory of gases when the velocity distribution function is a Maxwellian. The present paper shows the relationship between the symmetrizability and the Maxwellian velocity distribution. The entropy conservation is in terms of the H-function, which is a slight modification of the H-function first introduced by Boltzmann in his famous H-theorem. In view of the H-theorem, it is suggested that the development of total H-diminishing (THD) numerical methods may be more profitable than the usual total variation diminishing (TVD) methods for obtaining wiggle-free solutions.
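In standard notation (gas constant R, density \rho, bulk velocity u, temperature T), the Maxwellian referred to above is

f_M(v) = \frac{\rho}{(2\pi R T)^{3/2}} \exp\!\left( -\frac{|v-u|^{2}}{2RT} \right),

and the Euler equations are its moments against the collision invariants \psi \in \{1,\; v,\; |v|^{2}/2\}:

\frac{\partial}{\partial t} \int \psi\, f_M\, dv + \nabla_x \cdot \int v\, \psi\, f_M\, dv = 0.

(Standard kinetic-theory relations, stated here for orientation; the paper's H-function and symmetrization arguments build directly on them.)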
Characterisation of Feature Points in Eye Fundus Images
NASA Astrophysics Data System (ADS)
Calvo, D.; Ortega, M.; Penedo, M. G.; Rouco, J.
The retinal vessel tree adds decisive knowledge in the diagnosis of numerous ophthalmologic pathologies such as hypertension or diabetes. One of the problems in the analysis of the retinal vessel tree is the lack of information on vessel depth, as the image acquisition usually leads to a 2D image. This situation creates a scenario where two different vessels coinciding at a point could be interpreted as a vessel forking into a bifurcation. That is why, for tracking and labelling the retinal vascular tree, bifurcations and crossovers of vessels are considered feature points. In this work a novel method for the detection and classification of these retinal vessel tree feature points is introduced. The method applies image techniques such as filters or thinning to obtain an adequate structure in which to detect the points, and classifies these points by studying their environment. The methodology is tested using a standard database and the results show high classification capabilities.
Hadron spectrum of quenched QCD on a 32³ × 64 lattice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Seyong; Sinclair, D.K.
1992-10-01
Preliminary results from a hadron spectrum calculation of quenched quantum chromodynamics on a 32³ × 64 lattice at β = 6.5 are reported. The hadron spectrum calculation is done with staggered quarks of masses m_q a = 0.001, 0.005 and 0.0025. We use two different sources in order to be able to extract the Δ mass in addition to the usual local light hadron masses. The numerical simulation is executed on the Intel Touchstone Delta computer. The peak speed of the Delta for a 16 × 32 mesh configuration is 41 Gflops for 32-bit precision. The sustained speed for our updating code is 9.5 Gflops. A multihit Metropolis algorithm combined with an over-relaxation method is used in the updating, and the conjugate gradient method is employed for Dirac matrix inversion. Configurations are stored every 1000 sweeps.
Dry-growth of silver single-crystal nanowires from porous Ag structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chuantong, E-mail: chenchuantong@sanken.osaka-u.ac.jp; Nagao, Shijo; Jiu, Jinting
A fabrication method for large-scale single-crystal Ag nanowires is introduced without any chemical synthesis in wet processes, which usually generates fivefold-twinned nanowires of fcc metals. Dense single-crystal nanowires grow on a mechanically polished surface of a micro-porous Ag structure, which is created from Ag micro-particles. The diameter and the length of the nanowires can be controlled simply by changing the temperature and the duration of the heating during the nanowire growth in air. The unique growth mechanism is described in detail, based on stress-induced migration accelerated by the micro-porous structure in which the origin of Ag nanowire growth is incubated. Transmission electron microscopy analysis of the single-crystal nanowires is also presented. This simple method offers an alternative preparation route for metallic nanowires, especially those with a single-crystal structure, for numerous applications.
Antifreeze glycopeptide analogues: microwave-enhanced synthesis and functional studies.
Heggemann, Carolin; Budke, Carsten; Schomburg, Benjamin; Majer, Zsuzsa; Wissbrock, Marco; Koop, Thomas; Sewald, Norbert
2010-01-01
Antifreeze glycoproteins enable life at temperatures below the freezing point of physiological solutions. They usually consist of the repetitive tripeptide unit (-Ala-Ala-Thr-) with the disaccharide alpha-D-galactosyl-(1-3)-beta-N-acetyl-D-galactosamine attached to each hydroxyl group of threonine. Monoglycosylated analogues have been synthesized from the corresponding monoglycosylated threonine building block by microwave-assisted solid phase peptide synthesis. This method allows the preparation of analogues containing sequence variations which are not accessible by other synthetic methods. As antifreeze glycoproteins consist of numerous isoforms they are difficult to obtain in pure form from natural sources. The synthetic peptides have been structurally analyzed by CD and NMR spectroscopy in proton exchange experiments revealing a structure as flexible as reported for the native peptides. Microphysical recrystallization tests show an ice structuring influence and ice growth inhibition depending on the concentration, chain length and sequence of the peptides.
Principal Component Analysis in the Spectral Analysis of the Dynamic Laser Speckle Patterns
NASA Astrophysics Data System (ADS)
Ribeiro, K. M.; Braga, R. A., Jr.; Horgan, G. W.; Ferreira, D. D.; Safadi, T.
2014-02-01
Dynamic laser speckle is a phenomenon observed in the optical interference patterns formed by illuminating a changing surface with coherent light. The dynamic change of the speckle patterns caused by biological material is known as biospeckle. Usually, these patterns of optical interference evolving in time are analyzed by graphical or numerical methods, and analysis in the frequency domain has also been an option, though it involves large computational requirements, which demands new approaches to filter the images in time. Principal component analysis (PCA) works with the statistical decorrelation of data and can be used for data filtering. In this context, the present work evaluated the PCA technique for filtering biospeckle image data in time, aiming to reduce computation time and improve the robustness of the filtering. Sixty-four time-sequential biospeckle images observed in a maize seed were used. The images were arranged in a data matrix and statistically decorrelated by the PCA technique, and the reconstructed signals were analyzed using the routine graphical and numerical methods for biospeckle analysis. Results showed the potential of the PCA tool in filtering dynamic laser speckle data, with the definition of markers of principal components related to the biological phenomena and with the advantage of fast computational processing.
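A minimal sketch of PCA-based time filtering of an image stack, assuming the frames are unfolded into a frames-by-pixels matrix and only the leading components are kept; this illustrates the idea, not the authors' exact pipeline:

```python
import numpy as np

def pca_time_filter(stack, n_keep=3):
    """Filter a biospeckle image stack in time by keeping only the
    leading principal components. `stack` has shape (n_frames, ny, nx);
    frames are treated as observations of the pixel intensities."""
    n_frames = stack.shape[0]
    X = stack.reshape(n_frames, -1).astype(float)   # frames x pixels
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    s_f = s.copy()
    s_f[n_keep:] = 0.0                              # drop trailing components
    X_f = (U * s_f) @ Vt + mean                     # reconstruct filtered frames
    return X_f.reshape(stack.shape)

rng = np.random.default_rng(4)
stack = rng.random((64, 32, 32))                    # stand-in for 64 biospeckle frames
filtered = pca_time_filter(stack, n_keep=3)
print(filtered.shape)                               # (64, 32, 32)
```

Because the SVD is taken once over the whole stack, the filtering cost is a single matrix factorization rather than a per-frequency pass, which is the computational saving the abstract reports.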
Morcos, Mina W.; Al-Jallad, Hadil; Hamdy, Reggie
2015-01-01
Bone is one of the most dynamic tissues in the human body that can heal following injury without leaving a scar. However, in instances of extensive bone loss, this intrinsic capacity of bone to heal may not be sufficient and external intervention becomes necessary. Several techniques are available to address this problem, including autogenous bone grafts and allografts. However, all these techniques have their own limitations. An alternative method is the technique of distraction osteogenesis, where gradual and controlled distraction of two bony segments after osteotomy leads to induction of new bone formation. Although distraction osteogenesis usually gives satisfactory results, its major limitation is the prolonged duration of time required before the external fixator is removed, which may lead to numerous complications. Numerous methods to accelerate bone formation in the context of distraction osteogenesis have been reported. A viable alternative to autogenous bone grafts for a source of osteogenic cells is mesenchymal stem cells from bone marrow. However, there are certain problems with bone marrow aspirate. Hence, scientists have investigated other sources for mesenchymal stem cells, specifically adipose tissue, which has been shown to be an excellent source of mesenchymal stem cells. In this paper, the potential use of adipose stem cells to stimulate bone formation is discussed. PMID:26448947
Modal Decomposition of TTV: Inferring Planet Masses and Eccentricities
NASA Astrophysics Data System (ADS)
Linial, Itai; Gilbaum, Shmuel; Sari, Re’em
2018-06-01
Transit timing variations (TTVs) are a powerful tool for characterizing the properties of transiting exoplanets. However, inferring planet properties from the observed timing variations is a challenging task, which is usually addressed by extensive numerical searches. We propose a new, computationally inexpensive method for inverting TTV signals in a planetary system of two transiting planets. To the lowest order in planetary masses and eccentricities, TTVs can be expressed as a linear combination of three functions, which we call the TTV modes. These functions depend only on the planets’ linear ephemerides, and can be either constructed analytically, or by performing three orbital integrations of the three-body system. Given a TTV signal, the underlying physical parameters are found by decomposing the data as a sum of the TTV modes. We demonstrate the use of this method by inferring the mass and eccentricity of six Kepler planets that were previously characterized in other studies. Finally we discuss the implications and future prospects of our new method.
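Once the three mode functions are available (analytically or from three short orbital integrations, as described above), the inversion itself is a linear least-squares fit. The modes below are deliberate stand-ins; only the fitting step reflects the method:

```python
import numpy as np

def decompose_ttv(ttv_obs, modes):
    """Fit observed TTVs as a linear combination of precomputed TTV
    modes. `modes` is a list of arrays, each a mode evaluated at the
    transit epochs (placeholders here; in practice they come from
    analytic theory or three N-body integrations)."""
    M = np.column_stack(modes)                     # design matrix
    amps, *_ = np.linalg.lstsq(M, ttv_obs, rcond=None)
    return amps                                    # maps to mass/eccentricity

# illustrative stand-in modes near a hypothetical super-period of 33 epochs
epochs = np.arange(100)
nu = 2 * np.pi / 33.0
modes = [np.sin(nu * epochs), np.cos(nu * epochs), epochs - epochs.mean()]
rng = np.random.default_rng(5)
ttv = 3.0 * modes[0] - 1.2 * modes[1] + 0.02 * modes[2] \
      + 0.1 * rng.normal(size=epochs.size)
print(decompose_ttv(ttv, modes))                   # ~ [3.0, -1.2, 0.02]
```

Replacing an extensive nonlinear parameter search by one linear solve is what makes the proposed inversion computationally inexpensive.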
Chirplet Wigner-Ville distribution for time-frequency representation and its application
NASA Astrophysics Data System (ADS)
Chen, G.; Chen, J.; Dong, G. M.
2013-12-01
This paper presents a Chirplet Wigner-Ville Distribution (CWVD) that is free of the cross-terms that usually occur in the Wigner-Ville distribution (WVD). By transforming the signal with frequency-rotating operators, several mono-frequency signals without intermittence are obtained, and the WVD applied to the rotated signals is cross-term free; frequency-shift operators corresponding to the rotating operators are then utilized to relocate the signal's instantaneous frequencies (IFs). The operators' parameters come from the estimation of the IFs, which are approximated with polynomial or spline functions. Furthermore, by analysis of error, the main factors in the performance of the novel method have been discovered, and an effective signal-extending method based on the IF estimation has been developed to improve the energy concentration of the WVD. The excellent performance of the novel method was demonstrated by applying it to estimate the IFs of some numerical signals and of the echolocation signal emitted by the large brown bat.
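A sketch of the frequency-rotation mechanics on a discrete pseudo-WVD, assuming an analytic linear-FM signal: multiplying by the conjugate chirp phase rotates it to a constant frequency before the WVD is taken, after which the IF can be shifted back by the known rotation. The implementation details are illustrative, not the authors' code:

```python
import numpy as np

def wvd(x):
    """Discrete pseudo Wigner-Ville distribution of an analytic signal:
    W[n, m] = sum_k x[n+k] x*[n-k] e^{-j 2 pi m k / N}. With unit lag
    steps, bin m corresponds to m * fs / (2 N) Hz (the usual
    factor-of-two frequency scaling of the discrete WVD)."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        kmax = min(n, N - 1 - n)
        k = np.arange(-kmax, kmax + 1)
        r = np.zeros(N, complex)
        r[k % N] = x[n + k] * np.conj(x[n - k])   # instantaneous autocorrelation
        W[n] = np.fft.fft(r).real
    return W

fs, N = 256, 256
t = np.arange(N) / fs
f0, rate = 20.0, 60.0                       # start frequency (Hz), chirp rate (Hz/s)
x = np.exp(1j * 2 * np.pi * (f0 * t + 0.5 * rate * t**2))   # analytic chirp
rot = np.exp(-1j * np.pi * rate * t**2)     # frequency-rotating operator
W = wvd(x * rot)                            # rotated signal: one constant tone
m = W.sum(axis=0).argmax()
print("demodulated tone at ~", m * fs / (2 * N), "Hz")   # ~ f0; true IF = f0 + rate*t
```

For multi-component signals, each component gets its own rotating operator, which is where the cross-term suppression of the CWVD comes from.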
NASA Astrophysics Data System (ADS)
Deco, Gustavo; Martí, Daniel
2007-03-01
The analysis of transitions in stochastic neurodynamical systems is essential to understand the computational principles that underlie those perceptual and cognitive processes involving multistable phenomena, like decision making and bistable perception. To investigate the role of noise in a multistable neurodynamical system described by coupled differential equations, one usually considers numerical simulations, which are time consuming because of the need for sufficiently many trials to capture the statistics of the influence of the fluctuations on that system. An alternative analytical approach involves the derivation of deterministic differential equations for the moments of the distribution of the activity of the neuronal populations. However, the application of the method of moments is restricted by the assumption that the distribution of the state variables of the system takes on a unimodal Gaussian shape. We extend in this paper the classical moments method to the case of bimodal distribution of the state variables, such that a reduced system of deterministic coupled differential equations can be derived for the desired regime of multistability.
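A generic one-dimensional illustration of the closure being extended (the standard Gaussian moment equations, not the paper's bimodal ones): for an Itô SDE dx = f(x)\,dt + \sigma\,dW, the mean \mu = \langle x \rangle and variance s = \langle (x-\mu)^2 \rangle obey, under a unimodal Gaussian ansatz and Taylor expansion of f,

\dot{\mu} = f(\mu) + \tfrac{1}{2} f''(\mu)\, s, \qquad \dot{s} = 2 f'(\mu)\, s + \sigma^{2}.

The extension described above replaces the single Gaussian by a bimodal (two-component) ansatz, so that deterministic equations of this type remain valid across multistable regimes where the unimodal assumption breaks down.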
Topology optimization in acoustics and elasto-acoustics via a level-set method
NASA Astrophysics Data System (ADS)
Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.
2018-04-01
Optimizing the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric methods for topology optimization instead of standard density approaches. The goal of the present work is to investigate different possibilities for handling topology optimization problems in acoustics and elasto-acoustics via a level-set method. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating problems of boundary conditions optimization. In the numerical part, we examine the importance of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature, on the optimal designs. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three-space dimensions.
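For orientation, the level-set evolution that underlies such methods transports the design boundary \partial\Omega as the zero set of \phi with the descent velocity supplied by the shape derivative:

\partial_t \phi + V_n\, |\nabla \phi| = 0, \qquad \partial\Omega(t) = \{\, x : \phi(x,t) = 0 \,\},

where V_n is the normal velocity assembled from the shape gradient, including, in this work, the surface-dependent term whose influence the numerical part examines.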
NASA Astrophysics Data System (ADS)
Feng, L.; Xie, J.; Ritzwoller, M. H.
2017-12-01
Two major types of surface wave anisotropy are commonly observed by seismologists but are only rarely interpreted jointly: apparent radial anisotropy, which is the difference in propagation speed between horizontally and vertically polarized waves inferred from Love and Rayleigh waves, and apparent azimuthal anisotropy, which is the directional dependence of surface wave speeds (usually Rayleigh waves). We describe a method of inversion that interprets simultaneous observations of radial and azimuthal anisotropy under the assumption of a hexagonally symmetric elastic tensor with a tilted symmetry axis defined by dip and strike angles. With a full-waveform numerical solver based on the spectral element method (SEM), we verify the validity of the forward theory used for the inversion. We also present two examples, in the US and Tibet, in which we have successfully applied the tomographic method to demonstrate that the two types of apparent anisotropy can be interpreted jointly as a tilted hexagonally symmetric medium.
High-resolution imaging using a wideband MIMO radar system with two distributed arrays.
Wang, Dang-wei; Ma, Xiao-yan; Chen, A-Lei; Su, Yi
2010-05-01
Imaging a fast maneuvering target has been an active research area in past decades. Usually, an array antenna with multiple elements is implemented to avoid the motion compensations involved in inverse synthetic aperture radar (ISAR) imaging. Nevertheless, there is a price dilemma: the hardware complexity is high compared with the complex algorithms implemented in single-antenna ISAR imaging systems. In this paper, a wideband multiple-input multiple-output (MIMO) radar system with two distributed arrays is proposed to reduce the hardware complexity of the system. Furthermore, the system model, the equivalent array production method and the imaging procedure are presented. Compared with the classical real aperture radar (RAR) imaging system, an important contribution of our method is that lower hardware complexity is achieved in the imaging system, since many additional virtual array elements can be obtained. Numerical simulations are provided to test our system and imaging method.
NASA Astrophysics Data System (ADS)
Sun, Zheng; Carrillo, José A.; Shu, Chi-Wang
2018-01-01
We consider a class of time-dependent second order partial differential equations governed by a decaying entropy. The solution usually corresponds to a density distribution, hence positivity (non-negativity) is expected. This class of problems covers important cases such as Fokker-Planck type equations and aggregation models, which have been studied intensively in the past decades. In this paper, we design a high order discontinuous Galerkin method for such problems. If the interaction potential is not involved, or the interaction is defined by a smooth kernel, our semi-discrete scheme admits an entropy inequality on the discrete level. Furthermore, by applying the positivity-preserving limiter, our fully discretized scheme produces non-negative solutions for all cases under a time step constraint. Our method also applies to two dimensional problems on Cartesian meshes. Numerical examples are given to confirm the high order accuracy for smooth test cases and to demonstrate the effectiveness for preserving long time asymptotics.
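The positivity-preserving limiter referred to above is, in the usual Zhang-Shu spirit, a contraction of each cell's point values toward the (non-negative) cell average; a minimal sketch under the assumption of equal-weight nodes, which may differ in detail from the paper's limiter:

    import numpy as np

    def positivity_limiter(node_vals, eps=1e-13):
        # node_vals: (n_cells, n_nodes) solution values at the nodes of each cell.
        ubar = node_vals.mean(axis=1, keepdims=True)   # cell averages, assumed >= 0
        umin = node_vals.min(axis=1, keepdims=True)
        theta = np.minimum(1.0, np.abs((ubar - eps) / (ubar - umin + 1e-300)))
        return ubar + theta * (node_vals - ubar)       # conservative: cell average unchanged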
Kholeif, S A
2001-06-01
A new method belonging to the differential category for determining end points from potentiometric titration curves is presented. It uses a preprocess to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method, using linear least-squares validation and multifactor data analysis, is also covered. The new method applies generally to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves are also compared between the new method and equivalence-point methods such as those of Gran or Fortuin.
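For illustration, the closed-form inverse parabolic interpolation step has this standard three-point form (a sketch of the interpolation idea, not the paper's four-point preprocessing):

    def parabolic_extremum(a, b, c, fa, fb, fc):
        # Abscissa of the extremum of the parabola through (a,fa), (b,fb), (c,fc);
        # the same formula locates a maximum or a minimum of the derivative curve.
        num = (b - a)**2 * (fb - fc) - (b - c)**2 * (fb - fa)
        den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        if den == 0.0:
            raise ValueError("collinear points: no unique extremum")
        return b - 0.5 * num / den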
NASA Astrophysics Data System (ADS)
Rolla, L. Barrera; Rice, H. J.
2006-09-01
In this paper a "forward-advancing" field discretization method suitable for solving the Helmholtz equation in large-scale problems is proposed. The forward wave expansion method (FWEM) is derived from a highly efficient discretization procedure based on interpolation of wave functions known as the wave expansion method (WEM). The FWEM computes the propagated sound field by means of an exclusively forward advancing solution, neglecting the backscattered field. It is thus analogous to methods such as the (one way) parabolic equation method (PEM) (usually discretized using standard finite difference or finite element methods). These techniques do not require the inversion of large system matrices and thus enable the solution of large-scale acoustic problems where backscatter is not of interest. Calculations using FWEM are presented for two propagation problems and comparisons to data computed with analytical and theoretical solutions and show this forward approximation to be highly accurate. Examples of sound propagation over a screen in upwind and downwind refracting atmospheric conditions at low nodal spacings (0.2 per wavelength in the propagation direction) are also included to demonstrate the flexibility and efficiency of the method.
Pseudo-time methods for constrained optimization problems governed by PDE
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1995-01-01
In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constraining PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces, improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at the cost of solving the analysis problem just a few times, independent of the number of design parameters. The method can be applied using single grid iterations as well as with multigrid solvers.
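Schematically, the iteration structure contrasts with conventional approaches as in the Python stub below (all operators are problem-specific placeholders; this illustrates the idea, not the paper's algorithm):

    def pseudo_time_march(alpha, u, lam, n_steps, dtau,
                          relax_state, relax_costate, design_gradient):
        # March on the design hypersurface: per step, the state u and costate lam are
        # only relaxed (their residuals improved), while the design alpha is advanced.
        for _ in range(n_steps):
            u = relax_state(u, alpha)
            lam = relax_costate(lam, u, alpha)
            alpha = alpha - dtau * design_gradient(alpha, u, lam)
        return alpha, u, lam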
Theory and computation of optimal low- and medium-thrust transfers
NASA Technical Reports Server (NTRS)
Chuang, C.-H.
1994-01-01
This report presents two numerical methods considered for the computation of fuel-optimal, low-thrust orbit transfers with large numbers of burns. These methods originate from observations made on extremal solutions of transfers with small numbers of burns: there appears to be a trend that the longer the time allowed to perform an optimal transfer, the less fuel is used. These longer transfers are of obvious interest since they require a motor of low thrust; however, we also find that the longer the time allowed for the optimal transfer, the more burns are required to satisfy optimality. Unfortunately, this usually increases the difficulty of computation. Both of the methods described use solutions with small numbers of burns to determine solutions with large numbers of burns. One is a homotopy method that corrects for problems arising when a solution requires a new burn or coast arc for optimality. The other is to simply patch together long transfers from smaller ones; an orbit correction problem is solved to develop this method. This method may also lead to a good guidance law for transfer orbits with long transfer times.
The Role of Supervised Driving Requirements in Graduated Driver Licensing Programs
DOT National Transportation Integrated Search
2012-03-01
Many States require parents to certify that their teens have completed a certain amount of supervised driving practice, usually 40 to 50 hours, : before they are permitted to obtain an intermediate license. Although strongly supported by numerous gro...
A Method for Generating Reduced-Order Linear Models of Multidimensional Supersonic Inlets
NASA Technical Reports Server (NTRS)
Chicatelli, Amy; Hartley, Tom T.
1998-01-01
Simulation of high speed propulsion systems may be divided into two categories, nonlinear and linear. The nonlinear simulations are usually based on multidimensional computational fluid dynamics (CFD) methodologies and tend to provide high resolution results that show the fine detail of the flow. Consequently, these simulations are large, numerically intensive, and run much slower than real-time. The linear simulations are usually based on large lumping techniques that are linearized about a steady-state operating condition. These simpler models often run at or near real-time but do not always capture the detailed dynamics of the plant. Under a grant sponsored by the NASA Lewis Research Center, Cleveland, Ohio, a new method has been developed that can be used to generate improved linear models for control design from multidimensional steady-state CFD results. This CFD-based linear modeling technique provides a small perturbation model that can be used for control applications and real-time simulations. It is important to note the utility of the modeling procedure: all that is needed to obtain a linear model of the propulsion system is the geometry and steady-state operating conditions from a multidimensional CFD simulation or experiment. This research represents a beginning step in establishing a bridge between the controls discipline and the CFD discipline so that the control engineer is able to effectively use multidimensional CFD results in control system design and analysis.
NASA Astrophysics Data System (ADS)
Einkemmer, Lukas
2016-05-01
The recently developed semi-Lagrangian discontinuous Galerkin approach is used to discretize hyperbolic partial differential equations (usually first order equations). Since these methods are conservative, local in space, and able to limit numerical diffusion, they are considered a promising alternative to more traditional semi-Lagrangian schemes (which are usually based on polynomial or spline interpolation). In this paper, we consider a parallel implementation of a semi-Lagrangian discontinuous Galerkin method for distributed memory systems (so-called clusters). Both strong and weak scaling studies are performed on the Vienna Scientific Cluster 2 (VSC-2). In the case of weak scaling we observe a parallel efficiency above 0.8 for both two and four dimensional problems and up to 8192 cores. Strong scaling results show good scalability to at least 512 cores (we consider problems that can be run on a single processor in reasonable time). In addition, we study the scaling of a two dimensional Vlasov-Poisson solver that is implemented using the framework provided. All of the simulations are conducted in the context of worst case communication overhead; i.e., in a setting where the CFL (Courant-Friedrichs-Lewy) number increases linearly with the problem size. The framework introduced in this paper facilitates a dimension independent implementation of scientific codes (based on C++ templates) using both an MPI and a hybrid approach to parallelization. We describe the essential ingredients of our implementation.
2011-01-01
Background: Previous studies have demonstrated that adverse events occur during chiropractic treatment. However, because of those studies' designs, we do not know the frequency and extent of these events when compared to sham treatment. The principal aims of this study are to establish the frequency and severity of adverse effects of short-term usual chiropractic treatment of the spine compared with a sham treatment group. The secondary aim is to establish the efficacy of usual short-term chiropractic care for spinal pain compared with a sham intervention. Methods: One hundred and eighty participants will be randomly allocated to either usual chiropractic care or a sham intervention group. To be considered for inclusion, participants must have experienced non-specific spinal pain for at least one week. The study will be conducted at the clinics of registered chiropractors in Western Australia. Participants in each group will receive two treatments at intervals of no less than one week. For the usual chiropractic care group, the selection of therapeutic techniques will be left to the chiropractors' discretion. For the sham intervention group, de-tuned ultrasound and de-tuned activator treatment will be applied by the chiropractors to the regions where spinal pain is experienced. Adverse events will be assessed two days after each appointment using a questionnaire developed for this study. The efficacy of short-term chiropractic care for spinal pain will be examined at two-week follow-up by assessing pain, physical function, minimum acceptable outcome, and satisfaction with care, using the following outcome measures: Numerical Rating Scale, Functional Rating Index, Neck Disability Index, Minimum Acceptable Outcome Questionnaire, Oswestry Disability Index, and a global measure of treatment satisfaction. The statistician, outcome assessor, and participants will be blinded to treatment allocation. Trial registration: Australia and New Zealand Clinical Trials Register (ANZCTR): ACTRN12611000542998 PMID:22040597
Model-based Acceleration Control of Turbofan Engines with a Hammerstein-Wiener Representation
NASA Astrophysics Data System (ADS)
Wang, Jiqiang; Ye, Zhifeng; Hu, Zhongzhi; Wu, Xin; Dimirovsky, Georgi; Yue, Hong
2017-05-01
Acceleration control of turbofan engines is conventionally designed through either a schedule-based or an acceleration-based approach. With the widespread acceptance of model-based design in the aviation industry, it becomes necessary to investigate the issues associated with model-based design of acceleration control. In this paper, the challenges of implementing model-based acceleration control are explained; a novel Hammerstein-Wiener representation of engine models is introduced; and, based on the Hammerstein-Wiener model, a nonlinear generalized minimum variance type of optimal control law is derived. A feature of the proposed approach is that it does not require the inversion operation that usually upsets nonlinear control techniques. The effectiveness of the proposed control design method is validated through a detailed numerical study.
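A minimal sketch of the Hammerstein-Wiener structure itself, with an illustrative static input nonlinearity, a first-order linear block, and a static output nonlinearity standing in for the engine model of the paper:

    import numpy as np

    def hammerstein_wiener(u, a=0.9, b=0.1):
        f = lambda v: np.tanh(v)       # static input nonlinearity (assumed)
        h = lambda w: w + 0.2 * w**3   # static output nonlinearity (assumed)
        x, y = 0.0, []
        for uk in u:
            x = a * x + b * f(uk)      # linear dynamic block: x[k+1] = a x[k] + b f(u[k])
            y.append(h(x))
        return np.array(y)

    y = hammerstein_wiener(np.ones(50))  # step response of the cascade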
Quantitative analysis of backflow of reversible pump-turbine in generating mode
NASA Astrophysics Data System (ADS)
Liu, K. H.; Zhang, Y. N.; Li, J. W.; Xian, H. Z.
2016-05-01
Significant vibration and pressure fluctuations are usually observed when a pump-turbine is operated at off-design conditions, especially turbine brake and runaway. The root cause of these instability phenomena is the abnormal unsteady flow (especially the backflow) inside the pump-turbine. In the present paper, a numerical simulation method is adopted to investigate the characteristics of the flow through the whole passage of a pump-turbine for two guide vane openings (6° and 21°) and three operating conditions (turbine, runaway and turbine braking). A quantitative analysis of backflow is performed in both the axial and radial directions, and the generation and development of backflow in the pump-turbine are revealed in detail.
Measuring memory with the order of fractional derivative
NASA Astrophysics Data System (ADS)
Du, Maolin; Wang, Zaihua; Hu, Haiyan
2013-12-01
Fractional derivative has a history as long as that of classical calculus, but it is much less popular than it should be. What is the physical meaning of fractional derivative? This is still an open problem. In modeling various memory phenomena, we observe that a memory process usually consists of two stages: one short, with permanent retention, and the other governed by a simple model of fractional derivative. Using numerical least-squares fitting, we show that the fractional model perfectly fits the test data of memory phenomena in different disciplines, not only in mechanics but also in biology and psychology. Based on this model, we find that a physical meaning of the fractional order is an index of memory.
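As a hedged illustration of the second stage, the sketch below fits the order of a discrete Grünwald-Letnikov fractional integral to data by grid-search least squares; the model form and search grid are assumptions for illustration, not the authors' fitting procedure:

    import numpy as np

    def frac_integral(u, alpha, h):
        # Grunwald-Letnikov fractional integral of order alpha with step h:
        # weights c_0 = 1, c_k = c_{k-1} * (k - 1 + alpha) / k (binomial series of (1-z)^-alpha).
        n = len(u)
        c = np.ones(n)
        for k in range(1, n):
            c[k] = c[k - 1] * (k - 1 + alpha) / k
        return h**alpha * np.array([np.dot(c[:i + 1][::-1], u[:i + 1]) for i in range(n)])

    def fit_memory_order(u, y_obs, h, alphas=np.linspace(0.05, 0.95, 19)):
        errs = [np.sum((frac_integral(u, a, h) - y_obs)**2) for a in alphas]
        return alphas[int(np.argmin(errs))]  # the fitted order acts as a memory index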
A hybrid nonlinear programming method for design optimization
NASA Technical Reports Server (NTRS)
Rajan, S. D.
1986-01-01
Solutions to engineering design problems formulated as nonlinear programming (NLP) problems usually require the use of more than one optimization technique. Moreover, the interaction between the user (analysis/synthesis) program and the NLP system can lead to interface, scaling, or convergence problems. An NLP solution system is presented that addresses these problems by providing a programming environment that eases the user-system interface. A simple set of rules is used to select an optimization technique or to switch from one technique to another in an attempt to detect, diagnose, and solve some potential problems. Numerical examples involving finite-element-based optimal design of space trusses and rotor bearing systems illustrate the applicability of the proposed methodology.
Stabilized finite element methods to simulate the conductances of ion channels
NASA Astrophysics Data System (ADS)
Tu, Bin; Xie, Yan; Zhang, Linbo; Lu, Benzhuo
2015-03-01
We have previously developed a finite element simulator, ichannel, to simulate ion transport through three-dimensional ion channel systems by solving the Poisson-Nernst-Planck (PNP) and size-modified Poisson-Nernst-Planck (SMPNP) equations, and succeeded in simulating some ion channel systems. However, the iterative solution of the coupled Poisson and Nernst-Planck equations has difficulty converging for some large systems. One reason we found is that the NP equations are advection-dominated diffusion equations, which causes trouble in the usual FE solution. Stabilized schemes have been applied to compute fluid flow in various research fields, but they have not been studied in simulations of ion transport through three-dimensional models based on experimentally determined ion channel structures. In this paper, two stabilized techniques, the streamline upwind Petrov-Galerkin (SUPG) method and the pseudo residual-free bubble (PRFB) method, are introduced to enhance the numerical robustness and convergence performance of the finite element algorithm in ichannel. The conductances of the voltage dependent anion channel (VDAC) and the anthrax toxin protective antigen pore (PA) are simulated to validate the stabilization techniques. The two stabilized schemes give reasonable results for both proteins, in decent agreement with both experimental data and Brownian dynamics (BD) simulations. For a variety of numerical tests, the simulator is found to effectively avoid the previous numerical instability after introducing the stabilization methods. A comparison based on our test data set indicates that SUPG and PRFB have similar performance (the latter is slightly more accurate and stable), while SUPG is relatively more convenient to implement.
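For orientation, the classical one-dimensional SUPG stabilization parameter has the textbook closed form below (not ichannel's implementation; multidimensional PNP problems generalize this element-wise):

    import numpy as np

    def supg_tau(a, kappa, h):
        # Doubly asymptotic SUPG parameter for advection speed a, diffusivity kappa,
        # element size h; Pe is the element Peclet number and xi -> 0 as Pe -> 0.
        pe = abs(a) * h / (2.0 * kappa)
        xi = 1.0 / np.tanh(pe) - 1.0 / pe
        return h / (2.0 * abs(a)) * xi

    print(supg_tau(a=1.0, kappa=1e-3, h=0.01))  # advection-dominated element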
An efficient soil water balance model based on hybrid numerical and statistical methods
NASA Astrophysics Data System (ADS)
Mao, Wei; Yang, Jinzhong; Zhu, Yan; Ye, Ming; Liu, Zhao; Wu, Jingwei
2018-04-01
Most soil water balance models consider only downward soil water movement driven by gravitational potential, and thus cannot simulate upward soil water movement driven by evapotranspiration, which is especially important in agricultural areas. In addition, such models cannot simulate soil water movement in heterogeneous soils, and usually require many empirical parameters. To resolve these problems, this study derives a new one-dimensional water balance model for simulating both downward and upward soil water movement in heterogeneous unsaturated zones. The new model is based on a hybrid of numerical and statistical methods, and requires only four physical parameters. The model uses three governing equations to describe the three terms that drive soil water movement: the advective term driven by gravitational potential, the source/sink term driven by external forces (e.g., evapotranspiration), and the diffusive term driven by matric potential. The three governing equations are solved separately using the hybrid numerical and statistical methods (e.g., a linear regression method) that account for soil heterogeneity. The four soil hydraulic parameters required by the new model are the saturated hydraulic conductivity, saturated water content, field capacity, and residual water content. The strengths and weaknesses of the new model are evaluated using two published studies, three hypothetical examples and a real-world application, by comparing the simulation results of the new model with the corresponding results in the published studies, with results obtained using HYDRUS-1D, and with observation data. The evaluation indicates that the new model is accurate and efficient for simulating upward soil water flow in heterogeneous soils with complex boundary conditions. The new model is also used to evaluate different drainage functions, and the square drainage function and the power drainage function are recommended. The computational efficiency of the new model makes it particularly suitable for large-scale simulation of soil water movement, because it can be used with coarse discretizations in space and time.
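Schematically, one time step of such a model splits into the three terms, as in the stub below (the sub-solvers are placeholders; this shows the splitting structure, not the paper's discretization):

    def water_balance_step(theta, dt, solve_advection, apply_sink, solve_diffusion):
        theta = solve_advection(theta, dt)  # downward drainage (gravitational potential)
        theta = apply_sink(theta, dt)       # evapotranspiration source/sink term
        theta = solve_diffusion(theta, dt)  # redistribution driven by matric potential
        return theta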
NASA Astrophysics Data System (ADS)
Hoffmann, T. L.; Lieb, S.; Pauldrach, A. W. A.; Lesch, H.; Hultzsch, P. J. N.; Birk, G. T.
2012-08-01
Aims: The aim of this work is to verify whether turbulent magnetic reconnection can provide the additional energy input required to explain the hitherto poorly understood ionization mechanism of the diffuse ionized gas (DIG) in galaxies and its observed emission line spectra. Methods: We use a detailed non-LTE radiative transfer code that does not make use of the usual restrictive gaseous nebula approximations to compute synthetic spectra for gas at low densities. Excitation of the gas is via an additional heating term in the energy balance as well as by photoionization. Numerical values for this heating term are derived from three-dimensional resistive magnetohydrodynamic two-fluid plasma-neutral-gas simulations, which provide energy dissipation rates for the DIG under typical conditions. Results: Our simulations show that magnetic reconnection can liberate enough energy to fully or partially ionize the gas by itself. However, synthetic spectra from purely thermally excited gas are incompatible with the observed spectra; a photoionization source must additionally be present to establish the correct (observed) ionization balance in the gas.
Active chatter suppression with displacement-only measurement in turning process
NASA Astrophysics Data System (ADS)
Ma, Haifeng; Wu, Jianhua; Yang, Liuqing; Xiong, Zhenhua
2017-08-01
Regenerative chatter is a major obstacle to achieving high quality and high production rates in machining processes. Various active controllers have been proposed to mitigate chatter. However, most existing controllers were developed on the basis of multi-state feedback of the system, and state observers were usually needed; moreover, model parameters of the machining process (mass, damping and stiffness) were required. In this study, an active sliding mode controller, which employs a dynamic output feedback sliding surface for the unmatched condition and an adaptive law for disturbance estimation, is designed, analyzed, and validated for chatter suppression in turning. Only displacement measurement is required by this approach; other sensors and state observers are not needed. Moreover, it facilitates rapid implementation, since the controller is established without using model parameters of the turning process. Theoretical analysis, numerical simulations and experiments on a computer numerical control (CNC) lathe are presented. They show that chatter can be substantially attenuated and the chatter-free region significantly expanded with the presented method.
NASA Astrophysics Data System (ADS)
Nicolae Lerma, A.; Bulteau, T.; Elineau, S.; Paris, F.; Pedreros, R.
2016-12-01
Marine submersion is an increasing concern for coastal cities, as urban development reinforces their vulnerabilities while climate change is likely to increase the frequency and magnitude of submersions. Characterising the coastal flooding hazard is therefore of paramount importance for the security of people living in such places and for coastal planning. A hazard is commonly defined as an adverse phenomenon, often represented by the magnitude of a variable of interest (e.g. flooded area), hereafter called the response variable, associated with a probability of exceedance or, alternatively, a return period. Characterising the coastal flooding hazard consists in finding the correspondence between the magnitude and the return period. The difficulty lies in the fact that the assessment is usually performed using physical numerical models taking as inputs scenarios composed of multiple forcing conditions that are most of the time interdependent. Indeed, a time series of the response variable is usually not available, so we have to deal instead with time series of forcing variables (e.g. water level, waves). The problem is thus twofold: on the one hand, the definition of scenarios is a multivariate matter; on the other hand, it is difficult, and only approximate, to associate the resulting response, being the output of the physical numerical model, with the return period defined for the scenarios. In this study, we illustrate the problem on the district of Leucate, on the French Mediterranean coast. A multivariate extreme value analysis of waves and water levels is performed offshore using a conditional extremes model, and two different methods are then used to define and select 100-year scenarios of forcing variables: one based on joint exceedance probability contours, a method classically used in coastal risk studies, the other based on environmental contours, which are commonly used in structural design engineering. We show that these two methods enable one to bracket the true 100-year response variable. The selected scenarios are propagated to the shore through high-resolution flood modelling coupling overflowing and overtopping processes. Results in terms of inundated areas and inland water volumes are finally compared for the two methods, giving upper and lower bounds for the true response variables.
Algebraic multigrid domain and range decomposition (AMG-DD / AMG-RD)*
Bank, R.; Falgout, R. D.; Jones, T.; ...
2015-10-29
In modern large-scale supercomputing applications, algebraic multigrid (AMG) is a leading choice for solving matrix equations. However, the high cost of communication relative to that of computation is a concern for the scalability of traditional implementations of AMG on emerging architectures. This paper introduces two new algebraic multilevel algorithms, algebraic multigrid domain decomposition (AMG-DD) and algebraic multigrid range decomposition (AMG-RD), that replace traditional AMG V-cycles with a fully overlapping domain decomposition approach. While the methods introduced here are similar in spirit to the geometric methods developed by Brandt and Diskin [Multigrid solvers on decomposed domains, in Domain Decomposition Methods in Science and Engineering, Contemp. Math. 157, AMS, Providence, RI, 1994, pp. 135--155], Mitchell [Electron. Trans. Numer. Anal., 6 (1997), pp. 224--233], and Bank and Holst [SIAM J. Sci. Comput., 22 (2000), pp. 1411--1443], they differ primarily in that they are purely algebraic: AMG-RD and AMG-DD trade communication for computation by forming global composite "grids" based only on the matrix, not the geometry. (As is the usual AMG convention, "grids" here should be taken only in the algebraic sense, regardless of whether or not it corresponds to any geometry.) Another important distinguishing feature of AMG-RD and AMG-DD is their novel residual communication process that enables effective parallel computation on composite grids, avoiding the all-to-all communication costs of the geometric methods. The main purpose of this paper is to study the potential of these two algebraic methods as possible alternatives to existing AMG approaches for future parallel machines. As a result, this paper develops some theoretical properties of these methods and reports on serial numerical tests of their convergence properties over a spectrum of problem parameters.
Midthune, Douglas; Dodd, Kevin W.; Freedman, Laurence S.; Krebs-Smith, Susan M.; Subar, Amy F.; Guenther, Patricia M.; Carroll, Raymond J.; Kipnis, Victor
2007-01-01
Objective: We propose a new statistical method that uses information from two 24-hour recalls (24HRs) to estimate usual intake of episodically consumed foods. Statistical analyses performed: The method, developed at the National Cancer Institute (NCI), accommodates the large number of non-consumption days that arise with foods by separating the probability of consumption from the consumption-day amount, using a two-part model. Covariates, such as sex, age, race, or information from a food frequency questionnaire (FFQ), may supplement the information from two or more 24HRs using correlated mixed model regression. The model allows for correlation between the probability of consuming a food on a single day and the consumption-day amount. Percentiles of the distribution of usual intake are computed from the estimated model parameters. Results: The Eating at America's Table Study (EATS) data are used to illustrate the method, estimating the distribution of usual intake of whole grains and dark green vegetables for men and women, and the distribution of usual intake of whole grains by educational level among men. A simulation study indicates that the NCI method leads to substantial improvement over existing methods for estimating the distribution of usual intake of foods. Applications/conclusions: The NCI method provides distinct advantages over previously proposed methods by accounting for the correlation between probability of consumption and amount consumed and by incorporating covariate information. Researchers interested in estimating the distribution of usual intakes of foods for a population or subpopulation are advised to work with a statistician and incorporate the NCI method in analyses. PMID:17000190
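A hedged Monte Carlo sketch of the two-part idea (all distributions below are illustrative; the NCI method itself estimates them with correlated mixed-model regression rather than fixing them):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    p_i = rng.beta(2, 5, n)                     # person-specific probability of consumption
    mu_i = rng.normal(3.0, 0.4, n)              # person-specific mean of log amount
    sigma = 0.5                                 # within-person SD of log amount
    # Usual intake = P(consume) * E[amount | consume]; lognormal mean on consumption days.
    usual = p_i * np.exp(mu_i + 0.5 * sigma**2)
    print(np.percentile(usual, [10, 50, 90]))   # percentiles of the usual-intake distribution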
Methods for assessing geodiversity
NASA Astrophysics Data System (ADS)
Zwoliński, Zbigniew; Najwer, Alicja; Giardino, Marco
2017-04-01
The accepted systematics of geodiversity assessment methods will be presented in three categories: qualitative, quantitative and qualitative-quantitative. Qualitative methods are usually descriptive methods suited to nominal and ordinal data. Quantitative methods use different sets of parameters and indicators to determine the characteristics of geodiversity in the area being researched. Qualitative-quantitative methods combine the collection of quantitative (i.e. digital) data with cause-effect (i.e. relational and explanatory) data. It seems that at the current stage of development of geodiversity research methods, qualitative-quantitative methods are the most advanced and assess the geodiversity of a study area best. Their particular advantage is the integration of data from different sources and with different substantive content. Among the distinguishing features of the quantitative and qualitative-quantitative methods for assessing geodiversity is their wide use of geographic information systems, both at the stage of data collection and data integration and in numerical processing and presentation. The unresolved problem for these methods, however, is their validation. It seems that currently the best method of validation is direct verification in the field. Looking ahead to the next few years, the development of qualitative-quantitative methods connected with cognitive issues should be expected, oriented towards ontology and the Semantic Web.
Quantifying the sensitivity of post-glacial sea level change to laterally varying viscosity
NASA Astrophysics Data System (ADS)
Crawford, Ophelia; Al-Attar, David; Tromp, Jeroen; Mitrovica, Jerry X.; Austermann, Jacqueline; Lau, Harriet C. P.
2018-05-01
We present a method for calculating the derivatives of measurements of glacial isostatic adjustment (GIA) with respect to the viscosity structure of the Earth and the ice sheet history. These derivatives, or kernels, quantify the linearised sensitivity of measurements to the underlying model parameters. The adjoint method is used to enable efficient calculation of theoretically exact sensitivity kernels within laterally heterogeneous earth models that can have a range of linear or non-linear viscoelastic rheologies. We first present a new approach to calculate GIA in the time domain, which, in contrast to the more usual formulation in the Laplace domain, is well suited to continuously varying earth models and to the use of the adjoint method. Benchmarking results show excellent agreement between our formulation and previous methods. We illustrate the potential applications of the kernels calculated in this way through a range of numerical calculations relative to a spherically symmetric background model. The complex spatial patterns of the sensitivities are not intuitive, and this is the first time that such effects are quantified in an efficient and accurate manner.
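In generic adjoint notation (symbols assumed here, not the paper's), the kernels express the linearised response of a GIA measurement J to a relative perturbation of the viscosity field:

    \[
    \delta J = \int_{V} K_{\eta}(\mathbf{x})\, \delta \ln \eta(\mathbf{x})\, \mathrm{d}V ,
    \]

where the kernel K_eta is assembled from one forward and one adjoint solve, so its cost is independent of the number of model parameters.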
A Probabilistic Collocation Based Iterative Kalman Filter for Landfill Data Assimilation
NASA Astrophysics Data System (ADS)
Qiang, Z.; Zeng, L.; Wu, L.
2016-12-01
Due to the strong spatial heterogeneity of landfills, uncertainty is ubiquitous in the gas transport process in a landfill. To accurately characterize landfill properties, the ensemble Kalman filter (EnKF) has been employed to assimilate measurements such as gas pressure. As a Monte Carlo (MC) based method, the EnKF usually requires a large ensemble size, which poses a high computational cost for large-scale problems. In this work, we propose a probabilistic collocation based iterative Kalman filter (PCIKF) to estimate permeability in a liquid-gas coupling model. This method employs polynomial chaos expansion (PCE) to represent and propagate the uncertainties of model parameters and states, and an iterative form of the Kalman filter to assimilate the current gas pressure data. To further reduce the computational cost, a functional ANOVA (analysis of variance) decomposition is conducted, and only the first-order ANOVA components are retained in the PCE. As illustrated by numerical case studies, the proposed method is significantly more computationally efficient than the traditional MC-based iterative EnKF, and has promising potential for reliable prediction and management of landfill gas production.
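For reference, the analysis step that any such filter must realize is sketched below with sample covariances; in the PCIKF the covariances would instead be evaluated from the PCE surrogate (illustrative code, not the authors'):

    import numpy as np

    def kalman_update(X, Y, y_obs, R):
        # X: (n_members, n_param) parameter samples; Y: (n_members, n_obs) predicted data.
        Xm, Ym = X - X.mean(0), Y - Y.mean(0)
        Cxy = Xm.T @ Ym / (len(X) - 1)          # parameter-data cross-covariance
        Cyy = Ym.T @ Ym / (len(X) - 1)          # predicted-data covariance
        K = Cxy @ np.linalg.inv(Cyy + R)        # Kalman gain
        return X + (y_obs - Y) @ K.T            # shift each member toward the data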
Toward unbiased estimations of the statefinder parameters
NASA Astrophysics Data System (ADS)
Aviles, Alejandro; Klapp, Jaime; Luongo, Orlando
2017-09-01
With the use of simulated supernova catalogs, we show that the statefinder parameters are poorly estimated, and with significant bias, by standard cosmography. To this end, we compute their standard deviations and several bias statistics on cosmologies near the concordance model, demonstrating that these are very large and making standard cosmography unsuitable for future and wider compilations of data. To overcome this issue, we propose a new method that consists in introducing the series of the Hubble function into the luminosity distance, instead of the usual direct Taylor expansions of the luminosity distance. Moreover, to speed up the numerical computations, we estimate the coefficients of our expansions in a hierarchical manner, in which the order of the expansion depends on the redshift of each piece of data. In addition, we propose two hybrid methods that incorporate standard cosmography at low redshifts. The methods presented here perform better than the standard approach of cosmography in both the errors and the bias of the estimated statefinders. We further propose a one-parameter diagnostic to reject non-viable methods in cosmography.
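For context, the conventional definitions and the usual direct expansion that the paper argues against can be summarized as follows (flat FLRW assumed; notation conventional rather than the paper's):

    \[
    q = -\frac{\ddot a}{a H^{2}}, \qquad
    r = \frac{\dddot a}{a H^{3}}, \qquad
    s = \frac{r - 1}{3\,(q - \tfrac{1}{2})},
    \]
    \[
    d_L(z) = \frac{c z}{H_0}\left[ 1 + \frac{1 - q_0}{2}\, z
             - \frac{1 - q_0 - 3 q_0^{2} + j_0}{6}\, z^{2} + \mathcal{O}(z^{3}) \right],
    \]

with j_0 = r_0 the present-day jerk; low-order truncation of this direct series is one source of the bias discussed above.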
Electromagnetic scattering of large structures in layered earths using integral equations
NASA Astrophysics Data System (ADS)
Xiong, Zonghou; Tripp, Alan C.
1995-07-01
An electromagnetic scattering algorithm for large conductivity structures in stratified media has been developed, based on the method of system iteration and spatial symmetry reduction using volume electric integral equations. The method of system iteration divides a structure into many substructures and solves the resulting matrix equation using a block iterative method. The block submatrices usually need to be stored on disk in order to save computer core memory; however, this requires a large disk for large structures. If the body is discretized into equal-size cells, it is possible to use the spatial symmetry relations of the Green's functions to regenerate the scattering impedance matrix in each iteration, thus avoiding expensive disk storage. Numerical tests show that the system iteration converges much faster than the conventional point-wise Gauss-Seidel iterative method, and that the number of cells does not significantly affect the rate of convergence. The algorithm thus effectively reduces the solution of the scattering problem to order O(N^2), instead of the O(N^3) of direct solvers.
Bilinear modeling and nonlinear estimation
NASA Technical Reports Server (NTRS)
Dwyer, Thomas A. W., III; Karray, Fakhreddine; Bennett, William H.
1989-01-01
New methods are illustrated for online nonlinear estimation of the lateral deflection of an elastic beam from on-board measurements of angular rates and angular accelerations. The development of the filter equations, together with practical issues of their numerical solution as developed from global linearization by nonlinear output injection, is contrasted with the usual extended Kalman filter (EKF) approach. It is shown how nonlinear estimation due to gyroscopic coupling can be implemented as an adaptive covariance filter using off-the-shelf Kalman filter algorithms. The effect of the global linearization by nonlinear output injection is to introduce a change of coordinates in which only the process noise covariance needs to be updated in online implementation. This is in contrast to the computational approach of EKF methods, which arise by local linearization with respect to the current conditional mean. Processing refinements for nonlinear estimation based on optimal nonlinear interpolation between observations are also highlighted; in these methods, the extrapolation of the process dynamics between measurement updates is obtained by replacing a transition matrix with an operator spline that is optimized off-line from responses to selected test inputs.
An implementation of the QMR method based on coupled two-term recurrences
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Nachtigal, Noël M.
1992-01-01
The authors have proposed a new Krylov subspace iteration, the quasi-minimal residual algorithm (QMR), for solving non-Hermitian linear systems. In the original implementation of the QMR method, the Lanczos process with look-ahead is used to generate basis vectors for the underlying Krylov subspaces. In the Lanczos algorithm, these basis vectors are computed by means of three-term recurrences. It has been observed that, in finite precision arithmetic, vector iterations based on three-term recursions are usually less robust than mathematically equivalent coupled two-term vector recurrences. This paper presents a look-ahead algorithm that constructs the Lanczos basis vectors by means of coupled two-term recursions. Implementation details are given, and the look-ahead strategy is described. A new implementation of the QMR method, based on this coupled two-term algorithm, is described. A simplified version of the QMR algorithm without look-ahead is also presented, and the special case of QMR for complex symmetric linear systems is considered. Results of numerical experiments comparing the original and the new implementations of the QMR method are reported.
An improved model for whole genome phylogenetic analysis by Fourier transform.
Yin, Changchuan; Yau, Stephen S-T
2015-10-07
DNA sequence similarity comparison is one of the major steps in computational phylogenetic studies. The sequence comparison of closely related DNA sequences and genomes is usually performed by multiple sequence alignment (MSA). While the MSA method is accurate for some types of sequences, it may produce incorrect results when DNA sequences have undergone rearrangements, as in many bacterial and viral genomes, and it is limited by its computational complexity for comparing large volumes of data. Previously, we proposed an alignment-free method that exploits the full information content of DNA sequences via the Discrete Fourier Transform (DFT), but still with some limitations. Here, we present a significantly improved method for the similarity comparison of DNA sequences by DFT. In this method, we map DNA sequences into two-dimensional (2D) numerical sequences and then apply the DFT to transform the 2D numerical sequences into the frequency domain. In the 2D mapping, the nucleotide composition of a DNA sequence is a determinant factor; the mapping reduces nucleotide composition bias in the distance measure and thus improves the similarity measure of DNA sequences. To compare the DFT power spectra of DNA sequences of different lengths, we propose an improved even scaling algorithm that extends shorter DFT power spectra to the length of the longest underlying sequence. After the DFT power spectra are evenly scaled, the spectra lie in the same dimensionality of the Fourier frequency space, and the Euclidean distances between the full Fourier power spectra of the DNA sequences are used as the dissimilarity metric. The improved DFT method, with computational performance increased by the 2D numerical representation, is applicable to DNA sequences of any length range. We assess the accuracy of the improved DFT similarity measure in hierarchical clustering of different DNA sequences, including simulated and real datasets. The method yields accurate and reliable phylogenetic trees and demonstrates that the improved DFT dissimilarity measure is an efficient and effective similarity measure for DNA sequences. Due to its high efficiency and accuracy, the proposed DFT similarity measure has been applied successfully to phylogenetic analysis of individual genes and of large whole bacterial genomes.
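A hedged sketch of the pipeline: the numeric coding below is a simple stand-in for the paper's 2D mapping, and the even scaling is done by linear interpolation (illustrative choices throughout):

    import numpy as np

    CODE = {"A": 1.0, "G": 1.0, "C": -1.0, "T": -1.0}   # illustrative numeric mapping

    def power_spectrum(seq):
        x = np.array([CODE[b] for b in seq])
        return np.abs(np.fft.fft(x))**2

    def dft_distance(s1, s2):
        p1, p2 = power_spectrum(s1), power_spectrum(s2)
        m = max(len(p1), len(p2))
        grid = np.linspace(0.0, 1.0, m)
        p1 = np.interp(grid, np.linspace(0.0, 1.0, len(p1)), p1)  # "even scaling" (linear here)
        p2 = np.interp(grid, np.linspace(0.0, 1.0, len(p2)), p2)
        return np.linalg.norm(p1 - p2)   # Euclidean distance between scaled spectra

    print(dft_distance("ACGTACGTAC", "ACGTTCGTACGG"))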
Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.
Jain, Ram B
2016-08-01
Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed analyte concentration to the observed urinary creatinine concentration (UCR). This ratio-based method is flawed, since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors, like age, gender, and race/ethnicity, that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of the ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method, although, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group in the numerator of the ratio (for example, males), these ratios of GMs were higher for the model-based method; when estimated UCRs were lower for the group in the numerator (for example, NHW), the ratios were higher for the ratio-based method. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Meningitis or encephalitis. (3) The number of eligible deaths is the denominator for the donation rate... eligible death criteria. The number of eligible donors is the numerator of the donation rate outcome... untoward, undesirable, and usually unanticipated event that causes death or serious injury or the risk...
Code of Federal Regulations, 2011 CFR
2011-10-01
... Meningitis or encephalitis. (3) The number of eligible deaths is the denominator for the donation rate... eligible death criteria. The number of eligible donors is the numerator of the donation rate outcome... untoward, undesirable, and usually unanticipated event that causes death or serious injury or the risk...
Code of Federal Regulations, 2014 CFR
2014-10-01
... Meningitis or encephalitis. (3) The number of eligible deaths is the denominator for the donation rate... eligible death criteria. The number of eligible donors is the numerator of the donation rate outcome... untoward, undesirable, and usually unanticipated event that causes death or serious injury or the risk...
Code of Federal Regulations, 2012 CFR
2012-10-01
... Meningitis or encephalitis. (3) The number of eligible deaths is the denominator for the donation rate... eligible death criteria. The number of eligible donors is the numerator of the donation rate outcome... untoward, undesirable, and usually unanticipated event that causes death or serious injury or the risk...
We have conducted numerical simulation studies to assess the potential for injection-induced fault reactivation and notable seismic events associated with shale-gas hydraulic fracturing operations. The modeling is generally tuned toward conditions usually encountered in the Marce...
Neill, Ushma S
2006-07-01
Scientists are usually thought to be beyond reproach, but with the recent spate of high-profile ethical transgressions by scientists, the public's trust in science and scientists is deteriorating. The numerous cases of scientific misconduct that have crossed my desk in the last year leave me disenchanted, disappointed, and disillusioned.
Laser induced heat source distribution in bio-tissues
NASA Astrophysics Data System (ADS)
Li, Xiaoxia; Fan, Shifu; Zhao, Youquan
2006-09-01
During numerical simulation of laser-tissue thermal interaction, the light fluence rate distribution must be formulated, as it constitutes the source term in the heat transfer equation. Usually, solutions of the light radiative transport equation are available only in limiting conditions such as full absorption (Lambert-Beer law), full scattering (Kubelka-Munk theory), or scattering-dominated transport (diffusion approximation). In specific conditions, each of these solutions induces different errors. The commonly used Monte Carlo simulation (MCS) is more universal and exact, but has difficulty dealing with dynamic parameters and fast simulation, and its area partition pattern is ill-suited to applying the finite element method (FEM) to the bio-heat transfer partial differential equation. Laser heat source plots from the above methods differ considerably from MCS. To solve this problem, by analyzing the effects of the different optical processes (reflection, scattering and absorption) on laser-induced heat generation in bio-tissue, a new approach was developed that combines a modified beam-broadening model with the diffusion approximation. First, the scattering coefficient in the beam-broadening model is replaced by the reduced scattering coefficient, which is more reasonable when scattering is treated as anisotropic. Second, the attenuation coefficient is replaced by the effective attenuation coefficient in scattering-dominated turbid bio-tissue. The computed results of the modified method were compared with Monte Carlo simulation and show that the model provides more reasonable predictions of the heat source term distribution than past methods. This research is useful for explaining the physical characteristics of the heat source in the heat transfer equation, establishing an effective photo-thermal model, and providing a theoretical baseline for related laser medicine experiments.
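A minimal sketch of the resulting depth profile of the heat source under the diffusion-theory effective attenuation coefficient (illustrative tissue parameters; the full model also treats beam broadening and surface reflection):

    import numpy as np

    mu_a, mu_s_red = 0.3, 10.0                        # absorption, reduced scattering (1/cm)
    mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s_red))  # effective attenuation (diffusion theory)
    z = np.linspace(0.0, 1.0, 101)                    # depth (cm)
    phi0 = 1.0                                        # surface fluence rate (W/cm^2)
    S = mu_a * phi0 * np.exp(-mu_eff * z)             # heat source term (W/cm^3)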
Apparent negative mass in QCM sensors due to punctual rigid loading
NASA Astrophysics Data System (ADS)
Castro, P.; Resa, P.; Elvira, L.
2012-12-01
Quartz Crystal Microbalances (QCM) are highly sensitive piezoelectric sensors able to detect very small loads attached to them. These devices are widely employed in many applications including process control and industrial and environmental monitoring. Mass loading is usually related to frequency shift by the well-known Sauerbrey's equation, valid for thin rigid homogeneous films. However, a significant deviation from this equation can occur when the mass is not uniformly distributed over the surface. Whereas the effects of a thin film on a QCM have been thoroughly studied, there are relatively few results on punctual loads, even though particles are usually deposited randomly and non-uniformly on the resonator surface. In this work, we have studied the effect of punctual rigid loading on the resonant frequency shift of a QCM sensor, both experimentally and using finite element method (FEM). The FEM numerical analysis was done using COMSOL software, 3D modeling a linear elastic piezoelectric solid and introducing the properties of an AT-cut quartz crystal. It is shown that a punctual rigid mass deposition on the surface of a QCM sensor can lead to positive shifts of resonance frequency, contrary to Sauerbrey's equation.
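For reference, the Sauerbrey baseline from which the punctual loads deviate reads (f_0 the fundamental resonance frequency, A the active electrode area, rho_q and mu_q the density and shear modulus of AT-cut quartz):

    \[
    \Delta f = -\frac{2 f_0^{2}}{A \sqrt{\rho_q \mu_q}}\, \Delta m ,
    \]

so any positive frequency shift under added mass, as reported above, necessarily falls outside the thin-rigid-film regime in which the relation holds.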
The stability of vacancy-like defects in amorphous silicon
NASA Astrophysics Data System (ADS)
Joly, Jean-Francois; Mousseau, Normand
2013-03-01
The contribution of vacancy-like defects to the relaxation of amorphous silicon (a-Si) has long been a matter of debate. Due to their disordered nature, there is a large number of local environments in which such a defect can exist. Previous numerical studies of the vacancy in a-Si have been limited to small systems and very short timescales. Here we use kinetic ART (k-ART), an off-lattice kinetic Monte Carlo simulation method with on-the-fly catalog building, to study the time evolution of 1000 different single-vacancy configurations in a well-relaxed a-Si model. Our results show that most of the vacancies are annihilated quickly. In fact, while 16% of the 1000 isolated vacancies survive for more than 1 ns of simulated time, only 0.043% remain after 1 ms and only 6 survive longer than 0.1 second. Diffusion of the full vacancy is seen in only 19% of the configurations, and diffusion usually leads directly to the annihilation of the defect. The actual annihilation event, in which one of the defective atoms fills the vacancy, is similar across configurations, but the local bonding environment heavily influences its activation barrier and relaxation energy.
Numerical phase retrieval from beam intensity measurements in three planes
NASA Astrophysics Data System (ADS)
Bruel, Laurent
2003-05-01
A system and method have been developed at CEA to retrieve phase information from multiple intensity measurements along a laser beam; the device has been patented. Commonly used beam measurement devices provide phase and intensity information separately or with rather poor resolution, whereas the MIROMA method provides both at the same time, allowing direct use of the results in numerical models. Usual phase retrieval algorithms use two intensity measurements, typically in the image plane and the focal plane, related by a Fourier transform (Gerchberg-Saxton algorithm), or in the image plane and a slightly defocused plane (D.L. Misell). The principal drawback of such iterative algorithms is their inability to converge unambiguously in all situations: the algorithms can stagnate on bad solutions, and the error between measured and calculated intensities remains unacceptable. If three planes rather than two are used, the resulting data redundancy gives the method good convergence and noise immunity. It provides excellent agreement between the intensity determined from the retrieved phase data set in the image plane and intensity measurements in any diffraction plane. The method employed for MIROMA is inspired by the GS algorithm, replacing Fourier transforms by a beam-propagation kernel, with gradient-search acceleration techniques and special care for phase branch cuts. A fast one-dimensional algorithm provides an initial guess for the iterative algorithm. Applications of the algorithm to synthetic data identify the best reconstruction planes to choose. Robustness and sensitivity are evaluated, and results on collimated and distorted laser beams are presented.
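A sketch of the classic two-plane Gerchberg-Saxton loop that MIROMA generalizes (plain FFT propagation here; the actual method substitutes a beam-propagation kernel, a third plane, and acceleration techniques):

    import numpy as np

    def gerchberg_saxton(amp1, amp2, n_iter=200):
        # amp1, amp2: measured amplitudes (sqrt of intensity) in the image and focal
        # planes; amp2 is assumed given in unshifted FFT ordering.
        phase = np.zeros_like(amp1)
        for _ in range(n_iter):
            far = np.fft.fft2(amp1 * np.exp(1j * phase))            # propagate plane 1 -> 2
            near = np.fft.ifft2(amp2 * np.exp(1j * np.angle(far)))  # impose plane-2 modulus, return
            phase = np.angle(near)                                  # keep phase, reimpose amp1 next pass
        return phase

    amp1 = np.ones((64, 64))
    amp2 = np.abs(np.fft.fft2(amp1))
    phase = gerchberg_saxton(amp1, amp2)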
NASA Astrophysics Data System (ADS)
Kim, Tae Hee; James, Robin; Narayanan, Ram M.
2017-04-01
Fiber reinforced polymer or plastic (FRP) composites have been used increasingly in the aerospace, automotive and marine industries and in civil engineering, because these composites show superior characteristics such as outstanding strength and stiffness, low weight, corrosion resistance and easy production. Generally, the advancement of materials calls for correspondingly advanced methods and technologies for inspection and failure detection during production or maintenance, especially in the area of nondestructive testing (NDT). Among numerous inspection techniques, microwave sensing methods can be used effectively for NDT of FRP composites. FRP composite materials can be produced with various structures and materials, and various defects or flaws occur due to environmental conditions encountered during operation; however, reliable, low-cost, and easy-to-operate NDT methods have not been developed and tested. FRP composites are usually produced as multilayered structures consisting of fiber plate, matrix and core. Typical defects appearing in FRP composites are therefore disbonds, delaminations, object inclusions, and certain kinds of barely visible impact damage. In this paper, we propose a microwave NDT method, based on synthetic aperture radar (SAR) imaging algorithms, for stand-off imaging of internal delaminations. When a microwave signal is incident on a multilayer dielectric material, the reflected signal responds well to interfaces and transverse cracks. An electromagnetic wave model is introduced to delineate interface widths or defect depths from the reflected waves. For the purpose of numerical analysis and simulation, multilayered composite samples with various artificial defects are assumed, and their SAR images are obtained and analyzed using a variety of high-resolution wideband waveforms.
NASA Astrophysics Data System (ADS)
Dragos, Kosmas; Smarsly, Kay
2016-04-01
System identification has been employed in numerous structural health monitoring (SHM) applications. Traditional system identification methods usually rely on centralized processing of structural response data to extract information on structural parameters. In wireless SHM systems, however, centralized processing of structural response data introduces a significant communication bottleneck. Exploiting the merits of decentralization and the on-board processing power of wireless SHM systems, researchers have successfully implemented many system identification methods in wireless sensor networks. While several system identification approaches for wireless SHM systems have been proposed, little attention has been paid to obtaining information on the physical parameters (e.g. stiffness, damping) of the monitored structure. This paper presents a hybrid system identification methodology suitable for wireless sensor networks based on the principles of component mode synthesis (dynamic substructuring). A numerical model of the monitored structure is embedded in the wireless sensor nodes in a distributed manner, i.e. the entire model is segmented into sub-models, each embedded in the sensor node corresponding to the substructure that node is assigned to. The parameters of each sub-model are estimated by extracting local mode shapes and by applying the equations of the Craig-Bampton method of dynamic substructuring. The proposed methodology is validated in a laboratory test on a four-story frame structure, demonstrating its ability to yield accurate estimates of stiffness parameters. Finally, the test results are discussed and an outlook on future research directions is provided.
Finite Element Method-Based Kinematics and Closed-Loop Control of Soft, Continuum Manipulators.
Bieze, Thor Morales; Largilliere, Frederick; Kruszewski, Alexandre; Zhang, Zhongkai; Merzouki, Rochdi; Duriez, Christian
2018-06-01
This article presents a modeling methodology and experimental validation for soft manipulators to obtain the forward kinematic model (FKM) and inverse kinematic model (IKM) under quasi-static conditions (in the literature these manipulators are usually classified as continuum robots; their main characteristic of interest here, however, is that they create motion by deformation, as opposed to the classical use of articulations). It offers a way to obtain the kinematic characteristics of this type of soft robot that is suitable for offline path planning and position control. The modeling methodology relies on continuum mechanics, which does not provide analytic solutions in the general case. Our approach proposes a real-time numerical integration strategy based on the finite element method, with a numerical optimization based on Lagrange multipliers, to obtain the FKM and IKM. To reduce the dimension of the problem, at each step a projection of the model onto the constraint space (gathering actuators, sensors, and end-effector) is performed to obtain the smallest possible number of mathematical equations to be solved. This methodology is applied to obtain the kinematics of two manipulators with complex structural geometry. An experimental comparison is also performed on one of the robots between two other geometric approaches and the approach showcased in this article. A closed-loop controller based on a state estimator is proposed; the controller is experimentally validated and its robustness is evaluated using the Lyapunov stability method.
Metaplot: A Novel Stata Graph for Assessing Heterogeneity at a Glance
Poorolajal, J; Mahmoodi, M; Majdzadeh, R; Fotouhi, A
2010-01-01
Background: Heterogeneity is usually a major concern in meta-analysis. Although there are some statistical approaches for assessing variability across studies, here we present a new approach to heterogeneity using “MetaPlot”, which investigates the influence of a single study on the overall heterogeneity. Methods: MetaPlot is a two-way (x, y) graph, which can be considered a complementary graphical approach for testing heterogeneity. This method shows graphically as well as numerically the results of an influence analysis, in which Higgins’ I2 statistic with 95% confidence interval (CI) is computed omitting one study in each turn and then plotted against the reciprocal of the standard error (1/SE), or “precision”. In this graph, 1/SE lies on the x axis and the I2 results lie on the y axis. Results: At a first glance at MetaPlot, one can predict to what extent omission of a single study may influence the overall heterogeneity. The precision on the x-axis enables us to distinguish the size of each trial. The graph describes the I2 statistic with 95% CI graphically as well as numerically in one view for prompt comparison. It is possible to implement MetaPlot for meta-analysis of different types of outcome data and summary measures. Conclusion: This method presents a simple graphical approach to identify an outlier and its effect on overall heterogeneity at a glance. We wish to suggest MetaPlot to Stata experts to prepare its module for the software. PMID:23113013
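As a rough illustration of the computation behind such a plot, the sketch below recomputes Higgins' I2 from Cochran's Q while omitting one study at a time. The study effects and variances are hypothetical, and the CI computation is omitted; this is not the authors' Stata implementation.

```python
import numpy as np

def i_squared(effects, variances):
    """Higgins' I2 (%) from study effects and within-study variances."""
    w = 1.0 / np.asarray(variances, dtype=float)
    theta = np.asarray(effects, dtype=float)
    pooled = np.sum(w * theta) / np.sum(w)        # fixed-effect pooled estimate
    Q = np.sum(w * (theta - pooled) ** 2)         # Cochran's Q
    df = theta.size - 1
    return 100.0 * max(0.0, (Q - df) / Q) if Q > 0 else 0.0

def leave_one_out_i2(effects, variances):
    """I2 recomputed omitting one study at a time, as plotted in MetaPlot."""
    return [i_squared(np.delete(effects, i), np.delete(variances, i))
            for i in range(len(effects))]

effects = [0.20, 0.35, 0.10, 0.80, 0.25]          # hypothetical study effects
variances = [0.010, 0.020, 0.015, 0.050, 0.012]   # hypothetical within-study variances
precision = [v ** -0.5 for v in variances]        # 1/SE, the x axis of the plot
print(list(zip(precision, leave_one_out_i2(effects, variances))))
```

Plotting each leave-one-out I2 against the omitted study's precision gives the (x, y) pairs of the graph described above.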
f1: a code to compute Appell's F1 hypergeometric function
NASA Astrophysics Data System (ADS)
Colavecchia, F. D.; Gasaneo, G.
2004-02-01
In this work we present the FORTRAN code to compute the hypergeometric function F1(α, β1, β2, γ, x, y) of Appell. The program can compute the F1 function for real values of the variables {x, y} and complex values of the parameters {α, β1, β2, γ}. The code uses different strategies to calculate the function according to the ideas outlined in [F.D. Colavecchia et al., Comput. Phys. Comm. 138 (1) (2001) 29]. Program summary: Title of the program: f1. Catalogue identifier: ADSJ. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSJ. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Computers: PC compatibles, SGI Origin2*. Operating systems under which the program has been tested: Linux, IRIX. Programming language used: Fortran 90. Memory required to execute with typical data: 4 kbytes. No. of bits in a word: 32. No. of bytes in distributed program, including test data, etc.: 52 325. Distribution format: tar gzip file. External subprograms used: Numerical Recipes hypgeo [W.H. Press et al., Numerical Recipes in Fortran 77, Cambridge Univ. Press, 1996] or the chyp routine of R.C. Forrey [J. Comput. Phys. 137 (1997) 79], and rkf45 [L.F. Shampine and H.H. Watts, Rep. SAND76-0585, 1976]. Keywords: numerical methods, special functions, hypergeometric functions, Appell functions, Gauss function. Nature of the physical problem: Computing the Appell F1 function is relevant in atomic collisions and elementary particle physics. It is usually the result of multidimensional integrals involving Coulomb continuum states. Method of solution: The F1 function has a convergent-series definition for |x| < 1 and |y| < 1, and several analytic continuations for other regions of the variable space. The code tests the values of the variables and selects one of the preceding cases. In the convergence region the program uses the series definition near the origin of coordinates, and a numerical integration of the third-order differential parametric equation for the F1 function. It also detects several special cases according to the values of the parameters. Restrictions on the complexity of the problem: The code is restricted to real values of the variables {x, y}. Also, there are some parameter domains that are not covered; these usually imply differences between integer parameters that lead to negative integer arguments of Gamma functions. Typical running time: Depends basically on the variables. The computation of Table 4 of [F.D. Colavecchia et al., Comput. Phys. Comm. 138 (1) (2001) 29] (64 functions) requires approximately 0.33 s on an Athlon 900 MHz processor.
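For orientation, the convergent double series mentioned above can be sketched directly. The truncation level mmax is an arbitrary choice, real parameters are assumed, and the published code's analytic continuations and special-case handling are not reproduced here.

```python
from math import factorial
from scipy.special import poch, hyp2f1   # poch(a, n) is the Pochhammer symbol (a)_n

def appell_f1(a, b1, b2, c, x, y, mmax=40):
    """Truncated double series for Appell's F1; real parameters assumed,
    adequate only well inside the convergence region |x| < 1, |y| < 1."""
    s = 0.0
    for m in range(mmax):
        for n in range(mmax):
            s += (poch(a, m + n) * poch(b1, m) * poch(b2, n)
                  / (poch(c, m + n) * factorial(m) * factorial(n))) * x**m * y**n
    return s

# Sanity check: F1(a, b1, b2, c; x, 0) reduces to Gauss' 2F1(a, b1; c; x)
print(appell_f1(0.5, 1.0, 1.5, 2.0, 0.3, 0.0), hyp2f1(0.5, 1.0, 2.0, 0.3))
```

The reduction to the Gauss function at y = 0 provides a cheap correctness check of any such series implementation.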
Forensic anthropology and mortuary archaeology in Lithuania.
Jankauskas, Rimantas
2009-12-01
Forensic anthropology (in Lithuania, as everywhere in Eastern Europe, traditionally considered a narrower field, forensic osteology) has a long history, experience having been gained during exhumations of mass killings of the Second World War and the subsequent totalitarian regime, investigations of historical mass graves, identification of historical personalities, and routine forensic work. Experts in this field (usually a branch of forensic medicine) routinely solve "technical" questions of crime investigation, particularly the identification of (usually dead) individuals. Practical implementation of the mission of forensic anthropology is not an easy task due to the interdisciplinary character of the field. On one hand, physical anthropology has at its disposal numerous scientifically tested methods; however, their practical value in particular legal processes is limited. Reasons for this discrepancy can be related both to insufficient understanding of the possibilities and limitations of forensic anthropology and archaeology by the officials of the legal institutions that perform investigations, and to the sometimes too "academic" research conducted at anthropological laboratories, whose methods are not completely relevant to practical needs. Besides answering direct questions (number of individuals, sex, age, stature, population affinity, individual traits, evidence of violence), important humanitarian aspects should not be neglected: the individual's right to identity and the right of relatives to know the fate of their loved ones. Practical use of other identification methods faces difficulties of its own (e.g., odontology: the lack of a regular dental registration system and a compatible database). Two examples of forensic anthropological work on mass graves, even though the results were much influenced by the questions raised by investigators, serve as an illustration of the above-mentioned issues.
Comparing the GPR responses of real experiment and simulation of cavity
NASA Astrophysics Data System (ADS)
Yu, H.; Nam, M. J.; Kim, C.; Lee, D. K.
2017-12-01
Seoul, the capital city of South Korea, has been suffering from ground subsidence mainly caused by cavities beneath roads. Urban subsidence usually brings serious social problems such as loss of human life, damage to property, and so on. To prevent ground subsidence, the Korean government has invested heavily in developing techniques to detect cavities in advance. Ground penetrating radar (GPR) is known as the most effective geophysical method for exploring underground cavities, although only shallow ones. To study GPR responses to underground cavities, real-scale physical models have been built and GPR surveys conducted over them. To simulate cavities of various sizes at various depths, spheres of polystyrene have been used, since the electric permittivity of polystyrene is similar to that of air. However, the real-scale experiments used only simple cavity shapes because of the expensive construction cost, and the shapes of the cavities cannot be changed once they are built. Both to compare field responses for the physical models with numerical responses and to analyze GPR responses for a greater variety of cavity shapes in numerous environments, we conducted numerical simulation of GPR responses using a three-dimensional (3D) finite-difference time-domain (FDTD) GPR modeling algorithm employing a staggered grid. We first perform numerical modeling for models similar to the physical ones to confirm that the antenna radiation pattern is properly considered in the numerical modeling of GPR responses, which is critical for generating responses similar to field GPR data. GPR responses computed for various cavity shapes in several different environments can then both guide the additional construction of physical cavities and serve to analyze the characteristics of GPR responses.
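A minimal 1D FDTD sketch of the staggered-grid idea is given below. The grid sizes, the soft Gaussian source, and the soil/cavity permittivities are illustrative assumptions; the study's full 3D algorithm with antenna radiation patterns is not reproduced.

```python
import numpy as np

c0, eps0, mu0 = 3e8, 8.854e-12, 4e-7 * np.pi
nz, nt, dz = 400, 800, 0.01            # grid cells, time steps, cell size (m)
dt = dz / (2 * c0)                     # time step satisfying the Courant condition
eps_r = np.full(nz, 9.0)               # host soil permittivity (assumed)
eps_r[250:280] = 1.0                   # air-filled cavity (polystyrene foam ~ 1.03)

Ex, Hy, trace = np.zeros(nz), np.zeros(nz), []
for t in range(nt):
    Hy[:-1] += dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])                    # H update
    Ex[1:-1] += dt / (eps0 * eps_r[1:-1] * dz) * (Hy[1:-1] - Hy[:-2])  # E update
    Ex[20] += np.exp(-((t - 60) / 15.0) ** 2)     # soft Gaussian source "antenna"
    trace.append(Ex[20])               # recorded signal: reflection from the cavity
```

The list `trace` plays the role of a single synthetic radargram trace; repeating the run with the source and receiver moved along the surface would assemble a full B-scan.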
Koh, Min Jung; Park, Eun Jung; Park, Sang Hoon; Jeon, Hea Rim; Kim, Mun-Gyu; Lee, Se-Jin; Kim, Sang Ho; Ok, Si Young; Kim, Soon Im
2014-01-01
Background Neck and shoulder pain is fairly common among adolescents in Korea and constitutes a significant health problem. The aim of this prospective study was to identify the effects of education, in terms of recognition of this issue and posture correction, on the prevalence and severity of neck and shoulder pain in Korean adolescents. Methods A prospective, observational cohort design was used. The 912 students from two academic high schools in the city of Seoul were eligible for the current study and 887 completed it. After a baseline cross-sectional survey, students listened to a lecture about cervical health, focusing on good posture, habits, and stretching exercises to protect the spine, and were encouraged by their teachers to keep the appropriate position. Follow-ups were conducted 3 months later to evaluate the effect of the education. Results The prevalence of neck and shoulder pain decreased by 19.5% (from 82.5 to 66.4%). The baseline mean usual and worst numeric rating scales were 19.9/100 (95% CI, 18.1-21.7) and 31.2/100 (95% CI, 28.7-33.2), respectively. On the follow-up survey, the mean usual and worst numeric rating scales were decreased significantly by 24.1 and 21.7%, respectively, compared with baseline (P < 0.01). Of the 570 students reporting neck and shoulder pain, 16.4% responded that they had experienced improvement during the 3 months. Conclusions Education for cervical health, i.e. recognition of this issue and posture correction, appeared to be effective in decreasing the prevalence and severity of neck and shoulder pain at a 3-month follow-up. PMID:25301193
Pseudoinverse Decoding Process in Delay-Encoded Synthetic Transmit Aperture Imaging.
Gong, Ping; Kolios, Michael C; Xu, Yuan
2016-09-01
Recently, we proposed a new method to improve the signal-to-noise ratio of the prebeamformed radio-frequency data in synthetic transmit aperture (STA) imaging: delay-encoded STA (DE-STA) imaging. In the decoding process of DE-STA, the equivalent STA data were obtained by directly inverting the coding matrix. This is usually regarded as an ill-posed problem, especially under high noise levels, and the pseudoinverse (PI) is usually used instead to obtain a more stable inversion. In this paper, we apply singular value decomposition to the coding matrix to conduct the PI. Our numerical studies demonstrate that the singular values of the coding matrix have a special distribution, i.e., all the values are the same except for the first and last ones. We compare the PI in two cases: the complete PI (CPI), where all the singular values are kept, and the truncated PI (TPI), where the last and smallest singular value is ignored. The PI (both CPI and TPI) DE-STA processes are tested against noise with both numerical simulations and experiments. The CPI and TPI can restore the signals stably, and the noise mainly affects the prebeamformed signals corresponding to the first transmit channel. The difference in the overall enveloped beamformed image qualities between the CPI and TPI is negligible. This demonstrates that DE-STA is a relatively stable encoding and decoding technique. Also, according to the special distribution of the singular values of the coding matrix, we propose a new efficient decoding formula based on the conjugate transpose of the coding matrix. We also compare the computational complexity of the direct inverse and the new formula.
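The CPI/TPI distinction can be sketched with a generic SVD-based pseudoinverse, as below. The random matrix H is an arbitrary stand-in for illustration, not the actual DE-STA coding matrix defined in the paper.

```python
import numpy as np

def decode(H, y, truncate_last=False):
    """Pseudoinverse decoding of encoded data y = H s + noise for a square
    coding matrix H. truncate_last=False gives the complete PI (CPI);
    True gives the truncated PI (TPI), dropping the smallest singular value."""
    U, s, Vt = np.linalg.svd(H)
    k = s.size - 1 if truncate_last else s.size
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ (s_inv * (U.T @ y))

rng = np.random.default_rng(1)
H = rng.standard_normal((8, 8))                   # stand-in coding matrix
s_true = rng.standard_normal(8)
y = H @ s_true + 0.01 * rng.standard_normal(8)    # noisy encoded channel data
for trunc in (False, True):
    err = np.linalg.norm(decode(H, y, truncate_last=trunc) - s_true)
    print("TPI" if trunc else "CPI", "reconstruction error:", round(err, 3))
```

With the nearly flat singular-value spectrum reported above, dropping the last singular value changes the reconstruction only marginally, which is consistent with the negligible CPI/TPI image-quality difference.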
Wavelet and adaptive methods for time dependent problems and applications in aerosol dynamics
NASA Astrophysics Data System (ADS)
Guo, Qiang
Time dependent partial differential equations (PDEs) are widely used as mathematical models of environmental problems. Aerosols are now clearly identified as an important factor in many environmental aspects of climate and radiative forcing processes, as well as in the health effects of air quality. The mathematical models for aerosol dynamics with respect to size distribution are nonlinear partial differential and integral equations, which describe processes of condensation, coagulation and deposition. Simulating the general aerosol dynamic equations in time, particle size and space exhibits serious difficulties because the size dimension ranges from a few nanometers to several micrometers while the spatial dimension is usually described in kilometers. Therefore, it is an important and challenging task to develop efficient techniques for solving time dependent dynamic equations. In this thesis, we develop and analyze efficient wavelet and adaptive methods for the time dependent dynamic equations on particle size and further apply them to the spatial aerosol dynamic systems. A wavelet Galerkin method is proposed to solve the aerosol dynamic equations in time and particle size, since the aerosol distribution changes strongly along the size direction and the wavelet technique handles this very efficiently. Daubechies' wavelets are considered in the study because they possess useful properties such as orthogonality, compact support, and exact representation of polynomials up to a certain degree. Another problem encountered in the solution of the aerosol dynamic equations results from their hyperbolic form due to the condensation growth term. We propose a new characteristic-based fully adaptive multiresolution numerical scheme for solving the aerosol dynamic equation, which combines the attractive advantages of the adaptive multiresolution technique and the method of characteristics. On the theoretical side, the global existence and uniqueness of solutions of continuous time wavelet numerical methods for the nonlinear aerosol dynamics are proved by using Schauder's fixed point theorem and the variational technique. Optimal error estimates are derived for both continuous and discrete time wavelet Galerkin schemes. We further derive reliable and efficient a posteriori error estimates, based on stable multiresolution wavelet bases, and an adaptive space-time algorithm for the efficient solution of linear parabolic differential equations. The adaptive space refinement strategies based on the locality of the corresponding multiresolution processes are proved to converge. Finally, we develop efficient numerical methods by combining the wavelet methods proposed in the previous parts with a splitting technique to solve the spatial aerosol dynamic equations. Wavelet methods along the particle size direction and an upstream finite difference method along the spatial direction are used alternately in each time interval. Numerical experiments are presented to show the effectiveness of the developed methods.
Easing the Transition to High School
ERIC Educational Resources Information Center
Lampert, Joan
2005-01-01
First-year students in high school face numerous pressures and usually have to confront high school finals on their own. It does not have to be this way, as a school outside Chicago, Maine East, demonstrates with its Freshman Advisory program, in which senior students mentor first-year students.
Flexible cellulose nanofibril composite films with reduced hygroscopic capacity
Yan Qing; Ronald Sabo; Zhiyong Cai; Yiqiang Wu
2013-01-01
Cellulose nanofibrils (CNFs), which are generated from abundant, environmentally friendly natural plant resources, display numerous interesting properties such as outstanding mechanical strength, negligible light scattering, and low thermal expansion (Zimmermann et al., 2010). These nanofibers are usually created by mechanical fibrillation or chemical oxidation of pulp...
The Structural Algebra Option: A Discussion Paper.
ERIC Educational Resources Information Center
Kirshner, David
The goal of this paper is to renew interest in the structural option to algebra instruction. Concern for the usual secondary school algebra curriculum related to simplifying expressions, solving equations, and rationalizing numerators and denominators is viewed from three pedagogical approaches: (1) structural approach, (2) empirical approach, and…
Neill, Ushma S.
2006-01-01
Scientists are usually thought to be beyond reproach, but with the recent spate of high-profile ethical transgressions by scientists, the public’s trust in science and scientists is deteriorating. The numerous cases of scientific misconduct that have crossed my desk in the last year leave me disenchanted, disappointed, and disillusioned. PMID:16823470
A possible explanation for foreland thrust propagation
NASA Astrophysics Data System (ADS)
Panian, John; Pilant, Walter
1990-06-01
A common feature of thin-skinned fold and thrust belts is the sequential nature of foreland directed thrust systems. As a rule, younger thrusts develop in the footwalls of older thrusts, the whole sequence propagating towards the foreland in the transport direction. As each new younger thrust develops, the entire sequence is thickened, particularly in the frontal region. The compressive toe region can be likened to an advancing wave: as the mountainous thrust belt advances, the down-surface slope stresses drive thrusts ahead of it, much as a wave drives a surfboard rider. In an attempt to investigate the stresses in the frontal regions of thrust sheets, a numerical method has been devised from the algorithm given by McTigue and Mei [1981]. The algorithm yields a quickly computed approximate solution for the gravity- and tectonic-induced stresses of a two-dimensional homogeneous elastic half-space with an arbitrarily shaped free surface of small slope. A comparison of the numerical method with analytical examples shows excellent agreement. The numerical method was devised because it greatly facilitates the stress calculations and frees one from the restrictive, simple topographic profiles necessary to obtain an analytical solution. The numerical version of the McTigue and Mei algorithm shows that there is a region of increased maximum resolved shear stress, τ, directly beneath the toe of the overthrust sheet. Utilizing the Mohr-Coulomb failure criterion, predicted fault lines are computed. It is shown that they flatten and become horizontal in some portions of this zone of increased τ. Thrust sheets are known to advance upon weak decollement zones. If there is a coincidence of increased τ, a weak rock layer, and a potential fault line parallel to this weak layer, all the elements necessary to initiate a new thrusting event are in place; this combination acts as a nucleating center for a new thrusting event. Therefore, thrusts develop in sequence towards the foreland as a consequence of the stress concentrating ability of the toe of the thrust sheet. The gravity- and tectonic-induced stresses due to the surface topography (usually ignored in previous analyses) of an advancing thrust sheet play a key role in the nature of shallow foreland thrust propagation.
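For readers unfamiliar with the failure criterion used here, the sketch below evaluates the Mohr-Coulomb condition on the critically oriented plane for given principal stresses. The stress values, cohesion, and friction angle are illustrative assumptions, not values from the study.

```python
import numpy as np

def mohr_coulomb_margin(s1, s3, cohesion, phi_deg):
    """Distance to Mohr-Coulomb failure for principal stresses s1 >= s3
    (compression positive). A margin <= 0 means faulting is predicted."""
    phi = np.radians(phi_deg)
    two_theta = np.pi / 2 + phi                  # critical plane orientation
    sn = 0.5 * (s1 + s3) + 0.5 * (s1 - s3) * np.cos(two_theta)   # normal stress
    tau = 0.5 * (s1 - s3) * np.sin(two_theta)                    # shear stress
    strength = cohesion + sn * np.tan(phi)       # Coulomb strength on that plane
    return strength - tau

# Example: a locally increased shear stress beneath the toe shrinks the margin
print(mohr_coulomb_margin(s1=60e6, s3=20e6, cohesion=10e6, phi_deg=30.0))
```

Mapping such a margin over the half-space, with τ taken from the McTigue-Mei stress solution, is the kind of computation that yields the predicted fault lines described above.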
A novel approach to the analysis of squeezed-film air damping in microelectromechanical systems
NASA Astrophysics Data System (ADS)
Yang, Weilin; Li, Hongxia; Chatterjee, Aveek N.; Elfadel, Ibrahim (Abe M.; Ender Ocak, Ilker; Zhang, TieJun
2017-01-01
Squeezed-film damping (SFD) is a phenomenon that significantly affects the performance of micro-electro-mechanical systems (MEMS). The total damping force in MEMS mainly includes a viscous damping force and an elastic damping force. The quality factor (Q factor) is usually used to evaluate damping in MEMS. In this work, we measure the Q factor of a resonator through experiments over a wide range of pressure levels. In fact, experimental characterization of MEMS has some limitations, because it is difficult to conduct experiments at very high vacuum and hard to differentiate the damping mechanisms from the overall Q factor measurements. On the other hand, classical theoretical analysis of SFD is restricted to strong assumptions and simple geometries. In this paper, a novel numerical approach, based on lattice Boltzmann simulations, is proposed to investigate SFD in MEMS. Our method considers the dynamics of the squeezed air flow as well as fluid-solid interactions in MEMS. It is demonstrated that the Q factor can be directly predicted by numerical simulation, and our simulation results agree well with experimental data. Factors that influence SFD, such as pressure, oscillation amplitude, and driving frequency, are investigated separately. Furthermore, the viscous and elastic damping forces are quantitatively compared based on comprehensive simulation. The proposed numerical approach as well as the experimental characterization enables us to reveal the insightful physics of squeezed-film air damping in MEMS.
Integration of Local Observations into the One Dimensional Fog Model PAFOG
NASA Astrophysics Data System (ADS)
Thoma, Christina; Schneider, Werner; Masbou, Matthieu; Bott, Andreas
2012-05-01
The numerical prediction of fog requires a very high vertical resolution of the atmosphere. Owing to the prohibitive computational effort of high resolution three dimensional models, operational fog forecasting is usually done by means of one dimensional fog models. An important condition for a successful fog forecast with one dimensional models is the proper integration of observational data into the numerical simulations. The goal of the present study is to introduce new methods for the consideration of these data in the one dimensional radiation fog model PAFOG. First, it is shown how PAFOG may be initialized with observed visibilities. Second, a nudging scheme is presented for the inclusion of measured temperature and humidity profiles in the PAFOG simulations. The new features of PAFOG have been tested by comparing the model results with observations of the German Meteorological Service. A case study is presented that reveals the importance of including local observations in the model calculations. Numerical results obtained with the modified PAFOG model show a distinct improvement of fog forecasts regarding the times of fog formation and dissipation as well as the vertical extent of the investigated fog events. However, the model results also reveal that a further improvement of PAFOG might be possible if several empirical model parameters are optimized. This tuning can only be realized by comprehensive comparisons of model simulations with corresponding fog observations.
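Generic Newtonian nudging of a model profile toward observations can be sketched as below. PAFOG's actual weighting of the measured temperature and humidity profiles is defined in the paper; the relaxation time scale tau and the profile values here are assumed for illustration.

```python
import numpy as np

def nudge(profile_model, profile_obs, dt, tau=3600.0):
    """Newtonian relaxation of a model profile toward observations.
    tau is the nudging time scale (s); a smaller tau pulls harder."""
    return profile_model + (dt / tau) * (profile_obs - profile_model)

# Example: pull a model temperature profile toward an interpolated sounding
T_model = np.array([284.0, 283.2, 282.9, 281.5])   # K, at model levels
T_obs   = np.array([283.1, 282.8, 282.5, 281.4])   # K, observations on same levels
T_model = nudge(T_model, T_obs, dt=60.0)           # one 60 s model time step
```

Applied every time step, this relaxation keeps the simulated profiles close to the local measurements without overriding the model physics.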
A mathematical model of the passage of an asteroid-comet body through the Earth’s atmosphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaydurov, V., E-mail: shaidurov04@mail.ru; Siberian Federal University, 79 Svobodny pr., 660041 Krasnoyarsk; Shchepanovskaya, G.
In the paper, a mathematical model and a numerical algorithm are proposed for modeling the complex of phenomena which accompany the passage of a friable asteroid-comet body through the Earth's atmosphere: the material ablation, the dissociation of molecules, and the radiation. The proposed model is constructed on the basis of the Navier-Stokes equations for viscous heat-conducting gas, with an additional equation for the motion and propagation of a friable lumpy-dust material in air. The energy equation is modified to account for the relation between its two kinds of energy: the usual energy of the translation of molecules (which defines the temperature and pressure) and the combined energy of their rotation, oscillation, electronic excitation, dissociation, and radiation. For the mathematical model of the atmosphere, the distribution of density, pressure, and temperature with height is taken as for the standard atmosphere. The asteroid-comet body is taken initially as a round body consisting of a friable lumpy-dust material with corresponding density and significant viscosity, both far exceeding those of the atmospheric gas. A numerical algorithm is proposed for solving the initial-boundary problem for the extended system of Navier-Stokes equations. The algorithm combines a semi-Lagrangian approximation for the Lagrangian transport derivatives with a conforming finite element method for the other terms. The implementation of these approaches is illustrated by a numerical example.
The application of the pilot points in groundwater numerical inversion model
NASA Astrophysics Data System (ADS)
Hu, Bin; Teng, Yanguo; Cheng, Lirong
2015-04-01
Numerical inversion modeling has been widely applied in groundwater studies. Compared to traditional forward modeling, inversion modeling offers more room for investigation. Zonation and cell-by-cell inversion are the conventional methods; the pilot point method lies between them. The traditional zonation approach uses software to divide the model into several zones with only a few parameters to be inverted, but the resulting parameter distribution is usually too simple, and the simulation deviates from reality. Cell-by-cell inversion would in theory recover the most realistic parameter distribution, but it requires great computational effort and a large quantity of survey data for geostatistical simulation of the area. In contrast, the pilot point method distributes a set of points throughout the model domains for parameter estimation, and property values are assigned to model cells by kriging, so that the heterogeneity of the parameters within geological units is preserved. It reduces the requirements on geostatistical data for the simulated area and bridges the gap between the two methods above. Pilot points not only save calculation time and improve the goodness of fit, but also reduce the instability of the numerical model caused by large numbers of parameters, among other advantages. In this paper, we use pilot points in a field whose structural heterogeneity and hydraulic parameters were unknown, and we compare the inversion results of the zonation and pilot point methods. Through comparative analysis, we explore the characteristics of pilot points in groundwater inversion modeling. First, the modeler generates an initial spatially correlated field from a geostatistical model based on the description of the case site, using the software Groundwater Vistas 6. Kriging is defined to obtain the values of the field functions (hydraulic conductivity) over the model domain on the basis of their values at measurement and pilot point locations; the pilot points are then assigned to the interpolated field, which has been divided into 4 zones, and a range of disturbance values is added to the inversion targets to calculate the hydraulic conductivity. Third, after the inversion calculation (PEST), the interpolated field minimizes an objective function measuring the misfit between calculated and measured data; finding the optimum parameter values is an optimization problem. From the inversion modeling, the following major conclusions can be drawn: (1) In a field with heterogeneous structure, the results of the pilot point method are more realistic: better fitting of the parameters and more stable numerical simulation (stable residual distribution). Compared to zonation, it better reflects the heterogeneity of the studied field. (2) The pilot point method ensures that each parameter is sensitive and not entirely dependent on the other parameters, which guarantees the relative independence and authenticity of the parameter estimation results. However, it costs more computation time than zonation. Key words: groundwater; pilot point; inverse model; heterogeneity; hydraulic conductivity
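A highly simplified sketch of the pilot point idea, assigning hydraulic-conductivity values to model cells by kriging from a handful of points, is shown below. The Gaussian covariance, its sill and range, and the point values are illustrative assumptions, not the Groundwater Vistas/PEST workflow used in the study.

```python
import numpy as np

def krige_to_grid(xy_pilot, logk_pilot, xy_grid, sill=1.0, rng_m=50.0):
    """Simple kriging of log-conductivity from pilot points to grid cells,
    with an isotropic Gaussian covariance C(h) = sill * exp(-(h/rng_m)^2)
    and the pilot-point mean used as the (assumed known) field mean."""
    def cov(a, b):
        h = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sill * np.exp(-(h / rng_m) ** 2)
    C_pp = cov(xy_pilot, xy_pilot) + 1e-10 * np.eye(len(xy_pilot))  # regularized
    w = np.linalg.solve(C_pp, cov(xy_pilot, xy_grid)).T             # kriging weights
    mean = logk_pilot.mean()
    return mean + w @ (logk_pilot - mean)

pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
logk = np.array([-4.0, -3.5, -4.2, -3.8])      # log10 K at the pilot points
grid = np.array([[50.0, 50.0], [25.0, 75.0]])  # two model-cell centers
print(krige_to_grid(pts, logk, grid))
```

In the inversion loop, an optimizer such as PEST perturbs only the pilot-point values, while the kriging step spreads each perturbation smoothly over the model cells.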
Strong polymer-turbulence interactions in viscoelastic turbulent channel flow.
Dallas, V; Vassilicos, J C; Hewitt, G F
2010-12-01
This paper is focused on the fundamental mechanism(s) of viscoelastic turbulence that lead to the polymer-induced turbulent drag reduction phenomenon. A great challenge in this problem is the computation of viscoelastic turbulent flows, since the understanding of polymer physics is restricted to mechanical models. An effective state-of-the-art numerical method to solve the governing equations for polymers modeled as nonlinear springs, without the artificial assumptions that are usually made, was implemented here in a three-dimensional channel flow geometry. The capability of this algorithm to capture the strong polymer-turbulence dynamical interactions is reflected in the results, which are qualitatively much closer to experimental observations. This allowed a more detailed study of the polymer-turbulence interactions, which yields an enhanced picture of a mechanism resulting from the polymer-turbulence energy transfers.
A tunable acoustic barrier based on periodic arrays of subwavelength slits
NASA Astrophysics Data System (ADS)
Rubio, Constanza; Uris, Antonio; Candelas, Pilar; Belmar, Francisco; Gomez-Lozano, Vicente
2015-05-01
The most usual method of reducing undesirable environmental noise levels during transmission is the use of acoustic barriers. A novel type of acoustic barrier based on sound transmission through subwavelength slits is presented. The system consists of two rows of periodically repeated vertical rigid pickets, separated by slits of subwavelength width and with a misalignment between the rows. Both experimental and numerical analyses are presented. The proposed acoustic barrier can be easily built and is frequency tunable. The results demonstrate that the barrier can be tuned to mitigate a noise band without excessive barrier thickness. The use of this system as an environmental acoustic barrier has certain advantages over those currently in use, from both the constructive and the acoustical points of view.
Modeling of power transmission and stress grading for corona protection
NASA Astrophysics Data System (ADS)
Zohdi, T. I.; Abali, B. E.
2017-11-01
Electrical high voltage (HV) machines are prone to corona discharges, leading to power losses as well as damage to the insulating layer. Many different techniques are applied for corona protection, and computational methods aid in selecting the best design. In this paper we develop a reduced-order 1D model estimating the electric field and temperature distribution of a conductor wrapped with different layers, as is usual for HV machines. Many assumptions and simplifications are involved in this 1D model; therefore, we compare its results quantitatively to a direct numerical simulation in 3D. Both models are transient and nonlinear, offering the possibility of quick estimates in 1D or full computation in 3D at correspondingly higher computational cost. Such tools enable the understanding, evaluation, and optimization of corona shielding systems for multilayered coils.
NASA Astrophysics Data System (ADS)
Dabiri, Arman; Butcher, Eric A.; Nazari, Morad
2017-02-01
Compliant impacts can be modeled using linear viscoelastic constitutive models. While such impact models for realistic viscoelastic materials using integer order derivatives of force and displacement usually require a large number of parameters, compliant impact models obtained using fractional calculus can be advantageous, since such models use fewer parameters and successfully capture the hereditary property. In this paper, we introduce the fractional Chebyshev collocation (FCC) method as an approximation tool for the numerical simulation of several linear fractional viscoelastic compliant impact models, in which the overall coefficient of restitution for the impact is studied as a function of the fractional model parameters for the first time. Other relevant impact characteristics such as hysteresis curves, impact force gradient, and penetration and separation depths are also studied.
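To illustrate the collocation ingredient of the method, the sketch below builds the classical (integer-order) Chebyshev differentiation matrix on Chebyshev points; the fractional-order operators and the impact models themselves are beyond this sketch.

```python
import numpy as np

def cheb(n):
    """Chebyshev points on [-1, 1] and the collocation differentiation matrix D
    (cf. Trefethen, Spectral Methods in MATLAB); D @ f(x) approximates f'(x)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))       # negative-sum trick fixes the diagonal
    return D, x

D, x = cheb(16)
print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))   # spectrally small error
```

Spectral accuracy with few nodes is what makes collocation attractive here; the FCC method replaces D with a matrix representation of the fractional derivative on the same nodes.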
NASA Astrophysics Data System (ADS)
Devrient, M.; Da, X.; Frick, T.; Schmidt, M.
Laser transmission welding is a well known joining technology for thermoplastics. Because of the need for lightweight, cost-effective and green production, thermoplastics are usually filled with glass fibers. These lead to higher absorption and more scattering within the upper joining partner, with a negative influence on the welding process. Here, an experimental method for characterizing the scattering behavior of semi-crystalline thermoplastics filled with short glass fibers is introduced, together with a finite element model of the welding process capable of considering scattering, as well as an analytical model. The experimental data are used for the numerical and analytical investigation of laser transmission welding under consideration of scattering. The scattering effects of several thermoplastics on the calculated temperature fields as well as the weld seam geometries are quantified.
The dynamical conductance of graphene tunnelling structures.
Zhang, Huan; Chan, K S; Lin, Zijing
2011-12-16
The dynamical conductances of graphene tunnelling structures were numerically calculated using the scattering matrix method, with the interaction effect included in a phenomenological approach. The overall single-barrier dynamical conductance is capacitive. Transmission resonances in the single-barrier structure lead to dips in the capacitive imaginary part of the response. This is different from the ac responses of typical semiconductor nanostructures, where transmission resonances usually lead to inductive peaks. The features of the dips depend on the Fermi energy: when the Fermi energy is below half of the barrier height, the dips are sharper; when the Fermi energy is higher than half of the barrier height, the dips are broader. Inductive behaviours can be observed in a double-barrier structure due to the resonances formed by reflection between the two barriers.
Empirical Bayes approach to the estimation of "unsafety": the multivariate regression method.
Hauer, E
1992-10-01
There are two kinds of clues to the unsafety of an entity: its traits (such as traffic, geometry, age, or gender) and its historical accident record. The Empirical Bayes approach to unsafety estimation makes use of both kinds of clues. It requires information about the mean and the variance of the unsafety in a "reference population" of similar entities. The method now in use for this purpose suffers from several shortcomings. First, a very large reference population is required. Second, the choice of reference population is to some extent arbitrary. Third, entities in the reference population usually cannot match the traits of the entity the unsafety of which is estimated. To alleviate these shortcomings the multivariate regression method for estimating the mean and variance of unsafety in reference populations is offered. Its logical foundations are described and its soundness is demonstrated. The use of the multivariate method makes the Empirical Bayes approach to unsafety estimation applicable to a wider range of circumstances and yields better estimates of unsafety. The application of the method to the tasks of identifying deviant entities and of estimating the effect of interventions on unsafety are discussed and illustrated by numerical examples.
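Under the common gamma-Poisson assumptions, the Empirical Bayes combination of the two clues reduces to the shrinkage sketched below. The numbers are illustrative, and the multivariate regression step that produces the reference-population mean and variance is not shown.

```python
def eb_estimate(prior_mean, prior_var, observed_count):
    """Empirical Bayes estimate of expected accident frequency under the usual
    gamma-Poisson assumptions: shrink the observed count toward the
    reference-population mean supplied by the multivariate regression."""
    w = prior_mean / (prior_mean + prior_var)       # weight on the population clue
    return w * prior_mean + (1 - w) * observed_count

# An entity resembling sites averaging 2.0 accidents (variance 1.0), with 5 observed
print(eb_estimate(prior_mean=2.0, prior_var=1.0, observed_count=5))   # -> 3.0
```

The larger the reference-population variance relative to its mean, the less the historical record is shrunk toward the trait-based prediction.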
A weakly-compressible Cartesian grid approach for hydrodynamic flows
NASA Astrophysics Data System (ADS)
Bigay, P.; Oger, G.; Guilcher, P.-M.; Le Touzé, D.
2017-11-01
The present article aims at proposing an original strategy to solve hydrodynamic flows. In the introduction, the motivations for this strategy are developed: it aims at modeling viscous and turbulent flows including complex moving geometries, while avoiding meshing constraints. The proposed approach relies on a weakly-compressible formulation of the Navier-Stokes equations. Unlike most hydrodynamic CFD (Computational Fluid Dynamics) solvers, usually based on implicit incompressible formulations, a fully-explicit temporal scheme is used. A purely Cartesian grid is adopted for numerical accuracy and algorithmic simplicity purposes. This characteristic allows an easy use of Adaptive Mesh Refinement (AMR) methods embedded within a massively parallel framework. Geometries are automatically immersed within the Cartesian grid with an AMR compatible treatment. The method uses an Immersed Boundary Method (IBM) adapted to the weakly-compressible formalism and imposed smoothly through a regularization function, which stands as another originality of this work. All these features have been implemented within an in-house solver based on this WCCH (Weakly-Compressible Cartesian Hydrodynamic) method, which meets the above requirements whilst allowing the use of high-order (> 3) spatial schemes rarely used in existing hydrodynamic solvers. The details of this WCCH method are presented and validated in this article.
Usual coffee intake in Brazil: results from the National Dietary Survey 2008-9.
Sousa, Alessandra Gaspar; da Costa, Teresa Helena Macedo
2015-05-28
Coffee is central to the economy of many developing countries, as well as to the world economy. However, despite the widespread consumption of coffee, there are very few available data showing the usual intake of this beverage. Surveying usual coffee intake is a way of monitoring one aspect of a population's usual dietary intake. Thus, the present study aimed to characterise the usual daily coffee intake in the Brazilian population. We used data from the National Dietary Survey collected in 2008-9 from a probabilistic sample of 34,003 Brazilians aged 10 years and older. The National Cancer Institute method was applied to obtain the usual intake based on two nonconsecutive food diaries, and descriptive statistical analyses were performed by age and sex for Brazil and its regions. The estimated average usual daily coffee intake of the Brazilian population was 163 (SE 2.8) ml. The comparison by sex showed that males had a 12% greater usual coffee intake than females. In addition, the highest intake was recorded among older males. Among the five regions surveyed, the North-East had the highest usual coffee intake (175 ml). The most common method of brewing coffee was filtered/instant coffee (71%), and the main method of sweetening beverages was with sugar (87%). In Brazil, the mean usual coffee intake corresponds to 163 ml, or 1.5 cups/d. Differences in usual coffee intake according to sex and age differed among the five Brazilian regions.
Towards accurate cosmological predictions for rapidly oscillating scalar fields as dark matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ureña-López, L. Arturo; Gonzalez-Morales, Alma X., E-mail: lurena@ugto.mx, E-mail: alma.gonzalez@fisica.ugto.mx
2016-07-01
As we are entering the era of precision cosmology, it is necessary to count on accurate cosmological predictions from any proposed model of dark matter. In this paper we present a novel approach to the cosmological evolution of scalar fields that eases their analytic and numerical analysis at the background and at the linear order of perturbations. The new method makes use of appropriate angular variables that simplify the writing of the equations of motion, and which also show that the usual field variables play a secondary role in the cosmological dynamics. We apply the method to a scalar field endowed with a quadratic potential and revisit its properties as dark matter. Some of the results known in the literature are recovered, and a better understanding of the physical properties of the model is provided. It is confirmed that there exists a Jeans wavenumber k_J, directly related to the suppression of linear perturbations at wavenumbers k > k_J, and which is verified to be k_J = a√(mH). We also discuss some semi-analytical results that are well satisfied by the full numerical solutions obtained from an amended version of the CMB code CLASS. Finally we draw some of the implications that this new treatment of the equations of motion may have for the prediction of cosmological observables from scalar field dark matter models.
Frontal crashworthiness characterisation of a vehicle segment using curve comparison metrics.
Abellán-López, D; Sánchez-Lozano, M; Martínez-Sáez, L
2018-08-01
The objective of this work is to propose a methodology for characterizing the collision behaviour and crashworthiness of a segment of vehicles by selecting the vehicle that best represents that group. It would be useful in the development of deformable barriers to be used in crash tests intended to study vehicle compatibility, as well as for the definition of the representative standard pulses used in numerical simulations or component testing. The characterisation and selection of representative vehicles is based on the objective comparison of the occupant compartment acceleration and barrier force pulses obtained during crash tests, using appropriate comparison metrics. This method is complemented with another, based exclusively on the comparison of a few characteristic parameters of crash behaviour obtained from the previous curves. The method has been applied to different vehicle groups, using test data from a sample of vehicles. During this application, the performance of several metrics usually employed in the validation of simulation models was analysed, and the most efficient ones were selected for the task. The methodology finally defined is useful for vehicle segment characterisation, taking into account aspects of crash behaviour related to the shape of the curves that are difficult to represent by simple numerical parameters, and it may be tuned in future works when applied to larger and different samples.
SPH non-Newtonian Model for Ice Sheet and Ice Shelf Dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tartakovsky, Alexandre M.; Pan, Wenxiao; Monaghan, Joseph J.
2012-07-07
We propose a new three-dimensional smoothed particle hydrodynamics (SPH) non-Newtonian model to study coupled ice sheet and ice shelf dynamics. Most existing ice sheet numerical models use a grid-based Eulerian approach and are usually restricted to shallow ice sheet and ice shelf approximations of the momentum conservation equation. SPH, a fully Lagrangian particle method, solves the full momentum conservation equation. The SPH method also allows modeling of free-surface flows, large material deformation, and material fragmentation without employing complex front-tracking schemes, and does not require re-meshing. As a result, SPH codes are highly scalable. The numerical accuracy of the proposed SPH model is first verified by simulating a plane shear flow with a free surface and the propagation of a blob of ice along a horizontal surface. Next, the SPH model is used to investigate the grounding line dynamics of the ice sheet/shelf. The steady position of the grounding line, obtained from our SPH simulations, is in good agreement with laboratory observations for a wide range of bedrock slopes, ice-to-fluid density ratios, and fluxes. We examine the effect of the non-Newtonian behavior of ice on the grounding line dynamics. The non-Newtonian constitutive model is based on Glen's law for a creeping flow of polycrystalline ice. Finally, we investigate the effect of the bedrock geometry on the steady-state position of the grounding line.
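Two ingredients of such a model can be sketched compactly: the standard cubic spline SPH kernel and an effective viscosity derived from Glen's law. The rate factor A below is an assumed textbook value for temperate ice; the paper's actual kernel and constitutive parameters may differ.

```python
import numpy as np

def w_cubic_spline(r, h):
    """Standard 3D cubic spline SPH kernel with support radius 2h."""
    q = np.asarray(r, dtype=float) / h
    sigma = 1.0 / (np.pi * h ** 3)                   # 3D normalization constant
    return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                   np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))

def glen_viscosity(eff_strain_rate, A=2.4e-24, n=3.0):
    """Effective viscosity (Pa s) from Glen's law for polycrystalline ice,
    eta = 0.5 * A**(-1/n) * strain_rate**((1-n)/n); A is an assumed rate factor."""
    return 0.5 * A ** (-1.0 / n) * eff_strain_rate ** ((1.0 - n) / n)

print(w_cubic_spline([0.0, 0.5, 1.5], h=1.0), glen_viscosity(1e-9))
```

In an SPH step, each particle's stress uses the Glen-law viscosity evaluated at its local effective strain rate, and the kernel weights how neighboring particles contribute to the discretized momentum equation.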
Natural Medicine: Wilderness Experience Outcomes for Combat Veterans
ERIC Educational Resources Information Center
Dietrich, Zachary Clayborne; Joye, Shauna Wilson; Garcia, Joseph Amos
2015-01-01
Wilderness Experience Programs (WEPs) have been shown to enhance psychological well-being for numerous populations. However, among veteran populations, these studies have historically evaluated programs that are short-term experiences, usually less than 1 week. The current research sought to evaluate a WEP for post-9/11 combat veterans engaging in…
A DIY Ultrasonic Signal Generator for Sound Experiments
ERIC Educational Resources Information Center
Riad, Ihab F.
2018-01-01
Many physics departments around the world have electronic and mechanical workshops attached to them that can help build experimental setups and instruments for research and the training of undergraduate students. The workshops are usually run by experienced technicians and equipped with expensive lathing, computer numerical control (CNC) machines,…
Why Third World Urban Employers Usually Prefer Men.
ERIC Educational Resources Information Center
Anker, Richard; Hein, Catherine
1985-01-01
Case studies provide evidence as to why Third World employers generally prefer male workers and consider certain jobs to be more suitable for men, and other jobs, much less numerous, to be more suitable for women. The authors also draw a number of distinctions between stereotype and fact. (Author/CT)
The impact of spray adjuvants on solution physical properties and spray droplet size
USDA-ARS?s Scientific Manuscript database
Over the past several years, numerous anecdotes from aerial applicators have surfaced indicating observations of increased numbers of fine droplets in the applied spray clouds, usually associated with tank mixtures containing crop oil concentrates and foliar fertilizers. Efforts were made to...
DOT National Transportation Integrated Search
1991-04-01
Results from vehicle computer simulations usually take the form of numeric data or graphs. While these graphs provide the investigator with insight into vehicle behavior, it may be difficult to use them to assess complex vehicle motion. C...
A D-Estimator for Single-Case Designs
ERIC Educational Resources Information Center
Shadish, William; Hedges, Larry; Pustejovsky, James; Rindskopf, David
2012-01-01
Over the last 10 years, numerous authors have proposed effect size estimators for single-case designs. None, however, has been shown to be equivalent to the usual between-groups standardized mean difference statistic, sometimes called d. The present paper remedies that omission. Most effect size estimators for single-case designs use the…
Gainesville's urban forest structure and composition
Francisco Francisco Escobedo; Jennifer A. Seitz; Wayne Zipperer
2009-01-01
The urban forest provides a community numerous benefits. The urban forest is composed of a mix of native and non-native species introduced by people managing this forest and by residents. Because they usually contain non-native species, many urban forests often have greater species diversity than forests in the surrounding natural...
While excess flow valves (EFV) are in extensive service and have prevented numerous pipe or hose breaks from becoming much more serious incidents, experience shows that in some cases the EFV did not perform as intended, usually because of misapplication.
Li, Ying; Shi, Xiaohu; Liang, Yanchun; Xie, Juan; Zhang, Yu; Ma, Qin
2017-01-21
RNAs have been found to carry diverse functionalities in nature. Inferring the similarity between two given RNAs is a fundamental step toward understanding and interpreting their functional relationship. The majority of functional RNAs show conserved secondary structures rather than sequence conservation, so algorithms relying on sequence-based features usually have limited prediction performance; integrating RNA structure features is therefore critical for RNA analysis. Existing algorithms mainly fall into two categories, alignment-based and alignment-free, with alignment-free RNA comparison algorithms usually having lower time complexity. Here, an alignment-free RNA comparison algorithm is proposed in which a novel numerical representation, RNA-TVcurve (triple vector curve representation), of the RNA sequence and its corresponding secondary structure features is provided. A multi-scale similarity score of two given RNAs is then computed based on the wavelet decomposition of their numerical representations. In support of RNA mutation and phylogenetic analysis, a web server (RNA-TVcurve) was designed based on this alignment-free RNA comparison algorithm. It provides three functional modules: 1) visualization of the numerical representation of RNA secondary structure; 2) detection of single-point mutations based on secondary structure; and 3) comparison of pairwise and multiple RNA secondary structures. The input to the web server is the RNA primary sequences, while the corresponding secondary structures are optional; for primary sequences alone, the web server can compute the secondary structures using the free-energy minimization algorithm of the RNAfold tool from the Vienna RNA package. RNA-TVcurve is the first integrated web server, based on an alignment-free method, to deliver a suite of RNA analysis functions including visualization, mutation analysis and multiple RNA structure comparison. Comparison results with two popular RNA comparison tools, RNApdist and RNAdistance, show that RNA-TVcurve can efficiently capture subtle relationships among RNAs for mutation detection and non-coding RNA classification. All the relevant results are shown in an intuitive graphical manner and can be freely downloaded from the server. RNA-TVcurve, along with test examples and detailed documentation, is available at: http://ml.jlu.edu.cn/tvcurve/ .
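The multi-scale scoring idea can be sketched with PyWavelets as below. Averaging per-level correlations, the db4 wavelet, the decomposition depth, and the stand-in curves are illustrative assumptions, not the exact RNA-TVcurve score defined in the paper.

```python
import numpy as np
import pywt

def multiscale_similarity(curve_a, curve_b, wavelet="db4", level=4):
    """Multi-scale similarity of two equal-length numerical RNA curves:
    correlate wavelet coefficients level by level, then average the scores."""
    ca = pywt.wavedec(curve_a, wavelet, level=level)
    cb = pywt.wavedec(curve_b, wavelet, level=level)
    return float(np.mean([np.corrcoef(a, b)[0, 1] for a, b in zip(ca, cb)]))

t = np.linspace(0, 1, 256)
curve1 = np.sin(8 * np.pi * t)                    # stand-in numerical representation
curve2 = curve1 + 0.1 * np.random.default_rng(0).normal(size=256)
print(multiscale_similarity(curve1, curve2))      # close to 1 for similar curves
```

Because each decomposition level isolates a different scale, a single-point structural mutation perturbs mainly the fine-scale coefficients, which is what makes such scores sensitive to subtle differences.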
Weather models as virtual sensors to data-driven rainfall predictions in urban watersheds
NASA Astrophysics Data System (ADS)
Cozzi, Lorenzo; Galelli, Stefano; Pascal, Samuel Jolivet De Marc; Castelletti, Andrea
2013-04-01
Weather and climate predictions are a key element of urban hydrology, where they are used to inform water management and assist in flood warning delivery. Indeed, the modelling of the very fast dynamics of urbanized catchments can be substantially improved by the use of weather/rainfall predictions. For example, in the Singapore Marina Reservoir catchment, runoff processes have a very short time of concentration (roughly one hour), so observational data are nearly useless for runoff predictions and weather predictions are required. Unfortunately, radar nowcasting methods do not allow long-term weather predictions, whereas numerical models are limited by their coarse spatial scale. Moreover, numerical models are usually poorly reliable because of the fast motion and limited spatial extension of rainfall events. In this study we investigate the combined use of data-driven modelling techniques and weather variables observed/simulated with a numerical model as a way to improve rainfall prediction accuracy and lead time in the Singapore metropolitan area. To explore the feasibility of the approach, we use a Weather Research and Forecasting (WRF) model as a virtual sensor network for the input variables (the states of the WRF model) to a machine learning rainfall prediction model. More precisely, we combine an input variable selection method and a non-parametric tree-based model to characterize the empirical relation between the rainfall measured at the catchment level and all possible weather input variables provided by the WRF model. We explore different lead times to evaluate the model reliability for longer-term predictions, as well as different time lags to see how past information could improve the results. Results show that the proposed approach allows a significant improvement of the prediction accuracy of the WRF model over the Singapore urban area.
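A generic stand-in for the input selection and tree-based modelling step is sketched below with scikit-learn. The synthetic arrays replace the WRF "virtual sensor" states and the observed rainfall, and the specific estimator is an assumption rather than the authors' exact choice.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

# Hypothetical data: rows = hours, columns = WRF state variables acting as
# "virtual sensors" (humidity, winds, instability indices, ... at several lags)
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 40))                  # stand-in WRF-derived inputs
y = rng.gamma(2.0, 1.0, size=2000)               # stand-in observed rainfall

model = ExtraTreesRegressor(n_estimators=300, random_state=0).fit(X, y)
ranked = np.argsort(model.feature_importances_)[::-1]
print("top candidate inputs:", ranked[:5])       # keep only the informative states
```

Ranking candidate inputs by importance and refitting on the retained subset is one common way to realize the input-variable-selection step described above.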
The formation of granular fronts in debris flow - A combined experimental-numerical study
NASA Astrophysics Data System (ADS)
Leonardi, Alessandro; Cabrera, Miguel; Wittel, Falk K.; Kaitna, Roland; Mendoza, Miller; Wu, Wei; Herrmann, Hans J.
2015-04-01
Granular fronts are amongst the most spectacular features of debris flows, and are one of the reasons why such events are associated with a strong destructive power. They are usually believed to be the result of the convective mechanism of the debris flow combined with internal size segregation of the grains. However, knowledge about the conditions leading to the formation of a granular front is not up to date. We present a combined experimental-numerical study that aims at providing insight into the phenomenon. A stationary, long-lived avalanche is created within a rotating drum. In order to mimic the composition of an actual debris flow, the material is composed of a mixture of a plastic fluid, obtained with water and kaolin powder, and a collection of monodisperse spherical particles heavier than the fluid. By tuning the material properties and the drum settings, we are able to reproduce and control the formation of a granular front. To gain insight into the internal mechanism, the same scenario is replicated in a numerical environment, using a coupling technique between a discrete solver for the particles, the Discrete Element Method, and a continuum solver for the plastic fluid, the Lattice-Boltzmann Method. The simulations compare well with the experiments and show the internal reorganization of the material transport. The formation of a granular front is shown to be favored by a higher drum rotational speed, which in turn forces a higher shear rate on the particles, breaks their internal organization, and counteracts their natural tendency to settle. Starting from dimensional analysis, we generalize the obtained results and draw implications for debris flow research.
Integral equation approach to time-dependent kinematic dynamos in finite domains
NASA Astrophysics Data System (ADS)
Xu, Mingtian; Stefani, Frank; Gerbeth, Gunter
2004-11-01
The homogeneous dynamo effect is at the root of cosmic magnetic field generation. With only a very few exceptions, the numerical treatment of homogeneous dynamos is carried out in the framework of the differential equation approach. The present paper tries to facilitate the use of integral equations in dynamo research. Apart from the pedagogical value to illustrate dynamo action within the well-known picture of the Biot-Savart law, the integral equation approach has a number of practical advantages. The first advantage is its proven numerical robustness and stability. The second and perhaps most important advantage is its applicability to dynamos in arbitrary geometries. The third advantage is its intimate connection to inverse problems relevant not only for dynamos but also for technical applications of magnetohydrodynamics. The paper provides the first general formulation and application of the integral equation approach to time-dependent kinematic dynamos, with stationary dynamo sources, in finite domains. The time dependence is restricted to the magnetic field, whereas the velocity or corresponding mean-field sources of dynamo action are supposed to be stationary. For the spherically symmetric α2 dynamo model it is shown how the general formulation is reduced to a coupled system of two radial integral equations for the defining scalars of the poloidal and toroidal field components. The integral equation formulation for spherical dynamos with general stationary velocity fields is also derived. Two numerical examples—the α2 dynamo model with radially varying α and the Bullard-Gellman model—illustrate the equivalence of the approach with the usual differential equation method. The main advantage of the method is exemplified by the treatment of an α2 dynamo in rectangular domains.
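To connect with the Biot-Savart picture invoked above, the sketch below evaluates the Biot-Savart integral for a discretized current distribution. The crude current ring and its discretization are illustrative assumptions, far simpler than the dynamo integral equations of the paper.

```python
import numpy as np

def biot_savart(r_obs, r_src, j_src, dV):
    """Magnetic field at r_obs from a discretized current density j_src
    sampled at points r_src (Biot-Savart law, SI units)."""
    mu0 = 4e-7 * np.pi
    d = r_obs[None, :] - r_src                     # (N, 3) separation vectors
    contrib = np.cross(j_src, d) / np.linalg.norm(d, axis=1)[:, None] ** 3
    return mu0 / (4 * np.pi) * dV * contrib.sum(axis=0)

# Example: a unit-radius ring of azimuthal current, field evaluated on the axis
phi = np.linspace(0, 2 * np.pi, 200, endpoint=False)
r_src = np.c_[np.cos(phi), np.sin(phi), np.zeros_like(phi)]
j_src = np.c_[-np.sin(phi), np.cos(phi), np.zeros_like(phi)]   # azimuthal current
print(biot_savart(np.array([0.0, 0.0, 0.5]), r_src, j_src, dV=2 * np.pi / 200))
```

In the integral equation approach, the dynamo sources play the role of j_src, and self-consistency between the induced field and the currents it drives is what closes the system.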
Do Evidence-Based Youth Psychotherapies Outperform Usual Clinical Care? A Multilevel Meta-Analysis
Weisz, John R.; Kuppens, Sofie; Eckshtain, Dikla; Ugueto, Ana M.; Hawley, Kristin M.; Jensen-Doss, Amanda
2013-01-01
Context Research across four decades has produced numerous empirically-tested evidence-based psychotherapies (EBPs) for youth psychopathology, developed to improve upon usual clinical interventions. Advocates argue that these should replace usual care; but do the EBPs produce better outcomes than usual care? Objective This question was addressed in a meta-analysis of 52 randomized trials directly comparing EBPs to usual care. Analyses assessed the overall effect of EBPs vs. usual care, and candidate moderators; multilevel analysis was used to address the dependency among effect sizes that is common but typically unaddressed in psychotherapy syntheses. Data Sources The PubMed, PsychINFO, and Dissertation Abstracts International databases were searched for studies from January 1, 1960 – December 31, 2010. Study Selection 507 randomized youth psychotherapy trials were identified. Of these, the 52 studies that compared EBPs to usual care were included in the meta-analysis. Data Extraction Sixteen variables (participant, treatment, and study characteristics) were extracted from each study, and effect sizes were calculated for all EBP versus usual care comparisons. Data Synthesis EBPs outperformed usual care. Mean effect size was 0.29; the probability was 58% that a randomly selected youth receiving an EBP would be better off after treatment than a randomly selected youth receiving usual care. Three variables moderated treatment benefit: Effect sizes decreased for studies conducted outside North America, for studies in which all participants were impaired enough to qualify for diagnoses, and for outcomes reported by people other than the youths and parents in therapy. For certain key groups (e.g., studies using clinically referred samples and diagnosed samples), significant EBP effects were not demonstrated. Conclusions EBPs outperformed usual care, but the EBP advantage was modest and moderated by youth, location, and assessment characteristics. There is room for improvement in EBPs, both in the magnitude and range of their benefit, relative to usual care. PMID:23754332
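The reported 58% probability follows from the standardized mean difference under normal assumptions, as the one-liner below verifies.

```python
from scipy.stats import norm

def prob_superiority(d):
    """P(a random treated youth outscores a random control) for effect size d,
    assuming normal outcome distributions with equal variances."""
    return norm.cdf(d / 2 ** 0.5)

print(round(prob_superiority(0.29), 2))   # 0.58, matching the value reported above
```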
NASA Astrophysics Data System (ADS)
Rhazi, Dilal
In the field of aeronautics, reducing the harmful effects of noise is a major concern at the international level and justifies the call for further research, particularly in Canada, where aeronautics is a key economic sector operating in a context of global competition. An aircraft sidewall structure is usually of double-wall construction, with a curved ribbed metallic skin and a lightweight composite or sandwich trim separated by a cavity filled with a noise control treatment. The latter is of great importance in the transport industry and continues to be of interest in many engineering applications. However, the insertion loss of a noise control treatment depends on the excitation of the supporting structure. In particular, turbulent boundary layer excitation is of interest to several industries. This excitation is difficult to simulate in laboratory conditions, given the prohibitive costs and difficulties associated with wind tunnel and in-flight tests. Numerical simulation is the only practical way to predict the response to such excitations and to analyze the effects of design changes on that response. Other kinds of excitation encountered in industry are the monopole, rain on the roof, and the diffuse acoustic field. Deterministic methods can calculate the spectral response of the system at each point. The best known are numerical methods such as the finite element and boundary element methods. These methods generally apply at low frequency, where the modal behavior of the structure dominates. However, the upper frequency limit of these methods cannot be defined in a strict way, because it is related to data-processing capacity and to the nature of the mechanical system under study. With these challenges in mind, and given the limitations of the main numerical codes on the market, manufacturers have expressed the need for simple models available as early as the preliminary design stage. This thesis is an attempt to address this need. A numerical tool based on two approaches (wave and modal) is developed. It allows fast computation of the vibroacoustic response of multilayer structures over the full frequency spectrum and for various kinds of excitation (monopole, rain on the roof, diffuse acoustic field, turbulent boundary layer). A comparison between results obtained with the developed model, experimental tests, and the finite element method is given and discussed. The results are very promising with respect to the potential of such a model for industrial use as a prediction tool, and even for design. The code can also be integrated within an SEA (Statistical Energy Analysis) strategy in order to model a full vehicle, in particular by computing the insertion loss and the equivalent damping added by the sound package. Keywords: Transfer Matrix Method, Wave Approach, Turbulent Boundary Layer, Rain on the Roof, Monopole, Insertion Loss, Double Wall, Sound Package.
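The Transfer Matrix Method named in the keywords can be sketched compactly for the simplest case, normal incidence through fluid-equivalent layers. This is a minimal illustration under that assumption, not the thesis' full wave/modal tool, and the material values below are placeholders:

```python
import numpy as np

def layer_matrix(rho, c, d, omega):
    """Normal-incidence transfer matrix of a fluid layer for (pressure, velocity)."""
    k = omega / c
    Z = rho * c
    return np.array([[np.cos(k * d),      1j * Z * np.sin(k * d)],
                     [1j * np.sin(k * d) / Z, np.cos(k * d)]])

def transmission_loss(layers, omega, rho0=1.21, c0=343.0):
    """TL of a stack of fluid-equivalent layers between two air half-spaces."""
    T = np.eye(2, dtype=complex)
    for rho, c, d in layers:
        T = T @ layer_matrix(rho, c, d, omega)
    Z0 = rho0 * c0
    tau = 2.0 / (T[0, 0] + T[0, 1] / Z0 + Z0 * T[1, 0] + T[1, 1])
    return -20.0 * np.log10(np.abs(tau))

# e.g. a 50 mm porous blanket modeled as an equivalent fluid (toy values)
freqs = np.linspace(100, 5000, 50)
tl = [transmission_loss([(30.0, 200.0, 0.05)], 2 * np.pi * f) for f in freqs]
```

Oblique incidence, elastic and poroelastic layers, and the coupling to the excitation spectra (TBL, diffuse field) add further structure on top of this basic chain of 2x2 matrices.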
Mode I Fracture Toughness of Rock - Intrinsic Property or Pressure-Dependent?
NASA Astrophysics Data System (ADS)
Stoeckhert, F.; Brenne, S.; Molenda, M.; Alber, M.
2016-12-01
The mode I fracture toughness of rock is usually regarded as an intrinsic material parameter independent of pressure. However, most fracture toughness laboratory tests are conducted only at ambient pressure. To investigate the fracture toughness of rock under elevated pressures, sleeve fracturing laboratory experiments were conducted with various rock types and a new numerical method was developed for the evaluation of these experiments. The sleeve fracturing experiments involve rock cores with central axial boreholes that are placed in a Hoek triaxial pressure cell to apply an isostatic confining pressure. A polymer tube is pressurized inside these hollow rock cylinders until they fail by tensile fracturing. Numerical simulations incorporating fracture mechanical models are used to obtain a relation between tensile fracture propagation and injection pressure. These simulations indicate that the magnitude of the injection pressure at specimen failure depends only on the fracture toughness of the tested material, the specimen dimensions, and the magnitude of external loading. The latter two are known parameters in the experiments. Thus, the fracture toughness can be calculated from the injection pressure recorded at specimen breakdown. All specimens had a borehole diameter to outer diameter ratio of about 1:10, with outer diameters of 40 and 62 mm. The length of the specimens was about two times the diameter. Maximum external loading was 7.5 MPa, corresponding to maximum injection pressures at specimen breakdown of about 100 MPa. The sample set tested in this work includes Permian and Carboniferous sandstones, Jurassic limestones, Triassic marble, Permian volcanic rocks and Devonian slate from Central Europe. The fracture toughness values determined from the sleeve fracturing experiments without confinement using the new numerical method were found to be in good agreement with those from Chevron bend testing according to the ISRM suggested methods. At elevated confining pressures, the results indicate a significant positive correlation between fracture toughness and confining pressure for most tested rock types.
NASA Astrophysics Data System (ADS)
Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.
2018-03-01
Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural dynamic models.
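A minimal sketch of the Karhunen-Loève step, assuming a squared-exponential covariance for the random field (the covariance choice and all values below are illustrative; in the paper the KL coordinates are the quantities estimated by the sensitivity-based updating):

```python
import numpy as np

def kl_expansion(x, sigma, corr_len, n_terms):
    """Discrete Karhunen-Loeve expansion of a 1D Gaussian random field.

    Builds a squared-exponential covariance on the grid x and returns the
    leading eigenpairs; the field is f(x) = mean + sum_k sqrt(l_k) e_k(x) xi_k.
    """
    C = sigma**2 * np.exp(-(x[:, None] - x[None, :])**2 / (2 * corr_len**2))
    lam, phi = np.linalg.eigh(C)
    idx = np.argsort(lam)[::-1][:n_terms]        # keep the largest eigenvalues
    return lam[idx], phi[:, idx]

x = np.linspace(0.0, 1.0, 200)
lam, phi = kl_expansion(x, sigma=0.1, corr_len=0.2, n_terms=5)
xi = np.random.randn(5)                           # KL coordinates to be updated
field = 1.0 + phi @ (np.sqrt(lam) * xi)           # e.g. relative bending rigidity
```

Truncating the expansion at a handful of terms is what turns the distributed, non-homogeneous parameter into a small vector of updatable coefficients.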
A Method to Constrain Mass and Spin of GRB Black Holes within the NDAF Model
NASA Astrophysics Data System (ADS)
Liu, Tong; Xue, Li; Zhao, Xiao-Hong; Zhang, Fu-Wen; Zhang, Bing
2016-04-01
Black holes (BHs) hide themselves behind various astronomical phenomena and their properties, i.e., mass and spin, are usually difficult to constrain. One leading candidate for the central engine model of gamma-ray bursts (GRBs) invokes a stellar mass BH and a neutrino-dominated accretion flow (NDAF), with the relativistic jet launched due to neutrino-anti-neutrino annihilations. Such a model gives rise to a matter-dominated fireball, and is suitable to interpret GRBs with a dominant thermal component with a photospheric origin. We propose a method to constrain BH mass and spin within the framework of this model and apply the method to the thermally dominant GRB 101219B, whose initial jet launching radius, r0, is constrained from the data. Using our numerical model of NDAF jets, we estimate the following constraints on the central BH: mass MBH ~ 5-9 M⊙, spin parameter a* ≳ 0.6, and disk mass 3 M⊙ ≲ Mdisk ≲ 4 M⊙. Our results also suggest that the NDAF model is a competitive candidate for the central engine of GRBs with a strong thermal component.
Shell-model method for Gamow-Teller transitions in heavy deformed odd-mass nuclei
NASA Astrophysics Data System (ADS)
Wang, Long-Jun; Sun, Yang; Ghorui, Surja K.
2018-04-01
A shell-model method for calculating Gamow-Teller (GT) transition rates in heavy deformed odd-mass nuclei is presented. The method is developed within the framework of the projected shell model. To meet the computational requirements that arise when many multi-quasiparticle configurations are included in the basis, a numerical advancement based on the Pfaffian formula is introduced. With this new many-body technique, it becomes feasible to perform state-by-state calculations for the GT nuclear matrix elements of β-decay and electron-capture processes, including those at high excitation energies in heavy nuclei, which are usually deformed. The first results, β⁻ decays of the well-deformed A = 153 neutron-rich nuclei, are shown as an example. The known log(ft) data corresponding to the B(GT⁻) decay rates of the ground state of 153Nd to the low-lying states of 153Pm are well described. It is further shown that the B(GT) distributions can have a strong dependence on the detailed microscopic structure of the relevant states of both the parent and daughter nuclei.
Why Multivariate Methods Are Usually Vital in Research: Some Basic Concepts.
ERIC Educational Resources Information Center
Thompson, Bruce
The present paper suggests that multivariate methods ought to be used more frequently in behavioral research and explores the potential consequences of failing to use multivariate methods when these methods are appropriate. The paper explores in detail two reasons why multivariate methods are usually vital. The first is that they limit the…
Numerical shockwave anomalies in presence of hydraulic jumps in the SWE with variable bed elevation.
NASA Astrophysics Data System (ADS)
Navas-Montilla, Adrian; Murillo, Javier
2017-04-01
When solving the shallow water equations, appropriate numerical solvers must allow energy-dissipative solutions in the presence of steady and unsteady hydraulic jumps. Hydraulic jumps are present in surface flows and may produce significant morphological changes. Unfortunately, it has been documented that some numerical anomalies may appear. These anomalies are the incorrect positioning of steady jumps and the presence of a spurious spike of discharge inside the cell containing the jump, produced by a non-linearity of the Hugoniot locus connecting the states at both sides of the jump. This problem remains unresolved in the context of Godunov's schemes applied to shallow flows. The issue is usually ignored, as it does not affect the solution in steady cases. However, it produces undesirable spurious oscillations in transient cases that can lead to misleading conclusions when moving to realistic scenarios. Using spike-reducing techniques based on the construction of interpolated fluxes, it is possible to define numerical methods including discontinuous topography that reduce the presence of the aforementioned numerical anomalies. References:
T. W. Roberts, The behavior of flux difference splitting schemes near slowly moving shock waves, J. Comput. Phys., 90 (1990) 141-160.
Y. Stiriba, R. Donat, A numerical study of postshock oscillations in slowly moving shock waves, Comput. Math. with Appl., 46 (2003) 719-739.
E. Johnsen, S. K. Lele, Numerical errors generated in simulations of slowly moving shocks, Center for Turbulence Research, Annual Research Briefs, (2008) 1-12.
D. W. Zaide, P. L. Roe, Flux functions for reducing numerical shockwave anomalies, ICCFD7, Big Island, Hawaii, (2012) 9-13.
D. W. Zaide, Numerical Shockwave Anomalies, PhD thesis, Aerospace Engineering and Scientific Computing, University of Michigan, 2012.
A. Navas-Montilla, J. Murillo, Energy balanced numerical schemes with very high order. The Augmented Roe Flux ADER scheme. Application to the shallow water equations, J. Comput. Phys. 290 (2015) 188-218.
A. Navas-Montilla, J. Murillo, Asymptotically and exactly energy balanced augmented flux-ADER schemes with application to hyperbolic conservation laws with geometric source terms, J. Comput. Phys. 317 (2016) 108-147.
J. Murillo, A. Navas-Montilla, A comprehensive explanation and exercise of the source terms in hyperbolic systems using Roe type solutions. Application to the 1D-2D shallow water equations, Advances in Water Resources 98 (2016) 70-96.
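For concreteness, here is a minimal first-order Godunov-type solver of the general class in which such anomalies arise; it uses the simple Rusanov flux over flat topography and is a generic sketch, not the authors' spike-reducing interpolated-flux method:

```python
import numpy as np

g = 9.81

def flux(h, q):
    """Physical flux of the 1D shallow water equations, U = (h, q = h*u)."""
    u = q / h
    return np.array([q, q * u + 0.5 * g * h**2])

def rusanov_step(h, q, dx, dt):
    """One first-order Godunov step with the Rusanov (local Lax-Friedrichs) flux."""
    U = np.array([h, q])
    F = np.array([flux(h[i], q[i]) for i in range(len(h))]).T
    c = np.abs(q / h) + np.sqrt(g * h)               # wave-speed bound per cell
    a = np.maximum(c[:-1], c[1:])                    # per interface
    Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * a * (U[:, 1:] - U[:, :-1])
    Un = U.copy()
    Un[:, 1:-1] -= dt / dx * (Fi[:, 1:] - Fi[:, :-1])
    return Un[0], Un[1]

# dam-break initial data; schemes of this family can develop spurious states
# inside the cell containing a slowly moving bore
x = np.linspace(0.0, 10.0, 400); dx = x[1] - x[0]
h = np.where(x < 5.0, 2.0, 1.0); q = np.zeros_like(x)
for _ in range(200):
    h, q = rusanov_step(h, q, dx, dt=0.4 * dx / 7.0)
```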
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klaiman, Shachar; Gilary, Ido; Moiseyev, Nimrod
Analytical expressions for the resonances of the long-range potential (LRP), V(r) = a/r - b/r², as a function of the Hamiltonian parameters were derived by Doolen a long time ago [Int. J. Quant. Chem. 14, 523 (1979)]. Here we show that converged numerical results are obtained by applying the shifted complex scaling and the smooth-exterior scaling (SES) methods rather than the usual complex coordinate method (i.e., complex scaling). The narrow and broad shape-type resonances are shown to be localized inside or over the potential barrier and not inside the potential well. Therefore, the resonances for Doolen LRPs are not associated with tunneling through the potential barrier as one might expect. The fact that the SES provides a universal reflection-free absorbing potential is, in particular, important in view of future applications. In particular, it is most convenient to calculate the molecular autoionizing resonances by adding one-electron complex absorbing potentials into the codes of the available quantum molecular electronic packages.
Zheng, Yongping; Zhang, Tingwei; Wu, Songjie; Zhang, Jue; Fang, Jing
2018-01-01
Molecularly imprinted polymer (MIP) films prepared by bulk polymerization suffer from numerous deficiencies, including poor mass transfer ability and difficulty in controlling reaction rate and film thickness, which usually result in poor repeatability. In contrast, films synthesized by electropolymerization benefit from high reproducibility and from the simplicity and rapidity of their preparation. In the present study, an Au film served simultaneously as the refractive-index-sensitive metal film that couples with the light leaking out of the optical fiber core and as the electrode for electropolymerizing the MIP film. The manufactured probe exhibited satisfactory sensitivity and specificity. Furthermore, the surface morphology and functional groups of the synthesized MIP film were characterized by Atomic Force Microscopy (AFM) and Fourier transform infrared microspectroscopy (FTIR) for further insight into the adsorption and desorption processes. Given its low cost, label-free operation, simple preparation process and fast response, this method has potential application in monitoring substances in complicated real samples for out-of-lab testing in the future. PMID:29522472
Numerical prediction of pollutant dispersion and transport in an atmospheric boundary layer
NASA Astrophysics Data System (ADS)
Zeoli, Stéphanie; Bricteux, Laurent; Mech. Eng. Dpt. Team
2014-11-01
The ability to accurately predict concentration levels of air pollutants released from point sources is required in order to determine their environmental impact. A wall-modeled large-eddy simulation (WMLES) of the atmospheric boundary layer (ABL) is performed using the OpenFOAM-based solver SOWFA (Churchfield and Lee, NREL). It uses the Boussinesq approximation for buoyancy effects and takes Coriolis forces into account. A synthetic eddy method is proposed to properly model turbulent inlet velocity boundary conditions. This method will be compared with the standard pressure gradient forcing. WMLES is usually performed using a standard Smagorinsky model or its dynamic version. It is proposed here to investigate a subgrid scale (SGS) model with better spectral behavior. To this end, a regularized variational multiscale (RVM) model (Jeanmart and Winckelmans, 2007) is implemented together with a standard wall function in order to preserve the dynamics of the large scales within the Ekman layer. The influence of the improved SGS model on the wind simulation and scalar transport will be discussed based on turbulence diagnostics.
Quantitative analysis of autophagic flux by confocal pH-imaging of autophagic intermediates
Maulucci, Giuseppe; Chiarpotto, Michela; Papi, Massimiliano; Samengo, Daniela; Pani, Giovambattista; De Spirito, Marco
2015-01-01
Although numerous techniques have been developed to monitor autophagy and to probe its cellular functions, these methods cannot evaluate in sufficient detail the autophagy process, and suffer limitations from complex experimental setups and/or systematic errors. Here we developed a method to image, contextually, the number and pH of autophagic intermediates by using the probe mRFP-GFP-LC3B as a ratiometric pH sensor. This information is expressed functionally by AIPD, the pH distribution of the number of autophagic intermediates per cell. AIPD analysis reveals how intermediates are characterized by a continuous pH distribution, in the range 4.5–6.5, and therefore can be described by a more complex set of states rather than the usual biphasic one (autophagosomes and autolysosomes). AIPD shape and amplitude are sensitive to alterations in the autophagy pathway induced by drugs or environmental states, and allow a quantitative estimation of autophagic flux by retrieving the concentrations of autophagic intermediates. PMID:26506895
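The ratiometric step behind an AIPD-style analysis can be sketched as follows; the calibration table, its values, and the linear interpolation are all assumptions for illustration, not the published calibration of the mRFP-GFP-LC3B probe:

```python
import numpy as np

# assumed calibration: mean GFP/RFP intensity ratio measured at known pH values
cal_pH    = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0])
cal_ratio = np.array([0.05, 0.10, 0.20, 0.38, 0.60, 0.80, 0.95])  # GFP quenched at acidic pH

def ratio_to_pH(ratio):
    """Map per-vesicle GFP/RFP ratios to pH by interpolating the calibration."""
    return np.interp(ratio, cal_ratio, cal_pH)

def aipd(ratios, bins=np.arange(4.0, 7.01, 0.25)):
    """AIPD-style summary: number of autophagic intermediates per pH bin."""
    counts, edges = np.histogram(ratio_to_pH(ratios), bins=bins)
    return counts, edges

# per-vesicle ratios segmented from one cell (toy values)
ratios = np.random.default_rng(3).uniform(0.05, 0.9, 120)
counts, edges = aipd(ratios)
```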
Sound source measurement by using a passive sound insulation and a statistical approach
NASA Astrophysics Data System (ADS)
Dragonetti, Raffaele; Di Filippo, Sabato; Mercogliano, Francesco; Romano, Rosario A.
2015-10-01
This paper describes a measurement technique developed by the authors that allows acoustic measurements to be carried out inside noisy environments while reducing background noise effects. The proposed method is based on the integration of a traditional passive noise insulation system with a statistical approach. The latter is applied to signals picked up by the usual sensors (microphones and accelerometers) equipping the passive sound insulation system. The statistical approach improves, at low frequency, the sound insulation provided by the passive system alone. The developed measurement technique has been validated by means of numerical simulations and measurements carried out inside a real noisy environment. For the case studies reported here, an average improvement of about 10 dB has been obtained in a frequency range up to about 250 Hz. Considerations on the lowest sound pressure level that can be measured by applying the proposed method, and on the measurement error related to its application, are reported as well.
Numerical Estimation of Sound Transmission Loss in Launch Vehicle Payload Fairing
NASA Astrophysics Data System (ADS)
Chandana, Pawan Kumar; Tiwari, Shashi Bhushan; Vukkadala, Kishore Nath
2017-08-01
Coupled acoustic-structural analysis of a typical launch vehicle composite payload fairing is carried out, and results are validated with experimental data. Depending on the frequency range of interest, prediction of the vibro-acoustic behavior of a structure is usually done using the finite element method, the boundary element method, or statistical energy analysis. The present study focuses on the low-frequency dynamic behavior of a composite payload fairing structure using both coupled and uncoupled vibro-acoustic finite element models up to 710 Hz. A vibro-acoustic model, characterizing the interaction between the fairing structure, the air cavity, and the satellite, is developed. The external sound pressure levels specified for the payload fairing's acoustic test are considered as external loads for the analysis. The analysis methodology is validated by comparing the interior noise levels with those obtained from full-scale acoustic tests conducted in a reverberation chamber. The present approach has application in the design and optimization of acoustic control mechanisms at lower frequencies.
The Load Distribution in Bolted or Riveted Joints in Light-Alloy Structures
NASA Technical Reports Server (NTRS)
Vogt, F.
1947-01-01
This report contains a theoretical discussion of the load distribution in bolted or riveted joints in light-alloy structures which is applicable not only for loads below the limit of proportionality but also for loads above this limit. The theory is developed for double- and single-shear joints. The methods given are illustrated by numerical examples, and the values assumed for the bolt (or rivet) stiffnesses are based partly on theory and partly on known experimental values. It is shown that the load distribution does not vary greatly with the bolt (or rivet) stiffnesses and that for design purposes it is usually sufficient to know their order of magnitude. The theory may also be used directly for spot-welded structures and, with small modifications, for seam-welded structures. The computational work involved in the methods described is simple and may be completed in a reasonable time for most practical problems. A summary of earlier theoretical and experimental investigations on the subject is included in the report.
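The classical one-dimensional compatibility formulation behind analyses of this kind can be sketched numerically; this is a generic textbook-style model with illustrative values, not a reproduction of the report's worked examples:

```python
import numpy as np

def bolt_loads(n, P, pitch, c_bolt, EA1, EA2):
    """Load carried by each fastener in a two-plate lap joint.

    Compatibility between neighbouring fasteners: the stretch of plate 1
    (carrying P minus the partial sum of fastener loads) must equal the
    stretch of plate 2 (carrying the partial sum) plus the difference in
    fastener deflections; c_bolt is the fastener flexibility [m/N].
    """
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n - 1):
        A[i, i] -= c_bolt
        A[i, i + 1] += c_bolt
        A[i, : i + 1] += pitch * (1.0 / EA1 + 1.0 / EA2)
        b[i] = P * pitch / EA1
    A[n - 1, :] = 1.0              # equilibrium: fastener loads sum to P
    b[n - 1] = P
    return np.linalg.solve(A, b)

# five bolts, equal plates: the end fasteners carry the largest share
L = bolt_loads(5, P=10e3, pitch=0.03, c_bolt=5e-8, EA1=7e7, EA2=7e7)
```

The weak sensitivity to c_bolt noted in the report can be checked directly by varying the flexibility over an order of magnitude and observing the distribution.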
Disassembly Properties of Cementitious Finish Joints Using an Induction Heating Method
Ahn, Jaecheol; Noguchi, Takafumi; Kitagaki, Ryoma
2015-01-01
Efficient maintenance and upgrading of a building during its lifecycle are difficult because a cementitious finish uses materials and parts with low disassembly properties. Additionally, the reuse and recycling processes during building demolition also present numerous problems from the perspective of environmental technology. In this study, an induction heating (IH) method was used to disassemble cementitious finish joints, which are widely used to join building members and materials. The IH rapidly and selectively heated and weakened these joints. The temperature elevation characteristics of the cementitious joint materials were measured as a function of several resistor types, including wire meshes and punching metals, which are usually used for cementitious finishing. The disassembly properties were evaluated through various tests using conductive resistors in cementitious joints such as mortar. When steel fiber, punching metal, and wire mesh were used as conductive resistors, the cementitious modifiers could be weakened within 30 s. Cementitious joints with conductive resistors also showed complete disassembly with little residual bond strength.
Fitting a Point Cloud to a 3D Polyhedral Surface
NASA Astrophysics Data System (ADS)
Popov, E. V.; Rotkov, S. I.
2017-05-01
The ability to measure parameters of large-scale objects in a contactless fashion has tremendous potential in a number of industrial applications. However, this problem usually involves the ambiguous task of comparing two data sets specified in two different coordinate systems. This paper deals with fitting a set of unorganized points to a polyhedral surface. The developed approach uses Principal Component Analysis (PCA) and the Stretched Grid Method (SGM) to replace a non-linear problem solution with several linear steps. The squared distance (SD) is the general criterion used to control the convergence of the set of points to the target surface. The numerical experiment described concerns the remote measurement of a large-scale aerial in the form of a frame with a parabolic shape. The experiment shows that the fitting of a point cloud to a target surface converges in several linear steps. The method is applicable to the remote geometry measurement of large-scale objects in a contactless fashion.
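A minimal sketch of the PCA pre-alignment and the squared-distance criterion, assuming both data sets are given as numpy arrays; the SGM refinement step itself is not shown:

```python
import numpy as np

def pca_align(cloud, target):
    """Coarse rigid pre-alignment of two point sets via principal axes.

    Centers both sets and rotates the cloud so its principal axes match
    those of the target; a first linear step before fine fitting.
    """
    def axes(P):
        Pc = P - P.mean(axis=0)
        _, _, Vt = np.linalg.svd(Pc, full_matrices=False)
        return Pc, Vt.T                  # centered points, principal axes

    Cc, Rc = axes(cloud)
    Tc, Rt = axes(target)
    R = Rt @ Rc.T                        # maps cloud axes onto target axes
    if np.linalg.det(R) < 0:             # avoid an accidental reflection
        Rt[:, -1] *= -1
        R = Rt @ Rc.T
    return Cc @ R.T + target.mean(axis=0)

def mean_sq_dist(A, B):
    """Squared-distance criterion: mean over A of distance to nearest point of B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return np.mean(d.min(axis=1)**2)
```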
Modular adaptive implant based on smart materials.
Bîzdoacă, N; Tarniţă, Daniela; Tarniţă, D N
2008-01-01
Applications of biological methods and systems found in nature to the study and design of engineering systems and modern technology are defined as bionics. The present paper describes a bionics application of a shape memory alloy in the construction of an orthopedic implant. The main idea of this paper is the design of modular adaptive implants for fractured bones. To improve the efficiency of medical treatment, the implant has to protect the fractured bone during the healing period, taking over as much as possible of the usual daily load carried by healthy bone. After a given stage of the healing period has passed, the modularity of the implant allows the load to be transferred gradually to the bone, thereby ensuring a gradual recovery of bone function. The adaptability of the design lies in the physician's ability to configure the implant to match the patient's specific anatomy. Using realistic CT-based numerical bone models, mechanical simulations of different types of loading on fractured bones treated with the conventional method are presented. The results are discussed and conclusions are drawn.
Jang, Seung-Ho; Ih, Jeong-Guon
2003-02-01
It is known that the direct method yields different results from the indirect (or load) method in measuring the in-duct acoustic source parameters of fluid machines. The load method usually comes up with a negative source resistance, although a fairly accurate prediction of radiated noise can be obtained from any method. This study is focused on the effect of the time-varying nature of fluid machines on the output results of two typical measurement methods. For this purpose, a simplified fluid machine consisting of a reservoir, a valve, and an exhaust pipe is considered as representing a typical periodic, time-varying system and the measurement situations are simulated by using the method of characteristics. The equivalent circuits for such simulations are also analyzed by considering the system as having a linear time-varying source. It is found that the results from the load method are quite sensitive to the change of cylinder pressure or valve profile, in contrast to those from the direct method. In the load method, the source admittance turns out to be predominantly dependent on the valve admittance at the calculation frequency as well as the valve and load admittances at other frequencies. In the direct method, however, the source resistance is always positive and the source admittance depends mainly upon the zeroth order of valve admittance.
Effects of Heterogeneous Diffuse Fibrosis on Arrhythmia Dynamics and Mechanism
Kazbanov, Ivan V.; ten Tusscher, Kirsten H. W. J.; Panfilov, Alexander V.
2016-01-01
Myocardial fibrosis is an important risk factor for cardiac arrhythmias. Previous experimental and numerical studies have shown that the texture and spatial distribution of fibrosis may play an important role in arrhythmia onset. Here, we investigate how spatial heterogeneity of fibrosis affects arrhythmia onset using numerical methods. We generate various tissue textures that differ by the mean amount of fibrosis, the degree of heterogeneity and the characteristic size of heterogeneity. We study the onset of arrhythmias using a burst pacing protocol. We confirm that spatial heterogeneity of fibrosis increases the probability of arrhythmia induction. This effect is more pronounced with the increase of both the spatial size and the degree of heterogeneity. The induced arrhythmias have a regular structure with the period being mostly determined by the maximal local fibrosis level. We perform ablations of the induced fibrillatory patterns to classify their type. We show that in fibrotic tissue fibrillation is usually of the mother rotor type but becomes of the multiple wavelet type with increase in tissue size. Overall, we conclude that the most important factor determining the formation and dynamics of arrhythmia in heterogeneous fibrotic tissue is the value of maximal local fibrosis. PMID:26861111
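One simple way to generate textures with the three controls named above (mean fibrosis, degree of heterogeneity, characteristic size) is to threshold independent draws against a smoothed random probability field; this is an assumed construction for illustration, not necessarily the generator used in the study:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fibrosis_texture(n, mean_fib, heterogeneity, size, rng):
    """Binary fibrosis map with spatially varying local fibrosis density.

    A smoothed Gaussian random field (correlation length ~ size) modulates
    the local fibrosis probability around mean_fib with amplitude
    'heterogeneity'; grid points are then marked fibrotic independently.
    """
    field = gaussian_filter(rng.standard_normal((n, n)), sigma=size)
    field = field / field.std()                      # normalize to unit variance
    p_local = np.clip(mean_fib + heterogeneity * field, 0.0, 1.0)
    return (rng.random((n, n)) < p_local).astype(int), p_local

rng = np.random.default_rng(4)
tissue, p_local = fibrosis_texture(256, mean_fib=0.3, heterogeneity=0.15,
                                   size=10, rng=rng)
# p_local.max() plays the role of the 'maximal local fibrosis' discussed above
```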
On the factors affecting porosity dissolution in selective laser sintering process
NASA Astrophysics Data System (ADS)
Ly, H.-B.; Monteiro, E.; Dal, M.; Regnier, G.
2018-05-01
The Selective Laser Sintering process is an additive manufacturing technique in which parts are manufactured layer by layer. During this process, gas bubbles form in the melted polymer because polymer grains coalesce faster at the surface than deeper in the powder bed. Although gas diffusion through the polymer melt is possible, some porosity commonly remains in the final part if the initial bubble sizes are too large and the solidification time too short. In this contribution, a bubble dissolution model involving fluid dynamics and mass transport has been developed to study the factors affecting the porosity resorption kinetics. In this model, gas diffusion follows Fick's laws and the melted polymer is assumed to be Newtonian. At the polymer/gas interface, surface tension is considered and Henry's law is used to relate the partial pressure of the gas to its concentration in the fluid. The problem is solved numerically by means of the finite element method in 1D. After validation of the numerical tool, the influence of several parameters on the dissolution time (e.g., the initial size and shape of the gas porosities, the viscosity, the diffusion coefficient, the surface tension constant, and the ambient pressure) has been examined.
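A minimal sketch of a quasi-static bubble dissolution model of this type, combining Fick's law, Henry's law, and the Laplace pressure; all parameter values are illustrative assumptions, and the simplification drops the transient term of the classical Epstein-Plesset solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameters (not from the paper)
D      = 1e-9             # gas diffusivity in the melt [m^2/s]
kH     = 1e-5             # Henry constant: c_s = kH * p  [kg/(m^3 Pa)]
sigma  = 0.03             # surface tension [N/m]
p_amb  = 1e5              # ambient pressure [Pa]
c_inf  = 0.0              # dissolved-gas concentration far from the bubble
Rg_T   = 287.0 * 500.0    # specific gas constant times melt temperature

def drdt(t, R):
    """Quasi-static dissolution rate of a spherical bubble of radius R.

    The Laplace term 2*sigma/R raises the internal pressure, so small
    bubbles dissolve faster; Henry's law sets the interface concentration.
    """
    p_in = p_amb + 2.0 * sigma / R[0]     # internal (Laplace) pressure
    rho_g = p_in / Rg_T                   # ideal-gas density in the bubble
    c_s = kH * p_in                       # Henry's law at the interface
    return [-D * (c_s - c_inf) / (rho_g * R[0])]

def dissolved(t, R):                      # stop when the bubble is nearly gone
    return R[0] - 1e-7
dissolved.terminal = True

sol = solve_ivp(drdt, (0.0, 10.0), [20e-6], max_step=0.01, events=dissolved)
```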
ExGUtils: A Python Package for Statistical Analysis With the ex-Gaussian Probability Density
Moret-Tatay, Carmen; Gamermann, Daniel; Navarro-Pardo, Esperanza; Fernández de Córdoba Castellá, Pedro
2018-01-01
The study of reaction times and their underlying cognitive processes is an important field in Psychology. Reaction times are often modeled through the ex-Gaussian distribution, because it provides a good fit to multiple empirical data. The complexity of this distribution makes the use of computational tools an essential element. Therefore, there is a strong need for efficient and versatile computational tools for the research in this area. In this manuscript we discuss some mathematical details of the ex-Gaussian distribution and apply the ExGUtils package, a set of functions and numerical tools programmed in Python and developed for the numerical analysis of data involving the ex-Gaussian probability density. In order to validate the package, we present an extensive analysis of fits obtained with it, discuss advantages and differences between the least squares and maximum likelihood methods, and quantitatively evaluate the goodness of the obtained fits, a point that is usually overlooked in the literature in the area. The analysis done allows one to identify outliers in the empirical datasets and to determine, on the basis of clear criteria, whether data trimming is needed and at which points it should be done. PMID:29765345
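Since the abstract does not spell out ExGUtils' function names, here is a minimal sketch of the same workflow using scipy's exponnorm distribution as an assumed stand-in (scipy parameterizes the ex-Gaussian by K = τ/σ, loc = μ, scale = σ):

```python
import numpy as np
from scipy import stats

# simulate reaction times from an ex-Gaussian: Normal(mu, sigma) + Exp(tau)
rng = np.random.default_rng(0)
mu, sigma, tau = 400.0, 40.0, 100.0
rt = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

# maximum-likelihood fit of the three parameters
K, loc, scale = stats.exponnorm.fit(rt)
mu_hat, sigma_hat, tau_hat = loc, scale, K * scale

# one simple goodness-of-fit check against the fitted CDF
D, p = stats.kstest(rt, 'exponnorm', args=(K, loc, scale))
```

Note that testing against a distribution fitted to the same data biases the Kolmogorov-Smirnov p-value; a Monte Carlo or Lilliefors-style correction is the more careful choice.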
Simpson, Matthew J
2015-01-01
Many processes during embryonic development involve transport and reaction of molecules, or transport and proliferation of cells, within growing tissues. Mathematical models of such processes usually take the form of a reaction-diffusion partial differential equation (PDE) on a growing domain. Previous analyses of such models have mainly involved solving the PDEs numerically. Here, we present a framework for calculating the exact solution of a linear reaction-diffusion PDE on a growing domain. We derive an exact solution for a general class of one-dimensional linear reaction-diffusion processes on 0
3D-PTV around Operational Wind Turbines
NASA Astrophysics Data System (ADS)
Brownstein, Ian; Dabiri, John
2016-11-01
Laboratory studies and numerical simulations of wind turbines are typically constrained in how they can inform operational turbine behavior. Laboratory experiments are usually unable to match both of the pertinent parameters of full-scale wind turbines, the Reynolds number (Re) and the tip speed ratio, using scaled-down models. Additionally, numerical simulations of the flow around wind turbines are constrained by the large domain sizes and high Re that need to be simulated. When these simulations are performed, the turbine geometry is typically simplified, with the result that flow structures near the rotor are not well resolved. In order to bypass these limitations, a quantitative flow visualization method was developed to take in situ measurements of the flow around wind turbines at the Field Laboratory for Optimized Wind Energy (FLOWE) in Lancaster, CA. The apparatus constructed was able to seed an approximately 9 m x 9 m x 5 m volume in the wake of the turbine using artificial snow. Quantitative measurements were obtained by tracking the evolution of the artificial snow using a four-camera setup. The methodology for calibrating and collecting data, as well as preliminary results detailing the flow around a 2 kW vertical-axis wind turbine (VAWT), will be presented.
NASA Astrophysics Data System (ADS)
Shashkov, Andrey; Lovtsov, Alexander; Tomilin, Dmitry
2017-04-01
Numerous numerical simulations of the discharge plasma in Hall thrusters have been conducted. However, on the one hand, adequate two-dimensional (2D) models require a lot of computing time for numerical research on breathing mode oscillations or the discharge structure. On the other hand, existing one-dimensional (1D) models are usually too simplistic and do not take into consideration such important phenomena as neutral-wall collisions, the magnetic field induced by the Hall current, and double, secondary, and stepwise ionization together. In this paper a one-dimensional hybrid-PIC model with three-dimensional velocity space (1D3V) is presented. The model is able to incorporate all the phenomena mentioned above. A new method for simulating neutral-wall collisions in the described velocity space was developed and validated. Simulation results obtained for the KM-88 and KM-60 thrusters are in good agreement with experimental data. The Bohm collision coefficient was the same for both thrusters. Neutral-wall collisions, doubly charged ions, and the induced magnetic field were shown to stabilize the breathing mode oscillations in a Hall thruster under some circumstances.
Calculation of effective transport properties of partially saturated gas diffusion layers
NASA Astrophysics Data System (ADS)
Bednarek, Tomasz; Tsotridis, Georgios
2017-02-01
Many currently available Computational Fluid Dynamics (CFD) numerical models of Polymer Electrolyte Membrane Fuel Cells (PEMFC) treat porous structures as thin, homogeneous layers, so the mass transport equations in structures such as Gas Diffusion Layers (GDL) are usually modelled according to the Darcy assumptions. The use of homogeneous models implies that the effects of the porous structures are taken into consideration via effective transport properties: porosity, tortuosity, permeability (or flow resistance), diffusivity, and electric and thermal conductivity. Reliable values of these effective GDL properties therefore play a significant role in CFD modelling of PEMFCs, since they are required as input values for the numerical calculations. The objective of the current study is to calculate the effective transport properties of the GDL, namely gas permeability, diffusivity, and thermal conductivity, as a function of liquid water saturation, using the Lattice-Boltzmann approach. The study proposes a method of uniform water impregnation of the GDL based on the "Fine-Mist" assumption, taking into account the surface tension of water droplets and the actual shape of the GDL pores.
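As a point of reference for such pore-scale calculations, a Bruggeman-type closure is a common analytic baseline relating effective diffusivity to porosity and saturation. The sketch below is only that baseline, not the paper's Lattice-Boltzmann computation, and the exponents are empirical assumptions:

```python
def effective_diffusivity(D0, porosity, saturation, alpha=1.5, beta=1.5):
    """Bruggeman-type closure for a partially saturated porous layer.

    D_eff = D0 * eps**alpha * (1 - s)**beta, where eps is the dry porosity
    and s the liquid-water saturation; alpha and beta are empirical.
    """
    return D0 * porosity**alpha * (1.0 - saturation)**beta

# e.g. oxygen in air through a GDL with porosity 0.7 at 30 % saturation
D_eff = effective_diffusivity(2.1e-5, 0.7, 0.3)
```

Comparing pore-resolved results against closures of this form is one way to judge how far a given GDL microstructure departs from the homogeneous-layer idealization.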
NASA Astrophysics Data System (ADS)
Bayirli, Mehmet; Ozbey, Tuba
2013-07-01
Black deposits usually found at the surface of magnesite ore or limestone, as well as red deposits in quartz veins, are known as natural manganese dendrites. Depending on their geometrical structure, they may take variable fractal shapes. The characteristic origins of these morphologies have rarely been studied by means of numerical analysis. Here, digital images of magnesite ore surfaces are acquired with a scanner. These images are then converted to 8-bit binary images in bitmap format. As a next step, the morphological description parameters of the manganese dendrites are computed by way of scaling methods such as occupied fractions, fractal dimensions, divergent ratios, and critical exponents of scaling. The fractal dimension and the scaling range are found to depend on the fraction of the particles. Morphological description parameters can thus be determined from the geometrical evaluation of natural manganese dendrites, which form independently of any controlled process. The formation of manganese dendrites may also reflect stochastic selection processes in nature. These results may therefore be useful for understanding deposits in quartz veins in geophysics.
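Two of the named descriptors, the occupied fraction and the box-counting fractal dimension, can be computed from a binary image in a few lines of numpy; the random placeholder pattern below stands in for a scanned dendrite image:

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32, 64)):
    """Box-counting estimate of the fractal dimension of a binary image.

    Counts occupied boxes N(s) at several box sizes s and fits
    log N(s) = -D log s + const; -slope is the dimension estimate.
    """
    counts = []
    for s in sizes:
        h = (img.shape[0] // s) * s
        w = (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append((blocks.sum(axis=(1, 3)) > 0).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# binary (thresholded) dendrite image: 1 = deposit, 0 = background
img = (np.random.rand(256, 256) > 0.8).astype(int)   # placeholder pattern
occupied_fraction = img.mean()
D = box_counting_dimension(img)
```

For a genuinely fractal deposit the log-log fit is linear only over a finite scaling range, which is why the scaling range is reported alongside the dimension.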
On decoupling of volatility smile and term structure in inverse option pricing
NASA Astrophysics Data System (ADS)
Egger, Herbert; Hein, Torsten; Hofmann, Bernd
2006-08-01
Correct pricing of options and other financial derivatives is of great importance to financial markets and one of the key subjects of mathematical finance. Usually, parameters specifying the underlying stochastic model are not directly observable, but have to be determined indirectly from observable quantities. The identification of local volatility surfaces from market data of European vanilla options is one very important example of this type. As with many other parameter identification problems, the reconstruction of local volatility surfaces is ill-posed, and reasonable results can only be achieved via regularization methods. Moreover, due to the sparsity of data, the local volatility is not uniquely determined, but depends strongly on the kind of regularization norm used and a good a priori guess for the parameter. By assuming a multiplicative structure for the local volatility, which is motivated by the specific data situation, the inverse problem can be decomposed into two separate sub-problems. This removes part of the non-uniqueness and allows us to establish convergence and convergence rates under weak assumptions. Additionally, a numerical solution of the two sub-problems is much cheaper than that of the overall identification problem. The theoretical results are illustrated by numerical tests.
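The role of the regularization norm and the a priori guess can be illustrated on a linear toy problem with classical Tikhonov regularization; the actual local volatility identification is nonlinear, so this sketch only conveys the structure of the regularized sub-problems:

```python
import numpy as np

def tikhonov(F, data, v0, alpha):
    """Tikhonov-regularized least squares for a linear forward operator F.

    Minimizes ||F v - data||^2 + alpha * ||v - v0||^2, which stabilizes an
    ill-posed identification and biases the solution toward the prior v0.
    """
    n = F.shape[1]
    A = F.T @ F + alpha * np.eye(n)
    b = F.T @ data + alpha * v0
    return np.linalg.solve(A, b)

# toy ill-posed problem: a smoothing operator observed at sparse, noisy points
n = 50
F = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 3.0)**2)
F = F[::5]                                 # only every fifth row is observed
v_true = np.sin(np.linspace(0, np.pi, n))
data = F @ v_true + 0.01 * np.random.randn(F.shape[0])
v_rec = tikhonov(F, data, v0=np.ones(n), alpha=1e-2)
```

Varying alpha and v0 in this toy shows exactly the dependence on the regularization and the a priori guess that the abstract emphasizes for volatility surfaces.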
Schmid, G; Lager, D; Preiner, P; Uberbacher, R; Cecil, S
2007-01-01
In order to estimate typical radio frequency exposures from wireless communication technologies used indoors in homes and offices, WLAN, Bluetooth and Digital Enhanced Cordless Telecommunications systems, as well as baby surveillance devices and wireless headphones for indoor use, have been investigated by measurements and numerical computations. Based on optimised measurement methods, field distributions and the resulting exposure were assessed for selected products and real exposure scenarios. Additionally, generic scenarios were investigated on the basis of numerical computations. The results obtained demonstrate that, under usual conditions, the resulting spatially averaged (over body dimensions) and 6-min time-averaged exposure for persons in the radio frequency fields of the considered applications is below approximately 0.1% of the reference level for power density according to the International Commission on Non-Ionizing Radiation Protection (ICNIRP) guidelines published in 1998. Spatial and temporal peak values can be considerably higher, by 2-3 orders of magnitude. For some transmitting devices operated in close proximity to the body (e.g. WLAN transmitters), local exposure can reach the same order of magnitude as the basic restriction; however, none of the devices considered in this study exceeded the limits according to the ICNIRP guidelines.
Identification and assessment of hazardous compounds in drinking water.
Fawell, J K; Fielding, M
1985-12-01
The identification of organic chemicals in drinking water and their assessment in terms of potential hazardous effects are two very different but closely associated tasks. In relation to both continuous low-level background contamination and specific, often high-level, contamination due to pollution incidents, the identification of contaminants is a pre-requisite to evaluation of significant hazards. Even in the case of the rapidly developing short-term bio-assays which are applied to water to indicate a potential genotoxic hazard (for example Ames tests), identification of the active chemicals is becoming a major factor in the further assessment of the response. Techniques for the identification of low concentrations of organic chemicals in drinking water have developed remarkably since the early 1970s and methods based upon gas chromatography-mass spectrometry (GC-MS) have revolutionised qualitative analysis of water. Such techniques are limited to "volatile" chemicals and these usually constitute a small fraction of the total organic material in water. However, in recent years there have been promising developments in techniques for "non-volatile" chemicals in water. Such techniques include combined high-performance liquid chromatography-mass spectrometry (HPLC-MS) and a variety of MS methods, involving, for example, field desorption, fast atom bombardment and thermospray ionisation techniques. In the paper identification techniques in general are reviewed and likely future developments outlined. The assessment of hazards associated with chemicals identified in drinking and related waters usually centres upon toxicology - an applied science which involves numerous disciplines. The paper examines the toxicological information needed, the quality and deployment of such information and discusses future research needs. Application of short-term bio-assays to drinking water is a developing area and one which is closely involved with, and to some extent dependent on, powerful methods of identification. Recent developments are discussed.
Bugs and Movies: Using Film to Teach Microbiology
Sánchez, Manuel
2011-01-01
A YouTube channel has been created to watch commented video fragments from famous movies or TV series that can be used to teach microbiology. Although microbes are usually depicted in terms of their roles in causing infectious disease, numerous movies reflect other scientific aspects, such as biotechnological applications or bioethical issues. PMID:23653768
Preference of dendrophagous insects for forest borders
Andrey V. Gurov
1991-01-01
Numerous investigations have shown that forest insect outbreaks usually occur in specific habitats. Frequently these outbreaks do not generally extend to other territories occupied by these same host trees. Moreover, in every stand subjected to an outbreak, both slightly undamaged plots and heavily damaged plots are found. Perhaps some plots are initially more...
Numerical modeling of the energy storage and release in solar flares
NASA Technical Reports Server (NTRS)
Wu, S. T.; Weng, F. S.
1993-01-01
This paper reports on an investigation of the photospheric magnetic field-line footpoint motion (usually referred to as shear motion) and of magnetic flux emerging from below the surface, in relation to energy storage in a solar flare. These causal relationships are demonstrated by using numerical magnetohydrodynamic simulations. From these results, one may conclude that the energy stored in solar flares is in the form of currents. The dynamic process through which these currents reach a critical value is discussed, as well as how these currents lead to energy release, such as the explosive events of solar flares.
Goertz, Christine M; Long, Cynthia R; Vining, Robert D; Pohlman, Katherine A; Kane, Bridget; Corber, Lance; Walter, Joan; Coulter, Ian
2016-02-09
Low back pain is highly prevalent and one of the most common causes of disability in U.S. armed forces personnel. Currently, no single therapeutic method has been established as a gold standard treatment for this increasingly prevalent condition. One commonly used treatment, which has demonstrated consistent positive outcomes in terms of pain and function within a civilian population, is spinal manipulative therapy provided by doctors of chiropractic. Chiropractic care, delivered within a multidisciplinary framework in military healthcare settings, has the potential to help improve clinical outcomes for military personnel with low back pain. However, its effectiveness in a military setting has not been well established. The primary objective of this study is to evaluate changes in pain and disability in active duty service members with low back pain who are allocated to receive usual medical care plus chiropractic care versus usual medical care alone. This pragmatic comparative effectiveness trial will enroll 750 active duty service members with low back pain at three military treatment facilities within the United States (250 from each site), who will be allocated to receive usual medical care plus chiropractic care or usual medical care alone for 6 weeks. Primary outcomes will include the numerical rating scale for pain intensity and the Roland-Morris Disability Questionnaire at week 6. Patient-reported outcomes of pain, disability, bothersomeness, and back pain function will be collected at 2, 4, 6, and 12 weeks from allocation. Because low back pain is one of the leading causes of disability among U.S. military personnel, it is important to find pragmatic and conservative treatments that relieve low back pain and preserve low back function so that military readiness is maintained. Thus, it is important to evaluate the effects of the addition of chiropractic care to usual medical care on low back pain and disability. The trial discussed in this article was registered at ClinicalTrials.gov with the identifier NCT01692275 (date of registration: 6 September 2012).
Foskey, Mark; Niethammer, Marc; Krajcevski, Pavel; Lin, Ming C.
2014-01-01
Estimation of tissue stiffness is an important means of noninvasive cancer detection. Existing elasticity reconstruction methods usually depend on a dense displacement field (inferred from ultrasound or MR images) and known external forces. Many imaging modalities, however, cannot provide details within an organ and therefore cannot provide such a displacement field. Furthermore, force exertion and measurement can be difficult for some internal organs, making boundary forces another missing parameter. We propose a general method for estimating elasticity and boundary forces automatically using an iterative optimization framework, given the desired (target) output surface. During the optimization, the input model is deformed by the simulator, and an objective function based on the distance between the deformed surface and the target surface is minimized numerically. The optimization framework does not depend on a particular simulation method and is therefore suitable for different physical models. We show a positive correlation between clinical prostate cancer stage (a clinical measure of severity) and the recovered elasticity of the organ. Since the surface correspondence is established, our method also provides a non-rigid image registration, where the quality of the deformation fields is guaranteed, as they are computed using a physics-based simulation. PMID:22893381
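The optimization framework can be conveyed with a deliberately tiny stand-in for the simulator, a single linear spring whose deformed length plays the role of the target surface; the real method wraps a full physics-based deformation simulation in the same loop:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate(stiffness, force=1.0, rest=1.0):
    """Toy 'simulator': deformed length of a linear spring under a known force."""
    return rest + force / stiffness

def objective(log_k, target_length):
    """Distance between the simulated and the observed (target) deformation."""
    return (simulate(np.exp(log_k)) - target_length)**2

target = 1.25                                   # observed deformed length
res = minimize_scalar(lambda lk: objective(lk, target),
                      bounds=(-5.0, 5.0), method='bounded')
k_est = np.exp(res.x)                           # recovered stiffness (about 4.0)
```

Because only the mismatch between simulated and target geometry enters the objective, the same loop works whether the unknowns are elasticities, boundary forces, or both, which is the simulator-agnostic property the abstract highlights.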
Redshift data and statistical inference
NASA Technical Reports Server (NTRS)
Newman, William I.; Haynes, Martha P.; Terzian, Yervant
1994-01-01
Frequency histograms and the 'power spectrum analysis' (PSA) method, the latter developed by Yu & Peebles (1969), have been widely employed as techniques for establishing the existence of periodicities. We provide a formal analysis of these two classes of methods, including controlled numerical experiments, to better understand their proper use and application. In particular, we note that typical published applications of frequency histograms commonly employ far greater numbers of class intervals or bins than statistical theory advises, sometimes giving rise to the appearance of spurious patterns. The PSA method generates a sequence of random numbers from observational data which, it is claimed, is exponentially distributed with unit mean and variance, essentially independent of the distribution of the original data. We show that the derived random process is nonstationary and produces a small but systematic bias in the usual estimate of the mean and variance. Although the derived variable may be reasonably described by an exponential distribution, the tail of the distribution is far removed from that of an exponential, thereby rendering statistical inference and confidence testing based on the tail of the distribution completely unreliable. Finally, we examine a number of astronomical examples wherein these methods have been used, giving rise to widespread acceptance of statistically unconfirmed conclusions.
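A minimal sketch of a PSA-style power statistic, together with the Monte Carlo calibration that the analysis above suggests using instead of trusting the exponential tail; the generic Rayleigh-type normalization here is assumed for illustration:

```python
import numpy as np

def psa_power(values, trial_period):
    """Rayleigh-type power statistic for one trial period.

    Phases each datum on the trial period; for non-periodic data the
    statistic is approximately exponential with unit mean, so large values
    suggest periodicity -- subject to the tail caveats discussed above.
    """
    phases = 2.0 * np.pi * np.asarray(values) / trial_period
    s = np.exp(1j * phases).sum()
    return np.abs(s)**2 / len(values)

# calibrate the null distribution by Monte Carlo rather than the tail formula
rng = np.random.default_rng(1)
null = [psa_power(rng.uniform(0, 1, 200), 0.1) for _ in range(2000)]
threshold_99 = np.quantile(null, 0.99)
```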
Momentum Advection on a Staggered Mesh
NASA Astrophysics Data System (ADS)
Benson, David J.
1992-05-01
Eulerian and ALE (arbitrary Lagrangian-Eulerian) hydrodynamics programs usually split a timestep into two parts. The first part is a Lagrangian step, which calculates the incremental motion of the material. The second part is referred to as the Eulerian step, the advection step, or the remap step, and it accounts for the transport of material between cells. In most finite difference and finite element formulations, all the solution variables except the velocities are cell-centered while the velocities are edge- or vertex-centered. As a result, the advection algorithm for the momentum is, by necessity, different than the algorithm used for the other variables. This paper reviews three momentum advection methods and proposes a new one. One method, pioneered in YAQUI, creates a new staggered mesh, while the other two, used in SALE and SHALE, are cell-centered. The new method is cell-centered and its relationship to the other methods is discussed. Both pure advection and strong shock calculations are presented to substantiate the mathematical analysis. From the standpoint of numerical accuracy, both the staggered mesh and the cell-centered algorithms can give good results, while the computational costs are highly dependent on the overall architecture of a code.
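The cell-centered family of algorithms can be sketched in one dimension: vertex velocities are averaged to cell centers, the momentum density is remapped with the same advection scheme used for the other variables, and vertex velocities are recovered. This is a generic illustration with first-order donor-cell transport, not the specific new method of the paper:

```python
import numpy as np

def donor_cell_remap(phi, u_face, dt, dx):
    """First-order (donor-cell) advective remap of a cell-centered field."""
    flux = np.where(u_face > 0,
                    u_face * phi[:-1],          # take from the upwind cell
                    u_face * phi[1:])
    out = phi.copy()
    out[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
    return out

def advect_momentum_cell_centered(v_vertex, rho, u_face, dt, dx):
    """Cell-centered momentum advection on a staggered mesh.

    v_vertex : (N+1,) vertex velocities; rho : (N,) cell densities;
    u_face   : (N-1,) transport velocities at interior cell faces.
    """
    v_cell = 0.5 * (v_vertex[:-1] + v_vertex[1:])          # vertex -> cell
    mom = donor_cell_remap(rho * v_cell, u_face, dt, dx)   # remap momentum
    rho_new = donor_cell_remap(rho, u_face, dt, dx)        # remap mass
    v_cell_new = mom / rho_new
    v_new = v_vertex.copy()
    v_new[1:-1] = 0.5 * (v_cell_new[:-1] + v_cell_new[1:]) # cell -> vertex
    return v_new
```

The staggered-mesh (YAQUI-style) alternative instead builds a dual mesh around the vertices and remaps momentum on it directly, avoiding the two averaging passes at the cost of extra geometry.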
2013-01-01
Background The purpose of this study is to conduct a basic analysis of the effectiveness and safety of electroacupuncture in the treatment of painful diabetic neuropathy (PDN) as compared to placebo and usual care and to evaluate the feasibility of large-scale clinical research. Methods/design This study is a protocol for a three-armed, randomized, patient-assessor-blinded (to the type of treatment), controlled pilot trial. Forty-five participants with a ≥ six month history of PDN and a mean weekly pain score of ≥ 4 on the 11-point Pain Intensity Numerical Rating Scale (PI-NRS) will be assigned to the electroacupuncture group (n = 15), sham group (n = 15) or usual care group (n = 15). The participants assigned to the electroacupuncture group will receive electroacupuncture (remaining for 30 minutes with a mixed current of 2 Hz/120 Hz and 80% of the bearable intensity) at 12 standard acupuncture points (bilateral ST36, GB39, SP9, SP6, LR3 and GB41) twice per week for eight weeks (a total of 16 sessions) as well as the usual care. The participants in the sham group will receive sham electroacupuncture (no electrical current will be passed to the needle, but the light will be seen, and the sound of the pulse generator will be heard by the participants) at non-acupuncture points as well as the usual care. The participants in the usual care group will not receive electroacupuncture treatment during the study period and will receive only the usual care. The follow-up will be in the 5th, 9th and 17th weeks after random allocation. The PI-NRS score assessed at the ninth week will be the primary outcome measurement used in this study. The Short-Form McGill Pain Questionnaire (SF-MPQ), a sleep disturbance score (11-point Likert scale), the Short-Form 36v2 Health Survey (SF-36), the Beck Depression Inventory (BDI) and the Patient Global Impression of Change (PGIC) will be used as outcome variables to evaluate the effectiveness of the acupuncture. Safety will be assessed at every visit. Discussion The result of this trial will provide a basis for the effectiveness and safety of electroacupuncture for PDN. Trial registration Clinical Research information Service. Unique identifier: KCT0000466. PMID:23866906
Identifying influential spreaders in complex networks through local effective spreading paths
NASA Astrophysics Data System (ADS)
Wang, Xiaojie; Zhang, Xue; Yi, Dongyun; Zhao, Chengli
2017-05-01
How to effectively identify a set of influential spreaders in complex networks is of great theoretical and practical value; it can help to inhibit the rapid spread of epidemics, promote the sales of products by word-of-mouth advertising, and so on. A naive strategy is to select the top-ranked nodes as identified by some centrality index; other strategies are mainly based on greedy and heuristic methods. However, most of those approaches do not take the connections between the selected nodes into account. Usually, the distances between the selected spreaders are very close, leading to serious overlap of their influence. As a consequence, the global influence of the spreaders in the network is greatly reduced, which largely restricts the performance of those methods. In this paper, a simple and efficient method is proposed to identify a set of well-separated yet influential spreaders. By analyzing the spreading paths in the network, we present the concept of effective spreading paths and measure the influence of nodes via an expectation calculation. Numerical analyses on undirected and directed networks all show that our proposed method outperforms many other centrality-based and heuristic benchmarks, especially in large-scale networks. Besides, experimental results on different spreading models and parameters demonstrate the stability and wide applicability of our method.
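The core idea, that high-centrality seeds must also be kept apart to avoid overlapping influence, can be illustrated with a simple greedy stand-in using networkx; the effective-spreading-path measure of the paper is more elaborate than this degree-plus-separation heuristic:

```python
import networkx as nx

def spaced_spreaders(G, k, min_dist=2):
    """Pick k high-degree nodes that are pairwise at least min_dist apart.

    Centrality alone yields spreaders with overlapping influence zones,
    so a minimum pairwise graph distance is enforced during selection.
    """
    chosen = []
    for node, _ in sorted(G.degree, key=lambda nd: -nd[1]):
        far_enough = all(
            not nx.has_path(G, node, c)
            or nx.shortest_path_length(G, node, c) >= min_dist
            for c in chosen)
        if far_enough:
            chosen.append(node)
        if len(chosen) == k:
            break
    return chosen

G = nx.barabasi_albert_graph(1000, 3)
seeds = spaced_spreaders(G, k=10, min_dist=3)
```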
NASA Astrophysics Data System (ADS)
Hayashi, Toshinori; Yamada, Keiichi
Deviation of driving behavior from the usual could be a sign of human error that increases the risk of traffic accidents. This paper proposes a novel method for predicting the possibility that a driving behavior leads to an accident from information on the driving behavior and the situation. In a previous work, a method was proposed that predicts this possibility by detecting the deviation of the driving behavior from the usual behavior in that situation. In contrast, the method proposed in this paper predicts the possibility by detecting the deviation of the situation from the usual situations in which that behavior is observed. An advantage of the proposed method is that the number of required models is independent of the variety of situations. The method was applied to the problem of predicting accidents caused by right-turn driving behavior at an intersection, and its performance was evaluated by experiments on a driving simulator.
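The deviation-of-situation idea can be sketched with a Gaussian model of the "usual" situations observed for a given behavior; the Gaussian choice and the two toy features below are assumptions for illustration, not the paper's model:

```python
import numpy as np

def fit_usual(X):
    """Model the 'usual' situations for one behavior as a Gaussian."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized
    return mu, np.linalg.inv(cov)

def deviation(x, mu, cov_inv):
    """Mahalanobis distance of the current situation from the usual ones."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# toy situation features logged when a right turn is initiated,
# e.g. (own speed [km/h], gap to oncoming vehicle [m])
usual = np.random.default_rng(2).normal([30.0, 25.0], [3.0, 4.0], (500, 2))
mu, P = fit_usual(usual)
risk_score = deviation(np.array([30.0, 5.0]), mu, P)  # unusually small gap
```

Because the model is per behavior rather than per situation, one fitted density covers all the situations in which that behavior occurs, which mirrors the stated advantage on the number of required models.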
Tongpeth, Jintana; Du, Huiyun; Clark, Robyn
2018-06-19
To evaluate the effectiveness of an interactive, avatar-based education application to improve knowledge of and response to heart attack symptoms in people who are at risk of a heart attack. Poor knowledge of heart attack symptoms is recognised as a significant barrier to timely medical treatment. Numerous studies have demonstrated that technology can assist in patient education to improve knowledge and self-care. A single-center, non-blinded, two-parallel-group, pragmatic randomized controlled trial. Seventy patients will be recruited from the coronary care unit of a public hospital. Eligible participants will be randomised to either the usual care group or the intervention group (usual care plus an avatar-based heart attack education app). The primary outcome of this study is knowledge. Secondary outcomes include response to heart attack symptoms, health service use and satisfaction. Study participants will be followed up for six months. This study will evaluate the avatar-based education app as a method to deliver vital information to patients. Participants' knowledge of and response to heart attack symptoms, as well as their health service use, will be assessed to evaluate the intervention's effectiveness. This article is protected by copyright. All rights reserved.
Reconstruction of the vulva with sensate gluteal fold flaps.
Kuokkanen, H; Mikkola, A; Nyberg, R H; Vuento, M H; Kaartinen, I; Kuoppala, T
2013-01-01
Soft-tissue reconstruction of the vulva following resection of malignancies is challenging. The function of the perineal organs should be preserved, and the reconstructed area should maintain an acceptable cosmetic appearance. Reconstruction with local flaps is usually sufficient in the primary phase after a radical vulvectomy. Numerous flaps have been designed for vulvar reconstruction, usually based on circulation from branches of the internal pudendal artery. In this paper we introduce our modification of the gluteal fold V-Y advancement flap as a primary reconstruction after a radical vulvectomy. Twenty-two patients underwent a radical vulvectomy because of vulvar malignancies. The operation was primary in eight and secondary in 14 patients. Reconstruction of the vulva was performed in the same operation for each patient. All flaps survived completely. Wound complications were registered in three patients. Late problems with the urinary stream were corrected in two patients. A local recurrence of the malignancy was observed in six patients during the follow-up period. The gluteal fold flap is easy to perform, has a low rate of complications and gives good functional results. Even a large defect can be reconstructed reliably with this method. The gluteal fold V-Y advancement flap is sensate, and our modification allows the flap to be transposed with less dissection than previously described.
Neural correlates of the number–size interference task in children
Kaufmann, Liane; Koppelstaetter, Florian; Siedentopf, Christian; Haala, Ilka; Haberlandt, Edda; Zimmerhackl, Lothar-Bernd; Felber, Stefan; Ischebeck, Anja
2010-01-01
In this functional magnetic resonance imaging study, 17 children were asked to make numerical and physical magnitude classifications while ignoring the other stimulus dimension (number–size interference task). Digit pairs were either incongruent (e.g. the numerically larger digit printed in the physically smaller font) or neutral (differing only on the task-relevant dimension). Generally, numerical magnitude interferes with font size (congruity effect). Moreover, relative to numerically adjacent digits, far ones yield quicker responses (distance effect). Behaviourally, robust distance and congruity effects were observed in both tasks. Imaging baseline contrasts revealed activations in frontal, parietal, occipital and cerebellar areas bilaterally. Different from results usually reported for adults, smaller distances activated frontal, but not (intra-)parietal, areas in children. Congruity effects became significant only in physical comparisons. Thus, even with comparable behavioural performance, cerebral activation patterns may differ substantially between children and adults. PMID:16603917
Apparent multifractality of self-similar Lévy processes
NASA Astrophysics Data System (ADS)
Zamparo, Marco
2017-07-01
Scaling properties of time series are usually studied in terms of the scaling laws of empirical moments, which are the time-average estimates of moments of the dynamic variable. Nonlinearities in the scaling function of empirical moments are generally regarded as a sign of multifractality in the data. We show that, except for Brownian motion, this method fails to disclose the correct monofractal nature of self-similar Lévy processes. We prove that for this class of processes it produces apparent multifractality characterised by a piecewise-linear scaling function with two different regimes, which match at the stability index of the considered process. This result is motivated by previous numerical evidence. It is obtained by introducing an appropriate stochastic normalisation which is able to cure empirical moments, without hiding their dependence on time, when the moments they aim at estimating do not exist.
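A minimal numerical sketch of the effect described, under assumed parameters (stability index alpha = 1.5, an arbitrary lag range and set of moment orders): simulate a symmetric alpha-stable process with scipy and fit the empirical-moment scaling exponents; the fitted zeta(q) should bend near q = alpha, mimicking multifractality even though the process is monofractal.

```python
# Sketch: empirical-moment scaling for a symmetric alpha-stable Levy
# process; the fitted scaling function looks roughly piecewise linear
# with a break near q = alpha (apparent multifractality).
import numpy as np
from scipy.stats import levy_stable

alpha, n = 1.5, 2**16
x = np.cumsum(levy_stable.rvs(alpha, 0.0, size=n, random_state=0))

lags = np.array([2**j for j in range(1, 8)])
qs = np.arange(0.5, 3.1, 0.5)
for q in qs:
    m = [np.mean(np.abs(x[l:] - x[:-l])**q) for l in lags]
    zeta = np.polyfit(np.log(lags), np.log(m), 1)[0]   # log-log slope
    print(f"q={q:.1f}  zeta(q)={zeta:.2f}  (q/alpha={q/alpha:.2f})")
```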
Bladder perforation during sling procedures: diagnosis and management of injury.
Israfil-Bayli, F; Bulchandani, S; Parsons, M; Jackson, S; Toozs-Hobson, P
2014-05-01
Midurethral slings are an effective and minimally invasive treatment for stress urinary incontinence. One of the most common intraoperative complications is bladder perforation, complicating between 2% and 10% of all operations (4.7% on average). It is usually corrected during surgery by repositioning the trocars. The purpose of this video is to demonstrate a method of replacing the trocars under direct vision. The video exhibits a bladder perforation during insertion of a retropubic midurethral sling (Advantage Fit; Boston Scientific) and gives a step-by-step guide to the removal and repositioning of the sling under direct visualisation. Repositioning the trocars under direct vision in cases of bladder perforation may have numerous advantages. It may prevent damage to the urethra, possibly reduce the risk of postoperative infection and may be beneficial for trainees.
The equations of motion of a secularly precessing elliptical orbit
NASA Astrophysics Data System (ADS)
Casotto, S.; Bardella, M.
2013-01-01
The equations of motion of a secularly precessing ellipse are developed using time as the independent variable. The equations are useful when integrating numerically the perturbations about a reference trajectory which is subject to secular perturbations in the node, the argument of pericentre and the mean motion. Usually this is done in connection with Encke's method to ensure minimal rectification frequency. Similar equations are already available in the literature, but they are either given based on the true anomaly as the independent variable or in mixed mode with respect to time through the use of a supporting equation to track the anomaly. The equations developed here form a complete and independent set of six equations in time. Reformulations both of Escobal's and Kyner and Bennett's equations are also provided which lead to a more concise form.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilat-Lohinger, E.; Bazsó, A.; Funk, B.
Gravitational perturbations in multi-planet systems caused by an accompanying star are the subject of this investigation. Our dynamical model is based on the binary star HD 41004 AB, where a giant planet orbits HD 41004 A. We modify the orbital parameters of this system and analyze the motion of a hypothetical test planet surrounding HD 41004 A on an orbit interior to the detected giant planet. Our numerical computations indicate perturbations due to mean motion and secular resonances (SRs). The locations of these resonances are usually connected to high-eccentricity and highly inclined motion, depending strongly on the binary-planet architecture. As the positions of mean motion resonances can easily be determined, the main purpose of this study is to present a new semi-analytical method to determine the location of an SR without huge computational effort.
Photographing AIDS: On Capturing a Disease in Pictures of People with AIDS.
Engelmann, Lukas
2016-01-01
The photography of people with AIDS was the subject of numerous critiques in the 1980s and has become a controversial way of visualizing the AIDS epidemic. While most of the scholarly work on AIDS photography is based in cultural studies and concerned with popular representations, the clinical value of photographs of people with AIDS usually remains overlooked. This article addresses photographs as a "way of seeing" AIDS that contributed crucially to the making of the disease entity AIDS within the history of medicine. Cultural studies methods are applied to analyze clinical photography in the case of AIDS, thus contributing to the medical history of AIDS through the lens of photography. The article reveals the conflation of disease morphology and patient identity as a characteristic feature of both clinical photography and the now-historical nature of AIDS.
NASA Astrophysics Data System (ADS)
Gelß, Patrick; Matera, Sebastian; Schütte, Christof
2016-06-01
In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this equation is too high-dimensional to be solved with standard numerical techniques, and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the tensor train format for this purpose. The performance of the approach is demonstrated on a first-principles-based reduced model for CO oxidation on the RuO2(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.
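To make the object of study concrete, here is a hedged toy example, not the paper's RuO2 model: a one-species adsorption/desorption master equation small enough to solve directly with scipy. The rates and state space are invented; the tensor-train machinery of the paper becomes necessary precisely when such direct solves are impossible.

```python
# Toy sketch (assumed model): direct solution of a birth-death
# Markovian master equation dp/dt = A p; for realistic lattices the
# state space explodes and a low-rank (tensor-train) format is needed.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

N = 50                      # number of surface sites (assumed)
k_ads, k_des = 1.0, 0.5     # adsorption/desorption rates (assumed)
n = np.arange(N + 1)        # states = number of occupied sites
birth = k_ads * (N - n)     # rate of n -> n+1
death = k_des * n           # rate of n -> n-1
A = diags([birth[:-1], -(birth + death), death[1:]], offsets=[-1, 0, 1])

p0 = np.zeros(N + 1); p0[0] = 1.0        # start from an empty surface
p = expm_multiply(A.tocsc() * 5.0, p0)   # probability distribution at t = 5
print("mean coverage:", (n @ p) / N)
```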
NASA Astrophysics Data System (ADS)
Muráth, Szabolcs; Somosi, Zoltán; Tóth, Ildikó Y.; Tombácz, Etelka; Sipos, Pál; Pálinkó, István
2017-07-01
The delamination-restacking properties of MgAl-layered double hydroxide (MgAl-LDH) were studied in various solvents. The LDH samples were successfully delaminated in polar amides (formamide, N-methylformamide, N-methylacetamide). Delamination was usually completed by ultrasonic treatment. As rehydrating solutions, numerous Na-salts with single-, double- and triple-charged anions were used. Reconstruction was accomplished with anions of one or two negative charges, but triple-charged ones generally disrupted the rebuilding process, likely because their salts with the metals of the LDH are very stable, and the thin layers can more readily transform to salts than the ordered materials. The samples and the delamination-restacking processes were characterized by X-ray diffractometry (XRD), infrared spectroscopy (IR), dynamic light scattering (DLS), scanning electron microscopy (SEM) and energy-dispersive X-ray analysis (EDX).
Finite element techniques in computational time series analysis of turbulent flows
NASA Astrophysics Data System (ADS)
Horenko, I.
2009-04-01
In recent years there has been a considerable increase of interest in the mathematical modeling and analysis of complex systems that undergo transitions between several phases or regimes. Such systems can be found, e.g., in weather forecasting (transitions between weather conditions), climate research (ice ages and warm ages), computational drug design (conformational transitions) and in econometrics (e.g., transitions between different phases of the market). In all cases, the accumulation of sufficiently detailed time series has led to the formation of huge databases, containing enormous but still undiscovered treasures of information. However, the extraction of essential dynamics and identification of the phases is usually hindered by the multidimensional nature of the signal, i.e., the information is "hidden" in the time series. Standard filtering approaches (e.g., wavelet-based spectral methods) have in general infeasible numerical complexity in high dimensions; other standard methods (e.g., Kalman filter, MVAR, ARCH/GARCH) impose strong assumptions about the type of the underlying dynamics. An approach based on optimization of a specially constructed regularized functional (describing the quality of data description in terms of a certain number of specified models) will be introduced. Based on this approach, several new adaptive mathematical methods for simultaneous EOF/SSA-like data-based dimension reduction and identification of hidden phases in high-dimensional time series will be presented. The methods exploit the topological structure of the analysed data and do not impose severe assumptions on the underlying dynamics. Special emphasis will be placed on the mathematical assumptions and numerical cost of the constructed methods. The application of the presented methods will first be demonstrated on a toy example, and the results will be compared with those obtained by standard approaches. The importance of accounting for the mathematical assumptions used in the analysis will be pointed out in this example. Finally, applications to the analysis of meteorological and climate data will be presented.
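As a loudly simplified stand-in for the regime-identification idea (not the regularized FEM functional of the abstract), the sketch below clusters simple sliding-window statistics of a synthetic two-regime series; all data, window sizes and the clustering choice are invented for illustration.

```python
# Greatly simplified analogy: detect hidden phases by clustering local
# statistics of sliding windows (the abstract's method is FEM-based and
# far more general; nothing here reproduces it).
import numpy as np

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 500),     # regime 1
                    rng.normal(3, 0.5, 500)])  # regime 2

w = 50
feats = np.array([[x[i:i+w].mean(), x[i:i+w].std()]
                  for i in range(0, len(x) - w, w)])

# two-means clustering by hand, to avoid extra dependencies
c = feats[rng.choice(len(feats), 2, replace=False)]
for _ in range(20):
    lbl = np.argmin(((feats[:, None] - c[None])**2).sum(-1), axis=1)
    c = np.array([feats[lbl == k].mean(0) for k in (0, 1)])
print("window labels:", lbl)
```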
Research on NC motion controller based on SOPC technology
NASA Astrophysics Data System (ADS)
Jiang, Tingbiao; Meng, Biao
2006-11-01
With the rapid development of digitization and informationization, the application of numerical control (NC) technology in the manufacturing industry becomes more and more important. However, conventional numerical control systems usually have shortcomings such as poor openness, real-time performance, tailorability and reconfigurability. To solve these problems, this paper investigates the prospects and advantages of applying system-on-a-programmable-chip (SOPC) technology in the numerical control area, and proposes a design approach for an NC controller based on SOPC technology. Utilizing the characteristics of SOPC technology, we integrate a high-density logic device (FPGA), memory (SRAM) and an embedded ARM processor into a single programmable logic device. We thus combine a 32-bit RISC processor, with high computing capability for complicated algorithms, with an FPGA device with strong reconfigurable logic control ability. With these steps, we can largely resolve the defects of the existing numerical control systems described above. For the concrete implementation, we use an FPGA chip with an embedded ARM hard-core processor as the control core of the motion controller. We also design the peripheral circuits of the controller according to the requirements of the actual control functions, port a real-time operating system to the ARM, write drivers for the peripheral chips, develop the application program that controls and configures the FPGA, and design IP cores of the logic algorithms for various NC motion-control tasks to be configured into the FPGA. The whole control system uses modular and structured design for both the hardware and the software. Thus an NC motion controller that is easily tailorable, highly open, reconfigurable and expandable can be implemented.
Ockhuysen-Vermey, Caroline F; Henneman, Lidewij; van Asperen, Christi J; Oosterwijk, Jan C; Menko, Fred H; Timmermans, Daniëlle R M
2008-10-03
Understanding risks is considered to be crucial for informed decision-making. Inaccurate risk perception is a common finding in women with a family history of breast cancer attending genetic counseling. As yet, it is unclear how risks should best be communicated in clinical practice. This study protocol describes the design and methods of the BRISC (Breast cancer RISk Communication) study evaluating the effect of different formats of risk communication on the counsellee's risk perception, psychological well-being and decision-making regarding preventive options for breast cancer. The BRISC study is designed as a pre-post-test controlled group intervention trial with repeated measurements using questionnaires. The intervention-an additional risk consultation-consists of one of 5 conditions that differ in the way counsellee's breast cancer risk is communicated: 1) lifetime risk in numerical format (natural frequencies, i.e. X out of 100), 2) lifetime risk in both numerical format and graphical format (population figures), 3) lifetime risk and age-related risk in numerical format, 4) lifetime risk and age-related risk in both numerical format and graphical format, and 5) lifetime risk in percentages. Condition 6 is the control condition in which no intervention is given (usual care). Participants are unaffected women with a family history of breast cancer attending one of three participating clinical genetic centres in the Netherlands. The BRISC study allows for an evaluation of the effects of different formats of communicating breast cancer risks to counsellees. The results can be used to optimize risk communication in order to improve informed decision-making among women with a family history of breast cancer. They may also be useful for risk communication in other health-related services. Current Controlled Trials ISRCTN14566836.
Khajouei, Reza; Hajesmaeel Gohari, Sadrieh; Mirzaee, Moghaddameh
2018-04-01
In addition to following the usual Heuristic Evaluation (HE) method, the usability of health information systems can also be evaluated using a checklist. The objective of this study is to compare the performance of these two methods in identifying usability problems of health information systems. Eight evaluators independently evaluated different parts of a Medical Records Information System using two methods of HE (usual and with a checklist). The two methods were compared in terms of the number of problems identified, problem type, and the severity of identified problems. In all, 192 usability problems were identified by the two methods in the Medical Records Information System. This was significantly higher than the number of usability problems identified by the checklist and usual methods individually (148 and 92, respectively) (p < 0.0001). After removing the duplicates, the difference between the number of unique usability problems identified by the checklist method (n = 100) and the usual method (n = 44) was significant (p < 0.0001). Differences between the mean severity of the real usability problems (1.83) and those identified by only one of the methods (usual = 2.05, checklist = 1.74) were significant (p = 0.001). This study revealed the potential of the two HE methods for identifying usability problems of health information systems. The results demonstrated that the checklist method had significantly better performance in terms of the number of identified usability problems; however, the performance of the usual method for identifying problems of higher severity was significantly better. Although the checklist method can be more efficient for less experienced evaluators, wherever usability is critical, the checklist should be used with caution in usability evaluations. Copyright © 2018 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Charlson C.
2008-07-15
Numeric studies of the impact of the velocity space distribution on the stabilization of the (1,1) internal kink mode and excitation of the fishbone mode are performed with a hybrid kinetic-magnetohydrodynamic model. These simulations demonstrate an extension of the physics capabilities of NIMROD [C. R. Sovinec et al., J. Comput. Phys. 195, 355 (2004)], a three-dimensional extended magnetohydrodynamic (MHD) code, to include the kinetic effects of an energetic minority ion species. Kinetic effects are captured by a modification of the usual MHD momentum equation to include a pressure tensor calculated from the δf particle-in-cell method [S. E. Parker and W. W. Lee, Phys. Fluids B 5, 77 (1993)]. The particles are advanced in the self-consistent NIMROD fields. We outline the implementation and present simulation results of energetic minority ion stabilization of the (1,1) internal kink mode and excitation of the fishbone mode. A benchmark of the linear growth rate and real frequency is shown to agree well with another code. The impact of the details of the velocity space distribution is examined, particularly extending the velocity space cutoff of the simulation particles. Modestly increasing the cutoff strongly impacts the (1,1) mode. Numeric experiments are performed to study the impact of passing versus trapped particles. Observations of these numeric experiments suggest that assumptions about energetic particle effects should be re-examined.
Quasi-integrability in deformed sine-Gordon models and infinite towers of conserved charges
NASA Astrophysics Data System (ADS)
Blas, Harold; Callisaya, Hector Flores
2018-02-01
We have studied the space-reflection symmetries of some soliton solutions of deformed sine-Gordon models in the context of the quasi-integrability concept. Considering a dual pair of anomalous Lax representations of the deformed model, we compute analytically and numerically an infinite number of alternating conserved and asymptotically conserved charges through a modification of the usual techniques of integrable field theories. The charges associated with two-solitons with a definite parity under space-reflection symmetry, i.e. kink-kink (odd parity) and kink-antikink (even parity) scatterings with equal and opposite velocities, split into two infinite towers of conserved and asymptotically conserved charges. For two-solitons without definite parity under space-reflection symmetry (kink-kink and kink-antikink scatterings with unequal and opposite velocities), our numerical results show the existence of the asymptotically conserved charges only. However, we show that in the center-of-mass reference frame of the two solitons the parity symmetries and their associated set of exactly conserved charges can be restored. Moreover, the positive-parity breather-like (kink-antikink bound state) solution exhibits a tower of exactly conserved charges and a subset of charges which are periodic in time. We back up our results with extensive numerical simulations, which also demonstrate the existence of long-lived breather-like states in these models. The time evolution has been simulated by the 4th-order Runge-Kutta method supplemented with non-reflecting boundary conditions.
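A minimal sketch of the kind of simulation described, under assumptions: a method-of-lines discretization of u_tt = u_xx − V′(u) stepped with classic RK4, with kink-antikink initial data. The deformation form (parameter eps below) and the periodic boundary are illustrative choices, not the paper's model, which also uses non-reflecting boundaries.

```python
# Sketch: RK4 evolution of a kink-antikink pair in a (possibly
# deformed) sine-Gordon field; eps is an assumed deformation strength.
import numpy as np

L, nx, dt, c, eps = 40.0, 800, 0.01, 0.3, 0.0
xg = np.linspace(-L/2, L/2, nx); dx = xg[1] - xg[0]
gam = 1.0 / np.sqrt(1.0 - c*c)

def Vp(u):                                   # V'(u); eps = 0 is pure sine-Gordon
    return np.sin(u) * (1.0 + eps * np.cos(u))

def rhs(s):
    u, v = s
    uxx = (np.roll(u, -1) - 2*u + np.roll(u, 1)) / dx**2
    return np.array([v, uxx - Vp(u)])

prof = lambda x0: 4.0 * np.arctan(np.exp(gam * (xg - x0)))
p1, p2 = prof(-10.0), prof(10.0)
u0 = p1 - p2                                 # kink-antikink pair
v0 = -c * np.gradient(p1, dx) - c * np.gradient(p2, dx)  # approaching at +/- c

s = np.array([u0, v0])
for _ in range(2000):                        # classic RK4 to t = 20
    k1 = rhs(s); k2 = rhs(s + dt/2*k1)
    k3 = rhs(s + dt/2*k2); k4 = rhs(s + dt*k3)
    s = s + dt/6*(k1 + 2*k2 + 2*k3 + k4)
print("max |u| after evolution:", np.abs(s[0]).max())
```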
Numerical reconstruction of tsunami source using combined seismic, satellite and DART data
NASA Astrophysics Data System (ADS)
Krivorotko, Olga; Kabanikhin, Sergey; Marinin, Igor
2014-05-01
Recent tsunamis, for instance in Japan (2011), in Sumatra (2004) and at the Indian coast (2004), showed that a system for producing accurate and timely information about tsunamis is of vital importance. Numerical simulation is an effective instrument for providing such information. Bottom relief characteristics and the initial perturbation data (a tsunami source) are required for the direct simulation of tsunamis. The seismic data about the source are usually obtained within a few tens of minutes after an event has occurred (seismic waves travel at about five hundred kilometres per minute, while tsunami waves travel at less than twelve kilometres per minute). This difference in the arrival times of seismic and tsunami waves can be used to operationally refine the tsunami source parameters and model the expected tsunami wave height on the shore. The most suitable physical models for tsunami simulation are based on the shallow water equations. The problem of identifying the parameters of a tsunami source using additional measurements of a passing wave is called the inverse tsunami problem. We investigate three different inverse problems of determining a tsunami source using three different types of additional data: Deep-ocean Assessment and Reporting of Tsunamis (DART) measurements, satellite wave-form images and seismic data. These problems are severely ill-posed. We apply regularization techniques to control the degree of ill-posedness, such as Fourier expansion, truncated singular value decomposition and numerical regularization. An algorithm for selecting the truncation number of singular values of the inverse problem operator consistent with the error level in the measured data is described and analyzed. In the numerical experiments we used gradient methods (Landweber iteration and the conjugate gradient method) for solving the inverse tsunami problems. Gradient methods are based on minimizing the corresponding misfit function; to calculate the gradient of the misfit function, the adjoint problem is solved. Conservative finite-difference schemes for solving the direct and adjoint problems in the shallow water approximation are constructed. Results of numerical experiments on tsunami source reconstruction are presented and discussed. We show that using a combination of the three different types of data allows one to increase the stability and efficiency of the tsunami source reconstruction. The non-profit organization WAPMERR (World Agency of Planetary Monitoring and Earthquake Risk Reduction), in collaboration with the Informap software development department, developed the Integrated Tsunami Research and Information System (ITRIS) to simulate tsunami waves and earthquakes, river course changes, coastal zone floods, and risk estimates for coastal constructions under wave run-ups and earthquakes. Special scientific plug-in components are embedded in a specially developed GIS-type graphic shell for easy data retrieval, visualization and processing. This work was supported by the Russian Foundation for Basic Research (project No. 12-01-00773 'Theory and Numerical Methods for Solving Combined Inverse Problems of Mathematical Physics') and the interdisciplinary project of SB RAS 14 'Inverse Problems and Applications: Theory, Algorithms, Software'.
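One ingredient above, truncated SVD with the truncation rank chosen against the data error level (the discrepancy principle), can be sketched as follows; the forward operator G here is a generic smoothing kernel standing in for the tsunami forward model, an assumption for illustration only.

```python
# Sketch: truncated-SVD regularization of an ill-posed linear inverse
# problem G m = d, with rank chosen by the discrepancy principle.
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.linspace(0, 1, n)
G = np.exp(-((x[:, None] - x[None, :])**2) / 0.01)   # smoothing kernel (assumed)
m_true = np.exp(-((x - 0.4)**2) / 0.005)             # "source" to recover
noise = 1e-3 * rng.standard_normal(n)
d = G @ m_true + noise

U, s, Vt = np.linalg.svd(G)
tol = np.linalg.norm(noise)                          # known error level
for k in range(1, n + 1):
    m_k = Vt[:k].T @ ((U.T @ d)[:k] / s[:k])
    if np.linalg.norm(G @ m_k - d) <= tol:           # discrepancy principle
        break
print("chosen rank:", k, " relative error:",
      np.linalg.norm(m_k - m_true) / np.linalg.norm(m_true))
```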
Zou, Ling; Zhao, Haihua; Kim, Seung Jun
2016-11-16
In this study, the classical Welander oscillatory natural circulation problem is investigated using high-order numerical methods. As originally studied by Welander, the fluid motion in a differentially heated fluid loop can exhibit stable, weakly unstable, and strongly unstable modes, and a theoretical stability map was originally derived from the stability analysis. Numerical results obtained in this paper show very good agreement with Welander's theoretical derivations. For stable cases, numerical results from both the high-order and low-order numerical methods agree well with the analytically derived non-dimensional flow rate, with the high-order methods giving much smaller numerical errors than the low-order ones. For the stability analysis, the high-order numerical methods could perfectly predict the stability map, while the low-order numerical methods failed to do so: for all theoretically unstable cases, the low-order methods predicted them to be stable. The results obtained in this paper are strong evidence of the benefits of using high-order numerical methods over low-order ones when simulating natural circulation phenomena, which have gained increasing interest in many future nuclear reactor designs.
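The stability finding can be illustrated on a toy problem (not Welander's loop model): a marginally stable oscillator integrated with first-order forward Euler and fourth-order RK4. Euler's truncation error acts like artificial growth or damping and can misclassify stability, echoing the abstract's observation about low-order methods.

```python
# Toy illustration: forward Euler spuriously amplifies an undamped
# oscillator, while RK4 preserves its amplitude to high accuracy.
import numpy as np

def euler(f, y, dt, n):
    for _ in range(n): y = y + dt * f(y)
    return y

def rk4(f, y, dt, n):
    for _ in range(n):
        k1 = f(y); k2 = f(y + dt/2*k1); k3 = f(y + dt/2*k2); k4 = f(y + dt*k3)
        y = y + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return y

f = lambda y: np.array([y[1], -y[0]])        # undamped oscillator, amplitude 1
y0 = np.array([1.0, 0.0])
for name, step in (("euler", euler), ("rk4", rk4)):
    yT = step(f, y0.copy(), 0.05, 2000)      # integrate to t = 100
    print(name, "amplitude at t=100:", np.hypot(*yT))
```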
USDA-ARS?s Scientific Manuscript database
Numerous factors have been reported to affect rainbow trout egg quality, among which, post-ovulatory aging is one of the most significant causes as reared rainbow trout do not usually volitionally oviposit the ovulated eggs. Frequent examination of the stock is therefore required in order to reduce...
The Mental Number Line in Dyscalculia: Impaired Number Sense or Access from Symbolic Numbers?
ERIC Educational Resources Information Center
Lafay, Anne; St-Pierre, Marie-Catherine; Macoir, Joël
2017-01-01
Numbers may be manipulated and represented mentally over a compressible number line oriented from left to right. According to numerous studies, one of the primary reasons for dyscalculia is related to improper understanding of the mental number line. Children with dyscalculia usually show difficulty when they have to place Arabic numbers on a…
The endangered pondberry (Lindera melissifolia [Walt] Blume, Lauraceae)
Margaret S. Devall
2013-01-01
Pondberry (Lindera melissifolia) is an endangered plant species that occurs in seven southern states. It is a rhizomatous, clonal shrub that usually grows in colonies and has numerous stems with few branches and drooping leaves that give off a spicy odor when crushed. Pondberry is dioecious, with small yellow flowers that bloom in spring and have scarlet drupes that...
[Significant Issues in Education - Law.] Inequality in Education, Number 17, June 1974.
ERIC Educational Resources Information Center
Hall, Leon; And Others
This issue of Inequality in Education deviates from the usual format of in-depth discussion of a particular topic to include reports on a variety of significant issues in education-law. Leon Hall summarizes the numerous experiences he has had with Southern desegregated schools and students and relates his conclusions about how desegregation is…
The computation of standard solar models
NASA Technical Reports Server (NTRS)
Ulrich, Roger K.; Cox, Arthur N.
1991-01-01
Procedures for calculating standard solar models with the usual simplifying approximations of spherical symmetry, no mixing except in the surface convection zone, no mass loss or gain during the solar lifetime, and no separation of elements by diffusion are described. The standard network of nuclear reactions among the light elements is discussed including rates, energy production and abundance changes. Several of the equation of state and opacity formulations required for the basic equations of mass, momentum and energy conservation are presented. The usual mixing-length convection theory is used for these results. Numerical procedures for calculating the solar evolution, and current evolution and oscillation frequency results for the present sun by some recent authors are given.
Multigrid solutions to quasi-elliptic schemes
NASA Technical Reports Server (NTRS)
Brandt, A.; Taasan, S.
1985-01-01
Quasi-elliptic schemes arise from central differencing or finite element discretization of elliptic systems with odd order derivatives on non-staggered grids. They are somewhat unstable and less accurate than corresponding staggered-grid schemes. When usual multigrid solvers are applied to them, the asymptotic algebraic convergence is necessarily slow. Nevertheless, it is shown by mode analyses and numerical experiments that the usual FMG algorithm is very efficient in solving quasi-elliptic equations to the level of truncation errors. Also, a new type of multigrid algorithm is presented, mode analyzed and tested, for which even the asymptotic algebraic convergence is fast. The essence of that algorithm is applicable to other kinds of problems, including highly indefinite ones.
NASA Astrophysics Data System (ADS)
Wang, Yaohui; Xin, Xuegang; Guo, Lei; Chen, Zhifeng; Liu, Feng
2018-05-01
The switching of a gradient coil current in magnetic resonance imaging induces an eddy current in the surrounding conducting structures, and the secondary magnetic field produced by the eddy current is harmful to the imaging. To minimize eddy current effects, stray field shielding in gradient coil design is usually realized by minimizing the magnetic fields on the cryostat surface or the secondary magnetic fields over the imaging region. In this work, we explicitly compared these two active shielding design methods. Both the stray field and the eddy current on the cryostat inner surface were quantitatively discussed by setting the stray field constraint with an ultra-low maximum intensity of 2 G and setting the secondary field constraint with an extremely small shielding ratio of 0.000001. The investigation revealed that the secondary magnetic field control strategy can produce coils with better performance. However, the former (minimizing the magnetic fields) is preferable when designing a gradient coil with an ultra-low eddy current that must also strictly control the stray field leakage at the edge of the cryostat inner surface. A wrapped-edge gradient coil design scheme was then optimized for a more effective control of the stray fields. Numerical simulation of the wrapped-edge coil design shows that the optimized wrapping angles for the x and z coils, in terms of our coil dimensions, are 40° and 90°, respectively.
Metaplot: a novel stata graph for assessing heterogeneity at a glance.
Poorolajal, J; Mahmoodi, M; Majdzadeh, R; Fotouhi, A
2010-01-01
Heterogeneity is usually a major concern in meta-analysis. Although there are some statistical approaches for assessing variability across studies, here we present a new approach to heterogeneity using "MetaPlot", which investigates the influence of a single study on the overall heterogeneity. MetaPlot is a two-way (x, y) graph, which can be considered a complementary graphical approach for testing heterogeneity. This method shows graphically as well as numerically the results of an influence analysis, in which Higgins' I(2) statistic with 95% confidence interval (CI) is computed omitting one study in each turn and then plotted against the reciprocal of the standard error (1/SE), or "precision". In this graph, 1/SE lies on the x-axis and the I(2) results lie on the y-axis. At a first glance at a MetaPlot, one can predict to what extent omission of a single study may influence the overall heterogeneity. The precision on the x-axis enables us to distinguish the size of each trial. The graph describes the I(2) statistic with 95% CI graphically as well as numerically in one view for prompt comparison. It is possible to implement MetaPlot for meta-analysis of different types of outcome data and summary measures. This method presents a simple graphical approach to identify an outlier and its effect on overall heterogeneity at a glance. We wish to suggest that Stata experts prepare a MetaPlot module for the software.
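A minimal numerical sketch of the computation behind MetaPlot, with invented effect sizes and standard errors: recompute Higgins' I(2) omitting one study at a time and pair each value with that study's precision (the plotting step is omitted here).

```python
# Leave-one-out I^2 influence analysis (the core of the MetaPlot idea);
# study effects and SEs below are invented for illustration.
import numpy as np

theta = np.array([0.30, 0.25, 0.90, 0.35, 0.28, 0.32])   # study effects
se    = np.array([0.10, 0.12, 0.15, 0.09, 0.20, 0.11])   # standard errors

def i_squared(t, s):
    w = 1.0 / s**2                       # inverse-variance weights
    q = np.sum(w * (t - np.sum(w * t) / np.sum(w))**2)   # Cochran's Q
    df = len(t) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print("overall I^2:", round(i_squared(theta, se), 1))
for i in range(len(theta)):
    keep = np.arange(len(theta)) != i
    print(f"omit study {i}: precision={1/se[i]:.1f}, "
          f"I^2={i_squared(theta[keep], se[keep]):.1f}")
```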
A Gompertz population model with Allee effect and fuzzy initial values
NASA Astrophysics Data System (ADS)
Amarti, Zenia; Nurkholipah, Nenden Siti; Anggriani, Nursanti; Supriatna, Asep K.
2018-03-01
Growth and population dynamics models are important tools for predicting the future of a population or species and for preparing good management strategies. This has been done by various known methods, one of which is developing a mathematical model that describes population growth. Models are usually formulated as differential equations or systems of differential equations, depending on the complexity of the underlying properties of the population. One example of biological complexity is the Allee effect, a phenomenon in which, at very small population sizes, the mean individual fitness of the population is strongly correlated with population size. In this paper the population growth model used is the Gompertz equation, modified to include the Allee effect. We explore the properties of the solution to the model numerically using the Runge-Kutta method. Further exploration is done via a fuzzy theoretical approach to accommodate uncertainty in the initial values of the model. An initial value greater than the Allee threshold causes the solution to rise asymptotically towards the carrying capacity, whereas an initial value smaller than the Allee threshold causes the solution to decrease asymptotically towards zero, meaning the population eventually goes extinct. Numerical solutions show that modeling an uncertain initial value near the critical point A (the Allee threshold) with a crisp value could lead to extinction of the population with a certain possibilistic degree, depending on the predetermined membership function of the initial value.
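A hedged sketch with an assumed functional form (the paper's exact Gompertz-Allee formulation may differ): dN/dt = rN ln(K/N)(N/A − 1), integrated with classic RK4 for crisp initial values on either side of the Allee threshold A, reproducing the threshold behaviour described above.

```python
# Sketch: Gompertz growth with an assumed multiplicative Allee factor.
# N0 > A tends to the carrying capacity K; N0 < A decays to extinction.
import numpy as np

r, K, A = 0.5, 100.0, 10.0
f = lambda N: r * N * np.log(K / N) * (N / A - 1.0)

def rk4(N, dt, steps):
    for _ in range(steps):
        k1 = f(N); k2 = f(N + dt/2*k1); k3 = f(N + dt/2*k2); k4 = f(N + dt*k3)
        N += dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return N

for N0 in (8.0, 12.0):   # below vs above the Allee threshold
    print(f"N0={N0}: N(t=40) ~ {rk4(N0, 0.01, 4000):.2f}")
```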
Du, Yuanwei; Guo, Yubin
2015-01-01
The intrinsic mechanism of multimorbidity is difficult to recognize, and prediction and diagnosis are accordingly difficult to carry out. Bayesian networks can help to diagnose multimorbidity in health care, but it is difficult to obtain the conditional probability table (CPT) because of the lack of clinical statistical data. Today, expert knowledge and experience are increasingly used in training Bayesian networks to help predict or diagnose diseases, but the CPT in a Bayesian network is usually irrational or ineffective because realistic constraints are ignored, especially in multimorbidity. To solve these problems, an evidence reasoning (ER) approach is employed to extract and fuse inference data from experts using a belief distribution and a recursive ER algorithm, based on which an evidence reasoning method for constructing conditional probability tables in Bayesian networks of multimorbidity is presented step by step. A multimorbidity numerical example is used to demonstrate the method and prove its feasibility and applicability. The Bayesian network can be determined as long as the inference assessment is provided by each expert according to his or her knowledge or experience. Our method is more effective than existing methods at extracting expert inference data accurately and fusing it effectively for constructing CPTs in a Bayesian network of multimorbidity.
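As a loudly simplified stand-in for the recursive ER algorithm (which handles belief degrees and unassigned mass far more carefully), the sketch below fuses two experts' belief distributions for one CPT column by reliability-weighted averaging; all weights and belief values are invented for illustration.

```python
# Greatly simplified fusion of expert beliefs into one CPT column
# (NOT the full recursive ER algorithm of the paper).
import numpy as np

states = ["present", "absent"]
w = np.array([0.6, 0.4])                 # expert reliabilities (assumed)
beliefs = np.array([[0.7, 0.3],          # expert 1: P(disease | evidence)
                    [0.5, 0.5]])         # expert 2

fused = w @ beliefs                      # reliability-weighted average
cpt_column = fused / fused.sum()         # normalize into one CPT column
print(dict(zip(states, np.round(cpt_column, 3))))
```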
Designing of self-deploying origami structures using geometrically misaligned crease patterns
Saito, Kazuya; Tsukahara, Akira; Okabe, Yoji
2016-01-01
Usually, origami-based morphing structures are designed on the premise of ‘rigid folding’, i.e. the facets and fold lines of origami can be replaced with rigid panels and ideal hinges, respectively. From a structural mechanics viewpoint, some rigid-foldable origami models are overconstrained and have negative degrees of freedom (d.f.). In these cases, the singularity in crease patterns guarantees their rigid foldability. This study presents a new method for designing self-deploying origami using geometrically misaligned creases. In this method, some facets are replaced by ‘holes’ such that the systems become a 1-d.f. mechanism. These perforated origami models can be folded and unfolded similarly to rigid-foldable (without misalignment) models because of their d.f. Focusing on the removed facets, the holes will deform according to the motion of the frame of the remaining parts. In the proposed method, these holes are filled with elastic parts and store elastic energy for self-deployment. First, a new extended rigid-folding simulation technique is proposed to estimate the deformation of the holes. Next, the proposed method is applied to arbitrary-size quadrilateral mesh origami. Finally, by using the finite-element method, the authors conduct numerical simulations and confirm the deployment capabilities of the models. PMID:26997884
Wang, Yujue; Lian, Ziyang; Yao, Mingge; Wang, Ji; Hu, Hongping
2013-10-01
A power harvester with adjustable frequency, consisting of a hinged-hinged piezoelectric bimorph and a concentrated mass, is studied by the precise electric field method (PEFM), which takes into account the distribution of the electric field over the thickness. Usually, using the equivalent electric field method (EEFM), the electric field is approximated as a constant value in the piezoelectric layer. The charge on the upper electrode (UEC) of the bimorph is often taken as the output charge. However, different output charges can be obtained by integrating the electric displacement over the electrode at different thickness coordinates. Therefore, an average charge (AC) over the thickness is often assumed as the output value; this method is denoted EEFM AC. The flexural vibration of the bimorph is calculated by the three methods and their results are compared. Numerical results illustrate that EEFM UEC overestimates the resonant frequency, output power, and efficiency. EEFM AC can accurately calculate the output power and efficiency, but underestimates the resonant frequency. The performance of the harvester, which depends on the concentrated mass weight, its position, and the circuit load, is analyzed using PEFM. The resonant frequency can be modulated by 924 Hz by moving the concentrated mass along the bimorph. This feature suggests that the natural frequency of the harvester can be adjusted conveniently to adapt to frequency fluctuations of the ambient vibration.
A software tool for modeling and simulation of numerical P systems.
Buiu, Catalin; Arsene, Octavian; Cipu, Corina; Patrascu, Monica
2011-03-01
A P system represents a distributed and parallel bio-inspired computing model in which the basic data structures are multi-sets or strings. Numerical P systems have been recently introduced; they use numerical variables and local programs (or evolution rules), usually applied in a deterministic way. They may find interesting applications in areas such as computational biology, process control or robotics. The first simulator of numerical P systems (SNUPS) has been designed, implemented and made available to the scientific community by the authors of this paper. SNUPS allows a wide range of applications, from modeling and simulation of ordinary differential equations, to the use of membrane systems as computational blocks of cognitive architectures, and as controllers for autonomous mobile robots. This paper describes the functioning of a numerical P system and presents an overview of SNUPS capabilities together with an illustrative example. SNUPS is freely available to researchers as a standalone application and may be downloaded from a dedicated website, http://snups.ics.pub.ro/, which includes a user manual and sample membrane structures. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
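A minimal toy illustration of one evolution step of a numerical P system, following the general production/repartition scheme of such systems: a production function is evaluated on the current variables, the consumed variables are reset, and the produced value is distributed according to repartition coefficients. The concrete variables, function and coefficients below are invented, and SNUPS itself is not used here.

```python
# Toy single-membrane numerical P system step (invented program).
x = {"x1": 2.0, "x2": 3.0}

def program(vars):
    production = 2 * vars["x1"] + vars["x2"] * vars["x2"]   # F(x1, x2) = 13
    repartition = {"x1": 1, "x2": 3}                        # c_i coefficients
    unit = production / sum(repartition.values())           # 13 / 4
    return {v: c * unit for v, c in repartition.items()}

shares = program(x)
x = {v: shares.get(v, 0.0) for v in x}   # consumed variables reset, then refilled
print(x)                                  # {'x1': 3.25, 'x2': 9.75}
```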
Steady flow model user's guide
NASA Astrophysics Data System (ADS)
Doughty, C.; Hellstrom, G.; Tsang, C. F.; Claesson, J.
1984-07-01
Sophisticated numerical models that solve the coupled mass and energy transport equations for nonisothermal fluid flow in a porous medium have been used to match analytical results and field data for aquifer thermal energy storage (ATES) systems. As an alternative for the ATES problem, the Steady Flow Model (SFM), a simplified but fast numerical model, was developed. A steady, purely radial flow field is prescribed in the aquifer and incorporated into the heat transport equation, which is then solved numerically. While the radial flow assumption limits the range of ATES systems that can be studied using the SFM, it greatly simplifies use of this code: the preparation of input is quite simple compared to that for a sophisticated coupled mass and energy model, and the cost of running the SFM is far lower. The simple flow field allows use of a special calculational mesh that eliminates the numerical dispersion usually associated with the numerical solution of convection problems. The problem is defined, the algorithms used to solve it are outlined, and the input and output of the SFM are described.
Comparing four methods to estimate usual intake distributions.
Souverein, O W; Dekkers, A L; Geelen, A; Haubrock, J; de Vries, J H; Ocké, M C; Harttig, U; Boeing, H; van 't Veer, P
2011-07-01
The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As 'true' usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data from the European Food Consumption Validation (EFCOVAL) Study in which two 24-h dietary recalls (24-HDRs) and food frequency data were collected. The methods being compared were the Iowa State University Method (ISU), National Cancer Institute Method (NCI), Multiple Source Method (MSM) and Statistical Program for Age-adjusted Dietary Assessment (SPADE). Simulation data were constructed with varying numbers of subjects (n), different values for the Box-Cox transformation parameter (λ(BC)) and different values for the ratio of the within- and between-person variance (r(var)). All data were analyzed with the four different methods and the estimated usual mean intake and selected percentiles were obtained. Moreover, the 2-day within-person mean was estimated as an additional 'method'. These five methods were compared in terms of the mean bias, which was calculated as the mean of the differences between the estimated value and the known true value. The application to data from the EFCOVAL Project included calculations for nutrients (protein, potassium, protein density) and foods (vegetables, fruit and fish). Overall, the mean bias of the ISU, NCI, MSM and SPADE Methods was small. However, for all methods, the mean bias and the variation of the bias increased with smaller sample size, higher variance ratios and more pronounced departures from normality. Serious mean bias (especially in the 95th percentile) was seen using the NCI Method when r(var) = 9, λ(BC) = 0 and n = 1000. The ISU Method and MSM showed a somewhat higher s.d. of the bias compared with the NCI and SPADE Methods, indicating a larger method uncertainty. Furthermore, whereas the ISU, NCI and SPADE Methods produced unimodal density functions by definition, MSM produced distributions with 'peaks' when sample size was small, because the population's usual intake distribution was based on estimated individual usual intakes. The application to the EFCOVAL data showed that all estimates of the percentiles and mean were within 5% of each other for the three nutrients analyzed. For vegetables, fruit and fish, the differences were larger than those for nutrients, but overall the sample mean was estimated reasonably. The four methods that were compared seem to provide good estimates of the usual intake distribution of nutrients. Nevertheless, care needs to be taken when a nutrient has a high within-person variation or a highly skewed distribution, and when the sample size is small. As the methods offer different features, practical reasons may exist to prefer one method over another.
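A hedged sketch of the kind of simulation design described above, with assumed parameter values: usual intakes vary between persons on a transformed scale, two daily recalls add within-person noise, and the naive 2-day mean visibly inflates the tails of the estimated usual-intake distribution, which is why the shrinkage-based methods exist.

```python
# Sketch: why the 2-day within-person mean is a biased 'method' for
# usual-intake percentiles (all parameter values assumed).
import numpy as np

rng = np.random.default_rng(1)
n, r_var = 1000, 4.0                  # subjects; within/between variance ratio
sd_b = 1.0
sd_w = np.sqrt(r_var) * sd_b

usual = rng.normal(5.0, sd_b, n)                   # true usual intakes
days = usual[:, None] + rng.normal(0, sd_w, (n, 2))  # two 24-h recalls each
mean2d = days.mean(axis=1)

for p in (5, 50, 95):                 # tails of the 2-day mean are inflated
    print(f"P{p}: true={np.percentile(usual, p):.2f}  "
          f"2-day mean={np.percentile(mean2d, p):.2f}")
```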
Verification of Numerical Programs: From Real Numbers to Floating Point Numbers
NASA Technical Reports Server (NTRS)
Goodloe, Alwyn E.; Munoz, Cesar; Kirchner, Florent; Correnson, Loiec
2013-01-01
Numerical algorithms lie at the heart of many safety-critical aerospace systems. The complexity and hybrid nature of these systems often requires the use of interactive theorem provers to verify that these algorithms are logically correct. Usually, proofs involving numerical computations are conducted in the infinitely precise realm of the field of real numbers. However, numerical computations in these algorithms are often implemented using floating point numbers. The use of a finite representation of real numbers introduces uncertainties as to whether the properties verified in the theoretical setting hold in practice. This short paper describes work in progress aimed at addressing these concerns. Given a formally proven algorithm, written in the Prototype Verification System (PVS), the Frama-C suite of tools is used to identify sufficient conditions and verify that under such conditions the rounding errors arising in a C implementation of the algorithm do not affect its correctness. The technique is illustrated using an algorithm for detecting loss of separation among aircraft.
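A tiny illustration of the real-versus-float gap the paper targets: associativity of addition holds over the reals but fails in binary floating point, which is why properties proven over the reals need a separate rounding-error argument before they can be trusted in an implementation.

```python
# Associativity fails in IEEE-754 binary64 even though it holds over
# the reals; Fraction gives the exact real-number result for contrast.
from fractions import Fraction

a, b, c = 1e16, 1.0, 1.0
lhs = (a + b) + c          # each 1.0 is absorbed by rounding: 1e16
rhs = a + (b + c)          # 2.0 survives: the next float above 1e16
print(lhs == rhs)                               # False
print(Fraction(a) + Fraction(b) + Fraction(c))  # exact: 10000000000000002
```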
A controlled experiment in ground water flow model calibration
Hill, M.C.; Cooley, R.L.; Pollock, D.W.
1998-01-01
Nonlinear regression was introduced to ground water modeling in the 1970s, but has been used very little to calibrate numerical models of complicated ground water systems. Apparently, nonlinear regression is thought by many to be incapable of addressing such complex problems. With what we believe to be the most complicated synthetic test case used for such a study, this work investigates using nonlinear regression in ground water model calibration. Results of the study fall into two categories. First, the study demonstrates how systematic use of a well designed nonlinear regression method can indicate the importance of different types of data and can lead to successive improvement of models and their parameterizations. Our method differs from previous methods presented in the ground water literature in that (1) weighting is more closely related to expected data errors than is usually the case; (2) defined diagnostic statistics allow for more effective evaluation of the available data, the model, and their interaction; and (3) prior information is used more cautiously. Second, our results challenge some commonly held beliefs about model calibration. For the test case considered, we show that (1) field measured values of hydraulic conductivity are not as directly applicable to models as their use in some geostatistical methods imply; (2) a unique model does not necessarily need to be identified to obtain accurate predictions; and (3) in the absence of obvious model bias, model error was normally distributed. The complexity of the test case involved implies that the methods used and conclusions drawn are likely to be powerful in practice.
NASA Astrophysics Data System (ADS)
Cortinez, J. M.; Valocchi, A. J.; Herrera, P. A.
2013-12-01
Because of the finite size of numerical grids, it is very difficult to correctly account for processes that occur at different spatial scales when simulating the migration of conservative and reactive compounds dissolved in groundwater. On the one hand, transport processes in heterogeneous porous media are controlled by local-scale dispersion associated with transport processes at the pore scale. On the other hand, variations of velocity at the continuum or Darcy scale produce spreading of the contaminant plume, which is referred to as macro-dispersion. Furthermore, under some conditions both effects interact, so that spreading may enhance the action of local-scale dispersion, resulting in higher mixing, dilution and reaction rates. Traditionally, transport processes at different spatial scales have been included in numerical simulations by using a single dispersion coefficient. This approach implicitly assumes that the separate effects of local-scale dispersion and macro-dispersion can be added and represented by a single effective dispersion coefficient. Moreover, the selection of the effective dispersion coefficient for numerical simulations usually does not consider the filtering effect of the grid size on small-scale flow features. We have developed a multi-scale Lagrangian numerical method that allows using two different dispersion coefficients to represent local- and macro-scale dispersion. This technique considers fluid particles that carry solute mass and whose locations evolve according to a deterministic component, given by the grid-scale velocity, and a stochastic component that corresponds to a block-effective macro-dispersion coefficient. Mass transfer between particles due to local-scale dispersion is approximated by a meshless method. We use our model to test under which transport conditions the combined effects of local- and macro-dispersion are additive and can be represented by a single effective dispersion coefficient. We also demonstrate that for the situations where both processes are additive, an effective grid-dependent dispersion coefficient can be derived based on the concept of block-effective dispersion. We show that the proposed effective dispersion coefficient is able to reproduce dilution, mixing and reaction rates for a wide range of transport conditions similar to the ones found in many practical applications.
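A minimal sketch of the two-scale Lagrangian idea, omitting the meshless particle-to-particle mass transfer used for local-scale dispersion: each particle moves with the grid-scale velocity plus a random-walk step scaled by an assumed block-effective macro-dispersion coefficient.

```python
# Random-walk particle tracking: deterministic advection plus a
# stochastic macro-dispersion step (local-scale mixing omitted).
import numpy as np

rng = np.random.default_rng(7)
n_p, steps, dt = 5000, 200, 0.1
v, D_block = 1.0, 0.05            # grid-scale velocity; block-effective disp.

xp = np.zeros(n_p)                # all particles start at x = 0
for _ in range(steps):
    xp += v * dt + np.sqrt(2 * D_block * dt) * rng.standard_normal(n_p)

print("plume centre:", xp.mean(), " plume variance:", xp.var())
# theory: centre = v*t = 20, variance = 2*D_block*t = 2.0
```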
NASA Astrophysics Data System (ADS)
Lavergne, Catherine
The geological formations of the Montreal area are mostly made of limestones. The usual approach for design is based on rock mass classification systems that consider the rock mass as an equivalent continuous and isotropic material. However, for shallow excavations, stability is generally controlled by geological structures, which in Montreal are bedding planes that give the rock mass a strong stress and strain anisotropy. The objectives of this research are to build a numerical model that accounts for the anisotropy of sedimentary rocks and to determine the influence of the design parameters on displacements, stresses and failure around unsupported metro underground excavations. The geotechnical data used for this study come from a metro extension project and were made available to the author. The excavation geometries analyzed are a tunnel, a station and a garage consisting of three (3) parallel tunnels, for rock cover between 4 and 16 m. The numerical modeling was done with the FLAC software, which represents a continuous medium, using the ubiquitous-joint constitutive model to simulate the strength anisotropy of sedimentary rock masses. The model considers gravity stresses for an anisotropic material and pore pressures. In total, eleven (11) design parameters were analyzed. Results show that the unconfined compressive strength of intact rock, fault zones and pore pressures in soils have an important influence on the stability of the numerical model. The geometry of the excavation, the thickness of rock cover, the RQD, Poisson's ratio and the horizontal tectonic stresses have a moderate influence. Finally, the ubiquitous-joint parameters, pore pressures in the rock mass, the width of the pillars of the garage and the damage linked to the excavation method have a low impact. The FLAC results were compared with those of UDEC, a software package that uses the distinct element method. Similar conclusions were obtained for displacements, stress state and failure modes; however, the UDEC model gives slightly less conservative results than FLAC. This study stands out for its local character and the large amount of geotechnical data available to determine the parameters of the numerical model. The results led to recommendations for laboratory tests that can be applied to characterize more specifically the anisotropy of sedimentary rocks.
A moist Boussinesq shallow water equations set for testing atmospheric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zerroukat, M., E-mail: mohamed.zerroukat@metoffice.gov.uk; Allen, T.
The shallow water equations have long been used as an initial test for numerical methods applied to atmospheric models, with the test suite of Williamson et al. being used extensively for validating new schemes and assessing their accuracy. However, the lack of physics forcing within this simplified framework often requires numerical techniques to be reworked when applied to fully three dimensional models. In this paper a novel two-dimensional shallow water equations system that retains moist processes is derived. This system is derived from a three-dimensional Boussinesq approximation of the hydrostatic Euler equations where, unlike the classical shallow water set, we allow the density to vary slightly with temperature. This results in extra (or buoyancy) terms for the momentum equations, through which a two-way moist-physics dynamics feedback is achieved. The temperature and moisture variables are advected as separate tracers with sources that interact with the mean flow through a simplified yet realistic bulk moist-thermodynamic phase-change model. This moist shallow water system provides a unique tool to assess the usually complex and highly non-linear dynamics–physics interactions in atmospheric models in a simple yet realistic way. The full non-linear shallow water equations are solved numerically on several case studies and the results suggest quite realistic interaction between the dynamics and physics and, in particular, the generation of cloud and rain. Highlights: • Novel shallow water equations which retain moist processes are derived from the three-dimensional hydrostatic Boussinesq equations. • The new shallow water set can be seen as a more general one, where the classical equations are a special case of these equations. • This moist shallow water system naturally allows a feedback mechanism from the moist physics increments to the momentum via buoyancy. • Like full models, temperature and moisture are advected as tracers that interact through a simplified yet realistic phase-change model. • This model is a unique tool to test numerical methods for atmospheric models, and physics–dynamics coupling, in a very realistic and simple way.
Applications of numerical methods to simulate the movement of contaminants in groundwater.
Sun, N Z
1989-01-01
This paper reviews mathematical models and numerical methods that have been extensively used to simulate the movement of contaminants through the subsurface. The major emphasis is placed on numerical methods for advection-dominated transport problems and inverse problems. Several mathematical models that are commonly used in field problems are listed. A variety of numerical solutions for three-dimensional models are introduced, including the multiple cell balance method, which can be considered a variation of the finite element method. The multiple cell balance method is easy to understand and convenient for solving field problems. When advective transport dominates dispersive transport, two kinds of numerical difficulties, overshoot and numerical dispersion, always arise in standard finite difference and finite element methods. To overcome these numerical difficulties, various numerical techniques have been developed, such as upstream weighting methods and moving point methods. A complete review of these methods is given, and we also discuss the problems of parameter identification, reliability analysis, and optimal experiment design, which must be addressed when constructing a practical model. PMID:2695327
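To make the two difficulties concrete, here is a small sketch (grid values illustrative) contrasting upstream weighting with centered differencing on a sharp 1-D front: the upwind scheme stays bounded but smears the front (numerical dispersion), while the centered scheme oscillates and overshoots.

```python
import numpy as np

# 1-D advection of a sharp front at Courant number C = 0.5
nx, nt, C = 200, 100, 0.5
c_up = np.where(np.arange(nx) < 20, 1.0, 0.0)   # initial step profile
c_ce = c_up.copy()

for _ in range(nt):
    c_up[1:] -= C * (c_up[1:] - c_up[:-1])          # upstream weighting
    c_ce[1:-1] -= 0.5 * C * (c_ce[2:] - c_ce[:-2])  # centered differencing

print("upwind   min/max:", c_up.min(), c_up.max())   # bounded, front smeared
print("centered min/max:", c_ce.min(), c_ce.max())   # overshoots beyond [0, 1]
```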
Robust large-scale parallel nonlinear solvers for simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson
2005-11-01
This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write and easily portable. However, the method usually takes twice as long to solve as Newton-GMRES on general problems because it solves two linear systems at each iteration. In this paper, we discuss modifications to Bouaricha's method for a practical implementation, including a special globalization technique and other modifications for greater efficiency. We present numerical results showing computational advantages over Newton-GMRES on some realistic problems. We further discuss a new approach for dealing with singular (or ill-conditioned) matrices. In particular, we modify an algorithm for identifying a turning point so that an increasingly ill-conditioned Jacobian does not prevent convergence.
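A minimal sketch of the core idea behind Broyden's method, not Sandia's limited-memory implementation: the Jacobian approximation is corrected by a rank-one secant update after every step, so only residual evaluations are needed.

```python
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=200):
    """Broyden's 'good' method: secant (rank-one) updates of an approximate
    Jacobian, so no analytic Jacobian is ever evaluated."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                    # initial Jacobian approximation
    f = F(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -f)        # quasi-Newton step
        x = x + s
        f_new = F(x)
        if np.linalg.norm(f_new) < tol:
            break
        y = f_new - f
        B += np.outer(y - B @ s, s) / (s @ s)   # rank-one secant update
        f = f_new
    return x

# small nonlinear system: unit circle intersected with the line x0 = x1
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
print(broyden(F, [0.8, 0.6]))   # expected root near (0.707, 0.707)
```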
The Electric Potential of a Macromolecule in a Solvent: A Fundamental Approach
NASA Astrophysics Data System (ADS)
Juffer, André H.; Botta, Eugen F. F.; van Keulen, Bert A. M.; van der Ploeg, Auke; Berendsen, Herman J. C.
1991-11-01
A general numerical method is presented to compute the electric potential for a macromolecule of arbitrary shape in a solvent with nonzero ionic strength. The model is based on a continuum description of the dielectric and screening properties of the system, which consists of a bounded internal region with discrete charges and an infinite external region. The potential obeys the Poisson equation in the internal region and the linearized Poisson-Boltzmann equation in the external region, coupled through appropriate boundary conditions. It is shown how this three-dimensional problem can be posed as a pair of coupled integral equations for the potential and the normal component of the electric field at the dielectric interface. These equations can be solved by a straightforward application of boundary element techniques. The solution involves the decomposition of a matrix that depends only on the geometry of the surface and not on the positions of the charges. With this approach the number of unknowns is reduced by an order of magnitude with respect to the usual finite difference methods. Special attention is given to the numerical inaccuracies resulting from charges located close to the interface; an adapted formulation is given for that case. The method is tested both for a spherical geometry, for which an exact solution is available, and for a realistic problem, for which a finite difference solution and experimental verification are available. The latter concerns the shift in acid strength (pK values) of histidines in the copper-containing protein azurin on oxidation of the copper, for various values of the ionic strength. A general method is given to triangulate a macromolecular surface. The possibility of using the method presented here for a correct treatment of long-range electrostatic interactions in simulations of solvated macromolecules, which form an essential part of correct potentials of mean force, is discussed.
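For the spherical test geometry mentioned above, the linearized Poisson-Boltzmann exterior problem has a classical closed form; a sketch of the textbook Debye-Hückel potential for the special case of a single central charge (all parameter values illustrative) shows the screened decay an exact spherical solution exhibits.

```python
import numpy as np

eps0 = 8.854e-12    # vacuum permittivity [F/m]
q = 1.602e-19       # one elementary charge [C]

def phi_outside(r, a=1e-9, eps_out=78.5, kappa=1.0e9):
    """Debye-Hueckel exterior potential for a central charge q in a sphere of
    radius a; 1/kappa is the ionic screening length (linearized PB region)."""
    pref = q / (4.0 * np.pi * eps0 * eps_out)
    return pref * np.exp(-kappa * (r - a)) / (r * (1.0 + kappa * a))

r = np.linspace(1e-9, 5e-9, 5)
print(phi_outside(r))           # decays faster than an unscreened Coulomb 1/r
```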
Surface topography and ultrastructural changes of mucinous carcinoma breast cells.
Voloudakis, G E; Baltatzis, G E; Agnantis, N J; Arnogianaki, N; Misitzis, J; Voloudakis-Baltatzis, I
2007-01-01
Mucinous carcinoma of the breast (MCB) is histologically classified into 2 groups: (1) pure MCB and (2) mixed MCB. Pure MCB carries a better prognosis than mixed MCB. This research concerns the cell surface topography and ultrastructure of the cells in the above cases and aims to find the differences between them by means of two methods: scanning electron microscopy (SEM) and transmission electron microscopy (TEM). For the SEM examination, it was necessary to first culture the MCB tissues and then proceed with the usual SEM method. In contrast, for the TEM technique, MCB tissues were first fixed, followed by the classic TEM method. The authors found the topography of pure MCB cases to be without nodes. The cell membrane was smooth, with numerous pores and small ruffles that covered the entire cell. The ultrastructural appearance of the same cases showed a normal cell membrane with abundant collagen fibers. They also had many small vesicles containing mucin as well as secretory droplets. In contrast, the mixed MCB cases had a number of lymph nodes, and their cell surface topography showed more pronounced changes such as microvilli, numerous blebs, ruffles and many long projections. Their ultrastructure showed very long microvilli with large cytoplasmic inclusions and extracellular mucin collections, electron-dense material vacuoles, and many important cytoplasmic organelles. An important fact is that mixed MCB also contains areas of infiltrating ductal carcinoma. The cytoplasmic organelles of these cells are clearly responsible for the synthesis, storage, and secretion of the characteristic mucin of this tumor type. Evidently, this abnormal mucin production and the abundance of secretory granules, along with the long projections observed in the topographical structure, might be responsible for transferring tumor cells to neighboring organs, and thus for metastatic disease.
Yifat, Jonathan; Gannot, Israel
2015-03-01
Early detection of malignant tumors plays a crucial role in the survival chances of the patient. Therefore, new and innovative tumor detection methods are constantly being sought. Tumor-specific magnetic-core nanoparticles can be used with an alternating magnetic field to detect and treat tumors by hyperthermia. To analyze the effectiveness of the method, the bio-heat transfer between the nanoparticles and the tissue must be carefully studied. Heat diffusion in biological tissue is usually analyzed using the Pennes bio-heat equation, where blood perfusion plays an important role. Malignant tumors are known to initiate an angiogenesis process, where endothelial cell migration from neighboring vasculature eventually leads to the formation of a thick blood capillary network around them. This process allows the tumor to receive its extensive nutrition demands and evolve into a more progressive and potentially fatal tumor. In order to assess the effect of angiogenesis on the bio-heat transfer problem, we have developed a discrete stochastic 3D model and simulation of tumor-induced angiogenesis. The model extends previous angiogenesis models by providing a high-resolution 3D stochastic simulation, capture of fine morphological features of the angiogenesis, effects of dynamic sprout thickness functions, and a stochastic parent-vessel generator. We show that the angiogenesis realizations produced are well suited for numerical bio-heat transfer analysis. A statistical study of the angiogenesis characteristics was performed using Monte Carlo simulations. Based on the statistical analysis, we provide an analytical expression for the blood perfusion coefficient in the Pennes equation as a function of several parameters. This updated form of the Pennes equation could be used for numerical and analytical analyses of the proposed detection and treatment method. Copyright © 2014 Elsevier Inc. All rights reserved.
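A minimal 1-D explicit finite-difference sketch of the Pennes bio-heat equation with a perfusion sink and a localized nanoparticle heat source; all tissue, blood, and source values are typical magnitudes assumed for illustration, not the study's parameters.

```python
import numpy as np

# rho*c*dT/dt = k*d2T/dx2 + w_b*rho_b*c_b*(T_a - T) + Q
rho, c, k = 1050.0, 3600.0, 0.5           # tissue density, heat capacity, conductivity
rho_b, c_b, w_b = 1060.0, 3770.0, 0.5e-3  # blood properties, perfusion rate [1/s]
T_a, Q = 37.0, 5e4                        # arterial temp [C], source [W/m^3]

nx, dx, dt, nt = 101, 1e-3, 0.05, 2000
T = np.full(nx, 37.0)
src = np.zeros(nx); src[45:56] = Q        # heated (tumor) zone in the middle

for _ in range(nt):
    lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += dt / (rho * c) * (k * lap
                                 + w_b * rho_b * c_b * (T_a - T[1:-1])
                                 + src[1:-1])
    T[0] = T[-1] = 37.0                   # body-core boundary conditions
print("peak temperature [C]:", T.max())
```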
Simulations for the Development of Thermoelectric Measurements
NASA Astrophysics Data System (ADS)
Zabrocki, Knud; Ziolkowski, Pawel; Dasgupta, Titas; de Boor, Johannes; Müller, Eckhard
2013-07-01
In thermoelectricity, continuum theoretical equations are usually used to calculate the characteristics and performance of thermoelectric elements, modules or devices as a function of external parameters (material, geometry, temperatures, current, flow, load, etc.). An increasing number of commercial software packages aimed at applications, such as COMSOL and ANSYS, contain kernels implementing direct thermoelectric coupling. Application of these numerical tools also allows analysis of physical measurement conditions and can lead to specifically adapted methods for developing the special test equipment required for the determination of TE material and module properties. System-theoretical and simulation-based considerations of favorable geometries are used to create draft designs in the development of such measurement systems. Particular consideration is given to the development of transient measurement methods, which have great advantages over conventional static methods in terms of the required measurement duration. In this paper the benefits of using numerical tools in designing measurement facilities are shown with two examples. The first is the determination of geometric correction factors in four-point probe measurement of electrical conductivity, whereas the second is the so-called combined thermoelectric measurement (CTEM) system, in which all thermoelectric material properties (Seebeck coefficient, electrical and thermal conductivity, and Harman measurement of zT) are measured in a combined way. Here we especially highlight the measurement of thermal conductivity in a transient mode. Factors influencing the measurement results, such as coupling to the environment due to radiation, heat losses via the mounting of the probe head, and contact resistance between the sample and sample holder, are illustrated, analyzed, and discussed. By employing the results of the simulations, we have developed an improved sample head that allows measurements over a larger temperature interval with enhanced accuracy.
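For the first example, the two textbook limits of the collinear four-point probe are easy to state; the numerically determined geometric correction factors discussed above interpolate between such limits for real sample geometries (values below are illustrative):

```python
import numpy as np

def rho_semi_infinite(V, I, s):
    """Semi-infinite sample, collinear probes with spacing s."""
    return 2.0 * np.pi * s * V / I

def rho_thin_film(V, I, t):
    """Thin-film limit (thickness t much smaller than s)."""
    return (np.pi / np.log(2.0)) * t * V / I

V, I, s, t = 1.0e-3, 1.0e-3, 1.0e-3, 100e-9
print(rho_semi_infinite(V, I, s), rho_thin_film(V, I, t))
```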
A numerical technique for linear elliptic partial differential equations in polygonal domains.
Hashemzadeh, P; Fokas, A S; Smitheman, S A
2015-03-08
Integral representations for the solution of linear elliptic partial differential equations (PDEs) can be obtained using Green's theorem. However, these representations involve both the Dirichlet and the Neumann values on the boundary, and for a well-posed boundary-value problem (BVP) one of these functions is unknown. A new transform method for solving BVPs for linear and integrable nonlinear PDEs, usually referred to as the unified transform (or the Fokas transform), was introduced by the second author in the late 1990s. For linear elliptic PDEs, this method can be considered the analogue of the Green's function approach, but formulated in the complex Fourier plane instead of the physical plane. It employs two global relations, also formulated in the Fourier plane, which couple the Dirichlet and the Neumann boundary values. These relations can be used to characterize the unknown boundary values in terms of the given boundary data, yielding an elegant approach for determining the Dirichlet-to-Neumann map. The numerical implementation of the unified transform can be considered the counterpart in the Fourier plane of the well-known boundary integral method, which is formulated in the physical plane. For this implementation, one must choose (i) a suitable basis for expanding the unknown functions and (ii) an appropriate set of complex values, which we refer to as collocation points, at which to evaluate the global relations. Here, by employing a variety of examples we present simple guidelines for how the above choices can be made. Furthermore, we provide concrete rules for choosing the collocation points so that the condition number of the matrix of the associated linear system remains low.
Kim, Chun-Ja; Kang, Duck-Hee
2006-01-01
Despite the numerous benefits of physical activity for patients with diabetes, most healthcare providers in busy clinical settings rarely find time to counsel their patients about it. A Web-based program for healthcare providers can be used as an effective counseling tool when strategies are outlined for specific stages of readiness for physical activity. Seventy-three adults with type 2 diabetes were randomly assigned to a Web-based intervention, a printed-material intervention, or usual care. After 12 weeks, the effects of the interventions on physical activity, fasting blood sugar, and glycosylated hemoglobin were evaluated. Both the Web-based and printed-material interventions, compared with usual care, were effective in increasing physical activity (P < .001) and decreasing fasting blood sugar (P < .01) and glycosylated hemoglobin (P < .01). Post hoc analysis of change scores indicated significant differences between the Web-based intervention and usual care and between the printed-material intervention and usual care, but not between the Web-based and printed-material interventions. The findings of this study support the value of Web-based and printed-material interventions in healthcare counseling. With increasing Web access, the effectiveness of Web-based programs offered directly to patients needs to be tested.
A Novel Skin and Fascia Opening for Subfascial Inserting of Intrathecal Baclofen Pump.
Fiaschi, Pietro; Cama, Armando; Piatelli, Gianluca; Moretti, Paolo; Pavanello, Marco
2018-02-01
The aim of this article is to introduce a new skin and fascia opening for intrathecal baclofen pump implantation in the abdomen, with the purpose of reducing complications related to wound breakdown. We introduce a novel cutaneous and fascial opening that yields two opposed "L-shaped" incisions. This method entails numerous advantages. The first is avoiding the direct alignment of overlapped sutures, which would create a locus minoris resistentiae that can weaken and break under the push of the pump. Another advantage is an increased barrier against deep extension of infective processes of cutaneous origin. The wide opening of the subfascial pocket permits the implantation of any type of pump available, and it reduces the difficulty of reopening the pouch for pump replacement. It also permits the fastening of all anchoring systems usually present in pumps. A further advantage is the improved possibility of careful muscle cauterization thanks to the wide fascia opening, with reduced risk of postsurgical hematoma. Our results showed a reduction of wound complications with this method, which could contribute to reducing the rate of wound complications and patient discomfort. Copyright © 2017 Elsevier Inc. All rights reserved.
JahaniShoorab, Nahid; Ebrahimzadeh Zagami, Samira; Nahvi, Ali; Mazluom, Seyed Reza; Golmakani, Nahid; Talebi, Mahdi; Pabarja, Ferial
2015-01-01
Background: Pain is one of the side effects of episiotomy. Virtual reality (VR) is a non-pharmacological method for pain relief. The purpose of this study was to determine the effect of using video glasses on pain reduction in primiparous women during episiotomy repair. Methods: This clinical trial was conducted on 30 primiparous parturient women in labor at Omolbanin Hospital (Mashhad, Iran) during May-July 2012. Participants were randomly divided into two equal groups for episiotomy repair. The intervention group received the usual treatment plus VR (video glasses and local infiltration of 5 ml of lidocaine 2% solution), and the control group received only local infiltration (5 ml of lidocaine 2% solution). Pain was measured using the Numeric Pain Rating Scale (0-100 scale) before, during and after the episiotomy repair. Data were analyzed using Fisher's exact test, Chi-square, Mann-Whitney and repeated measures ANOVA tests with SPSS 11.5 software. Results: There was a statistically significant difference between the pain scores of the two groups during episiotomy repair (P=0.038). Conclusion: Virtual reality is an effective complementary non-pharmacological method to reduce pain during episiotomy repair. Trial Registration Number: IRCT138811063185N1. PMID:25999621
NASA Technical Reports Server (NTRS)
Yang, Qiguang; Liu, Xu; Wu, Wan; Kizer, Susan; Baize, Rosemary R.
2016-01-01
A hybrid stream PCRTM-SOLAR model has been proposed for fast and accurate radiative transfer simulation. It calculates the reflected solar (RS) radiances in a fast, coarse way and then, with the help of a pre-saved matrix, transforms the results to obtain the desired highly accurate RS spectrum. The methodology has been demonstrated with the hybrid stream discrete ordinate (HSDO) radiative transfer (RT) model. The HSDO method calculates the monochromatic radiances using a 4-stream discrete ordinate method, where only a small number of monochromatic radiances are simulated with both the 4-stream and a larger N-stream (N = 16) discrete ordinate RT algorithm. The accuracy of the obtained channel radiance is comparable to the result from the N-stream moderate resolution atmospheric transmission version 5 (MODTRAN5). The root-mean-square errors are usually less than 5×10⁻⁴ mW/(cm² sr cm⁻¹). The computational speed is three to four orders of magnitude faster than the medium-speed correlated-k option of MODTRAN5. This method is very efficient for simulating thousands of RS spectra under multi-layer cloud/aerosol and solar radiation conditions for climate change studies and numerical weather prediction applications.
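A toy sketch of the hybrid idea as described: fit the "pre-saved" transformation from a few channels where both the cheap and the accurate calculation were run, then apply it to the whole cheap spectrum. The synthetic radiances and the scalar affine map are stand-in assumptions; the actual model works on multi-stream radiative transfer output.

```python
import numpy as np

rng = np.random.default_rng(1)

n_channels, n_train = 1000, 40
R_low = rng.random(n_channels)                       # cheap full spectrum
idx = rng.choice(n_channels, n_train, replace=False)
R_high_train = 1.05 * R_low[idx] + 0.01              # synthetic accurate runs

A = np.vstack([R_low[idx], np.ones(n_train)]).T      # least-squares design
coef, *_ = np.linalg.lstsq(A, R_high_train, rcond=None)
R_corrected = coef[0] * R_low + coef[1]              # lifted full spectrum
print("fitted map:", coef)                           # recovers (1.05, 0.01)
```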
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.
2016-12-01
Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error. This may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. In this study, Bayesian inference is facilitated using high performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in the real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by classical Bayesian calibration that does not account for model structural error. In addition, the Bayesian method with the error model gives significantly more accurate predictions along with reasonable credible intervals.
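A toy sketch of joint inference of a model parameter and a data-driven error term via random-walk Metropolis; the synthetic truth, the sin-shaped discrepancy basis, the flat priors, and all tuning values are assumptions for illustration, not the paper's surrogate-based setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Truth y = 2x + 0.5*sin(x); the calibrated model y = theta*x is structurally
# wrong, so we jointly infer a discrepancy amplitude b on an assumed basis.
x = np.linspace(0.0, 10.0, 50)
y_obs = 2.0 * x + 0.5 * np.sin(x) + rng.normal(0.0, 0.1, x.size)

def log_post(theta, b, sigma=0.1):        # flat priors assumed
    resid = y_obs - (theta * x + b * np.sin(x))
    return -0.5 * np.sum(resid**2) / sigma**2

theta, b = 1.0, 0.0
lp = log_post(theta, b)
samples = []
for _ in range(20_000):                   # random-walk Metropolis
    th_p, b_p = theta + rng.normal(0, 0.02), b + rng.normal(0, 0.02)
    lp_p = log_post(th_p, b_p)
    if np.log(rng.random()) < lp_p - lp:
        theta, b, lp = th_p, b_p, lp_p
    samples.append((theta, b))
post = np.array(samples[5_000:])
print("posterior means (theta, b):", post.mean(axis=0))   # near (2.0, 0.5)
```

Without the error term, theta would drift away from 2.0 to absorb the unmodeled sin(x) bias, which is exactly the compensation effect the abstract warns about.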
On the fusion of tuning parameters of fuzzy rules and neural network
NASA Astrophysics Data System (ADS)
Mamuda, Mamman; Sathasivam, Saratha
2017-08-01
Learning a fuzzy rule-based system with a neural network can lead to a precise and valuable understanding of several problems. Fuzzy logic offers a simple way to reach a definite conclusion based upon vague, ambiguous, imprecise, noisy or missing input information. The conventional learning algorithm for tuning the parameters of fuzzy rules from training input-output data usually ends in a weak firing state, which weakens the fuzzy rules and makes them unreliable for a multiple-input fuzzy system. In this paper, we introduce a new learning algorithm for tuning the parameters of the fuzzy rules together with a radial basis function neural network (RBFNN), trained on input-output data with the gradient descent method. The new learning algorithm addresses the problem of weak firing found in the conventional method. We illustrate the efficiency of the new learning algorithm by means of numerical examples; MATLAB R2014a was used to simulate our results. The results show that the new learning method has the advantage of training the fuzzy rules without tampering with the fuzzy rule table, which allows a membership function to be used more than once in the fuzzy rule base.
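A minimal sketch of gradient-descent training of an RBF network on input-output data (Gaussian centers and widths held fixed, only output weights trained); the paper's algorithm additionally tunes the fuzzy-rule parameters, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.linspace(-3.0, 3.0, 100)[:, None]
y = np.sin(X).ravel()                     # target function

centers = np.linspace(-3.0, 3.0, 10)[None, :]    # 10 Gaussian basis centers
width = 0.6
Phi = np.exp(-((X - centers) ** 2) / (2.0 * width**2))   # 100 x 10 design

w = rng.normal(0.0, 0.1, 10)
lr = 0.05
for _ in range(2000):
    err = Phi @ w - y
    w -= lr * (Phi.T @ err) / X.size      # gradient of the mean squared error
print("training RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```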
NASA Astrophysics Data System (ADS)
Sævik, P. N.; Nixon, C. W.
2017-11-01
We demonstrate how topology-based measures of connectivity can be used to improve analytical estimates of effective permeability in 2-D fracture networks, which is one of the key parameters necessary for fluid flow simulations at the reservoir scale. Existing methods in this field usually compute fracture connectivity using the average fracture length. This approach is valid for ideally shaped, randomly distributed fractures, but is not immediately applicable to natural fracture networks. In particular, natural networks tend to be more connected than randomly positioned fractures of comparable lengths, since natural fractures often terminate in each other. The proposed topological connectivity measure is based on the number of intersections and fracture terminations per sampling area, which for statistically stationary networks can be obtained directly from limited outcrop exposures. To evaluate the method, numerical permeability upscaling was performed on a large number of synthetic and natural fracture networks, with varying topology and geometry. The proposed method was seen to provide much more reliable permeability estimates than the length-based approach, across a wide range of fracture patterns. We summarize our results in a single, explicit formula for the effective permeability.
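A sketch in the spirit of the topological measure described, using the node-counting convention common in fracture-network topology (I-nodes for isolated tips, Y-nodes for abutments/terminations in other fractures, X-nodes for crossings); the exact estimator and its mapping to effective permeability in the paper may differ.

```python
# Node counts from an outcrop sampling area: each branch has two ends;
# I-nodes contribute 1 branch end, Y-nodes 3, X-nodes 4.
def connections_per_branch(n_I, n_Y, n_X):
    n_branches = (n_I + 3 * n_Y + 4 * n_X) / 2.0
    return (3 * n_Y + 4 * n_X) / n_branches     # connected ends per branch

print(connections_per_branch(n_I=120, n_Y=240, n_X=60))  # hypothetical counts
```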
Li, Bin; Chen, Lianping; Li, Li
2017-01-01
In this article, we propose a novel detection method for underwater moving targets by detecting their extremely low frequency (ELF) emissions with inductive sensors. The ELF field source of the targets is modeled by a horizontal electric dipole at distances greater than several times the target's length. The formulas for the fields produced in air are derived with a three-layer model (air, seawater and seafloor) and are evaluated with a complementary numerical integration technique. A proof-of-concept measurement is presented. The ELF emissions from a surface ship were detected by inductive electric and magnetic sensors as the ship was leaving a harbor. The ELF signals are of substantial strength and show the typical characteristics of a harmonic line spectrum, whose fundamental frequency is directly related to the ship's speed. Due to the high sensitivity and low noise level of our sensors, the system is capable of resolving weak ELF signals at long distances. In our experiment, a detection distance of 1300 m from the surface ship was realized with sensors above the sea surface, which shows that this method would be an appealing complement to the usual acoustic detection and magnetic anomaly detection capabilities. PMID:28788097
A bubble detection system for propellant filling pipeline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wen, Wen; Zong, Guanghua; Bi, Shusheng
2014-06-15
This paper proposes a bubble detection system based on the ultrasound transmission method, mainly for probing high-speed bubbles in the satellite propellant filling pipeline. First, three common ultrasonic detection methods are compared and the ultrasound transmission method is adopted in this paper. Then, the ultrasound beam in a vertical pipe is investigated, suggesting that the width of the beam used for detection is usually smaller than the internal diameter of the pipe, which means that when bubbles move close to the pipe wall, they may escape detection. A special device is designed to solve this problem. It can generate a spiral flow to force all the bubbles to ascend along the central line of the pipe. In the end, experiments are implemented to evaluate the performance of this system. Bubbles of five different sizes are generated and detected. Experiment results show that the sizes and quantity of bubbles can be estimated by this system. Also, bubbles of different radii can be distinguished from each other. The numerical relationship between the ultrasound attenuation and the bubble radius is acquired and can be utilized for estimating unknown bubble sizes and measuring the total bubble volume.
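The final calibration step suggested by the last sentence can be sketched as simple interpolation on an experimentally acquired attenuation-radius curve; all numbers below are placeholders, not the measured relationship.

```python
import numpy as np

radius_cal = np.array([0.5, 1.0, 1.5, 2.0, 2.5])   # calibrated radii [mm]
atten_cal = np.array([1.2, 3.0, 5.9, 9.8, 14.5])   # measured attenuation [dB]

def estimate_radius(atten_measured):
    """Invert the monotone attenuation-radius curve by interpolation."""
    return np.interp(atten_measured, atten_cal, radius_cal)

print(estimate_radius(7.5))   # radius of a bubble producing 7.5 dB attenuation
```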
Direct computation of stochastic flow in reservoirs with uncertain parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dainton, M.P.; Nichols, N.K.; Goldwater, M.H.
1997-01-15
A direct method is presented for determining the uncertainty in reservoir pressure, flow, and net present value (NPV) using the time-dependent, one-phase, two- or three-dimensional equations of flow through a porous medium. The uncertainty in the solution is modelled as a probability distribution function and is computed from given statistical data for input parameters such as permeability. The method generates an expansion for the mean of the pressure about a deterministic solution to the system equations using a perturbation to the mean of the input parameters. Hierarchical equations that define approximations to the mean solution at each point and to the field covariance of the pressure are developed and solved numerically. The procedure is then used to find the statistics of the flow and the risked value of the field, defined by the NPV, for a given development scenario. This method involves only one (albeit complicated) solution of the equations and contrasts with the more usual Monte Carlo approach, where many such solutions are required. The procedure is easily applied to other physical systems modelled by linear or nonlinear partial differential equations with uncertain data. 14 refs., 14 figs., 3 tabs.
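A toy contrast of the direct (perturbation) approach with Monte Carlo on the simplest possible "reservoir", a 1-D core at fixed flow rate; the lognormal permeability and all values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pressure drop dp = q*L/k across a 1-D core with uncertain permeability k.
q, L, k_mean, k_std = 1.0, 1.0, 1.0, 0.2

# Monte Carlo: many "solves" (trivially cheap here, expensive in general)
sig2 = np.log(1.0 + (k_std / k_mean) ** 2)       # lognormal k with the
mu = np.log(k_mean) - 0.5 * sig2                 # prescribed mean and std
k = rng.lognormal(mu, np.sqrt(sig2), 100_000)
dp_mc = q * L / k
print("Monte Carlo  mean/std:", dp_mc.mean(), dp_mc.std())

# Direct perturbation: expand dp(k) about k_mean, one solve plus derivatives:
#   E[dp] ~ dp(k_mean) + dp''(k_mean)*var(k)/2,  std[dp] ~ |dp'(k_mean)|*k_std
dp0 = q * L / k_mean
print("perturbation mean/std:",
      dp0 + (q * L / k_mean**3) * k_std**2, (q * L / k_mean**2) * k_std)
```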
Brezar, Vedran; Ruffin, Nicolas; Lévy, Yves; Seddiki, Nabila
2014-09-01
Regulatory T cells (Tregs) are pivotal in preventing autoimmunity. They play a major but still ambiguous role in cancer and viral infections. Functional studies of human Tregs are often hampered by numerous technical difficulties arising from imperfections in isolation and depletion protocols, together with the usually low cell numbers available from clinical samples. We standardized a simple procedure (Single Step Method, SSM), based on magnetic bead technology, in which depletion and isolation of human Tregs with high purity are achieved simultaneously. The SSM is suitable for low cell numbers, either fresh or frozen, from both patients and healthy individuals. It allows simultaneous Treg isolation and depletion, which can be used for further functional work to monitor the suppressive function of isolated Tregs (in vitro suppression assay) and also the effector IFN-γ responses of the Treg-depleted cell fraction (OX40 assay). To our knowledge, there is no other accurate standardized method for Treg isolation and depletion in a clinical context. The SSM could thus be adopted and easily standardized across different laboratories. Copyright © 2014 Elsevier B.V. All rights reserved.
A fast solver for the Helmholtz equation based on the generalized multiscale finite-element method
NASA Astrophysics Data System (ADS)
Fu, Shubin; Gao, Kai
2017-11-01
Conventional finite-element methods for solving the acoustic-wave Helmholtz equation in highly heterogeneous media usually require a finely discretized mesh to represent the medium property variations with sufficient accuracy. Computational costs for solving the Helmholtz equation can therefore be considerably expensive for complicated and large geological models. Based on the generalized multiscale finite-element theory, we develop a novel continuous Galerkin method to solve the Helmholtz equation in acoustic media with spatially variable velocity and mass density. Instead of using conventional polynomial basis functions, we use multiscale basis functions to form the approximation space on the coarse mesh. The multiscale basis functions are obtained by multiplying the eigenfunctions of a carefully designed local spectral problem with an appropriate multiscale partition of unity. These multiscale basis functions can effectively incorporate the characteristics of the heterogeneous medium's fine-scale variations, thus enabling us to obtain an accurate solution to the Helmholtz equation without directly solving the large discrete system formed on the fine mesh. Numerical results show that our new solver can significantly reduce the dimension of the discrete Helmholtz system and markedly reduce the computational time.
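For scale, a sketch of the conventional fine-grid discretization the multiscale solver is designed to avoid (1-D and finite differences rather than finite elements, values illustrative): even a mildly heterogeneous velocity already forces thousands of unknowns.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# u'' + (omega/c(x))^2 u = f on (0,1), with u(0) = u(1) = 0
n = 2000                                   # fine mesh needed to resolve c(x)
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
c = 1.0 + 0.3 * np.sin(40.0 * np.pi * x)   # heterogeneous velocity
omega = 40.0

main = -2.0 / h**2 + (omega / c) ** 2
off = np.ones(n - 1) / h**2
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")
f = np.zeros(n); f[n // 2] = 1.0 / h       # point source in the middle
u = spla.spsolve(A, f)
print("unknowns:", n, " max |u|:", np.abs(u).max())
```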
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, Ling; Zhao, Haihua; Kim, Seung Jun
In this study, the classical Welander oscillatory natural circulation problem is investigated using high-order numerical methods. As originally studied by Welander, the fluid motion in a differentially heated fluid loop can exhibit stable, weakly unstable, and strongly unstable modes. A theoretical stability map was also derived in the original stability analysis. Numerical results obtained in this paper show very good agreement with Welander's theoretical derivations. For stable cases, numerical results from both the high-order and low-order numerical methods agree well with the analytically derived non-dimensional flow rate. The high-order numerical methods give much smaller numerical errors than the low-order methods. For the stability analysis, the high-order numerical methods could perfectly predict the stability map, while the low-order numerical methods failed to do so: for all theoretically unstable cases, the low-order methods predicted them to be stable. The results obtained in this paper are strong evidence of the benefits of using high-order numerical methods over low-order ones when simulating the natural circulation phenomenon that has gained increasing interest in many future nuclear reactor designs.
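A minimal illustration of the mechanism on a scalar test equation rather than Welander's loop: a weakly growing oscillation dy/dt = (sigma + i*omega)y is spuriously damped by a dissipative low-order scheme (implicit Euler) but tracked by a high-order one (RK4). All values are illustrative.

```python
import numpy as np

lam = 0.05 + 5.0j        # weak growth, fast oscillation
h, n = 0.05, 2000        # exact amplitude growth: |e^(lam*h*n)| = e^5 ~ 148

y_ie = y_rk = 1.0 + 0.0j
for _ in range(n):
    y_ie = y_ie / (1.0 - lam * h)                   # implicit Euler (1st order)
    k1 = lam * y_rk                                 # classical RK4 (4th order)
    k2 = lam * (y_rk + 0.5 * h * k1)
    k3 = lam * (y_rk + 0.5 * h * k2)
    k4 = lam * (y_rk + h * k3)
    y_rk = y_rk + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

print("implicit Euler |y|:", abs(y_ie))   # spuriously damped: looks stable
print("RK4           |y|:", abs(y_rk))    # tracks the physical growth
```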
Second to Fourth Digit Ratio and Numerical Competence in Children
ERIC Educational Resources Information Center
Fink, Bernhard; Brookes, Helen; Neave, Nick; Manning, John T.; Geary, David C.
2006-01-01
The ratio between the 2nd and 4th fingers (2D:4D)--a potential proxy for prenatal testosterone (T) exposure--shows a sex difference, with males usually having lower mean values; the latter potentially indicates higher prenatal T exposure. We studied relations between 2D:4D and competencies in the domains of counting, number knowledge, and…
26 CFR 1.168(i)-1T - General asset accounts (temporary).
Code of Federal Regulations, 2014 CFR
2014-04-01
... aircraft is a unit of property as determined under § 1.263(a)-3T(e)(3). However, for disposition purposes... depreciation deduction. (5) Mass assets is a mass or group of individual items of depreciable assets— (i) That... the mass or group; (iii) Numerous in quantity; (iv) Usually accounted for only on a total dollar or...
26 CFR 1.168(i)-1T - General asset accounts (temporary).
Code of Federal Regulations, 2013 CFR
2013-04-01
... aircraft is a unit of property as determined under § 1.263(a)-3T(e)(3). However, for disposition purposes... depreciation deduction. (5) Mass assets is a mass or group of individual items of depreciable assets— (i) That... the mass or group; (iii) Numerous in quantity; (iv) Usually accounted for only on a total dollar or...
26 CFR 1.168(i)-1T - General asset accounts (temporary).
Code of Federal Regulations, 2012 CFR
2012-04-01
... aircraft is a unit of property as determined under § 1.263(a)-3T(e)(3). However, for disposition purposes... depreciation deduction. (5) Mass assets is a mass or group of individual items of depreciable assets— (i) That... the mass or group; (iii) Numerous in quantity; (iv) Usually accounted for only on a total dollar or...
USDA-ARS?s Scientific Manuscript database
Background E. coli O157:H7 is an important foodborne pathogen responsible for numerous outbreaks worldwide. FSIS regulates this pathogen as an adulterant in meat products, and a “High Event Period” is defined as a time period in which commercial meat processing plants experience a higher than usual ...
The Master of Fine Arts (MFA) in Creative Writing in the United States: Teaching the "Unteachable"
ERIC Educational Resources Information Center
Caglioti, Carla
2010-01-01
The Master of Fine Arts (MFA) in Creative Writing, usually housed within the English Department, has become a progressively more popular field of study among students and budget conscious administrators. But for all its popularity, it is a field that has been left generally unexamined by scholars. While there have been numerous scholarly studies…
2014-03-24
of the aSIL microscopy for semiconductor failure analysis and is applicable to imaging in quantum optics [18], biophotonics [19] and metrology [20...is usually of interest, the model can be adapted to applications in fields such as quantum optics and biophotonics for which the non-resonant
Landscape esthetics: How to quantify the scenics of a river valley
Leopold, Luna Bergere
1969-01-01
There is an increasing number of bills before Congress that in one way or another affect the landscape or the environment. Each of these requires seemingly endless congressional hearings, which are recorded upon endless reams of paper. And if, for some reason, you happen to read the voluminous testimony surrounding one of these environment-affecting proposals, you will generally find a marked contrast between the volume and kind of information presented by those who are pressing for technical development - building a dam, constructing a highway, installing a nuclear power plant - and the testimony of those who either oppose the development or wish to alter it in some way. The developer usually employs numerical arguments, which tend to show that there is an economic benefit to be obtained by constructing something - whatever that something may be. The argument is usually expressed in terms of a "cost-benefit ratio." It is typically argued, for instance, that the construction cost of a given project will be repaid over a period of time and will yield a profit or a benefit in excess of the development costs by a ratio of, let us say, 1.2 to 1. The argument is further supported with great numbers of charts, graphs, tables, and additional figures. In marked contrast, those who favor protection of the environment against development are fewer in number, their statements are based on emotion or personal feelings, and they usually lack numerical information, quantitative data, and detailed computations. Perhaps this is the reason why this latter group seems to be continually fighting rearguard actions - losing battle after battle.
Benko, Matúš; Gfrerer, Helmut
2018-01-01
In this paper, we consider a sufficiently broad class of non-linear mathematical programs with disjunctive constraints, which include, e.g., mathematical programs with complementarity/vanishing constraints. We present an extension of the concept of [Formula: see text]-stationarity which can be easily combined with the well-known notion of M-stationarity to obtain the stronger property of so-called [Formula: see text]-stationarity. We show how the property of [Formula: see text]-stationarity (and thus also of M-stationarity) can be efficiently verified for the considered problem class by computing [Formula: see text]-stationary solutions of a certain quadratic program. We further consider the situation where the point to be tested for [Formula: see text]-stationarity is not known exactly but is approximated by some convergent sequence, as is usually the case when applying a numerical method.
Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions
NASA Astrophysics Data System (ADS)
Tsaur, Ruey-Chyn
2015-02-01
In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model for the parameters of fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is realised based on the results.
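A sketch of the possibilistic building block, computing the possibilistic mean and variance of a triangular fuzzy return rate by quadrature over the gamma-level sets (Carlsson-Fullér-style definitions assumed; the paper's full model adds fuzzy proportions and constraints):

```python
import numpy as np

def possibilistic_moments(a, alpha, beta, n=10_000):
    """Possibilistic mean/variance of a triangular fuzzy number (peak a, left
    spread alpha, right spread beta) via quadrature over gamma-level sets."""
    g = np.linspace(0.0, 1.0, n)
    lo = a - (1.0 - g) * alpha            # level-set endpoints [a1(g), a2(g)]
    hi = a + (1.0 - g) * beta
    mean = np.trapz(g * (lo + hi), g)     # E(A) = int_0^1 g*(a1+a2) dg
    var = 0.5 * np.trapz(g * (hi - lo) ** 2, g)   # (1/2) int g*(a2-a1)^2 dg
    return mean, var

m, v = possibilistic_moments(a=0.08, alpha=0.03, beta=0.05)  # a fuzzy return
print(m, v)   # analytic: a+(beta-alpha)/6 = 0.08333..., (alpha+beta)^2/24
```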
NASA Astrophysics Data System (ADS)
Singh, Mandeep; Khare, Kedar
2018-05-01
We describe a numerical processing technique that allows single-shot region-of-interest (ROI) reconstruction in image plane digital holographic microscopy with full pixel resolution. The ROI reconstruction is modelled as an optimization problem where the cost function to be minimized consists of an L2-norm squared data fitting term and a modified Huber penalty term that are minimized alternately in an adaptive fashion. The technique can provide full pixel resolution complex-valued images of the selected ROI which is not possible to achieve with the commonly used Fourier transform method. The technique can facilitate holographic reconstruction of individual cells of interest from a large field-of-view digital holographic microscopy data. The complementary phase information in addition to the usual absorption information already available in the form of bright field microscopy can make the methodology attractive to the biomedical user community.
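A minimal sketch of the optimization flavor described, minimizing an L2-norm squared data-fitting term plus a Huber-type penalty by plain gradient descent on a small real-valued synthetic problem; the paper's method alternates adaptive minimizations on complex-valued holographic data.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100
x_true = np.zeros(n); x_true[40:60] = 1.0          # piecewise-constant signal
A = rng.normal(0.0, 0.1, (80, n))                  # underdetermined system
b = A @ x_true + rng.normal(0.0, 0.01, 80)

def huber_grad(t, delta=0.1):                      # gradient of a Huber penalty
    return np.where(np.abs(t) <= delta, t / delta, np.sign(t))

D = np.eye(n) - np.eye(n, k=1)                     # finite-difference operator
lam, lr = 0.05, 0.1
x = np.zeros(n)
for _ in range(3000):
    grad = 2.0 * A.T @ (A @ x - b) + lam * D.T @ huber_grad(D @ x)
    x -= lr * grad
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```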
Quantitative evaluation of first-order retardation corrections to the quarkonium spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brambilla, N.; Prosperi, G.M.
1992-08-01
We evaluate numerically first-order retardation corrections for some charmonium and bottomonium masses under the usual assumption of a Bethe-Salpeter purely scalar confinement kernel. The result depends strictly on the use of an additional effective potential to express the corrections (rather than resorting to Kato perturbation theory) and on an appropriate regularization prescription. The kernel has been chosen in order to reproduce, in the instantaneous approximation, a semirelativistic potential suggested by the Wilson loop method. The calculations are performed for two sets of parameters determined by fits in potential theory. The corrections turn out to be typically of the order of a few hundred MeV and depend on an additional scale parameter introduced in the regularization. A conjecture existing in the literature on the origin of the constant term in the potential is also discussed.
The role of water content in triboelectric charging of wind-blown sand.
Gu, Zhaolin; Wei, Wei; Su, Junwei; Yu, Chuck Wah
2013-01-01
Triboelectric charging is common in desert sandstorms and dust devils on Earth; however, it remains poorly understood. Here we propose a charging mechanism for wind-blown sand based on water adsorbed on the micro-porous surfaces of the grains, exploiting the fact that water content is universal but usually a minor component in most particle systems. Triboelectric charging could result from the different mobilities of H(+)/OH(-) ions between contacting sand grains at different temperatures. Computational fluid dynamics (CFD) and the discrete element method (DEM) were used to demonstrate the dynamics of the sand charging. The numerically simulated charge-to-mass ratios of the sands and the electric field strength established in a wind tunnel agreed well with the experimental data. The charging mechanism could provide an explanation for the charging process of all identical granular systems with water content, including Martian dust devils, wind-blown snow, and even powder electrification in industrial processes.
Quantification of uncertainty for fluid flow in heterogeneous petroleum reservoirs
NASA Astrophysics Data System (ADS)
Zhang, Dongxiao
Detailed description of the heterogeneity of oil/gas reservoirs is needed to make performance predictions of oil/gas recovery. However, only limited measurements at a few locations are usually available. This combination of significant spatial heterogeneity with incomplete information about it leads to uncertainty about the values of reservoir properties and thus, to uncertainty in estimates of production potential. The theory of stochastic processes provides a natural method for evaluating these uncertainties. In this study, we present a stochastic analysis of transient, single phase flow in heterogeneous reservoirs. We derive general equations governing the statistical moments of flow quantities by perturbation expansions. These moments can be used to construct confidence intervals for the flow quantities (e.g., pressure and flow rate). The moment equations are deterministic and can be solved numerically with existing solvers. The proposed moment equation approach has certain advantages over the commonly used Monte Carlo approach.
Variations of Strahl Properties with Fast and Slow Solar Wind
NASA Technical Reports Server (NTRS)
Figueroa-Vinas, Adolfo; Goldstein, Melvyn L.; Gurgiolo, Chris
2008-01-01
The interplanetary solar wind electron velocity distribution function generally shows three different populations. Two of the components, the core and halo, have been the most intensively analyzed and modeled populations using different theoretical models. The third component, the strahl, is usually seen at higher energies, is confined in pitch angle, and is highly field-aligned and skewed. This population has been more difficult to identify and to model in the solar wind. In this work we make use of the high angular, energy and time resolution and the three-dimensional data of the Cluster/PEACE electron spectrometer to identify and analyze this component in the ambient solar wind during fast and slow solar wind. The moment density and fluid velocity have been computed by a semi-numerical integration method. The variations of the strahl density and drift velocity with the bulk solar wind speed could provide some insight into the source, origin, and evolution of the strahl.
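A sketch of the kind of numerical moment integration described: density (zeroth moment) and bulk velocity (first moment) computed from a gridded velocity distribution, here an unnormalized drifting Maxwellian as a stand-in for measured spectrometer data.

```python
import numpy as np

v = np.linspace(-4e6, 4e6, 61)                    # velocity grid [m/s]
VX, VY, VZ = np.meshgrid(v, v, v, indexing="ij")
vth, vd = 1.0e6, 0.5e6                            # thermal and drift speeds
f = np.exp(-((VX - vd)**2 + VY**2 + VZ**2) / vth**2)   # drifting Maxwellian

dv = v[1] - v[0]
density = f.sum() * dv**3                         # zeroth moment: n = int f d3v
ux = (VX * f).sum() * dv**3 / density             # first moment / density
print("density (arb.):", density, " bulk vx [m/s]:", ux)   # ux ~ 5e5
```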
Comparative studies of physical properties of kinesiotapes.
Gołąb, Agnieszka; Kulesa-Mrowiecka, Małgorzata; Gołąb, Marek
2017-01-01
Nowadays we observe the growing popularity of kinesiotaping as a supportive method in physiotherapy. Available documentation on kinesiotaping states that the mechanical properties of the tapes are similar to those of human skin, but usually provides hardly any numerical data characterizing these properties. Therefore, testing and comparing the physical properties of commercially available kinesiotapes seems important. The physical properties of five commercially available kinesiotapes were examined. Strain vs. stress data was collected up to 15 N. The program Origin 9.0 was used for data analysis. The obtained results show that up to about 2 N the strain vs. stress characteristics of the tested tapes are similar, while for greater stress they differ substantially. An alternative to the commonly used definition of relative strain is proposed. This definition could be more suitable in those cases when the desired tape tensions are higher than 50%, i.e. in ligament and tendon techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gelß, Patrick, E-mail: p.gelss@fu-berlin.de; Matera, Sebastian, E-mail: matera@math.fu-berlin.de; Schütte, Christof, E-mail: schuette@mi.fu-berlin.de
2016-06-01
In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this is too high-dimensional to be solved with standard numerical techniques and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the Tensor Train Format for this purpose. The performance of the approach is demonstrated on a first principles based, reduced model for the CO oxidation on the RuO{sub 2}(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.
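A generic TT-SVD sketch: a small dense tensor is decomposed into tensor-train cores by sequential truncated SVDs. The study applies the same format to the far larger, structured master-equation operator rather than a dense array.

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Decompose a dense tensor into tensor-train cores by sequential SVDs."""
    dims = T.shape
    cores, r_prev = [], 1
    M = T.reshape(dims[0], -1)
    for d in dims[:-1]:
        M = M.reshape(r_prev * d, -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))   # truncation rank
        cores.append(U[:, :r].reshape(r_prev, d, r))
        M = s[:r, None] * Vt[:r]                  # carry remainder forward
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

T = np.fromfunction(lambda i, j, k: np.sin(i + j + k), (8, 9, 10))
cores = tt_svd(T)
print("TT ranks:", [c.shape[2] for c in cores[:-1]])   # sin(i+j+k) gives rank 2
```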
DOE Office of Scientific and Technical Information (OSTI.GOV)
Everett, W.R.; Rechnitz, G.A.
1999-01-01
A mini-review of enzyme-based electrochemical biosensors for inhibition analysis of organophosphorus and carbamate pesticides is presented. The discussion includes the most recent literature on advances in detection limits, selectivity and real sample analysis. Recent reviews on the monitoring of pesticides and their residues suggest that the classical analytical techniques of gas and liquid chromatography are the most widely used methods of detection. These techniques, although very accurate in their determinations, can be quite time consuming and expensive and usually require extensive sample clean-up and pre-concentration. For these and many other reasons, the classical techniques are very difficult to adapt for field use. Numerous researchers have, in the past decade, developed and improved biosensors for use in pesticide analysis. This mini-review focuses on recent advances in enzyme-based electrochemical biosensors for the determination of organophosphorus and carbamate pesticides.
The 3D Hough Transform for plane detection in point clouds: A review and a new accumulator design
NASA Astrophysics Data System (ADS)
Borrmann, Dorit; Elseberg, Jan; Lingemann, Kai; Nüchter, Andreas
2011-03-01
The Hough Transform is a well-known method for detecting parameterized objects. It is the de facto standard for detecting lines and circles in 2-dimensional data sets. For 3D data it has attracted little attention so far. Even for the 2D case, high computational costs have led to the development of numerous variations of the Hough Transform. In this article we evaluate different variants of the Hough Transform with respect to their applicability to detecting planes in 3D point clouds reliably. Apart from computational costs, the main problem is the representation of the accumulator. Usual implementations favor geometrical objects with certain parameters due to uneven sampling of the parameter space. We present a novel approach to accumulator design focusing on achieving the same size for each cell and compare it to existing designs.
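A minimal Hough voting sketch for planes, parameterized by normal direction (theta, phi) and distance rho with p·n = rho; the plain uniformly gridded accumulator below is exactly the kind of uneven design the article's equal-cell accumulator improves on. All sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

pts = np.column_stack([rng.uniform(-1, 1, 300), rng.uniform(-1, 1, 300),
                       np.full(300, 0.3)])        # noiseless plane z = 0.3

n_t, n_p, n_r = 30, 30, 40                        # naive uniform accumulator
thetas = np.linspace(0.0, np.pi, n_t)
phis = np.linspace(0.0, np.pi, n_p)
rhos = np.linspace(-2.0, 2.0, n_r)
acc = np.zeros((n_t, n_p, n_r), dtype=int)

for p in pts:
    for i, t in enumerate(thetas):
        for j, ph in enumerate(phis):
            nrm = np.array([np.sin(t) * np.cos(ph),
                            np.sin(t) * np.sin(ph), np.cos(t)])
            acc[i, j, np.argmin(np.abs(rhos - p @ nrm))] += 1   # vote

i, j, k = np.unravel_index(acc.argmax(), acc.shape)
print("theta, phi, rho:", thetas[i], phis[j], rhos[k])   # theta ~ 0, rho ~ 0.3
```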
Using soft systems methodology to develop a simulation of out-patient services.
Lehaney, B; Paul, R J
1994-10-01
Discrete event simulation is an approach to modelling a system in the form of a set of mathematical equations and logical relationships, usually used for complex problems that are difficult to address with analytical or numerical methods. Managing out-patient services is such a problem. However, simulation is not in itself a systemic approach, in that it provides no methodology by which system boundaries and system activities may be identified. The investigation considers the use of soft systems methodology as an aid to drawing system boundaries and identifying system activities, for the purpose of simulating the out-patients' department at a local hospital. The long-term aims are to examine the effects that the participative nature of soft systems methodology has on the acceptability of the simulation model, and to provide analysts and managers with a process that may assist in planning strategies for health care.
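A toy discrete event simulation of an out-patient clinic as a single-server queue (exponential arrival and consultation times assumed for illustration); the soft-systems step discussed above would decide what counts as "the system" before such a model is built.

```python
import heapq
import random

random.seed(1)
t_end, mean_ia, mean_svc = 480.0, 12.0, 10.0   # clinic day and rates [minutes]

events = [(random.expovariate(1 / mean_ia), "arrival")]
queue, server_free, waits = [], True, []

while events:
    t, kind = heapq.heappop(events)            # next event in time order
    if t > t_end:
        break
    if kind == "arrival":
        heapq.heappush(events, (t + random.expovariate(1 / mean_ia), "arrival"))
        queue.append(t)                        # patient joins the queue
    else:                                      # a consultation finished
        server_free = True
    if server_free and queue:                  # start the next consultation
        arrived = queue.pop(0)
        waits.append(t - arrived)
        server_free = False
        heapq.heappush(events, (t + random.expovariate(1 / mean_svc), "done"))

print("patients seen:", len(waits), " mean wait [min]:", sum(waits) / len(waits))
```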
In Situ Thermal Generation of Silver Nanoparticles in 3D Printed Polymeric Structures.
Fantino, Erika; Chiappone, Annalisa; Calignano, Flaviana; Fontana, Marco; Pirri, Fabrizio; Roppolo, Ignazio
2016-07-19
Polymer nanocomposites have always attracted the interest of researchers and industry because of their potential combination of properties from both the nanofillers and the hosting matrix. Combining nanomaterials with 3D printing could offer clear advantages and numerous new opportunities in several application fields. Embedding nanofillers in a polymeric matrix can improve the final material properties, but it usually makes the printing process more difficult. Considering this drawback, in this paper we propose a method to obtain polymer nanocomposites by in situ generation of nanoparticles after the printing process. 3D structures were fabricated with a Digital Light Processing (DLP) system by dissolving metal salts in the starting liquid formulation. The 3D fabrication is followed by a thermal treatment to induce in situ generation of metal nanoparticles (NPs) in the polymer matrix. Comprehensive studies were systematically performed on the thermo-mechanical characteristics, morphology and electrical properties of the 3D printed nanocomposites.
NASA Astrophysics Data System (ADS)
Sabzikar, Farzad; Meerschaert, Mark M.; Chen, Jinghua
2015-07-01
Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered fractional difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.
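A sketch of a tempered Grünwald-type fractional difference, assuming the commonly used weight form w_j = (-1)^j binom(alpha, j) exp(-j*lambda*h); consult the paper for the exact operator, including the lambda-dependent correction term that appears in the tempered derivative's definition.

```python
import numpy as np
from scipy.special import binom

def tempered_difference(f, h, alpha, lam, n_weights=200):
    """Apply sum_j w_j f(x - j*h) / h^alpha with tempered Gruenwald-type
    weights w_j = (-1)^j binom(alpha, j) exp(-j*lam*h) (assumed form)."""
    j = np.arange(n_weights)
    w = (-1.0) ** j * binom(alpha, j) * np.exp(-j * lam * h)
    out = np.zeros_like(f)
    for jj in range(n_weights):          # shift-and-accumulate convolution
        out[jj:] += w[jj] * f[: f.size - jj]
    return out / h**alpha

x = np.linspace(0.0, 10.0, 1001)
h = x[1] - x[0]
f = np.exp(-((x - 5.0) ** 2))            # smooth test profile
print(tempered_difference(f, h, alpha=0.8, lam=1.0)[500:503])
```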
Simulation of miniature endplate potentials in neuromuscular junctions by using a cellular automaton
NASA Astrophysics Data System (ADS)
Avella, Oscar Javier; Muñoz, José Daniel; Fayad, Ramón
2008-01-01
Miniature endplate potentials are recorded in the neuromuscular junction when the acetylcholine contents of one or a few synaptic vesicles are spontaneously released into the synaptic cleft. Since their discovery by Fatt and Katz in 1952, they have been among the paradigms in neuroscience. Those potentials are usually simulated by means of numerical approaches, such as Brownian dynamics, finite differences and finite element methods. Here we propose that diffusion cellular automata can be a useful alternative for investigating them. To illustrate this point, we simulate a miniature endplate potential by using experimental parameters. Our model reproduces the potential shape, amplitude and time course. Since our automaton is able to track the history and interactions of each single particle, it is very easy to introduce non-linear effects with little computational effort. This makes cellular automata excellent candidates for simulating biological reaction-diffusion processes, where no other external forces are involved.
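The mechanics of such a simulation are simple enough to sketch. The following is a minimal, hypothetical diffusion cellular automaton in the spirit of the abstract, not the authors' code: a 2D lattice stands in for the synaptic cleft, particles random-walk synchronously, and an absorbing receptor row records binding; lattice size, particle count and binding probability are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Lattice model of the synaptic cleft: particles random-walk until they
# reach the receptor row (y == 0), where they may bind and be absorbed.
WIDTH, HEIGHT = 200, 50          # illustrative cleft dimensions (lattice units)
N_PARTICLES = 5000               # molecules in one vesicle (illustrative)
P_BIND = 0.5                     # binding probability at the receptor row (illustrative)
STEPS = 2000

# All particles start at the release site at the top of the cleft.
pos = np.tile([WIDTH // 2, HEIGHT - 1], (N_PARTICLES, 1))
alive = np.ones(N_PARTICLES, dtype=bool)
bound_per_step = np.zeros(STEPS)

moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

for t in range(STEPS):
    n = alive.sum()
    if n == 0:
        break
    # Synchronous update: each free particle takes one random lattice step.
    pos[alive] += moves[rng.integers(0, 4, n)]
    np.clip(pos[:, 0], 0, WIDTH - 1, out=pos[:, 0])
    np.clip(pos[:, 1], 0, HEIGHT - 1, out=pos[:, 1])
    # Particles at the receptor row bind with probability P_BIND.
    at_receptor = alive & (pos[:, 1] == 0)
    binds = at_receptor & (rng.random(N_PARTICLES) < P_BIND)
    alive[binds] = False
    bound_per_step[t] = binds.sum()

# The cumulative binding curve is a proxy for the endplate potential time course.
print("total bound:", int(bound_per_step.sum()))
```

Because each particle is tracked individually, non-linear effects (saturable receptors, enzymatic degradation) can be added by editing the per-particle rules, which is the advantage the abstract emphasizes.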
Spectral Properties of Composite Excitations in the t-J Model
NASA Astrophysics Data System (ADS)
Otaki, Takashi; Yahagi, Yuta; Matsueda, Hiroaki
2017-08-01
In quantum many-body systems, the equation of motion for a simple fermionic operator does not close, and higher-order processes induce composite operators dressed with several types of nonlocal quantum fluctuation. We systematically examine the spectral properties of these composite excitations in the t-J model in one spatial dimension by both numerical and theoretical approaches. Of particular interest, with the help of the Bethe ansatz for the large-U Hubbard model, is the classification of which composite excitations are due to the string excitation, usually hidden in the single-particle spectrum, in addition to the spinon and holon branches. We examine how the mixing between the spinon and string excitations is prohibited in terms of the composite operator method. Owing to the dimensionality-independent nature of the present approach, we discuss the implications of the mixing in close connection with the pseudogap in high-Tc cuprates.
NASA Astrophysics Data System (ADS)
Soeryana, E.; Fadhlina, N.; Sukono; Rusyaman, E.; Supian, S.
2017-01-01
Investors in stocks are also faced with the issue of risk, because daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. A portfolio consisting of several stocks is intended to achieve an optimal composition of the investment. This paper discusses mean-variance optimization of a stock portfolio with non-constant mean and volatility, based on a logarithmic utility function. The non-constant mean is analysed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analysed using Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. The optimization is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is applied to several Islamic stocks in Indonesia. The expected result is the proportion of investment in each Islamic stock analysed.
NASA Astrophysics Data System (ADS)
Soeryana, Endang; Halim, Nurfadhlina Bt Abdul; Sukono; Rusyaman, Endang; Supian, Sudradjat
2017-03-01
Investors in stocks are also faced with the issue of risk, because daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. A portfolio consisting of several stocks is intended to achieve an optimal composition of the investment. This paper discusses mean-variance optimization of a stock portfolio with non-constant mean and volatility, based on the negative exponential utility function. The non-constant mean is analyzed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analyzed using Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. The optimization is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is applied to several stocks in Indonesia. The expected result is the proportion of investment in each stock analyzed.
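Both records rely on the same final optimization step. Below is a minimal sketch of that Lagrangian mean-variance step, assuming the mean vector mu and covariance Sigma have already been produced by the ARMA and GARCH fits; all numbers are illustrative, not taken from the papers.

```python
import numpy as np

# Illustrative inputs: in the papers these come from ARMA (mean) and
# GARCH (volatility/covariance) fits to the stock return series.
mu = np.array([0.010, 0.012, 0.008])          # expected returns (assumed)
Sigma = np.array([[0.040, 0.006, 0.004],
                  [0.006, 0.050, 0.005],
                  [0.004, 0.005, 0.030]])     # return covariance (assumed)
r_target = 0.011                              # required portfolio return

n = len(mu)
ones = np.ones(n)

# Lagrangian: L = w' Sigma w - gamma (w'1 - 1) - delta (w'mu - r_target).
# Stationarity plus the two constraints give one linear KKT system.
KKT = np.block([
    [2 * Sigma,      -ones[:, None],   -mu[:, None]],
    [ones[None, :],  np.zeros((1, 1)), np.zeros((1, 1))],
    [mu[None, :],    np.zeros((1, 1)), np.zeros((1, 1))],
])
rhs = np.concatenate([np.zeros(n), [1.0], [r_target]])

sol = np.linalg.solve(KKT, rhs)
w = sol[:n]                                   # optimal portfolio weights
print("weights:", w, "sum:", w.sum(), "return:", w @ mu)
```

The closed-form solve is possible because the variance is quadratic in the weights and both constraints are linear; the choice of utility function in the two papers changes how r_target (or the risk-aversion weight) is set, not this algebraic step.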
Filter and Grid Resolution in DG-LES
NASA Astrophysics Data System (ADS)
Miao, Ling; Sammak, Shervin; Madnia, Cyrus K.; Givi, Peyman
2017-11-01
The discontinuous Galerkin (DG) methodology has proven very effective for large eddy simulation (LES) of turbulent flows. Two important parameters in DG-LES are the grid resolution (h) and the filter size (Δ). In most previous work, the filter size is set proportional to the grid spacing. In this work, the DG method is combined with a subgrid scale (SGS) closure which is equivalent to that of the filtered density function (FDF). The resulting hybrid scheme is particularly attractive because a larger portion of the resolved energy is captured as the order of spectral approximation increases. Different cases for LES of a three-dimensional temporally developing mixing layer are appraised, and a systematic parametric study is conducted to investigate the effects of grid resolution, filter width, and the order of spectral discretization. Comparative assessments are also made via the use of high-resolution direct numerical simulation (DNS) data.
Intuitive approach to the unified theory of spin relaxation
NASA Astrophysics Data System (ADS)
Szolnoki, Lénárd; Dóra, Balázs; Kiss, Annamária; Fabian, Jaroslav; Simon, Ferenc
2017-12-01
Spin relaxation is conventionally discussed using two different approaches for materials with and without inversion symmetry: the former case is described by the Elliott-Yafet (EY) theory, while the D'yakonov-Perel' (DP) theory applies to the latter. We discuss herein a simple and intuitive approach to demonstrate that the two seemingly disparate mechanisms are closely related. A compelling analogy between the respective Hamiltonians is presented, and we show that the usual derivation of spin-relaxation times can be performed in the respective frameworks of the two theories. The result also allows us to obtain less canonical spin-relaxation regimes: the generalization of the EY mechanism to materials with a large quasiparticle broadening, and the DP mechanism in ultrapure semiconductors. The method also allows a practical and intuitive numerical implementation of the spin-relaxation calculation, which is demonstrated for MgB2, a material with anomalous spin-relaxation properties.
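For orientation, the canonical scalings that any unified treatment must recover in the appropriate limits are (textbook forms, not expressions from the paper; τ_p is the momentum relaxation time, L the spin-orbit matrix element, Δ the relevant band separation, and Ω the internal spin-orbit precession frequency):

\[ \frac{1}{\tau_s^{\mathrm{EY}}} \approx \left(\frac{L}{\Delta}\right)^{2} \frac{1}{\tau_p}, \qquad \frac{1}{\tau_s^{\mathrm{DP}}} \approx \Omega^{2}\, \tau_p . \]

The opposite dependence on τ_p is precisely why the two mechanisms are conventionally treated as disparate.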
Atomic Gaussian type orbitals and their Fourier transforms via the Rayleigh expansion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yükçü, Niyazi
Gaussian type orbitals (GTOs), which are one of the types of exponential type orbitals (ETOs), are usually used as basis functions in multi-center atomic and molecular integrals to better understand the physical and chemical properties of matter. In the Fourier transform method (FTM), the basis functions themselves are not simple to manipulate mathematically, but their Fourier transforms are easier to use. In this work, with the help of the FTM, the Rayleigh expansion and some properties of unnormalized GTOs, we present new mathematical results for the Fourier transform of GTOs in terms of Laguerre polynomials, hypergeometric and Whittaker functions. Physical and analytical properties of GTOs are discussed and some numerical results are given in a table. Finally, we compare our mathematical results with other known literature results by using a computer program, and details of the evaluation are presented.
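The expansion named in the title is standard and worth stating (in the usual notation, with j_l the spherical Bessel functions):

\[ e^{i\mathbf{k}\cdot\mathbf{r}} = 4\pi \sum_{l=0}^{\infty} \sum_{m=-l}^{l} i^{l}\, j_{l}(kr)\, Y_{lm}^{*}(\theta_k,\varphi_k)\, Y_{lm}(\theta,\varphi) . \]

Inserting it into the Fourier integral of a GTO separates the angular part analytically and leaves a one-dimensional radial integral over a Gaussian, which is the step that produces the Laguerre, hypergeometric and Whittaker forms mentioned above.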
Implementation of Kane's Method for a Spacecraft Composed of Multiple Rigid Bodies
NASA Technical Reports Server (NTRS)
Stoneking, Eric T.
2013-01-01
Equations of motion are derived for a general spacecraft composed of rigid bodies connected via rotary (spherical or gimballed) joints in a tree topology. Several supporting concepts are developed in depth. Basis dyads aid in the transition from basis-free vector equations to component-wise equations. Joint partials allow abstraction of 1-DOF, 2-DOF, 3-DOF gimballed and spherical rotational joints to a common notation. The basic building block consisting of an "inner" body and an "outer" body connected by a joint enables efficient organization of arbitrary tree structures. Kane's equation is recast in a form which facilitates systematic assembly of large systems of equations, and exposes a relationship of Kane's equation to Newton and Euler's equations which is obscured by the usual presentation. The resulting system of dynamic equations is of minimum dimension, and is suitable for numerical solution by computer. Implementation is discussed, and illustrative simulation results are presented.
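For reference, Kane's equation for a system described by generalized speeds u_r can be written compactly as (standard form, not necessarily the paper's notation)

\[ F_r + F_r^{*} = 0, \qquad r = 1, \dots, n, \]

with generalized active and inertia forces summed over the bodies i,

\[ F_r = \sum_i \left( \frac{\partial \mathbf{v}_i}{\partial u_r} \cdot \mathbf{F}_i + \frac{\partial \boldsymbol{\omega}_i}{\partial u_r} \cdot \mathbf{T}_i \right), \qquad F_r^{*} = -\sum_i \left( \frac{\partial \mathbf{v}_i}{\partial u_r} \cdot m_i \mathbf{a}_i + \frac{\partial \boldsymbol{\omega}_i}{\partial u_r} \cdot \left( \mathbf{I}_i \dot{\boldsymbol{\omega}}_i + \boldsymbol{\omega}_i \times \mathbf{I}_i \boldsymbol{\omega}_i \right) \right). \]

The partial velocities ∂v_i/∂u_r and partial angular velocities ∂ω_i/∂u_r are exactly what the joint partials of the paper assemble systematically over the tree.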
NASA Technical Reports Server (NTRS)
King, H. F.; Komornicki, A.
1986-01-01
Formulas are presented relating Taylor series expansion coefficients of three functions of several variables: the energy of the trial wave function (W), the energy computed using the optimized variational wave function (E), and the response function (lambda), under certain conditions. Partial derivatives of lambda are obtained through solution of a recursive system of linear equations, and solution through order n yields derivatives of E through order 2n + 1, extending Pulay's application of Wigner's 2n + 1 rule to partial derivatives in coupled perturbation theory. An examination of numerical accuracy shows that the usual two-term second derivative formula is less stable than an alternative four-term formula, and that previous claims that energy derivatives are stationary properties of the wave function are fallacious. The results have application to quantum theoretical methods for the computation of derivative properties such as infrared frequencies and intensities.
NASA Astrophysics Data System (ADS)
Gao, Tinghong; Li, Yidan; Xie, Quan; Tian, Zean; Chen, Qian; Liang, Yongchao; Ren, Lei; Hu, Xuechen
2018-01-01
The growth of GaN crystals at different pressures was studied by molecular dynamics simulation employing the Stillinger-Weber potential, and their structural properties and defects were characterized using the radial distribution function, the Voronoi polyhedron index method, and a suitable visualization technology. Crystal structures formed at 0, 1, 5, 10, and 20 GPa featured an overwhelming number of <4 0 0 0> Voronoi polyhedra, whereas amorphous structures comprising numerous disordered polyhedra were produced at 50 GPa. During quenching, coherent twin boundaries were easily formed between zinc-blende and wurtzite crystal structures in GaN. Notably, point defects usually appeared at low pressure, whereas dislocations were observed at high pressure, since the simultaneous growth of two crystal grains with different crystal orientations and their boundary expansion was hindered in the latter case, resulting in the formation of a dislocation between these grains.
Pareto-front shape in multiobservable quantum control
NASA Astrophysics Data System (ADS)
Sun, Qiuyang; Wu, Re-Bing; Rabitz, Herschel
2017-03-01
Many scenarios in the sciences and engineering require simultaneous optimization of multiple objective functions, which are usually conflicting or competing. In such problems the Pareto front, where none of the individual objectives can be further improved without degrading some others, shows the tradeoff relations between the competing objectives. This paper analyzes the Pareto-front shape for the problem of quantum multiobservable control, i.e., optimizing the expectation values of multiple observables in the same quantum system. Analytic and numerical results demonstrate that with two commuting observables the Pareto front is a convex polygon consisting of flat segments only, while with noncommuting observables the Pareto front includes convexly curved segments. We also assess the capability of a weighted-sum method to continuously capture the points along the Pareto front. Illustrative examples with realistic physical conditions are presented, including NMR control experiments on a 1H-13C two-spin system with two commuting or noncommuting observables.
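The weighted-sum method assessed in the paper is easy to illustrate. Below is a minimal sketch for two competing objectives (generic quadratics standing in for the quantum control landscape; weights and starting point are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Two competing objectives with different optima: f1 is minimized at (1, 0),
# f2 at (0, 1). Their tradeoff curve is the Pareto front.
def f1(x): return (x[0] - 1.0) ** 2 + x[1] ** 2
def f2(x): return x[0] ** 2 + (x[1] - 1.0) ** 2

front = []
for w in np.linspace(0.0, 1.0, 21):
    # Weighted-sum scalarization: each weight w yields (at most) one
    # Pareto-optimal point; sweeping w attempts to trace the whole front.
    res = minimize(lambda x: w * f1(x) + (1.0 - w) * f2(x), x0=[0.5, 0.5])
    front.append((f1(res.x), f2(res.x)))

for p in front:
    print("f1 = %.4f, f2 = %.4f" % p)
# Caveat mirrored in the paper: weighted sums recover every point only on
# convex portions of the front; concave segments are skipped.
```

This caveat is exactly why the front's shape matters: the paper's finding that commuting observables give a polygonal (convex) front implies the weighted-sum scan captures only its vertices and flat segments.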
Segmenting human from photo images based on a coarse-to-fine scheme.
Lu, Huchuan; Fang, Guoliang; Shao, Xinqing; Li, Xuelong
2012-06-01
Human segmentation in photo images is a challenging and important problem that finds numerous applications ranging from album making and photo classification to image retrieval. Previous works on human segmentation usually demand a time-consuming training phase for complex shape-matching processes. In this paper, we propose a straightforward framework to automatically recover human bodies from color photos. Employing a coarse-to-fine strategy, we first detect a coarse torso (CT) using the multicue CT detection algorithm and then extract the accurate region of the upper body. Then, an iterative multiple oblique histogram algorithm is presented to accurately recover the lower body based on human kinematics. The performance of our algorithm is evaluated on our own data set (contains 197 images with human body region ground truth data), VOC 2006, and the 2010 data set. Experimental results demonstrate the merits of the proposed method in segmenting a person with various poses.
Expressions for the precession quantities based upon the IAU (1976) system of astronomical constants
NASA Technical Reports Server (NTRS)
Lieske, J. H.; Lederle, T.; Fricke, W.; Morando, B.
1977-01-01
The structure of the expressions usually employed in calculating the effects of precession is examined, and a method is outlined for revising the expressions to account for changes in the fundamental astronomical constants. It is shown that the basic set of parameters, upon which depend the lengthy polynomials for computing the mean obliquity of date and the elements of the precession matrix, consists of the mean obliquity, the speed of general precession in longitude at a fixed epoch, and the system of planetary masses. Special attention is given to the motion of the ecliptic pole, formulations for a basic epoch as well as an arbitrary epoch, and ecliptic motion relative to the basic epoch. Numerical precession quantities at epoch J2000.0 (JED 2451545.0) are presented which result from the revision of astronomical constants adopted at the XVI General Assembly of the IAU.
Metal hierarchical patterning by direct nanoimprint lithography
Radha, Boya; Lim, Su Hui; Saifullah, Mohammad S. M.; Kulkarni, Giridhar U.
2013-01-01
Three-dimensional hierarchical patterning of metals is of paramount importance in diverse fields involving photonics, controlling surface wettability and wearable electronics. Conventionally, this type of structuring is tedious and usually involves layer-by-layer lithographic patterning. Here, we describe a simple process of direct nanoimprint lithography using palladium benzylthiolate, a versatile metal-organic ink, which not only leads to the formation of hierarchical patterns but also is amenable to layer-by-layer stacking of the metal over large areas. The key to achieving such multi-faceted patterning is hysteretic melting of ink, enabling its shaping. It undergoes transformation to metallic palladium under gentle thermal conditions without affecting the integrity of the hierarchical patterns on micro- as well as nanoscale. A metallic rice leaf structure showing anisotropic wetting behavior and woodpile-like structures were thus fabricated. Furthermore, this method is extendable for transferring imprinted structures to a flexible substrate to make them robust enough to sustain numerous bending cycles. PMID:23446801
Local multiplicative Schwarz algorithms for convection-diffusion equations
NASA Technical Reports Server (NTRS)
Cai, Xiao-Chuan; Sarkis, Marcus
1995-01-01
We develop a new class of overlapping Schwarz type algorithms for solving scalar convection-diffusion equations discretized by finite element or finite difference methods. The preconditioners consist of two components, namely, the usual two-level additive Schwarz preconditioner and the sum of some quadratic terms constructed by using products of ordered neighboring subdomain preconditioners. The ordering of the subdomain preconditioners is determined by considering the direction of the flow. We prove that the algorithms are optimal in the sense that the convergence rates are independent of the mesh size, as well as the number of subdomains. We show by numerical examples that the new algorithms are less sensitive to the direction of the flow than the classical multiplicative Schwarz algorithms, and converge faster than the additive Schwarz algorithms. Thus, the new algorithms are more suitable for fluid flow applications than the classical additive or multiplicative Schwarz algorithms.
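The basic building blocks being compared can be sketched in a few lines. Below is a minimal, illustrative one-level additive versus multiplicative Schwarz iteration for a 1D convection-diffusion model problem; the paper's preconditioners are more elaborate two-level operators with flow-ordered quadratic terms, and the discretization and parameters here are assumptions.

```python
import numpy as np

# 1D convection-diffusion  -u'' + b u' = f  on (0,1), u(0)=u(1)=0,
# central differences on n interior points.
n, b = 99, 10.0
h = 1.0 / (n + 1)
main = 2.0 / h**2 * np.ones(n)
off_lo = (-1.0 / h**2 - b / (2 * h)) * np.ones(n - 1)   # sub-diagonal
off_hi = (-1.0 / h**2 + b / (2 * h)) * np.ones(n - 1)   # super-diagonal
A = np.diag(main) + np.diag(off_lo, -1) + np.diag(off_hi, 1)
f = np.ones(n)

# Two overlapping subdomains (index sets), ordered with the flow (b > 0).
doms = [np.arange(0, 60), np.arange(40, n)]

def schwarz(multiplicative, iters=50):
    x = np.zeros(n)
    for _ in range(iters):
        if multiplicative:
            for d in doms:                     # sequential local solves
                r = f - A @ x
                x[d] += np.linalg.solve(A[np.ix_(d, d)], r[d])
        else:
            r = f - A @ x                      # one residual, summed corrections
            dx = np.zeros(n)
            for d in doms:
                dx[d] += np.linalg.solve(A[np.ix_(d, d)], r[d])
            x += 0.5 * dx                      # damping stabilizes the overlap
    return np.linalg.norm(f - A @ x)

print("multiplicative Schwarz residual:", schwarz(True))
print("additive Schwarz residual:", schwarz(False))
```

The flow-aware ordering of `doms` mirrors, in miniature, the paper's idea of ordering subdomain solves along the convection direction.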
The role of water content in triboelectric charging of wind-blown sand
Gu, Zhaolin; Wei, Wei; Su, Junwei; Yu, Chuck Wah
2013-01-01
Triboelectric charging is common in desert sandstorms and dust devils on Earth; however, it remains poorly understood. Here we show a charging mechanism for wind-blown sand based on water adsorbed on the micro-porous surfaces of sand grains, starting from the fact that water content is a universal but usually minor component of most particle systems. Triboelectric charging can result from the different mobility of H+/OH− between contacting sands with a temperature difference. Computational fluid dynamics (CFD) and the discrete element method (DEM) were used to demonstrate the dynamics of the sand charging. The numerically simulated charge-to-mass ratios of sands and the electric field strength established in a wind tunnel agreed well with the experimental data. The charging mechanism could provide an explanation for the charging process of all identical granular systems with water content, including Martian dust devils, wind-blown snow, and even powder electrification in industrial processes. PMID:23434920
NASA Technical Reports Server (NTRS)
Trivedi, K. S.; Geist, R. M.
1981-01-01
The CARE 3 reliability model for aircraft avionics and control systems is described by utilizing a number of examples which frequently use state-of-the-art mathematical modeling techniques as a basis for their exposition. Behavioral decomposition followed by aggregation was used in an attempt to deal with reliability models with a large number of states. A comprehensive set of models of the fault-handling processes in a typical fault-tolerant system was used. These models were semi-Markov in nature, thus removing the usual restriction of exponential holding times within the coverage model. The aggregate model is a non-homogeneous Markov chain, thus allowing the times to failure to possess Weibull-like distributions. Because of these departures from traditional models, the solution method employed is that of Kolmogorov integral equations, which are evaluated numerically.
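The aggregate-model idea can be sketched numerically. Below is a minimal, hypothetical three-state non-homogeneous chain with a Weibull-like fault arrival, integrated as forward Kolmogorov ODEs; this is a simplification of the integral-equation treatment in CARE 3, and the states, rates and coverage value are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# States: 0 = full-up, 1 = degraded (fault covered, system reconfigured),
# 2 = failed (fault not covered). Both fault outcomes are absorbing here
# for brevity. The Weibull-like hazard makes the chain non-homogeneous.
beta, eta = 1.5, 1000.0           # Weibull shape/scale in hours (illustrative)
coverage = 0.95                   # probability a fault is handled successfully

def hazard(t):
    return (beta / eta) * (t / eta) ** (beta - 1.0) if t > 0 else 0.0

def kolmogorov(t, p):
    lam = hazard(t)
    # Forward equations dp/dt = p Q(t) for the 3-state chain.
    return [-lam * p[0],
            coverage * lam * p[0],
            (1.0 - coverage) * lam * p[0]]

sol = solve_ivp(kolmogorov, (0.0, 5000.0), [1.0, 0.0, 0.0], rtol=1e-8,
                t_eval=[1000.0, 5000.0])
for t, p in zip(sol.t, sol.y.T):
    print(f"t={t:6.0f} h  P(full-up)={p[0]:.4f}  "
          f"P(degraded)={p[1]:.4f}  P(failed)={p[2]:.4f}")
```

A time-dependent hazard like this is exactly what an exponential (homogeneous Markov) model cannot represent, which is the motivation for the non-homogeneous aggregate model in the abstract.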
NASA Astrophysics Data System (ADS)
Bragov, Anatoly; Konstantinov, Alexander; Lomunov, Andrey; Sadyrin, Anatoly; Sergeichev, Ivan; Kruszka, Leopold
High-porosity materials, such as chamotte and mullite, possess a heat of fusion. Owing to their properties, these materials can be used successfully as damping materials in containers for transporting radioactive or highly toxic materials by airplane, automobile, and other means. Experimental studies of the dynamic properties have been carried out using original modifications of the Kolsky method. These modified experiments have allowed studying the dynamic compressibility of high-porosity chamotte at deformations up to 80% and amplitudes up to 50 MPa. The equations of a mathematical model describing shock compaction of chamotte as a highly porous, fragile, collapsing material are presented. Deformation of high-porosity materials under non-stationary loading is usually accompanied by brittle destruction of the interpore partitions, as observed in other porous ceramic materials. Comparison of numerical and experimental results has shown good agreement.
Efficient micromagnetic modelling of spin-transfer torque and spin-orbit torque
NASA Astrophysics Data System (ADS)
Abert, Claas; Bruckner, Florian; Vogler, Christoph; Suess, Dieter
2018-05-01
While the spin-diffusion model is considered one of the most complete and accurate tools for the description of spin transport and spin torque, its solution in the context of dynamical micromagnetic simulations is numerically expensive. We propose a procedure to retrieve the free parameters of a simple macro-spin-like spin-torque model through the spin-diffusion model. In the case of spin-transfer torque, the simplified model complies with the model of Slonczewski. A similar model can be established for the description of spin-orbit torque. In both cases the spin-diffusion model enables the retrieval of the free model parameters from the geometry and the material parameters of the system. Since these parameters usually have to be determined phenomenologically through experiments, the proposed method combines the strength of the diffusion model to resolve material parameters and geometry with the high performance of simple torque models.
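In the spin-transfer case, the macro-spin target model is the familiar Slonczewski damping-like torque, which in one common convention reads

\[ \boldsymbol{\tau}_{\mathrm{ST}} = -\frac{\gamma \hbar\, j_e P}{2 e M_s t_{\mathrm{f}}}\; \mathbf{m} \times \left( \mathbf{m} \times \mathbf{p} \right), \]

with current density j_e, polarization P, saturation magnetization M_s, free-layer thickness t_f and fixed-layer direction p (prefactor conventions vary across the literature, and a field-like term proportional to m × p may be added). The point of the procedure is that effective parameters such as P are computed from the spin-diffusion solution instead of being fitted to experiment.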
NASA Astrophysics Data System (ADS)
Bayanov, V. I.; Vinokurov, G. N.; Zhulin, V. I.; Yashin, V. E.
1989-02-01
A numerical calculation is reported of an inversion conservation coefficient of cylindrical rod solid-state amplifiers with the active element diameter from 1.5 to 15 cm operated under continuous pumping conditions. It is shown that the ultimate gain, limited only by superluminescence, exceeds considerably the value usually obtained in experiments. Various methods of eliminating parasitic effects, which limit the gain of real amplifiers, are considered. The degree of influence of these effects on the inversion conservation coefficient is discussed. The results are given of an experimental determination of the gain close to the ultimate value (0.18 cm⁻¹ for an active element 3 cm in diameter). Calculations are reported of the angular distributions of superluminescence and parasitic modes demonstrating that the latter can be suppressed by spatial filtering.
3D Gabor wavelet based vessel filtering of photoacoustic images.
Haq, Israr Ul; Nagoaka, Ryo; Makino, Takahiro; Tabata, Takuya; Saijo, Yoshifumi
2016-08-01
Filtering and segmentation of vasculature is an important issue in medical imaging. The visualization of vasculature is crucial for early diagnosis and therapy in numerous medical applications. This paper investigates the use of the Gabor wavelet to enhance the vasculature while eliminating the noise due to the size, sensitivity and aperture of the detector in 3D Optical Resolution Photoacoustic Microscopy (OR-PAM). A detailed multi-scale analysis of wavelet filtering and a Hessian based method is given for extracting vessels of different sizes, since blood vessels usually vary within a range of radii. The proposed algorithm first enhances the vasculature in the image, and then tubular structures are classified by eigenvalue decomposition of the local Hessian matrix at each voxel in the image. The algorithm is tested on non-invasive experiments, which show appreciable results in enhancing vasculature in photoacoustic images.
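The Hessian/eigenvalue stage is the easiest part to make concrete. Below is a minimal single-scale, Frangi-style vesselness sketch; the Gabor-wavelet enhancement stage is omitted, and all parameter values are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_3d(vol, sigma=2.0, alpha=0.5, beta=0.5, c=0.25):
    """Single-scale Frangi-style tubular filter on a 3D volume."""
    # Second derivatives via Gaussian smoothing (scale-space Hessian).
    H = np.empty(vol.shape + (3, 3))
    orders = [(2,0,0), (1,1,0), (1,0,1), (0,2,0), (0,1,1), (0,0,2)]
    idx = [(0,0), (0,1), (0,2), (1,1), (1,2), (2,2)]
    for o, (i, j) in zip(orders, idx):
        d = sigma**2 * gaussian_filter(vol, sigma, order=o)  # gamma-normalized
        H[..., i, j] = d
        H[..., j, i] = d
    # Eigenvalues sorted by magnitude: tubes have |l1| small, |l2| ~ |l3| large.
    lam = np.linalg.eigvalsh(H)
    order = np.argsort(np.abs(lam), axis=-1)
    lam = np.take_along_axis(lam, order, axis=-1)
    l1, l2, l3 = lam[..., 0], lam[..., 1], lam[..., 2]
    eps = 1e-10
    Ra = np.abs(l2) / (np.abs(l3) + eps)                # plate vs line
    Rb = np.abs(l1) / np.sqrt(np.abs(l2 * l3) + eps)    # blob vs line
    S = np.sqrt(l1**2 + l2**2 + l3**2)                  # structureness
    v = (1 - np.exp(-Ra**2 / (2 * alpha**2))) * \
        np.exp(-Rb**2 / (2 * beta**2)) * \
        (1 - np.exp(-S**2 / (2 * c**2)))
    v[(l2 > 0) | (l3 > 0)] = 0.0        # keep bright vessels on dark background
    return v

# Toy usage: a synthetic bright tube along the first axis.
vol = np.zeros((32, 32, 32))
vol[:, 16, 16] = 1.0
vol = gaussian_filter(vol, 1.5)
print("max vesselness:", vesselness_3d(vol).max())
```

A multi-scale version, as the abstract describes, simply takes the maximum response over a set of sigma values spanning the expected vessel radii.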
Guillain-Barré syndrome: causes, immunopathogenic mechanisms and treatment.
Jasti, Anil K; Selmi, Carlo; Sarmiento-Monroy, Juan C; Vega, Daniel A; Anaya, Juan-Manuel; Gershwin, M Eric
2016-11-01
Guillain-Barré syndrome is a rare disease representing the most frequent cause of acute flaccid symmetrical weakness of the limbs and areflexia, usually reaching its peak within a month. The etiology and pathogenesis remain largely enigmatic, and the syndrome results in death or severe disability in 9-17% of cases despite immunotherapy. Areas covered: In terms of etiology, Guillain-Barré syndrome is linked to Campylobacter infection, but less than 0.1% of infections result in the syndrome. In terms of pathogenesis, activated macrophages and T cells and serum antibodies against gangliosides are observed, but their significance is unclear. Expert commentary: Guillain-Barré syndrome is a heterogeneous condition with numerous subtypes, and recent data obtained by immunohistochemical methods point towards a role for specific ganglioside epitopes. Ultimately, the syndrome results from a permissive genetic background on which environmental factors, including infections, vaccination and the influence of aging, lead to disease.
Rakhshan, Vahid
2015-01-01
Congenitally missing teeth (CMT), or, as it is usually called, hypodontia, is a highly prevalent and costly dental anomaly. Besides an unfavorable appearance, patients with missing teeth may suffer from malocclusion, periodontal damage, insufficient alveolar bone growth, reduced chewing ability, inarticulate pronunciation and other problems. Treatment is usually expensive and multidisciplinary. This highly frequent and yet expensive anomaly is of interest to numerous clinical, basic science and public health fields such as orthodontics, pediatric dentistry, prosthodontics, periodontics, maxillofacial surgery, anatomy, anthropology and even the insurance industry. This essay reviews the findings on the etiology, prevalence, risk factors, occurrence patterns, skeletal changes and treatments of congenitally missing teeth. It seems that CMT usually appears in females and in the permanent dentition. It is not conclusive whether it tends to occur more in the maxilla or the mandible, or in the anterior versus posterior segments. It can be accompanied by various complications and should be attended to by expert teams as soon as possible. PMID:25709668
Aircraft directional stability and vertical tail design: A review of semi-empirical methods
NASA Astrophysics Data System (ADS)
Ciliberti, Danilo; Della Vecchia, Pierluigi; Nicolosi, Fabrizio; De Marco, Agostino
2017-11-01
Aircraft directional stability and control are related to vertical tail design. The safety, performance, and flight qualities of an aircraft also depend on a correct empennage sizing. Specifically, the vertical tail is responsible for the aircraft yaw stability and control. If these characteristics are not well balanced, the entire aircraft design may fail. Stability and control are often evaluated, especially in the preliminary design phase, with semi-empirical methods, which are based on the results of experimental investigations performed in the past decades, and occasionally are merged with data provided by theoretical assumptions. This paper reviews the standard semi-empirical methods usually applied in the estimation of airplane directional stability derivatives in preliminary design, highlighting the advantages and drawbacks of these approaches that were developed from wind tunnel tests performed mainly on fighter airplane configurations of the first decades of the past century, and discussing their applicability on current transport aircraft configurations. Recent investigations made by the authors have shown the limit of these methods, proving the existence of aerodynamic interference effects in sideslip conditions which are not adequately considered in classical formulations. The article continues with a concise review of the numerical methods for aerodynamics and their applicability in aircraft design, highlighting how Reynolds-Averaged Navier-Stokes (RANS) solvers are well-suited to attain reliable results in attached flow conditions, with reasonable computational times. From the results of RANS simulations on a modular model of a representative regional turboprop airplane layout, the authors have developed a modern method to evaluate the vertical tail and fuselage contributions to aircraft directional stability. The investigation on the modular model has permitted an effective analysis of the aerodynamic interference effects by moving, changing, and expanding the available airplane components. Wind tunnel tests over a wide range of airplane configurations have been used to validate the numerical approach. The comparison between the proposed method and the standard semi-empirical methods available in literature proves the reliability of the innovative approach, according to the available experimental data collected in the wind tunnel test campaign.
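The classical estimate that these semi-empirical methods refine is, in one typical textbook form,

\[ C_{N_{\beta,\mathrm{v}}} = C_{L_{\alpha,\mathrm{v}}} \left( 1 - \frac{d\sigma}{d\beta} \right) \eta_{\mathrm{v}}\, \frac{S_{\mathrm{v}} l_{\mathrm{v}}}{S b} , \]

with the vertical-tail lift-curve slope C_{L_α,v}, sidewash gradient dσ/dβ (written with the opposite sign in some references, depending on the convention for σ), dynamic-pressure ratio η_v, and tail volume ratio S_v l_v / (S b). The aerodynamic interference effects discussed above enter precisely through C_{L_α,v}, σ and η_v, which is why formulations calibrated on old fighter configurations can misestimate them for modern transport layouts.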
The ozone depletion potentials of halocarbons: their dependence on calculation assumptions
NASA Technical Reports Server (NTRS)
Karol, Igor L.; Kiselev, Andrey A.
1994-01-01
The concept of Ozone Depletion Potential (ODP) is widely used in the evaluation of numerous halocarbons and of the effects of their replacements on ozone, but the methods, assumptions and conditions used in ODP calculations have not been analyzed adequately. In this paper a model study of the effects on ozone of instantaneous releases of various amounts of CH3CCl3 and of CHF2Cl (HCFC-22), for several compositions of the background atmosphere, is presented, aimed at understanding the connection of ODP values with the assumptions used in their calculation. To facilitate the computation of ODPs in numerous versions over the long time periods after release, these rather short-lived gases and a one-dimensional radiative-photochemical model of the global annually averaged atmospheric layer up to 50 km height are used. Increasing the released global gas mass from 1 Mt to 1 Gt increases the ODP value, which stabilizes close to the upper bound of this range in the contemporary atmosphere. The same variations are analyzed for conditions of the CFC-free atmosphere of the 1960s and for the anthropogenically loaded atmosphere of the 21st century according to the known IPCC 'business as usual' scenario. Recommendations for proper ways of calculating ODPs are proposed for practically important cases.
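The working definition underlying all such calculations is the ozone loss per unit mass emitted, referenced to CFC-11:

\[ \mathrm{ODP}_x = \frac{\Delta O_3 \left[ \text{unit mass emission of gas } x \right]}{\Delta O_3 \left[ \text{unit mass emission of CFC-11} \right]} . \]

The paper's point is that the numerator is not a fixed property of the gas: it depends on the released amount and on the assumed state of the background atmosphere.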
Progress Toward an Efficient and General CFD Tool for Propulsion Design/Analysis
NASA Technical Reports Server (NTRS)
Cox, C. F.; Cinnella, P.; Westmoreland, S.
1996-01-01
The simulation of propulsive flows inherently involves chemical activity. Recent years have seen substantial strides made in the development of numerical schemes for reacting flowfields, in particular those involving finite-rate chemistry. However, finite-rate calculations are computationally intensive and require knowledge of the actual kinetics, which are not always known with sufficient accuracy. Alternatively, flow simulations based on the assumption of local chemical equilibrium are capable of obtaining physically reasonable results at far less computational cost. The present study summarizes the development of efficient numerical techniques for the simulation of flows in local chemical equilibrium, whereby a 'black box' chemical equilibrium solver is coupled to the usual gasdynamic equations. The generality of the methods enables the modelling of any arbitrary mixture of thermally perfect gases, including air, combustion mixtures and plasmas. As a demonstration of the potential of the methodologies, several solutions involving reacting and perfect gas flows are presented, including a preliminary simulation of the SSME startup transient. Future enhancements to the proposed techniques are discussed, including more efficient finite-rate and hybrid (partial equilibrium) schemes. The algorithms that have been developed and are being optimized provide an efficient and general tool for the design and analysis of propulsion systems.
NASA Astrophysics Data System (ADS)
Yang, Yong; Li, Chengshan
2017-10-01
The effect of minor loop size on the magnetic stiffness has received little attention in experimental and theoretical studies of high temperature superconductor (HTS) magnetic levitation systems. In this work, we numerically investigate the average magnetic stiffness obtained with minor loop traverses Δz (or Δx) varying from 0.1 mm to 2 mm in the zero-field-cooling and field-cooling regimes, respectively. Approximate values of the magnetic stiffness at zero traverse are obtained by linear extrapolation. Compared with the average magnetic stiffness obtained from any given minor loop traverse, these approximate values are not always close to the average magnetic stiffness produced by the smallest minor loops. The relative deviation ranges of the average magnetic stiffness obtained with the usual minor loop traverses (1 or 2 mm) are presented as ratios of the approximate values to the average stiffness, for different moving processes and two typical cooling conditions. The results show that most of the average magnetic stiffness values are strongly influenced by the size of the minor loop, which indicates that the magnetic stiffness obtained from a single minor loop traverse Δz or Δx of, for example, 1 or 2 mm can deviate substantially.
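The extrapolation step is straightforward to sketch. Below is a minimal illustration with synthetic force-displacement minor loops; in the paper the force curves come from the levitation model, and the stiffness value and nonlinearity used here are assumptions.

```python
import numpy as np

def average_stiffness(z, F):
    """Average magnetic stiffness over one minor loop: k = -dF/dz,
    fitted linearly over the loop traverse."""
    slope = np.polyfit(z, F, 1)[0]
    return -slope

# Synthetic minor loops of decreasing traverse around a working point z0.
z0, k_true = 5.0, 8.0                       # mm, N/mm (illustrative)
traverses = np.array([2.0, 1.0, 0.5, 0.2, 0.1])
k_avg = []
for dz in traverses:
    z = np.linspace(z0, z0 + dz, 50)
    F = -k_true * (z - z0) - 0.8 * (z - z0) ** 2   # mildly nonlinear force curve
    k_avg.append(average_stiffness(z, F))
k_avg = np.array(k_avg)

# Linear extrapolation of k(traverse) to zero traverse, as in the paper.
p = np.polyfit(traverses, k_avg, 1)
print("stiffness per traverse:", dict(zip(traverses, np.round(k_avg, 3))))
print("extrapolated zero-traverse stiffness:", round(np.polyval(p, 0.0), 3))
```

With the quadratic term included, each finite traverse overestimates the local stiffness, and the extrapolated value recovers k_true; this is the same mechanism by which 1-2 mm traverses bias the reported stiffness in the abstract.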
NASA Astrophysics Data System (ADS)
Valdes-Parada, F. J.; Ostvar, S.; Wood, B. D.; Miller, C. T.
2017-12-01
Modeling of hierarchical systems such as porous media can be performed by different approaches that bridge microscale physics to the macroscale. Among the several alternatives available in the literature, the thermodynamically constrained averaging theory (TCAT) has emerged as a robust modeling approach that provides macroscale models that are consistent across scales. For specific closure relation forms, TCAT models are expressed in terms of parameters that depend upon the physical system under study. These parameters are usually obtained from inverse modeling based upon either experimental data or direct numerical simulation at the pore scale. Other upscaling approaches, such as the method of volume averaging, involve an a priori scheme for parameter estimation for certain microscale and transport conditions. In this work, we show how such a predictive scheme can be implemented in TCAT by studying the simple problem of single-phase passive diffusion in rigid and homogeneous porous media. The components of the effective diffusivity tensor are predicted for several porous media by solving ancillary boundary-value problems in periodic unit cells. The results are validated through a comparison with data from direct numerical simulation. This extension of TCAT constitutes a useful advance for certain classes of problems amenable to this estimation approach.
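For the passive-diffusion case, the a priori scheme amounts to the familiar closure problem of the volume-averaging literature (a sketch of the standard formulation, not the paper's exact statement; D is the molecular diffusivity, ε the porosity, n the unit normal on the fluid-solid interface A_fs, and V_f the fluid volume of the cell):

\[ \nabla^2 \mathbf{b} = 0 \ \text{in the fluid phase}, \qquad -\mathbf{n} \cdot \nabla \mathbf{b} = \mathbf{n} \ \text{on } A_{fs}, \qquad \mathbf{b} \ \text{periodic on the unit cell}, \]

after which the effective diffusivity follows from

\[ \mathbf{D}_{\mathrm{eff}} = \varepsilon D \left( \mathbf{I} + \frac{1}{V_f} \int_{A_{fs}} \mathbf{n}\,\mathbf{b} \; dA \right) . \]

Solving this ancillary boundary-value problem on each periodic unit cell is what replaces the inverse-modeling step in the conventional TCAT parameter estimation.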
Franck, D; de Carlan, L; Pierrat, N; Broggio, D; Lamart, S
2007-01-01
Although great efforts have been made to improve the physical phantoms used to calibrate in vivo measurement systems, these phantoms represent a single average counting geometry and usually contain a uniform distribution of the radionuclide over the tissue substitute. As a matter of fact, significant corrections must be made to phantom-based calibration factors in order to obtain absolute calibration efficiencies applicable to a given individual. These corrections are particularly crucial for in vivo measurements of low energy photons emitted by radionuclides, such as actinides, deposited in the lung. Thus, it was desirable to develop a method for calibrating in vivo measurement systems that is more sensitive to these types of variability. Previous works have demonstrated the possibility of such a calibration using the Monte Carlo technique. Our research programme extended these investigations to the reconstruction of numerical anthropomorphic phantoms based on personal physiological data obtained by computed tomography. New procedures, based on a new graphical user interface (GUI) for the development of computational phantoms for Monte Carlo calculations and data analysis, are being developed to take advantage of recent progress in image-processing codes. This paper presents the principal features of this new GUI. Results of calculations and comparisons with experimental data are also presented and discussed.