An efficient numerical procedure for thermohydrodynamic analysis of cavitating bearings
NASA Technical Reports Server (NTRS)
Vijayaraghavan, D.
1995-01-01
An efficient and accurate numerical procedure to determine the thermo-hydrodynamic performance of cavitating bearings is described. This procedure is based on the earlier development of Elrod for lubricating films, in which the properties across the film thickness are determined at Lobatto points and their distributions are expressed by collocated polynomials. The cavitated regions and their boundaries are rigorously treated. Thermal boundary conditions at the surfaces, including heat dissipation through the metal to the ambient, are incorporated. Numerical examples are presented comparing the predictions using this procedure with earlier theoretical predictions and experimental data. With a few points across the film thickness and across the journal and the bearing in the radial direction, the temperature profile is very well predicted.
NASA Astrophysics Data System (ADS)
Simoni, L.; Secchi, S.; Schrefler, B. A.
2008-12-01
This paper analyses the numerical difficulties commonly encountered in solving fully coupled numerical models and proposes a numerical strategy apt to overcome them. The proposed procedure is based on space refinement and time adaptivity. The latter, which is mainly studied here, is based on the use of a finite element approach in the space domain and a Discontinuous Galerkin approximation within each time span. Error measures are defined for the jump of the solution at each time station. These constitute the parameters allowing for the time adaptivity. Some care is, however, needed for a useful definition of the jump measures. Numerical tests are presented, firstly to demonstrate the advantages and shortcomings of the method over the more traditional use of finite differences in time, then to assess the efficiency of the proposed procedure for adapting the time step. The proposed method proves efficient and simple for adapting the time step in the solution of coupled field problems.
Efficient and robust relaxation procedures for multi-component mixtures including phase transition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Ee, E-mail: eehan@math.uni-bremen.de; Hantke, Maren, E-mail: maren.hantke@ovgu.de; Müller, Siegfried, E-mail: mueller@igpm.rwth-aachen.de
We consider a thermodynamically consistent multi-component model in multiple dimensions that is a generalization of the classical two-phase flow model of Baer and Nunziato. The exchange of mass, momentum and energy between the phases is described by additional source terms. Typically these terms are handled by relaxation procedures. Available relaxation procedures suffer from a lack of efficiency and robustness, resulting in very costly computations that in general only allow for one-dimensional calculations. Therefore we focus on the development of new efficient and robust numerical methods for relaxation processes. We derive exact procedures to determine mechanical and thermal equilibrium states. Further, we introduce a novel iterative method to treat the mass transfer for a three-component mixture. All new procedures can be extended to an arbitrary number of inert ideal gases. We prove existence, uniqueness and physical admissibility of the resulting states and convergence of our new procedures. Efficiency and robustness of the procedures are verified by means of numerical computations in one and two space dimensions. - Highlights: • We develop novel relaxation procedures for a generalized, thermodynamically consistent Baer–Nunziato type model. • Exact procedures for mechanical and thermal relaxation avoid artificial parameters. • Existence, uniqueness and physical admissibility of the equilibrium states are proven for special mixtures. • A novel iterative method for mass transfer is introduced for a three-component mixture, providing a unique and admissible equilibrium state.
Efficient Simulation Budget Allocation for Selecting an Optimal Subset
NASA Technical Reports Server (NTRS)
Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay
2008-01-01
We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
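For readers unfamiliar with the setting, the sketch below illustrates top-m subset selection with a sequential simulation-budget allocation. The boundary-focused allocation heuristic is a simplified stand-in, not the paper's asymptotically optimal OCBA-m allocation.

```python
import numpy as np

# Minimal sketch of the top-m subset-selection setting (not the paper's exact
# OCBA-m allocation).  k designs with unknown means are sampled sequentially;
# extra replications go to the design whose sample mean lies closest to the
# boundary between the m-th and (m+1)-th best, a simple stand-in heuristic.
rng = np.random.default_rng(0)
k, m = 10, 3
true_means = rng.normal(0.0, 1.0, k)       # unknown in practice
noise_sd = 1.0
n0, total_budget, batch = 10, 1000, 10

samples = [list(rng.normal(true_means[i], noise_sd, n0)) for i in range(k)]
spent = k * n0
while spent < total_budget:
    means = np.array([np.mean(s) for s in samples])
    order = np.argsort(-means)                               # descending
    boundary = 0.5 * (means[order[m - 1]] + means[order[m]])
    i = int(np.argmin(np.abs(means - boundary)))             # hardest design
    samples[i].extend(rng.normal(true_means[i], noise_sd, batch))
    spent += batch

means = np.array([np.mean(s) for s in samples])
selected = sorted(np.argsort(-means)[:m])
print("selected top-m:", selected)
print("true top-m:    ", sorted(np.argsort(-true_means)[:m]))
```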
NASA Technical Reports Server (NTRS)
Wie, Yong-Sun
1990-01-01
A procedure for calculating 3-D, compressible laminar boundary layer flow on general fuselage shapes is described. The boundary layer solutions can be obtained in either nonorthogonal 'body oriented' coordinates or orthogonal streamline coordinates. The numerical procedure is 'second order' accurate, efficient and independent of the cross flow velocity direction. Numerical results are presented for several test cases, including a sharp cone, an ellipsoid of revolution, and a general aircraft fuselage at angle of attack. Comparisons are made between numerical results obtained using nonorthogonal curvilinear 'body oriented' coordinates and streamline coordinates.
Solution of quadratic matrix equations for free vibration analysis of structures.
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1973-01-01
An efficient digital computer procedure and the related numerical algorithm are presented herein for the solution of quadratic matrix equations associated with free vibration analysis of structures. Such a procedure enables accurate and economical analysis of natural frequencies and associated modes of discretized structures. The numerically stable algorithm is based on the Sturm sequence method, which fully exploits the banded form of associated stiffness and mass matrices. The related computer program written in FORTRAN V for the JPL UNIVAC 1108 computer proves to be substantially more accurate and economical than other existing procedures of such analysis. Numerical examples are presented for two structures - a cantilever beam and a semicircular arch.
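The core of a Sturm-sequence eigenvalue solver is an inertia (sign) count of K - σM, which brackets natural frequencies by bisection. The sketch below illustrates that idea with dense LDL^T factorizations on small stand-in matrices; the procedure described above instead exploits the banded structure of the stiffness and mass matrices.

```python
import numpy as np
from scipy.linalg import ldl, eigvalsh

def count_eigs_below(K, M, sigma):
    """Number of generalized eigenvalues of (K, M) below sigma, from the
    inertia of K - sigma*M (Sylvester's law of inertia); this is the
    sign-count idea behind Sturm-sequence bisection."""
    _, d, _ = ldl(K - sigma * M)
    return int(np.sum(eigvalsh(d) < 0.0))      # d is block diagonal (1x1/2x2)

def bisect_eig(K, M, index, lo, hi, tol=1e-8):
    """Locate the `index`-th smallest generalized eigenvalue by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_eigs_below(K, M, mid) >= index:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# small stand-in stiffness/mass matrices (a real solver exploits bandedness)
n = 6
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
print(bisect_eig(K, M, 1, 0.0, 4.0), np.min(eigvalsh(K, M)))
```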
The minimal residual QR-factorization algorithm for reliably solving subset regression problems
NASA Technical Reports Server (NTRS)
Verhaegen, M. H.
1987-01-01
A new algorithm to solve subset regression problems is described, called the minimal residual QR factorization algorithm (MRQR). This scheme performs a QR factorization with a new column pivoting strategy. Basically, this strategy is based on the change in the residual of the least squares problem. Furthermore, it is demonstrated that this basic scheme can be extended in a numerically efficient way to combine the advantages of existing numerical procedures, such as the singular value decomposition, with those of more classical statistical procedures, such as stepwise regression. This extension is presented as an advisory expert system that guides the user in solving the subset regression problem. The advantages of the new procedure are highlighted by a numerical example.
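As a point of reference, the sketch below applies ordinary column-pivoted QR to a subset regression problem; the standard pivoting there maximizes the next diagonal of R, whereas the MRQR strategy described above pivots on the decrease of the least-squares residual.

```python
import numpy as np
from scipy.linalg import qr

# Illustrative subset regression: pick a few regressors via column-pivoted QR
# (the baseline strategy, not the residual-based MRQR pivoting itself).
rng = np.random.default_rng(1)
n, p, subset_size = 100, 8, 3
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[[0, 3, 5]] = [2.0, -1.5, 1.0]      # true active columns
y = X @ beta + 0.1 * rng.normal(size=n)

Q, R, piv = qr(X, mode="economic", pivoting=True)           # pivot order = piv
chosen = piv[:subset_size]
coef, res, *_ = np.linalg.lstsq(X[:, chosen], y, rcond=None)
print("selected columns:", chosen, " residual sum of squares:", float(res[0]))
```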
NASA Technical Reports Server (NTRS)
Galvas, M. R.
1972-01-01
A computer program for predicting design point specific speed - efficiency characteristics of centrifugal compressors is presented with instructions for its use. The method permits rapid selection of compressor geometry that yields maximum total efficiency for a particular application. A numerical example is included to demonstrate the selection procedure.
Simple and Efficient Numerical Evaluation of Near-Hypersingular Integrals
NASA Technical Reports Server (NTRS)
Fink, Patrick W.; Wilton, Donald R.; Khayat, Michael A.
2007-01-01
Recently, significant progress has been made in the handling of singular and nearly-singular potential integrals that commonly arise in the Boundary Element Method (BEM). To facilitate object-oriented programming and handling of higher order basis functions, cancellation techniques are favored over techniques involving singularity subtraction. However, gradients of the Newton-type potentials, which produce hypersingular kernels, are also frequently required in BEM formulations. As is the case with the potentials, treatment of the near-hypersingular integrals has proven more challenging than treating the limiting case in which the observation point approaches the surface. Historically, numerical evaluation of these near-hypersingularities has often involved a two-step procedure: a singularity subtraction to reduce the order of the singularity, followed by a boundary contour integral evaluation of the extracted part. Since this evaluation necessarily links basis function, Green's function, and the integration domain (element shape), the approach fits poorly with object-oriented programming concepts. Thus, there is a need for cancellation-type techniques for efficient numerical evaluation of the gradient of the potential. Progress in the development of efficient cancellation-type procedures for the gradient potentials was recently presented. To the extent possible, a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. However, since the gradient kernel involves singularities of different orders, we also require that the transformation leaves remaining terms that are analytic. The terms "normal" and "tangential" are used herein with reference to the source element. Also, since computational formulations often involve the numerical evaluation of both potentials and their gradients, it is highly desirable that a single integration procedure efficiently handles both.
77 FR 2829 - Energy Conservation Program: Test Procedure for Television Sets
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-19
... provisions designed to improve energy efficiency. (All references to EPCA refer to the statute as amended... also provides that the test procedure shall be reasonably designed to produce test results which... facility one is denoted with numerical values, while the data from test facility two is denoted with...
NASA Technical Reports Server (NTRS)
Khayat, Michael A.; Wilton, Donald R.; Fink, Patrick W.
2007-01-01
Simple and efficient numerical procedures using singularity cancellation methods are presented for evaluating singular and near-singular potential integrals. Four different transformations are compared and the advantages of the Radial-angular transform are demonstrated. A method is then described for optimizing this integration scheme.
A hybrid framework for coupling arbitrary summation-by-parts schemes on general meshes
NASA Astrophysics Data System (ADS)
Lundquist, Tomas; Malan, Arnaud; Nordström, Jan
2018-06-01
We develop a general interface procedure to couple both structured and unstructured parts of a hybrid mesh in a non-collocated, multi-block fashion. The target is to gain optimal computational efficiency in fluid dynamics simulations involving complex geometries. While guaranteeing stability, the proposed procedure is optimized for accuracy and requires minimal algorithmic modifications to already existing schemes. Initial numerical investigations confirm considerable efficiency gains compared to non-hybrid calculations of up to an order of magnitude.
An efficient numerical algorithm for transverse impact problems
NASA Technical Reports Server (NTRS)
Sankar, B. V.; Sun, C. T.
1985-01-01
Transverse impact problems in which the elastic and plastic indentation effects are considered, involve a nonlinear integral equation for the contact force, which, in practice, is usually solved by an iterative scheme with small increments in time. In this paper, a numerical method is proposed wherein the iterations of the nonlinear problem are separated from the structural response computations. This makes the numerical procedures much simpler and also efficient. The proposed method is applied to some impact problems for which solutions are available, and they are found to be in good agreement. The effect of the magnitude of time increment on the results is also discussed.
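The sketch below illustrates the general idea on a toy problem: a rigid impactor striking a single-degree-of-freedom structure through a Hertzian contact law, with the nonlinear contact-force iteration performed separately from the response update at each time step. It is only a schematic stand-in for the integral-equation formulation of the paper; all parameter values are illustrative.

```python
import numpy as np

# Toy transverse-impact model: a rigid impactor (mass m) strikes a single-DOF
# "structure" (mass M, stiffness ks) through a Hertzian contact law
# F = kc * delta^(3/2), with indentation delta = x_imp - x_str.  At each time
# step the nonlinear contact force is found by a damped fixed-point iteration,
# and only then are the two responses advanced -- echoing the idea of
# separating the contact iteration from the structural response computation.
m, M, ks, kc = 0.05, 1.0, 1.0e4, 1.0e8      # SI units, illustrative values
dt, nsteps = 2.0e-6, 20000
x_i, v_i = 0.0, 1.0                         # impactor position / velocity
x_s, v_s = 0.0, 0.0                         # structural DOF
peak = 0.0
for _ in range(nsteps):
    F = 0.0
    for _ in range(30):                     # fixed-point iteration on F
        xi_new = x_i + dt * v_i - 0.5 * dt**2 * F / m
        xs_new = x_s + dt * v_s + 0.5 * dt**2 * (F - ks * x_s) / M
        delta = xi_new - xs_new
        F_new = kc * max(delta, 0.0) ** 1.5
        if abs(F_new - F) < 1e-9 * (1.0 + abs(F)):
            F = F_new
            break
        F = 0.5 * (F + F_new)               # damping keeps the iteration stable
    x_i, v_i = xi_new, v_i - dt * F / m
    x_s, v_s = xs_new, v_s + dt * (F - ks * xs_new) / M
    peak = max(peak, F)
print("peak contact force (N):", round(peak, 1))
```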
Navier-Stokes and viscous-inviscid interaction
NASA Technical Reports Server (NTRS)
Steger, Joseph L.; Vandalsem, William R.
1989-01-01
Some considerations toward developing numerical procedures for simulating viscous compressible flows are discussed. Both Navier-Stokes and boundary layer field methods are considered. Because efficient viscous-inviscid interaction methods have been difficult to extend to complex 3-D flow simulations, Navier-Stokes procedures are more frequently being utilized even though they require considerably more work per grid point. It would seem a mistake, however, not to make use of the more efficient approximate methods in those regions in which they are clearly valid. Ideally, a general purpose compressible flow solver that can optionally take advantage of approximate solution methods would suffice, both to improve accuracy and efficiency. Some potentially useful steps toward this goal are described: a generalized 3-D boundary layer formulation and the fortified Navier-Stokes procedure.
On-line Bayesian model updating for structural health monitoring
NASA Astrophysics Data System (ADS)
Rocchetta, Roberto; Broggi, Matteo; Huchet, Quentin; Patelli, Edoardo
2018-03-01
Fatigue induced cracks is a dangerous failure mechanism which affects mechanical components subject to alternating load cycles. System health monitoring should be adopted to identify cracks which can jeopardise the structure. Real-time damage detection may fail in the identification of the cracks due to different sources of uncertainty which have been poorly assessed or even fully neglected. In this paper, a novel efficient and robust procedure is used for the detection of cracks locations and lengths in mechanical components. A Bayesian model updating framework is employed, which allows accounting for relevant sources of uncertainty. The idea underpinning the approach is to identify the most probable crack consistent with the experimental measurements. To tackle the computational cost of the Bayesian approach an emulator is adopted for replacing the computationally costly Finite Element model. To improve the overall robustness of the procedure, different numerical likelihoods, measurement noises and imprecision in the value of model parameters are analysed and their effects quantified. The accuracy of the stochastic updating and the efficiency of the numerical procedure are discussed. An experimental aluminium frame and on a numerical model of a typical car suspension arm are used to demonstrate the applicability of the approach.
hp-Adaptive time integration based on the BDF for viscous flows
NASA Astrophysics Data System (ADS)
Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.
2015-06-01
This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selections to control respectively the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low order time integrators, while accurate solutions require high order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the method of integration operates inside its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense output techniques to compute the solution at off-step points.
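A minimal sketch of the h-adaptive part of such a scheme is given below for the linear test equation y' = λy: a BDF2 step is compared against a backward-Euler (BDF1) step to estimate the local error, and the stepsize is rescaled with the standard controller h_new = h (tol/err)^(1/(p+1)). The order-selection (p-adaptive) logic of the paper is not reproduced, and constant-step BDF2 coefficients are used for brevity.

```python
import numpy as np

# Sketch of h-adaptivity only, on the linear test problem y' = lam*y.
# Implicit solves are algebraic here because the test equation is linear.
lam, tol = -50.0 + 30.0j, 1e-6
h, t_end = 1e-3, 1.0
y_prev = 1.0 + 0.0j                      # y(0)
y = np.exp(lam * h)                      # seed the two-step method exactly
t = h
while t < t_end - 1e-14:
    h = min(h, t_end - t)
    y_be = y / (1.0 - h * lam)                            # BDF1 (backward Euler)
    y_b2 = (4.0 * y - y_prev) / (3.0 - 2.0 * h * lam)     # BDF2, constant h assumed
    err = abs(y_b2 - y_be)                                # local error estimate
    if err <= tol:                                        # accept the step
        y_prev, y, t = y, y_b2, t + h
    # standard controller, p = 2, with safety factor and growth/shrink limits
    h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-30)) ** (1.0 / 3.0)))
print("t =", round(t, 6), " |error| =", abs(y - np.exp(lam * t)))
```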
NASA Technical Reports Server (NTRS)
Wang, Gang
2003-01-01
A multi grid solution procedure for the numerical simulation of turbulent flows in complex geometries has been developed. A Full Multigrid-Full Approximation Scheme (FMG-FAS) is incorporated into the continuity and momentum equations, while the scalars are decoupled from the multi grid V-cycle. A standard kappa-Epsilon turbulence model with wall functions has been used to close the governing equations. The numerical solution is accomplished by solving for the Cartesian velocity components either with a traditional grid staggering arrangement or with a multiple velocity grid staggering arrangement. The two solution methodologies are evaluated for relative computational efficiency. The solution procedure with traditional staggering arrangement is subsequently applied to calculate the flow and temperature fields around a model Short Take-off and Vertical Landing (STOVL) aircraft hovering in ground proximity.
An Efficient Alternative Mixed Randomized Response Procedure
ERIC Educational Resources Information Center
Singh, Housila P.; Tarray, Tanveer A.
2015-01-01
In this article, we have suggested a new modified mixed randomized response (RR) model and studied its properties. It is shown that the proposed mixed RR model is always more efficient than the Kim and Warde's mixed RR model. The proposed mixed RR model has also been extended to stratified sampling. Numerical illustrations and graphical…
Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chiou, Jin-Chern
1990-01-01
Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE's) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm, which takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by using an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out. The DAE's and the constraint treatment techniques were transformed into arrowhead matrices, from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
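The kinematic step mentioned above, recovering angular orientation from angular velocity via Euler parameters, can be sketched as follows. The trapezoidal update and renormalization shown here are illustrative choices, not necessarily the thesis's exact implicit algorithm.

```python
import numpy as np

def omega_matrix(w):
    """4x4 matrix Omega(w) such that qdot = 0.5 * Omega(w) @ q for Euler
    parameters q = [q0, q1, q2, q3] and body-frame angular velocity w."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

def update_orientation(q, w, dt):
    """One implicit (trapezoidal) update of the Euler parameters from the
    angular velocity, followed by renormalization to keep |q| = 1 -- a sketch
    of the kinematic step following the staggered velocity update."""
    A = np.eye(4) - 0.25 * dt * omega_matrix(w)
    b = (np.eye(4) + 0.25 * dt * omega_matrix(w)) @ q
    q_new = np.linalg.solve(A, b)
    return q_new / np.linalg.norm(q_new)

q = np.array([1.0, 0.0, 0.0, 0.0])       # identity orientation
w = np.array([0.0, 0.0, np.pi])          # spin about body z at pi rad/s
for _ in range(1000):
    q = update_orientation(q, w, 1e-3)
# after 1 s the body has rotated pi about z: q ~ [cos(pi/2), 0, 0, sin(pi/2)]
print(np.round(q, 4))
```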
Solution of elliptic PDEs by fast Poisson solvers using a local relaxation factor
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
1986-01-01
A large class of two- and three-dimensional, nonseparable elliptic partial differential equations (PDEs) is presently solved by means of novel one-step (D'Yakanov-Gunn) and two-step (accelerated one-step) iterative procedures, using a local, discrete Fourier analysis. In addition to being easily implemented and applicable to a variety of boundary conditions, these procedures are found to be computationally efficient on the basis of the results of numerical comparison with other established methods, which lack the present one's: (1) insensitivity to grid cell size and aspect ratio, and (2) ease of convergence rate estimation by means of the coefficient of the PDE being solved. The two-step procedure is numerically demonstrated to outperform the one-step procedure in the case of PDEs with variable coefficients.
Hernán Ocaña, Pablo
2018-04-01
Currently, sedation in endoscopic procedures is considered a necessary condition and a criterion of quality in digestive endoscopy. The role of SAE in conventional endoscopic procedures is clearly established in clinical guidelines, but this is not so clear in complex endoscopic procedures, such as ERCP. In recent years, numerous studies have been published, with results similar to those reported in this article, endorsing the safety, efficacy and efficiency of SAE when performed by properly trained staff.
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Watson, Willie R. (Technical Monitor)
2005-01-01
The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software for solving large-scale acoustic problems arising from the unified frameworks of the finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by most modern high performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective interprocessor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetric and unsymmetric) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.
Formulation of a dynamic analysis method for a generic family of hoop-mast antenna systems
NASA Technical Reports Server (NTRS)
Gabriele, A.; Loewy, R.
1981-01-01
Analytical studies of mast-cable-hoop-membrane type antennas were conducted using a transfer matrix numerical analysis approach. This method, by virtue of its specialization and the inherently easy compartmentalization of the formulation and numerical procedures, can be significantly more efficient in computer time required and in the time needed to review and interpret the results.
NASA Technical Reports Server (NTRS)
Padovan, J.; Adams, M.; Fertis, J.; Zeid, I.; Lam, P.
1982-01-01
Finite element codes are used in modelling the rotor-bearing-stator structure common to the turbine industry. Engine dynamic simulation is enabled by developing strategies that make use of available finite element codes. The elements developed are benchmarked by incorporation into a general purpose code (ADINA); the numerical characteristics of finite element type rotor-bearing-stator simulations are evaluated through the use of various types of explicit/implicit numerical integration operators. The overall numerical efficiency of the procedure is improved.
A 3D staggered-grid finite difference scheme for poroelastic wave equation
NASA Astrophysics Data System (ADS)
Zhang, Yijie; Gao, Jinghuai
2014-10-01
Three dimensional numerical modeling has been a viable tool for understanding wave propagation in real media. The poroelastic media can better describe the phenomena of hydrocarbon reservoirs than acoustic and elastic media. However, the numerical modeling in 3D poroelastic media demands significantly more computational capacity, including both computational time and memory. In this paper, we present a 3D poroelastic staggered-grid finite difference (SFD) scheme. During the procedure, parallel computing is implemented to reduce the computational time. Parallelization is based on domain decomposition, and communication between processors is performed using message passing interface (MPI). Parallel analysis shows that the parallelized SFD scheme significantly improves the simulation efficiency and 3D decomposition in domain is the most efficient. We also analyze the numerical dispersion and stability condition of the 3D poroelastic SFD method. Numerical results show that the 3D numerical simulation can provide a real description of wave propagation.
A design procedure for a tension-wire stiffened truss-column
NASA Technical Reports Server (NTRS)
Greene, W. H.
1980-01-01
A deployable, tension wire stiffened, truss column configuration was considered for space structure applications. An analytical procedure, developed for design of the truss column and exercised in numerical studies, was based on equivalent beam stiffness coefficients in the classical analysis for an initially imperfect beam column. Failure constraints were formulated to be used in a combined weight/strength and nonlinear mathematical programming automated design procedure to determine the minimum mass column for a particular combination of design load and length. Numerical studies gave the mass characteristics of the truss column for broad ranges of load and length. Comparisons of the truss column with a baseline tubular column used a special structural efficiency parameter for this class of columns.
Recursive Newton-Euler formulation of manipulator dynamics
NASA Technical Reports Server (NTRS)
Nasser, M. G.
1989-01-01
A recursive Newton-Euler procedure is presented for the formulation and solution of manipulator dynamical equations. The procedure includes rotational and translational joints and a topological tree. This model was verified analytically using a planar two-link manipulator. Also, the model was tested numerically against the Walker-Orin model using the Shuttle Remote Manipulator System data. The hinge accelerations obtained from both models were identical. The computational requirements of the model vary linearly with the number of joints. The computational efficiency of this method exceeds that of Walker-Orin methods. This procedure may be viewed as a considerable generalization of Armstrong's method. A six-by-six formulation is adopted which enhances both the computational efficiency and simplicity of the model.
Quantitative optical diagnostics in pathology recognition and monitoring of tissue reaction to PDT
NASA Astrophysics Data System (ADS)
Kirillin, Mikhail; Shakhova, Maria; Meller, Alina; Sapunov, Dmitry; Agrba, Pavel; Khilov, Alexander; Pasukhin, Mikhail; Kondratieva, Olga; Chikalova, Ksenia; Motovilova, Tatiana; Sergeeva, Ekaterina; Turchin, Ilya; Shakhova, Natalia
2017-07-01
Optical coherence tomography (OCT) is currently being actively introduced into clinical practice. Besides diagnostics, it can be efficiently employed for treatment monitoring, allowing for timely correction of the treatment procedure. In monitoring of photodynamic therapy (PDT), the traditionally employed fluorescence imaging (FI) can benefit from the complementary use of OCT. Additional diagnostic efficiency can be derived from numerical processing of optical diagnostics data, which provides more information compared to visual evaluation. In this paper we report on the application of OCT together with numerical processing for clinical diagnostics in gynecology and otolaryngology, for monitoring of PDT in otolaryngology, and on OCT and FI applications in clinical and aesthetic dermatology. Image numerical processing and quantification provide an increase in diagnostic accuracy. Keywords: optical coherence tomography, fluorescence imaging, photodynamic therapy
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo
McDaniel, Tyler; D’Azevedo, Ed F.; Li, Ying Wai; ...
2017-11-07
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is therefore formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core CPUs and GPUs.
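The en-bloc application of K accepted row replacements can be sketched with the matrix determinant lemma and the Woodbury identity, as below; the actual algorithm also evaluates each intermediate acceptance ratio against the partially updated matrix, which this sketch omits.

```python
import numpy as np

# Sketch of the en-bloc idea: accumulate K accepted row replacements of the
# Slater matrix A and apply them to A^{-1} in one rank-K Woodbury update,
# trading K rank-1 (matrix-vector) updates for one matrix-matrix operation.
rng = np.random.default_rng(0)
n, K = 200, 16
A = rng.normal(size=(n, n))
Ainv = np.linalg.inv(A)

rows = rng.choice(n, size=K, replace=False)        # electrons that moved
new_rows = rng.normal(size=(K, n))                 # their new orbital values

U = np.zeros((n, K)); U[rows, np.arange(K)] = 1.0  # unit columns e_i
V = (new_rows - A[rows]).T                         # column k = delta_k
# matrix determinant lemma: det(A + U V^T) = det(A) * det(I + V^T A^{-1} U)
C = np.eye(K) + V.T @ Ainv @ U
det_ratio = np.linalg.det(C)
# Woodbury identity applied en bloc (matrix-matrix work, O(n^2 K)):
Ainv_new = Ainv - Ainv @ U @ np.linalg.solve(C, V.T @ Ainv)

A_new = A.copy(); A_new[rows] = new_rows
print(np.allclose(Ainv_new, np.linalg.inv(A_new)),
      np.isclose(det_ratio, np.linalg.det(A_new @ Ainv)))
```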
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDaniel, Tyler; D’Azevedo, Ed F.; Li, Ying Wai
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is therefore formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core CPUs and GPUs.
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo.
McDaniel, T; D'Azevedo, E F; Li, Y W; Wong, K; Kent, P R C
2017-11-07
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is, therefore, formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with an application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core central processing units and graphical processing units.
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo
NASA Astrophysics Data System (ADS)
McDaniel, T.; D'Azevedo, E. F.; Li, Y. W.; Wong, K.; Kent, P. R. C.
2017-11-01
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is, therefore, formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with an application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core central processing units and graphical processing units.
Experimental and numerical research on forging with torsion
NASA Astrophysics Data System (ADS)
Petrov, Mikhail A.; Subich, Vadim N.; Petrov, Pavel A.
2017-10-01
Increasing the efficiency of blank-production technological operations is closely related to computer-aided technologies (CAx). On the one hand, the practical result represents reality exactly. On the other hand, developing a new process demands resources that are limited at small and medium-sized enterprises (SMEs). The tools of CAx were successfully applied to the development of a new forging-with-torsion process as well as to the analysis of the results. It was shown that the theoretical calculations are confirmed both in practice and by numerical simulation. The most commonly used structural materials were studied, and the torsion angles were determined. The simulated results were evaluated against the experimental procedure.
NASA Technical Reports Server (NTRS)
Wright, William B.
1988-01-01
Transient, numerical simulations of the deicing of composite aircraft components by electrothermal heating have been performed in a 2-D rectangular geometry. Seven numerical schemes and four solution methods were used to find the most efficient numerical procedure for this problem. The phase change in the ice was simulated using the Enthalpy method along with the Method for Assumed States. Numerical solutions illustrating deicer performance for various conditions are presented. Comparisons are made with previous numerical models and with experimental data. The simulation can also be used to solve a variety of other heat conduction problems involving composite bodies.
Numerical solution of quadratic matrix equations for free vibration analysis of structures
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1975-01-01
This paper is concerned with the efficient and accurate solution of the eigenvalue problem represented by quadratic matrix equations. Such matrix forms are obtained in connection with the free vibration analysis of structures, discretized by finite 'dynamic' elements, resulting in frequency-dependent stiffness and inertia matrices. The paper presents a new numerical solution procedure of the quadratic matrix equations, based on a combined Sturm sequence and inverse iteration technique enabling economical and accurate determination of a few required eigenvalues and associated vectors. An alternative procedure based on a simultaneous iteration procedure is also described when only the first few modes are the usual requirement. The employment of finite dynamic elements in conjunction with the presently developed eigenvalue routines results in a most significant economy in the dynamic analysis of structures.
Decision rules for unbiased inventory estimates
NASA Technical Reports Server (NTRS)
Argentiero, P. D.; Koch, D.
1979-01-01
An efficient and accurate procedure for estimating inventories from remote sensing scenes is presented. In place of the conventional and expensive full dimensional Bayes decision rule, a one-dimensional feature extraction and classification technique was employed. It is shown that this efficient decision rule can be used to develop unbiased inventory estimates and that for large sample sizes typical of satellite derived remote sensing scenes, resulting accuracies are comparable or superior to more expensive alternative procedures. Mathematical details of the procedure are provided in the body of the report and in the appendix. Results of a numerical simulation of the technique using statistics obtained from an observed LANDSAT scene are included. The simulation demonstrates the effectiveness of the technique in computing accurate inventory estimates.
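One standard way to obtain such unbiased inventory estimates is to correct the raw classification counts with the inverse of the classifier's confusion matrix, sketched below with purely illustrative numbers; the report's specific one-dimensional feature-extraction rule is not reproduced.

```python
import numpy as np

# Raw class counts from a (possibly biased) classifier can be corrected with
# the classifier's confusion matrix P, where P[i, j] = Pr(classified as j |
# true class i), estimated from labelled training/test fields:
#   observed proportions  q = P^T p   =>   corrected estimate  p_hat = (P^T)^{-1} q
P = np.array([[0.90, 0.08, 0.02],      # assumed confusion matrix (illustrative)
              [0.10, 0.85, 0.05],
              [0.05, 0.05, 0.90]])
counts = np.array([4200.0, 3300.0, 2500.0])    # raw pixel counts per class
q = counts / counts.sum()
p_hat = np.linalg.solve(P.T, q)                # bias-corrected proportions
print("raw proportions:      ", np.round(q, 3))
print("corrected proportions:", np.round(p_hat, 3))
```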
Herczeg, J; Szontágh, F
1974-06-23
Artificial interruption of pregnancy carries considerable risks from the 12th week of pregnancy onward. The authors have been working to find the most suitable and effective dosage of prostaglandin for the interruption of pregnancy during the 2nd trimester. The new dosage tested was 25 mg of prostaglandin F2alpha, followed by another 25 mg 6 hours later. The clinical efficiency of this dosage was tested. This procedure was used in 45 cases. The efficiency of the method was compared to that of the previously used dosage, which was 25 mg of prostaglandin F2alpha followed by 25 mg 24 hours later. The new dosage was found to be 91% effective, while the previous dosage was found to be 75% effective. The side effects were rated as acceptable by the patients. There was no case of infection. Two undeniable advantages were found with this new dosage: the duration of the actual procedure is considerably reduced, and the method appears to be much safer. The authors conclude that this new procedure offers numerous clinical advantages.
Procedures for shape optimization of gas turbine disks
NASA Technical Reports Server (NTRS)
Cheu, Tsu-Chien
1989-01-01
Two procedures, the feasible direction method and sequential linear programming, for shape optimization of gas turbine disks are presented. The objective of these procedures is to obtain optimal designs of turbine disks with geometric and stress constraints. The coordinates of the selected points on the disk contours are used as the design variables. Structural weight, stress and their derivatives with respect to the design variables are calculated by an efficient finite element method for design senitivity analysis. Numerical examples of the optimal designs of a disk subjected to thermo-mechanical loadings are presented to illustrate and compare the effectiveness of these two procedures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cristina, S.; Feliziani, M.
1995-11-01
This paper describes a new procedure for the numerical computation of the electric field and current density distributions in a dc electrostatic precipitator in the presence of dust, taking into account the particle-size distribution. Poisson's and continuity equations are numerically solved by supposing that the coronating conductors satisfy Kaptzov's assumption on the emitter surfaces. Two iterative numerical procedures, both based on the finite element method (FEM), are implemented for evaluating, respectively, the unknown ionic charge density and the particle charge density distributions. The V-I characteristic and the precipitation efficiencies for the individual particle-size classes, calculated with reference to the pilot precipitator installed by ENEL (Italian Electricity Board) at its Marghera (Venice) coal-fired power station, are found to be very close to those measured experimentally.
Fast Numerical Methods for the Design of Layered Photonic Structures with Rough Interfaces
NASA Technical Reports Server (NTRS)
Komarevskiy, Nikolay; Braginsky, Leonid; Shklover, Valery; Hafner, Christian; Lawson, John
2011-01-01
Modified boundary conditions (MBC) and a multilayer approach (MA) are proposed as fast and efficient numerical methods for the design of 1D photonic structures with rough interfaces. These methods are applicable to structures composed of materials with an arbitrary permittivity tensor. MBC and MA are numerically validated on different types of interface roughness and permittivities of the constituent materials. The proposed methods can be combined with the 4x4 scattering matrix method as a field solver and an evolutionary strategy as an optimizer. The resulting optimization procedure is fast, accurate, numerically stable and can be used to design structures for various applications.
Coupled Neutron Transport for HZETRN
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Blattnig, Steve R.
2009-01-01
Exposure estimates inside space vehicles, surface habitats, and high-altitude aircraft exposed to space radiation are highly influenced by secondary neutron production. The deterministic transport code HZETRN has been identified as a reliable and efficient tool for such studies, but improvements to the underlying transport models and numerical methods are still necessary. In this paper, the forward-backward (FB) and directionally coupled forward-backward (DC) neutron transport models are derived, numerical methods for the FB model are reviewed, and a computationally efficient numerical solution is presented for the DC model. Both models are compared to the Monte Carlo codes HETC-HEDS, FLUKA, and MCNPX, and the DC model is shown to agree closely with the Monte Carlo results. Finally, it is found in the development of either model that the decoupling of low energy neutrons from the light particle transport procedure adversely affects low energy light ion fluence spectra and exposure quantities. A first order correction is presented to resolve the problem, and it is shown to be both accurate and efficient.
Developmental dissociation in the neural responses to simple multiplication and subtraction problems
Prado, Jérôme; Mutreja, Rachna; Booth, James R.
2014-01-01
Mastering single-digit arithmetic during school years is commonly thought to depend upon an increasing reliance on verbally memorized facts. An alternative model, however, posits that fluency in single-digit arithmetic might also be achieved via the increasing use of efficient calculation procedures. To test between these hypotheses, we used a cross-sectional design to measure the neural activity associated with single-digit subtraction and multiplication in 34 children from 2nd to 7th grade. The neural correlates of language and numerical processing were also identified in each child via localizer scans. Although multiplication and subtraction were indistinguishable in terms of behavior, we found a striking developmental dissociation in their neural correlates. First, we observed grade-related increases of activity for multiplication, but not for subtraction, in a language-related region of the left temporal cortex. Second, we found grade-related increases of activity for subtraction, but not for multiplication, in a region of the right parietal cortex involved in the procedural manipulation of numerical quantities. The present results suggest that fluency in simple arithmetic in children may be achieved by both increasing reliance on verbal retrieval and by greater use of efficient quantity-based procedures, depending on the operation. PMID:25089323
A shallow water model for the propagation of tsunami via Lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Zergani, Sara; Aziz, Z. A.; Viswanathan, K. K.
2015-01-01
An efficient implementation of the lattice Boltzmann method (LBM) for the numerical simulation of the propagation of long ocean waves (e.g. tsunami), based on the nonlinear shallow water (NSW) wave equation, is presented. The LBM is an alternative numerical procedure for the description of incompressible hydrodynamics and has the potential to serve as an efficient solver for incompressible flows in complex geometries. This work proposes the NSW equations for irrotational surface waves in the case of complex bottom elevation. In recent times, shallow water equations have become the norm in tsunami modelling, including estimation of the propagation zone. Several test cases are presented to verify our model. Some implications for tsunami wave modelling are also discussed. Numerical results are found to be in excellent agreement with theory.
The generation and use of numerical shape models for irregular Solar System objects
NASA Technical Reports Server (NTRS)
Simonelli, Damon P.; Thomas, Peter C.; Carcich, Brian T.; Veverka, Joseph
1993-01-01
We describe a procedure that allows the efficient generation of numerical shape models for irregular Solar System objects, where a numerical model is simply a table of evenly spaced body-centered latitudes and longitudes and their associated radii. This modeling technique uses a combination of data from limbs, terminators, and control points, and produces shape models that have some important advantages over analytical shape models. Accurate numerical shape models make it feasible to study irregular objects with a wide range of standard scientific analysis techniques. These applications include the determination of moments of inertia and surface gravity, the mapping of surface locations and structural orientations, photometric measurement and analysis, the reprojection and mosaicking of digital images, and the generation of albedo maps. The capabilities of our modeling procedure are illustrated through the development of an accurate numerical shape model for Phobos and the production of a global, high-resolution, high-pass-filtered digital image mosaic of this Martian moon. Other irregular objects that have been modeled, or are being modeled, include the asteroid Gaspra and the satellites Deimos, Amalthea, Epimetheus, Janus, Hyperion, and Proteus.
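Because the model is just a table of radii on an even latitude-longitude grid, derived quantities follow directly from it; the sketch below builds an illustrative radius table, converts it to body-centered Cartesian coordinates, and estimates the volume of a star-shaped body. The radius function used here is purely illustrative, not a real shape model.

```python
import numpy as np

# A numerical shape model is a table r(lat, lon) on an even grid.  For a
# star-shaped body the volume is V = (1/3) * sum r^3 cos(lat) dlat dlon, and
# the grid converts directly to Cartesian points for mapping or mosaicking.
nlat, nlon = 90, 180
lat = np.linspace(-np.pi / 2, np.pi / 2, nlat, endpoint=False) + np.pi / (2 * nlat)
lon = np.linspace(0.0, 2 * np.pi, nlon, endpoint=False)
LAT, LON = np.meshgrid(lat, lon, indexing="ij")

# illustrative radius table: a 10-km sphere with a mild ellipsoidal distortion
R = 10.0 * (1.0 + 0.1 * np.cos(LAT) ** 2 * np.cos(2 * LON))

dlat, dlon = np.pi / nlat, 2 * np.pi / nlon
volume = (R**3 * np.cos(LAT)).sum() * dlat * dlon / 3.0

x = R * np.cos(LAT) * np.cos(LON)     # body-centered Cartesian coordinates
y = R * np.cos(LAT) * np.sin(LON)
z = R * np.sin(LAT)
print("volume (km^3):", round(float(volume), 1),
      " sphere of same mean radius:", round(4.0 / 3.0 * np.pi * 10.0**3, 1))
```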
Using the Multilayer Free-Surface Flow Model to Solve Wave Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prokof’ev, V. A., E-mail: ProkofyevVA@vniig.ru
2017-01-15
A method is presented for changing over from a single-layer shallow-water model to a multilayer model with a hydrostatic pressure profile and, then, to a multilayer model with a nonhydrostatic pressure profile. The method does not require complex procedures for solving the discrete Poisson's equation and features high computational efficiency. The results of validating the algorithm against experimental data that are sensitive to the numerical dissipation of the scheme are presented. Examples are considered.
Efficient harvesting methods for early-stage snake and turtle embryos.
Matsubara, Yoshiyuki; Kuroiwa, Atsushi; Suzuki, Takayuki
2016-04-01
Reptile development is an intriguing research target for understanding the unique morphogenesis of reptiles as well as the evolution of vertebrates. However, there are numerous difficulties associated with studying development in reptiles. The number of available reptile eggs is usually quite limited. In addition, the reptile embryo is tightly adhered to the eggshell, making it a challenge to isolate reptile embryos intact. Furthermore, there have been few reports describing efficient procedures for isolating intact embryos, especially prior to the pharyngula stage. Thus, the aim of this review is to present efficient procedures for obtaining early-stage reptilian embryos intact. We first describe the method for isolating early-stage embryos of the Japanese striped snake. This is the first detailed method for obtaining embryos prior to oviposition in an oviparous snake species. Second, we describe an efficient strategy for isolating early-stage embryos of the soft-shelled turtle. © 2016 Japanese Society of Developmental Biologists.
Numerical Study of the Generation of Linear Energy Transfer Spectra for Space Radiation Applications
NASA Technical Reports Server (NTRS)
Badavi, Francis F.; Wilson, John W.; Hunter, Abigail
2005-01-01
In analyzing charged particle spectra in space due to galactic cosmic rays (GCR) and solar particle events (SPE), the conversion of particle energy spectra into linear energy transfer (LET) distributions is a convenient guide in assessing biologically significant components of these spectra. The mapping of LET to energy is triple valued and can be defined only on open energy subintervals where the derivative of LET with respect to energy is not zero. Presented here is a well-defined numerical procedure which allows for the generation of LET spectra on the open energy subintervals that are integrable in spite of their singular nature. The efficiency and accuracy of the numerical procedures is demonstrated by providing examples of computed differential and integral LET spectra and their equilibrium components for historically large SPEs and 1977 solar minimum GCR environments. Due to the biological significance of tissue, all simulations are done with tissue as the target material.
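The underlying conversion can be sketched as follows: on each open subinterval where LET(E) is monotone, φ_L(L) = φ_E(E)/|dL/dE|, and the branch contributions are summed. The LET curve and energy spectrum used below are purely illustrative, not the paper's environments or stopping-power data.

```python
import numpy as np

# Sketch of converting a differential energy spectrum phi_E(E) into an LET
# spectrum phi_L(L) = sum over monotone branches of phi_E(E) / |dL/dE|,
# using a hypothetical LET(E) curve that rises, peaks, and falls (so the
# inverse map is multivalued and must be split where dL/dE = 0).
E = np.logspace(0, 4, 4000)                        # MeV/u, illustrative grid
LET = 100.0 * E**0.8 * np.exp(-E / 300.0) + 0.2    # hypothetical LET(E), keV/um
phi_E = 1.0e4 * E**-2.0                            # hypothetical power-law spectrum

dLdE = np.gradient(LET, E)
# split the energy axis into open subintervals on which LET(E) is monotone
breaks = np.where(np.diff(np.sign(dLdE)) != 0)[0] + 1
segments = np.split(np.arange(E.size), breaks)

L_grid = np.linspace(LET.min() * 1.01, LET.max() * 0.99, 300)
phi_L = np.zeros_like(L_grid)
for seg in segments:
    if seg.size < 3:
        continue
    Ls, ds, ps = LET[seg], dLdE[seg], phi_E[seg]
    order = np.argsort(Ls)                           # make branch increasing in L
    inside = (L_grid > Ls.min()) & (L_grid < Ls.max())
    phi_L[inside] += np.interp(L_grid[inside], Ls[order], (ps / np.abs(ds))[order])
print("phi_L at a few LET values:",
      np.round(L_grid[[5, 50, 150, 290]], 1), np.round(phi_L[[5, 50, 150, 290]], 3))
```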
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael; Patera, Anthony T.; Peraire, Jaume
1998-01-01
We present a Neumann-subproblem a posteriori finite element procedure for the efficient and accurate calculation of rigorous, 'constant-free' upper and lower bounds for sensitivity derivatives of functionals of the solutions of partial differential equations. The design motivation for sensitivity derivative error control is discussed; the a posteriori finite element procedure is described; the asymptotic bounding properties and computational complexity of the method are summarized; and illustrative numerical results are presented.
NASA Technical Reports Server (NTRS)
Harris, J. E.
1975-01-01
An implicit finite-difference procedure is presented for solving the compressible three-dimensional boundary-layer equations. The method is second-order accurate, unconditionally stable (conditional stability for reverse cross flow), and efficient from the viewpoint of computer storage and processing time. The Reynolds stress terms are modeled by (1) a single-layer mixing length model and (2) a two-layer eddy viscosity model. These models, although simple in concept, accurately predicted the equilibrium turbulent flow for the conditions considered. Numerical results are compared with experimental wall and profile data for a cone at an angle of attack larger than the cone semiapex angle. These comparisons clearly indicate that the numerical procedure and turbulence models accurately predict the experimental data with as few as 21 nodal points in the plane normal to the wall boundary.
NASA Astrophysics Data System (ADS)
Méchi, Rachid; Farhat, Habib; Said, Rachid
2016-01-01
Nongray radiation calculations are carried out for a case problem available in the literature. The problem is a non-isothermal and inhomogeneous CO2-H2O-N2 gas mixture confined within an axisymmetric cylindrical furnace. The numerical procedure is based on the zonal method associated with the weighted sum of gray gases (WSGG) model. The effect of the wall emissivity on the heat flux losses is discussed. It is shown that this property affects strongly the furnace efficiency and that the most important heat fluxes are those leaving through the circumferential boundary. The numerical procedure adopted in this work is found to be effective and may be relied on to simulate coupled turbulent combustion-radiation in fired furnaces.
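The WSGG part of such a procedure amounts to a total emissivity of the form ε(T, pL) = Σ_i a_i(T)[1 - exp(-k_i pL)], with one clear gas accounting for the spectral windows. A sketch with purely illustrative coefficients (not a published WSGG fit) follows.

```python
import numpy as np

# Minimal sketch of the weighted-sum-of-gray-gases (WSGG) idea used with the
# zonal method: the total emissivity of a gas column of pressure path length
# pL is a weighted sum over a few gray gases plus one clear gas (k = 0).
# All coefficients below are assumed, purely illustrative values.
k = np.array([0.0, 0.4, 4.0, 40.0])          # absorption coefficients, 1/(atm m)
b = np.array([[0.40, -5.0e-5],               # a_i(T) = b_i0 + b_i1 * T (assumed)
              [0.25,  4.0e-5],
              [0.20,  1.0e-5],
              [0.15, -2.0e-5]])

def wsgg_emissivity(T, pL):
    a = b[:, 0] + b[:, 1] * T
    a = a / a.sum()                          # weights (incl. clear gas) sum to one
    return float(np.sum(a * (1.0 - np.exp(-k * pL))))

for pL in (0.1, 1.0, 3.0):                   # atm·m
    print(f"pL = {pL:4.1f} atm·m  ->  eps = {wsgg_emissivity(1200.0, pL):.3f}")
```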
NASA Technical Reports Server (NTRS)
Steger, J. L.; Caradonna, F. X.
1980-01-01
An implicit finite difference procedure is developed to solve the unsteady full potential equation in conservation law form. Computational efficiency is maintained by use of approximate factorization techniques. The numerical algorithm is first order in time and second order in space. A circulation model and difference equations are developed for lifting airfoils in unsteady flow; however, thin airfoil body boundary conditions have been used with stretching functions to simplify the development of the numerical algorithm.
Markov chain sampling of the O(n) loop models on the infinite plane
NASA Astrophysics Data System (ADS)
Herdeiro, Victor
2017-07-01
A numerical method was recently proposed in Herdeiro and Doyon [Phys. Rev. E 94, 043322 (2016), 10.1103/PhysRevE.94.043322] showing a precise sampling of the infinite plane two-dimensional critical Ising model for finite lattice subsections. The present note extends the method to a larger class of models, namely the O(n) loop gas models for n ∈ (1, 2]. We argue that even though the Gibbs measure is nonlocal, it is factorizable on finite subsections when sufficient information on the loops touching the boundaries is stored. Our results attempt to show that, provided an efficient Markov chain mixing algorithm and an improved discrete lattice dilation procedure, the planar limit of the O(n) models can be numerically studied with efficiency similar to the Ising case. This confirms that scale invariance is the only requirement for the present numerical method to work.
An efficient solution procedure for the thermoelastic analysis of truss space structures
NASA Technical Reports Server (NTRS)
Givoli, D.; Rand, O.
1992-01-01
A solution procedure is proposed for the thermal and thermoelastic analysis of truss space structures in periodic motion. In this method, the spatial domain is first descretized using a consistent finite element formulation. Then the resulting semi-discrete equations in time are solved analytically by using Fourier decomposition. Geometrical symmetry is taken advantage of completely. An algorithm is presented for the calculation of heat flux distribution. The method is demonstrated via a numerical example of a cylindrically shaped space structure.
An unsteady Euler scheme for the analysis of ducted propellers
NASA Technical Reports Server (NTRS)
Srivastava, R.
1992-01-01
An efficient unsteady solution procedure has been developed for analyzing inviscid unsteady flow past ducted propeller configurations. This scheme is first order accurate in time and second order accurate in space. The solution procedure has been applied to a ducted propeller consisting of an 8-bladed SR7 propeller with a duct of NACA 0003 airfoil cross section around it, operating in a steady axisymmetric flowfield. The variation of elemental blade loading with radius, compares well with other published numerical results.
Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach
NASA Technical Reports Server (NTRS)
Mak, Victor W. K.
1986-01-01
Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
Neutron Transport Models and Methods for HZETRN and Coupling to Low Energy Light Ion Transport
NASA Technical Reports Server (NTRS)
Blattnig, S.R.; Slaba, T.C.; Heinbockel, J.H.
2008-01-01
Exposure estimates inside space vehicles, surface habitats, and high altitude aircraft exposed to space radiation are highly influenced by secondary neutron production. The deterministic transport code HZETRN has been identified as a reliable and efficient tool for such studies, but improvements to the underlying transport models and numerical methods are still necessary. In this paper, the forward-backward (FB) and directionally coupled forward-backward (DC) neutron transport models are derived, numerical methods for the FB model are reviewed, and a computationally efficient numerical solution is presented for the DC model. Both models are compared to the Monte Carlo codes HETCHEDS and FLUKA, and the DC model is shown to agree closely with the Monte Carlo results. Finally, it is found in the development of either model that the decoupling of low energy neutrons from the light ion (A<4) transport procedure adversely affects low energy light ion fluence spectra and exposure quantities. A first order correction is presented to resolve the problem, and it is shown to be both accurate and efficient.
NASA Astrophysics Data System (ADS)
Chew, J. V. L.; Sulaiman, J.
2017-09-01
Partial differential equations that describe nonlinear heat and mass transfer phenomena are difficult to solve. When the exact solution is difficult to obtain, it is necessary to use a numerical procedure such as the finite difference method to solve a particular partial differential equation. In terms of numerical procedures, a particular method can be considered efficient if it gives an approximate solution within the specified error with the least computational complexity. Throughout this paper, the two-dimensional Porous Medium Equation (2D PME) is discretized by using the implicit finite difference scheme to construct the corresponding approximation equation. This approximation equation yields a large and sparse nonlinear system. By using the Newton method to linearize the nonlinear system, this paper deals with the application of the Four-Point Newton-EGSOR (4NEGSOR) iterative method for solving the 2D PMEs. In addition, the efficiency of the 4NEGSOR iterative method is studied by solving three examples of the problems. For comparative analysis, the Newton-Gauss-Seidel (NGS) and the Newton-SOR (NSOR) iterative methods are also considered. The numerical findings show that the 4NEGSOR method is superior to the NGS and the NSOR methods in terms of the number of iterations needed to reach converged solutions, the computation time, and the maximum absolute errors produced by the methods.
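As a rough illustration of the outer-Newton/inner-SOR structure described in this abstract (not the authors' 4NEGSOR implementation), the sketch below linearizes a small toy nonlinear system with Newton's method and solves each linear correction with SOR sweeps; the toy residual, its Jacobian, and all parameter values are assumptions made for the example.

```python
import numpy as np

def sor_solve(A, b, x0, omega=1.2, tol=1e-10, max_sweeps=500):
    """Solve A x = b with Successive Over-Relaxation sweeps."""
    x = x0.copy()
    n = len(b)
    for _ in range(max_sweeps):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

def newton_sor(residual, jacobian, x, tol=1e-10, max_newton=20):
    """Outer Newton iteration; each linear correction is computed by SOR."""
    for _ in range(max_newton):
        r = residual(x)
        if np.linalg.norm(r, np.inf) < tol:
            break
        J = jacobian(x)
        dx = sor_solve(J, -r, np.zeros_like(x))
        x = x + dx
    return x

# Toy nonlinear system (a stand-in for a discretized nonlinear diffusion problem):
# tridiagonal coupling plus a u^2 nonlinearity.
def residual(u):
    r = 2.0 * u + u**2 - 1.0
    r[1:] -= u[:-1]
    r[:-1] -= u[1:]
    return r

def jacobian(u):
    n = len(u)
    return np.diag(2.0 + 2.0 * u) - np.eye(n, k=1) - np.eye(n, k=-1)

u = newton_sor(residual, jacobian, np.zeros(8))
print(np.linalg.norm(residual(u), np.inf))  # near machine precision
```

The block variants (EGSOR, four-point grouping) change only how the inner sweeps are organized; the outer Newton linearization stays the same.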
A third-order approximation method for three-dimensional wheel-rail contact
NASA Astrophysics Data System (ADS)
Negretti, Daniele
2012-03-01
Multibody train analysis is used increasingly by railway operators whenever a reliable and time-efficient method to evaluate the contact between wheel and rail is needed; in particular, wheel-rail contact is one of the aspects that most strongly affects a reliable and time-efficient vehicle dynamics computation. The focus of the approach proposed here is to carry out such tasks by means of online wheel-rail elastic contact detection. In order to improve efficiency and save time, a mainly analytical approach is used for the definition of the wheel and rail surfaces as well as for contact detection, and a final numerical evaluation is then used to locate the contact. The final numerical procedure consists of finding the zeros of a nonlinear function in a single variable. The overall method is based on an approximation of the wheel surface, which does not influence the contact location significantly, as shown in the paper.
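To make the final step concrete, here is a minimal sketch of locating the zero of a single-variable nonlinear function with a bracketing root finder; the contact_function below is a made-up placeholder, not the paper's wheel/rail geometry.

```python
import numpy as np
from scipy.optimize import brentq

# Placeholder "contact" function of a single surface parameter t: in the
# paper this would come from the analytic wheel/rail surface approximation;
# here it is just an illustrative nonlinear function with one sign change.
def contact_function(t):
    return np.cos(t) - 0.3 * t

# Bracket a sign change and locate the zero.
t_star = brentq(contact_function, 0.0, 3.0)
print(t_star, contact_function(t_star))
```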
A high-order Lagrangian-decoupling method for the incompressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Ho, Lee-Wing; Maday, Yvon; Patera, Anthony T.; Ronquist, Einar M.
1989-01-01
A high-order Lagrangian-decoupling method is presented for the unsteady convection-diffusion and incompressible Navier-Stokes equations. The method is based upon: (1) Lagrangian variational forms that reduce the convection-diffusion equation to a symmetric initial value problem; (2) implicit high-order backward-differentiation finite-difference schemes for integration along characteristics; (3) finite element or spectral element spatial discretizations; and (4) mesh-invariance procedures and high-order explicit time-stepping schemes for deducing function values at convected space-time points. The method improves upon previous finite element characteristic methods through the systematic and efficient extension to high order accuracy, and the introduction of a simple structure-preserving characteristic-foot calculation procedure which is readily implemented on modern architectures. The new method is significantly more efficient than explicit-convection schemes for the Navier-Stokes equations due to the decoupling of the convection and Stokes operators and the attendant increase in temporal stability. Numerous numerical examples are given for the convection-diffusion and Navier-Stokes equations for the particular case of a spectral element spatial discretization.
A direct method for unfolding the resolution function from measurements of neutron induced reactions
NASA Astrophysics Data System (ADS)
Žugec, P.; Colonna, N.; Sabate-Gilarte, M.; Vlachoudis, V.; Massimi, C.; Lerendegui-Marco, J.; Stamatopoulos, A.; Bacak, M.; Warren, S. G.; n TOF Collaboration
2017-12-01
The paper explores the numerical stability and the computational efficiency of a direct method for unfolding the resolution function from the measurements of neutron induced reactions. A detailed resolution function formalism is laid out, followed by an overview of challenges present in a practical implementation of the method. A special matrix storage scheme is developed in order to facilitate both the memory management of the resolution function matrix, and to increase the computational efficiency of the matrix multiplication and decomposition procedures. Due to its admirable computational properties, a Cholesky decomposition is at the heart of the unfolding procedure. With the smallest but necessary modification of the matrix to be decomposed, the method is successfully applied to systems of size 10^5 × 10^5. However, the amplification of the uncertainties during the direct inversion procedures limits the applicability of the method to high-precision measurements of neutron induced reactions.
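A minimal sketch of the role the Cholesky decomposition plays in such a direct unfolding, with the "smallest but necessary modification" interpreted here as a tiny diagonal shift; the matrix and data below are random stand-ins, not the n_TOF resolution function.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)

# Toy symmetric, nearly singular "resolution" matrix standing in for the
# (much larger) resolution-function matrix of the paper.
A = rng.normal(size=(200, 200))
R = A @ A.T / 200.0

# Smallest necessary modification: a tiny diagonal shift so that the
# Cholesky factorization stays well defined despite near-singularity.
shift = 1e-10 * np.trace(R) / R.shape[0]
R_shifted = R + shift * np.eye(R.shape[0])
c, low = cho_factor(R_shifted)

measured = rng.normal(size=R.shape[0])    # stand-in for a measured spectrum
unfolded = cho_solve((c, low), measured)  # direct unfolding step

print(np.linalg.norm(R_shifted @ unfolded - measured))  # ~ round-off
```

The uncertainty amplification mentioned in the abstract shows up here as the large norm of the unfolded vector whenever the matrix is poorly conditioned.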
Integrated aerodynamic-structural design of a forward-swept transport wing
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Grossman, Bernard; Kao, Pi-Jen; Polen, David M.; Sobieszczanski-Sobieski, Jaroslaw
1989-01-01
The introduction of composite materials is having a profound effect on aircraft design. Since these materials permit the designer to tailor material properties to improve structural, aerodynamic and acoustic performance, they require an integrated multidisciplinary design process. Furthermore, because of the complexity of the design process, numerical optimization methods are required. The utilization of integrated multidisciplinary design procedures for improving aircraft design is not currently feasible because of software coordination problems and the enormous computational burden. Even with the expected rapid growth of supercomputers and parallel architectures, these tasks will not be practical without the development of efficient methods for cross-disciplinary sensitivities and efficient optimization procedures. The present research is part of an on-going effort which is focused on the processes of simultaneous aerodynamic and structural wing design as a prototype for design integration. A sequence of integrated wing design procedures has been developed in order to investigate various aspects of the design process.
NASA Astrophysics Data System (ADS)
WANG, P. T.
2015-12-01
Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. The hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure from LHS with simulation using LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared to three other simulation methods. The results show that for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort. Fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
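A generic sketch of the LULHS idea as described: Latin Hypercube samples are mapped to standard normal deviates and then correlated through a lower-triangular factor of a spatial covariance matrix. The grid, covariance model, and all parameters below are illustrative assumptions, not the study's setup.

```python
import numpy as np
from scipy.stats import norm, qmc

# 1D grid of "cells" with an exponential covariance model (illustrative).
n_cells, corr_len = 50, 10.0
x = np.arange(n_cells, dtype=float)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Lower-triangular factor of the covariance (the "LU" step; Cholesky here).
L = np.linalg.cholesky(C + 1e-10 * np.eye(n_cells))

# Latin Hypercube samples -> standard normal deviates -> correlated fields.
n_realizations = 20
sampler = qmc.LatinHypercube(d=n_cells, seed=1)
u = sampler.random(n=n_realizations)    # stratified uniforms in (0, 1)
z = norm.ppf(u)                         # independent standard normals
fields = z @ L.T                        # each row is one correlated realization

print(fields.shape)                                    # (20, 50)
print(np.corrcoef(fields[:, 0], fields[:, 1])[0, 1])   # neighbouring cells correlated
```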
All-optical nanomechanical heat engine.
Dechant, Andreas; Kiesel, Nikolai; Lutz, Eric
2015-05-08
We propose and theoretically investigate a nanomechanical heat engine. We show how a levitated nanoparticle in an optical trap inside a cavity can be used to realize a Stirling cycle in the underdamped regime. The all-optical approach enables fast and flexible control of all thermodynamical parameters and the efficient optimization of the performance of the engine. We develop a systematic optimization procedure to determine optimal driving protocols. Further, we perform numerical simulations with realistic parameters and evaluate the maximum power and the corresponding efficiency.
On computations of the integrated space shuttle flowfield using overset grids
NASA Technical Reports Server (NTRS)
Chiu, I-T.; Pletcher, R. H.; Steger, J. L.
1990-01-01
Numerical simulations using the thin-layer Navier-Stokes equations and the chimera (overset) grid approach were carried out for flows around the integrated space shuttle vehicle over a range of Mach numbers. Body-conforming grids were used for all the component grids. Test cases included a three-component overset grid configuration, consisting of the external tank (ET), the solid rocket booster (SRB) and the orbiter (ORB), and a five-component overset grid configuration, consisting of the ET, SRB, ORB, and the forward and aft attach hardware. The results were compared with wind tunnel and flight data. In addition, a Poisson solution procedure (a special case of the vorticity-velocity formulation) using primitive variables was developed to solve three-dimensional, irrotational, inviscid flows for single as well as overset grids. The solutions were validated by comparisons with other analytical or numerical solutions, and/or experimental results for various geometries. The Poisson solution was also used as an initial guess for the thin-layer Navier-Stokes solution procedure to improve the efficiency of the numerical flow simulations. It was found that this approach resulted in roughly a 30 percent CPU time savings as compared with the procedure solving the thin-layer Navier-Stokes equations from a uniform free stream flowfield.
Khoram, Nafiseh; Zayane, Chadia; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem
2016-03-15
The calibration of the hemodynamic model that describes changes in blood flow and blood oxygenation during brain activation is a crucial step for successfully monitoring and possibly predicting brain activity. This in turn has the potential to provide diagnosis and treatment of brain diseases in early stages. We propose an efficient numerical procedure for calibrating the hemodynamic model using some fMRI measurements. The proposed solution methodology is a regularized iterative method equipped with a Kalman filtering-type procedure. The Newton component of the proposed method addresses the nonlinear aspect of the problem. The regularization feature is used to ensure the stability of the algorithm. The Kalman filter procedure is incorporated here to address the noise in the data. Numerical results obtained with synthetic data as well as with real fMRI measurements are presented to illustrate the accuracy, robustness to the noise, and the cost-effectiveness of the proposed method. We present numerical results that clearly demonstrate that the proposed method outperforms the Cubature Kalman Filter (CKF), one of the most prominent existing numerical methods. We have designed an iterative numerical technique, called the TNM-CKF algorithm, for calibrating the mathematical model that describes the single-event related brain response when fMRI measurements are given. The method appears to be highly accurate and effective in reconstructing the BOLD signal even when the measurements are tainted with high noise level (as high as 30%). Published by Elsevier B.V.
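The sketch below shows only the generic "regularized Newton" skeleton mentioned in the abstract, as a Tikhonov-damped Gauss-Newton iteration on synthetic data; it is not the TNM-CKF algorithm, it omits the Kalman filtering component, and the exponential test model and all parameters are assumptions made for the example.

```python
import numpy as np

def regularized_gauss_newton(model, jac, y, theta0, lam=1e-2, n_iter=30):
    """Tikhonov-regularized Gauss-Newton fit of model(theta) to data y."""
    theta = theta0.astype(float).copy()
    for _ in range(n_iter):
        r = y - model(theta)
        J = jac(theta)
        # Regularized normal equations: (J^T J + lam I) dtheta = J^T r
        A = J.T @ J + lam * np.eye(len(theta))
        theta += np.linalg.solve(A, J.T @ r)
    return theta

# Synthetic example: fit amplitude and decay of an exponential response in noise.
t = np.linspace(0.0, 10.0, 200)
def model(p):
    return p[0] * np.exp(-p[1] * t)
def jac(p):
    return np.column_stack([np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)])

rng = np.random.default_rng(3)
true = np.array([2.0, 0.5])
y = model(true) + 0.3 * rng.normal(size=t.size)   # noisy synthetic measurements

print(regularized_gauss_newton(model, jac, y, np.array([1.5, 0.8])))
```

The regularization term plays the same stabilizing role described in the abstract: it keeps the linearized solve well posed when the Jacobian is nearly rank deficient.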
NASA Technical Reports Server (NTRS)
Bratanow, T.; Ecer, A.
1973-01-01
A general computational method for analyzing unsteady flow around pitching and plunging airfoils was developed. The finite element method was applied in developing an efficient numerical procedure for the solution of equations describing the flow around airfoils. The numerical results were employed in conjunction with computer graphics techniques to produce visualization of the flow. The investigation involved mathematical model studies of flow in two phases: (1) analysis of a potential flow formulation and (2) analysis of an incompressible, unsteady, viscous flow from Navier-Stokes equations.
NASA Astrophysics Data System (ADS)
Kang, Seokkoo; Borazjani, Iman; Sotiropoulos, Fotis
2008-11-01
Unsteady 3D simulations of flows in natural streams are a challenging task due to the complexity of the bathymetry, the shallowness of the flow, and the presence of multiple nature- and man-made obstacles. This work is motivated by the need to develop a powerful numerical method for simulating such flows using coherent-structure-resolving turbulence models. We employ the curvilinear immersed boundary method of Ge and Sotiropoulos (Journal of Computational Physics, 2007) and address the critical issue of numerical efficiency in large aspect ratio computational domains and grids such as those encountered in long and shallow open channels. We show that the matrix-free Newton-Krylov method for solving the momentum equations, coupled with an algebraic multigrid method with incomplete LU preconditioner for solving the Poisson equation, yields a robust and efficient procedure for obtaining time-accurate solutions in such problems. We demonstrate the potential of the numerical approach by carrying out a direct numerical simulation of flow in a long and shallow meandering stream with multiple hydraulic structures.
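As a drastically simplified illustration of a matrix-free Newton-Krylov solve (not the curvilinear immersed boundary code), the sketch below applies SciPy's newton_krylov to a small nonlinear reaction-diffusion residual: only residual evaluations are needed, and the Krylov solver never forms the Jacobian.

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 64
h = 1.0 / (n + 1)

def residual(u):
    """Discrete -u'' + u^3 = 1 on (0, 1) with zero Dirichlet boundaries."""
    upad = np.concatenate(([0.0], u, [0.0]))
    return -(upad[2:] - 2.0 * upad[1:-1] + upad[:-2]) / h**2 + u**3 - 1.0

# Jacobian-free Newton-Krylov: the Jacobian-vector products are approximated
# internally by finite differences of the residual.
u = newton_krylov(residual, np.zeros(n), method='lgmres', f_tol=1e-10)
print(np.abs(residual(u)).max())
```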
Piecewise Polynomial Aggregation as Preprocessing for Data Numerical Modeling
NASA Astrophysics Data System (ADS)
Dobronets, B. S.; Popova, O. A.
2018-05-01
Data aggregation issues for numerical modeling are reviewed in the present study. The authors discuss data aggregation procedures as preprocessing for subsequent numerical modeling. To calculate the data aggregation, the authors propose using numerical probabilistic analysis (NPA). An important feature of this study is how the authors represent the aggregated data. The study shows that the offered approach to data aggregation can be interpreted as the frequency distribution of a variable. To study its properties, the density function is used. For this purpose, the authors propose using the piecewise polynomial models. A suitable example of such approach is the spline. The authors show that their approach to data aggregation allows reducing the level of data uncertainty and significantly increasing the efficiency of numerical calculations. To demonstrate the degree of the correspondence of the proposed methods to reality, the authors developed a theoretical framework and considered numerical examples devoted to time series aggregation.
Global Asymptotic Behavior of Iterative Implicit Schemes
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.
1994-01-01
The global asymptotic nonlinear behavior of some standard iterative procedures in solving nonlinear systems of algebraic equations arising from four implicit linear multistep methods (LMMs) in discretizing three models of 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed using the theory of dynamical systems. The iterative procedures include simple iteration and full and modified Newton iterations. The results are compared with standard Runge-Kutta explicit methods, a noniterative implicit procedure, and the Newton method of solving the steady part of the ODEs. Studies showed that aside from exhibiting spurious asymptotes, all of the four implicit LMMs can change the type and stability of the steady states of the differential equations (DEs). They also exhibit a drastic distortion but less shrinkage of the basin of attraction of the true solution than standard nonLMM explicit methods. The simple iteration procedure exhibits behavior which is similar to standard nonLMM explicit methods except that spurious steady-state numerical solutions cannot occur. The numerical basins of attraction of the noniterative implicit procedure mimic more closely the basins of attraction of the DEs and are more efficient than the three iterative implicit procedures for the four implicit LMMs. Contrary to popular belief, the initial data using the Newton method of solving the steady part of the DEs may not have to be close to the exact steady state for convergence. These results can be used as an explanation for possible causes and cures of slow convergence and nonconvergence of steady-state numerical solutions when using an implicit LMM time-dependent approach in computational fluid dynamics.
A Polynomial Time, Numerically Stable Integer Relation Algorithm
NASA Technical Reports Server (NTRS)
Ferguson, Helaman R. P.; Bailey, Daivd H.; Kutler, Paul (Technical Monitor)
1998-01-01
Let x = (x1, x2, ..., xn) be a vector of real numbers. x is said to possess an integer relation if there exist integers a_i, not all zero, such that a_1 x_1 + a_2 x_2 + ... + a_n x_n = 0. Beginning in 1977 several algorithms (with proofs) have been discovered to recover the a_i given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically. It often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
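PSLQ is available in the mpmath library; a quick demonstration, recovering the relation 1 + φ - φ² = 0 for the golden ratio, might look as follows (this uses mpmath's implementation, not the authors' original code, and the precision and tolerance are illustrative choices).

```python
from mpmath import mp, mpf, sqrt, pslq

mp.dps = 50  # working precision in decimal digits

phi = (1 + sqrt(5)) / 2          # golden ratio
x = [mpf(1), phi, phi**2]        # vector suspected of an integer relation

# pslq returns integers a_i (not all zero) with a1*x1 + a2*x2 + a3*x3 ~ 0.
relation = pslq(x, tol=mpf(10) ** -40)
print(relation)                  # e.g. [1, 1, -1], since 1 + phi - phi^2 = 0
```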
Thermo-viscoelastic analysis of composite materials
NASA Technical Reports Server (NTRS)
Lin, Kuen Y.; Hwang, I. H.
1989-01-01
The thermo-viscoelastic boundary value problem for anisotropic materials is formulated and a numerical procedure is developed for the efficient analysis of stress and deformation histories in composites. The procedure is based on the finite element method and therefore it is applicable to composite laminates containing geometric discontinuities and complicated boundary conditions. Using the present formulation, the time-dependent stress and strain distributions in both notched and unnotched graphite/epoxy composites have been obtained. The effect of temperature and ply orientation on the creep and relaxation response is also studied.
NASA Astrophysics Data System (ADS)
Bhrawy, A. H.; Doha, E. H.; Baleanu, D.; Ezz-Eldien, S. S.
2015-07-01
In this paper, an efficient and accurate spectral numerical method is presented for solving second- and fourth-order fractional diffusion-wave equations and fractional wave equations with damping. The proposed method is based on the Jacobi tau spectral procedure together with the Jacobi operational matrix for fractional integrals, described in the Riemann-Liouville sense. The main characteristic behind this approach is to reduce such problems to those of solving systems of algebraic equations in the unknown expansion coefficients of the sought-for spectral approximations. The validity and effectiveness of the method are demonstrated by solving five numerical examples. Numerical examples are presented in the form of tables and graphs to make comparison with the results obtained by other methods and with the exact solutions easier.
Improved diffusion Monte Carlo propagators for bosonic systems using Itô calculus
NASA Astrophysics Data System (ADS)
Håkansson, P.; Mella, M.; Bressanini, Dario; Morosi, Gabriele; Patrone, Marta
2006-11-01
The construction of importance sampled diffusion Monte Carlo (DMC) schemes accurate to second order in the time step is discussed. A central aspect in obtaining efficient second order schemes is the numerical solution of the stochastic differential equation (SDE) associated with the Fokker-Planck equation responsible for the importance sampling procedure. In this work, stochastic predictor-corrector schemes solving the SDE and consistent with Itô calculus are used in DMC simulations of helium clusters. These schemes are numerically compared with alternative algorithms obtained by splitting the Fokker-Planck operator, an approach that we analyze using the analytical tools provided by Itô calculus. The numerical results show that predictor-corrector methods are indeed accurate to second order in the time step and that they present a smaller time step bias and a better efficiency than second order split-operator derived schemes when computing ensemble averages for bosonic systems. The possible extension of the predictor-corrector methods to higher orders is also discussed.
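A minimal sketch of a stochastic predictor-corrector (Heun-type) step of the kind discussed, applied to a Langevin equation with a simple harmonic guiding drift rather than a helium-cluster trial function; the corrector averages the drift while reusing the same noise increment, which reduces the time-step bias relative to plain Euler-Maruyama. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def drift(x):
    # Illustrative importance-sampling ("guiding") drift: harmonic restoring force.
    return -x

def heun_sde_step(x, dt, dW):
    """Predictor-corrector (stochastic Heun) step for dX = a(X) dt + dW."""
    x_pred = x + drift(x) * dt + dW                        # Euler-Maruyama predictor
    return x + 0.5 * (drift(x) + drift(x_pred)) * dt + dW  # trapezoidal corrector

# Evolve an ensemble of walkers and compare the sampled variance with the
# exact stationary value 1/2 of the Ornstein-Uhlenbeck process dX = -X dt + dW.
n_walkers, dt, n_steps = 20000, 0.05, 2000
x = rng.normal(size=n_walkers)
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.normal(size=n_walkers)
    x = heun_sde_step(x, dt, dW)

print(x.var())   # close to 0.5, with a smaller dt bias than plain Euler-Maruyama
```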
NASA Technical Reports Server (NTRS)
Hafez, M.; Ahmad, J.; Kuruvila, G.; Salas, M. D.
1987-01-01
In this paper, steady, axisymmetric inviscid, and viscous (laminar) swirling flows representing vortex breakdown phenomena are simulated using a stream function-vorticity-circulation formulation and two numerical methods. The first is based on an inverse iteration, where a norm of the solution is prescribed and the swirling parameter is calculated as a part of the output. The second is based on direct Newton iterations, where the linearized equations, for all the unknowns, are solved simultaneously by an efficient banded Gaussian elimination procedure. Several numerical solutions for inviscid and viscous flows are demonstrated, followed by a discussion of the results. Some improvements on previous work have been achieved: first order upwind differences are replaced by second order schemes, line relaxation procedure (with linear convergence rate) is replaced by Newton's iterations (which converge quadratically), and Reynolds numbers are extended from 200 up to 1000.
Alternative Modal Basis Selection Procedures for Nonlinear Random Response Simulation
NASA Technical Reports Server (NTRS)
Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.
2010-01-01
Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of the three reduced-order analyses are compared with the results of the computationally taxing simulation in the physical degrees of freedom. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
An efficient method for solving the steady Euler equations
NASA Technical Reports Server (NTRS)
Liou, M.-S.
1986-01-01
An efficient numerical procedure for solving a set of nonlinear partial differential equations, the steady Euler equations, using Newton's linearization procedure is presented. A theorem indicating quadratic convergence for the case of differential equations is demonstrated. A condition for the domain of quadratic convergence Omega(2) is obtained which indicates that whether an approximation lies in Omega(2) depends on the rate of change and the smoothness of the flow vectors, and hence is problem-dependent. The choice of spatial differencing, of particular importance for the present method, is discussed. The treatment of boundary conditions is addressed, and the system of equations resulting from the foregoing analysis is summarized and solution strategies are discussed. The convergence of calculated solutions is demonstrated by comparing them with exact solutions to one- and two-dimensional problems.
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Kiris, C.; Smith, Charles A. (Technical Monitor)
1998-01-01
The performance of two commonly used numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, is compared. These formulations are selected primarily because they are designed for three-dimensional applications. The computational procedures are compared by obtaining steady state solutions of a wake vortex and unsteady solutions of a curved duct flow. For steady computations, artificial compressibility was very efficient in terms of computing time and robustness. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally more efficient than the artificial compressibility method. This comparison is intended to give some basis for selecting a method or a flow solution code for large three-dimensional applications where computing resources become a critical issue.
Accurate Monotonicity - Preserving Schemes With Runge-Kutta Time Stepping
NASA Technical Reports Server (NTRS)
Suresh, A.; Huynh, H. T.
1997-01-01
A new class of high-order monotonicity-preserving schemes for the numerical solution of conservation laws is presented. The interface value in these schemes is obtained by limiting a higher-order polynomial reconstruction. The limiting is designed to preserve accuracy near extrema and to work well with Runge-Kutta time stepping. Computational efficiency is enhanced by a simple test that determines whether the limiting procedure is needed. For linear advection in one dimension, these schemes are shown to preserve monotonicity; numerical experiments for advection as well as the Euler equations also confirm their high accuracy, good shock resolution, and computational efficiency.
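The sketch below is not the monotonicity-preserving scheme of the paper; it only illustrates the general idea of limiting a reconstructed interface value so that no new extrema are introduced, using a much simpler minmod-limited linear reconstruction on an assumed step profile.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: 0 if signs differ, else the argument of smaller magnitude."""
    return np.where(a * b <= 0.0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def limited_interface_values(u):
    """Left states at interfaces i+1/2 from a minmod-limited linear reconstruction.

    This is a far simpler limiter than the MP schemes of the paper; it is shown
    only to illustrate limiting an interface value toward the local data range.
    """
    du_minus = u[1:-1] - u[:-2]      # backward differences
    du_plus = u[2:] - u[1:-1]        # forward differences
    slope = minmod(du_minus, du_plus)
    return u[1:-1] + 0.5 * slope     # reconstructed value at the right face of cell i

# A step profile: the limited values stay within the data range (no overshoot).
u = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
print(limited_interface_values(u))
```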
NASA Astrophysics Data System (ADS)
Mascarenhas, Eduardo; Flayac, Hugo; Savona, Vincenzo
2015-08-01
We develop a numerical procedure to efficiently model the nonequilibrium steady state of one-dimensional arrays of open quantum systems based on a matrix-product operator ansatz for the density matrix. The procedure searches for the null eigenvalue of the Liouvillian superoperator by sweeping along the system while carrying out a partial diagonalization of the single-site stationary problem. It bears full analogy to the density-matrix renormalization-group approach to the ground state of isolated systems, and its numerical complexity scales as a power law with the bond dimension. The method brings considerable advantage when compared to the integration of the time-dependent problem via Trotter decomposition, as it can address arbitrarily long-ranged couplings. Additionally, it ensures numerical stability in the case of weakly dissipative systems thanks to a slow tuning of the dissipation rates along the sweeps. We have tested the method on a driven-dissipative spin chain, under various assumptions for the Hamiltonian, drive, and dissipation parameters, and compared the results to those obtained both by Trotter dynamics and Monte Carlo wave function methods. Accurate and numerically stable convergence was always achieved when applying the method to systems with a gapped Liouvillian and a nondegenerate steady state.
Spreadsheet Applications using VisiCalc and Lotus 1-2-3 Programs.
ERIC Educational Resources Information Center
Cortland-Madison Board of Cooperative Educational Services, Cortland, NY.
The VisiCalc program is visual calculation on a computer making use of an electronic worksheet that is beneficial to the business user in dealing with numerous accounting and clerical procedures. The Lotus 1-2-3 program begins with VisiCalc and improves upon it by adding graphics and a database as well as more efficient ways to manipulate and…
GTA weld penetration and the effects of deviations in machine variables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giedt, W.H.
1987-07-01
Analytical models for predicting the temperature distribution during GTA welding are reviewed with the purpose of developing a procedure for investigating the effects of deviations in machine parameters. The objective was to determine the accuracy required in machine settings to obtain reproducible results. This review revealed a wide range of published values (21 to 90%) for the arc heating efficiency. Low values (21 to 65%) were associated with evaluation of efficiency using constant property conduction models. Values from 75 to 90% were determined from calorimetric type measurements and are applicable for more accurate numerical solution procedures. Although numerical solutions can yield better overall weld zone predictions, calculations are lengthy and complex. In view of this and the indication that acceptable agreement with experimental measurements can be achieved with the moving-point-source solution, it was utilized to investigate the effects of deviations or errors in voltage, current, and travel speed on GTA weld penetration. Variations resulting from welding within current goals for voltage (±0.1 V), current (±3.0 A), and travel speed (±2.0%) were found to be ±2 to 4%, with voltage and current being more influential than travel speed.
Development of an efficient procedure for calculating the aerodynamic effects of planform variation
NASA Technical Reports Server (NTRS)
Mercer, J. E.; Geller, E. W.
1981-01-01
Numerical procedures to compute gradients in aerodynamic loading due to planform shape changes using panel method codes were studied. Two procedures were investigated: one computed the aerodynamic perturbation directly; the other computed the aerodynamic loading on the perturbed planform and on the base planform and then differenced these values to obtain the perturbation in loading. It is indicated that computing the perturbed values directly cannot be done satisfactorily without proper aerodynamic representation of the pressure singularity at the leading edge of a thin wing. For the alternative procedure, a technique was developed which saves most of the time-consuming computations from a panel method calculation for the base planform. Using this procedure the perturbed loading can be calculated in about one-tenth the time of that for the base solution.
Nash equilibrium and multi criterion aerodynamic optimization
NASA Astrophysics Data System (ADS)
Tang, Zhili; Zhang, Lianhe
2016-06-01
Game theory and its particular Nash Equilibrium (NE) have been gaining importance in solving Multi Criterion Optimization (MCO) engineering problems over the past decade. The solution of an MCO problem can be viewed as a NE under the concept of competitive games. This paper surveys/proposes four efficient algorithms for calculating a NE of an MCO problem. Existence and equivalence of the solution are analyzed and proved in the paper based on a fixed point theorem. A specific virtual symmetric Nash game is also presented to set up an optimization strategy for single objective optimization problems. Two numerical examples are presented to verify the proposed algorithms: one is the optimization of mathematical functions, to illustrate the detailed numerical procedures of the algorithms; the other is aerodynamic drag reduction of a civil transport wing-fuselage configuration using the virtual game. The successful application validates the efficiency of the algorithms in solving complex aerodynamic optimization problems.
Dynamic behavior of a rolling housing
NASA Astrophysics Data System (ADS)
Gentile, A.; Messina, A. M.; Trentadue, Bartolo
1994-09-01
One of the major objectives of industry is to curtail costs. One element, among others, that helps achieve this goal is the efficiency of the production cycle machines. Such efficiency relies on the reliability of maintenance operations. Among maintenance procedures, measuring and analyzing vibrations is a way to detect structural modifications over the machine's lifespan. Further, the availability of a mathematical model describing the influence of each individual part of the machine on the total dynamic behavior of the whole machine may help localize breakdowns during diagnosis operations. This paper illustrates an analytical-numerical model which can simulate the behavior of a rolling housing. The aforesaid mathematical model has been obtained by FEM techniques, the dynamic response by mode superposition, and the synthesis of the vibration time sequence in the frequency domain by FFT numerical techniques.
NASA Technical Reports Server (NTRS)
Crook, Andrew J.; Delaney, Robert A.
1992-01-01
The purpose of this study is the development of a three-dimensional Euler/Navier-Stokes flow analysis for fan section/engine geometries containing multiple blade rows and multiple spanwise flow splitters. An existing procedure developed by Dr. J. J. Adamczyk and associates at the NASA Lewis Research Center was modified to accept multiple spanwise splitter geometries and simulate engine core conditions. The procedure was also modified to allow coarse parallelization of the solution algorithm. This document is a final report outlining the development and techniques used in the procedure. The numerical solution is based upon a finite volume technique with a four stage Runge-Kutta time marching procedure. Numerical dissipation is used to gain solution stability but is reduced in viscous dominated flow regions. Local time stepping and implicit residual smoothing are used to increase the rate of convergence. Multiple blade row solutions are based upon the average-passage system of equations. The numerical solutions are performed on an H-type grid system, with meshes being generated by the system (TIGG3D) developed earlier under this contract. The grid generation scheme meets the average-passage requirement of maintaining a common axisymmetric mesh for each blade row grid. The analysis was run on several geometry configurations ranging from one to five blade rows and from one to four radial flow splitters. Pure internal flow solutions were obtained as well as solutions with flow about the cowl/nacelle and various engine core flow conditions. The efficiency of the solution procedure was shown to be the same as the original analysis.
Numerical simulation of the vortical flow around a pitching airfoil
NASA Astrophysics Data System (ADS)
Fu, Xiang; Li, Gaohua; Wang, Fuxin
2017-04-01
In order to study the dynamic behaviors of the flapping wing, the vortical flow around a pitching NACA0012 airfoil is investigated. The unsteady flow field is obtained by a very efficient zonal procedure based on the velocity-vorticity formulation, and the Reynolds number based on the chord length of the airfoil is set to 1 million. The zonal procedure divides the whole computation domain into three zones: a potential flow zone, a boundary layer zone and a Navier-Stokes zone. Since the vorticity is absent in the potential flow zone, the vorticity transport equation needs only to be solved in the boundary layer zone and the Navier-Stokes zone. Moreover, the boundary layer equations are solved in the boundary layer zone. This arrangement drastically reduces the computation time compared with the traditional numerical method. After the flow field computation, the evolution of the vortices around the airfoil is analyzed in detail.
Computer simulations of phase field drops on super-hydrophobic surfaces
NASA Astrophysics Data System (ADS)
Fedeli, Livio
2017-09-01
We present a novel quasi-Newton continuation procedure that efficiently solves the system of nonlinear equations arising from the discretization of a phase field model for wetting phenomena. We perform a comparative numerical analysis that shows the improved speed of convergence gained with respect to other numerical schemes. Moreover, we discuss the conditions that, on a theoretical level, guarantee the convergence of this method. At each iterative step, a suitable continuation procedure develops and passes to the nonlinear solver an accurate initial guess. Discretization performs through cell-centered finite differences. The resulting system of equations is solved on a composite grid that uses dynamic mesh refinement and multi-grid techniques. The final code achieves three-dimensional, realistic computer experiments comparable to those produced in laboratory settings. This code offers not only new insights into the phenomenology of super-hydrophobicity, but also serves as a reliable predictive tool for the study of hydrophobic surfaces.
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Kwak, Dochan
2001-01-01
Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.
Khmyrova, Irina; Watanabe, Norikazu; Kholopova, Julia; Kovalchuk, Anatoly; Shapoval, Sergei
2014-07-20
We develop an analytical and numerical model for performing simulation of light extraction through the planar output interface of the light-emitting diodes (LEDs) with nonuniform current injection. Spatial nonuniformity of injected current is a peculiar feature of the LEDs in which top metal electrode is patterned as a mesh in order to enhance the output power of light extracted through the top surface. Basic features of the model are the bi-plane computation domain, related to other areas of numerical grid (NG) cells in these two planes, representation of light-generating layer by an ensemble of point light sources, numerical "collection" of light photons from the area limited by acceptance circle and adjustment of NG-cell areas in the computation procedure by the angle-tuned aperture function. The developed model and procedure are used to simulate spatial distributions of the output optical power as well as the total output power at different mesh pitches. The proposed model and simulation strategy can be very efficient in evaluation of the output optical performance of LEDs with periodical or symmetrical configuration of the electrodes.
Automated smoother for the numerical decoupling of dynamics models.
Vilela, Marco; Borges, Carlos C H; Vinga, Susana; Vasconcelos, Ana Tereza R; Santos, Helena; Voit, Eberhard O; Almeida, Jonas S
2007-08-21
Structure identification of dynamic models for complex biological systems is the cornerstone of their reverse engineering. Biochemical Systems Theory (BST) offers a particularly convenient solution because its parameters are kinetic-order coefficients which directly identify the topology of the underlying network of processes. We have previously proposed a numerical decoupling procedure that allows the identification of multivariate dynamic models of complex biological processes. While described here within the context of BST, this procedure has general applicability to signal extraction. Our original implementation relied on artificial neural networks (ANN), which caused slight, undesirable bias during the smoothing of the time courses. As an alternative, we propose here an adaptation of the Whittaker's smoother and demonstrate its role within a robust, fully automated structure identification procedure. In this report we propose a robust, fully automated solution for signal extraction from time series, which is the prerequisite for the efficient reverse engineering of biological systems models. The Whittaker's smoother is reformulated within the context of information theory and extended by the development of adaptive signal segmentation to account for heterogeneous noise structures. The resulting procedure can be used on arbitrary time series with a nonstationary noise process; it is illustrated here with metabolic profiles obtained from in-vivo NMR experiments. The smoothed solution that is free of parametric bias permits differentiation, which is crucial for the numerical decoupling of systems of differential equations. The method is applicable to signal extraction from time series with a nonstationary noise structure and can be applied to the numerical decoupling of systems of differential equations into algebraic equations, and thus constitutes a rather general tool for the reverse engineering of mechanistic model descriptions from multivariate experimental time series.
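A standard sparse-matrix formulation of the Whittaker smoother (penalized least squares with a second-difference penalty) can be written in a few lines; the penalty weight and test signal below are illustrative, and the adaptive segmentation described in the abstract is not shown.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def whittaker_smooth(y, lam=100.0):
    """Whittaker smoother: minimize ||y - z||^2 + lam * ||D2 z||^2,
    where D2 is the second-difference matrix (a smoothness penalty)."""
    n = len(y)
    D2 = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
    A = (sparse.eye(n) + lam * (D2.T @ D2)).tocsc()
    return spsolve(A, y)

# Noisy synthetic "metabolic profile"; the smoothed curve can then be
# differentiated numerically for the decoupling step described above.
rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 300)
y = np.exp(-3.0 * t) + 0.05 * rng.normal(size=t.size)
z = whittaker_smooth(y, lam=50.0)
print(np.abs(z - np.exp(-3.0 * t)).max())   # smoothed curve tracks the clean signal
```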
Alternative Modal Basis Selection Procedures For Reduced-Order Nonlinear Random Response Simulation
NASA Technical Reports Server (NTRS)
Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.
2012-01-01
Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of a computationally taxing full-order analysis in physical degrees of freedom are taken as the benchmark for comparison with the results from the three reduced-order analyses. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
A numerical study on high-pressure water-spray cleaning for CSP reflectors
NASA Astrophysics Data System (ADS)
Anglani, Francesco; Barry, John; Dekkers, Willem
2016-05-01
Mirror cleaning for concentrated solar thermal (CST) systems is an important aspect of operation and maintenance (O&M), which affects solar field efficiency. The cleaning process involves soil removal by erosion, resulting from droplet impingement on the surface. Several studies have been conducted on dust accumulation and CSP plant reflectivity restoration, demonstrating that parameters such as nozzle diameter, jet impingement angle, interaxial distance between nozzles, standoff distance, water velocity, nozzle pressure and other factors influence the extent of reflectance restoration. In this paper we aim at identifying optimized cleaning strategies suitable for CST plants, able to restore mirror reflectance by high-pressure water-spray systems through the enhancement of shear stress over the reflectors' surface. In order to evaluate the forces generated by water-spray jet impingement during the cleaning process, fluid dynamics simulations have been undertaken with ANSYS CFX software. In this analysis, shear forces represent the "critical phenomena" within the soil removal process. Enhancing shear forces on a particular area of the target surface, varying the angle of impingement in combination with the variation of standoff distances, and managing the interaxial distance of nozzles can increase cleaning efficiency. This procedure intends to improve the cleaning operation for CST mirrors, reducing the spotted surface and increasing particle removal efficiency. However, turbulence developed by adjacent flows decreases the shear stress generated on the reflectors' surface. The presence of turbulence is identified by the formation of "fountain regions", which are mostly responsible for cleaning inefficiency. By numerical analysis using ANSYS CFX, we have modelled a stationary water-spray system with an array of three nozzles in line, with two angles of impingement: θ = 90° and θ = 75°. Several numerical tests have been carried out, varying the interaxial distance of nozzles, standoff distance, jet pressure and jet impingement angle in order to identify effective and efficient cleaning procedures to restore collectors' reflectance, decrease turbulence and improve CST plant efficiency. Results show that the forces generated over the flat target surface are proportional to the inlet pressure and to the water velocity over the surface, and that the shear stresses decrease as the standoff distance increases.
Reliable Early Classification on Multivariate Time Series with Numerical and Categorical Attributes
2015-05-22
design a procedure of feature extraction in REACT named MEG (Mining Equivalence classes with shapelet Generators) based on the concept of...Equivalence Classes Mining [12, 15]. MEG can efficiently and effectively generate the discriminative features. In addition, several strategies are proposed...technique of parallel computing [4] to propose a process of parallel MEG for substantially reducing the computational overhead of discovering shapelet
NASA Astrophysics Data System (ADS)
Reis, C.; Clain, S.; Figueiredo, J.; Baptista, M. A.; Miranda, J. M. A.
2015-12-01
Numerical tools turn out to be very important for scenario evaluations of hazardous phenomena such as tsunamis. Nevertheless, the predictions highly depend on the quality of the numerical tool, and the design of efficient numerical schemes still receives important attention to provide robust and accurate solutions. In this study we propose a comparative study of the efficiency of two finite volume numerical codes with second-order discretization, implemented with different methods to solve the non-conservative shallow water equations: the MUSCL (Monotonic Upstream-Centered Scheme for Conservation Laws) and the MOOD (Multi-dimensional Optimal Order Detection) methods, which optimize the accuracy of the approximation as a function of the local smoothness of the solution. The MUSCL method is based on a priori criteria where the limiting procedure is performed before updating the solution to the next time step, leading to unnecessary accuracy reduction. On the contrary, the new MOOD technique uses a posteriori detectors to prevent the solution from oscillating in the vicinity of discontinuities. Indeed, a candidate solution is computed and corrections are performed only for the cells where non-physical oscillations are detected. Using a simple one-dimensional analytical benchmark, 'Single wave on a sloping beach', we show that the classical 1D shallow-water system can be accurately solved with the finite volume method equipped with the MOOD technique, which provides a better approximation with sharper shocks and less numerical diffusion. For the code validation, we also use the Tohoku-Oki 2011 tsunami and reproduce two DART records, demonstrating that the quality of the solution may deeply interfere with the scenario one can assess. This work is funded by the Portugal-France research agreement, through the research project GEONUM FCT-ANR/MAT-NAN/0122/2012.
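A toy illustration of the a posteriori MOOD idea on 1D linear advection (not the authors' shallow-water solvers): an unlimited high-order candidate (Lax-Wendroff) is computed everywhere, cells violating a discrete maximum principle are flagged, and only those cells are recomputed with the robust first-order upwind update. The cell-wise fallback shown here ignores flux conservation for brevity, and all parameters are assumptions.

```python
import numpy as np

def advect_mood_step(u, c):
    """One MOOD-flavoured step for u_t + a u_x = 0 (periodic grid), CFL number c."""
    um, up = np.roll(u, 1), np.roll(u, -1)

    # High-order candidate (Lax-Wendroff) and robust fallback (upwind).
    cand = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2 * u + um)
    upwind = u - c * (u - um)

    # A posteriori detector: discrete maximum principle on the candidate.
    lo = np.minimum(np.minimum(um, u), up)
    hi = np.maximum(np.maximum(um, u), up)
    bad = (cand < lo - 1e-12) | (cand > hi + 1e-12)

    # Correct only the troubled cells with the first-order update.
    cand[bad] = upwind[bad]
    return cand

# Advect a square wave one full period: no spurious oscillations appear.
n, c = 200, 0.5
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)
for _ in range(int(n / c)):
    u = advect_mood_step(u, c)
print(u.min(), u.max())   # stays within [0, 1] up to round-off
```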
An efficient method for solving the steady Euler equations
NASA Technical Reports Server (NTRS)
Liou, M. S.
1986-01-01
An efficient numerical procedure for solving a set of nonlinear partial differential equations is given, specifically for the steady Euler equations. Solutions of the equations were obtained by Newton's linearization procedure, commonly used to solve the roots of nonlinear algebraic equations. In applying the same procedure to a set of differential equations we give a theorem showing that a quadratic convergence rate can be achieved. While the domain of quadratic convergence depends on the problem studied and is unknown a priori, we show that first- and second-order derivatives of the flux vectors determine whether the condition for quadratic convergence is satisfied. The first derivatives enter as an implicit operator for yielding new iterates and the second derivatives indicate the smoothness of the flows considered. Consequently, flows involving shocks are expected to require a larger number of iterations. First-order upwind discretization in conjunction with the Steger-Warming flux-vector splitting is employed on the implicit operator, and a diagonally dominant matrix results. The explicit operator, however, is represented by first- and second-order upwind differencings, using both Steger-Warming's and van Leer's splittings. We discuss the treatment of boundary conditions and solution procedures for solving the resulting block matrix system. With a set of test problems for one- and two-dimensional flows, we present a detailed study of the efficiency, accuracy, and convergence of the present method.
NASA Astrophysics Data System (ADS)
Peng, Heng; Liu, Yinghua; Chen, Haofeng
2018-05-01
In this paper, a novel direct method called the stress compensation method (SCM) is proposed for limit and shakedown analysis of large-scale elastoplastic structures. Without needing to solve a specific mathematical programming problem, the SCM is a two-level iterative procedure based on a sequence of linear elastic finite element solutions in which the global stiffness matrix is decomposed only once. In the inner loop, the statically admissible residual stress field for shakedown analysis is constructed. In the outer loop, a series of decreasing load multipliers is updated to approach the shakedown limit multiplier by using an efficient and robust iteration control technique, where the static shakedown theorem is adopted. Three numerical examples with up to about 140,000 finite element nodes confirm the applicability and efficiency of this method for two-dimensional and three-dimensional elastoplastic structures, with detailed discussions on the convergence and the accuracy of the proposed algorithm.
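The efficiency claim that the global stiffness matrix is decomposed only once can be illustrated generically: factor a fixed matrix once, then reuse the factorization for every linear solve of the iteration, so that each step costs only a back-substitution. The matrix and loads below are random stand-ins, not a finite element model.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(11)

# Stand-in for a (small, dense) global stiffness matrix: symmetric positive definite.
n = 300
K = rng.normal(size=(n, n))
K = K @ K.T + n * np.eye(n)

# Decompose once...
lu, piv = lu_factor(K)

# ...then reuse the factorization for every "compensation" load of the
# two-level iteration; only the right-hand side changes between iterations.
u = np.zeros(n)
for it in range(50):
    f = rng.normal(size=n)            # illustrative residual/compensation load
    u = lu_solve((lu, piv), f)        # cheap back-substitution, no refactorization

print(np.linalg.norm(K @ u - f))      # ~ machine precision
```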
NASA Astrophysics Data System (ADS)
Iwase, Shigeru; Futamura, Yasunori; Imakura, Akira; Sakurai, Tetsuya; Tsukamoto, Shigeru; Ono, Tomoya
2018-05-01
We propose an efficient computational method for evaluating the self-energy matrices of electrodes to study ballistic electron transport properties in nanoscale systems. To reduce the high computational cost incurred in large systems, a contour integral eigensolver based on the Sakurai-Sugiura method combined with the shifted biconjugate gradient method is developed to solve an exponential-type eigenvalue problem for complex wave vectors. A remarkable feature of the proposed algorithm is that the numerical procedure is very similar to that of conventional band structure calculations. We implement the developed method in the framework of the real-space higher-order finite-difference scheme with nonlocal pseudopotentials. Numerical tests for a wide variety of materials validate the robustness, accuracy, and efficiency of the proposed method. As an illustration of the method, we present the electron transport property of the freestanding silicene with the line defect originating from the reversed buckled phases.
Factors affecting the development of somatic cell nuclear transfer embryos in Cattle.
Akagi, Satoshi; Matsukawa, Kazutsugu; Takahashi, Seiya
2014-01-01
Nuclear transfer is a complex multistep procedure that includes oocyte maturation, cell cycle synchronization of donor cells, enucleation, cell fusion, oocyte activation and embryo culture. Therefore, many factors are believed to contribute to the success of embryo development following nuclear transfer. Numerous attempts to improve cloning efficiency have been conducted since the birth of the first sheep by somatic cell nuclear transfer. However, the efficiency of somatic cell cloning has remained low, and applications have been limited. In this review, we discuss some of the factors that affect the developmental ability of somatic cell nuclear transfer embryos in cattle.
Algorithms for elasto-plastic-creep postbuckling
NASA Technical Reports Server (NTRS)
Padovan, J.; Tovichakchaikul, S.
1984-01-01
This paper considers the development of an improved constrained time stepping scheme which can efficiently and stably handle the pre- and post-buckling behavior of general structures subject to high temperature environments. Due to the generality of the scheme, the combined influence of elastic-plastic behavior can be handled in addition to time dependent creep effects. This includes structural problems exhibiting indefinite tangent properties. To illustrate the capability of the procedure, several benchmark problems employing finite element analyses are presented. These demonstrate the numerical efficiency and stability of the scheme. Additionally, the potential influence of complex creep histories on the buckling characteristics is considered.
Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu
2015-11-11
To address the high computational cost of the traditional Kalman filter in SINS/GPS integration, a practical optimization algorithm with offline derivation and parallel processing, based on the numerical characteristics of the system, is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so that many redundant operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring the calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, the method, as a numerical approach, needs no precision-losing transformation or approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
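For reference only, the sketch below shows one dense predict/update cycle of a standard linear Kalman filter; it is not the authors' optimized SINS/GPS algorithm, and the constant-velocity model matrices are hypothetical. The dense matrix products in this baseline are exactly the operations whose sparsity and symmetry such an optimization exploits.

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, z):
    """One dense predict/update cycle of a linear Kalman filter.  Sparsity-aware
    variants avoid the zero blocks of F and H that dominate these products."""
    x_pred = F @ x                          # state prediction
    P_pred = F @ P @ F.T + Q                # covariance prediction
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = np.linalg.solve(S, H @ P_pred).T    # gain, K = P H^T S^-1 (P symmetric)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Hypothetical 4-state constant-velocity model with position-only measurements
dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1.0]])
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0.0]])
Q, R = 1e-3 * np.eye(4), 1e-2 * np.eye(2)
x, P = np.zeros(4), np.eye(4)
x, P = kalman_step(x, P, F, Q, H, R, z=np.array([0.12, -0.05]))
print(x)
```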
NASA Technical Reports Server (NTRS)
Bardina, J. E.
1994-01-01
A new computationally efficient 3-D compressible Reynolds-averaged implicit Navier-Stokes method with advanced two-equation turbulence models for high speed flows is presented. All convective terms are modeled using an entropy-satisfying higher-order Total Variation Diminishing (TVD) scheme based on implicit upwind flux-difference split approximations and an arithmetic averaging procedure for the primitive variables. This method combines the best features of data management and computational efficiency of space marching procedures with the generality and stability of time dependent Navier-Stokes procedures to solve flows with mixed supersonic and subsonic zones, including streamwise separated flows. Its robust stability derives from a combination of conservative implicit upwind flux-difference splitting with Roe's property U to provide accurate shock capturing capability that non-conservative schemes do not guarantee, an alternating symmetric Gauss-Seidel 'method of planes' relaxation procedure coupled with a three-dimensional two-factor diagonally dominant approximate factorization scheme, TVD flux limiters of higher-order flux differences satisfying realizability, and well-posed characteristic-based implicit boundary-point approximations consistent with the local characteristic domain of dependence. The efficiency of the method is greatly increased with Newton-Raphson acceleration, which allows convergence in essentially one forward sweep for supersonic flows. The method is verified by comparing with experiment and other Navier-Stokes methods. Here, results for adiabatic and cooled flat plate flows, compression corner flow, and 3-D hypersonic shock-wave/turbulent boundary layer interaction flows are presented. The robust 3-D method achieves a better computational efficiency of at least one order of magnitude over the CNS Navier-Stokes code. It provides cost-effective aerodynamic predictions in agreement with experiment, and the capability of predicting complex flow structures in complex geometries with good accuracy.
A numerically efficient damping model for acoustic resonances in microfluidic cavities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hahn, P., E-mail: hahnp@ethz.ch; Dual, J.
Bulk acoustic wave devices are typically operated in a resonant state to achieve enhanced acoustic amplitudes and high acoustofluidic forces for the manipulation of microparticles. Among other loss mechanisms related to the structural parts of acoustofluidic devices, damping in the fluidic cavity is a crucial factor that limits the attainable acoustic amplitudes. In the analytical part of this study, we quantify all relevant loss mechanisms related to the fluid inside acoustofluidic micro-devices. Subsequently, a numerical analysis of the time-harmonic visco-acoustic and thermo-visco-acoustic equations is carried out to verify the analytical results for 2D and 3D examples. The damping results are fitted into the framework of classical linear acoustics to set up a numerically efficient device model. For this purpose, all damping effects are combined into an acoustofluidic loss factor. Since some components of the acoustofluidic loss factor depend on the acoustic mode shape in the fluid cavity, we propose a two-step simulation procedure. In the first step, the loss factors are deduced from the simulated mode shape. Subsequently, a second simulation is invoked, taking all losses into account. Owing to its computational efficiency, the presented numerical device model is of great relevance for the simulation of acoustofluidic particle manipulation by means of acoustic radiation forces or acoustic streaming. For the first time, accurate 3D simulations of realistic micro-devices for the quantitative prediction of pressure amplitudes and the related acoustofluidic forces become feasible.
Investigation of a Parabolic Iterative Solver for Three-dimensional Configurations
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Watson, Willie R.; Mani, Ramani
2007-01-01
A parabolic iterative solution procedure is investigated that seeks to extend the parabolic approximation used within the internal propagation module of the duct noise propagation and radiation code CDUCT-LaRC. The governing convected Helmholtz equation is split into a set of coupled equations governing propagation in the positive and negative directions. The proposed method utilizes an iterative procedure to solve the coupled equations in an attempt to account for possible reflections from internal bifurcations, impedance discontinuities, and duct terminations. A geometry consistent with the NASA Langley Curved Duct Test Rig is considered and the effects of acoustic treatment and non-anechoic termination are included. Two numerical implementations are studied and preliminary results indicate that improved accuracy in predicted amplitude and phase can be obtained for modes at a cut-off ratio of 1.7. Further predictions for modes at a cut-off ratio of 1.1 show improvement in predicted phase at the expense of increased amplitude error. Possible methods of improvement are suggested based on analytic and numerical analysis. It is hoped that coupling the parabolic iterative approach with less efficient, high fidelity finite element approaches will ultimately provide the capability to perform efficient, higher fidelity acoustic calculations within complex 3-D geometries for impedance eduction and noise propagation and radiation predictions.
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as a static system; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and a perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
Eigenproblem solution by a combined Sturm sequence and inverse iteration technique.
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1973-01-01
Description of an efficient and numerically stable algorithm, along with a complete listing of the associated computer program, developed for the accurate computation of specified roots and associated vectors of the eigenvalue problem Aq = lambda Bq with band symmetric A and B, B being also positive-definite. The desired roots are first isolated by the Sturm sequence procedure; then a special variant of the inverse iteration technique is applied for the individual determination of each root along with its vector. The algorithm fully exploits the banded form of relevant matrices, and the associated program written in FORTRAN V for the JPL UNIVAC 1108 computer proves to be most significantly economical in comparison to similar existing procedures. The program may be conveniently utilized for the efficient solution of practical engineering problems, involving free vibration and buckling analysis of structures. Results of such analyses are presented for representative structures.
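A compact, hedged sketch of the two ingredients named above: the eigenvalue count below a shift obtained from the inertia of A − σB (the Sturm-sequence check), followed by inverse iteration for the nearest eigenpair. The dense factorizations below ignore the banded storage exploited by the original program, and the small test pair is hypothetical.

```python
import numpy as np
from scipy.linalg import ldl, lu_factor, lu_solve

def count_below(A, B, sigma):
    """Number of roots of A q = lambda B q below sigma, from the inertia of
    A - sigma*B (Sylvester's law of inertia; the Sturm-sequence check)."""
    _, D, _ = ldl(A - sigma * B)
    return int(np.sum(np.linalg.eigvalsh(D) < 0.0))

def inverse_iteration(A, B, sigma, n_iter=30):
    """Eigenpair of A q = lambda B q nearest to the shift sigma."""
    lu = lu_factor(A - sigma * B)
    q = np.random.default_rng(1).standard_normal(A.shape[0])
    for _ in range(n_iter):
        q = lu_solve(lu, B @ q)
        q /= np.linalg.norm(q)
    return (q @ A @ q) / (q @ B @ q), q     # Rayleigh quotient and vector

# Hypothetical small band-symmetric test pair (B positive definite)
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = np.eye(n)
print(count_below(A, B, sigma=1.3))         # number of roots below the shift
print(inverse_iteration(A, B, sigma=0.1))   # lowest root and its vector
```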
Making a mixed-model line more efficient and flexible by introducing a bypass line
NASA Astrophysics Data System (ADS)
Matsuura, Sho; Matsuura, Haruki; Asada, Akiko
2017-04-01
This paper provides a design procedure for the bypass subline in a mixed-model assembly line. The bypass subline is installed to reduce the effect of the large difference in operation times among products assembled together in a mixed-model line. The importance of the bypass subline has been increasing in association with the rising necessity for efficiency and flexibility in modern manufacturing. The main topics of this paper are as follows: 1) the conditions in which the bypass subline effectively functions, and 2) how the load should be distributed between the main line and the bypass subline, depending on production conditions such as degree of difference in operation times among products and the mixing ratio of products. To address these issues, we analyzed the lower and the upper bounds of the line length. Based on the results, a design procedure and a numerical example are demonstrated.
Gaussian basis functions for highly oscillatory scattering wavefunctions
NASA Astrophysics Data System (ADS)
Mant, B. P.; Law, M. M.
2018-04-01
We have applied a basis set of distributed Gaussian functions within the S-matrix version of the Kohn variational method to scattering problems involving deep potential energy wells. The Gaussian positions and widths are tailored to the potential using the procedure of Bačić and Light (1986 J. Chem. Phys. 85 4594) which has previously been applied to bound-state problems. The placement procedure is shown to be very efficient and gives scattering wavefunctions and observables in agreement with direct numerical solutions. We demonstrate the basis function placement method with applications to hydrogen atom–hydrogen atom scattering and antihydrogen atom–hydrogen atom scattering.
NASA Technical Reports Server (NTRS)
Noah, S. T.; Kim, Y. B.
1991-01-01
A general approach is developed for determining the periodic solutions and their stability of nonlinear oscillators with piecewise-smooth characteristics. A modified harmonic balance/Fourier transform procedure is devised for the analysis. The procedure avoids certain numerical differentiation employed previously in determining the periodic solutions, thereby enhancing the reliability and efficiency of the method. Stability of the solutions is determined via perturbations of their state variables. The method is applied to a forced oscillator interacting with a stop of finite stiffness. Flip and fold bifurcations are found to occur, leading to the identification of parameter ranges in which chaotic response occurs.
NASA Astrophysics Data System (ADS)
Mössinger, Peter; Jester-Zürker, Roland; Jung, Alexander
2015-01-01
Numerical investigations of hydraulic turbomachines under steady-state conditions are state of the art in current product development processes. Nevertheless, increasing computational resources allow refined discretization methods and more sophisticated turbulence models, and therefore better predictions of results as well as the quantification of existing uncertainties. Single-stage investigations are carried out using in-house tools for meshing and the set-up procedure. Besides different model domains and a mesh study to reduce mesh dependencies, several eddy viscosity and Reynolds stress turbulence models are investigated. All obtained results are compared with available model test data. In addition to global values, measured pressure and velocity magnitudes in the vaneless space and at runner blade and draft tube positions are considered. From this, it is possible to estimate the influence and relevance of the various model domains for different operating points and numerical variations. Good agreement with the pressure and velocity measurements is found for all model configurations and for all turbulence models except the BSL-RSM model. At part load, deviations in hydraulic efficiency are large, whereas at the best-efficiency and high-load operating points the efficiencies are close to the measurement. Considering the runner side gap geometry as well as a refined mesh improves the results with respect to either hydraulic efficiency or velocity distribution, at the cost of less stable numerics and increased computational time.
Schumann, Marcel; Armen, Roger S
2013-05-30
Molecular docking of small-molecules is an important procedure for computer-aided drug design. Modeling receptor side chain flexibility is often important or even crucial, as it allows the receptor to adopt new conformations as induced by ligand binding. However, the accurate and efficient incorporation of receptor side chain flexibility has proven to be a challenge due to the huge computational complexity required to adequately address this problem. Here we describe a new docking approach with a very fast, graph-based optimization algorithm for assignment of the near-optimal set of residue rotamers. We extensively validate our approach using the 40 DUD target benchmarks commonly used to assess virtual screening performance and demonstrate a large improvement using the developed side chain optimization over rigid receptor docking (average ROC AUC of 0.693 vs. 0.623). Compared to numerous benchmarks, the overall performance is better than nearly all other commonly used procedures. Furthermore, we provide a detailed analysis of the level of receptor flexibility observed in docking results for different classes of residues and elucidate potential avenues for further improvement. Copyright © 2013 Wiley Periodicals, Inc.
Assessment of Linear Finite-Difference Poisson-Boltzmann Solvers
Wang, Jun; Luo, Ray
2009-01-01
CPU time and memory usage are two vital issues that any numerical solvers for the Poisson-Boltzmann equation have to face in biomolecular applications. In this study we systematically analyzed the CPU time and memory usage of five commonly used finite-difference solvers with a large and diversified set of biomolecular structures. Our comparative analysis shows that modified incomplete Cholesky conjugate gradient and geometric multigrid are the most efficient in the diversified test set. For the two efficient solvers, our test shows that their CPU times increase approximately linearly with the numbers of grids. Their CPU times also increase almost linearly with the negative logarithm of the convergence criterion, at very similar rates. Our comparison further shows that geometric multigrid performs better in the large set of tested biomolecules. However, modified incomplete Cholesky conjugate gradient is superior to geometric multigrid in molecular dynamics simulations of tested molecules. We also investigated other significant components in numerical solutions of the Poisson-Boltzmann equation. It turns out that the time-limiting step is the free boundary condition setup for the linear systems for the selected proteins if electrostatic focusing is not used. Thus, development of future numerical solvers for the Poisson-Boltzmann equation should balance all aspects of the numerical procedures in realistic biomolecular applications. PMID:20063271
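To make the comparison concrete, the sketch below assembles the standard 7-point finite-difference Laplacian (the κ = 0 limit of the linearized Poisson-Boltzmann operator) and solves it with conjugate gradient preconditioned by an incomplete LU factorization, used here only as a stand-in for the modified incomplete Cholesky preconditioner discussed above; the grid size, right-hand side, and drop tolerance are arbitrary.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson3d(n):
    """7-point finite-difference Laplacian on an n x n x n grid (unit spacing)."""
    one_d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    eye = sp.identity(n)
    return (sp.kron(sp.kron(one_d, eye), eye)
            + sp.kron(sp.kron(eye, one_d), eye)
            + sp.kron(sp.kron(eye, eye), one_d)).tocsc()

n = 24
A = poisson3d(n)                              # ~13,800 unknowns
b = np.ones(A.shape[0])                       # placeholder source/charge vector

ilu = spla.spilu(A, drop_tol=1e-3)            # incomplete-factorization preconditioner
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

n_iter = []
x, info = spla.cg(A, b, M=M, callback=lambda xk: n_iter.append(1))
print("converged:", info == 0, " iterations:", len(n_iter),
      " residual norm:", np.linalg.norm(A @ x - b))
```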
NASA Technical Reports Server (NTRS)
Stricklin, J. A.; Haisler, W. E.; Von Riesemann, W. A.
1972-01-01
This paper presents an assessment of the solution procedures available for the analysis of inelastic and/or large deflection structural behavior. A literature survey is given which summarizes the contributions of other researchers to the analysis of structural problems exhibiting material nonlinearities and combined geometric-material nonlinearities. Attention is focused on evaluating the available computation and solution techniques. Each of the solution techniques is developed from a common equation of equilibrium in terms of pseudo forces. The solution procedures are applied to circular plates and shells of revolution in an attempt to compare and evaluate each with respect to computational accuracy, economy, and efficiency. Based on the numerical studies, observations and comments are made with regard to the accuracy and economy of each solution technique.
Yang, Tao; Sezer, Hayri; Celik, Ismail B.; ...
2015-06-02
In the present paper, a physics-based procedure combining experiments and multi-physics numerical simulations is developed for overall analysis of SOFC operational diagnostics and performance prediction. In this procedure, essential information for the fuel cell is extracted first by utilizing empirical polarization analysis in conjunction with experiments and refined by multi-physics numerical simulations via simultaneous analysis and calibration of the polarization curve and impedance behavior. The performance at different utilization cases and operating currents is also predicted to confirm the accuracy of the proposed model. It is demonstrated that, with the present electrochemical model, three air/fuel flow conditions are needed to produce a set of complete data for better understanding of the processes occurring within SOFCs. After calibration against button cell experiments, the methodology can be used to assess the performance of planar cells without further calibration. The proposed methodology would accelerate the calibration process and improve the efficiency of design and diagnostics.
NASA Astrophysics Data System (ADS)
Dang, Jie; Chen, Hao
2016-12-01
The methodology and procedures are discussed for designing merchant ships to achieve fully-integrated and optimized hull-propulsion systems by using asymmetric aftbodies. Computational fluid dynamics (CFD) has been used to evaluate the powering performance through massive calculations with automatic deformation algorithms for the hull forms and the propeller blades. Comparative model tests of the designs against the optimized symmetric hull forms have been carried out to verify the efficiency gain. More than 6% improvement in the propulsive efficiency of an oil tanker has been measured during the model tests. Dedicated sea trials show good agreement with the performance predicted from the test results.
Factors Affecting the Development of Somatic Cell Nuclear Transfer Embryos in Cattle
AKAGI, Satoshi; MATSUKAWA, Kazutsugu; TAKAHASHI, Seiya
2014-01-01
Nuclear transfer is a complex multistep procedure that includes oocyte maturation, cell cycle synchronization of donor cells, enucleation, cell fusion, oocyte activation and embryo culture. Therefore, many factors are believed to contribute to the success of embryo development following nuclear transfer. Numerous attempts to improve cloning efficiency have been conducted since the birth of the first sheep by somatic cell nuclear transfer. However, the efficiency of somatic cell cloning has remained low, and applications have been limited. In this review, we discuss some of the factors that affect the developmental ability of somatic cell nuclear transfer embryos in cattle. PMID:25341701
NASA Technical Reports Server (NTRS)
Crook, Andrew J.; Delaney, Robert A.
1992-01-01
The computer program user's manual for the ADPACAPES (Advanced Ducted Propfan Analysis Code-Average Passage Engine Simulation) program is included. The objective of the computer program is development of a three-dimensional Euler/Navier-Stokes flow analysis for fan section/engine geometries containing multiple blade rows and multiple spanwise flow splitters. An existing procedure developed by Dr. J. J. Adamczyk and associates at the NASA Lewis Research Center was modified to accept multiple spanwise splitter geometries and simulate engine core conditions. The numerical solution is based upon a finite volume technique with a four stage Runge-Kutta time marching procedure. Multiple blade row solutions are based upon the average-passage system of equations. The numerical solutions are performed on an H-type grid system, with meshes meeting the requirement of maintaining a common axisymmetric mesh for each blade row grid. The analysis was run on several geometry configurations ranging from one to five blade rows and from one to four radial flow splitters. The efficiency of the solution procedure was shown to be the same as the original analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miliordos, Evangelos; Xantheas, Sotiris S.
We propose a general procedure for the numerical calculation of the harmonic vibrational frequencies that is based on internal coordinates and Wilson's GF methodology via double differentiation of the energy. The internal coordinates are defined as the geometrical parameters of a Z-matrix structure, thus avoiding issues related to their redundancy. Linear arrangements of atoms are described using a dummy atom of infinite mass. The procedure has been automated in FORTRAN90 and its main advantage lies in the nontrivial reduction of the number of single-point energy calculations needed for the construction of the Hessian matrix when compared to the corresponding number using double differentiation in Cartesian coordinates. For molecules of C1 symmetry the computational savings in the energy calculations amount to 36N - 30, where N is the number of atoms, with additional savings when symmetry is present. Typical applications for small and medium size molecules in their minimum and transition state geometries as well as hydrogen bonded clusters (water dimer and trimer) are presented. Finally, in all cases the frequencies based on internal coordinates differ on average by <1 cm^-1 from those obtained from Cartesian coordinates.
Systems identification technology development for large space systems
NASA Technical Reports Server (NTRS)
Armstrong, E. S.
1982-01-01
A methodology for synthesizing systems identification (both parameter and state estimation) and related control schemes for flexible aerospace structures is developed, with emphasis on the Maypole hoop column antenna as a real world application. Modeling studies of the Maypole cable hoop membrane type antenna are conducted using a transfer matrix numerical analysis approach. This methodology was chosen as particularly well suited for handling a large number of antenna configurations of a generic type. A dedicated transfer matrix analysis, both by virtue of its specialization and the inherently easy compartmentalization of the formulation and numerical procedures, is significantly more efficient not only in the computer time required but, more importantly, in the time needed to review and interpret the results.
Simple numerical method for predicting steady compressible flows
NASA Technical Reports Server (NTRS)
Vonlavante, Ernst; Nelson, N. Duane
1986-01-01
A numerical method for solving the isenthalpic form of the governing equations for compressible viscous and inviscid flows was developed. The method was based on the concept of flux vector splitting in its implicit form. The method was tested on several demanding inviscid and viscous configurations. Two different forms of the implicit operator were investigated. The time marching to steady state was accelerated by the implementation of the multigrid procedure. Its various forms very effectively increased the rate of convergence of the present scheme. High quality steady state results were obtained in most of the test cases; these required only short computational times due to the relative efficiency of the basic method.
A time-accurate algorithm for chemical non-equilibrium viscous flows at all speeds
NASA Technical Reports Server (NTRS)
Shuen, J.-S.; Chen, K.-H.; Choi, Y.
1992-01-01
A time-accurate, coupled solution procedure is described for the chemical nonequilibrium Navier-Stokes equations over a wide range of Mach numbers. This method employs the strong conservation form of the governing equations, but uses primitive variables as unknowns. Real gas properties and equilibrium chemistry are considered. Numerical tests include steady convergent-divergent nozzle flows with air dissociation/recombination chemistry, dump combustor flows with n-pentane-air chemistry, nonreacting flow in a model double annular combustor, and nonreacting unsteady driven cavity flows. Numerical results for both the steady and unsteady flows demonstrate the efficiency and robustness of the present algorithm for Mach numbers ranging from the incompressible limit to supersonic speeds.
NASA Technical Reports Server (NTRS)
Dorsey, John T.; Mikulas, Martin M.; Doggett, William R.
2008-01-01
The mass and sizing characteristics of manipulators for Lunar and Mars planetary surface applications are investigated by analyzing three structural configurations: a simple cantilevered boom with a square tubular cross-section; a hybrid cable/boom configuration with a square tubular cross-section support structure; and a hybrid cable/boom configuration with a square truss cross-section support structure. Design procedures are developed for the three configurations and numerical examples are given. A new set of performance parameters are developed that relate the mass of manipulators and cranes to a loading parameter. These parameters enable the masses of different manipulator configurations to be compared over a wide range of design loads and reach envelopes (radii). The use of these parameters is demonstrated in the form of a structural efficiency chart using the newly considered manipulator configurations. To understand the performance of Lunar and Mars manipulators, the design procedures were exercised on the three manipulator configurations assuming graphite/epoxy materials for the tubes and trusses. It is also assumed that the actuators are electric motor, gear reduction systems. Numerical results for manipulator masses and sizes are presented for a variety of manipulator reach and payload mass capabilities. Results are presented that demonstrate the sensitivity of manipulator mass to operational radius, tip force, and actuator efficiency. The effect of the value of gravitational force on the ratio of manipulator-mass to payload-mass is also shown. Finally, results are presented to demonstrate the relative mass reduction for the use of graphite/epoxy compared to aluminum for the support structure.
Response of a Rotating Propeller to Aerodynamic Excitation
NASA Technical Reports Server (NTRS)
Arnoldi, Walter E.
1949-01-01
The flexural vibration of a rotating propeller blade with clamped shank is analyzed with the object of presenting, in matrix form, equations for the elastic bending moments in forced vibration resulting from aerodynamic forces applied at a fixed multiple of rotational speed. Matrix equations are also derived which define the critical speeds and mode shapes for any excitation order and the relation between critical speed and blade angle. Reference is given to standard works on the numerical solution of matrix equations of the forms derived. The use of a segmented blade as an approximation to a continuous blade provides a simple means for obtaining the matrix solution from the integral equation of equilibrium, so that, in the numerical application of the method presented, the several matrix arrays of the basic physical characteristics of the propeller blade are of simple form, and their simplicity is preserved until, with the solution in sight, numerical manipulations well-known in matrix algebra yield the desired critical speeds and mode shapes from which the vibration at any operating condition may be synthesized. A close correspondence between the familiar Stodola method and the matrix method is pointed out, indicating that any features of novelty are characteristic not of the analytical procedure but only of the abbreviation, condensation, and efficient organization of the numerical procedure made possible by the use of classical matrix theory.
Full-degrees-of-freedom frequency based substructuring
NASA Astrophysics Data System (ADS)
Drozg, Armin; Čepon, Gregor; Boltežar, Miha
2018-01-01
Dividing the whole system into multiple subsystems and a separate dynamic analysis is common practice in the field of structural dynamics. The substructuring process improves the computational efficiency and enables an effective realization of the local optimization, modal updating and sensitivity analyses. This paper focuses on frequency-based substructuring methods using experimentally obtained data. An efficient substructuring process has already been demonstrated using numerically obtained frequency-response functions (FRFs). However, the experimental process suffers from several difficulties, among which, many of them are related to the rotational degrees of freedom. Thus, several attempts have been made to measure, expand or combine numerical correction methods in order to obtain a complete response model. The proposed methods have numerous limitations and are not yet generally applicable. Therefore, in this paper an alternative approach based on experimentally obtained data only, is proposed. The force-excited part of the FRF matrix is measured with piezoelectric translational and rotational direct accelerometers. The incomplete moment-excited part of the FRF matrix is expanded, based on the modal model. The proposed procedure is integrated in a Lagrange Multiplier Frequency Based Substructuring method and demonstrated on a simple beam structure, where the connection coordinates are mainly associated with the rotational degrees of freedom.
Implementation of Preconditioned Dual-Time Procedures in OVERFLOW
NASA Technical Reports Server (NTRS)
Pandya, Shishir A.; Venkateswaran, Sankaran; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2003-01-01
Preconditioning methods have become the method of choice for the solution of flowfields involving the simultaneous presence of low Mach and transonic regions. It is well known that these methods are important for ensuring accurate numerical discretization as well as convergence efficiency over various operating conditions such as low Mach number, low Reynolds number and high Strouhal numbers. For unsteady problems, the preconditioning is introduced within a dual-time framework wherein the physical time-derivatives are used to march the unsteady equations and the preconditioned time-derivatives are used for purposes of numerical discretization and iterative solution. In this paper, we describe the implementation of the preconditioned dual-time methodology in the OVERFLOW code. To demonstrate the performance of the method, we employ both simple and practical unsteady flowfields, including vortex propagation in a low Mach number flow, the flowfield of an impulsively started plate (Stokes' first problem) and a cylindrical jet in a low Mach number crossflow with ground effect. All the results demonstrate that the preconditioning algorithm is responsible for improvements to both numerical accuracy and convergence efficiency and, thereby, enables low Mach number unsteady computations to be performed at a fraction of the cost of traditional time-marching methods.
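The dual-time idea can be illustrated on a scalar model problem (not the preconditioned Navier-Stokes equations in OVERFLOW): each implicit physical time step is converged by an inner pseudo-time iteration that drives the unsteady residual to zero. The model equation, step sizes, and pseudo-time step in the sketch below are hypothetical; a real preconditioner would rescale the pseudo-time update per equation.

```python
def f(u):
    return -u ** 2                       # hypothetical model problem, u' = -u^2

def dual_time_step(u_n, dt, dtau=0.1, tol=1e-12, max_inner=500):
    """One backward-Euler physical step converged by explicit pseudo-time
    iteration: the unsteady residual R(u) = (u - u_n)/dt - f(u) is driven
    to zero in pseudo-time before advancing the physical time."""
    u = u_n
    for _ in range(max_inner):
        R = (u - u_n) / dt - f(u)
        if abs(R) < tol:
            break
        u -= dtau * R                    # pseudo-time (inner) update
    return u

u, t, dt = 1.0, 0.0, 0.1
while t < 1.0 - 1e-12:
    u = dual_time_step(u, dt)
    t += dt
print(u, 1.0 / (1.0 + t))                # numerical vs. exact solution at t = 1
```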
The Elastic Behaviour of Sintered Metallic Fibre Networks: A Finite Element Study by Beam Theory
Bosbach, Wolfram A.
2015-01-01
Background The finite element method has complemented research in the field of network mechanics in the past years in numerous studies about various materials. Numerical predictions and the planning efficiency of experimental procedures are two of the motivational aspects for these numerical studies. The widespread availability of high performance computing facilities has been the enabler for the simulation of sufficiently large systems. Objectives and Motivation In the present study, finite element models were built for sintered, metallic fibre networks and validated by previously published experimental stiffness measurements. The validated models were the basis for predictions about so far unknown properties. Materials and Methods The finite element models were built by transferring previously published skeletons of fibre networks into finite element models. Beam theory was applied as a simplification method. Results and Conclusions The obtained material stiffness is not a constant but rather a function of variables such as sample size and boundary conditions. Beam theory offers an efficient finite element method for the simulated fibre networks. The experimental results can be approximated by the simulated systems. Two worthwhile aspects for future work will be the influence of size and shape and the mechanical interaction with matrix materials. PMID:26569603
Hybrid pathwise sensitivity methods for discrete stochastic models of chemical reaction systems.
Wolf, Elizabeth Skubak; Anderson, David F
2015-01-21
Stochastic models are often used to help understand the behavior of intracellular biochemical processes. The most common such models are continuous time Markov chains (CTMCs). Parametric sensitivities, which are derivatives of expectations of model output quantities with respect to model parameters, are useful in this setting for a variety of applications. In this paper, we introduce a class of hybrid pathwise differentiation methods for the numerical estimation of parametric sensitivities. The new hybrid methods combine elements from the three main classes of procedures for sensitivity estimation and have a number of desirable qualities. First, the new methods are unbiased for a broad class of problems. Second, the methods are applicable to nearly any physically relevant biochemical CTMC model. Third, and as we demonstrate on several numerical examples, the new methods are quite efficient, particularly if one wishes to estimate the full gradient of parametric sensitivities. The methods are rather intuitive and utilize the multilevel Monte Carlo philosophy of splitting an expectation into separate parts and handling each in an efficient manner.
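For context, the sketch below simulates a simple birth-death CTMC with Gillespie's direct method and estimates a parametric sensitivity by finite differences with a shared random seed (a crude common-random-numbers device); this is the kind of baseline the hybrid pathwise estimators improve upon, and the rates, horizon, and sample size are hypothetical.

```python
import numpy as np

def birth_death_mean(birth, death, t_end=10.0, n_paths=20000, seed=1):
    """Mean population at t_end for a birth-death CTMC (constant birth rate,
    per-capita death rate), estimated with Gillespie's direct method."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(n_paths):
        x, t = 0, 0.0
        while True:
            a_birth, a_death = birth, death * x
            a0 = a_birth + a_death
            t += rng.exponential(1.0 / a0)
            if t > t_end:
                break
            x += 1 if rng.random() * a0 < a_birth else -1
        total += x
    return total / n_paths

# Finite-difference sensitivity of E[X(t_end)] w.r.t. the birth rate; reusing
# the seed partially couples the two estimates and reduces the variance.
h = 0.1
sens = (birth_death_mean(1.0 + h, 0.5) - birth_death_mean(1.0 - h, 0.5)) / (2 * h)
print("finite-difference sensitivity:", sens,
      " analytic value:", (1.0 - np.exp(-0.5 * 10.0)) / 0.5)
```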
Adaptive θ-methods for pricing American options
NASA Astrophysics Data System (ADS)
Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran
2008-12-01
We develop adaptive θ-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. Standard implementation of θ-methods would require a Newton-type iterative procedure at each time step, thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which θ-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
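The authors' linearly implicit penalty formulation is not reproduced here; instead, the sketch below implements a plain θ-scheme (Crank-Nicolson for θ = 1/2) for a single-asset American put, enforcing the early-exercise constraint by projection after each banded implicit solve. All market and grid parameters are hypothetical.

```python
import numpy as np
from scipy.linalg import solve_banded

def american_put_theta(K=100.0, r=0.05, sigma=0.2, T=1.0,
                       S_max=300.0, N=300, M=200, theta=0.5):
    """theta-scheme (Crank-Nicolson for theta = 0.5) for an American put; the
    early-exercise constraint is enforced by projection after each solve."""
    dS, dt = S_max / N, T / M
    i = np.arange(1, N)                              # interior nodes, S_i = i*dS
    a = 0.5 * sigma**2 * i**2 - 0.5 * r * i          # sub-diagonal coefficients
    b = -(sigma**2 * i**2 + r)                       # diagonal coefficients
    c = 0.5 * sigma**2 * i**2 + 0.5 * r * i          # super-diagonal coefficients
    S = np.linspace(0.0, S_max, N + 1)
    payoff = np.maximum(K - S, 0.0)
    V = payoff.copy()
    ab = np.zeros((3, N - 1))                        # banded form of I - theta*dt*L
    ab[0, 1:] = -theta * dt * c[:-1]
    ab[1, :] = 1.0 - theta * dt * b
    ab[2, :-1] = -theta * dt * a[1:]
    for _ in range(M):
        rhs = V[1:-1] + (1 - theta) * dt * (a * V[:-2] + b * V[1:-1] + c * V[2:])
        rhs[0] += theta * dt * a[0] * K              # boundary value V(0, tau) = K
        V[1:-1] = np.maximum(solve_banded((1, 1), ab, rhs), payoff[1:-1])
        V[0], V[-1] = K, 0.0
    return S, V

S, V = american_put_theta()
print(np.interp(100.0, S, V))   # at-the-money value, roughly 6.1 for these inputs
```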
NASA Astrophysics Data System (ADS)
Dupraz, K.; Cassou, K.; Martens, A.; Zomer, F.
2015-10-01
The ABCD matrix for parabolic reflectors is derived for any incident angles. It is used in numerical studies of four-mirror cavities composed of two flat and two parabolic mirrors. Constraints related to laser beam injection efficiency, optical stability, cavity-mode, beam-waist size and high stacking power are satisfied. A dedicated alignment procedure leading to stigmatic cavity-modes is employed to overcome issues related to the optical alignment of parabolic reflectors.
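As a hedged illustration of the ABCD bookkeeping (normal incidence only, not the arbitrary-angle matrices derived in the paper), the sketch below composes the round-trip matrix of a ring cavity with two flat mirrors (identity matrices) and two parabolic mirrors, checks the standard stability criterion, and evaluates the self-consistent Gaussian-mode radius; the focal length, path lengths, and wavelength are hypothetical.

```python
import numpy as np

def free_space(d):
    return np.array([[1.0, d], [0.0, 1.0]])

def focusing_mirror(f):
    """Parabolic (or spherical) mirror at normal incidence, focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Hypothetical ring cavity: two flat mirrors (identity matrices, omitted) and
# two parabolic mirrors of focal length f separated by L_short, with the
# remaining path length L_long closing the ring.
f, L_short, L_long, wavelength = 0.25, 0.55, 3.0, 1.03e-6

M = free_space(L_long) @ focusing_mirror(f) @ free_space(L_short) @ focusing_mirror(f)
A, B, C, D = M.ravel()

g = 0.5 * (A + D)                                # round-trip stability parameter
print("stable:", abs(g) < 1.0)

if abs(g) < 1.0:
    # Self-consistent Gaussian mode: q = (A q + B) / (C q + D), Im(q) > 0
    q = ((A - D) + 1j * np.sqrt(4.0 - (A + D) ** 2)) / (2.0 * C)
    if q.imag < 0:
        q = np.conj(q)
    w = np.sqrt(wavelength / (np.pi * abs((-1.0 / q).imag)))
    print("mode radius at the reference plane: %.3e m" % w)
```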
NASA Astrophysics Data System (ADS)
Barone, Alessandro; Fenton, Flavio; Veneziani, Alessandro
2017-09-01
An accurate estimation of cardiac conductivities is critical in computational electro-cardiology, yet experimental results in the literature significantly disagree on the values and ratios between longitudinal and tangential coefficients. These are known to have a strong impact on the propagation of the potential, particularly during defibrillation shocks. Data assimilation is a procedure for merging experimental data and numerical simulations in a rigorous way. In particular, variational data assimilation relies on the least-square minimization of the misfit between simulations and experiments, constrained by the underlying mathematical model, which in this study is represented by the classical Bidomain system, or its common simplification given by the Monodomain problem. Operating on the conductivity tensors as control variables of the minimization, we obtain a parameter estimation procedure. As the theory of this approach currently provides only an existence proof and it is not informative for practical experiments, we present here an extensive numerical simulation campaign to assess practical critical issues such as the size and the location of the measurement sites needed for in silico test cases of potential experimental and realistic settings. This will be finalized with a real validation of the variational data assimilation procedure. Results indicate the presence of lower and upper bounds for the number of sites which guarantee an accurate and minimally redundant parameter estimation, the location of sites being generally noncritical for properly designed experiments. An effective combination of parameter estimation based on the Monodomain and Bidomain models is tested for the sake of computational efficiency. Parameter estimation based on the Monodomain equation potentially leads to the accurate computation of the transmembrane potential in real settings.
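A drastically simplified analogue of the variational estimation loop described above: the sketch below recovers a scalar conductivity in a 1D diffusion model (a toy stand-in for the monodomain equation) by least-squares minimization of the misfit at a few measurement sites; the model, site locations, and noise level are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def forward(D, nx=51, t_end=0.5, dt=1e-3):
    """1D diffusion u_t = D u_xx on [0, 1], zero Dirichlet boundaries, explicit
    finite differences; a toy stand-in for the monodomain model."""
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    u = np.exp(-100.0 * (x - 0.5) ** 2)              # initial "activation" bump
    for _ in range(int(round(t_end / dt))):
        u[1:-1] += D * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return x, u

# Synthetic "measurements" at three sites, generated with the true conductivity
D_true, sites = 0.10, np.array([0.25, 0.50, 0.75])
x, u_true = forward(D_true)
data = np.interp(sites, x, u_true) + 1e-3 * np.random.default_rng(3).standard_normal(3)

def misfit(p):
    xg, u = forward(p[0])
    return np.interp(sites, xg, u) - data            # residuals at measurement sites

fit = least_squares(misfit, x0=[0.05], bounds=(0.01, 0.18))
print("estimated conductivity:", fit.x[0], " true value:", D_true)
```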
Numerical Boundary Condition Procedures
NASA Technical Reports Server (NTRS)
1981-01-01
Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.
Numerical Procedures for Inlet/Diffuser/Nozzle Flows
NASA Technical Reports Server (NTRS)
Rubin, Stanley G.
1998-01-01
Two primitive variable, pressure based, flux-split RNS/NS solution procedures for viscous flows are presented. Both methods are uniformly valid across the full Mach number range, i.e., from the incompressible limit to high supersonic speeds. The first method is an 'optimized' version of a previously developed global pressure relaxation RNS procedure. A considerable reduction in the number of relatively expensive matrix inversions, and thereby in the computational time, has been achieved with this procedure. CPU times are reduced by a factor of 15 for predominantly elliptic flows (incompressible and low subsonic). The second method is a time-marching, 'linearized' convection RNS/NS procedure. The key to the efficiency of this procedure is the reduction to a single LU inversion at the inflow cross-plane. The remainder of the algorithm simply requires back-substitution with this LU and the corresponding residual vector at any cross-plane location. This method is not time-consistent, but has a convective-type CFL stability limitation. Both formulations are robust and provide accurate solutions for a variety of internal viscous flows, as shown herein.
Staggered solution procedures for multibody dynamics simulation
NASA Technical Reports Server (NTRS)
Park, K. C.; Chiou, J. C.; Downer, J. D.
1990-01-01
The numerical solution procedure for multibody dynamics (MBD) systems is termed a staggered MBD solution procedure that solves the generalized coordinates in a separate module from that for the constraint force. This requires a reformulation of the constraint conditions so that the constraint forces can also be integrated in time. A major advantage of such a partitioned solution procedure is that additional analysis capabilities such as active controller and design optimization modules can be easily interfaced without embedding them into a monolithic program. After introducing the basic equations of motion for MBD system in the second section, Section 3 briefly reviews some constraint handling techniques and introduces the staggered stabilized technique for the solution of the constraint forces as independent variables. The numerical direct time integration of the equations of motion is described in Section 4. As accurate damping treatment is important for the dynamics of space structures, we have employed the central difference method and the mid-point form of the trapezoidal rule since they engender no numerical damping. This is in contrast to the current practice in dynamic simulations of ground vehicles by employing a set of backward difference formulas. First, the equations of motion are partitioned according to the translational and the rotational coordinates. This sets the stage for an efficient treatment of the rotational motions via the singularity-free Euler parameters. The resulting partitioned equations of motion are then integrated via a two-stage explicit stabilized algorithm for updating both the translational coordinates and angular velocities. Once the angular velocities are obtained, the angular orientations are updated via the mid-point implicit formula employing the Euler parameters. When the two algorithms, namely, the two-stage explicit algorithm for the generalized coordinates and the implicit staggered procedure for the constraint Lagrange multipliers, are brought together in a staggered manner, they constitute a staggered explicit-implicit procedure which is summarized in Section 5. Section 6 presents some example problems and discussions concerning several salient features of the staggered MBD solution procedure are offered in Section 7.
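To make the no-numerical-damping point concrete, the hedged sketch below integrates an undamped oscillator with the velocity-Verlet form of the central-difference scheme and checks that the discrete energy stays bounded over many periods; the frequency and step size are arbitrary.

```python
import numpy as np

def leapfrog_energy(omega=2.0 * np.pi, dt=1e-3, n_steps=100000):
    """Central-difference (velocity-Verlet) integration of x'' = -omega^2 x.
    The scheme is explicit and introduces no numerical damping, so the
    discrete energy stays bounded instead of decaying."""
    x, v = 1.0, 0.0
    energy = np.empty(n_steps)
    for k in range(n_steps):
        v -= 0.5 * dt * omega**2 * x     # half kick
        x += dt * v                      # drift
        v -= 0.5 * dt * omega**2 * x     # half kick
        energy[k] = 0.5 * v**2 + 0.5 * omega**2 * x**2
    return energy

E = leapfrog_energy()
print("relative energy variation over 100 periods: %.2e" % (E.max() / E.min() - 1.0))
```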
Risk Classification with an Adaptive Naive Bayes Kernel Machine Model.
Minnier, Jessica; Yuan, Ming; Liu, Jun S; Cai, Tianxi
2015-04-22
Genetic studies of complex traits have uncovered only a small number of risk markers explaining a small fraction of heritability and adding little improvement to disease risk prediction. Standard single marker methods may lack power in selecting informative markers or estimating effects. Most existing methods also typically do not account for non-linearity. Identifying markers with weak signals and estimating their joint effects among many non-informative markers remains challenging. One potential approach is to group markers based on biological knowledge such as gene structure. If markers in a group tend to have similar effects, proper usage of the group structure could improve power and efficiency in estimation. We propose a two-stage method relating markers to disease risk by taking advantage of known gene-set structures. Imposing a naive Bayes kernel machine (KM) model, we estimate gene-set specific risk models that relate each gene-set to the outcome in stage I. The KM framework efficiently models potentially non-linear effects of predictors without requiring explicit specification of functional forms. In stage II, we aggregate information across gene-sets via a regularization procedure. Estimation and computational efficiency are further improved with kernel principal component analysis. Asymptotic results for model estimation and gene set selection are derived and numerical studies suggest that the proposed procedure could outperform existing procedures for constructing genetic risk models.
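A minimal two-stage sketch in the spirit of the procedure above (not the authors' naive Bayes KM estimator): stage I fits one RBF kernel ridge risk score per hypothetical gene set, and stage II aggregates the scores with a ridge-penalized logistic model. The data are simulated, and in practice cross-fitting would be used to avoid the in-sample bias of the stage I scores.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
gene_sets = [np.arange(0, 10), np.arange(10, 20), np.arange(20, 30)]
X = rng.standard_normal((n, 30))
# Simulated outcome: only the first gene set acts, and non-linearly
y = (np.sin(X[:, 0]) + X[:, 1] ** 2 - 1.0 + 0.5 * rng.standard_normal(n) > 0).astype(int)
train, test = np.arange(0, 300), np.arange(300, n)

# Stage I: one RBF kernel ridge risk score per gene set
models = [KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1)
          .fit(X[np.ix_(train, g)], y[train]) for g in gene_sets]
S_train = np.column_stack([m.predict(X[np.ix_(train, g)]) for m, g in zip(models, gene_sets)])
S_test = np.column_stack([m.predict(X[np.ix_(test, g)]) for m, g in zip(models, gene_sets)])

# Stage II: ridge-penalized aggregation of the gene-set scores
agg = LogisticRegression(C=1.0).fit(S_train, y[train])
print("test AUC:", roc_auc_score(y[test], agg.predict_proba(S_test)[:, 1]))
print("stage-II weights:", agg.coef_)   # the informative set typically dominates
```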
Non-adiabatic molecular dynamics by accelerated semiclassical Monte Carlo
White, Alexander J.; Gorshkov, Vyacheslav N.; Tretiak, Sergei; ...
2015-07-07
Non-adiabatic dynamics, where systems non-radiatively transition between electronic states, plays a crucial role in many photo-physical processes, such as fluorescence, phosphorescence, and photoisomerization. Methods for the simulation of non-adiabatic dynamics are typically either numerically impractical, highly complex, or based on approximations which can result in failure for even simple systems. Recently, the Semiclassical Monte Carlo (SCMC) approach was developed in an attempt to combine the accuracy of rigorous semiclassical methods with the efficiency and simplicity of widely used surface hopping methods. However, while SCMC was found to be more efficient than other semiclassical methods, it is not yet as efficient as is needed for large molecular systems. Here, we have developed two new methods: the accelerated-SCMC and the accelerated-SCMC with re-Gaussianization, which reduce the cost of the SCMC algorithm by up to two orders of magnitude for certain systems. In many cases shown here, the new procedures are nearly as efficient as the commonly used surface hopping schemes, with little to no loss of accuracy. This implies that these modified SCMC algorithms will provide practical numerical tools for simulating non-adiabatic dynamics in realistic molecular systems.
NASA Astrophysics Data System (ADS)
Zheng, Chang-Jun; Gao, Hai-Feng; Du, Lei; Chen, Hai-Bo; Zhang, Chuanzeng
2016-01-01
An accurate numerical solver is developed in this paper for eigenproblems governed by the Helmholtz equation and formulated through the boundary element method. A contour integral method is used to convert the nonlinear eigenproblem into an ordinary eigenproblem, so that eigenvalues can be extracted accurately by solving a set of standard boundary element systems of equations. In order to accelerate the solution procedure, the parameters affecting the accuracy and efficiency of the method are studied and two contour paths are compared. Moreover, a wideband fast multipole method is implemented with a block IDR(s) solver to reduce the overall solution cost of the boundary element systems of equations with multiple right-hand sides. The Burton-Miller formulation is employed to identify the fictitious eigenfrequencies of the interior acoustic problems with multiply connected domains. The actual effect of the Burton-Miller formulation on tackling the fictitious eigenfrequency problem is investigated and the optimal choice of the coupling parameter as α = i/k is confirmed through exterior sphere examples. Furthermore, the numerical eigenvalues obtained by the developed method are compared with the results obtained by the finite element method to show the accuracy and efficiency of the developed method.
Numerical simulation of heat transfer and fluid flow in laser drilling of metals
NASA Astrophysics Data System (ADS)
Zhang, Tingzhong; Ni, Chenyin; Zhou, Jie; Zhang, Hongchao; Shen, Zhonghua; Ni, Xiaowu; Lu, Jian
2015-05-01
Laser processing, such as laser drilling, laser welding and laser cutting, is important in modern manufacturing, and the interaction of laser and matter is a complex phenomenon that should be studied in detail in order to increase manufacturing efficiency and quality. In this paper, a two-dimensional transient numerical model was developed to study the temperature field and molten pool size during pulsed laser keyhole drilling. The volume-of-fluid method was employed to track free surfaces, and melting and evaporation enthalpy, recoil pressure, surface tension, and energy loss due to evaporating material were considered in this model. In addition, the enthalpy-porosity technique was applied to account for the latent heat during melting and solidification. Temperature fields and melt pool size were numerically simulated via the finite element method. Moreover, the effectiveness of the developed computational procedure was confirmed by experiments.
Acoustic scattering by arbitrary distributions of disjoint, homogeneous cylinders or spheres.
Hesford, Andrew J; Astheimer, Jeffrey P; Waag, Robert C
2010-05-01
A T-matrix formulation is presented to compute acoustic scattering from arbitrary, disjoint distributions of cylinders or spheres, each with arbitrary, uniform acoustic properties. The generalized approach exploits the similarities in these scattering problems to present a single system of equations that is easily specialized to cylindrical or spherical scatterers. By employing field expansions based on orthogonal harmonic functions, continuity of pressure and normal particle velocity are directly enforced at each scatterer using diagonal, analytic expressions to eliminate the need for integral equations. The effect of a cylinder or sphere that encloses all other scatterers is simulated with an outer iterative procedure that decouples the inner-object solution from the effect of the enclosing object to improve computational efficiency when interactions among the interior objects are significant. Numerical results establish the validity and efficiency of the outer iteration procedure for nested objects. Two- and three-dimensional methods that employ this outer iteration are used to measure and characterize the accuracy of two-dimensional approximations to three-dimensional scattering of elevation-focused beams.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sibaev, M.; Crittenden, D. L., E-mail: deborah.crittenden@canterbury.ac.nz
In this paper, we outline a general, scalable, and black-box approach for calculating high-order strongly coupled force fields in rectilinear normal mode coordinates, based upon constructing low order expansions in curvilinear coordinates with naturally limited mode-mode coupling, and then transforming between coordinate sets analytically. The optimal balance between accuracy and efficiency is achieved by transforming from 3 mode representation quartic force fields in curvilinear normal mode coordinates to 4 mode representation sextic force fields in rectilinear normal modes. Using this reduced mode-representation strategy introduces an error of only 1 cm^-1 in fundamental frequencies, on average, across a sizable test set of molecules. We demonstrate that if it is feasible to generate an initial semi-quartic force field in curvilinear normal mode coordinates from ab initio data, then the subsequent coordinate transformation procedure will be relatively fast with modest memory demands. This procedure facilitates solving the nuclear vibrational problem, as all required integrals can be evaluated analytically. Our coordinate transformation code is implemented within the extensible PyPES library program package, at http://sourceforge.net/projects/pypes-lib-ext/.
Automated optimization techniques for aircraft synthesis
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1976-01-01
Application of numerical optimization techniques to automated conceptual aircraft design is examined. These methods are shown to be a general and efficient way to obtain quantitative information for evaluating alternative new vehicle projects. Fully automated design is compared with traditional point design methods and time and resource requirements for automated design are given. The NASA Ames Research Center aircraft synthesis program (ACSYNT) is described with special attention to calculation of the weight of a vehicle to fly a specified mission. The ACSYNT procedures for automatically obtaining sensitivity of the design (aircraft weight, performance and cost) to various vehicle, mission, and material technology parameters are presented. Examples are used to demonstrate the efficient application of these techniques.
Characterization of Meta-Materials Using Computational Electromagnetic Methods
NASA Technical Reports Server (NTRS)
Deshpande, Manohar; Shin, Joon
2005-01-01
An efficient and powerful computational method is presented to synthesize a meta-material with specified electromagnetic properties. Using the periodicity of meta-materials, the Finite Element Methodology (FEM) is developed to estimate the reflection and transmission through the meta-material structure for normal plane wave incidence. For efficient computation of the reflection and transmission through a meta-material over a wide frequency band, a Finite Difference Time Domain (FDTD) approach is also developed. Using the Nicholson-Ross method and Genetic Algorithms, a robust procedure to extract the electromagnetic properties of a meta-material from the knowledge of its reflection and transmission coefficients is described. A few numerical examples are also presented to validate the present approach.
The design of photovoltaic plants - An optimization procedure
NASA Astrophysics Data System (ADS)
Bartoli, B.; Cuomo, V.; Fontana, F.; Serio, C.; Silvestrini, V.
An analytical model is developed to match the components and overall size of a solar power facility (comprising photovoltaic array, maximum-power tracker, battery storage system, and inverter) to the load requirements and climatic conditions of a proposed site at the smallest possible cost. Input parameters are the efficiencies and unit costs of the components, the load fraction to be covered (for stand-alone systems), the statistically analyzed meteorological data, and the cost and efficiency data of the support system (for fuel-generator-assisted plants). Numerical results are presented in graphs and tables for sites in Italy, and it is found that the explicit form of the model equation is independent of locality, at least for this region.
NASA Astrophysics Data System (ADS)
Guachamin Acero, Wilson; Gao, Zhen; Moan, Torgeir
2017-09-01
Current installation costs of offshore wind turbines (OWTs) are high and profit margins in the offshore wind energy sector are low; it is thus necessary to develop installation methods that are more efficient and practical. This paper presents a numerical study (based on a global response analysis of marine operations) of a novel procedure for installing the tower and Rotor Nacelle Assemblies (RNAs) on bottom-fixed foundations of OWTs. The installation procedure is based on the inverted pendulum principle. A cargo barge is used to transport the OWT assembly in a horizontal position to the site, and a medium-size Heavy Lift Vessel (HLV) is then employed to lift and up-end the OWT assembly using a special upending frame. The main advantage of this novel procedure is that the need for a huge HLV (in terms of lifting height and capacity) is eliminated. This novel method requires that the cargo barge be on the leeward side of the HLV (which can be positioned with the best heading) during the entire installation, in order to benefit from shielding effects of the HLV on the motions of the cargo barge; the foundations therefore need to be installed with a specific heading based on wave direction statistics of the site and a typical installation season. Following a systematic approach based on numerical simulations of actual operations, potential critical installation activities, corresponding critical events, and limiting (response) parameters are identified. In addition, operational limits for some of the limiting parameters are established in terms of allowable limits of sea states. Following a preliminary assessment of these operational limits, the duration of the entire operation, the equipment used, and weather- and water-depth sensitivity, this novel procedure is demonstrated to be viable.
Extension of transonic flow computational concepts in the analysis of cavitated bearings
NASA Technical Reports Server (NTRS)
Vijayaraghavan, D.; Keith, T. G., Jr.; Brewe, D. E.
1990-01-01
An analogy between the mathematical modeling of transonic potential flow and the flow in a cavitating bearing is described. Based on the similarities, characteristics of the cavitated region and jump conditions across the film reformation and rupture fronts are developed using the method of weak solutions. The mathematical analogy is extended by utilizing a few computational concepts of transonic flow to numerically model the cavitating bearing. Methods of shock fitting and shock capturing are discussed. Various procedures used in transonic flow computations are adapted to bearing cavitation applications, for example, type differencing, grid transformation, an approximate factorization technique, and Newton's iteration method. These concepts have proved to be successful and have vastly improved the efficiency of numerical modeling of cavitated bearings.
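As a rough illustration of the switch-function ("type differencing") idea carried over from transonic flow, the sketch below relaxes a simplified one-dimensional Elrod-type cavitation equation, switching between central (full-film, elliptic-like) and upwind (cavitated, hyperbolic-like) differencing; the paper's actual formulation is two-dimensional and uses grid transformation, approximate factorization and Newton iteration, none of which are reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def elrod_sweep(theta, g, h, dx, beta=1e8, mu=0.03, U=5.0, n_sweeps=200):
    """Gauss-Seidel sweeps for a simplified 1-D Elrod-type cavitation model
    with type differencing: central (pressure-driven) fluxes are active only
    where the film is full (g = 1) and pure upwind convection remains where it
    is cavitated (g = 0).  theta is the fractional film content, h the film
    thickness; boundary values of theta are held fixed as boundary conditions."""
    n = len(theta)
    for _ in range(n_sweeps):
        for i in range(1, n - 1):
            g[i] = 1.0 if theta[i] >= 1.0 else 0.0
            # Poiseuille fluxes at the half nodes, switched off in cavitated cells
            kw = g[i] * g[i - 1] * (0.5 * (h[i] + h[i - 1]))**3 / (12.0 * mu) * beta / dx
            ke = g[i] * g[i + 1] * (0.5 * (h[i] + h[i + 1]))**3 / (12.0 * mu) * beta / dx
            # Couette (shear-driven) flux, upwinded in the sliding direction
            conv = 0.5 * U * (h[i] * theta[i] - h[i - 1] * theta[i - 1]) / dx
            resid = (kw * (theta[i - 1] - theta[i]) + ke * (theta[i + 1] - theta[i])) / dx - conv
            diag = (kw + ke) / dx + 0.5 * U * h[i] / dx
            theta[i] += resid / diag
    return theta, g
```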
NASA Astrophysics Data System (ADS)
Badawi, Mohamed S.; Jovanovic, Slobodan I.; Thabet, Abouzeid A.; El-Khatib, Ahmed M.; Dlabac, Aleksandar D.; Salem, Bohaysa A.; Gouda, Mona M.; Mihaljevic, Nikola N.; Almugren, Kholud S.; Abbas, Mahmoud I.
2017-03-01
4π NaI(Tl) γ-ray detectors consist of a well cavity with a cylindrical cross section and an enclosing measurement geometry with a large detection angle. This leads to an exceptionally high efficiency level and a significant coincidence summing effect, much greater than for a single cylindrical or coaxial detector, especially in very-low-activity measurements. In the present work, the detection effective solid angle, in addition to both the full-energy peak and total efficiencies of well-type detectors, was calculated mainly by a new numerical simulation method (NSM) and by the ANGLE4 software. To obtain the coincidence summing correction factors through the previously mentioned methods, the coincident emission of photons was modeled mathematically, based on analytical equations and complex integrations over the radioactive volumetric sources, including the self-attenuation factor. The full-energy peak efficiencies and correction factors were measured using 152Eu; an exact adjustment of the detector efficiency curve is required, because neglecting the coincidence summing effect can make the results inconsistent. In general, the efficiency calibration process and the coincidence summing corrections appear jointly. The full-energy peak and total efficiencies from the two methods typically agree within a discrepancy of 10%. The discrepancy between the simulated (NSM and ANGLE4) and measured full-energy peak efficiencies, after corrections for the coincidence summing effect, was on average not greater than 14%. Therefore, this technique can easily be applied in establishing the efficiency calibration curves of well-type detectors.
Implementation of a block Lanczos algorithm for Eigenproblem solution of gyroscopic systems
NASA Technical Reports Server (NTRS)
Gupta, Kajal K.; Lawson, Charles L.
1987-01-01
The details of implementation of a general numerical procedure developed for the accurate and economical computation of natural frequencies and associated modes of any elastic structure rotating along an arbitrary axis are described. A block version of the Lanczos algorithm is derived for the solution that fully exploits associated matrix sparsity and employs only real numbers in all relevant computations. It is also capable of determining multiple roots and proves to be most efficient when compared to other, similar, existing techniques.
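A plain block Lanczos iteration for a real symmetric matrix is sketched below to illustrate the building block; the paper's implementation additionally exploits matrix sparsity and the gyroscopic structure of the rotating-structure eigenproblem, which this dense sketch does not attempt.

```python
import numpy as np

def block_lanczos(A, p=4, m=10, seed=0):
    """Plain block Lanczos for a real symmetric matrix A (dense here), with
    full reorthogonalisation for robustness.  Returns the Ritz values of the
    block-tridiagonal projected matrix, which approximate extreme eigenvalues
    of A."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((n, p)))   # initial orthonormal block
    basis = [Q]
    T = np.zeros((p * m, p * m))
    B_prev = None
    for j in range(m):
        W = A @ basis[-1]
        if j > 0:
            W -= basis[-2] @ B_prev.T                  # three-term block recurrence
        Aj = basis[-1].T @ W                           # diagonal block
        T[j*p:(j+1)*p, j*p:(j+1)*p] = Aj
        W -= basis[-1] @ Aj
        for Qk in basis:                               # full reorthogonalisation
            W -= Qk @ (Qk.T @ W)
        Qn, B = np.linalg.qr(W)                        # next block and coupling block
        if j < m - 1:
            T[(j+1)*p:(j+2)*p, j*p:(j+1)*p] = B
            T[j*p:(j+1)*p, (j+1)*p:(j+2)*p] = B.T
        basis.append(Qn)
        B_prev = B
    return np.linalg.eigvalsh(T)                       # Ritz values
```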
1981-05-01
represented as a Winkler foundation. The program can treat any number of slabs connected by steel bars or other load transfer devices at the joints...dimensional finite element method. The inherent flexibility of such an approach permits the analysis of a rigid pavement with steel bars and stabilized...layers and provides an efficient tool for analyzing stress conditions at the joint. Unfortunately, such a procedure would require a tremendously
Efficient runner safety assessment during early design phase and root cause analysis
NASA Astrophysics Data System (ADS)
Liang, Q. W.; Lais, S.; Gentner, C.; Braun, O.
2012-11-01
Fatigue-related problems in Francis turbines, especially high-head Francis turbines, have been reported several times in recent years. During operation the runner is exposed to various steady and unsteady hydraulic loads. The analysis of the forced response of the runner structure therefore requires a combined approach of fluid dynamics and structural dynamics. Due to the high complexity of the phenomena and the limitations of computer power, the numerical prediction was in the past too expensive and not feasible for use as a standard design tool. However, due to the continuous improvement of knowledge and simulation tools, such complex analysis has become part of the design procedure at ANDRITZ HYDRO. This article describes the application of the most advanced analysis techniques in the runner safety check (RSC), including steady-state CFD analysis, transient CFD analysis considering rotor-stator interaction (RSI), static FE analysis and modal analysis in water considering the added mass effect, in the early design phase. This procedure allows a very efficient interaction between the hydraulic designer and the mechanical designer during the design phase, such that a risk of failure can be detected and avoided at an early design stage. The RSC procedure can also be applied to a root cause analysis (RCA), both to find the cause of failure and to quickly define a technical solution that meets the safety criteria. An efficient application to an RCA of cracks in a Francis runner is quoted in this article as an example. The results of the RCA are presented together with an efficient and inexpensive solution whose effectiveness could be proven again by applying the described RSC techniques. It is shown that, with the RSC procedure developed and applied as a standard procedure at ANDRITZ HYDRO, such a failure is excluded in an early design phase. Moreover, the RSC procedure is compatible with different commercial and open-source codes and can easily be adapted to other types of turbines, such as pump turbines and Pelton runners.
Prakash, Jaya; Yalavarthy, Phaneendra K
2013-03-01
The aim is to develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. Here it is deployed within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method overcomes the inherent limitation of the computationally expensive MRM-based automated way of finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
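A hedged sketch of the overall idea, pairing SciPy's LSQR (Lanczos bidiagonalization) with a Nelder-Mead simplex search over the damping parameter, is shown below; since the paper's actual selection criterion is not reproduced here, a simple discrepancy-principle objective is used as a stand-in, and J, b and the noise level are placeholders.

```python
import numpy as np
from scipy.sparse.linalg import lsqr
from scipy.optimize import minimize

def reconstruct(J, b, lam):
    """Damped least-squares step min ||J x - b||^2 + lam^2 ||x||^2 via LSQR
    (Lanczos bidiagonalisation).  J is a sensitivity (Jacobian) matrix and b
    the data residual; names are illustrative."""
    return lsqr(J, b, damp=lam)[0]

def optimal_lambda(J, b, noise_level, lam0=1e-2):
    """Simplex (Nelder-Mead) search for the damping parameter.  A simple
    discrepancy-principle objective (residual norm matching the estimated
    noise level) stands in for the paper's selection criterion."""
    def objective(log_lam):
        x = reconstruct(J, b, 10.0 ** log_lam[0])
        return abs(np.linalg.norm(J @ x - b) - noise_level)
    res = minimize(objective, x0=[np.log10(lam0)], method="Nelder-Mead")
    return 10.0 ** res.x[0]
```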
NASA Astrophysics Data System (ADS)
Jha, Ratneshwar
Multidisciplinary design optimization (MDO) procedures have been developed for smart composite wings and turbomachinery blades. The analysis and optimization methods used are computationally efficient and sufficiently rigorous. Therefore, the developed MDO procedures are well suited for actual design applications. The optimization procedure for the conceptual design of composite aircraft wings with surface bonded piezoelectric actuators involves the coupling of structural mechanics, aeroelasticity, aerodynamics and controls. The load carrying member of the wing is represented as a single-celled composite box beam. Each wall of the box beam is analyzed as a composite laminate using a refined higher-order displacement field to account for the variations in transverse shear stresses through the thickness. Therefore, the model is applicable for the analysis of composite wings of arbitrary thickness. Detailed structural modeling issues associated with piezoelectric actuation of composite structures are considered. The governing equations of motion are solved using the finite element method to analyze practical wing geometries. Three-dimensional aerodynamic computations are performed using a panel code based on the constant-pressure lifting surface method to obtain steady and unsteady forces. The Laplace domain method of aeroelastic analysis produces root-loci of the system which give an insight into the physical phenomena leading to flutter/divergence and can be efficiently integrated within an optimization procedure. The significance of the refined higher-order displacement field on the aeroelastic stability of composite wings has been established. The effect of composite ply orientations on flutter and divergence speeds has been studied. The Kreisselmeier-Steinhauser (K-S) function approach is used to efficiently integrate the objective functions and constraints into a single envelope function. The resulting unconstrained optimization problem is solved using the Broyden-Fletcher-Goldfarb-Shanno algorithm. The optimization problem is formulated with the objective of simultaneously minimizing wing weight and maximizing its aerodynamic efficiency. Design variables include composite ply orientations, ply thicknesses, wing sweep, piezoelectric actuator thickness and actuator voltage. Constraints are placed on the flutter/divergence dynamic pressure, wing root stresses and the maximum electric field applied to the actuators. Numerical results are presented showing significant improvements, after optimization, compared to reference designs. The multidisciplinary optimization procedure for the design of turbomachinery blades integrates aerodynamic and heat transfer design objective criteria along with various mechanical and geometric constraints on the blade geometry. The airfoil shape is represented by Bezier-Bernstein polynomials, which results in a relatively small number of design variables for the optimization. Thin shear layer approximation of the Navier-Stokes equation is used for the viscous flow calculations. Grid generation is accomplished by solving Poisson equations. The maximum and average blade temperatures are obtained through a finite element analysis. Total pressure and exit kinetic energy losses are minimized, with constraints on blade temperatures and geometry. The constrained multiobjective optimization problem is solved using the K-S function approach. The results for the numerical example show significant improvements after optimization.
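The Kreisselmeier-Steinhauser aggregation mentioned above can be illustrated with a short, generic sketch; the scaling parameter rho and the overflow-safe shift are standard choices and not specific to this work.

```python
import numpy as np

def ks_aggregate(g, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of objective/constraint values g_i:
    a smooth, conservative approximation of max(g) that folds many criteria
    into a single scalar suitable for gradient-based optimisation.  The shift
    by max(g) prevents overflow of the exponentials."""
    g = np.asarray(g, dtype=float)
    gmax = g.max()
    return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho
```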
Accurate modelling of unsteady flows in collapsible tubes.
Marchandise, Emilie; Flaud, Patrice
2010-01-01
The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers in understanding physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended to the flow in collapsible tubes such as veins. The main difference with cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone if no limiting procedure is introduced. We show that our second-order RK-DG method equipped with an approximate Roe's Riemann solver and a slope-limiting procedure allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is more accurately modelled than with traditional methods such as finite difference or finite volume methods. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when they are available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and a pathological subject. We compare our results with experimental simulations and discuss the sensitivity of our model to its parameters.
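A generic slope-limiting step of the kind the RK-DG scheme relies on is sketched below using the classic minmod limiter on cell averages; the paper's limiter acts on the DG polynomial coefficients of the collapsible-tube variables, so this finite-volume style sketch is only illustrative.

```python
import numpy as np

def minmod(a, b):
    """Classic minmod limiter: the smaller-magnitude slope when a and b agree
    in sign, zero otherwise."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limit_slopes(cell_means, dx):
    """Limit piecewise-linear slopes of cell averages so that no new extrema
    are created near an elastic jump."""
    u = np.asarray(cell_means, dtype=float)
    fwd = np.diff(u, append=u[-1]) / dx    # forward differences
    bwd = np.diff(u, prepend=u[0]) / dx    # backward differences
    return minmod(fwd, bwd)
```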
NASA Astrophysics Data System (ADS)
Liang, Fayun; Chen, Haibing; Huang, Maosong
2017-07-01
To provide appropriate uses of nonlinear ground response analysis in engineering practice, a three-dimensional soil column with a distributed mass system and a time-domain numerical analysis were implemented on the OpenSees simulation platform. A standard mesh for the three-dimensional soil column was suggested so as to satisfy the specified maximum frequency. The layered soil column was divided into multiple sub-soils with different viscous damping matrices according to their shear velocities, since the soil properties were significantly different. It was necessary to use a combination of other one-dimensional or three-dimensional nonlinear seismic ground analysis programs to confirm the applicability of nonlinear seismic ground motion response analysis procedures in soft soil or for strong earthquakes. The accuracy of the three-dimensional soil column finite element method was verified by dynamic centrifuge model testing under different peak accelerations of the earthquake. As a result, nonlinear seismic ground motion response analysis procedures were improved in this study. The accuracy and efficiency of the three-dimensional seismic ground response analysis can be adapted to the requirements of engineering practice.
Computation of Steady and Unsteady Laminar Flames: Theory
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas; Radhakrishnan, Krishnan; Zhou, Ruhai
1999-01-01
In this paper we describe the numerical analysis underlying our efforts to develop an accurate and reliable code for simulating flame propagation using complex physical and chemical models. We discuss our spatial and temporal discretization schemes, which in our current implementations range in order from two to six. In space we use staggered meshes to define discrete divergence and gradient operators, allowing us to approximate complex diffusion operators while maintaining ellipticity. Our temporal discretization is based on the use of preconditioning to produce a highly efficient linearly implicit method with good stability properties. High order for time accurate simulations is obtained through the use of extrapolation or deferred correction procedures. We also discuss our techniques for computing stationary flames. The primary issue here is the automatic generation of initial approximations for the application of Newton's method. We use a novel time-stepping procedure, which allows the dynamic updating of the flame speed and forces the flame front towards a specified location. Numerical experiments are presented, primarily for the stationary flame problem. These illustrate the reliability of our techniques, and the dependence of the results on various code parameters.
NASA Astrophysics Data System (ADS)
Zhengyong, R.; Jingtian, T.; Changsheng, L.; Xiao, X.
2007-12-01
Although adaptive finite-element (AFE) analysis is attracting more and more attention in scientific and engineering fields, its efficient implementation remains an open issue because of its relatively complex procedures. In this paper, we propose a clear C++ framework implementation to show the power of object-oriented programming (OOP) in designing such a complex adaptive procedure. Using the modular features of an OOP language, the whole adaptive system is divided into several separate parts, such as mesh generation and refinement, the a-posteriori error estimator, the adaptive strategy and the final post-processing. After these separate modules are designed locally, a connected framework for the adaptive procedure is finally formed. Based on a general elliptic differential equation, little extra effort is needed within the adaptive framework to carry out practical simulations. To show the favourable properties of OOP-based adaptive design, two numerical examples are tested. The first is a 3D direct-current resistivity problem, in which the power of the framework is demonstrated efficiently, since only small additions are required. In the second, an induced polarization (IP) exploration case, a new adaptive procedure is easily added, which adequately shows the strong extensibility and reusability of the OOP approach. Finally, we believe that, based on this modular framework implementation of adaptivity using an OOP methodology, more advanced adaptive analysis systems will become available in the future.
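The paper's framework is written in C++; the Python skeleton below only mirrors the modular decomposition described above (solver, error estimator, marking strategy, refinement loop) and is not taken from the paper.

```python
from abc import ABC, abstractmethod

class ErrorEstimator(ABC):
    @abstractmethod
    def estimate(self, mesh, solution):
        """Return one a-posteriori error indicator per element."""

class AdaptiveStrategy(ABC):
    @abstractmethod
    def mark(self, indicators):
        """Return the indices of elements to refine."""

class AdaptiveSolver:
    """Skeleton of the adaptive loop: solve -> estimate -> mark -> refine.
    Concrete mesh, solver, estimator and strategy objects are plugged in,
    mirroring the modular decomposition described in the abstract."""
    def __init__(self, mesh, solver, estimator, strategy, tol=1e-3, max_cycles=10):
        self.mesh, self.solver = mesh, solver
        self.estimator, self.strategy = estimator, strategy
        self.tol, self.max_cycles = tol, max_cycles

    def run(self):
        u = None
        for _ in range(self.max_cycles):
            u = self.solver.solve(self.mesh)
            eta = self.estimator.estimate(self.mesh, u)
            if max(eta) < self.tol:
                break
            self.mesh = self.mesh.refine(self.strategy.mark(eta))
        return u
```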
NASA Technical Reports Server (NTRS)
Garai, Anirban; Diosady, Laslo T.; Murman, Scott M.; Madavan, Nateri K.
2016-01-01
Recent progress towards developing a new computational capability for accurate and efficient high-fidelity direct numerical simulation (DNS) and large-eddy simulation (LES) of turbomachinery is described. This capability is based on an entropy-stable Discontinuous-Galerkin spectral-element approach that extends to arbitrarily high orders of spatial and temporal accuracy, and is implemented in a computationally efficient manner on a modern high-performance computer architecture. An inflow turbulence generation procedure based on a linear forcing approach has been incorporated in this framework, and DNS has been conducted to study the effect of inflow turbulence on the suction-side separation bubble in low-pressure turbine (LPT) cascades. The T106 series of airfoil cascades in both lightly (T106A) and highly loaded (T106C) configurations, at exit isentropic Reynolds numbers of 60,000 and 80,000, respectively, are considered. The numerical simulations are performed using 8th-order accurate spatial and 4th-order accurate temporal discretization. The changes in separation bubble topology due to elevated inflow turbulence are captured by the present method and the physical mechanisms leading to the changes are explained. The present results are in good agreement with prior numerical simulations, but some expected discrepancies with the experimental data for the T106C case are noted and discussed.
NASA Technical Reports Server (NTRS)
Bi, Lei; Yang, Ping; Kattawar, George W.; Mishchenko, Michael I.
2013-01-01
The extended boundary condition method (EBCM) and invariant imbedding method (IIM) are two fundamentally different T-matrix methods for the solution of light scattering by nonspherical particles. The standard EBCM is very efficient but encounters a loss of precision when the particle size is large, the maximum size being sensitive to the particle aspect ratio. The IIM can be applied to particles in a relatively large size parameter range but requires extensive computational time due to the number of spherical layers in the particle volume discretization. A numerical combination of the EBCM and the IIM (hereafter, the EBCM+IIM) is proposed to overcome the aforementioned disadvantages of each method. Even though the EBCM can fail to obtain the T-matrix of a considered particle, it is valuable for decreasing the computational domain (i.e., the number of spherical layers) of the IIM by providing the initial T-matrix associated with an iterative procedure in the IIM. The EBCM+IIM is demonstrated to be more efficient than the IIM in obtaining the optical properties of large size parameter particles beyond the convergence limit of the EBCM. The numerical performance of the EBCM+IIM is illustrated through representative calculations in spheroidal and cylindrical particle cases.
Cost efficiency of the non-associative flow rule simulation of an industrial component
NASA Astrophysics Data System (ADS)
Galdos, Lander; de Argandoña, Eneko Saenz; Mendiguren, Joseba
2017-10-01
In the last decade, the metal forming industry has become more and more competitive. In this context, FEM modeling has become a primary source of information for component and process design. Numerous researchers have focused on improving the accuracy of the material models implemented in the FEM in order to improve the efficiency of the simulations. Aiming at increasing the efficiency of anisotropic behavior modelling, in recent years the use of non-associative flow rule (NAFR) models has been presented as an alternative to the classic associative flow rule (AFR) models. In this work, the cost efficiency of the chosen flow rule has been numerically analyzed by simulating an industrial drawing operation with two different models of the same degree of flexibility: one AFR model and one NAFR model. From the present study, it has been concluded that the flow rule has a negligible influence on the final drawing prediction, which is mainly driven by the model parameter identification procedure. Even though the NAFR formulation is more complex than the AFR one, the present study shows that the total simulation time with explicit FE solvers is reduced without loss of accuracy. Furthermore, NAFR formulations have an advantage over AFR formulations in parameter identification, because the formulation decouples the yield stress and the Lankford coefficients.
NASA Technical Reports Server (NTRS)
Gossard, Myron L
1952-01-01
An iterative transformation procedure suggested by H. Wielandt for numerical solution of flutter and similar characteristic-value problems is presented. Application of this procedure to ordinary natural-vibration problems and to flutter problems is shown by numerical examples. Comparisons of computed results with experimental values and with results obtained by other methods of analysis are made.
Determination of Distance Distribution Functions by Singlet-Singlet Energy Transfer
Cantor, Charles R.; Pechukas, Philip
1971-01-01
The efficiency of energy transfer between two chromophores can be used to define an apparent donor-acceptor distance, which in flexible systems will depend on the R0 of the chromophores. If efficiency is measured as a function of R0, it will be possible to determine the actual distribution function of donor-acceptor distances. Numerical procedures are described for extracting this information from experimental data. They should be most useful for distribution functions with mean values from 20-30 Å (2-3 nm). This technique should provide considerably more detailed information on end-to-end distributions of oligomers than has hitherto been available. It should also be useful for describing, in detail, conformational flexibility in other large molecules. PMID:16591942
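A minimal sketch of the basic relation exploited above follows, converting a measured transfer efficiency into an apparent donor-acceptor distance via the Foerster expression; the numerical procedures for recovering the full distance distribution from efficiencies measured at several R0 values are not reproduced.

```python
def apparent_distance(efficiency, r0):
    """Apparent donor-acceptor distance (same units as r0) implied by a
    measured singlet-singlet transfer efficiency E, using the Foerster
    relation E = R0**6 / (R0**6 + r**6).  For a flexible molecule this
    apparent distance varies with R0, which is what the numerical procedures
    described above exploit to recover the distance distribution."""
    return r0 * (1.0 / efficiency - 1.0) ** (1.0 / 6.0)
```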
A globally well-posed finite element algorithm for aerodynamics applications
NASA Technical Reports Server (NTRS)
Iannelli, G. S.; Baker, A. J.
1991-01-01
A finite element CFD algorithm is developed for Euler and Navier-Stokes aerodynamic applications. For the linear basis, the resultant approximation is at least second-order-accurate in time and space for synergistic use of three procedures: (1) a Taylor weak statement, which provides for derivation of companion conservation law systems with embedded dispersion-error control mechanisms; (2) a stiffly stable second-order-accurate implicit Rosenbrock-Runge-Kutta temporal algorithm; and (3) a matrix tensor product factorization that permits efficient numerical linear algebra handling of the terminal large-matrix statement. Thorough analyses are presented regarding well-posed boundary conditions for inviscid and viscous flow specifications. Numerical solutions are generated and compared for critical evaluation of quasi-one- and two-dimensional Euler and Navier-Stokes benchmark test problems.
NASA Astrophysics Data System (ADS)
Titeux, Isabelle; Li, Yuming M.; Debray, Karl; Guo, Ying-Qiao
2004-11-01
This Note deals with an efficient algorithm to carry out the plastic integration and compute the stresses due to large strains for materials satisfying Hill's anisotropic yield criterion. The classical algorithm of plastic integration, such as the 'Return Mapping Method', is largely used for nonlinear analyses of structures and numerical simulations of forming processes, but it requires an iterative scheme and may have convergence problems. A new direct algorithm based on a scalar method is developed which allows us to obtain the plastic multiplier directly, without an iterative procedure; thus the computation time is largely reduced and numerical problems are avoided. To cite this article: I. Titeux et al., C. R. Mecanique 332 (2004).
Hybrid pathwise sensitivity methods for discrete stochastic models of chemical reaction systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, Elizabeth Skubak, E-mail: ewolf@saintmarys.edu; Anderson, David F., E-mail: anderson@math.wisc.edu
2015-01-21
Stochastic models are often used to help understand the behavior of intracellular biochemical processes. The most common such models are continuous time Markov chains (CTMCs). Parametric sensitivities, which are derivatives of expectations of model output quantities with respect to model parameters, are useful in this setting for a variety of applications. In this paper, we introduce a class of hybrid pathwise differentiation methods for the numerical estimation of parametric sensitivities. The new hybrid methods combine elements from the three main classes of procedures for sensitivity estimation and have a number of desirable qualities. First, the new methods are unbiased for a broad class of problems. Second, the methods are applicable to nearly any physically relevant biochemical CTMC model. Third, and as we demonstrate on several numerical examples, the new methods are quite efficient, particularly if one wishes to estimate the full gradient of parametric sensitivities. The methods are rather intuitive and utilize the multilevel Monte Carlo philosophy of splitting an expectation into separate parts and handling each in an efficient manner.
Numerical and experimental investigations of human swimming motions
Takagi, Hideki; Nakashima, Motomu; Sato, Yohei; Matsuuchi, Kazuo; Sanders, Ross H.
2016-01-01
This paper reviews unsteady flow conditions in human swimming and identifies the limitations and future potential of the current methods of analysing unsteady flow. The capability of computational fluid dynamics (CFD) has been extended from approaches assuming steady-state conditions to consideration of unsteady/transient conditions associated with the body motion of a swimmer. However, to predict hydrodynamic forces and the swimmer's potential speeds accurately, more robust and efficient numerical methods are necessary, coupled with validation procedures, requiring detailed experimental data reflecting local flow. Experimental data obtained by particle image velocimetry (PIV) in this area are limited, because at present observations are restricted to a two-dimensional 1.0 m2 area, though this could be improved if the output range of the associated laser sheet increased. Simulations of human swimming are expected to improve competitive swimming, and our review has identified two important advances relating to understanding the flow conditions affecting performance in front crawl swimming: one is a mechanism for generating unsteady fluid forces, and the other is a theory relating to increased speed and efficiency. PMID:26699925
NASA Astrophysics Data System (ADS)
Beckstein, Pascal; Galindo, Vladimir; Vukčević, Vuko
2017-09-01
Eddy-current problems occur in a wide range of industrial and metallurgical applications where conducting material is processed inductively. Motivated by the goal of realising coupled multi-physics simulations, we present a new method for the solution of such problems in the finite volume framework of foam-extend, an extended version of the very popular OpenFOAM software. The numerical procedure involves a semi-coupled multi-mesh approach to solve Maxwell's equations for non-magnetic materials by means of the Coulomb-gauged magnetic vector potential A and the electric scalar potential ϕ. The concept is further extended on the basis of the impressed and reduced magnetic vector potential and its usage in accordance with Biot-Savart's law to achieve very efficient overall modelling even for complex three-dimensional geometries. Moreover, we present a special discretisation scheme to account for possible discontinuities in the electrical conductivity. An extensive validation completes the paper and provides insight into the behaviour and the potential of our approach.
NASA Technical Reports Server (NTRS)
Hah, C.; Lakshminarayana, B.
1982-01-01
Turbulent wakes of turbomachinery rotor blades, isolated airfoils, and a cascade of airfoils were investigated both numerically and experimentally. Low subsonic and incompressible wake flows were examined. A finite difference procedure was employed in the numerical analysis utilizing the continuity, momentum, and turbulence closure equations in a rotating, curvilinear, and nonorthogonal coordinate system. A nonorthogonal curvilinear coordinate system was developed to improve the accuracy and efficiency of the numerical calculation. Three turbulence models were employed to obtain closure of the governing equations. The first model comprised transport equations for the turbulent kinetic energy and the rate of energy dissipation, and the second and third models comprised equations for the rate of turbulent kinetic energy dissipation and the Reynolds stresses, respectively. The second model handles the convection and diffusion terms in the Reynolds stress transport equation collectively, while the third model handles them individually. The numerical results demonstrate that the second and third models provide accurate predictions, but considerable computer time and memory storage can be saved with the second model.
Random element method for numerical modeling of diffusional processes
NASA Technical Reports Server (NTRS)
Ghoniem, A. F.; Oppenheim, A. K.
1982-01-01
The random element method is a generalization of the random vortex method that was developed for the numerical modeling of momentum transport processes as expressed in terms of the Navier-Stokes equations. The method is based on the concept that random walk, as exemplified by Brownian motion, is the stochastic manifestation of diffusional processes. The algorithm based on this method is grid-free and does not require the diffusion equation to be discretized over a mesh; it is thus devoid of the numerical diffusion associated with finite difference methods. Moreover, the algorithm is self-adaptive in space and explicit in time, resulting in an improved numerical resolution of gradients as well as a simple and efficient computational procedure. The method is applied here to an assortment of problems of diffusion of momentum and energy in one dimension, as well as heat conduction in two dimensions, in order to assess its validity and accuracy. The numerical solutions obtained are found to be in good agreement with exact solutions, except for a statistical error introduced by using a finite number of elements; this error can be reduced by increasing the number of elements or by using ensemble averaging over a number of solutions.
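A minimal grid-free random-walk sketch in the spirit of the random element method is given below for one-dimensional diffusion; the splitting factor, diffusivity and recovery of the field by ensemble averaging are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def random_walk_diffusion(x0, strengths, nu, dt, n_steps, n_split=10, seed=0):
    """Grid-free random-walk sketch of 1-D diffusion: each computational
    element carries a share of the transported quantity and performs a
    Brownian displacement of variance 2*nu*dt per step.  x0 and strengths
    define the initial elements; the field at any point can afterwards be
    recovered by kernel (ensemble) averaging of the returned samples."""
    rng = np.random.default_rng(seed)
    x = np.repeat(np.asarray(x0, float), n_split)                  # split into elements
    w = np.repeat(np.asarray(strengths, float) / n_split, n_split)  # element strengths
    for _ in range(n_steps):
        x += rng.normal(0.0, np.sqrt(2.0 * nu * dt), size=x.size)  # random walk
    return x, w
```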
Modeling of Passive Acoustic Liners from High Fidelity Numerical Simulations
NASA Astrophysics Data System (ADS)
Ferrari, Marcello do Areal Souto
Noise reduction in aviation has been an important focus of study in the last few decades. One common solution is to install acoustic liners in the internal walls of the engines. However, laboratory measurements with liners are expensive and time consuming. The present work proposes a nonlinear, physics-based, time-domain model to predict the acoustic behavior of a given liner in a defined flow condition. The parameters of the model are defined by analysis of accurate numerical solutions of the flow obtained from a high-fidelity numerical code. The length of the cavity is taken into account by using an analytical procedure to account for internal reflections in the interior of the cavity. Vortices and jets originating from internal flow separations are confirmed to be important mechanisms of sound absorption, which define the overall efficiency of the liner. Numerical simulations at different frequencies, geometries and sound pressure levels are studied in detail to define the model parameters. Comparisons with high-fidelity numerical simulations show that the proposed model is accurate, robust, and can be used to define a boundary condition simulating a liner in a high-fidelity code.
NASA Astrophysics Data System (ADS)
Park, Keun; Lee, Sang-Ik
2010-03-01
High-frequency induction is an efficient, non-contact means of heating the surface of an injection mold through electromagnetic induction. Because the procedure allows for the rapid heating and cooling of mold surfaces, it has been recently applied to the injection molding of thin-walled parts or micro/nano-structures. The present study proposes a localized heating method involving the selective use of mold materials to enhance the heating efficiency of high-frequency induction heating. For localized induction heating, a composite injection mold of ferromagnetic material and paramagnetic material is used. The feasibility of the proposed heating method is investigated through numerical analyses in terms of its heating efficiency for localized mold surfaces and in terms of the structural safety of the composite mold. The moldability of high aspect ratio micro-features is then experimentally compared under a variety of induction heating conditions.
Group implicit concurrent algorithms in nonlinear structural dynamics
NASA Technical Reports Server (NTRS)
Ortiz, M.; Sotelino, E. D.
1989-01-01
During the 1970s and 1980s, considerable effort was devoted to developing efficient and reliable time stepping procedures for transient structural analysis. Mathematically, the equations governing this type of problem are generally stiff, i.e., they exhibit a wide spectrum in the linear range. The algorithms best suited to this type of application are those which accurately integrate the low frequency content of the response without necessitating the resolution of the high frequency modes. This means that the algorithms must be unconditionally stable, which in turn rules out explicit integration. The most exciting possibility in the algorithm development area in recent years has been the advent of parallel computers with multiprocessing capabilities. This work is therefore mainly concerned with the development of parallel algorithms in the area of structural dynamics. A primary objective is to devise unconditionally stable and accurate time stepping procedures which lend themselves to an efficient implementation in concurrent machines. Some features of the new computer architecture are summarized. A brief survey of current efforts in the area is presented. A new class of concurrent procedures, or Group Implicit algorithms, is introduced and analyzed. The numerical simulation shows that GI algorithms hold considerable promise for application in coarse-grain as well as medium-grain parallel computers.
NASA Astrophysics Data System (ADS)
Kim, Euiyoung; Cho, Maenghyo
2017-11-01
In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
NASA Astrophysics Data System (ADS)
Galassi, S.
2018-05-01
In this paper a mechanical model of masonry arches strengthened with fibre-reinforced composite materials, together with the relevant numerical procedure for the analysis, is proposed. The arch is modelled using an assemblage of rigid blocks that are connected to one another, and to the supporting structures, by mortar joints. The presence of the reinforcement, usually a sheet placed at the intrados or the extrados, prevents the occurrence of cracks that could activate possible collapse mechanisms due to tensile failure of the mortar joints. Therefore, a reinforced arch generally fails in a different way from the URM arch. The proposed numerical procedure checks, as a function of an external incremental load, the inner stress state in the arch, in the reinforcement and in the adhesive layer. In so doing, it provides a prediction of the failure modes. Results obtained from experimental tests, carried out in a laboratory on four scale models, have been compared with those provided by the numerical procedure, implemented in ArchiVAULT, a software developed by the author. In this regard, the numerical procedure is an extension of previous works. Although additional experimental investigations are necessary, these first results confirm that the proposed numerical procedure is promising.
A three-dimensional structured/unstructured hybrid Navier-Stokes method for turbine blade rows
NASA Technical Reports Server (NTRS)
Tsung, F.-L.; Loellbach, J.; Kwon, O.; Hah, C.
1994-01-01
A three-dimensional viscous structured/unstructured hybrid scheme has been developed for numerical computation of high Reynolds number turbomachinery flows. The procedure allows an efficient structured solver to be employed in the densely clustered, high aspect-ratio grid around the viscous regions near solid surfaces, while employing an unstructured solver elsewhere in the flow domain to add flexibility in mesh generation. Test results for an inviscid flow over an external transonic wing and a Navier-Stokes flow for an internal annular cascade are presented.
Computed Flow Through An Artificial Heart Valve
NASA Technical Reports Server (NTRS)
Rogers, Stewart E.; Kwak, Dochan; Kiris, Cetin; Chang, I-Dee
1994-01-01
Report discusses computations of blood flow through a prosthetic tilting disk valve. The computational procedure developed in the simulation is used to design better artificial hearts and valves by reducing or eliminating the following adverse flow characteristics: large pressure losses, which prevent hearts from working efficiently; separated and secondary flows, which cause clotting; and high turbulent shear stresses, which damage red blood cells. Report reiterates and expands upon part of NASA technical memorandum "Computed Flow Through an Artificial Heart and Valve" (ARC-12983). Also based partly on research described in "Numerical Simulation of Flow Through an Artificial Heart" (ARC-12478).
NASA Technical Reports Server (NTRS)
Kao, M. H.; Bodenheimer, R. E.
1976-01-01
The tse computer's capability of achieving image congruence between temporal and multiple images with misregistration due to rotational differences is reported. The coordinate transformations are obtained and a general algorithm is devised to perform image rotation very efficiently using tse operations. The details of this algorithm as well as its theoretical implications are presented. Step-by-step procedures for image registration are described in detail. Numerous examples are also employed to demonstrate the correctness and effectiveness of the algorithm, and conclusions and recommendations are made.
An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Erickson, Larry L.
1994-01-01
A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.
Numerical pricing of options using high-order compact finite difference schemes
NASA Astrophysics Data System (ADS)
Tangman, D. Y.; Gopaul, A.; Bhuruth, M.
2008-09-01
We consider high-order compact (HOC) schemes for quasilinear parabolic partial differential equations to discretise the Black-Scholes PDE for the numerical pricing of European and American options. We show that for the heat equation with smooth initial conditions, the HOC schemes attain clear fourth-order convergence but fail if non-smooth payoff conditions are used. To restore the fourth-order convergence, we use a grid stretching that concentrates grid nodes at the strike price for European options. For an American option, an efficient procedure is also described to compute the option price, Greeks and the optimal exercise curve. Comparisons with a fourth-order non-compact scheme are also presented. However, fourth-order convergence is not observed with this strategy. To improve the convergence rate for American options, we discuss the use of a front-fixing transformation with the HOC scheme. We also show that the HOC scheme with grid stretching along the asset price dimension gives accurate numerical solutions for European options under stochastic volatility.
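A small sketch of a strike-concentrating grid stretching of the kind referred to above is given below, using a sinh map; the parameter c and the exact transformation used in the paper may differ.

```python
import numpy as np

def stretched_grid(s_min, s_max, strike, n, c=0.2):
    """Asset-price grid that clusters nodes around the strike via a sinh map,
    the kind of stretching used to recover fourth-order convergence for
    non-smooth payoffs.  Smaller c gives stronger clustering; the endpoints
    map exactly onto s_min and s_max."""
    xi = np.linspace(np.arcsinh((s_min - strike) / c),
                     np.arcsinh((s_max - strike) / c), n)
    return strike + c * np.sinh(xi)
```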
NASA Astrophysics Data System (ADS)
Hosseini, E.; Loghmani, G. B.; Heydari, M.; Rashidi, M. M.
2017-02-01
In this paper, the boundary layer flow and heat transfer of unsteady flow over a porous accelerating stretching surface in the presence of the velocity slip and temperature jump effects are investigated numerically. A new effective collocation method based on rational Bernstein functions is applied to solve the governing system of nonlinear ordinary differential equations. This method solves the problem on the semi-infinite domain without truncating or transforming it to a finite domain. In addition, the presented method reduces the solution of the problem to the solution of a system of algebraic equations. Graphical and tabular results are presented to investigate the influence of the unsteadiness parameter A , Prandtl number Pr, suction parameter fw, velocity slip parameter γ and thermal slip parameter φ on the velocity and temperature profiles of the fluid. The numerical experiments are reported to show the accuracy and efficiency of the novel proposed computational procedure. Comparisons of present results are made with those obtained by previous works and show excellent agreement.
Tensor methodology and computational geometry in direct computational experiments in fluid mechanics
NASA Astrophysics Data System (ADS)
Degtyarev, Alexander; Khramushin, Vasily; Shichkina, Julia
2017-07-01
The paper considers a generalized functional and algorithmic construction of direct computational experiments in fluid dynamics. The notation of tensor mathematics is naturally embedded in the finite-element operations used to construct the numerical schemes. A large fluid particle, which has a finite size, its own weight, and internal displacement and deformation, is considered as the elementary computational object. The tensor representation of computational objects provides a straightforward, piecewise-linear and unique approximation of elementary volumes and of the fluid particles inside them. The proposed approach allows the use of explicit numerical schemes, an important condition for increasing the efficiency of numerical procedures with natural parallelism. Among the advantages of the proposed approach is the representation of large particles of a continuous medium in dual coordinate systems, with computing operations carried out in the projections of these two coordinate systems using direct and inverse transformations. A new method for the mathematical representation and synthesis of computational experiments based on the large-particle method is thus proposed.
Using data mining to segment healthcare markets from patients' preference perspectives.
Liu, Sandra S; Chen, Jie
2009-01-01
This paper aims to provide an example of how to use data mining techniques to identify patient segments regarding preferences for healthcare attributes and their demographic characteristics. Data were derived from a number of individuals who received in-patient care at a health network in 2006. Data mining and conventional hierarchical clustering with average linkage and Pearson correlation procedures are employed and compared to show how each procedure best determines segmentation variables. Data mining tools identified three differentiable segments by means of cluster analysis. These three clusters have significantly different demographic profiles. The study reveals, when compared with traditional statistical methods, that data mining provides an efficient and effective tool for market segmentation. When there are numerous cluster variables involved, researchers and practitioners need to incorporate factor analysis for reducing variables to clearly and meaningfully understand clusters. Interests and applications in data mining are increasing in many businesses. However, this technology is seldom applied to healthcare customer experience management. The paper shows that efficient and effective application of data mining methods can aid the understanding of patient healthcare preferences.
Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy
NASA Astrophysics Data System (ADS)
Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.
2011-08-01
The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue
We present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from time-dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into a product eigenvalue problem that is self-adjoint with respect to a K-inner product. This product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. The solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. However, the other component of the eigenvector can be easily recovered in a postprocessing procedure. Therefore, the algorithms we present here are more efficient than existing algorithms that try to approximate both components of the eigenvectors simultaneously. The efficiency of the new algorithms is demonstrated by numerical examples.
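For orientation, the snippet below shows the standard symmetric LOBPCG iteration as available in SciPy on a toy tridiagonal matrix; the algorithms described above modify this building block (and the Davidson method) to work with a K-inner product for the structured linear-response problem, which is not reproduced here.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg

# Standard symmetric LOBPCG on a toy 1-D Laplacian-like matrix, shown only to
# illustrate the building block referred to above.
n, k = 400, 5
A = diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1]).tocsc()
X = np.random.default_rng(0).standard_normal((n, k))   # random initial block
M = diags(1.0 / A.diagonal())                           # Jacobi preconditioner
vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=500)
print(vals)                                             # k smallest eigenvalues
```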
Capellari, Giovanni; Eftekhar Azam, Saeed; Mariani, Stefano
2015-01-01
Health monitoring of lightweight structures, like thin flexible plates, is of interest in several engineering fields. In this paper, a recursive Bayesian procedure is proposed to monitor the health of such structures through data collected by a network of optimally placed inertial sensors. As the main drawback of standard monitoring procedures is linked to their computational cost, two remedies are jointly considered: first, an order reduction of the numerical model used to track the structural dynamics, enforced with proper orthogonal decomposition; and, second, an improved particle filter, which features an extended Kalman updating of each evolving particle before the resampling stage. The former remedy can reduce the number of effective degrees of freedom of the structural model to only a few (depending on the excitation), whereas the latter allows the evolution of damage to be tracked and located, thanks to the intricate formulation. To assess the effectiveness of the proposed procedure, the case of a plate subject to bending is investigated; it is shown that, when the procedure is appropriately fed with measurements, damage is efficiently and accurately estimated. PMID:26703615
NASA Astrophysics Data System (ADS)
Käser, Martin; Dumbser, Michael; de la Puente, Josep; Igel, Heiner
2007-01-01
We present a new numerical method to solve the heterogeneous anelastic, seismic wave equations with arbitrary high order accuracy in space and time on 3-D unstructured tetrahedral meshes. Using the velocity-stress formulation provides a linear hyperbolic system of equations with source terms that is completed by additional equations for the anelastic functions including the strain history of the material. These additional equations result from the rheological model of the generalized Maxwell body and permit the incorporation of realistic attenuation properties of viscoelastic material accounting for the behaviour of elastic solids and viscous fluids. The proposed method combines the Discontinuous Galerkin (DG) finite element (FE) method with the ADER approach using Arbitrary high order DERivatives for flux calculations. The DG approach, in contrast to classical FE methods, uses a piecewise polynomial approximation of the numerical solution which allows for discontinuities at element interfaces. Therefore, the well-established theory of numerical fluxes across element interfaces obtained by the solution of Riemann problems can be applied as in the finite volume framework. The main idea of the ADER time integration approach is a Taylor expansion in time in which all time derivatives are replaced by space derivatives using the so-called Cauchy-Kovalewski procedure which makes extensive use of the governing PDE. Due to the ADER time integration technique the same approximation order in space and time is achieved automatically and the method is a one-step scheme advancing the solution for one time step without intermediate stages. To this end, we introduce a new unrolled recursive algorithm for efficiently computing the Cauchy-Kovalewski procedure by making use of the sparsity of the system matrices. The numerical convergence analysis demonstrates that the new schemes provide very high order accuracy even on unstructured tetrahedral meshes while computational cost and storage space for a desired accuracy can be reduced when applying higher degree approximation polynomials. In addition, we investigate the increase in computing time, when the number of relaxation mechanisms due to the generalized Maxwell body are increased. An application to a well-acknowledged test case and comparisons with analytic and reference solutions, obtained by different well-established numerical methods, confirm the performance of the proposed method. Therefore, the development of the highly accurate ADER-DG approach for tetrahedral meshes including viscoelastic material provides a novel, flexible and efficient numerical technique to approach 3-D wave propagation problems including realistic attenuation and complex geometry.
Improving the thermal efficiency of a jaggery production module using a fire-tube heat exchanger.
La Madrid, Raul; Orbegoso, Elder Mendoza; Saavedra, Rafael; Marcelo, Daniel
2017-12-15
Jaggery is a product obtained after heating and evaporation processes have been applied to sugar cane juice via the addition of thermal energy, followed by a crystallisation process through mechanical agitation. At present, jaggery production uses furnaces and pans that are designed empirically, based on trial-and-error procedures, which results in operation at low thermal efficiency. To rectify these deficiencies, this study proposes the use of fire-tube pans to increase heat transfer from the flue gases to the sugar cane juice. With the aim of increasing the thermal efficiency of a jaggery installation, a computational fluid dynamics (CFD)-based model was used as a numerical tool to design a fire-tube pan that would replace the existing finned flat pan. For this purpose, the original configuration of the jaggery furnace was simulated via a pre-validated CFD model in order to calculate its current thermal performance. Then, the newly designed fire-tube pan was virtually substituted into the jaggery furnace with the aim of numerically estimating the thermal performance at the same operating conditions. A comparison of both simulations highlighted a growth of the heat transfer rate of around 105% in the heating/evaporation processes when the fire-tube pan replaced the original finned flat pan. This enhancement impacted the jaggery production installation, whereby the thermal efficiency of the installation increased from 31.4% to 42.8%.
NASA Astrophysics Data System (ADS)
Han, Song; Zhang, Wei; Zhang, Jie
2017-09-01
A fast sweeping method (FSM) determines the first-arrival traveltimes of seismic waves by sweeping the velocity model in different directions while applying a local solver. It is an efficient way to numerically solve Hamilton-Jacobi equations for traveltime calculations. In this study, we develop an improved FSM to calculate the first-arrival traveltimes of quasi-P (qP) waves in 2-D tilted transversely isotropic (TTI) media. The local solver uses the coupled slowness surface of the qP and quasi-SV (qSV) waves to form a quartic equation, which is solved numerically to obtain the possible qP-wave traveltimes. The proposed quartic solver uses Fermat's principle to limit the range of possible solutions and then applies a bisection procedure to efficiently determine the real roots. With causality enforced during the sweeps, our FSM converges in a few iterations, the exact number depending on the complexity of the velocity model. To improve the accuracy, we employ high-order finite difference schemes and derive the second-order formulae. No weak-anisotropy assumption is made, and no approximation is made to the complex slowness surface of the qP-wave. Comparison with traveltimes calculated by a horizontal slowness shooting method demonstrates the validity and accuracy of our FSM.
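The core of the local solver described above is a bracketed one-dimensional root search. Below is a minimal sketch of bisection applied to a quartic, with the bracketing interval standing in for the bounds supplied by Fermat's principle; the coefficients and bounds are hypothetical, not derived from a TTI slowness surface.

```python
import numpy as np

def bisection_root(f, lo, hi, tol=1e-12, max_iter=200):
    """Find a root of f in [lo, hi] by bisection; f(lo) and f(hi) must bracket it."""
    flo, fhi = f(lo), f(hi)
    if flo == 0.0:
        return lo
    if fhi == 0.0:
        return hi
    if flo * fhi > 0.0:
        raise ValueError("interval does not bracket a root")
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if fmid == 0.0 or hi - lo < tol:
            return mid
        if flo * fmid < 0.0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

# Hypothetical quartic in the traveltime unknown t; the coefficients stand in for
# those assembled from the coupled qP/qSV slowness surface.
c = [1.0, -3.0, 0.5, 2.0, -0.4]
quartic = lambda t: (((c[0] * t + c[1]) * t + c[2]) * t + c[3]) * t + c[4]

# Fermat's principle supplies bounds on the admissible traveltime; here they are
# illustrative numbers chosen so that the interval brackets a real root.
t_min, t_max = 0.0, 1.0
print(bisection_root(quartic, t_min, t_max))
```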
Numerical solutions of the Navier-Stokes equations for transonic afterbody flows
NASA Technical Reports Server (NTRS)
Swanson, R. C., Jr.
1980-01-01
The time-dependent Navier-Stokes equations in mass-averaged variables are solved for transonic flow over axisymmetric boattail plume simulator configurations. Numerical solution of these equations is accomplished with the unsplit explicit finite difference algorithm of MacCormack. A grid subcycling procedure and computer code vectorization are used to improve computational efficiency. The two-layer algebraic turbulence models of Cebeci-Smith and Baldwin-Lomax are employed for investigating turbulence closure. Two relaxation models based on these baseline models are also considered. Results in the form of surface pressure distributions for three different circular-arc boattails at two free-stream Mach numbers are compared with experimental data. The pressures in the recirculating flow region for all separated cases are poorly predicted with the baseline turbulence models. Significant improvements in the predictions are usually obtained by using the relaxation models.
Design and simulation of a cable-pulley-based transmission for artificial ankle joints
NASA Astrophysics Data System (ADS)
Liu, Huaxin; Ceccarelli, Marco; Huang, Qiang
2016-06-01
In this paper, a mechanical transmission based on cable pulleys is proposed for human-like actuation of human-scale artificial ankle joints. The anatomical and articular characteristics of the human ankle are discussed to provide proper biomimetic inspiration for designing accurate, efficient, and robust motion control of artificial ankle joint devices. The design procedure is presented through conceptual considerations and design details for an interactive solution of the transmission system. A mechanical design is elaborated for the pitch angular motion of the ankle joint. A multi-body dynamic simulation model is elaborated accordingly and evaluated numerically in the ADAMS environment. Results of the numerical simulations are discussed to evaluate the dynamic performance of the proposed design solution and to investigate the feasibility of the proposed design in future applications for humanoid robots.
A split finite element algorithm for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Baker, A. J.
1979-01-01
An accurate and efficient numerical solution algorithm is established for solution of the high Reynolds number limit of the Navier-Stokes equations governing the multidimensional flow of a compressible essentially inviscid fluid. Finite element interpolation theory is used within a dissipative formulation established using Galerkin criteria within the Method of Weighted Residuals. An implicit iterative solution algorithm is developed, employing tensor product bases within a fractional steps integration procedure, that significantly enhances solution economy concurrent with sharply reduced computer hardware demands. The algorithm is evaluated for resolution of steep field gradients and coarse grid accuracy using both linear and quadratic tensor product interpolation bases. Numerical solutions for linear and nonlinear, one, two and three dimensional examples confirm and extend the linearized theoretical analyses, and results are compared to competitive finite difference derived algorithms.
NASA Astrophysics Data System (ADS)
Yin, Sisi; Nishi, Tatsushi
2014-11-01
A quantity discount policy governs the trade-off in prices between suppliers and manufacturers when production levels change due to demand fluctuations in a real market. In this paper, quantity discount models that simultaneously consider the selection of contract suppliers, production quantity and inventory are addressed. The supply chain planning problem with quantity discounts under demand uncertainty is formulated as a mixed-integer nonlinear programming problem (MINLP) with integral terms. We apply an outer-approximation method to solve MINLP problems. In order to improve the efficiency of the proposed method, the problem is reformulated as a stochastic model, replacing the integral terms by using a normalisation technique. We present numerical examples to demonstrate the efficiency of the proposed method.
NASA Astrophysics Data System (ADS)
Shang, J. S.; Andrienko, D. A.; Huang, P. G.; Surzhikov, S. T.
2014-06-01
An efficient computational capability for nonequilibrium radiation simulation via the ray tracing technique has been developed. The radiative rate equation is iteratively coupled with the aerodynamic conservation laws, including nonequilibrium chemical and chemical-physical kinetic models. The spectral properties along tracing rays are determined by a space partition algorithm based on a nearest-neighbor search, and the numerical accuracy is further enhanced by local resolution refinement using the Gauss-Lobatto polynomial. The interdisciplinary governing equations are solved by an implicit delta formulation through the diminishing residual approach. The axisymmetric radiating flow fields over the reentry RAM-C II probe have been simulated and verified with flight data and previous solutions obtained by traditional methods. A computational efficiency gain of nearly forty times is realized over existing simulation procedures.
The FLAME-slab method for electromagnetic wave scattering in aperiodic slabs
NASA Astrophysics Data System (ADS)
Mansha, Shampy; Tsukerman, Igor; Chong, Y. D.
2017-12-01
The proposed numerical method, "FLAME-slab," solves electromagnetic wave scattering problems for aperiodic slab structures by exploiting short-range regularities in these structures. The computational procedure involves special difference schemes with high accuracy even on coarse grids. These schemes are based on Trefftz approximations, utilizing functions that locally satisfy the governing differential equations, as is done in the Flexible Local Approximation Method (FLAME). Radiation boundary conditions are implemented via Fourier expansions in the air surrounding the slab. When applied to ensembles of slab structures with identical short-range features, such as amorphous or quasicrystalline lattices, the method is significantly more efficient, both in runtime and in memory consumption, than traditional approaches. This efficiency is due to the fact that the Trefftz functions need to be computed only once for the whole ensemble.
NASA Technical Reports Server (NTRS)
Yee, H. C.; Shinn, J. L.
1986-01-01
Some numerical aspects of finite-difference algorithms for nonlinear multidimensional hyperbolic conservation laws with stiff nonhomogeneous (source) terms are discussed. If the stiffness is entirely dominated by the source term, a semi-implicit shock-capturing method is proposed, provided that the Jacobian of the source terms possesses certain properties. The proposed semi-implicit method can be viewed as a variant of the Bussing and Murman point-implicit scheme with a more appropriate numerical dissipation for the computation of strong shock waves. However, if the stiffness is not solely dominated by the source terms, a fully implicit method would be a better choice. The situation is complicated for problems in more than one dimension, and the presence of stiff source terms further complicates the solution procedures for alternating direction implicit (ADI) methods. Several alternatives are discussed. The primary motivation for constructing these schemes was to address thermally and chemically nonequilibrium flows in the hypersonic regime. Due to the unique structure of the eigenvalues and eigenvectors for fluid flows of this type, the computation can be simplified, thus providing a more efficient solution procedure than one might have anticipated.
Wigner phase space distribution via classical adiabatic switching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bose, Amartya; Makri, Nancy; Department of Physics, University of Illinois, 1110 W. Green Street, Urbana, Illinois 61801
2015-09-21
Evaluation of the Wigner phase space density for systems of many degrees of freedom presents an extremely demanding task because of the oscillatory nature of the Fourier-type integral. We propose a simple and efficient, approximate procedure for generating the Wigner distribution that avoids the computational difficulties associated with the Wigner transform. Starting from a suitable zeroth-order Hamiltonian, for which the Wigner density is available (either analytically or numerically), the phase space distribution is propagated in time via classical trajectories, while the perturbation is gradually switched on. According to the classical adiabatic theorem, each trajectory maintains a constant action if the perturbation is switched on infinitely slowly. We show that the adiabatic switching procedure produces the exact Wigner density for harmonic oscillator eigenstates and also for eigenstates of anharmonic Hamiltonians within the Wentzel-Kramers-Brillouin (WKB) approximation. We generalize the approach to finite temperature by introducing a density rescaling factor that depends on the energy of each trajectory. Time-dependent properties are obtained simply by continuing the integration of each trajectory under the full target Hamiltonian. Further, by construction, the generated approximate Wigner distribution is invariant under classical propagation, and thus, thermodynamic properties are strictly preserved. Numerical tests on one-dimensional and dissipative systems indicate that the method produces results in very good agreement with those obtained by full quantum mechanical methods over a wide temperature range. The method is simple and efficient, as it requires no input besides the force fields required for classical trajectory integration, and is ideal for use in quasiclassical trajectory calculations.
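A minimal sketch of the adiabatic switching idea is given below, assuming a harmonic zeroth-order Hamiltonian (whose ground-state Wigner density is Gaussian) and a quartic perturbation switched on by a smooth ramp; the parameters, the switching function, and the velocity-Verlet propagator are illustrative choices, not those of the paper.

```python
import numpy as np

# Zeroth-order system: harmonic oscillator with unit mass and frequency omega;
# its ground-state Wigner density is a Gaussian in (x, p) with hbar = 1.
# The anharmonic perturbation c_anh * x**4 is switched on over a long time T_switch.
omega, c_anh = 1.0, 0.1
T_switch, dt = 200.0, 0.01
n_traj = 2000

rng = np.random.default_rng(0)
x = rng.normal(0.0, np.sqrt(0.5 / omega), n_traj)   # Wigner width in position
p = rng.normal(0.0, np.sqrt(0.5 * omega), n_traj)   # Wigner width in momentum

def switch(t):
    """Smooth ramp from 0 to 1; the precise functional form is a modelling choice."""
    s = t / T_switch
    return s - np.sin(2.0 * np.pi * s) / (2.0 * np.pi)

def force(x, lam):
    return -omega**2 * x - lam * 4.0 * c_anh * x**3

# Velocity-Verlet propagation of all trajectories while the perturbation is ramped on.
n_steps = int(T_switch / dt)
f = force(x, switch(0.0))
for i in range(n_steps):
    p_half = p + 0.5 * dt * f
    x = x + dt * p_half
    f = force(x, switch((i + 1) * dt))
    p = p_half + 0.5 * dt * f

# (x, p) now samples an approximation to the Wigner density of the anharmonic system.
```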
Assaying Oxidative Coupling Activity of CYP450 Enzymes.
Agarwal, Vinayak
2018-01-01
Cytochrome P450 (CYP450) enzymes are ubiquitous catalysts in natural product biosynthetic schemes where they catalyze numerous different transformations using radical intermediates. In this protocol, we describe procedures to assay the activity of a marine bacterial CYP450 enzyme Bmp7 which catalyzes the oxidative radical coupling of polyhalogenated aromatic substrates. The broad substrate tolerance of Bmp7, together with rearrangements of the aryl radical intermediates leads to a large number of products to be generated by the enzymatic action of Bmp7. The complexity of the product pool generated by Bmp7 thus presents an analytical challenge for structural elucidation. To address this challenge, we describe mass spectrometry-based procedures to provide structural insights into aryl crosslinked products generated by Bmp7, which can complement subsequent spectroscopic experiments. Using the procedures described here, for the first time, we show that Bmp7 can efficiently accept polychlorinated aryl substrates, in addition to the physiological polybrominated substrates for the biosynthesis of polyhalogenated marine natural products. © 2018 Elsevier Inc. All rights reserved.
Modeling Geometry and Progressive Failure of Material Interfaces in Plain Weave Composites
NASA Technical Reports Server (NTRS)
Hsu, Su-Yuen; Cheng, Ron-Bin
2010-01-01
A procedure combining a geometrically nonlinear, explicit-dynamics contact analysis, computer aided design techniques, and elasticity-based mesh adjustment is proposed to efficiently generate realistic finite element models for meso-mechanical analysis of progressive failure in textile composites. In the procedure, the geometry of fiber tows is obtained by imposing a fictitious expansion on the tows. Meshes resulting from the procedure are conformal with the computed tow-tow and tow-matrix interfaces but are incongruent at the interfaces. The mesh interfaces are treated as cohesive contact surfaces not only to resolve the incongruence but also to simulate progressive failure. The method is employed to simulate debonding at the material interfaces in a ceramic-matrix plain weave composite with matrix porosity and in a polymeric matrix plain weave composite without matrix porosity, both subject to uniaxial cyclic loading. The numerical results indicate progression of the interfacial damage during every loading and reverse loading event in a constant strain amplitude cyclic process. However, the composites show different patterns of damage advancement.
Supercomputing Aspects for Simulating Incompressible Flow
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Kiris, Cetin C.
2000-01-01
The primary objective of this research is to support the design of liquid rocket systems for the Advanced Space Transportation System. Since the space launch systems of the near future are likely to rely on liquid rocket engines, increasing the efficiency and reliability of the engine components is an important task. One of the major problems in the liquid rocket engine is to understand the fluid dynamics of fuel and oxidizer flows from the fuel tank to the plume. Understanding the flow through the entire turbo-pump geometry through numerical simulation will be of significant value toward design. One of the milestones of this effort is to develop, apply and demonstrate the capability and accuracy of 3D CFD methods as efficient design analysis tools on high performance computer platforms. The development of the Message Passing Interface (MPI) and Multi Level Parallel (MLP) versions of the INS3D code is currently underway. The serial version of the INS3D code is a multidimensional incompressible Navier-Stokes solver based on overset grid technology. INS3D-MPI is based on explicit message passing across processors and is primarily suited for distributed memory systems. INS3D-MLP is based on the multi-level parallel method and is suitable for distributed-shared memory systems. For the entire turbo-pump simulations, moving boundary capability and efficient time-accurate integration methods are built into the flow solver. To handle the geometric complexity and moving boundary problems, an overset grid scheme is incorporated with the solver so that new connectivity data are obtained at each time step. The Chimera overlapped grid scheme allows subdomains to move relative to each other and provides great flexibility when the boundary movement creates large displacements. Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow, which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in the present computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.
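The pressure projection step mentioned above can be illustrated on a doubly periodic grid, where the Poisson solve reduces to a pointwise operation in Fourier space. This is only a schematic analogue of a projection method, not the INS3D implementation (which uses overset grids and iterative solvers such as GMRES-ILU(0)); the sample velocity field is arbitrary.

```python
import numpy as np

def project_divergence_free(u, v, dx, dy):
    """Project a periodic 2-D velocity field (u, v) onto its divergence-free part.

    In Fourier space the projection is algebraic, u_hat <- u_hat - k (k . u_hat) / |k|^2,
    which is equivalent to solving a Poisson equation for a pressure-like potential
    and subtracting its gradient.
    """
    ny, nx = u.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                       # the mean (k = 0) mode needs no correction
    k_dot_u = KX * u_hat + KY * v_hat
    u_hat -= KX * k_dot_u / k2
    v_hat -= KY * k_dot_u / k2
    return np.real(np.fft.ifft2(u_hat)), np.real(np.fft.ifft2(v_hat))

# Quick use: build a velocity field with a divergent part and project it out.
n = 64
xs = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(xs, xs)
u = np.sin(X) * np.cos(Y) + 0.3 * np.cos(X)    # solenoidal part plus a divergent part
v = -np.cos(X) * np.sin(Y) + 0.2 * np.sin(Y)
u_p, v_p = project_divergence_free(u, v, xs[1] - xs[0], xs[1] - xs[0])
```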
Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures
2017-10-04
Report: Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures (Chapel Hill). The project developed algorithms for scientific and geometric computing that exploit the power and performance efficiency of heterogeneous shared memory architectures.
NASA Technical Reports Server (NTRS)
Balakrishnan, L.; Abdol-Hamid, Khaled S.
1992-01-01
Compressible jet plumes were studied using a two-equation turbulence model. A space marching procedure based on an upwind numerical scheme was used to solve the governing equations and turbulence transport equations. The computed results indicate that extending the space marching procedure for solving supersonic/subsonic mixing problems can be stable, efficient and accurate. Moreover, a newly developed correction for compressible dissipation has been verified in fully expanded and underexpanded jet plumes. For a sonic jet plume, no improvement in results over the standard two-equation model was seen. However for a supersonic jet plume, the correction due to compressible dissipation successfully predicted the reduced spreading rate of the jet compared to the sonic case. The computed results were generally in good agreement with the experimental data.
Side-branch resonators modelling with Green's function methods
NASA Astrophysics Data System (ADS)
Perrey-Debain, E.; Maréchal, R.; Ville, J. M.
2014-09-01
This paper deals with strategies for computing efficiently the propagation of sound waves in ducts containing passive components. In many cases of practical interest, these components are acoustic cavities which are connected to the duct. Though standard Finite Element software could be used for the numerical prediction of sound transmission through such a system, the method is known to be extremely demanding, both in terms of data preparation and computation, especially in the mid-frequency range. To alleviate this, a numerical technique that exploits the benefit of the FEM and the BEM approach has been devised. First, a set of eigenmodes is computed in the cavity to produce a numerical impedance matrix connecting the pressure and the acoustic velocity on the duct wall interface. Then an integral representation for the acoustic pressure in the main duct is used. By choosing an appropriate Green's function for the duct, the integration procedure is limited to the duct-cavity interface only. This allows an accurate computation of the scattering matrix of such an acoustic system with a numerical complexity that grows very mildly with the frequency. Typical applications involving Helmholtz and Herschel-Quincke resonators are presented.
NASA Astrophysics Data System (ADS)
Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu
2015-07-01
The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet the increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multi-phase flow and reactive transport problems, we developed a high performance computing code, THC-MP, for massively parallel computers, which greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in the THC-MP. We designed the distributed data structure and implemented the data initialization and exchange between the computing nodes and the core solving module using a hybrid parallel iterative and direct solver. Numerical accuracy of the THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from the parallel computation with those from the sequential computation (original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results successfully demonstrate the enhanced performance of the THC-MP on parallel computing facilities.
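At the heart of such a domain-decomposition code is the exchange of ghost (halo) values between neighbouring subdomains at every step. The sketch below shows that pattern for a 1-D decomposition using mpi4py; the array sizes and field contents are placeholders, not the THC-MP data structures.

```python
# Run with, e.g.: mpiexec -n 4 python halo_exchange.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 100                                   # interior cells owned by this rank
field = np.full(n_local + 2, float(rank))       # +2 ghost cells, one per side

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange halos: send the first interior cell to the left neighbour while
# receiving into the right ghost cell, and vice versa.
comm.Sendrecv(sendbuf=field[1:2], dest=left, recvbuf=field[-1:], source=right)
comm.Sendrecv(sendbuf=field[-2:-1], dest=right, recvbuf=field[0:1], source=left)

# After the exchange each rank can apply its local stencil or solver update on
# cells 1..n_local using the freshly received neighbour values.
```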
Radioactive Phosphorylation of Alcohols to Monitor Biocatalytic Diels-Alder Reactions
Nierth, Alexander; Jäschke, Andres
2011-01-01
Nature has efficiently adopted phosphorylation for numerous biological key processes, spanning from cell signaling to energy storage and transmission. For the bioorganic chemist the number of possible ways to attach a single phosphate for radioactive labeling is surprisingly small. Here we describe a very simple and fast one-pot synthesis to phosphorylate an alcohol with phosphoric acid using trichloroacetonitrile as activating agent. Using this procedure, we efficiently attached the radioactive phosphorus isotope 32P to an anthracene diene, which is a substrate for the Diels-Alderase ribozyme—an RNA sequence that catalyzes the eponymous reaction. We used the 32P-substrate for the measurement of RNA-catalyzed reaction kinetics of several dye-labeled ribozyme variants for which precise optical activity determination (UV/vis, fluorescence) failed due to interference of the attached dyes. The reaction kinetics were analyzed by thin-layer chromatographic separation of the 32P-labeled reaction components and densitometric analysis of the substrate and product radioactivities, thereby allowing iterative optimization of the dye positions for future single-molecule studies. The phosphorylation strategy with trichloroacetonitrile may be applicable for labeling numerous other compounds that contain alcoholic hydroxyl groups. PMID:21731729
An efficient technique for higher order fractional differential equation.
Ali, Ayyaz; Iqbal, Muhammad Asad; Ul-Hassan, Qazi Mahmood; Ahmad, Jamshad; Mohyud-Din, Syed Tauseef
2016-01-01
In this study, we establish exact solutions of fractional Kawahara equation by using the idea of [Formula: see text]-expansion method. The results of different studies show that the method is very effective and can be used as an alternative for finding exact solutions of nonlinear evolution equations (NLEEs) in mathematical physics. The solitary wave solutions are expressed by the hyperbolic, trigonometric, exponential and rational functions. Graphical representations along with the numerical data reinforce the efficacy of the used procedure. The specified idea is very effective, expedient for fractional PDEs, and could be extended to other physical problems.
Structural tailoring of advanced turboprops
NASA Technical Reports Server (NTRS)
Brown, K. W.; Hopkins, Dale A.
1988-01-01
The Structural Tailoring of Advanced Turboprops (STAT) computer program was developed to perform numerical optimization on highly swept propfan blades. The optimization procedure seeks to minimize an objective function defined as either: (1) direct operating cost of full scale blade or, (2) aeroelastic differences between a blade and its scaled model, by tuning internal and external geometry variables that must satisfy realistic blade design constraints. The STAT analysis system includes an aerodynamic efficiency evaluation, a finite element stress and vibration analysis, an acoustic analysis, a flutter analysis, and a once-per-revolution forced response life prediction capability. STAT includes all relevant propfan design constraints.
Bistatic passive radar simulator with spatial filtering subsystem
NASA Astrophysics Data System (ADS)
Hossa, Robert; Szlachetko, Boguslaw; Lewandowski, Andrzej; Górski, Maksymilian
2009-06-01
The purpose of this paper is to briefly introduce the structure and features of a virtual passive FM radar developed in the Matlab numerical computing environment and to present a number of alternative modes of its operation. The proposed solution is based on an analytic representation of the transmitted direct signals and the reflected echo signals. As a spatial filtering subsystem, a beamforming network with ULA and UCA dipole configurations dedicated to the bistatic radar concept is considered, and computationally efficient procedures are presented in detail. Finally, exemplary results of computer simulations of the elaborated virtual simulator are provided and discussed.
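The ULA beamforming network can be illustrated with a narrowband delay-and-sum beamformer that scans steering vectors over look angles; the carrier frequency, element count and spacing below are illustrative values, not the parameters of the simulator described above.

```python
import numpy as np

c = 3e8                      # propagation speed (m/s)
f0 = 100e6                   # illustrative FM-band carrier frequency (Hz)
lam = c / f0
M = 8                        # number of dipoles in the uniform linear array
d = lam / 2                  # element spacing

def steering_vector(theta_rad):
    m = np.arange(M)
    return np.exp(-2j * np.pi * d / lam * m * np.sin(theta_rad))

# Synthetic snapshots: a plane wave arriving from 25 degrees plus noise.
rng = np.random.default_rng(1)
theta_true = np.deg2rad(25.0)
snapshots = 256
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
X = np.outer(steering_vector(theta_true), s)
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

# Delay-and-sum: scan the beam and estimate output power versus look angle.
angles = np.deg2rad(np.linspace(-90, 90, 361))
power = [np.mean(np.abs(np.conj(steering_vector(a)) @ X) ** 2) / M**2 for a in angles]
print("estimated DOA (deg):", np.rad2deg(angles[int(np.argmax(power))]))
```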
Dissipative preparation of entangled many-body states with Rydberg atoms
NASA Astrophysics Data System (ADS)
Roghani, Maryam; Weimer, Hendrik
2018-07-01
We investigate a one-dimensional atomic lattice laser-driven to a Rydberg state, in which engineered dissipation channels lead to entanglement in the many-body system. In particular, we demonstrate the efficient generation of ground states of a frustration-free Hamiltonian, as well as states closely related to W states. We discuss the realization of the required coherent and dissipative terms, and we perform extensive numerical simulations characterizing the fidelity of the state preparation procedure. We identify the optimum parameters for high fidelity entanglement preparation and investigate the scaling with the size of the system.
Time-dependent grid adaptation for meshes of triangles and tetrahedra
NASA Technical Reports Server (NTRS)
Rausch, Russ D.
1993-01-01
This paper presents in viewgraph form a method of optimizing grid generation for unsteady CFD flow calculations that distributes the numerical error evenly throughout the mesh. Adaptive meshing is used to locally enrich in regions of relatively large errors and to locally coarsen in regions of relatively small errors. The enrichment/coarsening procedures are robust for isotropic cells; however, enrichment of high aspect ratio cells may fail near boundary surfaces with relatively large curvature. The enrichment indicator worked well for the cases shown, but in general requires user supervision for a more efficient solution.
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Warmbrodt, W.
1985-01-01
The combined effects of blade torsion and dynamic inflow on the aeroelastic stability of an elastic rotor blade in forward flight are studied. The governing sets of equations of motion (fully nonlinear, linearized, and multiblade equations) used in this study are derived symbolically using a program written in FORTRAN. Stability results are presented for different structural models with and without dynamic inflow. A combination of symbolic and numerical programs at the proper stage in the derivation process makes the obtainment of final stability results an efficient and straightforward procedure.
Antireflective coatings for multijunction solar cells under wide-angle ray bundles.
Victoria, Marta; Domínguez, César; Antón, Ignacio; Sala, Gabriel
2012-03-26
Two important aspects must be considered when optimizing antireflection coatings (ARCs) for multijunction solar cells to be used in concentrators: the angular light distribution over the cell created by the particular concentration system and the wide spectral bandwidth the solar cell is sensitive to. In this article, a numerical optimization procedure and its results are presented. The potential efficiency enhancement by means of ARC optimization is calculated for several concentrating PV systems. In addition, two methods for ARCs direct characterization are presented. The results of these show that real ARCs slightly underperform theoretical predictions.
Numerical solutions of a control problem governed by functional differential equations
NASA Technical Reports Server (NTRS)
Banks, H. T.; Thrift, P. R.; Burns, J. A.; Cliff, E. M.
1978-01-01
A numerical procedure is proposed for solving optimal control problems governed by linear retarded functional differential equations. The procedure is based on the idea of 'averaging approximations' due to Banks and Burns (1975). For illustration, numerical results generated on an IBM 370/158 computer, which demonstrate the rapid convergence of the method, are presented.
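A schematic version of an averaging-type approximation for a scalar linear retarded equation x'(t) = a x(t) + b x(t - r) is sketched below: the delay interval is split into N subintervals whose running averages are carried as extra state variables, turning the functional differential equation into an ordinary ODE system that a standard integrator can handle. This is only an illustration of the general idea; the construction in Banks and Burns and its use for optimal control differ in the details.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Scalar retarded equation x'(t) = a*x(t) + b*x(t - r) with constant initial history.
a, b, r = -1.0, -0.5, 1.0
N = 32                      # number of subintervals of the delay interval
h = r / N

def rhs(t, z):
    """z[0] ~ x(t); z[j] ~ running average of x over [t - j*h, t - (j-1)*h]."""
    dz = np.empty_like(z)
    dz[0] = a * z[0] + b * z[N]          # delayed value approximated by the last average
    dz[1:] = (z[:-1] - z[1:]) / h        # transport of the averages down the chain
    return dz

z0 = np.ones(N + 1)                      # history x(s) = 1 for s <= 0
sol = solve_ivp(rhs, (0.0, 10.0), z0, max_step=0.01)
x_approx = sol.y[0]                      # approximate solution of the delay equation
```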
A waste characterisation procedure for ADM1 implementation based on degradation kinetics.
Girault, R; Bridoux, G; Nauleau, F; Poullain, C; Buffet, J; Steyer, J-P; Sadowski, A G; Béline, F
2012-09-01
In this study, a procedure accounting for degradation kinetics was developed to split the total COD of a substrate into each input state variable required for Anaerobic Digestion Model n°1. The procedure is based on the combination of batch experimental degradation tests ("anaerobic respirometry") and numerical interpretation of the results obtained (optimisation of the ADM1 input state variable set). The effects of the main operating parameters, such as the substrate to inoculum ratio in batch experiments and the origin of the inoculum, were investigated. Combined with biochemical fractionation of the total COD of substrates, this method enabled determination of an ADM1-consistent input state variable set for each substrate with affordable identifiability. The substrate to inoculum ratio in the batch experiments and the origin of the inoculum influenced input state variables. However, based on results modelled for a CSTR fed with the substrate concerned, these effects were not significant. Indeed, if the optimal ranges of these operational parameters are respected, uncertainty in COD fractionation is mainly limited to temporal variability of the properties of the substrates. As the method is based on kinetics and is easy to implement for a wide range of substrates, it is a very promising way to numerically predict the effect of design parameters on the efficiency of an anaerobic CSTR. This method thus promotes the use of modelling for the design and optimisation of anaerobic processes. Copyright © 2012 Elsevier Ltd. All rights reserved.
Spitzer observatory operations: increasing efficiency in mission operations
NASA Astrophysics Data System (ADS)
Scott, Charles P.; Kahr, Bolinda E.; Sarrel, Marc A.
2006-06-01
This paper explores the how's and why's of the Spitzer Mission Operations System's (MOS) success, efficiency, and affordability in comparison to other observatory-class missions. MOS exploits today's flight, ground, and operations capabilities, embraces automation, and balances both risk and cost. With operational efficiency as the primary goal, MOS maintains a strong control process by translating lessons learned into efficiency improvements, thereby enabling the MOS processes, teams, and procedures to rapidly evolve from concept (through thorough validation) into in-flight implementation. Operational teaming, planning, and execution are designed to enable re-use. Mission changes, unforeseen events, and continuous improvement have often times forced us to learn to fly anew. Collaborative spacecraft operations and remote science and instrument teams have become well integrated, and worked together to improve and optimize each human, machine, and software-system element. Adaptation to tighter spacecraft margins has facilitated continuous operational improvements via automated and autonomous software coupled with improved human analysis. Based upon what we now know and what we need to improve, adapt, or fix, the projected mission lifetime continues to grow - as does the opportunity for numerous scientific discoveries.
NASA Astrophysics Data System (ADS)
Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide
2016-12-01
We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
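A Volterra equation of the first kind, int_0^t K(t, s) f(s) ds = g(t), can be discretized with a product-midpoint rule, which yields a lower-triangular system solvable by forward substitution. The sketch below shows that basic step on a toy kernel with a known answer; the coupled system and the regularization developed in the paper are not reproduced here.

```python
import numpy as np

def solve_volterra_first_kind(K, g, T, n):
    """Midpoint product quadrature for  int_0^t K(t, s) f(s) ds = g(t).

    The discretized system is lower triangular, so f is recovered by forward
    substitution at the midpoints s_j = (j - 1/2) h.  First-kind equations are
    mildly ill-posed; in practice some regularization or step control is needed.
    """
    h = T / n
    t = h * np.arange(1, n + 1)          # collocation points
    s = h * (np.arange(1, n + 1) - 0.5)  # quadrature midpoints
    f = np.zeros(n)
    for i in range(n):
        acc = sum(K(t[i], s[j]) * f[j] for j in range(i)) * h
        f[i] = (g(t[i]) - acc) / (K(t[i], s[i]) * h)
    return s, f

# Check on a case with a known answer: K = 1, g(t) = t**2 / 2  =>  f(s) = s.
s, f = solve_volterra_first_kind(lambda t, s: 1.0, lambda t: 0.5 * t**2, T=1.0, n=100)
print(np.max(np.abs(f - s)))             # small discretization error
```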
Finite element solution of optimal control problems with inequality constraints
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.
1990-01-01
A finite-element method based on a weak Hamiltonian form of the necessary conditions is summarized for optimal control problems. Very crude shape functions (so simple that element numerical quadrature is not necessary) can be used to develop an efficient procedure for obtaining candidate solutions (i.e., those which satisfy all the necessary conditions) even for highly nonlinear problems. An extension of the formulation allowing for discontinuities in the states and derivatives of the states is given. A theory that includes control inequality constraints is fully developed. An advanced launch vehicle (ALV) model is presented. The model involves staging and control constraints, thus demonstrating the full power of the weak formulation to date. Numerical results are presented along with total elapsed computer time required to obtain the results. The speed and accuracy in obtaining the results make this method a strong candidate for a real-time guidance algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Ray-Bing; Wang, Weichung; Jeff Wu, C. F.
2017-04-12
A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. Numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
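The role of posterior samples and credible intervals can be illustrated with the conjugate Gaussian special case of a normal prior on linear coefficients; the paper uses MCMC over an overcomplete basis, so the design matrix, prior scale and noise level below are purely illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 40, 6
X = rng.standard_normal((n, p))                 # stand-in design / basis matrix
beta_true = np.array([1.5, 0.0, -2.0, 0.0, 0.5, 0.0])
sigma, tau = 0.3, 2.0                           # noise sd and prior sd (assumed known)
y = X @ beta_true + sigma * rng.standard_normal(n)

# With a N(0, tau^2 I) prior and Gaussian noise the posterior of beta is Gaussian:
# Sigma_n = (X'X/sigma^2 + I/tau^2)^-1,  mu_n = Sigma_n X'y / sigma^2.
Sigma_n = np.linalg.inv(X.T @ X / sigma**2 + np.eye(p) / tau**2)
mu_n = Sigma_n @ X.T @ y / sigma**2

# Draw posterior samples and form 95% credible intervals for predictions at new inputs.
draws = rng.multivariate_normal(mu_n, Sigma_n, size=5000)
X_new = rng.standard_normal((10, p))
pred_draws = draws @ X_new.T                    # (5000, 10) posterior draws of the mean response
lower, upper = np.percentile(pred_draws, [2.5, 97.5], axis=0)
print(np.column_stack([X_new @ mu_n, lower, upper]))
```

The credible intervals obtained this way are what infill criteria for sequential design typically act on, e.g. by targeting inputs whose predictive interval is widest.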
Efficient Trajectory Propagation for Orbit Determination Problems
NASA Technical Reports Server (NTRS)
Roa, Javier; Pelaez, Jesus
2015-01-01
Regularized formulations of orbital motion apply a series of techniques to improve the numerical integration of the orbit. Despite their advantages and potential applications little attention has been paid to the propagation of the partial derivatives of the corresponding set of elements or coordinates, required in many orbit-determination scenarios and optimization problems. This paper fills this gap by presenting the general procedure for integrating the state-transition matrix of the system together with the nominal trajectory using regularized formulations and different sets of elements. The main difficulty comes from introducing an independent variable different from time, because the solution needs to be synchronized. The correction of the time delay is treated from a generic perspective not focused on any particular formulation. The synchronization using time-elements is also discussed. Numerical examples include strongly-perturbed orbits in the Pluto system, motivated by the recent flyby of the New Horizons spacecraft, together with a geocentric flyby of the NEAR spacecraft.
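Propagating the state-transition matrix alongside the trajectory amounts to integrating the variational equations dPhi/dt = J(t) Phi together with the equations of motion. The sketch below does this for unperturbed two-body motion using plain time as the independent variable, so the synchronization issues that arise with regularized, non-time variables do not appear; it illustrates only the joint propagation, with illustrative normalized units.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0   # gravitational parameter in normalized units (illustrative)

def two_body_with_stm(t, y):
    """State (r, v) and state-transition matrix Phi propagated together."""
    r, v = y[:3], y[3:6]
    Phi = y[6:].reshape(6, 6)
    rn = np.linalg.norm(r)
    a = -mu * r / rn**3
    # Jacobian of the two-body dynamics
    G = mu * (3.0 * np.outer(r, r) / rn**5 - np.eye(3) / rn**3)
    J = np.zeros((6, 6))
    J[:3, 3:] = np.eye(3)
    J[3:, :3] = G
    dPhi = J @ Phi
    return np.concatenate([v, a, dPhi.ravel()])

y0 = np.concatenate([[1.0, 0.0, 0.0], [0.0, 1.1, 0.0], np.eye(6).ravel()])
sol = solve_ivp(two_body_with_stm, (0.0, 10.0), y0, rtol=1e-10, atol=1e-12)
Phi_final = sol.y[6:, -1].reshape(6, 6)   # sensitivity of the final state to the initial state
```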
Thermal radiation view factor: Methods, accuracy and computer-aided procedures
NASA Technical Reports Server (NTRS)
Kadaba, P. V.
1982-01-01
Computer-aided thermal analysis programs, which predict whether orbiting equipment will remain within a predetermined acceptable temperature range prior to stationing it in various attitudes with respect to the Sun and the Earth, are examined. The complexity of the surface geometries suggests the use of numerical schemes for the determination of the radiation view factors. Basic definitions and standard methods which form the basis for various digital computer methods and various numerical methods are presented. The physical models and the mathematical methods on which a number of available programs are built are summarized. The strengths and weaknesses of the methods employed, the accuracy of the calculations and the time required for computations are evaluated. The situations where accuracies are important for energy calculations are identified, and methods to save computational time are proposed. A guide to the best use of the available programs at several centers and future choices for efficient use of digital computers are included in the recommendations.
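A basic numerical scheme for view factors is Monte Carlo integration of the defining double-area integral. The sketch below estimates the view factor between two parallel, directly opposed square plates; the geometry is a textbook configuration chosen only because its value is tabulated, not a case from the report.

```python
import numpy as np

def mc_view_factor(n_samples=200_000, side=1.0, gap=1.0, seed=0):
    """Monte Carlo estimate of the view factor between two parallel, directly
    opposed square plates of the given side length separated by 'gap'.

    F_12 = (1/A1) * iint cos(th1) cos(th2) / (pi S^2) dA1 dA2, estimated by
    sampling point pairs uniformly on both plates.
    """
    rng = np.random.default_rng(seed)
    p1 = rng.uniform(0.0, side, size=(n_samples, 2))      # points on plate 1 (z = 0)
    p2 = rng.uniform(0.0, side, size=(n_samples, 2))      # points on plate 2 (z = gap)
    s2 = np.sum((p1 - p2) ** 2, axis=1) + gap**2          # squared separation S^2
    # For parallel plates cos(th1) = cos(th2) = gap / S, so the kernel is gap^2 / (pi S^4).
    kernel = gap**2 / (np.pi * s2**2)
    area2 = side * side
    return area2 * np.mean(kernel)

print(mc_view_factor())   # for unit squares at unit separation the tabulated value is roughly 0.2
```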
Optimally analyzing and implementing of bolt fittings in steel structure based on ANSYS
NASA Astrophysics Data System (ADS)
Han, Na; Song, Shuangyang; Cui, Yan; Wu, Yongchun
2018-03-01
Owing to its excellent performance, the ANSYS simulation software has become an outstanding member of the computer-aided engineering (CAE) family; it is committed to innovation in engineering simulation to help users shorten the design process. First, a typical procedure for implementing CAE was designed and a framework for structural numerical analysis based on ANSYS technology was proposed. Then, an optimal analysis and implementation of bolt fittings in a beam-column joint of a steel structure was carried out in ANSYS, which displayed the contour plots of the XY shear stress, the YZ shear stress and the Y component of stress. Finally, the ANSYS simulation results were compared with the results measured in the experiment. The ANSYS simulation and analysis results are reliable, efficient and optimal. In the above process, a numerical simulation and analysis model of structural performance was explored for the practice of engineering enterprises.
Fuchs, Lynn S; Geary, David C; Compton, Donald L; Fuchs, Douglas; Hamlett, Carol L; Seethaler, Pamela M; Bryant, Joan D; Schatschneider, Christopher
2010-11-01
The purpose of this study was to examine the interplay between basic numerical cognition and domain-general abilities (such as working memory) in explaining school mathematics learning. First graders (N = 280; mean age = 5.77 years) were assessed on 2 types of basic numerical cognition, 8 domain-general abilities, procedural calculations, and word problems in fall and then reassessed on procedural calculations and word problems in spring. Development was indexed by latent change scores, and the interplay between numerical and domain-general abilities was analyzed by multiple regression. Results suggest that the development of different types of formal school mathematics depends on different constellations of numerical versus general cognitive abilities. When controlling for 8 domain-general abilities, both aspects of basic numerical cognition were uniquely predictive of procedural calculations and word problems development. Yet, for procedural calculations development, the additional amount of variance explained by the set of domain-general abilities was not significant, and only counting span was uniquely predictive. By contrast, for word problems development, the set of domain-general abilities did provide additional explanatory value, accounting for about the same amount of variance as the basic numerical cognition variables. Language, attentive behavior, nonverbal problem solving, and listening span were uniquely predictive.
A test of a vortex method for the computation of flap side edge noise
NASA Technical Reports Server (NTRS)
Martin, James E.
1995-01-01
Upon approach to landing, a major source location of airframe noise occurs at the side edges of the part-span, trailing-edge flaps. In the vicinity of these flaps, a complex arrangement of spanwise flow with primary and secondary tip vortices may form. Each of these vortices is observed to become fully three-dimensional. In the present study, a numerical model is developed to investigate the noise radiated from the side edge of a flap. The inherent three-dimensionality of this flow forces us to carefully consider a numerical scheme which will be both accurate in its prediction of the flow acoustics and also computationally efficient. Vortex methods have offered a fast and efficient means of simulating many two and three-dimensional, vortex dominated flows. In vortex methods, the time development of the flow is tracked by following exclusively the vorticity containing regions. Through the Biot-Savart law, knowledge of the vorticity field enables one to obtain flow quantities at any desired location during the flow evolution. In the present study, a numerical procedure has been developed which incorporates the Lagrangian approach of vortex methods into a calculation for the noise radiated by a flow-surface interaction. In particular, the noise generated by a vortex in the presence of a flat half plane is considered. This problem serves as a basic model of flap edge flow. It also permits the direct comparison between our computed results and previous acoustic analyses performed for this problem. In our numerical simulations, the mean flow is represented by the complex potential W(z) = Aiz^(1/2), which is obtained through conformal mapping techniques. The magnitude of the mean flow is controlled by the parameter A. This mean flow has been used in the acoustic analysis by Hardin and is considered a reasonable model of the flow field in the vicinity of the edge and away from the leading and trailing edges of the flap. To represent the primary vortex which occurs near the flap, a point vortex is introduced just below the flat half plane. Using a technique from panel methods, boundary conditions on the flap surface are satisfied by the introduction of a row of stationary point vortices along the extent of the flap. At each time step in the calculation, the strength of these vortices is chosen to eliminate the normal velocity at intermediary collocation points. The time development of the overall flow field is then tracked using standard techniques from vortex methods. Vortex trajectories obtained through this computation are in good agreement with those predicted by the analytical solution given by Hardin, thus verifying the viability of this procedure for more complex flow arrangements. For the flow acoustics, the Ffowcs Williams-Hawkings equation is numerically integrated. This equation supplies the far field acoustic pressure based upon pressures occurring along the flap surface. With our vortex method solution, surface pressures may be obtained with exceptional resolution. The Ffowcs Williams-Hawkings equation is integrated using a spatially fourth order accurate Simpson's rule. Rational function interpolation is used to obtain the surface pressures at the appropriate retarded times. Comparisons between our numerical results for the acoustic pressure and those predicted by the Hardin analysis have been made. Preliminary results indicate the need for an improved integration technique.
In the future, the numerical procedure developed in this study will be applied to the case of a rectangular flap of finite thickness and ultimately modified for application to the fully three-dimensional problem.
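The induced-velocity evaluation at the core of such a 2-D point-vortex calculation follows directly from the Biot-Savart law. The sketch below computes the velocity induced at a set of target points by free and bound point vortices; the positions, strengths and smoothing parameter are placeholders, not the values used in the study.

```python
import numpy as np

def induced_velocity(targets, vortex_pos, gamma, delta=1e-6):
    """2-D Biot-Savart velocity induced at 'targets' by point vortices.

    targets: (M, 2) evaluation points; vortex_pos: (N, 2); gamma: (N,) circulations.
    A small smoothing parameter delta avoids the singularity at zero separation.
    """
    dx = targets[:, None, 0] - vortex_pos[None, :, 0]
    dy = targets[:, None, 1] - vortex_pos[None, :, 1]
    r2 = dx**2 + dy**2 + delta**2
    u = np.sum(-gamma[None, :] * dy / (2.0 * np.pi * r2), axis=1)
    v = np.sum(gamma[None, :] * dx / (2.0 * np.pi * r2), axis=1)
    return u, v

# Illustrative use: one free vortex below a half plane plus a row of bound vortices
# along it, mimicking the arrangement described above with placeholder numbers.
bound = np.column_stack([np.linspace(-1.0, 0.0, 20), np.zeros(20)])
free = np.array([[-0.3, -0.1]])
positions = np.vstack([bound, free])
strengths = np.concatenate([0.05 * np.ones(20), [1.0]])
u, v = induced_velocity(np.array([[0.1, -0.2], [0.5, 0.3]]), positions, strengths)
```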
Three-dimensional near-field MIMO array imaging using range migration techniques.
Zhuge, Xiaodong; Yarovoy, Alexander G
2012-06-01
This paper presents a 3-D near-field imaging algorithm that is formulated for 2-D wideband multiple-input-multiple-output (MIMO) imaging array topology. The proposed MIMO range migration technique performs the image reconstruction procedure in the frequency-wavenumber domain. The algorithm is able to completely compensate the curvature of the wavefront in the near-field through a specifically defined interpolation process and provides extremely high computational efficiency by the application of the fast Fourier transform. The implementation aspects of the algorithm and the sampling criteria of a MIMO aperture are discussed. The image reconstruction performance and computational efficiency of the algorithm are demonstrated both with numerical simulations and measurements using 2-D MIMO arrays. Real-time 3-D near-field imaging can be achieved with a real-aperture array by applying the proposed MIMO range migration techniques.
Projected role of advanced computational aerodynamic methods at the Lockheed-Georgia company
NASA Technical Reports Server (NTRS)
Lores, M. E.
1978-01-01
Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large and specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should be used to provide a tool for designing better aerospace vehicles while at the same time reducing development costs, by performing computations using Navier-Stokes equation solution algorithms and by permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry; therefore, the computer should handle many remote terminals efficiently. The capability of transferring data to and from other computers needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations, and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.
NASA Astrophysics Data System (ADS)
Yang, Chongqiu; Peng, Yanke; Simon, Terrence; Cui, Tianhong
2018-04-01
Perovskite solar cells (PSC) have outstanding potential to be low-cost, high-efficiency photovoltaic devices. The PSC can be fabricated by numerous techniques; however, the power conversion efficiency (PCE) for the two-step-processed PSC falls behind that of the one-step method. In this work, we investigate the effects of relative humidity (RH) and dry air flow on the lead iodide (PbI2) solution deposition process. We conclude that the quality of the PbI2 film is critical to the development of the perovskite film and the performance of the PSC device. Low RH and dry air flow used during the PbI2 spin coating procedure can increase supersaturation concentration to form denser PbI2 nuclei and a more suitable PbI2 film. Moreover, airflow-assisted PbI2 drying and thermal annealing steps can smooth transformation from the nucleation stage to the crystallization stage.
Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.
Mathiassen, Svend Erik; Bolin, Kristian
2011-05-21
Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions, however, impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios.
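The optimization problem can be sketched in a simplified two-stage form: choose the number of subjects and the number of occasions per subject to minimize the variance of the exposure mean subject to a budget, with power-function costs. The variance components, unit costs and exponents below are illustrative, not estimates from the study, and the brute-force search stands in for the numerical optimization described above.

```python
# Schematic two-stage allocation: n subjects, k measurement occasions per subject.
sb2, sw2 = 1.0, 4.0          # between-subject and within-subject variance components
c_subj, c_occ = 50.0, 10.0   # unit costs for recruiting a subject / running one occasion
a_subj, a_occ = 1.0, 0.8     # power-function exponents (non-linear costs if != 1)
budget = 2000.0

def variance_of_mean(n, k):
    return sb2 / n + sw2 / (n * k)

def total_cost(n, k):
    return c_subj * n**a_subj + c_occ * (n * k)**a_occ

best = None
for n in range(1, 200):
    for k in range(1, 50):
        if total_cost(n, k) <= budget:
            v = variance_of_mean(n, k)
            if best is None or v < best[0]:
                best = (v, n, k)

v, n, k = best
print(f"optimal allocation: n={n} subjects, k={k} occasions, variance={v:.4f}")
```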
An interval model updating strategy using interval response surface models
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin
2015-08-01
Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. In practice, however, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information about a structure. In such cases an interval model updating procedure offers the advantage of problem simplification, since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models for the purpose of efficiently implementing the interval model updating procedure. The interval overestimation that frequently results from the use of interval arithmetic can be largely avoided, leading to accurate estimation of parameter intervals. Meanwhile, the establishment of the interval inverse problem is greatly simplified, accompanied by a saving of computational costs. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method have been verified against a numerical mass-spring system and also against a set of experimentally tested steel plates.
All-silicon nanorod-based Dammann gratings.
Li, Zile; Zheng, Guoxing; He, Ping'An; Li, Song; Deng, Qiling; Zhao, Jiangnan; Ai, Yong
2015-09-15
Established diffractive optical elements (DOEs), such as Dammann gratings, whose phase profile is controlled by etching different depths into a transparent dielectric substrate, suffer from a contradiction between the complexity of fabrication procedures and the performance of such gratings. In this Letter, we combine the concept of geometric phase and phase modulation in depth, and prove by theoretical analysis and numerical simulation that nanorod arrays etched on a silicon substrate have a characteristic of strong polarization conversion between two circularly polarized states and can act as a highly efficient half-wave plate. More importantly, only by changing the orientation angles of each nanorod can the arrays control the phase of a circularly polarized light, cell by cell. With the above principle, we report the realization of nanorod-based Dammann gratings reaching diffraction efficiencies of 50%-52% in the C-band fiber telecommunications window (1530-1565 nm). In this design, uniform 4×4 spot arrays with an extending angle of 59°×59° can be obtained in the far field. Because of these advantages of the single-step fabrication procedure, accurate phase controlling, and strong polarization conversion, nanorod-based Dammann gratings could be utilized for various practical applications in a range of fields.
NASA Technical Reports Server (NTRS)
Deshpande, Manohar
2011-01-01
A precise knowledge of the interior structure of asteroids, comets, and Near Earth Objects (NEOs) is important for assessing the consequences of their impacts with the Earth and for developing efficient mitigation strategies. Knowledge of their interior structure also provides opportunities for the extraction of raw materials for future space activities. Low frequency radio sounding is often proposed for investigating the interior structures of asteroids and NEOs. For designing and optimizing a radio sounding instrument, it is advantageous to have an accurate and efficient numerical simulation model of radio reflection and transmission through large, asteroid-shaped bodies. In this presentation we will present an electromagnetic (EM) scattering analysis of electrically large asteroids using (1) a weak-form formulation and (2) a more accurate hybrid finite element method/method of moments (FEM/MoM) to help estimate their internal structures. Assuming an internal structure with known electrical properties for a sample asteroid, we first develop its forward EM scattering model. From the knowledge of EM scattering as a function of frequency and look angle, we will then present the inverse scattering procedure to extract an image of its interior structure. The validity of the inverse scattering procedure will be demonstrated through a few simulation examples.
Cortical bone drilling: An experimental and numerical study.
Alam, Khurshid; Bahadur, Issam M; Ahmed, Naseer
2014-12-16
Bone drilling is a common surgical procedure in orthopedic, dental and neurosurgeries. In the conventional bone drilling process, the surgeon exerts a considerable amount of pressure to penetrate the drill into the bone tissue. Controlled penetration of the drill into the bone is necessary for safe and efficient drilling. The objective of this work was the development of a validated Finite Element (FE) model of cortical bone drilling. Drilling experiments were conducted on bovine cortical bone. The FE model of bone drilling was based on mechanical properties obtained from literature data and from additional microindentation tests conducted on the cortical bone. In the simulations, the magnitude of stress in the bone was found to decrease exponentially away from the lips of the drill. Feed rate was found to be the main factor influencing the force and torque in both the numerical simulations and the experiments. The drilling thrust force and torque were found to be unaffected by the drilling speed in the numerical simulations. Simulated forces and torques were compared with experimental results for similar drilling conditions and were found to be in good agreement. CONCLUSIONS: FE schemes may be successfully applied to model the complex kinematics of the bone drilling process.
A novel numerical framework for self-similarity in plasticity: Wedge indentation in single crystals
NASA Astrophysics Data System (ADS)
Juul, K. J.; Niordson, C. F.; Nielsen, K. L.; Kysar, J. W.
2018-03-01
A novel numerical framework for analyzing self-similar problems in plasticity is developed and demonstrated. Self-similar problems of this kind include processes such as stationary cracks, void growth, indentation etc. The proposed technique offers a simple and efficient method for handling this class of complex problems by avoiding issues related to traditional Lagrangian procedures. Moreover, the proposed technique allows for focusing the mesh in the region of interest. In the present paper, the technique is exploited to analyze the well-known wedge indentation problem of an elastic-viscoplastic single crystal. However, the framework may be readily adapted to any constitutive law of interest. The main focus herein is the development of the self-similar framework, while the indentation study serves primarily as verification of the technique by comparing to existing numerical and analytical studies. In this study, the three most common metal crystal structures will be investigated, namely the face-centered cubic (FCC), body-centered cubic (BCC), and hexagonal close packed (HCP) crystal structures, where the stress and slip rate fields around the moving contact point singularity are presented.
NASA Astrophysics Data System (ADS)
Kudryavtsev, O.; Rodochenko, V.
2018-03-01
We propose a new general numerical method aimed at solving integro-differential equations with variable coefficients. The problem under consideration arises in finance, in the context of pricing barrier options in a wide class of stochastic volatility models with jumps. To handle the effect of the correlation between the price and the variance, we use a suitable substitution for the processes. Then we construct a Markov-chain approximation for the variance process on small time intervals and apply a maturity randomization technique. The result is a system of boundary problems for integro-differential equations with constant coefficients on the line at each vertex of the chain. We solve the arising problems using a numerical Wiener-Hopf factorization method. The approximate formulae for the factors are efficiently implemented by means of the Fast Fourier Transform. Finally, we use a recurrent procedure that moves backwards in time on the variance tree. We demonstrate the convergence of the method using Monte-Carlo simulations and compare our results with the results obtained by the Wiener-Hopf method with closed-form expressions for the factors.
A robust component mode synthesis method for stochastic damped vibroacoustics
NASA Astrophysics Data System (ADS)
Tran, Quang Hung; Ouisse, Morvan; Bouhaddi, Noureddine
2010-01-01
In order to reduce vibration or sound levels in industrial vibroacoustic problems, a low-cost and efficient way consists of introducing visco- and poro-elastic materials either on the structure or on the cavity walls. Depending on the frequency range of interest, several numerical approaches can be used to estimate the behavior of the coupled problem. In the context of low frequency applications related to acoustic cavities with surrounding vibrating structures, the finite element method (FEM) is one of the most efficient techniques. Nevertheless, industrial problems lead to large FE models which are time-consuming in updating or optimization processes. A classical way to reduce calculation time is the component mode synthesis (CMS) method, whose classical formulation is not always efficient for predicting the dynamical behavior of structures including visco-elastic and/or poro-elastic patches. To ensure an efficient prediction, the fluid and structural bases used for the model reduction then need to be updated as a result of changes during a parametric optimization procedure. For complex models, this leads to prohibitive numerical costs in the optimization phase or for the management and propagation of uncertainties in the stochastic vibroacoustic problem. In this paper, the formulation of an alternative CMS method is proposed and compared to the classical (u, p) CMS method: the Ritz basis is completed with static residuals associated with visco-elastic and poro-elastic behaviors. This basis is also enriched by the static response to residual forces due to structural modifications, resulting in a so-called robust basis, which is also well suited to Monte Carlo simulations for uncertainty propagation using reduced models.
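The following sketch illustrates, in generic terms, the idea of enriching a truncated modal basis with static residual vectors before projection; it is not the authors' (u, p) vibroacoustic formulation. The matrices K, M and the residual force patterns F_res are placeholders assumed to come from a discretized structural model.

```python
# Generic sketch: enrich truncated normal modes with static residual vectors and project.
import numpy as np
from scipy.linalg import eigh

def robust_basis(K, M, F_res, n_modes=10):
    # truncated normal modes of the nominal conservative problem K phi = lam M phi
    lam, Phi = eigh(K, M)
    Phi = Phi[:, :n_modes]
    # static residual vectors: responses of the nominal stiffness to residual forces
    R = np.linalg.solve(K, F_res)
    # concatenate and orthonormalize to obtain the enriched ("robust") basis
    T, _ = np.linalg.qr(np.hstack([Phi, R]))
    return T

def reduce_model(K, M, T):
    # Galerkin projection of the full operators onto the enriched basis
    return T.T @ K @ T, T.T @ M @ T

if __name__ == "__main__":
    n = 50
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    K = A @ A.T + n * np.eye(n)          # SPD stiffness (toy)
    M = np.eye(n)                        # unit mass (toy)
    F_res = rng.standard_normal((n, 3))  # three residual force patterns (assumed)
    T = robust_basis(K, M, F_res)
    Kr, Mr = reduce_model(K, M, T)
    print("reduced size:", Kr.shape)
```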
Fluid mechanics of swimming bacteria with multiple flagella.
Kanehl, Philipp; Ishikawa, Takuji
2014-04-01
It is known that some kinds of bacteria swim by forming a bundle of their multiple flagella. However, the details of flagellar synchronization, as well as the swimming efficiency of such bacteria, have not been fully understood. In this study, the swimming of multiflagellated bacteria is investigated numerically by the boundary element method. We assume that the cell body is a rigid ellipsoid and the flagella are rigid helices suspended on flexible hooks. Motors apply constant torque to the hooks, rotating the flagella either clockwise or counterclockwise. When all flagella are rotated clockwise, bundling of all flagella is observed in every simulated case. It is demonstrated that the counter rotation of the body speeds up the bundling process. During this procedure the flagella synchronize due to hydrodynamic interactions. Moreover, the results illustrate that, during running, the multiflagellated bacterium shows a higher propulsive efficiency (distance traveled per flagellar rotation) than a bacterium with a single thick helix. With an increasing number of flagella the propulsive efficiency increases, whereas the energetic efficiency decreases, which indicates that multiflagellated bacteria assign less priority to energetic efficiency than to motility. These findings form a fundamental basis for understanding bacterial physiology and metabolism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, Yan-Lin, E-mail: yanlin.shao@dnvgl.com; Faltinsen, Odd M.
2014-10-01
We propose a new efficient and accurate numerical method based on harmonic polynomials to solve boundary value problems governed by the 3D Laplace equation. The computational domain is discretized by overlapping cells. Within each cell, the velocity potential is represented by a linear superposition of a complete set of harmonic polynomials, which are the elementary solutions of the Laplace equation. By its definition, the method is named the Harmonic Polynomial Cell (HPC) method. The accuracy and efficiency of the HPC method are demonstrated by studying analytical cases. Comparisons are made with some other existing boundary element based methods, e.g. the Quadratic Boundary Element Method (QBEM) and the Fast Multipole Accelerated QBEM (FMA-QBEM), and with a fourth order Finite Difference Method (FDM). To demonstrate the applications of the method, it is applied to some studies relevant for marine hydrodynamics. Sloshing in 3D rectangular tanks, a fully-nonlinear numerical wave tank, fully-nonlinear wave focusing on a semi-circular shoal, and the nonlinear wave diffraction of a bottom-mounted cylinder in regular waves are studied. The comparisons with experimental results and other numerical results are all in satisfactory agreement, indicating that the present HPC method is a promising method for solving potential-flow problems. The underlying procedure of the HPC method could also be useful in fields other than marine hydrodynamics that involve solving the Laplace equation.
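As a hedged, two-dimensional illustration of the cell-local representation behind the HPC idea, the sketch below expresses the potential inside one cell as a combination of 2D harmonic polynomials (real and imaginary parts of (x + iy)^n) fitted to nodal values, then evaluates the potential and a velocity component inside the cell. The actual method is three-dimensional and assembles a global system coupling overlapping cells; the node positions and data here are made up.

```python
# 2D illustration of the cell-local harmonic-polynomial representation (not the 3D method).
import numpy as np

def harmonic_basis_2d(x, y, n_terms=8):
    """Evaluate [1, Re z, Im z, Re z^2, Im z^2, ...] at points (x, y), z = x + i y."""
    z = x + 1j * y
    cols = [np.ones_like(x)]
    k = 1
    while len(cols) < n_terms:
        cols.append((z ** k).real)
        if len(cols) < n_terms:
            cols.append((z ** k).imag)
        k += 1
    return np.column_stack(cols)

# nodes of one cell and the (assumed known) potential at those nodes
nodes = np.array([[-1, -1], [0, -1], [1, -1], [-1, 0],
                  [1, 0], [-1, 1], [0, 1], [1, 1]], float)
phi_exact = lambda x, y: x * y            # x*y is harmonic, so it should be reproduced
phi_nodes = phi_exact(nodes[:, 0], nodes[:, 1])

# fit the harmonic-polynomial coefficients by least squares
A = harmonic_basis_2d(nodes[:, 0], nodes[:, 1])
coeff, *_ = np.linalg.lstsq(A, phi_nodes, rcond=None)

# evaluate the interpolated potential and one velocity component inside the cell
xp, yp, h = 0.3, -0.2, 1e-6
phi = harmonic_basis_2d(np.array([xp]), np.array([yp])) @ coeff
dphidx = (harmonic_basis_2d(np.array([xp + h]), np.array([yp])) @ coeff - phi) / h
print("phi =", phi[0], "exact =", phi_exact(xp, yp), "u =", dphidx[0])
```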
Optimal control design of turbo spin‐echo sequences with applications to parallel‐transmit systems
Hoogduin, Hans; Hajnal, Joseph V.; van den Berg, Cornelis A. T.; Luijten, Peter R.; Malik, Shaihan J.
2016-01-01
Purpose The design of turbo spin‐echo sequences is modeled as a dynamic optimization problem which includes the case of inhomogeneous transmit radiofrequency fields. This problem is efficiently solved by optimal control techniques making it possible to design patient‐specific sequences online. Theory and Methods The extended phase graph formalism is employed to model the signal evolution. The design problem is cast as an optimal control problem and an efficient numerical procedure for its solution is given. The numerical and experimental tests address standard multiecho sequences and pTx configurations. Results Standard, analytically derived flip angle trains are recovered by the numerical optimal control approach. New sequences are designed where constraints on radiofrequency total and peak power are included. In the case of parallel transmit application, the method is able to calculate the optimal echo train for two‐dimensional and three‐dimensional turbo spin echo sequences in the order of 10 s with a single central processing unit (CPU) implementation. The image contrast is maintained through the whole field of view despite inhomogeneities of the radiofrequency fields. Conclusion The optimal control design sheds new light on the sequence design process and makes it possible to design sequences in an online, patient‐specific fashion. Magn Reson Med 77:361–373, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine PMID:26800383
Prediction of chronic post-operative pain: pre-operative DNIC testing identifies patients at risk.
Yarnitsky, David; Crispel, Yonathan; Eisenberg, Elon; Granovsky, Yelena; Ben-Nun, Alon; Sprecher, Elliot; Best, Lael-Anson; Granot, Michal
2008-08-15
Surgical and medical procedures, mainly those associated with nerve injuries, may lead to chronic persistent pain. Currently, one cannot predict which patients undergoing such procedures are 'at risk' of developing chronic pain. We hypothesized that the endogenous analgesia system is key to determining the pattern of handling noxious events, and that testing diffuse noxious inhibitory control (DNIC) will therefore predict susceptibility to developing chronic post-thoracotomy pain (CPTP). Pre-operative psychophysical tests, including DNIC assessment (pain reduction during exposure to another noxious stimulus at a remote body area), were conducted in 62 patients, who were followed for 29.0±16.9 weeks after thoracotomy. Logistic regression revealed that pre-operatively assessed DNIC efficiency and acute post-operative pain intensity were two independent predictors of CPTP. Efficient DNIC predicted a lower risk of CPTP, with OR 0.52 (95% CI 0.33-0.77, p=0.0024); i.e., a 10-point reduction on a numerical pain scale (NPS) halves the chance of developing chronic pain. Higher acute pain intensity yielded an OR of 1.80 (95% CI 1.28-2.77, p=0.0024), predicting nearly double the chance of developing chronic pain for each 10-point increase. The other psychophysical measures, pain thresholds and supra-threshold pain magnitudes, did not predict CPTP. For the prediction of acute post-operative pain intensity, DNIC efficiency was not found to be significant. The effectiveness of the endogenous analgesia system obtained in a pain-free state therefore seems to reflect the individual's ability to handle noxious events, identifying patients 'at risk' of developing post-intervention chronic pain. Applying this diagnostic approach before procedures that might generate pain may allow individually tailored pain prevention and management, which may substantially reduce suffering.
NASA Astrophysics Data System (ADS)
Leonov, G. A.; Kuznetsov, N. V.
From a computational point of view, attractors in nonlinear dynamical systems can be regarded as either self-excited or hidden attractors. Self-excited attractors can be localized numerically by a standard computational procedure, in which a trajectory started from a point on the unstable manifold in a neighborhood of an equilibrium reaches a state of oscillation after a transient process, so the attractor can easily be identified. In contrast, for a hidden attractor, the basin of attraction does not intersect small neighborhoods of equilibria. Since classical attractors are self-excited, they can be obtained numerically by this standard computational procedure. For the localization of hidden attractors it is necessary to develop special procedures, since there are no similar transient processes leading to such attractors. The problem of investigating hidden oscillations first arose in the second part of Hilbert's 16th problem (1900). The first nontrivial results were obtained in Bautin's works, which were devoted to constructing nested limit cycles in quadratic systems and showed the necessity of studying hidden oscillations for solving this problem. Later, the problem of analyzing hidden oscillations arose from engineering problems in automatic control. In the 1950s-60s, the investigation of the well-known Markus-Yamabe, Aizerman, and Kalman conjectures on absolute stability led to the finding of hidden oscillations in automatic control systems with a unique stable stationary point. In 1961, Gubar revealed a gap in Kapranov's work on phase locked loops (PLL) and showed the possibility of the existence of hidden oscillations in PLL. At the end of the last century, difficulties in analyzing hidden oscillations arose in simulations of drilling systems and aircraft control systems (anti-windup), which caused crashes. Further investigations of hidden oscillations were greatly encouraged by the present authors' discovery, in 2010, of the first chaotic hidden attractor in Chua's circuit. This survey is dedicated to efficient analytical-numerical methods for the study of hidden oscillations. Here, an attempt is made to reflect the current trends in the synthesis of analytical and numerical methods.
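A minimal sketch of the 'standard computational procedure' for self-excited attractors mentioned above: integrate from a small perturbation of an unstable equilibrium, discard the transient, and read off the attractor. The Lorenz system is used only as a familiar stand-in; localizing hidden attractors (e.g., in Chua's circuit) requires the special analytical-numerical procedures this survey discusses.

```python
# Localize a self-excited attractor by the standard procedure (Lorenz system as example).
import numpy as np
from scipy.integrate import solve_ivp

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def lorenz(t, s):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# the equilibrium at the origin is unstable for these parameters; perturb it slightly
x0 = np.array([1e-6, 0.0, 0.0])
sol = solve_ivp(lorenz, (0.0, 200.0), x0, max_step=0.01)

# discard the transient part of the trajectory; what remains oscillates on the attractor
transient = sol.t < 50.0
attractor = sol.y[:, ~transient]
print("x-range on the attractor: [%.2f, %.2f]" % (attractor[0].min(), attractor[0].max()))
```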
10 CFR 431.16 - Test procedures for the measurement of energy efficiency.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 3 2011-01-01 2011-01-01 false Test procedures for the measurement of energy efficiency. 431.16 Section 431.16 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR... Methods of Determining Efficiency § 431.16 Test procedures for the measurement of energy efficiency. For...
10 CFR 431.16 - Test procedures for the measurement of energy efficiency.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 3 2010-01-01 2010-01-01 false Test procedures for the measurement of energy efficiency. 431.16 Section 431.16 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR... Methods of Determining Efficiency § 431.16 Test procedures for the measurement of energy efficiency. For...
Chasioti, Evdokia; Chiang, Tat Fai; Drew, Howard J
2013-01-01
Prosthetic guided implant surgery requires adequate ridge dimensions for proper implant placement. Various surgical procedures can be used to augment deficient alveolar ridges. Studies have examined new bone formation on deficient ridges, utilizing numerous surgical techniques and biomaterials. The goal is to develop time efficient techniques, which have low morbidity. A crucial factor for successful bone grafting procedures is space maintenance. The article discusses space maintenance tenting screws, used in conjunction with bone allografts and resorbable barrier membranes, to ensure uneventful guided bone regeneration (GBR) enabling optimal implant positioning. The technique utilized has been described in the literature to treat severely resorbed alveolar ridges and additionally can be considered in restoring the vertical and horizontal component of deficient extraction sites. Three cases are presented to illustrate the utilization and effectiveness of tenting screw technology in the treatment of atrophic extraction sockets and for deficient ridges.
Efficient kinetic method for fluid simulation beyond the Navier-Stokes equation.
Zhang, Raoyang; Shan, Xiaowen; Chen, Hudong
2006-10-01
We present a further theoretical extension to the kinetic-theory-based formulation of the lattice Boltzmann method of Shan et al. [J. Fluid Mech. 550, 413 (2006)]. In addition to the higher-order projection of the equilibrium distribution function and a sufficiently accurate Gauss-Hermite quadrature in the original formulation, a regularization procedure is introduced in this paper. This procedure ensures a consistent order of accuracy control over the nonequilibrium contributions in the Galerkin sense. Using this formulation, we construct a specific lattice Boltzmann model that accurately incorporates up to third-order hydrodynamic moments. Numerical evidence demonstrates that the extended model overcomes some major defects existing in conventionally known lattice Boltzmann models, so that fluid flows at finite Knudsen number Kn can be more quantitatively simulated. Results from force-driven Poiseuille flow simulations predict the Knudsen minimum and the asymptotic behavior of the flow flux at large Kn.
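For orientation, the sketch below shows a second-order version of the regularization step on a D2Q9 lattice: the non-equilibrium part of the populations is replaced by its projection onto the second-order Hermite mode before collision. This is only the lowest-order variant of the idea; the paper's formulation carries the projection to higher order within its Gauss-Hermite framework.

```python
# Second-order regularization of D2Q9 non-equilibrium populations (illustrative only).
import numpy as np

c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], float)   # D2Q9 velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)                     # D2Q9 weights
cs2 = 1.0 / 3.0                                              # lattice sound speed squared

def f_equilibrium(rho, u):
    cu = c @ u
    return w * rho * (1 + cu/cs2 + cu**2/(2*cs2**2) - (u @ u)/(2*cs2))

def regularize(f, rho, u):
    """Replace the non-equilibrium part by its second-order Hermite projection."""
    feq = f_equilibrium(rho, u)
    fneq = f - feq
    # non-equilibrium momentum flux Pi_ab = sum_i fneq_i c_ia c_ib
    Pi = np.einsum('i,ia,ib->ab', fneq, c, c)
    # Q_i,ab = c_ia c_ib - cs^2 delta_ab
    Q = np.einsum('ia,ib->iab', c, c) - cs2 * np.eye(2)
    fneq_reg = w / (2 * cs2**2) * np.einsum('iab,ab->i', Q, Pi)
    return feq + fneq_reg

# toy check: regularizing an already-equilibrium state leaves it unchanged
rho, u = 1.0, np.array([0.05, 0.02])
f = f_equilibrium(rho, u)
print(np.allclose(regularize(f, rho, u), f))   # True
```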
NASA Astrophysics Data System (ADS)
Liu, J. X.; Deng, S. C.; Liang, N. G.
2008-02-01
Concrete is heterogeneous and usually described as a three-phase material in which matrix, aggregate and interface are distinguished. To take this heterogeneity into consideration, the Generalized Beam (GB) lattice model is adopted. The GB lattice model is much more computationally efficient than the beam lattice model. Numerical procedures for both a quasi-static method and a dynamic method are developed to simulate fracture processes in uniaxial tensile tests conducted on a concrete panel. Cases with different loading rates are compared with the quasi-static case. It is found that the inertia effect due to the increasing load becomes less important and can be ignored as the loading rate decreases, but the inertia effect due to unstable crack propagation remains considerable no matter how low the loading rate is. Therefore, an unrealistic result will be obtained if a fracture process including unstable cracking is simulated by the quasi-static procedure.
Stochastic DG Placement for Conservation Voltage Reduction Based on Multiple Replications Procedure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhaoyu; Chen, Bokan; Wang, Jianhui
2015-06-01
Conservation voltage reduction (CVR) and distributed generation (DG) integration are popular strategies implemented by utilities to improve energy efficiency. This paper investigates the interactions between CVR and DG placement to minimize load consumption in distribution networks, while keeping the lowest voltage level within a predefined range. The optimal placement of DG units is formulated as a stochastic optimization problem considering the uncertainty of DG outputs and load consumption. A sample average approximation algorithm-based technique is developed to solve the formulated problem effectively. A multiple replications procedure is developed to test the stability of the solution and calculate the confidence interval of the gap between the candidate solution and the optimal solution. The proposed method has been applied to the IEEE 37-bus distribution test system with different scenarios. The numerical results indicate that the implementations of CVR and DG, if combined, can achieve significant energy savings.
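The sketch below illustrates the mechanics of a multiple replications procedure on a deliberately simple stochastic program (minimize E[(x - xi)^2], whose SAA optimum is the sample mean): gap estimates from independent replications are averaged and turned into a one-sided confidence bound. The candidate solution, sample sizes and distribution are hypothetical; the DG-placement problem in the paper is a far larger mixed-integer program.

```python
# Multiple replications procedure (MRP) for a candidate solution of a toy stochastic program.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x_candidate = 2.3          # candidate solution to be assessed (hypothetical)
M, N = 20, 500             # number of replications, sample size per replication

gaps = []
for _ in range(M):
    xi = rng.normal(loc=2.0, scale=1.0, size=N)        # scenario sample
    saa_optimum = np.mean((xi - xi.mean())**2)          # SAA optimal value (x* = sample mean)
    candidate_value = np.mean((x_candidate - xi)**2)    # candidate evaluated on same sample
    gaps.append(candidate_value - saa_optimum)

gaps = np.array(gaps)
gap_mean = gaps.mean()
half_width = stats.t.ppf(0.95, df=M - 1) * gaps.std(ddof=1) / np.sqrt(M)
print(f"estimated optimality gap <= {gap_mean + half_width:.4f} with ~95% confidence")
```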
All-Particle Multiscale Computation of Hypersonic Rarefied Flow
NASA Astrophysics Data System (ADS)
Jun, E.; Burt, J. M.; Boyd, I. D.
2011-05-01
This study examines a new hybrid particle scheme used as an alternative means of multiscale flow simulation. The hybrid particle scheme employs the direct simulation Monte Carlo (DSMC) method in rarefied flow regions and the low diffusion (LD) particle method in continuum flow regions. The numerical procedures of the low diffusion particle method are implemented within an existing DSMC algorithm. The performance of the LD-DSMC approach is assessed by studying Mach 10 nitrogen flow over a sphere with a global Knudsen number of 0.002. The hybrid scheme results show good overall agreement with results from standard DSMC and CFD computation. Subcell procedures are utilized to improve computational efficiency and reduce sensitivity to DSMC cell size in the hybrid scheme. This makes it possible to perform the LD-DSMC simulation on a much coarser mesh that leads to a significant reduction in computation time.
Coexistence Analysis of Civil Unmanned Aircraft Systems at Low Altitudes
NASA Astrophysics Data System (ADS)
Zhou, Yuzhe
2016-11-01
The demand for unmanned aircraft systems in civil areas is growing. However, ensuring the flight efficiency and safety of unmanned aircraft places critical requirements on wireless communication spectrum resources. Current research mainly focuses on spectrum availability. In this paper, unmanned aircraft system communication models, including a coverage model and a data rate model, and two coexistence analysis procedures, i.e. the interference-to-noise ratio criterion and the frequency-distance-direction criterion, are proposed to analyze the spectrum requirements and interference behavior of civil unmanned aircraft systems at low altitudes. In addition, explicit explanations are provided. The proposed coexistence analysis criteria are applied to assess the uplink and downlink interference performance of unmanned aircraft systems and to support the corresponding spectrum planning. Numerical results demonstrate that the proposed assessments and analysis procedures satisfy the requirements of flexible spectrum access and safe coexistence among multiple unmanned aircraft systems.
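As a hedged illustration of an interference-to-noise (I/N) coexistence check of the kind referred to above, the sketch below computes I/N for a single link using free-space path loss and a thermal noise floor. All link parameters and the -6 dB threshold are assumptions for illustration; the paper's criteria additionally account for frequency separation, distance, and antenna direction.

```python
# Single-link I/N coexistence check with free-space path loss (all parameters assumed).
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB."""
    return 20*math.log10(distance_m) + 20*math.log10(freq_hz) - 147.55

def interference_to_noise_db(tx_power_dbm, tx_gain_dbi, rx_gain_dbi,
                             distance_m, freq_hz, bandwidth_hz, noise_figure_db=5.0):
    interference_dbm = (tx_power_dbm + tx_gain_dbi + rx_gain_dbi
                        - fspl_db(distance_m, freq_hz))
    noise_dbm = -174.0 + 10*math.log10(bandwidth_hz) + noise_figure_db  # thermal noise floor
    return interference_dbm - noise_dbm

i_over_n = interference_to_noise_db(tx_power_dbm=30, tx_gain_dbi=3, rx_gain_dbi=3,
                                    distance_m=5000, freq_hz=5.03e9, bandwidth_hz=1e6)
print(f"I/N = {i_over_n:.1f} dB ->",
      "acceptable" if i_over_n <= -6.0 else "harmful interference")
```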
Vorticity Dynamics in Axial Compressor Flow Diagnosis and Design.
NASA Astrophysics Data System (ADS)
Wu, Jie-Zhi; Yang, Yan-Tao; Wu, Hong; Li, Qiu-Shi; Mao, Feng; Zhou, Sheng
2007-11-01
It is well recognized that vorticity and vortical structures appear inevitably in viscous compressor flows and have a strong influence on compressor performance. But conventional analysis and design procedures cannot pinpoint the quantitative contribution of each individual vortical structure to the integrated performance of a compressor, such as the stagnation-pressure ratio and efficiency. We fill this gap by using the so-called derivative-moment transformation, which has been successfully applied to external aerodynamics. We show that the compressor performance is mainly controlled by the radial distribution of azimuthal vorticity, whose optimization in the through-flow design stage leads to a simple Abel equation of the second kind. Solving the equation yields the desired circulation distribution that optimizes the blade geometry. The advantage of this new procedure is demonstrated by numerical examples, including a posterior performance check by 3-D Navier-Stokes simulation.
Crystal structure optimisation using an auxiliary equation of state
NASA Astrophysics Data System (ADS)
Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T.; Walsh, Aron
2015-11-01
Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy-volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, in particular, for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other "beyond" density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1.
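The sketch below shows the underlying equation-of-state machinery on synthetic data: a Birch-Murnaghan E(V) form is fitted and its minimum gives the equilibrium volume. In the paper the EOS is assumed known in advance, so that the equilibrium volume can be predicted from a single-point calculation and refined; the Birch-Murnaghan form, parameter values and noise level here are illustrative assumptions.

```python
# Fit a Birch-Murnaghan E(V) form and locate its minimum (synthetic data, assumed units).
import numpy as np
from scipy.optimize import curve_fit, minimize_scalar

def birch_murnaghan_energy(V, E0, V0, B0, B0p):
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9*V0*B0/16 * ((eta - 1)**3 * B0p + (eta - 1)**2 * (6 - 4*eta))

# synthetic "calculated" energies around an assumed equilibrium volume of 40 A^3
V_data = np.linspace(34, 46, 7)
E_data = (birch_murnaghan_energy(V_data, -10.0, 40.0, 0.6, 4.5)
          + 1e-4 * np.random.default_rng(0).standard_normal(7))

popt, _ = curve_fit(birch_murnaghan_energy, V_data, E_data, p0=[-10.0, 40.0, 0.5, 4.0])
E0, V0, B0, B0p = popt
res = minimize_scalar(lambda V: birch_murnaghan_energy(V, *popt),
                      bounds=(30, 50), method="bounded")
print(f"fitted equilibrium volume V0 = {V0:.3f}, minimum of E(V) at {res.x:.3f}")
```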
76 FR 47178 - Energy Efficiency Program: Test Procedure for Lighting Systems (Luminaires)
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-04
...: Test Procedure for Lighting Systems (Luminaires) AGENCY: Office of Energy Efficiency and Renewable... (``DOE'' or the ``Department'') is currently evaluating energy efficiency test procedures for luminaires... products. DOE recognizes that well-designed test procedures are important to produce reliable, repeatable...
A new third order finite volume weighted essentially non-oscillatory scheme on tetrahedral meshes
NASA Astrophysics Data System (ADS)
Zhu, Jun; Qiu, Jianxian
2017-11-01
In this paper a third order finite volume weighted essentially non-oscillatory scheme is designed for solving hyperbolic conservation laws on tetrahedral meshes. Compared with other finite volume WENO schemes designed on tetrahedral meshes, the crucial advantages of this new WENO scheme are its simplicity and compactness, using only six unequal-sized spatial stencils for reconstructing polynomials of unequal degree in the WENO-type spatial procedures, and the easy choice of positive linear weights without considering the topology of the meshes. The original innovation of the scheme is to use a quadratic polynomial defined on a big central spatial stencil for obtaining a third order numerical approximation at any point inside the target tetrahedral cell in smooth regions, and to switch to at least one of five linear polynomials defined on small biased/central spatial stencils for sustaining sharp shock transitions and keeping the essentially non-oscillatory property simultaneously. By performing these new procedures in the spatial reconstruction and adopting a third order TVD Runge-Kutta time discretization method for solving the ordinary differential equation (ODE), the new scheme's memory occupancy is decreased and its computational efficiency is increased, making it suitable for large-scale engineering computations on tetrahedral meshes. Some numerical results are provided to illustrate the good performance of the scheme.
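A one-dimensional analogue of the reconstruction described above is sketched below: a quadratic polynomial on the big central stencil is combined with two linear polynomials on small stencils using freely chosen positive linear weights, so that the quadratic (third-order) value is recovered in smooth regions while the discontinuous stencil is avoided near shocks. The nonlinear weights use the classical Jiang-Shu form for simplicity rather than the paper's exact definition, and the tetrahedral, genuinely multi-dimensional construction is of course not reproduced.

```python
# 1D central-stencil third-order WENO reconstruction (illustrative analogue).
import numpy as np

def weno3_central(um1, u0, up1, gammas=(0.98, 0.01, 0.01), eps=1e-6):
    """WENO value at the right interface x_{i+1/2} from cell averages of cells i-1, i, i+1."""
    g0, g1, g2 = gammas
    # candidate reconstructions at x_{i+1/2}
    q0 = (-um1 + 5.0*u0 + 2.0*up1) / 6.0      # quadratic, big stencil
    q1 = (3.0*u0 - um1) / 2.0                 # linear, stencil {i-1, i}
    q2 = (u0 + up1) / 2.0                     # linear, stencil {i, i+1}
    # smoothness indicators
    b0 = 13.0/12.0*(um1 - 2.0*u0 + up1)**2 + 0.25*(up1 - um1)**2
    b1 = (u0 - um1)**2
    b2 = (up1 - u0)**2
    # nonlinear weights (Jiang-Shu form, simplified relative to the paper)
    a = np.array([g0/(eps + b0)**2, g1/(eps + b1)**2, g2/(eps + b2)**2])
    w = a / a.sum()
    # recombine so that in smooth regions the quadratic reconstruction is recovered exactly
    p0_tilde = (q0 - g1*q1 - g2*q2) / g0
    return w[0]*p0_tilde + w[1]*q1 + w[2]*q2

print(weno3_central(0.9, 1.0, 1.1))   # smooth data: ~1.05, the third-order value
print(weno3_central(1.0, 1.0, 10.0))  # near a jump: close to 1.0, avoiding the bad stencil
```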
Analysis of the Optimum Usage of Slag for the Compressive Strength of Concrete.
Lee, Han-Seung; Wang, Xiao-Yong; Zhang, Li-Na; Koh, Kyung-Taek
2015-03-18
Ground granulated blast furnace slag is widely used as a mineral admixture to replace partial Portland cement in the concrete industry. As the amount of slag increases, the late-age compressive strength of concrete mixtures increases. However, after an optimum point, any further increase in slag does not improve the late-age compressive strength. This optimum replacement ratio of slag is a crucial factor for its efficient use in the concrete industry. This paper proposes a numerical procedure to analyze the optimum usage of slag for the compressive strength of concrete. This numerical procedure starts with a blended hydration model that simulates cement hydration, slag reaction, and interactions between cement hydration and slag reaction. The amount of calcium silicate hydrate (CSH) is calculated considering the contributions from cement hydration and slag reaction. Then, by using the CSH contents, the compressive strength of the slag-blended concrete is evaluated. Finally, based on the parameter analysis of the compressive strength development of concrete with different slag inclusions, the optimum usage of slag in concrete mixtures is determined to be approximately 40% of the total binder content. The proposed model is verified through experimental results of the compressive strength of slag-blended concrete with different water-to-binder ratios and different slag inclusions.
NASA Technical Reports Server (NTRS)
Chang, S. C.
1986-01-01
A two-step semidirect procedure is developed to accelerate the one-step procedure described in NASA TP-2529. For a set of constant coefficient model problems, the acceleration factor increases from 1 to 2 as the one-step procedure convergence rate decreases from + infinity to 0. It is also shown numerically that the two-step procedure can substantially accelerate the convergence of the numerical solution of many partial differential equations (PDE's) with variable coefficients.
Fast ground filtering for TLS data via Scanline Density Analysis
NASA Astrophysics Data System (ADS)
Che, Erzhuo; Olsen, Michael J.
2017-07-01
Terrestrial Laser Scanning (TLS) efficiently collects 3D information based on lidar (light detection and ranging) technology. TLS has been widely used in topographic mapping, engineering surveying, forestry, industrial facilities, cultural heritage, and so on. Ground filtering is a common procedure in lidar data processing, which separates the point cloud data into ground points and non-ground points. Effective ground filtering is helpful for subsequent procedures such as segmentation, classification, and modeling. Numerous ground filtering algorithms have been developed for Airborne Laser Scanning (ALS) data. However, many of these are error prone when applied to TLS data because of its different angle of view and highly variable resolution. Further, many ground filtering techniques are limited in application within challenging topography and have difficulty coping with objects such as short vegetation, steep slopes, and so forth. Lastly, due to the large size of point cloud data, operations such as data traversal, multiple iterations, and neighbor searching significantly affect the computational efficiency. In order to overcome these challenges, we present an efficient ground filtering method for TLS data via a Scanline Density Analysis, which is very fast because it exploits the grid structure used to store TLS data. The process first separates ground candidates, density features, and unidentified points based on an analysis of point density within each scanline. Second, a region growth using the scan pattern is performed to cluster the ground candidates and further refine the ground points (clusters). In the experiments, the effectiveness, parameter robustness, and efficiency of the proposed method are demonstrated with datasets collected from an urban scene and a natural scene, respectively.
Luthra, Suvitesh; Ramady, Omar; Monge, Mary; Fitzsimons, Michael G; Kaleta, Terry R; Sundt, Thoralf M
2015-06-01
Markers of operating room (OR) efficiency in cardiac surgery are focused on "knife to skin" and "start time tardiness." These do not evaluate the middle and later parts of the cardiac surgical pathway. The purpose of this analysis was to evaluate knife to skin time as an efficiency marker in cardiac surgery. We looked at knife to skin time, procedure time, and transfer times in the cardiac operational pathway and their correlation with predefined indices of operational efficiency (Index of Operation Efficiency - InOE, Surgical Index of Operational Efficiency - sInOE). A regression analysis was performed to test the goodness of fit of the regression curves estimated for InOE relative to the times on the operational pathway. The mean knife to skin time was 90.6 ± 13 minutes (23% of total OR time). The mean procedure time was 282 ± 123 minutes (71% of total OR time). Utilization efficiencies were highest for aortic valve replacement and coronary artery bypass grafting and lowest for complex aortic procedures. There were no significant procedure-specific or team-specific differences for standard procedures. Procedure times correlated the most strongly with InOE (r = -0.98, p < 0.01). Compared with procedure times, knife to skin time is not as strong an indicator of efficiency. A statistically significant linear dependence on InOE was observed for procedure times only. Procedure times are a better marker of OR efficiency than knife to skin time in cardiac cases. Strategies to increase OR utilization and efficiency should address procedure times in addition to knife to skin times. © 2015 Wiley Periodicals, Inc.
Analytical and numerical analysis of frictional damage in quasi brittle materials
NASA Astrophysics Data System (ADS)
Zhu, Q. Z.; Zhao, L. Y.; Shao, J. F.
2016-07-01
Frictional sliding and crack growth are the two main dissipation processes in quasi brittle materials. Frictional sliding along closed cracks is the origin of macroscopic plastic deformation, while crack growth induces material damage. The main difficulty of modeling is to account for the inherent coupling between these two processes. Various models and associated numerical algorithms have been proposed, but so far there are no analytical solutions, even for simple loading paths, for the validation of such algorithms. In this paper, we first present a micro-mechanical model taking into account the damage-friction coupling for a large class of quasi brittle materials. The model is formulated by combining a linear homogenization procedure with the Mori-Tanaka scheme and the irreversible thermodynamics framework. As an original contribution, a series of analytical solutions of stress-strain relations are developed for various loading paths. Based on the micro-mechanical model, two numerical integration algorithms are investigated. The first one involves a coupled friction/damage correction scheme, which is consistent with the coupled nature of the constitutive model. The second one contains a friction/damage decoupling scheme with two consecutive steps: the friction correction followed by the damage correction. With the analytical solutions as reference results, the two algorithms are assessed through a series of numerical tests. It is found that the decoupled correction scheme is efficient and guarantees systematic numerical convergence.
NASA Astrophysics Data System (ADS)
Doha, E. H.; Bhrawy, A. H.; Abdelkawy, M. A.; Van Gorder, Robert A.
2014-03-01
A Jacobi-Gauss-Lobatto collocation (J-GL-C) method, used in combination with the implicit Runge-Kutta method of fourth order, is proposed as a numerical algorithm for the approximation of solutions to nonlinear Schrödinger equations (NLSE) with initial-boundary data in 1+1 dimensions. Our procedure is implemented in two successive steps. In the first one, the J-GL-C is employed for approximating the functional dependence on the spatial variable, using (N-1) nodes of the Jacobi-Gauss-Lobatto interpolation which depends upon two general Jacobi parameters. The resulting equations together with the two-point boundary conditions induce a system of 2(N-1) first-order ordinary differential equations (ODEs) in time. In the second step, the implicit Runge-Kutta method of fourth order is applied to solve this temporal system. The proposed J-GL-C method, used in combination with the implicit Runge-Kutta method of fourth order, is employed to obtain highly accurate numerical approximations to four types of NLSE, including the attractive and repulsive NLSE and a Gross-Pitaevskii equation with space-periodic potential. The numerical results obtained by this algorithm have been compared with various exact solutions in order to demonstrate the accuracy and efficiency of the proposed method. Indeed, for relatively few nodes used, the absolute error in our numerical solutions is sufficiently small.
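To make the two-step structure concrete, the sketch below integrates the focusing NLSE i u_t + u_xx + 2|u|^2 u = 0 by the method of lines; a finite-difference second derivative and the classical explicit RK4 scheme stand in for the Jacobi-Gauss-Lobatto collocation operator and the implicit fourth-order Runge-Kutta method used in the paper, so this is only a structural illustration.

```python
# Method-of-lines sketch for the focusing NLSE with a bright-soliton initial condition.
import numpy as np

N, L = 256, 40.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

def rhs(u):
    uxx = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2
    dudt = 1j * (uxx + 2*np.abs(u)**2 * u)
    dudt[0] = dudt[-1] = 0.0          # keep the (essentially zero) boundary values fixed
    return dudt

def rk4_step(u, dt):
    k1 = rhs(u)
    k2 = rhs(u + 0.5*dt*k1)
    k3 = rhs(u + 0.5*dt*k2)
    k4 = rhs(u + dt*k3)
    return u + dt/6*(k1 + 2*k2 + 2*k3 + k4)

# bright soliton initial condition; the exact solution keeps |u| = sech(x)
u = 1.0/np.cosh(x) * np.exp(0.0j)
dt, t_end = 1e-4, 0.5
for _ in range(int(t_end/dt)):
    u = rk4_step(u, dt)

print("max |u| after t=0.5:", np.abs(u).max())    # should remain close to 1
```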
Thermoviscoelastic characterization and prediction of Kevlar/epoxy composite laminates
NASA Technical Reports Server (NTRS)
Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.
1990-01-01
The thermoviscoelastic characterization of Kevlar 49/Fiberite 7714A epoxy composite lamina and the development of a numerical procedure to predict the viscoelastic response of any general laminate constructed from the same material were studied. The four orthotropic material properties, S11, S12, S22, and S66, were characterized by 20 minute static creep tests on unidirectional (0)8, (10)8, and (90)16 lamina specimens. The Time-Temperature Superposition Principle (TTSP) was used successfully to accelerate the characterization process. A nonlinear constitutive model was developed to describe the stress dependent viscoelastic response for each of the material properties. A numerical procedure to predict long term laminate properties from lamina properties (obtained experimentally) was developed. Numerical instabilities and time constraints associated with viscoelastic numerical techniques were discussed and resolved. The numerical procedure was incorporated into a user friendly microcomputer program called the Viscoelastic Composite Analysis Program (VCAP), which is available for IBM PC type computers. The program was designed for ease of use. The final phase involved testing actual laminates constructed from the characterized material, Kevlar/epoxy, at various temperatures and load levels for 4 to 5 weeks. These results were compared with the VCAP program predictions to verify the testing procedure and to check the numerical procedure used in the program. The tests and predictions agreed for all cases, which included 1, 2, 3, and 4 fiber-direction laminates.
A new design approach to innovative spectrometers. Case study: TROPOLITE
NASA Astrophysics Data System (ADS)
Volatier, Jean-Baptiste; Baümer, Stefan; Kruizinga, Bob; Vink, Rob
2014-05-01
Designing a novel optical system is a nested iterative process. The optimization loop, from a starting point to the final system, is already mostly automated. However, this loop is part of a wider loop which is not. This wider loop starts with an optical specification and ends with a manufacturability assessment. When designing a new spectrometer with emphasis on weight and cost, numerous iterations between the optical and mechanical designers are inevitable. The optical designer must then be able to reliably produce optical designs based on new input gained from multidisciplinary studies. This paper presents a procedure that can automatically generate new starting points based on any kind of input or new constraint that might arise. These starting points can then be handed over to a generic optimization routine, making the design tasks extremely efficient. The optical designer's job is then not to design optical systems, but to meta-design a procedure that produces optical systems, paving the way for system-level optimization. We present here this procedure and its application to the design of TROPOLITE, a lightweight push-broom imaging spectrometer.
Analysis of damaging process and crack propagation
NASA Astrophysics Data System (ADS)
Semenski, D.; Wolf, H.; Božić, Ž.
2010-06-01
Supervision and health monitoring of structures can assess the actual state of existing structures after initial loading or during operation. Structural life management requires the integration of design and analysis, materials behavior and structural testing, as shown here for several examples. The procedure for surveying structural elements and the criteria for their selection must be strictly defined, as they are for offshore gas platforms. A numerical analysis of dynamic loading is shown for the Aeolian vibrations of overhead transmission line conductors. Since the damper's efficiency strongly depends on its position, the procedure for determining the optimum position of the damper is described. The optical method of caustics, established in isotropic materials for the determination of the stress intensity factors (SIFs) of cracks in deformed structures, is advantageously extended for application to fiber-reinforced composites. A procedure for the simulation of crack propagation for multiple cracks is introduced, and SIFs have been calculated using the finite element method. The crack growth of a single crack or of a periodic array of cracks initiated at the stiffeners in a stiffened panel has been investigated.
Individualizing drug dosage with longitudinal data.
Zhu, Xiaolu; Qu, Annie
2016-10-30
We propose a two-step procedure to personalize drug dosage over time under the framework of a log-linear mixed-effect model. We model patients' heterogeneity using subject-specific random effects, which are treated as the realizations of an unspecified stochastic process. We extend the conditional quadratic inference function to estimate both fixed-effect coefficients and individual random effects on a longitudinal training data sample in the first step, and propose an adaptive procedure to estimate new patients' random effects and provide dosage recommendations for new patients in the second step. An advantage of our approach is that we do not impose any distributional assumption on the estimation of random effects. Moreover, the new approach can accommodate more general time-varying covariates corresponding to the random effects. We show in theory and in numerical studies that the proposed method is more efficient than existing approaches, especially when covariates are time varying. In addition, a real data example from a clozapine study confirms that our two-step procedure leads to more accurate drug dosage recommendations. Copyright © 2016 John Wiley & Sons, Ltd.
VARIABLE SELECTION FOR REGRESSION MODELS WITH MISSING DATA
Garcia, Ramon I.; Ibrahim, Joseph G.; Zhu, Hongtu
2009-01-01
We consider the variable selection problem for a class of statistical models with missing data, including missing covariate and/or response data. We investigate the smoothly clipped absolute deviation penalty (SCAD) and adaptive LASSO and propose a unified model selection and estimation procedure for use in the presence of missing data. We develop a computationally attractive algorithm for simultaneously optimizing the penalized likelihood function and estimating the penalty parameters. Particularly, we propose to use a model selection criterion, called the ICQ statistic, for selecting the penalty parameters. We show that the variable selection procedure based on ICQ automatically and consistently selects the important covariates and leads to efficient estimates with oracle properties. The methodology is very general and can be applied to numerous situations involving missing data, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Simulations are given to demonstrate the methodology and examine the finite sample performance of the variable selection procedures. Melanoma data from a cancer clinical trial is presented to illustrate the proposed methodology. PMID:20336190
A numerical performance assessment of a commercial cardiopulmonary by-pass blood heat exchanger.
Consolo, Filippo; Fiore, Gianfranco B; Pelosi, Alessandra; Reggiani, Stefano; Redaelli, Alberto
2015-06-01
We developed a numerical model, based on multi-physics computational fluid dynamics (CFD) simulations, to assist the design process of a plastic hollow-fiber bundle blood heat exchanger (BHE) integrated within the INSPIRE(TM), a blood oxygenator (OXY) for cardiopulmonary by-pass procedures recently released by Sorin Group Italia. In a comparative study, we analyzed five different geometrical design solutions of the BHE module. Quantitative geometry-dependent parameters providing a comprehensive evaluation of both the hemodynamic and thermodynamic performance of the device were extracted to identify the best-performing prototypical solution. A convenient design configuration was identified, characterized by (i) a uniform blood flow pattern within the fiber bundle, preventing blood flow shunting and the onset of stagnation/recirculation areas and/or high velocity pathways, (ii) enhanced blood heating efficiency, and (iii) a reduced blood pressure drop. The selected design configuration was then prototyped and tested to experimentally characterize the device performance. Experimental results confirmed the numerical predictions, proving the effectiveness of CFD modeling as a reliable tool for the in silico identification of suitable working conditions of blood handling medical devices. Notably, the numerical approach limited the need for extensive prototyping, thus reducing the corresponding machinery costs and time-to-market. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
Development of programs for computing characteristics of ultraviolet radiation
NASA Technical Reports Server (NTRS)
Dave, J. V.
1972-01-01
Efficient programs were developed for computing all four characteristics of the radiation scattered by a plane-parallel, turbid, terrestrial atmospheric model. They were developed in FORTRAN IV and tested on IBM/360 computers with a 2314 direct-access storage facility. The storage requirement varies between 200K and 750K bytes depending upon the task. The scattering phase matrix (or function) is expanded in a Fourier series whose number of terms depends upon the zenith angles of the incident and scattered radiation, as well as on the nature of the aerosols. A Gauss-Seidel procedure is used for obtaining the numerical solution of the transfer equation.
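For reference, a generic Gauss-Seidel iteration of the kind mentioned above is sketched for a small linear system; the actual programs iterate on the discretized radiative transfer equation rather than on a matrix assembled in this way.

```python
# Generic Gauss-Seidel iteration for A x = b (illustrative only).
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=1000):
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # use already-updated components for j < i, previous-iterate components for j > i
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])  # diagonally dominant
b = np.array([15.0, 10.0, 10.0])
print(gauss_seidel(A, b), np.linalg.solve(A, b))
```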
Analytic solution for quasi-Lambertian radiation transfer.
Braun, Avi; Gordon, Jeffrey M
2010-02-10
An analytic solution is derived for radiation transfer between flat quasi-Lambertian surfaces of arbitrary orientation, i.e., surfaces that radiate in a Lambertian fashion but within a numerical aperture smaller than unity. These formulas obviate the need for ray trace simulations and provide exact, physically transparent results. Illustrative examples that capture the salient features of the flux maps and the efficiency of flux transfer are presented for a few configurations of practical interest. There is also a fundamental reciprocity relation for quasi-Lambertian exchange, akin to the reciprocity theorem for fully Lambertian surfaces. Applications include optical fiber coupling, fiber-optic biomedical procedures, and solar concentrators.
On the generalized VIP time integral methodology for transient thermal problems
NASA Technical Reports Server (NTRS)
Mei, Youping; Chen, Xiaoqin; Tamma, Kumar K.; Sha, Desong
1993-01-01
The paper describes the development and applicability of a generalized VIrtual-Pulse (VIP) time integral method of computation for thermal problems. With the advent of high-speed computing technology and the importance of parallel computation for the efficient use of computing environments, and unlike past approaches for general heat transfer computations, a major motivation for the developments described in this paper is the need for explicit computational procedures with improved accuracy and stability characteristics. As a consequence, a new and effective VIP methodology is described which inherits these improved characteristics. Numerical illustrative examples are provided to demonstrate the developments and validate the results obtained for thermal problems.
NASA Technical Reports Server (NTRS)
Atluri, S. N.; Nakagaki, M.; Kathiresan, K.
1980-01-01
In this paper, efficient numerical methods for the analysis of crack-closure effects on fatigue-crack-growth-rates, in plane stress situations, and for the solution of stress-intensity factors for arbitrary shaped surface flaws in pressure vessels, are presented. For the former problem, an elastic-plastic finite element procedure valid for the case of finite deformation gradients is developed and crack growth is simulated by the translation of near-crack-tip elements with embedded plastic singularities. For the latter problem, an embedded-elastic-singularity hybrid finite element method, which leads to a direct evaluation of K-factors, is employed.
NASA Astrophysics Data System (ADS)
Dehghan, Mehdi; Mohammadi, Vahid
2017-03-01
As noted in [27], the tumor-growth model incorporates the nutrient within the mixture, as opposed to modeling it with an auxiliary reaction-diffusion equation. The formulation involves systems of highly nonlinear partial differential equations, with surface effects captured through diffuse-interface models [27]. Numerical simulation of this practical model provides a means of evaluating it. The present paper investigates the solution of the tumor growth model with meshless techniques. The meshless methods are based on a collocation technique employing multiquadric (MQ) radial basis functions (RBFs) and generalized moving least squares (GMLS) procedures. The main advantage of these choices lies in the natural flexibility of meshless approaches: a meshless method can easily be applied to find the solution of partial differential equations in high dimensions using any distribution of points on regular and irregular domains. The present paper involves a time-dependent system of partial differential equations that describes a four-species tumor growth model. To advance the solution in time, two procedures are used. One is a semi-implicit finite difference method based on the Crank-Nicolson scheme and the other is based on explicit Runge-Kutta time integration. The first yields a linear system of algebraic equations to be solved at each time step; the second is efficient but conditionally stable. The obtained numerical results are reported to confirm the ability of these techniques to solve the two- and three-dimensional tumor-growth equations.
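The sketch below shows a global multiquadric RBF collocation scheme with Crank-Nicolson time stepping applied to a 1D heat equation, as a stand-in for the coupled tumor-growth system; the shape parameter, node layout, and the scalar test problem are illustrative assumptions.

```python
# MQ RBF collocation + Crank-Nicolson for u_t = u_xx on [0, 1] with u = 0 at the ends.
import numpy as np

N = 21
x = np.linspace(0.0, 1.0, N)
c = 0.1                                     # MQ shape parameter (assumed choice)
r = x[:, None] - x[None, :]

phi = np.sqrt(r**2 + c**2)                  # MQ basis:  phi(r) = sqrt(r^2 + c^2)
phi_xx = c**2 / (r**2 + c**2)**1.5          # its second derivative in x

dt, t_end = 1e-3, 0.1
M_lhs = phi - 0.5*dt*phi_xx                 # Crank-Nicolson, left-hand operator
M_rhs = phi + 0.5*dt*phi_xx                 # Crank-Nicolson, right-hand operator
# enforce homogeneous Dirichlet conditions at both ends by overwriting the boundary rows
for b in (0, N - 1):
    M_lhs[b, :] = phi[b, :]

lam = np.linalg.solve(phi, np.sin(np.pi*x))  # expansion coefficients of the initial data
for _ in range(int(t_end/dt)):
    rhs = M_rhs @ lam
    rhs[0] = rhs[-1] = 0.0                   # prescribed boundary values
    lam = np.linalg.solve(M_lhs, rhs)

u = phi @ lam
print("numerical max:", u.max(), " exact:", np.exp(-np.pi**2*t_end))
```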
NASA Astrophysics Data System (ADS)
Belfort, Benjamin; Weill, Sylvain; Lehmann, François
2017-07-01
A novel, non-invasive imaging technique is proposed that determines 2D maps of water content in unsaturated porous media. The method directly relates digitally measured intensities to the water content of the porous medium and requires the classical image analysis steps, i.e., normalization, filtering, background subtraction, scaling and calibration. The main advantages of this approach are that no separate calibration experiment is needed, because the calibration curve relating water content to reflected light intensity is established during the main monitoring phase of each experiment, and that no tracer or dye is injected into the flow tank. The procedure enables effective processing of a large number of photographs and thus produces 2D water content maps at high temporal resolution. A drainage/imbibition experiment in a 2D flow tank with inner dimensions of 40 cm × 14 cm × 6 cm (L × W × D) was carried out to validate the methodology. The accuracy of the proposed approach is assessed using a statistical framework to perform an error analysis and numerical simulations with a state-of-the-art computational code that solves the Richards' equation. Comparison of the cumulative mass leaving and entering the flow tank and of the water content maps produced by the photographic measurement technique and the numerical simulations demonstrates the efficiency and high accuracy of the proposed method for investigating vadose zone flow processes. Finally, the photometric procedure has been developed expressly with its extension to heterogeneous media in mind. Other processes may be investigated through different laboratory experiments, which will serve as benchmarks for the validation of numerical codes.
NASA Technical Reports Server (NTRS)
Payne, Fred R.
1992-01-01
Lumley's 1967 Moscow paper provided, for the first time, a completely rational definition of the physically useful term 'large eddy', popular for a half-century. The numerical procedures based upon his results are: (1) PODT (Proper Orthogonal Decomposition Theorem), which extracts the large-eddy structure of stochastic processes from physical or computer-simulation two-point covariances, and (2) LEIM (Large-Eddy Interaction Model), a predictive scheme for the dynamical large eddies based upon higher-order turbulence modeling. Lumley's earlier work (1964) forms the basis for the final member of this triad of numerical procedures: it predicts the global neutral modes of turbulence, which show surprising agreement both with structural eigenmodes and with those obtained from the dynamical equations. The ultimate goal of improved engineering design tools for turbulence may be near at hand, partly because the power and storage of 'supermicrocomputer' workstations are finally becoming adequate for the demanding numerics of these procedures.
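A minimal sketch of the POD idea referred to above: the dominant 'large-eddy' modes are the leading eigenvectors of the two-point covariance of a set of snapshots, conveniently obtained via an SVD. The snapshot data below are synthetic.

```python
# Snapshot POD via SVD on synthetic data with two embedded coherent structures.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots = 200, 64
x = np.linspace(0, 2*np.pi, n_points)

# synthetic snapshots: two coherent "structures" plus noise
snapshots = (np.outer(np.sin(x), rng.standard_normal(n_snapshots)) * 3.0
             + np.outer(np.sin(2*x), rng.standard_normal(n_snapshots)) * 1.0
             + 0.1*rng.standard_normal((n_points, n_snapshots)))
snapshots -= snapshots.mean(axis=1, keepdims=True)       # fluctuating part

# POD modes = left singular vectors; squared singular values ~ energy per mode
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)
print("energy fraction of the first two modes:", energy[:2])
```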
10 CFR 431.444 - Test procedures for the measurement of energy efficiency.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 3 2011-01-01 2011-01-01 false Test procedures for the measurement of energy efficiency. 431.444 Section 431.444 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR... procedures for the measurement of energy efficiency. (a) Scope. Pursuant to section 346(b)(1) of EPCA, this...
10 CFR 431.444 - Test procedures for the measurement of energy efficiency.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 3 2010-01-01 2010-01-01 false Test procedures for the measurement of energy efficiency. 431.444 Section 431.444 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR... procedures for the measurement of energy efficiency. (a) Scope. Pursuant to section 346(b)(1) of EPCA, this...
Implementation of a modular software system for multiphysical processes in porous media
NASA Astrophysics Data System (ADS)
Naumov, Dmitri; Watanabe, Norihiro; Bilke, Lars; Fischer, Thomas; Lehmann, Christoph; Rink, Karsten; Walther, Marc; Wang, Wenqing; Kolditz, Olaf
2016-04-01
Subsurface georeservoirs are a candidate technology for the large-scale energy storage required as part of the transition to renewable energy sources. The increased use of the subsurface results in competing interests and possible impacts on protected entities. To optimize and plan the use of the subsurface in large-scale scenario analyses, powerful numerical frameworks are required that aid process understanding and can capture the coupled thermal (T), hydraulic (H), mechanical (M), and chemical (C) processes with high computational efficiency. Because of the multitude of possible couplings between the basic T, H, M, and C processes and the need to implement new numerical schemes, the development focus has moved to the software's modularity. The decreased coupling between the components results in two major advantages: easier addition of specialized processes and improvement of the code's testability and therefore its quality. The idea of modularization is implemented on several levels, in addition to the library-based separation of the previous code version, by using generalized algorithms available in the Standard Template Library and the Boost library, relying on efficient implementations of linear algebra solvers, using concepts when designing new types, and localizing frequently accessed data structures. This procedure shows clear benefits for a flexible high-performance framework applied to the analysis of multipurpose georeservoirs.
Automatic red eye correction and its quality metric
NASA Astrophysics Data System (ADS)
Safonov, Ilia V.; Rychagov, Michael N.; Kang, KiMin; Kim, Sang Ho
2008-01-01
Red eye artifacts are a troublesome defect of amateur photos. Correcting red eyes during printing without user intervention, and thereby making photos more pleasant for the observer, is an important task. A novel, efficient technique for the automatic correction of red eyes aimed at photo printers is proposed. The algorithm is independent of face orientation and is capable of detecting paired red eyes as well as single red eyes. The approach is based on 3D tables with typicalness levels for red eyes and human skin tones, and on directional edge detection filters for processing the redness image. Machine learning is applied for feature selection. For classification of red eye regions, a cascade of classifiers including a Gentle AdaBoost committee of Classification and Regression Trees (CART) is applied. The retouching stage includes desaturation, darkening and blending with the initial image. Several implementations of the approach are possible, trading off detection and correction quality, processing time, and memory footprint. A numeric quality criterion for automatic red eye correction is proposed. This quality metric is constructed by applying the Analytic Hierarchy Process (AHP) to consumer opinions about correction outcomes. The proposed numeric metric helped to choose algorithm parameters via an optimization procedure. Experimental results demonstrate the high accuracy and efficiency of the proposed algorithm in comparison with existing solutions.
Arithmetic Procedures are Induced from Examples.
1985-08-13
concrete numerals (e.g., coins, Dienes blocks, poker chips, Montessori rods, etc.). Analogy is included as a third hypothesis even though it is not particularly... collections of coins, Dienes blocks, Montessori rods and so forth. This is a mapping between two kinds of numerals, and not two procedures. Later, this
1994-02-01
An explicit numerical procedure based on Runge-Kutta time stepping for cell-centered, hexahedral finite volumes is outlined for the approximate numerical treatment of the governing equations. [Recoverable report contents: Cell-Centered Finite-Volume Discretization in Space; Artificial Dissipation; Time Integration; Convergence.]
Kodak, Tiffany; Campbell, Vincent; Bergmann, Samantha; LeBlanc, Brittany; Kurtz-Nelson, Eva; Cariveau, Tom; Haq, Shaji; Zemantic, Patricia; Mahon, Jacob
2016-09-01
Prior research shows that learners have idiosyncratic responses to error-correction procedures during instruction. Thus, assessments that identify error-correction strategies to include in instruction can aid practitioners in selecting individualized, efficacious, and efficient interventions. The current investigation conducted an assessment to compare 5 error-correction procedures that have been evaluated in the extant literature and are common in instructional practice for children with autism spectrum disorder (ASD). Results showed that the assessment identified efficacious and efficient error-correction procedures for all participants, and 1 procedure was efficient for 4 of the 5 participants. To examine the social validity of error-correction procedures, participants selected among efficacious and efficient interventions in a concurrent-chains assessment. We discuss the results in relation to prior research on error-correction procedures and current instructional practices for learners with ASD. © 2016 Society for the Experimental Analysis of Behavior.
NASA Astrophysics Data System (ADS)
Latorre, Borja; Peña-Sancho, Carolina; Angulo-Jaramillo, Rafaël; Moret-Fernández, David
2015-04-01
Measurement of soil hydraulic properties is of paramount importance in fields such as agronomy, hydrology or soil science. Based on the analysis of the Haverkamp et al. (1994) model, the aim of this paper is to present a technique to estimate the soil hydraulic properties (sorptivity, S, and hydraulic conductivity, K) from full-time cumulative infiltration curves. The method (NSH) was validated by means of 12 synthetic infiltration curves generated with HYDRUS-3D from known soil hydraulic properties. The K values used to simulate the synthetic curves were compared to those estimated with the proposed method. A procedure to identify and remove the effect of the contact sand layer on the cumulative infiltration curve was also developed. A sensitivity analysis was performed using the water level measurement as the uncertainty source. Finally, the procedure was evaluated using different infiltration times and data noise. Since a good correlation was obtained between the K used in HYDRUS-3D to model the infiltration curves and the K estimated by the NSH method (R2 = 0.98), it can be concluded that the technique is robust enough to estimate the soil hydraulic conductivity from complete infiltration curves. The numerical procedure to detect and remove the influence of the contact sand layer on the K and S estimates also proved robust and efficient. An effect of infiltration-curve noise on the K estimate was observed, with the uncertainty increasing with increasing noise. Finally, the results showed that infiltration time is an important factor in estimating K: lower values of K, or smaller uncertainties, required longer infiltration times.
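The following sketch illustrates the general fitting idea, assuming for brevity a two-term approximation I(t) ≈ S·√t + K·t rather than the full implicit Haverkamp et al. (1994) model analyzed in the paper; all numerical values are invented.

```python
import numpy as np

# Hypothetical sketch: estimating sorptivity S and conductivity K from a cumulative
# infiltration record by least squares.  A two-term approximation I(t) ~ S*sqrt(t) + K*t
# stands in for the full implicit Haverkamp et al. model used in the paper.
t = np.linspace(1.0, 3600.0, 200)                        # s, synthetic sampling times
S_true, K_true = 2.0e-4, 1.0e-6                          # m/s^0.5, m/s (assumed values)
I = S_true * np.sqrt(t) + K_true * t
I_noisy = I + 1.0e-6 * np.random.default_rng(0).standard_normal(t.size)

B = np.column_stack([np.sqrt(t), t])                     # design matrix [sqrt(t), t]
(S_est, K_est), *_ = np.linalg.lstsq(B, I_noisy, rcond=None)
print(f"S = {S_est:.3e} m/s^0.5, K = {K_est:.3e} m/s")
```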
NASA Technical Reports Server (NTRS)
Harris, J. E.; Blanchard, D. K.
1982-01-01
A numerical algorithm and computer program are presented for solving the laminar, transitional, or turbulent two dimensional or axisymmetric compressible boundary-layer equations for perfect-gas flows. The governing equations are solved by an iterative three-point implicit finite-difference procedure. The software, program VGBLP, is a modification of the approach presented in NASA TR R-368 and NASA TM X-2458, respectively. The major modifications are: (1) replacement of the fourth-order Runge-Kutta integration technique with a finite-difference procedure for numerically solving the equations required to initiate the parabolic marching procedure; (2) introduction of the Blottner variable-grid scheme; (3) implementation of an iteration scheme allowing the coupled system of equations to be converged to a specified accuracy level; and (4) inclusion of an iteration scheme for variable-entropy calculations. These modifications to the approach presented in NASA TR R-368 and NASA TM X-2458 yield a software package with high computational efficiency and flexibility. Turbulence-closure options include either two-layer eddy-viscosity or mixing-length models. Eddy conductivity is modeled as a function of eddy viscosity through a static turbulent Prandtl number formulation. Several options are provided for specifying the static turbulent Prandtl number. The transitional boundary layer is treated through a streamwise intermittency function which modifies the turbulence-closure model. This model is based on the probability distribution of turbulent spots and ranges from zero to unity for laminar and turbulent flow, respectively. Several test cases are presented as guides for potential users of the software.
Finite element procedures for time-dependent convection-diffusion-reaction systems
NASA Technical Reports Server (NTRS)
Tezduyar, T. E.; Park, Y. J.; Deans, H. A.
1988-01-01
New finite element procedures based on the streamline-upwind/Petrov-Galerkin formulations are developed for time-dependent convection-diffusion-reaction equations. These procedures minimize spurious oscillations for convection-dominated and reaction-dominated problems. The results obtained for representative numerical examples are accurate with minimal oscillations. As a special application problem, the single-well chemical tracer test (a procedure for measuring oil remaining in a depleted field) is simulated numerically. The results show the importance of temperature effects on the interpreted value of residual oil saturation from such tests.
NASA Astrophysics Data System (ADS)
Mössinger, Peter; Jester-Zürker, Roland; Jung, Alexander
2017-01-01
With increasing requirements for hydropower plant operation due to intermittent renewable energy sources like wind and solar, numerical simulations of transient operations in hydraulic turbomachines become more important. As a continuation of the work performed for the first workshop, which covered three steady operating conditions, in the present paper load changes and a shutdown procedure are investigated. The findings of previous studies are used to create a 360° model and compare measurements with simulation results for the operating points part load, high load and best efficiency. A mesh motion procedure is introduced, allowing moving guide vanes to be represented for load changes from best efficiency to part load and high load. Additionally, an automated re-mesh procedure is added for turbine shutdown to ensure reliable mesh quality during guide vane closing. All three transient operations are compared to PIV velocity measurements in the draft tube and pressure signals in the vaneless space. Simulation results of axial velocity distributions for all three steady operating points, during both load changes and for the shutdown, correlate well with the measurements. An offset in the vaneless-space pressure is found to result from guide vane corrections applied in the simulation to ensure similar velocity fields. Short-time Fourier transformation indicates increasing amplitudes and frequencies at speed-no-load conditions. Further studies will discuss the already measured start-up procedure and investigate the necessity of considering the hydraulic system dynamics upstream of the turbine by means of a 1D3D coupling between the 3D flow field and a 1D system model.
Dielectrophoretic capture of low abundance cell population using thick electrodes.
Marchalot, Julien; Chateaux, Jean-François; Faivre, Magalie; Mertani, Hichem C; Ferrigno, Rosaria; Deman, Anne-Laure
2015-09-01
Enrichment of rare cell populations such as Circulating Tumor Cells (CTCs) is a critical step before performing analysis. This paper presents a polymeric microfluidic device with integrated thick Carbon-PolyDimethylSiloxane composite (C-PDMS) electrodes designed to carry out dielectrophoretic (DEP) trapping of low-abundance biological cells. Such a conductive composite material presents advantages over metallic structures. Indeed, as it combines properties of both the matrix and the doping particles, C-PDMS allows the easy and fast integration of conductive microstructures using a soft-lithography approach while preserving the O2 plasma bonding properties of the PDMS substrate and avoiding a cumbersome alignment procedure. Here, we first performed numerical simulations to demonstrate the advantage of such thick C-PDMS electrodes over a coplanar electrode configuration. It is well established that the dielectrophoretic force decreases quickly as the distance from the electrode surface increases, which results, for the coplanar configuration, in low trapping efficiency at high flow rates. We showed quantitatively that by using electrodes as thick as the microchannel height, it is possible to extend the influence of the DEP force over the whole volume of the channel, compared to the coplanar electrode configuration, and to maintain high trapping efficiency while increasing the throughput. This model was then used to numerically optimize a thick C-PDMS electrode configuration in terms of trapping efficiency. Optimized microfluidic configurations were then fabricated and tested at various flow rates for the trapping of the MDA-MB-231 breast cancer cell line. We reached trapping efficiencies of 97% at 20 μl/h and 78.7% at 80 μl/h, for 100 μm thick electrodes. Finally, we applied our device to the separation and localized trapping of CTCs (MDA-MB-231) from a red blood cell sample (concentration ratio of 1:10).
NASA Technical Reports Server (NTRS)
Farhat, C.; Park, K. C.; Dubois-Pelerin, Y.
1991-01-01
An unconditionally stable second order accurate implicit-implicit staggered procedure for the finite element solution of fully coupled thermoelasticity transient problems is proposed. The procedure is stabilized with a semi-algebraic augmentation technique. A comparative cost analysis reveals the superiority of the proposed computational strategy to other conventional staggered procedures. Numerical examples of one and two-dimensional thermomechanical coupled problems demonstrate the accuracy of the proposed numerical solution algorithm.
Numerical and experimental study of a hydrodynamic cavitation tube
NASA Astrophysics Data System (ADS)
Hu, H.; Finch, J. A.; Zhou, Z.; Xu, Z.
1998-08-01
A numerical analysis of hydrodynamics in a cavitation tube used for activating fine particle flotation is described. Using numerical procedures developed for solving the turbulent k-ɛ model with boundary fitted coordinates, the stream function, vorticity, velocity, and pressure distributions in a cavitation tube were calculated. The calculated pressure distribution was found to be in excellent agreement with experimental results. The requirement of a pressure drop below approximately 10 m water for cavitation to occur was observed experimentally and confirmed by the model. The use of the numerical procedures for cavitation tube design is discussed briefly.
Green's function multiple-scattering theory with a truncated basis set: An augmented-KKR formalism
NASA Astrophysics Data System (ADS)
Alam, Aftab; Khan, Suffian N.; Smirnov, A. V.; Nicholson, D. M.; Johnson, Duane D.
2014-11-01
The Korringa-Kohn-Rostoker (KKR) Green's function, multiple-scattering theory is an efficient site-centered, electronic-structure technique for addressing an assembly of N scatterers. Wave functions are expanded in a spherical-wave basis on each scattering center and indexed up to a maximum orbital and azimuthal number L_max = (l,m)_max, while scattering matrices, which determine spectral properties, are truncated at L_tr = (l,m)_tr, where the phase shifts δ_l for l > l_tr are negligible. Historically, L_max is set equal to L_tr, which is correct for large enough L_max but not computationally expedient; a better procedure retains higher-order (free-electron and single-site) contributions for L_max > L_tr with δ_l for l > l_tr set to zero [X.-G. Zhang and W. H. Butler, Phys. Rev. B 46, 7433 (1992), 10.1103/PhysRevB.46.7433]. We present a numerically efficient and accurate augmented-KKR Green's function formalism that solves the KKR equations by exact matrix inversion [an R^3 process with rank R = N(l_tr+1)^2] and includes higher-L contributions via linear algebra [an R^2 process with rank R = N(l_max+1)^2]. The augmented-KKR approach yields properly normalized wave functions, numerically cheaper basis-set convergence, and a total charge density and electron count that agree with Lloyd's formula. We apply our formalism to fcc Cu, bcc Fe, and L1_0 CoPt and present numerical results for the accuracy and convergence of the total energies, Fermi energies, and magnetic moments versus L_max for a given L_tr.
Design, realization and structural testing of a compliant adaptable wing
NASA Astrophysics Data System (ADS)
Molinari, G.; Quack, M.; Arrieta, A. F.; Morari, M.; Ermanni, P.
2015-10-01
This paper presents the design, optimization, realization and testing of a novel wing morphing concept, based on distributed compliance structures, and actuated by piezoelectric elements. The adaptive wing features ribs with a selectively compliant inner structure, numerically optimized to achieve aerodynamically efficient shape changes while simultaneously withstanding aeroelastic loads. The static and dynamic aeroelastic behavior of the wing, and the effect of activating the actuators, is assessed by means of coupled 3D aerodynamic and structural simulations. To demonstrate the capabilities of the proposed morphing concept and optimization procedure, the wings of a model airplane are designed and manufactured according to the presented approach. The goal is to replace conventional ailerons, thus to achieve controllability in roll purely by morphing. The mechanical properties of the manufactured components are characterized experimentally, and used to create a refined and correlated finite element model. The overall stiffness, strength, and actuation capabilities are experimentally tested and successfully compared with the numerical prediction. To counteract the nonlinear hysteretic behavior of the piezoelectric actuators, a closed-loop controller is implemented, and its capability of accurately achieving the desired shape adaptation is evaluated experimentally. Using the correlated finite element model, the aeroelastic behavior of the manufactured wing is simulated, showing that the morphing concept can provide sufficient roll authority to allow controllability of the flight. The additional degrees of freedom offered by morphing can be also used to vary the plane lift coefficient, similarly to conventional flaps. The efficiency improvements offered by this technique are evaluated numerically, and compared to the performance of a rigid wing.
NASA Astrophysics Data System (ADS)
Shen, Yanfeng; Cesnik, Carlos E. S.
2016-04-01
This paper presents a parallelized modeling technique for the efficient simulation of nonlinear ultrasonics introduced by the wave interaction with fatigue cracks. The elastodynamic wave equations with contact effects are formulated using an explicit Local Interaction Simulation Approach (LISA). The LISA formulation is extended to capture the contact-impact phenomena during the wave-damage interaction based on the penalty method. A Coulomb friction model is integrated into the computation procedure to capture the stick-slip contact shear motion. The LISA procedure is coded using the Compute Unified Device Architecture (CUDA), which enables highly parallelized computing on powerful graphics cards. Both the explicit contact formulation and the parallel implementation facilitate LISA's superb computational efficiency over the conventional finite element method (FEM). The theoretical formulation based on the penalty method is introduced and a guideline for the proper choice of the contact stiffness is given. The convergence behavior of the solution under various contact stiffness values is examined. A numerical benchmark problem is used to investigate the new LISA formulation and results are compared with a conventional contact finite element solution. Various nonlinear ultrasonic phenomena are successfully captured using this contact LISA formulation, including the generation of nonlinear higher harmonic responses. Nonlinear mode conversion of guided waves at fatigue cracks is also studied.
Malkyarenko, Dariya I; Chenevert, Thomas L
2014-12-01
To describe an efficient procedure to empirically characterize gradient nonlinearity and correct for the corresponding apparent diffusion coefficient (ADC) bias on a clinical magnetic resonance imaging (MRI) scanner. Spatial nonlinearity scalars for individual gradient coils along the superior and right directions were estimated via diffusion measurements of an isotropic ice-water phantom. A digital nonlinearity model from an independent scanner, described in the literature, was rescaled by system-specific scalars to approximate 3D bias correction maps. Correction efficacy was assessed by comparison to unbiased ADC values measured at the isocenter. The empirically estimated nonlinearity scalars were confirmed by geometric distortion measurements of a regular grid phantom. The applied nonlinearity correction for arbitrarily oriented diffusion gradients reduced the ADC bias from 20% down to 2% at clinically relevant offsets, both for isotropic and anisotropic media. Identical performance was achieved using either corrected diffusion-weighted imaging (DWI) intensities or corrected b-values for each direction in brain and ice-water. Direction-average trace image correction was adequate only for the isotropic medium. Empiric scalar adjustment of an independent gradient nonlinearity model adequately described the DWI bias for a clinical scanner. The observed efficiency of the implemented ADC bias correction quantitatively agreed with previous theoretical predictions and numerical simulations. The described procedure provides an independent benchmark for nonlinearity bias correction of clinical MRI scanners.
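A toy illustration of the b-value style of correction mentioned above: if the gradient nonlinearity scales the nominal diffusion gradient at a voxel by a factor c, the effective b-value is b·c², and refitting with corrected b-values removes the ADC bias. The scalar c and the tissue parameters below are invented, not values from the paper.

```python
import numpy as np

# Hypothetical single-voxel sketch of ADC bias correction via corrected b-values.
# The nonlinearity scalar c and tissue parameters are invented stand-ins.
b_nominal = np.array([0.0, 500.0, 1000.0])          # s/mm^2
adc_true, s0 = 1.0e-3, 1000.0                       # mm^2/s, arbitrary tissue values
c = 1.08                                            # gradient nonlinearity scalar here

b_effective = b_nominal * c**2                      # what the scanner actually applied
signal = s0 * np.exp(-b_effective * adc_true)       # simulated DWI signal

# Mono-exponential fit with nominal (biased) versus corrected (effective) b-values:
fit = lambda b: -np.polyfit(b, np.log(signal), 1)[0]
print(f"apparent ADC {fit(b_nominal):.2e}, corrected ADC {fit(b_effective):.2e}")
```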
Compression-RSA technique: A more efficient encryption-decryption procedure
NASA Astrophysics Data System (ADS)
Mandangan, Arif; Mei, Loh Chai; Hung, Chang Ee; Che Hussin, Che Haziqah
2014-06-01
The efficiency of encryption-decryption procedures has become a major concern in asymmetric cryptography. The Compression-RSA technique was developed to overcome this efficiency problem by compressing k plaintexts, where k ∈ Z+ and k > 2, into only 2 plaintexts. That means that, no matter how large the number of plaintexts, they will be compressed to only 2 plaintexts. The encryption-decryption procedures are expected to be more efficient since they only receive 2 inputs to be processed instead of k inputs. However, it is observed that as the number of original plaintexts increases, the size of the new plaintexts grows. As a consequence, this will probably affect the efficiency of the encryption-decryption procedures, especially for the RSA cryptosystem, since both of its encryption-decryption procedures involve exponential operations. In this paper, we evaluate the relationship between the number of original plaintexts and the size of the new plaintexts. In addition, we conducted several experiments to show that the RSA cryptosystem with the embedded Compression-RSA technique is more efficient than the ordinary RSA cryptosystem.
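The cost trade-off discussed above can be sketched with textbook RSA and small demonstration primes (not secure); the fixed-width decimal packing used here is only an illustrative stand-in for the actual Compression-RSA technique, which is not reproduced.

```python
# Hypothetical sketch of the trade-off: textbook RSA with small demonstration primes
# (not secure), encrypting k plaintexts one by one versus packing them into fewer,
# larger integers first.  The packing below is an invented stand-in, not the paper's
# Compression-RSA technique.
p, q, e = 104729, 104723, 65537
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                            # private exponent (Python 3.8+ modular inverse)

plaintexts = [123, 45, 6789, 321]              # k = 4 small messages

# Ordinary RSA: one modular exponentiation per message.
ciphertexts = [pow(m, e, n) for m in plaintexts]
assert [pow(c, d, n) for c in ciphertexts] == plaintexts

# Packed variant: only 2 exponentiations, but on larger integers -- the growth of the
# packed operands with k is exactly the efficiency question examined in the paper.
def pack(ms, width=4):
    return int("1" + "".join(f"{m:0{width}d}" for m in ms))   # leading 1 preserves zeros

packed = [pack(plaintexts[:2]), pack(plaintexts[2:])]         # packed values must stay below n
packed_ct = [pow(m, e, n) for m in packed]
assert [pow(c, d, n) for c in packed_ct] == packed
```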
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Sapio, Vincent
2010-09-01
The analysis of spacecraft kinematics and dynamics requires an efficient scheme for spatial representation. While the representation of displacement in three-dimensional Euclidean space is straightforward, orientation in three dimensions poses particular challenges. The unit quaternion provides an approach that mitigates many of the problems intrinsic in other representation approaches, including the ill-conditioning that arises from computing many successive rotations. This report focuses on the computational utility of unit quaternions and their application to the reconstruction of re-entry vehicle (RV) motion history from sensor data. To this end they will be used in conjunction with other kinematic and data processing techniques. We will present a numerical implementation for the reconstruction of RV motion solely from gyroscope and accelerometer data. This will make use of unit quaternions due to their numerical efficacy in dealing with the composition of many incremental rotations over a time series. In addition to signal processing and data conditioning procedures, algorithms for numerical quaternion-based integration of gyroscope data will be addressed, as well as accelerometer triangulation and integration to yield the RV trajectory. Actual processed flight data are presented to demonstrate the implementation of these methods.
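A minimal sketch of the quaternion bookkeeping described above: composing many incremental rotations from gyroscope samples and re-normalizing to control round-off. The sample rate and synthetic rate data are invented, and accelerometer processing is omitted.

```python
import numpy as np

# Hypothetical sketch: accumulate attitude as a unit quaternion from body angular-rate
# samples, re-normalizing each step to keep the composition of many rotations stable.
def quat_mult(a, b):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(omega, dt):
    """Integrate angular-rate samples omega (N x 3, rad/s) into a unit quaternion."""
    q = np.array([1.0, 0.0, 0.0, 0.0])
    for w in omega:
        angle = np.linalg.norm(w) * dt
        if angle > 0.0:
            axis = w / np.linalg.norm(w)
            dq = np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))
            q = quat_mult(q, dq)
            q /= np.linalg.norm(q)          # re-normalize to fight round-off drift
    return q

rng = np.random.default_rng(1)
gyro = 0.01 * rng.standard_normal((1000, 3))   # synthetic rate samples, rad/s
print(integrate_gyro(gyro, dt=0.01))
```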
NASA Astrophysics Data System (ADS)
Kim, Sungtae; Lee, Soogab; Kim, Kyu Hong
2008-04-01
A new numerical method for accurate and efficient aeroacoustic computations of multi-dimensional compressible flows has been developed. The core idea of the developed scheme is to unite the advantages of the wavenumber-extended optimized scheme and the M-AUSMPW+/MLP schemes by predicting the physical distribution of flow variables more accurately in multiple space dimensions. The wavenumber-extended optimization procedure for the finite volume approach based on the conservative requirement is newly proposed for accuracy enhancement, which is required to capture the acoustic portion of the solution in the smooth region. Furthermore, a new mechanism for distinguishing between continuous and discontinuous regions, based on the Gibbs phenomenon at discontinuities, is introduced to eliminate excessive numerical dissipation in the continuous region by restricting the application of MLP according to the decision of the distinguishing function. To investigate the effectiveness of the developed method, a sequence of benchmark simulations, such as spherical wave propagation, nonlinear wave propagation, a shock tube problem and a vortex preservation test problem, is executed. Also, through more realistic shock-vortex interaction and muzzle blast flow problems, the utility of the new method for aeroacoustic applications is verified by comparison with previous numerical and experimental results.
A third-order gas-kinetic CPR method for the Euler and Navier-Stokes equations on triangular meshes
NASA Astrophysics Data System (ADS)
Zhang, Chao; Li, Qibing; Fu, Song; Wang, Z. J.
2018-06-01
A third-order accurate gas-kinetic scheme based on the correction procedure via reconstruction (CPR) framework is developed for the Euler and Navier-Stokes equations on triangular meshes. The scheme combines the accuracy and efficiency of the CPR formulation with the multidimensional characteristics and robustness of the gas-kinetic flux solver. Compared with high-order finite volume gas-kinetic methods, the current scheme is more compact and efficient because it avoids wide stencils on unstructured meshes. Unlike the traditional CPR method, where the inviscid and viscous terms are treated differently, the inviscid and viscous fluxes in the current scheme are coupled and computed uniformly through the kinetic evolution model. In addition, the present scheme adopts a fully coupled spatial and temporal gas distribution function for the flux evaluation, achieving high-order accuracy in both space and time within a single step. Numerical tests with a wide range of flow problems, from nearly incompressible to supersonic flows with strong shocks, for both inviscid and viscous problems, demonstrate the high accuracy and efficiency of the present scheme.
NASA Technical Reports Server (NTRS)
Iida, H. T.
1966-01-01
Computational procedure reduces the numerical effort whenever the method of finite differences is used to solve ablation problems for which the surface recession is large relative to the initial slab thickness. The number of numerical operations required for a given maximum space mesh size is reduced.
Numerical modeling and optimization of the Iguassu gas centrifuge
NASA Astrophysics Data System (ADS)
Bogovalov, S. V.; Borman, V. D.; Borisevich, V. D.; Tronin, V. N.; Tronin, I. V.
2017-07-01
The full procedure for the numerical calculation of the optimized parameters of the Iguassu gas centrifuge (GC) is discussed. The procedure consists of a few steps. In the first step the problem of the hydrodynamical flow of the gas in the rotating rotor of the GC is solved numerically. In the second step the diffusion problem for the binary mixture of isotopes is solved, after which the separation power of the gas centrifuge is calculated. In the last step the time-consuming optimization of the GC is performed, providing the maximum of the separation power. The optimization is based on the BOBYQA method, exploiting the results of numerical simulations of the hydrodynamics and the diffusion of the mixture of isotopes. Fast convergence of the calculations is achieved by using a direct solver for the hydrodynamical and diffusion parts of the problem. The optimized separative power and the optimal internal parameters of the Iguassu GC with a 1 m rotor were calculated using the developed approach. The optimization procedure converges in 45 iterations, taking 811 minutes.
NASA Astrophysics Data System (ADS)
Srinivas, V.; Jeyasehar, C. Antony; Ramanjaneyulu, K.; Sasmal, Saptarshi
2012-02-01
The need for efficient non-destructive damage assessment procedures for civil engineering structures is growing rapidly as part of the structural health assessment and management of existing structures. Damage assessment of structures by monitoring changes in the dynamic properties or response of the structure has received considerable attention in recent years. In the present study, damage assessment studies have been carried out on a reinforced concrete beam by evaluating the changes in vibration characteristics with changing damage levels. Structural damage is introduced by a static load applied through a hydraulic jack. After each stage of damage, vibration testing is performed and system parameters are evaluated from the measured acceleration and displacement responses. A reduction in the fundamental frequencies of the first three modes is observed for the different levels of damage, and a consistent decrease in fundamental frequency with increasing damage magnitude is noted. The beam is also simulated numerically, and the vibration characteristics obtained from the measured data are found to be in close agreement with the numerical results.
Robust numerical solution of the reservoir routing equation
NASA Astrophysics Data System (ADS)
Fiorentini, Marcello; Orlandini, Stefano
2013-09-01
The robustness of numerical methods for the solution of the reservoir routing equation is evaluated. The methods considered in this study are: (1) the Laurenson-Pilgrim method, (2) the fourth-order Runge-Kutta method, and (3) the fixed order Cash-Karp method. Method (1) is unable to handle nonmonotonic outflow rating curves. Method (2) is found to fail under critical conditions occurring, especially at the end of inflow recession limbs, when large time steps (greater than 12 min in this application) are used. Method (3) is computationally intensive and it does not solve the limitations of method (2). The limitations of method (2) can be efficiently overcome by reducing the time step in the critical phases of the simulation so as to ensure that water level remains inside the domains of the storage function and the outflow rating curve. The incorporation of a simple backstepping procedure implementing this control into the method (2) yields a robust and accurate reservoir routing method that can be safely used in distributed time-continuous catchment models.
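A sketch of method (2) with the simple backstepping control described above: a fourth-order Runge-Kutta step for dS/dt = I(t) - Q(S) that is retried with a halved time step whenever the new state leaves the admissible domain. The linear outflow law and the synthetic inflow hydrograph are illustrative assumptions, not data or rating curves from the paper.

```python
import numpy as np

# Hypothetical sketch: RK4 reservoir routing with a backstepping control that halves
# the time step when the updated storage leaves the admissible domain.  The outflow
# law Q(S) = S / k stands in for a tabulated storage function and rating curve.
k = 3600.0                                            # s, reservoir constant (assumed)
Q = lambda S: max(S, 0.0) / k                         # outflow rating curve
I = lambda t: 5.0 * np.exp(-((t - 7200.0) / 1800.0) ** 2)   # m3/s, synthetic inflow

def rk4_step(t, S, dt):
    f = lambda t, S: I(t) - Q(S)
    k1 = f(t, S)
    k2 = f(t + dt / 2, S + dt * k1 / 2)
    k3 = f(t + dt / 2, S + dt * k2 / 2)
    k4 = f(t + dt, S + dt * k3)
    return S + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

t, S, dt_max, t_end = 0.0, 0.0, 600.0, 6 * 3600.0
while t < t_end:
    dt = min(dt_max, t_end - t)
    S_new = rk4_step(t, S, dt)
    while S_new < 0.0 and dt > 1.0:                   # backstepping: stay in the domain
        dt *= 0.5
        S_new = rk4_step(t, S, dt)
    t, S = t + dt, max(S_new, 0.0)
print(f"final storage: {S:.1f} m3, final outflow: {Q(S):.3f} m3/s")
```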
A numerical solution method for acoustic radiation from axisymmetric bodies
NASA Technical Reports Server (NTRS)
Caruthers, John E.; Raviprakash, G. K.
1995-01-01
A new and very efficient numerical method for solving equations of the Helmholtz type is specialized for problems having axisymmetric geometry. It is then demonstrated by application to the classical problem of acoustic radiation from a vibrating piston set in a stationary infinite plane. The method utilizes 'Green's Function Discretization' to obtain an accurate resolution of the waves using only 2-3 points per wave. Locally valid free space Green's functions, used in the discretization step, are obtained by quadrature. Results are computed for a range of grid spacing/piston radius ratios at a frequency parameter, ωR/c0, of 2π. In this case, the minimum required grid resolution appears to be fixed by the need to resolve a step boundary condition at the piston edge rather than by the length scale imposed by the wavelength of the acoustic radiation. It is also demonstrated that a local near-field radiation boundary procedure allows the domain to be truncated very near the radiating source with little effect on the solution.
Code of Federal Regulations, 2011 CFR
2011-01-01
... ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Electric Motors Test Procedures, Materials Incorporated and Methods of Determining Efficiency § 431.21 Procedures... Assistant Secretary for Energy Efficiency and Renewable Energy, U.S. Department of Energy, Forrestal...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-24
... Efficiency Program for Certain Commercial and Industrial Equipment: Test Procedures for Commercial Refrigeration Equipment AGENCY: Office of Energy Efficiency and Renewable Energy, Department of Energy. ACTION... amendments to its test procedure for commercial refrigeration equipment (CRE). The amendments would update...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sudiarta, I. Wayan; Angraini, Lily Maysari, E-mail: lilyangraini@unram.ac.id
We have applied the finite difference time domain (FDTD) method with the supersymmetric quantum mechanics (SUSY-QM) procedure to determine excited energies of one-dimensional quantum systems. The theoretical basis of FDTD and SUSY-QM, a numerical algorithm, and an illustrative example for a particle in a one-dimensional square-well potential are given in this paper. It was shown that the numerical results were in excellent agreement with theoretical results. Numerical errors produced by the SUSY-QM procedure were due to errors in the estimation of the superpotentials and supersymmetric partner potentials.
40 CFR 63.752 - Recordkeeping requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... efficiency of the control system (as determined using the procedures specified in § 63.750(h)) and all test... adsorber: (i) The overall control efficiency of the control system (as determined using the procedures... overall control efficiency of the control system (as determined using the procedures specified in § 63.750...
40 CFR 63.752 - Recordkeeping requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... efficiency of the control system (as determined using the procedures specified in § 63.750(h)) and all test... adsorber: (i) The overall control efficiency of the control system (as determined using the procedures... overall control efficiency of the control system (as determined using the procedures specified in § 63.750...
40 CFR 63.752 - Recordkeeping requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... efficiency of the control system (as determined using the procedures specified in § 63.750(h)) and all test... adsorber: (i) The overall control efficiency of the control system (as determined using the procedures... overall control efficiency of the control system (as determined using the procedures specified in § 63.750...
40 CFR 63.752 - Recordkeeping requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... efficiency of the control system (as determined using the procedures specified in § 63.750(h)) and all test... adsorber: (i) The overall control efficiency of the control system (as determined using the procedures... overall control efficiency of the control system (as determined using the procedures specified in § 63.750...
40 CFR 63.752 - Recordkeeping requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... efficiency of the control system (as determined using the procedures specified in § 63.750(h)) and all test... adsorber: (i) The overall control efficiency of the control system (as determined using the procedures... overall control efficiency of the control system (as determined using the procedures specified in § 63.750...
The value of swarm data for practical modeling of plasma devices
NASA Astrophysics Data System (ADS)
Napartovich, A. P.; Kochetov, I. V.
2011-04-01
The non-thermal plasma is a key component in gas lasers, waste gas cleaners, ozone generators, plasma igniters, flame holders, flow control in high-speed aerodynamics and other applications. The specific feature of the non-thermal plasma is its high sensitivity to variations in the governing parameters (gas composition, pressure, pulse duration, E/N parameter). The reactivity of the plasma is due to the appearance of atoms and chemical radicals. For the efficient production of chemically active species, a high average electron energy is required, which is controlled by the balance of gain from the electric field and loss in inelastic collisions. In weakly ionized plasma the electron energy distribution function is far from Maxwellian and must be found numerically for the specified conditions. Numerical modeling of processes in plasma technologies requires vast databases of electron scattering cross sections to be available. The only reliable criterion for evaluating the validity of a set of cross sections for a particular species is the correct prediction of the electron transport and kinetic coefficients measured in swarm experiments. This criterion is traditionally used to improve experimentally measured cross sections, as was suggested earlier by Phelps. A set of cross sections subjected to this procedure is called a self-consistent set. Nowadays, such reliable self-consistent sets are known for many species. Problems encountered in the implementation of the fitting procedure and examples of its successful applications are described in the paper.
NASA Technical Reports Server (NTRS)
Weng, Fuzhong
1992-01-01
A theory is developed for discretizing the vector integro-differential radiative transfer equation including both solar and thermal radiation. A complete solution and boundary equations are obtained using the discrete-ordinate method. An efficient numerical procedure is presented for calculating the phase matrix and achieving computational stability. With natural light used as a beam source, the Stokes parameters from the model proposed here are compared with the analytical solutions of Chandrasekhar (1960) for a Rayleigh scattering atmosphere. The model is then applied to microwave frequencies with a thermal source, and the brightness temperatures are compared with those from Stamnes'(1988) radiative transfer model.
Rugate filter for light-trapping in solar cells.
Fahr, Stephan; Ulbrich, Carolin; Kirchartz, Thomas; Rau, Uwe; Rockstuhl, Carsten; Lederer, Falk
2008-06-23
We suggest a design for a coating that could be applied on top of any solar cell having at least one diffusing surface. This coating acts as an angle- and wavelength-selective filter, which increases the average path length and absorptance at long wavelengths without altering the solar cell performance at short wavelengths. The filter design is based on a continuous variation of the refractive index in order to minimize undesired reflection losses. Numerical procedures are used to optimize the filter for a 10 μm thick monocrystalline silicon solar cell, which lifts the efficiency above the Auger limit for unconcentrated illumination. The feasibility of fabricating such filters is also discussed, considering a finite available refractive index range.
Full potential methods for analysis/design of complex aerospace configurations
NASA Technical Reports Server (NTRS)
Shankar, Vijaya; Szema, Kuo-Yen; Bonner, Ellwood
1986-01-01
The steady form of the full potential equation, in conservative form, is employed to analyze and design a wide variety of complex aerodynamic shapes. The nonlinear method is based on the theory of characteristic signal propagation coupled with novel flux biasing concepts and body-fitted mapping procedures. The resulting codes are vectorized for the CRAY XMP and the VPS-32 supercomputers. Use of the full potential nonlinear theory is demonstrated for a single-point supersonic wing design and a multipoint design for transonic maneuver/supersonic cruise/maneuver conditions. Achievement of high aerodynamic efficiency through numerical design is verified by wind tunnel tests. Other studies reported include analyses of a canard/wing/nacelle fighter geometry.
Non-linear eigensolver-based alternative to traditional SCF methods
NASA Astrophysics Data System (ADS)
Gavin, B.; Polizzi, E.
2013-05-01
The self-consistent procedure in electronic structure calculations is revisited using a highly efficient and robust algorithm for solving the non-linear eigenvector problem, i.e., H(ψ)ψ = Eψ. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm to account for the non-linearity of the Hamiltonian with the occupied eigenvectors. Using a series of numerical examples and the density functional theory Kohn-Sham model, it will be shown that our approach can outperform the traditional SCF mixing-scheme techniques by providing a higher convergence rate, convergence to the correct solution regardless of the choice of the initial guess, and a significant reduction of the eigenvalue solve time in simulations.
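For contrast with the nonlinear eigensolver approach above, the following is a minimal sketch of the traditional mixing-based SCF loop on a toy nonlinear eigenproblem H(ψ)ψ = Eψ in which the potential depends on the occupied density; the Hamiltonian, coupling strength and mixing factor are invented, and the FEAST-based scheme itself is not reproduced.

```python
import numpy as np

# Hypothetical sketch of a traditional mixing-based SCF loop on a toy nonlinear
# eigenproblem: the potential is built from the occupied density of the ground state.
n = 50
H0 = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)

def lowest_state(rho, coupling=1.0):
    """Diagonalize H(rho) = H0 + coupling*diag(rho) and return its ground state."""
    w, v = np.linalg.eigh(H0 + coupling * np.diag(rho))
    return w[0], v[:, 0]

rho = np.full(n, 1.0 / n)                      # initial density guess
for it in range(200):
    E, psi = lowest_state(rho)
    rho_new = psi**2
    if np.linalg.norm(rho_new - rho) < 1e-10:  # self-consistency reached
        break
    rho = 0.3 * rho_new + 0.7 * rho            # linear mixing for stability
print(it, E)
```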
Efficient numerical simulation of an electrothermal de-icer pad
NASA Technical Reports Server (NTRS)
Roelke, R. J.; Keith, T. G., Jr.; De Witt, K. J.; Wright, W. B.
1987-01-01
In this paper, a new approach to calculate the transient thermal behavior of an iced electrothermal de-icer pad was developed. The method of splines was used to obtain the temperature distribution within the layered pad. Splines were used in order to create a tridiagonal system of equations that could be directly solved by Gauss elimination. The Stefan problem was solved using the enthalpy method along with a recent implicit technique. Only one to three iterations were needed to locate the melt front during any time step. Computational times were shown to be greatly reduced over those of an existing one-dimensional procedure without any reduction in accuracy; the current technique was more than 10 times faster.
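The tridiagonal systems mentioned above can be solved directly by Gauss elimination (the Thomas algorithm); the sketch below uses arbitrary test coefficients, not the spline discretization of the de-icer pad.

```python
import numpy as np

# Hypothetical sketch of the linear-algebra kernel: direct Gauss elimination
# (Thomas algorithm) for a tridiagonal system.  Coefficients are arbitrary test data.
def solve_tridiagonal(a, b, c, d):
    """Solve the system with sub-diagonal a, diagonal b, super-diagonal c, rhs d."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n = 6
a = np.full(n, -1.0); a[0] = 0.0               # sub-diagonal (a[0] unused)
c = np.full(n, -1.0); c[-1] = 0.0              # super-diagonal (c[-1] unused)
b = np.full(n, 2.0)                            # main diagonal
d = np.ones(n)                                 # right-hand side
print(solve_tridiagonal(a, b, c, d))           # expected [3, 5, 6, 6, 5, 3]
```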
Optimization of Composite Structures with Curved Fiber Trajectories
NASA Astrophysics Data System (ADS)
Lemaire, Etienne; Zein, Samih; Bruyneel, Michael
2014-06-01
This paper studies the problem of optimizing composite shells manufactured using Automated Tape Layup (ATL) or Automated Fiber Placement (AFP) processes. The optimization procedure relies on a new approach to generate equidistant fiber trajectories based on the Fast Marching Method. Starting with a (possibly curved) reference fiber direction defined on a (possibly curved) meshed surface, the new method allows determining the fiber orientations resulting from a uniform-thickness layup. The design variables are the parameters defining the position and the shape of the reference curve, which results in very few design variables. Thanks to this efficient parameterization, numerical applications of maximum-stiffness optimization are proposed. The shape of the design space is discussed with regard to local and global optimal solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, Henry; Wang, Cong; Winterfeld, Philip
An efficient modeling approach is described for incorporating arbitrary 3D discrete fractures, such as hydraulic fractures or faults, into the modeling of fracture-dominated fluid flow and heat transfer in fractured geothermal reservoirs. This technique allows 3D discrete fractures to be discretized independently from the surrounding rock volume and inserted explicitly into a primary fracture/matrix grid that is generated without including the 3D discrete fractures a priori. An effective computational algorithm is developed to discretize these 3D discrete fractures and construct local connections between the 3D fractures and the fracture/matrix grid blocks representing the surrounding rock volume. The constructed gridding information on the 3D fractures is then added to the primary grid. This embedded fracture modeling approach can be directly implemented into a developed geothermal reservoir simulator via the integral finite difference (IFD) method or with TOUGH2 technology. This embedded fracture modeling approach is very promising and computationally efficient for handling realistic 3D discrete fractures with complicated geometries, connections, and spatial distributions. Compared with other fracture modeling approaches, it avoids cumbersome 3D unstructured, local refining procedures, and increases computational efficiency by simplifying the Jacobian matrix size and sparsity, while keeping sufficient accuracy. Several numerical simulations are presented to demonstrate the utility and robustness of the proposed technique. Our numerical experiments show that this approach captures all the key patterns of fluid flow and heat transfer dominated by fractures in these cases. Thus, this approach is readily applicable to the simulation of fractured geothermal reservoirs with both artificial and natural fractures.
Full three-dimensional investigation of structural contact interactions in turbomachines
NASA Astrophysics Data System (ADS)
Legrand, Mathias; Batailly, Alain; Magnain, Benoît; Cartraud, Patrice; Pierre, Christophe
2012-05-01
Minimizing the operating clearance between rotating bladed-disks and stationary surrounding casings is a primary concern in the design of modern turbomachines since it may advantageously affect their energy efficiency. This technical choice possibly leads to interactions between elastic structural components through direct unilateral contact and dry friction, events which are now accepted as normal operating conditions. Subsequent nonlinear dynamical behaviors of such systems are commonly investigated with simplified academic models mainly due to theoretical difficulties and numerical challenges involved in non-smooth large-scale realistic models. In this context, the present paper introduces an adaptation of a full three-dimensional contact strategy for the prediction of potentially damaging motions that would imply highly demanding computational efforts for the targeted aerospace application in an industrial context. It combines a smoothing procedure including bicubic B-spline patches together with a Lagrange multiplier based contact strategy within an explicit time-marching integration procedure preferred for its versatility. The proposed algorithm is first compared on a benchmark configuration against the more elaborated bi-potential formulation and the commercial software Ansys. The consistency of the provided results and the low energy fluctuations of the introduced approach underlines its reliable numerical properties. A case study featuring blade-tip/casing contact on industrial finite element models is then proposed: it incorporates component mode synthesis and the developed three-dimensional contact algorithm for investigating structural interactions occurring within a turbomachine compressor stage. Both time results and frequency-domain analysis emphasize the practical use of such a numerical tool: detection of severe operating conditions and critical rotational velocities, time-dependent maps of stresses acting within the structures, parameter studies and blade design tests.
Numerical methods for the design of gradient-index optical coatings.
Anzengruber, Stephan W; Klann, Esther; Ramlau, Ronny; Tonova, Diana
2012-12-01
We formulate the problem of designing gradient-index optical coatings as the task of solving a system of operator equations. We use iterative numerical procedures known from the theory of inverse problems to solve it with respect to the coating refractive index profile and thickness. The mathematical derivations necessary for the application of the procedures are presented, and different numerical methods (Landweber, Newton, and Gauss-Newton methods, Tikhonov minimization with surrogate functionals) are implemented. Procedures for the transformation of the gradient coating designs into quasi-gradient ones (i.e., multilayer stacks of homogeneous layers with different refractive indices) are also developed. The design algorithms work with physically available coating materials that could be produced with the modern coating technologies.
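As an example of one of the iterative schemes named above, here is a minimal Landweber iteration for a generic discrete linear model y = Ax; the smoothing operator is a test matrix, not the actual coating characteristic model, and early stopping plays the role of regularization.

```python
import numpy as np

# Hypothetical sketch of Landweber iteration x_{k+1} = x_k + w A^T (y - A x_k) for a
# generic discrete linear inverse problem.  The forward operator is an invented
# smoothing kernel, not the coating model used in the paper.
rng = np.random.default_rng(0)
n = 80
s = np.linspace(0.0, 1.0, n)
A = np.exp(-100.0 * (s[:, None] - s[None, :])**2)      # smoothing forward operator
A /= A.sum(axis=1, keepdims=True)
x_true = np.sin(2 * np.pi * s) + 0.5
y = A @ x_true + 1e-3 * rng.standard_normal(n)         # noisy data

w = 1.0 / np.linalg.norm(A, 2)**2                      # step size bounded by 1/||A||^2
x = np.zeros(n)
for k in range(500):                                   # early stopping regularizes
    x = x + w * A.T @ (y - A @ x)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```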
Geometry definition and grid generation for a complete fighter aircraft
NASA Technical Reports Server (NTRS)
Edwards, T. A.
1986-01-01
Recent advances in computing power and numerical solution procedures have enabled computational fluid dynamicists to attempt increasingly difficult problems. In particular, efforts are focusing on computations of complex three-dimensional flow fields about realistic aerodynamic bodies. To perform such computations, a very accurate and detailed description of the surface geometry must be provided, and a three-dimensional grid must be generated in the space around the body. The geometry must be supplied in a format compatible with the grid generation requirements, and must be verified to be free of inconsistencies. This paper presents a procedure for performing the geometry definition of a fighter aircraft that makes use of a commercial computer-aided design/computer-aided manufacturing system. Furthermore, visual representations of the geometry are generated using a computer graphics system for verification of the body definition. Finally, the three-dimensional grids for fighter-like aircraft are generated by means of an efficient new parabolic grid generation method. This method exhibits good control of grid quality.
Combustion of hydrogen injected into a supersonic airstream (the SHIP computer program)
NASA Technical Reports Server (NTRS)
Markatos, N. C.; Spalding, D. B.; Tatchell, D. G.
1977-01-01
The mathematical and physical basis of the SHIP computer program which embodies a finite-difference, implicit numerical procedure for the computation of hydrogen injected into a supersonic airstream at an angle ranging from normal to parallel to the airstream main flow direction is described. The physical hypotheses built into the program include: a two-equation turbulence model, and a chemical equilibrium model for the hydrogen-oxygen reaction. Typical results for equilibrium combustion are presented and exhibit qualitatively plausible behavior. The computer time required for a given case is approximately 1 minute on a CDC 7600 machine. A discussion of the assumption of parabolic flow in the injection region is given which suggests that improvement in calculation in this region could be obtained by use of the partially parabolic procedure of Pratap and Spalding. It is concluded that the technique described herein provides the basis for an efficient and reliable means for predicting the effects of hydrogen injection into supersonic airstreams and of its subsequent combustion.
Geometry definition and grid generation for a complete fighter aircraft
NASA Technical Reports Server (NTRS)
Edwards, Thomas A.
1986-01-01
Recent advances in computing power and numerical solution procedures have enabled computational fluid dynamicists to attempt increasingly difficult problems. In particular, efforts are focusing on computations of complex three-dimensional flow fields about realistic aerodynamic bodies. To perform such computations, a very accurate and detailed description of the surface geometry must be provided, and a three-dimensional grid must be generated in the space around the body. The geometry must be supplied in a format compatible with the grid generation requirements, and must be verified to be free of inconsistencies. A procedure for performing the geometry definition of a fighter aircraft that makes use of a commercial computer-aided design/computer-aided manufacturing system is presented. Furthermore, visual representations of the geometry are generated using a computer graphics system for verification of the body definition. Finally, the three-dimensional grids for fighter-like aircraft are generated by means of an efficient new parabolic grid generation method. This method exhibits good control of grid quality.
Road landslide information management and forecasting system base on GIS.
Wang, Wei Dong; Du, Xiang Gang; Xie, Cui Ming
2009-09-01
Taking into account the characteristics of road geological hazards and their supervision, it is very important to develop a Road Landslide Information Management and Forecasting System based on a Geographic Information System (GIS). The paper presents the system objective, functions, component modules and key techniques in the procedure of system development. The system, based on the spatial and attribute information of road geological hazards, was developed and applied in Guizhou, a province of China where there are numerous and typical landslides. Using the system, communication managers can visually query all road landslide information based on the regional road network or on the monitoring network of an individual landslide. Furthermore, the system, integrated with mathematical prediction models and GIS's strength in spatial analysis, can assess and predict the landslide development process according to the field monitoring data. Thus, it can efficiently assist road construction or management units in making decisions to control landslides and to reduce human vulnerability.
Automated Approach to Very High-Order Aeroacoustic Computations. Revision
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2001-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.
Asymptotic Analysis of the Total Least Squares ESPRIT Algorithm
NASA Astrophysics Data System (ADS)
Ottersten, B. E.; Viberg, M.; Kailath, T.
1989-11-01
This paper considers the problem of estimating the parameters of multiple narrowband signals arriving at an array of sensors. Modern approaches to this problem often involve costly procedures for calculating the estimates. The ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm was recently proposed as a means for obtaining accurate estimates without requiring a costly search of the parameter space. This method utilizes an array invariance to arrive at a computationally efficient multidimensional estimation procedure. Herein, the asymptotic distribution of the estimation error is derived for the Total Least Squares (TLS) version of ESPRIT. The Cramer-Rao Bound (CRB) for the ESPRIT problem formulation is also derived and found to coincide with the variance of the asymptotic distribution through numerical examples. The method is also compared to least squares ESPRIT and MUSIC as well as to the CRB for a calibrated array. Simulations indicate that the theoretic expressions can be used to accurately predict the performance of the algorithm.
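A compact numerical sketch of TLS ESPRIT for a uniform linear array with half-wavelength spacing, in the spirit of the formulation analyzed above; the array size, snapshot count, noise level and source angles are invented test values.

```python
import numpy as np

# Hypothetical sketch of TLS ESPRIT for a uniform linear array with half-wavelength
# spacing; all scenario parameters below are invented test values.
rng = np.random.default_rng(0)
M, d, T = 8, 2, 500                               # sensors, sources, snapshots
angles = np.deg2rad([-10.0, 25.0])                # true directions of arrival

A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))   # steering matrix
S = (rng.standard_normal((d, T)) + 1j * rng.standard_normal((d, T))) / np.sqrt(2)
N = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))) / np.sqrt(2)
X = A @ S + N                                     # array snapshots

R = X @ X.conj().T / T                            # sample covariance
_, V = np.linalg.eigh(R)
Es = V[:, -d:]                                    # signal subspace (largest eigenvalues)

# Total least squares solution of the invariance relation Es[:-1] Psi ~ Es[1:]
_, _, Vh = np.linalg.svd(np.hstack([Es[:-1], Es[1:]]))
Vt = Vh.conj().T
V12, V22 = Vt[:d, d:], Vt[d:, d:]
Psi = -V12 @ np.linalg.inv(V22)
phases = np.angle(np.linalg.eigvals(Psi))
print(np.rad2deg(np.arcsin(phases / np.pi)))      # estimated DOAs in degrees
```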
Full Parallel Implementation of an All-Electron Four-Component Dirac-Kohn-Sham Program.
Rampino, Sergio; Belpassi, Leonardo; Tarantelli, Francesco; Storchi, Loriano
2014-09-09
A full distributed-memory implementation of the Dirac-Kohn-Sham (DKS) module of the program BERTHA (Belpassi et al., Phys. Chem. Chem. Phys. 2011, 13, 12368-12394) is presented, where the self-consistent field (SCF) procedure is replicated on all the parallel processes, each process working on subsets of the global matrices. The key feature of the implementation is an efficient procedure for switching between two matrix distribution schemes, one (integral-driven) optimal for the parallel computation of the matrix elements and another (block-cyclic) optimal for the parallel linear algebra operations. This approach, making both CPU-time and memory scalable with the number of processors used, virtually overcomes at once both time and memory barriers associated with DKS calculations. Performance, portability, and numerical stability of the code are illustrated on the basis of test calculations on three gold clusters of increasing size, an organometallic compound, and a perovskite model. The calculations are performed on a Beowulf and a BlueGene/Q system.
An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2000-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.
NASA Astrophysics Data System (ADS)
Citro, V.; Luchini, P.; Giannetti, F.; Auteri, F.
2017-09-01
The study of the stability of a dynamical system described by a set of partial differential equations (PDEs) requires the computation of unstable states as the control parameter exceeds its critical threshold. Unfortunately, the discretization of the governing equations, especially for fluid dynamic applications, often leads to very large discrete systems. As a consequence, matrix based methods, like for example the Newton-Raphson algorithm coupled with a direct inversion of the Jacobian matrix, lead to computational costs too large in terms of both memory and execution time. We present a novel iterative algorithm, inspired by Krylov-subspace methods, which is able to compute unstable steady states and/or accelerate the convergence to stable configurations. Our new algorithm is based on the minimization of the residual norm at each iteration step with a projection basis updated at each iteration rather than at periodic restarts like in the classical GMRES method. The algorithm is able to stabilize any dynamical system without increasing the computational time of the original numerical procedure used to solve the governing equations. Moreover, it can be easily inserted into a pre-existing relaxation (integration) procedure with a call to a single black-box subroutine. The procedure is discussed for problems of different sizes, ranging from a small two-dimensional system to a large three-dimensional problem involving the Navier-Stokes equations. We show that the proposed algorithm is able to improve the convergence of existing iterative schemes. In particular, the procedure is applied to the subcritical flow inside a lid-driven cavity. We also discuss the application of Boostconv to compute the unstable steady flow past a fixed circular cylinder (2D) and boundary-layer flow over a hemispherical roughness element (3D) for supercritical values of the Reynolds number. We show that Boostconv can be used effectively with any spatial discretization, be it a finite-difference, finite-volume, finite-element or spectral method.
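The abstract above describes minimizing the residual norm over a projection basis that is updated at every iteration. The sketch below is a generic Anderson-type residual-projection acceleration of a fixed-point iteration, written in Python for illustration only; it is not the authors' Boostconv subroutine, and the map F, the basis depth m, and the toy cosine example are assumptions of this sketch.

```python
import numpy as np

def accelerated_fixed_point(F, x0, m=5, tol=1e-10, max_iter=500):
    """Accelerate the fixed-point iteration x <- F(x) by choosing, at every step,
    the combination of recent iterates whose residual norm is smallest
    (an Anderson-type residual projection; illustrative only)."""
    x = np.asarray(x0, dtype=float)
    X, R = [], []                                  # recent images F(x_k) and residuals
    for k in range(max_iter):
        fx = F(x)
        r = fx - x                                 # residual of the fixed-point map
        if np.linalg.norm(r) < tol:
            return x, k
        X.append(fx); R.append(r)
        if len(R) > m:                             # keep only the last m pairs
            X.pop(0); R.pop(0)
        if len(R) > 1:
            Rm = np.column_stack(R)
            # minimize || Rm c ||  subject to  sum(c) = 1  (small regularization added)
            G = Rm.T @ Rm + 1e-12 * np.eye(Rm.shape[1])
            c = np.linalg.solve(G, np.ones(Rm.shape[1]))
            c /= c.sum()
            x = np.column_stack(X) @ c             # combined iterate with smallest residual
        else:
            x = fx
    return x, max_iter

# Toy usage: solve x = cos(x) componentwise as a stand-in for a relaxation step.
x, iters = accelerated_fixed_point(np.cos, np.zeros(3))
print(x, iters)
```

In this form the acceleration wraps the existing update F as a black box, mirroring the abstract's point that the procedure can be inserted into a pre-existing relaxation loop through a single subroutine call.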
Accuracy of an unstructured-grid upwind-Euler algorithm for the ONERA M6 wing
NASA Technical Reports Server (NTRS)
Batina, John T.
1991-01-01
Improved algorithms for the solution of the three-dimensional, time-dependent Euler equations are presented for aerodynamic analysis involving unstructured dynamic meshes. The improvements were developed recently for the spatial and temporal discretizations used by unstructured-grid flow solvers. The spatial discretization involves a flux-split approach that is naturally dissipative and captures shock waves sharply, with at most one grid point within the shock structure. The temporal discretization involves either an explicit time-integration scheme using a multistage Runge-Kutta procedure or an implicit time-integration scheme using a Gauss-Seidel relaxation procedure, which is computationally efficient for either steady or unsteady flow problems. With the implicit Gauss-Seidel procedure, very large time steps may be used for rapid convergence to steady state, and the step size for unsteady cases may be selected for temporal accuracy rather than for numerical stability. Steady flow results are presented for both the NACA 0012 airfoil and the Office National d'Etudes et de Recherches Aerospatiales M6 wing to demonstrate applications of the new Euler solvers. The paper presents a description of the Euler solvers along with results and comparisons that assess their capability.
Nonlocal equation for the superconducting gap parameter
NASA Astrophysics Data System (ADS)
Simonucci, S.; Strinati, G. Calvanese
2017-08-01
The properties of a nonlocal (integral) equation for the superconducting gap parameter are considered in detail. This equation is obtained by a coarse-graining procedure applied to the Bogoliubov-de Gennes (BdG) equations over the whole coupling-versus-temperature phase diagram associated with the superfluid phase. It is found that the limiting size of the coarse-graining procedure, which is dictated by the range of the kernel of this integral equation, corresponds to the size of the Cooper pairs over the whole coupling-versus-temperature phase diagram up to the critical temperature, even when Cooper pairs turn into composite bosons on the BEC side of the BCS-BEC crossover. A practical method is further implemented to solve this integral equation numerically in an efficient way, based on a novel algorithm for calculating the Fourier transforms. Application of this method to the case of an isolated vortex, throughout the BCS-BEC crossover and for all temperatures in the superfluid phase, helps clarify the nature of the length scales associated with a single vortex and the kinds of details that are in practice disposed of by the coarse-graining procedure on the BdG equations.
Numerical procedure to determine geometric view factors for surfaces occluded by cylinders
NASA Technical Reports Server (NTRS)
Sawyer, P. L.
1978-01-01
A numerical procedure was developed to determine geometric view factors between connected infinite strips occluded by any number of infinite circular cylinders. The procedure requires a two-dimensional cross-sectional model of the configuration of interest. The two-dimensional model consists of a convex polygon enclosing any number of circles. Each side of the polygon represents one strip, and each circle represents a circular cylinder. A description and listing of a computer program based on this procedure are included in this report. The program calculates geometric view factors between individual strips and between individual strips and the collection of occluding cylinders.
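The NASA program described above computes occluded strip-to-strip view factors deterministically. As a rough, independent cross-check of the same geometric quantity, the Python sketch below estimates the two-dimensional (infinite-strip) view factor between two strips with occluding circles by Monte Carlo ray sampling; the geometry helpers, the cosine-weighted direction sampling, and the example layout are assumptions of this sketch, not part of the original report.

```python
import numpy as np

rng = np.random.default_rng(0)

def ray_segment(p, d, a, b):
    """Distance along the ray p + t*d to the segment [a, b] (inf if missed)."""
    e = b - a
    M = np.array([[d[0], -e[0]], [d[1], -e[1]]])
    if abs(np.linalg.det(M)) < 1e-12:
        return np.inf
    t, s = np.linalg.solve(M, a - p)
    return t if (t > 1e-9 and 0.0 <= s <= 1.0) else np.inf

def ray_circle(p, d, c, r):
    """Distance along the ray to a circle of center c and radius r (inf if missed);
    the emission point is assumed to lie outside every circle."""
    q = p - c
    b = q @ d
    disc = b * b - (q @ q - r * r)
    if disc < 0.0:
        return np.inf
    t = -b - np.sqrt(disc)
    return t if t > 1e-9 else np.inf

def view_factor(strip_a, strip_b, circles, n_rays=20000):
    """Monte Carlo geometric view factor from strip_a to strip_b in a 2D
    cross-section (infinite strips and cylinders), with circular occluders."""
    a0, a1 = map(np.asarray, strip_a)
    b0, b1 = map(np.asarray, strip_b)
    tangent = (a1 - a0) / np.linalg.norm(a1 - a0)
    normal = np.array([-tangent[1], tangent[0]])   # assumed to face the other strip
    hits = 0
    for _ in range(n_rays):
        p = a0 + rng.random() * (a1 - a0)          # uniform emission point on strip A
        sin_t = 2.0 * rng.random() - 1.0           # cosine-weighted direction (2D)
        cos_t = np.sqrt(1.0 - sin_t * sin_t)
        d = cos_t * normal + sin_t * tangent
        t_b = ray_segment(p, d, b0, b1)
        t_occ = min((ray_circle(p, d, np.asarray(c), r) for c, r in circles),
                    default=np.inf)
        hits += t_b < t_occ
    return hits / n_rays

# Two opposed unit strips one unit apart, occluded by a single cylinder in between.
print(view_factor(((0.0, 0.0), (1.0, 0.0)),
                  ((0.0, 1.0), (1.0, 1.0)),
                  [((0.5, 0.5), 0.2)]))
```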
A new procedure for calculating contact stresses in gear teeth
NASA Technical Reports Server (NTRS)
Somprakit, Paisan; Huston, Ronald L.
1991-01-01
A numerical procedure for evaluating and monitoring contact stresses in meshing gear teeth is discussed. The procedure is intended to extend the range of applicability and to improve the accuracy of gear contact stress analysis. The procedure is based upon fundamental solutions from the theory of elasticity and is an iterative numerical procedure. The method is believed to have distinct advantages over the classical Hertz method, the finite-element method, and existing approaches with the boundary element method. Unlike many classical contact stress analyses, friction effects and sliding are included. Slipping and sticking in the contact region are studied. Several examples are discussed. The results are in agreement with classical results. Applications are presented for spur gears.
A high-efficiency low-voltage class-E PA for IoT applications in sub-1 GHz frequency range
NASA Astrophysics Data System (ADS)
Zhou, Chenyi; Lu, Zhenghao; Gu, Jiangmin; Yu, Xiaopeng
2017-10-01
We propose a complete and iterative integrated-circuit and electromagnetic (EM) co-design methodology for a low-voltage sub-1 GHz class-E PA. The presented class-E PA consists of the on-chip power transistor, the on-chip gate driving circuits, the off-chip tunable LC load network and the off-chip LC ladder low-pass filter. The design methodology includes an explicit design-equation-based analysis and numerical derivation of circuit component values, output-power-targeted transistor sizing and low-pass filter design, and power-efficiency-oriented design optimization. The proposed design procedure includes the power-efficiency-oriented LC network tuning and a detailed circuit/EM co-simulation plan at the integrated circuit, package and PCB levels to ensure an accurate simulation-to-measurement match and first-pass design success. The proposed PA is targeted to achieve more than 15 dBm output power delivery and 40% power efficiency in the 433 MHz frequency band with a 1.5 V low-voltage supply. The LC load network is designed to be off-chip for ease of tuning and optimization. The same circuit can be extended to all sub-1 GHz applications with the same tuning and optimization of the load network at different frequencies. The amplifier is implemented in 0.13 μm CMOS technology with a core area of 400 μm by 300 μm. Measurement results showed that it delivered 16.42 dBm at the antenna with an efficiency of 40.6%. A harmonics suppression of 44 dBc is achieved, making it suitable for massive deployment of IoT devices. Project supported by the National Natural Science Foundation of China (No. 61574125) and the Industry Innovation Project of Suzhou City of China (No. SYG201641).
Equilibrium paths analysis of materials with rheological properties by using the chaos theory
NASA Astrophysics Data System (ADS)
Bednarek, Paweł; Rządkowski, Jan
2018-01-01
Numerical equilibrium path analysis of a material with random rheological properties using standard procedures and specialized computer programs was not successful. A proper solution for the analysed heuristic material model was obtained using elements of chaos theory and neural networks. The paper discusses the mathematical basis of the computer programs used and elaborates the properties of the attractor employed in the analysis. Results of the numerical analysis are presented in both numerical and graphical form for the procedures used.
Tsai, Chen-An; Lee, Kuan-Ting; Liu, Jen-Pei
2016-01-01
A key feature of precision medicine is that it takes individual variability at the genetic or molecular level into account in determining the best treatment for patients diagnosed with diseases detected by recently developed novel biotechnologies. The enrichment design is an efficient design that enrolls only the patients testing positive for specific molecular targets and randomly assigns them to the targeted treatment or the concurrent control. However, there is no diagnostic device with perfect accuracy and precision for detecting molecular targets. In particular, the positive predictive value (PPV) can be quite low for rare diseases with low prevalence. Under the enrichment design, some patients testing positive for specific molecular targets may not have the molecular targets. The efficacy of the targeted therapy may therefore be underestimated in the patients that actually do have the molecular targets. To address the loss of efficiency due to misclassification error, we apply the discrete mixture modeling for time-to-event data proposed by Eng and Hanlon [8] to develop an inferential procedure, based on the Cox proportional hazard model, for the effect of the targeted treatment in the true-positive patients with the molecular targets. Our proposed procedure incorporates both the inaccuracy of diagnostic devices and the uncertainty of estimated accuracy measures. We employed the expectation-maximization algorithm in conjunction with the bootstrap technique for estimation of the hazard ratio and its estimated variance. We report the results of simulation studies which empirically investigated the performance of the proposed method. Our proposed method is illustrated by a numerical example.
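To give a feel for the bootstrap part of the procedure, the sketch below computes a hazard ratio and its bootstrap standard error by resampling patients. It is a simplified stand-in that uses a constant-hazard (exponential) estimator rather than the paper's Cox-model EM procedure, and the cohort, group labels and rates are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def exp_hazard(time, event):
    """Maximum-likelihood constant hazard: events per unit person-time."""
    return event.sum() / time.sum()

def hazard_ratio(time, event, group):
    return exp_hazard(time[group == 1], event[group == 1]) / \
           exp_hazard(time[group == 0], event[group == 0])

def bootstrap_hazard_ratio(time, event, group, n_boot=2000):
    """Point estimate and bootstrap standard error of the hazard ratio,
    resampling patients with replacement."""
    n = len(time)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        reps[b] = hazard_ratio(time[idx], event[idx], group[idx])
    return hazard_ratio(time, event, group), reps.std(ddof=1)

# Synthetic cohort: group 1 = targeted treatment, group 0 = control, no censoring.
n = 200
group = rng.integers(0, 2, n)
time = rng.exponential(1.0 / np.where(group == 1, 0.5, 1.0))
event = np.ones(n, dtype=int)
print(bootstrap_hazard_ratio(time, event, group))
```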
Excel spreadsheet in teaching numerical methods
NASA Astrophysics Data System (ADS)
Djamila, Harimi
2017-09-01
One of the important objectives in teaching numerical methods to undergraduate students is to bring them to an understanding of numerical method algorithms. Although manual calculation is important in understanding a procedure, it is time consuming and prone to error. This is specifically the case for the iteration procedures used in many numerical methods. Currently, many commercial programs are useful in teaching numerical methods, such as Matlab, Maple, and Mathematica, but these are usually not user-friendly to the uninitiated. An Excel spreadsheet offers an initial level of programming and can be used either on or off campus, so students are not distracted by writing code. It must be emphasized that more general commercial software still needs to be introduced later for more elaborate problems. This article reports on a strategy for teaching numerical methods in undergraduate engineering programs. It is directed to students, lecturers and researchers in the engineering field.
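As a small illustration of the kind of iteration a spreadsheet makes visible row by row, the Python sketch below prints a table for Newton's method; the example equation cos(x) - x = 0 is an assumed classroom case, not taken from the article.

```python
import math

def newton_table(f, df, x0, n_steps=8):
    """Print a spreadsheet-style iteration table for Newton's method:
    one row per iteration with columns k, x_k, f(x_k) and the update step."""
    x = x0
    print(f"{'k':>2} {'x_k':>14} {'f(x_k)':>12} {'step':>12}")
    for k in range(n_steps):
        fx = f(x)
        step = -fx / df(x)
        print(f"{k:>2} {x:>14.8f} {fx:>12.2e} {step:>12.2e}")
        x += step
    return x

# A common classroom example: the root of cos(x) - x = 0.
newton_table(lambda x: math.cos(x) - x,
             lambda x: -math.sin(x) - 1.0,
             x0=1.0)
```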
On recent advances and future research directions for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Baker, A. J.; Soliman, M. O.; Manhardt, P. D.
1986-01-01
This paper highlights some recent accomplishments regarding CFD numerical algorithm constructions for generation of discrete approximate solutions to classes of Reynolds-averaged Navier-Stokes equations. Following an overview of turbulent closure modeling, and development of appropriate conservation law systems, a Taylor weak-statement semi-discrete approximate solution algorithm is developed. Various forms for completion to the final linear algebra statement are cited, as are a range of candidate numerical linear algebra solution procedures. This development sequence emphasizes the key building blocks of a CFD RNS algorithm, including solution trial and test spaces, integration procedure and added numerical stability mechanisms. A range of numerical results are discussed focusing on key topics guiding future research directions.
Duncan, James R; Kline, Benjamin; Glaiberman, Craig B
2007-04-01
To create and test methods of extracting efficiency data from recordings of simulated renal stent procedures. Task analysis was performed and used to design a standardized testing protocol. Five experienced angiographers then performed 16 renal stent simulations using the Simbionix AngioMentor angiographic simulator. Audio and video recordings of these simulations were captured from multiple vantage points. The recordings were synchronized and compiled. A series of efficiency metrics (procedure time, contrast volume, and tool use) were then extracted from the recordings. The intraobserver and interobserver variability of these individual metrics was also assessed. The metrics were converted to costs and aggregated to determine the fixed and variable costs of a procedure segment or the entire procedure. Task analysis and pilot testing led to a standardized testing protocol suitable for performance assessment. Task analysis also identified seven checkpoints that divided the renal stent simulations into six segments. Efficiency metrics for these different segments were extracted from the recordings and showed excellent intra- and interobserver correlations. Analysis of the individual and aggregated efficiency metrics demonstrated large differences between segments as well as between different angiographers. These differences persisted when efficiency was expressed as either total or variable costs. Task analysis facilitated both protocol development and data analysis. Efficiency metrics were readily extracted from recordings of simulated procedures. Aggregating the metrics and dividing the procedure into segments revealed potential insights that could be easily overlooked because the simulator currently does not attempt to aggregate the metrics and only provides data derived from the entire procedure. The data indicate that analysis of simulated angiographic procedures will be a powerful method of assessing performance in interventional radiology.
NASA Astrophysics Data System (ADS)
Liu, Qing; He, Ya-Ling; Li, Qing
2017-08-01
In this paper, an enthalpy-based multiple-relaxation-time (MRT) lattice Boltzmann (LB) method is developed for solid-liquid phase-change heat transfer in metal foams under the local thermal nonequilibrium (LTNE) condition. The enthalpy-based MRT-LB method consists of three different MRT-LB models: one for flow field based on the generalized non-Darcy model, and the other two for phase-change material (PCM) and metal-foam temperature fields described by the LTNE model. The moving solid-liquid phase interface is implicitly tracked through the liquid fraction, which is simultaneously obtained when the energy equations of PCM and metal foam are solved. The present method has several distinctive features. First, as compared with previous studies, the present method avoids the iteration procedure; thus it retains the inherent merits of the standard LB method and is superior to the iteration method in terms of accuracy and computational efficiency. Second, a volumetric LB scheme instead of the bounce-back scheme is employed to realize the no-slip velocity condition in the interface and solid phase regions, which is consistent with the actual situation. Last but not least, the MRT collision model is employed, and with additional degrees of freedom, it has the ability to reduce the numerical diffusion across the phase interface induced by solid-liquid phase change. Numerical tests demonstrate that the present method can serve as an accurate and efficient numerical tool for studying metal-foam enhanced solid-liquid phase-change heat transfer in latent heat storage. Finally, comparisons and discussions are made to offer useful information for practical applications of the present method.
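The sketch below is not the MRT-LB scheme itself; it only illustrates the non-iterative recovery of temperature and liquid fraction from enthalpy that enthalpy-based phase-change methods rely on, under an assumed piecewise-linear enthalpy convention with made-up material constants.

```python
import numpy as np

def temperature_and_liquid_fraction(H, cp_s, cp_l, Tm, L):
    """Recover temperature and liquid fraction from the total enthalpy H of a
    phase-change material, under an assumed piecewise-linear convention:
    H = cp_s*T below melting, H = cp_s*Tm + fl*L in the mushy zone,
    H = cp_s*Tm + L + cp_l*(T - Tm) once fully molten."""
    Hs = cp_s * Tm                      # enthalpy at the onset of melting
    Hl = Hs + L                         # enthalpy at the end of melting
    fl = np.clip((H - Hs) / L, 0.0, 1.0)
    T = np.where(H < Hs, H / cp_s,
        np.where(H > Hl, Tm + (H - Hl) / cp_l, Tm))
    return T, fl

# Example: sweep enthalpy through the melting range of a hypothetical PCM.
H = np.linspace(0.0, 3.0e5, 7)
print(temperature_and_liquid_fraction(H, cp_s=2000.0, cp_l=2500.0, Tm=50.0, L=1.5e5))
```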
NASA Technical Reports Server (NTRS)
Dash, S.; Delguidice, P. D.
1975-01-01
A parametric numerical procedure permitting the rapid determination of the performance of a class of scramjet nozzle configurations is presented. The geometric complexity of these configurations ruled out attempts to employ conventional nozzle design procedures. The numerical program developed permitted the parametric variation of cowl length, turning angles on the cowl and vehicle undersurface and lateral expansion, and was subject to fixed constraints such as the vehicle length and nozzle exit height. The program required uniform initial conditions at the burner exit station and yielded the location of all predominant wave zones, accounting for lateral expansion effects. In addition, the program yielded the detailed pressure distribution on the cowl, vehicle undersurface and fences, if any, and calculated the nozzle thrust, lift and pitching moments.
Time-variant random interval natural frequency analysis of structures
NASA Astrophysics Data System (ADS)
Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin
2018-02-01
This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with respect to the interval inputs. The comprehensive analysis framework combines the strengths of both methods in a way that dramatically reduces the computational cost. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under intrinsic concrete creep with both probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples, progressing in complexity in terms of both structure type and uncertainty variables, are presented to demonstrate the applicability, accuracy and efficiency of the proposed method.
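A much-reduced Python sketch of the two ingredients follows: sample the random variable to get mean and standard deviation of the frequency, build a Chebyshev surrogate of those statistics over the interval variable, and scan the surrogate for extreme bounds. The one-degree-of-freedom toy model, the parameter values, and the use of Monte Carlo in place of the perturbation step are all assumptions of this sketch.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(2)

# Assumed toy model: natural frequency w = sqrt(k/m) with Gaussian stiffness k
# (random variable) and interval-valued mass m in [m_lo, m_hi].
k_mean, k_std = 4.0e6, 2.0e5
m_lo, m_hi = 90.0, 110.0

def freq_stats(m, n_samples=20000):
    """Mean and standard deviation of the natural frequency for a fixed mass,
    here by Monte Carlo over the random stiffness (stand-in for the
    random perturbation step)."""
    k = rng.normal(k_mean, k_std, n_samples)
    w = np.sqrt(k / m)
    return w.mean(), w.std()

# Chebyshev surrogates of both statistics over the mass interval.
nodes = np.cos(np.pi * (np.arange(9) + 0.5) / 9)            # Chebyshev points in [-1, 1]
masses = 0.5 * (m_lo + m_hi) + 0.5 * (m_hi - m_lo) * nodes  # mapped to [m_lo, m_hi]
stats = np.array([freq_stats(m) for m in masses])
coef_mean = C.chebfit(nodes, stats[:, 0], 6)
coef_std = C.chebfit(nodes, stats[:, 1], 6)

# Scan the cheap surrogates densely to bound both statistics over the interval input.
t = np.linspace(-1.0, 1.0, 2001)
mean_vals, std_vals = C.chebval(t, coef_mean), C.chebval(t, coef_std)
print("mean natural frequency bounds:", mean_vals.min(), mean_vals.max())
print("std of natural frequency bounds:", std_vals.min(), std_vals.max())
```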
Pozzi, Alessandro; Arcuri, Lorenzo; Moy, Peter K
2018-03-01
The growing interest in minimally invasive implant placement and delivery of a prefabricated provisional prosthesis immediately, thus minimizing "time to teeth," has led to the development of numerous 3-dimensional (3D) planning software programs. Given the enhancements associated with fully digital workflows, such as better 3D soft-tissue visualization and virtual tooth rendering, computer-guided implant surgery and immediate function has become an effective and reliable procedure. This article describes how modern implant planning software programs provide a comprehensive digital platform that enables efficient interplay between the surgical and restorative aspects of implant treatment. These new technologies that streamline the overall digital workflow allow transformation of the digital wax-up into a personalized, CAD/CAM-milled provisional restoration. Thus, collaborative digital workflows provide a novel approach for time-efficient delivery of a customized, screw-retained provisional restoration on the day of implant surgery, resulting in improved predictability for immediate function in the partially edentate patient.
Efficient vibration mode analysis of aircraft with multiple external store configurations
NASA Technical Reports Server (NTRS)
Karpel, M.
1988-01-01
A coupling method for efficient vibration mode analysis of aircraft with multiple external store configurations is presented. A set of low-frequency vibration modes, including rigid-body modes, represent the aircraft. Each external store is represented by its vibration modes with clamped boundary conditions, and by its rigid-body inertial properties. The aircraft modes are obtained from a finite-element model loaded by dummy rigid external stores with fictitious masses. The coupling procedure unloads the dummy stores and loads the actual stores instead. The analytical development is presented, the effects of the fictitious mass magnitudes are discussed, and a numerical example is given for a combat aircraft with external wing stores. Comparison with vibration modes obtained by a direct (full-size) eigensolution shows very accurate coupling results. Once the aircraft and stores data bases are constructed, the computer time for analyzing any external store configuration is two to three orders of magnitude less than that of a direct solution.
NASA Astrophysics Data System (ADS)
Bechara, William S.; Pelletier, Guillaume; Charette, André B.
2012-03-01
The development of efficient and selective transformations is crucial in synthetic chemistry as it opens new possibilities in the total synthesis of complex molecules. Applying such reactions to the synthesis of ketones is of great importance, as this motif serves as a synthetic handle for the elaboration of numerous organic functionalities. In this context, we report a general and chemoselective method based on an activation/addition sequence on secondary amides allowing the controlled isolation of structurally diverse ketones and ketimines. The generation of a highly electrophilic imidoyl triflate intermediate was found to be pivotal in the observed exceptional functional group tolerance, allowing the facile addition of readily available Grignard and diorganozinc reagents to amides, and avoiding commonly observed over-addition or reduction side reactions. The methodology has been applied to the formal synthesis of analogues of the antineoplastic agent Bexarotene and to the rapid and efficient synthesis of unsymmetrical diketones in a one-pot procedure.
Recent experience in simultaneous control-structure optimization
NASA Technical Reports Server (NTRS)
Salama, M.; Ramaker, R.; Milman, M.
1989-01-01
To show the feasibility of simultaneous optimization as a design procedure, low-order problems were used in conjunction with simple control formulations. The numerical results indicate that simultaneous optimization is not only feasible, but also advantageous. Such advantages come at the expense of introducing complexities beyond those encountered in structural optimization alone or control optimization alone: the design parameter space is larger, the optimization may combine continuous and combinatoric variables, and the combined objective function may be nonconvex. Future extensions to large-order problems, more complex objective functions and constraints, and more sophisticated control formulations will require further research to ensure that the additional complexities do not outweigh the advantages of simultaneous optimization. Areas requiring more efficient tools than are currently available include multiobjective criteria and nonconvex optimization. Efficient techniques also need to be developed to deal with optimization over combinatoric and continuous variables, and with truncation issues for structure and control parameters of both the model space and the design space.
NASA Astrophysics Data System (ADS)
Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue; Govind, Niranjan; Yang, Chao
2017-12-01
We present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from the time dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into an eigenvalue problem that involves the product of two matrices M and K. We show that, because MK is self-adjoint with respect to the inner product induced by the matrix K, this product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. The solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. We show that the other component of the eigenvector can be easily recovered in an inexpensive postprocessing procedure. As a result, the algorithms we present here become more efficient than existing methods that try to approximate both components of the eigenvectors simultaneously. In particular, our numerical experiments demonstrate that the new algorithms presented here consistently outperform the existing state-of-the-art Davidson type solvers by a factor of two in both solution time and storage.
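A dense-matrix illustration of why the K-inner-product structure helps: for symmetric M and symmetric positive definite K, the product problem (MK)x = λx is equivalent to an ordinary symmetric eigenproblem through a Cholesky factor of K. The sketch below is intended only to convey that equivalence on random test matrices; it is not the modified Davidson or LOBPCG solvers described above.

```python
import numpy as np

rng = np.random.default_rng(3)

def product_eigs(M, K):
    """Eigenpairs of (M K) x = lam x for symmetric M and symmetric positive
    definite K.  Because M K is self-adjoint in the K-inner product, factoring
    K = L L^T turns it into the ordinary symmetric problem (L^T M L) y = lam y
    with x = L^{-T} y.  Dense illustration only."""
    L = np.linalg.cholesky(K)
    lam, Y = np.linalg.eigh(L.T @ M @ L)
    X = np.linalg.solve(L.T, Y)        # back-transform the eigenvectors
    return lam, X

# Consistency check on random symmetric test matrices.
n = 50
A = rng.standard_normal((n, n)); M = A + A.T
A = rng.standard_normal((n, n)); K = A @ A.T + n * np.eye(n)
lam, X = product_eigs(M, K)
print("residual:", np.linalg.norm(M @ K @ X - X * lam))   # should be near round-off
```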
Computer-intensive simulation of solid-state NMR experiments using SIMPSON.
Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas
2014-09-01
Conducting large-scale solid-state NMR simulations requires fast computer software, potentially in combination with efficient computational resources, to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scans, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we here present an improved version of the widely distributed open-source SIMPSON NMR simulation software package adapted to contemporary high-performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation, including internal matrix manipulations, propagator setups and acquisition strategies. For efficient calculation of powder averages, we implemented the interpolation method of Alderman, Solum, and Grant, as well as the recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher-precision gradients in combination with the efficient optimization algorithm known as limited-memory Broyden-Fletcher-Goldfarb-Shanno. In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON thus reflects current knowledge in the field of numerical simulations of solid-state NMR experiments. The efficiency and novel features are demonstrated on representative simulations. Copyright © 2014 Elsevier Inc. All rights reserved.
Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System.
Chinnadurai, Sunil; Selvaprabhu, Poongundran; Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho
2017-09-18
In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and also to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell-edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm is proposed, based on the constrained concave-convex procedure (CCCP), that solves and achieves convergence to a stationary point of the above problem. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency than the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.
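The final step above invokes Dinkelbach's algorithm for the fractional objective. Below is a minimal generic Python sketch of that algorithm; the inner maximizer (here a brute-force grid search) and the toy ratio objective standing in for the energy-efficiency problem are assumptions of this sketch.

```python
import numpy as np

def dinkelbach(max_parametric, tol=1e-9, max_iter=100):
    """Dinkelbach's algorithm for max f(x)/g(x) with g(x) > 0.
    `max_parametric(lmbda)` must return (x, f(x), g(x)) maximizing
    f(x) - lmbda*g(x); that inner solver is assumed to be available."""
    x, f, g = max_parametric(0.0)
    lmbda = f / g
    for _ in range(max_iter):
        x, f, g = max_parametric(lmbda)
        if f - lmbda * g < tol:            # F(lmbda) ~ 0  =>  lmbda is the optimal ratio
            return x, lmbda
        lmbda = f / g
    return x, lmbda

# Toy ratio objective standing in for energy efficiency: (a.x) / (|x|^2 + 1),
# with the inner maximization done by brute-force grid search over a box.
a = np.array([1.0, 2.0])
axis = np.linspace(-3.0, 3.0, 301)
grid = np.stack(np.meshgrid(axis, axis), axis=-1).reshape(-1, 2)

def inner(lmbda):
    vals = grid @ a - lmbda * ((grid ** 2).sum(axis=1) + 1.0)
    x = grid[np.argmax(vals)]
    return x, x @ a, x @ x + 1.0

print(dinkelbach(inner))   # the optimal ratio should approach ||a|| / 2
```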
Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System
Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho
2017-01-01
In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and also to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell-edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm is proposed, based on the constrained concave-convex procedure (CCCP), that solves and achieves convergence to a stationary point of the above problem. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency than the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme. PMID:28927019
NASA Astrophysics Data System (ADS)
Zulai, Luis G. T.; Durand, Fábio R.; Abrão, Taufik
2015-05-01
In this article, an energy-efficiency mechanism for next-generation passive optical networks is investigated through heuristic particle swarm optimization. These next-generation networks, which combine 10-gigabit Ethernet, wavelength division multiplexing and optical code division multiplexing in a passive optical network, build on a legacy 10-gigabit Ethernet passive optical network with the advantage of using only one en/decoder pair of optical code division multiplexing technology, thus eliminating the en/decoder at each optical network unit. The proposed joint mechanism is based on the sleep-mode power-saving scheme for a 10-gigabit Ethernet passive optical network, combined with a power control procedure that adjusts the transmitted power of the active optical network units while maximizing the overall network energy efficiency. The particle swarm optimization based power control algorithm establishes the optimal transmitted power of each optical network unit according to the network's predefined quality-of-service requirements. The objective is to control the power consumption of each optical network unit according to the traffic demand by adjusting its transmitter power, maximizing the number of transmitted bits with minimum energy consumption and thereby achieving maximal system energy efficiency. Numerical results have revealed that it is possible to save 75% of energy consumption with the proposed particle swarm optimization based sleep-mode energy-efficiency mechanism, compared to 55% energy savings when only a sleep-mode-based mechanism is deployed.
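For orientation, the sketch below is a bare-bones particle swarm optimizer of the kind referred to above, applied to an assumed toy bits-per-unit-power objective. It is not the authors' network-specific power control algorithm, and the link gains and circuit power are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def pso_maximize(objective, lo, hi, n_particles=40, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5):
    """Plain particle swarm optimization over the box [lo, hi]; a generic
    sketch, not the authors' network-specific algorithm."""
    d = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, d))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([objective(p) for p in x])
        better = val > pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest, pbest_val.max()

# Assumed stand-in for the energy-efficiency objective: bits delivered per unit
# power over independent links, sum(log2(1 + gain*p)) / (sum(p) + p_circuit).
gain = np.array([4.0, 2.0, 1.0, 0.5])
p_circuit = 0.5

def energy_efficiency(p):
    return np.sum(np.log2(1.0 + gain * p)) / (np.sum(p) + p_circuit)

print(pso_maximize(energy_efficiency, np.zeros(4), np.full(4, 2.0)))
```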
Object oriented development of engineering software using CLIPS
NASA Technical Reports Server (NTRS)
Yoon, C. John
1991-01-01
Engineering applications involve numeric complexity and manipulation of large amounts of data. Traditionally, numeric computation has been the main concern in developing engineering software. As engineering application software has become larger and more complex, management of resources such as data, rather than numeric complexity, has become the major software design problem. Object-oriented design and implementation methodologies can improve the reliability, flexibility, and maintainability of the resulting software; however, some tasks are better solved with the traditional procedural paradigm. The C Language Integrated Production System (CLIPS), with its deffunction and defgeneric constructs, supports the procedural paradigm. The natural blending of object-oriented and procedural paradigms has been cited as the reason for the popularity of the C++ language. The object-oriented features of the CLIPS Object Oriented Language (COOL) are more versatile than those of C++. A software design methodology appropriate for engineering software, based on both object-oriented and procedural approaches and implemented in CLIPS, is outlined. As a sample problem, a method for sensor placement for Space Station Freedom is being implemented in COOL.
Steepest descent method implementation on unconstrained optimization problem using C++ program
NASA Astrophysics Data System (ADS)
Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.
2018-03-01
Steepest descent is known as the simplest gradient method. Recently, much research has been done on obtaining an appropriate step size in order to reduce the objective function value progressively. In this paper, the properties of the steepest descent method reported in the literature are reviewed, together with the advantages and disadvantages of each step size procedure. The development of the steepest descent method with respect to its step size procedure is discussed. In order to test the performance of each step size, we run a steepest descent procedure implemented as a C++ program. We applied it to an unconstrained optimization test problem with two variables and then compared the numerical results of each step size procedure. Based on the numerical experiments, we summarize the general computational features and weaknesses of each procedure for each test case.
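The sketch below re-creates the comparison in Python (the paper itself uses C++), contrasting a fixed step with Armijo backtracking on an assumed ill-conditioned two-variable quadratic; both the test function and the constants are placeholders, not the paper's test problem.

```python
import numpy as np

def steepest_descent(f, grad, x0, step_rule="backtracking",
                     tol=1e-8, max_iter=10000):
    """Steepest descent with two simple step-size procedures, for comparison on
    a small test problem (a Python sketch; the paper's implementation is in C++)."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            return x, k
        if step_rule == "fixed":
            t = 0.15                               # assumed constant step
        else:                                      # Armijo backtracking line search
            t, fx = 1.0, f(x)
            while f(x - t * g) > fx - 1e-4 * t * (g @ g):
                t *= 0.5
        x = x - t * g
    return x, max_iter

# Two-variable test problem: an ill-conditioned quadratic with minimum at the origin.
f = lambda x: 0.5 * (x[0] ** 2 + 10.0 * x[1] ** 2)
grad = lambda x: np.array([x[0], 10.0 * x[1]])
for rule in ("fixed", "backtracking"):
    x, iters = steepest_descent(f, grad, [2.0, 1.0], step_rule=rule)
    print(f"{rule:>12}: x = {x}, iterations = {iters}")
```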
Malyarenko, Dariya I; Ross, Brian D; Chenevert, Thomas L
2014-03-01
Gradient nonlinearity of MRI systems leads to spatially dependent b-values and consequently high non-uniformity errors (10-20%) in apparent diffusion coefficient (ADC) measurements over clinically relevant fields of view. This work seeks a practical correction procedure that effectively reduces the observed ADC bias for media of arbitrary anisotropy in the fewest measurements. An all-inclusive bias analysis considers spatial and time-domain cross-terms for diffusion and imaging gradients. The proposed correction is based on rotation of the gradient nonlinearity tensor into the diffusion gradient frame, where the spatial bias of the b-matrix can be approximated by its Euclidean norm. The correction efficiency of the proposed procedure is numerically evaluated for a range of model diffusion tensor anisotropies and orientations. The spatial dependence of the nonlinearity correction terms accounts for the bulk (75-95%) of the ADC bias for FA = 0.3-0.9. Residual ADC non-uniformity errors are amplified for anisotropic diffusion. This approximation obviates the need for full diffusion tensor measurement and diagonalization to derive a corrected ADC. Practical scenarios are outlined for implementation of the correction on clinical MRI systems. The proposed simplified correction algorithm appears sufficient to control ADC non-uniformity errors in clinical studies using three orthogonal diffusion measurements. The most efficient reduction of ADC bias for anisotropic media is achieved with non-lab-based diffusion gradients. Copyright © 2013 Wiley Periodicals, Inc.
Analysis and correction of gradient nonlinearity bias in ADC measurements
Malyarenko, Dariya I.; Ross, Brian D.; Chenevert, Thomas L.
2013-01-01
Purpose: Gradient nonlinearity of MRI systems leads to spatially-dependent b-values and consequently high non-uniformity errors (10–20%) in ADC measurements over clinically relevant field-of-views. This work seeks a practical correction procedure that effectively reduces observed ADC bias for media of arbitrary anisotropy in the fewest measurements. Methods: All-inclusive bias analysis considers spatial and time-domain cross-terms for diffusion and imaging gradients. The proposed correction is based on rotation of the gradient nonlinearity tensor into the diffusion gradient frame where spatial bias of the b-matrix can be approximated by its Euclidean norm. Correction efficiency of the proposed procedure is numerically evaluated for a range of model diffusion tensor anisotropies and orientations. Results: Spatial dependence of nonlinearity correction terms accounts for the bulk (75–95%) of ADC bias for FA = 0.3–0.9. Residual ADC non-uniformity errors are amplified for anisotropic diffusion. This approximation obviates the need for full diffusion tensor measurement and diagonalization to derive a corrected ADC. Practical scenarios are outlined for implementation of the correction on clinical MRI systems. Conclusions: The proposed simplified correction algorithm appears sufficient to control ADC non-uniformity errors in clinical studies using three orthogonal diffusion measurements. The most efficient reduction of ADC bias for anisotropic media is achieved with non-lab-based diffusion gradients. PMID:23794533
Array feed synthesis for correction of reflector distortion and Vernier Beamsteering
NASA Technical Reports Server (NTRS)
Blank, S. J.; Imbriale, W. A.
1986-01-01
An algorithmic procedure for the synthesis of planar array feeds for paraboloidal reflectors is described which simultaneously provides electronic correction of systematic reflector surface distortions as well as a Vernier electronic beamsteering capability. Simple rules of thumb for the optimum choice of planar array feed configuration (i.e., number and type of elements) are derived from a parametric study made using the synthesis procedure. A number of f/D ratios and distortion models were examined that are typical of large paraboloidal reflectors. Numerical results are presented showing that, for the range of distortion models considered, good on-axis gain restoration can be achieved with as few as seven elements. For beamsteering to +/- 1 beamwidth (BW), 19 elements are required. For arrays with either 7 or 19 elements, the results indicate that the use of high-aperture-efficiency elements (e.g., disk-on-rod and short backfire) in the array yields higher system gain than can be obtained with elements having lower aperture efficiency (e.g., open-ended waveguides). With 37 elements, excellent gain and beamsteering performance to +/- 1.5 BW are obtained independent of the assumed effective aperture of the array element. An approximate expression is derived for the focal-plane field distribution of the distorted reflector. Contour plots of the focal-plane fields are also presented for various distortion and beam scan angle cases. The results obtained show the effectiveness of the array feed approach.
A conservative fully implicit algorithm for predicting slug flows
NASA Astrophysics Data System (ADS)
Krasnopolsky, Boris I.; Lukyanov, Alexander A.
2018-02-01
An accurate and predictive modelling of slug flows is required by many industries (e.g., oil and gas, nuclear engineering, chemical engineering) to prevent undesired events potentially leading to serious environmental accidents. For example, hydrodynamic and terrain-induced slugging leads to unwanted unsteady flow conditions. This demands the development of fast and robust numerical techniques for predicting slug flows. The study presented in this paper proposes a multi-fluid model and an implementation method that account for phase appearance and disappearance. The numerical modelling of phase appearance and disappearance presents a complex numerical challenge for all multi-component and multi-fluid models. Numerical challenges arise from the singular systems of equations when some phases are absent and from the solution discontinuity when some phases appear or disappear. This paper provides a flexible and robust solution to these issues. The fully implicit formulation described in this work enables efficient solution of the governing fluid flow equations. The proposed numerical method provides a modelling capability for phase appearance and disappearance processes based on a switching procedure between various sets of governing equations. These sets of equations are constructed using information about the number of phases present in the computational domain. The proposed scheme does not require an explicit truncation of solutions, leading to a conservative scheme for mass and linear momentum. A transient two-fluid model is used to verify and validate the proposed algorithm for conditions of hydrodynamic and terrain-induced slug flow regimes. The developed modelling capabilities allow prediction of all the major features of the experimental data and are in good quantitative agreement with them.
NASA Astrophysics Data System (ADS)
Polanský, Jiří; Kalmár, László; Gášpár, Roman
2013-12-01
The main aim of this paper is to determine the aerodynamic characteristics of a centrifugal fan with forward-curved blades based on numerical modeling. Three variants of the geometry were investigated. The first, basic variant "A" contains 12 blades. The second variant "B" contains 12 blades and 12 semi-blades of optimal length [1]. The third, control variant "C" contains 24 blades without semi-blades. Numerical calculations were performed with Ansys CFD. Another aim of this paper is to compare the results of the numerical simulation with the results of an approximate numerical procedure. The applied approximate numerical procedure [2] is designed to determine the characteristics of turbulent flow in the bladed space of a centrifugal-flow fan impeller. This numerical method is an extension of the hydrodynamic cascade theory for incompressible and inviscid fluid flow. The paper also partially compares the results of the numerical simulation with those of the experimental investigation. Acoustic phenomena observed during the experiment manifested in the numerical simulation as a deterioration of calculation stability and oscillation of the residuals, and thus also as flow field oscillation. Pressure pulsations are evaluated using frequency analysis for each variant and working condition.
An efficient technique for the numerical solution of the bidomain equations.
Whiteley, Jonathan P
2008-08-01
Computing the numerical solution of the bidomain equations is widely accepted to be a significant computational challenge. In this study we extend a previously published semi-implicit numerical scheme with good stability properties that has been used to solve the bidomain equations (Whiteley, J.P. IEEE Trans. Biomed. Eng. 53:2139-2147, 2006). A new, efficient numerical scheme is developed which utilizes the observation that the only component of the ionic current that must be calculated on a fine spatial mesh and updated frequently is the fast sodium current. Other components of the ionic current may be calculated on a coarser mesh and updated less frequently, and then interpolated onto the finer mesh. Use of this technique to calculate the transmembrane potential and extracellular potential induces very little error in the solution. For the simulations presented in this study an increase in computational efficiency of over two orders of magnitude over standard numerical techniques is obtained.
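A one-dimensional toy illustration of the coarse/fine splitting idea follows: a stiff "fast" current is evaluated on the fine mesh, while a smooth "slow" current is evaluated on a coarse mesh and interpolated back. The current expressions, the potential profile, and the meshes are invented placeholders, not the bidomain ionic model of the paper.

```python
import numpy as np

# Fine and coarse 1D meshes; the slow currents are evaluated only on the coarse one.
x_fine = np.linspace(0.0, 1.0, 2001)
x_coarse = x_fine[::20]

def i_fast(v):
    """Placeholder for a stiff, sodium-like current: needs the fine mesh."""
    return 50.0 * (np.tanh(20.0 * (v - 0.3)) + 1.0) * (v - 1.2)

def i_slow(v):
    """Placeholder for a smooth, slowly varying current: coarse mesh suffices."""
    return 0.5 * v * (v - 0.1)

# A smooth wavefront-like transmembrane potential on the fine mesh.
v = 0.5 * (1.0 + np.tanh(10.0 * (0.4 - x_fine)))

# Reference: both currents evaluated everywhere on the fine mesh.
i_ref = i_fast(v) + i_slow(v)

# Economical variant: slow current on the coarse mesh, interpolated back.
i_mixed = i_fast(v) + np.interp(x_fine, x_coarse, i_slow(v[::20]))

print("max relative error:", np.abs(i_mixed - i_ref).max() / np.abs(i_ref).max())
```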
Numerical modeling of runback water on ice protected aircraft surfaces
NASA Technical Reports Server (NTRS)
Al-Khalil, Kamel M.; Keith, Theo G., Jr.; Dewitt, Kenneth J.
1992-01-01
A numerical simulation for 'running wet' aircraft anti-icing systems is developed. The model includes breakup of the water film, which exists in regions of direct impingement, into individual rivulets. The wetness factor distribution resulting from the film breakup and the rivulet configuration on the surface are predicted in the numerical solution procedure. The solid wall is modeled as a multilayer structure and the anti-icing system used is of the thermal type utilizing hot air and/or electrical heating elements embedded with the layers. Details of the calculation procedure and the methods used are presented.
Numerical methods for stiff systems of two-point boundary value problems
NASA Technical Reports Server (NTRS)
Flaherty, J. E.; Omalley, R. E., Jr.
1983-01-01
Numerical procedures are developed for constructing asymptotic solutions of certain nonlinear singularly perturbed vector two-point boundary value problems having boundary layers at one or both endpoints. The asymptotic approximations are generated numerically and can either be used as is or be used to furnish a general-purpose two-point boundary value code with an initial approximation and the nonuniform computational mesh needed for such problems. The procedures are applied to a model problem that has multiple solutions and to problems describing the deformation of a thin nonlinear elastic beam resting on an elastic foundation.
Davidenko’s Method for the Solution of Nonlinear Operator Equations.
Keywords: nonlinear differential equations, numerical integration, operators (mathematics), Banach space, mapping (transformations), numerical methods and procedures, integrals, set theory, convergence, matrices (mathematics)
NHPP-Based Software Reliability Models Using Equilibrium Distribution
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi
Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
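For orientation, the sketch below fits the classical exponential (Goel-Okumoto) NHPP mean value function to cumulative fault counts; the equilibrium-distribution SRMs proposed in the paper generalize this kind of model, and the weekly fault data here are made up purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Made-up cumulative fault counts at the end of each test week (illustration only).
t = np.arange(1, 13, dtype=float)
faults = np.array([12, 21, 28, 34, 39, 42, 45, 47, 48, 49, 50, 51], dtype=float)

def mean_value(t, a, b):
    """Goel-Okumoto NHPP mean value function m(t) = a * (1 - exp(-b t))."""
    return a * (1.0 - np.exp(-b * t))

(a_hat, b_hat), _ = curve_fit(mean_value, t, faults, p0=(60.0, 0.2))
print(f"expected total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
print(f"estimated faults remaining: {a_hat - faults[-1]:.1f}")
```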
NASA Technical Reports Server (NTRS)
Marconi, F.; Salas, M.; Yaeger, L.
1976-01-01
A numerical procedure has been developed to compute the inviscid super/hypersonic flow field about complex vehicle geometries accurately and efficiently. A second order accurate finite difference scheme is used to integrate the three dimensional Euler equations in regions of continuous flow, while all shock waves are computed as discontinuities via the Rankine Hugoniot jump conditions. Conformal mappings are used to develop a computational grid. The effects of blunt nose entropy layers are computed in detail. Real gas effects for equilibrium air are included using curve fits of Mollier charts. Typical calculated results for shuttle orbiter, hypersonic transport, and supersonic aircraft configurations are included to demonstrate the usefulness of this tool.
NASA Technical Reports Server (NTRS)
Dash, S.; Delguidice, P.
1978-01-01
This report summarizes work accomplished under Contract No. NAS1-12726 towards the development of computational procedures and associated numerical. The flow fields considered were those associated with airbreathing hypersonic aircraft which require a high degree of engine/airframe integration in order to achieve optimized performance. The exhaust flow, due to physical area limitations, was generally underexpanded at the nozzle exit; the vehicle afterbody undersurface was used to provide additional expansion to obtain maximum propulsive efficiency. This resulted in a three dimensional nozzle flow, initialized at the combustor exit, whose boundaries are internally defined by the undersurface, cowling and walls separating individual modules, and externally, by the undersurface and slipstream separating the exhaust flow and external stream.
NASA Technical Reports Server (NTRS)
Marconi, F.; Yaeger, L.
1976-01-01
A numerical procedure was developed to compute the inviscid super/hypersonic flow field about complex vehicle geometries accurately and efficiently. A second-order accurate finite difference scheme is used to integrate the three-dimensional Euler equations in regions of continuous flow, while all shock waves are computed as discontinuities via the Rankine-Hugoniot jump conditions. Conformal mappings are used to develop a computational grid. The effects of blunt nose entropy layers are computed in detail. Real gas effects for equilibrium air are included using curve fits of Mollier charts. Typical calculated results for shuttle orbiter, hypersonic transport, and supersonic aircraft configurations are included to demonstrate the usefulness of this tool.
Reformulation of Possio's kernel with application to unsteady wind tunnel interference
NASA Technical Reports Server (NTRS)
Fromme, J. A.; Golberg, M. A.
1980-01-01
An efficient method for computing the Possio kernel has remained elusive up to the present time. In this paper the Possio kernel is reformulated so that it can be computed accurately using existing high-precision numerical quadrature techniques. Convergence to the correct values is demonstrated and optimization of the integration procedures is discussed. Since more general kernels, such as those associated with unsteady flows in ventilated wind tunnels, are analytic perturbations of the Possio free-air kernel, a more accurate evaluation of their collocation matrices results, with an exponential improvement in convergence. An application to predicting the frequency response of an airfoil-trailing edge control system in a wind tunnel, compared with that in free air, is given, showing strong interference effects.
Integer-ambiguity resolution in astronomy and geodesy
NASA Astrophysics Data System (ADS)
Lannes, A.; Prieur, J.-L.
2014-02-01
Recent theoretical developments in astronomical aperture synthesis have revealed the existence of integer-ambiguity problems. Those problems, which appear in the self-calibration procedures of radio imaging, have been shown to be similar to the nearest-lattice point (NLP) problems encountered in high-precision geodetic positioning and in global navigation satellite systems. In this paper we analyse the theoretical aspects of the matter and propose new methods for solving those NLP problems. The related optimization aspects concern both the preconditioning stage, and the discrete-search stage in which the integer ambiguities are finally fixed. Our algorithms, which are described in an explicit manner, can easily be implemented. They lead to substantial gains in the processing time of both stages. Their efficiency was shown via intensive numerical tests.
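As a simple baseline for the NLP step mentioned above, the Python sketch below applies Babai's nearest-plane rounding against a QR-factored lattice basis. It is not the preconditioned discrete-search algorithms of the paper, and the basis, the integer vector and the noise level are assumptions of this sketch.

```python
import numpy as np

def babai_nearest_plane(B, y):
    """Babai's nearest-plane approximation to the nearest-lattice-point problem:
    find an integer vector z such that B @ z is close to y (the columns of B are
    the lattice basis).  A simple baseline, not the paper's preconditioned search."""
    Q, R = np.linalg.qr(B)
    t = Q.T @ y
    n = B.shape[1]
    z = np.zeros(n)
    for i in range(n - 1, -1, -1):
        z[i] = np.round((t[i] - R[i, i + 1:] @ z[i + 1:]) / R[i, i])
    return z.astype(int)

# Example: recover assumed integer ambiguities from a slightly noisy float solution.
rng = np.random.default_rng(5)
B = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)   # reasonably well-conditioned basis
z_true = rng.integers(-10, 10, 4)
y = B @ z_true + 0.01 * rng.standard_normal(4)
print("recovered:", babai_nearest_plane(B, y), " true:", z_true)
```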
Maro, S; Zarattin, D; Baron, T; Bourez, S; de la Taille, A; Salomon, L
2014-09-01
A bladder catheter can induce catheter-related bladder discomfort (CRBD). Muscarinic receptor antagonists are the gold-standard treatment. Clonazepam is an antimuscarinic, muscle-relaxing oral drug. The aim of this study was to look for a correlation between the type of surgical procedure and the occurrence of CRBD and to evaluate the efficacy of clonazepam. One hundred patients needing a bladder catheter were evaluated. Sex, age, BMI, presence of diabetes, surgical procedure and occurrence of CRBD were noted. Pain was evaluated with a visual analogue scale (VAS). The timing of pain, the need for specific treatment with clonazepam and its efficacy were noted. Correlations between preoperative data, type of surgical procedure, occurrence of CRBD and efficacy of treatment were evaluated. There were 79 men and 21 women (age: 65.9 years, BMI: 25.4). Twelve patients presented diabetes. The surgical procedure involved the prostate in 39 cases, the bladder in 19 cases (tumor resections), endo-urology in 20 cases, the upper urinary tract in 12 cases (nephrectomy…) and the lower urinary tract in 10 cases (sphincter, sub-urethral tape). Forty patients presented CRBD (pain 4.5 on the VAS). This pain occurred 0.6 days after surgery. No correlation was found between preoperative data and CRBD. Bladder resection and endo-urological procedures were the surgical procedures that induced CRBD. Clonazepam was effective in 30 (75%) of the 40 patients with CRBD; however, it was less effective in cases of bladder tumor resection. CRBD is frequent and occurs immediately after surgery. Bladder resection and endo-urology were the main surgical procedures that induced CRBD. Clonazepam was effective in 75% of cases. Bladder resection is the surgical procedure most refractory to treatment. Level of evidence: 5. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Optical design of an in vivo laparoscopic lighting system
NASA Astrophysics Data System (ADS)
Liu, Xiaolong; Abdolmalaki, Reza Yazdanpanah; Mancini, Gregory J.; Tan, Jindong
2017-12-01
This paper proposes an in vivo laparoscopic lighting system design to address the illumination issues, namely poor lighting uniformity and low optical efficiency, existing in the state-of-the-art in vivo laparoscopic cameras. The transformable design of the laparoscopic lighting system is capable of carrying purposefully designed freeform optical lenses for achieving lighting performance with high illuminance uniformity and high optical efficiency in a desired target region. To design freeform optical lenses for extended light sources such as LEDs with Lambertian light intensity distributions, we present an effective and complete freeform optical design method. The procedures include (1) ray map computation by numerically solving a standard Monge-Ampere equation; (2) initial freeform optical surface construction by using Snell's law and a lens volume restriction; (3) correction of surface normal vectors due to accumulated errors from the initially constructed surfaces; and (4) feedback modification of the solution to deal with degraded illuminance uniformity caused by the extended sizes of the LEDs. We employed an optical design software package to evaluate the performance of our laparoscopic lighting system design. The simulation results show that our design achieves greater than 95% illuminance uniformity and greater than 89% optical efficiency (considering Fresnel losses) for illuminating the target surgical region.
Efficient field testing for load rating railroad bridges
NASA Astrophysics Data System (ADS)
Schulz, Jeffrey L.; Commander, Brett C.
1995-06-01
As the condition of our infrastructure continues to deteriorate, and the loads carried by our bridges continue to increase, an ever-growing number of railroad and highway bridges require load limits. With safety and transportation costs at both ends of the spectrum, the need for accurate load rating is paramount. This paper describes a method that has been developed for efficient load testing and evaluation of short- and medium-span bridges. Through the use of a specially designed structural testing system and efficient load test procedures, a typical bridge can be instrumented and tested at 64 points in less than one working day and with minimum impact on rail traffic. Various techniques are available to evaluate structural properties and obtain a realistic model. With field data, a simple finite element model is 'calibrated' and its accuracy is verified. Appropriate design and rating loads are applied to the resulting model and stress predictions are made. This technique has been performed on numerous structures to address specific problems and to provide accurate load ratings. The merits and limitations of this approach are discussed in the context of actual examples of both rail and highway bridges that were tested and evaluated.
A Two-Zone Multigrid Model for SI Engine Combustion Simulation Using Detailed Chemistry
Ge, Hai-Wen; Juneja, Harmit; Shi, Yu; ...
2010-01-01
An efficient multigrid (MG) model was implemented for spark-ignited (SI) engine combustion modeling using detailed chemistry. The model is designed to be coupled with a level-set G-equation model for flame propagation (GAMUT combustion model) for highly efficient engine simulation. The model was explored for a gasoline direct-injection SI engine with knocking combustion. The numerical results using the MG model were compared with the results of the original GAMUT combustion model. A simpler one-zone MG model was found to be unable to reproduce the results of the original GAMUT model. However, a two-zone MG model, which treats the burned and unburned regions separately, was found to provide much better accuracy and efficiency than the one-zone MG model. Without loss in accuracy, an order of magnitude speedup was achieved in terms of CPU and wall times. To reproduce the results of the original GAMUT combustion model, either a low searching level or a procedure to exclude high-temperature computational cells from the grouping should be applied to the unburned region, which was found to be more sensitive to the combustion model details.
High-efficiency wavefunction updates for large scale Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed
Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally evaluated iteratively using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order-of-magnitude speedups can be obtained on both multi-core CPUs and GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
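For orientation, the sketch below shows the conventional rank-1 Sherman-Morrison update that the abstract takes as its baseline: when one row of the Slater matrix changes (one electron is moved), the inverse is refreshed in O(N^2) instead of being recomputed from scratch, and the same denominator gives the determinant ratio used in the Metropolis acceptance test. This is a generic illustration under the assumption of a single-row replacement, not the authors' delayed rank-k code; the delayed scheme accumulates k accepted rows and applies them en bloc via the Woodbury formula so that the work becomes matrix-matrix products.

```python
import numpy as np

def sherman_morrison_row_update(A_inv, old_row, new_row, k):
    """Refresh A^{-1} in O(N^2) when row k of A changes from old_row to new_row.
    With A' = A + e_k v^T and v = new_row - old_row:
        A'^{-1} = A^{-1} - (A^{-1} e_k)(v^T A^{-1}) / (1 + v^T A^{-1} e_k).
    The denominator equals det(A')/det(A), i.e. the Monte Carlo acceptance ratio."""
    v = new_row - old_row
    col_k = A_inv[:, k]                 # A^{-1} e_k
    row_v = v @ A_inv                   # v^T A^{-1}
    ratio = 1.0 + row_v[k]              # determinant ratio for this move
    return A_inv - np.outer(col_k, row_v) / ratio, ratio
```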
Development of efficient time-evolution method based on three-term recurrence relation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akama, Tomoko, E-mail: a.tomo---s-b-l-r@suou.waseda.jp; Kobayashi, Osamu; Nanbu, Shinkoh, E-mail: shinkoh.nanbu@sophia.ac.jp
The advantage of the real-time (RT) propagation method is a direct solution of the time-dependent Schrödinger equation, which describes frequency properties as well as all dynamics of a molecular system composed of electrons and nuclei in quantum physics and chemistry. Its applications have been limited by computational feasibility, as the evaluation of the time-evolution operator is computationally demanding. In this article, a new efficient time-evolution method based on the three-term recurrence relation (3TRR) was proposed to reduce the time-consuming numerical procedure. The basic formula of this approach was derived by introducing a transformation of the operator using the arcsine function. Since this operator transformation causes a transformation of time, we derived the relation between the original and transformed time. The formula was applied to assess the performance of the RT time-dependent Hartree-Fock (RT-TDHF) method and time-dependent density functional theory. Compared to the commonly used fourth-order Runge-Kutta method, our new approach decreased the computational time of the RT-TDHF calculation by about a factor of four, showing the 3TRR formula to be an efficient time-evolution method for reducing computational cost.
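The abstract's arcsine-transformed recurrence is specific to that paper, but the general idea of replacing a Runge-Kutta step by a polynomial expansion built from a three-term recurrence can be illustrated with the classic Chebyshev propagator (Tal-Ezer and Kosloff), sketched below for a Hamiltonian given as a dense matrix. This is a different, well-known three-term-recurrence scheme used purely as an illustration, under the assumptions that hbar = 1 and that rough spectral bounds E_min, E_max are available.

```python
import numpy as np
from scipy.special import jv   # Bessel functions J_k

def chebyshev_propagate(H, psi, t, e_min, e_max, n_terms=60):
    """psi(t) = exp(-i H t) psi via the Chebyshev expansion
    exp(-i a x) = sum_k (2 - delta_k0) (-i)^k J_k(a) T_k(x),  x in [-1, 1],
    where the vectors T_k(H_norm) psi obey the three-term recurrence
    phi_{k+1} = 2 H_norm phi_k - phi_{k-1}."""
    e_bar, half_width = 0.5 * (e_max + e_min), 0.5 * (e_max - e_min)
    H_norm = (H - e_bar * np.eye(H.shape[0])) / half_width   # spectrum mapped into [-1, 1]
    alpha = half_width * t
    phi_prev = psi.astype(complex)                 # T_0 psi
    phi = (H_norm @ psi).astype(complex)           # T_1 psi
    result = jv(0, alpha) * phi_prev + 2.0 * (-1j) * jv(1, alpha) * phi
    for k in range(2, n_terms):
        phi_prev, phi = phi, 2.0 * (H_norm @ phi) - phi_prev   # three-term recurrence
        result += 2.0 * (-1j) ** k * jv(k, alpha) * phi
    return np.exp(-1j * e_bar * t) * result
```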
Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil
2012-01-01
The Gauss-Seidel method is a standard iterative numerical method widely used to solve systems of equations and, in general, is more efficient than other iterative methods, such as the Jacobi method. However, the standard implementation of the Gauss-Seidel method restricts its utilization in parallel computing due to its requirement of using updated neighboring values (i.e., in the current iteration) as soon as they are available. Here we report an efficient and exact (not requiring assumptions) method to parallelize the iterations and to reduce the computational time as a linear/nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable to solving linear and nonlinear equations. This approach is implemented in the DelPhi program, which is a finite difference Poisson-Boltzmann equation solver to model electrostatics in molecular biology. This development makes the iterative procedure for obtaining the electrostatic potential distribution in the parallelized DelPhi several-fold faster than in the serial code. Further, we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures. PMID:22674480
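For readers unfamiliar with why plain Gauss-Seidel is hard to parallelize, the sketch below shows the method for a 2D Poisson problem together with the common red-black (checkerboard) reordering: within each colour the updates are independent and can run concurrently, because a red point only reads black neighbours and vice versa. This colouring trick is a textbook workaround, not the exact, assumption-free parallelization described in the abstract; the grid, spacing `h`, and Dirichlet boundary handling are illustrative assumptions.

```python
import numpy as np

def redblack_gauss_seidel(phi, f, h, sweeps=200):
    """Red-black Gauss-Seidel sweeps for the 2D Poisson equation lap(phi) = f
    on a uniform grid with fixed (Dirichlet) boundary values stored in phi."""
    n, m = phi.shape
    I, J = np.indices((n, m))
    interior = (I > 0) & (I < n - 1) & (J > 0) & (J < m - 1)
    for _ in range(sweeps):
        for parity in (0, 1):                       # red sweep, then black sweep
            mask = interior & ((I + J) % 2 == parity)
            nb_sum = np.zeros_like(phi)
            nb_sum[1:-1, 1:-1] = (phi[2:, 1:-1] + phi[:-2, 1:-1]
                                  + phi[1:-1, 2:] + phi[1:-1, :-2])
            phi[mask] = 0.25 * (nb_sum[mask] - h * h * f[mask])  # independent updates
    return phi
```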
Efficient grid-based techniques for density functional theory
NASA Astrophysics Data System (ADS)
Rodriguez-Hernandez, Juan Ignacio
Understanding the chemical and physical properties of molecules and materials at a fundamental level often requires quantum-mechanical models for these substances' electronic structure. This type of many-body quantum mechanics calculation is computationally demanding, hindering its application to substances with more than a few hundred atoms. The overarching goal of much research in quantum chemistry, and the topic of this dissertation, is to develop more efficient computational algorithms for electronic structure calculations. In particular, this dissertation develops two new numerical integration techniques for computing molecular and atomic properties within conventional Kohn-Sham density functional theory (KS-DFT) of molecular electronic structure. The first of these grid-based techniques is based on the transformed sparse grid construction. In this construction, a sparse grid is generated in the unit cube and then mapped to real space according to the pro-molecular density using the conditional distribution transformation. The transformed sparse grid was implemented in the program deMon2k, where it is used as the numerical integrator for the exchange-correlation energy and potential in the KS-DFT procedure. We tested our grid by computing ground-state energies, equilibrium geometries, and atomization energies. The accuracy of these test calculations shows that our grid is more efficient than some previous integration methods: our grids use fewer points to obtain the same accuracy. The transformed sparse grids were also tested for integrating, interpolating and differentiating in different dimensions (n = 1, 2, 3, 6). The second technique is a grid-based method for computing atomic properties within QTAIM. It was also implemented in deMon2k. The performance of the method was tested by computing QTAIM atomic energies, charges, dipole moments, and quadrupole moments. For medium accuracy, our method is the fastest one we know of.
NASA Astrophysics Data System (ADS)
Tatomir, Alexandru Bogdan A. C.; Flemisch, Bernd; Class, Holger; Helmig, Rainer; Sauter, Martin
2017-04-01
Geological storage of CO2 represents one viable solution to reduce greenhouse gas emissions into the atmosphere. Potential leakage from CO2 storage can occur through networks of interconnected fractures. The geometrical complexity of these networks is often very high, involving fractures occurring at various scales and having hierarchical structures. Such multiphase flow systems are usually hard to solve with a discrete fracture modelling (DFM) approach. Therefore, continuum fracture models assuming average properties are usually preferred. The multiple interacting continua (MINC) model is an extension of the classic double porosity model (Warren and Root, 1963) which accounts for the non-linear behaviour of the matrix-fracture interactions. For CO2 storage applications the transient representation of the inter-porosity two-phase flow plays an important role. This study tests the accuracy and computational efficiency of the MINC method complemented with the multiple sub-region (MSR) upscaling procedure against the DFM. The two-phase flow MINC simulator is implemented in the free, open-source numerical toolbox DuMux (www.dumux.org). The MSR procedure (Gong et al., 2009) determines the inter-porosity terms by solving simplified local single-phase flow problems. The DFM is considered as the reference solution. The numerical examples consider a quasi-1D reservoir with a quadratic fracture system, a five-spot radially symmetric reservoir, and a completely randomly generated fracture system. Keywords: MINC, upscaling, two-phase flow, fractured porous media, discrete fracture model, continuum fracture model
NASA Technical Reports Server (NTRS)
Rudy, D. H.; Morris, D. J.; Blanchard, D. K.; Cooke, C. H.; Rubin, S. G.
1975-01-01
The status of an investigation of four numerical techniques for the time-dependent compressible Navier-Stokes equations is presented. Results for free shear layer calculations in the Reynolds number range from 1000 to 81000 indicate that a sequential alternating-direction implicit (ADI) finite-difference procedure requires longer computing times to reach steady state than a low-storage hopscotch finite-difference procedure. A finite-element method with cubic approximating functions was found to require excessive computer storage and computation times. A fourth method, an alternating-direction cubic spline technique which is still being tested, is also described.
NASA Technical Reports Server (NTRS)
Stein, M.; Housner, J. D.
1978-01-01
A numerical analysis developed for the buckling of rectangular orthotropic layered panels under combined shear and compression is described. This analysis uses a central finite difference procedure based on trigonometric functions instead of using the conventional finite differences which are based on polynomial functions. Inasmuch as the buckle mode shape is usually trigonometric in nature, the analysis using trigonometric finite differences can be made to exhibit a much faster convergence rate than that using conventional differences. Also, the trigonometric finite difference procedure leads to difference equations having the same form as conventional finite differences; thereby allowing available conventional finite difference formulations to be converted readily to trigonometric form. For two-dimensional problems, the procedure introduces two numerical parameters into the analysis. Engineering approaches for the selection of these parameters are presented and the analysis procedure is demonstrated by application to several isotropic and orthotropic panel buckling problems. Among these problems is the shear buckling of stiffened isotropic and filamentary composite panels in which the stiffener is broken. Results indicate that a break may degrade the effect of the stiffener to the extent that the panel will not carry much more load than if the stiffener were absent.
Microstructure based procedure for process parameter control in rolling of aluminum thin foils
NASA Astrophysics Data System (ADS)
Kronsteiner, Johannes; Kabliman, Evgeniya; Klimek, Philipp-Christoph
2018-05-01
In the present work, a microstructure-based procedure is used for the numerical prediction of strength properties of Al-Mg-Sc thin foils during a hot rolling process. For this purpose, the following techniques were developed and implemented. First, a toolkit for the numerical analysis of experimental stress-strain curves obtained during hot compression testing with a deformation dilatometer was developed. The implemented techniques allow for the correction of the temperature increase in samples due to adiabatic heating and for the determination of the yield strength needed to separate the elastic and plastic deformation regimes during numerical simulation of multi-pass hot rolling. Next, an asymmetric hot rolling simulator (with adjustable table inlet/outlet height as well as separate roll infeed) was developed in order to match the exact processing conditions of a semi-industrial rolling procedure. At each element of the finite element mesh the total strength is calculated by an in-house flow stress model based on the evolution of the mean dislocation density. The strength values obtained by numerical modelling were found to be in reasonable agreement with the results of tensile tests on thin Al-Mg-Sc foils. Thus, the proposed simulation procedure may allow the processing parameters to be optimized with respect to the microstructure development.
Utilizing Visual Effects Software for Efficient and Flexible Isostatic Adjustment Modelling
NASA Astrophysics Data System (ADS)
Meldgaard, A.; Nielsen, L.; Iaffaldano, G.
2017-12-01
The isostatic adjustment signal generated by transient ice sheet loading is an important indicator of past ice sheet extent and the rheological constitution of the interior of the Earth. Finite element modelling has proved to be a very useful tool in these studies. We present a simple numerical model for 3D viscoelastic Earth deformation and a new approach to the design of such models utilizing visual effects software designed for the film and game industry. The software package Houdini offers an assortment of optimized tools and libraries which greatly facilitate the creation of efficient numerical algorithms. In particular, we make use of Houdini's procedural workflow, the SIMD programming language VEX, Houdini's sparse matrix creation and inversion libraries, an inbuilt tetrahedralizer for grid creation, and the user interface, which facilitates effortless manipulation of 3D geometry. We mitigate many of the time-consuming steps associated with authoring efficient algorithms from scratch while still keeping the flexibility that may be lost with the use of dedicated commercial finite element programs. We test the efficiency of the algorithm by comparing simulation times with off-the-shelf solutions from the Abaqus software package. The algorithm is tailored for the study of local isostatic adjustment patterns in close vicinity to present ice sheet margins. In particular, we wish to examine possible causes for the considerable spatial differences in uplift magnitude which are apparent from field observations in these areas. Such features, with spatial scales of tens of kilometres, are not resolvable with current global isostatic adjustment models, and may require the inclusion of local topographic features. We use the presented algorithm to study a near-field area where field observations are abundant, namely Disko Bay in West Greenland, with the intention of constraining Earth parameters and ice thickness. In addition, we assess how local topographic features may influence the differential isostatic uplift in the area.
NASA Technical Reports Server (NTRS)
Swinbank, Richard; Purser, James
2006-01-01
Recent years have seen a resurgence of interest in a variety of non-standard computational grids for global numerical prediction. The motivation has been to reduce problems associated with the converging meridians and the polar singularities of conventional regular latitude-longitude grids. A further impetus has come from the adoption of massively parallel computers, for which it is necessary to distribute work equitably across the processors; this is more practicable for some non-standard grids. Desirable attributes of a grid for high-order spatial finite differencing are: (i) geometrical regularity; (ii) a homogeneous and approximately isotropic spatial resolution; (iii) a low proportion of the grid points where the numerical procedures require special customization (such as near coordinate singularities or grid edges). One family of grid arrangements which, to our knowledge, has never before been applied to numerical weather prediction, but which appears to offer several technical advantages, is what we shall refer to as "Fibonacci grids". They can be thought of as mathematically ideal generalizations of the patterns occurring naturally in the spiral arrangements of seeds and fruit found in sunflower heads and pineapples (to give two of the many botanical examples). These grids possess virtually uniform and highly isotropic resolution, with an equal area for each grid point. There are only two compact singular regions on a sphere that require customized numerics. We demonstrate the practicality of these grids in shallow-water simulations, and discuss the prospects for efficiently using these frameworks in three-dimensional semi-implicit and semi-Lagrangian weather prediction or climate models.
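To make the construction concrete, the sketch below generates a spherical Fibonacci (golden-angle) lattice, the standard recipe behind grids of this family: points are placed at equal-area latitudes with successive longitudes separated by the golden angle. This is a generic illustration of the idea, not necessarily the exact grid definition used in the paper.

```python
import numpy as np

def fibonacci_sphere_grid(n_points):
    """Quasi-uniform points on the unit sphere via the golden-angle spiral:
    equal-area latitude bands, longitudes advanced by the golden angle."""
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))          # ~137.5 degrees
    k = np.arange(n_points)
    z = 1.0 - (2.0 * k + 1.0) / n_points                 # equal-area spacing in z
    lon = golden_angle * k
    r = np.sqrt(1.0 - z * z)
    return np.column_stack((r * np.cos(lon), r * np.sin(lon), z))  # (x, y, z) on the sphere
```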
NASA Astrophysics Data System (ADS)
Adam, Saad; Premnath, Kannan
2016-11-01
Fluid mechanics of non-Newtonian fluids, which arise in numerous settings, is characterized by non-linear constitutive models that pose certain unique challenges for computational methods. Here, we consider the lattice Boltzmann method (LBM), which offers some computational advantages due to its kinetic basis and its simpler stream-and-collide procedure enabling efficient simulations. However, further improvements are necessary to improve its numerical stability and accuracy for computations involving broader parameter ranges. Hence, in this study, we extend the cascaded LBM formulation by modifying its moment equilibria and relaxation parameters to handle a variety of non-Newtonian constitutive equations, including power-law and Bingham fluids, with improved stability. In addition, we include corrections to the moment equilibria to obtain an inertial-frame-invariant scheme without cubic-velocity defects. After performing a validation study for various benchmark flows, we study the physics of non-Newtonian flow over pairs of circular and square cylinders in a tandem arrangement, especially the wake structure interactions and their effects on the resulting forces on each cylinder, and elucidate the effect of the various characteristic parameters.
NASA Astrophysics Data System (ADS)
Greenman, Loren; Lucchese, Robert R.; McCurdy, C. William
2017-11-01
The complex Kohn variational method for electron-polyatomic-molecule scattering is formulated using an overset-grid representation of the scattering wave function. The overset grid consists of a central grid and multiple dense atom-centered subgrids that allow the simultaneous spherical expansions of the wave function about multiple centers. Scattering boundary conditions are enforced by using a basis formed by the repeated application of the free-particle Green's function and potential, Ĝ₀⁺V̂, on the overset grid in a Born-Arnoldi solution of the working equations. The theory is shown to be equivalent to a specific Padé approximant to the T matrix and has rapid convergence properties, in both the number of numerical basis functions employed and the number of partial waves employed in the spherical expansions. The method is demonstrated in calculations on methane and CF4 in the static-exchange approximation and compared in detail with calculations performed with the numerical Schwinger variational approach based on single-center expansions. An efficient procedure for operating with the free-particle Green's function and exchange operators (to which no approximation is made) is also described.
Mapped grid methods for long-range molecules and cold collisions
NASA Astrophysics Data System (ADS)
Willner, K.; Dulieu, O.; Masnou-Seeuws, F.
2004-01-01
The paper discusses ways of improving the accuracy of numerical calculations for vibrational levels of diatomic molecules close to the dissociation limit or for ultracold collisions, in the framework of a grid representation. In order to avoid the implementation of very large grids, Kokoouline et al. [J. Chem. Phys. 110, 9865 (1999)] have proposed a mapping procedure through introduction of an adaptive coordinate x subjected to the variation of the local de Broglie wavelength as a function of the internuclear distance R. Some unphysical levels ("ghosts") then appear in the vibrational series computed via a mapped Fourier grid representation. In the present work the choice of the basis set is reexamined, and two alternative expansions are discussed: Sine functions and Hardy functions. It is shown that use of a basis set with fixed nodes at both grid ends is efficient to eliminate "ghost" solutions. It is further shown that the Hamiltonian matrix in the sine basis can be calculated very accurately by using an auxiliary basis of cosine functions, overcoming the problems arising from numerical calculation of the Jacobian J(x) of the R→x coordinate transformation.
An overview of San Francisco Bay PORTS
Cheng, Ralph T.; McKinnie, David; English, Chad; Smith, Richard E.
1998-01-01
The Physical Oceanographic Real-Time System (PORTS) provides observations of tides, tidal currents, and meteorological conditions in real-time. The San Francisco Bay PORTS (SFPORTS) is a decision support system to facilitate safe and efficient maritime commerce. In addition to real-time observations, SFPORTS includes a nowcast numerical model forming a San Francisco Bay marine nowcast system. SFPORTS data and nowcast numerical model results are made available to users through the World Wide Web (WWW). A brief overview of SFPORTS is presented, from the data flow originated at instrument sensors to final results delivered to end users on the WWW. A user-friendly interface for SFPORTS has been designed and implemented. Appropriate field data analysis, nowcast procedures, design and generation of graphics for WWW display of field data and nowcast results are presented and discussed. Furthermore, SFPORTS is designed to support hazardous materials spill prevention and response, and to serve as resources to scientists studying the health of San Francisco Bay ecosystem. The success (or failure) of the SFPORTS to serve the intended user community is determined by the effectiveness of the user interface.
NASA Astrophysics Data System (ADS)
Park, George Ilhwan; Moin, Parviz
2016-01-01
This paper focuses on numerical and practical aspects associated with a parallel implementation of a two-layer zonal wall model for large-eddy simulation (LES) of compressible wall-bounded turbulent flows on unstructured meshes. A zonal wall model based on the solution of unsteady three-dimensional Reynolds-averaged Navier-Stokes (RANS) equations on a separate near-wall grid is implemented in an unstructured, cell-centered finite-volume LES solver. The main challenge in its implementation is to couple two parallel, unstructured flow solvers for efficient boundary data communication and simultaneous time integration. A coupling strategy with good load balancing and low processor underutilization is identified. Face mapping and interpolation procedures at the coupling interface are explained in detail. The method of manufactured solutions is used for verifying the correct implementation of the solver coupling, and the parallel performance of the combined wall-modeled LES (WMLES) solver is investigated. The method has successfully been applied to several attached and separated flows, including a transitional flow over a flat plate and a separated flow over an airfoil at an angle of attack.
Estimation of Sonic Fatigue by Reduced-Order Finite Element Based Analyses
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Przekop, Adam
2006-01-01
A computationally efficient, reduced-order method is presented for prediction of sonic fatigue of structures exhibiting geometrically nonlinear response. A procedure to determine the nonlinear modal stiffness using commercial finite element codes allows the coupled nonlinear equations of motion in physical degrees of freedom to be transformed to a smaller coupled system of equations in modal coordinates. The nonlinear modal system is first solved using a computationally light equivalent linearization solution to determine if the structure responds to the applied loading in a nonlinear fashion. If so, a higher fidelity numerical simulation in modal coordinates is undertaken to more accurately determine the nonlinear response. Comparisons of displacement and stress response obtained from the reduced-order analyses are made with results obtained from numerical simulation in physical degrees-of-freedom. Fatigue life predictions from nonlinear modal and physical simulations are made using the rainflow cycle counting method in a linear cumulative damage analysis. Results computed for a simple beam structure under a random acoustic loading demonstrate the effectiveness of the approach and compare favorably with results obtained from the solution in physical degrees-of-freedom.
Numerical Studies of Boundary-Layer Receptivity
NASA Technical Reports Server (NTRS)
Reed, Helen L.
1995-01-01
Direct numerical simulations (DNS) of the acoustic receptivity process on a semi-infinite flat plate with a modified-super-elliptic (MSE) leading edge are performed. The incompressible Navier-Stokes equations are solved in stream-function/vorticity form in a general curvilinear coordinate system. The steady basic-state solution is found by solving the governing equations using an alternating direction implicit (ADI) procedure which takes advantage of the parallelism present in line-splitting techniques. Time-harmonic oscillations of the farfield velocity are applied as unsteady boundary conditions to the unsteady disturbance equations. An efficient time-harmonic scheme is used to produce the disturbance solutions. Buffer-zone techniques have been applied to eliminate wave reflection from the outflow boundary. The spatial evolution of Tollmien-Schlichting (T-S) waves is analyzed and compared with experiment and theory. The effects of nose-radius, frequency, Reynolds number, angle of attack, and amplitude of the acoustic wave are investigated. This work is being performed in conjunction with the experiments at the Arizona State University Unsteady Wind Tunnel under the direction of Professor William Saric. The simulations are of the same configuration and parameters used in the wind-tunnel experiments.
Sensitivity of Rayleigh wave ellipticity and implications for surface wave inversion
NASA Astrophysics Data System (ADS)
Cercato, Michele
2018-04-01
The use of Rayleigh wave ellipticity has gained increasing popularity in recent years for investigating earth structures, especially for near-surface soil characterization. In spite of its widespread application, the sensitivity of the ellipticity function to the soil structure has rarely been explored in a comprehensive and systematic manner. To this end, a new analytical method is presented for computing the sensitivity of Rayleigh wave ellipticity with respect to the structural parameters of a layered elastic half-space. This method takes advantage of the minor decomposition of the surface wave eigenproblem and is numerically stable at high frequency. This numerical procedure allowed the sensitivity to be retrieved for typical near-surface and crustal geological scenarios, pointing out the key parameters for ellipticity interpretation under different circumstances. On this basis, a thorough analysis is performed to assess how ellipticity data can efficiently complement surface wave dispersion information in a joint inversion algorithm. The results of synthetic and real-world examples are illustrated to analyse quantitatively the diagnostic potential of the ellipticity data with respect to the soil structure, focusing on the possible sources of misinterpretation in data inversion.
Numerical systems on a minicomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Jr., Roy Leonard
1973-02-01
This thesis defines the concept of a numerical system for a minicomputer and provides a description of the software and computer system configuration necessary to implement such a system. A procedure for creating a numerical system from a FORTRAN program is developed and an example is presented.
Atmospheric model development in support of SEASAT. Volume 1: Summary of findings
NASA Technical Reports Server (NTRS)
Kesel, P. G.
1977-01-01
Atmospheric analysis and prediction models of varying (grid) resolution were developed. The models were tested using real observational data for the purpose of assessing the impact of grid resolution on short range numerical weather prediction. The discretionary model procedures were examined so that the computational viability of SEASAT data might be enhanced during the conduct of (future) sensitivity tests. The analysis effort covers: (1) examining the procedures for allowing data to influence the analysis; (2) examining the effects of varying the weights in the analysis procedure; (3) testing and implementing procedures for solving the minimization equation in an optimal way; (4) describing the impact of grid resolution on analysis; and (5) devising and implementing numerous practical solutions to analysis problems, generally.
NASA Astrophysics Data System (ADS)
Chen, Ying; Lowengrub, John; Shen, Jie; Wang, Cheng; Wise, Steven
2018-07-01
We develop efficient energy stable numerical methods for solving isotropic and strongly anisotropic Cahn-Hilliard systems with the Willmore regularization. The scheme, which involves adaptive mesh refinement and a nonlinear multigrid finite difference method, is constructed based on a convex splitting approach. We prove that, for the isotropic Cahn-Hilliard system with the Willmore regularization, the total free energy of the system is non-increasing for any time step and mesh sizes. A straightforward modification of the scheme is then used to solve the regularized strongly anisotropic Cahn-Hilliard system, and it is numerically verified that the discrete energy of the anisotropic system is also non-increasing, and can be efficiently solved by using the modified stable method. We present numerical results in both two and three dimensions that are in good agreement with those in earlier work on the topics. Numerical simulations are presented to demonstrate the accuracy and efficiency of the proposed methods.
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Kavetski, Dmitri
2010-10-01
A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
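As a pointer to what "adaptive explicit Heun" means in practice, the sketch below pairs the explicit Euler and Heun (trapezoidal predictor-corrector) estimates, uses their difference as a local error estimate, and adjusts the step size against a tolerance. This is a textbook controller written for illustration, not the exact implementation evaluated in the paper; the safety factor and step-size bounds are conventional assumptions.

```python
import numpy as np

def adaptive_heun(f, t0, y0, t_end, h0=1.0, tol=1e-4):
    """Integrate dy/dt = f(t, y) with the explicit Heun method and a simple
    step-size controller based on the embedded Euler-Heun error estimate."""
    t, y, h = t0, np.asarray(y0, dtype=float), h0
    ts, ys = [t], [y.copy()]
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        y_euler = y + h * k1                       # first-order predictor
        k2 = f(t + h, y_euler)
        y_heun = y + 0.5 * h * (k1 + k2)           # second-order corrector
        err = np.linalg.norm(y_heun - y_euler) + 1e-15
        if err <= tol:                             # accept the step
            t, y = t + h, y_heun
            ts.append(t)
            ys.append(y.copy())
        h *= min(5.0, max(0.2, 0.9 * np.sqrt(tol / err)))   # adapt the step size
    return np.array(ts), np.array(ys)
```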
78 FR 49607 - Energy Conservation Program: Test Procedures for Residential Clothes Dryers
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-14
... reasonably designed to produce test results which measure energy efficiency, energy use or estimated annual... Energy Conservation Program: Test Procedures for Residential Clothes Dryers; Final Rule. Federal... Conservation Program: Test Procedures for Residential Clothes Dryers AGENCY: Office of Energy Efficiency and...
Femtosecond laser pulses for chemical-free embryonic and mesenchymal stem cell differentiation
NASA Astrophysics Data System (ADS)
Mthunzi, Patience; Dholakia, Kishan; Gunn-Moore, Frank
2011-10-01
Owing to their self-renewal and pluripotency properties, stem cells can efficiently advance current therapies in tissue regeneration and/or engineering. Under appropriate culture conditions in vitro, pluripotent stem cells can be primed to differentiate into any cell type, examples including neural, cardiac and blood cells. However, there remains a pressing need to answer the biological questions concerning how stem cell renewal and differentiation programs are operated and regulated at the genetic level. In stem cell research, there is an urgent requirement for experimental procedures allowing non-invasive, marker-free observation of growth, proliferation and stability of living stem cells under physiological conditions. Femtosecond (fs) laser pulses have been reported to non-invasively deliver exogenous materials, including foreign genetic species, into both multipotent and pluripotent stem cells successfully. Through this multi-photon facilitated technique, directly administering fs laser pulses onto the cell plasma membrane induces transient submicrometer holes, thereby promoting cytosolic uptake of the surrounding extracellular matter. Demonstrating a chemical-free cell transfection procedure that utilises microlitre-scale volumes of reagents, we report for the first time 70% transfection efficiency in ES-E14TG2a cells using the enhanced green fluorescent protein (EGFP) DNA plasmid. We also show how varying the average power output during optical transfection influences cell viability, proliferation and cytotoxicity in embryonic stem cells. The impact of utilizing objective lenses of different numerical aperture (NA) on the optical transfection efficiency in ES-E14TG2a cells is presented. Finally, we report on embryonic and mesenchymal stem cell differentiation. The produced specialized cell types could thereafter be characterized and used for cell-based therapies.
Xing, Z F; Greenberg, J M
1994-08-20
The analyticity of the complex extinction efficiency is examined numerically in the size-parameter domain for homogeneous prolate and oblate spheroids and finite cylinders. The T-matrix code, which is the most efficient program available to date, is employed to calculate the individual particle-extinction efficiencies. Because of its computational limitations in the size-parameter range, a slightly modified Hilbert-transform algorithm is required to establish the analyticity numerically. The findings concerning analyticity that we reported for spheres (Astrophys. J. 399, 164-175, 1992) apply equally to these nonspherical particles.
Transonic Navier-Stokes solutions of three-dimensional afterbody flows
NASA Technical Reports Server (NTRS)
Compton, William B., III; Thomas, James L.; Abeyounis, William K.; Mason, Mary L.
1989-01-01
The performance of a three-dimensional Navier-Stokes solution technique in predicting the transonic flow past a nonaxisymmetric nozzle was investigated. The investigation was conducted at free-stream Mach numbers ranging from 0.60 to 0.94 and an angle of attack of 0 degrees. The numerical solution procedure employs the three-dimensional, unsteady, Reynolds-averaged Navier-Stokes equations written in strong conservation form, a thin layer assumption, and the Baldwin-Lomax turbulence model. The equations are solved by using the finite-volume principle in conjunction with an approximately factored upwind-biased numerical algorithm. In the numerical procedure, the jet exhaust is represented by a solid sting. Wind-tunnel data with the jet exhaust simulated by high pressure air were also obtained to compare with the numerical calculations.
Numerical calculations of velocity and pressure distribution around oscillating airfoils
NASA Technical Reports Server (NTRS)
Bratanow, T.; Ecer, A.; Kobiske, M.
1974-01-01
An analytical procedure based on the Navier-Stokes equations was developed for analyzing and representing properties of unsteady viscous flow around oscillating obstacles. A variational formulation of the vorticity transport equation was discretized in finite element form and integrated numerically. At each time step of the numerical integration, the velocity field around the obstacle was determined for the instantaneous vorticity distribution from the finite element solution of Poisson's equation. The time-dependent boundary conditions around the oscillating obstacle were introduced as external constraints, using the Lagrangian Multiplier Technique, at each time step of the numerical integration. The procedure was then applied for determining pressures around obstacles oscillating in unsteady flow. The obtained results for a cylinder and an airfoil were illustrated in the form of streamlines and vorticity and pressure distributions.
Multiply scaled constrained nonlinear equation solvers. [for nonlinear heat conduction problems
NASA Technical Reports Server (NTRS)
Padovan, Joe; Krishna, Lala
1986-01-01
To improve the numerical stability of nonlinear equation solvers, a partitioned multiply scaled constraint scheme is developed. This scheme enables hierarchical levels of control for nonlinear equation solvers. To complement the procedure, partitioned convergence checks are established along with self-adaptive partitioning schemes. Overall, such procedures greatly enhance the numerical stability of the original solvers. To demonstrate and motivate the development of the scheme, the problem of nonlinear heat conduction is considered. In this context the main emphasis is given to successive substitution-type schemes. To verify the improved numerical characteristics associated with partitioned multiply scaled solvers, results are presented for several benchmark examples.
NASA Astrophysics Data System (ADS)
Kiss, Gellért Zsolt; Borbély, Sándor; Nagy, Ladislau
2017-12-01
We present an efficient numerical approach for the ab initio solution of the time-dependent Schrödinger equation describing diatomic molecules interacting with ultrafast laser pulses. During the construction of the model we assumed a frozen nuclear configuration and a single active electron. In order to increase efficiency, the system was described using prolate spheroidal coordinates, where the wave function was discretized using the finite-element discrete variable representation (FE-DVR) method. The discretized wave functions were efficiently propagated in time using the short-iterative Lanczos algorithm. As a first test we studied how the laser-induced bound-state dynamics in H2+ is influenced by the strength of the driving laser field.
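For context, the short-iterative Lanczos propagator mentioned above approximates exp(-i H dt) psi in a small Krylov subspace built from repeated applications of H. A minimal sketch follows, assuming hbar = 1, a Hermitian Hamiltonian available as a matrix-vector product `H_apply`, and a fixed Krylov dimension `m`; real implementations additionally adapt `m` or `dt` to an error estimate.

```python
import numpy as np
from scipy.linalg import expm

def lanczos_step(H_apply, psi, dt, m=12):
    """One short-iterative-Lanczos step: psi(t + dt) ~ V exp(-i T dt) (||psi|| e1),
    where T is the small tridiagonal Lanczos matrix and V the Krylov basis."""
    n = psi.size
    V = np.zeros((n, m), dtype=complex)
    alpha, beta = np.zeros(m), np.zeros(max(m - 1, 1))
    norm0 = np.linalg.norm(psi)
    V[:, 0] = psi / norm0
    w = H_apply(V[:, 0])
    alpha[0] = np.vdot(V[:, 0], w).real
    w = w - alpha[0] * V[:, 0]
    k = m
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        if beta[j - 1] < 1e-12:          # happy breakdown: Krylov space is invariant
            k = j
            break
        V[:, j] = w / beta[j - 1]
        w = H_apply(V[:, j])
        alpha[j] = np.vdot(V[:, j], w).real
        w = w - alpha[j] * V[:, j] - beta[j - 1] * V[:, j - 1]
    T = np.diag(alpha[:k]) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    small = expm(-1j * dt * T)[:, 0]     # action of the small propagator on e1
    return norm0 * (V[:, :k] @ small)
```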
Automatic segmentation and centroid detection of skin sensors for lung interventions
NASA Astrophysics Data System (ADS)
Lu, Kongkuo; Xu, Sheng; Xue, Zhong; Wong, Stephen T.
2012-02-01
Electromagnetic (EM) tracking has been recognized as a valuable tool for locating interventional devices in procedures such as lung and liver biopsy or ablation. The advantage of this technology is its real-time connection to the 3D volumetric roadmap, i.e. the CT, of a patient's anatomy while the intervention is performed. EM-based guidance requires tracking the tip of the interventional device, transforming the location of the device onto pre-operative CT images, and superimposing the device in the 3D images to assist physicians to complete the procedure more effectively. A key requirement of this data integration is to find automatically the mapping between the EM and CT coordinate systems. Thus, skin fiducial sensors are attached to patients before acquiring the pre-operative CTs. Those sensors can then be recognized in both the CT and EM coordinate systems and used to calculate the transformation matrix. In this paper, to enable the EM-based navigation workflow and reduce procedural preparation time, an automatic fiducial detection method is proposed to obtain the centroids of the sensors from the pre-operative CT. The approach has been applied to 13 rabbit datasets derived from an animal study and eight human images from an observation study. The numerical results show that it is a reliable and efficient method for use in EM-guided applications.
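The paper's detection method is not spelled out in the abstract, but a common baseline for locating bright fiducials in CT is simple intensity thresholding followed by connected-component labelling and centroid extraction, sketched below with scipy.ndimage. The threshold and minimum component size are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def fiducial_centroids(ct_volume, intensity_threshold=2000, min_voxels=10):
    """Return centroids (in voxel coordinates) of bright connected components,
    a simple stand-in for automatic skin-sensor detection in a CT volume."""
    mask = ct_volume > intensity_threshold          # keep bright (e.g. metallic) voxels
    labels, n_components = ndimage.label(mask)      # connected-component labelling
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n_components + 1))
    keep = np.flatnonzero(sizes >= min_voxels) + 1  # discard tiny speckle components
    return ndimage.center_of_mass(mask, labels, keep)
```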
Time-delayed feedback technique for suppressing instabilities in time-periodic flow
NASA Astrophysics Data System (ADS)
Shaabani-Ardali, Léopold; Sipp, Denis; Lesshafft, Lutz
2017-11-01
A numerical method is presented that allows to compute time-periodic flow states, even in the presence of hydrodynamic instabilities. The method is based on filtering nonharmonic components by way of delayed feedback control, as introduced by Pyragas [Phys. Lett. A 170, 421 (1992), 10.1016/0375-9601(92)90745-8]. Its use in flow problems is demonstrated here for the case of a periodically forced laminar jet, subject to a subharmonic instability that gives rise to vortex pairing. The optimal choice of the filter gain, which is a free parameter in the stabilization procedure, is investigated in the context of a low-dimensional model problem, and it is shown that this model predicts well the filter performance in the high-dimensional flow system. Vortex pairing in the jet is efficiently suppressed, so that the unstable periodic flow state in response to harmonic forcing is accurately retrieved. The procedure is straightforward to implement inside any standard flow solver. Memory requirements for the delayed feedback control can be significantly reduced by means of time interpolation between checkpoints. Finally, the method is extended for the treatment of periodic problems where the frequency is not known a priori. This procedure is demonstrated for a three-dimensional cubic lid-driven cavity in supercritical conditions.
The Improvement of Efficiency in the Numerical Computation of Orbit Trajectories
NASA Technical Reports Server (NTRS)
Dyer, J.; Danchick, R.; Pierce, S.; Haney, R.
1972-01-01
An analysis, system design, programming, and evaluation of results are described for numerical computation of orbit trajectories. Evaluation of generalized methods, interaction of different formulations for satellite motion, transformation of equations of motion and integrator loads, and development of efficient integrators are also considered.
Streamlined vessels for speedboats: Macro modifications of shark skin design applications
NASA Astrophysics Data System (ADS)
Ibrahim, M. D.; Amran, S. N. A.; Zulkharnain, A.; Sunami, Y.
2018-01-01
Functional properties of shark denticles have caught the attention of engineers and scientists today due to the hydrodynamic effects of the skin's surface roughness. The skin of a fast-swimming shark reveals riblet structures that help to reduce skin friction drag and shear stresses, making its movement more efficient and faster. Inspired by the structure of shark skin denticles, our team has conducted a study on an alternative approach to improving the hydrodynamic design of marine vessels by applying a simplified version of shark skin denticles to the hull surface of the vessels. Models used for this study are constructed and computational fluid dynamics (CFD) simulations are then carried out to predict the effectiveness of the hydrodynamic effects of the biomimetic shark skins on those models. Interestingly, the numerically calculated results show that the presence of biomimetic shark skin on the vessels gives improvements in the maximum speed as well as reducing the drag force experienced by the vessels. The wave pattern generated in the cruising area behind the vessels also shows reduced wakes and eddies. Theoretically, the reduction of drag force provides a more efficient vessel with a better cruising speed. To further improve on this study, the authors are now arranging an experimental procedure in order to verify the numerical results obtained by CFD. The experimental tests will be carried out using an 8-metre flow channel provided by University Malaysia Sarawak, Malaysia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anton, Luis; Martí, José M.; Ibáñez, José M.
2010-05-01
We obtain renormalized sets of right and left eigenvectors of the flux vector Jacobians of the relativistic MHD equations, which are regular and span a complete basis in any physical state, including degenerate ones. The renormalization procedure relies on the characterization of the degeneracy types in terms of the normal and tangential components of the magnetic field with respect to the wave front in the fluid rest frame. Proper expressions of the renormalized eigenvectors in conserved variables are obtained through the corresponding matrix transformations. Our work completes previous analyses that present different sets of right eigenvectors for non-degenerate and degenerate states, and can be seen as a relativistic generalization of earlier work performed in classical MHD. Based on the full wave decomposition (FWD) provided by the renormalized set of eigenvectors in conserved variables, we have also developed a linearized (Roe-type) Riemann solver. Extensive testing against one- and two-dimensional standard numerical problems allows us to conclude that our solver is very robust. When compared with a family of simpler solvers that avoid the knowledge of the full characteristic structure of the equations in the computation of the numerical fluxes, our solver turns out to be less diffusive than HLL and HLLC, and comparable in accuracy to the HLLD solver. The amount of operations needed by the FWD solver makes it less efficient computationally than those of the HLL family in one-dimensional problems. However, its relative efficiency increases in multidimensional simulations.
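For reference, the simpler HLL-family solvers that the FWD solver is compared against reduce, in their most basic form, to the single-state HLL flux shown below. The sketch is generic (it applies to any conservation law given left/right states, fluxes and wave-speed estimates) and is included only to indicate the kind of incomplete-Riemann-solver baseline meant in the abstract; it is not code from the paper.

```python
import numpy as np

def hll_flux(u_left, u_right, f_left, f_right, s_left, s_right):
    """Single-state HLL approximate Riemann flux for a conservation law, given
    conserved states u, physical fluxes f and wave-speed estimates s_left <= s_right."""
    if s_left >= 0.0:
        return f_left                    # all waves move right: upwind on the left state
    if s_right <= 0.0:
        return f_right                   # all waves move left: upwind on the right state
    return (s_right * f_left - s_left * f_right
            + s_left * s_right * (u_right - u_left)) / (s_right - s_left)
```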
Electronic-Imen-Delphi (EID): An Online Conferencing Procedure.
ERIC Educational Resources Information Center
Passig, David; Sharbat, Aviva
2000-01-01
Examines the efficiency of the Imen-Delphi (ID) technique as an electronic procedure for conferencing that helps participants clarify their opinions and expectations regarding preferable and possible futures. Describes an electronic version of the original ID procedure and tested its efficiency among a group of experts on virtual reality and…
Optimal sensors placement and spillover suppression
NASA Astrophysics Data System (ADS)
Hanis, Tomas; Hromcik, Martin
2012-04-01
A new approach to optimal placement of sensors (OSP) in mechanical structures is presented. In contrast to existing methods, the presented procedure enables a designer to seek a trade-off between the presence of desirable modes in captured measurements and the elimination of the influence of those mode shapes that are not of interest in a given situation. An efficient numerical algorithm is presented, developed from an existing routine based on Fisher information matrix analysis. We consider two requirements in the optimal sensor placement procedure. On top of the classical EFI approach, the sensor configuration should also minimize spillover of unwanted higher modes. We use the information approach to OSP, based on the effective independence (EFI) method, and modify the underlying criterion to meet both of our requirements: to maximize useful signals and minimize spillover of unwanted modes at the same time. The performance of our approach is demonstrated by means of examples, and a flexible Blended Wing Body (BWB) aircraft case study related to a running European-level FP7 research project 'ACFA 2020—Active Control for Flexible Aircraft'.
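To show what the underlying EFI routine does before the spillover term is added, the sketch below implements the standard effective independence procedure: candidate sensor locations (rows of the target mode-shape matrix) are removed one at a time according to their contribution to the determinant of the Fisher information matrix. The spillover-penalized criterion proposed in the paper is not reproduced here; this is only the classical baseline it builds on.

```python
import numpy as np

def efi_sensor_placement(Phi, n_sensors):
    """Classical Effective Independence (EFI) sensor selection.
    Phi: (n_candidates, n_modes) target mode shapes sampled at candidate locations.
    Iteratively drops the candidate whose removal least degrades det(Phi^T Phi)."""
    keep = list(range(Phi.shape[0]))
    while len(keep) > n_sensors:
        A = Phi[keep, :]
        # Effective independence values = diagonal of the projector A (A^T A)^{-1} A^T
        ed = np.einsum('ij,ij->i', A @ np.linalg.inv(A.T @ A), A)
        keep.pop(int(np.argmin(ed)))    # remove the least informative candidate
    return keep
```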
Szidarovszky, Tamás; Fábri, Csaba; Császár, Attila G
2012-05-07
Approximate rotational characterization of variational rovibrational wave functions via the rigid rotor decomposition (RRD) protocol is developed for Hamiltonians based on arbitrary sets of internal coordinates and axis embeddings. An efficient and general procedure is given that allows employing the Eckart embedding with arbitrary polyatomic Hamiltonians through a fully numerical approach. RRD tables formed by projecting rotational-vibrational wave functions into products of rigid-rotor basis functions and previously determined vibrational eigenstates yield rigid-rotor labels for rovibrational eigenstates by selecting the largest overlap. Embedding-dependent RRD analyses are performed, up to high energies and rotational excitations, for the H2(16)O isotopologue of the water molecule. Irrespective of the embedding chosen, the RRD procedure proves effective in providing unambiguous rotational assignments at low energies and J values. Rotational labeling of rovibrational states of H2(16)O proves to be increasingly difficult beyond about 10,000 cm(-1), close to the barrier to linearity of the water molecule. For medium energies and excitations the Eckart embedding yields the largest RRD coefficients, thus providing the largest number of unambiguous rotational labels.
A Review of Current Methods for Analysis of Mycotoxins in Herbal Medicines
Zhang, Lei; Dou, Xiao-Wen; Zhang, Cheng; Logrieco, Antonio F.; Yang, Mei-Hua
2018-01-01
The presence of mycotoxins in herbal medicines is an established problem throughout the entire world. The sensitive and accurate analysis of mycotoxins in complicated matrices (e.g., herbs) typically involves challenging sample pretreatment procedures and an efficient detection instrument. However, although numerous reviews have been published regarding the occurrence of mycotoxins in herbal medicines, few of them provide a detailed summary of the related analytical methods for mycotoxin determination. This review focuses on analytical techniques, including sampling, extraction, cleanup, and detection, for mycotoxin determination in herbal medicines established within the past ten years. Dedicated sections of this article address the significant developments in sample preparation and highlight the importance of this procedure in the analytical technology. This review also summarizes conventional chromatographic techniques for mycotoxin qualification or quantitation, as well as recent studies regarding the development and application of screening assays such as enzyme-linked immunosorbent assays, lateral flow immunoassays, aptamer-based lateral flow assays, and cytometric bead arrays. The present work provides good insight into the advanced research that has been done and closes with an indication of future demand for the emerging technologies. PMID:29393905
A cluster expansion model for predicting activation barrier of atomic processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rehman, Tafizur; Jaipal, M.; Chatterjee, Abhijit, E-mail: achatter@iitk.ac.in
2013-06-15
We introduce a procedure based on cluster expansion models for predicting the activation barrier of atomic processes encountered while studying the dynamics of a material system using the kinetic Monte Carlo (KMC) method. Starting with an interatomic potential description, a mathematical derivation is presented to show that the local environment dependence of the activation barrier can be captured using cluster interaction models. Next, we develop a systematic procedure for training the cluster interaction model on-the-fly, which involves: (i) obtaining activation barriers for a handful of local environments using nudged elastic band (NEB) calculations, (ii) identifying the local environment by analyzing the NEB results, and (iii) estimating the cluster interaction model parameters from the activation barrier data. Once a cluster expansion model has been trained, it is used to predict activation barriers without requiring any additional NEB calculations. Numerical studies are performed to validate the cluster expansion model by studying hop processes in Ag/Ag(100). We show that the use of the cluster expansion model with KMC enables efficient generation of an accurate process rate catalog.
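Step (iii) above, estimating the cluster interaction parameters from a set of NEB barriers, is in its simplest form a linear least-squares fit, sketched below. The feature matrix encoding cluster occupancies around each process and the fitted interaction vector are generic illustrations; the paper's on-the-fly training loop and environment identification are not reproduced.

```python
import numpy as np

def fit_cluster_interactions(X, barriers):
    """Least-squares estimate of effective cluster interactions J such that X @ J ~ barriers.
    X[i, c]    = occupancy/count of cluster figure c in the local environment of process i,
    barriers[i] = activation barrier of process i obtained from an NEB calculation."""
    J, *_ = np.linalg.lstsq(X, barriers, rcond=None)
    return J

def predict_barrier(x_local, J):
    """Predict the barrier for a new local environment without a further NEB run."""
    return float(x_local @ J)
```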
Ontology-based data integration between clinical and research systems.
Mate, Sebastian; Köpcke, Felix; Toddenroth, Dennis; Martin, Marcus; Prokosch, Hans-Ulrich; Bürkle, Thomas; Ganslandt, Thomas
2015-01-01
Data from the electronic medical record comprise numerous structured but uncoded elements, which are not linked to standard terminologies. Reuse of such data for secondary research purposes has gained in importance recently. However, the identification of relevant data elements and the creation of database jobs for extraction, transformation and loading (ETL) are challenging: With current methods such as data warehousing, it is not feasible to efficiently maintain and reuse semantically complex data extraction and transformation routines. We present an ontology-supported approach to overcome this challenge by making use of abstraction: Instead of defining ETL procedures at the database level, we use ontologies to organize and describe the medical concepts of both the source system and the target system. Instead of using unique, specifically developed SQL statements or ETL jobs, we define declarative transformation rules within ontologies and illustrate how these constructs can then be used to automatically generate SQL code to perform the desired ETL procedures. This demonstrates how a suitable level of abstraction may not only aid the interpretation of clinical data, but can also foster the reutilization of methods for unlocking it.
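To make the idea of generating ETL code from declarative rules tangible, here is a deliberately minimal Python sketch: a mapping of the kind that could be stored alongside ontology concepts is rendered into an INSERT ... SELECT statement. All table and column names are hypothetical placeholders, and the real system described above derives such rules from ontologies rather than from a hard-coded dictionary.

```python
# Hypothetical declarative rule: which source columns feed which target columns.
mapping = {
    "target_table": "research_observation",          # placeholder target table
    "source_table": "clinical_lab_results",          # placeholder source table
    "columns": {"concept_code": "loinc_code",        # target column -> source expression
                "value_numeric": "result_value"},
    "row_filter": "result_value IS NOT NULL",
}

def generate_etl_sql(rule):
    """Render one declarative transformation rule as an INSERT ... SELECT statement."""
    target_cols = ", ".join(rule["columns"].keys())
    source_exprs = ", ".join(rule["columns"].values())
    return (f"INSERT INTO {rule['target_table']} ({target_cols}) "
            f"SELECT {source_exprs} FROM {rule['source_table']} "
            f"WHERE {rule['row_filter']};")

print(generate_etl_sql(mapping))
```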
Doha, E.H.; Abd-Elhameed, W.M.; Youssri, Y.H.
2014-01-01
Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indexes are employed for solving third- and fifth-order two-point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov–Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and test functions satisfying the dual boundary conditions. The resulting linear systems from the application of our method are specially structured and can be efficiently inverted. The use of generalized Jacobi polynomials simplifies the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient. PMID:26425358
NASA Astrophysics Data System (ADS)
Hosseinalipour, S. M.; Raja, A.; Hajikhani, S.
2012-06-01
A full three-dimensional Navier-Stokes numerical simulation has been performed for the performance analysis of a Kaplan turbine installed in one of Iran's southern dams. No simplifications have been imposed in the simulation. The numerical results have been evaluated using integral parameters such as the turbine efficiency, by comparing them with existing experimental data from the prototype Hill chart. In part of this study, numerical simulations were performed to calculate the prototype turbine efficiencies at specific operating points obtained by scaling up the model efficiencies available in the experimental model Hill chart. The results are very promising and demonstrate the ability of the numerical techniques to resolve the flow characteristics in this kind of complex geometry. A parametric study evaluating the turbine performance at three different runner angles of the prototype is also performed, and the results are reported in this paper.
Numeric Data Products and Services. SPEC Kit.
ERIC Educational Resources Information Center
Cook, Michael N., Comp.; Hernandez, John J., Comp.; Nicholson, Shawn, Comp.
2001-01-01
This SPEC (Systems and Procedures Exchange Center) Kit presents the results of a survey of Association of Research Libraries (ARL) member libraries. The survey addressed the following questions about numeric data (i.e., any information resource, print or non-print, with considerable numeric content) in academic libraries: (1) What relationships…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkov, S. A., E-mail: volkoff-sergey@mail.ru
2016-06-15
A new subtractive procedure for canceling ultraviolet and infrared divergences in Feynman integrals is developed here for calculating QED corrections to the electron anomalous magnetic moment. The procedure, formulated as a forest expression with linear operators applied to the Feynman amplitudes of UV-divergent subgraphs, makes it possible to represent the contribution of each Feynman graph containing only electron and photon propagators as a convergent integral with respect to the Feynman parameters. The application of the developed method to the numerical calculation of two- and three-loop contributions is described.
Computer Facilitated Mathematical Methods in Chemical Engineering--Similarity Solution
ERIC Educational Resources Information Center
Subramanian, Venkat R.
2006-01-01
High-performance computers coupled with highly efficient numerical schemes and user-friendly software packages have helped instructors to teach numerical solutions and analysis of various nonlinear models more efficiently in the classroom. One of the main objectives of a model is to provide insight about the system of interest. Analytical…
Numerical Processing Efficiency Improved in Experienced Mental Abacus Children
ERIC Educational Resources Information Center
Wang, Yunqi; Geng, Fengji; Hu, Yuzheng; Du, Fenglei; Chen, Feiyan
2013-01-01
Experienced mental abacus (MA) users are able to perform mental arithmetic calculations with unusual speed and accuracy. However, it remains unclear whether their extraordinary gains in mental arithmetic ability are accompanied by an improvement in numerical processing efficiency. To address this question, the present study, using a numerical…
A Quantitative Review of Functional Analysis Procedures in Public School Settings
ERIC Educational Resources Information Center
Solnick, Mark D.; Ardoin, Scott P.
2010-01-01
Functional behavioral assessments can consist of indirect, descriptive and experimental procedures, such as a functional analysis. Although the research contains numerous examples demonstrating the effectiveness of functional analysis procedures, experimental conditions are often difficult to implement in classroom settings and analog conditions…
Calhoun, Karen H; Templer, Jerry; Patenaude, Bart
2006-01-01
There are numerous strategies, devices and procedures available to treat snoring. The surgical procedures have an overall success rate of 60-70%, but this probably decreases over time, especially if there is weight gain. There are no long-term rigorously-designed studies comparing the various procedures for decreasing snoring.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
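As a hedged numerical sketch (not the authors' code), the iteration described can be viewed as taking the usual successive-approximations (likelihood-equation) update and deflecting it with a step size omega; omega = 1 recovers the basic scheme, while the analysis concerns step sizes between 0 and 2. The data, initial values, and two-component setup below are invented for illustration, and a safeguard on the variances may be needed for aggressive step sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D data from a two-component normal mixture (illustrative only).
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.5, 700)])

def em_update(x, w, mu, sig):
    """One successive-approximations (EM-type) update for a 2-component normal mixture."""
    pdf = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    r1, r2 = w * pdf(mu[0], sig[0]), (1 - w) * pdf(mu[1], sig[1])
    g = r1 / (r1 + r2)                                   # responsibilities
    w_new = g.mean()
    mu_new = np.array([np.sum(g * x) / g.sum(), np.sum((1 - g) * x) / (1 - g).sum()])
    sig_new = np.array([np.sqrt(np.sum(g * (x - mu_new[0]) ** 2) / g.sum()),
                        np.sqrt(np.sum((1 - g) * (x - mu_new[1]) ** 2) / (1 - g).sum())])
    return w_new, mu_new, sig_new

# Deflected-gradient style iteration: omega = 1 gives the plain scheme;
# the abstract's local convergence result concerns 0 < omega < 2.
omega = 1.5
w, mu, sig = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    w_em, mu_em, sig_em = em_update(x, w, mu, sig)
    w, mu, sig = w + omega * (w_em - w), mu + omega * (mu_em - mu), sig + omega * (sig_em - sig)
print(w, mu, sig)
```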
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
NASA Technical Reports Server (NTRS)
Wang, Qinglin; Gogineni, S. P.
1991-01-01
A numerical procedure is presented for estimating the true scattering coefficient, sigma(sup 0), from measurements made using wide-beam antennas. The use of wide-beam antennas results in an inaccurate estimate of sigma(sup 0) if the narrow-beam approximation is used in the retrieval process for sigma(sup 0). To reduce this error, a correction procedure was proposed that estimates the error resulting from the narrow-beam approximation and uses the error to obtain a more accurate estimate of sigma(sup 0). An exponential model was assumed to take into account the variation of sigma(sup 0) with incidence angle, and the model parameters are estimated from measured data. Based on the model and knowledge of the antenna pattern, the procedure calculates the error due to the narrow-beam approximation. The procedure is shown to provide a significant improvement in the estimation of sigma(sup 0) obtained with wide-beam antennas. The proposed procedure is also shown to be insensitive to the assumed sigma(sup 0) model.
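For orientation only: the functional form, symbols, and radar-equation weighting below are assumptions consistent with the abstract, not reproduced from the report. An exponential angular model and the footprint integral it enters might be written as

```latex
\sigma^0(\theta) \approx \sigma^0(\theta_0)\, e^{-\beta(\theta-\theta_0)}, \qquad
P_r \;\propto\; \int_{\text{footprint}} \frac{G^2(\theta,\phi)\,\sigma^0(\theta)}{R^4}\, dA
\;\approx\; \sigma^0(\theta_0) \int_{\text{footprint}} \frac{G^2(\theta,\phi)}{R^4}\, dA .
```

The last expression is the narrow-beam approximation, which pulls sigma(sup 0) out of the integral at the beam-center angle; a correction procedure of this kind in effect estimates the ratio of the two integrals from the fitted decay rate and the known antenna pattern and divides it out of the retrieved estimate.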
NASA Technical Reports Server (NTRS)
Raibstein, A. I.; Kalev, I.; Pipano, A.
1976-01-01
A procedure for the local stiffness modification of large structures is described. It enables structural modifications without an a priori definition of the changes in the original structure and without loss of efficiency due to multiple loading conditions. The solution procedure, implemented in NASTRAN, involves the decomposed stiffness matrix and the displacement vectors of the original structure. It solves the modified structure exactly, irrespective of the magnitude of the stiffness changes. In order to investigate the efficiency of the present procedure and to test its applicability within a design environment, several real and large structures were solved. The results of the efficiency studies indicate that the break-even point of the procedure varies between 8% and 60% stiffness modifications, depending upon the structure's characteristics and the options employed.
30 CFR 7.102 - Exhaust gas cooling efficiency test.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Exhaust gas cooling efficiency test. 7.102 Section 7.102 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR TESTING....102 Exhaust gas cooling efficiency test. (a) Test procedures. (1) Follow the procedures specified in...
Development of Multiobjective Optimization Techniques for Sonic Boom Minimization
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.
1996-01-01
A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier-Stokes equations solver. Aerodynamic design sensitivities for high-speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high-speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high-speed wing-body configurations simultaneously improve the aerodynamic, the sonic boom and the structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier-Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load-carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
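The Kreisselmeier-Steinhauser (KS) function mentioned above aggregates several objectives or constraints into one smooth envelope; a standard form from the general optimization literature (not quoted from this report) is

```latex
\mathrm{KS}(f_1,\dots,f_m) \;=\; f_{\max} \;+\; \frac{1}{\rho}\,
\ln\!\Bigl(\sum_{k=1}^{m} e^{\rho\,(f_k - f_{\max})}\Bigr),
\qquad f_{\max} = \max_k f_k ,
```

where the draw-down factor rho > 0 controls how tightly the envelope hugs the maximum of the individual functions; subtracting f_max inside the exponential avoids numerical overflow.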
Critical Parameters of the Initiation Zone for Spontaneous Dynamic Rupture Propagation
NASA Astrophysics Data System (ADS)
Galis, M.; Pelties, C.; Kristek, J.; Moczo, P.; Ampuero, J. P.; Mai, P. M.
2014-12-01
Numerical simulations of rupture propagation are used to study both earthquake source physics and earthquake ground motion. Under linear slip-weakening friction, artificial procedures are needed to initiate a self-sustained rupture. The concept of an overstressed asperity is often applied, in which the asperity is characterized by its size, shape and overstress. The physical properties of the initiation zone may have significant impact on the resulting dynamic rupture propagation. A trial-and-error approach is often necessary for successful initiation because 2D and 3D theoretical criteria for estimating the critical size of the initiation zone do not provide general rules for designing 3D numerical simulations. Therefore, it is desirable to define guidelines for efficient initiation with minimal artificial effects on rupture propagation. We perform an extensive parameter study using numerical simulations of 3D dynamic rupture propagation assuming a planar fault to examine the critical size of square, circular and elliptical initiation zones as a function of asperity overstress and background stress. For a fixed overstress, we discover that the area of the initiation zone is more important for the nucleation process than its shape. Comparing our numerical results with published theoretical estimates, we find that the estimates by Uenishi & Rice (2004) are applicable to configurations with low background stress and small overstress. None of the published estimates are consistent with numerical results for configurations with high background stress. We therefore derive new equations to estimate the initiation zone size in environments with high background stress. Our results provide guidelines for defining the size of the initiation zone and overstress with minimal effects on the subsequent spontaneous rupture propagation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue
Within this paper, we present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from the time dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into an eigenvalue problem that involves the product of two matrices M and K. We show that, because MK is self-adjoint with respect to the inner product induced by the matrix K, this product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. Additionally, the solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. We show that the other component of the eigenvector can be easily recovered in an inexpensive postprocessing procedure. As a result, the algorithms we present here become more efficient than existing methods that try to approximate both components of the eigenvectors simultaneously. In particular, our numerical experiments demonstrate that the new algorithms presented here consistently outperform the existing state-of-the-art Davidson type solvers by a factor of two in both solution time and storage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue
In this article, we present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from the time dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into an eigenvalue problem that involves the product of two matrices M and K. We show that, because MK is self-adjoint with respect to the inner product induced by the matrix K, this product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. The solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. We show that the other component of the eigenvector can be easily recovered in an inexpensive postprocessing procedure. As a result, the algorithms we present here become more efficient than existing methods that try to approximate both components of the eigenvectors simultaneously. In particular, our numerical experiments demonstrate that the new algorithms presented here consistently outperform the existing state-of-the-art Davidson type solvers by a factor of two in both solution time and storage.
Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue; ...
2017-12-01
In this article, we present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from the time dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into an eigenvalue problem that involves the product of two matrices M and K. We show that, because MK is self-adjoint with respect to the inner product induced by the matrix K, this product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. The solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. We show that the other component of the eigenvector can be easily recovered in an inexpensive postprocessing procedure. As a result, the algorithms we present here become more efficient than existing methods that try to approximate both components of the eigenvectors simultaneously. In particular, our numerical experiments demonstrate that the new algorithms presented here consistently outperform the existing state-of-the-art Davidson type solvers by a factor of two in both solution time and storage.
Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue; ...
2017-08-24
Within this paper, we present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from the time dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into an eigenvalue problem that involves the product of two matrices M and K. We show that, because MK is self-adjoint with respect to the inner product induced by the matrix K, this product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. Additionally, the solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. We show that the other component of the eigenvector can be easily recovered in an inexpensive postprocessing procedure. As a result, the algorithms we present here become more efficient than existing methods that try to approximate both components of the eigenvectors simultaneously. In particular, our numerical experiments demonstrate that the new algorithms presented here consistently outperform the existing state-of-the-art Davidson type solvers by a factor of two in both solution time and storage.
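The structural fact being exploited, that MK is self-adjoint in the K-inner product, can be checked with a small dense sketch: with K = L L^T, the product problem (MK)x = lam*x is equivalent to a symmetric problem for L^T M L. The random matrices below are stand-ins, and the dense factorization is only for illustration; it is not the iterative Davidson/LOBPCG machinery the paper develops.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Symmetric positive definite stand-ins for the blocks M and K (assumed data).
A = rng.standard_normal((n, n)); M = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); K = B @ B.T + n * np.eye(n)

# MK is nonsymmetric, but self-adjoint in <u, v>_K = u^T K v.  With the
# Cholesky factor K = L L^T:  (MK)x = lam*x  <=>  (L^T M L)y = lam*y, y = L^T x.
L = np.linalg.cholesky(K)
lam, Y = np.linalg.eigh(L.T @ M @ L)
X = np.linalg.solve(L.T, Y[:, :5])        # eigenvectors of MK for the smallest lam

# residual of the original nonsymmetric product eigenvalue problem
res = np.linalg.norm(M @ (K @ X) - X * lam[:5], axis=0)
print(lam[:5], res.max())
```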
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-25
... test procedures that are reasonably designed to produce results that measure energy efficiency, energy... contains one or more design characteristics that prevent testing according to the prescribed test procedure... Department of Energy Residential Dishwasher Test Procedure AGENCY: Office of Energy Efficiency and Renewable...
Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners
Li, Ruipeng; Saad, Yousef
2017-08-01
This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman-Morrison-Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.
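The mechanism behind such low-rank corrected preconditioners is the Sherman-Morrison-Woodbury identity: a solve with A + UV needs only solves with A plus a small k-by-k system. The sketch below demonstrates the identity on random stand-in matrices; it is not the distributed DD implementation compared with pARMS.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 500, 10
A = np.diag(rng.uniform(1.0, 2.0, n))     # easily invertible "local" part
U = rng.standard_normal((n, k))
V = rng.standard_normal((k, n))
b = rng.standard_normal(n)

# (A + U V)^{-1} b = A^{-1} b - A^{-1} U (I + V A^{-1} U)^{-1} V A^{-1} b
Ainv_b = np.linalg.solve(A, b)
Ainv_U = np.linalg.solve(A, U)
capacitance = np.eye(k) + V @ Ainv_U      # small k x k system
x = Ainv_b - Ainv_U @ np.linalg.solve(capacitance, V @ Ainv_b)

print(np.linalg.norm((A + U @ V) @ x - b))   # ~ machine precision
```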
Estimation of mean response via effective balancing score
Hu, Zonghui; Follmann, Dean A.; Wang, Naisyin
2015-01-01
Summary We introduce effective balancing scores for estimation of the mean response under a missing at random mechanism. Unlike conventional balancing scores, the effective balancing scores are constructed via dimension reduction free of model specification. Three types of effective balancing scores are introduced: those that carry the covariate information about the missingness, the response, or both. They lead to consistent estimation with little or no loss in efficiency. Compared to existing estimators, the effective balancing score based estimator relieves the burden of model specification and is the most robust. It is a near-automatic procedure which is most appealing when high dimensional covariates are involved. We investigate both the asymptotic and the numerical properties, and demonstrate the proposed method in a study on Human Immunodeficiency Virus disease. PMID:25797955
Integration of PGD-virtual charts into an engineering design process
NASA Astrophysics Data System (ADS)
Courard, Amaury; Néron, David; Ladevèze, Pierre; Ballere, Ludovic
2016-04-01
This article deals with the efficient construction of approximations of fields and quantities of interest used in the geometric optimisation of complex shapes encountered in engineering structures. The strategy developed herein is based on the construction of virtual charts that, once computed offline, allow the structure to be optimised at a negligible online CPU cost. These virtual charts can be used as a powerful numerical decision support tool during the design of industrial structures. They are built using the proper generalized decomposition (PGD), which offers a very convenient framework for solving parametrised problems. In this paper, particular attention has been paid to the integration of the procedure into a genuine engineering design process. In particular, a dedicated methodology is proposed to interface the PGD approach with commercial software.
Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis
LI, HUAMIN; LINDERMAN, GEORGE C.; SZLAM, ARTHUR; STANTON, KELLY P.; KLUGER, YUVAL; TYGERT, MARK
2017-01-01
Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. The present article presents an essentially black-box, foolproof implementation for Mathworks’ MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138
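A generic Python sketch of the randomized low-rank approximation idea (range finder plus a small exact SVD) is given below for orientation; it is not the MATLAB implementation described in the article, and the parameter choices are illustrative.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, n_iter=2, seed=None):
    """Basic randomized low-rank SVD: sample the range, project, solve a small SVD."""
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((A.shape[1], rank + oversample))  # Gaussian test matrix
    Y = A @ G
    for _ in range(n_iter):              # a few power iterations sharpen the range
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)               # orthonormal basis for the sampled range
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

# quick check on a synthetic rank-30 matrix
rng = np.random.default_rng(3)
A = rng.standard_normal((1000, 30)) @ rng.standard_normal((30, 800))
U, s, Vt = randomized_svd(A, rank=30)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```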
Modified independent modal space control method for active control of flexible systems
NASA Technical Reports Server (NTRS)
Baz, A.; Poh, S.
1987-01-01
A modified independent modal space control (MIMSC) method is developed for designing active vibration control systems for large flexible structures. The method accounts for the interaction between the controlled and residual modes. It also incorporates optimal placement procedures for selecting the optimal locations of the actuators in the structure in order to minimize the structural vibrations as well as the actuation energy. The MIMSC method relies on an important feature, which is based on time sharing of a small number of actuators, in the modal space, to effectively control a large number of modes. Numerical examples are presented to illustrate the application of the method to generic flexible systems. The results obtained suggest the potential of the devised method in designing efficient active control systems for large flexible structures.
Buckling analysis of non-prismatic columns based on modified vibration modes
NASA Astrophysics Data System (ADS)
Rahai, A. R.; Kazemi, S.
2008-10-01
In this paper, a new procedure is formulated for the buckling analysis of tapered column members. The calculation of the buckling loads is carried out using modified vibrational mode shapes (MVM) and the energy method. The change of stiffness within a column is characterized by introducing a tapering index. It is shown that the changes in the vibrational mode shapes of a tapered column can be represented by a linear combination of various modes of uniform-section columns. As a result, by making use of these modified mode shapes (MVM) and applying the principle of stationary total potential energy, the buckling load of tapered columns can be obtained. Several numerical examples on tapered columns demonstrate the accuracy and efficiency of the proposed analytical method.
Improving the satellite communication efficiency of the accumulative acknowledgement strategies
NASA Astrophysics Data System (ADS)
Duarte, Otto Carlos M. B.; de Lima, Heliomar Medeiros
The performances of two finite-buffer error recovery strategies are analyzed. In both strategies, the retransmission request decision between selective repeat and continuous retransmission is based on an imminent buffer overflow condition. Both are accumulative acknowledgment schemes, but in the second strategy the selective-repeat control frame is solely an individual negative acknowledgment. The two strategies take advantage of the availability of a greater buffer capacity, making the most of the selective repeat and postponing the sending of a continuous retransmission request. Numerical results show a performance very close to the ideal, although the scheme does not fully conform to the high-level data link control (HDLC) procedures. It is shown that these strategies are well suited for high-speed data transfer in the high-error-rate satellite environment.
Multiple-grid convergence acceleration of viscous and inviscid flow computations
NASA Technical Reports Server (NTRS)
Johnson, G. M.
1983-01-01
A multiple-grid algorithm for efficiently obtaining steady solutions to the Euler and Navier-Stokes equations is presented. The convergence of a simple, explicit fine-grid solution procedure is accelerated on a sequence of successively coarser grids by a coarse-grid information propagation method which rapidly eliminates transients from the computational domain. This use of multiple gridding to increase the convergence rate results in substantially reduced work requirements for the numerical solution of a wide range of flow problems. Computational results are presented for subsonic and transonic inviscid flows and for laminar and turbulent, attached and separated, subsonic viscous flows. Work reduction factors as large as eight, in comparison to the basic fine-grid algorithm, were obtained. Possibilities for further performance improvement are discussed.
NASA Technical Reports Server (NTRS)
Chang, S. C.
1986-01-01
An algorithm for solving a large class of two- and three-dimensional nonseparable elliptic partial differential equations (PDE's) is developed and tested. It uses a modified D'Yakanov-Gunn iterative procedure in which the relaxation factor is grid-point dependent. It is easy to implement and applicable to a variety of boundary conditions. It is also computationally efficient, as indicated by the results of numerical comparisons with other established methods. Furthermore, the current algorithm has the advantage of possessing two important properties which the traditional iterative methods lack; that is: (1) the convergence rate is relatively insensitive to grid-cell size and aspect ratio, and (2) the convergence rate can be easily estimated by using the coefficient of the PDE being solved.
Equiangular tight frames and unistochastic matrices
NASA Astrophysics Data System (ADS)
Goyeneche, Dardo; Turek, Ondřej
2017-06-01
We demonstrate that a complex equiangular tight frame composed of N vectors in dimension d, denoted ETF (d, N), exists if and only if a certain bistochastic matrix, univocally determined by N and d, belongs to a special class of unistochastic matrices. This connection allows us to find new complex ETFs in infinitely many dimensions and to derive a method to introduce non-trivial free parameters in ETFs. We present an explicit six-parametric family of complex ETF(6,16), which defines a family of symmetric POVMs. Minimal and maximal possible average entanglement of the vectors within this qubit-qutrit family are described. Furthermore, we propose an efficient numerical procedure to compute the unitary matrix underlying a unistochastic matrix, which we apply to find all existing classes of complex ETFs containing up to 20 vectors.
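As a hedged aside, the two defining properties of an ETF(d, N), unit-norm vectors whose pairwise overlaps all equal the Welch bound and which form a tight frame, are easy to verify numerically; the check below uses the real three-vector "Mercedes-Benz" frame in dimension 2 as a toy example, not one of the complex ETF(6,16) families constructed in the paper.

```python
import numpy as np

def is_etf(V, tol=1e-10):
    """Check unit norms, constant |<v_i, v_j>| at the Welch bound, and tightness."""
    d, N = V.shape
    G = V.conj().T @ V                                  # Gram matrix
    off = np.abs(G[~np.eye(N, dtype=bool)])
    welch = np.sqrt((N - d) / (d * (N - 1)))
    unit = np.allclose(np.diag(G).real, 1.0, atol=tol)
    equiangular = np.allclose(off, welch, atol=tol)
    tight = np.allclose(V @ V.conj().T, (N / d) * np.eye(d), atol=tol)
    return unit and equiangular and tight

# real ETF(2, 3): three unit vectors 120 degrees apart
angles = np.pi / 2 + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
V = np.vstack([np.cos(angles), np.sin(angles)])
print(is_etf(V))   # True
```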
NASA Technical Reports Server (NTRS)
Sheng, Chunhua; Hyams, Daniel G.; Sreenivas, Kidambi; Gaither, J. Adam; Marcum, David L.; Whitfield, David L.
2000-01-01
A multiblock unstructured grid approach is presented for solving three-dimensional incompressible inviscid and viscous turbulent flows about complete configurations. The artificial compressibility form of the governing equations is solved by a node-based, finite volume implicit scheme which uses a backward Euler time discretization. Point Gauss-Seidel relaxations are used to solve the linear system of equations at each time step. This work employs a multiblock strategy to the solution procedure, which greatly improves the efficiency of the algorithm by significantly reducing the memory requirements by a factor of 5 over the single-grid algorithm while maintaining a similar convergence behavior. The numerical accuracy of solutions is assessed by comparing with the experimental data for a submarine with stem appendages and a high-lift configuration.
NASA Astrophysics Data System (ADS)
Bogani, F.; Borchi, E.; Bruzzi, M.; Leroy, C.; Sciortino, S.
1997-02-01
The thermoluminescent (TL) response of Chemical Vapour Deposited (CVD) diamond films to beta irradiation has been investigated. A numerical curve-fitting procedure, calibrated by means of a set of LiF TLD100 experimental spectra, has been developed to deconvolute the complex structured TL glow curves. The values of the activation energy and of the frequency factor related to each of the TL peaks involved have been determined. The TL response of the CVD diamond films to beta irradiation has been compared with the TL response of a set of LiF TLD100 and TLD700 dosimeters. The results have been discussed and compared in view of an assessment of the efficiency of CVD diamond films in future applications as in vivo dosimeters.
Fibro/Adipogenic Progenitors (FAPs): Isolation by FACS and Culture.
Low, Marcela; Eisner, Christine; Rossi, Fabio
2017-01-01
Fibro/adipogenic progenitors (FAPs) are tissue-resident mesenchymal stromal cells (MSCs). Current literature supports a role for these cells in the homeostasis and repair of multiple tissues suggesting that FAPs may have extensive therapeutic potential in the treatment of numerous diseases. In this context, it is crucial to establish efficient and reproducible procedures to purify FAP populations from various tissues. Here, we describe a protocol for the isolation and cell culture of FAPs from murine skeletal muscle using fluorescence-activated cell sorting (FACS), which is particularly useful for experiments where high cell purity is an essential requirement. Identification, isolation, and cell culture of FAPs represent powerful tools that will help us to understand the role of these cells in different conditions and facilitate the development of safe and effective new treatments for diseases.
A soft computing-based approach to optimise queuing-inventory control problem
NASA Astrophysics Data System (ADS)
Alaghebandha, Mohammad; Hajipour, Vahid
2015-04-01
In this paper, a multi-product continuous review inventory control problem within a batch arrival queuing approach (MQr/M/1) is developed to find the optimal quantities of maximum inventory. The objective function is to minimise the summation of ordering, holding and shortage costs under warehouse space, service level and expected lost-sales shortage cost constraints from the retailer and warehouse viewpoints. Since the proposed model is Non-deterministic Polynomial-time (NP) hard, an efficient imperialist competitive algorithm (ICA) is proposed to solve the model. To validate the proposed ICA, both a genetic algorithm and a simulated annealing algorithm are utilised. In order to determine the best values of the algorithm parameters that result in a better solution, a fine-tuning procedure is executed. Finally, the performance of the proposed ICA is analysed using some numerical illustrations.
Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Ruipeng; Saad, Yousef
This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman-Morrison-Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.
Playing biology's name game: identifying protein names in scientific text.
Hanisch, Daniel; Fluck, Juliane; Mevissen, Heinz-Theodor; Zimmer, Ralf
2003-01-01
A growing body of work is devoted to the extraction of protein or gene interaction information from the scientific literature. Yet, the basis for most extraction algorithms, i.e. the specific and sensitive recognition of protein and gene names and their numerous synonyms, has not been adequately addressed. Here we describe the construction of a comprehensive general purpose name dictionary and an accompanying automatic curation procedure based on a simple token model of protein names. We designed an efficient search algorithm to analyze all abstracts in MEDLINE in a reasonable amount of time on standard computers. The parameters of our method are optimized using machine learning techniques. Used in conjunction, these ingredients lead to good search performance. A supplementary web page is available at http://cartan.gmd.de/ProMiner/.
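A toy token-based dictionary matcher in the spirit of the approach is sketched below; the dictionary entries, synonyms, and the find_protein_names helper are invented for illustration, and the real ProMiner system is far more elaborate (curation, token classes, machine-learned parameters).

```python
# Hypothetical synonym dictionary: token tuple -> canonical gene/protein identifier.
dictionary = {
    ("p53",): "TP53",
    ("tumor", "protein", "p53"): "TP53",
    ("nf", "kappa", "b"): "NFKB1",
}
max_len = max(len(key) for key in dictionary)

def find_protein_names(text):
    """Greedy longest-match search of dictionary entries over a token stream."""
    tokens = text.lower().replace("-", " ").split()
    hits, i = [], 0
    while i < len(tokens):
        for length in range(min(max_len, len(tokens) - i), 0, -1):
            key = tuple(tokens[i:i + length])
            if key in dictionary:
                hits.append((dictionary[key], " ".join(key)))
                i += length
                break
        else:
            i += 1
    return hits

print(find_protein_names("Activation of NF-kappa B modulates tumor protein p53 levels."))
```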
Vibration and noise analysis of a gear transmission system
NASA Technical Reports Server (NTRS)
Choy, F. K.; Qian, W.; Zakrajsek, J. J.; Oswald, F. B.
1993-01-01
This paper presents a comprehensive procedure to predict both the vibration and noise generated by a gear transmission system under normal operating conditions. The gearbox vibrations were obtained from both numerical simulation and experimental studies using a gear noise test rig. In addition, the noise generated by the gearbox vibrations was recorded during the experimental testing. A numerical method was used to develop linear relationships between the gearbox vibration and the generated noise. The hypercoherence function is introduced to correlate the nonlinear relationship between the fundamental noise frequency and its harmonics. A numerical procedure was developed using both the linear and nonlinear relationships generated from the experimental data to predict noise resulting from the gearbox vibrations. The application of this methodology is demonstrated by comparing the numerical and experimental results from the gear noise test rig.
Optimum strata boundaries and sample sizes in health surveys using auxiliary variables
2018-01-01
Using convenient stratification criteria such as geographical regions or other natural conditions like age, gender, etc., is not beneficial in order to maximize the precision of the estimates of variables of interest. Thus, one has to look for an efficient stratification design to divide the whole population into homogeneous strata that achieves higher precision in the estimation. In this paper, a procedure for determining Optimum Stratum Boundaries (OSB) and Optimum Sample Sizes (OSS) for each stratum of a variable of interest in health surveys is developed. The determination of OSB and OSS based on the study variable is not feasible in practice since the study variable is not available prior to the survey. Since many variables in health surveys are generally skewed, the proposed technique considers the readily-available auxiliary variables to determine the OSB and OSS. This stratification problem is formulated into a Mathematical Programming Problem (MPP) that seeks minimization of the variance of the estimated population parameter under Neyman allocation. It is then solved for the OSB by using a dynamic programming (DP) technique. A numerical example with a real data set of a population, aiming to estimate the Haemoglobin content in women in a national Iron Deficiency Anaemia survey, is presented to illustrate the procedure developed in this paper. Upon comparisons with other methods available in literature, results reveal that the proposed approach yields a substantial gain in efficiency over the other methods. A simulation study also reveals similar results. PMID:29621265
Optimum strata boundaries and sample sizes in health surveys using auxiliary variables.
Reddy, Karuna Garan; Khan, Mohammad G M; Khan, Sabiha
2018-01-01
Using convenient stratification criteria such as geographical regions or other natural conditions like age, gender, etc., is not beneficial in order to maximize the precision of the estimates of variables of interest. Thus, one has to look for an efficient stratification design to divide the whole population into homogeneous strata that achieves higher precision in the estimation. In this paper, a procedure for determining Optimum Stratum Boundaries (OSB) and Optimum Sample Sizes (OSS) for each stratum of a variable of interest in health surveys is developed. The determination of OSB and OSS based on the study variable is not feasible in practice since the study variable is not available prior to the survey. Since many variables in health surveys are generally skewed, the proposed technique considers the readily-available auxiliary variables to determine the OSB and OSS. This stratification problem is formulated into a Mathematical Programming Problem (MPP) that seeks minimization of the variance of the estimated population parameter under Neyman allocation. It is then solved for the OSB by using a dynamic programming (DP) technique. A numerical example with a real data set of a population, aiming to estimate the Haemoglobin content in women in a national Iron Deficiency Anaemia survey, is presented to illustrate the procedure developed in this paper. Upon comparisons with other methods available in literature, results reveal that the proposed approach yields a substantial gain in efficiency over the other methods. A simulation study also reveals similar results.
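A simplified sketch of the dynamic-programming idea is given below: boundaries on a sorted auxiliary variable are chosen to minimize the sum of stratum weight times stratum standard deviation, which is proportional to the Neyman-allocation variance for a fixed total sample size. The cost function, data, and optimum_strata helper are assumptions for illustration, not the authors' MPP formulation.

```python
import numpy as np

def optimum_strata(x, L):
    """Choose L-1 boundaries on sorted x minimizing sum_h W_h * S_h by dynamic programming."""
    x = np.sort(np.asarray(x, dtype=float))
    N = len(x)

    def cost(i, j):                       # stratum formed by x[i:j]
        seg = x[i:j]
        return (len(seg) / N) * seg.std()

    INF = float("inf")
    best = [[INF] * (N + 1) for _ in range(L + 1)]
    prev = [[-1] * (N + 1) for _ in range(L + 1)]
    best[0][0] = 0.0
    for h in range(1, L + 1):             # number of strata used so far
        for j in range(h, N + 1):         # first j units covered
            for i in range(h - 1, j):     # start index of the h-th stratum
                c = best[h - 1][i] + cost(i, j)
                if c < best[h][j]:
                    best[h][j], prev[h][j] = c, i
    cuts, j = [], N                       # backtrack the boundary indices
    for h in range(L, 0, -1):
        i = prev[h][j]
        cuts.append(i)
        j = i
    return sorted(cuts[:-1]), best[L][N]

rng = np.random.default_rng(4)
x = rng.lognormal(mean=2.0, sigma=0.6, size=300)   # skewed auxiliary variable
print(optimum_strata(x, L=4))
```

This brute-force O(L N^2) recursion is only meant to convey the structure; the paper's formulation works with the distribution of the auxiliary variable and Neyman allocation directly.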
Array feed synthesis for correction of reflector distortion and Vernier beamsteering
NASA Technical Reports Server (NTRS)
Blank, Stephen J.; Imbriale, William A.
1988-01-01
An algorithmic procedure for the synthesis of planar array feeds for paraboloidal reflectors is described which simultaneously provides electronic correction of systematic reflector surface distortions as well as a Vernier electronic beamsteering capability. Simple rules of thumb for the optimum choice of planar array feed configuration (i.e., the number and type of elements) are derived from a parametric study made using the synthesis procedure. A number of f/D ratios and distortion models were examined that are typical of large paraboloidal reflectors. Numerical results are presented showing that, for the range of distortion models considered, good on-axis gain restoration can be achieved with as few as seven elements. For beamsteering to +/- 1 beamwidth (BW), 19 elements are required. For arrays with either 7 or 19 elements, the results indicate that the use of high-aperture-efficiency elements (e.g., disk-on-rod and short backfire) in the array yields higher system gain than can be obtained with elements having lower aperture efficiency (e.g., open-ended waveguides). With 37 elements, excellent gain and beamsteering performance to +/- 1.5 BW are obtained independent of the assumed effective aperture of the array element. An approximate expression is derived for the focal-plane field distribution of the distorted reflector. Contour plots of the focal-plane fields are also presented for various distortion and beam scan angle cases. The results obtained show the effectiveness of the array feed approach.
Bassil, Alfred; Rubod, Chrystèle; Borghesi, Yves; Kerbage, Yohan; Schreiber, Elie Servan; Azaïs, Henri; Garabedian, Charles
2017-04-01
Hysteroscopy is one of the most common gynaecological procedures. Training for diagnostic and operative hysteroscopy can be achieved through numerous previously described models, such as animal models or virtual reality simulation. We present our novel combined model associating virtual reality with bovine uteruses and bladders. Final-year residents in obstetrics and gynaecology attended a full-day workshop. The workshop was divided into theoretical courses given by senior surgeons and hands-on training in operative hysteroscopy and virtual reality Essure® procedures using the EssureSim™ and Pelvicsim™ simulators with multiple scenarios. Theoretical and operative knowledge was evaluated before and after the workshop, and General Points Averages (GPAs) were calculated and compared using a Student's t test. GPAs were significantly higher after the workshop was completed. The biggest difference was observed in operative knowledge (0.28 GPA before the workshop versus 0.55 after the workshop, p<0.05). All of the 25 residents who completed the workshop applauded the realism and efficiency of this type of training. The force feedback provided by the cattle uteruses gives the residents the possibility to manage the thickness of resection as in real-time surgery. Furthermore, the two-horned bovine uteruses allowed septa resection to be reproduced in conditions close to human surgery. CONCLUSION: Teaching operative and diagnostic hysteroscopy is essential. Managing this training through a full-day workshop using a combined animal model and virtual reality simulation is an efficient model not described before. Copyright © 2017 Elsevier B.V. All rights reserved.
Constant Head Evaluation of Full Scale Soil Absorption Fields
NASA Astrophysics Data System (ADS)
Dix, S. P.
2001-05-01
Design loading rates for septic tank effluent in trenches of various designs with different geometry and media have been debated for decades. The role of bottom and sidewall absorption is a hot topic, with many opinions offered by experts in the fields of agricultural and environmental engineering. Research institutions have conducted numerous studies and developed procedures for measuring both test systems and the fundamentals of soil hydraulics. Falling head tests have been used more recently to evaluate mature test cells and to assess both sidewall and basal absorption (Keys et al.). The proposed paper will discuss the design and testing of a constant head permeameter. Testing of this equipment and development of the test protocol were followed by application of the procedure to a number of residential systems in both sandy and clay loam soils. Results from this testing showed the steps that must be taken to use this equipment reliably. The results also show the variability and consistency of absorption, the changes in absorption when systems are flooded above their equilibrium condition, and the longer-term changes that occur when trenches are rested in a warm climate. More recent applications of the test procedure evaluated the effects of head and increased sidewall depth on absorption rates when the effluent level in the trenches was raised. Future modification of the test equipment and procedure by integrating a data logger will permit more exact recording of dose cycles and improved estimates of soil absorption efficiency over time.
Has the gap between pancreas and islet transplantation closed?
Niclauss, Nadja; Morel, Philippe; Berney, Thierry
2014-09-27
Both pancreas and islet transplantations are therapeutic options for complicated type 1 diabetes. Until recent years, outcomes of islet transplantation have been significantly inferior to those of whole pancreas. Islet transplantation is primarily performed alone in patients with severe hypoglycemia, and recent registry reports have suggested that results of islet transplantation alone in this indication may be about to match those of pancreas transplant alone in insulin independence. Figures of 50% insulin independence at 5 years for either procedure have been cited. In this article, we address the question whether islet transplantation has indeed bridged the gap with whole pancreas. Looking at the evidence to answer this question, we propose that although pancreas may still be more efficient in taking recipients off insulin than islets, there are in fact numerous "gaps" separating both procedures that must be taken into the equation. These "gaps" relate to organ utilization, organ allocation, indication for transplantation, and morbidity. In-depth analysis reveals that islet transplantation, in fact, has an edge on whole pancreas in some of these aspects. Accordingly, attempts should be made to bridge these gaps from both sides to achieve the same level of success with either procedure. More realistically, it is likely that some of these gaps will remain and that both procedures will coexist and complement each other, to ensure that β cell replacement can be successfully implemented in the greatest possible number of patients with type 1 diabetes.
Evolution of the concentration PDF in random environments modeled by global random walk
NASA Astrophysics Data System (ADS)
Suciu, Nicolae; Vamos, Calin; Attinger, Sabine; Knabner, Peter
2013-04-01
The evolution of the probability density function (PDF) of concentrations of chemical species transported in random environments is often modeled by ensembles of notional particles. The particles move in physical space along stochastic-Lagrangian trajectories governed by Ito equations, with drift coefficients given by the local values of the resolved velocity field and diffusion coefficients obtained by stochastic or space-filtering upscaling procedures. A general model for the sub-grid mixing also can be formulated as a system of Ito equations solving for trajectories in the composition space. The PDF is finally estimated by the number of particles in space-concentration control volumes. In spite of their efficiency, Lagrangian approaches suffer from two severe limitations. Since the particle trajectories are constructed sequentially, the demanded computing resources increase linearly with the number of particles. Moreover, the need to gather particles at the center of computational cells to perform the mixing step and to estimate statistical parameters, as well as the interpolation of various terms to particle positions, inevitably produce numerical diffusion in either particle-mesh or grid-free particle methods. To overcome these limitations, we introduce a global random walk method to solve the system of Ito equations in physical and composition spaces, which models the evolution of the random concentration's PDF. The algorithm consists of a superposition on a regular lattice of many weak Euler schemes for the set of Ito equations. Since all particles starting from a site of the space-concentration lattice are spread in a single numerical procedure, one obtains PDF estimates at the lattice sites at computational costs comparable with those for solving the system of Ito equations associated to a single particle. The new method avoids the limitations concerning the number of particles in Lagrangian approaches, completely removes the numerical diffusion, and speeds up the computation by orders of magnitude. The approach is illustrated for the transport of passive scalars in heterogeneous aquifers, with hydraulic conductivity modeled as a random field.
Mass-conservative reconstruction of Galerkin velocity fields for transport simulations
NASA Astrophysics Data System (ADS)
Scudeler, C.; Putti, M.; Paniconi, C.
2016-08-01
Accurate calculation of mass-conservative velocity fields from numerical solutions of Richards' equation is central to reliable surface-subsurface flow and transport modeling, for example in long-term tracer simulations to determine catchment residence time distributions. In this study we assess the performance of a local Larson-Niklasson (LN) post-processing procedure for reconstructing mass-conservative velocities from a linear (P1) Galerkin finite element solution of Richards' equation. This approach, originally proposed for a-posteriori error estimation, modifies the standard finite element velocities by imposing local conservation on element patches. The resulting reconstructed flow field is characterized by continuous fluxes on element edges that can be efficiently used to drive a second order finite volume advective transport model. Through a series of tests of increasing complexity that compare results from the LN scheme to those using velocity fields derived directly from the P1 Galerkin solution, we show that a locally mass-conservative velocity field is necessary to obtain accurate transport results. We also show that the accuracy of the LN reconstruction procedure is comparable to that of the inherently conservative mixed finite element approach, taken as a reference solution, but that the LN scheme has much lower computational costs. The numerical tests examine steady and unsteady, saturated and variably saturated, and homogeneous and heterogeneous cases along with initial and boundary conditions that include dry soil infiltration, alternating solute and water injection, and seepage face outflow. Typical problems that arise with velocities derived from P1 Galerkin solutions include outgoing solute flux from no-flow boundaries, solute entrapment in zones of low hydraulic conductivity, and occurrences of anomalous sources and sinks. In addition to inducing significant mass balance errors, such manifestations often lead to oscillations in concentration values that can moreover cause the numerical solution to explode. These problems do not occur when using LN post-processed velocities.
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.
1991-01-01
Computations from two Navier-Stokes codes, NSS and F3D, are presented for a tangent-ogive-cylinder body at high angle of attack. Features of this steady flow include a pair of primary vortices on the leeward side of the body as well as secondary vortices. The topological and physical plausibility of this vortical structure is discussed. The accuracy of these codes is assessed by comparison of the numerical solutions with experimental data. The effects of turbulence model, numerical dissipation, and grid refinement are presented. The overall efficiency of these codes is also assessed by examining their convergence rates, computational time per time step, and maximum allowable time step for time-accurate computations. Overall, the numerical results from both codes compared equally well with experimental data; however, the NSS code was found to be significantly more efficient than the F3D code.
Aerodynamic design using numerical optimization
NASA Technical Reports Server (NTRS)
Murman, E. M.; Chapman, G. T.
1983-01-01
The procedure of using numerical optimization methods coupled with computational fluid dynamic (CFD) codes for the development of an aerodynamic design is examined. Several approaches that replace wind tunnel tests, develop pressure distributions and derive designs, or fulfill preset design criteria are presented. The method of Aerodynamic Design by Numerical Optimization (ADNO) is described and illustrated with examples.
Numerical Simulation of Two Phase Flows
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing
2001-01-01
Two-phase flows can be found in a broad range of situations in nature, biology, and industrial devices, and can involve diverse and complex mechanisms. While the physical models may be specific to certain situations, the mathematical formulation and numerical treatment for solving the governing equations can be general. Hence, we require not only information concerning each individual phase, as needed for a single phase, but also the interactions between them. These interaction terms, however, pose additional numerical challenges because they are beyond the basis that we use to construct modern numerical schemes, namely the hyperbolicity of the equations. Moreover, due to disparate differences in time scales, fluid compressibility and nonlinearity become acute, further complicating the numerical procedures. In this paper, we show the ideas and the procedure by which the AUSM-family schemes are extended for solving two-phase flow problems. Specifically, both phases are assumed to be in thermodynamic equilibrium, namely, the time scales involved in phase interactions are extremely short in comparison with those of fluid speeds and pressure fluctuations. Details of the numerical formulation and issues involved are discussed, and the effectiveness of the method is demonstrated for several industrial examples.
Recovery Discontinuous Galerkin Jacobian-free Newton-Krylov Method for all-speed flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
HyeongKae Park; Robert Nourgaliev; Vincent Mousseau
2008-07-01
There is increasing interest in developing next-generation simulation tools for advanced nuclear energy systems. These tools will utilize state-of-the-art numerical algorithms and computer science technology in order to maximize predictive capability, support advanced reactor designs, reduce uncertainty and increase safety margins. In analyzing nuclear energy systems, we are interested in compressible low-Mach-number, high-heat-flux flows with a wide range of Re, Ra, and Pr numbers. Under these conditions, the focus is placed on turbulent heat transfer, in contrast to other industries whose main interest is in capturing turbulent mixing. Our objective is to develop single-point turbulence closure models for large-scale engineering CFD codes, using Direct Numerical Simulation (DNS) or Large Eddy Simulation (LES) tools, requiring very accurate and efficient numerical algorithms. The focus of this work is placed on fully implicit, high-order spatiotemporal discretization based on the discontinuous Galerkin method solving the conservative form of the compressible Navier-Stokes equations. The method utilizes a local reconstruction procedure derived from the weak formulation of the problem, which is inspired by the recovery diffusion flux algorithm of van Leer and Nomura [?] and by the piecewise parabolic reconstruction [?] in the finite volume method. The developed methodology is integrated into the Jacobian-free Newton-Krylov framework [?] to allow a fully implicit solution of the problem.
NASA Astrophysics Data System (ADS)
Zhao, Huafeng; Zhou, Binwu; Wu, Xuecheng; Wu, Yingchun; Gao, Xiang; Gréhan, Gérard; Cen, Kefa
2014-04-01
Digital holography plays a key role in particle field measurement and appears to be a strong contender as the next-generation technology for diagnostics of 3D particle fields. However, various recording parameters, such as the recording distance, the particle size, the wavelength, the size of the CCD chip, the pixel size and the particle concentration, will affect the results of the reconstruction, and may even determine the success or failure of a measurement. This paper presents a numerical investigation of the effects of particle concentration and volume depth to evaluate the capability of digital holographic microscopy. Standard particle holograms with all recording parameters known are numerically generated using a common procedure based on Lorenz-Mie scattering theory. Reconstruction of those holograms is then performed by a wavelet-transform based method. Results show that the reconstruction efficiency decreases quickly until the particle concentration reaches 50×10^4 mm^-3, and then decreases linearly as the particle concentration increases from 50×10^4 mm^-3 to 860×10^4 mm^-3 in the same volume, with larger fluctuations over the first half of this range than over the second. The results also indicate that increasing the concentration raises the average diameter error and z-position error of the particles. In addition, the volume depth also plays a key role in the reconstruction.
van Det, M J; Meijerink, W J H J; Hoff, C; Middel, B; Pierie, J P E N
2013-08-01
INtraoperative Video Enhanced Surgical procedure Training (INVEST) is a new training method designed to improve the transition from basic skills training in a skills lab to procedural training in the operating theater. Traditionally, the master-apprentice model (MAM) is used for procedural training in the operating theater, but this model lacks uniformity and efficiency at the beginning of the learning curve. This study was designed to investigate the effectiveness and efficiency of INVEST compared to MAM. Ten surgical residents with no laparoscopic experience were recruited for a laparoscopic cholecystectomy training curriculum either with the MAM or with INVEST. After a uniform course in basic laparoscopic skills, each trainee performed six cholecystectomies that were digitally recorded. For 14 steps of the procedure, an observer who was blinded to the type of training determined whether the step was performed entirely by the trainee (2 points), partially by the trainee (1 point), or by the supervisor (0 points). Time measurements revealed the total procedure time and the amount of effective procedure time during which the trainee acted as the operating surgeon. Results were compared between both groups. Trainees in the INVEST group were awarded statistically significantly more points (115.8 vs. 70.2; p < 0.001) and performed more steps without the interference of the supervisor (46.6 vs. 18.8; p < 0.001). Total procedure time was not lengthened by INVEST, and the part performed by trainees was significantly larger (69.9 vs. 54.1 %; p = 0.004). INVEST enhances effectiveness and training efficiency for procedural training inside the operating theater without compromising operating theater time efficiency.
Poulain, Christophe A.; Finlayson, Bruce A.; Bassingthwaighte, James B.
2010-01-01
The analysis of experimental data obtained by the multiple-indicator method requires complex mathematical models for which capillary blood-tissue exchange (BTEX) units are the building blocks. This study presents a new, nonlinear, two-region, axially distributed, single capillary, BTEX model. A facilitated transporter model is used to describe mass transfer between plasma and intracellular spaces. To provide fast and accurate solutions, numerical techniques suited to nonlinear convection-dominated problems are implemented. These techniques are the random choice method, an explicit Euler-Lagrange scheme, and the MacCormack method with and without flux correction. The accuracy of the numerical techniques is demonstrated, and their efficiencies are compared. The random choice, Euler-Lagrange and plain MacCormack methods are the best numerical techniques for BTEX modeling. However, the random choice and Euler-Lagrange methods are preferred over the MacCormack method because they allow for the derivation of a heuristic criterion that makes the numerical methods stable without degrading their efficiency. Numerical solutions are also used to illustrate some nonlinear behaviors of the model and to show how the new BTEX model can be used to estimate parameters from experimental data. PMID:9146808
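As a concrete illustration of the class of convection-dominated schemes named in the abstract above, the following is a minimal sketch of the MacCormack predictor-corrector applied to 1D linear advection on a periodic grid; the grid size, velocity and initial pulse are illustrative assumptions, and the BTEX exchange terms are not included.

```python
# Minimal MacCormack predictor-corrector for 1D linear advection
# (a toy stand-in for the convection-dominated BTEX transport equations).
import numpy as np

nx, L = 200, 1.0                       # grid points, domain length (illustrative)
dx = L / nx
u = 1.0                                # constant advection velocity
dt = 0.8 * dx / u                      # CFL number 0.8
x = np.linspace(0.0, L, nx, endpoint=False)
c = np.exp(-200.0 * (x - 0.25) ** 2)   # initial concentration pulse

for _ in range(100):
    # predictor: forward difference in space
    c_star = c - u * dt / dx * (np.roll(c, -1) - c)
    # corrector: backward difference on the predicted field, then average
    c = 0.5 * (c + c_star - u * dt / dx * (c_star - np.roll(c_star, 1)))
```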
NASA Astrophysics Data System (ADS)
Tortora, Maxime M. C.; Doye, Jonathan P. K.
2017-12-01
We detail the application of bounding volume hierarchies to accelerate second-virial evaluations for arbitrary complex particles interacting through hard and soft finite-range potentials. This procedure, based on the construction of neighbour lists through the combined use of recursive atom-decomposition techniques and binary overlap search schemes, is shown to scale sub-logarithmically with particle resolution in the case of molecular systems with high aspect ratios. Its implementation within an efficient numerical and theoretical framework based on classical density functional theory enables us to investigate the cholesteric self-assembly of a wide range of experimentally relevant particle models. We illustrate the method through the determination of the cholesteric behavior of hard, structurally resolved twisted cuboids, and report quantitative evidence of the long-predicted phase handedness inversion with increasing particle thread angles near the phenomenological threshold value of 45°. Our results further highlight the complex relationship between microscopic structure and helical twisting power in such model systems, which may be attributed to subtle geometric variations of their chiral excluded-volume manifold.
LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.
2000-01-01
A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
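LSENS and LSODE themselves are not reproduced here, but the following hedged sketch shows the same class of implicit stiff integration applied to the classic Robertson kinetics problem using SciPy's LSODA interface; the rate constants are the standard textbook values and the tolerances are illustrative.

```python
# Stiff chemical kinetics (Robertson problem) integrated with an
# LSODA-type implicit solver; an analogue of the kind of "stiff" ODE
# systems LSENS hands to LSODE, not the LSENS code itself.
import numpy as np
from scipy.integrate import solve_ivp

def robertson(t, y):
    y1, y2, y3 = y
    return [-0.04 * y1 + 1.0e4 * y2 * y3,
             0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2 ** 2,
             3.0e7 * y2 ** 2]

sol = solve_ivp(robertson, (0.0, 1.0e5), [1.0, 0.0, 0.0],
                method="LSODA", rtol=1e-8, atol=[1e-10, 1e-14, 1e-10])
print(sol.y[:, -1])   # species fractions at the final time
```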
NASA Astrophysics Data System (ADS)
Akhtarianfar, S. F.; Ramazani, A.; Almasi-Kashi, M.; Montazer, A. H.
2018-05-01
Fabrication of different nanostructures based on template-assisted methods has become conventional, due to their numerous potential applications. In this paper, Fe nanowire arrays (NWAs) were fabricated using pulsed electrodeposition in porous anodic alumina (PAA) templates. The effect of alumina barrier layer conditions, such as the barrier layer temperature (BLT) and Cu pre-plating at the dendritic sections of the pores, on the electrodeposition efficiency (EE) and magnetic properties of Fe NWAs in two pH regimes (2.6 and 4.0) has been investigated. At pH 4.0, the BLT was varied from 4 to 32 °C, leading to an EE of approximately 60% at a BLT of 24 °C. Moreover, to overcome the problem of a low EE (about 2%) at pH 2.6, Cu pre-plating was performed with deposition current densities of 25 and 35 mA/cm2. This procedure increased the EE up to about 40%, providing a promising approach to enhancing the EE in the fabrication of Fe NWAs. Furthermore, a nearly constant trend of magnetic properties was observed for the highly crystalline Fe NWs.
Development and application of unified algorithms for problems in computational science
NASA Technical Reports Server (NTRS)
Shankar, Vijaya; Chakravarthy, Sukumar
1987-01-01
A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm will be one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency aspects; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, along with their applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.
Tripartite equilibrium strategy for a carbon tax setting problem in air passenger transport.
Xu, Jiuping; Qiu, Rui; Tao, Zhimiao; Xie, Heping
2018-03-01
Carbon emissions in air passenger transport have become increasingly serious with the rapid development of the aviation industry. Combined with a tripartite equilibrium strategy, this paper proposes a multi-level multi-objective model for an air passenger transport carbon tax setting problem (CTSP) among an international organization, an airline and passengers under fuzzy uncertainty. The proposed model is simplified to an equivalent crisp model by a weighted sum procedure and a Karush-Kuhn-Tucker (KKT) transformation method. To solve the equivalent crisp model, a fuzzy logic controlled genetic algorithm with entropy-Boltzmann selection (FLC-GA with EBS) is designed as an integrated solution method. Then, a numerical example is provided to demonstrate the practicality and efficiency of the optimization method. Results show that the cap tax mechanism is an important part of air passenger transport carbon emission mitigation and thus should be effectively applied to air passenger transport. These results also indicate that the proposed method can provide efficient ways of mitigating carbon emissions for air passenger transport, and can therefore assist decision makers in formulating relevant strategies under multiple scenarios.
NASA Astrophysics Data System (ADS)
Samaras, Stefanos; Böckmann, Christine; Nicolae, Doina
2016-06-01
In this work we propose a two-step advancement of the Mie spherical-particle model accounting for particle non-sphericity. First, a naturally two-dimensional (2D) generalized model (GM) is constructed, which further triggers analogous 2D re-definitions of the microphysical parameters. We consider a spheroidal-particle approach where the size distribution is additionally dependent on aspect ratio. Second, we incorporate the notion of a sphere-spheroid particle mixture (PM) weighted by a non-sphericity percentage. The efficiency of these two models is investigated by running synthetic data retrievals with two different regularization methods to account for the inherent instability of the inversion procedure. Our preliminary studies show that a retrieval with the PM model improves the fitting errors and the microphysical parameter retrieval, and it has at least the same efficiency as the GM. While the general trend of the initial size distributions is captured in our numerical experiments, the reconstructions are subject to artifacts. Finally, our approach is applied to a measurement case yielding acceptable results.
Flow-Induced Mitral Leaflet Motion in Hypertrophic Cardiomyopathy
NASA Astrophysics Data System (ADS)
Meschini, Valentina; Mittal, Rajat; Verzicco, Roberto
2017-11-01
Hypertrophic cardiomyopathy (HCM) is considered a leading cause of sudden cardiac death in developed countries. Clinically it is found to be related to thickening of the intra-ventricular septum combined with elongated mitral leaflets. During systole, the low pressure induced by the abnormal velocities in the narrowed aortic channel can attract one or both mitral leaflets, causing aortic obstruction and sometimes instantaneous death. In this paper a fluid-structure interaction model for the flow in the left ventricle with a native mitral valve is employed to investigate the physio-pathology of HCM. The problem is studied using direct numerical simulations of the Navier-Stokes equations with a two-way coupled structural solver based on an interaction potential approach for the structure dynamics. Simulations are performed for two different degrees of hypertrophy and two values of pumping efficiency. The leaflet dynamics and the ventricle deformation observed in echocardiography of patients affected by HCM are well captured by the simulations. Moreover, the procedures of leaflet plication and septal myectomy are simulated in order to gain insight into the efficiency and reliability of such surgery.
TURNS - A free-wake Euler/Navier-Stokes numerical method for helicopter rotors
NASA Technical Reports Server (NTRS)
Srinivasan, G. R.; Baeder, J. D.
1993-01-01
Computational capabilities of a numerical procedure, called TURNS (transonic unsteady rotor Navier-Stokes), to calculate the aerodynamics and acoustics (high-speed impulsive noise) out to several rotor diameters are summarized. The procedure makes it possible to obtain the aerodynamics and acoustics information in one single calculation. The vortical wake and its influence, as well as the acoustics, are captured as part of the overall flowfield solution. The accuracy and suitability of the TURNS method are demonstrated through comparisons with experimental data.
Lattice Boltzmann for Simulation of Gases Mixture in Fruit Storage Chambers
NASA Astrophysics Data System (ADS)
Fabero, J. C.; Barreiro, P.; Casasús, L.
2003-04-01
Fluid dynamics can be modelled through the Navier-Stokes equations. This description corresponds to a macroscopic definition of fluid motion phenomena. During the past 20 years, new simulation procedures have been emerging from the Statistical Physics and Computer Science domains. One of them is the Lattice Gas Cellular Automata (LGCA) method. This approach, which is considered to be a microscopic description of the world, in spite of its intuitiveness and numerical efficiency, fails to simulate the real Navier-Stokes equations. Another classical simulation procedure for fluid motion phenomena is the so-called Lattice Boltzmann method [1]. This corresponds to a meso-scale description of the world [2]. Simulation of laminar and turbulent motions of fluids, especially when considering several gas species, is still ongoing research [3]. Nowadays, the use of Low Oxygen and Ultra Low Oxygen Controlled Atmospheres is recognized as a reliable method to extend the storage life of fruits and vegetables. However, small spatial gradients in gas concentration during storage may generate internal disorders in the commodities. In this work, four different gases are considered: oxygen, carbon dioxide, water vapor and ethylene. Physiological effects such as transpiration, which affects the level of water vapor, respiration, which modifies both oxygen and carbon dioxide concentrations, and ethylene emission must be taken into account in the whole model. The numerical model, based on that proposed by Shan and Chen, is implemented and is able to consider the behavior of multiple miscible gas species. Forced air motion, needed to obtain correct ventilation of the chamber, has also been modelled.
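As a much-simplified illustration of the meso-scale approach described above, the following sketch advances a single-component D2Q9 lattice Boltzmann (BGK) fluid on a periodic box; the multi-component Shan-Chen coupling and the gas-exchange physiology of the storage-chamber model are deliberately omitted, and all parameter values are illustrative.

```python
# Minimal single-component D2Q9 lattice Boltzmann (BGK) loop:
# stream, compute moments, relax toward local equilibrium.
import numpy as np

nx, ny, tau = 64, 64, 0.8                       # grid and relaxation time (illustrative)
cx = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])   # D2Q9 lattice velocities
cy = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)        # lattice weights

def equilibrium(rho, ux, uy):
    cu = cx[:, None, None]*ux + cy[:, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)

rho = np.ones((nx, ny)); rho[nx//2, ny//2] = 1.1     # small density bump
ux = np.zeros((nx, ny)); uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for _ in range(200):
    for k in range(9):                               # streaming (periodic box)
        f[k] = np.roll(np.roll(f[k], cx[k], axis=0), cy[k], axis=1)
    rho = f.sum(axis=0)                              # macroscopic moments
    ux = (cx[:, None, None]*f).sum(axis=0)/rho
    uy = (cy[:, None, None]*f).sum(axis=0)/rho
    f += (equilibrium(rho, ux, uy) - f)/tau          # BGK collision
print("total mass (conserved):", rho.sum())
```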
Eiben, Bjoern; Hipwell, John H.; Williams, Norman R.; Keshtgar, Mo; Hawkes, David J.
2016-01-01
Surgical treatment for early-stage breast carcinoma primarily necessitates breast conserving therapy (BCT), where the tumour is removed while preserving the breast shape. To date, there have been very few attempts to develop accurate and efficient computational tools that could be used in the clinical environment for pre-operative planning and oncoplastic breast surgery assessment. Moreover, from the breast cancer research perspective, there has been very little effort to model complex mechano-biological processes involved in wound healing. We address this by providing an integrated numerical framework that can simulate the therapeutic effects of BCT over the extended period of treatment and recovery. A validated, three-dimensional, multiscale finite element procedure that simulates breast tissue deformations and physiological wound healing is presented. In the proposed methodology, a partitioned, continuum-based mathematical model for tissue recovery and angiogenesis, and breast tissue deformation is considered. The effectiveness and accuracy of the proposed numerical scheme is illustrated through patient-specific representative examples. Wound repair and contraction numerical analyses of real MRI-derived breast geometries are investigated, and the final predictions of the breast shape are validated against post-operative follow-up optical surface scans from four patients. Mean (standard deviation) breast surface distance errors in millimetres of 3.1 (±3.1), 3.2 (±2.4), 2.8 (±2.7) and 4.1 (±3.3) were obtained, demonstrating the ability of the surgical simulation tool to predict, pre-operatively, the outcome of BCT to clinically useful accuracy. PMID:27466815
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voronov, D.L.; Warwick, T.; Gullikson, E. M.
2016-07-27
High-resolution Resonant Inelastic X-ray Scattering (RIXS) requires diffraction gratings with very exacting characteristics. The gratings should provide both very high dispersion and high efficiency which are conflicting requirements and extremely challenging to satisfy in the soft x-ray region for a traditional grazing incidence geometry. To achieve high dispersion one should increase the groove density of a grating; this however results in a diffraction angle beyond the critical angle range and results in drastic efficiency loss. The problem can be solved by use of multilayer coated blazed gratings (MBG). In this work we have investigated the diffraction characteristics of MBGs via numerical simulations and have developed a procedure for optimization of grating design for a multiplexed high resolution imaging spectrometer for RIXS spectroscopy to be built in sector 6 at the Advanced Light Source (ALS). We found that highest diffraction efficiency can be achieved for gratings optimized for 4th or 5th order operation. Fabrication of such gratings is an extremely challenging technological problem. We present a first experimental prototype of these gratings and report its performance. High order and high line density gratings have the potential to be a revolutionary new optical element that should have great impact in the area of soft x-ray RIXS.
Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.
Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung
2017-04-01
Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, successfully translating this natural phenomenon to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to equally allocate computational effort among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.
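The core OCBA idea referenced above can be sketched as follows: given current sample means and standard deviations of the particles' noisy fitness values, extra evaluations are allocated according to the classical OCBA ratios, so that most of the budget goes to the apparent best particle and its close competitors. The function below is a hedged illustration of that allocation rule only, not the paper's integrated PSO algorithm; all numbers in the example are invented.

```python
# OCBA-style allocation of an additional sampling budget (maximization).
import numpy as np

def ocba_allocation(means, stds, budget):
    """Return integer numbers of extra fitness samples per particle."""
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    b = int(np.argmax(means))                    # current observed best particle
    delta = means[b] - means                     # optimality gaps
    ratio = np.zeros_like(means)
    others = np.arange(len(means)) != b
    ratio[others] = (stds[others] / delta[others]) ** 2
    ratio[b] = stds[b] * np.sqrt(np.sum((ratio[others] / stds[others]) ** 2))
    alloc = budget * ratio / ratio.sum()
    return np.floor(alloc).astype(int)

# Example: five particles, noisy fitness estimates, 100 extra evaluations
print(ocba_allocation([1.0, 1.2, 0.8, 1.19, 0.5], [0.3] * 5, 100))
```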
Optical design of an in vivo laparoscopic lighting system.
Liu, Xiaolong; Abdolmalaki, Reza Yazdanpanah; Mancini, Gregory J; Tan, Jindong
2017-12-01
This paper proposes an in vivo laparoscopic lighting system design to address the illumination issues, namely poor lighting uniformity and low optical efficiency, existing in the state-of-the-art in vivo laparoscopic cameras. The transformable design of the laparoscopic lighting system is capable of carrying purposefully designed freeform optical lenses for achieving lighting performance with high illuminance uniformity and high optical efficiency in a desired target region. To design freeform optical lenses for extended light sources such as LEDs with Lambertian light intensity distributions, we present an effective and complete freeform optical design method. The procedures include (1) ray map computation by numerically solving a standard Monge-Ampere equation; (2) initial freeform optical surface construction by using Snell's law and a lens volume restriction; (3) correction of surface normal vectors due to accumulated errors from the initially constructed surfaces; and (4) feedback modification of the solution to deal with degraded illuminance uniformity caused by the extended sizes of the LEDs. We employed an optical design software package to evaluate the performance of our laparoscopic lighting system design. The simulation results show that our design achieves greater than 95% illuminance uniformity and greater than 89% optical efficiency (considering Fresnel losses) for illuminating the target surgical region. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Li, Daojin; Yin, Danyang; Chen, Yang; Liu, Zhen
2017-05-19
Protein phosphorylation is a major post-translational modification, which plays a vital role in cellular signaling of numerous biological processes. Mass spectrometry (MS) has been an essential tool for the analysis of protein phosphorylation, for which a key step is the selective enrichment of phosphopeptides from complex biological samples. In this study, a metal-organic frameworks (MOFs)-based monolithic capillary has been successfully prepared as an effective sorbent for the selective enrichment of phosphopeptides and has been off-line coupled with matrix-assisted laser desorption ionization-time-of-flight mass spectrometry (MALDI-TOF MS) for efficient analysis of phosphopeptides. Using casein as a representative phosphoprotein, efficient phosphorylation analysis by this off-line platform was verified. Phosphorylation analysis of a nonfat milk sample was also demonstrated. Through introducing the large surface areas and highly ordered pores of MOFs into a monolithic column, the MOFs-based monolithic capillary exhibited several significant advantages, such as excellent selectivity toward phosphopeptides, superb tolerance to interference and a simple operation procedure. Because of these highly desirable properties, the MOFs-based monolithic capillary could be a useful tool for protein phosphorylation analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Cai, Yong; Cui, Xiangyang; Li, Guangyao; Liu, Wenyang
2018-04-01
The edge-smooth finite element method (ES-FEM) can improve the computational accuracy of triangular shell elements and the mesh partition efficiency of complex models. In this paper, an approach is developed to perform explicit finite element simulations of contact-impact problems with a graphical processing unit (GPU) using a special edge-smooth triangular shell element based on ES-FEM. Of critical importance for this problem is achieving finer-grained parallelism to enable efficient data loading and to minimize communication between the device and host. Four kinds of parallel strategies are then developed to efficiently solve these ES-FEM based shell element formulas, and various optimization methods are adopted to ensure aligned memory access. Special focus is dedicated to developing an approach for the parallel construction of edge systems. A parallel hierarchy-territory contact-searching algorithm (HITA) and a parallel penalty function calculation method are embedded in this parallel explicit algorithm. Finally, the program flow is well designed, and a GPU-based simulation system is developed, using Nvidia's CUDA. Several numerical examples are presented to illustrate the high quality of the results obtained with the proposed methods. In addition, the GPU-based parallel computation is shown to significantly reduce the computing time.
A Second Law Based Unstructured Finite Volume Procedure for Generalized Flow Simulation
NASA Technical Reports Server (NTRS)
Majumdar, Alok
1998-01-01
An unstructured finite volume procedure has been developed for steady and transient thermo-fluid dynamic analysis of fluid systems and components. The procedure is applicable to a flow network consisting of pipes and various fittings where flow is assumed to be one dimensional. It can also be used to simulate flow in a component by modeling a multi-dimensional flow using the same numerical scheme. The flow domain is discretized into a number of interconnected control volumes located arbitrarily in space. The conservation equations for each control volume account for the transport of mass, momentum and entropy from the neighboring control volumes. In addition, they also include the sources of each conserved variable and time dependent terms. The source term of the entropy equation contains entropy generation due to heat transfer and fluid friction. Thermodynamic properties are computed from the equation of state of a real fluid. The system of equations is solved by a hybrid numerical method which is a combination of simultaneous Newton-Raphson and successive substitution schemes. The paper also describes the application and verification of the procedure by comparing its predictions with the analytical and numerical solution of several benchmark problems.
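A minimal sketch of the kind of one-dimensional network solve described above is given below: two interior nodes between fixed boundary pressures, branch flows following an assumed square-root pressure-drop law, and a Newton-Raphson iteration on the nodal mass balances with a finite-difference Jacobian. The network, the flow law and all coefficients are illustrative assumptions, not the paper's formulation (which also transports momentum and entropy).

```python
# Newton-Raphson solve of a toy 1D flow network: fixed boundary pressures,
# two unknown interior node pressures, branch flow Q = C*sign(dp)*sqrt(|dp|).
import numpy as np

p_in, p_out = 100.0, 0.0          # boundary pressures (illustrative)
C = np.array([1.0, 0.8, 1.2])     # branch flow coefficients (illustrative)

def branch_flow(dp, c):
    return c * np.sign(dp) * np.sqrt(abs(dp))

def residual(p):
    p1, p2 = p
    q01 = branch_flow(p_in - p1, C[0])
    q12 = branch_flow(p1 - p2, C[1])
    q23 = branch_flow(p2 - p_out, C[2])
    return np.array([q01 - q12, q12 - q23])   # mass balance at nodes 1 and 2

p = np.array([60.0, 30.0])                    # initial guess
for _ in range(20):
    r = residual(p)
    if np.linalg.norm(r) < 1e-10:
        break
    J = np.empty((2, 2))                      # finite-difference Jacobian
    for j in range(2):
        dp = np.zeros(2); dp[j] = 1e-6
        J[:, j] = (residual(p + dp) - r) / 1e-6
    p -= np.linalg.solve(J, r)
print("interior node pressures:", p)
```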
NASA Astrophysics Data System (ADS)
MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.
2015-09-01
Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives in which all previous time points contribute to the current iteration. In general, numerical approaches that depend on truncating part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
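The following sketch contrasts the full Grünwald-Letnikov (GL) memory sum with a crude "sparsified history" variant in the spirit of the adaptive-memory idea: distant time points are sampled with a longer stride and stand in for their skipped neighbours. The stride rule and test function are illustrative assumptions, not the authors' weighting scheme.

```python
# Full GL fractional derivative versus a coarsely sampled far-history variant.
import numpy as np

def gl_weights(alpha, n):
    w = np.empty(n + 1); w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)   # recursive binomial weights
    return w

def gl_derivative_full(y, h, alpha):
    n = len(y) - 1
    w = gl_weights(alpha, n)
    return h ** (-alpha) * np.dot(w, y[::-1])         # sum_k w_k * y_{n-k}

def gl_derivative_sparse(y, h, alpha, recent=32, stride=4):
    n = len(y) - 1
    w = gl_weights(alpha, n)
    acc = sum(w[k] * y[n - k] for k in range(min(recent, n) + 1))
    for k in range(recent + 1, n + 1, stride):        # far history, coarser sampling
        acc += w[k:k + stride].sum() * y[n - k]       # skipped neighbours ~ sampled value
    return h ** (-alpha) * acc

t = np.linspace(0.0, 1.0, 513)
y = t ** 2
print(gl_derivative_full(y, t[1], 0.5), gl_derivative_sparse(y, t[1], 0.5))
```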
Efficient numerical method for analyzing optical bistability in photonic crystal microcavities.
Yuan, Lijun; Lu, Ya Yan
2013-05-20
Nonlinear optical effects can be enhanced by photonic crystal microcavities and be used to develop practical ultra-compact optical devices with low power requirements. The finite-difference time-domain method is the standard numerical method for simulating nonlinear optical devices, but it has limitations in terms of accuracy and efficiency. In this paper, a rigorous and efficient frequency-domain numerical method is developed for analyzing nonlinear optical devices where the nonlinear effect is concentrated in the microcavities. The method replaces the linear problem outside the microcavities by a rigorous and numerically computed boundary condition, then solves the nonlinear problem iteratively in a small region around the microcavities. Convergence of the iterative method is much easier to achieve since the size of the problem is significantly reduced. The method is presented for a specific two-dimensional photonic crystal waveguide-cavity system with a Kerr nonlinearity, using numerical methods that can take advantage of the geometric features of the structure. The method is able to calculate multiple solutions exhibiting the optical bistability phenomenon in the strongly nonlinear regime.
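The multiplicity of steady states mentioned above can be illustrated with a toy single-mode Kerr cavity, for which the steady-state intracavity intensity I obeys the cubic I[1 + (theta - I)^2] = P (theta the detuning, P the pump); for suitable detuning the cubic has three positive roots, i.e. coexisting solutions. This scalar model is only a stand-in for the waveguide-cavity computation of the paper, and the parameter values are illustrative.

```python
# Multiple steady states of a toy single-mode Kerr cavity (optical bistability).
import numpy as np

theta = 3.0                                # cavity detuning (illustrative)
for P in np.linspace(0.5, 10.0, 20):       # sweep the pump power
    # cubic: I^3 - 2*theta*I^2 + (theta^2 + 1)*I - P = 0
    roots = np.roots([1.0, -2.0 * theta, theta ** 2 + 1.0, -P])
    stable = sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    print(f"P = {P:5.2f}  intracavity intensities = {stable}")
```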
Boundary condition computational procedures for inviscid, supersonic steady flow field calculations
NASA Technical Reports Server (NTRS)
Abbett, M. J.
1971-01-01
Results are given of a comparative study of numerical procedures for computing solid wall boundary points in supersonic inviscid flow calculations. Twenty-five different calculation procedures were tested on two sample problems: a simple expansion wave and a simple compression (two-dimensional steady flow). A simple calculation procedure was developed. The merits and shortcomings of the various procedures are discussed, along with complications for three-dimensional and time-dependent flows.
Shen, Yi; Dai, Wei; Richards, Virginia M
2015-03-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations for parameter configurations are given.
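The toolbox itself is in MATLAB, but the update mechanics can be sketched in a few lines: a logistic psychometric function with threshold, slope and lapse parameters, a grid over those parameters, a log-likelihood update after every trial, and (as a simplification of the toolbox's sweet-point logic) the next stimulus placed at the current maximum-likelihood threshold. The simulated observer and all grid choices are illustrative assumptions, not the toolbox's defaults.

```python
# Grid-based updated maximum-likelihood track for a logistic psychometric function.
import numpy as np

def psychometric(x, alpha, beta, lam, gamma=0.5):
    return gamma + (1.0 - gamma - lam) / (1.0 + np.exp(-beta * (x - alpha)))

alphas = np.linspace(-10, 10, 61)          # threshold grid
betas = np.logspace(-1, 1, 21)             # slope grid
lams = np.array([0.0, 0.02, 0.05, 0.1])    # lapse-rate grid
A, B, L = np.meshgrid(alphas, betas, lams, indexing="ij")
loglik = np.zeros_like(A)

rng = np.random.default_rng(0)
true = dict(alpha=1.5, beta=1.0, lam=0.02)   # simulated observer (toy)
x = 0.0                                      # first stimulus level
for trial in range(200):
    correct = rng.random() < psychometric(x, **true)
    p = psychometric(x, A, B, L)
    loglik += np.log(np.clip(p if correct else 1.0 - p, 1e-12, 1.0))
    i, j, k = np.unravel_index(np.argmax(loglik), loglik.shape)
    x = alphas[i]                            # next trial at the ML threshold
print("estimated threshold, slope, lapse:", alphas[i], betas[j], lams[k])
```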
Multi-fidelity stochastic collocation method for computation of statistical moments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Xueyu, E-mail: xueyu-zhu@uiowa.edu; Linebarger, Erin M., E-mail: aerinline@sci.utah.edu; Xiu, Dongbin, E-mail: xiu.16@osu.edu
We present an efficient numerical algorithm to approximate the statistical moments of stochastic problems in the presence of models with different fidelities. The method extends a previously developed multi-fidelity approximation method. By combining the efficiency of low-fidelity models and the accuracy of high-fidelity models, our method exhibits fast convergence with a limited number of high-fidelity simulations. We establish an error bound for the method and present several numerical examples to demonstrate the efficiency and applicability of the multi-fidelity algorithm.
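The collocation algorithm itself is not reproduced here; the sketch below instead shows a simpler, related idea with the same flavour, a multi-fidelity control-variate Monte Carlo estimate of a mean, in which a cheap low-fidelity model is sampled heavily and an expensive high-fidelity model only a few times. Both "models" are invented toys, and this is explicitly not the method of the abstract.

```python
# Multi-fidelity control-variate estimate of a statistical moment (the mean).
import numpy as np

rng = np.random.default_rng(1)
high = lambda z: np.exp(0.3 * z) + 0.05 * np.sin(5 * z)   # expensive model (toy)
low  = lambda z: 1.0 + 0.3 * z + 0.045 * z ** 2           # cheap surrogate (toy)

z_hi = rng.standard_normal(50)        # few high-fidelity samples
z_lo = rng.standard_normal(50_000)    # many low-fidelity samples

yh, yl = high(z_hi), low(z_hi)
beta = np.cov(yh, yl)[0, 1] / np.var(yl)        # control-variate coefficient
mean_mf = yh.mean() + beta * (low(z_lo).mean() - yl.mean())

print("high-fidelity only:", yh.mean())
print("multi-fidelity    :", mean_mf)
```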
NASA Technical Reports Server (NTRS)
Maccormack, R. W.
1978-01-01
The calculation of flow fields past aircraft configurations at flight Reynolds numbers is considered. Progress in devising accurate and efficient numerical methods, in understanding and modeling the physics of turbulence, and in developing reliable and powerful computer hardware is discussed. Emphasis is placed on efficient solutions to the Navier-Stokes equations.
Numerical convergence improvements for porflow unsaturated flow simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, Greg
2017-08-14
Section 3.6 of SRNL (2016) discusses various PORFLOW code improvements to increase modeling efficiency, in preparation for the next E-Area Performance Assessment (WSRC 2008) revision. This memorandum documents interaction with Analytic & Computational Research, Inc. (http://www.acricfd.com/default.htm) to improve numerical convergence efficiency using PORFLOW version 6.42 for unsaturated flow simulations.
Multivariate Analysis of the Visual Information Processing of Numbers
ERIC Educational Resources Information Center
Levine, David M.
1977-01-01
Nonmetric multidimensional scaling and hierarchical clustering procedures are applied to a confusion matrix of numerals. Two dimensions were interpreted: straight versus curved, and locus of curvature. Four major clusters of numerals were developed. (Author/JKS)
The Dysfunctions of Bureaucratic Structure.
ERIC Educational Resources Information Center
Duttweiler, Patricia Cloud
1988-01-01
Numerous dysfunctions result from bureaucratic school organization, including an overemphasis on specialized tasks, routine operating rules, and formal procedures for managing teaching and learning. Such schools are characterized by numerous regulations; formal communications; centralized decision making; and sharp distinctions among…
How [NOT] to Measure a Solar Cell to Get the Highest Efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emery, Keith
The multibillion-dollar photovoltaic (PV) industry sells products by the watt; the calibration labs measure this parameter at the cell and module level with the lowest possible uncertainty of 1-2 percent. The methods and procedures to achieve a measured 50 percent efficiency on a thin-film solar cell are discussed. This talk will describe methods that ignore procedures that increase the uncertainty. Your questions will be answered concerning 'Everything you Always Wanted to Know about Efficiency Enhancements But Were Afraid to Ask.' The talk will cover a step-by-step procedure using examples found in literature or encountered in customer samples by the National Renewable Energy Laboratory's (NREL's) PV Performance Characterization Group on how to artificially enhance the efficiency. The procedures will describe methods that have been used to enhance the current voltage and fill factor.
NASA Astrophysics Data System (ADS)
Feskov, Serguei V.; Ivanov, Anatoly I.
2018-03-01
An approach to the construction of diabatic free energy surfaces (FESs) for ultrafast electron transfer (ET) in a supramolecule with an arbitrary number of electron localization centers (redox sites) is developed, supposing that the reorganization energies for the charge transfers and shifts between all these centers are known. Dimensionality of the coordinate space required for the description of multistage ET in this supramolecular system is shown to be equal to N - 1, where N is the number of the molecular centers involved in the reaction. The proposed algorithm of FES construction employs metric properties of the coordinate space, namely, the relation between the solvent reorganization energy and the distance between the two FES minima. In this space, the ET reaction coordinate z_nn' associated with electron transfer between the nth and n'th centers is calculated through projection onto the direction connecting the FES minima. The energy-gap reaction coordinates z_nn' corresponding to different ET processes are in general not orthogonal, so that ET between two molecular centers can create a nonequilibrium distribution not only along its own reaction coordinate but along other reaction coordinates as well. This results in the influence of the preceding ET steps on the kinetics of the ensuing ET. This effect is important when the ensuing reaction is ultrafast and proceeds in parallel with relaxation along the ET reaction coordinates. Efficient algorithms for numerical simulation of multistage ET within the stochastic point-transition model are developed. The algorithms are based on the Brownian simulation technique with the recrossing-event detection procedure. The main advantages of the numerical method are (i) its computational complexity is linear with respect to the number of electronic states involved and (ii) calculations can be naturally parallelized up to the level of individual trajectories. The efficiency of the proposed approach is demonstrated for a model supramolecular system involving four redox centers.
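A hedged sketch of the metric construction described above: given pairwise reorganization energies lambda_nn' between N redox centers, the FES minima can be placed in an (N-1)-dimensional space by classical multidimensional scaling, and the energy-gap reaction coordinates taken as the (generally non-orthogonal) unit vectors joining the minima. The distance convention d_nn' = sqrt(2*lambda_nn') assumes unit-curvature parabolic surfaces and, like the numerical values, is an illustrative assumption rather than the paper's exact prescription.

```python
# Place FES minima from pairwise reorganization energies via classical MDS,
# then inspect the angle between two energy-gap reaction coordinates.
import numpy as np

lam = np.array([[0.0,   0.5,   0.625, 0.75 ],   # illustrative reorganization
                [0.5,   0.0,   0.625, 0.75 ],   # energies (eV) for 4 centers
                [0.625, 0.625, 0.0,   0.625],
                [0.75,  0.75,  0.625, 0.0  ]])
D2 = 2.0 * lam                             # squared distances, d^2 = 2*lambda (assumed)

n = len(lam)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J                      # double-centred Gram matrix
w, V = np.linalg.eigh(B)
keep = w > 1e-10
X = V[:, keep] * np.sqrt(w[keep])          # coordinates of the minima (<= N-1 dims)

e01 = (X[1] - X[0]) / np.linalg.norm(X[1] - X[0])   # reaction coordinate z_01
e12 = (X[2] - X[1]) / np.linalg.norm(X[2] - X[1])   # reaction coordinate z_12
print("cos(angle between z_01 and z_12):", float(e01 @ e12))
```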
NASA Astrophysics Data System (ADS)
Nagel, T.; Böttcher, N.; Görke, U. J.; Kolditz, O.
2014-12-01
The design process of geotechnical installations includes the application of numerical simulation tools for safety assessment, dimensioning and long term effectiveness estimations. Underground salt caverns can be used for the storage of natural gas, hydrogen, oil, waste or compressed air. For their design one has to take into account fluctuating internal pressures due to different levels of filling, the stresses imposed by the surrounding rock mass, irregular geometries and possibly heterogeneous material properties [3] in order to estimate long term cavern convergence as well as locally critical wall stresses. Constitutive models applied to rock salt are usually viscoplastic in nature and most often based on a Burgers-type rheological model extended by non-linear viscosity functions and/or plastic friction elements. Besides plastic dilatation, healing and damage are sometimes accounted for as well [2]. The scales of the geotechnical system to be simulated and the laboratory tests from which material parameters are determined are vastly different. The most common material testing modalities to determine material parameters in geoengineering are the uniaxial and the triaxial compression tests. Some constitutive formulations in widespread use are formulated based on equivalent rather than tensorial quantities valid under these specific test conditions and are subsequently applied to heterogeneous underground systems and complex 3D load cases. We show here that this procedure is inappropriate and can lead to erroneous results. We further propose alternative formulations of the constitutive models in question that restore their validity under arbitrary loading conditions. For an efficient numerical simulation, the discussed constitutive models are integrated locally with a Newton-Raphson algorithm that directly provides the algorithmically consistent tangent matrix for the global Newton iteration of the displacement based finite element formulation. Finally, the finite element implementations of the proposed constitutive formulations are employed to simulate an underground salt cavern used for compressed air energy storage with OpenGeoSys [1]. Transient convergence and stress fields are evaluated for typical fluctuating operation pressure regimes.
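As a one-dimensional illustration of the local integration scheme mentioned above, the sketch below updates the stress of a Maxwell-type element with a Norton power-law creep rate by backward Euler, using a local Newton iteration that also returns the algorithmically consistent tangent for the global solve. The material constants are toy values, not rock-salt parameters, and the tensorial constitutive models of the abstract are not reproduced.

```python
# Local Newton stress update for a 1D Maxwell element with Norton creep:
#   eps_dot_viscous = A * |sigma|^(n-1) * sigma, backward-Euler in time.
import numpy as np

E, A, n = 25.0e3, 1.0e-7, 5.0        # modulus (MPa), creep coefficient, exponent (toy)

def stress_update(sigma_old, deps, dt, tol=1e-10, maxit=30):
    sigma = sigma_old + E * deps     # elastic predictor
    for _ in range(maxit):
        creep = A * abs(sigma) ** (n - 1) * sigma
        r = (sigma - sigma_old) / E + dt * creep - deps      # residual
        drds = 1.0 / E + dt * A * n * abs(sigma) ** (n - 1)  # d(residual)/d(sigma)
        sigma -= r / drds
        if abs(r) < tol:
            break
    tangent = 1.0 / drds             # consistent tangent d(sigma)/d(delta_eps)
    return sigma, tangent

sigma, strain = 0.0, 0.0
for step in range(10):               # drive with a constant strain rate
    deps = 1.0e-4
    sigma, Ct = stress_update(sigma, deps, dt=3600.0)
    strain += deps
    print(f"step {step}: strain = {strain:.4f}, stress = {sigma:8.3f} MPa")
```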
A Procedure Using Calculators to Express Answers in Fractional Form.
ERIC Educational Resources Information Center
Carlisle, Earnest
A procedure is described that enables students to perform operations on fractions with a calculator, expressing the answer as a fraction. Patterns using paper-and-pencil procedures for each operation with fractions are presented. A microcomputer software program illustrates how the answer can be found using integer values of the numerators and…
ERIC Educational Resources Information Center
Clinton, Elias; Clees, Tom J.
2015-01-01
Interspersal Procedures (IP) represent a group of interventions that imbed, at varying ratios, requests for individuals to exhibit mastered skills before or within sequences of requests for target skills. Interspersal Procedures include numerous strategies, such as high-probability request sequences, pre-task requests, and high-preference…
Numerical approximations for fractional diffusion equations via a Chebyshev spectral-tau method
NASA Astrophysics Data System (ADS)
Doha, Eid H.; Bhrawy, Ali H.; Ezz-Eldien, Samer S.
2013-10-01
In this paper, a class of fractional diffusion equations with variable coefficients is considered. An accurate and efficient spectral tau technique for solving the fractional diffusion equations numerically is proposed. This method is based upon the Chebyshev tau approximation together with the Chebyshev operational matrix of Caputo fractional differentiation. Such an approach has the advantage of reducing the problem to the solution of a system of algebraic equations, which may then be solved by any standard numerical technique. We apply this general method to solve four specific examples. In each of the examples considered, the numerical results show that the proposed method is of high accuracy and is efficient for solving the time-dependent fractional diffusion equations.
NASA Astrophysics Data System (ADS)
Toufik, Mekkaoui; Atangana, Abdon
2017-10-01
Recently a new concept of fractional differentiation with non-local and non-singular kernel was introduced in order to overcome the limitations of the conventional Riemann-Liouville and Caputo fractional derivatives. In this paper, a new numerical scheme has been developed for the newly established fractional differentiation. We present the error analysis in general terms. The new numerical scheme was applied to solve linear and non-linear fractional differential equations. In this method, no predictor-corrector is needed to obtain an efficient algorithm. The comparison of approximate and exact solutions leaves no doubt that the new numerical scheme is very efficient and converges toward the exact solution very rapidly.
Utility-based designs for randomized comparative trials with categorical outcomes
Murray, Thomas A.; Thall, Peter F.; Yuan, Ying
2016-01-01
A general utility-based testing methodology for design and conduct of randomized comparative clinical trials with categorical outcomes is presented. Numerical utilities of all elementary events are elicited to quantify their desirabilities. These numerical values are used to map the categorical outcome probability vector of each treatment to a mean utility, which is used as a one-dimensional criterion for constructing comparative tests. Bayesian tests are presented, including fixed sample and group sequential procedures, assuming Dirichlet-multinomial models for the priors and likelihoods. Guidelines are provided for establishing priors, eliciting utilities, and specifying hypotheses. Efficient posterior computation is discussed, and algorithms are provided for jointly calibrating test cutoffs and sample size to control overall type I error and achieve specified power. Asymptotic approximations for the power curve are used to initialize the algorithms. The methodology is applied to re-design a completed trial that compared two chemotherapy regimens for chronic lymphocytic leukemia, in which an ordinal efficacy outcome was dichotomized and toxicity was ignored to construct the trial’s design. The Bayesian tests also are illustrated by several types of categorical outcomes arising in common clinical settings. Freely available computer software for implementation is provided. PMID:27189672
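The core utility-based comparison can be sketched as follows: elicited numerical utilities for the elementary outcome categories, Dirichlet-multinomial posteriors for each arm, and a Monte Carlo estimate of the posterior probability that one arm has the higher mean utility. The utilities, counts, prior and decision cutoff below are illustrative assumptions, and the calibration and group-sequential machinery of the paper is omitted.

```python
# Bayesian comparison of two arms by posterior mean utility (Dirichlet-multinomial).
import numpy as np

rng = np.random.default_rng(0)
utilities = np.array([100.0, 80.0, 40.0, 0.0])   # elicited utilities for 4 categories (toy)
prior = np.array([0.25, 0.25, 0.25, 0.25])       # weakly informative Dirichlet prior
counts_A = np.array([18, 20, 15, 12])            # observed outcome counts, arm A (toy)
counts_B = np.array([25, 22, 10, 8])             # observed outcome counts, arm B (toy)

theta_A = rng.dirichlet(prior + counts_A, size=100_000)   # posterior draws of category probs
theta_B = rng.dirichlet(prior + counts_B, size=100_000)
mean_util_A = theta_A @ utilities                         # posterior draws of mean utility
mean_util_B = theta_B @ utilities

prob_B_better = np.mean(mean_util_B > mean_util_A)
print(f"Pr(mean utility B > A | data) = {prob_B_better:.3f}")
print("declare B superior" if prob_B_better > 0.95 else "no decision")
```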
Dynamic response of a collidant impacting a low pressure airbag
NASA Astrophysics Data System (ADS)
Dreher, Peter A.
There are many uses of low pressure airbags, both military and commercial. Many of these applications have been hampered by inadequate and inaccurate modeling tools. This dissertation contains the derivation of a four degree-of-freedom system of differential equations from physical laws of mass and energy conservation, force equilibrium, and the Ideal Gas Law. Kinematic equations were derived to model a cylindrical airbag as a single control volume impacted by a parallelepiped collidant. An efficient numerical procedure was devised to solve the simplified system of equations in a manner amenable to discovering design trends. The largest public airbag experiment, both in scale and scope, was designed and built to collect data on low-pressure airbag responses, otherwise unavailable in the literature. The experimental results were compared to computational simulations to validate the simplified numerical model. Experimental response trends are presented that will aid airbag designers. Both objectives of the work were met: demonstrating the feasibility of using a low pressure airbag to (1) accelerate a munition to a velocity of 15 feet per second from a bomb bay, and (2) decelerate humans hitting trucks to below the human tolerance level of 50 G's.
A systematic linear space approach to solving partially described inverse eigenvalue problems
NASA Astrophysics Data System (ADS)
Hu, Sau-Lon James; Li, Haujun
2008-06-01
Most applications of the inverse eigenvalue problem (IEP), which concerns the reconstruction of a matrix from prescribed spectral data, are associated with special classes of structured matrices. Solving the IEP requires one to satisfy both the spectral constraint and the structural constraint. If the spectral constraint consists of only one or few prescribed eigenpairs, this kind of inverse problem has been referred to as the partially described inverse eigenvalue problem (PDIEP). This paper develops an efficient, general and systematic approach to solve the PDIEP. Basically, the approach, applicable to various structured matrices, converts the PDIEP into an ordinary inverse problem that is formulated as a set of simultaneous linear equations. While solving simultaneous linear equations for model parameters, the singular value decomposition method is applied. Because of the conversion to an ordinary inverse problem, other constraints associated with the model parameters can be easily incorporated into the solution procedure. The detailed derivation and numerical examples to implement the newly developed approach to symmetric Toeplitz and quadratic pencil (including mass, damping and stiffness matrices of a linear dynamic system) PDIEPs are presented. Excellent numerical results for both kinds of problem are achieved under the situations that have either unique or infinitely many solutions.
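For the symmetric Toeplitz case discussed above, the conversion to an ordinary linear inverse problem can be sketched directly: the constraint T(c)x = lambda*x is linear in the Toeplitz parameters c (the first row), so a single prescribed eigenpair yields a linear system that is solved in the least-squares sense via the SVD. The prescribed eigenpair below is invented for illustration.

```python
# Reconstruct a symmetric Toeplitz matrix from one prescribed eigenpair
# by solving the equivalent linear system with an SVD-based least-squares solve.
import numpy as np
from scipy.linalg import toeplitz

n = 6
rng = np.random.default_rng(3)
x = rng.standard_normal(n); x /= np.linalg.norm(x)   # prescribed eigenvector (toy)
lam = 2.0                                            # prescribed eigenvalue (toy)

# Coefficient matrix of the linear map c -> T(c) x, where T(c)[i, j] = c[|i-j|]
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        A[i, abs(i - j)] += x[j]

c = np.linalg.pinv(A) @ (lam * x)    # SVD-based least-squares solution for the first row
T = toeplitz(c)                      # reconstructed symmetric Toeplitz matrix
print("residual ||T x - lam x|| =", np.linalg.norm(T @ x - lam * x))
```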
Ledzewicz, Urszula; Schättler, Heinz
2017-08-10
Metronomic chemotherapy refers to the frequent administration of chemotherapy at relatively low, minimally toxic doses without prolonged treatment interruptions. Different from conventional or maximum-tolerated-dose chemotherapy which aims at an eradication of all malignant cells, in a metronomic dosing the goal often lies in the long-term management of the disease when eradication proves elusive. Mathematical modeling and subsequent analysis (theoretical as well as numerical) have become an increasingly more valuable tool (in silico) both for determining conditions under which specific treatment strategies should be preferred and for numerically optimizing treatment regimens. While elaborate, computationally-driven patient specific schemes that would optimize the timing and drug dose levels are still a part of the future, such procedures may become instrumental in making chemotherapy effective in situations where it currently fails. Ideally, mathematical modeling and analysis will develop into an additional decision making tool in the complicated process that is the determination of efficient chemotherapy regimens. In this article, we review some of the results that have been obtained about metronomic chemotherapy from mathematical models and what they infer about the structure of optimal treatment regimens. Copyright © 2017 Elsevier B.V. All rights reserved.
Non-linear eigensolver-based alternative to traditional SCF methods
NASA Astrophysics Data System (ADS)
Gavin, Brendan; Polizzi, Eric
2013-03-01
The self-consistent iterative procedure in Density Functional Theory calculations is revisited using a new, highly efficient and robust algorithm for solving the non-linear eigenvector problem (i.e., H(X)X = EX) of the Kohn-Sham equations. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm, and provides a fundamental and practical numerical solution for addressing the non-linearity of the Hamiltonian with the occupied eigenvectors. In contrast to SCF techniques, the traditional outer iterations are replaced by subspace iterations that are intrinsic to the FEAST algorithm, while the non-linearity is handled at the level of a projected reduced system which is orders of magnitude smaller than the original one. Using a series of numerical examples, it will be shown that our approach can outperform the traditional SCF mixing techniques such as Pulay-DIIS by providing a high convergence rate and by converging to the correct solution regardless of the choice of the initial guess. We also discuss a practical implementation of the technique that can be achieved effectively using the FEAST solver package. This research is supported by NSF under Grant #ECCS-0846457 and Intel Corporation.
Numerical Simulation of Flow Through an Artificial Heart
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; Kutler, Paul; Kwak, Dochan; Kiris, Cetin
1989-01-01
A solution procedure was developed that solves the unsteady, incompressible Navier-Stokes equations, and was used to numerically simulate viscous incompressible flow through a model of the Pennsylvania State artificial heart. The solution algorithm is based on the artificial compressibility method, and uses flux-difference splitting to upwind the convective terms; a line-relaxation scheme is used to solve the equations. The time-accuracy of the method is obtained by iteratively solving the equations at each physical time step. The artificial heart geometry involves a piston-type action with a moving solid wall. A single H-grid is fit inside the heart chamber. The grid is continuously compressed and expanded with a constant number of grid points to accommodate the moving piston. The computational domain ends at the valve openings where nonreflective boundary conditions based on the method of characteristics are applied. Although a number of simplifying assumptions were made regarding the geometry, the computational results agreed reasonably well with an experimental picture. The computer time requirements for this flow simulation, however, are quite extensive. Computational study of this type of geometry would benefit greatly from improvements in computer hardware speed and algorithm efficiency enhancements.