A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.
2017-02-05
Implicit–Explicit (IMEX) schemes are widely used time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.
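To make the adjoint-based (dual-weighted residual) mechanism concrete, here is a minimal sketch for a plain linear algebraic problem with a linear quantity-of-interest; it illustrates only the generic adjoint-weighted-residual identity, not the paper's nodally equivalent finite element construction for IMEX schemes, and all matrices and vectors below are arbitrary placeholders.

```python
# Minimal sketch: adjoint-weighted residual estimate of the error in a
# quantity-of-interest q(u) = psi^T u for a linear problem A u = f.
# Generic illustration only; not the paper's IMEX/finite-element machinery.
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # placeholder discrete operator
f = rng.standard_normal(n)                          # placeholder right-hand side
psi = rng.standard_normal(n)                        # defines the quantity-of-interest

u = np.linalg.solve(A, f)                           # "exact" discrete solution
u_h = u + 1e-3 * rng.standard_normal(n)             # stand-in for an approximate solution

residual = f - A @ u_h                              # strong residual of the approximation
phi = np.linalg.solve(A.T, psi)                     # adjoint (dual) solution: A^T phi = psi

eta = phi @ residual                                # a posteriori estimate of the QoI error
print(eta, psi @ (u - u_h))                         # identical (to roundoff) for linear problems
```

For a linear problem the estimate is exact; the practical content of adjoint-based estimation lies in evaluating such residual/adjoint pairings for discretized time-dependent problems, which is what a nodally equivalent finite element formulation enables.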
Fault Analysis and Detection in Microgrids with High PV Penetration
DOE Office of Scientific and Technical Information (OSTI.GOV)
El Khatib, Mohamed; Hernandez Alvidrez, Javier; Ellis, Abraham
In this report we focus on analyzing the behaviour of current-controlled PV inverters under faults in order to develop fault detection schemes for microgrids with high PV penetration. An inverter model suitable for steady-state fault studies is presented and the impact of PV inverters on two protection elements is analyzed. The studied protection elements are the superimposed-quantities-based directional element and the negative sequence directional element. Additionally, several non-overcurrent fault detection schemes are discussed in this report for microgrids with high PV penetration. A detailed time-domain simulation study is presented to assess the performance of the presented fault detection schemes under different microgrid modes of operation.
Convergence Analysis of Triangular MAC Schemes for Two Dimensional Stokes Equations
Wang, Ming; Zhong, Lin
2015-01-01
In this paper, we consider the use of H(div) elements in the velocity–pressure formulation to discretize Stokes equations in two dimensions. We address the error estimate of the element pair RT0–P0, which is known to be suboptimal, and render the error estimate optimal by the symmetry of the grids and by the superconvergence result of the Lagrange interpolant. By enlarging RT0 such that it becomes a modified BDM-type element, we develop a new discretization BDM1b–P0. We, therefore, generalize the classical MAC scheme on rectangular grids to triangular grids and retain all the desirable properties of the MAC scheme: exact divergence-free, solver-friendly, and local conservation of physical quantities. Further, we prove that the proposed discretization BDM1b–P0 achieves the optimal convergence rate for both velocity and pressure on general quasi-uniform grids, and one and a half order convergence rate for the vorticity and a recovered pressure. We demonstrate the validity of theories developed here by numerical experiments. PMID:26041948
NASA Astrophysics Data System (ADS)
Sauer, Roger A.
2013-08-01
Recently an enriched contact finite element formulation has been developed that substantially increases the accuracy of contact computations while keeping the additional numerical effort at a minimum, as reported by Sauer (Int J Numer Meth Eng 87:593-616, 2011). Two enrichment strategies were proposed, one based on local p-refinement using Lagrange interpolation and one based on Hermite interpolation that produces C1-smoothness on the contact surface. Both classes, which were initially considered for the frictionless Signorini problem, are extended here to friction and contact between deformable bodies. For this, a symmetric contact formulation is used that allows the unbiased treatment of both contact partners. This paper also proposes a post-processing scheme for contact quantities like the contact pressure. The scheme, which provides a more accurate representation than the raw data, is based on an averaging procedure that is inspired by mortar formulations. The properties of the enrichment strategies and the corresponding post-processing scheme are illustrated by several numerical examples considering sliding and peeling contact in the presence of large deformations.
Midgley, S M
2004-01-21
A novel parameterization of x-ray interaction cross-sections is developed, and employed to describe the x-ray linear attenuation coefficient and mass energy absorption coefficient for both elements and mixtures. The new parameterization scheme addresses the Z-dependence of elemental cross-sections (per electron) using a simple function of atomic number, Z. This obviates the need for a complicated mathematical formalism. Energy dependent coefficients describe the Z-direction curvature of the cross-sections. The composition dependent quantities are the electron density and statistical moments describing the elemental distribution. We show that it is possible to describe elemental cross-sections for the entire periodic table and at energies above the K-edge (from 6 keV to 125 MeV), with an accuracy of better than 2% using a parameterization containing not more than five coefficients. For the biologically important elements 1 ≤ Z ≤ 20, and the energy range 30-150 keV, the parameterization utilizes four coefficients. At higher energies, the parameterization uses fewer coefficients with only two coefficients needed at megavoltage energies.
On a realization of β-expansion in QCD
NASA Astrophysics Data System (ADS)
Mikhailov, S. V.
2017-04-01
We suggest a simple algebraic approach to fix the elements of the β-expansion for renormalization group invariant quantities, which uses additional degrees of freedom. The approach is discussed in detail for N2LO calculations in QCD with the MSSM gluino — an additional degree of freedom. We derive the formulae of the β-expansion for the nonsinglet Adler D-function and Bjorken polarized sum rules in the actual N3LO within this quantum field theory scheme with the MSSM gluino and the scheme with the second additional degree of freedom. We discuss the properties of the β-expansion for higher orders considering the N4LO as an example.
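For orientation, the β-expansion referred to above writes the perturbative coefficients of a renormalization-group-invariant quantity as a sum of terms multiplying powers and products of the β-function coefficients; a schematic of the conventional pattern through the a_s^3 (N3LO) term, in the generic notation of the β-expansion literature rather than the specific gluino-based fixing derived in the paper, is

```latex
D(a_s) = 1 + d_1\,a_s
           + \bigl(\beta_0\, d_2[1] + d_2[0]\bigr)\,a_s^{2}
           + \bigl(\beta_0^{2}\, d_3[2] + \beta_1\, d_3[0,1]
                   + \beta_0\, d_3[1] + d_3[0]\bigr)\,a_s^{3} + \dots
```

The elements d_n[...] are the quantities that additional degrees of freedom (such as the MSSM gluino) help to disentangle: roughly speaking, varying the extra field content shifts β_0, β_1, ... while leaving the d_n[...] fixed, supplying enough independent relations to solve for them.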
Ghanbari, J; Naghdabadi, R
2009-07-22
We have used a hierarchical multiscale modeling scheme for the analysis of cortical bone considering it as a nanocomposite. This scheme consists of definition of two boundary value problems, one for macroscale, and another for microscale. The coupling between these scales is done by using the homogenization technique. At every material point in which the constitutive model is needed, a microscale boundary value problem is defined using a macroscopic kinematical quantity and solved. Using the described scheme, we have studied elastic properties of cortical bone considering its nanoscale microstructural constituents with various mineral volume fractions. Since the microstructure of bone consists of mineral platelet with nanometer size embedded in a protein matrix, it is similar to the microstructure of soft matrix nanocomposites reinforced with hard nanostructures. Considering a representative volume element (RVE) of the microstructure of bone as the microscale problem in our hierarchical multiscale modeling scheme, the global behavior of bone is obtained under various macroscopic loading conditions. This scheme may be suitable for modeling arbitrary bone geometries subjected to a variety of loading conditions. Using the presented method, mechanical properties of cortical bone including elastic moduli and Poisson's ratios in two major directions and shear modulus is obtained for different mineral volume fractions.
The two-mass contribution to the three-loop gluonic operator matrix element Agg,Q(3)
NASA Astrophysics Data System (ADS)
Ablinger, J.; Blümlein, J.; De Freitas, A.; Goedicke, A.; Schneider, C.; Schönwald, K.
2018-07-01
We calculate the two-mass QCD contributions to the massive operator matrix element Agg,Q at O(α_s^3) in analytic form in Mellin N- and z-space, maintaining the complete dependence on the heavy quark mass ratio. These terms are important ingredients for the matching relations of the variable flavor number scheme in the presence of two heavy quark flavors, such as charm and bottom. In Mellin N-space the result is given in the form of nested harmonic, generalized harmonic, cyclotomic and binomial sums, with arguments depending on the mass ratio. The Mellin inversion of these quantities to z-space gives rise to generalized iterated integrals with square root valued letters in the alphabet, depending on the mass ratio as well. Numerical results are presented.
Multi-boson block factorization of fermions
NASA Astrophysics Data System (ADS)
Giusti, Leonardo; Cè, Marco; Schaefer, Stefan
2018-03-01
The numerical computations of many quantities of theoretical and phenomenological interest are plagued by statistical errors which increase exponentially with the distance of the sources in the relevant correlators. Notable examples are baryon masses and matrix elements, the hadronic vacuum polarization and the light-by-light scattering contributions to the muon g - 2, and the form factors of semileptonic B decays. Reliable and precise determinations of these quantities are very difficult if not impractical with state-of-the-art standard Monte Carlo integration schemes. I will review a recent proposal for factorizing the fermion determinant in lattice QCD that leads to a local action in the gauge field and in the auxiliary boson fields. Once combined with the corresponding factorization of the quark propagator, it paves the way for multi-level Monte Carlo integration in the presence of fermions opening new perspectives in lattice QCD. Exploratory results on the impact on the above mentioned observables will be presented.
Card, Jeffrey W; Fikree, Hana; Haighton, Lois A; Blackwell, James; Felice, Brian; Wright, Teresa L
2015-11-01
A banding scheme theory has been proposed to assess the potency/toxicity of biologics and assist with decisions regarding the introduction of new biologic products into existing manufacturing facilities. The current work was conducted to provide a practical example of how this scheme could be applied. Information was identified for representatives from the following four proposed bands: Band A (lethal toxins); Band B (toxins and apoptosis signals); Band C (cytokines and growth factors); and Band D (antibodies, antibody fragments, scaffold molecules, and insulins). The potency/toxicity of the representative substances was confirmed as follows: Band A, low nanogram quantities exert lethal effects; Band B, repeated administration of microgram quantities is tolerated in humans; Band C, endogenous substances and recombinant versions administered to patients in low (interferons), intermediate (growth factors), and high (interleukins) microgram doses, often on a chronic basis; and Band D, endogenous substances present or produced in the body in milligram quantities per day (insulin, collagen) or protein therapeutics administered in milligram quantities per dose (mAbs). This work confirms that substances in Bands A, B, C, and D represent very high, high, medium, and low concern with regard to risk of cross-contamination in manufacturing facilities, thus supporting the proposed banding scheme.
The effect of numerical methods on the simulation of mid-ocean ridge hydrothermal models
NASA Astrophysics Data System (ADS)
Carpio, J.; Braack, M.
2012-01-01
This work considers the effect of the numerical method on the simulation of a 2D model of hydrothermal systems located in the high-permeability axial plane of mid-ocean ridges. The behavior of hot plumes, formed in a porous medium between volcanic lava and the ocean floor, is very irregular due to convective instabilities. Therefore, we discuss and compare two different numerical methods for solving the mathematical model of this system. Concretely, we consider two ways to treat the temperature equation of the model: a semi-Lagrangian formulation of the advective terms in combination with a Galerkin finite element method for the parabolic part of the equations and a stabilized finite element scheme. Both methods are very robust and accurate. However, due to physical instabilities in the system at high Rayleigh number, the effect of the numerical method is significant with regard to the temperature distribution at a certain time instant. The good news is that relevant statistical quantities remain relatively stable and coincide for the two numerical schemes. The agreement is larger in the case of a mathematical model with constant water properties. In the case of a model with nonlinear dependence of the water properties on the temperature and pressure, the agreement in the statistics is clearly less pronounced. Hence, the presented work accentuates the need for a strengthened validation of the compatibility between numerical scheme (accuracy/resolution) and complex (realistic/nonlinear) models.
The Dualized Standard Model and its Applications — AN Interim Report
NASA Astrophysics Data System (ADS)
Chan, Hong-Mo; Tsou, Sheung Tsun
Based on a non-Abelian generalization of electric-magnetic duality, the Dualized Standard Model (DSM) suggests a natural explanation for exactly three generations of fermions as the "dual colour" \widetilde{SU}(3) symmetry broken in a particular manner. The resulting scheme then offers on the one hand a fermion mass hierarchy and a perturbative method for calculating the mass and mixing parameters of the Standard Model fermions, and on the other hand testable predictions for new phenomena ranging from rare meson decays to ultra-high energy cosmic rays. Calculations to one-loop order give, at the cost of adjusting only three real parameters, values for the following quantities all (except one) in very good agreement with experiment: the quark CKM matrix elements |Vrs|, the lepton CKM matrix elements |Urs|, and the second generation masses mc, ms, mμ. This means, in particular, that it gives near maximal mixing Uμ3 between νμ and ντ as observed by SuperKamiokande, Kamiokande and Soudan, while keeping small the corresponding quark angles Vcb, Vts. In addition, the scheme gives (i) rough order-of-magnitude estimates for the masses of the lowest generation, (ii) predictions for low energy FCNC effects such as KL→ eμ, and (iii) a possible explanation for the long-standing puzzle of air showers beyond the GZK cut-off. All these together, however, still represent but a portion of the possible physical consequences derivable from the DSM scheme, the majority of which are yet to be explored.
A Reconstruction Approach to High-Order Schemes Including Discontinuous Galerkin for Diffusion
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2009-01-01
We introduce a new approach to high-order accuracy for the numerical solution of diffusion problems by solving the equations in differential form using a reconstruction technique. The approach has the advantages of simplicity and economy. It results in several new high-order methods including a simplified version of discontinuous Galerkin (DG). It also leads to new definitions of common value and common gradient quantities at each interface shared by the two adjacent cells. In addition, the new approach clarifies the relations among the various choices of new and existing common quantities. Fourier stability and accuracy analyses are carried out for the resulting schemes. Extensions to the case of quadrilateral meshes are obtained via tensor products. For the two-point boundary value problem (steady state), it is shown that these schemes, which include most popular DG methods, yield exact common interface quantities as well as exact cell average solutions for nearly all cases.
ACCURATE ORBITAL INTEGRATION OF THE GENERAL THREE-BODY PROBLEM BASED ON THE D'ALEMBERT-TYPE SCHEME
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minesaki, Yukitaka
2013-03-15
We propose an accurate orbital integration scheme for the general three-body problem that retains all conserved quantities except angular momentum. The scheme is provided by an extension of the d'Alembert-type scheme for constrained autonomous Hamiltonian systems. Although the proposed scheme is merely second-order accurate, it can precisely reproduce some periodic, quasiperiodic, and escape orbits. The Levi-Civita transformation plays a role in designing the scheme.
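For readers who want to see what "retaining conserved quantities" means operationally, the sketch below integrates a planar three-body configuration with a plain second-order leapfrog (kick-drift-kick) method and monitors the relative drift of the total energy and angular momentum; the initial data are arbitrary illustrative values and the integrator is not the d'Alembert-type scheme of the paper, only the kind of bookkeeping used to judge such schemes.

```python
# Generic conserved-quantity check for a planar three-body integration.
# Plain leapfrog (kick-drift-kick), G = 1, illustrative initial data;
# this is NOT the d'Alembert-type scheme of the paper.
import numpy as np

m = np.array([1.0, 1.0, 1.0])
x = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])   # positions (arbitrary)
v = np.array([[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]])   # velocities (arbitrary)

def acceleration(x):
    a = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = x[j] - x[i]
                a[i] += m[j] * r / np.linalg.norm(r) ** 3
    return a

def energy(x, v):
    kin = 0.5 * np.sum(m * np.sum(v ** 2, axis=1))
    pot = sum(-m[i] * m[j] / np.linalg.norm(x[i] - x[j])
              for i in range(3) for j in range(i + 1, 3))
    return kin + pot

def ang_momentum(x, v):
    return np.sum(m * (x[:, 0] * v[:, 1] - x[:, 1] * v[:, 0]))

dt, E0, L0 = 1e-3, energy(x, v), ang_momentum(x, v)
for _ in range(20000):
    v += 0.5 * dt * acceleration(x)   # kick
    x += dt * v                       # drift
    v += 0.5 * dt * acceleration(x)   # kick

print("relative energy drift:", abs(energy(x, v) - E0) / abs(E0))
print("relative ang. momentum drift:", abs(ang_momentum(x, v) - L0) / abs(L0))
```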
Computational catalyst screening: Scaling, bond-order and catalysis
Abild-Pedersen, Frank
2015-10-01
Here, the design of new and better heterogeneous catalysts needed to accommodate the growing demand for energy from renewable sources is an important challenge for coming generations. Most surface catalyzed processes involve a large number of complex reaction networks and the energetics ultimately defines the turn-over-frequency and the selectivity of the process. In order not to get lost in the large quantities of data, simplification schemes that still contain the key elements of the reaction are required. Adsorption and transition state scaling relations constitute such a scheme that not only maps the reaction relevant information in terms of few parameters but also provides an efficient way of screening for new materials in a continuous multi-dimensional energy space. As with all relations they impose certain restrictions on what can be achieved and in this paper, I show why these limitations exist and how we can change the behavior through an energy-resolved approach that still maintains the screening capabilities needed in computational catalysis.
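As a concrete (and heavily simplified) illustration of what a scaling relation buys in screening, the snippet below fits a linear transition-state scaling relation E_TS ≈ γ·E_ads + ξ across a handful of surfaces and then uses it to predict a barrier from the descriptor alone; every number is an invented placeholder, not data from this work.

```python
# Hypothetical illustration of a transition-state scaling relation:
# E_TS ~ gamma * E_ads + xi, fitted across a set of surfaces.
# The energy values below are invented placeholders, not real data.
import numpy as np

E_ads = np.array([-0.8, -0.5, -0.2, 0.1, 0.4])   # descriptor adsorption energies (eV)
E_ts  = np.array([ 0.9,  1.1,  1.4, 1.6, 1.9])   # corresponding transition-state energies (eV)

gamma, xi = np.polyfit(E_ads, E_ts, 1)           # slope and intercept of the scaling line
print(f"E_TS ~ {gamma:.2f} * E_ads + {xi:.2f}")

# Screening use: estimate the TS energy of a new material from its descriptor alone.
E_ads_new = -0.35
print("predicted E_TS:", gamma * E_ads_new + xi)
```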
NASA Astrophysics Data System (ADS)
Rehman, Asad; Ali, Ishtiaq; Qamar, Shamsul
An upwind space-time conservation element and solution element (CE/SE) scheme is extended to numerically approximate the dusty gas flow model. Unlike central CE/SE schemes, the current method uses the upwind procedure to derive the numerical fluxes through the inner boundary of conservation elements. These upwind fluxes are utilized to calculate the gradients of flow variables. For comparison and validation, the central upwind scheme is also applied to solve the same dusty gas flow model. The suggested upwind CE/SE scheme resolves the contact discontinuities more effectively and preserves the positivity of flow variables in low density flows. Several case studies are considered and the results of upwind CE/SE are compared with the solutions of central upwind scheme. The numerical results show better performance of the upwind CE/SE method as compared to the central upwind scheme.
Chemical disorder as an engineering tool for spin polarization in Mn3Ga -based Heusler systems
NASA Astrophysics Data System (ADS)
Chadov, S.; D'Souza, S. W.; Wollmann, L.; Kiss, J.; Fecher, G. H.; Felser, C.
2015-03-01
Our study highlights spin-polarization mechanisms in metals by focusing on the mobilities of conducting electrons with different spins instead of their quantities. Here, we engineer electron mobility by applying chemical disorder induced by nonstoichiometric variations. As a practical example, we discuss the scheme that establishes such variations in tetragonal Mn3Ga Heusler material. We justify this approach using first-principles calculations of the spin-projected conductivity components based on the Kubo-Greenwood formalism. It follows that, in the majority of cases, even a small substitution of some other transition element instead of Mn may lead to a substantial increase in spin polarization along the tetragonal axis.
Matthews, M E; Waldvogel, C F; Mahaffey, M J; Zemel, P C
1978-06-01
Preparation procedures of standardized quantity formulas were analyzed for similarities and differences in production activities, and three entrée classifications were developed, based on these activities. Two formulas from each classification were selected, preparation procedures were divided into elements of production, and the MSD Quantity Food Production Code was applied. Macro elements not included in the existing Code were simulated, coded, assigned associated Time Measurement Units, and added to the MSD Quantity Food Production Code. Repeated occurrence of similar elements within production methods indicated that macro elements could be synthesized for use within one or more entrée classifications. Basic elements were grouped, simulated, and macro elements were derived. Macro elements were applied in the simulated production of 100 portions of each entrée formula. Total production time for each formula and average production time for each entrée classification were calculated. Application of macro elements indicated that this method of predetermining production time was feasible and could be adapted by quantity foodservice managers as a decision technique used to evaluate menu mix, production personnel schedules, and allocation of equipment usage. These macro elements could serve as a basis for further development and refinement of other macro elements which could be applied to a variety of menu item formulas.
Nonlinear Road Pricing: [Summary]
DOT National Transportation Integrated Search
2012-01-01
Nonlinear pricing is an unfamiliar term for a familiar idea. Linear pricing charges all consumers the same price for the same quantity of goods or services; in nonlinear schemes, the price varies, depending, for example, on quantity purchased or a co...
A progress report on estuary modeling by the finite-element method
Gray, William G.
1978-01-01
Various schemes are investigated for finite-element modeling of two-dimensional surface-water flows. The first schemes investigated combine finite-element spatial discretization with split-step time stepping schemes that have been found useful in finite-difference computations. Because of the large number of numerical integrations performed in space and the large sparse matrices solved, these finite-element schemes were found to be economically uncompetitive with finite-difference schemes. A very promising leapfrog scheme is proposed which, when combined with a novel very fast spatial integration procedure, eliminates the need to solve any matrices at all. Additional problems attacked included proper propagation of waves and proper specification of the normal flow-boundary condition. This report indicates work in progress and does not come to a definitive conclusion as to the best approach for finite-element modeling of surface-water problems. The results presented represent findings obtained between September 1973 and July 1976. (Woodard-USGS)
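To illustrate why the leapfrog idea is attractive, the sketch below advances the 1D linearized shallow-water equations with a three-level leapfrog step and centered differences on a periodic grid, requiring no matrix solves at all; this is a generic finite-difference illustration under assumed parameters, not the finite-element scheme proposed in the report.

```python
# Minimal leapfrog (three-level) time stepping for the 1D linearized
# shallow-water equations on a periodic grid -- a generic finite-difference
# illustration of the matrix-free idea, not the report's finite-element scheme.
import numpy as np

nx, L, g, H = 200, 1.0, 9.81, 1.0
dx = L / nx
c = np.sqrt(g * H)
dt = 0.5 * dx / c                                  # CFL-limited step
x = np.arange(nx) * dx

def ddx(f):                                        # centered periodic derivative
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

eta_old = np.exp(-200 * (x - 0.5) ** 2)            # initial surface bump
u_old = np.zeros(nx)
mass0 = eta_old.sum() * dx

eta = eta_old - dt * H * ddx(u_old)                # one Euler step to start the scheme
u = u_old - dt * g * ddx(eta_old)

for _ in range(400):                               # leapfrog: no matrices to solve
    eta_new = eta_old - 2 * dt * H * ddx(u)
    u_new = u_old - 2 * dt * g * ddx(eta)
    eta_old, u_old, eta, u = eta, u, eta_new, u_new

print("initial/final mass:", mass0, eta.sum() * dx)   # conserved to roundoff
```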
78 FR 76533 - New Mailing Standards for Domestic Mailing Services Products
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-18
... sequencing system (FSS), a mailer association suggested to change the required minimum for an FSS scheme... must make FSS scheme or FSS facility pallets when the quantity reaches 250 pounds, so that minimum is... Standard Mail, FSS scheme pallets may be entered at origin, or at DNDC, DSCF, or DFSS entry points, but the...
Duality in a supersymmetric gauge theory from a perturbative viewpoint
NASA Astrophysics Data System (ADS)
Ryttov, Thomas A.; Shrock, Robert
2018-03-01
We study duality in N = 1 supersymmetric QCD in the non-Abelian Coulomb phase, order-by-order in scheme-independent series expansions. Using exact results, we show how the dimensions of various fundamental and composite chiral superfields, and the quantities a, c, a/c, and b at superconformal fixed points of the renormalization group emerge in scheme-independent series expansions in the electric and magnetic theories. We further demonstrate that truncations of these series expansions to modest order yield very accurate approximations to these quantities and suggest possible implications for nonsupersymmetric theories.
On the Treatment of Field Quantities and Elemental Continuity in FEM Solutions.
Jallepalli, Ashok; Docampo-Sanchez, Julia; Ryan, Jennifer K; Haimes, Robert; Kirby, Robert M
2018-01-01
As the finite element method (FEM) and the finite volume method (FVM), both traditional and high-order variants, continue their proliferation into various applied engineering disciplines, it is important that the visualization techniques and corresponding data analysis tools that act on the results produced by these methods faithfully represent the underlying data. To state this in another way: the interpretation of data generated by simulation needs to be consistent with the numerical schemes that underpin the specific solver technology. As the verifiable visualization literature has demonstrated: visual artifacts produced by the introduction of either explicit or implicit data transformations, such as data resampling, can sometimes distort or even obfuscate key scientific features in the data. In this paper, we focus on the handling of elemental continuity, which is often only continuous or piecewise discontinuous, when visualizing primary or derived fields from FEM or FVM simulations. We demonstrate that traditional data handling and visualization of these fields introduce visual errors. In addition, we show how the use of the recently proposed line-SIAC filter provides a way of handling elemental continuity issues in an accuracy-conserving manner with the added benefit of casting the data in a smooth context even if the representation is element discontinuous.
Implicit Space-Time Conservation Element and Solution Element Schemes
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Himansu, Ananda; Wang, Xiao-Yen
1999-01-01
Artificial numerical dissipation is an important issue in large Reynolds number computations. In such computations, the artificial dissipation inherent in traditional numerical schemes can overwhelm the physical dissipation and yield inaccurate results on meshes of practical size. In the present work, the space-time conservation element and solution element method is used to construct new and accurate implicit numerical schemes such that artificial numerical dissipation will not overwhelm physical dissipation. Specifically, these schemes have the property that numerical dissipation vanishes when the physical viscosity goes to zero. These new schemes therefore accurately model the physical dissipation even when it is extremely small. The new schemes presented are two highly accurate implicit solvers for a convection-diffusion equation. The two schemes become identical in the pure convection case, and in the pure diffusion case. The implicit schemes are applicable over the whole Reynolds number range, from purely diffusive equations to convection-dominated equations with very small viscosity. The stability and consistency of the schemes are analysed, and some numerical results are presented. It is shown that, in the inviscid case, the new schemes become explicit and their amplification factors are identical to those of the Leapfrog scheme. On the other hand, in the pure diffusion case, their principal amplification factor becomes the amplification factor of the Crank-Nicolson scheme.
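For reference, the Crank-Nicolson amplification factor mentioned at the end can be written down for the model convection-diffusion equation u_t + a u_x = ν u_xx with centered spatial differences; with Courant number c = aΔt/Δx, diffusion number d = νΔt/Δx², and phase angle θ = kΔx, a standard von Neumann analysis (a textbook result, not reproduced from the paper) gives

```latex
G_{\mathrm{CN}}(\theta)
  = \frac{\,1 - \tfrac{i c}{2}\sin\theta - 2 d \sin^{2}(\theta/2)\,}
         {\,1 + \tfrac{i c}{2}\sin\theta + 2 d \sin^{2}(\theta/2)\,},
  \qquad |G_{\mathrm{CN}}(\theta)| \le 1 \ \ \text{for all } \theta .
```

In the inviscid limit d → 0 this gives |G| = 1, i.e. no numerical dissipation, which is precisely the property the new implicit CE/SE schemes are designed to share.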
Multi-scale Eulerian model within the new National Environmental Modeling System
NASA Astrophysics Data System (ADS)
Janjic, Zavisa; Janjic, Tijana; Vasic, Ratko
2010-05-01
The unified Non-hydrostatic Multi-scale Model on the Arakawa B grid (NMMB) is being developed at NCEP within the National Environmental Modeling System (NEMS). The finite-volume horizontal differencing employed in the model preserves important properties of differential operators and conserves a variety of basic and derived dynamical and quadratic quantities. Among these, conservation of energy and enstrophy improves the accuracy of nonlinear dynamics of the model. Within further model development, advection schemes of fourth order of formal accuracy have been developed. It is argued that higher order advection schemes should not be used in the thermodynamic equation in order to preserve consistency with the second order scheme used for computation of the pressure gradient force. Thus, the fourth order scheme is applied only to momentum advection. Three sophisticated second order schemes were considered for upgrade. Two of them, proposed in Janjic(1984), conserve energy and enstrophy, but with enstrophy calculated differently. One of them conserves enstrophy as computed by the most accurate second order Laplacian operating on stream function. The other scheme conserves enstrophy as computed from the B grid velocity. The third scheme (Arakawa 1972) is arithmetic mean of the former two. It does not conserve enstrophy strictly, but it conserves other quadratic quantities that control the nonlinear energy cascade. Linearization of all three schemes leads to the same second order linear advection scheme. The second order term of the truncation error of the linear advection scheme has a special form so that it can be eliminated by simply preconditioning the advected quantity. Tests with linear advection of a cone confirm the advantage of the fourth order scheme. However, if a localized, large amplitude and high wave-number pattern is present in initial conditions, the clear advantage of the fourth order scheme disappears. In real data runs, problems with noisy data may appear due to mountains. Thus, accuracy and formal accuracy may not be synonymous. The nonlinear fourth order schemes are quadratic conservative and reduce to the Arakawa Jacobian in case of non-divergent flow. In case of general flow the conservation properties of the new momentum advection schemes impose stricter constraint on the nonlinear cascade than the original second order schemes. However, for non-divergent flow, the conservation properties of the fourth order schemes cannot be proven in the same way as those of the original second order schemes. Therefore, nonlinear tests were carried out in order to check how well the fourth order schemes control the nonlinear energy cascade. In the tests nonlinear shallow water equations are solved in a rotating rectangular domain (Janjic, 1984). The domain is covered with only 17 x 17 grid points. A diagnostic quantity is used to monitor qualitative changes in the spectrum over 116 days of simulated time. All schemes maintained meaningful solutions throughout the test. Among the second order schemes, the best result was obtained with the scheme that conserved enstrophy as computed by the second order Laplacian of the stream function. It was closely followed by the Arakawa (1972) scheme, while the remaining scheme was distant third. The fourth order schemes ranked in the same order, and were competitive throughout the experiments with their second order counterparts in preventing accumulation of energy at small scales. 
Finally, the impact of the fourth order momentum advection on global medium range forecasts was examined. The 500 mb anomaly correlation coefficient is used as a measure of success of the forecasts. Arakawa, A., 1972: Design of the UCLA general circulation model. Tech. Report No. 7, Department of Meteorology, University of California, Los Angeles, 116 pp. Janjic, Z. I., 1984: Non-linear advection schemes and energy cascade on semi-staggered grids. Monthly Weather Review, 112, 1234-1245.
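The conservation diagnostics referred to above (domain-averaged kinetic energy and enstrophy) can be monitored with a few lines of grid arithmetic; the sketch below evaluates them with centered differences on a doubly periodic grid and is a generic illustration only, not the NMMB's B-grid finite-volume forms.

```python
# Generic diagnostics for nonlinear-cascade control: domain-mean kinetic
# energy and enstrophy from a 2D velocity field on a doubly periodic grid.
# Centered differences via np.roll; illustrative only, not the NMMB B-grid forms.
import numpy as np

nx = ny = 64
dx = dy = 1.0
x, y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dy, indexing="ij")

# An arbitrary smooth, periodic test flow.
u = np.sin(2 * np.pi * x / (nx * dx)) * np.cos(2 * np.pi * y / (ny * dy))
v = -np.cos(2 * np.pi * x / (nx * dx)) * np.sin(2 * np.pi * y / (ny * dy))

def ddx(f):
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * dx)

def ddy(f):
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * dy)

zeta = ddx(v) - ddy(u)                      # relative vorticity
kinetic_energy = 0.5 * np.mean(u ** 2 + v ** 2)
enstrophy = 0.5 * np.mean(zeta ** 2)        # quantity whose conservation limits the cascade
print(kinetic_energy, enstrophy)
```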
An Introduction to the Problem of the Existence of Classical and Quantum Information
NASA Astrophysics Data System (ADS)
Rocchi, Paolo; Gianfagna, Leonida
2006-01-01
Quantum computing raises novel meditation upon the nature of information, notably a number of theorists set out the critical elements of Shannon's work, which currently emerges as the most popular reference in the quantum territory. The present paper follows this vein and highlights how the prerequisites of the information theory, which should detail the precise hypotheses of this theory, appear rather obscure and the problem of the existence of information is still open. This work puts forward a theoretical scheme that calculates the existence of elementary items. These results clarify basic assumptions in information engineering. Later we bring evidence how information is not an absolute quantity and close with a discussion upon the information relativity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soko, W.A.; Biaecka, B.
1998-12-31
In this paper the solution to waste problems in the paint industry is presented by describing their combustion in a fluidized bed boiler as a part of the waste management system in the paint factory. Based on the Cleaner Production idea and the concept of integrating the design process with the future exploitation of equipment, some modifications of the waste management scheme in the factory are discussed to reduce the quantity of toxic wastes. To verify this concept, combustion tests of paint production wastes and co-combustion of paint wastes with coal in an adapted industrial boiler were done. Results of these tests are presented in the paper.
Elements of EAF automation processes
NASA Astrophysics Data System (ADS)
Ioana, A.; Constantin, N.; Dragna, E. C.
2017-01-01
Our article presents elements of Electric Arc Furnace (EAF) automation. We present and analyze in detail two automation schemes: the scheme of the electrical EAF automation system and the scheme of the thermal EAF automation system. The results of applying these automation schemes consist in: a significant reduction of the specific consumption of electrical energy of the Electric Arc Furnace, an increase in the productivity of the Electric Arc Furnace, an increase in the quality of the produced steel, and an increase in the durability of the building elements of the Electric Arc Furnace.
Accurate interatomic force fields via machine learning with covariant kernels
NASA Astrophysics Data System (ADS)
Glielmo, Aldo; Sollich, Peter; De Vita, Alessandro
2017-06-01
We present a novel scheme to accurately predict atomic forces as vector quantities, rather than sets of scalar components, by Gaussian process (GP) regression. This is based on matrix-valued kernel functions, on which we impose the requirements that the predicted force rotates with the target configuration and is independent of any rotations applied to the configuration database entries. We show that such covariant GP kernels can be obtained by integration over the elements of the rotation group SO(d) for the relevant dimensionality d. Remarkably, in specific cases the integration can be carried out analytically and yields a conservative force field that can be recast into a pair interaction form. Finally, we show that restricting the integration to a summation over the elements of a finite point group relevant to the target system is sufficient to recover an accurate GP. The accuracy of our kernels in predicting quantum-mechanical forces in real materials is investigated by tests on pure and defective Ni, Fe, and Si crystalline systems.
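The final point-group construction can be made concrete in two dimensions: summing rotated copies of a scalar squared-exponential base kernel over a finite rotation group yields a matrix-valued kernel with the required covariance K(Qx, x') = Q K(x, x'). The toy sketch below uses a single displacement vector as the "configuration" and the C8 rotation group, which is far simpler than the descriptors and SO(d) integration of the paper, and verifies the property numerically.

```python
# Toy covariant kernel in 2D: sum a scalar squared-exponential base kernel
# over a finite rotation group (C_8 here) to obtain a matrix-valued kernel
# satisfying K(Q x, x') = Q K(x, x').  Single-vector "configurations" only;
# the paper's descriptors and full SO(d) integration are more general.
import numpy as np

def rot(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

GROUP = [rot(2 * np.pi * k / 8) for k in range(8)]   # finite point group C_8
ELL = 0.7                                            # base-kernel length scale

def base_kernel(x, y):
    return np.exp(-np.sum((x - y) ** 2) / (2 * ELL ** 2))

def covariant_kernel(x, y):
    # K(x, y) = sum_R R * k(x, R y): a 2x2 matrix-valued kernel.
    return sum(R * base_kernel(x, R @ y) for R in GROUP)

x = np.array([0.3, 1.1])
y = np.array([-0.4, 0.8])
Q = rot(2 * np.pi * 3 / 8)                           # an element of the group

lhs = covariant_kernel(Q @ x, y)
rhs = Q @ covariant_kernel(x, y)
print(np.allclose(lhs, rhs))                         # True: predicted force rotates with the input
```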
Spatial Convergence of Three Dimensional Turbulent Flows
NASA Technical Reports Server (NTRS)
Park, Michael A.; Anderson, W. Kyle
2016-01-01
Finite-volume and finite-element schemes, both implemented within the FUN3D flow solver, are evaluated for several test cases described on the Turbulence-Modeling Resource (TMR) web site. The cases include subsonic flow over a hemisphere cylinder, subsonic flow over a swept bump configuration, and supersonic flow in a square duct. The finite-volume and finite-element schemes are both used to obtain solutions for the first two cases, whereas only the finite-volume scheme is used for the supersonic duct. For the hemisphere cylinder, finite-element solutions obtained on tetrahedral meshes are compared with finite-volume solutions on mixed-element meshes. For the swept bump, finite-volume solutions have been obtained for both hexahedral and tetrahedral meshes and are compared with finite-element solutions obtained on tetrahedral meshes. For the hemisphere cylinder and the swept bump, solutions are obtained on a series of meshes with varying grid density and comparisons are made between drag coefficients, pressure distributions, velocity profiles, and profiles of the turbulence working variable. The square duct shows small variation due to element type or the spatial accuracy of turbulence model convection. It is demonstrated that the finite-element scheme on tetrahedral meshes yields similar accuracy as the finite-volume scheme on mixed-element and hexahedral grids, and demonstrates less sensitivity to the mesh topology (biased tetrahedral grids) than the finite-volume scheme.
Mobile phone collection, reuse and recycling in the UK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ongondo, F.O.; Williams, I.D., E-mail: idw@soton.ac.uk
Highlights: > We characterized the key features of the voluntary UK mobile phone takeback network via a survey. > We identified 3 flows: information; product (handsets and accessories); and incentives. > There has been a significant rise in the number of UK takeback schemes since 1997. > Most returned handsets are low quality; little data exists on quantities of mobile phones collected. > Takeback schemes increasingly divert EoL mobile phones from landfill and enable reuse/recycling. - Abstract: Mobile phones are the most ubiquitous electronic product on the globe. They have relatively short lifecycles and because of their (perceived) in-built obsolescence, discarded mobile phones represent a significant and growing problem with respect to waste electrical and electronic equipment (WEEE). An emerging and increasingly important issue for industry is the shortage of key metals, especially the types of metals found in mobile phones, and hence the primary aim of this timely study was to assess and evaluate the voluntary mobile phone takeback network in the UK. The study has characterised the information, product and incentives flows in the voluntary UK mobile phone takeback network and reviewed the merits and demerits of the incentives offered. A survey of the activities of the voluntary mobile phone takeback schemes was undertaken in 2008 to: identify and evaluate the takeback schemes operating in the UK; determine the target groups from whom handsets are collected; and assess the collection, promotion and advertising methods used by the schemes. In addition, the survey sought to identify and critically evaluate the incentives offered by the takeback schemes, evaluate their ease and convenience of use; and determine the types, qualities and quantities of mobile phones they collect. The study has established that the UK voluntary mobile phone takeback network can be characterised as three distinctive flows: information flow; product flow (handsets and related accessories); and incentives flow. Over 100 voluntary schemes offering online takeback of mobile phone handsets were identified. The schemes are operated by manufacturers, retailers, mobile phone network service operators, charities and by mobile phone reuse, recycling and refurbishing companies. The latter two scheme categories offer the highest level of convenience and ease of use to their customers. Approximately 83% of the schemes are either for-profit/commercial-oriented and/or operate to raise funds for charities. The voluntary schemes use various methods to collect mobile phones from consumers, including postal services, courier and in-store. The majority of schemes utilise and finance pre-paid postage to collect handsets. Incentives offered by the takeback schemes include monetary payments, donation to charity and entry into prize draws. Consumers from whom handsets and related equipment are collected include individuals, businesses, schools, colleges, universities, charities and clubs with some schemes specialising on collecting handsets from one target group. The majority (84.3%) of voluntary schemes did not provide information on their websites about the quantities of mobile phones they collect. The operations of UK takeback schemes are decentralised in nature. Comparisons are made between the UK's decentralised collection system versus Australia's centralised network for collection of mobile phones.
The significant principal conclusions from the study are: there has been a significant rise in the number of takeback schemes operating in the UK since the initial scheme was launched in 1997; the majority of returned handsets seem to be of low quality; and there is very little available information on the quantities of mobile phones collected by the various schemes. Irrespective of their financial motives, UK takeback schemes increasingly play an important role in sustainable waste management by diverting EoL mobile phones from landfills and encouraging reuse and recycling. Recommendations for future actions to improve the management of end-of-life mobile phone handsets and related accessories are made.
Least-squares finite element methods for compressible Euler equations
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Carey, G. F.
1990-01-01
A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L²-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L² method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H¹-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.
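Schematically, the functional minimized at each time step has the form below, written here for a generic first-order system U_t + A ∂_x U = 0 with backward time differencing (a schematic transcription of the least-squares idea, not the paper's exact discrete functional):

```latex
J\bigl(U^{n+1}\bigr)
  \;=\; \Bigl\|\, \frac{U^{n+1} - U^{n}}{\Delta t} \;+\; A\,\partial_x U^{n+1} \Bigr\|_{L^{2}(\Omega)}^{2}
  \;\longrightarrow\; \min_{U^{n+1}\in V_h} .
```

Setting the first variation to zero yields a symmetric positive-definite system for the nodal unknowns, and the residual norm itself supplies the numerical dissipation proportional to the time step size noted in the abstract.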
WENO schemes on arbitrary mixed-element unstructured meshes in three space dimensions
NASA Astrophysics Data System (ADS)
Tsoutsanis, P.; Titarev, V. A.; Drikakis, D.
2011-02-01
The paper extends weighted essentially non-oscillatory (WENO) methods to three dimensional mixed-element unstructured meshes, comprising tetrahedral, hexahedral, prismatic and pyramidal elements. Numerical results illustrate the convergence rates and non-oscillatory properties of the schemes for various smooth and discontinuous solutions test cases and the compressible Euler equations on various types of grids. Schemes of up to fifth order of spatial accuracy are considered.
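As a reminder of the WENO building block that the paper extends to mixed-element 3D grids, the snippet below evaluates the classical one-dimensional fifth-order Jiang-Shu reconstruction at a cell interface from five cell averages; the smoothness-indicator/nonlinear-weight construction is standard, and the 1D uniform-grid setting is an illustrative simplification of the paper's unstructured machinery.

```python
# Classical 1D WENO5 (Jiang-Shu) reconstruction of the left-biased state at
# x_{i+1/2} from the five cell averages v_{i-2}..v_{i+2}.  The paper's 3D
# mixed-element schemes generalize this smoothness-indicator construction.
import numpy as np

def weno5_left(v0, v1, v2, v3, v4, eps=1e-6):
    # Smoothness indicators of the three candidate stencils.
    b0 = 13/12*(v0 - 2*v1 + v2)**2 + 1/4*(v0 - 4*v1 + 3*v2)**2
    b1 = 13/12*(v1 - 2*v2 + v3)**2 + 1/4*(v1 - v3)**2
    b2 = 13/12*(v2 - 2*v3 + v4)**2 + 1/4*(3*v2 - 4*v3 + v4)**2
    # Nonlinear weights from the optimal linear weights (1/10, 6/10, 3/10).
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    # Third-order candidate reconstructions on the three stencils.
    q = np.array([(2*v0 - 7*v1 + 11*v2)/6,
                  (-v1 + 5*v2 + 2*v3)/6,
                  (2*v2 + 5*v3 - v4)/6])
    return float(w @ q)

print(weno5_left(0.0, 1.0, 2.0, 3.0, 4.0))   # smooth (linear) data -> exactly 2.5
print(weno5_left(0.0, 0.0, 0.0, 1.0, 1.0))   # discontinuity -> weight shifts to the smooth stencil
```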
Qarri, Flora; Lazo, Pranvera; Bekteshi, Lirim; Stafilov, Trajce; Frontasyeva, Marina; Harmens, Harry
2015-02-01
The atmospheric deposition of heavy metals in Albania was investigated by using a carpet-forming moss species (Hypnum cupressiforme) as bioindicator. Sampling was done in the dry seasons of autumn 2010 and summer 2011. Two different sampling schemes are discussed in this paper: a random sampling scheme with 62 sampling sites distributed over the whole territory of Albania and a systematic sampling scheme with 44 sampling sites distributed over the same territory. Unwashed, dried samples were totally digested by using microwave digestion, and the concentrations of metal elements were determined by inductively coupled plasma atomic emission spectroscopy (ICP-AES) and AAS (Cd and As). Twelve elements, comprising conservative and trace elements (Al and Fe and As, Cd, Cr, Cu, Ni, Mn, Pb, V, Zn, and Li), were measured in moss samples. Li as a typical lithogenic element is also included. The results reflect local emission points. The median concentrations and statistical parameters of elements were discussed by comparing the two sampling schemes. The results of both sampling schemes are compared with the results of other European countries. Different levels of contamination, evaluated by the respective contamination factor (CF) of each element, are obtained for both sampling schemes, while the local emitters identified, such as iron-chromium metallurgy, the cement industry, oil refining, mining, and transport, have been the same for both sampling schemes. In addition, natural sources, i.e., the accumulation of these metals in mosses from metal-enriched soil associated with wind-blown soil, were identified as another possible local emission factor.
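The contamination factor used above is simply the ratio of the measured concentration to a background value, graded against class boundaries; the snippet below shows the arithmetic with invented concentrations and a commonly used four-class scale, which may differ from the exact classification adopted in this study.

```python
# Contamination factor CF = C_measured / C_baseline per element, with a
# commonly used (hypothetical here) four-class scale; all values are placeholders.
measured = {"Pb": 12.0, "Cr": 4.5, "Cd": 0.30}   # mg/kg in moss (illustrative)
baseline = {"Pb":  3.0, "Cr": 1.5, "Cd": 0.10}   # background values (illustrative)

def contamination_class(cf):
    if cf < 1:
        return "no contamination"
    if cf < 3:
        return "low"
    if cf < 6:
        return "moderate"
    return "high"

for element, c in measured.items():
    cf = c / baseline[element]
    print(element, round(cf, 2), contamination_class(cf))
```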
A new 3D maser code applied to flaring events
NASA Astrophysics Data System (ADS)
Gray, M. D.; Mason, L.; Etoka, S.
2018-06-01
We set out the theory and discretization scheme for a new finite-element computer code, written specifically for the simulation of maser sources. The code was used to compute fractional inversions at each node of a 3D domain for a range of optical thicknesses. Saturation behaviour of the nodes with regard to location and optical depth was broadly as expected. We have demonstrated via formal solutions of the radiative transfer equation that the apparent size of the model maser cloud decreases as expected with optical depth as viewed by a distant observer. Simulations of rotation of the cloud allowed the construction of light curves for a number of observable quantities. Rotation of the model cloud may be a reasonable model for quasi-periodic variability, but cannot explain periodic flaring.
Development of an integrated BEM for hot fluid-structure interaction
NASA Technical Reports Server (NTRS)
Banerjee, P. K.; Dargush, G. F.
1989-01-01
The Boundary Element Method (BEM) is chosen as a basic analysis tool principally because the definition of quantities like fluxes, temperature, displacements, and velocities is very precise in a boundary-based discretization scheme. One fundamental difficulty is, of course, that the entire analysis requires a very considerable amount of analytical work which is not present in other numerical methods. During the last 18 months all of this analytical work was completed and a two-dimensional, general purpose code was written. Some of the early results are described. It is anticipated that within the next two to three months almost all two-dimensional idealizations will be examined. It should be noted that the analytical work for the three-dimensional case has also been done and numerical implementation will begin next year.
Optimizing the choice of spin-squeezed states for detecting and characterizing quantum processes
Rozema, Lee A.; Mahler, Dylan H.; Blume-Kohout, Robin; ...
2014-11-07
Quantum metrology uses quantum states with no classical counterpart to measure a physical quantity with extraordinary sensitivity or precision. Most such schemes characterize a dynamical process by probing it with a specially designed quantum state. The success of such a scheme usually relies on the process belonging to a particular one-parameter family. If this assumption is violated, or if the goal is to measure more than one parameter, a different quantum state may perform better. In the most extreme case, we know nothing about the process and wish to learn everything. This requires quantum process tomography, which demands an informationally complete set of probe states. It is very convenient if this set is group covariant—i.e., each element is generated by applying an element of the quantum system's natural symmetry group to a single fixed fiducial state. In this paper, we consider metrology with 2-photon ("biphoton") states and report experimental studies of different states' sensitivity to small, unknown collective SU(2) rotations ["SU(2) jitter"]. Maximally entangled N00N states are the most sensitive detectors of such a rotation, yet they are also among the worst at fully characterizing an a priori unknown process. We identify (and confirm experimentally) the best SU(2)-covariant set for process tomography; these states are all less entangled than the N00N state, and are characterized by the fact that they form a 2-design.
Enhancing Least-Squares Finite Element Methods Through a Quantity-of-Interest
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaudhry, Jehanzeb Hameed; Cyr, Eric C.; Liu, Kuo
2014-12-18
Here, we introduce an approach that augments least-squares finite element formulations with user-specified quantities-of-interest. The method incorporates the quantity-of-interest into the least-squares functional and inherits the global approximation properties of the standard formulation as well as increased resolution of the quantity-of-interest. We establish theoretical properties such as optimality and enhanced convergence under a set of general assumptions. Central to the approach is that it offers an element-level estimate of the error in the quantity-of-interest. As a result, we introduce an adaptive approach that yields efficient, adaptively refined approximations. Several numerical experiments for a range of situations are presented to support the theory and highlight the effectiveness of our methodology. Notably, the results show that the new approach is effective at improving the accuracy per total computational cost.
Stabilized Finite Elements in FUN3D
NASA Technical Reports Server (NTRS)
Anderson, W. Kyle; Newman, James C.; Karman, Steve L.
2017-01-01
A Streamline Upwind Petrov-Galerkin (SUPG) stabilized finite-element discretization has been implemented as a library into the FUN3D unstructured-grid flow solver. Motivation for the selection of this methodology is given, details of the implementation are provided, and the discretization for the interior scheme is verified for linear and quadratic elements by using the method of manufactured solutions. A methodology is also described for capturing shocks, and simulation results are compared to the finite-volume formulation that is currently the primary method employed for routine engineering applications. The finite-element methodology is demonstrated to be more accurate than the finite-volume technology, particularly on tetrahedral meshes where the solutions obtained using the finite-volume scheme can suffer from adverse effects caused by bias in the grid. Although no effort has been made to date to optimize computational efficiency, the finite-element scheme is competitive with the finite-volume scheme in terms of computer time to reach convergence.
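For readers unfamiliar with SUPG, the stabilization adds to the Galerkin statement an element-wise term that tests the residual against a streamline-oriented perturbation of the test function; schematically (a generic statement of the method, not the specific discretization implemented in FUN3D):

```latex
\int_{\Omega} w\, R(u_h)\,d\Omega
  \;+\; \sum_{e}\int_{\Omega_e} \tau_e\,\bigl(\mathbf{a}\cdot\nabla w\bigr)\,R(u_h)\,d\Omega
  \;=\; 0 \qquad \forall\, w \in V_h ,
```

where R(u_h) is the strong-form residual, a the convective velocity, and τ_e a local stabilization parameter. Because the added term vanishes with the residual, the formulation remains consistent and retains the design order of accuracy.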
Speciated Elemental and Isotopic Characterization of Atmospheric Aerosols - Recent Advances
NASA Astrophysics Data System (ADS)
Shafer, M.; Majestic, B.; Schauer, J.
2007-12-01
Detailed elemental, isotopic, and chemical speciation analysis of aerosol particulate matter (PM) can provide valuable information on PM sources, atmospheric processing, and climate forcing. Certain PM sources may best be resolved using trace metal signatures, and elemental and isotopic fingerprints can supplement and enhance molecular marker analysis of PM for source apportionment modeling. In the search for toxicologically relevant components of PM, health studies are increasingly demanding more comprehensive characterization schemes. It is also clear that total metal analysis is at best a poor surrogate for the bioavailable component, and analytical techniques that address the labile component or specific chemical species are needed. Recent sampling and analytical developments advanced by the project team have facilitated comprehensive characterization of even very small masses of atmospheric PM. Historically, this level of detail was rarely achieved due to limitations in analytical sensitivity and a lack of awareness concerning the potential for contamination. These advances have enabled the coupling of advanced chemical characterization to vital field sampling approaches that typically supply only very limited PM mass; e.g. (1) particle size-resolved sampling; (2) personal sampler collections; and (3) fine temporal scale sampling. The analytical tools that our research group is applying include: (1) sector field (high-resolution-HR) ICP-MS, (2) liquid waveguide long-path spectrophotometry (LWG-LPS), and (3) synchrotron x-ray absorption spectroscopy (sXAS). When coupled with an efficient and validated solubilization method, the HR-ICP-MS can provide quantitative elemental information on over 50 elements in microgram quantities of PM. The high mass resolution and enhanced signal-to-noise of HR-ICP-MS significantly advance data quality and quantity over that possible with traditional quadrupole ICP-MS. The LWG-LPS system enables an assessment of the soluble/labile components of PM, while simultaneously providing critical oxidation state speciation data. Importantly, the LWG-LPS can be deployed in a semi-real-time configuration to probe fine temporal scale variations in atmospheric processing or sources of PM. The sXAS is providing complementary oxidation state speciation of bulk PM. Using examples from our research, we will illustrate the capabilities and applications of these new methods.
Zhou, Wei; Feng, Chuqiao; Liu, Xinghong; Liu, Shuhua; Zhang, Chao; Yuan, Wei
2016-01-01
This work is a contrastive investigation of numerical simulations to improve the comprehension of thermo-structural coupled phenomena of mass concrete structures during construction. The finite element (FE) analysis of thermo-structural behaviors is used to investigate the applicability of supersulfated cement (SSC) in mass concrete structures. A multi-scale framework based on a homogenization scheme is adopted in the parameter studies to describe the nonlinear concrete behaviors. Based on the experimental data of hydration heat evolution rate and quantity of SSC and fly ash Portland cement, the hydration properties of various cements are studied. Simulations are run on a concrete dam section with a conventional method and a chemo-thermo-mechanical coupled method. The results show that SSC is more suitable for mass concrete structures from the standpoint of temperature control and crack prevention. PMID:28773517
Macro- and microelement distribution in organs of Glyceria maxima and biomonitoring applications.
Klink, Agnieszka; Stankiewicz, Andrzej; Wisłocka, Magdalena; Polechońska, Ludmiła
2014-07-01
The content of nutrients (N, P, K, Ca and Mg) and of trace metals (Fe, Cu, Mn, Zn, Pb, Cd, Co and Ni) in water, bottom sediments and various organs of Glyceria maxima from 19 study sites selected in the Jeziorka River was determined. In general, the concentrations of nutrients recorded in the plant material decreased in the following order: leaf>root>rhizome>stem, while the concentrations of the trace elements showed the following accumulation scheme: root>rhizome>leaf>stem. The bioaccumulation and transfer factors for nutrients were significantly higher than for trace metals. G. maxima from agricultural fields was characterised by the highest P and K concentrations in leaves, and plants from forested land contained high Zn and Ni amounts. However, the manna grass from small localities showed high accumulation of Ca, Mg and Mn. Positive significant correlations between Fe, Cu, Zn, Cd, Co and Ni concentrations in water or sediments and their concentrations in plant indicate that G. maxima may be employed as a biomonitor of trace element contamination. Moreover, a high degree of similarity was noted between self-organizing feature map (SOFM)-grouped sites of comparable quantities of elements in the water and sediments and sites where G. maxima had a corresponding content of the same elements in its leaves. Therefore, SOFM could be recommended in analysing ecological conditions of the environment from the perspective of nutrients and trace element content in different plant species and their surroundings.
New Imaging Operation Scheme at VLTI
NASA Astrophysics Data System (ADS)
Haubois, Xavier
2018-04-01
After PIONIER and GRAVITY, MATISSE will soon complete the set of 4 telescope beam combiners at VLTI. Together with recent developments in the image reconstruction algorithms, the VLTI aims to develop its operation scheme to allow optimized and adaptive UV plane coverage. The combination of spectro-imaging instruments, optimized operation framework and image reconstruction algorithms should lead to an increase of the reliability and quantity of the interferometric images. In this contribution, I will present the status of this new scheme as well as possible synergies with other instruments.
A New Eddy Dissipation Rate Formulation for the Terminal Area PBL Prediction System(TAPPS)
NASA Technical Reports Server (NTRS)
Charney, Joseph J.; Kaplan, Michael L.; Lin, Yuh-Lang; Pfeiffer, Karl D.
2000-01-01
The TAPPS employs the MASS model to produce mesoscale atmospheric simulations in support of the Wake Vortex project at Dallas Fort-Worth International Airport (DFW). A post-processing scheme uses the simulated three-dimensional atmospheric characteristics in the planetary boundary layer (PBL) to calculate the turbulence quantities most important to the dissipation of vortices: turbulent kinetic energy and eddy dissipation rate. TAPPS will ultimately be employed to enhance terminal area productivity by providing weather forecasts for the Aircraft Vortex Spacing System (AVOSS). The post-processing scheme utilizes experimental data and similarity theory to determine the turbulence quantities from the simulated horizontal wind field and stability characteristics of the atmosphere. Characteristic PBL quantities important to these calculations are determined based on formulations from the Blackadar PBL parameterization, which is regularly employed in the MASS model to account for PBL processes in mesoscale simulations. The TAPPS forecasts are verified against high-resolution observations of the horizontal winds at DFW. Statistical assessments of the error in the wind forecasts suggest that TAPPS captures the essential features of the horizontal winds with considerable skill. Additionally, the turbulence quantities produced by the post-processor are shown to compare favorably with corresponding tower observations.
A New Linearized Crank-Nicolson Mixed Element Scheme for the Extended Fisher-Kolmogorov Equation
Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei
2013-01-01
We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then treat one second-order equation with a standard finite element method and handle the other with a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space, taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we obtain optimal a priori error estimates in the L² and H¹ norms for both the scalar unknown u and the diffusion term w = −Δu, and a priori error estimates in the (L²)² norm for its gradient χ = ∇u, for both the semi-discrete and fully discrete schemes. PMID:23864831
A new linearized Crank-Nicolson mixed element scheme for the extended Fisher-Kolmogorov equation.
Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei; Liu, Yang
2013-01-01
We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then treat one second-order equation with a standard finite element method and handle the other with a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space, taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we obtain optimal a priori error estimates in the L² and H¹ norms for both the scalar unknown u and the diffusion term w = -Δu, and a priori error estimates in the (L²)² norm for its gradient χ = ∇u, for both the semi-discrete and fully discrete schemes.
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2005-01-01
As part of the continuous development of the space-time conservation element and solution element (CE-SE) method, a set of so-called "Courant number insensitive schemes" has recently been proposed. The key advantage of these new schemes is that the numerical dissipation associated with them generally does not increase as the Courant number decreases. As such, they can be applied to problems with large Courant number disparities (such as what commonly occurs in Navier-Stokes problems) without incurring excessive numerical dissipation.
Adaptive implicit-explicit and parallel element-by-element iteration schemes
NASA Technical Reports Server (NTRS)
Tezduyar, T. E.; Liou, J.; Nguyen, T.; Poole, S.
1989-01-01
Adaptive implicit-explicit (AIE) and grouped element-by-element (GEBE) iteration schemes are presented for the finite element solution of large-scale problems in computational mechanics and physics. The AIE approach is based on the dynamic arrangement of the elements into differently treated groups. The GEBE procedure, which is a way of rewriting the EBE formulation to make its parallel processing potential and implementation more clear, is based on the static arrangement of the elements into groups with no inter-element coupling within each group. Various numerical tests performed demonstrate the savings in the CPU time and memory.
Fourier analysis of finite element preconditioned collocation schemes
NASA Technical Reports Server (NTRS)
Deville, Michel O.; Mund, Ernest H.
1990-01-01
The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions of the eigenvalues are obtained with the use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.
Mahfuz, Mohammad Upal
2016-10-01
In this paper, the expressions of achievable strength-based detection probabilities of concentration-encoded molecular communication (CEMC) system have been derived based on finite pulsewidth (FP) pulse-amplitude modulated (PAM) on-off keying (OOK) modulation scheme and strength threshold. An FP-PAM system is characterized by its duty cycle α that indicates the fraction of the entire symbol duration the transmitter remains on and transmits the signal. Results show that the detection performance of an FP-PAM OOK CEMC system significantly depends on the statistical distribution parameters of diffusion-based propagation noise and intersymbol interference (ISI). Analytical detection performance of an FP-PAM OOK CEMC system under ISI scenario has been explained and compared based on receiver operating characteristics (ROC) for impulse (i.e., spike)-modulated (IM) and FP-PAM CEMC schemes. It is shown that the effects of diffusion noise and ISI on ROC can be explained separately based on their communication range-dependent statistics. With full duty cycle, an FP-PAM scheme provides significantly worse performance than an IM scheme. The paper also analyzes the performance of the system when duty cycle, transmission data rate, and quantity of molecules vary.
NASA Technical Reports Server (NTRS)
Chang, S. C.; Wang, X. Y.; Chow, C. Y.; Himansu, A.
1995-01-01
The method of space-time conservation element and solution element is a nontraditional numerical method designed from a physicist's perspective, i.e., its development is based more on physics than numerics. It uses only the simplest approximation techniques and yet is capable of generating nearly perfect solutions for a 2-D shock reflection problem used by Helen Yee and others. In addition to providing an overall view of the new method, we introduce a new concept in the design of implicit schemes, and use it to construct a highly accurate solver for a convection-diffusion equation. It is shown that, in the inviscid case, this new scheme becomes explicit and its amplification factors are identical to those of the Leapfrog scheme. On the other hand, in the pure diffusion case, its principal amplification factor becomes the amplification factor of the Crank-Nicolson scheme.
NASA Astrophysics Data System (ADS)
Roşu, M. M.; Tarbă, C. I.; Neagu, C.
2016-11-01
Current inventory management models are complementary; together they offer a broad palette of elements for solving the complex problems companies face when establishing the optimum economic order quantity for unfinished products, raw materials, goods, etc. The main objective of this paper is to elaborate an automated decision model for calculating the economic order quantity that takes into account regressive price rates on the total order quantity. This model has two main objectives: first, to determine the ordering periodicity n or the order quantity q; second, to determine the stock levels (alert/reorder level, safety stock, etc.). In this way we can answer two fundamental questions: How much should be ordered? When should it be ordered? In current practice, a business's relationships with its suppliers are based on regressive price rates, meaning that suppliers may grant discounts above certain ordered quantities. Thus, the unit price of the products is a variable that depends on the order size. The most important element in choosing the optimum economic order quantity is therefore the total ordering cost, which depends on the following elements: the average unit price, the stock-holding cost, the ordering cost, etc.
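As a concrete illustration of this decision logic, the sketch below computes an economic order quantity under an all-units quantity-discount (regressive price) schedule: for each price break the classical EOQ is computed, adjusted to the feasible quantity range, and the total annual cost (purchase + ordering + holding) is compared. The demand, cost figures, and price breaks are hypothetical, and the all-units discount rule is an assumption about the paper's pricing structure.

```python
import math

def eoq_with_discounts(demand, order_cost, holding_rate, price_breaks):
    """All-units quantity-discount EOQ.

    demand        -- annual demand (units/year)
    order_cost    -- fixed cost per order
    holding_rate  -- annual holding cost as a fraction of unit price
    price_breaks  -- list of (min_qty, unit_price), sorted by min_qty ascending
    Returns (best_qty, best_total_cost).
    """
    best = None
    for i, (min_qty, price) in enumerate(price_breaks):
        # upper end of this price band (open-ended for the last band)
        max_qty = price_breaks[i + 1][0] - 1 if i + 1 < len(price_breaks) else None
        q = math.sqrt(2 * demand * order_cost / (holding_rate * price))  # classical EOQ
        q = max(q, min_qty)                    # push up to the band if EOQ falls below it
        if max_qty is not None:
            q = min(q, max_qty)
        total = (demand * price                       # purchase cost
                 + demand / q * order_cost            # ordering cost
                 + q / 2 * holding_rate * price)      # holding cost
        if best is None or total < best[1]:
            best = (q, total)
    return best

# Hypothetical data: 12,000 units/year, 50 per order, holding rate 20% of unit price.
breaks = [(1, 10.00), (500, 9.50), (2000, 9.00)]   # all-units price schedule (assumed)
qty, cost = eoq_with_discounts(12000, 50.0, 0.20, breaks)
print(f"order quantity ~{qty:.0f} units, total annual cost ~{cost:.0f}")
```

In this example the deepest discount band wins even though its minimum quantity exceeds the unconstrained EOQ, which is exactly the trade-off the decision model has to automate.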
Numerical Methods Using B-Splines
NASA Technical Reports Server (NTRS)
Shariff, Karim; Merriam, Marshal (Technical Monitor)
1997-01-01
The seminar will discuss (1) The current range of applications for which B-spline schemes may be appropriate (2) The property of high-resolution and the relationship between B-spline and compact schemes (3) Comparison between finite-element, Hermite finite element and B-spline schemes (4) Mesh embedding using B-splines (5) A method for the incompressible Navier-Stokes equations in curvilinear coordinates using divergence-free expansions.
Application of holographic elements in displays and planar illuminators
NASA Astrophysics Data System (ADS)
Putilin, Andrew; Gustomiasov, Igor
2007-05-01
Holographic optical elements (HOEs) on planar waveguides can be used to design planar optics for backlight units, color selectors or filters, and lenses for virtual reality displays. Several schemes for HOE recording are proposed to obtain a planar stereo backlight unit and a light source for private-eye displays. It is shown in the paper that a specific light-transformation grating makes it possible to construct efficient backlight units for display holograms and LCDs. Several schemes of reflection/transmission backlight units and scattering films based on holographic optical elements are also proposed. The performance of the waveguide HOE can be optimized using the parameters of the recording scheme and the etching parameters. Schemes of HOE application are discussed and some experimental results are shown.
Mobile phone collection, reuse and recycling in the UK.
Ongondo, F O; Williams, I D
2011-06-01
Mobile phones are the most ubiquitous electronic product on the globe. They have relatively short lifecycles and because of their (perceived) in-built obsolescence, discarded mobile phones represent a significant and growing problem with respect to waste electrical and electronic equipment (WEEE). An emerging and increasingly important issue for industry is the shortage of key metals, especially the types of metals found in mobile phones, and hence the primary aim of this timely study was to assess and evaluate the voluntary mobile phone takeback network in the UK. The study has characterised the information, product and incentives flows in the voluntary UK mobile phone takeback network and reviewed the merits and demerits of the incentives offered. A survey of the activities of the voluntary mobile phone takeback schemes was undertaken in 2008 to: identify and evaluate the takeback schemes operating in the UK; determine the target groups from whom handsets are collected; and assess the collection, promotion and advertising methods used by the schemes. In addition, the survey sought to identify and critically evaluate the incentives offered by the takeback schemes, evaluate their ease and convenience of use; and determine the types, qualities and quantities of mobile phones they collect. The study has established that the UK voluntary mobile phone takeback network can be characterised as three distinctive flows: information flow; product flow (handsets and related accessories); and incentives flow. Over 100 voluntary schemes offering online takeback of mobile phone handsets were identified. The schemes are operated by manufacturers, retailers, mobile phone network service operators, charities and by mobile phone reuse, recycling and refurbishing companies. The latter two scheme categories offer the highest level of convenience and ease of use to their customers. Approximately 83% of the schemes are for-profit/commercial-oriented and/or operate to raise funds for charities. The voluntary schemes use various methods to collect mobile phones from consumers, including postal services, courier and in-store. The majority of schemes utilise and finance pre-paid postage to collect handsets. Incentives offered by the takeback schemes include monetary payments, donation to charity and entry into prize draws. Consumers from whom handsets and related equipment are collected include individuals, businesses, schools, colleges, universities, charities and clubs, with some schemes specialising in collecting handsets from one target group. The majority (84.3%) of voluntary schemes did not provide information on their websites about the quantities of mobile phones they collect. The operations of UK takeback schemes are decentralised in nature. Comparisons are made between the UK's decentralised collection system versus Australia's centralised network for collection of mobile phones. The principal conclusions from the study are: there has been a significant rise in the number of takeback schemes operating in the UK since the initial scheme was launched in 1997; the majority of returned handsets seem to be of low quality; and there is very little available information on the quantities of mobile phones collected by the various schemes. Irrespective of their financial motives, UK takeback schemes increasingly play an important role in sustainable waste management by diverting EoL mobile phones from landfills and encouraging reuse and recycling.
Recommendations for future actions to improve the management of end-of-life mobile phone handsets and related accessories are made. Copyright © 2011 Elsevier Ltd. All rights reserved.
Parallel discrete-event simulation schemes with heterogeneous processing elements.
Kim, Yup; Kwon, Ikhyun; Chae, Huiseung; Yook, Soon-Hyung
2014-07-01
To understand the effects of nonidentical processing elements (PEs) on parallel discrete-event simulation (PDES) schemes, two stochastic growth models, the restricted solid-on-solid (RSOS) model and the Family model, are investigated by simulations. The RSOS model is the model for the PDES scheme governed by the Kardar-Parisi-Zhang equation (KPZ scheme). The Family model is the model for the scheme governed by the Edwards-Wilkinson equation (EW scheme). Two kinds of distributions for nonidentical PEs are considered. In the first kind computing capacities of PEs are not much different, whereas in the second kind the capacities are extremely widespread. The KPZ scheme on the complex networks shows the synchronizability and scalability regardless of the kinds of PEs. The EW scheme never shows the synchronizability for the random configuration of PEs of the first kind. However, by regularizing the arrangement of PEs of the first kind, the EW scheme is made to show the synchronizability. In contrast, EW scheme never shows the synchronizability for any configuration of PEs of the second kind.
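The growth models above are models of the virtual-time horizons of PDES schemes. As a hedged illustration, the sketch below simulates the basic conservative update rule on a ring of nonidentical PEs: a PE advances its local virtual time by an exponentially distributed increment only when it is not ahead of its neighbours, and the average utilization (fraction of PEs allowed to update) is tracked. The ring topology, lognormal capacity distribution, and update rule are assumptions for illustration; the paper's complex-network topologies and specific capacity distributions are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pe = 1000                                   # processing elements on a ring
# heterogeneous computing capacities: mean time per event differs across PEs (assumed lognormal)
mean_step = rng.lognormal(mean=0.0, sigma=0.3, size=n_pe)
tau = np.zeros(n_pe)                          # local virtual times (the "surface")

utilization = []
for _ in range(2000):
    left, right = np.roll(tau, 1), np.roll(tau, -1)
    can_update = (tau <= left) & (tau <= right)          # conservative rule: local minima only
    # eligible PEs advance by exponential increments scaled by their capacity
    tau = np.where(can_update, tau + rng.exponential(mean_step), tau)
    utilization.append(can_update.mean())

print(f"mean utilization: {np.mean(utilization[500:]):.3f}")
print(f"virtual-time-horizon width: {tau.std():.2f}")
```

The utilization measures scalability, while the width of the virtual-time horizon is the quantity whose growth distinguishes KPZ-like from EW-like behaviour in studies of this kind.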
NASA Astrophysics Data System (ADS)
Setiawan, R.
2018-05-01
In this paper, the Economic Order Quantity (EOQ) of a vendor-buyer supply-chain model under a probabilistic condition with imperfect-quality items is analysed. The analysis is carried out using two concepts from the game theory approach, Stackelberg equilibrium and Pareto optimality, under non-cooperative and cooperative games, respectively. A further result is a comparison of the optimal results of the integrated scheme and the game theory approach, based on analytical and numerical results using appropriate simulation data.
Boundary states at reflective moving boundaries
NASA Astrophysics Data System (ADS)
Acosta Minoli, Cesar A.; Kopriva, David A.
2012-06-01
We derive and evaluate boundary states for Maxwell's equations, the linear, and the nonlinear Euler gas-dynamics equations to compute wave reflection from moving boundaries. In this study we use a Discontinuous Galerkin Spectral Element method (DGSEM) with Arbitrary Lagrangian-Eulerian (ALE) mapping for the spatial approximation, but the boundary states can be used with other methods, like finite volume schemes. We present four studies using Maxwell's equations, one for the linear Euler equations, and one more for the nonlinear Euler equations. These are: reflection of light from a plane mirror moving at constant velocity, reflection of light from a moving cylinder, reflection of light from a vibrating mirror, reflection of sound from a plane wall and dipole sound generation by an oscillating cylinder in an inviscid flow. The studies show that the boundary states preserve spectral convergence in the solution and in derived quantities like divergence and vorticity.
Kajita, Seiji; Ohba, Nobuko; Jinnouchi, Ryosuke; Asahi, Ryoji
2017-12-05
Materials informatics (MI), driven by machine-learning algorithms, is a promising approach to liberate us from the time-consuming Edisonian (trial and error) process of materials discovery. Several descriptors, which encode material features to feed computers, have been proposed in the last few decades. For solid systems in particular, however, their insufficient representation of the three-dimensionality of field quantities such as electron distributions and local potentials has critically hindered broad and practical success of solid-state MI. We develop a simple, generic 3D voxel descriptor that compacts any field quantity into a form suitable for convolutional neural networks (CNNs). We examine the 3D voxel descriptor encoded from the electron distribution in a regression test on data for 680 oxides. The present scheme outperforms other existing descriptors in predicting Hartree energies, which are strongly related to the long-wavelength distribution of the valence electrons. The results indicate that this scheme can forecast any functional of field quantities simply by learning a sufficient amount of data, provided there is an explicit correlation between the target properties and the field quantities. This 3D descriptor opens a way to import prominent CNN-based algorithms for supervised, semi-supervised and reinforcement learning into solid-state MI.
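To make the descriptor idea concrete, the sketch below bins a 3D field quantity (here a synthetic electron-density-like field) into a fixed-size voxel grid and normalizes it, producing the kind of channel tensor a 3D convolutional network consumes. The grid sizes, block-averaging rule, normalization, and synthetic field are illustrative assumptions; the paper's actual encoding and CNN architecture are not reproduced here.

```python
import numpy as np

def voxelize(field, out_shape=(32, 32, 32)):
    """Average-pool a 3D field onto a fixed voxel grid and normalize it.

    Assumes each native grid dimension is an integer multiple of the target
    voxel resolution (a simplifying assumption for this sketch).
    """
    nx, ny, nz = field.shape
    ox, oy, oz = out_shape
    fx, fy, fz = nx // ox, ny // oy, nz // oz
    vox = field.reshape(ox, fx, oy, fy, oz, fz).mean(axis=(1, 3, 5))  # block averaging
    vox = (vox - vox.mean()) / (vox.std() + 1e-12)                    # per-sample normalization
    return vox[None, ...]                                             # channel axis for a 3D CNN

# synthetic "electron density": two Gaussian blobs on a 96^3 grid (placeholder data)
g = np.linspace(-1.0, 1.0, 96)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
density = (np.exp(-20 * ((X - 0.3)**2 + Y**2 + Z**2))
           + np.exp(-20 * (X**2 + (Y + 0.4)**2 + Z**2)))

descriptor = voxelize(density)
print(descriptor.shape)   # (1, 32, 32, 32) -- one channel, ready for a 3D CNN
```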
Analysis of 3D poroelastodynamics using BEM based on modified time-step scheme
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Petrov, A. N.; Vorobtsov, I. V.
2017-10-01
The development of 3D boundary element modeling of dynamic partially saturated poroelastic media using a stepping scheme is presented in this paper. The Boundary Element Method (BEM) in the Laplace domain and a time-stepping scheme for numerical inversion of the Laplace transform are used to solve the boundary value problem. A modified stepping scheme with a varied integration step for computing the quadrature coefficients is applied, using the symmetry of the integrand and integral formulas for strongly oscillating functions. The problem of a force acting on the end of a poroelastic prismatic cantilever is solved using the developed method. A comparison of the results obtained by the traditional stepping scheme with the solutions obtained by this modified scheme shows that the computational efficiency is better when the combined formulas are used.
NASA Technical Reports Server (NTRS)
Gong, Jian; Volakis, John L.; Nurnberger, Michael W.
1995-01-01
This semi-annual report describes progress up to mid-January 1995. The report contains five sections all dealing with the modeling of spiral and patch antennas recessed in metallic platforms. Of significance is the development of decomposition schemes which separate the different regions of the antenna volume. Substantial effort was devoted to improving the feed model in the context of the finite element method (FEM). Finally, an innovative scheme for truncating finite element meshes is presented.
Structure and Fabrication of a Microscale Flow-Rate/Skin Friction Sensor
NASA Technical Reports Server (NTRS)
Chandrasekharan, Vijay (Inventor); Sells, Jeremy (Inventor); Sheplak, Mark (Inventor); Arnold, David P. (Inventor)
2014-01-01
A floating element shear sensor and method for fabricating the same are provided. According to an embodiment, a microelectromechanical systems (MEMS)-based capacitive floating element shear stress sensor is provided that can achieve time-resolved turbulence measurement. In one embodiment, a differential capacitive transduction scheme is used for shear stress measurement. The floating element structure for the differential capacitive transduction scheme incorporates interdigitated comb fingers forming differential capacitors, which provide electrical output proportional to the floating element deflection.
A Summary of the Space-Time Conservation Element and Solution Element (CESE) Method
NASA Technical Reports Server (NTRS)
Wang, Xiao-Yen J.
2015-01-01
The space-time Conservation Element and Solution Element (CESE) method for solving conservation laws is examined for its development motivation and design requirements. The characteristics of the resulting scheme are discussed. The discretization of the Euler equations is presented to show readers how to construct a scheme based on the CESE method. The differences and similarities between the CESE method and other traditional methods are discussed. The strengths and weaknesses of the method are also addressed.
NASA Astrophysics Data System (ADS)
Dutta, Sourav; Daripa, Prabir
2015-11-01
Surfactant-polymer flooding is a widely used method of chemical enhanced oil recovery (EOR) in which an array of complex fluids containing suitable and varying amounts of surfactant or polymer or both mixed with water is injected into the reservoir. This is an example of multiphase, multicomponent and multiphysics porous media flow which is characterized by the spontaneous formation of complex viscous fingering patterns and is modeled by a system of strongly coupled nonlinear partial differential equations with appropriate initial and boundary conditions. Here we propose and discuss a modern, hybrid method based on a combination of a discontinuous, multiscale finite element formulation and the method of characteristics to accurately solve the system. Several types of flooding schemes and rheological properties of the injected fluids are used to numerically study the effectiveness of various injection policies in minimizing the viscous fingering and maximizing oil recovery. Numerical simulations are also performed to investigate the effect of various other physical and model parameters such as heterogeneity, relative permeability and residual saturation on the quantities of interest like cumulative oil recovery, sweep efficiency, fingering intensity to name a few. Supported by the grant NPRP 08-777-1-141 from the Qatar National Research Fund (a member of The Qatar Foundation).
Sensing temperature via downshifting emissions of lanthanide-doped metal oxides and salts. A review
NASA Astrophysics Data System (ADS)
Dramićanin, Miroslav D.
2016-12-01
Temperature is important because it has an effect on even the tiniest elements of daily life and is involved in a broad spectrum of human activities. That is why it is the most commonly measured physical quantity. Traditional temperature measurements encounter difficulties when used in some emerging technologies and environments, such as nanotechnology and biomedicine. The problem may be alleviated using optical techniques, one of which is luminescence thermometry. This paper reviews the state of luminescence thermometry and presents different temperature read-out schemes with an emphasis on those utilizing the downshifting emission of lanthanide-doped metal oxides and salts. The read-out schemes for temperature include those based on measurements of spectral characteristics of luminescence (band positions and shapes, emission intensity and ratio of emission intensities), and those based on measurements of the temporal behavior of luminescence (lifetimes and rise times). This review (with 140 references) gives the basics of the fundamental principles and theory that underlie the methods presented, and describes the methodology for the estimation of their performance. The major part of the text is devoted to those lanthanide-doped metal oxides and salts that are used as temperature probes, and to the comparison of their performance and characteristics.
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
1995-01-01
A new numerical framework for solving conservation laws is being developed. This new framework differs substantially in both concept and methodology from the well-established methods, i.e., finite difference, finite volume, finite element, and spectral methods. It is conceptually simple and designed to overcome several key limitations of the above traditional methods. A two-level scheme for solving the convection-diffusion equation is constructed and used to illuminate the major differences between the present method and those previously mentioned. This explicit scheme, referred to as the a-mu scheme, has two independent marching variables.
Hirakawa, Teruo; Suzuki, Teppei; Bowler, David R; Miyazaki, Tsuyoshi
2017-10-11
We discuss the development and implementation of a constant temperature (NVT) molecular dynamics scheme that combines the Nosé-Hoover chain thermostat with the extended Lagrangian Born-Oppenheimer molecular dynamics (BOMD) scheme, using a linear scaling density functional theory (DFT) approach. An integration scheme for this canonical-ensemble extended Lagrangian BOMD is developed and discussed in the context of the Liouville operator formulation. Linear scaling DFT canonical-ensemble extended Lagrangian BOMD simulations are tested on bulk silicon and silicon carbide systems to evaluate our integration scheme. The results show that the conserved quantity remains stable with no systematic drift even in the presence of the thermostat.
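As a much-simplified illustration of thermostatted dynamics (not the paper's linear-scaling DFT or extended-Lagrangian machinery), the sketch below integrates a single Nosé-Hoover thermostat wrapped around a velocity-Verlet core for a set of toy harmonic oscillators. The thermostat mass, time step, target temperature, and model system are assumptions, and a single thermostat stands in for the Nosé-Hoover chain used in the paper (chains improve ergodicity for stiff oscillators).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy system: N independent 1D harmonic oscillators (mass m, spring constant k_spring).
N, m, k_spring = 64, 1.0, 1.0
kT_target = 0.5                       # target k_B*T in reduced units (assumed)
Q = 10.0                              # thermostat "mass" (assumed)
dt, n_steps = 0.01, 20000

x = rng.normal(scale=1.0, size=N)
v = rng.normal(scale=np.sqrt(kT_target / m), size=N)
xi = 0.0                              # thermostat friction variable

def forces(x):
    return -k_spring * x

def kinetic(v):
    return 0.5 * m * np.sum(v**2)

temps = []
f = forces(x)
for step in range(n_steps):
    # half-step thermostat: advance xi, then rescale velocities
    xi += 0.5 * dt * (2.0 * kinetic(v) - N * kT_target) / Q
    v *= np.exp(-0.5 * dt * xi)
    # velocity Verlet core
    v += 0.5 * dt * f / m
    x += dt * v
    f = forces(x)
    v += 0.5 * dt * f / m
    # second half-step thermostat
    v *= np.exp(-0.5 * dt * xi)
    xi += 0.5 * dt * (2.0 * kinetic(v) - N * kT_target) / Q
    temps.append(2.0 * kinetic(v) / N)    # instantaneous kinetic temperature

print(f"target kT = {kT_target}, time-averaged kT = {np.mean(temps[5000:]):.3f}")
```

The relevant check, as in the paper's silicon and silicon carbide tests, is that the conserved extended-system quantity and the time-averaged temperature stay stable without systematic drift.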
Simulations of acoustic waves in channels and phonation in glottal ducts
NASA Astrophysics Data System (ADS)
Yang, Jubiao; Krane, Michael; Zhang, Lucy
2014-11-01
Numerical simulations of acoustic wave propagation were performed by solving compressible Navier-Stokes equations using finite element method. To avoid numerical contamination of acoustic field induced by non-physical reflections at computational boundaries, a Perfectly Matched Layer (PML) scheme was implemented to attenuate the acoustic waves and their reflections near these boundaries. The acoustic simulation was further combined with the simulation of interaction of vocal fold vibration and glottal flow, using our fully-coupled Immersed Finite Element Method (IFEM) approach, to study phonation in the glottal channel. In order to decouple the aeroelastic and aeroacoustic aspects of phonation, the airway duct used has a uniform cross section with PML properly applied. The dynamics of phonation were then studied by computing the terms of the equations of motion for a control volume comprised of the fluid in the vicinity of the vocal folds. It is shown that the principal dynamics is comprised of the near cancellation of the pressure force driving the flow through the glottis, and the aerodynamic drag on the vocal folds. Aeroacoustic source strengths are also presented, estimated from integral quantities computed in the source region, as well as from the radiated acoustic field.
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Steffen, Christopher J., Jr.
1993-01-01
A new flux splitting scheme is proposed. The scheme is remarkably simple and yet its accuracy rivals and in some cases surpasses that of Roe's solver in the Euler and Navier-Stokes solutions performed in this study. The scheme is robust and converges as fast as the Roe splitting. An approximately defined cell-face advection Mach number is proposed using values from the two straddling cells via associated characteristic speeds. This interface Mach number is then used to determine the upwind extrapolation for the convective quantities. Accordingly, the name of the scheme is coined as Advection Upstream Splitting Method (AUSM). A new pressure splitting is introduced which is shown to behave successfully, yielding much smoother results than other existing pressure splittings. Of particular interest is the supersonic blunt body problem in which the Roe scheme gives anomalous solutions. The AUSM produces correct solutions without difficulty for a wide range of flow conditions as well as grids.
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Steffen, Christopher J., Jr.
1991-01-01
A new flux splitting scheme is proposed. The scheme is remarkably simple and yet its accuracy rivals and in some cases surpasses that of Roe's solver in the Euler and Navier-Stokes solutions performed in this study. The scheme is robust and converges as fast as the Roe splitting. An approximately defined cell-face advection Mach number is proposed using values from the two straddling cells via associated characteristic speeds. This interface Mach number is then used to determine the upwind extrapolation for the convective quantities. Accordingly, the name of the scheme is coined as Advection Upstream Splitting Method (AUSM). A new pressure splitting is introduced which is shown to behave successfully, yielding much smoother results than other existing pressure splittings. Of particular interest is the supersonic blunt body problem in which the Roe scheme gives anomalous solutions. The AUSM produces correct solutions without difficulty for a wide range of flow conditions as well as grids.
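A minimal single-interface sketch of the AUSM construction described above (split Mach numbers, a split pressure, and fully upwinded convective quantities) is given below for the 1D Euler equations. The state values in the example are illustrative, and this is a textbook-style rendering of the published splitting rather than the authors' code.

```python
import numpy as np

GAMMA = 1.4

def ausm_flux(rhoL, uL, pL, rhoR, uR, pR):
    """AUSM interface flux for the 1D Euler equations.

    Returns the flux of (rho, rho*u, rho*E) given left/right primitive states,
    using split Mach numbers for the advection speed and a split pressure.
    """
    aL = np.sqrt(GAMMA * pL / rhoL)            # speeds of sound
    aR = np.sqrt(GAMMA * pR / rhoR)
    ML, MR = uL / aL, uR / aR                  # cell Mach numbers
    HL = GAMMA / (GAMMA - 1.0) * pL / rhoL + 0.5 * uL**2   # total enthalpies
    HR = GAMMA / (GAMMA - 1.0) * pR / rhoR + 0.5 * uR**2

    def M_plus(M):
        return 0.25 * (M + 1.0)**2 if abs(M) <= 1.0 else 0.5 * (M + abs(M))
    def M_minus(M):
        return -0.25 * (M - 1.0)**2 if abs(M) <= 1.0 else 0.5 * (M - abs(M))
    def p_plus(M, p):
        return 0.25 * p * (M + 1.0)**2 * (2.0 - M) if abs(M) <= 1.0 else 0.5 * p * (M + abs(M)) / M
    def p_minus(M, p):
        return 0.25 * p * (M - 1.0)**2 * (2.0 + M) if abs(M) <= 1.0 else 0.5 * p * (M - abs(M)) / M

    M_half = M_plus(ML) + M_minus(MR)          # interface advection Mach number
    p_half = p_plus(ML, pL) + p_minus(MR, pR)  # interface pressure

    # convective quantities are taken fully upwind according to the sign of M_half
    PhiL = np.array([rhoL * aL, rhoL * aL * uL, rhoL * aL * HL])
    PhiR = np.array([rhoR * aR, rhoR * aR * uR, rhoR * aR * HR])
    Phi = PhiL if M_half >= 0.0 else PhiR
    return M_half * Phi + np.array([0.0, p_half, 0.0])

# Sod-like interface states (illustrative)
print(ausm_flux(1.0, 0.0, 1.0, 0.125, 0.0, 0.1))
```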
Photonic sensing based on variation of propagation properties of photonic crystal fibres
NASA Astrophysics Data System (ADS)
Rothwell, John H.; Flavin, Dónal A.; MacPherson, William N.; Jones, Julian D.; Knight, Jonathan C.; Russell, Philip St. J.
2006-12-01
We report on a low-coherence interferometric scheme for the measurement of the strain and temperature dependences of group delay and dispersion in short, index-guiding, 'endlessly-single-mode' photonic crystal fibre elements in the 840 nm and 1550 nm regions. Based on the measurements, we propose two schemes for simultaneous strain and temperature measurement using a single unmodified PCF element, without a requirement for any compensating components, and we project the measurement accuracies of these schemes.
Extraction of Xenon Using Enriching Reflux Pressure Swing Adsorption
2010-09-01
collection scheme aimed at preconcentrating xenon without the use of any form of cooling. The collection scheme utilizes activated charcoal (AC), a... collection efficiency for a given trap size. For a given isothermal system, it can be seen that if adsorption occurs at high pressure, where capacity is... activated charcoal at room temperature. These results are presented below and show that these early tests appear very promising and that useful quantities
ADAPTIVE TETRAHEDRAL GRID REFINEMENT AND COARSENING IN MESSAGE-PASSING ENVIRONMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hallberg, J.; Stagg, A.
2000-10-01
A grid refinement and coarsening scheme has been developed for tetrahedral and triangular grid-based calculations in message-passing environments. The element adaption scheme is based on an edge bisection of elements marked for refinement by an appropriate error indicator. Hash-table/linked-list data structures are used to store nodal and element information. The grid along inter-processor boundaries is refined and coarsened consistently with the update of these data structures via MPI calls. The parallel adaption scheme has been applied to the solution of a transient, three-dimensional, nonlinear, groundwater flow problem. Timings indicate efficiency of the grid refinement process relative to the flow solvermore » calculations.« less
Scheme variations of the QCD coupling
NASA Astrophysics Data System (ADS)
Boito, Diogo; Jamin, Matthias; Miravitllas, Ramon
2017-03-01
The Quantum Chromodynamics (QCD) coupling αs is a central parameter in the Standard Model of particle physics. However, it depends on theoretical conventions related to renormalisation and hence is not an observable quantity. In order to capture this dependence in a transparent way, a novel definition of the QCD coupling, denoted by â, is introduced, whose running is explicitly renormalisation scheme invariant. The remaining renormalisation scheme dependence is related to transformations of the QCD scale Λ, and can be parametrised by a single parameter C. Hence, we call â the C-scheme coupling. The dependence on C can be exploited to study and improve perturbative predictions of physical observables. This is demonstrated for the QCD Adler function and hadronic decays of the τ lepton.
Hybrid DG/FV schemes for magnetohydrodynamics and relativistic hydrodynamics
NASA Astrophysics Data System (ADS)
Núñez-de la Rosa, Jonatan; Munz, Claus-Dieter
2018-01-01
This paper presents a high order hybrid discontinuous Galerkin/finite volume scheme for solving the equations of magnetohydrodynamics (MHD) and of relativistic hydrodynamics (SRHD) on quadrilateral meshes. In this approach, for the spatial discretization, an arbitrary high order discontinuous Galerkin spectral element (DG) method is combined with a finite volume (FV) scheme in order to simulate complex flow problems involving strong shocks. Regarding the time discretization, a fourth order strong stability preserving Runge-Kutta method is used. In the proposed hybrid scheme, a shock indicator is computed at the beginning of each Runge-Kutta stage in order to flag those elements containing shock waves or discontinuities. Subsequently, the DG solution in these troubled elements at the current time step is projected onto a subdomain composed of finite volume subcells. The DG operator is then applied to the unflagged elements, which, in principle, are oscillation-free, while the troubled elements are evolved with a robust second/third-order FV operator. With this approach we are able to numerically simulate very challenging problems in the context of MHD and SRHD in one and two space dimensions and with very high order polynomials. We perform convergence tests and present a comprehensive one- and two-dimensional test bench for both equation systems, focusing on problems with strong shocks. The presented hybrid approach shows that numerical schemes of very high order of accuracy are able to simulate these complex flow problems in an efficient and robust manner.
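The key ingredient of such hybrid schemes is the shock (troubled-cell) indicator evaluated at each Runge-Kutta stage. The sketch below shows one simple possibility for 1D data: flag a cell when too large a fraction of its energy sits in the highest Legendre mode. This is a generic modal-decay-type indicator for illustration, with an assumed threshold, not the specific indicator used in the paper.

```python
import numpy as np

def troubled_cell(nodal_values, nodes, threshold=1.0e-3):
    """Flag a DG cell whose highest-mode energy fraction exceeds a threshold.

    nodal_values -- solution values at the cell's nodes (1D array)
    nodes        -- node locations mapped to [-1, 1]
    threshold    -- energy fraction above which the cell is flagged (assumed value)
    """
    p = len(nodes) - 1
    # modal (Legendre) coefficients from the nodal data
    coeffs = np.polynomial.legendre.legfit(nodes, nodal_values, p)
    energy = 2.0 * coeffs**2 / (2.0 * np.arange(p + 1) + 1.0)   # L2 energy per mode on [-1, 1]
    total = energy.sum()
    if total < 1.0e-14:
        return False
    return energy[-1] / total > threshold                        # top-mode energy fraction

# smooth data vs. data containing a jump, on 5 symmetric nodes (illustrative)
nodes = np.cos(np.pi * np.arange(5)[::-1] / 4)                   # 5 nodes in [-1, 1]
smooth = np.sin(0.5 * np.pi * nodes)
shocked = np.where(nodes < 0.3, 0.0, 1.0)
print(troubled_cell(smooth, nodes), troubled_cell(shocked, nodes))   # expect: False True
```

Cells returning True would be handed to the FV subcell operator, the rest to the unlimited DG operator, mirroring the flagging step described in the abstract.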
Absence of even-integer ζ-function values in Euclidean physical quantities in QCD
NASA Astrophysics Data System (ADS)
Jamin, Matthias; Miravitllas, Ramon
2018-04-01
At order αs^4 in perturbative quantum chromodynamics, even-integer ζ-function values are present in Euclidean physical correlation functions like the scalar quark correlation function or the scalar gluonium correlator. We demonstrate that these contributions cancel when the perturbative expansion is expressed in terms of the so-called C-scheme coupling αˆs which has recently been introduced in Ref. [1]. It is furthermore conjectured that a ζ4 term should arise in the Adler function at order αs^5 in the MS-bar scheme, and that this term is expected to disappear in the C-scheme as well.
Du, Bing; Liu, Aimin; Huang, Yeru
2014-09-01
Polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in soil samples were analyzed by the isotope dilution method with high resolution gas chromatography and high resolution mass spectrometry (ID-HRGC/HRMS), and the toxic equivalent quantity (TEQ) was calculated. The impacts of the major sources of measurement uncertainty are discussed, and the combined relative standard uncertainties were calculated for each 2,3,7,8-substituted congener. Furthermore, the concentration, combined uncertainty and expanded uncertainty for the TEQ of PCDD/Fs in a soil sample under the I-TEF, WHO-1998-TEF and WHO-2005-TEF schemes are provided as an example. I-TEF, WHO-1998-TEF and WHO-2005-TEF are evaluation schemes for the toxic equivalency factor (TEF), and all are currently used to describe the relative potencies of 2,3,7,8-substituted congeners.
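The TEQ calculation itself is a weighted sum: each 2,3,7,8-substituted congener concentration is multiplied by its toxic equivalency factor in the chosen scheme, and the products are summed. The sketch below shows this for a tiny subset of congeners; the concentrations are made up and the TEF values are illustrative placeholders that should be checked against the official I-TEF/WHO tables before any real use.

```python
# Toxic equivalent quantity: TEQ = sum_i (concentration_i * TEF_i) over the
# 2,3,7,8-substituted congeners. Concentrations below are invented and the
# TEF values are illustrative placeholders -- verify against the official
# I-TEF, WHO-1998 and WHO-2005 tables before use.

concentrations_pg_g = {          # pg/g dry weight in a soil sample (made up)
    "2,3,7,8-TCDD": 0.8,
    "1,2,3,7,8-PeCDD": 1.5,
    "2,3,7,8-TCDF": 4.2,
    "2,3,4,7,8-PeCDF": 2.1,
}

tef_schemes = {                  # placeholder factors for the three schemes
    "I-TEF":    {"2,3,7,8-TCDD": 1.0, "1,2,3,7,8-PeCDD": 0.5, "2,3,7,8-TCDF": 0.1, "2,3,4,7,8-PeCDF": 0.5},
    "WHO-1998": {"2,3,7,8-TCDD": 1.0, "1,2,3,7,8-PeCDD": 1.0, "2,3,7,8-TCDF": 0.1, "2,3,4,7,8-PeCDF": 0.5},
    "WHO-2005": {"2,3,7,8-TCDD": 1.0, "1,2,3,7,8-PeCDD": 1.0, "2,3,7,8-TCDF": 0.1, "2,3,4,7,8-PeCDF": 0.3},
}

for scheme, tef in tef_schemes.items():
    teq = sum(concentrations_pg_g[c] * tef[c] for c in concentrations_pg_g)
    print(f"{scheme}: TEQ = {teq:.2f} pg TEQ/g")
```

Because the TEQ is a linear combination, its combined uncertainty follows from the congener uncertainties by standard propagation, which is why the scheme choice changes both the TEQ value and its expanded uncertainty.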
Robyn, Paul Jacob; Sauerborn, Rainer; Bärnighausen, Till
2013-01-01
Objectives: Community-based health insurance (CBI) is a common mechanism to generate financial resources for health care in developing countries. We review for the first time provider payment methods used in CBI in developing countries and their impact on CBI performance. Methods: We conducted a systematic review of the literature on provider payment methods used by CBI in developing countries published up to January 2010. Results: Information on provider payment was available for a total of 32 CBI schemes in 34 reviewed publications: 17 schemes in South Asia, 10 in sub-Saharan Africa, 4 in East Asia and 1 in Latin America. Various types of provider payment were applied by the CBI schemes: 17 used fee-for-service, 12 used salaries, 9 applied a coverage ceiling, 7 used capitation and 6 applied a co-insurance. The evidence suggests that provider payment impacts CBI performance through provider participation and support for CBI, population enrolment and patient satisfaction with CBI, quantity and quality of services provided and provider and patient retention. Lack of provider participation in designing and choosing a CBI payment method can lead to reduced provider support for the scheme. Conclusion: CBI schemes in developing countries have used a wide range of provider payment methods. The existing evidence suggests that payment methods are a key determinant of CBI performance and sustainability, but the strength of this evidence is limited since it is largely based on observational studies rather than on trials or on quasi-experimental research. According to the evidence, provider payment can affect provider participation, satisfaction and retention in CBI; the quantity and quality of services provided to CBI patients; patient demand of CBI services; and population enrollment, risk pooling and financial sustainability of CBI. CBI schemes should carefully consider how their current payment methods influence their performance, how changes in the methods could improve performance, and how such effects could be assessed with scientific rigour to increase the strength of evidence on this topic. PMID:22522770
Robyn, Paul Jacob; Sauerborn, Rainer; Bärnighausen, Till
2013-03-01
Community-based health insurance (CBI) is a common mechanism to generate financial resources for health care in developing countries. We review for the first time provider payment methods used in CBI in developing countries and their impact on CBI performance. We conducted a systematic review of the literature on provider payment methods used by CBI in developing countries published up to January 2010. Information on provider payment was available for a total of 32 CBI schemes in 34 reviewed publications: 17 schemes in South Asia, 10 in sub-Saharan Africa, 4 in East Asia and 1 in Latin America. Various types of provider payment were applied by the CBI schemes: 17 used fee-for-service, 12 used salaries, 9 applied a coverage ceiling, 7 used capitation and 6 applied a co-insurance. The evidence suggests that provider payment impacts CBI performance through provider participation and support for CBI, population enrolment and patient satisfaction with CBI, quantity and quality of services provided and provider and patient retention. Lack of provider participation in designing and choosing a CBI payment method can lead to reduced provider support for the scheme. CBI schemes in developing countries have used a wide range of provider payment methods. The existing evidence suggests that payment methods are a key determinant of CBI performance and sustainability, but the strength of this evidence is limited since it is largely based on observational studies rather than on trials or on quasi-experimental research. According to the evidence, provider payment can affect provider participation, satisfaction and retention in CBI; the quantity and quality of services provided to CBI patients; patient demand of CBI services; and population enrollment, risk pooling and financial sustainability of CBI. CBI schemes should carefully consider how their current payment methods influence their performance, how changes in the methods could improve performance, and how such effects could be assessed with scientific rigour to increase the strength of evidence on this topic.
NASA Astrophysics Data System (ADS)
Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul
2016-08-01
Numerical solutions of the hydrodynamical model of semiconductor devices are presented in one and two-space dimension. The model describes the charge transport in semiconductor devices. Mathematically, the models can be written as a convection-diffusion type system with a right hand side describing the relaxation effects and interaction with a self consistent electric field. The proposed numerical scheme is a splitting scheme based on the conservation element and solution element (CE/SE) method for hyperbolic step, and a semi-implicit scheme for the relaxation step. The numerical results of the suggested scheme are compared with the splitting scheme based on Nessyahu-Tadmor (NT) central scheme for convection step and the same semi-implicit scheme for the relaxation step. The effects of various parameters such as low field mobility, device length, lattice temperature and voltages for one-space dimensional hydrodynamic model are explored to further validate the generic applicability of the CE/SE method for the current model equations. A two dimensional simulation is also performed by CE/SE method for a MESFET device, producing results in good agreement with those obtained by NT-central scheme.
An Overview of Recent Advances in Event-Triggered Consensus of Multiagent Systems.
Ding, Lei; Han, Qing-Long; Ge, Xiaohua; Zhang, Xian-Ming
2018-04-01
Event-triggered consensus of multiagent systems (MASs) has attracted tremendous attention from both theoretical and practical perspectives due to the fact that it enables all agents eventually to reach an agreement upon a common quantity of interest while significantly alleviating utilization of communication and computation resources. This paper aims to provide an overview of recent advances in event-triggered consensus of MASs. First, a basic framework of multiagent event-triggered operational mechanisms is established. Second, representative results and methodologies reported in the literature are reviewed and some in-depth analysis is made on several event-triggered schemes, including event-based sampling schemes, model-based event-triggered schemes, sampled-data-based event-triggered schemes, and self-triggered sampling schemes. Third, two examples are outlined to show applicability of event-triggered consensus in power sharing of microgrids and formation control of multirobot systems, respectively. Finally, some challenging issues on event-triggered consensus are proposed for future research.
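As a minimal illustration of the event-triggered idea (not any specific scheme from the survey), the sketch below runs a single-integrator consensus protocol in which each agent rebroadcasts its state only when its measurement error exceeds a state-dependent threshold, and counts how many transmissions that saves relative to time-triggered communication. The ring graph, gains, threshold rule, and offset are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Undirected ring of 6 single-integrator agents: x_i' = u_i (assumed topology).
N = 6
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L_graph = np.diag(A.sum(axis=1)) - A          # graph Laplacian

dt, steps, sigma = 0.01, 3000, 0.25           # step size, horizon, trigger parameter (assumed)
x = rng.uniform(-5.0, 5.0, size=N)            # initial states
x_hat = x.copy()                              # last broadcast states
events = 0

for _ in range(steps):
    u = -L_graph @ x_hat                      # control uses only broadcast information
    x = x + dt * u
    e = x_hat - x                             # measurement error since last broadcast
    z = L_graph @ x_hat                       # local disagreement (from broadcast states)
    trigger = np.abs(e) > sigma * np.abs(z) + 1e-4   # state-dependent threshold + small offset
    x_hat = np.where(trigger, x, x_hat)       # re-broadcast only when triggered
    events += int(trigger.sum())

print(f"final spread of states: {x.max() - x.min():.4f}")
print(f"events per agent: {events / N:.0f} (vs {steps} time-triggered samples)")
```

The count of triggering events versus the number of integration steps is exactly the communication saving that motivates the event-triggered schemes surveyed above.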
Hou, Chieh; Ateshian, Gerard A.
2015-01-01
Fibrous biological tissues may be modeled using a continuous fiber distribution (CFD) to capture tension-compression nonlinearity, anisotropic fiber distributions, and load-induced anisotropy. The CFD framework requires spherical integration of weighted individual fiber responses, with fibers contributing to the stress response only when they are in tension. The common method for performing this integration employs the discretization of the unit sphere into a polyhedron with nearly uniform triangular faces (finite element integration or FEI scheme). Although FEI has proven to be more accurate and efficient than integration using spherical coordinates, it presents three major drawbacks: First, the number of elements on the unit sphere needed to achieve satisfactory accuracy becomes a significant computational cost in a finite element analysis. Second, fibers may not be in tension in some regions on the unit sphere, where the integration becomes a waste. Third, if tensed fiber bundles span a small region compared to the area of the elements on the sphere, a significant discretization error arises. This study presents an integration scheme specialized to the CFD framework, which significantly mitigates the first drawback of the FEI scheme, while eliminating the second and third completely. Here, integration is performed only over the regions of the unit sphere where fibers are in tension. Gauss-Kronrod quadrature is used across latitudes and the trapezoidal scheme across longitudes. Over a wide range of strain states, fiber material properties, and fiber angular distributions, results demonstrate that this new scheme always outperforms FEI, sometimes by orders of magnitude in the number of computational steps and relative accuracy of the stress calculation. PMID:26291492
Hou, Chieh; Ateshian, Gerard A
2016-01-01
Fibrous biological tissues may be modeled using a continuous fiber distribution (CFD) to capture tension-compression nonlinearity, anisotropic fiber distributions, and load-induced anisotropy. The CFD framework requires spherical integration of weighted individual fiber responses, with fibers contributing to the stress response only when they are in tension. The common method for performing this integration employs the discretization of the unit sphere into a polyhedron with nearly uniform triangular faces (finite element integration or FEI scheme). Although FEI has proven to be more accurate and efficient than integration using spherical coordinates, it presents three major drawbacks: First, the number of elements on the unit sphere needed to achieve satisfactory accuracy becomes a significant computational cost in a finite element (FE) analysis. Second, fibers may not be in tension in some regions on the unit sphere, where the integration becomes a waste. Third, if tensed fiber bundles span a small region compared to the area of the elements on the sphere, a significant discretization error arises. This study presents an integration scheme specialized to the CFD framework, which significantly mitigates the first drawback of the FEI scheme, while eliminating the second and third completely. Here, integration is performed only over the regions of the unit sphere where fibers are in tension. Gauss-Kronrod quadrature is used across latitudes and the trapezoidal scheme across longitudes. Over a wide range of strain states, fiber material properties, and fiber angular distributions, results demonstrate that this new scheme always outperforms FEI, sometimes by orders of magnitude in the number of computational steps and relative accuracy of the stress calculation.
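The sketch below illustrates the flavour of such an integration for a toy, tension-only fiber response: Gauss-Legendre points are used across latitude (as a simple stand-in for the Gauss-Kronrod rule of the paper) and the trapezoidal rule across longitude, and directions whose fiber strain is negative are simply masked out (the paper instead integrates only over the analytically determined tensile region). The fiber response, uniform angular distribution, and strain state are illustrative assumptions.

```python
import numpy as np

def cfd_stress_like_integral(E, n_lat=32, n_lon=64):
    """Integrate a toy tension-only fiber response over the unit sphere.

    E -- small-strain tensor (3x3 symmetric).
    Returns a stress-like 3x3 tensor: integral of H(e_n) * e_n * (n outer n),
    where e_n = n.E.n is the fiber normal strain and H is the Heaviside step.
    """
    mu, w_mu = np.polynomial.legendre.leggauss(n_lat)      # mu = cos(theta) in [-1, 1]
    phi = np.linspace(0.0, 2.0 * np.pi, n_lon, endpoint=False)
    w_phi = 2.0 * np.pi / n_lon                            # periodic trapezoid weight

    S = np.zeros((3, 3))
    for m, wm in zip(mu, w_mu):
        s = np.sqrt(1.0 - m * m)
        for p in phi:
            n = np.array([s * np.cos(p), s * np.sin(p), m])   # fiber direction
            e_n = n @ E @ n                                   # fiber normal strain
            if e_n > 0.0:                                     # tension-only contribution
                S += wm * w_phi * e_n * np.outer(n, n)        # toy linear fiber response
    return S

# uniaxial stretch along x with lateral contraction (illustrative strain state)
E = np.diag([0.10, -0.03, -0.03])
print(cfd_stress_like_integral(E))
```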
COMPARISON OF NUMERICAL SCHEMES FOR SOLVING A SPHERICAL PARTICLE DIFFUSION EQUATION
A new robust iterative numerical scheme was developed for a nonlinear diffusive model that described sorption dynamics in spherical particle suspensions. The numerical scheme has been applied to finite difference and finite element models that showed rapid convergence and stability...
Finite Macro-Element Mesh Deformation in a Structured Multi-Block Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2005-01-01
A mesh deformation scheme consisting of two steps is developed for a structured multi-block Navier-Stokes code. The first step is a finite element solution of either user-defined or automatically generated macro-elements. Macro-elements are hexagonal finite elements created from a subset of points from the full mesh. When assembled, the finite element system spans the complete flow domain. Macro-element moduli vary according to the distance to the nearest surface, resulting in extremely stiff elements near a moving surface and very pliable elements away from boundaries. Solution of the finite element system for the imposed boundary deflections generally produces smoothly varying nodal deflections. The manner in which the distance to the nearest surface is defined has been found to critically influence the quality of the element deformation. The second step is a transfinite interpolation which distributes the macro-element nodal deflections to the remaining fluid mesh points. The scheme is demonstrated for several two-dimensional applications.
HiCoDG: a hierarchical data-gathering scheme using cooperative multiple mobile elements.
Van Le, Duc; Oh, Hoon; Yoon, Seokhoon
2014-12-17
In this paper, we study mobile element (ME)-based data-gathering schemes in wireless sensor networks. Due to the physical speed limits of mobile elements, the existing data-gathering schemes that use mobile elements can suffer from high data-gathering latency. In order to address this problem, this paper proposes a new hierarchical and cooperative data-gathering (HiCoDG) scheme that enables multiple mobile elements to cooperate with each other to collect and relay data. In HiCoDG, two types of mobile elements are used: the mobile collector (MC) and the mobile relay (MR). MCs collect data from sensors and forward them to the MR, which will deliver them to the sink. In this work, we also formulated an integer linear programming (ILP) optimization problem to find the optimal trajectories for MCs and the MR, such that the traveling distance of MEs is minimized. Two variants of HiCoDG, intermediate station (IS)-based and cooperative movement scheduling (CMS)-based, are proposed to facilitate cooperative data forwarding from MCs to the MR. An analytical model for estimating the average data-gathering latency in HiCoDG was also designed. Simulations were performed to compare the performance of the IS and CMS variants, as well as a multiple traveling salesman problem (mTSP)-based approach. The simulation results show that HiCoDG outperforms mTSP in terms of latency. The results also show that CMS can achieve the lowest latency with low energy consumption.
HiCoDG: A Hierarchical Data-Gathering Scheme Using Cooperative Multiple Mobile Elements †
Van Le, Duc; Oh, Hoon; Yoon, Seokhoon
2014-01-01
In this paper, we study mobile element (ME)-based data-gathering schemes in wireless sensor networks. Due to the physical speed limits of mobile elements, the existing data-gathering schemes that use mobile elements can suffer from high data-gathering latency. In order to address this problem, this paper proposes a new hierarchical and cooperative data-gathering (HiCoDG) scheme that enables multiple mobile elements to cooperate with each other to collect and relay data. In HiCoDG, two types of mobile elements are used: the mobile collector (MC) and the mobile relay (MR). MCs collect data from sensors and forward them to the MR, which will deliver them to the sink. In this work, we also formulated an integer linear programming (ILP) optimization problem to find the optimal trajectories for MCs and the MR, such that the traveling distance of MEs is minimized. Two variants of HiCoDG, intermediate station (IS)-based and cooperative movement scheduling (CMS)-based, are proposed to facilitate cooperative data forwarding from MCs to the MR. An analytical model for estimating the average data-gathering latency in HiCoDG was also designed. Simulations were performed to compare the performance of the IS and CMS variants, as well as a multiple traveling salesman problem (mTSP)-based approach. The simulation results show that HiCoDG outperforms mTSP in terms of latency. The results also show that CMS can achieve the lowest latency with low energy consumption. PMID:25526356
NASA Astrophysics Data System (ADS)
Zanotti, Olindo; Dumbser, Michael
2016-01-01
We present a new version of conservative ADER-WENO finite volume schemes, in which both the high order spatial reconstruction as well as the time evolution of the reconstruction polynomials in the local space-time predictor stage are performed in primitive variables, rather than in conserved ones. To obtain a conservative method, the underlying finite volume scheme is still written in terms of the cell averages of the conserved quantities. Therefore, our new approach performs the spatial WENO reconstruction twice: the first WENO reconstruction is carried out on the known cell averages of the conservative variables. The WENO polynomials are then used at the cell centers to compute point values of the conserved variables, which are subsequently converted into point values of the primitive variables. This is the only place where the conversion from conservative to primitive variables is needed in the new scheme. Then, a second WENO reconstruction is performed on the point values of the primitive variables to obtain piecewise high order reconstruction polynomials of the primitive variables. The reconstruction polynomials are subsequently evolved in time with a novel space-time finite element predictor that is directly applied to the governing PDE written in primitive form. The resulting space-time polynomials of the primitive variables can then be directly used as input for the numerical fluxes at the cell boundaries in the underlying conservative finite volume scheme. Hence, the number of necessary conversions from the conserved to the primitive variables is reduced to just one single conversion at each cell center. We have verified the validity of the new approach over a wide range of hyperbolic systems, including the classical Euler equations of gas dynamics, the special relativistic hydrodynamics (RHD) and ideal magnetohydrodynamics (RMHD) equations, as well as the Baer-Nunziato model for compressible two-phase flows. In all cases we have noticed that the new ADER schemes provide less oscillatory solutions when compared to ADER finite volume schemes based on the reconstruction in conserved variables, especially for the RMHD and the Baer-Nunziato equations. For the RHD and RMHD equations, the overall accuracy is improved and the CPU time is reduced by about 25 %. Because of its increased accuracy and due to the reduced computational cost, we recommend to use this version of ADER as the standard one in the relativistic framework. At the end of the paper, the new approach has also been extended to ADER-DG schemes on space-time adaptive grids (AMR).
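The only place the new scheme needs the conservative-to-primitive map is at the cell centers. For the classical Euler equations this conversion and its inverse are algebraic, as in the sketch below (ideal-gas assumption with gamma = 1.4); for the RHD/RMHD systems mentioned above the conversion is instead a nonlinear root-finding problem, which is not shown here.

```python
import numpy as np

GAMMA = 1.4   # ideal-gas ratio of specific heats (assumed)

def cons_to_prim(U):
    """(rho, rho*u, rho*E) -> (rho, u, p) for the 1D Euler equations."""
    rho, mom, ener = U
    u = mom / rho
    p = (GAMMA - 1.0) * (ener - 0.5 * rho * u * u)
    return np.array([rho, u, p])

def prim_to_cons(W):
    """(rho, u, p) -> (rho, rho*u, rho*E) for the 1D Euler equations."""
    rho, u, p = W
    ener = p / (GAMMA - 1.0) + 0.5 * rho * u * u
    return np.array([rho, rho * u, ener])

# point values of cell-center conserved variables (illustrative), converted to
# primitive variables before the second (primitive-variable) WENO reconstruction
U_centers = np.array([[1.0, 0.0, 2.5],
                      [0.8, 0.4, 2.1],
                      [0.125, 0.0, 0.25]])
W_centers = np.array([cons_to_prim(U) for U in U_centers])
print(W_centers)
# round-trip consistency check
assert np.allclose([prim_to_cons(W) for W in W_centers], U_centers)
```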
Weak Galerkin method for the Biot’s consolidation model
Hu, Xiaozhe; Mu, Lin; Ye, Xiu
2017-08-23
In this study, we develop a weak Galerkin (WG) finite element method for Biot's consolidation model in the classical displacement–pressure two-field formulation. Weak Galerkin linear finite elements are used for both displacement and pressure approximations in the spatial discretization. A backward Euler scheme is used for the temporal discretization in order to obtain an implicit fully discretized scheme. We study the well-posedness of the linear system at each time step and also derive the overall optimal-order convergence of the WG formulation. Such a WG scheme is designed on general shape-regular polytopal meshes and provides stable and oscillation-free approximation for the pressure withoutmore » special treatment. Lastly, numerical experiments are presented to demonstrate the efficiency and accuracy of the proposed weak Galerkin finite element method.« less
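A minimal sketch of backward Euler time stepping for a two-field displacement-pressure system of Biot type, assuming the discrete operators (elasticity matrix A, coupling matrix B, pressure mass M, permeability stiffness K) have already been assembled; the random placeholder matrices and the physical constants are illustrative stand-ins and do not reproduce the WG discretization of the paper.

    import numpy as np

    # Placeholder discrete operators: small random SPD/coupling matrices stand in
    # for the assembled WG matrices; sizes and constants are illustrative only.
    rng = np.random.default_rng(0)
    nu, npr = 12, 6                          # displacement / pressure dof counts
    A = rng.standard_normal((nu, nu)); A = A @ A.T + nu * np.eye(nu)       # elasticity
    K = rng.standard_normal((npr, npr)); K = K @ K.T + npr * np.eye(npr)   # permeability
    M = np.eye(npr)                          # pressure mass matrix
    B = rng.standard_normal((npr, nu))       # coupling (q, div u) matrix
    alpha, c0, dt = 1.0, 0.1, 1.0e-2         # Biot coefficient, storativity, time step

    def backward_euler_step(u_old, p_old, f, g):
        """One implicit step of the two-field system:
           A u - alpha B^T p           = f
           alpha B u + (c0 M + dt K) p = alpha B u_old + c0 M p_old + dt g"""
        S = np.block([[A, -alpha * B.T],
                      [alpha * B, c0 * M + dt * K]])
        rhs = np.concatenate([f, alpha * B @ u_old + c0 * M @ p_old + dt * g])
        sol = np.linalg.solve(S, rhs)
        return sol[:nu], sol[nu:]

    u, p = np.zeros(nu), np.zeros(npr)
    f, g = rng.standard_normal(nu), rng.standard_normal(npr)
    for step in range(10):                   # march the fully discrete scheme
        u, p = backward_euler_step(u, p, f, g)
    print(np.linalg.norm(u), np.linalg.norm(p))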
Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar.
Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing
2016-04-14
In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method.
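The reweighted l1 penalty described above can be illustrated with a generic weighted-l1 sparse recovery solved by iterative soft thresholding (ISTA). In the paper the weights would come from the NC MUSIC-like spectrum, whereas the weights, the random dictionary, and all parameters below are purely illustrative assumptions and not the proposed estimator.

    import numpy as np

    def soft_threshold(z, t):
        """Complex soft-thresholding: shrink magnitudes by t, keep phases."""
        mag = np.abs(z)
        return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * z, 0.0)

    def weighted_l1_ista(A, y, w, lam=0.05, n_iter=500):
        """ISTA for  min_x 0.5*||A x - y||^2 + lam * sum_i w_i |x_i|.
        In the paper the weights would come from an NC MUSIC-like spectrum
        (large spectrum value -> small weight); here they are simply given."""
        L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1], dtype=complex)
        for _ in range(n_iter):
            grad = A.conj().T @ (A @ x - y)
            x = soft_threshold(x - grad / L, lam * w / L)
        return x

    # demo with a random dictionary (illustrative, not a MIMO steering manifold)
    rng = np.random.default_rng(1)
    A = (rng.standard_normal((30, 90)) + 1j * rng.standard_normal((30, 90))) / np.sqrt(60)
    x_true = np.zeros(90, dtype=complex); x_true[[10, 40]] = [2.0, 1.5]
    y = A @ x_true + 0.01 * (rng.standard_normal(30) + 1j * rng.standard_normal(30))
    w = np.ones(90); w[[10, 40]] = 0.2       # smaller weights on likely source bins
    x_hat = weighted_l1_ista(A, y, w)
    print(sorted(np.argsort(np.abs(x_hat))[-2:]))   # expected: [10, 40]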
NASA Astrophysics Data System (ADS)
Le Hardy, D.; Favennec, Y.; Rousseau, B.
2016-08-01
The 2D radiative transfer equation coupled with specular reflection boundary conditions is solved using finite element schemes. Both Discontinuous Galerkin (DG) and Streamline-Upwind Petrov-Galerkin (SUPG) variational formulations are fully developed. The two schemes are validated step-by-step for all involved operators (transport, scattering, reflection) using analytical formulations. Numerical comparisons of the two schemes, in terms of convergence rate, reveal that the quadratic SUPG scheme proves efficient for solving such problems. This comparison constitutes the main contribution of the paper. Moreover, the solution process is accelerated using block SOR-type iterative methods, for which the optimal relaxation parameter is determined at very low cost.
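A scalar SOR iteration illustrating the kind of relaxation solver mentioned above (the paper accelerates its solver with a block SOR variant); the test system and the relaxation factor omega below are illustrative assumptions, not the discretized radiative transfer operator.

    import numpy as np

    def sor_solve(A, b, omega=1.3, tol=1e-10, max_iter=10000):
        """Successive over-relaxation for A x = b; omega is the relaxation factor."""
        n = len(b)
        x = np.zeros(n)
        for _ in range(max_iter):
            for i in range(n):
                sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (1.0 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
            if np.linalg.norm(A @ x - b) < tol * np.linalg.norm(b):
                break
        return x

    # demo on a small diagonally dominant system (illustrative only)
    rng = np.random.default_rng(2)
    A = rng.standard_normal((20, 20)) + 20.0 * np.eye(20)
    b = rng.standard_normal(20)
    x = sor_solve(A, b)
    print(np.linalg.norm(A @ x - b))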
Weickenmeier, J; Jabareen, M
2014-11-01
The characteristic highly nonlinear, time-dependent, and often inelastic material response of soft biological tissues can be expressed in a set of elastic-viscoplastic constitutive equations. The specific elastic-viscoplastic model for soft tissues proposed by Rubin and Bodner (2002) is generalized with respect to the constitutive equations for the scalar quantity of the rate of inelasticity and the hardening parameter in order to represent a general framework for elastic-viscoplastic models. A strongly objective integration scheme and a new mixed finite element formulation were developed based on the introduction of the relative deformation gradient, i.e., the deformation mapping between the last converged and current configurations. The numerical implementation of both the generalized framework and the specific Rubin and Bodner model is presented. As an example of a challenging application of the new model equations, the mechanical response of facial skin tissue is characterized through an experimental campaign based on the suction method. The measurement data are used for the identification of a suitable set of model parameters that represents the experimentally observed tissue behavior well. Two different measurement protocols were defined to address specific tissue properties with respect to the instantaneous tissue response, inelasticity, and tissue recovery. Copyright © 2014 John Wiley & Sons, Ltd.
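A small sketch of the relative deformation gradient used above, i.e., the deformation mapping between the last converged and the current configuration; the numerical values are illustrative.

    import numpy as np

    def relative_deformation_gradient(F_old, F_new):
        """Deformation mapping between the last converged and the current
        configuration: F_rel = F_new inv(F_old), so that F_new = F_rel F_old."""
        return F_new @ np.linalg.inv(F_old)

    # two successive deformation states (values are illustrative)
    F_old = np.array([[1.10, 0.05, 0.00],
                      [0.00, 0.98, 0.00],
                      [0.00, 0.00, 1.02]])
    F_new = np.array([[1.15, 0.08, 0.00],
                      [0.01, 0.97, 0.00],
                      [0.00, 0.00, 1.03]])
    F_rel = relative_deformation_gradient(F_old, F_new)
    print(np.allclose(F_rel @ F_old, F_new))   # True: consistency check
    print(np.linalg.det(F_rel))                # incremental volume change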
Vibrations of a Mindlin plate subjected to a pair of inertial loads moving in opposite directions
NASA Astrophysics Data System (ADS)
Dyniewicz, Bartłomiej; Pisarski, Dominik; Bajer, Czesław I.
2017-01-01
A Mindlin plate subjected to a pair of inertial loads traveling at a constant high speed in opposite directions along an arbitrary trajectory, straight or curved, is presented. The masses represent vehicles passing over a bridge or track plates. A numerical solution is obtained using the space-time finite element method, since it allows a clear and simple derivation of the characteristic matrices of the time-stepping procedure. The transition from one spatial finite element to another must be energetically consistent. In the case of a moving inertial load the classical time-integration schemes are methodologically difficult, since we must consider a Dirac delta term with a moving argument. The proposed numerical approach provides the correct definition of force equilibrium in the time interval. The given approach closes the problem of the numerical analysis of vibration of a structure subjected to inertial loads moving arbitrarily with acceleration. The results obtained for a massless and an inertial load traveling over a Mindlin plate at various speeds are compared with benchmark results obtained for a Kirchhoff plate. The pair of inertial forces traveling in opposite directions causes displacements and stresses more than twice as large as the corresponding quantities observed for the passage of a single mass.
NASA Astrophysics Data System (ADS)
Strack, O. D. L.
2018-02-01
We present equations for new limitless analytic line elements. These elements possess a virtually unlimited number of degrees of freedom. We apply these new limitless analytic elements to head-specified boundaries and to problems with inhomogeneities in hydraulic conductivity. Applications of these new analytic elements to practical problems involving head-specified boundaries require the solution of a very large number of equations. To make the new elements useful in practice, an efficient iterative scheme is required. We present an improved version of the scheme presented by Bandilla et al. (2007), based on the application of Cauchy integrals. The limitless analytic elements are useful when modeling strings of elements, rivers for example, where local conditions are difficult to model, e.g., when a well is close to a river. The solution of such problems is facilitated by increasing the order of the elements to obtain a good solution. This makes it unnecessary to resort to dividing the element in question into many smaller elements to obtain a satisfactory solution.
Qualitative Analysis, with Periodicity, for "Real" Solutions.
ERIC Educational Resources Information Center
Rich, Ronald L.
1984-01-01
Presents an outline of group separations for a nonhydrogen sulfide analytical scheme applicable to all metallic elements (Bromide scheme). Also presents another outline of an abbreviated and modified version (Iodide scheme) designed for emphasis on nutritionally important metals, with special attention to 10 cations. (JM)
A point-value enhanced finite volume method based on approximate delta functions
NASA Astrophysics Data System (ADS)
Xuan, Li-Jun; Majdalani, Joseph
2018-02-01
We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements reduces the number of degrees of freedom compared to other compact methods of the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.
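A sketch of the moment-matching idea behind an approximate delta function: find polynomial coefficients so that integration against any polynomial up to the chosen order reproduces the point value at x0. The construction on [-1, 1] and the function names are illustrative assumptions, not Huynh's or the authors' formulas.

    import numpy as np

    def adf_coefficients(order, x0, a=-1.0, b=1.0):
        """Coefficients c of p(x) = sum_j c_j x^j such that
        int_a^b p(x) x^k dx = x0**k for k = 0..order, i.e. p integrates like a
        delta function at x0 against any polynomial of degree <= order."""
        k = np.arange(order + 1)
        powers = k[:, None] + k[None, :] + 1          # exponents plus one
        M = (b**powers - a**powers) / powers          # moment matrix
        return np.linalg.solve(M, x0**k)

    # check the sifting property on a random polynomial of the same degree
    order, x0 = 4, 0.3
    c = adf_coefficients(order, x0)
    q = np.random.default_rng(3).standard_normal(order + 1)   # q(x) = sum_j q_j x^j
    powers = np.arange(order + 1)[:, None] + np.arange(order + 1)[None, :] + 1
    integral = c @ ((1.0**powers - (-1.0)**powers) / powers) @ q
    print(integral, np.polyval(q[::-1], x0))                   # the two should agree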
Saravanan, Chandra; Shao, Yihan; Baer, Roi; Ross, Philip N; Head-Gordon, Martin
2003-04-15
A sparse matrix multiplication scheme with multiatom blocks is reported, a tool that can be very useful for developing linear-scaling methods with atom-centered basis functions. Compared to conventional element-by-element sparse matrix multiplication schemes, efficiency is gained by the use of the highly optimized basic linear algebra subroutines (BLAS). However, some sparsity is lost in the multiatom blocking scheme because these matrix blocks will in general contain negligible elements. As a result, an optimal block size that minimizes the CPU time by balancing these two effects is recovered. In calculations on linear alkanes, polyglycines, estane polymers, and water clusters the optimal block size is found to be between 40 and 100 basis functions, where about 55-75% of the machine peak performance was achieved on an IBM RS6000 workstation. In these calculations, the blocked sparse matrix multiplications can be 10 times faster than a standard element-by-element sparse matrix package. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 618-622, 2003
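A small sketch contrasting blocked sparse storage (SciPy's BSR format, which stores dense blocks, loosely analogous to the multiatom blocking described above) with element-by-element CSR storage; the block structure, sizes, and sparsity pattern are illustrative and do not reproduce the paper's implementation or timings.

    import numpy as np
    import scipy.sparse as sp

    # Build a block-sparse matrix (dense blocks, most blocks zero), loosely
    # mimicking a multiatom-blocked matrix; sizes and sparsity are illustrative.
    rng = np.random.default_rng(4)
    nblk, b = 40, 8                        # 40 x 40 blocks of 8 x 8 basis functions
    dense = np.zeros((nblk * b, nblk * b))
    for i in range(nblk):                  # keep only a narrow band of blocks
        for j in range(max(0, i - 2), min(nblk, i + 3)):
            dense[i*b:(i+1)*b, j*b:(j+1)*b] = rng.standard_normal((b, b))

    A_bsr = sp.bsr_matrix(dense, blocksize=(b, b))   # blocked storage
    A_csr = sp.csr_matrix(dense)                     # element-by-element storage

    C_bsr = (A_bsr @ A_bsr).toarray()
    C_csr = (A_csr @ A_csr).toarray()
    print(np.allclose(C_bsr, C_csr))                 # same product, different storage
    print(A_bsr.blocksize, A_bsr.nnz, A_csr.nnz)     # BSR stores whole blocks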
Study of hypervelocity meteoroid impact on orbital space stations
NASA Technical Reports Server (NTRS)
Leimbach, K. R.; Prozan, R. J.
1973-01-01
Structural damage resulting from the hypervelocity impact of a meteorite on a spacecraft is discussed. Of particular interest is the backside spallation caused by such a collision. To treat this phenomenon, two numerical schemes were developed in the course of this study to compute the elastic-plastic flow fracture of a solid. The numerical schemes are a five-point finite difference scheme and a four-node finite element scheme. The four-node finite element scheme proved to be less sensitive to the type of boundary conditions and loadings. Although further development work is needed to improve the program versatility (generalization of the network topology, secondary storage for large systems, improvement of the coding to reduce the run time, etc.), the basic framework is provided for a utilitarian computer program which may be used in a wide variety of situations. Analytic results showing the program output are given for several test cases.
NASA Technical Reports Server (NTRS)
Reed, K. W.; Stonesifer, R. B.; Atluri, S. N.
1983-01-01
A new hybrid-stress finite element algorithm, suitable for analyses of large quasi-static deformations of inelastic solids, is presented. Principal variables in the formulation are the nominal stress-rate and spin. As such, a consistent reformulation of the constitutive equation is necessary, and is discussed. The finite element equations give rise to an initial value problem. Time integration has been accomplished by Euler and Runge-Kutta schemes, and the superior accuracy of the higher order schemes is noted. In the course of integration of stress in time, it has been demonstrated that classical schemes such as Euler's and Runge-Kutta may lead to strong frame-dependence. As a remedy, modified integration schemes are proposed and the potential of the new schemes for suppressing frame dependence of numerically integrated stress is demonstrated. The topic of the development of valid creep fracture criteria is also addressed.
Development of non-linear finite element computer code
NASA Technical Reports Server (NTRS)
Becker, E. B.; Miller, T.
1985-01-01
Recent work has shown that the use of separable symmetric functions of the principal stretches can adequately describe the response of certain propellant materials and, further, that a data reduction scheme gives a convenient way of obtaining the values of the functions from experimental data. Based on this representation of the energy, a computational scheme was developed that allows finite element analysis of boundary value problems of arbitrary shape and loading. The computational procedure was implemented in a three-dimensional finite element code, TEXLESP-S, which is documented herein.
NASA Technical Reports Server (NTRS)
Padovan, Joe
1987-01-01
In a three-part series of papers, a generalized finite element analysis scheme is developed to handle the steady and transient response of moving/rolling nonlinear viscoelastic structure. This paper considers the development of the moving/rolling element strategy, including the effects of large deformation kinematics and viscoelasticity modeled by fractional integrodifferential operators. To improve the solution strategy, a special hierarchical constraint procedure is developed for the case of steady rolling/translating, as well as a transient scheme involving the use of a Grunwaldian representation of the fractional operator.
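A sketch of the Grunwald-Letnikov representation of a fractional operator mentioned above, computing the weights recursively and checking a fractional derivative of f(t) = t against its analytic value; the grid parameters are illustrative and the sketch is not tied to the rolling-contact formulation of the paper.

    import numpy as np

    def grunwald_weights(alpha, n):
        """Grunwald-Letnikov weights w_k = (-1)^k C(alpha, k), computed with the
        recursion w_0 = 1, w_k = (1 - (alpha + 1)/k) w_{k-1}."""
        w = np.empty(n + 1)
        w[0] = 1.0
        for k in range(1, n + 1):
            w[k] = (1.0 - (alpha + 1.0) / k) * w[k - 1]
        return w

    def gl_fractional_derivative(f_vals, alpha, h):
        """Approximate D^alpha f at the last grid point:
        D^alpha f(t_n) ~ h**(-alpha) * sum_k w_k f(t_{n-k})."""
        w = grunwald_weights(alpha, len(f_vals) - 1)
        return (f_vals[::-1] @ w) / h**alpha

    # sanity check against the analytic value D^0.5 t = 2*sqrt(t/pi) at t = 1
    h = 1.0e-3
    t = np.arange(0.0, 1.0 + h, h)
    print(gl_fractional_derivative(t, 0.5, h), 2.0 / np.sqrt(np.pi))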
NASA Technical Reports Server (NTRS)
Weinan, E.; Shu, Chi-Wang
1994-01-01
High order essentially non-oscillatory (ENO) schemes, originally designed for compressible flow and in general for hyperbolic conservation laws, are applied to incompressible Euler and Navier-Stokes equations with periodic boundary conditions. The projection to divergence-free velocity fields is achieved by fourth-order central differences through fast Fourier transforms (FFT) and a mild high-order filtering. The objective of this work is to assess the resolution of ENO schemes for large scale features of the flow when a coarse grid is used and small scale features of the flow, such as shears and roll-ups, are not fully resolved. It is found that high-order ENO schemes remain stable under such situations and quantities related to large scale features, such as the total circulation around the roll-up region, are adequately resolved.
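A sketch of the FFT-based projection to a divergence-free velocity field on a periodic 2D grid, the projection step mentioned above; the demo field and grid are illustrative and the code is not tied to any particular ENO implementation or filtering.

    import numpy as np

    def project_divergence_free(u, v, dx):
        """Project a periodic 2D velocity field onto its divergence-free part
        using FFTs (a Leray-type projection done mode by mode)."""
        n = u.shape[0]
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
        kx, ky = np.meshgrid(k, k, indexing="ij")
        k2 = kx**2 + ky**2
        k2[0, 0] = 1.0                       # avoid 0/0; the mean mode is untouched
        uh, vh = np.fft.fft2(u), np.fft.fft2(v)
        div_h = kx * uh + ky * vh            # k . u_hat (the common factor i is irrelevant)
        uh -= kx * div_h / k2
        vh -= ky * div_h / k2
        return np.fft.ifft2(uh).real, np.fft.ifft2(vh).real

    # demo: project a non-solenoidal field and verify the spectral divergence vanishes
    n, L = 64, 2.0 * np.pi
    dx = L / n
    x = np.arange(n) * dx
    X, Y = np.meshgrid(x, x, indexing="ij")
    u = np.sin(X) * np.cos(Y) + 0.3 * np.sin(2.0 * X)
    v = -np.cos(X) * np.sin(Y) + 0.2 * np.cos(Y)
    up, vp = project_divergence_free(u, v, dx)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    div = np.fft.ifft2(1j * (kx * np.fft.fft2(up) + ky * np.fft.fft2(vp))).real
    print(np.abs(div).max())                 # ~ machine precision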
NASA Technical Reports Server (NTRS)
Weinan, E.; Shu, Chi-Wang
1992-01-01
High order essentially non-oscillatory (ENO) schemes, originally designed for compressible flow and in general for hyperbolic conservation laws, are applied to incompressible Euler and Navier-Stokes equations with periodic boundary conditions. The projection to divergence-free velocity fields is achieved by fourth order central differences through Fast Fourier Transforms (FFT) and a mild high-order filtering. The objective of this work is to assess the resolution of ENO schemes for large scale features of the flow when a coarse grid is used and small scale features of the flow, such as shears and roll-ups, are not fully resolved. It is found that high-order ENO schemes remain stable under such situations and quantities related to large-scale features, such as the total circulation around the roll-up region, are adequately resolved.
Studies of Flerovium and Element 115 Homologs with Macrocyclic Extractants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Despotopulos, John D.
2015-03-12
Study of the chemistry of the heaviest elements, Z ≥ 104, poses a unique challenge due to their low production cross-sections and short half-lives. Chemistry also must be studied on the one-atom-at-a-time scale, requiring automated, fast, and very efficient chemical schemes. Recent studies of the chemical behavior of copernicium (Cn, element 112) and flerovium (Fl, element 114), together with the discovery of isotopes of these elements with half-lives suitable for chemical studies, have spurred a renewed interest in the development of rapid systems designed to study the chemical properties of elements with Z ≥ 114. This dissertation explores both extraction chromatography and solvent extraction as methods for development of a rapid chemical separation scheme for the homologs of flerovium (Pb, Sn, Hg) and element 115 (Bi, Sb), with the goal of developing a chemical scheme that, in the future, can be applied to on-line chemistry of both Fl and element 115. Carrier-free radionuclides of the homologs of Fl and element 115, used in these studies, were obtained by proton activation of high-purity metal foils at the Lawrence Livermore National Laboratory (LLNL) Center for Accelerator Mass Spectrometry (CAMS): natIn(p,n)113Sn, natSn(p,n)124Sb, and Au(p,n)197m,gHg. The carrier-free activity was separated from the foils by novel separation schemes based on ion exchange and extraction chromatography techniques. Carrier-free Pb and Bi isotopes were obtained from the development of a novel generator based on cation exchange chromatography, using the 232U parent to generate 212Pb and 212Bi. Macrocyclic extractants, specifically crown ethers and their derivatives, were chosen for these studies; crown ethers show high selectivity for metal ions. Finally, a potential chemical system for Fl was established based on the Eichrom Pb resin, and insight into an improved system based on thiacrown ethers is presented.
Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar
Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing
2016-01-01
In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method. PMID:27089345
Targeting the UPR to Circumvent Endocrine Resistance in Breast Cancer
2016-12-01
Chemistry: Our initial synthetic efforts in the NPPTA project are to generate gram quantities of NPPTA and JS-20 as seen in Figure 1. [Figure 1: structures of NPPTA and JS-20.] ... diversity on the bi-aryl ether moiety of the target analogs. This plan requires the preparation of gram quantities of acid 2. [Scheme 3: proposed synthesis of NPPTA.] Our goal in this project is to prepare five grams of NPPTA and JS-20 as seen in Figure 1.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-27
... Meat processing facilities. 311411 Frozen fruit, juice, and vegetable manufacturing facilities. 311421... volume conversion factor. Y 98.256(m)(3) Only total quantity of crude oil plus the quantity of...
Edge equilibrium code for tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xujing; Zakharov, Leonid E.; Drozdov, Vladimir V.
2014-01-15
The edge equilibrium code (EEC) described in this paper is developed for simulations of the near edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, which is the same as implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.
Element-by-element Solution Procedures for Nonlinear Structural Analysis
NASA Technical Reports Server (NTRS)
Hughes, T. J. R.; Winget, J. M.; Levit, I.
1984-01-01
Element-by-element approximate factorization procedures are proposed for solving the large finite element equation systems which arise in nonlinear structural mechanics. Architectural and database advantages of the present algorithms over traditional direct elimination schemes are noted. Results of calculations suggest considerable potential for the methods described.
Frequency analysis of urban runoff quality in an urbanizing catchment of Shenzhen, China
NASA Astrophysics Data System (ADS)
Qin, Huapeng; Tan, Xiaolong; Fu, Guangtao; Zhang, Yingying; Huang, Yuefei
2013-07-01
This paper investigates the frequency distribution of urban runoff quality indicators using a long-term continuous simulation approach and evaluates the impacts of proposed runoff control schemes on runoff quality in an urbanizing catchment in Shenzhen, China. Four different indicators are considered to provide a comprehensive assessment of the potential impacts: total runoff depth, event pollutant load, Event Mean Concentration, and peak concentration during a rainfall event. The results obtained indicate that urban runoff quantity and quality in the catchment have significant variations in rainfall events and a very high rate of non-compliance with surface water quality regulations. Three runoff control schemes with the capacity to intercept an initial runoff depth of 5 mm, 10 mm, and 15 mm are evaluated, respectively, and diminishing marginal benefits are found with increasing interception levels in terms of water quality improvement. The effects of seasonal variation in rainfall events are investigated to provide a better understanding of the performance of the runoff control schemes. The pre-flood season has higher risk of poor water quality than other seasons after runoff control. This study demonstrates that frequency analysis of urban runoff quantity and quality provides a probabilistic evaluation of pollution control measures, and thus helps frame a risk-based decision making for urban runoff quality management in an urbanizing catchment.
NASA Astrophysics Data System (ADS)
Liu, Tao; Kubis, Tillmann; Jie Wang, Qi; Klimeck, Gerhard
2012-03-01
The nonequilibrium Green's function approach is applied to the design of three-well indirect pumping terahertz (THz) quantum cascade lasers (QCLs) based on a resonant phonon depopulation scheme. The effects of the anticrossing of the injector states and the dipole matrix element of the laser levels on the optical gain of THz QCLs are studied. The results show that a design that results in a more pronounced anticrossing of the injector states will achieve a higher optical gain in the indirect pumping scheme compared to the traditional resonant-tunneling injection scheme. This offers in general a more efficient coherent resonant-tunneling transport of electrons in the indirect pumping scheme. It is also shown that, for operating temperatures below 200 K and low lasing frequencies, larger dipole matrix elements, i.e., vertical optical transitions, offer a higher optical gain. In contrast, in the case of high lasing frequencies, smaller dipole matrix elements, i.e., diagonal optical transitions are better for achieving a higher optical gain.
NASA Astrophysics Data System (ADS)
Tjong, Tiffany; Yihaa’ Roodhiyah, Lisa; Nurhasan; Sutarno, Doddy
2018-04-01
In this work, an inversion scheme was performed using a vector finite element (VFE) based 2-D magnetotelluric (MT) forward modelling. We use an inversion scheme with the singular value decomposition (SVD) method to improve the accuracy of MT inversion. The inversion scheme was applied to the transverse electric (TE) mode of MT. The SVD method was used in this inversion to decompose the Jacobian matrices. Singular values obtained from the decomposition process were analyzed. This enabled us to determine the importance of the data and therefore to define a threshold for the truncation process. The truncation of singular values in the inversion process could improve the resulting model.
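A sketch of singular value truncation applied to a linearized inverse problem J m = d, illustrating how discarding small singular values stabilizes the model update; the random Jacobian, tolerance, and noise level are illustrative assumptions, not the authors' TE-mode MT setup.

    import numpy as np

    def truncated_svd_solve(J, d, rel_tol=1e-4):
        """Solve J m ~ d by discarding singular values below rel_tol * sigma_max,
        which suppresses poorly constrained model directions."""
        U, s, Vt = np.linalg.svd(J, full_matrices=False)
        keep = s > rel_tol * s[0]
        m = Vt[keep].T @ ((U[:, keep].T @ d) / s[keep])
        return m, int(keep.sum())

    # demo on an ill-conditioned random "Jacobian" (illustrative only)
    rng = np.random.default_rng(5)
    J = rng.standard_normal((40, 25)) @ np.diag(np.logspace(0, -8, 25)) \
        @ rng.standard_normal((25, 25))
    m_true = rng.standard_normal(25)
    d = J @ m_true + 1.0e-6 * rng.standard_normal(40)
    m_tsvd, rank = truncated_svd_solve(J, d)
    m_naive = np.linalg.lstsq(J, d, rcond=None)[0]
    print(rank)
    print(np.linalg.norm(m_tsvd - m_true), np.linalg.norm(m_naive - m_true))

On such ill-conditioned problems the truncated solution typically has a much smaller error norm than the untruncated least-squares solution, which is the motivation for choosing a truncation threshold from the singular value analysis.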
Simulation of underwater explosion benchmark experiments with ALE3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Couch, R.; Faux, D.
1997-05-19
Some code improvements have been made during the course of this study. One immediately obvious need was for more flexibility in the constitutive representation for materials in shell elements. To remedy this situation, a model with a tabular representation of stress versus strain and rate dependent effects was implemented. This was required in order to obtain reasonable results in the IED cylinder simulation. Another deficiency was in the ability to extract and plot variables associated with shell elements. The pipe whip analysis required the development of a scheme to tally and plot time dependent shell quantities such as stresses and strains. This capability had previously existed only for solid elements. Work was initiated to provide the same range of plotting capability for structural elements that exists with the DYNA3D/TAURUS tools. One of the characteristics of these problems is the disparity in zoning required in the vicinity of the charge and bubble compared to that needed in the far field. This disparity can cause the equipotential relaxation logic to provide a less than optimal solution. Various approaches were utilized to bias the relaxation to obtain more optimal meshing during relaxation. Extensions of these techniques have been developed to provide more powerful options, but more work still needs to be done. The results presented here are representative of what can be produced with an ALE code structured like ALE3D. They are not necessarily the best results that could have been obtained. More experience in assessing sensitivities to meshing and boundary conditions would be very useful. A number of code deficiencies discovered in the course of this work have been corrected and are available for any future investigations.
An Embedded Statistical Method for Coupling Molecular Dynamics and Finite Element Analyses
NASA Technical Reports Server (NTRS)
Saether, E.; Glaessgen, E.H.; Yamakov, V.
2008-01-01
The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.
A New Concurrent Multiscale Methodology for Coupling Molecular Dynamics and Finite Element Analyses
NASA Technical Reports Server (NTRS)
Yamakov, Vesselin; Saether, Erik; Glaessgen, Edward H.
2008-01-01
The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.
Monitoring of Ritz modal generation
NASA Technical Reports Server (NTRS)
Chargin, Mladen; Butler, Thomas G.
1990-01-01
A scheme is proposed to monitor the adequacy of a set of Ritz modes to represent a solution by comparing the quantity generated with certain properties involving the forcing function. In so doing, an attempt was made to keep this algorithm lean and efficient, so that it will be economical to apply. Using this monitoring scheme during Ritz mode generation will automatically ensure that the k Ritz modes θk that are generated are adequate to represent both the spatial and temporal behavior of the structure when forced under the given transient condition defined by F(s,t).
Miller, C.M.; Nogar, N.S.
1982-09-02
Photoionization via autoionizing atomic levels combined with conventional mass spectroscopy provides a technique for quantitative analysis of trace quantities of chemical elements in the presence of much larger amounts of other elements with substantially the same atomic mass. Ytterbium samples smaller than 10 ng have been detected using an ArF* excimer laser which provides the atomic ions for a time-of-flight mass spectrometer. Elemental selectivity of greater than 5:1 with respect to lutetium impurity has been obtained. Autoionization via a single photon process permits greater photon utilization efficiency because of its greater absorption cross section than bound-free transitions, while maintaining sufficient spectroscopic structure to allow significant photoionization selectivity between different atomic species. Separation of atomic species from others of substantially the same atomic mass is also described.
A supply chain contract with flexibility as a risk-sharing mechanism for demand forecasting
NASA Astrophysics Data System (ADS)
Kim, Whan-Seon
2013-06-01
Demand forecasting is one of the main causes of the bullwhip effect in a supply chain. As a countermeasure for demand uncertainty as well as a risk-sharing mechanism for demand forecasting in a supply chain, this article studies a bilateral contract with order quantity flexibility. Under the contract, the buyer places orders in advance for the predetermined horizons and makes minimum purchase commitments. The supplier, in return, provides the buyer with the flexibility to adjust the order quantities later, according to the most updated demand information. To conduct comparative simulations, four-echelon supply chain models, that employ the contracts and different forecasting techniques under dynamic market demands, are developed. The simulation outcomes show that demand fluctuation can be effectively absorbed by the contract scheme, which enables better inventory management and customer service. Furthermore, it has been verified that the contract scheme under study plays a role as an effective coordination mechanism in a decentralised supply chain.
High-Accuracy Finite Element Method: Benchmark Calculations
NASA Astrophysics Data System (ADS)
Gusev, Alexander; Vinitsky, Sergue; Chuluunbaatar, Ochbadrakh; Chuluunbaatar, Galmandakh; Gerdt, Vladimir; Derbov, Vladimir; Góźdź, Andrzej; Krassovitskiy, Pavel
2018-02-01
We describe a new high-accuracy finite element scheme with simplex elements for solving the elliptic boundary-value problems and show its efficiency on benchmark solutions of the Helmholtz equation for the triangle membrane and hypercube.
A symmetric metamaterial element-based RF biosensor for rapid and label-free detection
NASA Astrophysics Data System (ADS)
Lee, Hee-Jo; Lee, Jung-Hyun; Jung, Hyo-Il
2011-10-01
A symmetric metamaterial element-based RF biosensing scheme is experimentally demonstrated by detecting biomolecular binding between a prostate-specific antigen (PSA) and its antibody. The metamaterial element in a high-impedance microstrip line shows an intrinsic S21 resonance having a Q-factor of 55. The frequency shift with PSA concentration, i.e., 100 ng/ml, 10 ng/ml, and 1 ng/ml, is observed and the changes are Δf ≈ 20 MHz, 10 MHz, and 5 MHz, respectively. The proposed biosensor offers advantages of label-free detection, a simple and direct scheme, and cost-efficient fabrication.
Multistatic Array Sampling Scheme for Fast Near-Field Image Reconstruction
2016-01-01
William F. Moulder, James D. Krieger, Denise T. Maurais-Galejs, Huy... described and validated experimentally with the formation of high quality microwave images. It is further shown that the scheme is more than two orders of... scheme (wherein transmitters and receivers are co-located) which require NTNR transmit-receive elements to achieve the same sampling. The second
2004-06-01
equinoctial elements, because both sets of orbital elements reference the equinoctial coordinate system. In fact, to... spacecraft position and velocity vectors, or an element set, which represents the orbit using scalar quantities and angle measurements called orbital... common element sets used to describe elliptical orbits (including circular orbits) are Keplerian elements, also called classical orbital
NASA Astrophysics Data System (ADS)
Zwanenburg, Philip; Nadarajah, Siva
2016-02-01
The aim of this paper is to demonstrate the equivalence between filtered Discontinuous Galerkin (DG) schemes and the Energy Stable Flux Reconstruction (ESFR) schemes, expanding on previous demonstrations in 1D [1] and for straight-sided elements in 3D [2]. We first derive the DG and ESFR schemes in strong form and compare the respective flux penalization terms while highlighting the implications of the fundamental assumptions for stability in the ESFR formulations, notably that all ESFR scheme correction fields can be interpreted as modally filtered DG correction fields. We present the result in the general context of all higher dimensional curvilinear element formulations. Through a demonstration that there exists a weak form of the ESFR schemes which is both discretely and analytically equivalent to the strong form, we then extend the results obtained for the strong formulations to demonstrate that ESFR schemes can be interpreted as a DG scheme in weak form where discontinuous edge flux is substituted for numerical edge flux correction. Theoretical derivations are then verified with numerical results obtained from a 2D Euler testcase with curved boundaries. Given the current choice of high-order DG-type schemes and the question as to which might be best to use for a specific application, the main significance of this work is the bridge that it provides between them. Clearly outlining the similarities between the schemes results in the important conclusion that it is always less efficient to use ESFR schemes, as opposed to the weak DG scheme, when solving problems implicitly.
Gradiometer Using Middle Loops as Sensing Elements in a Low-Field SQUID MRI System
NASA Technical Reports Server (NTRS)
Penanen, Konstantin; Hahn, Inseob; Ho Eom, Byeong
2009-01-01
A new gradiometer scheme uses middle loops as sensing elements in low-field superconducting quantum interference device (SQUID) magnetic resonance imaging (MRI). This design of a second-order gradiometer increases its sensitivity and makes it more uniform, compared to the conventional side-loop sensing scheme with a comparable matching SQUID. The space between the two middle loops becomes the imaging volume, with the enclosing cryostat built accordingly.
Edge Equilibrium Code (EEC) For Tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xujing
2014-02-24
The edge equilibrium code (EEC) described in this paper is developed for simulations of the near edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, which is the same as implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.
A soil-canopy scheme for use in a numerical model of the atmosphere: 1D stand-alone model
NASA Astrophysics Data System (ADS)
Kowalczyk, E. A.; Garratt, J. R.; Krummel, P. B.
We provide a detailed description of a soil-canopy scheme for use in the CSIRO general circulation models (GCMs) (CSIRO-4 and CSIRO-9), in the form of a one-dimensional stand-alone model. In addition, the paper documents the model's ability to simulate realistic surface fluxes by comparison with mesoscale model simulations (involving more sophisticated soil and boundary-layer treatments) and observations, and the diurnal range in surface quantities, including extreme maximum surface temperatures. The sensitivity of the model to values of the surface resistance is also quantified. The model represents phase 1 of a longer-term plan to improve the atmospheric boundary layer (ABL) and surface schemes in the CSIRO GCMs.
A classification scheme for edge-localized modes based on their probability distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shabbir, A., E-mail: aqsa.shabbir@ugent.be; Max Planck Institute for Plasma Physics, D-85748 Garching; Hornung, G.
We present here an automated classification scheme which is particularly well suited to scenarios where the parameters have significant uncertainties or are stochastic quantities. To this end, the parameters are modeled with probability distributions in a metric space and classification is conducted using the notion of nearest neighbors. The presented framework is then applied to the classification of type I and type III edge-localized modes (ELMs) from a set of carbon-wall plasmas at JET. This provides a fast, standardized classification of ELM types which is expected to significantly reduce the effort of ELM experts in identifying ELM types. Further, the classification scheme is general and can be applied to various other plasma phenomena as well.
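A sketch of nearest-neighbor classification of probability distributions in a metric space, here using the Hellinger distance between normalized histograms; the gamma-distributed stand-ins for ELM parameter distributions, the bin choices, and the labels are purely illustrative assumptions, not the JET data or the paper's metric.

    import numpy as np

    def hellinger(p, q):
        """Hellinger distance between two discrete probability distributions."""
        return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

    def nearest_neighbor_label(query, references, labels):
        """Label a distribution with the label of its nearest reference
        distribution in the chosen metric."""
        d = [hellinger(query, r) for r in references]
        return labels[int(np.argmin(d))]

    # demo: each sample is summarized by a normalized histogram of a parameter
    rng = np.random.default_rng(6)
    bins = np.linspace(0.0, 8.0, 33)
    def histogram(samples):
        h, _ = np.histogram(samples, bins=bins)
        return h / h.sum()

    refs = [histogram(rng.gamma(2.0, 1.0, 500)),     # stand-in for "type I"
            histogram(rng.gamma(8.0, 0.5, 500))]     # stand-in for "type III"
    labels = ["type I", "type III"]
    query = histogram(rng.gamma(2.1, 1.0, 300))
    print(nearest_neighbor_label(query, refs, labels))   # expected: type I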
A Mixed Finite Volume Element Method for Flow Calculations in Porous Media
NASA Technical Reports Server (NTRS)
Jones, Jim E.
1996-01-01
A key ingredient in the simulation of flow in porous media is the accurate determination of the velocities that drive the flow. The large scale irregularities of the geology, such as faults, fractures, and layers suggest the use of irregular grids in the simulation. Work has been done in applying the finite volume element (FVE) methodology as developed by McCormick in conjunction with mixed methods which were developed by Raviart and Thomas. The resulting mixed finite volume element discretization scheme has the potential to generate more accurate solutions than standard approaches. The focus of this paper is on a multilevel algorithm for solving the discrete mixed FVE equations. The algorithm uses a standard cell centered finite difference scheme as the 'coarse' level and the more accurate mixed FVE scheme as the 'fine' level. The algorithm appears to have potential as a fast solver for large size simulations of flow in porous media.
Computational plasticity algorithm for particle dynamics simulations
NASA Astrophysics Data System (ADS)
Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.
2018-01-01
The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.
Measurements and mathematical formalism of quantum mechanics
NASA Astrophysics Data System (ADS)
Slavnov, D. A.
2007-03-01
A scheme for constructing quantum mechanics is given that does not have Hilbert space and linear operators as its basic elements. Instead, a version of algebraic approach is considered. Elements of a noncommutative algebra (observables) and functionals on this algebra (elementary states) associated with results of single measurements are used as primary components of the scheme. On the one hand, it is possible to use within the scheme the formalism of the standard (Kolmogorov) probability theory, and, on the other hand, it is possible to reproduce the mathematical formalism of standard quantum mechanics, and to study the limits of its applicability. A short outline is given of the necessary material from the theory of algebras and probability theory. It is described how the mathematical scheme of the paper agrees with the theory of quantum measurements, and avoids quantum paradoxes.
Investigation on improved Gabor order tracking technique
NASA Astrophysics Data System (ADS)
Pan, Min-Chun; Chiu, Chun-Ching
2004-07-01
The study proposes an improved Gabor order tracking (GOT) technique to cope with crossing orders that cannot be effectively separated using the original GOT scheme. The improvement aids both the reconstruction and interpretation of two crossing orders, such as an order component associated with a transmission element and a structural resonance component. In the paper, the influence of the dual function on the Gabor expansion coefficients is investigated, which can affect the precision of the tracked order component. Additionally, use of the GOT scheme in noisy conditions is demonstrated as well. To apply the improved GOT in real tasks, separation and extraction of close-order components of vibration signals measured from a transmission-element test bench is illustrated using both the GOT and Vold-Kalman filtering (VKF) OT schemes. Finally, comprehensive comparisons between the improved GOT and VKF_OT schemes are made from the processing results.
NASA Astrophysics Data System (ADS)
Mishra, C.; Samantaray, A. K.; Chakraborty, G.
2016-09-01
Vibration analysis for diagnosis of faults in rolling element bearings is complicated when the rotor speed is variable or slow. In the former case, the time interval between the fault-induced impact responses in the vibration signal are non-uniform and the signal strength is variable. In the latter case, the fault-induced impact response strength is weak and generally gets buried in the noise, i.e. noise dominates the signal. This article proposes a diagnosis scheme based on a combination of a few signal processing techniques. The proposed scheme initially represents the vibration signal in terms of uniformly resampled angular position of the rotor shaft by using the interpolated instantaneous angular position measurements. Thereafter, intrinsic mode functions (IMFs) are generated through empirical mode decomposition (EMD) of resampled vibration signal which is followed by thresholding of IMFs and signal reconstruction to de-noise the signal and envelope order tracking to diagnose the faults. Data for validating the proposed diagnosis scheme are initially generated from a multi-body simulation model of rolling element bearing which is developed using bond graph approach. This bond graph model includes the ball and cage dynamics, localized fault geometry, contact mechanics, rotor unbalance, and friction and slip effects. The diagnosis scheme is finally validated with experiments performed with the help of a machine fault simulator (MFS) system. Some fault scenarios which could not be experimentally recreated are then generated through simulations and analyzed through the developed diagnosis scheme.
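A sketch of two steps of such a pipeline, angular resampling from interpolated instantaneous angular position and an envelope order spectrum via the analytic signal; the synthetic run-up signal, the fault order of 3.2, and all parameters are illustrative assumptions, and the EMD and thresholding stages of the proposed scheme are omitted.

    import numpy as np
    from scipy.signal import hilbert

    def angular_resample(t, x, t_tacho, revs_tacho, n_per_rev=256):
        """Resample a vibration signal x(t) onto a uniform shaft-angle grid, using
        interpolated instantaneous angular position (in revolutions)."""
        theta = np.interp(t, t_tacho, revs_tacho)
        theta_uniform = np.arange(theta[0], theta[-1], 1.0 / n_per_rev)
        return theta_uniform, np.interp(theta_uniform, theta, x)

    def envelope_order_spectrum(x_angle, n_per_rev=256):
        """Envelope via the analytic signal, then an FFT over shaft angle; the
        frequency axis is in orders (multiples of the shaft speed)."""
        env = np.abs(hilbert(x_angle - x_angle.mean()))
        spec = np.abs(np.fft.rfft(env - env.mean())) / len(env)
        orders = np.fft.rfftfreq(len(env), d=1.0 / n_per_rev)
        return orders, spec

    # demo: a run-up with a resonance at order 40 amplitude-modulated at order 3.2
    fs, T = 20000, 4.0
    t = np.arange(0.0, T, 1.0 / fs)
    revs = 10.0 * t + 2.5 * t**2                        # shaft position in revolutions
    x = (1.0 + 0.8 * np.cos(2.0 * np.pi * 3.2 * revs)) * np.cos(2.0 * np.pi * 40.0 * revs)
    x += 0.05 * np.random.default_rng(7).standard_normal(len(t))
    theta, xa = angular_resample(t, x, t, revs)
    orders, spec = envelope_order_spectrum(xa)
    print(orders[np.argmax(spec[1:]) + 1])              # ~ 3.2, the modulation order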
NASA Astrophysics Data System (ADS)
Cave, Robert J.; Newton, Marshall D.
1996-01-01
A new method for the calculation of the electronic coupling matrix element for electron transfer processes is introduced and results for several systems are presented. The method can be applied to ground and excited state systems and can be used in cases where several states interact strongly. Within the set of states chosen it is a non-perturbative treatment, and can be implemented using quantities obtained solely in terms of the adiabatic states. Several applications based on quantum chemical calculations are briefly presented. Finally, since quantities for adiabatic states are the only input to the method, it can also be used with purely experimental data to estimate electron transfer matrix elements.
Interpolation Hermite Polynomials For Finite Element Method
NASA Astrophysics Data System (ADS)
Gusev, Alexander; Vinitsky, Sergue; Chuluunbaatar, Ochbadrakh; Chuluunbaatar, Galmandakh; Gerdt, Vladimir; Derbov, Vladimir; Góźdź, Andrzej; Krassovitskiy, Pavel
2018-02-01
We describe a new algorithm for analytic calculation of high-order Hermite interpolation polynomials of the simplex and give their classification. A typical example of triangle element, to be built in high accuracy finite element schemes, is given.
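A 1D illustration of Hermite interpolation (matching values and first derivatives at nodes) using SciPy's BPoly.from_derivatives; this is only the underlying interpolation idea, not the high-order simplex construction or classification described in the paper.

    import numpy as np
    from scipy.interpolate import BPoly

    # Two-point Hermite interpolation: match value and first derivative of
    # f(x) = sin(x) at the element endpoints, then check the midpoint error.
    f, df = np.sin, np.cos
    nodes = np.array([0.0, 1.0])
    data = [[f(xn), df(xn)] for xn in nodes]         # [value, derivative] per node
    hermite = BPoly.from_derivatives(nodes, data)    # cubic Hermite on [0, 1]
    xm = 0.5
    print(hermite(xm), f(xm))                        # small interpolation error
    print(hermite.derivative()(nodes), df(nodes))    # derivatives matched at nodes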
78 FR 78939 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-27
... Quantity or Quantities of Articles or Services under Consideration for Purchase: C-130J technical, engineering and software support; software updates and patches; familiarization training for Portable Flight... and contractor technical support services; and other related elements of logistics and program support...
Higher-Order Theory for Functionally Graded Materials
NASA Technical Reports Server (NTRS)
Aboudi, Jacob; Pindera, Marek-Jerzy; Arnold, Steven M.
1999-01-01
This paper presents the full generalization of the Cartesian coordinate-based higher-order theory for functionally graded materials developed by the authors during the past several years. This theory circumvents the problematic use of the standard micromechanical approach, based on the concept of a representative volume element, commonly employed in the analysis of functionally graded composites by explicitly coupling the local (microstructural) and global (macrostructural) responses. The theoretical framework is based on volumetric averaging of the various field quantities, together with imposition of boundary and interfacial conditions in an average sense between the subvolumes used to characterize the composite's functionally graded microstructure. The generalization outlined herein involves extension of the theoretical framework to enable the analysis of materials characterized by spatially variable microstructures in three directions. Specialization of the generalized theoretical framework to previously published versions of the higher-order theory for materials functionally graded in one and two directions is demonstrated. In the applications part of the paper we summarize the major findings obtained with the one-directional and two-directional versions of the higher-order theory. The results illustrate both the fundamental issues related to the influence of microstructure on microscopic and macroscopic quantities governing the response of composites and the technologically important applications. A major issue addressed herein is the applicability of the classical homogenization schemes in the analysis of functionally graded materials. The technologically important applications illustrate the utility of functionally graded microstructures in tailoring the response of structural components in a variety of applications involving uniform and gradient thermomechanical loading.
A COMPLETE DISPOSAL-RECYCLE SCHEME FOR AGRICULTURAL SOLID WASTES
This investigation applied the anaerobic process to the production of methane gas and a stabilized sludge from cow manure and farm clippings in laboratory pilot plants as well as a full-scale (2,000 gal.) digester system. The quantity and quality of gas produced, the biochemical ...
Recovery Schemes for Primitive Variables in General-relativistic Magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Siegel, Daniel M.; Mösta, Philipp; Desai, Dhruv; Wu, Samantha
2018-05-01
General-relativistic magnetohydrodynamic (GRMHD) simulations are an important tool to study a variety of astrophysical systems such as neutron star mergers, core-collapse supernovae, and accretion onto compact objects. A conservative GRMHD scheme numerically evolves a set of conservation equations for “conserved” quantities and requires the computation of certain primitive variables at every time step. This recovery procedure constitutes a core part of any conservative GRMHD scheme and it is closely tied to the equation of state (EOS) of the fluid. In the quest to include nuclear physics, weak interactions, and neutrino physics, state-of-the-art GRMHD simulations employ finite-temperature, composition-dependent EOSs. While different schemes have individually been proposed, the recovery problem still remains a major source of error, failure, and inefficiency in GRMHD simulations with advanced microphysics. The strengths and weaknesses of the different schemes when compared to each other remain unclear. Here we present the first systematic comparison of various recovery schemes used in different dynamical spacetime GRMHD codes for both analytic and tabulated microphysical EOSs. We assess the schemes in terms of (i) speed, (ii) accuracy, and (iii) robustness. We find large variations among the different schemes and that there is not a single ideal scheme. While the computationally most efficient schemes are less robust, the most robust schemes are computationally less efficient. More robust schemes may require an order of magnitude more calls to the EOS, which are computationally expensive. We propose an optimal strategy of an efficient three-dimensional Newton–Raphson scheme and a slower but more robust one-dimensional scheme as a fall-back.
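A sketch of a one-dimensional pressure-based recovery for special-relativistic hydrodynamics with an analytic ideal-gas EOS, using a bracketed root find; the EOS index, the bracket, and the flat-space, magnetic-field-free 1D setting are simplifying assumptions and do not reproduce the tabulated-EOS GRMHD schemes compared in the paper.

    import numpy as np
    from scipy.optimize import brentq

    GAMMA = 5.0 / 3.0          # ideal-gas EOS index (analytic EOS assumption)

    def prim_to_cons(rho, v, p):
        """Special-relativistic hydro (1D, flat space): primitives -> conserved."""
        W = 1.0 / np.sqrt(1.0 - v * v)
        h = 1.0 + GAMMA / (GAMMA - 1.0) * p / rho    # specific enthalpy
        D = rho * W
        S = rho * h * W * W * v
        tau = rho * h * W * W - p - D
        return D, S, tau

    def cons_to_prim(D, S, tau):
        """1D pressure-based recovery: find p such that the EOS pressure computed
        from the implied (rho, eps) matches the guess."""
        def residual(p):
            v = S / (tau + D + p)
            W = 1.0 / np.sqrt(1.0 - v * v)
            rho = D / W
            eps = (tau + D * (1.0 - W) + p * (1.0 - W * W)) / (D * W)
            return (GAMMA - 1.0) * rho * eps - p
        p_lo = max(1.0e-12, abs(S) - tau - D + 1.0e-12)   # keeps |v| < 1
        p = brentq(residual, p_lo, 1.0e8)                 # wide bracket for this test
        v = S / (tau + D + p)
        W = 1.0 / np.sqrt(1.0 - v * v)
        return D / W, v, p

    # round-trip test: recover the primitives we started from
    D, S, tau = prim_to_cons(rho=1.0, v=0.5, p=0.1)
    print(cons_to_prim(D, S, tau))                        # ~ (1.0, 0.5, 0.1)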
Finite element dynamic analysis on CDC STAR-100 computer
NASA Technical Reports Server (NTRS)
Noor, A. K.; Lambiotte, J. J., Jr.
1978-01-01
Computational algorithms are presented for the finite element dynamic analysis of structures on the CDC STAR-100 computer. The spatial behavior is described using higher-order finite elements. The temporal behavior is approximated by using either the central difference explicit scheme or Newmark's implicit scheme. In each case the analysis is broken up into a number of basic macro-operations. Discussion is focused on the organization of the computation and the mode of storage of different arrays to take advantage of the STAR pipeline capability. The potential of the proposed algorithms is discussed and CPU times are given for performing the different macro-operations for a shell modeled by higher order composite shallow shell elements having 80 degrees of freedom.
Frequency domain modeling and dynamic characteristics evaluation of existing wind turbine systems
NASA Astrophysics Data System (ADS)
Chiang, Chih-Hung; Yu, Chih-Peng
2016-04-01
It is quite well accepted that frequency domain procedures are suitable for the design and dynamic analysis of wind turbine structures, especially for floating offshore wind turbines, since random wind loads and wave induced motions are most likely simulated in the frequency domain. This paper presents specific applications of an effective frequency domain scheme to the linear analysis of wind turbine structures in which a 1-D spectral element was developed based on the axially-loaded member. The solution schemes are summarized for the spectral analyses of the tower, the blades, and the combined system with selected frequency-dependent coupling effect from foundation-structure interactions. Numerical examples demonstrate that the modal frequencies obtained using spectral-element models are in good agreement with those found in the literature. A 5-element mono-pile model results in less than 0.3% deviation from an existing 160-element model. It is preliminarily concluded that the proposed scheme is relatively efficient in performing quick verification for test data obtained from the on-site vibration measurement using the microwave interferometer.
A high-order Lagrangian-decoupling method for the incompressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Ho, Lee-Wing; Maday, Yvon; Patera, Anthony T.; Ronquist, Einar M.
1989-01-01
A high-order Lagrangian-decoupling method is presented for the unsteady convection-diffusion and incompressible Navier-Stokes equations. The method is based upon: (1) Lagrangian variational forms that reduce the convection-diffusion equation to a symmetric initial value problem; (2) implicit high-order backward-differentiation finite-difference schemes for integration along characteristics; (3) finite element or spectral element spatial discretizations; and (4) mesh-invariance procedures and high-order explicit time-stepping schemes for deducing function values at convected space-time points. The method improves upon previous finite element characteristic methods through the systematic and efficient extension to high order accuracy, and the introduction of a simple structure-preserving characteristic-foot calculation procedure which is readily implemented on modern architectures. The new method is significantly more efficient than explicit-convection schemes for the Navier-Stokes equations due to the decoupling of the convection and Stokes operators and the attendant increase in temporal stability. Numerous numerical examples are given for the convection-diffusion and Navier-Stokes equations for the particular case of a spectral element spatial discretization.
Spectral Collocation Time-Domain Modeling of Diffractive Optical Elements
NASA Astrophysics Data System (ADS)
Hesthaven, J. S.; Dinesen, P. G.; Lynov, J. P.
1999-11-01
A spectral collocation multi-domain scheme is developed for the accurate and efficient time-domain solution of Maxwell's equations within multi-layered diffractive optical elements. Special attention is being paid to the modeling of out-of-plane waveguide couplers. Emphasis is given to the proper construction of high-order schemes with the ability to handle very general problems of considerable geometric and material complexity. Central questions regarding efficient absorbing boundary conditions and time-stepping issues are also addressed. The efficacy of the overall scheme for the time-domain modeling of electrically large, and computationally challenging, problems is illustrated by solving a number of plane as well as non-plane waveguide problems.
Chronopoulos, D
2017-01-01
A systematic expression quantifying the wave energy skewing phenomenon as a function of the mechanical characteristics of a non-isotropic structure is derived in this study. A structure of arbitrary anisotropy, layering and geometric complexity is modelled through Finite Elements (FEs) coupled to a periodic structure wave scheme. A generic approach for efficiently computing the angular sensitivity of the wave slowness for each wave type, direction and frequency is presented. The approach does not involve any finite differentiation scheme and is therefore computationally efficient and not prone to the associated numerical errors. Copyright © 2016 Elsevier B.V. All rights reserved.
A hybrid Lagrangian Voronoi-SPH scheme
NASA Astrophysics Data System (ADS)
Fernandez-Gutierrez, D.; Souto-Iglesias, A.; Zohdi, T. I.
2018-07-01
A hybrid Lagrangian Voronoi-SPH scheme, with an explicit weakly compressible formulation for both the Voronoi and SPH sub-domains, has been developed. The SPH discretization is substituted by Voronoi elements close to solid boundaries, where SPH consistency and boundary conditions implementation become problematic. A buffer zone to couple the dynamics of both sub-domains is used. This zone is formed by a set of particles where fields are interpolated taking into account SPH particles and Voronoi elements. A particle may move in or out of the buffer zone depending on its proximity to a solid boundary. The accuracy of the coupled scheme is discussed by means of a set of well-known verification benchmarks.
Abundance of He-3 and other solar-wind-derived volatiles in lunar soil
NASA Technical Reports Server (NTRS)
Swindle, Timothy D.
1992-01-01
Volatiles implanted into the lunar regolith by the solar wind are potentially important lunar resources. Wittenberg et al. (1986) have proposed that lunar He-3 could be used as a fuel for terrestrial nuclear fusion reactors. They argue that a fusion scheme involving D and He-3 would be cleaner and more efficient than currently-proposed schemes involving D and T. However, since the terrestrial inventory of He-3 is so small, they suggest that the lunar regolith, with concentrations of the order of parts per billion (by mass) would be an economical source of He-3. Solar-wind implantation is also the primary source of H, C, and N in lunar soil. These elements could also be important, particularly for life support and for propellant production. In a SERC study of the feasibility of obtaining the necessary amount of He-3, Swindle et al. (1990) concluded that the available amount is sufficient for early reactors, at least, but that the mining problems, while not necessarily insurmountable, are prodigious. The volatiles H, C, and N, on the other hand, come in parts per million level abundances. The differences in abundances mean that (1) a comparable amount of H, C, and/or N could be extracted with orders of magnitude smaller operations than required for He-3, and (2) if He-3 extraction ever becomes important, huge quantities of H, C, and N will be produced as by-products.
Vectorized schemes for conical potential flow using the artificial density method
NASA Technical Reports Server (NTRS)
Bradley, P. F.; Dwoyer, D. L.; South, J. C., Jr.; Keen, J. M.
1984-01-01
A method is developed to determine solutions to the full-potential equation for steady supersonic conical flow using the artificial density method. Various update schemes used generally for transonic potential solutions are investigated. The schemes are compared for speed and robustness. All versions of the computer code have been vectorized and are currently running on the CYBER-203 computer. The update schemes are vectorized, where possible, either fully (explicit schemes) or partially (implicit schemes). Since each version of the code differs only by the update scheme and elements other than the update scheme are completely vectorizable, comparisons of computational effort and convergence rate among schemes are a measure of the specific scheme's performance. Results are presented for circular and elliptical cones at angle of attack for subcritical and supercritical crossflows.
NASA Astrophysics Data System (ADS)
Glazyrina, O. V.; Pavlova, M. F.
2016-11-01
We consider a parabolic inequality whose space operator is monotone with respect to the gradient and depends on an integral (over the space variables) characteristic of the solution. We construct a two-layer difference scheme for this problem using the penalty method, semidiscretization in the time variable, and the finite element method (FEM) in the space variables. We prove convergence of the constructed method.
Galerkin finite element scheme for magnetostrictive structures and composites
NASA Astrophysics Data System (ADS)
Kannan, Kidambi Srinivasan
The ever-increasing role of magnetostrictives in actuation and sensing applications is an indication of their importance in the emerging field of smart structures technology. As newer, and more complex, applications are developed, there is a growing need for a reliable computational tool that can effectively address the magneto-mechanical interactions and other nonlinearities in these materials and in structures incorporating them. This thesis presents a continuum level quasi-static, three-dimensional finite element computational scheme for modeling the nonlinear behavior of bulk magnetostrictive materials and particulate magnetostrictive composites. Models for magnetostriction must deal with two sources of nonlinearities: nonlinear body forces/moments in equilibrium equations governing magneto-mechanical interactions in deformable and magnetized bodies, and nonlinear coupled magneto-mechanical constitutive models for the material of interest. In the present work, classical differential formulations for nonlinear magneto-mechanical interactions are recast in integral form using the weighted-residual method. A discretized finite element form is obtained by applying the Galerkin technique. The finite element formulation is based upon three-dimensional eight-noded (isoparametric) brick element interpolation functions and magnetostatic infinite elements at the boundary. Two alternative possibilities are explored for establishing the nonlinear incremental constitutive model: characterization in terms of magnetic field or in terms of magnetization. The former methodology is the one most commonly used in the literature. In this work, a detailed comparative study of both methodologies is carried out. The computational scheme is validated, qualitatively and quantitatively, against experimental measurements published in the literature on structures incorporating the magnetostrictive material Terfenol-D. The influence of nonlinear body forces and body moments of magnetic origin, on the response of magnetostrictive structures to complex mechanical and magnetic loading conditions, is carefully examined. While monolithic magnetostrictive materials have been commercially available since the late eighties, attention in the smart structures research community has recently focused upon building and using magnetostrictive particulate composite structures for conventional actuation applications and novel sensing methodologies in structural health monitoring. A particulate magnetostrictive composite element has been developed in the present work to model such structures. This composite element incorporates interactions between magnetostrictive particles by combining a numerical micromechanical analysis based on magneto-mechanical Green's functions, with a homogenization scheme based upon the Mori-Tanaka approach. This element has been applied to the simulation of particulate actuators and sensors reported in the literature. Simulation results are compared to experimental data for validation purposes. The computational schemes developed, for bulk materials and for composites, are expected to be of great value to researchers and designers of novel applications based on magnetostrictives.
NASA Technical Reports Server (NTRS)
Wang, Xiao-Yen; Chow, Chuen-Yen; Chang, Sin-Chung
1998-01-01
Without resorting to special treatment for each individual test case, the 1D and 2D CE/SE shock-capturing schemes described previously (in Part I) are used to simulate flows involving phenomena such as shock waves, contact discontinuities, expansion waves and their interactions. Five 1D and six 2D problems are considered to examine the capability and robustness of these schemes. Despite their simple logical structures and low computational cost (for the 2D CE/SE shock-capturing scheme, the CPU time is about 2 micro-secs per mesh point per marching step on a Cray C90 machine), the numerical results, when compared with experimental data, exact solutions or numerical solutions by other methods, indicate that these schemes can accurately resolve shock and contact discontinuities consistently.
Test functions for three-dimensional control-volume mixed finite-element methods on irregular grids
Naff, R.L.; Russell, T.F.; Wilson, J.D.
2000-01-01
Numerical methods based on unstructured grids, with irregular cells, usually require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element methods, vector shape functions are used to approximate the distribution of velocities across cells and vector test functions are used to minimize the error associated with the numerical approximation scheme. For a logically cubic mesh, the lowest-order shape functions are chosen in a natural way to conserve intercell fluxes that vary linearly in logical space. Vector test functions, while somewhat restricted by the mapping into the logical reference cube, admit a wider class of possibilities. Ideally, an error minimization procedure to select the test function from an acceptable class of candidates would be the best procedure. Lacking such a procedure, we first investigate the effect of possible test functions on the pressure distribution over the control volume; specifically, we look for test functions that allow for the elimination of intermediate pressures on cell faces. From these results, we select three forms for the test function for use in a control-volume mixed method code and subject them to an error analysis for different forms of grid irregularity; errors are reported in terms of the discrete L2 norm of the velocity error. Of these three forms, one appears to produce optimal results for most forms of grid irregularity.
Neural adaptive control for vibration suppression in composite fin-tip of aircraft.
Suresh, S; Kannan, N; Sundararajan, N; Saratchandran, P
2008-06-01
In this paper, we present a neural adaptive control scheme for active vibration suppression of a composite aircraft fin tip. The mathematical model of a composite aircraft fin tip is derived using the finite element approach. The finite element model is updated experimentally to reflect the natural frequencies and mode shapes very accurately. Piezo-electric actuators and sensors are placed at optimal locations such that the vibration suppression is a maximum. A model-reference direct adaptive neural network control scheme is proposed to keep the vibration level within the minimum acceptable limit. In this scheme, a Gaussian neural network with linear filters is used to approximate the inverse dynamics of the system, and the parameters of the neural controller are estimated using a Lyapunov-based update law. In order to reduce the computational burden, which is critical for real-time applications, the number of hidden neurons is also estimated in the proposed scheme. The global asymptotic stability of the overall system is ensured using the principles of the Lyapunov approach. Simulation studies are carried out using sinusoidal force functions of varying frequency. Experimental results show that the proposed neural adaptive control scheme is capable of providing significant vibration suppression in the multiple bending modes of interest. The performance of the proposed scheme is better than that of the H(infinity) control scheme.
Disasters and Impact of Sleep Quality and Quantity on National Guard Medical Personnel
2018-04-30
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brougham, Thomas; Andersson, Erika; Barnett, Stephen M.
A joint measurement of two observables is a simultaneous measurement of both quantities upon the same quantum system. When two quantum-mechanical observables do not commute, then a joint measurement of these observables cannot be accomplished directly by projective measurements alone. In this paper we shall discuss the use of quantum cloning to perform a joint measurement of two components of spin associated with a qubit system. We introduce cloning schemes which are optimal with respect to this task. The cloning schemes may be thought to work by cloning two components of spin onto their outputs. We compare the proposed cloning machines to existing cloners.
Water resources management in karst aquifers - concepts and modeling approaches
NASA Astrophysics Data System (ADS)
Sauter, M.; Schmidt, S.; Abusaada, M.; Reimann, T.; Liedl, R.; Kordilla, J.; Geyer, T.
2011-12-01
Water resources management schemes generally imply the availability of a spectrum of various sources of water with a variability of quantity and quality in space and time, and the availability and suitability of storage facilities to cover various demands of water consumers on quantity and quality. Aquifers are generally regarded as suitable reservoirs since large volumes of water can be stored in the subsurface, water is protected from contamination and evaporation and the underground passage assists in the removal of at least some groundwater contaminants. Favorable aquifer properties include high vertical hydraulic conductivities for infiltration, large storage coefficients and not too large hydraulic gradients/conductivities. The latter factors determine the degree of discharge, i.e. loss of groundwater. Considering the above criteria, fractured and karstified aquifers appear to not really fulfill the respective conditions for storage reservoirs. Although infiltration capacity is relatively high, due to low storativity and high hydraulic conductivities, the small quantity of water stored is rapidly discharged. However, for a number of specific conditions, even karst aquifers are suitable for groundwater management schemes. These can be subdivided into active and passive management strategies. Active management options include strategies such as overpumping, i.e. the depletion of the karst water resources below the spring outflow level, and the construction of subsurface dams to prevent rapid discharge. Passive management options include the optimal use of the discharging groundwater under natural discharge conditions. System models that include the superposition of the effects of the different compartments (soil zone, epikarst, vadose and phreatic zone) assist in the optimal usage of the available groundwater resources, while taking into account the different water reservoirs. The elaboration and implementation of groundwater protection schemes employing well-established vulnerability assessment techniques ascertain the respective groundwater quality. In this paper a systematic overview of karst groundwater management schemes is provided, illustrating the specific conditions allowing active or passive management in the first place as well as the employment of various types of adapted models for the design of the different management schemes. Examples are provided from karst systems in Israel/Palestine, where a large 4000 km² basin is being managed as a whole, and the South of France, where the Lez groundwater development scheme illustrates the optimal use of overpumping from the conduit system, providing additional water for the City of Montpellier during dry summers and at the same time increasing recharge and assisting in the mitigation of flooding during high winter discharge conditions. Overpumping could be an option in many Mediterranean karst catchments since karst conduit development occurred well below today's spring discharge level. Other examples include the construction of subsurface dams for hydropower generation in the Dinaric karst and reduction of discharge. Problems of leakage and general feasibility are discussed.
Slave finite elements: The temporal element approach to nonlinear analysis
NASA Technical Reports Server (NTRS)
Gellin, S.
1984-01-01
A formulation method for finite elements in space and time incorporating nonlinear geometric and material behavior is presented. The method uses interpolation polynomials for approximating the behavior of various quantities over the element domain, and only explicit integration over space and time. While applications are general, the plate and shell elements that are currently being programmed are appropriate to model turbine blades, vanes, and combustor liners.
NASA Astrophysics Data System (ADS)
Jiang, YuXiao; Guo, PengLiang; Gao, ChengYan; Wang, HaiBo; Alzahrani, Faris; Hobiny, Aatef; Deng, FuGuo
2017-12-01
We present an original self-error-rejecting photonic qubit transmission scheme for both the polarization and spatial states of photon systems transmitted over collective noise channels. In our scheme, we use simple linear-optical elements, including half-wave plates, 50:50 beam splitters, and polarization beam splitters, to convert spatial-polarization modes into different time bins. By using postselection in different time bins, the success probability of obtaining the uncorrupted states approaches 1/4 for single-photon transmission, which is not influenced by the coefficients of noisy channels. Our self-error-rejecting transmission scheme can be generalized to hyperentangled n-photon systems and is useful in practical high-capacity quantum communications with photon systems in two degrees of freedom.
NASA Technical Reports Server (NTRS)
Kent, James; Holdaway, Daniel
2015-01-01
A number of geophysical applications require the use of the linearized version of the full model. One such example is in numerical weather prediction, where the tangent linear and adjoint versions of the atmospheric model are required for the 4DVAR inverse problem. The part of the model that represents the resolved scale processes of the atmosphere is known as the dynamical core. Advection, or transport, is performed by the dynamical core. It is a central process in many geophysical applications and is a process that often has a quasi-linear underlying behavior. However, over the decades since the advent of numerical modelling, significant effort has gone into developing many flavors of high-order, shape preserving, nonoscillatory, positive definite advection schemes. These schemes are excellent in terms of transporting the quantities of interest in the dynamical core, but they introduce nonlinearity through the use of nonlinear limiters. The linearity of the transport schemes used in Goddard Earth Observing System version 5 (GEOS-5), as well as a number of other schemes, is analyzed using a simple 1D setup. The linearized version of GEOS-5 is then tested using a linear third order scheme in the tangent linear version.
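A quick way to see the nonlinearity introduced by limiters is to test additivity of a single advection step. The sketch below (plain Python, not the GEOS-5 schemes themselves; scheme choices and parameters are illustrative) compares first-order upwind, which is linear, with a minmod-limited second-order update, which is not.

    import numpy as np

    def upwind_step(q, c):
        # First-order upwind update on a periodic grid, Courant number c (0 < c <= 1).
        return q - c * (q - np.roll(q, 1))

    def minmod(a, b):
        return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def limited_step(q, c):
        # Second-order MUSCL-type update with a minmod limiter (nonlinear).
        dq = minmod(q - np.roll(q, 1), np.roll(q, -1) - q)   # limited slopes
        q_face = q + 0.5 * (1.0 - c) * dq                    # right-face reconstruction
        flux = c * q_face
        return q - (flux - np.roll(flux, 1))

    rng = np.random.default_rng(1)
    q1, q2, c = rng.random(64), rng.random(64), 0.5
    for name, step in [("upwind", upwind_step), ("minmod-limited", limited_step)]:
        err = np.max(np.abs(step(q1 + q2, c) - (step(q1, c) + step(q2, c))))
        print(f"{name:15s} additivity defect = {err:.2e}")   # ~0 only for the linear scheme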
NASA Astrophysics Data System (ADS)
Salvalaglio, Marco; Backofen, Rainer; Voigt, Axel; Elder, Ken R.
2017-08-01
One of the major difficulties in employing phase-field crystal (PFC) modeling and the associated amplitude (APFC) formulation is the ability to tune model parameters to match experimental quantities. In this work, we address the problem of tuning the defect core and interface energies in the APFC formulation. We show that the addition of a single term to the free-energy functional can be used to increase the solid-liquid interface and defect energies in a well-controlled fashion, without any major change to other features. The influence of the newly added term is explored in two-dimensional triangular and honeycomb structures as well as bcc and fcc lattices in three dimensions. In addition, a finite-element method (FEM) is developed for the model that incorporates a mesh refinement scheme. The combination of the FEM and mesh refinement to simulate amplitude expansion with a new energy term provides a method of controlling microscopic features such as defect and interface energies while simultaneously delivering a coarse-grained examination of the system.
Analytically derived switching functions for exact H2+ eigenstates
NASA Astrophysics Data System (ADS)
Thorson, W. R.; Kimura, M.; Choi, J. H.; Knudson, S. K.
1981-10-01
Electron translation factors (ETFs) appropriate for slow atomic collisions may be constructed using switching functions. In this paper we derive a set of switching functions for the H2+ system by an analytical "two-center decomposition" of the exact molecular eigenstates. These switching functions are closely approximated by the simple form f = bη, where η is the "angle variable" of prolate spheroidal coordinates. For given united-atom angular momentum quantum numbers (l,m), the characteristic parameter b_lm depends only on the quantity c² = -εR²/2, where ε is the electronic binding energy and R the internuclear distance in a.u. The resulting parameters are in excellent agreement with those found in our earlier work by a heuristic "optimization" scheme based on a study of coupling matrix-element behavior for a number of H2+ states. An approximate extension to asymmetric cases (HeH2+) has also been made. Nonadiabatic couplings based on these switching functions have been used in recent close-coupling calculations for H+-H(1s) collisions and He2+-H(1s) collisions at energies 1.0-20 keV.
An extended GS method for dense linear systems
NASA Astrophysics Data System (ADS)
Niki, Hiroshi; Kohno, Toshiyuki; Abe, Kuniyoshi
2009-09-01
Davey and Rosindale [K. Davey, I. Rosindale, An iterative solution scheme for systems of boundary element equations, Internat. J. Numer. Methods Engrg. 37 (1994) 1399-1411] derived the GSOR method, which uses an upper triangular matrix Ω in order to solve dense linear systems. By applying functional analysis, the authors presented an expression for the optimum Ω. Moreover, Davey and Bounds [K. Davey, S. Bounds, A generalized SOR method for dense linear systems of boundary element equations, SIAM J. Comput. 19 (1998) 953-967] also introduced further interesting results. In this note, we employ a matrix analysis approach to investigate these schemes, and derive theorems that compare these schemes with existing preconditioners for dense linear systems. We show that the convergence rate of the Gauss-Seidel method with preconditioner P_G is superior to that of the GSOR method. Moreover, we define some splittings associated with the iterative schemes. Some numerical examples are reported to confirm the theoretical analysis. We show that the EGS method with preconditioner produces an extremely small spectral radius in comparison with the other schemes considered.
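For reference, the baseline iteration that the GSOR/EGS variants build on is the classical Gauss-Seidel sweep; a minimal sketch follows (plain Python; the test matrix and tolerances are illustrative, and no preconditioner is applied here).

    import numpy as np

    def gauss_seidel(A, b, sweeps=200, tol=1e-12):
        # Classical Gauss-Seidel: sweep through the rows, always using the most
        # recently updated entries of x.
        n = len(b)
        x = np.zeros(n)
        for _ in range(sweeps):
            for i in range(n):
                sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (b[i] - sigma) / A[i, i]
            if np.linalg.norm(b - A @ x) < tol:
                break
        return x

    A = np.array([[4.0, 1.0, 1.0],
                  [1.0, 5.0, 2.0],
                  [1.0, 2.0, 6.0]])       # small diagonally dominant test system
    b = np.array([6.0, 8.0, 9.0])
    print(gauss_seidel(A, b), np.linalg.solve(A, b))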
Obermann, Konrad; Chanturidze, Tata; Glazinski, Bernd; Dobberschuetz, Karin; Steinhauer, Heiko; Schmidt, Jean-Olivier
2018-02-20
Managers and administrators in charge of social protection and health financing, service purchasing and provision play a crucial role in harnessing the potential advantage of prudent organization, management and purchasing of health services, thereby supporting the attainment of Universal Health Coverage. However, very little is known about the needed quantity and quality of such staff, in particular when it comes to those institutions managing mandatory health insurance schemes and purchasing services. As many health care systems in low- and middle-income countries move towards independent institutions (both purchasers and providers) there is a clear need to have good data on staff and administrative cost in different social health protection schemes as a basis for investing in the development of a cadre of health managers and administrators for such schemes. We report on a systematic literature review of human resources in health management and administration in social protection schemes and suggest some aspects in moving research, practical applications and the policy debate forward.
Entropy-stable summation-by-parts discretization of the Euler equations on general curved elements
NASA Astrophysics Data System (ADS)
Crean, Jared; Hicken, Jason E.; Del Rey Fernández, David C.; Zingg, David W.; Carpenter, Mark H.
2018-03-01
We present and analyze an entropy-stable semi-discretization of the Euler equations based on high-order summation-by-parts (SBP) operators. In particular, we consider general multidimensional SBP elements, building on and generalizing previous work with tensor-product discretizations. In the absence of dissipation, we prove that the semi-discrete scheme conserves entropy; significantly, this proof of nonlinear L2 stability does not rely on integral exactness. Furthermore, interior penalties can be incorporated into the discretization to ensure that the total (mathematical) entropy decreases monotonically, producing an entropy-stable scheme. SBP discretizations with curved elements remain accurate, conservative, and entropy stable provided the mapping Jacobian satisfies the discrete metric invariants; polynomial mappings at most one degree higher than the SBP operators automatically satisfy the metric invariants in two dimensions. In three dimensions, we describe an elementwise optimization that leads to suitable Jacobians in the case of polynomial mappings. The properties of the semi-discrete scheme are verified and investigated using numerical experiments.
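The SBP property the analysis relies on can be checked directly in one dimension; the sketch below (illustrative Python, using the classical second-order SBP operator rather than the paper's multidimensional elements) verifies that Q + Qᵀ = B, the discrete analogue of integration by parts.

    import numpy as np

    def sbp_second_order(n, h):
        # Classical second-order SBP first-derivative operator D = H^{-1} Q
        # on a uniform grid of n points with spacing h.
        D = np.zeros((n, n))
        D[0, :2] = [-1.0, 1.0]
        D[-1, -2:] = [-1.0, 1.0]
        for i in range(1, n - 1):
            D[i, i - 1], D[i, i + 1] = -0.5, 0.5
        D /= h
        h_diag = np.full(n, h)
        h_diag[[0, -1]] = 0.5 * h
        return D, np.diag(h_diag)

    n, h = 11, 0.1
    D, H = sbp_second_order(n, h)
    Q = H @ D
    B = np.zeros((n, n))
    B[0, 0], B[-1, -1] = -1.0, 1.0
    print(np.allclose(Q + Q.T, B))   # True: discrete integration-by-parts property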
NASA Astrophysics Data System (ADS)
Nkhoma, Bryson; Kayira, Gift
2016-04-01
Over the past two decades, Malawi has been adversely hit by climatic variability and changes, and irrigation schemes which rely mostly on water from rivers have been negatively affected. In the face of dwindling quantities of water, distribution and sharing of water for irrigation has been a source of contestations and conflicts. Women who constitute a significant section of irrigation farmers in schemes have been major culprits. The study seeks to analyze gender contestations and conflicts over the use of water in the schemes developed in the Lake Chilwa basin, in southern Malawi. Using oral and written sources as well as drawing evidence from participatory and field observations conducted at Likangala and Domasi irrigation schemes, the largest schemes in the basin, the study observes that women are not passive victims of male domination over the use of dwindling waters for irrigation farming. They have often used existing political and traditional structures developed in the management of water in the schemes to competitively gain monopoly over water. They have sometimes expressed their agency by engaging in irrigation activities that fall beyond the control of formal rules and regulations of irrigation agriculture. Other than being losers, women are winning the battle for water and land resources in the basin.
A simple and efficient shear-flexible plate bending element
NASA Technical Reports Server (NTRS)
Chaudhuri, Reaz A.
1987-01-01
A shear-flexible triangular element formulation, which utilizes an assumed quadratic displacement potential energy approach and is numerically integrated using Gauss quadrature, is presented. The Reissner/Mindlin hypothesis of constant cross-sectional warping is directly applied to the three-dimensional elasticity theory to obtain a moderately thick-plate theory or constant shear-angle theory (CST), wherein the middle surface is no longer considered to be the reference surface and the two rotations are replaced by the two in-plane displacements as nodal variables. The resulting finite-element possesses 18 degrees of freedom (DOF). Numerical results are obtained for two different numerical integration schemes and a wide range of meshes and span-to-thickness ratios. These, when compared with available exact, series or finite-element solutions, demonstrate accuracy and rapid convergence characteristics of the present element. This is especially true in the case of thin to very thin plates, when the present element, used in conjunction with the reduced integration scheme, outperforms its counterpart, based on discrete Kirchhoff constraint theory (DKT).
Newmark local time stepping on high-performance computing architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
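The core bookkeeping behind multilevel LTS can be illustrated with a toy level assignment: each element is given a level p so that it takes 2^p substeps per global step while respecting its local CFL limit. The sketch below is only illustrative (names and numbers are assumptions) and does not reproduce the LTS-Newmark update itself.

    import numpy as np

    def lts_levels(element_sizes, wave_speed, cfl=0.5):
        # Per-element stable step from a CFL-type condition, then a power-of-two
        # level so that dt_global / 2**p is stable for every element.
        dt_local = cfl * np.asarray(element_sizes) / wave_speed
        dt_global = dt_local.max()
        levels = np.ceil(np.log2(dt_global / dt_local)).astype(int)
        return levels, dt_global

    sizes = np.array([1.0, 1.0, 0.5, 0.12, 0.01])   # strong local refinement (100x contrast)
    levels, dt_global = lts_levels(sizes, wave_speed=1.0)
    for h, p in zip(sizes, levels):
        print(f"h = {h:5.2f}   level = {p}   substeps per global step = {2 ** p}")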
NASA Astrophysics Data System (ADS)
Zhang, Bin; Deng, Congying; Zhang, Yi
2018-03-01
Rolling element bearings are mechanical components used frequently in most rotating machinery, and they are also vulnerable links representing the main source of failures in such systems. Thus, health condition monitoring and fault diagnosis of rolling element bearings have long been studied to improve the operational reliability and maintenance efficiency of rotating machines. Over the past decade, prognosis, which enables forewarning of failure and estimation of residual life, has attracted increasing attention. To accurately and efficiently predict failure of a rolling element bearing, its degradation needs to be well represented and modelled. For this purpose, degradation of the rolling element bearing is analysed with the delay-time-based model in this paper. Also, a hybrid feature selection and health indicator construction scheme is proposed for extraction of the bearing health relevant information from condition monitoring sensor data. Effectiveness of the presented approach is validated through case studies on rolling element bearing run-to-failure experiments.
NASA Astrophysics Data System (ADS)
Kazakova, E. I.; Medvedev, A. N.; Kolomytseva, A. O.; Demina, M. I.
2017-11-01
The paper presents a mathematical model for managing blasting schemes in the presence of random disturbances. Based on the lemmas and theorems proved, a stable control functional is formulated. A universal classification of blasting schemes is developed. The main classification attributes suggested are: the orientation, in plan, of the rows of charged wells relative to the rock block; the presence of cuts in the blasting scheme; the division of the well series into elements; and the blasting sequence. The periodic regularity of the transition from one short-delay blasting scheme to another is proved.
Constructing network scheme of connecting with Internet
NASA Astrophysics Data System (ADS)
Lin, Ganghua
2001-06-01
Nowadays people have an increasingly urgent need to obtain information; the time required to obtain it matters more and more, and the quantity of information demanded keeps growing. This forces us to re-examine our communication patterns. The purpose of this article is to compare a few ways of connecting to the Internet and to find the one best suited to our needs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cave, Robert J., E-mail: Robert-Cave@hmc.edu; Stanton, John F., E-mail: JFStanton@gmail.com
We present a simple quasi-diabatization scheme applicable to spectroscopic studies that can be applied using any wavefunction for which one-electron properties and transition properties can be calculated. The method is based on rotation of a pair (or set) of adiabatic states to minimize the difference between the given transition property at a reference geometry of high symmetry (where the quasi-diabatic states and adiabatic states coincide) and points of lower symmetry where quasi-diabatic quantities are desired. Compared to other quasi-diabatization techniques, the method requires no special coding, facilitates direct comparison between quasi-diabatic quantities calculated using different types of wavefunctions, and is free of any selection of configurations in the definition of the quasi-diabatic states. On the other hand, the method appears to be sensitive to multi-state issues, unlike recent methods we have developed that use a configurational definition of quasi-diabatic states. Results are presented and compared with two other recently developed quasi-diabatization techniques.
Efficient variable time-stepping scheme for intense field-atom interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cerjan, C.; Kosloff, R.
1993-03-01
The recently developed Residuum method [Tal-Ezer, Kosloff, and Cerjan, J. Comput. Phys. 100, 179 (1992)], a Krylov subspace technique with variable time-step integration for the solution of the time-dependent Schroedinger equation, is applied to the frequently used soft Coulomb potential in an intense laser field. This one-dimensional potential has asymptotic Coulomb dependence with a "softened" singularity at the origin; thus it models more realistic phenomena. Two of the more important quantities usually calculated in this idealized system are the photoelectron and harmonic photon generation spectra. These quantities are shown to be sensitive to the choice of a numerical integration scheme: some spectral features are incorrectly calculated or missing altogether. Furthermore, the Residuum method allows much larger grid spacings for equivalent or higher accuracy in addition to the advantages of variable time stepping. Finally, it is demonstrated that enhanced high-order harmonic generation accompanies intense field stabilization and that preparation of the atom in an intermediate Rydberg state leads to stabilization at much lower laser intensity.
Nonlinear Aeroacoustics Computations by the Space-Time CE/SE Method
NASA Technical Reports Server (NTRS)
Loh, Ching Y.
2003-01-01
The Space-Time Conservation Element and Solution Element Method, or CE/SE Method for short, is a recently developed numerical method for conservation laws. Despite being only second-order accurate in space and time, it possesses low dispersion errors and low dissipation. The method is robust enough to cover a wide range of compressible flows: from weak linear acoustic waves to strong discontinuous waves (shocks). An outstanding feature of the CE/SE scheme is its truly multi-dimensional, simple but effective non-reflecting boundary condition (NRBC), which is particularly valuable for computational aeroacoustics (CAA). By nature, the method may be categorized as a finite volume method, where the conservation element (CE) is equivalent to a finite control volume (or cell) and the solution element (SE) can be understood as the cell interface. However, due to its careful treatment of the surface fluxes and geometry, it is different from the existing schemes. The CE/SE scheme has now matured to the stage where a 3-D unstructured CE/SE Navier-Stokes solver is already available. However, in the present review paper, as a general introduction to the CE/SE method, only the 2-D unstructured Euler CE/SE solver is chosen and sketched in section 2. Then applications of the 2-D and 3-D CE/SE schemes to linear, and in particular, nonlinear aeroacoustics are depicted in sections 3, 4, and 5 to demonstrate its robustness and capability.
NASA Astrophysics Data System (ADS)
De Meij, A.; Vinuesa, J.-F.; Maupas, V.
2018-05-01
The sensitivity of calculated global horizontal irradiation (GHI) values in the Weather Research and Forecasting (WRF) model to different microphysics and dynamics schemes is studied. Thirteen sensitivity simulations were performed in which the microphysics, cumulus parameterization schemes and land surface models were changed. Firstly, we evaluated the model's performance by comparing calculated GHI values for the Base Case with observations for the Reunion Island for 2014. In general, the model calculates the largest bias during the austral summer. This indicates that the model is less accurate in timing the formation and dissipation of clouds during the summer, when higher water vapor quantities are present in the atmosphere than during the austral winter. Secondly, the sensitivity of the calculated GHI values to changes in the microphysics, cumulus parameterization and land surface models is evaluated. The sensitivity simulations showed that when the microphysics is changed from the Thompson scheme (or the Single-Moment 6-class scheme) to the Morrison double-moment scheme, the relative bias improves from 45% to 10%. The underlying reason for this improvement is that the Morrison double-moment scheme predicts the mass and number concentrations of five hydrometeors, which helps to improve the calculation of the densities, size and lifetime of the cloud droplets, while the single-moment schemes predict only the mass, and for fewer hydrometeors. Changing the cumulus parameterization schemes and land surface models does not have a large impact on GHI calculations.
The first ANDES elements: 9-DOF plate bending triangles
NASA Technical Reports Server (NTRS)
Militello, Carmelo; Felippa, Carlos A.
1991-01-01
New elements are derived to validate and assess the assumed natural deviatoric strain (ANDES) formulation. This is a brand new variant of the assumed natural strain (ANS) formulation of finite elements, which has recently attracted attention as an effective method for constructing high-performance elements for linear and nonlinear analysis. The ANDES formulation is based on an extended parametrized variational principle developed in recent publications. The key concept is that only the deviatoric part of the strains is assumed over the element whereas the mean strain part is discarded in favor of a constant stress assumption. Unlike conventional ANS elements, ANDES elements satisfy the individual element test (a stringent form of the patch test) a priori while retaining the favorable distortion-insensitivity properties of ANS elements. The first application of this formulation is the development of several Kirchhoff plate bending triangular elements with the standard nine degrees of freedom. Linear curvature variations are sampled along the three sides with the corners as gage reading points. These sample values are interpolated over the triangle using three schemes. Two schemes merge back to conventional ANS elements, one being identical to the Discrete Kirchhoff Triangle (DKT), whereas the third one produces two new ANDES elements. Numerical experiments indicate that one of the ANDES elements is relatively insensitive to distortion compared to previously derived high-performance plate-bending elements, while retaining accuracy for nondistorted elements.
A 3-dimensional mass conserving element for compressible flows
NASA Technical Reports Server (NTRS)
Fix, G.; Suri, M.
1985-01-01
A variety of finite element schemes has been used in the numerical approximation of compressible flows particularly in underwater acoustics. In many instances instabilities have been generated due to the lack of mass conservation. Two- and three-dimensional elements are developed which avoid these problems.
Multi-Scale Computational Modeling of Two-Phased Metal Using GMC Method
NASA Technical Reports Server (NTRS)
Moghaddam, Masoud Ghorbani; Achuthan, A.; Bednacyk, B. A.; Arnold, S. M.; Pineda, E. J.
2014-01-01
A multi-scale computational model for determining plastic behavior in two-phased CMSX-4 Ni-based superalloys is developed on a finite element analysis (FEA) framework employing crystal plasticity constitutive model that can capture the microstructural scale stress field. The generalized method of cells (GMC) micromechanics model is used for homogenizing the local field quantities. At first, GMC as stand-alone is validated by analyzing a repeating unit cell (RUC) as a two-phased sample with 72.9% volume fraction of gamma'-precipitate in the gamma-matrix phase and comparing the results with those predicted by finite element analysis (FEA) models incorporating the same crystal plasticity constitutive model. The global stress-strain behavior and the local field quantity distributions predicted by GMC demonstrated good agreement with FEA. High computational saving, at the expense of some accuracy in the components of local tensor field quantities, was obtained with GMC. Finally, the capability of the developed multi-scale model linking FEA and GMC to solve real life sized structures is demonstrated by analyzing an engine disc component and determining the microstructural scale details of the field quantities.
A semi-Lagrangian advection scheme for radioactive tracers in a regional spectral model
NASA Astrophysics Data System (ADS)
Chang, E.-C.; Yoshimura, K.
2015-06-01
In this study, the non-iteration dimensional-split semi-Lagrangian (NDSL) advection scheme is applied to the National Centers for Environmental Prediction (NCEP) regional spectral model (RSM) to alleviate the Gibbs phenomenon. The Gibbs phenomenon is a problem wherein negative values of positive-definite quantities (e.g., moisture and tracers) are generated by the spectral space transformation in a spectral model system. To solve this problem, the spectral prognostic specific humidity and radioactive tracer advection scheme is replaced by the NDSL advection scheme, which considers advection of tracers in a grid system without spectral space transformations. A regional version of the NDSL is developed in this study and is applied to the RSM. Idealized experiments show that the regional version of the NDSL is successful. The model runs for an actual case study suggest that the NDSL can successfully advect radioactive tracers (iodine-131 and cesium-137) without noise from the Gibbs phenomenon. The NDSL can also remove negative specific humidity values produced in spectral calculations without losing detailed features.
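The grid-space alternative to spectral advection can be illustrated with a one-dimensional semi-Lagrangian step: trace the departure point of each grid node and interpolate the tracer there. The sketch below is only a toy (constant wind, linear interpolation, assumed names), not the dimensional-split NDSL itself, but it shows why no spectral transform, and hence no Gibbs overshoot, enters the tracer update.

    import numpy as np

    def semi_lagrangian_step(q, u, dt, dx):
        # One semi-Lagrangian step on a periodic 1-D grid with constant wind u:
        # find departure points, then interpolate q there linearly.
        n = q.size
        x = np.arange(n) * dx
        x_dep = (x - u * dt) % (n * dx)              # departure points
        j = np.floor(x_dep / dx).astype(int)         # left neighbour index
        w = x_dep / dx - j                           # interpolation weight
        return (1.0 - w) * q[j % n] + w * q[(j + 1) % n]

    n = 100
    q = np.where((np.arange(n) > 40) & (np.arange(n) < 60), 1.0, 0.0)  # positive-definite slug
    q_new = semi_lagrangian_step(q, u=1.3, dt=0.7, dx=1.0)
    print(q_new.min() >= 0.0)   # linear interpolation of non-negative data stays non-negative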
Best Hiding Capacity Scheme for Variable Length Messages Using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Bajaj, Ruchika; Bedi, Punam; Pal, S. K.
Steganography is an art of hiding information in such a way that prevents the detection of hidden messages. Besides security of data, the quantity of data that can be hidden in a single cover medium, is also very important. We present a secure data hiding scheme with high embedding capacity for messages of variable length based on Particle Swarm Optimization. This technique gives the best pixel positions in the cover image, which can be used to hide the secret data. In the proposed scheme, k bits of the secret message are substituted into k least significant bits of the image pixel, where k varies from 1 to 4 depending on the message length. The proposed scheme is tested and results compared with simple LSB substitution, uniform 4-bit LSB hiding (with PSO) for the test images Nature, Baboon, Lena and Kitty. The experimental study confirms that the proposed method achieves high data hiding capacity and maintains imperceptibility and minimizes the distortion between the cover image and the obtained stego image.
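The embedding step itself reduces to simple bit masking; a minimal sketch follows (plain Python, single pixel; the PSO search for the best pixel positions and the per-message choice of k are not reproduced here).

    def embed_k_bits(pixel, bits, k):
        # Replace the k least significant bits of an 8-bit pixel with `bits`.
        mask = (1 << k) - 1
        return (pixel & ~mask) | (bits & mask)

    def extract_k_bits(pixel, k):
        return pixel & ((1 << k) - 1)

    stego = embed_k_bits(pixel=200, bits=0b101, k=3)
    print(stego, extract_k_bits(stego, 3))   # 205 5 -> per-pixel distortion at most 2**k - 1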
Application of the GA-BP Neural Network in Earthwork Calculation
NASA Astrophysics Data System (ADS)
Fang, Peng; Cai, Zhixiong; Zhang, Ping
2018-01-01
The calculation of earthwork quantity is a key factor in determining the project cost estimate and in optimizing the construction scheme, and it is of great significance in earth and rock excavation works. Using the optimization principles of the GA-BP intelligent algorithm and a database of earthwork quantity and cost information, we design a GA-BP neural network intelligent computing model. After network training and learning, the accuracy of the results meets the requirements of actual engineering construction specifications. The approach offers a new way of calculating earthwork quantities for other projects and has good potential for wider application.
NASA Astrophysics Data System (ADS)
Asoka-Kumar, P.; Leung, T. C.; Lynn, K. G.; Nielsen, B.; Forcier, M. P.; Weinberg, Z. A.; Rubloff, G. W.
1992-06-01
The centroid shifts of positron annihilation spectra are reported from the depletion regions of metal-oxide-semiconductor (MOS) capacitors at room temperature and at 35 K. The centroid shift measurement can be explained using the variation of the electric field strength and depletion layer thickness as a function of the applied gate bias. An estimate for the relevant MOS quantities is obtained by fitting the centroid shift versus beam energy data with a steady-state diffusion-annihilation equation and a derivative-gaussian positron implantation profile. Inadequacy of the present analysis scheme is evident from the derived quantities and alternate methods are required for better predictions.
NASA Astrophysics Data System (ADS)
Setiawan, R.
2018-03-01
In this paper, the Economic Order Quantity (EOQ) of a probabilistic two-level supply-chain system for items with imperfect quality is analyzed under a service-level constraint. The firm applies an active service-level constraint to avoid unpredictable shortage terms in the objective function. A mathematical analysis of the optimal result is delivered using two equilibrium concepts from the game-theoretic approach: Stackelberg equilibrium for the cooperative strategy and Stackelberg equilibrium for the noncooperative strategy. This is a new game-theoretic approach to inventory systems in which a service-level constraint is applied by the firm in its moves.
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Wang, Xiao-Yen; Chow, Chuen-Yen
1994-01-01
A new numerical discretization method for solving conservation laws is being developed. This new approach differs substantially in both concept and methodology from the well-established methods, i.e., finite difference, finite volume, finite element, and spectral methods. It is motivated by several important physical/numerical considerations and designed to avoid several key limitations of the above traditional methods. As a result of the above considerations, a set of key principles for the design of numerical schemes was put forth in a previous report. These principles were used to construct several numerical schemes that model a 1-D time-dependent convection-diffusion equation. These schemes were then extended to solve the time-dependent Euler and Navier-Stokes equations of a perfect gas. It was shown that the above schemes compared favorably with the traditional schemes in simplicity, generality, and accuracy. In this report, the 2-D versions of the above schemes, except the Navier-Stokes solver, are constructed using the same set of design principles. Their constructions are simplified greatly by the use of a nontraditional space-time mesh. Its use results in the simplest stencil possible, i.e., a tetrahedron in a 3-D space-time with a vertex at the upper time level and other three at the lower time level. Because of the similarity in their design, each of the present 2-D solvers virtually shares with its 1-D counterpart the same fundamental characteristics. Moreover, it is shown that the present Euler solver is capable of generating highly accurate solutions for a famous 2-D shock reflection problem. Specifically, both the incident and the reflected shocks can be resolved by a single data point without the presence of numerical oscillations near the discontinuity.
Uptake coefficients for biosolids-amended dryland winter wheat
USDA-ARS?s Scientific Manuscript database
Biosolids regulations developed in the United States employed risk assessment of the impacts of trace element additions on plant uptake. The US Environmental Protection Agency adapted the uptake coefficient (the ratio of plant concentration to the quantity of element added) when developing limitations on selected...
Seismic waves in heterogeneous material: subcell resolution of the discontinuous Galerkin method
NASA Astrophysics Data System (ADS)
Castro, Cristóbal E.; Käser, Martin; Brietzke, Gilbert B.
2010-07-01
We present an important extension of the arbitrary high-order discontinuous Galerkin (DG) finite-element method to model 2-D elastic wave propagation in highly heterogeneous material. In this new approach we include space-variable coefficients to describe smooth or discontinuous material variations inside each element using the same numerical approximation strategy as for the velocity-stress variables in the formulation of the elastic wave equation. The combination of the DG method with a time integration scheme based on the solution of arbitrary accuracy derivatives Riemann problems still provides an explicit, one-step scheme which achieves arbitrary high-order accuracy in space and time. Compared to previous formulations the new scheme contains two additional terms in the form of volume integrals. We show that the increased computational cost per element can be more than compensated for by the improved material representation inside each element, as coarser meshes can be used, which reduces the total number of elements and therefore the computational time to reach a desired error level. We confirm the accuracy of the proposed scheme by performing convergence tests and several numerical experiments considering smooth and highly heterogeneous material. As the approximation of the velocity and stress variables in the wave equation and of the material properties in the model can be chosen independently, we investigate the influence of the polynomial material representation on the accuracy of the synthetic seismograms with respect to computational cost. Moreover, we study the behaviour of the new method on strong material discontinuities, in the case where the mesh is not aligned with such a material interface. In this case second-order linear material approximation seems to be the best choice, with higher-order intra-cell approximation leading to potentially unstable behaviour. For all test cases we validate our solution against the well-established standard fourth-order finite difference and spectral element method.
Abdalrahman, T; Scheiner, S; Hellmich, C
2015-01-21
It is generally agreed that trabecular bone permeability, a physiologically important quantity, is governed by the material's (vascular or intertrabecular) porosity as well as by the viscosity of the pore-filling fluids. Still, there is less agreement on how these two key factors govern bone permeability. In order to shed more light onto this somewhat open issue, we here develop a random homogenization scheme for upscaling Poiseuille flow in the vascular porosity, up to Darcy-type permeability of the overall porous medium "trabecular bone". The underlying representative volume element of the macroscopic bone material contains two types of phases: a spherical, impermeable extracellular bone matrix phase interacts with interpenetrating cylindrical pore channel phases that are oriented in all different space directions. This type of interaction is modeled by means of a self-consistent homogenization scheme. While the permeability of the bone matrix equals zero, the permeability of the pore phase is found through expressing the classical Hagen-Poiseuille law for laminar flow in the format of a "micro-Darcy law". The upscaling scheme contains pore size and porosity as geometrical input variables; however, they can be related to each other, based on well-known relations between porosity and specific bone surface. As two key results, validated through comprehensive experimental data, it appears (i) that the famous Kozeny-Carman constant (which relates bone permeability to the cube of the porosity, the square of the specific surface, as well as to the bone fluid viscosity) needs to be replaced by a rational function that again depends on the porosity, and (ii) that the overall bone permeability is strongly affected by the pore fluid viscosity, which, in the case of polarized fluids, is strongly increased due to the presence of electrically charged pore walls. Copyright © 2014 Elsevier Ltd. All rights reserved.
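The channel-scale step referred to above is the standard Hagen-Poiseuille result rewritten as a "micro-Darcy" law (a textbook reminder, not the paper's orientation-averaged, self-consistent result): for a cylindrical channel of radius r filled with a fluid of viscosity μ, the cross-section-averaged velocity is

    \bar{v} = -\frac{r^{2}}{8\mu}\,\nabla p ,

so the single-channel permeability is r²/8; the macroscopic Darcy permeability of the bone then follows from homogenizing such channels over all orientations and over the vascular porosity.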
Salt-water-freshwater transient upconing - An implicit boundary-element solution
Kemblowski, M.
1985-01-01
The boundary-element method is used to solve the set of partial differential equations describing the flow of salt water and fresh water separated by a sharp interface in the vertical plane. In order to improve the accuracy and stability of the numerical solution, a new implicit scheme was developed for calculating the motion of the interface. The performance of this scheme was tested by means of numerical simulation. The numerical results are compared to experimental results for a problem of salt-water upconing under a drain. © 1985.
Boundary-element modelling of dynamics in external poroviscoelastic problems
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Litvinchuk, S. Yu; Ipatov, A. A.; Petrov, A. N.
2018-04-01
The problem of a spherical cavity in porous media is considered. The porous media are assumed to be isotropic poroelastic or isotropic poroviscoelastic. The poroviscoelastic formulation is treated as a combination of Biot's theory of poroelasticity and the elastic-viscoelastic correspondence principle. Viscoelastic models such as Kelvin–Voigt, the standard linear solid, and a model with a weakly singular kernel are considered. The boundary fields are studied using the boundary element method. The direct approach is applied. The numerical scheme is based on the collocation method, a regularized boundary integral equation, and a Radau time-stepping scheme.
Toward a Standardized Internet Measurement.
ERIC Educational Resources Information Center
Chen, Hsiang; Tan, Zixiang
This paper investigates measurement issues related to elements of the Internet and calls for a standardized measuring scheme to resolve the problem of the measurement. The dilemmas in measuring the elements of the Internet are identified, and previous studies are reviewed. Elements of the Internet are categorized into population, usage, protocol…
NASA Astrophysics Data System (ADS)
Kel'manov, A. V.; Khandeev, V. I.
2016-02-01
The strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters of given sizes (cardinalities) minimizing the sum (over both clusters) of the intracluster sums of squared distances from the elements of the clusters to their centers is considered. It is assumed that the center of one of the sought clusters is specified at the desired (arbitrary) point of space (without loss of generality, at the origin), while the center of the other one is unknown and determined as the mean value over all elements of this cluster. It is shown that unless P = NP, there is no fully polynomial-time approximation scheme for this problem, and such a scheme is substantiated in the case of a fixed space dimension.
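For concreteness, the objective being minimized can be written as follows (the notation is assumed here, not quoted from the paper): with the center of C_1 fixed at the origin and the cardinalities |C_1|, |C_2| prescribed,

    \min_{\{C_1, C_2\}} \; \sum_{x \in C_1} \lVert x \rVert^{2} \;+\; \sum_{x \in C_2} \Bigl\lVert x - \tfrac{1}{|C_2|} \textstyle\sum_{y \in C_2} y \Bigr\rVert^{2}.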
Simulating Progressive Damage of Notched Composite Laminates with Various Lamination Schemes
NASA Astrophysics Data System (ADS)
Mandal, B.; Chakrabarti, A.
2017-05-01
A three dimensional finite element based progressive damage model has been developed for the failure analysis of notched composite laminates. The material constitutive relations and the progressive damage algorithms are implemented into finite element code ABAQUS using user-defined subroutine UMAT. The existing failure criteria for the composite laminates are modified by including the failure criteria for fiber/matrix shear damage and delamination effects. The proposed numerical model is quite efficient and simple compared to other progressive damage models available in the literature. The efficiency of the present constitutive model and the computational scheme is verified by comparing the simulated results with the results available in the literature. A parametric study has been carried out to investigate the effect of change in lamination scheme on the failure behaviour of notched composite laminates.
Bounds on stochastic chemical kinetic systems at steady state
NASA Astrophysics Data System (ADS)
Dowdy, Garrett R.; Barton, Paul I.
2018-02-01
The method of moments has been proposed as a potential means to reduce the dimensionality of the chemical master equation (CME) appearing in stochastic chemical kinetics. However, attempts to apply the method of moments to the CME usually result in the so-called closure problem. Several authors have proposed moment closure schemes, which allow them to obtain approximations of quantities of interest, such as the mean molecular count for each species. However, these approximations have the dissatisfying feature that they come with no error bounds. This paper presents a fundamentally different approach to the closure problem in stochastic chemical kinetics. Instead of making an approximation to compute a single number for the quantity of interest, we calculate mathematically rigorous bounds on this quantity by solving semidefinite programs. These bounds provide a check on the validity of the moment closure approximations and are in some cases so tight that they effectively provide the desired quantity. In this paper, the bounded quantities of interest are the mean molecular count for each species, the variance in this count, and the probability that the count lies in an arbitrary interval. At present, we consider only steady-state probability distributions, intending to discuss the dynamic problem in a future publication.
A Simple Qualitative Analysis Scheme for Several Environmentally Important Elements
ERIC Educational Resources Information Center
Lambert, Jack L.; Meloan, Clifton E.
1977-01-01
Describes a scheme that uses precipitation, gas evolution, complex ion formation, and flame tests to analyze for the following ions: Hg(I), Hg(II), Sb(III), Cr(III), Pb(II), Sr(II), Cu(II), Cd(II), As(III), chloride, nitrate, and sulfate. (MLH)
A Multi-Axial Scheme for Assessment and Intervention.
ERIC Educational Resources Information Center
Kirkland, John; Morgan, Griffith
1984-01-01
Describes a scheme consisting of nine elements or "themes" ordered in terms of their more probable links developed from clinical work with crying infants. Theme categories are genetic/hereditary, psychiatric disorder, central nervous system, medical health (chronic), medical health (acute), nutrition/allergy, form, psychosocial…
Computational aspects of helicopter trim analysis and damping levels from Floquet theory
NASA Technical Reports Server (NTRS)
Gaonkar, Gopal H.; Achar, N. S.
1992-01-01
Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches to trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
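As a rough illustration of the iteration type mentioned above, the sketch below implements a generic damped Newton step with a fixed damping parameter; the residual and Jacobian callables, and the damping value, are placeholders rather than the paper's trim equations or its optimally selected damping.

```python
import numpy as np

def damped_newton(residual, jacobian, x0, lam=0.5, tol=1e-10, max_iter=50):
    """Damped Newton iteration x <- x - lam * J(x)^{-1} r(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break                              # converged
        dx = np.linalg.solve(jacobian(x), r)   # Newton direction
        x = x - lam * dx                       # damped update
    return x
```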
NASA Technical Reports Server (NTRS)
Wang, Xiao Yen; Chang, Sin-Chung; Jorgenson, Philip C. E.
1999-01-01
The space-time conservation element and solution element (CE/SE) method is used to study the sound-shock interaction problem. The order of accuracy of numerical schemes is investigated. The linear model problem, governed by the 1-D scalar convection equation, the sound-shock interaction problem, governed by the 1-D Euler equations, and the 1-D shock-tube problem, which involves moving shock waves and contact surfaces, are solved to investigate the order of accuracy of the numerical schemes. It is concluded that the accuracy of the CE/SE numerical scheme with designed 2nd-order accuracy becomes 1st order when a moving shock wave exists. However, the absolute error in the CE/SE solution downstream of the shock wave is on the same order as that obtained using a fourth-order accurate essentially nonoscillatory (ENO) scheme. No special techniques are used for either high-frequency low-amplitude waves or shock waves.
Efficient simulation of incompressible viscous flow over multi-element airfoils
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; Wiltberger, N. Lyn; Kwak, Dochan
1993-01-01
The incompressible, viscous, turbulent flow over single and multi-element airfoils is numerically simulated in an efficient manner by solving the incompressible Navier-Stokes equations. The solution algorithm employs the method of pseudo-compressibility and utilizes an upwind differencing scheme for the convective fluxes and an implicit line-relaxation scheme. The motivation for this work includes interest in studying high-lift take-off and landing configurations of various aircraft. In particular, accurate computation of lift and drag at various angles of attack up to stall is desired. Two different turbulence models are tested in computing the flow over an NACA 4412 airfoil; an accurate prediction of stall is obtained. The approach used for multi-element airfoils involves the use of multiple zones of structured grids fitted to each element. Two different approaches are compared: a patched system of grids and an overlaid Chimera system of grids. Computational results are presented for two-element, three-element, and four-element airfoil configurations. Excellent agreement with experimental surface pressure coefficients is seen. The code converges in less than 200 iterations, requiring on the order of one minute of CPU time on a CRAY YMP per element in the airfoil configuration.
Toward protocols for quantum-ensured privacy and secure voting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonanome, Marianna; Buzek, Vladimir; Ziman, Mario
2011-08-15
We present a number of schemes that use quantum mechanics to preserve privacy; in particular, we show that entangled quantum states can be useful in maintaining privacy. We further develop our original proposal [see M. Hillery, M. Ziman, V. Buzek, and M. Bielikova, Phys. Lett. A 349, 75 (2006)] for protecting privacy in voting, and examine its security under certain types of attacks, in particular dishonest voters and external eavesdroppers. A variation of these quantum-based schemes can be used for multiparty function evaluation. We consider functions corresponding to group multiplication of N group elements, with each element chosen by a different party. We show how quantum mechanics can be useful in maintaining the privacy of the choices of group elements.
ERIC Educational Resources Information Center
Subba Rao, G. M.; Vijayapushapm, T.; Venkaiah, K.; Pavarala, V.
2012-01-01
Objective: To assess quantity and quality of nutrition and food safety information in science textbooks prescribed by the Central Board of Secondary Education (CBSE), India for grades I through X. Design: Content analysis. Methods: A coding scheme was developed for quantitative and qualitative analyses. Two investigators independently coded the…
Jenke, Dennis; Rivera, Christine; Mortensen, Tammy; Amin, Parul; Chacko, Molly; Tran, Thang; Chum, James
2013-01-01
Nearly 100 individual test articles, representative of materials used in pharmaceutical applications such as packaging and devices, were extracted under exaggerated conditions and the levels of 32 metals and trace elements (Ag, Al, As, B, Ba, Be, Bi, Ca, Cd, Co, Cr, Cu, Fe, Ge, Li, Mg, Mn, Mo, Na, Ni, P, Pb, S, Sb, Se, Si, Sn, Sr, Ti, V, Zn, and Zr) were measured in the extracts. The extracting solvents included aqueous mixtures at low and high pH and an organic solvent mixture (40/60 ethanol water). The sealed vessel extractions were performed by placing an appropriate portion of the test articles and an appropriate volume of extracting solution in inert extraction vessels and exposing the extraction units (and associated extraction blanks) to defined conditions of temperature and duration. The levels of extracted target elements were measured by inductively coupled plasma atomic emission spectroscopy. The overall reporting threshold for most of the targeted elements was 0.05 μg/mL, which corresponds to 0.5 μg/g for the most commonly utilized extraction stoichiometry (1 g of material per 10 mL of extracting solvent). The targeted elements could be classified into four major groups depending on the frequency with which they were present in the over 250 extractions reported in this study. Thirteen elements (Ag, As, Be, Cd, Co, Ge, Li, Mo, Ni, Sn, Ti, V, and Zr) were not extracted in reportable quantities from any of the test articles under any of the extraction conditions. Eight additional elements (Bi, Cr, Cu, Mn, Pb, Sb, Se, and Sr) were rarely extracted from the test articles at reportable levels, and three other elements (Ba, Fe, and P) were infrequently extracted from the test articles at reportable levels. The remaining eight elements (Al, B, Ca, Mg, Na, S, Si, and Zn) were more frequently present in the extracts in reportable quantities. These general trends in accumulation behavior were compared to compiled lists of elements of concern as impurities in pharmaceutical products. Nearly 100 individual test articles, representative of materials used in pharmaceutical applications such as packaging and devices, were extracted under exaggerated conditions, and the levels of thirty-two metals and trace elements (Ag, Al, As, B, Ba, Be, Bi, Ca, Cd, Co, Cr, Cu, Fe, Ge, Li, Mg, Mn, Mo, Na, Ni, P, Pb, S, Sb, Se, Si, Sn, Sr, Ti, V, Zn, and Zr) were measured in the extracts. The targeted elements could be classified into four major groups depending on the frequency with which they were present in the extractions reported in this study: those elements that were not extracted in reportable quantities from any of the test articles under any of the extraction conditions, those elements that were rarely extracted from the test articles at reportable levels, those elements that were infrequently extracted from the test articles at reportable levels, and those elements that were more frequently present in the extracts in reportable quantities.
Discrete Element Modelling of Floating Debris
NASA Astrophysics Data System (ADS)
Mahaffey, Samantha; Liang, Qiuhua; Parkin, Geoff; Large, Andy; Rouainia, Mohamed
2016-04-01
Flash flooding is characterised by high-velocity flows which impact vulnerable catchments with little warning time and, as such, result in complex flow dynamics which are difficult to replicate through modelling. The impacts of flash flooding can be made yet more severe by the transport of both natural and anthropogenic debris, ranging from tree trunks to vehicles, wheelie bins and even storage containers, the effects of which have been clearly evident during recent UK flooding. This cargo of debris can have wide-reaching effects and result in actual flood impacts which diverge from those predicted. A build-up of debris may lead to partial channel blockage and potential flow rerouting through urban centres. Build-up at bridges and river structures also leads to increased hydraulic loading which may result in damage and possible structural failure. Predicting the impacts of debris transport, however, is difficult as conventional hydrodynamic modelling schemes do not intrinsically include floating debris within their calculations. Subsequently, a new tool has been developed using an emerging approach, which incorporates debris transport through the coupling of two existing modelling techniques. A 1D hydrodynamic modelling scheme has here been coupled with a 2D discrete element scheme to form a new modelling tool which predicts the motion and flow-interaction of floating debris. Hydraulic forces arising from flow around the object are applied to instigate its motion. Likewise, an equivalent opposing force is applied to fluid cells, enabling backwater effects to be simulated. Shock-capturing capabilities make the tool applicable to predicting the complex flow dynamics associated with flash flooding. The modelling scheme has been applied to experimental case studies where cylindrical wooden dowels are transported by a dam-break wave. These case studies enable validation of the tool's shock-capturing capabilities and the coupling technique applied between the two numerical schemes. The results show that the tool is able to adequately replicate water depth and depth-averaged velocity of a dam-break wave, as well as velocity and displacement of floating cylindrical elements, thus validating its shock-capturing capabilities and the coupling technique applied for this simple test case. Future development of the tool will incorporate a 2D hydrodynamic scheme and a 3D discrete element scheme in order to model the more complex processes associated with debris transport.
NASA Astrophysics Data System (ADS)
Briceño, Raúl A.; Hansen, Maxwell T.; Monahan, Christopher J.
2017-07-01
Lattice quantum chromodynamics (QCD) provides the only known systematic, nonperturbative method for first-principles calculations of nucleon structure. However, for quantities such as light-front parton distribution functions (PDFs) and generalized parton distributions (GPDs), the restriction to Euclidean time prevents direct calculation of the desired observable. Recently, progress has been made in relating these quantities to matrix elements of spatially nonlocal, zero-time operators, referred to as quasidistributions. Still, even for these time-independent matrix elements, potential subtleties have been identified in the role of the Euclidean signature. In this work, we investigate the analytic behavior of spatially nonlocal correlation functions and demonstrate that the matrix elements obtained from Euclidean lattice QCD are identical to those obtained using the Lehmann-Symanzik-Zimmermann reduction formula in Minkowski space. After arguing the equivalence on general grounds, we also show that it holds in a perturbative calculation, where special care is needed to identify the lattice prediction. Finally, we present a proof of the uniqueness of the matrix elements obtained from Minkowski and Euclidean correlation functions to all orders in perturbation theory.
Briceno, Raul A.; Hansen, Maxwell T.; Monahan, Christopher J.
2017-07-11
Lattice quantum chromodynamics (QCD) provides the only known systematic, nonperturbative method for first-principles calculations of nucleon structure. However, for quantities such as light-front parton distribution functions (PDFs) and generalized parton distributions (GPDs), the restriction to Euclidean time prevents direct calculation of the desired observable. Recently, progress has been made in relating these quantities to matrix elements of spatially nonlocal, zero-time operators, referred to as quasidistributions. Still, even for these time-independent matrix elements, potential subtleties have been identified in the role of the Euclidean signature. In this work, we investigate the analytic behavior of spatially nonlocal correlation functions and demonstrate that the matrix elements obtained from Euclidean lattice QCD are identical to those obtained using the Lehmann-Symanzik-Zimmermann reduction formula in Minkowski space. After arguing the equivalence on general grounds, we also show that it holds in a perturbative calculation, where special care is needed to identify the lattice prediction. Lastly, we present a proof of the uniqueness of the matrix elements obtained from Minkowski and Euclidean correlation functions to all orders in perturbation theory.
Finite element based micro-mechanics modeling of textile composites
NASA Technical Reports Server (NTRS)
Glaessgen, E. H.; Griffin, O. H., Jr.
1995-01-01
Textile composites have the advantage over laminated composites of a significantly greater damage tolerance and resistance to delamination. Currently, a disadvantage of textile composites is the inability to examine the details of the internal response of these materials under load. Traditional approaches to the study of textile-based composite materials neglect many of the geometric details that affect the performance of the material. The present three-dimensional analysis, based on the representative volume element (RVE) of a plain weave, allows prediction of the internal details of displacement, strain, stress, and failure quantities. Through this analysis, the effect of geometric and material parameters on the aforementioned quantities is studied.
Li, Wei; Yi, Huangjian; Zhang, Qitan; Chen, Duofang; Liang, Jimin
2012-01-01
An extended finite element method (XFEM) for the forward model of 3D optical molecular imaging is developed with the simplified spherical harmonics approximation (SPN). In the XFEM scheme for the SPN equations, the signed distance function is employed to accurately represent the internal tissue boundary, and it is then used to construct the enriched basis functions of the finite element scheme. Therefore, the finite element calculation can be carried out without time-consuming internal boundary mesh generation. Moreover, the overly fine mesh that would otherwise be required to conform to the complex tissue boundary, which leads to excess time cost, can be avoided. XFEM thus facilitates application to tissues with complex internal structure and improves computational efficiency. Phantom and digital mouse experiments were carried out to validate the efficiency of the proposed method. Compared with the standard finite element method and the classical Monte Carlo (MC) method, the validation results show the merits and potential of the XFEM for optical imaging. PMID:23227108
Li, Wei; Yi, Huangjian; Zhang, Qitan; Chen, Duofang; Liang, Jimin
2012-01-01
An extended finite element method (XFEM) for the forward model of 3D optical molecular imaging is developed with the simplified spherical harmonics approximation (SP(N)). In the XFEM scheme for the SP(N) equations, the signed distance function is employed to accurately represent the internal tissue boundary, and it is then used to construct the enriched basis functions of the finite element scheme. Therefore, the finite element calculation can be carried out without time-consuming internal boundary mesh generation. Moreover, the overly fine mesh that would otherwise be required to conform to the complex tissue boundary, which leads to excess time cost, can be avoided. XFEM thus facilitates application to tissues with complex internal structure and improves computational efficiency. Phantom and digital mouse experiments were carried out to validate the efficiency of the proposed method. Compared with the standard finite element method and the classical Monte Carlo (MC) method, the validation results show the merits and potential of the XFEM for optical imaging.
Computational Aeroacoustics by the Space-time CE/SE Method
NASA Technical Reports Server (NTRS)
Loh, Ching Y.
2001-01-01
In recent years, a new numerical methodology for conservation laws, the Space-Time Conservation Element and Solution Element Method (CE/SE), was developed by Dr. Chang of NASA Glenn Research Center and collaborators. By its nature, the new method may be categorized as a finite volume method, where the conservation element (CE) is equivalent to a finite control volume (or cell) and the solution element (SE) can be understood as the cell interface. However, due to its rigorous treatment of the fluxes and geometry, it is different from existing schemes. The CE/SE scheme features: (1) space and time treated on the same footing, with the integral equations of conservation laws solved for with second-order accuracy; (2) high resolution, low dispersion, and low dissipation; (3) a novel, truly multi-dimensional, simple but effective non-reflecting boundary condition; (4) effortless implementation, with no numerical fix or parameter choice needed; and (5) robustness sufficient to cover a wide spectrum of compressible flows, from weak linear acoustic waves to strong, discontinuous waves (shocks), appropriate for linear and nonlinear aeroacoustics. Currently, the CE/SE scheme has been developed to such a stage that a 3-D unstructured CE/SE Navier-Stokes solver is already available. However, in the present paper, as a general introduction to the CE/SE method, only the 2-D unstructured Euler CE/SE solver is chosen as a prototype and is sketched in Section 2. Applications of the CE/SE scheme to linear and nonlinear aeroacoustics and airframe noise are then depicted in Sections 3, 4, and 5, respectively, to demonstrate its robustness and capability.
Asynchronous Communication Scheme For Hypercube Computer
NASA Technical Reports Server (NTRS)
Madan, Herb S.
1988-01-01
Scheme devised for asynchronous-message communication system for Mark III hypercube concurrent-processor network. Network consists of up to 1,024 processing elements connected electrically as though they were at corners of a 10-dimensional cube. Each node contains two Motorola 68020 processors along with Motorola 68881 floating-point processor utilizing up to 4 megabytes of shared dynamic random-access memory. Scheme intended to support applications requiring passage of both polled or solicited and unsolicited messages.
An Empirical Method for Determining the Lunar Gravity Field. Ph.D. Thesis - George Washington Univ.
NASA Technical Reports Server (NTRS)
Ferrari, A. J.
1971-01-01
A method has been devised to determine the spherical harmonic coefficients of the lunar gravity field. This method consists of a two-step data reduction and estimation process. In the first step, a weighted least-squares empirical orbit determination scheme is applied to Doppler tracking data from lunar orbits to estimate long-period Kepler elements and rates. Each of the Kepler elements is represented by an independent function of time. The long-period perturbing effects of the earth, sun, and solar radiation are explicitly modeled in this scheme. Kepler element variations estimated by this empirical processor are ascribed to the non-central lunar gravitation features. Doppler data are reduced in this manner for as many orbits as are available. In the second step, the Kepler element rates are used as input to a second least-squares processor that estimates lunar gravity coefficients using the long-period Lagrange perturbation equations.
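Both steps of the procedure rest on weighted least squares; a generic sketch of that building block is given below (the design matrix, observations, and weights are placeholders, not the actual Doppler-tracking or Lagrange-perturbation models).

```python
import numpy as np

def weighted_least_squares(A, y, w):
    """Solve min_x || W^(1/2) (A x - y) ||_2 for the parameter vector x.

    A: (m x n) design matrix, y: (m,) observations, w: (m,) positive weights.
    """
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    sw = np.sqrt(np.asarray(w, dtype=float))
    x, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return x
```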
NASA Technical Reports Server (NTRS)
Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung
2016-01-01
Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework is assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.
AN ADVANCED LEAKAGE SCHEME FOR NEUTRINO TREATMENT IN ASTROPHYSICAL SIMULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perego, A.; Cabezón, R. M.; Käppeli, R., E-mail: albino.perego@physik.tu-darmstadt.de
We present an Advanced Spectral Leakage (ASL) scheme to model neutrinos in the context of core-collapse supernovae (CCSNe) and compact binary mergers. Based on previous gray leakage schemes, the ASL scheme computes the neutrino cooling rates by interpolating local production and diffusion rates (relevant in optically thin and thick regimes, respectively) separately for discretized values of the neutrino energy. Neutrino trapped components are also modeled, based on equilibrium and timescale arguments. The better accuracy achieved by the spectral treatment allows a more reliable computation of neutrino heating rates in optically thin conditions. The scheme has been calibrated and tested against Boltzmann transport in the context of Newtonian spherically symmetric models of CCSNe. ASL shows a very good qualitative and a partial quantitative agreement for key quantities from collapse to a few hundreds of milliseconds after core bounce. We have proved the adaptability and flexibility of our ASL scheme, coupling it to an axisymmetric Eulerian and to a three-dimensional smoothed particle hydrodynamics code to simulate core collapse. Therefore, the neutrino treatment presented here is ideal for large parameter-space explorations, parametric studies, high-resolution tests, code developments, and long-term modeling of asymmetric configurations, where more detailed neutrino treatments are not available or are currently computationally too expensive.
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2010-01-01
Cell-centered and node-centered approaches have been compared for unstructured finite-volume discretization of inviscid fluxes. The grids range from regular grids to irregular grids, including mixed-element grids and grids with random perturbations of nodes. Accuracy, complexity, and convergence rates of defect-correction iterations are studied for eight nominally second-order accurate schemes: two node-centered schemes with weighted and unweighted least-squares (LSQ) methods for gradient reconstruction and six cell-centered schemes: two node-averaging schemes (with and without clipping) and four schemes that employ different stencils for LSQ gradient reconstruction. The cell-centered nearest-neighbor (CC-NN) scheme has the lowest complexity; a version of the scheme that involves smart augmentation of the LSQ stencil (CC-SA) has only a marginal complexity increase. All other schemes have larger complexity; the complexity of node-centered (NC) schemes is somewhat lower than that of cell-centered node-averaging (CC-NA) and full-augmentation (CC-FA) schemes. On highly anisotropic grids typical of those encountered in grid adaptation, discretization errors of five of the six cell-centered schemes converge with second order on all tested grids; the CC-NA scheme with clipping degrades solution accuracy to first order. The NC schemes converge with second order on regular and/or triangular grids and with first order on perturbed quadrilaterals and mixed-element grids. All schemes may produce large relative errors in gradient reconstruction on grids with perturbed nodes. Defect-correction iterations for schemes employing weighted least-squares gradient reconstruction diverge on perturbed stretched grids. Overall, the CC-NN and CC-SA schemes offer the best options of the lowest complexity and second-order discretization errors. On anisotropic grids over a curved body typical of turbulent flow simulations, the discretization errors converge with second order and are small for the CC-NN, CC-SA, and CC-FA schemes on all grids and for NC schemes on triangular grids; the discretization errors of the CC-NA scheme without clipping do not converge on irregular grids. Accurate gradient reconstruction can be achieved by introducing a local approximate mapping; without approximate mapping, only the NC scheme with the weighted LSQ method provides accurate gradients. Defect-correction iterations for the CC-NA scheme without clipping diverge; for the NC scheme with the weighted LSQ method, the iterations either diverge or converge very slowly. The best option in curved geometries is the CC-SA scheme, which offers low complexity, second-order discretization errors, and fast convergence.
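For readers unfamiliar with the LSQ reconstruction being compared, here is a minimal sketch of a least-squares gradient estimate at a point from its neighbors; the optional inverse-distance weighting is just one common choice and stands in for, rather than reproduces, the weighted scheme studied in the paper.

```python
import numpy as np

def lsq_gradient(x0, u0, xn, un, weighted=False):
    """Least-squares gradient of u at x0 from neighbor points xn (k x d)
    with values un (k,): solves min_g sum_i w_i (u_i - u0 - g.(x_i - x0))^2."""
    dx = np.asarray(xn, dtype=float) - np.asarray(x0, dtype=float)
    du = np.asarray(un, dtype=float) - float(u0)
    if weighted:
        w = 1.0 / np.linalg.norm(dx, axis=1)   # inverse-distance weights
        dx, du = dx * w[:, None], du * w
    g, *_ = np.linalg.lstsq(dx, du, rcond=None)
    return g
```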
19 CFR 4.38 - Release of cargo.
Code of Federal Regulations, 2011 CFR
2011-04-01
... for release without submission of paper documents after reviewing the entry data submitted... imported. (1) Where the cargo arrives by vessel, the report shall consist of the following data elements... following data elements: (i) Air waybill number; (ii) Quantity released; (iii) Entry number (including filer...
19 CFR 4.38 - Release of cargo.
Code of Federal Regulations, 2013 CFR
2013-04-01
... for release without submission of paper documents after reviewing the entry data submitted... imported. (1) Where the cargo arrives by vessel, the report shall consist of the following data elements... following data elements: (i) Air waybill number; (ii) Quantity released; (iii) Entry number (including filer...
19 CFR 4.38 - Release of cargo.
Code of Federal Regulations, 2012 CFR
2012-04-01
... for release without submission of paper documents after reviewing the entry data submitted... imported. (1) Where the cargo arrives by vessel, the report shall consist of the following data elements... following data elements: (i) Air waybill number; (ii) Quantity released; (iii) Entry number (including filer...
19 CFR 4.38 - Release of cargo.
Code of Federal Regulations, 2014 CFR
2014-04-01
... for release without submission of paper documents after reviewing the entry data submitted... imported. (1) Where the cargo arrives by vessel, the report shall consist of the following data elements... following data elements: (i) Air waybill number; (ii) Quantity released; (iii) Entry number (including filer...
19 CFR 4.38 - Release of cargo.
Code of Federal Regulations, 2010 CFR
2010-04-01
... for release without submission of paper documents after reviewing the entry data submitted... imported. (1) Where the cargo arrives by vessel, the report shall consist of the following data elements... following data elements: (i) Air waybill number; (ii) Quantity released; (iii) Entry number (including filer...
Hollow cathode lamp based Faraday anomalous dispersion optical filter.
Pan, Duo; Xue, Xiaobo; Shang, Haosen; Luo, Bin; Chen, Jingbiao; Guo, Hong
2016-07-15
The Faraday anomalous dispersion optical filter (FADOF), which has acquired wide applications, has so far mainly been limited to some gaseous elements and low melting-point metals because of the restriction on the attainable atomic density. In conventional FADOF systems a high atomic density is usually achieved by thermal equilibrium at the saturated vapor pressure; hence, for elements with high melting points a high temperature is required. To avoid this restriction, we propose a scheme of FADOF based on the hollow cathode lamp (HCL) instead of atomic vapor cells. Experimental results in strontium atoms verified this scheme, where a transmission peak corresponding to the (88)Sr (5s(2))(1)S0 - (5s5p)(1)P1 transition (461 nm) is obtained, with a maximum transmittance of 62.5% and a bandwidth of 1.19 GHz. The dependence of the transmission on the magnetic field and the HCL discharge current is also studied. Since state-of-the-art commercial HCLs cover about 70 elements, this scheme can greatly expand the applications of FADOFs, and the abundant atomic transitions they provide give HCL-based FADOFs potential applications in frequency stabilization.
Hollow cathode lamp based Faraday anomalous dispersion optical filter
NASA Astrophysics Data System (ADS)
Pan, Duo; Xue, Xiaobo; Shang, Haosen; Luo, Bin; Chen, Jingbiao; Guo, Hong
2016-07-01
The Faraday anomalous dispersion optical filter (FADOF), which has acquired wide applications, has so far mainly been limited to some gaseous elements and low melting-point metals because of the restriction on the attainable atomic density. In conventional FADOF systems a high atomic density is usually achieved by thermal equilibrium at the saturated vapor pressure; hence, for elements with high melting points a high temperature is required. To avoid this restriction, we propose a scheme of FADOF based on the hollow cathode lamp (HCL) instead of atomic vapor cells. Experimental results in strontium atoms verified this scheme, where a transmission peak corresponding to the 88Sr (5s2)1S0 - (5s5p)1P1 transition (461 nm) is obtained, with a maximum transmittance of 62.5% and a bandwidth of 1.19 GHz. The dependence of the transmission on the magnetic field and the HCL discharge current is also studied. Since state-of-the-art commercial HCLs cover about 70 elements, this scheme can greatly expand the applications of FADOFs, and the abundant atomic transitions they provide give HCL-based FADOFs potential applications in frequency stabilization.
Development and feasibility testing of the Pediatric Emergency Discharge Interaction Coding Scheme.
Curran, Janet A; Taylor, Alexandra; Chorney, Jill; Porter, Stephen; Murphy, Andrea; MacPhee, Shannon; Bishop, Andrea; Haworth, Rebecca
2017-08-01
Discharge communication is an important aspect of high-quality emergency care. This study addresses the gap in knowledge on how to describe discharge communication in a paediatric emergency department (ED). The objective of this feasibility study was to develop and test a coding scheme to characterize discharge communication between health-care providers (HCPs) and caregivers who visit the ED with their children. The Pediatric Emergency Discharge Interaction Coding Scheme (PEDICS) and coding manual were developed following a review of the literature and an iterative refinement process involving HCP observations, inter-rater assessments and team consensus. The coding scheme was pilot-tested through observations of HCPs across a range of shifts in one urban paediatric ED. Overall, 329 patient observations were carried out across 50 observational shifts. Inter-rater reliability was evaluated in 16% of the observations. The final version of the PEDICS contained 41 communication elements. Kappa scores were greater than .60 for the majority of communication elements. The most frequently observed communication elements were under the Introduction node and the least frequently observed were under the Social Concerns node. HCPs initiated the majority of the communication. Pediatric Emergency Discharge Interaction Coding Scheme addresses an important gap in the discharge communication literature. The tool is useful for mapping patterns of discharge communication between HCPs and caregivers. Results from our pilot test identified deficits in specific areas of discharge communication that could impact adherence to discharge instructions. The PEDICS would benefit from further testing with a different sample of HCPs. © 2017 The Authors. Health Expectations Published by John Wiley & Sons Ltd.
Methods for High-Order Multi-Scale and Stochastic Problems Analysis, Algorithms, and Applications
2016-10-17
finite volume schemes, discontinuous Galerkin finite element method, and related methods, for solving computational fluid dynamics (CFD) problems and...approximation for finite element methods. (3) The development of methods of simulation and analysis for the study of large scale stochastic systems of...laws, finite element method, Bernstein-Bezier finite elements, weakly interacting particle systems, accelerated Monte Carlo, stochastic networks
A Shear Deformable Shell Element for Laminated Composites
NASA Technical Reports Server (NTRS)
Chao, W. C.; Reddy, J. N.
1984-01-01
A three-dimensional element based on the total Lagrangian description of the motion of a layered anisotropic composite medium is developed, validated, and used to analyze layered composite shells. The element contains the following features: geometric nonlinearity, dynamic (transient) behavior, and arbitrary lamination scheme and lamina properties. Numerical results of nonlinear bending, natural vibration, and transient response are presented to illustrate the capabilities of the element.
NASA Astrophysics Data System (ADS)
Farrell, Patricio; Koprucki, Thomas; Fuhrmann, Jürgen
2017-10-01
We compare three thermodynamically consistent numerical fluxes known in the literature, appearing in a Voronoï finite volume discretization of the van Roosbroeck system with general charge carrier statistics. Our discussion includes an extension of the Scharfetter-Gummel scheme to non-Boltzmann (e.g. Fermi-Dirac) statistics. It is based on the analytical solution of a two-point boundary value problem obtained by projecting the continuous differential equation onto the interval between neighboring collocation points. Hence, it serves as a reference flux. The exact solution of the boundary value problem can be approximated by computationally cheaper fluxes which modify certain physical quantities. One alternative scheme averages the nonlinear diffusion (caused by the non-Boltzmann nature of the problem), another one modifies the effective density of states. To study the differences between these three schemes, we analyze the Taylor expansions, derive an error estimate, visualize the flux error and show how the schemes perform for a carefully designed p-i-n benchmark simulation. We present strong evidence that the flux discretization based on averaging the nonlinear diffusion has an edge over the scheme based on modifying the effective density of states.
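For reference, the classical Boltzmann-statistics Scharfetter-Gummel flux that serves as the starting point can be sketched as below; the sign convention shown is the common electron-flux form, and the potential is assumed to have already been scaled by the thermal voltage. The paper's actual contribution, the generalization to Fermi-Dirac statistics, is not reproduced here.

```python
import math

def bernoulli(x):
    """B(x) = x / (exp(x) - 1), with the removable singularity at 0 handled."""
    if abs(x) < 1e-8:
        return 1.0 - 0.5 * x          # series expansion near zero
    return x / math.expm1(x)

def scharfetter_gummel_flux(n_i, n_j, dpsi, D, h):
    """Classical Scharfetter-Gummel flux on an edge from node i to node j.

    dpsi = (psi_j - psi_i) / U_T; D is the diffusion coefficient, h the edge
    length. Signs flip for the other carrier type.
    """
    return (D / h) * (n_j * bernoulli(dpsi) - n_i * bernoulli(-dpsi))
```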
Acceleration of planar foils by the indirect-direct drive scheme
NASA Astrophysics Data System (ADS)
Honrubia, J. J.; Martínez-Val, J. M.; Bocher, J. L.; Faucheux, G.
1996-05-01
We have investigated the hydrodynamic response of plastic and aluminum foils accelerated by a pulse formed by an x-ray prepulse followed by the main laser pulse. This illumination scheme, the so-called indirect-direct drive scheme, has been proposed as an alternative to direct and indirect drive. The advantages of such a scheme are that it can help solve the uniformity problem of direct drive and, at the same time, it can be much more efficient and use simpler targets than indirect drive. Experiments on this hybrid drive scheme have been performed at Limeil with the PHEBUS facility and the standard experimental set-up and diagnostics. The agreement between experiments and simulations is good for quantities such as the energy of the laser converted into x-rays and the burnthrough time of the converter foil. To simulate the full hydrodynamic evolution of the converter and target foils, separated by a distance of 1 mm, 2-D effects should be taken into account. The basic goals have been to check the simulation codes developed by the Institute of Nuclear Fusion and to determine the hydrodynamic response of the target foil to the hybrid pulse. These goals have been fulfilled.
A Statistical Approach for the Concurrent Coupling of Molecular Dynamics and Finite Element Methods
NASA Technical Reports Server (NTRS)
Saether, E.; Yamakov, V.; Glaessgen, E.
2007-01-01
Molecular dynamics (MD) methods are opening new opportunities for simulating the fundamental processes of material behavior at the atomistic level. However, increasing the size of the MD domain quickly presents intractable computational demands. A robust approach to surmount this computational limitation has been to unite continuum modeling procedures such as the finite element method (FEM) with MD analyses thereby reducing the region of atomic scale refinement. The challenging problem is to seamlessly connect the two inherently different simulation techniques at their interface. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the typical boundary value problem used to define a coupled domain. The method uses statistical averaging of the atomistic MD domain to provide displacement interface boundary conditions to the surrounding continuum FEM region, which, in return, generates interface reaction forces applied as piecewise constant traction boundary conditions to the MD domain. The two systems are computationally disconnected and communicate only through a continuous update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM) as opposed to a direct coupling method where interface atoms and FEM nodes are individually related. The methodology is inherently applicable to three-dimensional domains, avoids discretization of the continuum model down to atomic scales, and permits arbitrary temperatures to be applied.
DNS of Flows over Periodic Hills using a Discontinuous-Galerkin Spectral-Element Method
NASA Technical Reports Server (NTRS)
Diosady, Laslo T.; Murman, Scott M.
2014-01-01
Direct numerical simulation (DNS) of turbulent compressible flows is performed using a higher-order space-time discontinuous-Galerkin finite-element method. The numerical scheme is validated by performing DNS of the evolution of the Taylor-Green vortex and turbulent flow in a channel. The higher-order method is shown to provide increased accuracy relative to low-order methods at a given number of degrees of freedom. The turbulent flow over a periodic array of hills in a channel is simulated at Reynolds number 10,595 using an 8th-order scheme in space and a 4th-order scheme in time. These results are validated against previous large eddy simulation (LES) results. A preliminary analysis provides insight into how these detailed simulations can be used to improve Reynolds-averaged Navier-Stokes (RANS) modeling.
Finite-element lattice Boltzmann simulations of contact line dynamics
NASA Astrophysics Data System (ADS)
Matin, Rastin; Krzysztof Misztal, Marek; Hernández-García, Anier; Mathiesen, Joachim
2018-01-01
The lattice Boltzmann method has become one of the standard techniques for simulating a wide range of fluid flows. However, the intrinsic coupling of momentum and space discretization restricts the traditional lattice Boltzmann method to regular lattices. Alternative off-lattice Boltzmann schemes exist for both single- and multiphase flows that decouple the velocity discretization from the underlying spatial grid. The current study extends the applicability of these off-lattice methods by introducing a finite element formulation that enables simulating contact line dynamics for partially wetting fluids. This work exemplifies the implementation of the scheme and furthermore presents benchmark experiments that show the scheme reduces spurious currents at the liquid-vapor interface by at least two orders of magnitude compared to a nodal implementation and allows for predicting the equilibrium states accurately in the range of moderate contact angles.
Effective implementation of wavelet Galerkin method
NASA Astrophysics Data System (ADS)
Finěk, Václav; Šimunková, Martina
2012-11-01
It was proved by W. Dahmen et al. that an adaptive wavelet scheme is asymptotically optimal for a wide class of elliptic equations. This scheme approximates the solution u by a linear combination of N wavelets, and a benchmark for its performance is the best N-term approximation, which is obtained by retaining the N largest wavelet coefficients of the unknown solution. Moreover, the number of arithmetic operations needed to compute the approximate solution is proportional to N. The most time-consuming part of this scheme is the approximate matrix-vector multiplication. In this contribution, we introduce our implementation of the wavelet Galerkin method for the Poisson equation -Δu = f on a hypercube with homogeneous Dirichlet boundary conditions. In our implementation, we identified the nonzero elements of the stiffness matrix corresponding to the above problem, and we perform the matrix-vector multiplication only with these nonzero elements.
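The implementation idea, storing only the identified nonzero stiffness entries and multiplying with them alone, can be sketched with a standard sparse format; the entries below are made-up placeholders, not values arising from an actual wavelet basis.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Store only the known nonzero entries (row, column, value) ...
rows = np.array([0, 0, 1, 2, 2])
cols = np.array([0, 2, 1, 0, 2])
vals = np.array([4.0, -1.0, 3.0, -1.0, 5.0])
A = csr_matrix((vals, (rows, cols)), shape=(3, 3))

# ... so the matrix-vector product costs work proportional to those nonzeros.
v = np.array([1.0, 2.0, 3.0])
w = A @ v
```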
Binary counting with chemical reactions.
Kharam, Aleksandra; Jiang, Hua; Riedel, Marc; Parhi, Keshab
2011-01-01
This paper describes a scheme for implementing a binary counter with chemical reactions. The value of the counter is encoded by logical values of "0" and "1" that correspond to the absence and presence of specific molecular types, respectively. It is incremented when molecules of a trigger type are injected. Synchronization is achieved with reactions that produce a sustained three-phase oscillation. This oscillation plays a role analogous to a clock signal in digital electronics. Quantities are transferred between molecular types in different phases of the oscillation. Unlike all previous schemes for chemical computation, this scheme is dependent only on coarse rate categories for the reactions ("fast" and "slow"). Given such categories, the computation is exact and independent of the specific reaction rates. Although conceptual for the time being, the methodology has potential applications in domains of synthetic biology such as biochemical sensing and drug delivery. We are exploring DNA-based computation via strand displacement as a possible experimental chassis.
NASA Astrophysics Data System (ADS)
Gyrya, V.; Lipnikov, K.
2017-11-01
We present the arbitrary order mimetic finite difference (MFD) discretization for the diffusion equation with a non-symmetric tensorial diffusion coefficient in a mixed formulation on general polygonal meshes. The diffusion tensor is assumed to be positive definite. The asymmetry of the diffusion tensor requires changes to the standard MFD construction. We present a new approach for the construction that guarantees positive definiteness of the non-symmetric mass matrix in the space of discrete velocities. The numerically observed convergence rate for the scalar quantity matches the predicted one in the case of the lowest-order mimetic scheme. For higher-order schemes, we observed super-convergence by one order for the scalar variable, which is consistent with the previously published result for a symmetric diffusion tensor. The new scheme was also tested on a time-dependent problem modeling the Hall effect in resistive magnetohydrodynamics.
Importance sampling variance reduction for the Fokker–Planck rarefied gas particle method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collyer, B.S., E-mail: benjamin.collyer@gmail.com; London Mathematical Laboratory, 14 Buckingham Street, London WC2N 6DF; Connaughton, C.
The Fokker–Planck approximation to the Boltzmann equation, solved numerically by stochastic particle schemes, is used to provide estimates for rarefied gas flows. This paper presents a variance reduction technique for a stochastic particle method that is able to greatly reduce the uncertainty of the estimated flow fields when the characteristic speed of the flow is small in comparison to the thermal velocity of the gas. The method relies on importance sampling, requiring minimal changes to the basic stochastic particle scheme. We test the importance sampling scheme on a homogeneous relaxation, planar Couette flow and a lid-driven-cavity flow, and find that our method is able to greatly reduce the noise of estimated quantities. Significantly, we find that as the characteristic speed of the flow decreases, the variance of the noisy estimators becomes independent of the characteristic speed.
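The underlying estimator can be illustrated generically: draw samples from a proposal distribution and reweight them by the ratio of target to proposal densities. The sketch below is plain importance sampling with placeholder callables, not the specific particle weights of the Fokker-Planck scheme; the variance drops when the proposal concentrates samples where the integrand matters.

```python
import numpy as np

def importance_sampling_mean(f, sample_q, pdf_p, pdf_q, n=100_000, rng=None):
    """Estimate E_p[f(X)] by sampling x ~ q and averaging f(x) p(x)/q(x)."""
    rng = np.random.default_rng() if rng is None else rng
    x = sample_q(rng, n)                 # samples from the proposal q
    w = pdf_p(x) / pdf_q(x)              # importance weights
    return np.mean(w * f(x))
```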
Bouzid, Assil; Pasquarello, Alfredo
2018-04-19
Based on constant Fermi-level molecular dynamics and a proper alignment scheme, we perform simulations of the Pt(111)/water interface under variable bias potential referenced to the standard hydrogen electrode (SHE). Our scheme yields a potential of zero charge μ_pzc of ∼0.22 eV relative to the SHE and a double-layer capacitance C_dl of ≃19 μF cm⁻², in excellent agreement with experimental measurements. In addition, we study the structural reorganization of the electrical double layer for bias potentials ranging from -0.92 eV to +0.44 eV and find that O-down configurations, which are dominant at potentials above the pzc, reorient to favor H-down configurations as the measured potential becomes negative. Our modeling scheme allows one to not only access atomic-scale processes at metal/water interfaces, but also to quantitatively estimate macroscopic electrochemical quantities.
Gyrya, V.; Lipnikov, K.
2017-07-18
Here, we present the arbitrary order mimetic finite difference (MFD) discretization for the diffusion equation with a non-symmetric tensorial diffusion coefficient in a mixed formulation on general polygonal meshes. The diffusion tensor is assumed to be positive definite. The asymmetry of the diffusion tensor requires changes to the standard MFD construction. We also present a new approach for the construction that guarantees positive definiteness of the non-symmetric mass matrix in the space of discrete velocities. The numerically observed convergence rate for the scalar quantity matches the predicted one in the case of the lowest-order mimetic scheme. For higher-order schemes, we observed super-convergence by one order for the scalar variable, which is consistent with the previously published result for a symmetric diffusion tensor. The new scheme was also tested on a time-dependent problem modeling the Hall effect in resistive magnetohydrodynamics.
Seakeeping with the semi-Lagrangian particle finite element method
NASA Astrophysics Data System (ADS)
Nadukandi, Prashanth; Servan-Camas, Borja; Becker, Pablo Agustín; Garcia-Espinosa, Julio
2017-07-01
The application of the semi-Lagrangian particle finite element method (SL-PFEM) for the seakeeping simulation of the wave adaptive modular vehicle under spray generating conditions is presented. The time integration of the Lagrangian advection is done using the explicit integration of the velocity and acceleration along the streamlines (X-IVAS). Despite the suitability of the SL-PFEM for the considered seakeeping application, small time steps were needed in the X-IVAS scheme to control the solution accuracy. A preliminary proposal to overcome this limitation of the X-IVAS scheme for seakeeping simulations is presented.
Fluid-structure interaction with the entropic lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Dorschner, B.; Chikatamarla, S. S.; Karlin, I. V.
2018-02-01
We propose a fluid-structure interaction (FSI) scheme using the entropic multi-relaxation time lattice Boltzmann (KBC) model for the fluid domain in combination with a nonlinear finite element solver for the structural part. We show the validity of the proposed scheme for various challenging setups by comparison to literature data. Beyond validation, we extend the KBC model to multiphase flows and couple it with a finite element method (FEM) solver. Robustness and viability of the entropic multi-relaxation time model for complex FSI applications is shown by simulations of droplet impact on elastic superhydrophobic surfaces.
ERIC Educational Resources Information Center
Ramful, Ajay
2012-01-01
In line with continuing efforts to explain the demanding nature of multiplicative reasoning among middle-school students, this study explores the fine-grained knowledge elements that two pairs of 7th and 8th graders deployed in their attempt to coordinate the known and unknown quantities in the gear-wheel problem. Failure to conceptualize the…
NASA Astrophysics Data System (ADS)
Rashvand, Taghi
2016-11-01
We present a new scheme for quantum teleportation in which one can teleport an unknown state via a non-maximally entangled channel with certainty, using an auxiliary system. In this scheme, depending on the state of the auxiliary system, one can find a class of sets of orthogonal vectors, each serving as a basis, such that by performing a von Neumann measurement in any element of this class Alice can teleport an unknown state with unit fidelity and unit probability. A comparison of our scheme with some previous schemes is given, and we will see that our scheme has advantages that the others do not.
NASA Astrophysics Data System (ADS)
Savin, Andrei V.; Smirnov, Petr G.
2018-05-01
Simulation of the collisional dynamics of a large ensemble of monodisperse particles by the discrete element method is considered. The Verlet scheme is used for integration of the equations of motion. A non-conservativeness of the finite-difference scheme, dependent on the time step, is discovered; it is equivalent to the appearance of a purely numerical energy source during collisions. A compensation method for this source is proposed and tested.
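A velocity-Verlet sketch of the integrator family in question is given below (generic force callable and uniform mass assumed, not the contact model of the paper); tracking the total energy over such a loop is the simplest way to observe the step-size-dependent numerical energy source described in the abstract.

```python
import numpy as np

def velocity_verlet(x, v, force, mass, dt, n_steps):
    """Integrate m x'' = F(x) with the velocity-Verlet scheme."""
    a = force(x) / mass
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt ** 2   # position update
        a_new = force(x) / mass
        v = v + 0.5 * (a + a_new) * dt       # velocity update
        a = a_new
    return x, v
```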
Optical realization of optimal symmetric real state quantum cloning machine
NASA Astrophysics Data System (ADS)
Hu, Gui-Yu; Zhang, Wen-Hai; Ye, Liu
2010-01-01
We present an experimentally uniform linear optical scheme to implement the optimal 1→2 symmetric and the optimal 1→3 symmetric economical real-state quantum cloning machine for the polarization state of a single photon. The scheme requires single-photon sources and a two-photon polarization-entangled state as input states. It also involves linear optical elements and three-photon coincidence. We then consider the realistic realization of the scheme using parametric down-conversion as the photon resource. It is shown that, under certain conditions, the scheme is feasible with current experimental technology.
Karayannis, Nicholas V; Jull, Gwendolen A; Hodges, Paul W
2012-02-20
Several classification schemes, each with its own philosophy and categorizing method, subgroup low back pain (LBP) patients with the intent to guide treatment. Physiotherapy derived schemes usually have a movement impairment focus, but the extent to which other biological, psychological, and social factors of pain are encompassed requires exploration. Furthermore, within the prevailing 'biological' domain, the overlap of subgrouping strategies within the orthopaedic examination remains unexplored. The aim of this study was "to review and clarify through developer/expert survey, the theoretical basis and content of physical movement classification schemes, determine their relative reliability and similarities/differences, and to consider the extent of incorporation of the bio-psycho-social framework within the schemes". A database search for relevant articles related to LBP and subgrouping or classification was conducted. Five dominant movement-based schemes were identified: Mechanical Diagnosis and Treatment (MDT), Treatment Based Classification (TBC), Pathoanatomic Based Classification (PBC), Movement System Impairment Classification (MSI), and O'Sullivan Classification System (OCS) schemes. Data were extracted and a survey sent to the classification scheme developers/experts to clarify operational criteria, reliability, decision-making, and converging/diverging elements between schemes. Survey results were integrated into the review and approval obtained for accuracy. Considerable diversity exists between schemes in how movement informs subgrouping and in the consideration of broader neurosensory, cognitive, emotional, and behavioural dimensions of LBP. Despite differences in assessment philosophy, a common element lies in their objective to identify a movement pattern related to a pain reduction strategy. Two dominant movement paradigms emerge: (i) loading strategies (MDT, TBC, PBC) aimed at eliciting a phenomenon of centralisation of symptoms; and (ii) modified movement strategies (MSI, OCS) targeted towards documenting the movement impairments associated with the pain state. Schemes vary on: the extent to which loading strategies are pursued; the assessment of movement dysfunction; and advocated treatment approaches. A biomechanical assessment predominates in the majority of schemes (MDT, PBC, MSI), certain psychosocial aspects (fear-avoidance) are considered in the TBC scheme, certain neurophysiologic (central versus peripherally mediated pain states) and psychosocial (cognitive and behavioural) aspects are considered in the OCS scheme.
Unobtainium? Critical Elements for New Energy Technologies
NASA Astrophysics Data System (ADS)
Jaffe, Robert
2011-03-01
I will report on a recently completed study jointly sponsored by the APS Panel on Public Affairs (POPA) and the Material Research Society (MRS). The twin pressures of increasing demand for energy and increasing concern about anthropogenic climate change have stimulated research into new sources of energy and novel ways to harvest, transmit, store, transform or conserve it. At the same time, advances in physics, chemistry, and material science have enabled researchers to identify chemical elements with properties that can be finely tuned to their specific needs and to employ them in new energy-related technologies. Elements like dysprosium, gallium, germanium, indium, lanthanum, neodymium, rhenium, or tellurium, which were once laboratory curiosities, now figure centrally when novel energy systems are discussed. Many of these elements are not at present mined, refined, or traded in large quantities. However new technologies can only impact our energy needs if they can be scaled from laboratory, to demonstration, to massive implementation. As a result, some previously unfamiliar elements will be needed in great quantities. We refer to these elements as energy-critical elements (ECEs). Although the technologies in which they are employed and their abundance in the Earth's crust vary greatly, ECEs have many features in common. The purpose of the POPA/MRS study was to evaluate constraints on availability of energy-critical elements and to make recommendations that can help avoid these obstructions.
THERAPY WITH P-32 IN POLYCYTHEMIA (in German)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waschulewski, H.; Dorffel, E.W.
1958-01-01
Therapy with P-32 is being used more and more in polycythemia vera rubra. There is no generally valid dosage scheme. Body weight, blood picture, and general condition furnish certain clues. At present, we administer about 0.08 mc/kg body weight as an initial dose, adding further quantities later, if necessary, under careful control of the blood picture. (auth)
ERIC Educational Resources Information Center
Ma, Yongjun; Wan, Yanlan
2017-01-01
Based on previous international studies, a content analysis scheme has been designed and used from the perspective of culture to study the history of science (HOS) in science textbooks. Nineteen sets of Chinese science textbooks have been analyzed. It has been found that there are noticeable changes in the quantity, content, layout, presentation,…
A Kirchhoff approach to seismic modeling and prestack depth migration
NASA Astrophysics Data System (ADS)
Liu, Zhen-Yue
1993-05-01
The Kirchhoff integral provides a robust method for implementing seismic modeling and prestack depth migration, one that can handle lateral velocity variation and turning waves. With little extra computational cost, Kirchhoff-type migration can produce multiple outputs that have the same phase but different amplitudes, compared with other migration methods. The ratio of these amplitudes is helpful in computing quantities such as the reflection angle. I develop a seismic modeling and prestack depth migration method based on the Kirchhoff integral that handles both laterally variable velocity and dips beyond 90 degrees. The method uses a finite-difference algorithm to calculate travel times and WKBJ amplitudes for the Kirchhoff integral. Compared to ray-tracing algorithms, the finite-difference algorithm gives an efficient implementation and single-valued quantities (first arrivals) on output. In my finite-difference algorithm, an upwind scheme is used to calculate travel times, and the Crank-Nicolson scheme is used to calculate amplitudes. Moreover, interpolation is applied to save computational cost. The modeling and migration algorithms require a smooth velocity function, so I develop a velocity-smoothing technique based on damped least-squares to aid in obtaining a successful migration.
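As a rough illustration of upwind finite-difference travel-time computation (not the author's solver), the sketch below uses first-order upwind Gauss-Seidel sweeps, fast-sweeping style, for the eikonal equation |∇T| = s on a uniform 2-D grid. All names and the sample velocity are assumptions.

```python
import numpy as np

def first_arrival_traveltimes(slowness, h, src, n_sweeps=8):
    """First-order upwind sweeps for |grad T| = s (first-arrival travel times).
    slowness: 2-D array s = 1/velocity; h: grid spacing; src: (i, j) source index."""
    ny, nx = slowness.shape
    T = np.full((ny, nx), np.inf)
    T[src] = 0.0

    def local_update(i, j):
        a = min(T[i - 1, j] if i > 0 else np.inf, T[i + 1, j] if i < ny - 1 else np.inf)
        b = min(T[i, j - 1] if j > 0 else np.inf, T[i, j + 1] if j < nx - 1 else np.inf)
        if np.isinf(a) and np.isinf(b):
            return                                     # no causal neighbor yet
        sh = slowness[i, j] * h
        if abs(a - b) >= sh:                           # only one upwind neighbor is causal
            t_new = min(a, b) + sh
        else:                                          # two-sided upwind quadratic update
            t_new = 0.5 * (a + b + np.sqrt(2.0 * sh**2 - (a - b)**2))
        if t_new < T[i, j]:
            T[i, j] = t_new

    orderings = [(range(ny), range(nx)), (range(ny), range(nx - 1, -1, -1)),
                 (range(ny - 1, -1, -1), range(nx)), (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_sweeps):
        for rows, cols in orderings:
            for i in rows:
                for j in cols:
                    if (i, j) != src:
                        local_update(i, j)
    return T

# Homogeneous 2 km/s medium, 10 m grid, source at the center (illustrative only).
T = first_arrival_traveltimes(np.full((101, 101), 1.0 / 2000.0), h=10.0, src=(50, 50))
print(T[50, 100])
```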
NASA Astrophysics Data System (ADS)
Xie, Qing; Xiao, Zhixiang; Ren, Zhuyin
2018-09-01
A spectral-radius-scaling semi-implicit time-stepping scheme has been developed for simulating unsteady compressible reactive flows with detailed chemistry, in which the spectral radius in the LUSGS scheme has been augmented to account for viscous/diffusive and reactive terms and a scalar matrix is proposed to approximate the chemical Jacobian using the minimum species destruction timescale. The performance of the semi-implicit scheme, together with a third-order explicit Runge-Kutta scheme and a Strang splitting scheme, has been investigated in auto-ignition and laminar premixed and nonpremixed flames of three representative fuels: hydrogen, methane, and n-heptane. Results show that the minimum species destruction time scale can well represent the smallest chemical time scale in reactive flows and the proposed scheme can significantly increase the allowable time steps in simulations. The scheme is stable when the time step is as large as 10 μs, which is about three to five orders of magnitude larger than the smallest time scales in the various tests considered. For the test flames considered, the semi-implicit scheme achieves second order of accuracy in time. Moreover, the errors in quantities of interest are smaller than those from the Strang splitting scheme, indicating the accuracy gained when the reaction and transport terms are solved in a coupled manner. Results also show that the relative efficiency of different schemes depends on fuel mechanisms and test flames. When the minimum time scale in reactive flows is governed by transport processes instead of chemical reactions, the proposed semi-implicit scheme is more efficient than the splitting scheme. Otherwise, the relative efficiency depends on the cost of sub-iterations for convergence within each time step and of the integration in the chemistry substep. Then, the capability of the compressible reacting flow solver and the proposed semi-implicit scheme is demonstrated by capturing hydrogen detonation waves. Finally, the performance of the proposed method is demonstrated in a two-dimensional hydrogen/air diffusion flame.
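To make the splitting comparison concrete, here is a minimal sketch of one Strang (transport-reaction-transport) splitting step for a toy scalar model du/dt = T(u) + R(u) with a stiff reaction term; it is purely illustrative and not the paper's solver, and all parameters are assumed.

```python
import numpy as np

# Toy scalar model: a slow "transport" term T(u) = -a*u and a stiff
# "reaction" term R(u) = -k*(u - u_eq).
a, k, u_eq = 1.0, 1e4, 0.2

def transport_substep(u, dt):            # exact solution of du/dt = -a*u
    return u * np.exp(-a * dt)

def reaction_substep(u, dt):             # backward Euler for du/dt = -k*(u - u_eq)
    return (u + dt * k * u_eq) / (1.0 + dt * k)

def strang_step(u, dt):
    """One Strang-splitting step: half transport, full reaction, half transport."""
    u = transport_substep(u, 0.5 * dt)
    u = reaction_substep(u, dt)
    return transport_substep(u, 0.5 * dt)

u, dt = 1.0, 1e-2                        # dt far larger than the reaction time scale 1/k
for _ in range(100):
    u = strang_step(u, dt)
print(u)                                 # relaxes toward u_eq while slowly decaying
```

A coupled semi-implicit step, by contrast, advances both terms in a single linearized solve, which avoids the splitting error that the abstract reports as the accuracy penalty of the Strang approach.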
A modified symplectic PRK scheme for seismic wave modeling
NASA Astrophysics Data System (ADS)
Liu, Shaolin; Yang, Dinghui; Ma, Jian
2017-02-01
A new scheme for the temporal discretization of the seismic wave equation is constructed based on symplectic geometric theory and a modified strategy. The ordinary differential equation in terms of time, which is obtained after spatial discretization via the spectral-element method, is transformed into a Hamiltonian system. A symplectic partitioned Runge-Kutta (PRK) scheme is used to solve the Hamiltonian system. A term related to the multiplication of the spatial discretization operator with the seismic wave velocity vector is added into the symplectic PRK scheme to create a modified symplectic PRK scheme. The symplectic coefficients of the new scheme are determined via Taylor series expansion. The positive coefficients of the scheme indicate that its long-term computational capability is more powerful than that of conventional symplectic schemes. An exhaustive theoretical analysis reveals that the new scheme is highly stable and has low numerical dispersion. The results of three numerical experiments demonstrate the high efficiency of this method for seismic wave modeling.
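Below is a minimal sketch of a symplectic partitioned scheme (position/velocity leapfrog, the simplest symplectic PRK) applied to a small Hamiltonian system q' = p/m, p' = -Kq of the kind obtained after spatial discretization; it illustrates the bounded long-term energy error that motivates symplectic schemes, but it is not the paper's modified PRK. The matrix and step size are illustrative.

```python
import numpy as np

def leapfrog(q, p, K, m, dt, steps):
    """Leapfrog (position-Verlet), the simplest symplectic partitioned RK scheme,
    for the linear Hamiltonian system q' = p/m, p' = -K q with K symmetric PSD."""
    energy = []
    for _ in range(steps):
        q = q + 0.5 * dt * p / m          # half drift
        p = p - dt * (K @ q)              # full kick
        q = q + 0.5 * dt * p / m          # half drift
        energy.append(0.5 * p @ p / m + 0.5 * q @ (K @ q))
    return q, p, np.array(energy)

# Two coupled oscillators: over long runs the energy error stays bounded,
# in contrast to the drift produced by non-symplectic explicit schemes.
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
q0, p0 = np.array([1.0, 0.0]), np.zeros(2)
_, _, E = leapfrog(q0, p0, K, m=1.0, dt=0.05, steps=20000)
print(E.max() - E.min())
```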
NASA Astrophysics Data System (ADS)
Bradshaw, A. M.; Reuter, B.; Hamacher, T.
2015-08-01
The energy transformation process beginning to take place in many countries as a response to climate change will reduce substantially the consumption of fossil fuels, but at the same time cause a large increase in the demand for other raw materials. Whereas it is difficult to estimate the quantities of, for example, iron, copper and aluminium required, the situation is somewhat simpler for the rare elements that might be needed in a sustainable energy economy based largely on photovoltaic sources, wind and possibly nuclear fusion. We consider briefly each of these technologies and discuss the supply risks associated with the rare elements required, if they were to be used in the quantities that might be required for a global energy transformation process. In passing, we point out the need in resource studies to define the terms "rare", "scarce" and "critical" and to use them in a consistent way.
Fusheini, Adam; Marnoch, Gordon; Gray, Ann Marie
2016-10-01
Ghana's National Health Insurance Scheme (NHIS), established by an Act of Parliament (Act 650), in 2003 and since replaced by Act 852 of 2012 remains, in African terms, unprecedented in terms of growth and coverage. As a result, the scheme has received praise for its associated legal reforms, clinical audit mechanisms and for serving as a hub for knowledge sharing and learning within the context of South-South cooperation. The scheme continues to shape national health insurance thinking in Africa. While the success, especially in coverage and financial access has been highlighted by many authors, insufficient attention has been paid to critical and context-specific factors. This paper seeks to fill that gap. Based on an empirical qualitative case study of stakeholders' views on challenges and success factors in four mutual schemes (district offices) located in two regions of Ghana, the study uses the concept of policy translation to assess whether the Ghana scheme could provide useful lessons to other African and developing countries in their quest to implement social/NHISs. In the study, interviewees referred to both 'hard and soft' elements as driving the "success" of the Ghana scheme. The main 'hard elements' include bureaucratic and legal enforcement capacities; IT; financing; governance, administration and management; regulating membership of the scheme; and service provision and coverage capabilities. The 'soft' elements identified relate to: the background/context of the health insurance scheme; innovative ways of funding the NHIS, the hybrid nature of the Ghana scheme; political will, commitment by government, stakeholders and public cooperation; social structure of Ghana (solidarity); and ownership and participation. Other developing countries can expect to translate rather than re-assemble a national health insurance programme in an incomplete and highly modified form over a period of years, amounting to a process best conceived as germination as opposed to emulation. The Ghana experience illustrates that in adopting health financing systems that function well, countries need to customise systems (policy customisation) to suit their socio-economic, political and administrative settings. Home-grown health financing systems that resonate with social values will also need to be found in the process of translation. © 2017 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Further Education and Training: A Comparison of Policy Models in Britain and Norway.
ERIC Educational Resources Information Center
Skinningsrud, Tone
1995-01-01
Compares public intervention schemes in Britain and Norway supporting participation of public educational institutions in the delivery of continuing labor force development and training. These schemes demonstrate that British policy is based on belief in free market principles, while Norwegian policy combines elements of consumer choice and legal…
A semi-Lagrangian advection scheme for radioactive tracers in the NCEP Regional Spectral Model (RSM)
NASA Astrophysics Data System (ADS)
Chang, E.-C.; Yoshimura, K.
2015-10-01
In this study, the non-iteration dimensional-split semi-Lagrangian (NDSL) advection scheme is applied to the National Centers for Environmental Prediction (NCEP) Regional Spectral Model (RSM) to alleviate the Gibbs phenomenon. The Gibbs phenomenon is a problem wherein negative values of positive-definite quantities (e.g., moisture and tracers) are generated by the spectral space transformation in a spectral model system. To solve this problem, the spectral prognostic specific humidity and radioactive tracer advection scheme is replaced by the NDSL advection scheme, which considers advection of tracers in a grid system without spectral space transformations. A regional version of the NDSL is developed in this study and is applied to the RSM. Idealized experiments show that the regional version of the NDSL is successful. The model runs for an actual case study suggest that the NDSL can successfully advect radioactive tracers (iodine-131 and cesium-137) without noise from the Gibbs phenomenon. The NDSL can also remove negative specific humidity values produced in spectral calculations without losing detailed features.
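The following is a minimal 1-D semi-Lagrangian advection sketch on a periodic grid with linear interpolation: departure points are traced back along a constant wind, no spectral transform is involved, and a positive-definite tracer stays non-negative. It is illustrative only; the NDSL's dimensional splitting and non-iteration features are not reproduced, and all parameters are assumptions.

```python
import numpy as np

def semi_lagrangian_step(f, u, dt, dx):
    """One semi-Lagrangian step for df/dt + u df/dx = 0 on a periodic grid.
    Departure points are traced back along the (constant) wind and the field is
    interpolated linearly, which keeps positive-definite tracers >= 0."""
    n = f.size
    x = np.arange(n) * dx
    x_dep = (x - u * dt) % (n * dx)            # departure points (periodic domain)
    xp = np.concatenate([x, [n * dx]])         # pad right edge for periodic interpolation
    fp = np.concatenate([f, [f[0]]])
    return np.interp(x_dep, xp, fp)

n, dx, u, dt = 200, 1.0, 3.7, 1.0              # Courant number > 1 is allowed here
f = np.zeros(n); f[40:60] = 1.0                # sharp-edged tracer slug
for _ in range(100):
    f = semi_lagrangian_step(f, u, dt, dx)
print(f.min())                                 # stays >= 0: no Gibbs-type undershoot
```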
A study of the response of nonlinear springs
NASA Technical Reports Server (NTRS)
Hyer, M. W.; Knott, T. W.; Johnson, E. R.
1991-01-01
The various phases of developing a methodology for studying the response of a spring-reinforced arch subjected to a point load are discussed. The arch is simply supported at its ends with both the spring and the point load assumed to be at midspan. The spring is present to offset the typical snap-through behavior normally associated with arches, and to provide a structure that responds with constant resistance over a finite displacement. The various phases discussed consist of the following: (1) development of the closed-form solution for the shallow arch case; (2) development of a finite difference analysis to study (shallow) arches; and (3) development of a finite element analysis for studying more general shallow and nonshallow arches. The two numerical analyses rely on a continuation scheme to move the solution past limit points, and to move onto bifurcated paths, both characteristics being common to the arch problem. An eigenvalue method is used for the continuation scheme. The finite difference analysis is based on a mixed formulation (force and displacement variables) of the governing equations. The governing equations for the mixed formulation are in first-order form, making the finite difference implementation convenient. However, the mixed formulation is not well-suited for the eigenvalue continuation scheme. This provided the motivation for the displacement-based finite element analysis. Both the finite difference and the finite element analyses are compared with the closed-form shallow-arch solution. Agreement is excellent, except for the potential problems with the finite difference analysis and the continuation scheme. Agreement between the finite element analysis and another investigator's numerical analysis for deep arches is also good.
NASA Astrophysics Data System (ADS)
Ruiz-Baier, Ricardo; Lunati, Ivan
2016-10-01
We present a novel discretization scheme tailored to a class of multiphase models that regard the physical system as consisting of multiple interacting continua. In the framework of mixture theory, we consider a general mathematical model that entails solving a system of mass and momentum equations for both the mixture and one of the phases. The model results in a strongly coupled and nonlinear system of partial differential equations that are written in terms of phase and mixture (barycentric) velocities, phase pressure, and saturation. We construct an accurate, robust and reliable hybrid method that combines a mixed finite element discretization of the momentum equations with a primal discontinuous finite volume-element discretization of the mass (or transport) equations. The scheme is devised for unstructured meshes and relies on mixed Brezzi-Douglas-Marini approximations of phase and total velocities, on piecewise constant elements for the approximation of phase or total pressures, as well as on a primal formulation that employs discontinuous finite volume elements defined on a dual diamond mesh to approximate scalar fields of interest (such as volume fraction, total density, saturation, etc.). As the discretization scheme is derived for a general formulation of multicontinuum physical systems, it can be readily applied to a large class of simplified multiphase models; on the other hand, the approach can be seen as a generalization of the simplified models commonly encountered in the literature, to be employed when the latter are not sufficiently accurate. An extensive set of numerical test cases involving two- and three-dimensional porous media is presented to demonstrate the accuracy of the method (displaying an optimal convergence rate), the physics-preserving properties of the mixed-primal scheme, as well as the robustness of the method (which is successfully used to simulate diverse physical phenomena such as density fingering, Terzaghi's consolidation, deformation of a cantilever bracket, and Boycott effects). The method is not limited to flow in porous media; it can also be employed to describe many other physical systems governed by a similar set of equations, including, e.g., multi-component materials.
NASA Astrophysics Data System (ADS)
Xie, Dexuan
2014-10-01
The Poisson-Boltzmann equation (PBE) is one widely-used implicit solvent continuum model in the calculation of electrostatic potential energy for biomolecules in ionic solvent, but its numerical solution remains a challenge due to its strong singularity and nonlinearity caused by its singular distribution source terms and exponential nonlinear terms. To effectively deal with such a challenge, in this paper, new solution decomposition and minimization schemes are proposed, together with a new PBE analysis on solution existence and uniqueness. Moreover, a PBE finite element program package is developed in Python based on the FEniCS program library and GAMer, a molecular surface and volumetric mesh generation program package. Numerical tests on proteins and a nonlinear Born ball model with an analytical solution validate the new solution decomposition and minimization schemes, and demonstrate the effectiveness and efficiency of the new PBE finite element program package.
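The sketch below is not the authors' FEniCS/GAMer package; it is a minimal 1-D finite-difference Newton iteration for a simplified nonlinear Poisson-Boltzmann-type model u'' = κ² sinh(u) with Dirichlet data, included only to illustrate how the exponential nonlinearity is handled by linearization. All parameter values are assumptions.

```python
import numpy as np

def solve_pb_1d(n=200, L=1.0, kappa=2.0, u_left=1.0, u_right=0.0, tol=1e-10):
    """Newton iteration for the 1-D nonlinear Poisson-Boltzmann-type problem
    u'' = kappa^2 * sinh(u) on (0, L) with Dirichlet boundary values."""
    h = L / (n + 1)
    u = np.linspace(u_left, u_right, n + 2)        # initial guess: linear profile
    for _ in range(50):
        ui = u[1:-1]
        # residual of the discrete equations at the interior nodes
        F = (u[2:] - 2 * ui + u[:-2]) / h**2 - kappa**2 * np.sinh(ui)
        if np.max(np.abs(F)) < tol:
            break
        # tridiagonal Jacobian of the residual with respect to the interior unknowns
        main = -2.0 / h**2 - kappa**2 * np.cosh(ui)
        off = np.ones(n - 1) / h**2
        J = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
        u[1:-1] += np.linalg.solve(J, -F)           # Newton update
    return u

u = solve_pb_1d()
print(u[:5])
```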
NASA Astrophysics Data System (ADS)
Busto, S.; Ferrín, J. L.; Toro, E. F.; Vázquez-Cendón, M. E.
2018-01-01
In this paper, the projection hybrid FV/FE method presented in [1] is extended to account for species transport equations. Furthermore, turbulent regimes are also considered thanks to the k-ε model. For the transport-diffusion stage, new high-order accurate schemes are developed. The CVC Kolgan-type scheme and the ADER methodology are extended to 3D. The latter is modified in order to profit from the dual mesh employed by the projection algorithm, and the derivatives involved in the diffusion term are discretized using a Galerkin approach. The accuracy and stability analysis of the new method is carried out for the advection-diffusion-reaction equation. Within the projection stage, the pressure correction is computed by a piecewise linear finite element method. Numerical results are presented, aimed at verifying the formal order of accuracy of the scheme and at assessing the performance of the method on several realistic test problems.
NASA Astrophysics Data System (ADS)
Gaikwad, Akshay; Rehal, Diksha; Singh, Amandeep; Arvind; Dorai, Kavita
2018-02-01
We present the NMR implementation of a scheme for selective and efficient quantum process tomography without ancilla. We generalize this scheme such that it can be implemented efficiently using only a set of measurements involving product operators. The method allows us to estimate any element of the quantum process matrix to a desired precision, provided a set of quantum states can be prepared efficiently. Our modified technique requires fewer experimental resources as compared to the standard implementation of selective and efficient quantum process tomography, as it exploits the special nature of NMR measurements to allow us to compute specific elements of the process matrix by a restrictive set of subsystem measurements. To demonstrate the efficacy of our scheme, we experimentally tomograph the processes corresponding to "no operation," a controlled-NOT (CNOT), and a controlled-Hadamard gate on a two-qubit NMR quantum information processor, with high fidelities.
Optical flip-flops in a polarization-encoded optical shadow-casting scheme.
Rizvi, R A; Zubairy, M S
1994-06-10
We propose a novel scheme that optically implements various types of binary sequential logic elements. This is based on a polarization-encoded optical shadow-casting system. The proposed system architecture is capable of implementing synchronous as well as asynchronous sequential circuits owing to the inherent structural flexibility of optical shadow casting. By employing the proposed system, we present the design and implementation schemes of a J-K flip-flop and clocked R-S and D latches. The main feature of these flip-flops is that the propagation of the signal from the input plane to the output (i.e., processing) and from the output plane to the source plane (i.e., feedback) is all optical. Consequently the efficiency of these elements in terms of speed is increased. The only electronic part in the system is the detection of the outputs and the switching of the source plane.
NASA Technical Reports Server (NTRS)
Kirk, Benjamin S.; Bova, Stephen W.; Bond, Ryan B.
2011-01-01
Presentation topics include background and motivation; physical modeling including governing equations and thermochemistry; finite element formulation; results of inviscid thermal nonequilibrium chemically reacting flow and viscous thermal equilibrium chemical reacting flow; and near-term effort.
NASA Astrophysics Data System (ADS)
Abdul-Majeed, Wameath Sh
This research is dedicated to developing a fully integrated system for heavy metals determination in water samples based on microfluidic plasma atomizers. Several configurations of dielectric barrier discharge (DBD) atomizer are designed, fabricated and tested toward this target. Finally, a combination of annular and rectangular DBD atomizers is utilized to develop a scheme for heavy metals determination. The present thesis combines both theoretical and experimental investigations to fulfil the requirements. Several mathematical studies are implemented to explore the optimal design parameters for best system performance. On the other hand, expanded experimental explorations are conducted to assess the proposed operational approaches. The experiments were designed according to a central composite rotatable design; hence, an empirical model has been produced for each studied case. Moreover, several statistical approaches are adopted to analyse the system performance and to deduce the optimal operational parameters. The introduction of the examined analyte to the plasma atomizer is achieved by applying chemical schemes, in which the element in the sample is derivatized using different kinds of reducing agents to produce vapour species (e.g. hydrides); a group of nine elements is examined in this research individually and simultaneously. Moreover, other derivatization schemes based on photochemical vapour generation assisted by ultrasound irradiation are also investigated. Generally speaking, the detection limits achieved in this research for the examined set of elements (by applying the hydroborate scheme) are found to be acceptable in accordance with the standard limits for drinking water. The results for copper, compared with data from other technologies in the literature, show a competitive detection limit for the developed scheme, with the advantage of simultaneous, fully automated, in situ, online real-time analysis, as well as the possibility of connecting the proposed device to control loops.
Vainieri, Milena; Lungu, Daniel Adrian; Nuti, Sabina
2018-01-30
Pay for performance (P4P) programs have been widely analysed in literature, and the results regarding their impact on performance are mixed. Moreover, in the real-life setting, reward schemes are designed combining multiple elements altogether, yet, it is not clear what happens when they are applied using different combinations. To provide insights on how P4P programs are influenced by 5 key elements: whom, what, how, how many targets, and how much to reward. A qualitative longitudinal analysis of 10 years of P4P reward schemes adopted by the regional administrations of Tuscany and Lombardy (Italy) was conducted. The effects of the P4P features on performance are discussed considering both overall and specific indicators. Both regions applied financial reward schemes for General Managers by linking the variable pay to performance. While Tuscany maintained a relatively stable financial incentive design and governance tools, Lombardy changed some elements of the design and introduced, in 2012, a P4P program aimed to reward the providers. The main differences between the 2 cases regard the number of targets (how many), the type (what), and the method applied to set targets (how). Considering the overall performance obtained by the 2 regions, it seems that whom, how, and how much to reward are not relevant in the success of P4P programs; instead, the number (how many) and the type (what) of targets set may influence the performance improvement processes driven by financial reward schemes. © 2018 The Authors. The International Journal of Health Planning and Management published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Bürger, Raimund; Kumar, Sarvesh; Ruiz-Baier, Ricardo
2015-10-01
The sedimentation-consolidation and flow processes of a mixture of small particles dispersed in a viscous fluid at low Reynolds numbers can be described by a nonlinear transport equation for the solids concentration coupled with the Stokes problem written in terms of the mixture flow velocity and the pressure field. Here both the viscosity and the forcing term depend on the local solids concentration. A semi-discrete discontinuous finite volume element (DFVE) scheme is proposed for this model. The numerical method is constructed on a baseline finite element family of linear discontinuous elements for the approximation of velocity components and concentration field, whereas the pressure is approximated by piecewise constant elements. The unique solvability of both the nonlinear continuous problem and the semi-discrete DFVE scheme is discussed, and optimal convergence estimates in several spatial norms are derived. Properties of the model and the predicted space accuracy of the proposed formulation are illustrated by detailed numerical examples, including flows under gravity with changing direction, a secondary settling tank in an axisymmetric setting, and batch sedimentation in a tilted cylindrical vessel.
A spectral hybridizable discontinuous Galerkin method for elastic-acoustic wave propagation
NASA Astrophysics Data System (ADS)
Terrana, S.; Vilotte, J. P.; Guillot, L.
2018-04-01
We introduce a time-domain, high-order in space, hybridizable discontinuous Galerkin (DG) spectral element method (HDG-SEM) for wave equations in coupled elastic-acoustic media. The method is based on a first-order hyperbolic velocity-strain formulation of the wave equations written in conservative form. This method follows the HDG approach by introducing a hybrid unknown, which is the approximation of the velocity on the elements boundaries, as the only globally (i.e. interelement) coupled degrees of freedom. In this paper, we first present a hybridized formulation of the exact Riemann solver at the element boundaries, taking into account elastic-elastic, acoustic-acoustic and elastic-acoustic interfaces. We then use this Riemann solver to derive an explicit construction of the HDG stabilization function τ for all the above-mentioned interfaces. We thus obtain an HDG scheme for coupled elastic-acoustic problems. This scheme is then discretized in space on quadrangular/hexahedral meshes using arbitrary high-order polynomial basis for both volumetric and hybrid fields, using an approach similar to the spectral element methods. This leads to a semi-discrete system of algebraic differential equations (ADEs), which thanks to the structure of the global conservativity condition can be reformulated easily as a classical system of first-order ordinary differential equations in time, allowing the use of classical explicit or implicit time integration schemes. When an explicit time scheme is used, the HDG method can be seen as a reformulation of a DG with upwind fluxes. The introduction of the velocity hybrid unknown leads to relatively simple computations at the element boundaries which, in turn, makes the HDG approach competitive with the DG-upwind methods. Extensive numerical results are provided to illustrate and assess the accuracy and convergence properties of this HDG-SEM. The approximate velocity is shown to converge with the optimal order of k + 1 in the L2-norm, when element polynomials of order k are used, and to exhibit the classical spectral convergence of SEM. Additional inexpensive local post-processing in both the elastic and the acoustic case allow to achieve higher convergence orders. The HDG scheme provides a natural framework for coupling classical, continuous Galerkin SEM with HDG-SEM in the same simulation, and it is shown numerically in this paper. As such, the proposed HDG-SEM can combine the efficiency of the continuous SEM with the flexibility of the HDG approaches. Finally, more complex numerical results, inspired from real geophysical applications, are presented to illustrate the capabilities of the method for wave propagation in heterogeneous elastic-acoustic media with complex geometries.
NASA Technical Reports Server (NTRS)
Usab, William J., Jr.; Jiang, Yi-Tsann
1991-01-01
The objective of the present research is to develop a general solution adaptive scheme for the accurate prediction of inviscid quasi-three-dimensional flow in advanced compressor and turbine designs. The adaptive solution scheme combines an explicit finite-volume time-marching scheme for unstructured triangular meshes and an advancing front triangular mesh scheme with a remeshing procedure for adapting the mesh as the solution evolves. The unstructured flow solver has been tested on a series of two-dimensional airfoil configurations including a three-element analytic test case presented here. Mesh adapted quasi-three-dimensional Euler solutions are presented for three spanwise stations of the NASA rotor 67 transonic fan. Computed solutions are compared with available experimental data.
Two-level schemes for the advection equation
NASA Astrophysics Data System (ADS)
Vabishchevich, Petr N.
2018-06-01
The advection equation is the basis for mathematical models of continuum mechanics. In the approximate solution of nonstationary problems it is necessary to inherit main properties of the conservatism and monotonicity of the solution. In this paper, the advection equation is written in the symmetric form, where the advection operator is the half-sum of advection operators in conservative (divergent) and non-conservative (characteristic) forms. The advection operator is skew-symmetric. Standard finite element approximations in space are used. The standard explicit two-level scheme for the advection equation is absolutely unstable. New conditionally stable regularized schemes are constructed, on the basis of the general theory of stability (well-posedness) of operator-difference schemes, the stability conditions of the explicit Lax-Wendroff scheme are established. Unconditionally stable and conservative schemes are implicit schemes of the second (Crank-Nicolson scheme) and fourth order. The conditionally stable implicit Lax-Wendroff scheme is constructed. The accuracy of the investigated explicit and implicit two-level schemes for an approximate solution of the advection equation is illustrated by the numerical results of a model two-dimensional problem.
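For reference, a minimal sketch of the classical explicit Lax-Wendroff two-level scheme for the 1-D constant-coefficient advection equation on a periodic grid, stable for Courant numbers |c| ≤ 1; the regularized and implicit variants discussed in the paper are not reproduced here, and the grid and profile are illustrative.

```python
import numpy as np

def lax_wendroff_step(f, c):
    """One explicit Lax-Wendroff step for f_t + a f_x = 0 on a periodic grid.
    c = a*dt/dx is the Courant number; the scheme is stable for |c| <= 1."""
    fp = np.roll(f, -1)   # f_{i+1}
    fm = np.roll(f, +1)   # f_{i-1}
    return f - 0.5 * c * (fp - fm) + 0.5 * c**2 * (fp - 2 * f + fm)

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
f0 = np.exp(-200 * (x - 0.3) ** 2)    # smooth initial profile
f, c = f0.copy(), 0.8
for _ in range(250):                  # 250 steps at c = 0.8 -> exactly one domain traversal
    f = lax_wendroff_step(f, c)
print(np.abs(f - f0).max())           # dispersive error after one period
```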
Hermetic Glass-To-Metal Seal For Instrumentation Window
NASA Technical Reports Server (NTRS)
Hill, Arthur J.
1992-01-01
Proposed mounting scheme for optical element of instrumentation window in pressure vessel ensures truly hermetic seal while minimizing transmission of stress to optical element. Brazed metal seal superior to conventional gaskets of elastomer, carbon, asbestos, or other material compressed between optical element and wall of vessel. Concentric brazed joints in proposed seal bond metal ring to wall of vessel and to optical element. U-shaped cross section allows ring to flex under pressure.
Non-symbolic arithmetic in adults and young children.
Barth, Hilary; La Mont, Kristen; Lipton, Jennifer; Dehaene, Stanislas; Kanwisher, Nancy; Spelke, Elizabeth
2006-01-01
Five experiments investigated whether adults and preschool children can perform simple arithmetic calculations on non-symbolic numerosities. Previous research has demonstrated that human adults, human infants, and non-human animals can process numerical quantities through approximate representations of their magnitudes. Here we consider whether these non-symbolic numerical representations might serve as a building block of uniquely human, learned mathematics. Both adults and children with no training in arithmetic successfully performed approximate arithmetic on large sets of elements. Success at these tasks did not depend on non-numerical continuous quantities, modality-specific quantity information, the adoption of alternative non-arithmetic strategies, or learned symbolic arithmetic knowledge. Abstract numerical quantity representations therefore are computationally functional and may provide a foundation for formal mathematics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, Jaromy; Sun Zaijing; Wells, Doug
2009-03-10
Photon activation analysis detected elements in two NIST standards that did not have reported concentration values. A method is currently being developed to infer these concentrations by using scaling parameters and the appropriate known quantities within the NIST standard itself. Scaling parameters include: threshold, peak and endpoint energies; photo-nuclear cross sections for specific isotopes; the Bremsstrahlung spectrum; target thickness; and photon flux. Photo-nuclear cross sections and energies for the unknown elements must also be known. With these quantities, the same integral was performed for both the known and unknown elements, resulting in an inference of the concentration of the unreported element based on the reported value. Since Rb and Mn were elements that were reported in the standards, and because they had well-identified peaks, they were used as the standards of inference to determine concentrations of the unreported elements As, I, Nb, Y, and Zr. This method was tested by choosing other known elements within the standards and inferring a value based on the stated procedure. The reported value of Mn in the first NIST standard was 403±15 ppm and the reported value of Ca in the second NIST standard was 87000 ppm (no reported uncertainty). The inferred concentrations were 370±23 ppm and 80200±8700 ppm, respectively.
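Schematically, the inference scales a reported reference concentration by the ratio of measured peak activities corrected by each isotope's flux-weighted yield factors. The sketch below uses hypothetical numbers and a single lumped "yield" factor; it does not reproduce the study's actual correction integrals.

```python
def infer_concentration(c_ref, area_unknown, area_ref, yield_unknown, yield_ref):
    """Schematic ratio-based inference of an unreported element's concentration
    from a reported reference element in the same standard.  'yield' lumps the
    flux-weighted photonuclear cross-section integral, detection efficiency,
    decay correction and gamma branching for each isotope."""
    return c_ref * (area_unknown / area_ref) * (yield_ref / yield_unknown)

# Hypothetical numbers, for illustration only.
c_Mn = 403.0            # ppm, reported reference concentration
peak_unknown = 1.2e4    # net counts in the unknown element's photopeak
peak_Mn = 3.4e4         # net counts in the Mn photopeak
print(infer_concentration(c_Mn, peak_unknown, peak_Mn, yield_unknown=0.8, yield_ref=1.5))
```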
Dealing with the time-varying parameter problem of robot manipulators performing path tracking tasks
NASA Technical Reports Server (NTRS)
Song, Y. D.; Middleton, R. H.
1992-01-01
Many robotic applications involve time-varying payloads during the operation of the robot. It is therefore of interest to consider control schemes that deal with time-varying parameters. Using the properties of the element-by-element (or Hadamard) product of matrices, we obtain the robot dynamics in parameter-isolated form, from which a new control scheme is developed. The proposed controller yields zero asymptotic tracking errors when applied to robotic systems with time-varying parameters by using a switching-type control law. The results obtained are global in the initial state of the robot, and can be applied to rapidly varying systems.
A discontinuous Galerkin method for poroelastic wave propagation: The two-dimensional case
NASA Astrophysics Data System (ADS)
Dudley Ward, N. F.; Lähivaara, T.; Eveson, S.
2017-12-01
In this paper, we consider a high-order discontinuous Galerkin (DG) method for modelling wave propagation in coupled poroelastic-elastic media. The upwind numerical flux is derived as an exact solution for the Riemann problem including the poroelastic-elastic interface. Attenuation mechanisms in both Biot's low- and high-frequency regimes are considered. The current implementation supports non-uniform basis orders which can be used to control the numerical accuracy element by element. In the numerical examples, we study the convergence properties of the proposed DG scheme and provide experiments where the numerical accuracy of the scheme under consideration is compared to analytic and other numerical solutions.
Using a binaural biomimetic array to identify bottom objects ensonified by echolocating dolphins
Heiweg, D.A.; Moore, P.W.; Martin, S.W.; Dankiewicz, L.A.
2006-01-01
The development of a unique dolphin biomimetic sonar produced data that were used to study signal processing methods for object identification. Echoes from four metallic objects proud on the bottom, and a substrate-only condition, were generated by bottlenose dolphins trained to ensonify the targets in very shallow water. Using the two-element ('binaural') receive array, object echo spectra were collected and submitted for identification to four neural network architectures. Identification accuracy was evaluated over two receive array configurations, and five signal processing schemes. The four neural networks included backpropagation, learning vector quantization, genetic learning and probabilistic network architectures. The processing schemes included four methods that capitalized on the binaural data, plus a monaural benchmark process. All the schemes resulted in above-chance identification accuracy when applied to learning vector quantization and backpropagation. Beam-forming or concatenation of spectra from both receive elements outperformed the monaural benchmark, with higher sensitivity and lower bias. Ultimately, best object identification performance was achieved by the learning vector quantization network supplied with beam-formed data. The advantages of multi-element signal processing for object identification are clearly demonstrated in this development of a first-ever dolphin biomimetic sonar. © 2006 IOP Publishing Ltd.
Hollow cathode lamp based Faraday anomalous dispersion optical filter
Pan, Duo; Xue, Xiaobo; Shang, Haosen; Luo, Bin; Chen, Jingbiao; Guo, Hong
2016-01-01
The Faraday anomalous dispersion optical filter (FADOF), which has found wide application, has so far been limited mainly to a few gaseous elements and low-melting-point metals because of the restriction on the attainable atomic density. In conventional FADOF systems, a high atomic density is usually achieved by thermal equilibrium at the saturated vapor pressure; hence, for elements with high melting points, a high temperature is required. To avoid this restriction, we propose a FADOF scheme based on a hollow cathode lamp (HCL) instead of atomic vapor cells. Experimental results in strontium atoms verified this scheme: a transmission peak corresponding to the 88Sr (5s2)1S0 − (5s5p)1P1 transition (461 nm) is obtained, with a maximum transmittance of 62.5% and a bandwidth of 1.19 GHz. The dependence of the transmission on the magnetic field and the HCL discharge current is also studied. Since state-of-the-art commercial HCLs cover about 70 elements, this scheme can greatly expand the applications of FADOFs, and the abundant atomic transitions they provide give HCL-based FADOFs potential applications in frequency stabilization. PMID:27418112
Identity-Based Verifiably Encrypted Signatures without Random Oracles
NASA Astrophysics Data System (ADS)
Zhang, Lei; Wu, Qianhong; Qin, Bo
Fair exchange protocol plays an important role in electronic commerce in the case of exchanging digital contracts. Verifiably encrypted signatures provide an optimistic solution to these scenarios with an off-line trusted third party. In this paper, we propose an identity-based verifiably encrypted signature scheme. The scheme is non-interactive to generate verifiably encrypted signatures and the resulting encrypted signature consists of only four group elements. Based on the computational Diffie-Hellman assumption, our scheme is proven secure without using random oracles. To the best of our knowledge, this is the first identity-based verifiably encrypted signature scheme provably secure in the standard model.
An Energy Decaying Scheme for Nonlinear Dynamics of Shells
NASA Technical Reports Server (NTRS)
Bottasso, Carlo L.; Bauchau, Olivier A.; Choi, Jou-Young; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
A novel integration scheme for nonlinear dynamics of geometrically exact shells is developed based on the inextensible director assumption. The new algorithm is designed so as to imply the strict decay of the system's total mechanical energy at each time step, and consequently unconditional stability is achieved in the nonlinear regime. Furthermore, the scheme features tunable high-frequency numerical damping and is therefore stiffly accurate. The method is tested for a finite element spatial formulation of shells based on mixed interpolations of strain tensorial components and on a two-parameter representation of director rotations. The robustness of the scheme is illustrated with the help of numerical examples.
Performance of hashed cache data migration schemes on multicomputers
NASA Technical Reports Server (NTRS)
Hiranandani, Seema; Saltz, Joel; Mehrotra, Piyush; Berryman, Harry
1991-01-01
After conducting an examination of several data-migration mechanisms which permit an explicit and controlled mapping of data to memory, a set of schemes for storage and retrieval of off-processor array elements is experimentally evaluated and modeled. All schemes considered have their basis in the use of hash tables for efficient access of nonlocal data. The techniques in question are those of hashed cache, partial enumeration, and full enumeration; in these, nonlocal data are stored in hash tables, so that the operative difference lies in the amount of memory used by each scheme and in the retrieval mechanism used for nonlocal data.
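A minimal sketch of the hashed-cache idea follows: nonlocal (off-processor) array elements are fetched once, stored in a hash table keyed by global index, and served from the table on later accesses. The fetch function and sizes are illustrative stand-ins for the actual message-passing layer; the partial/full enumeration variants are not shown.

```python
class HashedCache:
    """Hash-table cache for off-processor array elements, keyed by global index.
    `fetch_remote(index)` stands in for message-passing retrieval of one nonlocal
    element; in the schemes discussed above such fetches would be batched."""
    def __init__(self, local_start, local_data, fetch_remote):
        self.local_start = local_start
        self.local_data = local_data
        self.fetch_remote = fetch_remote
        self.table = {}                            # hash table for nonlocal elements

    def __getitem__(self, gidx):
        lo = self.local_start
        if lo <= gidx < lo + len(self.local_data):
            return self.local_data[gidx - lo]      # locally owned element
        if gidx not in self.table:                 # miss: retrieve and remember
            self.table[gidx] = self.fetch_remote(gidx)
        return self.table[gidx]

# Illustrative use: this "processor" owns global indices 100..199.
global_array = list(range(1000))
cache = HashedCache(100, global_array[100:200], fetch_remote=lambda g: global_array[g])
print(cache[150], cache[512], cache[512])          # second access to 512 hits the table
```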
Phase-locked laser array through global antenna mutual coupling
Kao, Tsung -Yu; Reno, John L.; Hu, Qing
2016-01-01
Here, phase locking of an array of lasers is a highly effective way to shape the beam, increase the output power, and reduce the lasing threshold. In this work, we present a novel phase-locking mechanism based on "antenna mutual coupling" wherein laser elements interact through far-field radiation with definite phase relations. This allows long-range global coupling among array elements to achieve a robust two-dimensional phase-locked laser array. The new scheme is ideal for lasers with deeply sub-wavelength confined cavities such as nanolasers, where the divergent beam pattern could be used to form strong coupling among elements in the array. We experimentally demonstrated such a scheme using sub-wavelength short-cavity surface-emitting lasers at terahertz frequency. More than 37 laser elements are phase-locked to each other, delivering up to 6.5 mW of single-mode radiation at ~3 terahertz, with a maximum 450-mW/A slope efficiency and near-diffraction-limit beam divergence.
Finite element analysis of the end notched flexure specimen for measuring Mode II fracture toughness
NASA Technical Reports Server (NTRS)
Gillespie, J. W., Jr.; Carlsson, L. A.; Pipes, R. B.
1986-01-01
The paper presents a finite element analysis of the end-notched flexure (ENF) test specimen for Mode II interlaminar fracture testing of composite materials. Virtual crack closure and compliance techniques employed to calculate strain energy release rates from linear elastic two-dimensional analysis indicate that the ENF specimen is a pure Mode II fracture test within the constraints of small deflection theory. Furthermore, the ENF fracture specimen is shown to be relatively insensitive to process-induced cracks, offset from the laminate midplane. Frictional effects are investigated by including the contact problem in the finite element model. A parametric study investigating the influence of delamination length, span, thickness, and material properties assessed the accuracy of beam theory expressions for compliance and strain energy release rate, GII. Finite element results indicate that data reduction schemes based upon beam theory underestimate GII by approximately 20-40 percent for typical unidirectional graphite fiber composite test specimen geometries. Consequently, an improved data reduction scheme is proposed.
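For context, the sketch below encodes the classical beam-theory data-reduction expressions commonly quoted for the ENF specimen (linear beam theory, no shear deformation or friction); the finite element results above indicate such expressions can underestimate GII by roughly 20-40 percent. Symbols: P applied load, a crack length, L half-span, b width, h half-thickness, E1 axial modulus; the sample numbers are hypothetical.

```python
def enf_beam_theory(P, a, L, b, h, E1):
    """Classical beam-theory data reduction for the ENF specimen (no shear
    deformation, no friction).  Returns compliance C and Mode II strain
    energy release rate G_II; h is the half-thickness (specimen thickness 2h)."""
    C = (2.0 * L**3 + 3.0 * a**3) / (8.0 * E1 * b * h**3)   # compliance
    G_II = 9.0 * P**2 * a**2 / (16.0 * E1 * b**2 * h**3)    # = (P^2 / 2b) dC/da
    return C, G_II

# Illustrative (hypothetical) unidirectional graphite/epoxy geometry, SI units.
C, G_II = enf_beam_theory(P=500.0, a=0.025, L=0.05, b=0.025, h=0.0015, E1=130e9)
print(C, G_II)
```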
2013-08-15
Dr. Binayak Panda loads a sample in the IMS-6F secondary ion mass spectroscope's ultra-high-vacuum chamber. It is capable of analyzing very light elements such as hydrogen and lithium in alloys. It can also analyze very small quantities of impurities in materials at parts-per-million levels, and determine isotope ratios of elements, all in solid samples.
Soviet Computers and Cybernetics: Shortcomings and Military Applications.
1980-06-01
Military scientific technological...exploration which have alarmed some Western analysts. America's scientific and technological advantages are integral elements in the delicate world balance...inferior quantity only up to a point, where superior numbers take over. A major element in the military scientific technological competition between
DOE Office of Scientific and Technical Information (OSTI.GOV)
Briceno, Raul A.; Hansen, Maxwell T.; Monahan, Christopher J.
Lattice quantum chromodynamics (QCD) provides the only known systematic, nonperturbative method for first-principles calculations of nucleon structure. However, for quantities such as light-front parton distribution functions (PDFs) and generalized parton distributions (GPDs), the restriction to Euclidean time prevents direct calculation of the desired observable. Recently, progress has been made in relating these quantities to matrix elements of spatially nonlocal, zero-time operators, referred to as quasidistributions. Still, even for these time-independent matrix elements, potential subtleties have been identified in the role of the Euclidean signature. In this work, we investigate the analytic behavior of spatially nonlocal correlation functions and demonstrate that the matrix elements obtained from Euclidean lattice QCD are identical to those obtained using the Lehmann-Symanzik-Zimmermann reduction formula in Minkowski space. After arguing the equivalence on general grounds, we also show that it holds in a perturbative calculation, where special care is needed to identify the lattice prediction. Lastly, we present a proof of the uniqueness of the matrix elements obtained from Minkowski and Euclidean correlation functions to all orders in perturbation theory.
The TeachScheme! Project: Computing and Programming for Every Student
ERIC Educational Resources Information Center
Felleisen, Matthias; Findler, Robert Bruce; Flatt, Matthew; Krishnamurthi, Shriram
2004-01-01
The TeachScheme! Project aims to reform three aspects of introductory programming courses in secondary schools. First, we use a design method that asks students to develop programs in a stepwise fashion such that each step produces a well-specified intermediate product. Second, we use an entire series of sublanguages, not just one. Each element of…
An Improved Flame Test for Qualitative Analysis Using a Multichannel UV-Visible Spectrophotometer
ERIC Educational Resources Information Center
Blitz, Jonathan P.; Sheeran, Daniel J.; Becker, Thomas L.
2006-01-01
Qualitative analysis schemes are used in undergraduate laboratory settings as a way to introduce equilibrium concepts and logical thinking. The main component of all qualitative analysis schemes is a flame test, as the color of light emitted from certain elements is distinctive and a flame photometer or spectrophotometer in each laboratory is…
Supercomputer implementation of finite element algorithms for high speed compressible flows
NASA Technical Reports Server (NTRS)
Thornton, E. A.; Ramakrishnan, R.
1986-01-01
Prediction of compressible flow phenomena using the finite element method is of recent origin and considerable interest. Two shock capturing finite element formulations for high speed compressible flows are described. A Taylor-Galerkin formulation uses a Taylor series expansion in time coupled with a Galerkin weighted residual statement. The Taylor-Galerkin algorithms use explicit artificial dissipation, and the performance of three dissipation models are compared. A Petrov-Galerkin algorithm has as its basis the concepts of streamline upwinding. Vectorization strategies are developed to implement the finite element formulations on the NASA Langley VPS-32. The vectorization scheme results in finite element programs that use vectors of length of the order of the number of nodes or elements. The use of the vectorization procedure speeds up processing rates by over two orders of magnitude. The Taylor-Galerkin and Petrov-Galerkin algorithms are evaluated for 2D inviscid flows on criteria such as solution accuracy, shock resolution, computational speed and storage requirements. The convergence rates for both algorithms are enhanced by local time-stepping schemes. Extension of the vectorization procedure for predicting 2D viscous and 3D inviscid flows are demonstrated. Conclusions are drawn regarding the applicability of the finite element procedures for realistic problems that require hundreds of thousands of nodes.
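As a small analogy to the vectorization strategy described above (here in numpy rather than the original vector Fortran on the VPS-32), the sketch compares an element-at-a-time loop with a gathered evaluation whose vector length is the number of elements; the mesh data are random placeholders.

```python
import numpy as np

# Unstructured mesh data: element-to-node connectivity and nodal field values.
n_nodes, n_elem = 10000, 19800
rng = np.random.default_rng(0)
conn = rng.integers(0, n_nodes, size=(n_elem, 3))     # 3 nodes per triangle
u = rng.random(n_nodes)

# Scalar loop: one element at a time (short vectors, poor vector performance).
avg_loop = np.empty(n_elem)
for e in range(n_elem):
    avg_loop[e] = u[conn[e]].mean()

# Vectorized: gather all elemental nodal values at once; vector length ~ n_elem.
avg_vec = u[conn].mean(axis=1)

assert np.allclose(avg_loop, avg_vec)
```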
NASA Technical Reports Server (NTRS)
Han, Mei; Braun, Scott A.; Olson, William S.; Persson, P. Ola G.; Bao, Jian-Wen
2009-01-01
Seen by the human eye, precipitation particles are commonly drops of rain, flakes of snow, or lumps of hail that reach the ground. Remote sensors and numerical models usually deal with information about large collections of rain, snow, and hail (or graupel, also called soft hail) in a volume of air. Therefore, the size and number of the precipitation particles and how particles interact, evolve, and fall within the volume of air need to be represented using physical laws and mathematical tools, which are often implemented as cloud and precipitation microphysical parameterizations in numerical models. To account for the complexity of the precipitation physical processes, scientists have developed various types of such schemes in models. The accuracy of numerical weather forecasting may vary dramatically when different types of these schemes are employed. Therefore, systematic evaluations of cloud and precipitation schemes are of great importance for improvement of weather forecasts. This study is one such endeavor; it pursues quantitative assessment of all the available cloud and precipitation microphysical schemes in a weather model (MM5) through comparison with the observations obtained by the National Aeronautics and Space Administration's (NASA's) and Japan Aerospace Exploration Agency's (JAXA's) Tropical Rainfall Measuring Mission (TRMM) precipitation radar (PR) and microwave imager (TMI). When satellite sensors (like PR or TMI) detect information from precipitation particles, they cannot directly observe the microphysical quantities (e.g., water species phase, density, size, and amount). Instead, they tell how much radiation is absorbed by rain, reflected away from the sensor by snow or graupel, or reflected back to the satellite. On the other hand, the microphysical quantities in the model are usually well represented in microphysical schemes and can be converted to radiative properties that can be directly compared to the corresponding PR and TMI observations. This study employs this method to evaluate the accuracy of the radiative properties simulated by the MM5 model with different microphysical schemes. It is found that the representations of particle density, size, and mass in the different schemes in the MM5 model determine the model's performance when predicting a winter storm over the eastern Pacific Ocean. Schemes lacking moderate-density particles (i.e., graupel), with snowflakes that are too large, or with excessive mass of snow or graupel lead to degraded prediction of the radiative properties as observed by the TRMM satellite. This study demonstrates the uniqueness of the combination of an active microwave sensor (PR) and a passive microwave sensor (TMI) onboard TRMM for assessing the accuracy of numerical weather forecasting. It improves our understanding of the physical and radiative properties of different types of precipitation particles and provides suggestions for better representation of cloud and precipitation processes in numerical models. It would, ultimately, contribute to answering questions like "Why did it not rain when the forecast said it would?"
Towards an FVE-FAC Method for Determining Thermocapillary Effects on Weld Pool Shape
NASA Technical Reports Server (NTRS)
Canright, David; Henson, Van Emden
1996-01-01
Several practical materials processes, e.g., welding, float-zone purification, and Czochralski crystal growth, involve a pool of molten metal with a free surface, with strong temperature gradients along the surface. In some cases, the resulting thermocapillary flow is vigorous enough to convect heat toward the edges of the pool, increasing the driving force in a sort of positive feedback. In this work we examine this mechanism and its effect on the solid-liquid interface through a model problem: a half-space of a pure substance with concentrated axisymmetric surface heating, where surface tension is strong enough to keep the liquid free surface flat. The numerical method proposed for this problem utilizes a finite volume element (FVE) discretization in cylindrical coordinates. Because of the axisymmetric nature of the model problem, the control volumes used are toroidal prisms, formed by taking a polygonal cross-section in the (r, z) plane and sweeping it completely around the z-axis. Conservation of energy (in the solid), and conservation of energy, momentum, and mass (in the liquid) are enforced globally by integrating these quantities and enforcing conservation over each control volume. Judicious application of the Divergence Theorem and Stokes' Theorem, combined with a Crank-Nicolson time-stepping scheme, leads to an implicit algebraic system to be solved at each time step. It is known that near the boundary of the pool, that is, near the solid-liquid interface, the full conduction-convection solution will require extremely fine length scales to resolve the physical behavior of the system. Furthermore, this boundary moves as a function of time. Accordingly, we develop the foundation of an adaptive refinement scheme based on the principles of Fast Adaptive Composite Grid methods (FAC). Implementation of the method and numerical results will appear in a later report.
Applications of Taylor-Galerkin finite element method to compressible internal flow problems
NASA Technical Reports Server (NTRS)
Sohn, Jeong L.; Kim, Yongmo; Chung, T. J.
1989-01-01
A two-step Taylor-Galerkin finite element method with Lapidus' artificial viscosity scheme is applied to several test cases for internal compressible inviscid flow problems. Investigations for the effect of supersonic/subsonic inlet and outlet boundary conditions on computational results are particularly emphasized.
NASA Technical Reports Server (NTRS)
Ecer, A.; Akay, H. U.
1981-01-01
The finite element method is applied for the solution of transonic potential flows through a cascade of airfoils. Convergence characteristics of the solution scheme are discussed. Accuracy of the numerical solutions is investigated for various flow regions in the transonic flow configuration. The design of an efficient finite element computational grid is discussed for improving accuracy and convergence.
An upwind multigrid method for solving viscous flows on unstructured triangular meshes. M.S. Thesis
NASA Technical Reports Server (NTRS)
Bonhaus, Daryl Lawrence
1993-01-01
A multigrid algorithm is combined with an upwind scheme for solving the two dimensional Reynolds averaged Navier-Stokes equations on triangular meshes resulting in an efficient, accurate code for solving complex flows around multiple bodies. The relaxation scheme uses a backward-Euler time difference and relaxes the resulting linear system using a red-black procedure. Roe's flux-splitting scheme is used to discretize convective and pressure terms, while a central difference is used for the diffusive terms. The multigrid scheme is demonstrated for several flows around single and multi-element airfoils, including inviscid, laminar, and turbulent flows. The results show an appreciable speed up of the scheme for inviscid and laminar flows, and dramatic increases in efficiency for turbulent cases, especially those on increasingly refined grids.
Development of the Semi-implicit Time Integration in KIM-SH
NASA Astrophysics Data System (ADS)
NAM, H.
2015-12-01
The Korea Institute of Atmospheric Prediction Systems (KIAPS) was founded in 2011 by the Korea Meteorological Administration (KMA) to develop Korea's own global Numerical Weather Prediction (NWP) system as a nine-year (2011-2019) project. KIM-SH is the KIAPS integrated model, a spectral-element model based on HOMME, and it currently employs explicit time-integration schemes. Explicit schemes, however, tend to be unstable and require very small time steps, while semi-implicit schemes are very stable and allow much larger time steps. We therefore introduce three- and two-time-level semi-implicit schemes in KIM-SH as the time integration. We define the linear terms about a reference state and, following the semi-implicit formulation, solve the resulting linear system with GMRES. Numerical results from experiments will be presented together with the current development status of the time integration in KIM-SH, and several numerical examples are shown to confirm the efficiency and reliability of the proposed schemes.
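Since the abstract does not spell out the KIM-SH linearization, the fragment below is only a hedged, generic illustration of how a semi-implicit step reduces to one linear solve per step handled by GMRES: a stiff linear term is treated implicitly and a slow tendency explicitly. The operator, the tendency function N, and every parameter value are stand-ins, and SciPy's gmres is used in place of the solver in KIM-SH.

    import numpy as np
    from scipy.sparse import identity, diags
    from scipy.sparse.linalg import gmres

    # Toy semi-implicit step: fast linear term L is implicit, slow term N(u) is explicit, giving
    # (I - dt*L) u_new = u + dt*N(u). The linear system is solved with GMRES, as in the abstract;
    # the diffusion-like operator below is a stand-in, not the KIM-SH linearization.

    n = 200
    dx = 1.0 / n
    L = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2   # stiff, diffusion-like term

    def N(u):                                    # placeholder slow/nonlinear tendency
        return -u * np.gradient(u, dx)

    def semi_implicit_step(u, dt):
        A = identity(n) - dt * L
        rhs = u + dt * N(u)
        u_new, info = gmres(A, rhs)              # info == 0 signals convergence
        return u_new

    u = np.exp(-200.0 * (np.linspace(0.0, 1.0, n) - 0.5) ** 2)
    for _ in range(50):
        u = semi_implicit_step(u, dt=1e-4)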
Advance finite element modeling of rotor blade aeroelasticity
NASA Technical Reports Server (NTRS)
Straub, F. K.; Sangha, K. B.; Panda, B.
1994-01-01
An advanced beam finite element has been developed for modeling rotor blade dynamics and aeroelasticity. This element is part of the Element Library of the Second Generation Comprehensive Helicopter Analysis System (2GCHAS). The element allows modeling of arbitrary rotor systems, including bearingless rotors. It accounts for moderately large elastic deflections, anisotropic properties, large frame motion for maneuver simulation, and allows for variable order shape functions. The effects of gravity, mechanically applied loads, and aerodynamic loads are included. All kinematic quantities required to compute airloads are provided. In this paper, the fundamental assumptions and derivation of the element matrices are presented. Numerical results are shown to verify the formulation and illustrate several features of the element.
A radiative transfer scheme that considers absorption, scattering, and distribution of light-absorbing elemental carbon (EC) particles collected on a quartz-fiber filter was developed to explain simultaneous filter reflectance and transmittance observations prior to and during...
T. Heartsill Scalley; F.N. Scatena; S. Moya; A.E. Lugo
2012-01-01
In heterotrophic streams the retention and export of coarse particulate organic matter and associated elements are fundamental biogeochemical processes that influence water quality, food webs and the structural complexity of forested headwater streams. Nevertheless, few studies have documented the quantity and quality of exported organic matter over multiple years and...
A new conformal absorbing boundary condition for finite element meshes and parallelization of FEMATS
NASA Technical Reports Server (NTRS)
Chatterjee, A.; Volakis, J. L.; Nguyen, J.; Nurnberger, M.; Ross, D.
1993-01-01
Some of the progress toward the development and parallelization of an improved version of the finite element code FEMATS is described. This is a finite element code for computing the scattering by arbitrarily shaped three-dimensional composite scatterers. The following tasks were worked on during the report period: (1) new absorbing boundary conditions (ABC's) for truncating the finite element mesh; (2) mixed mesh termination schemes; (3) hierarchical elements and multigridding; (4) parallelization; and (5) various modeling enhancements (antenna feeds, anisotropy, and higher order GIBC).
NASA Technical Reports Server (NTRS)
Downer, Janice Diane
1990-01-01
The dynamic analysis of three-dimensional elastic beams which experience large rotational and large deformational motions is examined. The beam motion is modeled using an inertial reference for the translational displacements and a body-fixed reference for the rotational quantities. Finite strain rod theories are then defined in conjunction with the beam kinematic description which accounts for the effects of stretching, bending, torsion, and transverse shear deformations. A convected coordinate representation of the Cauchy stress tensor and a conjugate strain definition are introduced to model the beam deformation. To treat the beam dynamics, a two-stage modification of the central difference algorithm is presented to integrate the translational coordinates and the angular velocity vector. The angular orientation is then obtained from the application of an implicit integration algorithm to the Euler parameter/angular velocity kinematical relation. The combined developments of the objective internal force computation with the dynamic solution procedures result in the computational preservation of total energy for undamped systems. The present methodology is also extended to model the dynamics of deployment/retrieval of the flexible members. A moving spatial grid corresponding to the configuration of a deployed rigid beam is employed as a reference for the dynamic variables. A transient integration scheme which accurately accounts for the deforming spatial grid is derived from a space-time finite element discretization of a Hamiltonian variational statement. The computational results of this general deforming finite element beam formulation are compared to reported results for a planar inverse-spaghetti problem.
NASA Astrophysics Data System (ADS)
Schröder, Jörg; Viebahn, Nils; Wriggers, Peter; Auricchio, Ferdinando; Steeger, Karl
2017-09-01
In this work we investigate different mixed finite element formulations for the detection of critical loads for the possible occurrence of bifurcation and limit points. In detail, three- and two-field formulations for incompressible and quasi-incompressible materials are analyzed. In order to apply various penalty functions for the volume dilatation in displacement/pressure mixed elements we propose a new consistent scheme capturing the nonlinearities of the penalty constraints. It is shown that for all mixed formulations, which can be reduced to a generalized displacement scheme, a straightforward stability analysis is possible. However, problems based on the classical saddle-point structure require a different analysis based on the change of the signature of the underlying matrix system. The basis of these investigations is the work from Auricchio et al. (Comput Methods Appl Mech Eng 194:1075-1092, 2005, Comput Mech 52:1153-1167, 2013).
NASA Technical Reports Server (NTRS)
Padovan, Joe
1986-01-01
In a three part series of papers, a generalized finite element analysis scheme is developed to handle the steady and transient response of moving/rolling nonlinear viscoelastic structures. This paper considers the development of the moving/rolling element strategy, including the effects of large deformation kinematics and viscoelasticity modelled by fractional integro-differential operators. To improve the solution strategy, a special hierarchical constraint procedure is developed for the case of steady rolling/translating, as well as a transient scheme involving the use of a Grunwaldian representation of the fractional operator. In the second and third parts of the paper, 3-D extensions are developed along with transient contact strategies enabling the handling of impacts with obstructions. Overall, the various developments are benchmarked via comprehensive 2- and 3-D simulations. These are correlated with experimental data to define modelling capabilities.
NASA Astrophysics Data System (ADS)
Balzani, Daniel; Gandhi, Ashutosh; Tanaka, Masato; Schröder, Jörg
2015-05-01
In this paper a robust approximation scheme for the numerical calculation of tangent stiffness matrices is presented in the context of nonlinear thermo-mechanical finite element problems and its performance is analyzed. The scheme extends the approach proposed in Kim et al. (Comput Methods Appl Mech Eng 200:403-413, 2011) and Tanaka et al. (Comput Methods Appl Mech Eng 269:454-470, 2014) and is based on applying the complex-step-derivative approximation to the linearizations of the weak forms of the balance of linear momentum and the balance of energy. By incorporating consistent perturbations along the imaginary axis to the displacement as well as thermal degrees of freedom, we demonstrate that numerical tangent stiffness matrices can be obtained with accuracy up to computer precision, leading to quadratically converging schemes. The main advantage of this approach is that, contrary to the classical forward difference scheme, no round-off errors due to floating-point arithmetic exist within the calculation of the tangent stiffness. This enables arbitrarily small perturbation values and therefore leads to robust schemes even when choosing small values. An efficient algorithmic treatment is presented which enables a straightforward implementation of the method in any standard finite-element program. By means of thermo-elastic and thermo-elastoplastic boundary value problems at finite strains the performance of the proposed approach is analyzed.
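The complex-step idea at the heart of the scheme is easy to demonstrate in isolation. The snippet below is a generic scalar example, not the thermo-mechanical tangent assembly of the paper: it shows that the imaginary perturbation delivers a derivative to machine precision even for a vanishingly small step, where a forward difference is destroyed by round-off.

    import numpy as np

    # Complex-step versus forward-difference derivative of a smooth test function at x = 1.5.
    # The complex-step value stays accurate for h = 1e-20 because no subtractive cancellation
    # occurs, which is exactly the property the tangent-stiffness scheme above exploits.

    def f(x):
        return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

    x0, h = 1.5, 1e-20
    d_complex = np.imag(f(x0 + 1j * h)) / h           # complex-step derivative
    d_forward = (f(x0 + 1e-8) - f(x0)) / 1e-8         # forward difference for comparison
    print(d_complex, d_forward)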
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Haixia; Zhang, Jing
We propose a scheme for continuous-variable quantum cloning of coherent states with phase-conjugate input modes using linear optics. The quantum cloning machine yields M identical optimal clones from N replicas of a coherent state and N replicas of its phase conjugate. This scheme can be straightforwardly implemented with the setups accessible at present since its optical implementation only employs simple linear optical elements and homodyne detection. Compared with the original scheme for continuous-variable quantum cloning with phase-conjugate input modes proposed by Cerf and Iblisdir [Phys. Rev. Lett. 87, 247903 (2001)], which utilized a nondegenerate optical parametric amplifier, our scheme loses the output of phase-conjugate clones and is regarded as irreversible quantum cloning.
Meng, Qingyue; Fang, Hai; Liu, Xiaoyun; Yuan, Beibei; Xu, Jin
2015-10-10
Fragmentation in social health insurance schemes is an important factor for inequitable access to health care and financial protection for people covered by different health insurance schemes in China. To fulfil its commitment of universal health coverage by 2020, the Chinese Government needs to prioritise addressing this issue. After analysing the situation of fragmentation, this Review summarises efforts to consolidate health insurance schemes both in China and internationally. Rural migrants, elderly people, and those with non-communicable diseases in China will greatly benefit from consolidation of the existing health insurance schemes with extended funding pools, thereby narrowing the disparities among health insurance schemes in fund level and benefit package. Political commitments, institutional innovations, and a feasible implementation plan are the major elements needed for success in consolidation. Achievement of universal health coverage in China needs systemic strategies including consolidation of the social health insurance schemes.
Joo, Hyun-Woo; Lee, Chang-Hwan; Rho, Jong-Seok; Jung, Hyun-Kyo
2003-08-01
In this paper, an inversion scheme for the piezoelectric constants of piezoelectric transformers is proposed. The impedance of piezoelectric transducers is calculated using a three-dimensional finite element method, and the validity of this calculation is confirmed experimentally. The effects of material coefficients on piezoelectric transformers are investigated numerically. Six material coefficient variables for piezoelectric transformers were selected, and a design sensitivity method was adopted as the inversion scheme. The validity of the proposed method was confirmed by step-up ratio calculations. The proposed method is applied to the analysis of a sample piezoelectric transformer, and its resonance characteristics are obtained by a numerically combined equivalent-circuit method.
Hypermatrix scheme for finite element systems on CDC STAR-100 computer
NASA Technical Reports Server (NTRS)
Noor, A. K.; Voigt, S. J.
1975-01-01
A study is made of the adaptation of the hypermatrix (block matrix) scheme for solving large systems of finite element equations to the CDC STAR-100 computer. Discussion is focused on the organization of the hypermatrix computation using Cholesky decomposition and the mode of storage of the different submatrices to take advantage of the STAR pipeline (streaming) capability. Consideration is also given to the associated data handling problems and the means of balancing the I/O and CPU times in the solution process. Numerical examples are presented showing the anticipated gain in CPU speed over the CDC 6600 to be obtained by using the proposed algorithms on the STAR computer.
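The block organization of the factorization can be sketched in a few lines. The following is a hedged, in-memory NumPy version of a block Cholesky factorization; the report's out-of-core storage of submatrices and the STAR-100 streaming considerations are not modelled.

    import numpy as np
    from scipy.linalg import cholesky, solve_triangular

    # Block (hypermatrix) Cholesky factorization A = L * L^T operating on submatrices. All blocks
    # live in memory here, whereas the hypermatrix scheme in the report moves them between core
    # and secondary storage.

    def block_cholesky(A, b):
        """Factor an SPD matrix A whose size is divisible by the block size b."""
        nb = A.shape[0] // b
        blk = lambda M, i, j: M[i * b:(i + 1) * b, j * b:(j + 1) * b]
        L = np.zeros_like(A)
        for k in range(nb):
            S = blk(A, k, k) - sum(blk(L, k, s) @ blk(L, k, s).T for s in range(k))
            L[k * b:(k + 1) * b, k * b:(k + 1) * b] = cholesky(S, lower=True)
            for i in range(k + 1, nb):
                S = blk(A, i, k) - sum(blk(L, i, s) @ blk(L, k, s).T for s in range(k))
                # Solve X @ Lkk^T = S for X, i.e. Lkk @ X^T = S^T, with a triangular solve.
                L[i * b:(i + 1) * b, k * b:(k + 1) * b] = solve_triangular(
                    blk(L, k, k), S.T, lower=True).T
        return L

    rng = np.random.default_rng(0)
    M = rng.standard_normal((12, 12))
    A = M @ M.T + 12.0 * np.eye(12)               # symmetric positive definite test matrix
    L = block_cholesky(A, b=4)
    print(np.allclose(L @ L.T, A))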
1980-02-12
planet across the limb of the Sun at the end of a transit. Elements of an Orbit - See orbital elements. Elevation - The height of a point on the...That component of libration due to variations in the geometric position of the Earth relative to the Moon. Orbital Elements - The quantities which...completely describe the size, shape, and orientation of an object's orbit as well as its location in it. The classical set consists of the semi-major
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gates, A.A.; McCarthy, P.G.; Edl, J.W.
1975-05-01
Elemental tritium is shipped at low pressure in a stainless steel container (LP-50) surrounded by an aluminum vessel and Celotex insulation at least 4 in. thick in a steel drum. Each package contains a large quantity (greater than a Type A quantity) of nonfissile material, as defined in AECM 0529. This report provides the details of the safety analysis performed for this type container.
Excess entropy scaling for the segmental and global dynamics of polyethylene melts.
Voyiatzis, Evangelos; Müller-Plathe, Florian; Böhm, Michael C
2014-11-28
The range of validity of the Rosenfeld and Dzugutov excess entropy scaling laws is analyzed for unentangled linear polyethylene chains. We consider two segmental dynamical quantities, i.e. the bond and the torsional relaxation times, and two global ones, i.e. the chain diffusion coefficient and the viscosity. The excess entropy is approximated by either a series expansion of the entropy in terms of the pair correlation function or by an equation of state for polymers developed in the context of the statistical associating fluid theory. For the whole range of temperatures and chain lengths considered, the two estimates of the excess entropy are linearly correlated. The scaled bond and torsional relaxation times fall onto a master curve irrespective of the chain length and the employed scaling scheme. Both quantities depend non-linearly on the excess entropy. For a fixed chain length, the reduced diffusion coefficient and viscosity scale linearly with the excess entropy. An empirical reduction to a chain length-independent master curve is accessible for both dynamic quantities. The Dzugutov scheme predicts an increased value of the scaled diffusion coefficient with increasing chain length, which contradicts physical expectations. The origin of this trend can be traced back to the density dependence of the scaling factors. This finding has not been observed previously for Lennard-Jones chain systems (Macromolecules, 2013, 46, 8710-8723). Thus, it limits the applicability of the Dzugutov approach to polymers. In connection with diffusion coefficients and viscosities, the Rosenfeld scaling law appears to be of higher quality than the Dzugutov approach. An empirical excess entropy scaling is also proposed which leads to a chain length-independent correlation. It is expected to be valid for polymers in the Rouse regime.
Rare earth elements: end use and recyclability
Goonan, Thomas G.
2011-01-01
Rare earth elements are used in mature markets (such as catalysts, glassmaking, lighting, and metallurgy), which account for 59 percent of the total worldwide consumption of rare earth elements, and in newer, high-growth markets (such as battery alloys, ceramics, and permanent magnets), which account for 41 percent of the total worldwide consumption of rare earth elements. In mature market segments, lanthanum and cerium constitute about 80 percent of rare earth elements used, and in new market segments, dysprosium, neodymium, and praseodymium account for about 85 percent of rare earth elements used. Regardless of the end use, rare earth elements are not recycled in large quantities, but could be if recycling became mandated or very high prices of rare earth elements made recycling feasible.
NASA Astrophysics Data System (ADS)
Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; Niklasson, Anders M. N.; Head-Gordon, Teresa; Skylaris, Chris-Kriton
2017-03-01
Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, which we both employ in two radically distinct regimes—in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.
Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; Niklasson, Anders M N; Head-Gordon, Teresa; Skylaris, Chris-Kriton
2017-03-28
Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, which we both employ in two radically distinct regimes-in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.
Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; ...
2017-03-28
Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, which we both employ in two radically distinct regimes—in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Furthermore, both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex
Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, which we both employ in two radically distinct regimes—in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Furthermore, both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.
NASA Astrophysics Data System (ADS)
Lohmann, Christoph; Kuzmin, Dmitri; Shadid, John N.; Mabuza, Sibusiso
2017-09-01
This work extends the flux-corrected transport (FCT) methodology to arbitrary order continuous finite element discretizations of scalar conservation laws on simplex meshes. Using Bernstein polynomials as local basis functions, we constrain the total variation of the numerical solution by imposing local discrete maximum principles on the Bézier net. The design of accuracy-preserving FCT schemes for high order Bernstein-Bézier finite elements requires the development of new algorithms and/or generalization of limiting techniques tailored for linear and multilinear Lagrange elements. In this paper, we propose (i) a new discrete upwinding strategy leading to local extremum bounded low order approximations with compact stencils, (ii) high order variational stabilization based on the difference between two gradient approximations, and (iii) new localized limiting techniques for antidiffusive element contributions. The optional use of a smoothness indicator, based on a second derivative test, makes it possible to potentially avoid unnecessary limiting at smooth extrema and achieve optimal convergence rates for problems with smooth solutions. The accuracy of the proposed schemes is assessed in numerical studies for the linear transport equation in 1D and 2D.
Using EIGER for Antenna Design and Analysis
NASA Technical Reports Server (NTRS)
Champagne, Nathan J.; Khayat, Michael; Kennedy, Timothy F.; Fink, Patrick W.
2007-01-01
EIGER (Electromagnetic Interactions GenERalized) is a frequency-domain electromagnetics software package that is built upon a flexible framework, designed using object-oriented techniques. The analysis methods used include moment method solutions of integral equations, finite element solutions of partial differential equations, and combinations thereof. The framework design permits new analysis techniques (boundary conditions, Green's functions, etc.) to be added to the software suite with a sensible effort. The code has been designed to execute (in serial or parallel) on a wide variety of platforms from Intel-based PCs and Unix-based workstations. Recently, new potential integration schemes that avoid singularity extraction techniques have been added for integral equation analysis. These new integration schemes are required for facilitating the use of higher-order elements and basis functions. Higher-order elements are better able to model geometrical curvature using fewer elements than when using linear elements. Higher-order basis functions are beneficial for simulating structures with rapidly varying fields or currents. Results presented here will demonstrate current and future capabilities of EIGER with respect to analysis of installed antenna system performance in support of NASA's mission of exploration. Examples include antenna coupling within an enclosed environment and antenna analysis on electrically large manned space vehicles.
A comparative study of an ABC and an artificial absorber for truncating finite element meshes
NASA Technical Reports Server (NTRS)
Oezdemir, T.; Volakis, John L.
1993-01-01
The type of mesh termination used in the context of finite element formulations plays a major role on the efficiency and accuracy of the field solution. The performance of an absorbing boundary condition (ABC) and an artificial absorber (a new concept) for terminating the finite element mesh was evaluated. This analysis is done in connection with the problem of scattering by a finite slot array in a thick ground plane. The two approximate mesh truncation schemes are compared with the exact finite element-boundary integral (FEM-BI) method in terms of accuracy and efficiency. It is demonstrated that both approximate truncation schemes yield reasonably accurate results even when the mesh is extended only 0.3 wavelengths away from the array aperture. However, the artificial absorber termination method leads to a substantially more efficient solution. Moreover, it is shown that the FEM-BI method remains quite competitive with the FEM-artificial absorber method when the FFT is used for computing the matrix-vector products in the iterative solution algorithm. These conclusions are indeed surprising and of major importance in electromagnetic simulations based on the finite element method.
NASA Astrophysics Data System (ADS)
Bai, Guang-Fu; Hu, Lin; Jiang, Yang; Tian, Jing; Zi, Yue-Jiao; Wu, Ting-Wei; Huang, Feng-Qin
2017-08-01
In this paper, a photonic microwave waveform generator based on a dual-parallel Mach-Zehnder modulator is proposed and experimentally demonstrated. In this reported scheme, only one radio frequency signal is used to drive the dual-parallel Mach-Zehnder modulator. Meanwhile, dispersive elements or filters are not required in the proposed scheme, which makes the scheme simpler and more stable. In this way, six variables can be adjusted. Through different combinations of these variables, basic waveforms with full duty and small duty cycle can be generated. Tunability of the generator can be achieved by adjusting the frequency of the RF signal and the optical carrier. The corresponding theoretical analysis and simulation have been conducted. With the guidance of theory and simulation, proof-of-concept experiments are carried out. The basic waveforms, including Gaussian, saw-up, and saw-down waveforms, with full duty and small duty cycle are generated at a repetition rate of 2 GHz. The theoretical and simulation results agree with the experimental results very well.
Efficient parallel resolution of the simplified transport equations in mixed-dual formulation
NASA Astrophysics Data System (ADS)
Barrault, M.; Lathuilière, B.; Ramet, P.; Roman, J.
2011-03-01
A reactivity computation consists of computing the highest eigenvalue of a generalized eigenvalue problem, for which an inverse power algorithm is commonly used. Very fine models are difficult to treat with our sequential solver, based on the simplified transport equations, in terms of memory consumption and computational time. A first implementation of a Lagrangian-based domain decomposition method leads to poor parallel efficiency because of an increase in the number of power iterations [1]. In order to obtain a high parallel efficiency, we improve the parallelization scheme by changing the location of the loop over the subdomains in the overall algorithm and by benefiting from the characteristics of the Raviart-Thomas finite element. The new parallel algorithm still allows us to locally adapt the numerical scheme (mesh, finite element order). However, it can be significantly optimized for the matching grid case. The good behavior of the new parallelization scheme is demonstrated for the matching grid case on several hundreds of nodes for computations based on a pin-by-pin discretization.
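The outer eigenvalue iteration referred to above can be sketched independently of the transport discretization. The code below runs a power iteration on the generalized problem A x = lambda B x in which each outer iteration requires one linear solve (the reason it is often called an inverse power method); dense random matrices stand in for the mixed-dual simplified-transport operators, and no domain decomposition is modelled.

    import numpy as np

    # Power iteration for the dominant eigenvalue of A x = lam * B x: iterate x <- solve(B, A x),
    # so each outer iteration costs one linear solve. Dense symmetric stand-ins replace the
    # mixed-dual transport operators of the abstract.

    def dominant_eigenpair(A, B, tol=1e-10, max_iter=500):
        x = np.ones(A.shape[0])
        lam = 0.0
        for _ in range(max_iter):
            y = np.linalg.solve(B, A @ x)          # the expensive inner solve
            lam_new = np.dot(x, y) / np.dot(x, x)  # power-iteration eigenvalue estimate
            x = y / np.linalg.norm(y)
            if abs(lam_new - lam) < tol * abs(lam_new):
                break
            lam = lam_new
        return lam_new, x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 50)); A = A @ A.T
    B = rng.standard_normal((50, 50)); B = B @ B.T + 50.0 * np.eye(50)
    lam, x = dominant_eigenpair(A, B)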
Investigation on improved Gabor order tracking technique and its applications
NASA Astrophysics Data System (ADS)
Pan, Min-Chun; Chiu, Chun-Ching
2006-08-01
The study proposes an improved Gabor order tracking (GOT) technique to cope with crossing-order/spectral components that cannot be effectively separated by using the original GOT scheme. The improvement aids both the reconstruction and interpretation of two crossing orders/spectra, such as an order associated with a transmission element and a structural resonance. The dual function of the Gabor elementary function can affect the precision of tracked orders; in the paper, its influence on the computed Gabor expansion coefficients is investigated. To apply the improved scheme in practical work, the separation and extraction of close-order components of vibration signals measured from a transmission-element test bench is illustrated by using both the GOT and Vold-Kalman filtering OT methods. Additionally, comparisons between these two schemes are summarized from the processing results. The other experimental work demonstrates the ranking of noise components from a riding electric scooter. The singled-out dominant noise sources can be referred to in subsequent design-remodeling tasks.
Novel Multiplexing Technique for Detector and Mixer Arrays
NASA Technical Reports Server (NTRS)
Karasik, Boris S.; McGrath, William R.
2001-01-01
Future submillimeter and far-infrared space telescopes will require large-format (many 1000's of elements) imaging detector arrays to perform state-of-the-art astronomical observations. A crucial issue related to a focal plane array is a readout scheme which is compatible with large numbers of cryogenically-cooled (typically < 1 K) detector elements. When the number of elements becomes of the order of thousands, the physical layout for individual readout amplifiers becomes nearly impossible to realize for practical systems. Another important concern is the large number of wires leading to a 0.1-0.3 K platform. In the case of superconducting transition edge sensors (TES), a scheme for time-division multiplexing of SQUID read-out amplifiers has been recently demonstrated. In this scheme the number of SQUIDs is equal to the number (N) of the detectors, but only one SQUID is turned on at a time. The SQUIDs are connected in series in each column of the array, so the number of wires leading to the amplifiers can be reduced, but it is still of the order of N. Another approach uses a frequency domain multiplexing scheme for the bolometer array. The bolometers are biased with ac currents whose frequencies are individual for each element and are much higher than the bolometer bandwidth. The output signals are connected in series in a summing loop which is coupled to a single SQUID amplifier. The total number of channels depends on the ratio between the SQUID bandwidth and the bolometer bandwidth and can be at least 100 according to the authors. An important concern about this technique is the contribution of the out-of-band Johnson noise, which is multiplied by a factor of N^(1/2) for each frequency channel. We propose a novel solution for large format arrays based on the Hadamard transform coding technique which requires only one amplifier to read out the entire array of potentially many 1000's of elements and uses approximately 10 wires between the cold stage and room temperature electronics. This can significantly reduce the complexity of the readout circuits.
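The Hadamard-transform readout can be illustrated with a generic encode/decode demonstration (plain Python, not the proposed cryogenic circuit): each measurement sums the detector outputs weighted by a row of +/-1 values from a Hadamard matrix, and the individual pixel signals are recovered with the inverse transform through a single readout chain.

    import numpy as np
    from scipy.linalg import hadamard

    # Hadamard-transform multiplexing: N detector signals are read out through one amplifier as
    # N coded sums (rows of a +/-1 Hadamard matrix) and then demultiplexed with the inverse
    # transform. Generic illustration only; noise, wiring, and the cryogenic hardware of the
    # proposal are not modelled.

    N = 16                                              # number of detector elements (power of two)
    H = hadamard(N)                                     # +/-1 coding matrix
    signals = np.random.default_rng(2).normal(size=N)   # true per-pixel signals

    measurements = H @ signals                          # one coded sum per readout frame
    recovered = (H.T @ measurements) / N                # H^T H = N * I, so this inverts the coding
    print(np.allclose(recovered, signals))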
NASA Astrophysics Data System (ADS)
Boscheri, Walter; Dumbser, Michael
2014-10-01
In this paper we present a new family of high order accurate Arbitrary-Lagrangian-Eulerian (ALE) one-step ADER-WENO finite volume schemes for the solution of nonlinear systems of conservative and non-conservative hyperbolic partial differential equations with stiff source terms on moving tetrahedral meshes in three space dimensions. A WENO reconstruction technique is used to achieve high order of accuracy in space, while an element-local space-time Discontinuous Galerkin finite element predictor on moving curved meshes is used to obtain a high order accurate one-step time discretization. Within the space-time predictor the physical element is mapped onto a reference element using a high order isoparametric approach, where the space-time basis and test functions are given by the Lagrange interpolation polynomials passing through a predefined set of space-time nodes. Since our algorithm is cell-centered, the final mesh motion is computed by using a suitable node solver algorithm. A rezoning step as well as a flattener strategy are used in some of the test problems to avoid mesh tangling or excessive element deformations that may occur when the computation involves strong shocks or shear waves. The ALE algorithm presented in this article belongs to the so-called direct ALE methods because the final Lagrangian finite volume scheme is based directly on a space-time conservation formulation of the governing PDE system, with the rezoned geometry taken already into account during the computation of the fluxes. We apply our new high order unstructured ALE schemes to the 3D Euler equations of compressible gas dynamics, for which a set of classical numerical test problems has been solved and for which convergence rates up to sixth order of accuracy in space and time have been obtained. We furthermore consider the equations of classical ideal magnetohydrodynamics (MHD) as well as the non-conservative seven-equation Baer-Nunziato model of compressible multi-phase flows with stiff relaxation source terms.
De Allegri, Manuela; Sanon, Mamadou; Bridges, John; Sauerborn, Rainer
2006-03-01
This paper presents a qualitative investigation of consumers' preferences regarding the single elements of a community-based health insurance (CBI) scheme recently implemented in a rural region in west Africa. The aim is to provide adequate policy guidance to decision makers in low and middle income countries by producing an in-depth understanding of how consumers' preferences may affect the decision to participate in such schemes. Although it has long been suggested that feeble levels of participation may very well be an expression of consumers' dissatisfaction with scheme design, little systematic effort has so far been channelled towards supporting such an argument with empirical evidence. Consumers' preferences were explored through 32 individual interviews with household heads. The analysis used the method of constant comparison and was conducted by two independent researchers. Data from 10 focus group discussions provided an additional valuable source of triangulation. Findings suggest that the decision to enrol is closely linked to whether the single elements of the scheme match consumers' needs and expectations. In particular, consumers justified their decision to join or not to join the insurance scheme in relation to their preference for the unit of enrolment, the premium level and the payment modalities, the benefit package, the health service provider network, and the CBI managerial structure. The discussion of the findings focuses on how understanding consumers' preferences and incorporating them in the design of a CBI scheme may result in increased participation rates, ensuring that poor populations gain better access to health services and enjoy greater protection against the cost of illness.
A Cross-Layer, Anomaly-Based IDS for WSN and MANET
Amouri, Amar; Manthena, Raju
2018-01-01
Intrusion detection system (IDS) design for mobile adhoc networks (MANET) is a crucial component for maintaining the integrity of the network. The need for rapid deployment of IDS capability with minimal data availability for training and testing is an important requirement of such systems, especially for MANETs deployed in highly dynamic scenarios, such as battlefields. This work proposes a two-level detection scheme for detecting malicious nodes in MANETs. The first level deploys dedicated sniffers working in promiscuous mode. Each sniffer utilizes a decision-tree-based classifier that generates quantities which we refer to as correctly classified instances (CCIs) every reporting time. In the second level, the CCIs are sent to an algorithmically run supernode that calculates quantities, which we refer to as the accumulated measure of fluctuation (AMoF) of the received CCIs for each node under test (NUT). A key concept that is used in this work is that the variability of the smaller size population which represents the number of malicious nodes in the network is greater than the variance of the larger size population which represents the number of normal nodes in the network. A linear regression process is then performed in parallel with the calculation of the AMoF for fitting purposes and to set a proper threshold based on the slope of the fitted lines. As a result, the malicious nodes are efficiently and effectively separated from the normal nodes. The proposed scheme is tested for various node velocities and power levels and shows promising detection performance even at low-power levels. The results presented also apply to wireless sensor networks (WSN) and represent a novel IDS scheme for such networks. PMID:29470446
A Cross-Layer, Anomaly-Based IDS for WSN and MANET.
Amouri, Amar; Morgera, Salvatore D; Bencherif, Mohamed A; Manthena, Raju
2018-02-22
Intrusion detection system (IDS) design for mobile adhoc networks (MANET) is a crucial component for maintaining the integrity of the network. The need for rapid deployment of IDS capability with minimal data availability for training and testing is an important requirement of such systems, especially for MANETs deployed in highly dynamic scenarios, such as battlefields. This work proposes a two-level detection scheme for detecting malicious nodes in MANETs. The first level deploys dedicated sniffers working in promiscuous mode. Each sniffer utilizes a decision-tree-based classifier that generates quantities which we refer to as correctly classified instances (CCIs) every reporting time. In the second level, the CCIs are sent to an algorithmically run supernode that calculates quantities, which we refer to as the accumulated measure of fluctuation (AMoF) of the received CCIs for each node under test (NUT). A key concept that is used in this work is that the variability of the smaller size population which represents the number of malicious nodes in the network is greater than the variance of the larger size population which represents the number of normal nodes in the network. A linear regression process is then performed in parallel with the calculation of the AMoF for fitting purposes and to set a proper threshold based on the slope of the fitted lines. As a result, the malicious nodes are efficiently and effectively separated from the normal nodes. The proposed scheme is tested for various node velocities and power levels and shows promising detection performance even at low-power levels. The results presented also apply to wireless sensor networks (WSN) and represent a novel IDS scheme for such networks.
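A hedged sketch of the second-level logic described in the abstract is given below: accumulate a measure of fluctuation of each node's reported CCIs, fit a straight line, and separate nodes by the slope of the fit. The specific fluctuation measure (cumulative absolute deviation from a running mean), the synthetic CCI series, and the names used here are illustrative assumptions, not the paper's exact definitions.

    import numpy as np

    # Illustrative second-level detection: for each node under test, accumulate a fluctuation
    # measure of its CCIs over time, fit a least-squares line, and compare slopes. The fluctuation
    # measure below is an assumption made for this sketch.

    def amof(cci_series):
        cci = np.asarray(cci_series, dtype=float)
        running_mean = np.cumsum(cci) / np.arange(1, cci.size + 1)
        return np.cumsum(np.abs(cci - running_mean))

    def slope(series):
        t = np.arange(series.size)
        return np.polyfit(t, series, 1)[0]          # slope of the fitted line

    rng = np.random.default_rng(3)
    normal_node = 0.9 + 0.02 * rng.standard_normal(50)      # stable, high CCI reports
    malicious_node = 0.6 + 0.15 * rng.standard_normal(50)   # lower, more variable CCI reports

    for name, series in [("normal", normal_node), ("malicious", malicious_node)]:
        print(name, slope(amof(series)))
    # The more variable node accumulates fluctuation faster (larger slope), which is the
    # separation the scheme exploits.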
High-order asynchrony-tolerant finite difference schemes for partial differential equations
NASA Astrophysics Data System (ADS)
Aditya, Konduri; Donzis, Diego A.
2017-12-01
Synchronizations of processing elements (PEs) in massively parallel simulations, which arise due to communication or load imbalances between PEs, significantly affect the scalability of scientific applications. We have recently proposed a method based on finite-difference schemes to solve partial differential equations in an asynchronous fashion - synchronization between PEs is relaxed at a mathematical level. While standard schemes can maintain their stability in the presence of asynchrony, their accuracy is drastically affected. In this work, we present a general methodology to derive asynchrony-tolerant (AT) finite difference schemes of arbitrary order of accuracy, which can maintain their accuracy when synchronizations are relaxed. We show that there are several choices available in selecting a stencil to derive these schemes and discuss their effect on numerical and computational performance. We provide a simple classification of schemes based on the stencil and derive schemes that are representative of different classes. Their numerical error is rigorously analyzed within a statistical framework to obtain the overall accuracy of the solution. Results from numerical experiments are used to validate the performance of the schemes.
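The accuracy problem that motivates the AT schemes can be reproduced with a toy experiment (a sketch of the issue only, not the paper's derivation): evaluate a standard second-order central difference of a smooth, time-evolving field, once with synchronized neighbor values and once with one neighbor lagging by a time step, as it would if a message from an adjacent PE arrived late.

    import numpy as np

    # Second derivative of u(x, t) = sin(x - t) by a central difference, with and without a
    # one-step delay in the left neighbor value. The delayed value introduces an O(dt/dx**2)
    # error that dominates the truncation error; AT schemes are designed to remove this effect.

    def u(x, t):
        return np.sin(x - t)

    dx, dt, x0, t = 1e-2, 1e-5, 0.3, 1.0
    exact = -np.sin(x0 - t)                                                # exact u_xx

    sync = (u(x0 - dx, t) - 2 * u(x0, t) + u(x0 + dx, t)) / dx**2
    delayed = (u(x0 - dx, t - dt) - 2 * u(x0, t) + u(x0 + dx, t)) / dx**2  # stale left neighbor

    print(abs(sync - exact), abs(delayed - exact))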
NASA Astrophysics Data System (ADS)
Kumari, Komal; Donzis, Diego
2017-11-01
Highly resolved computational simulations on massively parallel machines are critical in understanding the physics of a vast number of complex phenomena in nature governed by partial differential equations. Simulations at extreme levels of parallelism present many challenges, with communication between processing elements (PEs) being a major bottleneck. In order to fully exploit the computational power of exascale machines one needs to devise numerical schemes that relax global synchronizations across PEs. Such asynchronous computation, however, has a degrading effect on the accuracy of standard numerical schemes. We have developed asynchrony-tolerant (AT) schemes that maintain their order of accuracy despite relaxed communications. We show, analytically and numerically, that these schemes retain their numerical properties with multi-step higher order temporal Runge-Kutta schemes. We also show that, for a range of optimized parameters, the computation time and error for AT schemes are less than those of their synchronous counterparts. The stability of the AT schemes, which depends upon the history and random nature of the delays, is also discussed. Support from NSF is gratefully acknowledged.
A Dynamic Finite Element Method for Simulating the Physics of Faults Systems
NASA Astrophysics Data System (ADS)
Saez, E.; Mora, P.; Gross, L.; Weatherley, D.
2004-12-01
We introduce a dynamic Finite Element method using a novel high level scripting language to describe the physical equations, boundary conditions and time integration scheme. The library we use is the parallel Finley library: a finite element kernel library, designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes, and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208 processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We consider faults as material discontinuities (the displacement, velocity, and acceleration fields are discontinuous at the fault), with elastic behavior. The stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time, using the Saint Verlat scheme. Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models. We adapt the 2D model for simulating the dynamics of parallel fault systems, described in that work, to the finite element method. The approach uses a frictional relation along faults that is slip and slip-rate dependent, and the numerical integration approach introduced by Mora and Place in the lattice solid model. In order to illustrate the new finite element model, single and multi-fault simulation examples are presented.
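The explicit elastodynamic update can be pictured with a one-dimensional analogue written in plain NumPy (not the escript/Finley interfaces, and without the fault discontinuities or the friction law): a central-difference-in-time step of the elastic wave equation.

    import numpy as np

    # Explicit central-difference update for the 1-D elastic wave equation rho * u_tt = E * u_xx
    # with fixed ends, as a stand-in for the explicit elastodynamic step described in the
    # abstract. The fault treatment and the slip- and slip-rate-dependent friction are omitted.

    n = 400
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    rho, E = 1.0, 1.0
    c = np.sqrt(E / rho)
    dt = 0.5 * dx / c                              # CFL-limited explicit time step

    u_prev = np.exp(-2000.0 * (x - 0.5) ** 2)      # initial displacement pulse
    u = u_prev.copy()                              # zero initial velocity

    for _ in range(200):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u_next = 2 * u - u_prev + dt**2 * (E / rho) * lap   # endpoints stay fixed at zero
        u_prev, u = u, u_next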
Dispersion analysis of the Pn -Pn-1DG mixed finite element pair for atmospheric modelling
NASA Astrophysics Data System (ADS)
Melvin, Thomas
2018-02-01
Mixed finite element methods provide a generalisation of staggered grid finite difference methods with a framework to extend the method to high orders. The ability to generate a high order method is appealing for applications on the kind of quasi-uniform grids that are popular for atmospheric modelling, so that the method retains an acceptable level of accuracy even around special points in the grid. The dispersion properties of such schemes are important to study as they provide insight into the numerical adjustment to imbalance that is an important component in atmospheric modelling. This paper extends the recent analysis of the P2 - P1DG pair, that is a quadratic continuous and linear discontinuous finite element pair, to higher polynomial orders and also spectral element type pairs. In common with the previously studied element pair, and also with other schemes such as the spectral element and discontinuous Galerkin methods, increasing the polynomial order is found to provide a more accurate dispersion relation for the well resolved part of the spectrum but at the cost of a number of unphysical spectral gaps. The effects of these spectral gaps are investigated and shown to have a varying impact depending upon the width of the gap. Finally, the tensor product nature of the finite element spaces is exploited to extend the dispersion analysis into two-dimensions.
Balichev, Iu
1997-01-01
The study examined the characteristics of visual perception of technical drawings and schemes in advanced and underachieving students mastering the specialties of "building and architecture", "hydroconstruction", "transport construction", and "geodesy". The time needed by advanced and underachieving students to identify different elements of a drawing or scheme was recorded, including such attributes of the drawing as orientation, length, curvature of the lines, and the boundaries between them, as well as the time needed to identify specific designations, symbols, groups of symbols, and elements of the sketch, from the simple to the complex. The results revealed that advanced students perceived (identified) the different elements of a technical drawing very rapidly, almost automatically, whereas in underachieving students this process was reliably slower. It was demonstrated that students distinguished by more rapid perception of the different elements of the drawing (the advanced ones) solved technical tasks faster and more accurately than those who identified the sought elements more slowly (the underachieving ones). Some other individual characteristics of the visual perception of the elements of a technical drawing and its properties were also established in advanced and underachieving students mastering the professions investigated.
Gonzales, Matthew J.; Sturgeon, Gregory; Krishnamurthy, Adarsh; Hake, Johan; Jonas, René; Stark, Paul; Rappel, Wouter-Jan; Narayan, Sanjiv M.; Zhang, Yongjie; Segars, W. Paul; McCulloch, Andrew D.
2013-01-01
High-order cubic Hermite finite elements have been valuable in modeling cardiac geometry, fiber orientations, biomechanics, and electrophysiology, but their use in solving three-dimensional problems has been limited to ventricular models with simple topologies. Here, we utilized a subdivision surface scheme and derived a generalization of the “local-to-global” derivative mapping scheme of cubic Hermite finite elements to construct bicubic and tricubic Hermite models of the human atria with extraordinary vertices from computed tomography images of a patient with atrial fibrillation. To an accuracy of 0.6 millimeters, we were able to capture the left atrial geometry with only 142 bicubic Hermite finite elements, and the right atrial geometry with only 90. The left and right atrial bicubic Hermite meshes were G1 continuous everywhere except in the one-neighborhood of extraordinary vertices, where the mean dot products of normals at adjacent elements were 0.928 and 0.925. We also constructed two biatrial tricubic Hermite models and defined fiber orientation fields in agreement with diagrammatic data from the literature using only 42 angle parameters. The meshes all have good quality metrics, uniform element sizes, and elements with aspect ratios near unity, and are shared with the public. These new methods will allow for more compact and efficient patient-specific models of human atrial and whole heart physiology. PMID:23602918
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
1993-01-01
A new numerical framework for solving conservation laws is being developed. This new approach differs substantially in both concept and methodology from the well-established methods--i.e., finite difference, finite volume, finite element, and spectral methods. It is conceptually simple and designed to avoid several key limitations to the above traditional methods. An explicit model scheme for solving a simple 1-D unsteady convection-diffusion equation is constructed and used to illuminate major differences between the current method and those mentioned above. Unexpectedly, its amplification factors for the pure convection and pure diffusion cases are identical to those of the Leapfrog and the DuFort-Frankel schemes, respectively. Also, this explicit scheme and its Navier-Stokes extension have the unusual property that their stabilities are limited only by the CFL condition. Moreover, despite the fact that it does not use any flux-limiter or slope-limiter, the Navier-Stokes solver is capable of generating highly accurate shock tube solutions with shock discontinuities being resolved within one mesh interval. An accurate Euler solver also is constructed through another extension. It has many unusual properties, e.g., numerical diffusion at all mesh points can be controlled by a set of local parameters.
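For reference, the DuFort-Frankel scheme whose amplification factor the new scheme reproduces in the pure-diffusion case can be written in a few lines; this is the standard textbook three-level scheme, shown only for comparison and not the author's space-time method.

    import numpy as np

    # DuFort-Frankel scheme for u_t = nu * u_xx:
    #   u_new[j] = ((1 - 2r) * u_old[j] + 2r * (u[j+1] + u[j-1])) / (1 + 2r),  r = nu*dt/dx**2.
    # A simple first step (copying the initial data) is used here, which is adequate for a sketch.

    nu, n = 1.0, 101
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    dt = 0.4 * dx**2 / nu
    r = nu * dt / dx**2

    u_old = np.sin(np.pi * x)                 # solution at time level n-1
    u = u_old.copy()                          # solution at time level n
    for _ in range(500):
        u_new = u.copy()
        u_new[1:-1] = ((1 - 2 * r) * u_old[1:-1] + 2 * r * (u[2:] + u[:-2])) / (1 + 2 * r)
        u_old, u = u, u_new                   # endpoints remain at the Dirichlet value 0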
Discrete ellipsoidal statistical BGK model and Burnett equations
NASA Astrophysics Data System (ADS)
Zhang, Yu-Dong; Xu, Ai-Guo; Zhang, Guang-Cai; Chen, Zhi-Hua; Wang, Pei
2018-06-01
A new discrete Boltzmann model, the discrete ellipsoidal statistical Bhatnagar-Gross-Krook (ES-BGK) model, is proposed to simulate nonequilibrium compressible flows. Compared with the original discrete BGK model, the discrete ES-BGK model has a flexible Prandtl number. For the discrete ES-BGK model at the Burnett level, two kinds of discrete velocity models are introduced and the relations between the nonequilibrium quantities and the viscous stress and heat flux at the Burnett level are established. The model is verified via four benchmark tests. In addition, a new idea is introduced to recover the actual distribution function from the macroscopic quantities and their space derivatives. The recovery scheme works not only for discrete Boltzmann simulations but also for hydrodynamic ones, for example, those based on the Navier-Stokes or the Burnett equations.
NASA Technical Reports Server (NTRS)
1999-01-01
This document describes the design of the leading edge suction system for flight demonstration of hybrid laminar flow control on the Boeing 757 airplane. The exterior pressures on the wing surface and the required suction quantity and distribution were determined in previous work. A system consisting of porous skin, sub-surface spanwise passages ("flutes"), pressure regulating screens and valves, collection fittings, ducts and a turbocompressor was defined to provide the required suction flow. Provisions were also made for flexible control of suction distribution and quantity for HLFC research purposes. Analysis methods for determining pressure drops and flow for transpiration heating for thermal anti-icing are defined. The control scheme used to observe and modulate suction distribution in flight is described.
Fast Pixel Buffer For Processing With Lookup Tables
NASA Technical Reports Server (NTRS)
Fisher, Timothy E.
1992-01-01
Proposed scheme for buffering data on intensities of picture elements (pixels) of image increases rate of processing beyond that attainable when data are read, one pixel at a time, from main image memory. Scheme applied in design of specialized image-processing circuitry. Intended to optimize performance of processor in which electronic equivalent of address-lookup table is used to address those pixels in main image memory required for processing.
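The table-lookup processing that the buffer is meant to feed can be expressed as a short software analogue (a generic demonstration, not the proposed circuitry):

    import numpy as np

    # Software analogue of lookup-table pixel processing: every 8-bit intensity is replaced by a
    # precomputed value, here a simple gamma correction. The proposed hardware gains speed by
    # buffering several pixels per memory access; that buffering is not modelled here.

    lut = np.array([min(255, int(255 * (v / 255.0) ** 0.5)) for v in range(256)], dtype=np.uint8)
    image = np.random.default_rng(4).integers(0, 256, size=(64, 64), dtype=np.uint8)
    processed = lut[image]                  # fancy indexing applies the table to every pixel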
ICASE Semiannual Report, October 1, 1992 through March 31, 1993
1993-06-01
NUMERICAL MATHEMATICS Saul Abarbanel Further results have been obtained regarding long time integration of high order compact finite difference schemes...overall accuracy. These problems are common to all numerical methods: finite differences, finite elements and spectral methods. It should be noted that...fourth order finite difference scheme. * In the same case, the D6 wavelets provide a sixth order finite difference, noncompact formula. * The wavelets
High-Order Hyperbolic Residual-Distribution Schemes on Arbitrary Triangular Grids
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza; Nishikawa, Hiroaki
2015-01-01
In this paper, we construct high-order hyperbolic residual-distribution schemes for general advection-diffusion problems on arbitrary triangular grids. We demonstrate that the second-order accuracy of the hyperbolic schemes can be greatly improved by requiring the scheme to preserve exact quadratic solutions. We also show that the improved second-order scheme can be easily extended to third order by further requiring exactness for cubic solutions. We construct these schemes based on the LDA and the SUPG methodology formulated in the framework of the residual-distribution method. For both second- and third-order schemes, we construct a fully implicit solver using the exact residual Jacobian of the second-order scheme, and demonstrate rapid convergence, with 10-15 iterations reducing the residuals by 10 orders of magnitude. We demonstrate also that these schemes can be constructed based on a separate treatment of the advective and diffusive terms, which paves the way for the construction of hyperbolic residual-distribution schemes for the compressible Navier-Stokes equations. Numerical results show that these schemes produce exceptionally accurate and smooth solution gradients on highly skewed and anisotropic triangular grids, including curved boundary problems, using linear elements. We also present a Fourier analysis performed on the constructed linear system and show that an under-relaxation parameter is needed for stabilization of the Gauss-Seidel relaxation.
Nonnegative methods for bilinear discontinuous differencing of the S N equations on quadrilaterals
Maginot, Peter G.; Ragusa, Jean C.; Morel, Jim E.
2016-12-22
Historically, matrix lumping and ad hoc flux fixups have been the only methods used to eliminate or suppress negative angular flux solutions associated with the unlumped bilinear discontinuous (UBLD) finite element spatial discretization of the two-dimensional S N equations. Though matrix lumping inhibits negative angular flux solutions of the S N equations, it does not guarantee strictly positive solutions. In this paper, we develop and define a strictly nonnegative, nonlinear, Petrov-Galerkin finite element method that fully preserves the bilinear discontinuous spatial moments of the transport equation. Additionally, we define two ad hoc fixups that maintain particle balance and explicitly set negative nodes of the UBLD finite element solution to zero but use different auxiliary equations to fully define their respective solutions. We assess the ability to inhibit negative angular flux solutions and the accuracy of every spatial discretization that we consider using a glancing void test problem with a discontinuous solution known to stress numerical methods. Though significantly more computationally intense, the nonlinear Petrov-Galerkin scheme results in a strictly nonnegative solution and is a more accurate solution than all the other methods considered. One fixup, based on shape preserving, results in a strictly nonnegative final solution but has increased numerical diffusion relative to the Petrov-Galerkin scheme and is less accurate than the UBLD solution. The second fixup, which preserves as many spatial moments as possible while setting negative values of the unlumped solution to zero, is less accurate than the Petrov-Galerkin scheme but is more accurate than the other fixup. However, it fails to guarantee a strictly nonnegative final solution. As a result, the fully lumped bilinear discontinuous finite element solution is the least accurate method, with significantly more numerical diffusion than the Petrov-Galerkin scheme and both fixups.
The Use of Non-Standard Devices in Finite Element Analysis
NASA Technical Reports Server (NTRS)
Schur, Willi W.; Broduer, Steve (Technical Monitor)
2001-01-01
A general mathematical description of the response behavior of thin-skin pneumatic envelopes and many other membrane and cable structures produces under-constrained systems that pose severe difficulties to analysis. These systems are mobile, and the general mathematical description exposes the mobility. Yet the response behavior of special under-constrained structures under special loadings can be accurately predicted using a constrained mathematical description. The static response behavior of systems that are infinitesimally mobile, such as a non-slack membrane subtended from a rigid or elastic boundary frame, can be analyzed using such a general mathematical description as afforded by the non-linear finite element method with an implicit solution scheme, provided the incremental loading is guided through a suitable path. Similarly, if such structures are assembled with structural lack of fit that provides suitable self-stress, then dynamic response behavior can be predicted by the non-linear finite element method and an implicit solution scheme. An explicit solution scheme is available for evolution problems; such a scheme can be used, via the method of dynamic relaxation, to obtain the solution to a static problem. In some sense, pneumatic envelopes and many other compliant structures can be said to have a destiny under a specified loading system. What that means to the analyst is that what happens on the evolution path of the solution is irrelevant, as long as equilibrium is achieved at destiny under full load and that equilibrium is stable in the vicinity of that load. The purpose of this paper is to alert practitioners to the fact that non-standard procedures in finite element analysis are useful and can be legitimate, although they burden their users with the requirement to use special caution. Some interesting findings that are useful to the US Scientific Balloon Program and that could not be obtained without non-standard techniques are presented.
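As a generic illustration of dynamic relaxation (not the authors' procedure), the sketch below marches a damped pseudo-dynamic system until the out-of-balance force vanishes; the converged state is the static equilibrium regardless of the path taken. The residual callable, mass, time step, and damping factor are placeholder assumptions.

```python
import numpy as np

def dynamic_relaxation(residual, x0, mass=1.0, dt=1e-3, damping=0.05,
                       tol=1e-8, max_steps=200000):
    """Generic dynamic-relaxation loop.

    `residual(x)` returns the out-of-balance force vector at configuration x.
    A damped pseudo-dynamic system is marched in time; when the residual
    vanishes, x is a static equilibrium ("destiny" in the paper's wording).
    """
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(max_steps):
        f = residual(x)
        if np.linalg.norm(f) < tol:
            break
        a = f / mass
        v = (1.0 - damping) * v + a * dt    # damped velocity update
        x = x + v * dt
    return x
```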
Identity Bargaining: A Policy Systems Research Model of Career Development.
ERIC Educational Resources Information Center
Slawski, Carl
A detailed, general and comprehensive accounting scheme is presented, consisting of nine stages of career development, three major sets of elements contributing to career choice (in terms of personal, cultural and situational roles), and 20 hypotheses relating the separate elements. Implicit in the model is a novel procedure and method for…
SIMULATIONS OF 2D AND 3D THERMOCAPILLARY FLOWS BY A LEAST-SQUARES FINITE ELEMENT METHOD. (R825200)
Numerical results for time-dependent 2D and 3D thermocapillary flows are presented in this work. The numerical algorithm is based on the Crank-Nicolson scheme for time integration, Newton's method for linearization, and a least-squares finite element method, together with a matri...
Fast, Massively Parallel Data Processors
NASA Technical Reports Server (NTRS)
Heaton, Robert A.; Blevins, Donald W.; Davis, ED
1994-01-01
The proposed fast, massively parallel data processor contains an 8x16 array of processing elements with an efficient interconnection scheme and options for flexible local control. Processing elements communicate with each other on an "X" interconnection grid and with external memory via a high-capacity input/output bus. This approach to conditional operation nearly doubles the speed of various arithmetic operations.
NASA Astrophysics Data System (ADS)
Zhang, Zhi-Qian; Liu, G. R.; Khoo, Boo Cheong
2013-02-01
A three-dimensional immersed smoothed finite element method (3D IS-FEM) using four-node tetrahedral elements is proposed to solve 3D fluid-structure interaction (FSI) problems. The 3D IS-FEM is able to determine accurately the physical deformation of nonlinear solids placed within an incompressible viscous fluid governed by the Navier-Stokes equations. The method employs the semi-implicit characteristic-based split scheme to solve the fluid flows and smoothed finite element methods to calculate the transient dynamic responses of the nonlinear solids based on explicit time integration. To impose the FSI conditions, a novel, effective and sufficiently general technique via simple linear interpolation is presented based on Lagrangian fictitious fluid meshes coinciding with the moving and deforming solid meshes. In comparisons to the referenced works, including experiments, it is clear that the proposed 3D IS-FEM ensures stability of the scheme with second-order spatial convergence, and the IS-FEM is fairly independent of the mesh size ratio over a wide range.
A Viable Scheme for Elemental Extraction and Purification Using In-Situ Planetary Resources
NASA Technical Reports Server (NTRS)
Sen, S.; Schofield, E.; ODell, S.; Ray, C. S.
2005-01-01
NASA's new strategic direction includes establishing a self-sufficient, affordable and safe human and robotic presence outside low earth orbit. Some of the items required for a self-sufficient extra-terrestrial habitat will include materials for power generation (e.g. Si for solar cells) and habitat construction (e.g. Al, Fe, and Ti). In this paper we present a viable elemental extraction and refining process from in-situ regolith which would be optimally continuous, robotically automated, and require a minimum amount of astronaut supervision and containment facilities. The approach is based on using a concentrated heat source and translating the sample geometry to enable simultaneous oxide reduction and elemental refining. Preliminary results are presented to demonstrate that the proposed zone refining process is capable of segregating or refining important elements such as Si (for solar cell fabrication) and Fe (for habitat construction). A conceptual scheme is presented whereby such a process could be supported by solar energy and a precursor robotic mission on the surface of the moon.
Development of a morphological convolution operator for bearing fault detection
NASA Astrophysics Data System (ADS)
Li, Yifan; Liang, Xihui; Liu, Weiwei; Wang, Yan
2018-05-01
This paper presents a novel signal processing scheme, namely morphological convolution operator (MCO) lifted morphological undecimated wavelet (MUDW), for rolling element bearing fault detection. In this scheme, a MCO is first designed to fully utilize the advantage of the closing & opening gradient operator and the closing-opening & opening-closing gradient operator for feature extraction as well as the merit of excellent denoising characteristics of the convolution operator. The MCO is then introduced into MUDW for the purpose of improving the fault detection ability of the reported MUDWs. Experimental vibration signals collected from a train wheelset test rig and the bearing data center of Case Western Reserve University are employed to evaluate the effectiveness of the proposed MCO lifted MUDW on fault detection of rolling element bearings. The results show that the proposed approach has a superior performance in extracting fault features of defective rolling element bearings. In addition, comparisons are performed between two reported MUDWs and the proposed MCO lifted MUDW. The MCO lifted MUDW outperforms both of them in detection of outer race faults and inner race faults of rolling element bearings.
OXIDATION OF TRANSURANIC ELEMENTS
Moore, R.L.
1959-02-17
A method is reported for oxidizing neptunium or plutonium in the presence of cerous values without also oxidizing the cerous values. The method consists in treating an aqueous 1N nitric acid solution, containing such cerous values together with the trivalent transuranic elements, with a quantity of hydrogen peroxide stoichiometrically sufficient to oxidize the transuranic values to the hexavalent state, and digesting the solution at room temperature.
Chemistry of berkelium: A review
NASA Astrophysics Data System (ADS)
Hobart, D. E.; Peterson, J. R.
Element 97 was first produced in December 1949 by the bombardment of americium-241 with accelerated alpha particles. This new element was named berkelium (Bk) after Berkeley, California, the city of its discovery. In the 36 years since the discovery of Bk, a substantial amount of knowledge concerning the physicochemical properties of this relatively scarce transplutonium element has been acquired. All of the Bk isotopes of mass numbers 240 and 242 through 251 are presently known, but only berkelium-249 is available in sufficient quantities for bulk chemical studies. About 0.7 gram of this isotope has been isolated at the HFIR/TRU Complex in Oak Ridge, Tennessee over the last 18 years. Over the same time period, the scale of experimental work using berkelium-249 has increased from the tracer level to bulk studies at the microgram level to solution and solid state investigations with milligram quantities. Extended knowledge of the physicochemical behavior of berkelium is important in its own right, because Bk is the first member of the second half of the actinide series. In addition, such information should enable more accurate extrapolations to the predicted behavior of heavier elements for which experimental studies are severely limited by lack of material and/or by intense radioactivity.
NASA Technical Reports Server (NTRS)
Giles, G. L.; Rogers, J. L., Jr.
1982-01-01
The implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of the system are also discussed.
Forming of the Most Convenient Bent Constructional Elements with a Permissible Strength Given
NASA Astrophysics Data System (ADS)
Fligiel, M.
2014-11-01
In the present study, the limiting values of the criterion quantities for optimal forming of the most convenient bent supporting structure are determined for the case of static loads within the range of applicability of Hooke's law. As the criteria of the most convenient constructional element, the following were accepted: the smallest length of action of the internal forces, as well as equal potential and equal gradient of the potential energy of elastic deformation at each point of the constructional element.
Maintenance Enterprise Resource Planning: Information Value Among Supply Chain Elements
2014-04-30
is the Economic Order Quantity (EOQ) model, Production Order Quantity Cost, and Quantity Discount Model (Heizer & Render, 2007, pp. 489-490) ... demand for another item. Following an aircraft, the items to assemble the aircraft are dependent demand (Heizer & Render, 2007, pp. 562-563). MERP ... Heizer, J., & Render, B. (2007). Principles of Operations Management (7th ed., p. 684). Upper Saddle River
Algorithms for elasto-plastic-creep postbuckling
NASA Technical Reports Server (NTRS)
Padovan, J.; Tovichakchaikul, S.
1984-01-01
This paper considers the development of an improved constrained time-stepping scheme which can efficiently and stably handle the pre- and post-buckling behavior of general structures subject to high temperature environments. Due to the generality of the scheme, the combined influence of elastic-plastic behavior can be handled in addition to time-dependent creep effects. This includes structural problems exhibiting indefinite tangent properties. To illustrate the capability of the procedure, several benchmark problems employing finite element analyses are presented. These demonstrate the numerical efficiency and stability of the scheme. Additionally, the potential influence of complex creep histories on the buckling characteristics is considered.
NASA Astrophysics Data System (ADS)
Gusev, A. A.; Chuluunbaatar, O.; Vinitsky, S. I.; Derbov, V. L.; Hai, L. L.; Kazaryan, E. M.; Sarkisyan, H. A.
2018-04-01
We present new calculation schemes using high-order finite element method implemented on unstructured grids with triangle elements for solving boundary-value problems that describe axially symmetric quantum dots. The efficiency of the algorithms and software is demonstrated by benchmark calculations of the energy spectrum, the envelope eigenfunctions of electron, hole and exciton states, and the direct interband light absorption in conical and spheroidal impenetrable quantum dots.
Crack Turning and Arrest Mechanisms for Integral Structure
NASA Technical Reports Server (NTRS)
Pettit, Richard; Ingraffea, Anthony
1999-01-01
In the course of several years of research efforts to predict crack turning and flapping in aircraft fuselage structures and other problems related to crack turning, the 2nd order maximum tangential stress theory has been identified as the theory most capable of predicting the observed test results. This theory requires knowledge of a material-specific characteristic length, and also a computation of the stress intensity factors and the T-stress, or second order term, in the asymptotic stress field in the vicinity of the crack tip. A characteristic length, r(sub c), is proposed for ductile materials pertaining to the onset of plastic instability, as opposed to the void spacing theories espoused by previous investigators. For the plane stress case, an approximate estimate of r(sub c) is obtained from the asymptotic field for strain hardening materials given by Hutchinson, Rice and Rosengren (HRR). A previous study using high order finite element methods to calculate T-stresses by contour integrals resulted in extremely high accuracy for selected test specimen geometries, and a theoretical error estimation parameter was defined. In the present study, it is shown that a large portion of the error in finite element computations of both K and T is systematic, and can be corrected after the initial solution if the finite element implementation utilizes a similar crack tip discretization scheme for all problems. This scheme is applied for two-dimensional problems to a p-version finite element code, showing that sufficiently accurate values of both K(sub I) and T can be obtained with fairly low order elements if correction is used. T-stress correction coefficients are also developed for the singular crack tip rosette utilized in the adaptive mesh finite element code FRANC2D, and shown to reduce the error in the computed T-stress significantly. Stress intensity factor correction was not attempted for FRANC2D because it employs a highly accurate quarter-point scheme to obtain stress intensity factors.
Geometry of the theory space in the exact renormalization group formalism
NASA Astrophysics Data System (ADS)
Pagani, C.; Sonoda, H.
2018-01-01
We consider the theory space as a manifold whose coordinates are given by the couplings appearing in the Wilson action. We discuss how to introduce connections on this theory space. A particularly intriguing connection can be defined directly from the solution of the exact renormalization group (ERG) equation. We advocate a geometric viewpoint that lets us define straightforwardly physically relevant quantities invariant under the changes of a renormalization scheme.
Optical image security using Stokes polarimetry of spatially variant polarized beam
NASA Astrophysics Data System (ADS)
Fatima, Areeba; Nishchal, Naveen K.
2018-06-01
We propose a novel security scheme that uses a vector beam characterized by a spatially variant polarization distribution. The vector beam is generated such that its helical components carry tailored phases corresponding to the image or images to be encrypted. The tailoring of the phase is done by employing the modified Gerchberg-Saxton algorithm for phase retrieval. Stokes parameters for the final vector beam are evaluated and used to construct the ciphertext and one of the keys. The advantage of the proposed scheme is that it generates real ciphertext and keys, which are easier to transmit and store than complex quantities. Moreover, the known-plaintext attack is not applicable to this system. As a proof of concept, simulation results are presented for securing single and double gray-scale images.
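The phase-retrieval step can be illustrated with the plain Gerchberg-Saxton loop below; the paper uses a modified variant, and the Fourier-transform propagation model here is an assumption for illustration only.

```python
import numpy as np

def gerchberg_saxton(target_amp, source_amp, n_iter=200, seed=0):
    """Plain Gerchberg-Saxton phase retrieval (not the authors' modified variant).

    Finds a phase so that a field with amplitude `source_amp` propagates
    (modeled here by an FFT) to a field with amplitude `target_amp`.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, source_amp.shape)
    for _ in range(n_iter):
        field = source_amp * np.exp(1j * phase)
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))   # impose target amplitude
        back = np.fft.ifft2(far)
        phase = np.angle(back)                          # keep retrieved phase
    return phase
```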
Application of an efficient hybrid scheme for aeroelastic analysis of advanced propellers
NASA Technical Reports Server (NTRS)
Srivastava, R.; Sankar, N. L.; Reddy, T. S. R.; Huff, D. L.
1989-01-01
An efficient 3-D hybrid scheme is applied for solving the Euler equations to analyze advanced propellers. The scheme treats the spanwise direction semi-explicitly and the other two directions implicitly, without affecting the accuracy compared to a fully implicit scheme. This leads to a reduction in computer time and memory requirements. The calculated power coefficients for two advanced propellers, SR3 and SR7L, at various advance ratios showed good correlation with experiment. Spanwise distributions of elemental power coefficient and steady pressure coefficient differences also showed good agreement with experiment. A study of the effect of structural flexibility on the performance of the advanced propellers showed that structural deformation due to centrifugal and aerodynamic loading should be included for better correlation.
The alpha(3) Scheme - A Fourth-Order Neutrally Stable CESE Solver
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2007-01-01
The conservation element and solution element (CESE) development is driven by a belief that a solver should (i) enforce conservation laws in both space and time, and (ii) be built from a non-dissipative (i.e., neutrally stable) core scheme so that the numerical dissipation can be controlled effectively. To provide a solid foundation for a systematic CESE development of high order schemes, in this paper we describe a new 4th-order neutrally stable CESE solver of the advection equation ∂u/∂t + α ∂u/∂x = 0. The space-time stencil of this two-level explicit scheme is formed by one point at the upper time level and three points at the lower time level. Because it is associated with three independent mesh variables u_j^n, (u_x)_j^n, and (u_xx)_j^n (the numerical analogues of u, ∂u/∂x, and ∂²u/∂x², respectively) and four equations per mesh point, the new scheme is referred to as the alpha(3) scheme. As in the case of other similar CESE neutrally stable solvers, the alpha(3) scheme enforces conservation laws in space-time locally and globally, and it has the basic, forward marching, and backward marching forms. These forms are equivalent and satisfy a space-time inversion (STI) invariant property which is shared by the advection equation. Based on the concept of STI invariance, a set of algebraic relations is developed and used to prove that the alpha(3) scheme must be neutrally stable when it is stable. Moreover, it is proved rigorously that all three amplification factors of the alpha(3) scheme are of unit magnitude for all phase angles if |ν| ≤ 1/2, where ν = α Δt/Δx. This theoretical result is consistent with the numerical stability condition |ν| ≤ 1/2. Through numerical experiments, it is established that the alpha(3) scheme generally is (i) 4th-order accurate for the mesh variables u_j^n and (u_x)_j^n, and (ii) 2nd-order accurate for (u_xx)_j^n. However, in some exceptional cases, the scheme can achieve perfect accuracy aside from round-off errors.
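As a quick illustration of the quantities in the stability statement above, the Python sketch below computes the Courant number ν = αΔt/Δx and checks whether a supplied amplification matrix has only unit-magnitude eigenvalues (neutral stability). The grid values and the rotation-matrix example are hypothetical and are not taken from the paper.

```python
import numpy as np

def courant_number(alpha, dt, dx):
    """Courant number nu = alpha * dt / dx appearing in the bound |nu| <= 1/2."""
    return alpha * dt / dx

def is_neutrally_stable(G, tol=1e-12):
    """True if every amplification factor (eigenvalue of the amplification
    matrix G at a given phase angle) has unit magnitude, i.e. the mode is
    neither damped nor amplified."""
    return bool(np.all(np.abs(np.abs(np.linalg.eigvals(G)) - 1.0) < tol))

# Hypothetical usage: a pure rotation is neutrally stable.
theta = 0.3
G = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert abs(courant_number(1.0, 0.004, 0.01)) <= 0.5
assert is_neutrally_stable(G)
```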
High-temperature brushless DC motor controller
Cieslewski, Crzegorz; Lindblom, Scott C.; Maldonado, Frank J.; Eckert, Michael Nathan
2017-05-16
A motor control system for deployment in high temperature environments includes a controller; a first half-bridge circuit that includes a first high-side switching element and a first low-side switching element; a second half-bridge circuit that includes a second high-side switching element and a second low-side switching element; and a third half-bridge circuit that includes a third high-side switching element and a third low-side switching element. The motor controller is arranged to apply a pulse width modulation (PWM) scheme to switch the first, second, and third half-bridge circuits to power a motor.
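For orientation only, here is a hypothetical six-step commutation table for three half-bridges driven by a PWM scheme; the sector ordering, the function name bridge_states, and the +1/-1/0 encoding are illustrative assumptions and are not taken from the patent.

```python
# Hypothetical six-step commutation table for three half-bridges
# (phases A, B, C): +1 = high-side PWM active, -1 = low-side on, 0 = floating.
COMMUTATION = {
    0: (+1, -1, 0),
    1: (+1, 0, -1),
    2: (0, +1, -1),
    3: (-1, +1, 0),
    4: (-1, 0, +1),
    5: (0, -1, +1),
}

def bridge_states(electrical_sector):
    """Return (A, B, C) half-bridge states for a 60-degree electrical sector."""
    return COMMUTATION[electrical_sector % 6]
```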
NASA Technical Reports Server (NTRS)
Buehler, Martin G. (Inventor)
1988-01-01
A set of addressable test structures, each of which uses addressing schemes to access individual elements of the structure in a matrix, is used to test the quality of a wafer before integrated circuits produced thereon are diced, packaged and subjected to final testing. The electrical characteristic of each element is checked and compared to the electrical characteristic of all other like elements in the matrix. The effectiveness of the addressable test matrix is in readily analyzing the electrical characteristics of the test elements and in providing diagnostic information.
Finite-element numerical modeling of atmospheric turbulent boundary layer
NASA Technical Reports Server (NTRS)
Lee, H. N.; Kao, S. K.
1979-01-01
A dynamic turbulent boundary-layer model in the neutral atmosphere is constructed using a dynamic turbulent equation for the eddy viscosity coefficient for momentum, derived from the relationship among the turbulent dissipation rate, the turbulent kinetic energy, and the eddy viscosity coefficient, with the aid of a turbulent second-order closure scheme. A finite-element technique was used for the numerical integration. In preliminary results, the behavior of the neutral planetary boundary layer agrees well with the available data and with existing elaborate turbulence models that use a finite-difference scheme. The proposed dynamic formulation of the eddy viscosity coefficient for momentum is particularly attractive and can provide a viable alternative approach to study atmospheric turbulence, diffusion and air pollution.
Application of thermodynamics to silicate crystalline solutions
NASA Technical Reports Server (NTRS)
Saxena, S. K.
1972-01-01
A review of thermodynamic relations is presented, describing Guggenheim's regular solution models, the simple mixture, the zeroth approximation, and the quasi-chemical model. The possibilities of retrieving useful thermodynamic quantities from phase equilibrium studies are discussed. Such quantities include the activity-composition relations and the free energy of mixing in crystalline solutions. Theory and results of the study of partitioning of elements in coexisting minerals are briefly reviewed. A thermodynamic study of the intercrystalline and intracrystalline ion exchange relations gives useful information on the thermodynamic behavior of the crystalline solutions involved. Such information is necessary for the solution of most petrogenic problems and for geothermometry. Thermodynamic quantities for tungstates (CaWO4-SrWO4) are calculated.
Memristor-Based Synapse Design and Training Scheme for Neuromorphic Computing Architecture
2012-06-01
...system level built upon the conventional Von Neumann computer architecture [2][3]. Developing the neuromorphic architecture at chip level by ... creation of memristor-based neuromorphic computing architecture. Rather than the existing crossbar-based neuron network designs, we focus on memristor
NASA Astrophysics Data System (ADS)
Liu, Junzi; Cheng, Lan
2018-04-01
An atomic mean-field (AMF) spin-orbit (SO) approach within exact two-component theory (X2C) is reported, thereby exploiting the exact decoupling scheme of X2C, the one-electron approximation for the scalar-relativistic contributions, the mean-field approximation for the treatment of the two-electron SO contribution, and the local nature of the SO interactions. The Hamiltonian of the proposed SOX2CAMF scheme comprises the one-electron X2C Hamiltonian, the instantaneous two-electron Coulomb interaction, and an AMF SO term derived from spherically averaged Dirac-Coulomb Hartree-Fock calculations of atoms; no molecular relativistic two-electron integrals are required. Benchmark calculations for bond lengths, harmonic frequencies, dipole moments, and electric-field gradients for a set of diatomic molecules containing elements across the periodic table show that the SOX2CAMF scheme offers a balanced treatment for SO and scalar-relativistic effects and appears to be a promising candidate for applications to heavy-element containing systems. SOX2CAMF coupled-cluster calculations of molecular properties for bismuth compounds (BiN, BiP, BiF, BiCl, and BiI) are also presented and compared with experimental results to further demonstrate the accuracy and applicability of the SOX2CAMF scheme.
Finite Volume Element (FVE) discretization and multilevel solution of the axisymmetric heat equation
NASA Astrophysics Data System (ADS)
Litaker, Eric T.
1994-12-01
The axisymmetric heat equation, resulting from a point source of heat applied to a metal block, is solved numerically; both iterative and multilevel solutions are computed in order to compare the two processes. The continuum problem is discretized in two stages: finite differences are used to discretize the time derivatives, resulting in a fully implicit backward time-stepping scheme, and the Finite Volume Element (FVE) method is used to discretize the spatial derivatives. The application of the FVE method to a problem in cylindrical coordinates is new, and results in stencils which are analyzed extensively. Several iteration schemes are considered, including both Jacobi and Gauss-Seidel; a thorough analysis of these schemes is done, using both the spectral radii of the iteration matrices and local mode analysis. Using this discretization, a Gauss-Seidel relaxation scheme is used to solve the heat equation iteratively. A multilevel solution process is then constructed, including the development of intergrid transfer and coarse grid operators. Local mode analysis is performed on the components of the amplification matrix, resulting in the two-level convergence factors for various combinations of the operators. A multilevel solution process is implemented by using multigrid V-cycles; the iterative and multilevel results are compared and discussed in detail. The computational savings resulting from the multilevel process are then discussed.
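The spectral-radius comparison mentioned above can be reproduced generically: form the Jacobi or Gauss-Seidel iteration matrix for a given system and take the largest eigenvalue magnitude. The sketch below is a textbook helper, not the report's local mode analysis.

```python
import numpy as np

def iteration_spectral_radius(A, method="jacobi"):
    """Spectral radius of the Jacobi or Gauss-Seidel iteration matrix for A x = b.

    Convergence requires rho < 1, and a smaller rho means faster iterative
    convergence, which is the basis of the comparison in the abstract.
    """
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    if method == "jacobi":
        M = np.linalg.solve(D, -(L + U))        # M_J = -D^{-1}(L + U)
    else:
        M = np.linalg.solve(D + L, -U)          # M_GS = -(D + L)^{-1} U
    return max(abs(np.linalg.eigvals(M)))
```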
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Weixiong; Wang, Yaqi; DeHart, Mark D.
2016-09-01
In this report, we present a new upwinding scheme for the multiscale capability in Rattlesnake, the MOOSE-based radiation transport application. Compared with the initial implementation of the multiscale capability, which utilized Lagrange multipliers to impose strong continuity of the angular flux on the interfaces between subdomains, this scheme does not require a particular domain partitioning. The upwinding scheme introduces discontinuity of the angular flux and resembles the classic upwinding technique developed for solving the first order transport equation using the discontinuous finite element method (DFEM) on the subdomain interfaces. Because this scheme restores the causality of radiation streaming on the interfaces, significant accuracy improvement can be observed with a moderate increase in the degrees of freedom compared with the continuous method over the entire solution domain. Hybrid SN-PN is implemented and tested with this upwinding scheme. Numerical results show that the angular smoothing required by the Lagrange multiplier method is not necessary for the upwinding scheme.
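The interface treatment described above follows the classic upwind idea: take the angular flux from the side the particles stream from. A minimal sketch of that selection rule, with assumed argument names, is:

```python
def upwind_interface_flux(omega_dot_n, psi_minus, psi_plus):
    """Classic DFEM-style upwind choice on a subdomain interface.

    psi_minus is the trace from the cell on the side the interface normal
    points away from, psi_plus from the other side; omega_dot_n is the
    direction cosine against the interface normal. The flux is taken from
    the upwind side, which restores causality of the streaming term.
    """
    return psi_minus if omega_dot_n > 0.0 else psi_plus
```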
Application of the Spectral Element Method to Interior Noise Problems
NASA Technical Reports Server (NTRS)
Doyle, James F.
1998-01-01
The primary effort of this research project was focused on the development of analytical methods for the accurate prediction of structural acoustic noise and response. Of particular interest was the development of curved frame and shell spectral elements for the efficient computation of structural response, and of schemes to match this response to the surrounding fluid.
High order parallel numerical schemes for solving incompressible flows
NASA Technical Reports Server (NTRS)
Lin, Avi; Milner, Edward J.; Liou, May-Fun; Belch, Richard A.
1992-01-01
The use of parallel computers for numerically solving flow fields has gained much importance in recent years. This paper introduces a new high order numerical scheme for computational fluid dynamics (CFD) specifically designed for parallel computational environments. A distributed MIMD system gives the flexibility of treating different elements of the governing equations with totally different numerical schemes in different regions of the flow field. The parallel decomposition of the governing operator to be solved is the primary parallel split, which was studied using a hypercube-like architecture having clusters of shared memory processors at each node. The approach is demonstrated using examples of simple steady state incompressible flows. Future studies should investigate the secondary split because, depending on the numerical scheme that each of the processors applies and the nature of the flow in the specific subdomain, it may be possible for a processor to seek better, or higher order, schemes for its particular subcase.
Detailed modeling of the atmospheric degradation mechanism of very-short lived brominated species
NASA Astrophysics Data System (ADS)
Krysztofiak, G.; Catoire, V.; Poulet, G.; Marécal, V.; Pirre, M.; Louis, F.; Canneaux, S.; Josse, B.
2012-11-01
Detailed chemical reaction schemes for the atmospheric degradations of the very short-lived species (VSLS) bromoform (CHBr3) and dibromomethane (CH2Br2) have been established. These degradation schemes have been implemented in the meteorological/tracer transport model CATT-BRAMS, used in the present case as a pseudo one-dimensional model with chemistry of CH4, CO, HOx, NOx, NOy and Ox. They include the main possible reactions of the intermediate brominated peroxy radicals RO2 (with R = CH2Br, CHBr2 and CBr3), for which the most likely reaction pathways with HO2 have been found using ab initio computational calculations. The full degradation schemes have been run for two well-defined realistic scenarios, a “clean” atmosphere and a “moderately” NOy-polluted atmosphere, as representative of a tropical coastal region where these VSLS natural emissions are expected to be important. The Henry's law constants of the brominated organic products have been estimated by using the Bond Contribution Method (BCM; Meylan and Howard, 1991) or the Molecular Connectivity Index (MCI; Nirmalakhandan and Speece, 1988). Using these constants, the least soluble species formed from the VSLS degradation are found to be CBr2O, CHBrO, CBr3O2NO2, CHBr2O2NO2, BrO, BrONO2 and HOBr, which means they could be transported into the tropical tropopause layer (TTL) in the case of deep convection and contribute to stratospheric bromine in addition to the original substances. For bromoform and dibromomethane degradation, the moderate NOy pollution increases the production of the least soluble species and thus approximately doubles the bromine quantity potentially able to reach the TTL (from 22.5% to 43% for CHBr3 and from 8.8% to 20.2% for CH2Br2). The influence of the reactions of the RO2 radicals with HO2, CH3O2 and NO2 on the nature and abundance of the stable intermediate and end-products has been tested for CHBr3 degradation. As a result, the reactions of the RO2 radicals with NO2 have no impact. Taking into account the reaction between RO2 and CH3O2 and modifying the branching ratios of the reaction between RO2 and HO2 lead to a small impact on the bromoform degradation by slightly decreasing (by 10%) the bromine quantity potentially able to reach the TTL. As a final point, in contrast to CHBr3, CH2Br2 degradation produces negligible quantities of organic species and the effects of pollution increase only the inorganic species production. By taking into account the results of these tests, new simplified degradation schemes for CHBr3 and CH2Br2 are proposed.
Involution and Difference Schemes for the Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Gerdt, Vladimir P.; Blinkov, Yuri A.
In the present paper we consider the Navier-Stokes equations for two-dimensional viscous incompressible fluid flows and apply to these equations our earlier designed general algorithmic approach to the generation of finite-difference schemes. In doing so, we first complete the Navier-Stokes equations to involution by computing their Janet basis and discretize this basis by converting it into the integral conservation law form. Then we again complete the obtained difference system to involution, eliminating the partial derivatives and extracting the minimal Gröbner basis from the Janet basis. The elements in the obtained difference Gröbner basis that do not contain partial derivatives of the dependent variables compose a conservative difference scheme. By exploiting arbitrariness in the numerical integration approximation we derive two finite-difference schemes that are similar to the classical scheme by Harlow and Welch. Each of the two schemes is characterized by a 5×5 stencil on an orthogonal and uniform grid. We also demonstrate how an inconsistent difference scheme with a 3×3 stencil is generated by an inappropriate numerical approximation of the underlying integrals.
NASA Astrophysics Data System (ADS)
Cirigliano, Vincenzo; Dekens, Wouter; Mereghetti, Emanuele; Walker-Loud, André
2018-06-01
We present the first chiral effective theory derivation of the neutrinoless double-β decay nn → pp potential induced by light Majorana neutrino exchange. The effective-field-theory framework has allowed us to identify and parametrize short- and long-range contributions previously missed in the literature. These contributions cannot be absorbed into parametrizations of the single-nucleon form factors. Starting from the quark and gluon level, we perform the matching onto chiral effective field theory and subsequently onto the nuclear potential. To derive the nuclear potential mediating neutrinoless double-β decay, the hard, soft, and potential neutrino modes must be integrated out. This is performed through next-to-next-to-leading order in the chiral power counting, in both the Weinberg and pionless schemes. At next-to-next-to-leading order, the amplitude receives additional contributions from the exchange of ultrasoft neutrinos, which can be expressed in terms of nuclear matrix elements of the weak current and excitation energies of the intermediate nucleus. These quantities also control the two-neutrino double-β decay amplitude. Finally, we outline strategies to determine the low-energy constants that appear in the potentials, by relating them to electromagnetic couplings and/or by matching to lattice QCD calculations.
NASA Astrophysics Data System (ADS)
Silva, Luís Carlos; Milani, Gabriele; Lourenço, Paulo B.
2017-11-01
Two finite element homogenization-based strategies are presented for the out-of-plane behaviour characterization of an English bond masonry wall. A finite element micro-modelling approach using Cauchy stresses and first order movements is assumed for both strategies. The material nonlinearity is lumped on the joint interfaces and the bricks are considered elastic. Nevertheless, the first model is based on a plane-stress assumption, in which the out-of-plane quantities are derived through integration over the wall thickness considering a Kirchhoff plate theory. The second model is a three-dimensional one, in which the homogenized out-of-plane quantities can be directly derived after solving the boundary value problem. The comparison is conducted by assessing the obtained out-of-plane bending- and torsion-curvature diagrams. A good agreement is found for the present case study.
Hyperbolicity measures democracy in real-world networks
NASA Astrophysics Data System (ADS)
Borassi, Michele; Chessa, Alessandro; Caldarelli, Guido
2015-09-01
In this work, we analyze the hyperbolicity of real-world networks, a geometric quantity that measures whether a space is negatively curved. We provide two improvements in our understanding of this quantity: first of all, in our interpretation, a hyperbolic network is "aristocratic", since few elements "connect" the system, while a non-hyperbolic network has a more "democratic" structure with a larger number of crucial elements. The second contribution is the introduction of the average hyperbolicity of the neighbors of a given node. Through this definition, we outline an "influence area" for the vertices in the graph. We show that in real networks the influence area of the highest degree vertex is small in what we define as "local" networks (i.e., social or peer-to-peer networks), and large in "global" networks (i.e., power grids, metabolic networks, or autonomous system networks).
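For small graphs, the Gromov four-point hyperbolicity discussed above can be computed by brute force from a distance matrix. The sketch below is a generic reference implementation, not the authors' algorithm, and its O(n^4) cost makes it practical only for small networks.

```python
from itertools import combinations

def gromov_hyperbolicity(dist):
    """Brute-force Gromov four-point hyperbolicity of a finite metric space.

    `dist` is a symmetric distance matrix (list of lists or 2D array). For
    each quadruple of nodes, delta is half the gap between the two largest
    of the three pairwise distance sums; the hyperbolicity is the maximum.
    """
    n = len(dist)
    delta = 0.0
    for i, j, k, l in combinations(range(n), 4):
        s = sorted([dist[i][j] + dist[k][l],
                    dist[i][k] + dist[j][l],
                    dist[i][l] + dist[j][k]])
        delta = max(delta, (s[2] - s[1]) / 2.0)
    return delta
```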
NASA Technical Reports Server (NTRS)
Himansu, Ananda; Chang, Sin-Chung; Yu, Sheng-Tao; Wang, Xiao-Yen; Loh, Ching-Yuen; Jorgenson, Philip C. E.
1999-01-01
In this overview paper, we review the basic principles of the method of space-time conservation element and solution element for solving conservation laws in one and two spatial dimensions. The present method is developed on the basis of local and global flux conservation in a space-time domain, in which space and time are treated in a unified manner. In contrast to modern upwind schemes, the approach here does not use the Riemann solver and the reconstruction procedure as building blocks. The drawbacks of the upwind approach, such as the difficulty of rationally extending the 1D scalar approach to systems of equations and particularly to multiple dimensions, are here contrasted with the uniformity and ease of generalization of the Conservation Element and Solution Element (CE/SE) 1D scalar schemes to systems of equations and to multiple spatial dimensions. The assured compatibility with the simplest type of unstructured meshes, and the uniquely simple nonreflecting boundary conditions of the present method, are also discussed. The present approach has yielded high-resolution shocks, rarefaction waves, acoustic waves, vortices, ZND detonation waves, and shock/acoustic wave/vortex interactions. Moreover, since no directional splitting is employed, the numerical resolution of two-dimensional calculations is comparable to that of one-dimensional calculations. Some sample applications displaying the strengths and broad applicability of the CE/SE method are reviewed.
Metal characterization of white hawthorn organs and infusions.
Juranović Cindrić, Iva; Zeiner, Michaela; Konanov, Darija Mihajlov; Stingeder, Gerhard
2015-02-18
Hawthorn is one of the most commonly used European and North American phytopharmaceuticals. Because there is no information on metals in seeds, and only sparse data for leaves and flowers, the aim of the present study was elemental analysis of white hawthorn (Crataegus monogyna) by inductively coupled plasma atomic emission spectrometry (ICP-AES) or inductively coupled plasma mass spectrometry (ICP-MS) after digestion in a microwave-assisted system. The limits of detection are below 2 μg/g for ICP-AES and 0.5 μg/g for ICP-MS. Hawthorn leaves and flowers contain essential elements at concentrations (mean values, RSD 2-8%) in mg/g of Ca, 1-4; K, 4-5; Mg, 1-2; and Na, <0.2; and at μg/g levels of Ba, 1-10; Co, <0.16; Cr, <1.4; Cu, 0.6-7; Fe, 1-37; Li, <0.5; Mn, 1-13; Mo, <0.17; Ni, <0.6; Sr, 0.2-2; and Zn, 1-31. Toxic elements were found in low quantities: As (<0.04), Cd (0.04-0.1), and Pb (0.1-2). Up to 10% of the metals are extracted into the infusions. The analyzed plant parts and infusions contain essential elements, justifying the plant's use as a medicinal herb, whereas the low quantities of harmful elements will not pose any risk to humans when consumed.
Design of an extensive information representation scheme for clinical narratives.
Deléger, Louise; Campillos, Leonardo; Ligozat, Anne-Laure; Névéol, Aurélie
2017-09-11
Knowledge representation frameworks are essential to the understanding of complex biomedical processes and to the analysis of the biomedical texts that describe them. Combined with natural language processing (NLP), they have the potential to contribute to retrospective studies by unlocking important phenotyping information contained in the narrative content of electronic health records (EHRs). This work aims to develop an extensive information representation scheme for the clinical information contained in EHR narratives, and to support secondary use of EHR narrative data to answer clinical questions. We review recent work that proposed information representation schemes and applied them to the analysis of clinical narratives. We then propose a unifying scheme that supports the extraction of information to address a large variety of clinical questions. We devised a new information representation scheme for clinical narratives that comprises 13 entities, 11 attributes and 37 relations. The associated annotation guidelines can be used to consistently apply the scheme to clinical narratives and are available at https://cabernet.limsi.fr/annotation_guide_for_the_merlot_french_clinical_corpus-Sept2016.pdf. The information scheme includes many elements of the major schemes described in the clinical natural language processing literature, as well as a uniquely detailed set of relations.
NASA Astrophysics Data System (ADS)
Horstmann, Jan Tobias; Le Garrec, Thomas; Mincu, Daniel-Ciprian; Lévêque, Emmanuel
2017-11-01
Despite the efficiency and low dissipation of the stream-collide scheme of the discrete-velocity Boltzmann equation, which is nowadays implemented in many lattice Boltzmann solvers, a major drawback relative to alternative discretization schemes (finite-volume or finite-difference) is its limitation to uniform Cartesian grids. In this paper, an algorithm is presented that combines the positive features of each scheme in a hybrid lattice Boltzmann method. In particular, the node-based streaming of the distribution functions is coupled with a second-order finite-volume discretization of the advection term of the Boltzmann equation under the Bhatnagar-Gross-Krook approximation. The algorithm is established on a multi-domain configuration, with the individual schemes being solved on separate sub-domains and connected by an overlapping interface of at least 2 grid cells. A critical parameter in the coupling is the CFL number equal to unity, which is imposed by the stream-collide algorithm. Nevertheless, a semi-implicit treatment of the collision term in the finite-volume formulation allows us to obtain a stable solution for this condition. The algorithm is validated in the scope of three different test cases on a 2D periodic mesh. It is shown that the accuracy of the combined discretization schemes agrees with the order of each separate scheme involved. The overall numerical error of the hybrid algorithm in the macroscopic quantities is contained between the errors of the two individual algorithms. Finally, we demonstrate how such a coupling can be used to adapt to anisotropic flows with some gradual mesh refinement in the FV domain.
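The node-based half of such a coupling is the standard stream-collide update. As a minimal sketch, the code below performs one BGK stream-collide step for a D1Q3 model on a periodic 1D lattice; the 1D lattice is an assumption for illustration, since the paper works with 2D lattices coupled to a finite-volume partner domain.

```python
import numpy as np

def bgk_stream_collide(f, tau):
    """One stream-collide step of a D1Q3 BGK lattice Boltzmann model.

    `f` has shape (3, N): one row per lattice velocity c = (-1, 0, +1) on a
    periodic lattice of N nodes; tau is the BGK relaxation time.
    """
    c = np.array([-1, 0, 1])
    w = np.array([1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0])  # D1Q3 weights, c_s^2 = 1/3
    rho = f.sum(axis=0)                               # macroscopic density
    u = (c[:, None] * f).sum(axis=0) / rho            # macroscopic velocity
    cu = c[:, None] * u
    feq = w[:, None] * rho * (1.0 + 3.0 * cu + 4.5 * cu**2 - 1.5 * u**2)
    f = f - (f - feq) / tau                           # BGK collision
    for i in range(3):                                # streaming (periodic shift)
        f[i] = np.roll(f[i], c[i])
    return f
```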
Tansel, Berrin
2017-01-01
Advancements in technology, materials development, and manufacturing processes have changed consumer products and the composition of municipal solid waste (MSW) since the 1960s. Increasing quantities of discarded consumer products remain a major challenge for recycling efforts, especially discarded electronic products (also referred to as e-waste). The growing demand for high tech products has increased e-waste quantities and their cross-boundary transport globally. This paper reviews the challenges associated with increasing e-waste quantities. The increasing need for raw materials (especially rare earth and minor elements) and unregulated e-waste recycling operations in developing and underdeveloped countries contribute to the growing concerns about e-waste management. Although the markets for recycled materials are increasing, there are major challenges for development of the necessary infrastructure for e-waste management and accountability, as well as development of effective materials recovery technologies and product design.
Apparatus for and method of simulating turbulence
Dimas, Athanassios; Lottati, Isaac; Bernard, Peter; Collins, James; Geiger, James C.
2003-01-01
In accordance with a preferred embodiment of the invention, a novel apparatus for and method of simulating physical processes such as fluid flow is provided. Fluid flow near a boundary or wall of an object is represented by a collection of vortex sheet layers. The layers are composed of a grid or mesh of one or more geometrically shaped space filling elements. In the preferred embodiment, the space filling elements take on a triangular shape. An Eulerian approach is employed for the vortex sheets, where a finite-volume scheme is used on the prismatic grid formed by the vortex sheet layers. A Lagrangian approach is employed for the vortical elements (e.g., vortex tubes or filaments) found in the remainder of the flow domain. To reduce the computational time, a hairpin removal scheme is employed to reduce the number of vortex filaments, and a Fast Multipole Method (FMM), preferably implemented using parallel processing techniques, reduces the computation of the velocity field.
Real-time adaptive finite element solution of time-dependent Kohn-Sham equation
NASA Astrophysics Data System (ADS)
Bao, Gang; Hu, Guanghui; Liu, Di
2015-01-01
In our previous paper (Bao et al., 2012 [1]), a general framework of using adaptive finite element methods to solve the Kohn-Sham equation has been presented. This work is concerned with solving the time-dependent Kohn-Sham equations. The numerical methods are studied in the time domain, which can be employed to explain both the linear and the nonlinear effects. A Crank-Nicolson scheme and linear finite element space are employed for the temporal and spatial discretizations, respectively. To resolve the trouble regions in the time-dependent simulations, a heuristic error indicator is introduced for the mesh adaptive methods. An algebraic multigrid solver is developed to efficiently solve the complex-valued system derived from the semi-implicit scheme. A mask function is employed to remove or reduce the boundary reflection of the wavefunction. The effectiveness of our method is verified by numerical simulations for both linear and nonlinear phenomena, in which the effectiveness of the mesh adaptive methods is clearly demonstrated.
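A Crank-Nicolson step of the kind named above can be written compactly for a generic discretized Hamiltonian. The dense-matrix sketch below (with hbar = 1) only illustrates the semi-implicit update; the paper solves the resulting complex-valued system with an algebraic multigrid solver on an adaptive finite element space rather than by a dense solve.

```python
import numpy as np

def crank_nicolson_step(psi, H, dt):
    """One Crank-Nicolson step for i d(psi)/dt = H psi (hbar = 1):

        (I + i*dt/2 * H) psi^{n+1} = (I - i*dt/2 * H) psi^n

    `H` is a dense Hermitian matrix here purely for illustration.
    """
    n = H.shape[0]
    I = np.eye(n)
    A = I + 0.5j * dt * H
    b = (I - 0.5j * dt * H) @ psi
    return np.linalg.solve(A, b)
```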
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCarrick, H., E-mail: hlm2124@columbia.edu; Flanigan, D.; Jones, G.
We discuss the design, fabrication, and testing of prototype horn-coupled, lumped-element kinetic inductance detectors (LEKIDs) designed for cosmic microwave background studies. The LEKIDs are made from a thin aluminum film deposited on a silicon wafer and patterned using standard photolithographic techniques at STAR Cryoelectronics, a commercial device foundry. We fabricated 20-element arrays, optimized for a spectral band centered on 150 GHz, to test the sensitivity and yield of the devices as well as the multiplexing scheme. We characterized the detectors in two configurations. First, the detectors were tested in a dark environment with the horn apertures covered, and second, the horn apertures were pointed towards a beam-filling cryogenic blackbody load. These tests show that the multiplexing scheme is robust and scalable, the yield across multiple LEKID arrays is 91%, and the measured noise-equivalent temperatures for a 4 K optical load are in the range 26 ± 6 μK√s.
Method and an apparatus for non-invasively determining the quantity of an element in a body organ
Vartsky, D.; Ellis, K.J.; Cohn, S.H.
1980-06-27
An apparatus and a method for determining in a body organ the amount of an element with the aid of a gaseous gamma ray source, where the element and the source are paired in predetermined pairs, and with the aid of at least one detector selected from the group consisting of Ge(Li) and NaI(Tl). Gamma rays are directed towards the organ, thereby resonantly scattering the gamma rays from nuclei of the element in the organ; the intensity of the gamma rays is detected by the detector; and the amount of the element in the organ is then substantially proportional to the detected intensity of the gamma rays.
NASA Astrophysics Data System (ADS)
Ahmed, Naveed; Adnan; Khan, Umar; Tauseef Mohyud-Din, Syed; Waheed, Asif
2017-07-01
This paper explores the flow of water saturated with copper nanoparticles of different shapes between parallel Riga plates. The plates are placed horizontally in the coordinate system. The influence of linear thermal radiation is also taken into account. The equations governing the flow have been transformed into nondimensional form by employing a set of similarity transformations. The resulting system is solved analytically (variation-of-parameters method) and numerically (Runge-Kutta scheme). Under certain conditions, a special case of the model is also explored. Furthermore, the influences of the physical quantities on the velocity and thermal fields are discussed graphically over the domain of interest. The quantities of engineering and practical interest (the skin friction coefficient and the local rate of heat transfer) are also explored graphically.
Sensorless Estimation and Nonlinear Control of a Rotational Energy Harvester
NASA Astrophysics Data System (ADS)
Nunna, Kameswarie; Toh, Tzern T.; Mitcheson, Paul D.; Astolfi, Alessandro
2013-12-01
It is important to perform sensorless monitoring of parameters in energy harvesting devices in order to determine the operating states of the system. However, physical measurement of these parameters is often a challenging task due to the unavailability of access points. This paper presents, as an example application, the design of a nonlinear observer and a nonlinear feedback controller for a rotational energy harvester. A dynamic model of a rotational energy harvester with its power electronic interface is derived and validated. This model is then used to design a nonlinear observer and a nonlinear feedback controller which yield a sensorless closed-loop system. The observer estimates the mechanical quantities from the measured electrical quantities, while the control law sustains power generation across a range of source rotation speeds. The proposed scheme is assessed through simulations and experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aslangul, C.; Bouchaud, J.; Georges, A.
The authors present new exact results for a one-dimensional asymmetric disordered hopping model. The lattice is taken infinite from the start and they do not resort to the periodization scheme used by Derrida. An explicit resummation allows for the calculation of the velocity V and the diffusion constant D (which are found to coincide with those given by Derrida) and for demonstrating that V is indeed a self-averaging quantity; the same property is established for D in the limiting case of a directed walk.
Nuclear Data Uncertainties for Typical LWR Fuel Assemblies and a Simple Reactor Core
NASA Astrophysics Data System (ADS)
Rochman, D.; Leray, O.; Hursin, M.; Ferroukhi, H.; Vasiliev, A.; Aures, A.; Bostelmann, F.; Zwermann, W.; Cabellos, O.; Diez, C. J.; Dyrda, J.; Garcia-Herranz, N.; Castro, E.; van der Marck, S.; Sjöstrand, H.; Hernandez, A.; Fleming, M.; Sublet, J.-Ch.; Fiorito, L.
2017-01-01
The impact of the covariances in current nuclear data libraries such as ENDF/B-VII.1, JEFF-3.2, JENDL-4.0, SCALE and TENDL on relevant current reactors is presented in this work. The uncertainties due to nuclear data are calculated for existing PWR and BWR fuel assemblies (with burn-up up to 40 GWd/tHM, followed by 10 years of cooling time) and for a simplified PWR full core model (without burn-up) for quantities such as k∞, macroscopic cross sections, pin power and isotope inventory. In this work, the method of propagation of uncertainties is based on random sampling of nuclear data, either from covariance files or directly from basic parameters. Additionally, possible biases on calculated quantities, such as those arising from the self-shielding treatment, are investigated. Different calculation schemes are used, based on CASMO, SCALE, DRAGON, MCNP or FISPACT-II, thus simulating real-life assignments for technical-support organizations. The outcome of such a study is a comparison of uncertainties with two consequences. First, although this study is not expected to lead to similar results between the calculation schemes involved, it provides insight into what can happen when calculating uncertainties and gives some perspective on the range of validity of these uncertainties. Second, it allows a picture to be drawn of the current state of knowledge, using existing nuclear data library covariances and current methods.
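The random-sampling propagation described above can be illustrated with a generic Monte Carlo loop: draw parameter sets from a covariance, re-evaluate a model response, and report the spread. The Gaussian sampling, the function names, and the sample size below are illustrative assumptions, not the schemes used in the paper.

```python
import numpy as np

def propagate_uncertainty(model, mean, cov, n_samples=500, seed=1):
    """Random-sampling uncertainty propagation sketch.

    Draws nuclear-data parameters from a Gaussian covariance, re-evaluates
    the response quantity (e.g. k-infinity) for each sample via the callable
    `model`, and returns the sample mean and standard deviation.
    """
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    outputs = np.array([model(s) for s in samples])
    return outputs.mean(), outputs.std(ddof=1)
```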
Physicians' responses to financial and social incentives: A medically framed real effort experiment.
Lagarde, Mylène; Blaauw, Duane
2017-04-01
Because compensation policies have critical implications for the provision of health care, and evidence of their effects is limited and difficult to study in the real world, laboratory experiments may be a valuable methodology for studying the behavioural responses of health care providers. With this experiment, undertaken in 2013, we add to this new literature by designing a new medically framed real-effort task to test the effects of different remuneration schemes in a multi-tasking context. We assess the impact of different incentives on the quantity (productivity) and quality of outputs of 132 participants. We also test whether the existence of benefits to patients influences effort. The results show that salary yields the lowest quantity of output, and fee-for-service the highest. By contrast, we find that the highest quality is achieved when participants are paid by salary, followed by capitation. We also find a lot of heterogeneity in behaviour, with intrinsically motivated individuals hardly sensitive to financial incentives. Finally, we find that when work quality benefits patients directly, subjects improve the quality of their output while maintaining the same level of productivity. This paper adds to a nascent literature by providing a new approach to studying remuneration schemes and modelling the medical decision-making environment in the lab.
Fusheini, Adam; Marnoch, Gordon; Gray, Ann Marie
2017-01-01
Background: Ghana’s National Health Insurance Scheme (NHIS), established by an Act of Parliament (Act 650) in 2003 and since replaced by Act 852 of 2012, remains, in African terms, unprecedented in terms of growth and coverage. As a result, the scheme has received praise for its associated legal reforms, clinical audit mechanisms and for serving as a hub for knowledge sharing and learning within the context of South-South cooperation. The scheme continues to shape national health insurance thinking in Africa. While the success, especially in coverage and financial access, has been highlighted by many authors, insufficient attention has been paid to critical and context-specific factors. This paper seeks to fill that gap. Methods: Based on an empirical qualitative case study of stakeholders’ views on challenges and success factors in four mutual schemes (district offices) located in two regions of Ghana, the study uses the concept of policy translation to assess whether the Ghana scheme could provide useful lessons to other African and developing countries in their quest to implement social/national health insurance schemes (NHISs). Results: In the study, interviewees referred to both ‘hard’ and ‘soft’ elements as driving the "success" of the Ghana scheme. The main ‘hard’ elements include bureaucratic and legal enforcement capacities; IT; financing; governance, administration and management; regulating membership of the scheme; and service provision and coverage capabilities. The ‘soft’ elements identified relate to: the background/context of the health insurance scheme; innovative ways of funding the NHIS; the hybrid nature of the Ghana scheme; political will, commitment by government, stakeholders and public cooperation; the social structure of Ghana (solidarity); and ownership and participation. Conclusion: Other developing countries can expect to translate rather than re-assemble a national health insurance programme in an incomplete and highly modified form over a period of years, amounting to a process best conceived as germination as opposed to emulation. The Ghana experience illustrates that in adopting health financing systems that function well, countries need to customise systems (policy customisation) to suit their socio-economic, political and administrative settings. Home-grown health financing systems that resonate with social values will also need to be found in the process of translation. PMID:28812815
NASA Astrophysics Data System (ADS)
Li, Meng; Gu, Xian-Ming; Huang, Chengming; Fei, Mingfa; Zhang, Guoyu
2018-04-01
In this paper, a fast linearized conservative finite element method is studied for solving the strongly coupled nonlinear fractional Schrödinger equations. We prove that the scheme preserves both the mass and energy, which are defined by virtue of some recursion relationships. Using the Sobolev inequalities and then employing mathematical induction, the discrete scheme is proved to be unconditionally convergent in the sense of the L2-norm and the H^(α/2)-norm, which means that there are no constraints on the grid ratios. Then, a priori bounds of the discrete solution in the L2-norm and L∞-norm are also obtained. Moreover, we propose an iterative algorithm in which the coefficient matrix is independent of the time level, leading to Toeplitz-like linear systems that can be efficiently solved by Krylov subspace solvers with circulant preconditioners. This method reduces the memory requirement of the proposed linearized finite element scheme from O(M^2) to O(M) and the computational complexity from O(M^3) to O(M log M) in each iterative step, where M is the number of grid nodes. Finally, numerical experiments are carried out to verify the correctness of the theoretical analysis, simulate the collision of two solitary waves, and show the utility of the fast numerical solution techniques.
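As a rough illustration of the circulant-preconditioned Krylov idea behind the O(M log M) cost quoted above, the sketch below (Python/SciPy, not the authors' code) builds a small symmetric Toeplitz system with invented entries, forms a Strang-type circulant approximation, and applies its inverse through the FFT inside a conjugate gradient iteration.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, cg

M = 256
k = np.arange(M)
first_col = 1.0 / (1.0 + k.astype(float) ** 1.5)   # decaying entries (illustrative)
first_col[0] = 2.0                                 # diagonal dominance -> SPD
A = toeplitz(first_col)                            # dense here; FFT matvec in practice
b = np.ones(M)

# Strang circulant approximation: keep the central diagonals of the Toeplitz symbol.
c = first_col.copy()
half = M // 2
c[half + 1:] = first_col[M - np.arange(half + 1, M)]
lam = np.fft.fft(c)                                # eigenvalues of the circulant

def apply_circulant_inverse(r):
    # One preconditioner application costs O(M log M) via the FFT.
    return np.real(np.fft.ifft(np.fft.fft(r) / lam))

Minv = LinearOperator((M, M), matvec=apply_circulant_inverse)
x, info = cg(A, b, M=Minv)
print("CG converged:", info == 0, " residual norm:", np.linalg.norm(A @ x - b))
```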
Study on launch scheme of space-net capturing system.
Gao, Qingyu; Zhang, Qingbin; Feng, Zhiwei; Tang, Qiangang
2017-01-01
With the continuous progress in active debris-removal technology, scientists are increasingly concerned about the concept of space-net capturing system. The space-net capturing system is a long-range-launch flexible capture system, which has great potential to capture non-cooperative targets such as inactive satellites and upper stages. In this work, the launch scheme is studied by experiment and simulation, including two-step ejection and multi-point-traction analyses. The numerical model of the tether/net is based on finite element method and is verified by full-scale ground experiment. The results of the ground experiment and numerical simulation show that the two-step ejection and six-point traction scheme of the space-net system is superior to the traditional one-step ejection and four-point traction launch scheme.
Multiple crack detection in 3D using a stable XFEM and global optimization
NASA Astrophysics Data System (ADS)
Agathos, Konstantinos; Chatzi, Eleni; Bordas, Stéphane P. A.
2018-02-01
A numerical scheme is proposed for the detection of multiple cracks in three dimensional (3D) structures. The scheme is based on a variant of the extended finite element method (XFEM) and a hybrid optimizer solution. The proposed XFEM variant is particularly well-suited for the simulation of 3D fracture problems, and as such serves as an efficient solution to the so-called forward problem. A set of heuristic optimization algorithms are recombined into a multiscale optimization scheme. The introduced approach proves effective in tackling the complex inverse problem involved, where identification of multiple flaws is sought on the basis of sparse measurements collected near the structural boundary. The potential of the scheme is demonstrated through a set of numerical case studies of varying complexity.
2011-07-19
Escobar-Vargas, Jorge A. (School of Civil and Environmental Engineering, Cornell). Keywords: multidomain methods, discontinuous Galerkin methods, interfacial treatment. Fragmentary text: geophysical flows exhibit a complex structure and dynamics over a broad range of scales ... hyperbolic problems, where the interfacial patching was implemented with an upwind scheme based on a modified method of characteristics.
Towards a large-scale scalable adaptive heart model using shallow tree meshes
NASA Astrophysics Data System (ADS)
Krause, Dorian; Dickopf, Thomas; Potse, Mark; Krause, Rolf
2015-10-01
Electrophysiological heart models are sophisticated computational tools that place high demands on the computing hardware due to the high spatial resolution required to capture the steep depolarization front. To address this challenge, we present a novel adaptive scheme for resolving the depolarization front accurately using adaptivity in space. Our adaptive scheme is based on locally structured meshes. These tensor meshes in space are organized in a parallel forest of trees, which allows us to resolve complicated geometries and to realize high variations in the local mesh sizes with a minimal memory footprint in the adaptive scheme. We discuss both a non-conforming mortar element approximation and a conforming finite element space and present an efficient technique for the assembly of the respective stiffness matrices using matrix representations of the inclusion operators into the product space on the so-called shallow tree meshes. We analyze the parallel performance and scalability for a two-dimensional ventricle slice as well as for a full large-scale heart model. Our results demonstrate that the method has good performance and high accuracy.
NASA Astrophysics Data System (ADS)
Chang, C. L.; Chen, C. Y.; Sung, C. C.; Liou, D. H.; Chang, C. Y.; Cha, H. C.
This work presents a new fuel sensor-less control scheme for liquid feed fuel cells that is able to control the supply to a fuel cell system for operation under dynamic loading conditions. The control scheme uses cell-operating characteristics, such as potential, current, and power, to regulate the fuel concentration of a liquid feed fuel cell without the need for a fuel concentration sensor. A current integral technique has been developed to calculate the quantity of fuel required at each monitoring cycle, which can be combined with the concentration regulating process to control the fuel supply for stable operation. As verified by systematic experiments, this scheme can effectively control the fuel supply of a liquid feed fuel cell with reduced response time, even under conditions where the membrane electrolyte assembly (MEA) deteriorates gradually. This advance will aid the commercialization of liquid feed fuel cells and make them more adaptable for use in portable and automotive power units such as laptops, e-bikes, and handicap cars.
Hosogaya, Shigemi; Ozaki, Yukio
2005-06-01
Many external quality assessment schemes (EQAS) are performed to support quality improvement of the services provided by participating laboratories for the benefits of patients. The EQAS organizer shall be responsible for ensuring that the method of evaluation is appropriate for maintenance of the credibility of the schemes. Procedures to evaluate each participating laboratory are gradually being standardized. In most cases of EQAS, the peer group mean is used as a target of accuracy, and the peer group standard deviation is used as a criterion for inter-laboratory variation. On the other hand, Fraser CG, et al. proposed desirable quality specifications for any imprecision and inaccuracies, which were derived from inter- and intra-biologic variations. We also proposed allowable limits of analytical error, being less than one-half of the average intra-individual variation for evaluation of imprecision, and less than one-quarter of the inter- plus intra-individual variation for evaluation of inaccuracy. When expressed in coefficient of variation terms, these allowable limits may be applied at a wide range of levels of quantity.
Entropy Splitting for High Order Numerical Simulation of Vortex Sound at Low Mach Numbers
NASA Technical Reports Server (NTRS)
Mueller, B.; Yee, H. C.; Mansour, Nagi (Technical Monitor)
2001-01-01
A method of minimizing numerical errors and improving nonlinear stability and accuracy associated with low Mach number computational aeroacoustics (CAA) is proposed. The method operates at two levels. At the governing equation level, we condition the Euler equations in two steps. The first step is to split the inviscid flux derivatives into a conservative and a non-conservative portion that satisfies a so-called generalized energy estimate. This involves the symmetrization of the Euler equations via a transformation of variables that are functions of the physical entropy. Owing to the large disparity of acoustic and stagnation quantities in low Mach number aeroacoustics, the second step is to reformulate the split Euler equations in perturbation form, with the new unknowns being the small changes of the conservative variables with respect to their large stagnation values. At the numerical scheme level, a stable sixth-order central interior scheme with third-order boundary schemes that satisfy the discrete analogue of the integration-by-parts procedure used in the continuous energy estimate (the summation-by-parts property) is employed.
Dynamo-based scheme for forecasting the magnitude of solar activity cycles
NASA Technical Reports Server (NTRS)
Layden, A. C.; Fox, P. A.; Howard, J. M.; Sarajedini, A.; Schatten, K. H.
1991-01-01
This paper presents a general framework for forecasting the smoothed maximum level of solar activity in a given cycle, based on a simple understanding of the solar dynamo. This type of forecasting requires knowledge of the sun's polar magnetic field strength at the preceding activity minimum. Because direct measurements of this quantity are difficult to obtain, the quality of a number of proxy indicators already used by other authors, all physically related to the sun's polar field, is evaluated. These indicators are subjected to a rigorous statistical analysis, and the analysis technique for each indicator is specified in detail in order to simplify and systematize reanalysis for future use. It is found that several of these proxies are in fact poorly correlated or uncorrelated with solar activity, and thus are of little value for predicting activity maxima. Also presented is a scheme in which the predictions of the individual proxies are combined via an appropriately weighted mean to produce a compound prediction. The scheme is then applied to the current cycle 22, and a maximum smoothed international sunspot number of 171 ± 26 is estimated.
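The compound-prediction step can be pictured as an inverse-variance weighted mean. The proxy forecasts and their uncertainties below are invented numbers, not the indicators analysed in the paper.

```python
# Combine several proxy-based forecasts into a compound prediction via an
# inverse-variance weighted mean (illustrative values only).
import numpy as np

predictions = np.array([165.0, 180.0, 158.0, 175.0])   # individual proxy forecasts
sigmas      = np.array([ 30.0,  25.0,  40.0,  35.0])   # their 1-sigma uncertainties

weights = 1.0 / sigmas**2
compound = np.sum(weights * predictions) / np.sum(weights)
compound_sigma = np.sqrt(1.0 / np.sum(weights))

print(f"compound prediction: {compound:.0f} +/- {compound_sigma:.0f}")
```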
NASA Technical Reports Server (NTRS)
Chulya, Abhisak; Walker, Kevin P.
1991-01-01
A new scheme to integrate a system of stiff differential equations for both the elasto-plastic creep and the unified viscoplastic theories is presented. The method has high stability, allows large time increments, and is implicit and iterative. It is suitable for use with continuum damage theories. The scheme was incorporated into MARC, a commercial finite element code through a user subroutine called HYPELA. Results from numerical problems under complex loading histories are presented for both small and large scale analysis. To demonstrate the scheme's accuracy and efficiency, comparisons to a self-adaptive forward Euler method are made.
NASA Technical Reports Server (NTRS)
Chulya, A.; Walker, K. P.
1989-01-01
A new scheme to integrate a system of stiff differential equations for both the elasto-plastic creep and the unified viscoplastic theories is presented. The method has high stability, allows large time increments, and is implicit and iterative. It is suitable for use with continuum damage theories. The scheme was incorporated into MARC, a commercial finite element code through a user subroutine called HYPELA. Results from numerical problems under complex loading histories are presented for both small and large scale analysis. To demonstrate the scheme's accuracy and efficiency, comparisons to a self-adaptive forward Euler method are made.
A bottom-up route to enhance thermoelectric figures of merit in graphene nanoribbons
Sevinçli, Hâldun; Sevik, Cem; Çağın, Tahir; Cuniberti, Gianaurelio
2013-01-01
We propose a hybrid nano-structuring scheme for tailoring thermal and thermoelectric transport properties of graphene nanoribbons. Geometrical structuring and isotope cluster engineering are the elements that constitute the proposed scheme. Using first-principles based force constants and Hamiltonians, we show that the thermal conductance of graphene nanoribbons can be reduced by 98.8% at room temperature and the thermoelectric figure of merit, ZT, can be as high as 3.25 at T = 800 K. The proposed scheme relies on a recently developed bottom-up fabrication method, which is proven to be feasible for synthesizing graphene nanoribbons with an atomic precision. PMID:23390578
Real time infrared aerosol analyzer
Johnson, Stanley A.; Reedy, Gerald T.; Kumar, Romesh
1990-01-01
Apparatus for analyzing aerosols in essentially real time includes a virtual impactor which separates coarse particles from fine and ultrafine particles in an aerosol sample. The coarse and ultrafine particles are captured in PTFE filters, and the fine particles impact onto an internal light reflection element. The composition and quantity of the particles on the PTFE filter and on the internal reflection element are measured by alternately passing infrared light through the filter and the internal light reflection element, and analyzing the light through infrared spectrophotometry to identify the particles in the sample.
PCR technology for screening and quantification of genetically modified organisms (GMOs).
Holst-Jensen, Arne; Rønning, Sissel B; Løvseth, Astrid; Berdal, Knut G
2003-04-01
Although PCR technology has obvious limitations, the potentially high degree of sensitivity and specificity explains why it has been the first choice of most analytical laboratories interested in detection of genetically modified (GM) organisms (GMOs) and derived materials. Because the products that laboratories receive for analysis are often processed and refined, the quality and quantity of target analyte (e.g. protein or DNA) frequently challenges the sensitivity of any detection method. Among the currently available methods, PCR methods are generally accepted as the most sensitive and reliable methods for detection of GM-derived material in routine applications. The choice of target sequence motif is the single most important factor controlling the specificity of the PCR method. The target sequence is normally a part of the modified gene construct, for example a promoter, a terminator, a gene, or a junction between two of these elements. However, the elements may originate from wildtype organisms, they may be present in more than one GMO, and their copy number may also vary from one GMO to another. They may even be combined in a similar way in more than one GMO. Thus, the choice of method should fit the purpose. Recent developments include event-specific methods, particularly useful for identification and quantification of GM content. Thresholds for labelling are now in place in many countries including those in the European Union. The success of the labelling schemes is dependent upon the efficiency with which GM-derived material can be detected. We will present an overview of currently available PCR methods for screening and quantification of GM-derived DNA, and discuss their applicability and limitations. In addition, we will discuss some of the major challenges related to determination of the limits of detection (LOD) and quantification (LOQ), and to validation of methods.
NASA Astrophysics Data System (ADS)
Pin, Victor Gómez
In his book on the Categories (that is, on the ultimate elements of classification and order), in the chapter concerning quantity (IV, 20), Aristotle says that this concept covers two modalities: discrete quantity and continuous quantity. As examples he gives number for the first, and line, surface, solid, time and space for the second. The main philosophical problem raised by this text is to determine which of the two modalities of quantity has ontological priority over the other (given two concepts A and B, we assume that A has ontological priority over B if every entity that possesses the quality B necessarily possesses the quality A). The problem is magnified by the fact that space, which in parts of Aristotle's Physics is treated not merely as a category properly speaking but even as the main category, whose power can be astonishing, is in the cited passage of the Categories reduced to an expression of the continuum, a condition it shares with time. On this matter the controversy has been constant throughout the shared history of science and philosophy.
Foliar nutrient status of young red spruce and balsam fir in a fertilized stand
Miroslaw M. Czapowskyj; L. O. Safford; Russell D. Briggs
1980-01-01
Average dry weight and nutrient levels in current foliage from red spruce and balsam fir seedlings and saplings in the understory of a 25-year old aspen and birch stand were observed 3 years after N, P, and lime treatments were applied. Elemental concentrations were plotted as a function of needle weight and quantity of element per needle. This allows interpretation of...
Air Force Research Laboratory Wright Site Guide to Technical Publishing
2005-04-01
Fragmentary excerpt: the guide references the Scientific and Technical Reports—Elements, Organization, and Design manual (and a version modified for documents generated for AFRL) and Merriam-Webster's …; a submission checklist mentions the notice page, SF 298, and original graphics/halftones, and the letter of transmittal sheet must indicate the quantity of copies required. The WRS CDRL for a final report requires that the standard be followed; the only exception is SBIR Phase 1.
Barrenechea, Gabriel R; Burman, Erik; Karakatsani, Fotini
2017-01-01
For the case of approximation of convection-diffusion equations using piecewise affine continuous finite elements a new edge-based nonlinear diffusion operator is proposed that makes the scheme satisfy a discrete maximum principle. The diffusion operator is shown to be Lipschitz continuous and linearity preserving. Using these properties we provide a full stability and error analysis, which, in the diffusion dominated regime, shows existence, uniqueness and optimal convergence. Then the algebraic flux correction method is recalled and we show that the present method can be interpreted as an algebraic flux correction method for a particular definition of the flux limiters. The performance of the method is illustrated on some numerical test cases in two space dimensions.
NASA Astrophysics Data System (ADS)
Al-Rousan, R. Z.
2015-09-01
The main objective of this study was to assess the effect of the number and layout schemes of carbon-fiber-reinforced polymer (CFRP) sheets on the bending-moment capacity, the ultimate displacement, the ultimate tensile strain of the CFRP, the yielding moment, the concrete compression strain, and the energy absorption of RC beams, and to provide useful relationships that can be effectively utilized to determine the required number of CFRP sheets for a necessary increase in the flexural strength of the beams without a major loss in their ductility. To accomplish this, various RC beams, identical in their geometric and reinforcement details but having different numbers and configurations of CFRP sheets, are modeled and analyzed using the ANSYS software and a nonlinear finite-element analysis.
Chemical experiments with superheavy elements.
Türler, Andreas
2010-01-01
Unnoticed by many chemists, the Periodic Table of the Elements has been extended significantly in the last couple of years and the 7th period has very recently been completed, with eka-Rn (element 118) currently being the heaviest element whose synthesis has been reported. These 'superheavy' elements (also called transactinides, with atomic number ≥ 104 (Rf)) have been artificially synthesized in fusion reactions at accelerators in minute quantities of a few single atoms. In addition, all isotopes of the transactinide elements are radioactive and decay with rather short half-lives. Nevertheless, it has been possible in some cases to investigate experimentally the chemical properties of transactinide elements and even to synthesize simple compounds. The experimental investigation of superheavy elements is especially intriguing, since theoretical calculations predict significant deviations from periodic trends due to the influence of strong relativistic effects. In this contribution first experiments with hassium (Hs, atomic number 108), copernicium (Cn, atomic number 112) and element 114 (eka-Pb) are reviewed.
Markov chain Monte Carlo estimation of quantum states
NASA Astrophysics Data System (ADS)
Diguglielmo, James; Messenger, Chris; Fiurášek, Jaromír; Hage, Boris; Samblowski, Aiko; Schmidt, Tabea; Schnabel, Roman
2009-03-01
We apply a Bayesian data analysis scheme known as the Markov chain Monte Carlo to the tomographic reconstruction of quantum states. This method yields a vector, known as the Markov chain, which contains the full statistical information concerning all reconstruction parameters including their statistical correlations with no a priori assumptions as to the form of the distribution from which it has been obtained. From this vector we can derive, e.g., the marginal distributions and uncertainties of all model parameters, and also of other quantities such as the purity of the reconstructed state. We demonstrate the utility of this scheme by reconstructing the Wigner function of phase-diffused squeezed states. These states possess non-Gaussian statistics and therefore represent a nontrivial case of tomographic reconstruction. We compare our results to those obtained through pure maximum-likelihood and Fisher information approaches.
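For readers unfamiliar with the approach, here is a minimal Metropolis-Hastings sketch of the general idea: draw a Markov chain over model parameters and read marginal estimates, uncertainties, and correlations directly off the chain. The Gaussian toy likelihood and synthetic data stand in for the quantum-state likelihood used by the authors; nothing here is their implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.5, scale=0.7, size=200)      # synthetic observations

def log_posterior(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    # Flat priors; Gaussian likelihood (up to an additive constant).
    return -0.5 * np.sum((data - mu) ** 2) / sigma**2 - data.size * log_sigma

theta = np.array([0.0, 0.0])
chain = []
for _ in range(20000):
    proposal = theta + rng.normal(scale=0.05, size=2)
    # Metropolis acceptance step.
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    chain.append(theta)

chain = np.array(chain[5000:])                       # drop burn-in
print("posterior means:", chain.mean(axis=0))
print("posterior std devs:", chain.std(axis=0))
print("correlation matrix:\n", np.corrcoef(chain.T))
```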
Thermodynamics of quantum information scrambling
NASA Astrophysics Data System (ADS)
Campisi, Michele; Goold, John
2017-06-01
Scrambling of quantum information can conveniently be quantified by so-called out-of-time-order correlators (OTOCs), i.e., correlators of the type ⟨[W_τ, V]†[W_τ, V]⟩, whose measurement presents a formidable experimental challenge. Here we report on a method for the measurement of OTOCs based on the so-called two-point measurement scheme developed in the field of nonequilibrium quantum thermodynamics. The scheme is of broader applicability than methods employed in current experiments and provides a clear-cut interpretation of quantum information scrambling in terms of nonequilibrium fluctuations of thermodynamic quantities, such as work and heat. Furthermore, we provide a numerical example on a spin chain which highlights the utility of our thermodynamic approach in understanding the differences between integrable and ergodic behaviors. We also discuss how the method can be used to extend the reach of current experiments.
Adiabatic regularization for gauge fields and the conformal anomaly
NASA Astrophysics Data System (ADS)
Chu, Chong-Sun; Koyama, Yoji
2017-03-01
Adiabatic regularization for quantum field theory in conformally flat spacetime is known for scalar and Dirac fermion fields. In this paper, we complete the construction by establishing the adiabatic regularization scheme for the gauge field. We show that the adiabatic expansion for the mode functions and the adiabatic vacuum can be defined in a similar way using Wentzel-Kramers-Brillouin-type (WKB-type) solutions as the scalar fields. As an application of the adiabatic method, we compute the trace of the energy momentum tensor and reproduce the known result for the conformal anomaly obtained by the other regularization methods. The availability of the adiabatic expansion scheme for the gauge field allows one to study various renormalized physical quantities of theories coupled to (non-Abelian) gauge fields in conformally flat spacetime, such as conformal supersymmetric Yang Mills, inflation, and cosmology.
Turbulent fluid motion IV-averages, Reynolds decomposition, and the closure problem
NASA Technical Reports Server (NTRS)
Deissler, Robert G.
1992-01-01
Ensemble, time, and space averages as applied to turbulent quantities are discussed, and pertinent properties of the averages are obtained. Those properties, together with Reynolds decomposition, are used to derive the averaged equations of motion and the one- and two-point moment or correlation equations. The terms in the various equations are interpreted. The closure problem of the averaged equations is discussed, and possible closure schemes are considered. Those schemes usually require an input of supplemental information unless the averaged equations are closed by calculating their terms by a numerical solution of the original unaveraged equations. The law of the wall for velocities and temperatures, the velocity- and temperature-defect laws, and the logarithmic laws for velocities and temperatures are derived. Various notions of randomness and their relation to turbulence are considered in light of ergodic theory.
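In standard notation (written out here for reference, not quoted from the report), the Reynolds decomposition and the resulting averaged momentum equation for incompressible flow read as follows; the unclosed Reynolds-stress term is precisely what gives rise to the closure problem discussed above.

```latex
% Reynolds decomposition and the averaged (incompressible) momentum equation;
% overbars denote averages, primes denote fluctuations.
u_i = \overline{u}_i + u_i', \qquad \overline{u_i'} = 0,
\qquad
\frac{\partial \overline{u}_i}{\partial t}
  + \overline{u}_j \frac{\partial \overline{u}_i}{\partial x_j}
  = -\frac{1}{\rho}\frac{\partial \overline{p}}{\partial x_i}
  + \frac{\partial}{\partial x_j}\!\left(\nu \frac{\partial \overline{u}_i}{\partial x_j}
      - \overline{u_i' u_j'}\right).
```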
Multigrid solution of compressible turbulent flow on unstructured meshes using a two-equation model
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.; Martinelli, L.
1994-01-01
The steady state solution of the system of equations consisting of the full Navier-Stokes equations and two turbulence equations has been obtained using a multigrid strategy on unstructured meshes. The flow equations and turbulence equations are solved in a loosely coupled manner. The flow equations are advanced in time using a multistage Runge-Kutta time-stepping scheme with a stability-bound local time step, while the turbulence equations are advanced with a point-implicit scheme using a time step which guarantees stability and positivity. Low-Reynolds-number modifications to the original two-equation model are incorporated in a manner which results in well-behaved equations for arbitrarily small wall distances. A variety of aerodynamic flows are solved, initializing all quantities with uniform freestream values. Rapid and uniform convergence rates for the flow and turbulence equations are observed.
NASA Technical Reports Server (NTRS)
Giles, G. L.; Rogers, J. L., Jr.
1982-01-01
The methodology used to implement structural sensitivity calculations into a major, general-purpose finite-element analysis system (SPAR) is described. This implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of SPAR are also discussed.
An Implicit Upwind Algorithm for Computing Turbulent Flows on Unstructured Grids
NASA Technical Reports Server (NTRS)
Anderson, W. Kyle; Bonhaus, Daryl L.
1994-01-01
An implicit, Navier-Stokes solution algorithm is presented for the computation of turbulent flow on unstructured grids. The inviscid fluxes are computed using an upwind algorithm and the solution is advanced in time using a backward-Euler time-stepping scheme. At each time step, the linear system of equations is approximately solved with a point-implicit relaxation scheme. This methodology provides a viable and robust algorithm for computing turbulent flows on unstructured meshes. Results are shown for subsonic flow over a NACA 0012 airfoil and for transonic flow over an RAE 2822 airfoil exhibiting a strong upper-surface shock. In addition, results are shown for three-element and four-element airfoil configurations. For the calculations, two one-equation turbulence models are utilized. For the NACA 0012 airfoil, a pressure distribution and force data are compared with other computational results as well as with experiment. Comparisons of computed pressure distributions and velocity profiles with experimental data are shown for the RAE airfoil and for the three-element configuration. For the four-element case, comparisons of surface pressure distributions with experiment are made. In general, the agreement between the computations and the experiment is good.
Analysis of the transient behavior of rubbing components
NASA Technical Reports Server (NTRS)
Quezdou, M. B.; Mullen, R. L.
1986-01-01
Finite element equations are developed for studying deformations and temperatures resulting from frictional heating in sliding systems. The formulation is done for linear steady-state motion in two dimensions. The equations include the effect of the velocity of the moving components, which gives rise to spurious oscillations in their Galerkin finite element solutions. A streamline upwind scheme is used to deal with this deficiency. The finite element program is then used to investigate frictional heating in gas path seals.
Theoretical Predictions of Cross-Sections of the Super-Heavy Elements
NASA Astrophysics Data System (ADS)
Bouriquet, B.; Kosenko, G.; Abe, Y.
The evaluation of the residue cross-sections of reactions synthesising superheavy elements has been achieved by the combination of the two-step model for fusion and the evaporation code (KEWPIE) for survival probability. The theoretical scheme of those calculations is presented, and some encouraging results are given, together with some difficulties. With this approach, the measured excitation functions of the 1n reactions producing elements with Z=108, 110, 111 and 112 are well reproduced. Thus, the model has been used to predict the cross-sections of the reactions leading to the formation of the elements with Z=113 and Z=114.
NASA Astrophysics Data System (ADS)
Bauer, Werner; Behrens, Jörn
2017-04-01
We present a locally conservative, low-order finite element (FE) discretization of the covariant 1D linear shallow-water equations written in split form (cf. [1]). The introduction of additional differential forms (DF) that build pairs with the original ones permits a splitting of these equations into topological momentum and continuity equations and metric-dependent closure equations that apply the Hodge star. Our novel discretization framework conserves this geometrical structure; in particular, it provides proper FE spaces for all DFs such that the differential operators (here gradient and divergence) hold in strong form. The discrete topological equations simply follow by trivial projections onto piecewise constant FE spaces without the need to integrate by parts. The discrete Hodge-star operators, representing the discretized metric equations, are realized by nontrivial Galerkin projections (GP). Here they follow by projections onto either a piecewise constant (GP0) or a piecewise linear (GP1) space. Our framework thus provides essentially three different schemes with significantly different behavior. The split scheme using GP1 twice is unstable and shares the same discrete dispersion relation and similar second-order convergence rates as the conventional P1-P1 FE scheme that approximates both velocity and height variables by piecewise linear spaces. The split scheme that applies both GP1 and GP0 is stable and shares the dispersion relation of the conventional P1-P0 FE scheme that approximates the velocity by a piecewise linear and the height by a piecewise constant space, with corresponding second- and first-order convergence rates. Since it exhibits second-order convergence rates for both velocity and height fields, the split GP1-GP0 scheme might, however, be considered a stable version of the conventional P1-P1 FE scheme. For the split scheme applying GP0 twice, we are not aware of a corresponding conventional formulation to compare with. Though exhibiting larger absolute error values, it shows similar convergence rates to the other split schemes, but it does not provide a satisfactory approximation of the dispersion relation, as short waves are propagated much too fast. Despite this, the finding of this new scheme illustrates the potential of our discretization framework as a toolbox for finding and studying new FE schemes based on new combinations of FE spaces. [1] Bauer, W. [2016], A new hierarchically-structured n-dimensional covariant form of rotating equations of geophysical fluid dynamics, GEM - International Journal on Geomathematics, 7(1), 31-101.
Metal-Assisted Laser-Induced Gas Plasma for the Direct Analysis of Powder Using Pulse CO2 Laser
NASA Astrophysics Data System (ADS)
Khumaeni, A.; Lie, Z. S.; Kurniawan, K. H.; Kagawa, K.
2017-01-01
Analysis of powder samples available only in small quantities has been carried out using a metal-assisted gas plasma induced by a transversely excited atmospheric (TEA) CO2 laser. The powder was homogeneously mixed with Si grease, and the mixed powder was painted on a metal subtarget. When the TEA CO2 laser was focused directly on the metal subtarget at atmospheric pressure in He gas, a high-temperature He gas plasma was induced. It is assumed that the powder particles were vaporized and thereby effectively atomized and excited in the gas plasma region. This method has been employed in the rapid analysis of elements in organic and inorganic powder samples present in small quantities. Trace elements of Cr and Pb were successfully detected in a supplement powder and in loam soil, respectively. The detection limit of Pb in loam soil was approximately 20 mg/kg.
Classification of ring artifacts for their effective removal using type adaptive correction schemes.
Anas, Emran Mohammad Abu; Lee, Soo Yeol; Hasan, Kamrul
2011-06-01
High resolution tomographic images acquired with a digital X-ray detector are often degraded by so-called ring artifacts. In this paper, a detailed analysis including the classification, detection and correction of these ring artifacts is presented. At first, a novel idea for classifying rings into two categories, namely type I and type II rings, is proposed based on their statistical characteristics. Defective detector elements and dusty scintillator screens result in type I rings, and mis-calibrated detector elements lead to type II rings. Unlike conventional approaches, we emphasize separate detection and correction schemes for each type of ring for their effective removal. For the detection of type I rings, the histogram of the responses of the detector elements is used, and a modified fast image inpainting algorithm is adopted to correct the responses of the defective pixels. On the other hand, to detect type II rings, a simple filtering scheme based on the fast Fourier transform (FFT) is first presented to smooth the sum curve derived from the type I ring corrected projection data. The difference between the sum curve and its smoothed version is then used to detect their positions. Then, to remove the constant bias with view angle suffered by the responses of the mis-calibrated detector elements, an estimated dc shift is subtracted from them. The performance of the proposed algorithm is evaluated using real micro-CT images and is compared with three recently reported algorithms. Simulation results demonstrate superior performance of the proposed technique as compared to the techniques reported in the literature. Copyright © 2011 Elsevier Ltd. All rights reserved.
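The type II detection-and-correction idea (FFT smoothing of the sum curve followed by subtraction of the estimated dc shift) can be sketched as follows. The toy sinogram, the single biased detector column, and the cut-off fraction are all invented for illustration; this is not the authors' implementation.

```python
import numpy as np

def correct_type2_rings(sinogram, keep_fraction=0.05):
    """sinogram: 2D array, rows = view angles, columns = detector elements."""
    sum_curve = sinogram.sum(axis=0)              # detector response summed over views
    spectrum = np.fft.rfft(sum_curve)
    cutoff = max(1, int(keep_fraction * spectrum.size))
    spectrum[cutoff:] = 0.0                       # FFT low-pass: keep the smooth trend
    smoothed = np.fft.irfft(spectrum, n=sum_curve.size)
    dc_shift = (sum_curve - smoothed) / sinogram.shape[0]
    return sinogram - dc_shift[np.newaxis, :]     # subtract constant bias per detector

# Toy sinogram: identical smooth columns plus one mis-calibrated detector element.
views, dets = 720, 512
sino = np.sin(np.linspace(0, np.pi, views))[:, None] * np.ones((views, dets))
sino[:, 200] += 0.3                               # constant bias -> type II ring
corrected = correct_type2_rings(sino)
print("bias before:", np.abs(sino[:, 200] - sino[:, 199]).max(),
      "after:", np.abs(corrected[:, 200] - corrected[:, 199]).max())
```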
NASA Astrophysics Data System (ADS)
Lassen, J.; Li, R.; Raeder, S.; Zhao, X.; Dekker, T.; Heggen, H.; Kunz, P.; Levy, C. D. P.; Mostanmand, M.; Teigelhöfer, A.; Ames, F.
2017-11-01
Developments at TRIUMF's isotope separator and accelerator (ISAC) resonance ionization laser ion source (RILIS) in the past years have concentrated on increased reliability for on-line beam delivery of radioactive isotopes to experiments, as well as on increasing the number of elements available through resonance ionization and on searching for ionization schemes with improved efficiency. The current status of these developments is given, together with a list of two-step laser ionization schemes implemented recently.
Transient and steady state viscoelastic rolling contact
NASA Technical Reports Server (NTRS)
Padovan, J.; Paramadilok, O.
1985-01-01
Based on moving total Lagrangian coordinates, a so-called traveling Hughes-type contact strategy is developed. Employing the modified contact scheme in conjunction with a traveling finite element strategy, an overall solution methodology is developed to handle transient and steady viscoelastic rolling contact. To verify the scheme, the results of both experimental and analytical benchmarking are presented. The experimental benchmarking includes the handling of rolling tires up to their upper bound behavior, namely the standing wave response.
Bolis, A; Cantwell, C D; Kirby, R M; Sherwin, S J
2014-01-01
We investigate the relative performance of a second-order Adams–Bashforth scheme and second-order and fourth-order Runge–Kutta schemes when time stepping a 2D linear advection problem discretised using a spectral/hp element technique for a range of different mesh sizes and polynomial orders. Numerical experiments explore the effects of short (two wavelengths) and long (32 wavelengths) time integration for sets of uniform and non-uniform meshes. The choice of time-integration scheme and discretisation together fixes a CFL limit that imposes a restriction on the maximum time step that can be taken to ensure numerical stability. The number of steps, together with the order of the scheme, affects not only the runtime but also the accuracy of the solution. Through numerical experiments, we systematically highlight the relative effects of spatial resolution and choice of time integration on performance and provide general guidelines on how best to achieve the minimal execution time in order to obtain a prescribed solution accuracy. The significant role played by higher polynomial orders in reducing CPU time while preserving accuracy becomes more evident, especially for uniform meshes, compared with what has been typically considered when studying this type of problem. © 2014 The Authors. International Journal for Numerical Methods in Fluids published by John Wiley & Sons, Ltd. PMID:25892840
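The basic trade-off can be illustrated with a back-of-the-envelope count of time steps and right-hand-side evaluations under a CFL restriction. The CFL numbers below are generic textbook-style assumptions, not the scheme- and discretisation-specific limits measured in the paper.

```python
# Count steps and RHS evaluations for AB2 versus RK4 under an assumed CFL limit.
dx = 1.0 / 256            # smallest element spacing (illustrative)
a = 1.0                   # advection speed
T = 32.0                  # long integration: 32 wavelengths

schemes = {
    "AB2": {"cfl": 0.5, "rhs_evals_per_step": 1},   # assumed stability limit
    "RK4": {"cfl": 2.0, "rhs_evals_per_step": 4},   # assumed stability limit
}

for name, s in schemes.items():
    dt = s["cfl"] * dx / a
    steps = int(T / dt) + 1
    work = steps * s["rhs_evals_per_step"]
    print(f"{name}: dt = {dt:.2e}, steps = {steps}, RHS evaluations = {work}")
```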
The construction of causal schemes: learning mechanisms at the knowledge level.
diSessa, Andrea A
2014-06-01
This work uses microgenetic study of classroom learning to illuminate (1) the role of pre-instructional student knowledge in the construction of normative scientific knowledge, and (2) the learning mechanisms that drive change. Three enactments of an instructional sequence designed to lead to a scientific understanding of thermal equilibration are used as data sources. Only data from a scaffolded student inquiry preceding introduction of a normative model were used. Hence, the study involves nearly autonomous student learning. In two classes, students developed stable and socially shared explanations ("causal schemes") for understanding thermal equilibration. One case resulted in a near-normative understanding, while the other resulted in a non-normative "alternative conception." The near-normative case seems to be a particularly clear example wherein the constructed causal scheme is a composition of previously documented naïve conceptions. Detailed prior description of these naive elements allows a much better than usual view of the corresponding details of change during construction of the new scheme. A list of candidate mechanisms that can account for observed change is presented. The non-normative construction seems also to be a composition, albeit of a different structural form, using a different (although similar) set of naïve elements. This article provides one of very few high-resolution process analyses showing the productive use of naïve knowledge in learning. © 2014 Cognitive Science Society, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Z.; Department of Applied Mathematics and Mechanics, University of Science and Technology Beijing, Beijing 100083; Lin, P.
In this paper, we investigate numerically a diffuse interface model for the Navier–Stokes equation with fluid–fluid interface when the fluids have different densities [48]. Under minor reformulation of the system, we show that there is a continuous energy law underlying the system, assuming that all variables have reasonable regularities. It is shown in the literature that an energy law preserving method will perform better for multiphase problems. Thus for the reformulated system, we design a C0 finite element method and a special temporal scheme where the energy law is preserved at the discrete level. Such a discrete energy law (almost the same as the continuous energy law) for this variable density two-phase flow model has never been established before with C0 finite elements. A Newton method is introduced to linearise the highly non-linear system of our discretization scheme. Some numerical experiments are carried out using the adaptive mesh to investigate the scenario of coalescing and rising drops with differing density ratio. The snapshots for the evolution of the interface together with the adaptive mesh at different times are presented to show that the evolution, including the break-up/pinch-off of the drop, can be handled smoothly by our numerical scheme. The discrete energy functional for the system is examined to show that the energy law at the discrete level is preserved by our scheme.
Femtogram-scale photothermal spectroscopy of explosive molecules on nanostrings.
Biswas, T S; Miriyala, N; Doolin, C; Liu, X; Thundat, T; Davis, J P
2014-11-18
We demonstrate detection of femtogram-scale quantities of the explosive molecule 1,3,5-trinitroperhydro-1,3,5-triazine (RDX) via combined nanomechanical photothermal spectroscopy and mass desorption. Photothermal spectroscopy provides a spectroscopic fingerprint of the molecule, which is unavailable using mass adsorption/desorption alone. Our measurement, based on thermomechanical measurement of silicon nitride nanostrings, represents the highest mass resolution ever demonstrated via nanomechanical photothermal spectroscopy. This detection scheme is quick, label-free, and is compatible with parallelized molecular analysis of multicomponent targets.
A multiscale approach to accelerate pore-scale simulation of porous electrodes
NASA Astrophysics Data System (ADS)
Zheng, Weibo; Kim, Seung Hyun
2017-04-01
A new method to accelerate pore-scale simulation of porous electrodes is presented. The method combines the macroscopic approach with pore-scale simulation by decomposing a physical quantity into macroscopic and local variations. The multiscale method is applied to the potential equation in pore-scale simulation of a Proton Exchange Membrane Fuel Cell (PEMFC) catalyst layer, and validated with the conventional approach for pore-scale simulation. Results show that the multiscale scheme substantially reduces the computational cost without sacrificing accuracy.
Higher order solution of the Euler equations on unstructured grids using quadratic reconstruction
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Frederickson, Paul O.
1990-01-01
High order accurate finite-volume schemes for solving the Euler equations of gasdynamics are developed. Central to the development of these methods are the construction of a k-exact reconstruction operator given cell-averaged quantities and the use of high order flux quadrature formulas. General polygonal control volumes (with curved boundary edges) are considered. The formulations presented make no explicit assumption as to complexity or convexity of control volumes. Numerical examples are presented for Ringleb flow to validate the methodology.
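The reconstruction idea can be illustrated for k = 1 with a least-squares gradient fit from neighbouring cell averages. This is only a sketch with made-up centroids and a hand-picked neighbour set, not the k-exact operator of the paper, which also handles higher polynomial degrees and curved control volumes.

```python
# Least-squares linear (1-exact) reconstruction from cell-averaged quantities.
import numpy as np

rng = np.random.default_rng(11)
centroids = rng.uniform(size=(20, 2))                        # cell centroids
u_bar = 1.0 + 2.0 * centroids[:, 0] - 3.0 * centroids[:, 1]  # averages of a linear field

def reconstruct_gradient(i, neighbours):
    """Fit grad(u) in cell i from the averages of its neighbouring cells."""
    d = centroids[neighbours] - centroids[i]                 # displacement vectors
    du = u_bar[neighbours] - u_bar[i]
    grad, *_ = np.linalg.lstsq(d, du, rcond=None)
    return grad

# For a linear field the reconstruction recovers the exact gradient (~ [2, -3]).
print(reconstruct_gradient(0, [1, 2, 3, 4, 5]))
```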
Numerical Simulations of Reacting Flows Using Asynchrony-Tolerant Schemes for Exascale Computing
NASA Astrophysics Data System (ADS)
Cleary, Emmet; Konduri, Aditya; Chen, Jacqueline
2017-11-01
Communication and data synchronization between processing elements (PEs) are likely to pose a major challenge in scalability of solvers at the exascale. Recently developed asynchrony-tolerant (AT) finite difference schemes address this issue by relaxing communication and synchronization between PEs at a mathematical level while preserving accuracy, resulting in improved scalability. The performance of these schemes has been validated for simple linear and nonlinear homogeneous PDEs. However, many problems of practical interest are governed by highly nonlinear PDEs with source terms, whose solution may be sensitive to perturbations caused by communication asynchrony. The current work applies the AT schemes to combustion problems with chemical source terms, yielding a stiff system of PDEs with nonlinear source terms highly sensitive to temperature. Examples shown will use single-step and multi-step CH4 mechanisms for 1D premixed and nonpremixed flames. Error analysis will be discussed both in physical and spectral space. Results show that additional errors introduced by the AT schemes are negligible and the schemes preserve their accuracy. We acknowledge funding from the DOE Computational Science Graduate Fellowship administered by the Krell Institute.
Hybrid inversions of CO2 fluxes at regional scale applied to network design
NASA Astrophysics Data System (ADS)
Kountouris, Panagiotis; Gerbig, Christoph; Koch, Frank-Thomas
2013-04-01
Long term observations of atmospheric greenhouse gas measuring stations, located at representative regions over the continent, improve our understanding of greenhouse gas sources and sinks. These mixing ratio measurements can be linked to surface fluxes by atmospheric transport inversions. Within the upcoming years new stations are to be deployed, which requires decision making tools with respect to the location and the density of the network. We are developing a method to assess potential greenhouse gas observing networks in terms of their ability to recover specific target quantities. As target quantities we use CO2 fluxes aggregated to specific spatial and temporal scales. We introduce a high resolution inverse modeling framework, which attempts to combine advantages from pixel based inversions with those of a carbon cycle data assimilation system (CCDAS). The hybrid inversion system consists of the Lagrangian transport model STILT, the diagnostic biosphere model VPRM and a Bayesian inversion scheme. We aim to retrieve the spatiotemporal distribution of net ecosystem exchange (NEE) at a high spatial resolution (10 km x 10 km) by inverting for spatially and temporally varying scaling factors for gross ecosystem exchange (GEE) and respiration (R) rather than solving for the fluxes themselves. Thus the state space includes parameters for controlling photosynthesis and respiration, but unlike in a CCDAS it allows for spatial and temporal variations, which can be expressed as NEE(x,y,t) = λG(x,y,t) GEE(x,y,t) + λR(x,y,t) R(x,y,t) . We apply spatially and temporally correlated uncertainties by using error covariance matrices with non-zero off-diagonal elements. Synthetic experiments will test our system and select the optimal a priori error covariance by using different spatial and temporal correlation lengths on the error statistics of the a priori covariance and comparing the optimized fluxes against the 'known truth'. As 'known truth' we use independent fluxes generated from a different biosphere model (BIOME-BGC). Initially we perform single-station inversions for Ochsenkopf tall tower located in Germany. Further expansion of the inversion framework to multiple stations and its application to network design will address the questions of how well a set of network stations can constrain a given target quantity, and whether there are objective criteria to select an optimal configuration for new stations that maximizes the uncertainty reduction.
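The Bayesian synthesis step behind such an inversion can be sketched as a linear-Gaussian update: given prior scaling factors with a spatially correlated covariance B, observations y with error covariance R, and a linearised transport operator H, compute the posterior state and the resulting uncertainty reduction. All dimensions and matrices below are invented placeholders, not the STILT/VPRM system.

```python
import numpy as np

rng = np.random.default_rng(3)
n_state, n_obs = 50, 8

x_prior = np.ones(n_state)                      # prior scaling factors (lambda)
# Prior covariance with non-zero off-diagonal (spatially correlated) elements.
d = np.abs(np.subtract.outer(np.arange(n_state), np.arange(n_state)))
B = 0.2**2 * np.exp(-d / 10.0)
H = rng.normal(size=(n_obs, n_state)) * 0.1     # linearised transport operator
R = np.diag(np.full(n_obs, 0.5**2))             # observation error covariance
y = H @ (x_prior + 0.3) + rng.normal(scale=0.5, size=n_obs)  # synthetic data

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)    # Kalman-type gain
x_post = x_prior + K @ (y - H @ x_prior)
B_post = B - K @ H @ B

reduction = 1.0 - np.sqrt(np.diag(B_post)) / np.sqrt(np.diag(B))
print("mean uncertainty reduction:", reduction.mean())
```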
Sherwood, K B; Lewis, G J
2000-12-01
In recent years notions of self-help and voluntarism have emerged as key elements in the delivery of services in rural England. This paper explores these themes by reference to 'Rural Wheels', a voluntary medical transport scheme in rural Northamptonshire, introduced to overcome the closure of branch surgeries and to provide access to a new medical centre. By focusing upon the organisation and operations of the scheme, the paper highlights the important role it plays in the welfare of rural residents, particularly elderly women. Yet, because effectively it is run by a small core group, the paper raises questions not just about the viability of this scheme but also about the increasing commitment of central government to the voluntary sector as a means of delivering health care to rural people.
NASA Technical Reports Server (NTRS)
Shyy, W.; Thakur, S.; Udaykumar, H. S.
1993-01-01
A high accuracy convection scheme using a sequential solution technique has been developed and applied to simulate the longitudinal combustion instability and its active control. The scheme has been devised in the spirit of the Total Variation Diminishing (TVD) concept with special source term treatment. Due to the substantial heat release effect, a clear delineation of the key elements employed by the scheme, i.e., the adjustable damping factor and the source term treatment, has been made. Compared with the first-order upwind scheme previously utilized, the present results exhibit less damping and are free from spurious oscillations, offering improved quantitative accuracy while confirming the spectral analysis reported earlier. A simple feedback type of active control has been found to be capable of enhancing or attenuating the magnitude of the combustion instability.
NASA Technical Reports Server (NTRS)
Engquist, B. E. (Editor); Osher, S. (Editor); Somerville, R. C. J. (Editor)
1985-01-01
Papers are presented on such topics as the use of semi-Lagrangian advective schemes in meteorological modeling; computation with high-resolution upwind schemes for hyperbolic equations; dynamics of flame propagation in a turbulent field; a modified finite element method for solving the incompressible Navier-Stokes equations; computational fusion magnetohydrodynamics; and a nonoscillatory shock capturing scheme using flux-limited dissipation. Consideration is also given to the use of spectral techniques in numerical weather prediction; numerical methods for the incorporation of mountains in atmospheric models; techniques for the numerical simulation of large-scale eddies in geophysical fluid dynamics; high-resolution TVD schemes using flux limiters; upwind-difference methods for aerodynamic problems governed by the Euler equations; and an MHD model of the earth's magnetosphere.
On numerical reconstructions of lithographic masks in DUV scatterometry
NASA Astrophysics Data System (ADS)
Henn, M.-A.; Model, R.; Bär, M.; Wurm, M.; Bodermann, B.; Rathsfeld, A.; Gross, H.
2009-06-01
The solution of the inverse problem in scatterometry employing deep ultraviolet light (DUV) is discussed, i.e. we consider the determination of periodic surface structures from light diffraction patterns. With decreasing dimensions of the structures on photolithography masks and wafers, increasing demands on the required metrology techniques arise. Scatterometry as a non-imaging indirect optical method is applied to periodic line structures in order to determine the sidewall angles, heights, and critical dimensions (CD), i.e., the top and bottom widths. The latter quantities are typically in the range of tens of nanometers. All these angles, heights, and CDs are the fundamental figures for evaluating the quality of the manufacturing process. To measure those quantities a DUV scatterometer is used, which typically operates at a wavelength of 193 nm. The diffraction of light by periodic 2D structures can be simulated using the finite element method for the Helmholtz equation. The corresponding inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Fixing the class of gratings and the set of measurements, this inverse problem reduces to a finite dimensional nonlinear operator equation. Reformulating the problem as an optimization problem, a vast number of numerical schemes can be applied. Our tool is a sequential quadratic programming (SQP) variant of the Gauss-Newton iteration. In a first step, in which we use a simulated data set, we investigate how accurately the geometrical parameters of an EUV mask can be reconstructed using light in the DUV range. We then determine the expected uncertainties of geometric parameters by reconstructing from simulated input data perturbed by noise representing the estimated uncertainties of input data. In the last step, we use the measurement data obtained from the new DUV scatterometer at PTB to determine the geometrical parameters of a typical EUV mask with our reconstruction algorithm. The results are compared to the outcome of investigations with two alternative methods, namely EUV scatterometry and SEM measurements.
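A toy version of the reconstruction step is sketched below: a smooth, made-up forward model stands in for the finite-element Helmholtz solver, and SciPy's least_squares (a trust-region Gauss-Newton-type solver, used here instead of the authors' SQP code) fits the geometry parameters to noisy synthetic data.

```python
import numpy as np
from scipy.optimize import least_squares

def forward(p):
    """Hypothetical forward model: geometry parameters -> diffraction data."""
    cd, height, angle = p
    m = np.arange(1, 9)                          # "diffraction orders" (illustrative)
    return np.sin(0.02 * cd * m) + np.exp(-height / (20.0 * m)) + 0.01 * angle * m

p_true = np.array([45.0, 60.0, 87.0])            # CD [nm], height [nm], angle [deg]
data = forward(p_true) + 1e-4 * np.random.default_rng(5).normal(size=8)

# Start from a perturbed guess and refine by nonlinear least squares.
result = least_squares(lambda p: forward(p) - data, x0=[40.0, 50.0, 80.0])
print("recovered parameters:", result.x)
```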
Mechanical balance laws for fully nonlinear and weakly dispersive water waves
NASA Astrophysics Data System (ADS)
Kalisch, Henrik; Khorsand, Zahra; Mitsotakis, Dimitrios
2016-10-01
The Serre-Green-Naghdi system is a coupled, fully nonlinear system of dispersive evolution equations which approximates the full water wave problem. The system is known to describe accurately the wave motion at the surface of an incompressible inviscid fluid in the case when the fluid flow is irrotational and two-dimensional. The system is an extension of the well known shallow-water system to the situation where the waves are long, but not so long that dispersive effects can be neglected. In the current work, the focus is on deriving mass, momentum and energy densities and fluxes associated with the Serre-Green-Naghdi system. These quantities arise from imposing balance equations of the same asymptotic order as the evolution equations. In the case of an even bed, the conservation equations are satisfied exactly by the solutions of the Serre-Green-Naghdi system. The case of variable bathymetry is more complicated, with mass and momentum conservation satisfied exactly, and energy conservation satisfied only in a global sense. In all cases, the quantities found here reduce correctly to the corresponding counterparts in both the Boussinesq and the shallow-water scaling. One consequence of the present analysis is that the energy loss appearing in the shallow-water theory of undular bores is fully compensated by the emergence of oscillations behind the bore front. The situation is analyzed numerically by approximating solutions of the Serre-Green-Naghdi equations using a finite-element discretization coupled with an adaptive Runge-Kutta time integration scheme, and it is found that the energy is indeed conserved nearly to machine precision. As a second application, the shoaling of solitary waves on a plane beach is analyzed. It appears that the Serre-Green-Naghdi equations are capable of predicting both the shape of the free surface and the evolution of kinetic and potential energy with good accuracy in the early stages of shoaling.
Time integration algorithms for the two-dimensional Euler equations on unstructured meshes
NASA Technical Reports Server (NTRS)
Slack, David C.; Whitaker, D. L.; Walters, Robert W.
1994-01-01
Explicit and implicit time integration algorithms for the two-dimensional Euler equations on unstructured grids are presented. Both cell-centered and cell-vertex finite volume upwind schemes utilizing Roe's approximate Riemann solver are developed. For the cell-vertex scheme, a four-stage Runge-Kutta time integration, a four-stage Runge-Kutta time integration with implicit residual averaging, a point Jacobi method, a symmetric point Gauss-Seidel method, and two methods utilizing preconditioned sparse matrix solvers are presented. For the cell-centered scheme, a Runge-Kutta scheme, an implicit tridiagonal relaxation scheme modeled after line Gauss-Seidel, a fully implicit lower-upper (LU) decomposition, and a hybrid scheme utilizing both Runge-Kutta and LU methods are presented. A reverse Cuthill-McKee renumbering scheme is employed for the direct solver to decrease CPU time by reducing the fill of the Jacobian matrix. A comparison of the various time integration schemes is made for both first-order and higher-order accurate solutions using several mesh sizes; higher-order accuracy is achieved by using multidimensional monotone linear reconstruction procedures. The results obtained for a transonic flow over a circular arc suggest that the preconditioned sparse matrix solvers perform better than the other methods as the number of elements in the mesh increases.
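As an illustration of the explicit option, here is a minimal sketch of a low-storage four-stage Runge-Kutta update of the kind commonly used with finite-volume Euler solvers; the residual function and the stage coefficients are generic assumptions, not taken from the paper.

    import numpy as np

    def rk4_stage_update(u, residual, dt, alphas=(0.25, 1.0/3.0, 0.5, 1.0)):
        """One low-storage four-stage Runge-Kutta step for du/dt = -R(u).

        `residual` maps the conserved-variable array to R(u); the stage coefficients
        are the classic Jameson-style choice, used here only for illustration.
        """
        u0 = u.copy()
        for a in alphas:
            u = u0 - a * dt * residual(u)   # each stage restarts from u0 (low storage)
        return u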
Discontinuous Galerkin Finite Element Method for Parabolic Problems
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.
2004-01-01
In this paper, we develop a time discretization scheme and its corresponding spatial discretization scheme, based upon the assumption of a certain weak singularity of $\|u_t(t)\|_{L_2(\Omega)} = \|u_t\|_2$, for the discontinuous Galerkin finite element method for one-dimensional parabolic problems. Optimal convergence rates in both the time and spatial variables are obtained. A discussion of an automatic time-step control method is also included.
Proton-Proton Fusion and Tritium β Decay from Lattice Quantum Chromodynamics
NASA Astrophysics Data System (ADS)
Savage, Martin J.; Shanahan, Phiala E.; Tiburzi, Brian C.; Wagman, Michael L.; Winter, Frank; Beane, Silas R.; Chang, Emmanuel; Davoudi, Zohreh; Detmold, William; Orginos, Kostas; Nplqcd Collaboration
2017-08-01
The nuclear matrix element determining the pp → d e⁺ν fusion cross section and the Gamow-Teller matrix element contributing to tritium β decay are calculated with lattice quantum chromodynamics for the first time. Using a new implementation of the background field method, these quantities are calculated at the SU(3) flavor-symmetric value of the quark masses, corresponding to a pion mass of mπ ≈ 806 MeV. The Gamow-Teller matrix element in tritium is found to be 0.979(03)(10) at these quark masses, which is within 2σ of the experimental value. Assuming that the short-distance correlated two-nucleon contributions to the matrix element (meson-exchange currents) depend only mildly on the quark masses, as seen for the analogous magnetic interactions, the calculated pp → d e⁺ν transition matrix element leads to a fusion cross section at the physical quark masses that is consistent with its currently accepted value. Moreover, the leading two-nucleon axial counterterm of pionless effective field theory is determined to be L_{1,A} = 3.9(0.2)(1.0)(0.4)(0.9) fm³ at a renormalization scale set by the physical pion mass, also agreeing within the accepted phenomenological range. This work concretely demonstrates that weak transition amplitudes in few-nucleon systems can be studied directly from the fundamental quark and gluon degrees of freedom and opens the way for subsequent investigations of many important quantities in nuclear physics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimitrov, Vesselin, E-mail: vesselin@uctm.edu; Komatsu, Takayuki, E-mail: komatsu@mst.nagaokaut.ac.jp
2012-12-15
A suitable relationship between free-cation polarizability and electronegativity of elements in different valence states and with the most common coordination numbers has been searched for on the basis of the similarity in physical nature of both quantities. In general, the cation polarizability increases with decreasing element electronegativity. A systematic periodic change in the polarizability against the electronegativity has been observed in the isoelectronic series. It has been found that generally the optical basicity increases and the single bond strength of simple oxides decreases with decreasing electronegativity. The observed trends have been discussed on the basis of the electron donation ability of the oxide ions and the type of chemical bonding in simple oxides. - Graphical abstract: This figure shows the single bond strength of simple oxides as a function of element electronegativity. A remarkable correlation exists between these independently obtained quantities. High values of electronegativity correspond to high values of single bond strength and vice versa. It is obvious that the observed trend in this figure is closely related to the type of chemical bonding in the corresponding oxide. Highlights: • A suitable relationship between free-cation polarizability and electronegativity of elements was searched for. • The cation polarizability increases with decreasing element electronegativity. • The single bond strength of simple oxides decreases with decreasing electronegativity. • The observed trends were discussed on the basis of the type of chemical bonding in simple oxides.
Results of the GABLS3 diurnal-cycle benchmark for wind energy applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodrigo, J. Sanz; Allaerts, D.; Avila, M.
We present results of the GABLS3 model intercomparison benchmark revisited for wind energy applications. The case consists of a diurnal cycle, measured at the 200-m tall Cabauw tower in the Netherlands, including a nocturnal low-level jet. The benchmark includes a sensitivity analysis of WRF simulations using two input meteorological databases and five planetary boundary-layer schemes. A reference set of mesoscale tendencies is used to drive microscale simulations using RANS k-ϵ and LES turbulence models. The validation is based on rotor-based quantities of interest. Cycle-integrated mean absolute errors are used to quantify model performance. The results of the benchmark are used to discuss input uncertainties from mesoscale modelling, different meso-micro coupling strategies (online vs offline) and consistency between RANS and LES codes when dealing with boundary-layer mean flow quantities. Altogether, all the microscale simulations produce a consistent coupling with mesoscale forcings.
Experimental and Theoretical Study of Propeller Spinner/Shank Interference. M.S. Thesis
NASA Technical Reports Server (NTRS)
Cornell, C. C.
1986-01-01
A fundamental experimental and theoretical investigation into the aerodynamic interference associated with propeller spinner and shank regions was conducted. The research program involved a theoretical assessment of solutions previously proposed, followed by a systematic experimental study to supplement the existing data base. As a result, a refined computational procedure was established for prediction of interference effects in terms of interference drag, resolved into propeller thrust and torque components. These quantities were examined with attention to engineering parameters such as two spinner fineness ratios, three blade shank forms, and two/three/four/six/eight blades. Consideration of the physics of the phenomena aided in the logical deduction of two individual interference quantities (cascade effects and spinner/shank juncture interference). These interference effects were semi-empirically modeled using existing theories and placed into a form compatible with an existing propeller performance scheme, which provided the basis for examples of application.
A fast and efficient segmentation scheme for cell microscopic image.
Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H
2007-04-27
Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes have good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial since it accounts for most of the processing time necessary to segment an image. The main contribution of this work is focused on how to reduce the complexity of decision functions produced by support vector machines (SVM) while preserving recognition rate. Vector quantization is used in order to reduce the inherent redundancy present in huge pixel databases (i.e. images with expert pixel segmentation). Hybrid color space design is also used in order to improve the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between recognition rate and processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible. Moreover, posterior class pixel probability estimation is easy to compute with Platt's method. A new segmentation scheme using probabilistic pixel classification has then been developed. This scheme has several free parameters whose automatic selection must be dealt with, but existing criteria for evaluating segmentation quality are not well adapted to cell segmentation, especially when comparison with expert pixel segmentation must be achieved. Another important contribution of this paper is the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimisation of the new cell segmentation quality criterion produces efficient cell segmentation.
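As an illustration of the general pipeline (vector quantization of the expert-labelled pixel database, followed by an SVM whose decision values are mapped to posterior probabilities with Platt scaling), here is a minimal scikit-learn sketch; the feature extraction and hybrid color space are placeholders, not the authors' code.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def train_pixel_classifier(pixels, labels, n_codewords=256):
        """Reduce a large labelled pixel set by per-class vector quantization, then
        fit an SVM with Platt-style probability estimates (probability=True)."""
        codebooks, vq_labels = [], []
        for cls in np.unique(labels):
            km = KMeans(n_clusters=n_codewords, n_init=4, random_state=0)
            km.fit(pixels[labels == cls])            # quantize each class separately
            codebooks.append(km.cluster_centers_)
            vq_labels.append(np.full(n_codewords, cls))
        X = np.vstack(codebooks)
        y = np.concatenate(vq_labels)
        clf = SVC(kernel="rbf", probability=True).fit(X, y)
        return clf   # clf.predict_proba(new_pixels) gives posterior class probabilities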
Finite Element Evaluation of Pervious Concrete Pavement for Roadway Shoulders
DOT National Transportation Integrated Search
2011-10-01
Stormwater quantity control is an important issue that needs to be addressed in roadway and ancillary transportation facility design. Pervious concrete has provided an effective solution for storm runoff for parking lots, sidewalks, bike trails, an...
Moving template analysis of crack growth. 1: Procedure development
NASA Astrophysics Data System (ADS)
Padovan, Joe; Guo, Y. H.
1994-06-01
Based on a moving template procedure, this two part series will develop a method to follow the crack tip physics in a self-adaptive manner which provides a uniformly accurate prediction of crack growth. For multiple crack environments, this is achieved by attaching a moving template to each crack tip. The templates are each individually oriented to follow the associated growth orientation and rate. In this part, the essentials of the procedure are derived for application to fatigue crack environments. Overall the scheme derived possesses several hierarchical levels, i.e. the global model, the interpolatively tied moving template, and a multilevel element death option to simulate the crack wake. To speed up computation, the hierarchical polytree scheme is used to reorganize the global stiffness inversion process. In addition to developing the various features of the scheme, the accuracy of predictions for various crack lengths is also benchmarked. Part 2 extends the scheme to multiple crack problems. Extensive benchmarking is also presented to verify the scheme.
A Linearized Prognostic Cloud Scheme in NASA's Goddard Earth Observing System Data Assimilation Tools
NASA Technical Reports Server (NTRS)
Holdaway, Daniel; Errico, Ronald M.; Gelaro, Ronald; Kim, Jong G.; Mahajan, Rahul
2015-01-01
A linearized prognostic cloud scheme has been developed to accompany the linearized convection scheme recently implemented in NASA's Goddard Earth Observing System data assimilation tools. The linearization, developed from the nonlinear cloud scheme, treats cloud variables prognostically so they are subject to linearized advection, diffusion, generation, and evaporation. Four linearized cloud variables are modeled: the ice and water phases of clouds generated by large-scale condensation and, separately, by detraining convection. For each species, the scheme models sources, sublimation, evaporation, and autoconversion. Large-scale, anvil, and convective species of precipitation are modeled and evaporated. The cloud scheme exhibits linearity and realistic perturbation growth, except around the generation of clouds through large-scale condensation. Discontinuities and steep gradients are prevalent here, and severe problems occur in the calculation of cloud fraction. For data assimilation applications this poor behavior is controlled by replacing this part of the scheme with a perturbation model. For observation impacts, where efficiency is less of a concern, a filtering is developed that examines the Jacobian. The replacement scheme is only invoked if Jacobian elements or eigenvalues violate a series of tuned constants. The linearized prognostic cloud scheme is tested by comparing the linear and nonlinear perturbation trajectories for 6-, 12-, and 24-h forecast times. The tangent linear model performs well and perturbations of clouds are well captured for the lead times of interest.
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; MacMurdy, Dale E.; Kapania, Rakesh K.
1994-01-01
Strong interactions between flow about an aircraft wing and the wing structure can result in aeroelastic phenomena which significantly impact aircraft performance. Time-accurate methods for solving the unsteady Navier-Stokes equations have matured to the point where reliable results can be obtained with reasonable computational costs for complex non-linear flows with shock waves, vortices and separations. The ability to combine such a flow solver with a general finite element structural model is key to an aeroelastic analysis in these flows. Earlier work involved time-accurate integration of modal structural models based on plate elements. A finite element model was developed to handle three-dimensional wing boxes, and incorporated into the flow solver without the need for modal analysis. Static condensation is performed on the structural model to reduce the structural degrees of freedom for the aeroelastic analysis. Direct incorporation of the finite element wing-box structural model with the flow solver requires finding adequate methods for transferring aerodynamic pressures to the structural grid and returning deflections to the aerodynamic grid. Several schemes were explored for handling the grid-to-grid transfer of information. The complex, built-up nature of the wing-box complicated this transfer. Aeroelastic calculations for a sample wing in transonic flow comparing various simple transfer schemes are presented and discussed.
Combining states without scale hierarchies with ordered parton showers
Fischer, Nadine; Prestel, Stefan
2017-09-12
Here, we present a parameter-free scheme to combine fixed-order multi-jet results with parton-shower evolution. The scheme produces jet cross sections with leading-order accuracy in the complete phase space of multiple emissions, resumming large logarithms when appropriate, while not arbitrarily enforcing ordering on momentum configurations beyond the reach of the parton-shower evolution equation. This then requires the development of a matrix-element correction scheme for complex phase-spaces including ordering conditions as well as a systematic scale-setting procedure for unordered phase-space points. Our algorithm does not require a merging-scale parameter. We implement the new method in the Vincia framework and compare to LHC data.
Multigrid solutions to quasi-elliptic schemes
NASA Technical Reports Server (NTRS)
Brandt, A.; Taasan, S.
1985-01-01
Quasi-elliptic schemes arise from central differencing or finite element discretization of elliptic systems with odd order derivatives on non-staggered grids. They are somewhat unstable and less accurate than corresponding staggered-grid schemes. When usual multigrid solvers are applied to them, the asymptotic algebraic convergence is necessarily slow. Nevertheless, it is shown by mode analyses and numerical experiments that the usual FMG algorithm is very efficient in solving quasi-elliptic equations to the level of truncation errors. Also, a new type of multigrid algorithm is presented, mode analyzed and tested, for which even the asymptotic algebraic convergence is fast. The essence of that algorithm is applicable to other kinds of problems, including highly indefinite ones.
TAP 2: A finite element program for thermal analysis of convectively cooled structures
NASA Technical Reports Server (NTRS)
Thornton, E. A.
1980-01-01
A finite element computer program (TAP 2) for steady-state and transient thermal analyses of convectively cooled structures is presented. The program has a finite element library of six elements: two conduction/convection elements to model heat transfer in a solid, two convection elements to model heat transfer in a fluid, and two integrated conduction/convection elements to represent combined heat transfer in tubular and plate/fin fluid passages. Nonlinear thermal analysis due to temperature-dependent thermal parameters is performed using the Newton-Raphson iteration method. Transient analyses are performed using an implicit Crank-Nicolson time integration scheme with consistent or lumped capacitance matrices as an option. Program output includes nodal temperatures and element heat fluxes. Pressure drops in fluid passages may be computed as an option. User instructions and sample problems are presented in appendixes.
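For context, the implicit Crank-Nicolson update mentioned above amounts to a single linear solve per step. The sketch below assumes constant capacitance and conductance matrices and a given load vector; it is an illustration of the time-integration idea, not the TAP 2 code.

    import numpy as np

    def crank_nicolson_step(T, C, K, Q, dt):
        """One Crank-Nicolson step for the semi-discrete heat equation C dT/dt + K T = Q.

        C: capacitance matrix (consistent or lumped), K: conductance matrix,
        Q: nodal load vector, all assumed constant over the step.
        """
        A = C / dt + 0.5 * K                  # left-hand-side operator
        b = (C / dt - 0.5 * K) @ T + Q        # right-hand side uses the old temperatures
        return np.linalg.solve(A, b)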
Transport on Riemannian manifold for functional connectivity-based classification.
Ng, Bernard; Dressler, Martin; Varoquaux, Gaël; Poline, Jean Baptiste; Greicius, Michael; Thirion, Bertrand
2014-01-01
We present a Riemannian approach for classifying fMRI connectivity patterns before and after intervention in longitudinal studies. A fundamental difficulty with using connectivity as features is that covariance matrices live on the positive semi-definite cone, which renders their elements inter-related. The implicit independent feature assumption in most classifier learning algorithms is thus violated. In this paper, we propose a matrix whitening transport for projecting the covariance estimates onto a common tangent space to reduce the statistical dependencies between their elements. We show on real data that our approach provides significantly higher classification accuracy than directly using Pearson's correlation. We further propose a non-parametric scheme for identifying significantly discriminative connections from classifier weights. Using this scheme, a number of neuroanatomically meaningful connections are found, whereas no significant connections are detected with pure permutation testing.
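A minimal sketch of the whitening-and-projection idea (map each covariance matrix to the tangent space at a reference matrix before feeding features to a classifier) is given below; the choice of reference and the downstream classifier are assumptions, not the authors' exact pipeline.

    import numpy as np
    from scipy.linalg import sqrtm, logm

    def tangent_space_features(cov_list, reference):
        """Project covariance matrices onto the tangent space at `reference`.

        Each matrix C is whitened as R^{-1/2} C R^{-1/2} and mapped with the matrix
        logarithm; the upper-triangular part is then used as a feature vector with
        weaker inter-element dependencies than raw correlations.
        """
        R_inv_sqrt = np.linalg.inv(sqrtm(reference))
        iu = np.triu_indices(reference.shape[0])
        feats = []
        for C in cov_list:
            S = logm(R_inv_sqrt @ C @ R_inv_sqrt)   # symmetric matrix in the tangent space
            feats.append(np.real(S)[iu])
        return np.array(feats)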
Dynamic video encryption algorithm for H.264/AVC based on a spatiotemporal chaos system.
Xu, Hui; Tong, Xiao-Jun; Zhang, Miao; Wang, Zhu; Li, Ling-Hao
2016-06-01
Video encryption schemes mostly employ the selective encryption method to encrypt parts of important and sensitive video information, aiming to ensure the real-time performance and encryption efficiency. The classic block cipher is not applicable to video encryption due to the high computational overhead. In this paper, we propose the encryption selection control module to encrypt video syntax elements dynamically which is controlled by the chaotic pseudorandom sequence. A novel spatiotemporal chaos system and binarization method is used to generate a key stream for encrypting the chosen syntax elements. The proposed scheme enhances the resistance against attacks through the dynamic encryption process and high-security stream cipher. Experimental results show that the proposed method exhibits high security and high efficiency with little effect on the compression ratio and time cost.
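To make the selective-encryption idea concrete, here is a simplified stand-in that XORs chosen syntax-element bytes with a chaotic keystream; a plain logistic map is used in place of the paper's spatiotemporal chaos system, and the key is simply the map's initial condition in (0, 1).

    import numpy as np

    def chaotic_keystream(n_bytes, x0=0.3141592, r=3.9999):
        """Generate a byte keystream from a logistic map (a simplified stand-in for
        the spatiotemporal chaos system described above)."""
        x, out = x0, np.empty(n_bytes, dtype=np.uint8)
        for i in range(n_bytes):
            x = r * x * (1.0 - x)
            out[i] = int(x * 256) & 0xFF        # crude binarization of the chaotic state
        return out

    def encrypt_syntax_elements(elements, key):
        """XOR selected syntax-element bytes with the keystream (selective encryption)."""
        ks = chaotic_keystream(len(elements), x0=key)
        return np.bitwise_xor(np.asarray(elements, dtype=np.uint8), ks)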
Numerical simulation of a shear-thinning fluid through packed spheres
NASA Astrophysics Data System (ADS)
Liu, Hai Long; Moon, Jong Sin; Hwang, Wook Ryol
2012-12-01
Flow behaviors of a non-Newtonian fluid in spherical microstructures have been studied by direct numerical simulation. A shear-thinning (power-law) fluid through both regularly and randomly packed spheres has been numerically investigated in a representative unit cell with tri-periodic boundary conditions, employing a rigorous three-dimensional finite-element scheme combined with fictitious-domain mortar-element methods. The present scheme has been validated against results from the literature for classical spherical packing problems. The flow mobility of regular packing structures, including simple cubic (SC), body-centered cubic (BCC), and face-centered cubic (FCC), as well as randomly packed spheres, has been investigated quantitatively by considering the amount of shear-thinning, the pressure gradient, and the porosity as parameters. Furthermore, the mechanism leading to the main flow path in a highly shear-thinning fluid through randomly packed spheres has been discussed.
Improved Convergence and Robustness of USM3D Solutions on Mixed Element Grids (Invited)
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.
2015-01-01
Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Scheme (HANIS), has been developed and implemented. It provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of the Reynolds-averaged Navier-Stokes (RANS) equations and a nonlinear control of the solution update. Two variants of the new methodology are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the baseline solver technology.
An adaptive grid scheme using the boundary element method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munipalli, R.; Anderson, D.A.
1996-09-01
A technique to solve the Poisson grid generation equations by Green's function related methods has been proposed, with the source terms being purely position dependent. The use of distributed singularities in the flow domain coupled with the boundary element method (BEM) formulation is presented in this paper as a natural extension of the Green's function method. This scheme greatly simplifies the adaption process. The BEM reduces the dimensionality of the given problem by one. Internal grid-point placement can be achieved for a given boundary distribution by adding continuous and discrete source terms in the BEM formulation. A distribution of vortex doublets is suggested as a means of controlling grid-point placement and grid-line orientation. Examples for sample adaption problems are presented and discussed. 15 refs., 20 figs.
Exploring Flavor Physics with Lattice QCD
NASA Astrophysics Data System (ADS)
Du, Daping; Fermilab/MILC Collaborations Collaboration
2016-03-01
The Standard Model has been a very good description of subatomic particle physics. In the search for physics beyond the Standard Model in the context of flavor physics, it is important to sharpen our probes using some gold-plated processes (such as B rare decays), which requires knowledge of the input parameters, such as the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements and other nonperturbative quantities, with sufficient precision. Lattice QCD is so far the only first-principles method that can compute these quantities with competitive and systematically improvable precision using state-of-the-art simulation techniques. I will discuss the recent progress of lattice QCD calculations of some of these nonperturbative quantities and their applications in flavor physics. I will also discuss the implications and future perspectives of these calculations in flavor physics.
NASA Astrophysics Data System (ADS)
Iqbal, Muhammad; Lamy, Isabelle; Bermond, Alain
2014-05-01
Presently, changes in the land use of contaminated and marginal agricultural lands from conventional annual food crops to perennial non-food bioenergy crops are being encouraged globally. This is being done to avoid food chain contamination with metal and organic contaminants and to meet world energy needs without disturbing normal fertile agricultural lands. Changes in land use from annual cropping systems to perennial cropping systems are known to modify organic matter quality and quantity in non-contaminated soils. In contaminated soils, such changes are liable to alter trace metal availabilities, but studies reporting such changes are scarce. Different single extraction protocols are used to assess trace element availability in soils. The efficiency of these extractants depends upon soil conditions and may vary from case to case. The objective of the present work was to assess the changes in trace metal availability of contaminated soils when an annual crop system is replaced by a perennial crop system, using different single extraction protocols. A strategy of studying the Cd and Zn availabilities of two sites differing in soil texture and origin of pollution was adopted, i.e., the site of Metaleurop (North of France) and the site of Pierrelaye (Paris Region). They differed in the degree of metal pollution (for Cu, Pb, Cd and Zn) and in the quantity and nature of organic matter (different C/N values). The samples used for this study involved soils under annual crops and under the perennial crop miscanthus. We investigated the trace metal availabilities of the soils using different single extraction protocols involving chemical metal extractions with EDTA, DTPA and NH4NO3 at equilibrium, as well as kinetic EDTA extractions. The results showed that single extraction schemes using chelating agents like EDTA and DTPA failed to reveal whether metal availability is impacted by land use when comparing the soil under miscanthus with the annual crop soil. The differences in metal availability in the soils under miscanthus and annual crops were highlighted by the weaker extractant NH4NO3 and by kinetic extractions using EDTA. For the Metaleurop site, a trend of decreasing Cd and Zn availability in the soil under the perennial miscanthus crop compared to the soil under the annual crop was observed. For the organic-matter-rich sandy soils of Pierrelaye, labile Zn increased while Cd decreased. These results showed little impact on trace metal availabilities at the early stage of the change in land use (3 years after conversion). However, over longer terms, the impact may be more pronounced. The study also highlighted the efficacy of combining metal availability assessment approaches instead of relying on single approaches. In addition, the type of change occurring in metal availability can be predicted using combined extraction approaches because the mechanisms behind each extraction scheme, and their target metal pools, are different.
Computational Modeling for the Flow Over a Multi-Element Airfoil
NASA Technical Reports Server (NTRS)
Liou, William W.; Liu, Feng-Jun
1999-01-01
The flow over a multi-element airfoil is computed using two two-equation turbulence models. The computations are performed using the INS2D Navier-Stokes code for two angles of attack. Overset grids are used for the three-element airfoil. The computed results are compared with experimental data for the surface pressure, skin friction coefficient, and velocity magnitude. The computed surface quantities generally agree well with the measurements. The computed results reveal the possible existence of a mixing-layer-like region of flow next to the suction surface of the slat for both angles of attack.
Natarajan, Logesh Kumar; Wu, Sean F
2012-06-01
This paper presents helpful guidelines and strategies for reconstructing the vibro-acoustic quantities on a highly non-spherical surface by using the Helmholtz equation least squares (HELS). This study highlights that a computationally simple code based on the spherical wave functions can produce an accurate reconstruction of the acoustic pressure and normal surface velocity on planar surfaces. The key is to select the optimal origin of the coordinate system behind the planar surface, choose a target structural wavelength to be reconstructed, set an appropriate stand-off distance and microphone spacing, use a hybrid regularization scheme to determine the optimal number of the expansion functions, etc. The reconstructed vibro-acoustic quantities are validated rigorously via experiments by comparing the reconstructed normal surface velocity spectra and distributions with the benchmark data obtained by scanning a laser vibrometer over the plate surface. Results confirm that following the proposed guidelines and strategies can ensure the accuracy in reconstructing the normal surface velocity up to the target structural wavelength, and produce much more satisfactory results than a straight application of the original HELS formulations. Experiment validations on a baffled, square plate were conducted inside a fully anechoic chamber.
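The core numerical step in an expansion-based reconstruction of this kind is a regularized least-squares solve for the expansion coefficients. The sketch below uses plain Tikhonov regularization as a stand-in for the hybrid regularization scheme mentioned above; the matrix of spherical wave functions evaluated at the microphone positions is an assumed input.

    import numpy as np

    def hels_coefficients(psi, p_meas, reg=1e-3):
        """Solve the HELS-type least-squares system psi @ c ≈ p_meas for the expansion
        coefficients with Tikhonov regularization.

        psi: matrix of spherical wave functions at the measurement points (assumed
        precomputed from the chosen expansion, coordinate origin and geometry).
        """
        A = psi.conj().T @ psi + reg * np.eye(psi.shape[1])
        c = np.linalg.solve(A, psi.conj().T @ p_meas)
        return c   # reconstructed field = (basis evaluated on the surface) @ c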
Volatile elements in Allende inclusions. [Mn, Na and Cl relation to meteorite evolution
NASA Technical Reports Server (NTRS)
Grossman, L.; Ganapathy, R.
1975-01-01
New data are presented on the relatively volatile elements (Mn, Na, and Cl) in coarse- and fine-grained Ca/Al-rich inclusions of different textures and mineralogy in the Allende meteorite. It is shown that the coarse-grained inclusions condensed from the solar nebula at high temperature and contained vanishingly small quantities of volatile elements at that time. Later, volatiles were added to these during the metamorphism of the Allende parent body. The fine-grained inclusions were also affected by the addition of volatiles during this metamorphism but, unlike the coarse-grained ones, they incorporated large amounts of volatiles when they condensed from the solar nebula, accounting for their higher volatile element contents.
A hydrological emulator for global applications - HE v1.0.0
NASA Astrophysics Data System (ADS)
Liu, Yaling; Hejazi, Mohamad; Li, Hongyi; Zhang, Xuesong; Leng, Guoyong
2018-03-01
While global hydrological models (GHMs) are very useful in exploring water resources and interactions between the Earth and human systems, their use often requires numerous model inputs, complex model calibration, and high computation costs. To overcome these challenges, we construct an efficient open-source and ready-to-use hydrological emulator (HE) that can mimic complex GHMs at a range of spatial scales (e.g., basin, region, globe). More specifically, we construct both a lumped and a distributed scheme of the HE based on the monthly abcd model to explore the tradeoff between computational cost and model fidelity. Model predictability and computational efficiency are evaluated in simulating global runoff from 1971 to 2010 with both the lumped and distributed schemes. The results are compared against the runoff product from the widely used Variable Infiltration Capacity (VIC) model. Our evaluation indicates that the lumped and distributed schemes present comparable results regarding annual total quantity, spatial pattern, and temporal variation of the major water fluxes (e.g., total runoff, evapotranspiration) across the global 235 basins (e.g., correlation coefficient r between the annual total runoff from either of these two schemes and the VIC is > 0.96), except for several cold (e.g., Arctic, interior Tibet), dry (e.g., North Africa) and mountainous (e.g., Argentina) regions. Compared against the monthly total runoff product from the VIC (aggregated from daily runoff), the global mean Kling-Gupta efficiencies are 0.75 and 0.79 for the lumped and distributed schemes, respectively, with the distributed scheme better capturing spatial heterogeneity. Notably, the computation efficiency of the lumped scheme is 2 orders of magnitude higher than the distributed one and 7 orders more efficient than the VIC model. A case study of uncertainty analysis for the world's 16 basins with top annual streamflow is conducted using 100 000 model simulations, and it demonstrates the lumped scheme's extraordinary advantage in computational efficiency. Our results suggest that the revised lumped abcd model can serve as an efficient and reasonable HE for complex GHMs and is suitable for broad practical use, and the distributed scheme is also an efficient alternative if spatial heterogeneity is of more interest.
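For context, a minimal sketch of one monthly step of the standard lumped abcd water balance is given below; the parameter names a, b, c, d follow the common Thomas-type formulation (with 0 < a <= 1), and this is an illustration, not the authors' HE v1.0.0 code.

    import numpy as np

    def abcd_step(P, PET, S_prev, G_prev, a, b, c, d):
        """One monthly step of the lumped abcd water-balance model.

        P: precipitation, PET: potential evapotranspiration, S_prev/G_prev: soil and
        groundwater storages from the previous month. Returns runoff and new storages.
        """
        W = P + S_prev                                  # available water
        half = (W + b) / (2.0 * a)
        Y = half - np.sqrt(half**2 - W * b / a)         # evapotranspiration opportunity
        S = Y * np.exp(-PET / b)                        # end-of-month soil storage
        avail = W - Y                                   # water leaving the soil column
        G = (G_prev + c * avail) / (1.0 + d)            # groundwater storage
        runoff = (1.0 - c) * avail + d * G              # direct runoff plus baseflow
        return runoff, S, G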
Slaughter, Susan E; Zimmermann, Gabrielle L; Nuspl, Megan; Hanson, Heather M; Albrecht, Lauren; Esmail, Rosmin; Sauro, Khara; Newton, Amanda S; Donald, Maoliosa; Dyson, Michele P; Thomson, Denise; Hartling, Lisa
2017-12-06
As implementation science advances, the number of interventions to promote the translation of evidence into healthcare, health systems, or health policy is growing. Accordingly, classification schemes for these knowledge translation (KT) interventions have emerged. A recent scoping review identified 51 classification schemes of KT interventions to integrate evidence into healthcare practice; however, the review did not evaluate the quality of the classification schemes or provide detailed information to assist researchers in selecting a scheme for their context and purpose. This study aimed to further examine and assess the quality of these classification schemes of KT interventions, and provide information to aid researchers when selecting a classification scheme. We abstracted the following information from each of the original 51 classification scheme articles: authors' objectives; purpose of the scheme and field of application; socioecologic level (individual, organizational, community, system); adaptability (broad versus specific); target group (patients, providers, policy-makers), intent (policy, education, practice), and purpose (dissemination versus implementation). Two reviewers independently evaluated the methodological quality of the development of each classification scheme using an adapted version of the AGREE II tool. Based on these assessments, two independent reviewers reached consensus about whether to recommend each scheme for researcher use, or not. Of the 51 original classification schemes, we excluded seven that were not specific classification schemes, not accessible or duplicates. Of the remaining 44 classification schemes, nine were not recommended. Of the 35 recommended classification schemes, ten focused on behaviour change and six focused on population health. Many schemes (n = 29) addressed practice considerations. Fewer schemes addressed educational or policy objectives. Twenty-five classification schemes had broad applicability, six were specific, and four had elements of both. Twenty-three schemes targeted health providers, nine targeted both patients and providers and one targeted policy-makers. Most classification schemes were intended for implementation rather than dissemination. Thirty-five classification schemes of KT interventions were developed and reported with sufficient rigour to be recommended for use by researchers interested in KT in healthcare. Our additional categorization and quality analysis will aid in selecting suitable classification schemes for research initiatives in the field of implementation science.
Opportunistic Beamforming with Wireless Powered 1-bit Feedback Through Rectenna Array
NASA Astrophysics Data System (ADS)
Krikidis, Ioannis
2015-11-01
This letter deals with the opportunistic beamforming (OBF) scheme for multi-antenna downlink with spatial randomness. In contrast to conventional OBF, the terminals return only 1-bit feedback, which is powered by wireless power transfer through a rectenna array. We study two fundamental topologies for the combination of the rectenna elements; the direct-current combiner and the radio-frequency combiner. The beam outage probability is derived in closed form for both combination schemes, by using high order statistics and stochastic geometry.
Parallel-Vector Algorithm For Rapid Structural Analysis
NASA Technical Reports Server (NTRS)
Agarwal, Tarun R.; Nguyen, Duc T.; Storaasli, Olaf O.
1993-01-01
New algorithm developed to overcome deficiency of skyline storage scheme by use of variable-band storage scheme. Exploits both parallel and vector capabilities of modern high-performance computers. Gives engineers and designers opportunity to include more design variables and constraints during optimization of structures. Enables use of more refined finite-element meshes to obtain improved understanding of complex behaviors of aerospace structures leading to better, safer designs. Not only attractive for current supercomputers but also for next generation of shared-memory supercomputers.
NASA Astrophysics Data System (ADS)
Chen, Hui-Na; Liu, Jin-Ming
2009-10-01
We present an optical scheme to almost completely teleport a bipartite entangled coherent state using a four-partite cluster-type entangled coherent state as quantum channel. The scheme is based on optical elements such as beam splitters, phase shifters, and photon detectors. We also obtain the average fidelity of the teleportation process. It is shown that the average fidelity is quite close to unity if the mean photon number of the coherent state is not too small.
An unsteady Euler scheme for the analysis of ducted propellers
NASA Technical Reports Server (NTRS)
Srivastava, R.
1992-01-01
An efficient unsteady solution procedure has been developed for analyzing inviscid unsteady flow past ducted propeller configurations. This scheme is first-order accurate in time and second-order accurate in space. The solution procedure has been applied to a ducted propeller consisting of an 8-bladed SR7 propeller with a duct of NACA 0003 airfoil cross section around it, operating in a steady axisymmetric flowfield. The variation of elemental blade loading with radius compares well with other published numerical results.
Autonomous learning by simple dynamical systems with delayed feedback.
Kaluza, Pablo; Mikhailov, Alexander S
2014-09-01
A general scheme for the construction of dynamical systems able to learn generation of the desired kinds of dynamics through adjustment of their internal structure is proposed. The scheme involves intrinsic time-delayed feedback to steer the dynamics towards the target performance. As an example, a system of coupled phase oscillators, which can, by changing the weights of connections between its elements, evolve to a dynamical state with the prescribed (low or high) synchronization level, is considered and investigated.
Yi, Faliu; Jeoung, Yousun; Moon, Inkyu
2017-05-20
In recent years, many studies have focused on authentication of two-dimensional (2D) images using double random phase encryption techniques. However, there has been little research on three-dimensional (3D) imaging systems, such as integral imaging, for 3D image authentication. We propose a 3D image authentication scheme based on a double random phase integral imaging method. All of the 2D elemental images captured through integral imaging are encrypted with a double random phase encoding algorithm and only partial phase information is retained. All the amplitude and other miscellaneous phase information in the encrypted elemental images is discarded. Nevertheless, we demonstrate that 3D images from integral imaging can be authenticated at different depths using a nonlinear correlation method. The proposed 3D image authentication algorithm can provide enhanced information security because the decrypted 2D elemental images from the sparse phase cannot be easily observed by the naked eye. Additionally, using sparse phase images without any amplitude information can greatly reduce data storage costs and aid in image compression and data transmission.
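A minimal sketch of the classic Fourier-domain double random phase encoding of a single 2D elemental image, keeping only the phase of the ciphertext, is shown below; the integral-imaging capture and the nonlinear-correlation authentication step are not modelled, and the random-seed handling is an assumption.

    import numpy as np

    def double_random_phase_encrypt(img, seed=0):
        """Double random phase encoding of a 2-D image (illustrative Fourier-domain
        version); returns the complex ciphertext and its sparse-phase-only part."""
        rng = np.random.default_rng(seed)
        phi1 = np.exp(2j * np.pi * rng.random(img.shape))   # input-plane random phase
        phi2 = np.exp(2j * np.pi * rng.random(img.shape))   # Fourier-plane random phase
        encrypted = np.fft.ifft2(np.fft.fft2(img * phi1) * phi2)
        sparse_phase = np.angle(encrypted)    # keep only (partial) phase, as in the scheme
        return encrypted, sparse_phase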
Muscle-driven finite element simulation of human foot movements.
Spyrou, L A; Aravas, N
2012-01-01
This paper describes a finite element scheme for realistic muscle-driven simulation of human foot movements. The scheme is used to simulate human ankle plantar flexion. A three-dimensional, anatomically detailed finite element model of the human foot and lower leg is developed, and the idea of generating natural foot movement based entirely on the contraction of the plantar flexor muscles is used. The bones, ligaments, articular cartilage, muscles, tendons, as well as the remaining soft tissues of the human foot and lower leg are included in the model. A realistic three-dimensional continuum constitutive model that describes the biomechanical behaviour of muscles and tendons is used. Both the active and passive properties of muscle tissue are accounted for. The materials for bones and ligaments are considered as homogeneous, isotropic and linearly elastic, whereas the articular cartilage and the remaining soft tissues (mainly fat) are defined as hyperelastic materials. The model is used to estimate muscle tissue deformations as well as stresses and strains that develop in the lower leg muscles during plantar flexion of the ankle. Stresses and strains that develop in the Achilles tendon during such a movement are also investigated.
NASA Astrophysics Data System (ADS)
Qin, Shanlin; Liu, Fawang; Turner, Ian W.
2018-03-01
The consideration of diffusion processes in magnetic resonance imaging (MRI) signal attenuation is classically described by the Bloch-Torrey equation. However, many recent works highlight the distinct deviation in MRI signal decay due to anomalous diffusion, which motivates the fractional order generalization of the Bloch-Torrey equation. In this work, we study the two-dimensional multi-term time and space fractional diffusion equation generalized from the time and space fractional Bloch-Torrey equation. By using the Galerkin finite element method with a structured mesh consisting of rectangular elements to discretize in space and the L1 approximation of the Caputo fractional derivative in time, a fully discrete numerical scheme is derived. A rigorous analysis of stability and error estimation is provided. Numerical experiments in the square and L-shaped domains are performed to give an insight into the efficiency and reliability of our method. Then the scheme is applied to solve the multi-term time and space fractional Bloch-Torrey equation, which shows that the extra time derivative terms impact the relaxation process.
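The L1 approximation of the Caputo derivative mentioned above has a simple closed form on a uniform time grid; a minimal sketch, with a quick check against the known Caputo derivative of t², is given below (an illustration of the standard L1 formula, not the authors' solver).

    import numpy as np
    from math import gamma

    def caputo_l1(u, dt, alpha):
        """L1 approximation of the Caputo derivative of order 0 < alpha < 1 at t_n.

        u: samples u(t_0..t_n) on a uniform grid with step dt. Uses
        D^a u(t_n) ~ dt^{-a} / Gamma(2-a) * sum_k b_k (u_{n-k} - u_{n-k-1}),
        with b_k = (k+1)^{1-a} - k^{1-a}.
        """
        n = len(u) - 1
        k = np.arange(n)
        b = (k + 1.0) ** (1.0 - alpha) - k ** (1.0 - alpha)
        diffs = u[n - k] - u[n - k - 1]          # backward differences u_{n-k} - u_{n-k-1}
        return dt ** (-alpha) / gamma(2.0 - alpha) * np.sum(b * diffs)

    # quick check against the exact Caputo derivative of u = t^2 at t = 1:
    t = np.linspace(0.0, 1.0, 201)
    approx = caputo_l1(t**2, t[1] - t[0], alpha=0.5)
    exact = 2.0 * 1.0 ** 1.5 / gamma(2.5)        # D^0.5 t^2 = 2 t^{1.5} / Gamma(2.5)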
Purity of Gaussian states: Measurement schemes and time evolution in noisy channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paris, Matteo G.A.; Illuminati, Fabrizio; Serafini, Alessio
2003-07-01
We present a systematic study of the purity for Gaussian states of single-mode continuous variable systems. We prove the connection of purity to observable quantities for these states, and show that the joint measurement of two conjugate quadratures is necessary and sufficient to determine the purity at any time. The statistical reliability and the range of applicability of the proposed measurement scheme are tested by means of Monte Carlo simulated experiments. We then consider the dynamics of purity in noisy channels. We derive an evolution equation for the purity of general Gaussian states both in thermal and in squeezed thermal baths. We show that purity is maximized at any given time for an initial coherent state evolving in a thermal bath, or for an initial squeezed state evolving in a squeezed thermal bath whose asymptotic squeezing is orthogonal to that of the input state.
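For a single-mode Gaussian state with covariance matrix σ (in the convention where the vacuum has det σ = 1/4), the purity reduces to a simple determinant formula; this is a sketch of the standard relations, not a quotation from the paper:

\[
\mu = \operatorname{Tr}\rho^2 , \qquad \mu = \frac{1}{2\sqrt{\det\sigma}} ,
\]

which is why a joint measurement of two conjugate quadratures, fixing det σ, suffices to determine μ.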
Bellomo, Guido; Bosyk, Gustavo M; Holik, Federico; Zozor, Steeve
2017-11-07
Based on the problem of lossless quantum data compression, we present here an operational interpretation for the family of quantum Rényi entropies. In order to do this, we appeal to a very general quantum encoding scheme that satisfies a quantum version of the Kraft-McMillan inequality. Then, in the standard situation, where the aim is to minimize the usual average length of the quantum codewords, we recover the known results, namely that the von Neumann entropy of the source bounds the average length of the optimal codes. Otherwise, we show that by invoking an exponential average length, related to an exponential penalization over large codewords, the quantum Rényi entropies arise as the natural quantities relating the optimal encoding schemes with the source description, playing an analogous role to that of von Neumann entropy.
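For orientation, the quantum Rényi entropy and the classical Campbell-type exponential average length with penalization parameter t are (a sketch of the standard definitions, not the paper's notation):

\[
S_\alpha(\rho) = \frac{1}{1-\alpha}\,\log \operatorname{Tr}\rho^{\alpha}, \qquad
L_t = \frac{1}{t}\,\log\!\Big(\sum_i p_i\, 2^{t\,\ell_i}\Big),
\]

and in the classical analogue the optimal value of L_t is governed by the Rényi entropy of order α = 1/(1+t), which is the role the quantum Rényi entropies play here.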
Multigrid solution of compressible turbulent flow on unstructured meshes using a two-equation model
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.; Martinelli, L.
1991-01-01
The system of equations consisting of the full Navier-Stokes equations and two turbulence equations was solved for in the steady state using a multigrid strategy on unstructured meshes. The flow equations and turbulence equations are solved in a loosely coupled manner. The flow equations are advanced in time using a multistage Runge-Kutta time stepping scheme with a stability-bounded local time step, while the turbulence equations are advanced in a point-implicit scheme with a time step which guarantees stability and positivity. Low-Reynolds-number modifications to the original two-equation model are incorporated in a manner which results in well-behaved equations for arbitrarily small wall distances. A variety of aerodynamic flows are solved for, initializing all quantities with uniform freestream values, and resulting in rapid and uniform convergence rates for the flow and turbulence equations.
A simple quantum mechanical treatment of scattering in nanoscale transistors
NASA Astrophysics Data System (ADS)
Venugopal, R.; Paulsson, M.; Goasguen, S.; Datta, S.; Lundstrom, M. S.
2003-05-01
We present a computationally efficient, two-dimensional quantum mechanical simulation scheme for modeling dissipative electron transport in thin body, fully depleted, n-channel, silicon-on-insulator transistors. The simulation scheme, which solves the nonequilibrium Green's function equations self consistently with Poisson's equation, treats the effect of scattering using a simple approximation inspired by the "Büttiker probes," often used in mesoscopic physics. It is based on an expansion of the active device Hamiltonian in decoupled mode space. Simulation results are used to highlight quantum effects, discuss the physics of scattering and to relate the quantum mechanical quantities used in our model to experimentally measured low field mobilities. Additionally, quantum boundary conditions are rigorously derived and the effects of strong off-equilibrium transport are examined. This paper shows that our approximate treatment of scattering, is an efficient and useful simulation method for modeling electron transport in nanoscale, silicon-on-insulator transistors.
NASA Technical Reports Server (NTRS)
Padavala, Satyasrinivas; Palazzolo, Alan B.; Vallely, Pat; Ryan, Steve
1994-01-01
An improved dynamic analysis for liquid annular seals with arbitrary profile, based on a method first proposed by Nelson and Nguyen, is presented. An improved first-order solution that incorporates a continuous interpolation of perturbed quantities in the circumferential direction is presented. The original method uses an approximation scheme for circumferential gradients based on Fast Fourier Transforms (FFT). A simpler scheme based on cubic splines is found to be computationally more efficient, with better convergence at higher eccentricities. A new approach to computing dynamic coefficients based on an externally specified load is introduced. The improved analysis is extended to account for an arbitrarily varying seal profile in both the axial and circumferential directions. An example case of an elliptical seal with varying degrees of axial curvature is analyzed. A case study based on actual operating clearances of an interstage seal of the Space Shuttle Main Engine High Pressure Oxygen Turbopump is presented.
NASA Astrophysics Data System (ADS)
Liang, Weibin; Ouyang, Sen; Huang, Xiang; Su, Weijian
2017-05-01
The existing process for modeling the power quality of electrified railways connected to the power grid is complicated, and the simulation scenarios are incomplete, so this paper puts forward a novel power-quality evaluation method based on PSCAD/EMTDC. Firstly, a model of the power quality of electrified railways connected to the power grid is established, based on testing reports or measured data. The equivalent model of the electrified locomotive contains a power characteristic and a harmonic characteristic, which are represented by a load and a harmonic source, respectively. Secondly, in order to make the evaluation more complete, an analysis scheme is put forward. The scheme combines three dimensions of the electrified locomotives: type, working condition, and quantity. At last, the Shenmao Railway is taken as an example to evaluate the power quality in different scenes, and the results show that electrified railways connected to the power grid have a significant effect on power quality.
Luminescent sensing and imaging of oxygen: Fierce competition to the Clark electrode
2015-01-01
Luminescence‐based sensing schemes for oxygen have experienced a fast growth and are in the process of replacing the Clark electrode in many fields. Unlike electrodes, sensing is not limited to point measurements via fiber optic microsensors, but includes additional features such as planar sensing, imaging, and intracellular assays using nanosized sensor particles. In this essay, I review and discuss the essentials of (i) common solid‐state sensor approaches based on the use of luminescent indicator dyes and host polymers; (ii) fiber optic and planar sensing schemes; (iii) nanoparticle‐based intracellular sensing; and (iv) common spectroscopies. Optical sensors are also capable of multiple simultaneous sensing (such as O2 and temperature). Sensors for O2 are produced nowadays in large quantities in industry. Fields of application include sensing of O2 in plant and animal physiology, in clinical chemistry, in marine sciences, in the chemical industry and in process biotechnology. PMID:26113255
NASA Astrophysics Data System (ADS)
Raburn, Daniel Louis
We have developed a preconditioned, globalized Jacobian-free Newton-Krylov (JFNK) solver for calculating equilibria with magnetic islands. The solver has been developed in conjunction with the Princeton Iterative Equilibrium Solver (PIES) and includes two notable enhancements over a traditional JFNK scheme: (1) globalization of the algorithm by a sophisticated backtracking scheme, which optimizes between the Newton and steepest-descent directions; and, (2) adaptive preconditioning, wherein information regarding the system Jacobian is reused between Newton iterations to form a preconditioner for our GMRES-like linear solver. We have developed a formulation for calculating saturated neoclassical tearing modes (NTMs) which accounts for the incomplete loss of a bootstrap current due to gradients of multiple physical quantities. We have applied the coupled PIES-JFNK solver to calculate saturated island widths on several shots from the Tokamak Fusion Test Reactor (TFTR) and have found reasonable agreement with experimental measurement.
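As a minimal illustration of a Jacobian-free Newton-Krylov solve with Armijo-type backtracking, SciPy's generic implementation can be used as shown below; this is not the PIES-JFNK code (which adds adaptive preconditioning and reuse of Jacobian information), and the residual here is a toy placeholder.

    import numpy as np
    from scipy.optimize import newton_krylov

    def residual(x):
        """Placeholder nonlinear residual F(x) = 0; a real application would wrap the
        equilibrium equations evaluated by the iterative solver."""
        return np.array([x[0]**2 + x[1] - 1.0,
                         x[0] - x[1]**3 - 0.5])

    # GMRES-like inner linear solves with Jacobian-vector products by finite differences;
    # line_search='armijo' provides a simple globalizing backtracking step.
    sol = newton_krylov(residual, np.array([0.5, 0.5]),
                        method="lgmres", line_search="armijo", f_tol=1e-10)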
Cartesian Off-Body Grid Adaption for Viscous Time- Accurate Flow Simulation
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Pulliam, Thomas H.
2011-01-01
An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
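The undivided second-difference sensor mentioned above is easy to state; a one-dimensional sketch of flagging cells for refinement is given below (the actual OVERFLOW adaption operates on 3-D off-body grids, and the threshold is an assumed tuning parameter).

    import numpy as np

    def refine_flags(u, threshold):
        """Flag cells for refinement where the undivided second difference of a flow
        quantity u (e.g. density or vorticity magnitude) exceeds a threshold."""
        d2 = np.abs(u[2:] - 2.0 * u[1:-1] + u[:-2])   # undivided second difference
        flags = np.zeros_like(u, dtype=bool)
        flags[1:-1] = d2 > threshold
        return flags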
DAMAGE ASSESSMENT OF RC BEAMS BY NONLINEAR FINITE ELEMENT ANALYSES
NASA Astrophysics Data System (ADS)
Saito, Shigehiko; Maki, Takeshi; Tsuchiya, Satoshi; Watanabe, Tadatomo
This paper presents damage assessment schemes using 2-dimensional nonlinear finite element analyses. The second strain invariant of the deviatoric strain tensor and the consumed strain energy are calculated from local strains at each integration point of the finite elements. Those scalar values are averaged over a certain region. The resulting nonlocal values are used as indices to verify structural safety by confirming whether the ultimate limit state for failure is reached or not. Flexural and shear failure of reinforced concrete beams are estimated by using the proposed indices.
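A minimal sketch of the nonlocal averaging of the second deviatoric strain invariant is given below; the data layout (one strain tensor and weight per integration point) is an assumption for illustration, not the authors' implementation.

    import numpy as np

    def averaged_second_invariant(strains, weights):
        """Nonlocal damage index: second invariant J2 of the deviatoric strain tensor,
        averaged over a set of integration points with given weights.

        strains: array of shape (n_points, 3, 3); weights: integration weights over
        the averaging region.
        """
        tr = np.trace(strains, axis1=1, axis2=2)
        dev = strains - tr[:, None, None] / 3.0 * np.eye(3)      # deviatoric part
        j2 = 0.5 * np.einsum('pij,pij->p', dev, dev)             # J2 at each point
        return np.sum(weights * j2) / np.sum(weights)            # weighted (nonlocal) average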
NASA Astrophysics Data System (ADS)
Pospelova, I. Y.; Pospelova, M. Y.; Bondarenko, A. S.; Kornilov, D. A.
2018-05-01
The modeling of a Smart Energy Coating is presented. The coating is able to produce electricity on the surface of pipelines and structural elements. Along with electric output, the Smart Energy Coating ensures stable operating temperature conditions for structures, pipelines and regulating elements. The energy production scheme is based on the Peltier principle and an insulating layer with a phase transition. Thermally conductive inclusions in the inner layer of phase-transition material ensure the stable operation of the Peltier element.
Selected spectroscopic results on element 115 decay chains
Rudolph, D.; Forsberg, U.; Golubev, P.; ...
2014-08-24
We observed thirty correlated α-decay chains in an experiment studying the fusion-evaporation reaction 48Ca + 243Am at the GSI Helmholtzzentrum für Schwerionenforschung. The decay characteristics of the majority of these 30 chains are consistent with previous observations and interpretations of such chains as originating from isotopes of element Z = 115. High-resolution α-photon coincidence spectroscopy in conjunction with comprehensive Monte-Carlo simulations makes it possible to propose excitation schemes of atomic nuclei of the heaviest elements, thereby probing nuclear structure models near the 'Island of Stability' with unprecedented experimental precision.
A weak Galerkin generalized multiscale finite element method
Mu, Lin; Wang, Junping; Ye, Xiu
2016-03-31
In this study, we propose a general framework for a weak Galerkin generalized multiscale (WG-GMS) finite element method for elliptic problems with rapidly oscillating or high-contrast coefficients. This general WG-GMS method features high-order accuracy on general meshes and can work with multiscale bases derived by different numerical schemes. A special case is studied under this WG-GMS framework in which the multiscale basis functions are obtained by solving local problems with the weak Galerkin finite element method. Convergence analysis and numerical experiments are presented for the special case.
NASA Technical Reports Server (NTRS)
Nakazawa, Shohei
1989-01-01
The internal structure of the MHOST finite element program, designed for 3-D inelastic analysis of gas turbine hot section components, is discussed. The computer code is the first implementation of the mixed iterative solution strategy for improved efficiency and accuracy over the conventional finite element method. The control structure of the program is covered, along with the data storage scheme, the memory allocation procedure, and the file handling facilities, including the read and/or write sequences.
Adaptive finite element method for turbulent flow near a propeller
NASA Astrophysics Data System (ADS)
Pelletier, Dominique; Ilinca, Florin; Hetu, Jean-Francois
1994-11-01
This paper presents an adaptive finite element method based on remeshing to solve incompressible turbulent free shear flow near a propeller. Solutions are obtained in primitive variables using a highly accurate finite element approximation on unstructured grids. Turbulence is modeled by a mixing length formulation. Two general purpose error estimators, which take into account swirl and the variation of the eddy viscosity, are presented and applied to the turbulent wake of a propeller. Predictions compare well with experimental measurements. The proposed adaptive scheme is robust, reliable and cost effective.
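The mixing-length closure referred to above can be sketched in one line: the eddy viscosity is nu_t = l_m^2 |dU/dy|. The example below evaluates it for a toy 1-D shear layer; the mixing-length choice is an assumption, not the propeller-wake calibration used in the paper.

```python
# Minimal mixing-length eddy-viscosity evaluation (illustrative 1-D form).
import numpy as np

def eddy_viscosity(y, U, l_m):
    """y, U: 1-D arrays of transverse coordinate and mean axial velocity.
    l_m: mixing length (here tied to the shear-layer width, an assumption)."""
    dUdy = np.gradient(U, y)
    return l_m ** 2 * np.abs(dUdy)

# Toy shear layer: U rises smoothly across y = 0.
y = np.linspace(-1.0, 1.0, 201)
U = 0.5 * (1.0 + np.tanh(y / 0.2))
nu_t = eddy_viscosity(y, U, l_m=0.09 * 0.4)   # l_m ~ 0.09 * layer width (assumed)
print(f"peak eddy viscosity ~ {nu_t.max():.2e}")
```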
Pyrotechnic device provides one-shot heat source
NASA Technical Reports Server (NTRS)
Haller, H. C.; Lalli, V. R.
1968-01-01
A pyrotechnic heater provides a one-shot heat source capable of creating a predetermined temperature around sealed packages. It is composed of a blend of an active chemical element and another compound that reacts exothermically when ignited, producing a fixed quantity of heat.
HUMAN EYE OPTICS: Determination of positions of optical elements of the human eye
NASA Astrophysics Data System (ADS)
Galetskii, S. O.; Cherezova, T. Yu
2009-02-01
An original method for noninvasively determining the positions of elements of intraocular optics is proposed. The analytic dependence of the measurement error on the optical-scheme parameters and the restriction on the distance to the element being measured are determined within the framework of the proposed method. It is shown that the method can be efficiently used for determining the position of elements in the classical Gullstrand eye model and in personalised eye models. The positions of six optical surfaces of the Gullstrand eye model and four optical surfaces of the personalised eye model can be determined with an error of less than 0.25 mm.
Trace element analysis of coal by neutron activation.
NASA Technical Reports Server (NTRS)
Sheibley, D. W.
1973-01-01
The irradiation, counting, and data reduction scheme is described for an analysis capability of 1000 samples per year. Up to 56 elements are reported on each sample. The precision and accuracy of the method are shown for 25 elements designated as hazardous by the Environmental Protection Agency (EPA). The interference corrections for selenium and ytterbium on mercury and ytterbium on selenium are described. The effect of bromine and antimony on the determination of arsenic is also mentioned. The use of factorial design techniques to evaluate interferences in the determination of mercury, selenium, and arsenic is shown. Some typical trace element results for coal, fly ash, and bottom ash are given.
Asynchronous variational integration using continuous assumed gradient elements.
Wolff, Sebastian; Bucher, Christian
2013-03-01
Asynchronous variational integration (AVI) is a tool which improves the numerical efficiency of explicit time stepping schemes when applied to finite element meshes with local spatial refinement. This is achieved by associating an individual time step length to each spatial domain. Furthermore, long-term stability is ensured by its variational structure. This article presents AVI in the context of finite elements based on a weakened weak form (W2) Liu (2009) [1], exemplified by continuous assumed gradient elements Wolff and Bucher (2011) [2]. The article presents the main ideas of the modified AVI, gives implementation notes and a recipe for estimating the critical time step.
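The benefit of assigning each spatial domain its own time step can be seen from a CFL-type bound dt_e ~ h_e / c with wave speed c = sqrt(E/rho). The sketch below is a crude stand-in for the critical-time-step recipe discussed in the article; the material data and element sizes are illustrative placeholders.

```python
# Element-wise critical time step of the CFL type used by explicit/AVI schemes:
# dt_e ~ h_e / c with c = sqrt(E / rho).  Illustrative data, not the article's recipe.
import numpy as np

E, rho = 210e9, 7800.0            # steel-like Young's modulus and density (assumed)
c = np.sqrt(E / rho)              # wave-speed proxy

h = np.array([0.01, 0.005, 0.002, 0.02])   # characteristic element sizes, m (assumed)
dt_elem = h / c                   # per-element stable step (safety factor omitted)
print("element time steps:", dt_elem)
print("a synchronous explicit scheme would be forced to", dt_elem.min())
```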
Li, Chuan; Petukh, Marharyta; Li, Lin; Alexov, Emil
2013-08-15
Due to the enormous importance of electrostatics in molecular biology, calculating the electrostatic potential and corresponding energies has become a standard computational approach for the study of biomolecules and nano-objects immersed in water and salt phase or other media. However, the electrostatics of large macromolecules and macromolecular complexes, including nano-objects, may not be obtainable via explicit methods, and even the standard continuum electrostatics methods may not be applicable due to high computational time and memory requirements. Here, we report further development of the parallelization scheme reported in our previous work (Li, et al., J. Comput. Chem. 2012, 33, 1960) to include parallelization of the molecular surface and energy calculation components of the algorithm. The parallelization scheme utilizes different approaches such as space domain parallelization, algorithmic parallelization, multithreading, and task scheduling, depending on the quantity being calculated. This allows for efficient use of the computing resources of the corresponding computer cluster. The parallelization scheme is implemented in the popular software DelPhi and results in a speedup of several fold. As a demonstration of the efficiency and capability of this methodology, the electrostatic potential and electric field distributions are calculated for the bovine mitochondrial supercomplex, illustrating their complex topology, which cannot be obtained by modeling the supercomplex components alone.
Information criteria for quantifying loss of reversibility in parallelized KMC
NASA Astrophysics Data System (ADS)
Gourgoulias, Konstantinos; Katsoulakis, Markos A.; Rey-Bellet, Luc
2017-01-01
Parallel Kinetic Monte Carlo (KMC) is a potent tool to simulate stochastic particle systems efficiently. However, despite literature on quantifying domain decomposition errors of the particle system for this class of algorithms in the short and in the long time regime, no study yet explores and quantifies the loss of time-reversibility in Parallel KMC. Inspired by concepts from non-equilibrium statistical mechanics, we propose the entropy production per unit time, or entropy production rate, given in terms of an observable and a corresponding estimator, as a metric that quantifies the loss of reversibility. Typically, this is a quantity that cannot be computed explicitly for Parallel KMC, which is why we develop a posteriori estimators that have good scaling properties with respect to the size of the system. Through these estimators, we can connect the different parameters of the scheme, such as the communication time step of the parallelization, the choice of the domain decomposition, and the computational schedule, with its performance in controlling the loss of reversibility. From this point of view, the entropy production rate can be seen both as an information criterion to compare the reversibility of different parallel schemes and as a tool to diagnose reversibility issues with a particular scheme. As a demonstration, we use Sandia Lab's SPPARKS software to compare different parallelization schemes and different domain (lattice) decompositions.
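As context for the metric, a minimal estimator of the entropy production rate of a Markov jump process from observed forward and backward transition counts is sketched below. This is the generic non-equilibrium definition, not the a posteriori estimator constructed in the paper, and the toy transition counts are invented.

```python
# Minimal entropy-production-rate estimator from simulated transition counts:
#   sigma ~ (1/T) * sum_{x<y} (N_xy - N_yx) * ln(N_xy / N_yx)
import numpy as np

def entropy_production_rate(counts, T):
    """counts[i, j]: number of observed i -> j jumps over total simulated time T."""
    sigma = 0.0
    n = counts.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            f, b = counts[i, j], counts[j, i]
            if f > 0 and b > 0:
                sigma += (f - b) * np.log(f / b)
    return sigma / T

# Toy 3-state cycle with a directional bias, i.e. broken detailed balance.
counts = np.array([[  0, 120,  80],
                   [ 80,   0, 120],
                   [120,  80,   0]], dtype=float)
print(entropy_production_rate(counts, T=100.0))
```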
Role of relativity in high-pressure phase transitions of thallium.
Kotmool, Komsilp; Chakraborty, Sudip; Bovornratanaraks, Thiti; Ahuja, Rajeev
2017-02-20
We demonstrate the relativistic effects in high-pressure phase transitions of the heavy element thallium. The known first phase transition from h.c.p. to f.c.c. is initially investigated at various relativistic levels and with several exchange-correlation functionals as implemented in the FPLO method, as well as with a scalar relativistic scheme within the PAW formalism. The electronic structure calculations are interpreted from the perspective of energetic stability and the electronic density of states. The fully relativistic (FR) scheme within L(S)DA resembles the experimental results most closely, giving a transition pressure of 3 GPa. The s-p hybridization and the valence-core overlap of the 6s and 5d states are the primary reasons behind the occurrence of the f.c.c. phase. A recently proposed phase, a body-centered tetragonal (b.c.t.) phase, is confirmed to be a small distortion of the f.c.c. phase. We have also predicted a reversible b.c.t. → f.c.c. phase transition at 800 GPa. This finding suggests that almost all the group III-A elements (Ga, In and Tl) exhibit the b.c.t. → f.c.c. phase transition at extremely high pressure.
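For readers unfamiliar with how such transition pressures are extracted, the sketch below locates a T = 0 transition from the crossing of the enthalpies H = E + PV of two phases, each obtained from an E(V) curve. The quadratic E(V) curves are toy placeholders, not thallium data from the calculations above.

```python
# Locating a T = 0 phase-transition pressure from the enthalpy crossing
# H(P) = E + P*V of two phases.  The E(V) curves are toy quadratics.
import numpy as np

def enthalpy_vs_pressure(V, E, pressures):
    """For each target pressure, minimise E(V) + P*V over the sampled volumes."""
    return np.array([np.min(E + P * V) for P in pressures])

V = np.linspace(0.6, 1.2, 400)
E_ph1 = 0.00 + 40.0 * (V - 1.00) ** 2          # toy phase 1 (model units)
E_ph2 = 0.02 + 40.0 * (V - 0.97) ** 2          # toy phase 2: higher E, smaller V

P = np.linspace(0.0, 5.0, 500)                 # pressure in the same model units
H1 = enthalpy_vs_pressure(V, E_ph1, P)
H2 = enthalpy_vs_pressure(V, E_ph2, P)
i = np.argmin(np.abs(H1 - H2))                 # enthalpy crossing ~ transition pressure
print(f"toy transition pressure ~ {P[i]:.2f} (model units)")
```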
Finite element code development for modeling detonation of HMX composites
NASA Astrophysics Data System (ADS)
Duran, Adam V.; Sundararaghavan, Veera
2017-01-01
In this work, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable, efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well-calibrated equations of state for the solid unreacted material and the gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Grüneisen equation of state, calibrated from experiments, was employed for the unreacted HMX. The JWL form was used to model the EOS of the gaseous reaction products. It is assumed that the unreacted explosive and the reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy were computed using the rule of mixtures. An Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that yields a non-oscillatory, stabilized scheme at the shock front. The FE model was validated using analytical solutions for the Sod shock tube and ZND strong detonation models. Benchmark problems are presented for geometries in which a single HMX crystal is subjected to a shock condition.
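The two constitutive pieces named above, the JWL product EOS and a single-step Arrhenius rate, can be written compactly as in the sketch below. The parameter values are representative HMX-product numbers used as illustrative placeholders, not the calibration used in the code described in the abstract.

```python
# Sketch of a JWL equation of state for gaseous products and a single-step
# Arrhenius reaction rate.  Parameter values are illustrative placeholders.
import numpy as np

def jwl_pressure(V, E, A=778.3e9, B=7.07e9, R1=4.2, R2=1.0, w=0.30):
    """JWL EOS: P(V, E) with V = v / v0 (relative volume) and E the
    internal energy per unit initial volume (both pressures in Pa)."""
    return (A * (1.0 - w / (R1 * V)) * np.exp(-R1 * V)
            + B * (1.0 - w / (R2 * V)) * np.exp(-R2 * V)
            + w * E / V)

def arrhenius_rate(lam, T, Z=5.0e19, Ea=2.2e5, Rgas=8.314):
    """Single-step progress rate d(lambda)/dt = Z * (1 - lambda) * exp(-Ea / (R T))."""
    return Z * (1.0 - lam) * np.exp(-Ea / (Rgas * T))

# Example: product pressure near a CJ-like state and a hot-spot reaction rate.
print(f"P_JWL(V=0.74, E=1.05e10) ~ {jwl_pressure(0.74, 1.05e10) / 1e9:.1f} GPa")
print(f"rate at T = 2000 K, lambda = 0.1: {arrhenius_rate(0.1, 2000.0):.3e} 1/s")
```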
Highly accurate adaptive finite element schemes for nonlinear hyperbolic problems
NASA Astrophysics Data System (ADS)
Oden, J. T.
1992-08-01
This document is a final report of research activities supported under General Contract DAAL03-89-K-0120 between the Army Research Office and the University of Texas at Austin from July 1, 1989 through June 30, 1992. The project supported several Ph.D. students over the contract period, two of which are scheduled to complete dissertations during the 1992-93 academic year. Research results produced during the course of this effort led to 6 journal articles, 5 research reports, 4 conference papers and presentations, 1 book chapter, and two dissertations (nearing completion). It is felt that several significant advances were made during the course of this project that should have an impact on the field of numerical analysis of wave phenomena. These include the development of high-order, adaptive, hp-finite element methods for elastodynamic calculations and high-order schemes for linear and nonlinear hyperbolic systems. Also, a theory of multi-stage Taylor-Galerkin schemes was developed and implemented in the analysis of several wave propagation problems, and was configured within a general hp-adaptive strategy for these types of problems. Further details on research results and on areas requiring additional study are given in the Appendix.
NASA Astrophysics Data System (ADS)
Fradi, Aniss
The ability to allocate the active power (MW) loading on transmission lines and transformers is the basis of the "flow based" transmission allocation system developed by the North American Electric Reliability Council. In such a system, the active power flows must be allocated to each line or transformer in proportion to the active power being transmitted by each transaction imposed on the system. Currently, this is accomplished through the use of linear Power Transfer Distribution Factors (PTDFs). Unfortunately, no linear allocation models exist for other energy transmission quantities, such as MW and MVAR losses, MVAR and MVA flows, etc. Early allocation schemes were developed to allocate MW losses due to transactions to branches in a transmission system; however, they exhibited diminished accuracy, since most of them were based on linear power flow modeling of the transmission system. This thesis presents a new methodology to calculate Energy Transaction Allocation factors (ETA factors, or eta factors), using the well-known process of integrating a first-derivative function, together with consistent and well-established mathematical and AC power flow models. The factors give a highly accurate allocation of any nonlinear system quantity to transactions placed on the transmission system. The thesis also extends the new ETA factor calculation procedure to formulate a new economic dispatch scheme in which multiple sets of generators are economically dispatched to meet their corresponding load and their share of the losses.
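The path-integration idea behind such allocation factors can be sketched generically: for a nonlinear quantity Q of the transaction amounts t, allocate A_k as the integral over s from 0 to 1 of t_k times the partial derivative of Q with respect to t_k evaluated at s*t, so the allocations sum exactly to Q(t) - Q(0). The snippet below demonstrates this on a toy quadratic "loss" function standing in for a full AC power-flow quantity; it is not the ETA-factor implementation of the thesis.

```python
# Allocate a nonlinear quantity Q to transactions by integrating its first
# derivative along the loading path s*t (trapezoidal rule, central differences).
import numpy as np

def allocate(Q, t, n_steps=201, eps=1e-6):
    t = np.asarray(t, dtype=float)
    s = np.linspace(0.0, 1.0, n_steps)
    A = np.zeros_like(t)
    for k in range(t.size):
        e = np.zeros_like(t)
        e[k] = eps
        dQ = np.array([(Q(si * t + e) - Q(si * t - e)) / (2.0 * eps) for si in s])
        A[k] = t[k] * 0.5 * np.sum((dQ[1:] + dQ[:-1]) * np.diff(s))
    return A

# Toy nonlinear "loss" as a function of two transaction amounts (MW).
Q = lambda t: 0.02 * t[0] ** 2 + 0.03 * t[1] ** 2 + 0.01 * t[0] * t[1]
t = np.array([100.0, 50.0])
A = allocate(Q, t)
print("allocations:", A, "sum:", A.sum(), "total:", Q(t) - Q(0 * t))
```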
NASA Astrophysics Data System (ADS)
Yang, Chou-Hsun; Hsu, Chao-Ping
2013-10-01
The prediction of electron transfer (ET) rates requires electronic coupling values. The Generalized Mulliken-Hush (GMH) and Fragment Charge Difference (FCD) schemes have been useful approaches for calculating the ET coupling from an excited-state calculation. In their typical form, both methods use two eigenstates in forming the target charge-localized diabatic states. For problems involving three or four states, a direct generalization is possible, but it is necessary to pick and assign the locally excited or charge-transfer states involved. In this work, we generalize the 3-state scheme to a multi-state FCD without the need to manually pick or assign the states. In this scheme, the diabatic states are obtained separately in the charge-transfer or neutral excited subspaces, defined by their eigenvalues in the fragment charge-difference matrix. In each subspace, the Hamiltonians are diagonalized, and there exist off-diagonal Hamiltonian matrix elements between different subspaces, particularly between the charge-transfer and neutral excited diabatic states. The ET coupling values are obtained as the corresponding off-diagonal Hamiltonian matrix elements. A similar multi-state GMH scheme can also be developed. We test the new multi-state schemes for their performance in systems that have been studied using more than two states with FCD or GMH. We found that the multi-state approach yields much better charge-localized states in these systems. We further test the dependence on the number of states included in the calculation of ET couplings. The final coupling values converge as the number of states included is increased. In one system where an experimental value is available, the multi-state FCD coupling value agrees better with the previous experimental result. We found that the multi-state GMH and FCD are useful when the original two-state approach fails.
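A rough sketch of the multi-state FCD construction described above: split the adiabatic states into charge-transfer and neutral subspaces by the eigenvalues of the fragment charge-difference matrix, re-diagonalize the Hamiltonian within each subspace, and read the couplings off the off-diagonal block. The inputs below (adiabatic energies and a fragment charge-difference matrix) are invented for illustration and do not come from any calculation in the paper.

```python
# Multi-state FCD-style coupling sketch (illustrative inputs).
import numpy as np

def multistate_fcd(E, dq, ct_threshold=0.5):
    """E: adiabatic excited-state energies.  dq: fragment charge-difference
    matrix in the adiabatic basis.  Returns the off-diagonal Hamiltonian
    block between the charge-transfer and neutral diabatic states."""
    H = np.diag(E)                         # adiabatic Hamiltonian
    lam, U = np.linalg.eigh(dq)            # diabatize w.r.t. fragment charge difference
    ct = np.abs(lam) > ct_threshold        # |dq| near 1 e -> charge-transfer subspace
    Ht = U.T @ H @ U                       # Hamiltonian in the dq eigenbasis
    blocks = {}
    for label, mask in (("CT", ct), ("LE", ~ct)):
        idx = np.where(mask)[0]
        _, V = np.linalg.eigh(Ht[np.ix_(idx, idx)])   # diagonalize within the subspace
        blocks[label] = (idx, V)
    (i_ct, V_ct), (i_le, V_le) = blocks["CT"], blocks["LE"]
    return V_ct.T @ Ht[np.ix_(i_ct, i_le)] @ V_le     # electronic couplings

# Toy 3-state example: two locally excited states and one CT state.
E = np.array([3.10, 3.25, 3.60])                      # adiabatic energies, eV (made up)
dq = np.array([[0.05, 0.02, 0.30],
               [0.02, 0.10, 0.25],
               [0.30, 0.25, 1.60]])                   # fragment charge differences, e (made up)
print(multistate_fcd(E, dq))
```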
On the dynamics of approximating schemes for dissipative nonlinear equations
NASA Technical Reports Server (NTRS)
Jones, Donald A.
1993-01-01
Since one can rarely write down the analytical solutions to nonlinear dissipative partial differential equations (PDEs), it is important to understand whether, and in what sense, the behavior of approximating schemes to these equations reflects the true dynamics of the original equations. Further, because standard error estimates between approximations of the true solutions (coming from spectral, finite difference, or finite element schemes, for example) and the exact solutions grow exponentially in time, this analysis provides little value in understanding the infinite-time behavior of a given approximating scheme. The notion of the global attractor has been useful in quantifying the infinite-time behavior of dissipative PDEs, such as the Navier-Stokes equations. Loosely speaking, the global attractor is all that remains of a sufficiently large bounded set in phase space mapped infinitely forward in time under the evolution of the PDE. Though the attractor has been shown to have some nice properties (it is compact, connected, and finite dimensional, for example), it is in general quite complicated. Nevertheless, the global attractor gives a way to understand how the infinite-time behavior of approximating schemes such as the ones coming from a finite difference, finite element, or spectral method relates to that of the original PDE. Indeed, one can often show that such approximations also have a global attractor. We therefore only need to understand how the structure of the attractor for the PDE behaves under approximation. This is by no means a trivial task. Several interesting results have been obtained in this direction; however, we will not go into the details. We mention here that approximations generally lose information about the system no matter how accurate they are. There are examples showing that certain parts of the attractor may be lost by arbitrarily small perturbations of the original equations.
System and method for liquid silicon containment
Cliber, James A; Clark, Roger F; Stoddard, Nathan G; Von Dollen, Paul
2013-05-28
This invention relates to a system and a method for liquid silicon containment, such as during the casting of high purity silicon used in solar cells or solar modules. The containment apparatus includes a shielding member adapted to prevent breaching molten silicon from contacting structural elements or cooling elements of a casting device, and a volume adapted to hold a quantity of breaching molten silicon with the volume formed by a bottom and one or more sides.