NASA Technical Reports Server (NTRS)
Chambers, Gary D.; King, Elizabeth A.; Oleson, Keith
1992-01-01
In response to the changing aerospace economic climate, Martin Marietta Astronautics Group (MMAG) has adopted a Total Quality Management (TQM) philosophy to maintain a competitive edge. TQM emphasizes continuous improvement of processes, motivation to improve from within, cross-functional involvement, people empowerment, customer satisfaction, and modern process control techniques. The four major initiatives of TQM are Product Excellence, Manufacturing Resource Planning (MRP II), People Empowerment, and Subcontract Management. The Defense Space and Communications (DS&C) Test Lab's definition and implementation of the MRP II and people empowerment initiatives within TQM are discussed. The application of MRP II to environmental test planning and operations processes required a new and innovative approach. In an 18 month span, the test labs implemented MRP II and people empowerment and achieved a Class 'A' operational status. This resulted in numerous benefits, both tangible and intangible, including significant cost savings and improved quality of life. A detailed description of the implementation process and results is presented.
NASA Astrophysics Data System (ADS)
Malecha, Ziemowit; Lubryka, Eliza
2017-11-01
A numerical model of thin layers characterized by a defined wrapping pattern can be a crucial element of many computational problems in engineering and science. A motivating example is found in multilayer electrical insulation, which is an important component of superconducting magnets and other cryogenic installations. The wrapping pattern of the insulation can significantly affect heat transport and the performance of the considered instruments. The major objective of this study is to develop the numerical boundary conditions (BC) needed to model the wrapping pattern of thin insulation. An example of the practical application of the proposed BC includes the heat transfer of Rutherford NbTi cables immersed in superfluid helium (He II) across thin layers of electrical insulation. The proposed BC and a mathematical model of heat transfer in He II are implemented in the open source CFD toolbox OpenFOAM. The implemented mathematical model and the BC are compared with experimental data. The study confirms that the thermal resistance of electrical insulation can be lowered by implementing the proper wrapping pattern. The proposed BC can be useful in the study of new patterns for wrapping schemes. The work has been supported by statutory funds from the Polish Ministry of Science and Higher Education for the year 2017.
Numerical modelling in biosciences using delay differential equations
NASA Astrophysics Data System (ADS)
Bocharov, Gennadii A.; Rihan, Fathalla A.
2000-12-01
Our principal purposes here are (i) to consider, from the perspective of applied mathematics, models of phenomena in the biosciences that are based on delay differential equations and for which numerical approaches are a major tool in understanding their dynamics, (ii) to review the application of numerical techniques to investigate these models. We show that there are prima facie reasons for using such models: (i) they have a richer mathematical framework (compared with ordinary differential equations) for the analysis of biosystem dynamics, (ii) they display better consistency with the nature of certain biological processes and predictive results. We analyze both the qualitative and quantitative role that delays play in basic time-lag models proposed in population dynamics, epidemiology, physiology, immunology, neural networks and cell kinetics. We then indicate suitable computational techniques for the numerical treatment of mathematical problems emerging in the biosciences, comparing them with those implemented by the bio-modellers.
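As a concrete illustration of the kind of model this review surveys, the sketch below integrates the delayed logistic (Hutchinson) equation dN/dt = r N(t) (1 - N(t - tau)/K) with a fixed-step explicit Euler method and a constant-history buffer. It is a minimal sketch with arbitrary parameter values, not code from the authors or a recommended production scheme.

```python
import numpy as np

def delayed_logistic(r=1.8, K=1.0, tau=1.0, N0=0.5, t_end=50.0, dt=0.001):
    """Explicit Euler integration of dN/dt = r*N(t)*(1 - N(t - tau)/K).

    The constant history N(t) = N0 for t <= 0 is kept in the same array so the
    delayed term can be looked up by index.
    """
    lag = int(round(tau / dt))          # number of steps spanning the delay
    n_steps = int(round(t_end / dt))
    N = np.empty(lag + n_steps + 1)
    N[:lag + 1] = N0                    # constant initial history on [-tau, 0]
    for i in range(lag, lag + n_steps):
        N[i + 1] = N[i] + dt * r * N[i] * (1.0 - N[i - lag] / K)
    t = np.linspace(0.0, t_end, n_steps + 1)
    return t, N[lag:]

if __name__ == "__main__":
    t, N = delayed_logistic()
    print(N[-5:])   # for r*tau > pi/2 the solution settles onto a limit cycle
```

The history buffer is the feature that distinguishes a delay model from an ordinary differential equation solver: the right-hand side needs the state one delay interval in the past, not just the current state.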
Design and Implementation of Hybrid CORDIC Algorithm Based on Phase Rotation Estimation for NCO
Zhang, Chaozhu; Han, Jinan; Li, Ke
2014-01-01
The numerically controlled oscillator (NCO) has wide application in radar, digital receivers, and software radio systems. Firstly, this paper introduces the traditional CORDIC algorithm. Then, in order to improve computing speed and save resources, this paper proposes a hybrid CORDIC algorithm based on phase rotation estimation applied in the NCO. By estimating the direction of part of the phase rotations, the algorithm reduces the number of phase rotations and add-subtract units, thereby decreasing delay. Furthermore, the paper simulates and implements the numerically controlled oscillator using Quartus II and ModelSim software. Finally, simulation results indicate that an improvement over the traditional CORDIC algorithm is achieved in terms of ease of computation, resource utilization, and computing speed/delay while maintaining precision. It is suitable for high-speed, high-precision digital modulation and demodulation. PMID:25110750
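For reference, the sketch below implements the conventional rotation-mode CORDIC that the paper takes as its starting point, not the proposed hybrid phase-rotation-estimation variant. The iteration count is an illustrative assumption, and floating point is used in place of the fixed-point arithmetic a hardware NCO would employ.

```python
import math

def cordic_sin_cos(angle, iterations=16):
    """Classical rotation-mode CORDIC: rotate (1/K, 0) by `angle` (radians,
    |angle| <= pi/2) using shift-and-add style micro-rotations; returns (cos, sin)."""
    # Pre-computed arctangents of 2**-i and the aggregate CORDIC gain K.
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = K, 0.0, angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0          # direction of this micro-rotation
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return x, y                              # ~ (cos(angle), sin(angle))

print(cordic_sin_cos(0.6), (math.cos(0.6), math.sin(0.6)))
```

The hybrid scheme described in the abstract replaces part of this iteration chain with an estimated rotation direction, which is where the savings in add-subtract units and delay come from.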
Simulation of a GOX-kerosene subscale rocket combustion chamber
NASA Astrophysics Data System (ADS)
Höglauer, Christoph; Kniesner, Björn; Knab, Oliver; Kirchberger, Christoph; Schlieben, Gregor; Kau, Hans-Peter
2011-12-01
In view of future film cooling tests at the Institute for Flight Propulsion (LFA) at Technische Universität München, the Astrium in-house spray combustion CFD tool Rocflam-II was validated against first test data gained from this rocket test bench without film cooling. The subscale rocket combustion chamber uses GOX and kerosene as propellants, which are injected through a single double-swirl element. In particular, the modeling of the double-swirl element and the measured wall roughness were adapted to the LFA hardware. Additionally, new liquid kerosene fluid properties were implemented and verified in Rocflam-II. The influences of soot deposition and hot-gas radiation on the wall heat flux were also estimated analytically and numerically. In the context of reviewing the evaporation model implemented in Rocflam-II, the binary diffusion coefficient and its pressure dependency were analyzed. Finally, simulations were performed for different load points with Rocflam-II, showing good agreement with the test data.
NASA Astrophysics Data System (ADS)
Prigozhin, Leonid; Sokolovsky, Vladimir
2018-05-01
We consider the fast Fourier transform (FFT) based numerical method for thin film magnetization problems (Vestgården and Johansen 2012 Supercond. Sci. Technol. 25 104001), compare it with the finite element methods, and evaluate its accuracy. Proposed modifications of this method implementation ensure stable convergence of iterations and enhance its efficiency. A new method, also based on the FFT, is developed for 3D bulk magnetization problems. This method is based on a magnetic field formulation, different from the popular h-formulation of eddy current problems typically employed with the edge finite elements. The method is simple, easy to implement, and can be used with a general current–voltage relation; its efficiency is illustrated by numerical simulations.
NASA Astrophysics Data System (ADS)
Carlone, Pierpaolo; Astarita, Antonello; Rubino, Felice; Pasquino, Nicola; Aprea, Paolo
2016-12-01
In this paper, a selective laser post-deposition on pure grade II titanium coatings, cold-sprayed on AA2024-T3 sheets, was experimentally and numerically investigated. Morphological features, microstructure, and chemical composition of the treated zone were assessed by means of optical microscopy, scanning electron microscopy, and energy dispersive X-ray spectrometry. Microhardness measurements were also carried out to evaluate the mechanical properties of the coating. A numerical model of the laser treatment was implemented and solved to simulate the process and discuss the experimental outcomes. Obtained results highlighted the key role played by heat input and dimensional features on the effectiveness of the treatment.
ERIC Educational Resources Information Center
Blankenship, Whitney G.
2015-01-01
From the moment the United States entered World War II, public schools across the nation bombarded the Office of Education Wartime Commission requesting advice on how to mobilize schools for the war effort. American schools would rise to the occasion, implementing numerous programs including pre-induction training and the Victory Corps. The…
Review of Methods and Approaches for Deriving Numeric ...
EPA will propose numeric criteria for nitrogen/phosphorus pollution to protect estuaries, coastal areas and South Florida inland flowing waters that have been designated Class I, II and III, as well as downstream protective values (DPVs) to protect estuarine and marine waters. In accordance with the formal determination and pursuant to a subsequent consent decree, these numeric criteria are being developed to translate and implement Florida’s existing narrative nutrient criterion, to protect the designated use that Florida has previously set for these waters, at Rule 62-302.530(47)(b), F.A.C., which provides that “In no case shall nutrient concentrations of a body of water be altered so as to cause an imbalance in natural populations of aquatic flora or fauna.” Under the Clean Water Act and EPA’s implementing regulations, these numeric criteria must be based on sound scientific rationale and reflect the best available scientific knowledge. EPA has previously published a series of peer-reviewed technical guidance documents to develop numeric criteria to address nitrogen/phosphorus pollution in different water body types. EPA recognizes that available and reliable data sources for use in numeric criteria development vary across estuarine and coastal waters in Florida and flowing waters in South Florida. In addition, scientifically defensible approaches for numeric criteria development have different requirements that must be taken into consideration.
Membrane triangles with corner drilling freedoms. III - Implementation and performance evaluation
NASA Technical Reports Server (NTRS)
Felippa, Carlos A.; Alexander, Scott
1992-01-01
This paper completes a three-part series on the formulation of 3-node, 9-dof membrane triangles with corner drilling freedoms based on parametrized variational principles. The first four sections cover element implementation details including determination of optimal parameters and treatment of distributed loads. Then three elements of this type, labeled ALL, FF and EFF-ANDES, are tested on standard plane stress problems. ALL represents numerically integrated versions of Allman's 1988 triangle; FF is based on the free formulation triangle presented by Bergan and Felippa in 1985; and EFF-ANDES represent two different formulations of the optimal triangle derived in Parts I and II. The numerical studies indicate that the ALL, FF and EFF-ANDES elements are comparable in accuracy for elements of unitary aspect ratios. The ALL elements are found to stiffen rapidly in inplane bending for high aspect ratios, whereas the FF and EFF elements maintain accuracy. The EFF and ANDES implementations have a moderate edge in formation speed over the FF.
Wall function treatment for bubbly boundary layers at low void fractions.
Soares, Daniel V; Bitencourt, Marcelo C; Loureiro, Juliana B R; Silva Freire, Atila P
2018-01-01
The present work investigates the role of different treatments of the lower boundary condition on the numerical prediction of bubbly flows. Two different wall function formulations are tested against experimental data obtained for bubbly boundary layers: (i) a new analytical solution derived through asymptotic techniques and (ii) the previous formulation of Troshko and Hassan (IJHMT, 44, 871-875, 2001a). A modified k-ε model is used to close the averaged Navier-Stokes equations, together with the hypothesis that turbulence can be modelled by a linear superposition of bubble- and shear-induced eddy viscosities. The work shows, in particular, how four corrections must be implemented in the standard single-phase k-ε model to account for the effects of bubbles. The numerical implementation of the near-wall functions is made through a finite element code.
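For context, the single-phase log-law wall treatment that such bubbly-flow formulations extend can be written as a short fixed-point iteration for the friction velocity. The sketch below shows only that single-phase baseline with assumed constants (kappa = 0.41, B = 5.0) and made-up flow values; it is not the two-phase closure derived in the paper.

```python
import math

def friction_velocity(U, y, nu, kappa=0.41, B=5.0, iters=50):
    """Solve the single-phase log law U/u_tau = (1/kappa)*ln(y*u_tau/nu) + B
    for u_tau by fixed-point iteration (valid when y lies in the log layer)."""
    u_tau = math.sqrt(nu * U / y)            # crude starting guess
    for _ in range(iters):
        y_plus = y * u_tau / nu
        u_tau = U / (math.log(y_plus) / kappa + B)
    return u_tau

# Example: water-like fluid, U = 1 m/s measured 5 mm from the wall.
u_tau = friction_velocity(U=1.0, y=5e-3, nu=1e-6)
print(u_tau, u_tau * 5e-3 / 1e-6)            # friction velocity and y+
```

A two-phase wall function of the kind tested in the paper would modify the law-of-the-wall constants or its functional form to account for the bubble-induced contribution to turbulence near the wall.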
NASA Technical Reports Server (NTRS)
Kelly, G. L.; Berthold, G.; Abbott, L.
1982-01-01
A 5 MHz single-board microprocessor system which incorporates an 8086 CPU and an 8087 Numeric Data Processor is used to implement the control laws for the NASA Drones for Aerodynamic and Structural Testing, Aeroelastic Research Wing II. The control laws program was executed in 7.02 msec, with initialization consuming 2.65 msec and the control law loop 4.38 msec. The software emulator execution times for these two tasks were 36.67 and 61.18 msec, respectively, for a total of 97.68 msec. The space, weight and cost reductions achieved in the present aircraft control application of this combination of a 16-bit microprocessor with an 80-bit floating point coprocessor may be obtainable in other real-time control applications.
Quantum generalisation of feedforward neural networks
NASA Astrophysics Data System (ADS)
Wan, Kwok Ho; Dahlsten, Oscar; Kristjánsson, Hlér; Gardner, Robert; Kim, M. S.
2017-09-01
We propose a quantum generalisation of a classical neural network. The classical neurons are firstly rendered reversible by adding ancillary bits. Then they are generalised to being quantum reversible, i.e., unitary (the classical networks we generalise are called feedforward, and have step-function activation functions). The quantum network can be trained efficiently using gradient descent on a cost function to perform quantum generalisations of classical tasks. We demonstrate numerically that it can: (i) compress quantum states onto a minimal number of qubits, creating a quantum autoencoder, and (ii) discover quantum communication protocols such as teleportation. Our general recipe is theoretical and implementation-independent. The quantum neuron module can naturally be implemented photonically.
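A toy illustration of the first step described above, making a step-function neuron reversible with an ancillary bit, is sketched below. Writing the neuron output into an ancilla via XOR yields a bijection on the extended bit register, which is the classical precondition for later promoting the gate to a unitary. This is an illustrative reconstruction under assumed weights, not the authors' construction or code.

```python
from itertools import product

def neuron(x, w, theta):
    """Classical step-function neuron: 1 if w.x >= theta else 0."""
    return int(sum(wi * xi for wi, xi in zip(w, x)) >= theta)

def reversible_neuron(bits, w, theta):
    """Inputs pass through unchanged; the ancilla (last bit) is XORed with
    the neuron output.  The resulting map is its own inverse, hence reversible."""
    *x, a = bits
    return tuple(x) + (a ^ neuron(x, w, theta),)

w, theta = (1, 1), 2                      # an AND gate realized as a 2-input neuron
states = list(product((0, 1), repeat=3))  # two inputs plus one ancilla bit
images = [reversible_neuron(s, w, theta) for s in states]
assert sorted(images) == states           # bijection on {0,1}^3: reversible
assert all(reversible_neuron(reversible_neuron(s, w, theta), w, theta) == s
           for s in states)               # involution: applying twice undoes it
print(dict(zip(states, images)))
```

Because the map is a permutation of the bit strings, it can in principle be represented by a permutation matrix and then generalized to an arbitrary unitary, which is the quantum step the abstract describes.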
NASA Astrophysics Data System (ADS)
Chirico, G. B.; Medina, H.; Romano, N.
2014-07-01
This paper examines the potential of different algorithms, based on the Kalman filtering approach, for assimilating near-surface observations into a one-dimensional Richards equation governing soil water flow in soil. Our specific objectives are: (i) to compare the efficiency of different Kalman filter algorithms in retrieving matric pressure head profiles when they are implemented with different numerical schemes of the Richards equation; (ii) to evaluate the performance of these algorithms when nonlinearities arise from the nonlinearity of the observation equation, i.e. when surface soil water content observations are assimilated to retrieve matric pressure head values. The study is based on a synthetic simulation of an evaporation process from a homogeneous soil column. Our first objective is achieved by implementing a Standard Kalman Filter (SKF) algorithm with both an explicit finite difference scheme (EX) and a Crank-Nicolson (CN) linear finite difference scheme of the Richards equation. The Unscented (UKF) and Ensemble Kalman Filters (EnKF) are applied to handle the nonlinearity of a backward Euler finite difference scheme. To accomplish the second objective, an analogous framework is applied, with the exception of replacing SKF with the Extended Kalman Filter (EKF) in combination with a CN numerical scheme, so as to handle the nonlinearity of the observation equation. While the EX scheme is computationally too inefficient to be implemented in an operational assimilation scheme, the retrieval algorithm implemented with a CN scheme is found to be computationally more feasible and accurate than those implemented with the backward Euler scheme, at least for the examined one-dimensional problem. The UKF appears to be as feasible as the EnKF when one has to handle nonlinear numerical schemes or additional nonlinearities arising from the observation equation, at least for systems of small dimensionality as the one examined in this study.
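To make the assimilation step concrete, the sketch below shows a generic ensemble Kalman filter analysis update in the perturbed-observations form, of the kind the EnKF variant above applies to a discretized Richards equation. The state vector, observation operator, and numbers here are placeholders, not the paper's soil-water model or its settings.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_std, H):
    """Perturbed-observation EnKF analysis step.

    ensemble : (n_state, n_members) forecast states
    obs      : (n_obs,) observation vector
    H        : (n_obs, n_state) linear observation operator
    """
    n_state, n_mem = ensemble.shape
    X_mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - X_mean                                 # ensemble anomalies
    P = A @ A.T / (n_mem - 1)                             # sample covariance
    R = np.eye(len(obs)) * obs_err_std**2
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # Kalman gain
    # Perturb the observations so the analysis spread stays statistically consistent.
    obs_pert = obs[:, None] + obs_err_std * np.random.randn(len(obs), n_mem)
    return ensemble + K @ (obs_pert - H @ ensemble)

# Tiny example: a 10-node "profile" with the surface node observed, 50 members.
rng = np.random.default_rng(0)
ens = -1.0 + 0.2 * rng.standard_normal((10, 50))
H = np.zeros((1, 10)); H[0, 0] = 1.0
analysis = enkf_update(ens, obs=np.array([-0.8]), obs_err_std=0.05, H=H)
print(ens[0].mean(), analysis[0].mean())   # surface node pulled toward -0.8
```

When the observation operator is nonlinear, as in the surface-water-content case discussed above, the product terms involving H are replaced by sample covariances between the state ensemble and the ensemble of predicted observations.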
NASA Technical Reports Server (NTRS)
Padovan, J.; Adams, M.; Lam, P.; Fertis, D.; Zeid, I.
1982-01-01
Second-year efforts within a three-year study to develop and extend finite element (FE) methodology to efficiently handle the transient/steady state response of rotor-bearing-stator structure associated with gas turbine engines are outlined. The two main areas aim at (1) implanting the squeeze film damper element into a general purpose FE code for testing and evaluation; and (2) determining the numerical characteristics of the FE-generated rotor-bearing-stator simulation scheme. The governing FE field equations are set out and the solution methodology is presented. The choice of ADINA as the general-purpose FE code is explained, and the numerical operational characteristics of the direct integration approach of FE-generated rotor-bearing-stator simulations is determined, including benchmarking, comparison of explicit vs. implicit methodologies of direct integration, and demonstration problems.
ACOSS FIVE (Active Control of Space Structures). Phase 1A
1982-03-01
The control design ... structural model, performance model, disturbance model, state space model, reduced models (HAC ... library) whose detailed numerical procedures, structural reduction, eigen-computations, etc., are implemented differently than in NASTRAN. SPAR was...
Numerical fatigue 3D-FE modeling of indirect composite-restored posterior teeth.
Ausiello, Pietro; Franciosa, Pasquale; Martorelli, Massimo; Watts, David C
2011-05-01
In restored teeth, stresses at the tooth-restoration interface during masticatory processes may fracture the teeth or the restoration and cracks may grow and propagate. The aim was to apply numerical methodologies to simulate the behavior of a restored tooth and to evaluate fatigue lifetimes before crack failure. Using a CAD-FEM procedure and fatigue mechanic laws, the fatigue damage of a restored molar was numerically estimated. Tessellated surfaces of enamel and dentin were extracted by applying segmentation and classification algorithms, to sets of 2D image data. A user-friendly GUI, which enables selection and visualization of 3D tessellated surfaces, was developed in a MatLab(®) environment. The tooth-boundary surfaces of enamel and dentin were then created by sweeping operations through cross-sections. A class II MOD cavity preparation was then added into the 3D model and tetrahedral mesh elements were generated. Fatigue simulation was performed by combining a preliminary static FEA simulation with classical fatigue mechanical laws. Regions with the shortest fatigue-life were located around the fillets of the class II MOD cavity, where the static stress was highest. The described method can be successfully adopted to generate detailed 3D-FE models of molar teeth, with different cavities and restorative materials. This method could be quickly implemented for other dental or biomechanical applications. Copyright © 2010 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
The principle of acoustic time reversal and holography
NASA Astrophysics Data System (ADS)
Zverev, V. A.
2004-11-01
On the basis of earlier results (V. A. Zverev, Radiooptics (1975)), the principle of the time reversal of waves (TRW) with the use of a time-reversed signal is considered (M. Fink et al., Time-Reversed Acoustics, Rep. Prog. Phys. 63 (2000)). Both the common mathematical basis and the difference between the TRW and holography are revealed. The following conclusions are drawn: (i) to implement the TRW, it is necessary that the spatial and time coordinates be separated in the initial signal; (ii) two methods of implementing the TRW are possible, namely, the time reversal and the use of an inverse filter; (iii) certain differences exist in the spatial focusing by the TRW and holography; and (iv) on the basis of the theory developed, a numerical modeling of the TRW becomes possible.
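The distinction between the two TRW implementations mentioned in (ii) can be made explicit with elementary signal processing: time reversal of a real received signal corresponds to phase conjugation of its spectrum, whereas the inverse filter divides by the medium's transfer function. The NumPy sketch below illustrates this for a toy multipath filter; the source, the filter taps, and the regularization constant are arbitrary assumptions, not quantities from the paper.

```python
import numpy as np

n = 256
s = np.zeros(n); s[0] = 1.0                            # impulsive source
h = np.zeros(n); h[0], h[7], h[20] = 1.0, 0.6, 0.3     # toy multipath "medium"

S, H = np.fft.fft(s), np.fft.fft(h)
R = S * H                                              # received signal spectrum

# (a) Time reversal = phase conjugation of the spectrum, then re-propagation.
focus_tr = np.fft.ifft(np.conj(R) * H).real            # autocorrelation of h: peak at lag 0

# (b) Inverse filter: divide by H (regularized to guard against near-zeros).
eps = 1e-3 * np.abs(H).max()
focus_inv = np.fft.ifft(R / (H + eps)).real            # ~ recovers the original impulse

print(np.argmax(np.abs(focus_tr)), np.argmax(np.abs(focus_inv)))   # both refocus at 0
```

The time-reversal branch refocuses energy at the source with sidelobes set by the medium's autocorrelation, while the inverse filter removes the medium response more completely at the price of noise amplification where the transfer function is small, which is the trade-off the abstract alludes to.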
Haynes, Tiffany; Turner, Jerome; Smith, Johnny; Curran, Geoffrey; Bryant-Moore, Keneshia; Ounpraseuth, Songthip T; Kramer, Teresa; Harris, Kimberly; Hutchins, Ellen; Yeary, Karen Hye-Cheon Kim
2018-01-01
Rural African Americans are disproportionately exposed to numerous stressors such as poverty that place them at risk for experiencing elevated levels of depressive symptoms. Effective treatments for decreasing depressive symptoms exist, but rural African Americans often fail to receive adequate and timely care. Churches have been used to address physical health outcomes in rural African American communities, but few have focused primarily on addressing mental health outcomes. Our partnership, consisting of faith community leaders and academic researchers, adapted an evidence-based behavioral activation intervention for use with rural African American churches. This 8-session intervention was adapted to include faith-based themes, Scripture, and other aspects of the rural African American faith culture (e.g., Bible studies). This manuscript describes a Hybrid-II implementation trial that seeks to test the effectiveness of the culturally adapted evidence-based intervention (Renewed and Empowered for the Journey to Overcome in Christ: REJOICE) and gather preliminary data on the strategies necessary to support the successful implementation of this intervention in 24 rural African American churches. This study employs a randomized one-way crossover cluster design to assess effectiveness in reducing depressive symptoms and gather preliminary data regarding implementation outcomes, specifically fidelity, associated with 2 implementation strategies: training only and training+coaching calls. This project has the potential to generate knowledge that will lead to improvements in the provision of mental health interventions within the rural African American community. Further, the use of the Hybrid-II design has the potential to advance our understanding of strategies that will support the implementation of and sustainability of mental health interventions within rural African American faith communities. NCT02860741. Registered August 5, 2016. Copyright © 2017 Elsevier Inc. All rights reserved.
Code Verification of the HIGRAD Computational Fluid Dynamics Solver
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Buren, Kendra L.; Canfield, Jesse M.; Hemez, Francois M.
2012-05-04
The purpose of this report is to outline code and solution verification activities applied to HIGRAD, a Computational Fluid Dynamics (CFD) solver of the compressible Navier-Stokes equations developed at the Los Alamos National Laboratory, and used to simulate various phenomena such as the propagation of wildfires and atmospheric hydrodynamics. Code verification efforts, as described in this report, are an important first step to establish the credibility of numerical simulations. They provide evidence that the mathematical formulation is properly implemented without significant mistakes that would adversely impact the application of interest. Highly accurate analytical solutions are derived for four code verification test problems that exercise different aspects of the code. These test problems are referred to as: (i) the quiet start, (ii) the passive advection, (iii) the passive diffusion, and (iv) the piston-like problem. These problems are simulated using HIGRAD with different levels of mesh discretization and the numerical solutions are compared to their analytical counterparts. In addition, the rates of convergence are estimated to verify the numerical performance of the solver. The first three test problems produce numerical approximations as expected. The fourth test problem (piston-like) indicates the extent to which the code is able to simulate a 'mild' discontinuity, which is a condition that would typically be better handled by a Lagrangian formulation. The current investigation concludes that the numerical implementation of the solver performs as expected. The quality of solutions is sufficient to provide credible simulations of fluid flows around wind turbines. The main caveat associated with these findings is the low coverage provided by these four problems and the somewhat limited verification activities. A more comprehensive evaluation of HIGRAD may be beneficial for future studies.
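The rate-of-convergence estimate mentioned above follows a standard recipe independent of HIGRAD itself: given errors against an exact solution on successively refined meshes, the observed order is the slope of log(error) versus log(h). The sketch below uses hypothetical error values purely to show the calculation; it is not HIGRAD output.

```python
import math

def observed_order(h_coarse, h_fine, err_coarse, err_fine):
    """Observed order of accuracy p from two mesh levels:
    err ~ C*h**p  =>  p = log(e_c/e_f) / log(h_c/h_f)."""
    return math.log(err_coarse / err_fine) / math.log(h_coarse / h_fine)

# Hypothetical verification data: grid spacing halved twice, L2 errors measured.
h   = [0.04, 0.02, 0.01]
err = [3.2e-3, 8.1e-4, 2.0e-4]
for i in range(len(h) - 1):
    p = observed_order(h[i], h[i + 1], err[i], err[i + 1])
    print(f"levels {i}->{i+1}: observed order p = {p:.2f}")
# Values near the formal order of the scheme (here ~2) support the verification claim.
```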
Inferring physical properties of galaxies from their emission-line spectra
NASA Astrophysics Data System (ADS)
Ucci, G.; Ferrara, A.; Gallerani, S.; Pallottini, A.
2017-02-01
We present a new approach based on Supervised Machine Learning algorithms to infer key physical properties of galaxies (density, metallicity, column density and ionization parameter) from their emission-line spectra. We introduce a numerical code (called GAME, GAlaxy Machine learning for Emission lines) implementing this method and test it extensively. GAME delivers excellent predictive performances, especially for estimates of metallicity and column densities. We compare GAME with the most widely used diagnostics (e.g. R23, [N II] λ6584/Hα indicators) showing that it provides much better accuracy and wider applicability range. GAME is particularly suitable for use in combination with Integral Field Unit spectroscopy, both for rest-frame optical/UV nebular lines and far-infrared/sub-millimeter lines arising from photodissociation regions. Finally, GAME can also be applied to the analysis of synthetic galaxy maps built from numerical simulations.
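The underlying workflow, mapping a vector of emission-line fluxes to physical parameters with a supervised regressor, can be sketched in a few lines. The example below uses scikit-learn's random forest on synthetic, made-up "line fluxes" generated from two hidden parameters; it only illustrates the train-and-predict pattern and is not the GAME code, its training set, or its chosen learner.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic training set: 5 "line fluxes" generated from 2 hidden parameters
# (stand-ins for, e.g., metallicity and ionization parameter).
n = 5000
params = rng.uniform(size=(n, 2))                       # targets to recover
fluxes = np.column_stack([
    params[:, 0] + 0.1 * rng.standard_normal(n),
    params[:, 1] ** 2 + 0.1 * rng.standard_normal(n),
    params[:, 0] * params[:, 1] + 0.1 * rng.standard_normal(n),
    np.exp(-params[:, 0]) + 0.1 * rng.standard_normal(n),
    0.5 * params[:, 1] + 0.1 * rng.standard_normal(n),
])

X_train, X_test, y_train, y_test = train_test_split(fluxes, params, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out spectra:", model.score(X_test, y_test))
```

In the approach described above the training set would come from a library of photoionization or photodissociation-region models rather than random functions, but the supervised-regression structure is the same.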
2000-12-01
Table-of-contents fragment: 1.4 Numerical Simulations (1.4.1 Impact of a rod on a rigid wall; 1.4.2 Impact of two ...); dissipative properties of the proposed scheme; II.4 Representative Numerical Simulations (II.4.1 Forging of ...); representative numerical simulations; III.3 Model Problem II: a Simplified Model of Thin Beams.
Two-dimensional numerical modeling and solution of convection heat transfer in turbulent He II
NASA Technical Reports Server (NTRS)
Zhang, Burt X.; Karr, Gerald R.
1991-01-01
Numerical schemes are employed to investigate heat transfer in the turbulent flow of He II. FEM is used to solve a set of equations governing the heat transfer and hydrodynamics of He II in the turbulent regime. Numerical results are compared with available experimental data and interpreted in terms of conventional heat transfer parameters such as the Prandtl number, the Peclet number, and the Nusselt number. Within the prescribed Reynolds number domain, the Gorter-Mellink thermal counterflow mechanism becomes less significant, and He II acts like an ordinary fluid. The convection heat transfer characteristics of He II in the highly turbulent regime can be successfully described by using the conventional turbulence and heat transfer theories.
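For reference, the conventional parameters cited above follow directly from their definitions; the sketch below merely collects them (Re = UL/ν, Pr = ν/α, Pe = Re·Pr, Nu = hL/k). The property values are illustrative ordinary-fluid numbers, not He II properties or results from this study.

```python
def heat_transfer_numbers(U, L, nu, alpha, h, k):
    """Return (Re, Pr, Pe, Nu) from their textbook definitions.

    U: velocity, L: length scale, nu: kinematic viscosity,
    alpha: thermal diffusivity, h: heat transfer coefficient, k: conductivity.
    """
    Re = U * L / nu          # Reynolds number
    Pr = nu / alpha          # Prandtl number
    Pe = Re * Pr             # Peclet number (= U*L/alpha)
    Nu = h * L / k           # Nusselt number
    return Re, Pr, Pe, Nu

# Illustrative water-like values, NOT He II properties.
print(heat_transfer_numbers(U=0.5, L=0.01, nu=1e-6, alpha=1.4e-7, h=2.0e3, k=0.6))
```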
5 CFR 302.304 - Order of consideration.
Code of Federal Regulations, 2012 CFR
2012-01-01
... States Code, in the order of his/her numerical ranking. (ii) The name of each other qualified applicant in the order of his/her numerical ranking. (2) Order B. (i) The name of each qualified preference...'s reemployment list, in the order of his/her numerical ranking. (ii) The name of each qualified...
5 CFR 302.304 - Order of consideration.
Code of Federal Regulations, 2011 CFR
2011-01-01
... States Code, in the order of his/her numerical ranking. (ii) The name of each other qualified applicant in the order of his/her numerical ranking. (2) Order B. (i) The name of each qualified preference...'s reemployment list, in the order of his/her numerical ranking. (ii) The name of each qualified...
5 CFR 302.304 - Order of consideration.
Code of Federal Regulations, 2014 CFR
2014-01-01
... States Code, in the order of his/her numerical ranking. (ii) The name of each other qualified applicant in the order of his/her numerical ranking. (2) Order B. (i) The name of each qualified preference...'s reemployment list, in the order of his/her numerical ranking. (ii) The name of each qualified...
5 CFR 302.304 - Order of consideration.
Code of Federal Regulations, 2013 CFR
2013-01-01
... States Code, in the order of his/her numerical ranking. (ii) The name of each other qualified applicant in the order of his/her numerical ranking. (2) Order B. (i) The name of each qualified preference...'s reemployment list, in the order of his/her numerical ranking. (ii) The name of each qualified...
A computationally efficient scheme for the non-linear diffusion equation
NASA Astrophysics Data System (ADS)
Termonia, P.; Van de Vyver, H.
2009-04-01
This Letter proposes a new numerical scheme for integrating the non-linear diffusion equation. It is shown that it is linearly stable. Some tests are presented comparing this scheme to a popular decentered version of the linearized Crank-Nicolson scheme, showing that, although the new scheme is slightly less accurate in treating the highly resolved waves, it (i) better treats highly non-linear systems, (ii) better handles the short waves, (iii) turns out to be three to four times cheaper computationally for a given test bed, and (iv) is easier to implement.
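For orientation, the reference scheme mentioned above can be sketched as a linearized Crank-Nicolson step for u_t = (D(u) u_x)_x with the diffusivity frozen at the old time level. The code below is that generic benchmark under assumed Dirichlet boundaries and a toy diffusivity D(u) = u, not the new scheme proposed in the Letter.

```python
import numpy as np

def linearized_cn_step(u, dt, dx, D, bc=(0.0, 0.0)):
    """One Crank-Nicolson step of u_t = (D(u) u_x)_x with D evaluated at the
    old time level (linearized), Dirichlet boundaries, dense solve for clarity."""
    n = len(u)
    Dh = D(0.5 * (u[:-1] + u[1:]))           # diffusivity at cell faces
    r = dt / (2.0 * dx**2)
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = bc
    for i in range(1, n - 1):
        A[i, i - 1] = -r * Dh[i - 1]
        A[i, i]     = 1.0 + r * (Dh[i - 1] + Dh[i])
        A[i, i + 1] = -r * Dh[i]
        b[i] = u[i] + r * (Dh[i] * (u[i + 1] - u[i]) - Dh[i - 1] * (u[i] - u[i - 1]))
    return np.linalg.solve(A, b)

# Example: nonlinear diffusivity D(u) = u and a smooth initial bump.
x = np.linspace(0.0, 1.0, 101)
u = np.exp(-100.0 * (x - 0.5) ** 2)
for _ in range(100):
    u = linearized_cn_step(u, dt=1e-4, dx=x[1] - x[0], D=lambda v: v)
print(u.max())
```

Freezing D at the old time level is what makes each step a linear solve; fully implicit treatment of D(u) would require an iteration such as Newton's method at every step.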
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sovinec, Carl R.
The University of Wisconsin-Madison component of the Plasma Science and Innovation Center (PSI Center) contributed to modeling capabilities and algorithmic efficiency of the Non-Ideal Magnetohydrodynamics with Rotation (NIMROD) Code, which is widely used to model macroscopic dynamics of magnetically confined plasma. It also contributed to the understanding of direct-current (DC) injection of electrical current for initiating and sustaining plasma in three spherical torus experiments: the Helicity Injected Torus-II (HIT-II), the Pegasus Toroidal Experiment, and the National Spherical Torus Experiment (NSTX). The effort was funded through the PSI Center's cooperative agreement with the University of Washington and Utah State University over the period of March 1, 2005 - August 31, 2016. In addition to the computational and physics accomplishments, the Wisconsin effort contributed to the professional education of four graduate students and two postdoctoral research associates. The modeling for HIT-II and Pegasus was directly supported by the cooperative agreement, and contributions to the NSTX modeling were in support of work by Dr. Bickford Hooper, who was funded through a separate grant. Our primary contribution to model development is the implementation of detailed closure relations for collisional plasma. Postdoctoral associate Adam Bayliss implemented the temperature-dependent effects of Braginskii's parallel collisional ion viscosity. As a graduate student, John O'Bryan added runtime options for Braginskii's models and Ji's K2 models of thermal conduction with magnetization effects and thermal equilibration. As a postdoctoral associate, O'Bryan added the magnetization effects for ion viscosity. Another area of model development completed through the PSI-Center is the implementation of Chodura's phenomenological resistivity model. Finally, we investigated and tested linear electron parallel viscosity, leveraged by support from the Center for Extended Magnetohydrodynamic Modeling (CEMM). Work on algorithmic efficiency improved NIMROD's element-based computations. We reordered arrays and eliminated a level of looping for computations over the data points that are used for numerical integration over elements. Moreover, the reordering allows fewer and larger communication calls when using distributed-memory parallel computation, thereby avoiding a data starvation problem that limited parallel scaling over NIMROD's Fourier components for the periodic coordinate. Together with improved parallel preconditioning, work that was supported by CEMM, these developments allowed NIMROD's first scaling to over 10,000 processor cores. Another algorithm improvement supported by the PSI Center is nonlinear numerical diffusivities for implicit advection. We also developed the Stitch code to enhance the flexibility of NIMROD's preprocessing. Our simulations of HIT-II considered conditions with and without fluctuation-induced amplification of poloidal flux, but our validation efforts focused on conditions without amplification. A significant finding is that NIMROD reproduces the dependence of net plasma current as the imposed poloidal flux is varied. The modeling of Pegasus startup from localized DC injectors predicted that development of a tokamak-like configuration occurs through a sequence of current-filament merger events. Comparison of experimentally measured and numerically computed cross-power spectra enhances confidence in NIMROD's simulation of magnetic fluctuations; however, energy confinement remains an open area for further research. Our contributions to the NSTX study include adaptation of the helicity-injection boundary conditions from the HIT-II simulations and support for linear analysis and computation of 3D current-driven instabilities.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-23
...EPA is proposing to approve a portion of the State Implementation Plan (SIP) revision submitted by the State of Oregon for the purpose of addressing the third element of the interstate transport provisions of Clean Air Act (CAA or the Act) section 110(a)(2)(D)(i)(II) for the 1997 8-hour ozone National Ambient Air Quality Standards (NAAQS or standards) and the 1997 and 2006 fine particulate matter (PM2.5) NAAQS. The third element of CAA section 110(a)(2)(D)(i)(II) requires that a State not interfere with any other State's required measures to prevent significant deterioration (PSD) of its air quality. EPA is also proposing to approve numerous revisions to the Oregon SIP that were submitted to EPA by the State of Oregon on October 8, 2008; October 10, 2008; March 17, 2009; June 23, 2010; December 22, 2010 and May 5, 2011. The revisions include updating Oregon's new source review (NSR) rules to be consistent with current Federal regulations and streamlining Oregon's air quality rules by clarifying requirements, removing duplicative rules, and correcting errors. The revisions were submitted in accordance with the requirements of section 110 and part D of the Act.
SERVER DEVELOPMENT FOR NSLS-II PHYSICS APPLICATIONS AND PERFORMANCE ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, G.; Kraimer, M.
2011-03-28
The beam commissioning software framework of the NSLS-II project adopts a client/server based architecture to replace the more traditional monolithic high-level application approach. The server software under development is available via an open-source SourceForge project named epics-pvdata, which consists of the modules pvData, pvAccess, pvIOC, and pvService. Two services that already exist in the pvService module are itemFinder and gather. Each service uses pvData to store in-memory transient data, pvService to transfer data over the network, and pvIOC as the service engine. Performance benchmarking for pvAccess and for both the gather and itemFinder services is presented in this paper, along with a performance comparison between pvAccess and Channel Access. For an ultra-low-emittance synchrotron radiation light source like NSLS-II, the control system requirements, especially for beam control, are tight. To control and manipulate the beam effectively, a use-case study and a theoretical evaluation have been performed. The analysis shows that model-based control is indispensable for beam commissioning and routine operation. However, there are many challenges, such as how to re-use a design model for on-line model-based control, and how to combine numerical methods for modeling a realistic lattice with analytical techniques for analyzing its properties. To satisfy these requirements and challenges, an adequate system architecture for the beam commissioning and operation software framework is critical. The existing traditional approaches are self-consistent and monolithic. Some of them have adopted a middle-layer concept to separate low-level hardware processing from numerical algorithm computing, physics modelling, data manipulation and plotting, and error handling. However, none of the existing approaches satisfies the requirements. A new design has been proposed by introducing service-oriented architecture technology, and a client interface is under development. The design and implementation adopt a new EPICS implementation, namely epics-pvdata [9], which is under active development. The Java implementation of this project is close to stable, and bindings to other languages such as C++ and/or Python are under development. In this paper, we focus on performance benchmarking and comparison for pvAccess and Channel Access, and on performance evaluation for the two services, gather and itemFinder, respectively.
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.; Johnston, Christopher O.; Thompson, Richard A.
2009-01-01
A description of models and boundary conditions required for coupling radiation and ablation physics to a hypersonic flow simulation is provided. Chemical equilibrium routines for varying elemental mass fraction are required in the flow solver to integrate with the equilibrium chemistry assumption employed in the ablation models. The capability also enables an equilibrium catalytic wall boundary condition in the non-ablating case. The paper focuses on numerical implementation issues using FIRE II, Mars return, and Apollo 4 applications to provide context for discussion. Variable relaxation factors applied to the Jacobian elements of partial equilibrium relations required for convergence are defined. Challenges of strong radiation coupling in a shock capturing algorithm are addressed. Results are presented to show how the current suite of models responds to a wide variety of conditions involving coupled radiation and ablation.
Complete Numerical Solution of the Diffusion Equation of Random Genetic Drift
Zhao, Lei; Yue, Xingye; Waxman, David
2013-01-01
A numerical method is presented to solve the diffusion equation for the random genetic drift that occurs at a single unlinked locus with two alleles. The method was designed to conserve probability, and the resulting numerical solution represents a probability distribution whose total probability is unity. We describe solutions of the diffusion equation whose total probability is unity as complete. Thus the numerical method introduced in this work produces complete solutions, and such solutions have the property that whenever fixation and loss can occur, they are automatically included within the solution. This feature demonstrates that the diffusion approximation can describe not only internal allele frequencies, but also the boundary frequencies zero and one. The numerical approach presented here constitutes a single inclusive framework from which to perform calculations for random genetic drift. It has a straightforward implementation, allowing it to be applied to a wide variety of problems, including those with time-dependent parameters, such as changing population sizes. As tests and illustrations of the numerical method, it is used to determine: (i) the probability density and time-dependent probability of fixation for a neutral locus in a population of constant size; (ii) the probability of fixation in the presence of selection; and (iii) the probability of fixation in the presence of selection and demographic change, the latter in the form of a changing population size. PMID:23749318
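A minimal explicit finite-difference sketch of the underlying forward (Kolmogorov) equation for neutral drift, phi_t = (1/4N) d^2[x(1-x) phi]/dx^2, is given below for orientation only. It discretizes the interior allele frequencies with naive boundary handling and therefore does not reproduce the paper's probability-conserving treatment of fixation and loss at x = 0 and x = 1; the population size, grid, and time step are illustrative assumptions.

```python
import numpy as np

def drift_step(phi, x, dt, N):
    """One explicit step of phi_t = (1/(4N)) d^2/dx^2 [ x(1-x) phi ] on the
    interior grid points (boundaries handled naively in this sketch)."""
    dx = x[1] - x[0]
    g = x * (1.0 - x) * phi                      # the term x(1-x)*phi
    new = phi.copy()
    new[1:-1] += dt / (4.0 * N) * (g[2:] - 2.0 * g[1:-1] + g[:-2]) / dx**2
    return new

N = 100                                          # effective population size
x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
phi = np.exp(-((x - 0.5) / 0.05) ** 2)           # density concentrated near p = 0.5
phi /= phi.sum() * dx                            # normalize to unit probability
for _ in range(20000):                           # ~200 generations with dt = 0.01
    phi = drift_step(phi, x, dt=1e-2, N=N)
print(phi.sum() * dx)   # < 1: interior probability leaks as alleles fix or are lost
```

The leakage visible in the final print is exactly what the complete, probability-conserving scheme of the paper is designed to account for by tracking the boundary masses at frequencies zero and one.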
NASA Astrophysics Data System (ADS)
Grose, C. J.
2008-05-01
Numerical geodynamics models of heat transfer are typically thought of as specialized topics of research requiring knowledge of specialized modelling software, Linux platforms, and state-of-the-art finite-element codes. I have implemented analytical and numerical finite-difference techniques with Microsoft Excel 2007 spreadsheets to solve complex solid-earth heat transfer problems for use by students, teachers, and practicing scientists without specialty in geodynamics modelling techniques and applications. While implementation of equations for use in Excel spreadsheets is occasionally cumbersome, once case boundary structure and node equations are developed, spreadsheet manipulation becomes routine. Model experimentation by modifying parameter values, geometry, and grid resolution makes Excel a useful tool whether in the classroom at the undergraduate or graduate level or for more engaging student projects. Furthermore, the ability to incorporate complex geometries and heat-transfer characteristics makes it ideal for first- and occasionally higher-order geodynamics simulations to better understand and constrain the results of professional field research in a setting that does not require the constraints of state-of-the-art modelling codes. The straightforward expression and manipulation of model equations in Excel can also serve as a medium to better understand the confusing notations of advanced mathematical problems. To illustrate the power and robustness of computation and visualization in spreadsheet models, I focus primarily on one-dimensional analytical and two-dimensional numerical solutions to two case problems: (i) the cooling of oceanic lithosphere and (ii) temperatures within subducting slabs. Excel source documents will be made available.
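As an example of the first case problem, the one-dimensional half-space cooling solution T(z, t) = T_s + (T_m - T_s) erf(z / (2 sqrt(kappa t))) that such spreadsheets typically reproduce can also be written in a few lines of Python, as below. The property values are standard textbook assumptions, not parameters taken from the abstract or its spreadsheets.

```python
import math

def halfspace_cooling(z_m, age_myr, T_surface=0.0, T_mantle=1350.0, kappa=1e-6):
    """Half-space cooling model: temperature (deg C) at depth z (m) in oceanic
    lithosphere of a given age (Myr); kappa is thermal diffusivity (m^2/s)."""
    t_s = age_myr * 1e6 * 365.25 * 24 * 3600.0      # age converted to seconds
    return T_surface + (T_mantle - T_surface) * math.erf(
        z_m / (2.0 * math.sqrt(kappa * t_s)))

# Temperature profile at 50 Myr, every 20 km down to 100 km depth.
for z_km in range(0, 101, 20):
    print(f"{z_km:4d} km  {halfspace_cooling(z_km * 1e3, 50.0):7.1f} C")
```

The same node-by-node arithmetic translates directly into spreadsheet cell formulas, which is the point the abstract makes about Excel as a teaching medium.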
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-11
...; Comment Request; Implementation of Title I/II Program Initiatives AGENCY: Institute of Educational... note that written comments received in response to this notice will be considered public records. Title of Collection: Implementation of Title I/II Program Initiatives. OMB Control Number: 1850-New. Type...
Causo, Maria Serena; Ciccotti, Giovanni; Bonella, Sara; Vuilleumier, Rodolphe
2006-08-17
Linearized mixed quantum-classical simulations are a promising approach for calculating time-correlation functions. At the moment, however, they suffer from some numerical problems that may compromise their efficiency and reliability in applications to realistic condensed-phase systems. In this paper, we present a method that improves upon the convergence properties of the standard algorithm for linearized calculations by implementing a cumulant expansion of the relevant averages. The effectiveness of the new approach is tested by applying it to the challenging computation of the diffusion of an excess electron in a metal-molten salt solution.
1989-09-01
STD-2167A, by William T. Livings, September 1989. Thesis Advisor: Barry A. Frew. Approved for public release; distribution is unlimited. ... easily changed or corrected when errors are found; and programs that are delivered for use months or even years too late (Pressman, 1988).
40 CFR 52.63 - PM10 State Implementation Plan development in group II areas.
Code of Federal Regulations, 2014 CFR
2014-07-01
... development in group II areas. 52.63 Section 52.63 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Alabama § 52.63 PM10 State Implementation Plan development in group II areas. On March 15, 1989, the State submitted a...
40 CFR 52.881 - PM10 State implementation plan development in group II areas.
Code of Federal Regulations, 2012 CFR
2012-07-01
... development in group II areas. 52.881 Section 52.881 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Kansas § 52.881 PM10 State implementation plan development in group II areas. The state has submitted a committal SIP for...
40 CFR 52.63 - PM10 State Implementation Plan development in group II areas.
Code of Federal Regulations, 2010 CFR
2010-07-01
... development in group II areas. 52.63 Section 52.63 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Alabama § 52.63 PM10 State Implementation Plan development in group II areas. On March 15, 1989, the State submitted a...
40 CFR 52.63 - PM10 State Implementation Plan development in group II areas.
Code of Federal Regulations, 2011 CFR
2011-07-01
... development in group II areas. 52.63 Section 52.63 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Alabama § 52.63 PM10 State Implementation Plan development in group II areas. On March 15, 1989, the State submitted a...
40 CFR 52.935 - PM10 State implementation plan development in group II areas.
Code of Federal Regulations, 2010 CFR
2010-07-01
... development in group II areas. 52.935 Section 52.935 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Kentucky § 52.935 PM10 State implementation plan development in group II areas. On July 7, 1988, the State submitted a...
40 CFR 52.935 - PM10 State implementation plan development in group II areas.
Code of Federal Regulations, 2014 CFR
2014-07-01
... development in group II areas. 52.935 Section 52.935 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Kentucky § 52.935 PM10 State implementation plan development in group II areas. On July 7, 1988, the State submitted a...
40 CFR 52.935 - PM10 State implementation plan development in group II areas.
Code of Federal Regulations, 2013 CFR
2013-07-01
... development in group II areas. 52.935 Section 52.935 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Kentucky § 52.935 PM10 State implementation plan development in group II areas. On July 7, 1988, the State submitted a...
40 CFR 52.881 - PM10 State implementation plan development in group II areas.
Code of Federal Regulations, 2010 CFR
2010-07-01
... development in group II areas. 52.881 Section 52.881 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Kansas § 52.881 PM10 State implementation plan development in group II areas. The state has submitted a committal SIP for...
40 CFR 52.935 - PM10 State implementation plan development in group II areas.
Code of Federal Regulations, 2011 CFR
2011-07-01
... development in group II areas. 52.935 Section 52.935 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Kentucky § 52.935 PM10 State implementation plan development in group II areas. On July 7, 1988, the State submitted a...
40 CFR 52.881 - PM10 State implementation plan development in group II areas.
Code of Federal Regulations, 2011 CFR
2011-07-01
... development in group II areas. 52.881 Section 52.881 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Kansas § 52.881 PM10 State implementation plan development in group II areas. The state has submitted a committal SIP for...
40 CFR 52.881 - PM10 State implementation plan development in group II areas.
Code of Federal Regulations, 2014 CFR
2014-07-01
... development in group II areas. 52.881 Section 52.881 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Kansas § 52.881 PM10 State implementation plan development in group II areas. The state has submitted a committal SIP for...
40 CFR 52.63 - PM10 State Implementation Plan development in group II areas.
Code of Federal Regulations, 2012 CFR
2012-07-01
... development in group II areas. 52.63 Section 52.63 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Alabama § 52.63 PM10 State Implementation Plan development in group II areas. On March 15, 1989, the State submitted a...
40 CFR 52.63 - PM10 State Implementation Plan development in group II areas.
Code of Federal Regulations, 2013 CFR
2013-07-01
... development in group II areas. 52.63 Section 52.63 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Alabama § 52.63 PM10 State Implementation Plan development in group II areas. On March 15, 1989, the State submitted a...
40 CFR 52.881 - PM10 State implementation plan development in group II areas.
Code of Federal Regulations, 2013 CFR
2013-07-01
... development in group II areas. 52.881 Section 52.881 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Kansas § 52.881 PM10 State implementation plan development in group II areas. The state has submitted a committal SIP for...
40 CFR 52.935 - PM10 State implementation plan development in group II areas.
Code of Federal Regulations, 2012 CFR
2012-07-01
... development in group II areas. 52.935 Section 52.935 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Kentucky § 52.935 PM10 State implementation plan development in group II areas. On July 7, 1988, the State submitted a...
Brownson, Ross C; Chriqui, Jamie F; Burgeson, Charlene R; Fisher, Megan C; Ness, Roberta B
2010-06-01
Childhood obesity is a serious public health problem resulting from energy imbalance (when the intake of energy is greater than the amount of energy expended through physical activity). Numerous health authorities have identified policy interventions as promising strategies for creating population-wide improvements in physical activity. This case study focuses on energy expenditure through physical activity (with a particular emphasis on school-based physical education [PE]). Policy-relevant evidence for promoting physical activity in youth may take numerous forms, including epidemiologic data and other supporting evidence (e.g., qualitative data). The implementation and evaluation of school PE interventions leads to a set of lessons related to epidemiology and evidence-based policy. These include the need to: (i) enhance the focus on external validity, (ii) develop more policy-relevant evidence on the basis of "natural experiments," (iii) understand that policy making is political, (iv) better articulate the factors that influence policy dissemination, (v) understand the real-world constraints when implementing policy in school environments, and (vi) build transdisciplinary teams for policy progress. The issues described in this case study provide leverage points for practitioners, policy makers, and researchers as they seek to translate epidemiology to policy. Copyright 2010 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolan, Sam R.; Barack, Leor; Wardell, Barry
2011-10-15
This is the second in a series of papers aimed at developing a practical time-domain method for self-force calculations in Kerr spacetime. The key elements of the method are (i) removal of a singular part of the perturbation field with a suitable analytic 'puncture' based on the Detweiler-Whiting decomposition, (ii) decomposition of the perturbation equations in azimuthal (m-)modes, taking advantage of the axial symmetry of the Kerr background, (iii) numerical evolution of the individual m-modes in 2+1 dimensions with a finite-difference scheme, and (iv) reconstruction of the physical self-force from the mode sum. Here we report an implementation of the method to compute the scalar-field self-force along circular equatorial geodesic orbits around a Kerr black hole. This constitutes a first time-domain computation of the self-force in Kerr geometry. Our time-domain code reproduces the results of a recent frequency-domain calculation by Warburton and Barack, but has the added advantage of being readily adaptable to include the backreaction from the self-force in a self-consistent manner. In a forthcoming paper--the third in the series--we apply our method to the gravitational self-force (in the Lorenz gauge).
Numerical implementation of the S-matrix algorithm for modeling of relief diffraction gratings
NASA Astrophysics Data System (ADS)
Yaremchuk, Iryna; Tamulevičius, Tomas; Fitio, Volodymyr; Gražulevičiūte, Ieva; Bobitski, Yaroslav; Tamulevičius, Sigitas
2013-11-01
A new numerical implementation is developed to calculate the diffraction efficiency of relief diffraction gratings. In the new formulation, vectors containing the expansion coefficients of electric and magnetic fields on boundaries of the grating layer are expressed by additional constants. An S-matrix algorithm has been systematically described in detail and adapted to a simple matrix form. This implementation is suitable for the study of optical characteristics of periodic structures by using modern object-oriented programming languages and different standard mathematical software. The modeling program has been developed on the basis of this numerical implementation and tested by comparison with other commercially available programs and experimental data. Numerical examples are given to show the usefulness of the new implementation.
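The core operation of any S-matrix algorithm, combining the scattering matrices of two stacked layers so that exponentially growing terms never appear explicitly, is the Redheffer star product. The sketch below gives a generic NumPy version of that combination step in the standard textbook block form; it is not necessarily the exact matrix arrangement defined in this paper's formulation, and the sanity check uses random blocks rather than physical grating data.

```python
import numpy as np

def redheffer_star(SA, SB):
    """Combine two scattering matrices SA, SB (each a dict with blocks
    S11, S12, S21, S22) describing consecutive layers into one S-matrix."""
    n = SA["S11"].shape[0]
    I = np.eye(n)
    FA = np.linalg.inv(I - SB["S11"] @ SA["S22"])   # (I - S11^B S22^A)^-1
    FB = np.linalg.inv(I - SA["S22"] @ SB["S11"])   # (I - S22^A S11^B)^-1
    return {
        "S11": SA["S11"] + SA["S12"] @ FA @ SB["S11"] @ SA["S21"],
        "S12": SA["S12"] @ FA @ SB["S12"],
        "S21": SB["S21"] @ FB @ SA["S21"],
        "S22": SB["S22"] + SB["S21"] @ FB @ SA["S22"] @ SB["S12"],
    }

# Sanity check: combining with the identity scatterer leaves an S-matrix unchanged.
n = 2
rng = np.random.default_rng(3)
S = {k: 0.3 * rng.standard_normal((n, n)) for k in ("S11", "S12", "S21", "S22")}
identity_S = {"S11": np.zeros((n, n)), "S12": np.eye(n),
              "S21": np.eye(n), "S22": np.zeros((n, n))}
combined = redheffer_star(S, identity_S)
print(all(np.allclose(combined[k], S[k]) for k in S))
```

In a grating solver this combination is applied layer by layer through the relief profile, which is what keeps the algorithm numerically stable compared with direct transfer-matrix multiplication.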
NASA Astrophysics Data System (ADS)
Hill, C.
2008-12-01
Low-cost graphics cards today use many, relatively simple, compute cores to deliver memory bandwidth of more than 100 GB/s and theoretical floating point performance of more than 500 GFlop/s. Right now this performance is, however, only accessible to highly parallel algorithm implementations that (i) can use a hundred or more concurrently executing 32-bit floating-point cores, (ii) can work with graphics memory that resides on the graphics card side of the graphics bus, and (iii) can be partially expressed in a language that can be compiled by a graphics programming tool. In this talk we describe our experiences implementing a complete, but relatively simple, time-dependent shallow-water equations simulation targeting a cluster of 30 computers each hosting one graphics card. The implementation takes into account the considerations (i), (ii) and (iii) listed previously. We code our algorithm as a series of numerical kernels. Each kernel is designed to be executed by multiple threads of a single process. Kernels are passed memory blocks to compute over, which can be persistent blocks of memory on the graphics card. Each kernel is individually implemented using the NVidia CUDA language but driven from higher-level supervisory code that is almost identical to a standard model driver. The supervisory code controls the overall simulation timestepping, but is written to minimize data transfer between main memory and graphics memory (a massive performance bottleneck on current systems). Using the recipe outlined we can boost the performance of our cluster by nearly an order of magnitude, relative to the same algorithm executing only on the cluster CPUs. Achieving this performance boost requires that many threads are available to each graphics processor for execution within each numerical kernel and that the simulation's working set of data can fit into the graphics card memory. As we describe, this puts interesting upper and lower bounds on the problem sizes for which this technology is currently most useful. However, many interesting problems fit within this envelope. Looking forward, we extrapolate our experience to estimate full-scale ocean model performance and applicability. Finally, we describe preliminary hybrid mixed 32-bit and 64-bit experiments with graphics cards that support 64-bit arithmetic, albeit at a lower performance.
Arbitrary order 2D virtual elements for polygonal meshes: part II, inelastic problem
NASA Astrophysics Data System (ADS)
Artioli, E.; Beirão da Veiga, L.; Lovadina, C.; Sacco, E.
2017-10-01
The present paper is the second part of a twofold work, whose first part is reported in Artioli et al. (Comput Mech, 2017. doi: 10.1007/s00466-017-1404-5), concerning a newly developed Virtual element method (VEM) for 2D continuum problems. The first part of the work addressed the linear elastic problem. The aim of this part is to explore the features of the VEM formulation when material nonlinearity is considered, showing that the accuracy and ease of implementation observed in the linear analysis of the first part are retained. Three different nonlinear constitutive laws are considered in the VEM formulation. In particular, the generalized viscoelastic model, the classical Mises plasticity with isotropic/kinematic hardening and a shape memory alloy constitutive law are implemented. The versatility with respect to all the considered nonlinear material constitutive laws is demonstrated through several numerical examples, which also show that the proposed 2D VEM formulation can be implemented straightforwardly within a standard nonlinear structural finite element framework.
MRP (materiel requirements planning) II implementation: a case study.
Sheldon, D
1994-05-01
Manufacturing resource planning (MRP II) is a powerful and effective business planning template on which to build a continuous improvement culture. MRP II, when successfully implemented, encourages a disciplined yet nonthreatening environment centered on measurement and accountability. From the education that accompanies an MRP II implementation, the employees can better understand the vision and mission of the organization. This common goal keeps everyone's energy directed toward the same final objective. The Raymond Corporation is a major materials handling equipment manufacturer headquartered in Greene, New York, with class "A" MRP II manufacturing facilities in Greene and Brantford, Ontario, and an aftermarket distribution facility in East Syracuse, New York. Prior to the implementation of MRP II in its Greene plant (from 1988 through 1990), good intentions and hard work were proving insufficient to compete in the global market. The Greene plant was certified class "A" in February 1990. The Raymond Corporation has built a world-class organization from these foundations.
40 CFR 52.823 - PM10 State Implementation Plan Development in Group II Areas.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Development in Group II Areas. 52.823 Section 52.823 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... State Implementation Plan Development in Group II Areas. The Iowa Department of Natural Resources...: Three groups within the State of Iowa have been classified as Group II areas for fine particulate (PM-10...
40 CFR 52.823 - PM10 State Implementation Plan Development in Group II Areas.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Development in Group II Areas. 52.823 Section 52.823 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... State Implementation Plan Development in Group II Areas. The Iowa Department of Natural Resources...: Three groups within the State of Iowa have been classified as Group II areas for fine particulate (PM-10...
40 CFR 52.823 - PM10 State Implementation Plan Development in Group II Areas.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Development in Group II Areas. 52.823 Section 52.823 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... State Implementation Plan Development in Group II Areas. The Iowa Department of Natural Resources...: Three groups within the State of Iowa have been classified as Group II areas for fine particulate (PM-10...
40 CFR 52.823 - PM10 State Implementation Plan Development in Group II Areas.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Development in Group II Areas. 52.823 Section 52.823 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... State Implementation Plan Development in Group II Areas. The Iowa Department of Natural Resources...: Three groups within the State of Iowa have been classified as Group II areas for fine particulate (PM-10...
40 CFR 52.823 - PM10 State Implementation Plan Development in Group II Areas.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Development in Group II Areas. 52.823 Section 52.823 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... State Implementation Plan Development in Group II Areas. The Iowa Department of Natural Resources...: Three groups within the State of Iowa have been classified as Group II areas for fine particulate (PM-10...
NASA Astrophysics Data System (ADS)
Xu, Xiaoyang; Deng, Xiao-Long
2016-04-01
In this paper, an improved weakly compressible smoothed particle hydrodynamics (SPH) method is proposed to simulate transient free surface flows of viscous and viscoelastic fluids. The improved SPH algorithm includes the implementation of (i) the mixed symmetric correction of the kernel gradient, to improve the accuracy and stability of the traditional SPH method, and (ii) the Rusanov flux in the continuity equation, to improve the computation of pressure distributions in the dynamics of liquids. To assess the effectiveness of the improved SPH algorithm, a number of numerical examples, including the stretching of an initially circular water drop, dam-breaking flow against a vertical wall, the impact of viscous and viscoelastic fluid drops on a rigid wall, and the extrudate swell of a viscoelastic fluid, have been presented and compared with available numerical and experimental data in the literature. The convergence behavior of the improved SPH algorithm has also been studied using different numbers of particles. All numerical results demonstrate that the improved SPH algorithm proposed here is capable of modeling free surface flows of viscous and viscoelastic fluids accurately and stably and, even more importantly, of computing an accurate and nearly oscillation-free pressure field.
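The exact form of the authors' 'mixed symmetric correction' is not given in the record above; the sketch below shows the closely related classical kernel-gradient renormalization for 2D SPH, in which each particle's raw kernel gradients are multiplied by a correction matrix so that gradients of linear fields are reproduced exactly. The array names and the dense storage of pairwise gradients are illustrative assumptions.

    import numpy as np

    def corrected_kernel_gradients(r, V, grad_W):
        """Renormalized (corrected) SPH kernel gradients in 2D.

        r      : (N, 2) particle positions
        V      : (N,)   particle volumes m_j / rho_j
        grad_W : (N, N, 2) raw kernel gradients, grad_W[i, j] = dW_ij/dr_i

        Each corrected gradient is L_i @ grad_W[i, j] with
        L_i = [ sum_j V_j * grad_W[i, j] (x) (r_j - r_i) ]^(-1).
        """
        corrected = np.empty_like(grad_W)
        for i in range(r.shape[0]):
            dx = r - r[i]                                   # r_j - r_i
            A = np.einsum("j,jp,jq->pq", V, grad_W[i], dx)  # 2x2 correction matrix
            L = np.linalg.inv(A)
            corrected[i] = grad_W[i] @ L.T                  # apply L_i to every pair
        return corrected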
Sainath, Kamalesh; Teixeira, Fernando L; Donderici, Burkay
2014-01-01
We develop a general-purpose formulation, based on two-dimensional spectral integrals, for computing electromagnetic fields produced by arbitrarily oriented dipoles in planar-stratified environments, where each layer may exhibit arbitrary and independent anisotropy in both its (complex) permittivity and permeability tensors. Among the salient features of our formulation are (i) computation of eigenmodes (characteristic plane waves) supported in arbitrarily anisotropic media in a numerically robust fashion, (ii) implementation of an hp-adaptive refinement for the numerical integration to evaluate the radiation and weakly evanescent spectra contributions, and (iii) development of an adaptive extension of an integral convergence acceleration technique to compute the strongly evanescent spectrum contribution. While other semianalytic techniques exist to solve this problem, none have full applicability to media exhibiting arbitrary double anisotropies in each layer, where one must account for the whole range of possible phenomena (e.g., mode coupling at interfaces and nonreciprocal mode propagation). Brute-force numerical methods can tackle this problem but only at a much higher computational cost. The present formulation provides an efficient and robust technique for field computation in arbitrary planar-stratified environments. We demonstrate the formulation for a number of problems related to geophysical exploration.
47 CFR 90.769 - Construction and implementation of Phase II nationwide licenses.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Use of Frequencies in the 220-222 MHz Band Policies Governing the Licensing and Use of Phase II Ea, Regional and Nationwide Systems § 90.769 Construction and implementation of Phase II nationwide licenses...
Höfler, K; Schwarzer, S
2000-06-01
Building on an idea of Fogelson and Peskin [J. Comput. Phys. 79, 50 (1988)], we describe the implementation and verification of a simulation technique for systems of non-Brownian particles in fluids at Reynolds numbers up to about 20 on the particle scale. This direct simulation technique fills a gap between simulations in the viscous regime and high-Reynolds-number modeling. It also combines sufficient computational accuracy with numerical efficiency and allows studies of several thousand, in principle arbitrarily shaped, extended and hydrodynamically interacting particles on regular workstations. We verify the algorithm in two and three dimensions for (i) single falling particles and (ii) a fluid flowing through a bed of fixed spheres. In the context of sedimentation we compute the volume fraction dependence of the mean sedimentation velocity. The results are compared with experimental and other numerical results in both the viscous and the inertial regime, and we find very satisfactory agreement.
Direct numerical simulation of incompressible multiphase flow with phase change
NASA Astrophysics Data System (ADS)
Lee, Moon Soo; Riaz, Amir; Aute, Vikrant
2017-09-01
Simulation of multiphase flow with phase change is challenging because of the potential for unphysical pressure oscillations, spurious velocity fields and mass flux errors across the interface. The resulting numerical errors may become critical when large density contrasts are present. To address these issues, we present a new approach for multiphase flow with phase change that features: (i) a smooth distribution of sharp velocity jumps and mass flux within a narrow region surrounding the interface, (ii) improved mass flux projection from the implicit interface onto the uniform Cartesian grid, and (iii) a post-advection velocity correction step to ensure accurate velocity divergence in interfacial cells. These new features are implemented in combination with a sharp treatment of the jumps in pressure and temperature gradient. A series of 1-D, 2-D, axisymmetric and 3-D problems are solved to verify the improvements afforded by the new approach. Axisymmetric film boiling results are also presented, which show good qualitative agreement with heat transfer correlations as well as experimental observations of bubble shapes.
NASA Technical Reports Server (NTRS)
Shafer, Jaclyn; Watson, Leela R.
2015-01-01
NASA's Launch Services Program, Ground Systems Development and Operations, Space Launch System and other programs at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) use the daily and weekly weather forecasts issued by the 45th Weather Squadron (45 WS) as decision tools for their day-to-day and launch operations on the Eastern Range (ER). Examples include determining if they need to limit activities such as vehicle transport to the launch pad, protect people, structures or exposed launch vehicles given a threat of severe weather, or reschedule other critical operations. The 45 WS uses numerical weather prediction models as a guide for these weather forecasts, particularly the Air Force Weather Agency (AFWA) 1.67 km Weather Research and Forecasting (WRF) model. Considering the 45 WS forecasters' and Launch Weather Officers' (LWO) extensive use of the AFWA model, the 45 WS proposed a task at the September 2013 Applied Meteorology Unit (AMU) Tasking Meeting requesting the AMU verify this model. Due to the lack of archived model data available from AFWA, verification is not yet possible. Instead, the AMU proposed to implement and verify the performance of an ER version of the high-resolution WRF Environmental Modeling System (EMS) model configured by the AMU (Watson 2013) in real time. Implementing a real-time version of the ER WRF-EMS would generate a larger database of model output than in the previous AMU task for determining model performance, and allows the AMU more control over and access to the model output archive. The tasking group agreed to this proposal; therefore the AMU implemented the WRF-EMS model on the second of two NASA AMU modeling clusters. The AMU also calculated verification statistics to determine model performance compared to observational data. Finally, the AMU made the model output available on the AMU Advanced Weather Interactive Processing System II (AWIPS II) servers, which allows the 45 WS and AMU staff to customize the model output display on the AMU and Range Weather Operations (RWO) AWIPS II client computers and conduct real-time subjective analyses.
Pinning time statistics for vortex lines in disordered environments.
Dobramysl, Ulrich; Pleimling, Michel; Täuber, Uwe C
2014-12-01
We study the pinning dynamics of magnetic flux (vortex) lines in a disordered type-II superconductor. Using numerical simulations of a directed elastic line model, we extract the pinning time distributions of vortex line segments. We compare different model implementations for the disorder in the surrounding medium: discrete, localized pinning potential wells that are either attractive and repulsive or purely attractive, and whose strengths are drawn from a Gaussian distribution; as well as continuous Gaussian random potential landscapes. We find that both schemes yield power-law distributions in the pinned phase as predicted by extreme-event statistics, yet they differ significantly in their effective scaling exponents and their short-time behavior.
Delrue, Steven; Aleshin, Vladislav; Truyaert, Kevin; Bou Matar, Olivier; Van Den Abeele, Koen
2018-01-01
Our study aims at the creation of a numerical toolbox that describes wave propagation in samples containing internal contacts (e.g. cracks, delaminations, debondings, imperfect intergranular joints) of known geometry with postulated contact interaction laws including friction. The code consists of two entities: the contact model and the solid mechanics module. Part I of the paper concerns an in-depth description of a constitutive model for realistic contacts or cracks that takes into account the roughness of the contact faces and the associated effects of friction and hysteresis. In the crack model, three different contact states can be recognized: contact loss, total sliding and partial slip. Normal (clapping) interactions between the crack faces are implemented using a quadratic stress-displacement relation, whereas tangential (friction) interactions were introduced using the Coulomb friction law for the total sliding case, and the Method of Memory Diagrams (MMD) in case of partial slip. In the present part of the paper, we integrate the developed crack model into finite element software in order to simulate elastic wave propagation in a solid material containing internal contacts or cracks. We therefore implemented the comprehensive crack model in MATLAB® and introduced it in the Structural Mechanics Module of COMSOL Multiphysics®. The potential of the approach for ultrasound based inspection of solids with cracks showing acoustic nonlinearity is demonstrated by means of an example of shear wave propagation in an aluminum sample containing a single crack with rough surfaces and friction. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Rattez, Hadrien; Stefanou, Ioannis; Sulem, Jean; Veveakis, Manolis; Poulet, Thomas
2018-06-01
In this paper we study the phenomenon of localization of deformation in fault gouges during seismic slip. This process is of key importance to understand frictional heating and the energy budget during an earthquake. An infinite layer of fault gouge is modeled as a Cosserat continuum taking into account Thermo-Hydro-Mechanical (THM) couplings. The theoretical aspects of the problem are presented in the companion paper (Rattez et al., 2017a), together with a linear stability analysis to determine the conditions of localization and estimate the shear band thickness. In this Part II of the study, we investigate the post-bifurcation evolution of the system by integrating numerically the full system of non-linear equations using the method of Finite Elements. The problem is formulated in the framework of Cosserat theory, which makes it possible to introduce information about the microstructure of the material into the constitutive equations and to regularize the mathematical problem in the post-localization regime. We emphasize the influence of the size of the microstructure and of the softening law on the material response and the strain localization process. The weakening effect of pore fluid thermal pressurization induced by shear heating is examined and quantified. It enhances the weakening process and contributes to the narrowing of the shear band thickness. Moreover, due to the THM couplings, an apparent rate dependency is observed, even for rate-independent material behavior. Finally, comparisons show that when the perturbed field of shear deformation dominates, the estimation of the shear band thickness obtained from the linear stability analysis differs from the one obtained from the finite element computations, demonstrating the importance of post-localization numerical simulations.
2014-01-01
Background: The HIV/AIDS epidemic continues to disproportionately affect African American communities in the US, particularly those located in urban areas. Despite the fact that HIV is often transmitted from one sexual partner to another, most HIV prevention interventions have focused only on individuals, rather than couples. This five-year study investigates community-based implementation, effectiveness, and sustainability of ‘Eban II,’ an evidence-based risk reduction intervention for African-American heterosexual, serodiscordant couples.

Methods/design: This hybrid implementation/effectiveness implementation study is guided by organizational change theory as conceptualized in the Texas Christian University Program Change Model (PCM), a model of phased organizational change from exposure to adoption, implementation, and sustainability. The primary implementation aims are to assist 10 community-based organizations (CBOs) to implement and sustain Eban II; specifically, to partner with CBOs to expose providers to the intervention; facilitate its adoption, implementation and sustainment; and to evaluate processes and determinants of implementation, effectiveness, fidelity, and sustainment. The primary effectiveness aim is to evaluate the effect of Eban II on participant (n = 200 couples) outcomes, specifically incidents of protected sex and proportion of condom use. We will also determine the cost-effectiveness of implementation, as measured by implementation costs and potential cost savings. A mixed methods evaluation will examine implementation at the agency level; staff members from the CBOs will complete baseline measures of organizational context and climate, while key stakeholders will be interviewed periodically throughout implementation. Effectiveness of Eban II will be assessed using a randomized delayed enrollment (waitlist) control design to evaluate the impact of treatment on outcomes at posttest and three-month follow-up. Multi-level hierarchical modeling with a multi-level nested structure will be used to evaluate the effects of agency- and couples-level characteristics on couples-level outcomes (e.g., condom use).

Discussion: This study will produce important information regarding the value of the Eban II program and a theory-guided implementation process and tools designed for use in implementing Eban II and other evidence-based programs in demographically diverse, resource-constrained treatment settings.

Trial registration: NCT00644163 PMID:24950708
Hamilton, Alison B; Mittman, Brian S; Williams, John K; Liu, Honghu H; Eccles, Alicia M; Hutchinson, Craig S; Wyatt, Gail E
2014-06-20
The HIV/AIDS epidemic continues to disproportionately affect African American communities in the US, particularly those located in urban areas. Despite the fact that HIV is often transmitted from one sexual partner to another, most HIV prevention interventions have focused only on individuals, rather than couples. This five-year study investigates community-based implementation, effectiveness, and sustainability of 'Eban II,' an evidence-based risk reduction intervention for African-American heterosexual, serodiscordant couples. This hybrid implementation/effectiveness implementation study is guided by organizational change theory as conceptualized in the Texas Christian University Program Change Model (PCM), a model of phased organizational change from exposure to adoption, implementation, and sustainability. The primary implementation aims are to assist 10 community-based organizations (CBOs) to implement and sustain Eban II; specifically, to partner with CBOs to expose providers to the intervention; facilitate its adoption, implementation and sustainment; and to evaluate processes and determinants of implementation, effectiveness, fidelity, and sustainment. The primary effectiveness aim is to evaluate the effect of Eban II on participant (n = 200 couples) outcomes, specifically incidents of protected sex and proportion of condom use. We will also determine the cost-effectiveness of implementation, as measured by implementation costs and potential cost savings. A mixed methods evaluation will examine implementation at the agency level; staff members from the CBOs will complete baseline measures of organizational context and climate, while key stakeholders will be interviewed periodically throughout implementation. Effectiveness of Eban II will be assessed using a randomized delayed enrollment (waitlist) control design to evaluate the impact of treatment on outcomes at posttest and three-month follow-up. Multi-level hierarchical modeling with a multi-level nested structure will be used to evaluate the effects of agency- and couples-level characteristics on couples-level outcomes (e.g., condom use). This study will produce important information regarding the value of the Eban II program and a theory-guided implementation process and tools designed for use in implementing Eban II and other evidence-based programs in demographically diverse, resource-constrained treatment settings. NCT00644163.
MRP (materiel requirements planning) II: successful implementation the hard way.
Grubbs, S C
1994-05-01
Many manufacturing companies embark on MRP II implementation projects as a method for improvement. In spite of an increasing body of knowledge regarding successful implementations, companies continue to attempt new approaches. This article reviews an actual implementation, featuring some of the mistakes made and the efforts required to still achieve "Class A" performance levels.
Three-Dimensional Modeling of Fracture Clusters in Geothermal Reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghassemi, Ahmad
The objective of this project is to develop a 3-D numerical model for simulating mode I, II, and III (tensile, shear, and out-of-plane) propagation of multiple fractures and fracture clusters to accurately predict geothermal reservoir stimulation using the virtual multi-dimensional internal bond (VMIB) method. Effective development of enhanced geothermal systems can significantly benefit from improved modeling of hydraulic fracturing. In geothermal reservoirs, where the temperature can reach or exceed 350 °C, thermal and poro-mechanical processes play an important role in fracture initiation and propagation. In this project hydraulic fracturing of hot subsurface rock mass will be numerically modeled by extending the virtual multiple internal bond theory and implementing it in WARP3D, a three-dimensional finite element code for solid mechanics. The new constitutive model along with the poro-thermoelastic computational algorithms will allow modeling the initiation and propagation of clusters of fractures, and extension of pre-existing fractures. The work will enable the industry to realistically model stimulation of geothermal reservoirs. The project addresses the Geothermal Technologies Office objective of accurately predicting geothermal reservoir stimulation (GTO technology priority item). The project goal will be attained by: (i) development of the VMIB method for application to 3D analysis of fracture clusters; (ii) development of poro- and thermoelastic material subroutines for use in the 3D finite element code WARP3D; (iii) implementation of VMIB and the new material routines in WARP3D to enable simulation of clusters of fractures while accounting for the effects of pore pressure, thermal stress and inelastic deformation; (iv) simulation of 3D fracture propagation and coalescence and formation of clusters, and comparison with laboratory compression tests; and (v) application of the model to interpretation of injection experiments (planned by our industrial partner) with reference to the impact of variations in injection rate and temperature, rock properties, and in-situ stress.
Anderson, Cynthia M; Borgmeier, Chris
2010-01-01
To meet the complex social, behavioral and academic needs of all students, schools benefit from having available multiple evidence-based interventions of varying intensity. School-wide positive behavior support provides a framework within which a continuum of evidence-based interventions can be implemented in a school. This framework includes three levels or tiers of intervention: Tier I (primary or universal), Tier II (secondary or targeted), and Tier III (tertiary or individualized) supports. In this paper we review the logic behind school-wide positive behavior support and then focus on Tier II interventions, as this level of support has received the least attention in the literature. We delineate the key features of Tier II interventions as implemented within school-wide positive behavior support, provide guidelines for matching Tier II interventions to school and student needs, and describe how schools plan for implementation and maintenance of selected interventions.
NASA Astrophysics Data System (ADS)
Hu, Qiang
2017-09-01
We develop an approach of the Grad-Shafranov (GS) reconstruction for toroidal structures in space plasmas, based on in situ spacecraft measurements. The underlying theory is the GS equation that describes two-dimensional magnetohydrostatic equilibrium, as widely applied in fusion plasmas. The geometry is such that the arbitrary cross-section of the torus has rotational symmetry about the rotation axis, Z, with a major radius, r0. The magnetic field configuration is thus determined by a scalar flux function, Ψ, and a functional F that is a single-variable function of Ψ. The algorithm is implemented through a two-step approach: i) a trial-and-error process by minimizing the residue of the functional F(Ψ) to determine an optimal Z-axis orientation, and ii) for the chosen Z, a χ2 minimization process resulting in a range of r0. Benchmark studies of known analytic solutions to the toroidal GS equation with noise additions are presented to illustrate the two-step procedure and to demonstrate the performance of the numerical GS solver, separately. For the cases presented, the errors in Z and r0 are 9° and 22%, respectively, and the relative percent error in the numerical GS solutions is smaller than 10%. We also make public the computer codes for these implementations and benchmark studies.
Koniges, Alice; Liu, Wangyi; Lidia, Steven; ...
2016-04-01
We explore the simulation challenges and requirements for experiments planned on facilities such as the NDCX-II ion accelerator at LBNL, currently undergoing commissioning. Hydrodynamic modeling of NDCX-II experiments includes certain lower-temperature effects, e.g., surface tension and target fragmentation, that are not generally present in extreme high-energy laser facility experiments, where targets are completely vaporized in an extremely short period of time. Target designs proposed for NDCX-II range from metal foils of order one micron thick (thin targets) to metallic foam targets several tens of microns thick (thick targets). These high-energy-density experiments allow for the study of fracture as well as the process of bubble and droplet formation. We incorporate these physics effects into a code called ALE-AMR that uses a combination of Arbitrary Lagrangian Eulerian hydrodynamics and Adaptive Mesh Refinement. Inclusion of certain effects becomes tricky as we must deal with non-orthogonal meshes of various levels of refinement in three dimensions. A surface tension model used for droplet dynamics is implemented in ALE-AMR using curvature calculated from volume fractions. Thick foam target experiments provide information on how ion beam induced shock waves couple into kinetic energy of fluid flow. Although NDCX-II is not fully commissioned, experiments are being conducted that explore material defect production and dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowen, Benjamin; Ruebel, Oliver; Fischer, Curt R.
BASTet is an advanced software library written in Python. BASTet serves as the analysis and storage library for the OpenMSI project. BASTet is an integrated framework for: i) storage of spectral imaging data, ii) storage of derived analysis data, iii) provenance of analyses, and iv) integration and execution of analyses via complex workflows. BASTet implements the API for the HDF5 storage format used by OpenMSI. Analyses that are developed using BASTet benefit from direct integration with the storage format, automatic tracking of provenance, and direct integration with command-line and workflow execution tools. BASTet also defines interfaces that enable developers to directly integrate their analysis with OpenMSI's web-based viewing infrastructure without having to know OpenMSI. BASTet also provides numerous helper classes and tools to assist with the conversion of data files, ease the parallel implementation of analysis algorithms, ease interaction with web-based functions, and describe methods for data reduction. BASTet also includes detailed developer documentation, user tutorials, iPython notebooks, and other supporting documents.
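The snippet below is not BASTet's API. It is only a generic h5py illustration, with hypothetical group and attribute names, of the underlying idea of keeping raw spectral data, a derived analysis and its provenance together in a single HDF5 file.

    import h5py
    import numpy as np

    with h5py.File("experiment.h5", "w") as f:
        # raw spectral image (x, y, m/z) stored alongside its derived products
        raw = f.create_dataset("entry/raw_msi", data=np.random.rand(64, 64, 1000))

        grp = f.create_group("entry/analysis/peak_picking_0")
        grp.create_dataset("peak_intensities", data=np.random.rand(64, 64, 50))

        # provenance recorded as attributes: input dataset, (hypothetical)
        # routine name and its parameters
        grp.attrs["input_dataset"] = raw.name
        grp.attrs["analysis_routine"] = "peak_picking"
        grp.attrs["parameters"] = "threshold=0.8;max_peaks=50"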
Scemama, Anthony; Caffarel, Michel; Oseret, Emmanuel; Jalby, William
2013-04-30
Various strategies to implement efficiently quantum Monte Carlo (QMC) simulations for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices. This novel scheme is based on the use of the highly localized character of atomic Gaussian basis functions (not the molecular orbitals as usually done), (ii) the possibility of keeping the memory footprint minimal, (iii) the important enhancement of single-core performance when efficient optimization tools are used, and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC=Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10-80 k computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC=Chem has been shown to be capable of running at the petascale level, thus demonstrating that for this machine a large part of the peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible. Copyright © 2013 Wiley Periodicals, Inc.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-01
... DEPARTMENT OF EDUCATION Implementation of Title I/II Program Initiatives; Extension of Public Comment Period; Correction AGENCY: Department of Education. ACTION: Correction notice. SUMMARY: On October... Title I/II Program Initiatives,'' Docket ID ED- 2013-ICCD-0090. The comment period for this information...
Numerical Implementation of the Cohesive Soil Bounding Surface Plasticity Model. Volume I.
1983-02-01
A study of various numerical means for implementing the bounding surface plasticity model for cohesive soils is presented, together with a comparison of candidate solution methods (report of the Department of Civil Engineering, University of California, Davis).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gartling, D.K.
The theoretical and numerical background for the finite element computer program, TORO II, is presented in detail. TORO II is designed for the multi-dimensional analysis of nonlinear, electromagnetic field problems described by the quasi-static form of Maxwell's equations. A general description of the boundary value problems treated by the program is presented. The finite element formulation and the associated numerical methods used in TORO II are also outlined. Instructions for the use of the code are documented in SAND96-0903; examples of problems analyzed with the code are also provided in the user's manual. 24 refs., 8 figs.
NASA Astrophysics Data System (ADS)
Mucha, Waldemar; Kuś, Wacław
2018-01-01
The paper presents a practical implementation of hybrid simulation using the Real Time Finite Element Method (RTFEM). Hybrid simulation is a technique for investigating dynamic material and structural properties of mechanical systems by performing numerical analysis and experiment at the same time. It applies to mechanical systems with elements too difficult or impossible to model numerically. These elements are tested experimentally, while the rest of the system is simulated numerically. Data between the experiment and the numerical simulation are exchanged in real time. The authors use the Finite Element Method to perform the numerical simulation. The paper presents the general algorithm for hybrid simulation using RTFEM together with improvements of the algorithm, developed by the authors, that reduce computation time. The paper focuses on the practical implementation of the presented methods, which involves testing of a mountain bicycle frame, where the shock absorber is tested experimentally while the rest of the frame is simulated numerically.
Integrating impact evaluation in the design and implementation of monitoring marine protected areas
Ahmadia, Gabby N.; Glew, Louise; Provost, Mikaela; Gill, David; Hidayat, Nur Ismu; Mangubhai, Sangeeta; Purwanto; Fox, Helen E.
2015-01-01
Quasi-experimental impact evaluation approaches, which enable scholars to disentangle effects of conservation interventions from broader changes in the environment, are gaining momentum in the conservation sector. However, rigorous impact evaluation using statistical matching techniques to estimate the counterfactual have yet to be applied to marine protected areas (MPAs). While there are numerous studies investigating ‘impacts’ of MPAs that have generated considerable insights, results are variable. This variation has been linked to the biophysical and social context in which they are established, as well as attributes of management and governance. To inform decisions about MPA placement, design and implementation, we need to expand our understanding of conditions under which MPAs are likely to lead to positive outcomes by embracing advances in impact evaluation methodologies. Here, we describe the integration of impact evaluation within an MPA network monitoring programme in the Bird's Head Seascape, Indonesia. Specifically we (i) highlight the challenges of implementation ‘on the ground’ and in marine ecosystems and (ii) describe the transformation of an existing monitoring programme into a design appropriate for impact evaluation. This study offers one potential model for mainstreaming impact evaluation in the conservation sector. PMID:26460128
Implementing a Flipped Classroom Approach in a University Numerical Methods Mathematics Course
ERIC Educational Resources Information Center
Johnston, Barbara M.
2017-01-01
This paper describes and analyses the implementation of a "flipped classroom" approach, in an undergraduate mathematics course on numerical methods. The approach replaced all the lecture contents by instructor-made videos and was implemented in the consecutive years 2014 and 2015. The sequential case study presented here begins with an…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-23
...] Approval and Promulgation of Implementation Plans; North Carolina; Removal of Stage II Gasoline Vapor... measures for new and upgraded gasoline dispensing facilities in the State. The September 18, 2009, SIP... .0953), entitled Vapor Return Piping for Stage II Vapor Recovery, for all new or improved gasoline tanks...
Modeling Code Is Helping Cleveland Develop New Products
NASA Technical Reports Server (NTRS)
1998-01-01
Master Builders, Inc., is a 350-person company in Cleveland, Ohio, that develops and markets specialty chemicals for the construction industry. Developing new products involves creating many potential samples and running numerous tests to characterize the samples' performance. Company engineers enlisted NASA's help to replace cumbersome physical testing with computer modeling of the samples' behavior. Since the NASA Lewis Research Center's Structures Division develops mathematical models and associated computation tools to analyze the deformation and failure of composite materials, its researchers began a two-phase effort to modify Lewis' Integrated Composite Analyzer (ICAN) software for Master Builders' use. Phase I has been completed, and Master Builders is pleased with the results. The company is now working to begin implementation of Phase II.
NASA Astrophysics Data System (ADS)
Jansen van Rensburg, Gerhardus J.; Kok, Schalk; Wilke, Daniel N.
2018-03-01
This paper presents the development and numerical implementation of a state variable based thermomechanical material model, intended for use within a fully implicit finite element formulation. Plastic hardening, thermal recovery and multiple cycles of recrystallisation can be tracked for single peak as well as multiple peak recrystallisation response. The numerical implementation of the state variable model extends a J2 isotropic hypo-elastoplastic modelling framework. The complete numerical implementation is presented as an Abaqus UMAT and linked subroutines. The implementation is discussed with a detailed explanation of the derivation and use of the various sensitivities, internal state variable management and multiple recrystallisation cycle contributions. A flow chart explaining the proposed numerical implementation is provided, as well as verification of the convergence of the material subroutine. The material model is characterised using two high temperature data sets for cobalt and copper. The results of finite element analyses using the material parameter values characterised on the copper data set are also presented.
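The full state-variable model is not reproduced in the record above. As a minimal sketch of the elastic-predictor/plastic-corrector (return-mapping) structure that a J2 hypo-elastoplastic UMAT builds on, the following one-dimensional Python function uses linear isotropic hardening with hypothetical material parameters; the actual model adds thermal recovery and recrystallisation variables on top of such an update.

    def return_mapping_1d(eps_new, eps_p, alpha, E=200e3, H=2e3, sigma_y0=250.0):
        """One-dimensional analogue of the radial-return update of J2 plasticity
        with linear isotropic hardening (E, H, sigma_y0 in MPa, hypothetical).

        eps_new : total strain at the end of the increment
        eps_p   : plastic strain at the start of the increment
        alpha   : accumulated plastic strain (hardening variable)
        Returns (stress, new plastic strain, new alpha, algorithmic tangent).
        """
        sigma_trial = E * (eps_new - eps_p)              # elastic predictor
        f_trial = abs(sigma_trial) - (sigma_y0 + H * alpha)
        if f_trial <= 0.0:                               # purely elastic step
            return sigma_trial, eps_p, alpha, E
        dgamma = f_trial / (E + H)                       # plastic corrector
        sign = 1.0 if sigma_trial >= 0.0 else -1.0
        sigma = sigma_trial - E * dgamma * sign
        return sigma, eps_p + dgamma * sign, alpha + dgamma, E * H / (E + H)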
Thürmer, J Lukas; Wieber, Frank; Gollwitzer, Peter M
2017-01-01
There are two key motivators to perform well in a group: making a contribution that (a) is crucial for the group (indispensability) and that (b) the other group members recognize (identifiability). We argue that indispensability promotes setting collective ("We") goals whereas identifiability induces individual ("I") goals. Although both goals may enhance performance, they should align with different strategies. Whereas pursuing collective goals should involve more cooperation, pursuing individual goals should involve less cooperation. Two experiments support this reasoning and show that planning out collective goals with collective implementation intentions (cIIs or "We-plans") relies on cooperation but planning out individual goals with individual implementation intentions (IIs or "I-plans") does not. In Experiment 1, three-member groups first formed a collective or an individual goal and then performed a first round of a physical persistence task. Groups then either formed a respective implementation intention (cII or II) or a control plan and then performed a second round of the task. Although groups with cIIs and IIs performed better on a physical persistence task than respective control groups, only cII groups interacted more cooperatively during task performance. To confirm the causal role of these interaction processes, Experiment 2 used the same persistence task and manipulated whether groups could communicate: When communication was hindered, groups with cIIs but not groups with IIs performed worse. Communication thus qualifies as a process making cIIs effective. The present research offers a psychology of action account to small group performance.
Implementing real-time robotic systems using CHIMERA II
NASA Technical Reports Server (NTRS)
Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.
1990-01-01
A description is given of the CHIMERA II programming environment and operating system, which was developed for implementing real-time robotic systems. Sensor-based robotic systems contain both general- and special-purpose hardware, and thus the development of applications tends to be a time-consuming task. The CHIMERA II environment is designed to reduce the development time by providing a convenient software interface between the hardware and the user. CHIMERA II supports flexible hardware configurations which are based on one or more VME-backplanes. All communication across multiple processors is transparent to the user through an extensive set of interprocessor communication primitives. CHIMERA II also provides a high-performance real-time kernel which supports both deadline and highest-priority-first scheduling. The flexibility of CHIMERA II allows hierarchical models for robot control, such as NASREM, to be implemented with minimal programming time and effort.
14 CFR 120.225 - How to implement an alcohol testing program.
Code of Federal Regulations, 2010 CFR
2010-01-01
... principal place of business prior to starting operations, (ii) Implement an FAA alcohol testing program no... District Office nearest to your principal place of business. (3) An air traffic control facility not... Specification,(ii) Implement an FAA alcohol testing program no later than the date you start operations, and...
Wang, Jinlong; Lu, Mai; Hu, Yanwen; Chen, Xiaoqiang; Pan, Qiangqiang
2015-12-01
The neuron is the basic unit of the biological nervous system. The Hodgkin-Huxley (HH) model is one of the most realistic neuron models in terms of describing the electrophysiological characteristics of neurons. Hardware implementation of neurons could provide new research ideas for the clinical treatment of spinal cord injury, bionics and artificial intelligence. Based on the HH model neuron and DSP Builder technology, in the present study a single HH model neuron was implemented in hardware on a Field Programmable Gate Array (FPGA). The neuron implemented in the FPGA was stimulated by different types of current, the action potential response characteristics were analyzed, and the correlation coefficient between the numerical simulation result and the hardware implementation result was calculated. The results showed that the action potential response of the FPGA neuron was highly consistent with the numerical simulation result. This work lays the foundation for hardware implementation of neural networks.
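For reference, the numerical simulation against which such a hardware neuron is typically checked can be as simple as a forward-Euler integration of the standard HH equations. The sketch below uses the classical squid-axon parameter set and an arbitrary step-current amplitude; it is not the authors' DSP Builder design.

    import numpy as np

    def simulate_hh(I_amp=10.0, t_max=50.0, dt=0.01):
        """Forward-Euler integration of the Hodgkin-Huxley equations under a
        constant step current I_amp (uA/cm^2); returns the membrane voltage trace."""
        C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
        E_Na, E_K, E_L = 50.0, -77.0, -54.387
        a_m = lambda V: 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
        b_m = lambda V: 4.0 * np.exp(-(V + 65.0) / 18.0)
        a_h = lambda V: 0.07 * np.exp(-(V + 65.0) / 20.0)
        b_h = lambda V: 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
        a_n = lambda V: 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
        b_n = lambda V: 0.125 * np.exp(-(V + 65.0) / 80.0)

        V, m, h, n = -65.0, 0.053, 0.596, 0.317       # approximate resting state
        trace = np.empty(int(t_max / dt))
        for k in range(trace.size):
            I_ion = (g_Na * m**3 * h * (V - E_Na)
                     + g_K * n**4 * (V - E_K)
                     + g_L * (V - E_L))
            V += dt * (I_amp - I_ion) / C_m
            m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
            h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
            n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
            trace[k] = V
        return trace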
New Parallel Algorithms for Landscape Evolution Model
NASA Astrophysics Data System (ADS)
Jin, Y.; Zhang, H.; Shi, Y.
2017-12-01
Most landscape evolution models (LEM) developed in the last two decades solve the diffusion equation to simulate the transport of surface sediments. This numerical approach is difficult to parallelize due to the computation of the drainage area for each node, which requires a huge amount of communication if run in parallel. In order to overcome this difficulty, we developed two parallel algorithms for LEM with a stream net. One algorithm handles the partition of the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other algorithm is based on a new partition algorithm, which partitions the nodes in catchments between processes first, and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massively parallel computing techniques, and numerical experiments show that they are both adequate for handling large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
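The serial kernel that both parallel algorithms must reproduce is the accumulation of drainage area down the stream net. A minimal single-process sketch is given below, assuming a single-flow-direction receiver array; it makes explicit the global, order-dependent nature of the reduction that drives the communication cost discussed above.

    import numpy as np

    def drainage_area(elevation, receiver, cell_area=1.0):
        """Accumulate drainage area over a stream net.

        elevation : (N,) node elevations
        receiver  : (N,) index of the downstream node each node drains to
                    (a node draining to itself is an outlet)
        Visiting nodes from highest to lowest guarantees that every upstream
        contribution has been added before it is passed downstream."""
        area = np.full(elevation.shape[0], cell_area, dtype=float)
        for i in np.argsort(elevation)[::-1]:        # highest node first
            r = receiver[i]
            if r != i:                               # not an outlet
                area[r] += area[i]
        return area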
NASA Astrophysics Data System (ADS)
Kredler, L.; Häußler, W.; Martin, N.; Böni, P.
The flux is still a major limiting factor in neutron research. For instruments supplied with cold neutrons through neutron guides, both at present steady-state sources and at new spallation neutron sources, it is therefore important to optimize the instrumental setup and the neutron guidance. Optimization of the neutron guide geometry and of the instrument itself can be performed by numerical ray-tracing simulations using existing open-access codes. In this paper, we discuss how such Monte Carlo simulations have been employed to plan improvements of the Neutron Resonant Spin Echo spectrometer RESEDA (FRM II, Germany) as well as of the neutron guides before and within the instrument. The essential components have been represented with the help of the McStas ray-tracing package. The expected intensity has been tested by means of several virtual detectors implemented in the simulation code. Comparison between simulations and preliminary measurement results shows good agreement and demonstrates the reliability of the numerical approach. These results will be taken into account in the planning of new components installed in the guide system.
Planning Targets for Phase II Watershed Implementation Plans
On August 1, 2011, EPA provided planning targets for nitrogen, phosphorus and sediment for the Phase II Watershed Implementation Plans (WIPs) of the Chesapeake Bay TMDL. This page provides the letters containing those planning targets.
The accuracy of semi-numerical reionization models in comparison with radiative transfer simulations
NASA Astrophysics Data System (ADS)
Hutter, Anne
2018-03-01
We have developed a modular semi-numerical code that computes the time- and spatially dependent ionization of neutral hydrogen (H I), neutral helium (He I) and singly ionized helium (He II) in the intergalactic medium (IGM). The model accounts for recombinations and provides different descriptions for the photoionization rate that are used to calculate the residual H I fraction in ionized regions. We compare different semi-numerical reionization schemes to a radiative transfer (RT) simulation. We use the RT simulation as a benchmark, and find that the semi-numerical approaches produce similar H II and He II morphologies and power spectra of the H I 21cm signal throughout reionization. As we do not track partial ionization of He II, the extent of the doubly ionized helium (He III) regions is consistently smaller. In contrast to previous comparison projects, the ionizing emissivity in our semi-numerical scheme is not adjusted to reproduce the redshift evolution of the RT simulation, but directly derived from the RT simulation spectra. Among schemes that identify the ionized regions by the ratio of the number of ionization and absorption events on different spatial smoothing scales, we find those that mark the entire sphere as ionized when the ionization criterion is fulfilled to result in significantly accelerated reionization compared to the RT simulation. Conversely, those that flag only the central cell as ionized yield a very similar but slightly delayed redshift evolution of reionization, with up to 20% of the ionizing photons lost. Despite the overall agreement with the RT simulation, our results suggest that constraints on emissivity-sensitive parameters derived from semi-numerical galaxy formation-reionization models are subject to photon nonconservation.
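As an illustration of the central-cell flagging scheme referred to above, the sketch below marks a grid cell as ionized if the smoothed photon count reaches the smoothed absorption count on any filtering scale; a cubic uniform filter stands in for a spherical top-hat purely for brevity, and the field names are placeholders.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def ionized_central_cell(n_ion, n_abs, scales):
        """Central-cell flagging: a cell is ionized if, on any smoothing scale,
        the smoothed number of ionizing photons reaches the smoothed number of
        absorption events.

        n_ion, n_abs : 3-D grids of ionizing photons and absorptions
        scales       : filter sizes in cells, typically from large to small"""
        ionized = np.zeros(n_ion.shape, dtype=bool)
        for s in scales:
            ion_s = uniform_filter(n_ion, size=s, mode="wrap")
            abs_s = uniform_filter(n_abs, size=s, mode="wrap")
            ionized |= ion_s >= abs_s
        return ionized

A full-sphere variant would instead paint the entire filter volume as ionized wherever the criterion is met; as noted above, that choice accelerates reionization relative to the RT benchmark.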
Implementing a GPU-based numerical algorithm for modelling dynamics of a high-speed train
NASA Astrophysics Data System (ADS)
Sytov, E. S.; Bratus, A. S.; Yurchenko, D.
2018-04-01
This paper discusses the implementation of a GPU-based numerical algorithm for studying various phenomena associated with the dynamics of high-speed railway transport. The proposed numerical algorithm for calculating the critical speed of the bogie is based on the first Lyapunov number. The numerical algorithm is validated against analytical results derived for a simple model. A dynamic model of a carriage connected to a new dual-wheelset flexible bogie is studied for linear and dry friction damping. Numerical results obtained by the CPU, MPU and GPU approaches are compared and the appropriateness of these methods is discussed.
Solomon, Daniel H; Lu, Bing; Yu, Zhi; Corrigan, Cassandra; Harrold, Leslie R; Smolen, Josef S; Fraenkel, Liana; Katz, Jeffrey N; Losina, Elena
2018-01-05
We conducted a two-phase randomized controlled trial of a Learning Collaborative (LC) to facilitate implementation of treat to target (TTT) to manage rheumatoid arthritis (RA). We found substantial improvement in implementation of TTT in Phase I. Herein, we report on a second 9 months (Phase II) in which we examined maintenance of the Phase I response and predictors of greater improvement in TTT adherence. We recruited 11 rheumatology sites and randomized them either to receive the LC during Phase I or to a wait-list control group that received the LC intervention during Phase II. The outcome was change in TTT implementation score (0 to 100, 100 is best) from pre- to post-intervention. The TTT implementation score is defined as the percentage of components documented in visit notes. Analyses examined: 1) the extent to which the Phase I intervention teams sustained improvement in TTT; and 2) predictors of TTT improvement. The analysis included 636 RA patients. At baseline, the mean TTT implementation score was 11% in the Phase I intervention sites and 13% in the Phase II sites. After the intervention, the TTT implementation score improved to 57% in the Phase I intervention sites and to 58% in the Phase II sites. Intervention sites from Phase I sustained the improvement during Phase II (52%). Predictors of greater TTT improvement included having only rheumatologist providers at the site, academic affiliation of the site, fewer providers per site, and the rheumatologist provider being a trainee. Improvement in TTT remained relatively stable over the post-intervention period. This article is protected by copyright. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-15
... Resource Management, to EPA in two separate SIP revisions on October 19, 2007, and July 1, 2011. These SIP...) Implementation Rule NSR Update Phase II (hereafter referred to as the ``Ozone Implementation NSR Update'' or ``Phase II Rule'') recognizing nitrogen oxide (NO X ) as an ozone precursor, among other requirements. In...
ERIC Educational Resources Information Center
Moll, Emmett J.
The Milwaukee (Wisconsin) Public Schools (MPS) recently implemented a new, state-designed accounting system, called the Wisconsin Elementary and Secondary School Accounting System (WESSAS), based on guidelines proposed in the U.S. Office of Education's Handbook II. This report describes and discusses that implementation and provides numerous…
40 CFR 52.1423 - PM10 State implementation plan development in group II areas.
Code of Federal Regulations, 2011 CFR
2011-07-01
... development in group II areas. 52.1423 Section 52.1423 Protection of Environment ENVIRONMENTAL PROTECTION...) Nebraska § 52.1423 PM10 State implementation plan development in group II areas. The state of Nebraska...: (a) An area in the City of Omaha and the area in and around the Village of Weeping Water have been...
40 CFR 52.1423 - PM10 State implementation plan development in group II areas.
Code of Federal Regulations, 2012 CFR
2012-07-01
... development in group II areas. 52.1423 Section 52.1423 Protection of Environment ENVIRONMENTAL PROTECTION...) Nebraska § 52.1423 PM10 State implementation plan development in group II areas. The state of Nebraska...: (a) An area in the City of Omaha and the area in and around the Village of Weeping Water have been...
40 CFR 52.1423 - PM10 State implementation plan development in group II areas.
Code of Federal Regulations, 2013 CFR
2013-07-01
... development in group II areas. 52.1423 Section 52.1423 Protection of Environment ENVIRONMENTAL PROTECTION...) Nebraska § 52.1423 PM10 State implementation plan development in group II areas. The state of Nebraska...: (a) An area in the City of Omaha and the area in and around the Village of Weeping Water have been...
40 CFR 52.1423 - PM10 State implementation plan development in group II areas.
Code of Federal Regulations, 2014 CFR
2014-07-01
... development in group II areas. 52.1423 Section 52.1423 Protection of Environment ENVIRONMENTAL PROTECTION...) Nebraska § 52.1423 PM10 State implementation plan development in group II areas. The state of Nebraska...: (a) An area in the City of Omaha and the area in and around the Village of Weeping Water have been...
NASA Astrophysics Data System (ADS)
Manganaro, L.; Russo, G.; Bourhaleb, F.; Fausti, F.; Giordanengo, S.; Monaco, V.; Sacchi, R.; Vignati, A.; Cirio, R.; Attili, A.
2018-04-01
One major rationale for the application of heavy ion beams in tumour therapy is their increased relative biological effectiveness (RBE). The complex dependencies of the RBE on dose, biological endpoint, position in the field, etc. require the use of biophysical models in treatment planning and clinical analysis. This study aims to introduce a new software toolkit, named ‘Survival’, to facilitate the radiobiological computations needed in ion therapy. The simulation toolkit was written in C++ and developed with a modular architecture in order to easily incorporate different radiobiological models. The following models were successfully implemented: the local effect model (LEM, versions I, II and III) and variants of the microdosimetric-kinetic model (MKM). Different numerical evaluation approaches were also implemented: Monte Carlo (MC) numerical methods and a set of faster analytical approximations. Among the possible applications, the toolkit was used to reproduce the RBE versus LET for different ions (proton, He, C, O, Ne) and different cell lines (CHO, HSG). Intercomparisons between different models (LEM and MKM) and computational approaches (MC and fast approximations) were performed. The developed software could represent an important tool for the evaluation of the biological effectiveness of charged particles in ion beam therapy, in particular when coupled with treatment simulations. Its modular architecture facilitates benchmarking and inter-comparison between different models and evaluation approaches. The code is open source (GPL2 license) and available at https://github.com/batuff/Survival.
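Both LEM and MKM ultimately supply linear-quadratic coefficients that enter survival and RBE evaluations. As a small, model-agnostic illustration (with hypothetical alpha/beta values rather than toolkit output), the following sketch computes the survival fraction and the RBE at a chosen iso-survival level.

    import numpy as np

    def survival_lq(dose, alpha, beta):
        """Linear-quadratic survival, S = exp(-(alpha*D + beta*D^2))."""
        return np.exp(-(alpha * dose + beta * dose**2))

    def rbe_at_survival(s_target, alpha_ion, beta_ion, alpha_x, beta_x):
        """RBE at iso-survival: ratio of the photon dose to the ion dose that
        each produce the survival level s_target."""
        ln_s = -np.log(s_target)
        d_ion = (-alpha_ion + np.sqrt(alpha_ion**2 + 4.0 * beta_ion * ln_s)) / (2.0 * beta_ion)
        d_x = (-alpha_x + np.sqrt(alpha_x**2 + 4.0 * beta_x * ln_s)) / (2.0 * beta_x)
        return d_x / d_ion

    # example with hypothetical coefficients (Gy^-1, Gy^-2)
    print(rbe_at_survival(0.1, alpha_ion=0.9, beta_ion=0.05, alpha_x=0.2, beta_x=0.05))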
Implementation of a Proficiency-Based Diploma System in Maine: Phase II--District Level Analysis
ERIC Educational Resources Information Center
Silvernail, David L.; Stump, Erika K.; McCafferty, Anita Stewart; Hawes, Kathryn M.
2014-01-01
This report describes the findings from Phase II of a study of Maine's implementation of a proficiency-based diploma system. At the request of the Joint Standing Committee on Education and Cultural Affairs of the Maine Legislature, the Maine Policy Research Institute (MEPRI) has conducted a two-phased study of the implementation of Maine law…
Background Document for Capacity Analysis for Newly Listed ...
Numerics and subgrid-scale modeling in large eddy simulations of stratocumulus clouds.
Pressel, Kyle G; Mishra, Siddhartha; Schneider, Tapio; Kaul, Colleen M; Tan, Zhihong
2017-06-01
Stratocumulus clouds are the most common type of boundary layer cloud; their radiative effects strongly modulate climate. Large eddy simulations (LES) of stratocumulus clouds often struggle to maintain fidelity to observations because of the sharp gradients occurring at the entrainment interfacial layer at the cloud top. The challenge posed to LES by stratocumulus clouds is evident in the wide range of solutions found in the LES intercomparison based on the DYCOMS-II field campaign, where simulated liquid water paths for identical initial and boundary conditions varied by a factor of nearly 12. Here we revisit the DYCOMS-II RF01 case and show that the wide range of previous LES results can be realized in a single LES code by varying only the numerical treatment of the equations of motion and the nature of subgrid-scale (SGS) closures. The simulations that maintain the greatest fidelity to DYCOMS-II observations are identified. The results show that using weighted essentially non-oscillatory (WENO) numerics for all resolved advective terms and no explicit SGS closure consistently produces the highest-fidelity simulations. This suggests that the numerical dissipation inherent in WENO schemes functions as a high-quality, implicit SGS closure for this stratocumulus case. Conversely, using oscillatory centered difference numerical schemes for momentum advection, WENO numerics for scalars, and explicitly modeled SGS fluxes consistently produces the lowest-fidelity simulations. We attribute this to the production of anomalously large SGS fluxes near the cloud tops through the interaction of numerical error in the momentum field with the scalar SGS model.
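For concreteness, the core of a WENO5 scheme of the kind referred to above is the nonlinearly weighted face reconstruction sketched below (classical Jiang-Shu weights); the adaptive weights are what supply the implicit, solution-dependent dissipation that acts as an SGS closure in these simulations.

    import numpy as np

    def weno5_face_value(f):
        """Classical WENO5-JS reconstruction of the cell-face value f_{i+1/2}
        from the five cell averages f = (f_{i-2}, f_{i-1}, f_i, f_{i+1}, f_{i+2})."""
        eps = 1e-6
        # candidate third-order reconstructions on the three sub-stencils
        p0 = (2.0 * f[0] - 7.0 * f[1] + 11.0 * f[2]) / 6.0
        p1 = (-f[1] + 5.0 * f[2] + 2.0 * f[3]) / 6.0
        p2 = (2.0 * f[2] + 5.0 * f[3] - f[4]) / 6.0
        # smoothness indicators penalize sub-stencils crossing sharp gradients
        b0 = 13.0 / 12.0 * (f[0] - 2.0 * f[1] + f[2]) ** 2 + 0.25 * (f[0] - 4.0 * f[1] + 3.0 * f[2]) ** 2
        b1 = 13.0 / 12.0 * (f[1] - 2.0 * f[2] + f[3]) ** 2 + 0.25 * (f[1] - f[3]) ** 2
        b2 = 13.0 / 12.0 * (f[2] - 2.0 * f[3] + f[4]) ** 2 + 0.25 * (3.0 * f[2] - 4.0 * f[3] + f[4]) ** 2
        # nonlinear weights built from the ideal weights (1/10, 6/10, 3/10)
        a = np.array([0.1 / (eps + b0) ** 2, 0.6 / (eps + b1) ** 2, 0.3 / (eps + b2) ** 2])
        w = a / a.sum()
        return w[0] * p0 + w[1] * p1 + w[2] * p2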
Numerics and subgrid‐scale modeling in large eddy simulations of stratocumulus clouds
Mishra, Siddhartha; Schneider, Tapio; Kaul, Colleen M.; Tan, Zhihong
2017-01-01
Abstract Stratocumulus clouds are the most common type of boundary layer cloud; their radiative effects strongly modulate climate. Large eddy simulations (LES) of stratocumulus clouds often struggle to maintain fidelity to observations because of the sharp gradients occurring at the entrainment interfacial layer at the cloud top. The challenge posed to LES by stratocumulus clouds is evident in the wide range of solutions found in the LES intercomparison based on the DYCOMS‐II field campaign, where simulated liquid water paths for identical initial and boundary conditions varied by a factor of nearly 12. Here we revisit the DYCOMS‐II RF01 case and show that the wide range of previous LES results can be realized in a single LES code by varying only the numerical treatment of the equations of motion and the nature of subgrid‐scale (SGS) closures. The simulations that maintain the greatest fidelity to DYCOMS‐II observations are identified. The results show that using weighted essentially non‐oscillatory (WENO) numerics for all resolved advective terms and no explicit SGS closure consistently produces the highest‐fidelity simulations. This suggests that the numerical dissipation inherent in WENO schemes functions as a high‐quality, implicit SGS closure for this stratocumulus case. Conversely, using oscillatory centered difference numerical schemes for momentum advection, WENO numerics for scalars, and explicitly modeled SGS fluxes consistently produces the lowest‐fidelity simulations. We attribute this to the production of anomalously large SGS fluxes near the cloud tops through the interaction of numerical error in the momentum field with the scalar SGS model. PMID:28943997
NASA Astrophysics Data System (ADS)
Nagakura, Hiroki; Iwakami, Wakana; Furusawa, Shun; Sumiyoshi, Kohsuke; Yamada, Shoichi; Matsufuru, Hideo; Imakura, Akira
2017-04-01
We present a newly developed moving-mesh technique for the multi-dimensional Boltzmann-Hydro code for the simulation of core-collapse supernovae (CCSNe). What makes this technique different from others is the fact that it treats not only hydrodynamics but also neutrino transfer in the language of the 3 + 1 formalism of general relativity (GR), making use of the shift vector to specify the time evolution of the coordinate system. This means that the transport part of our code is essentially general relativistic, although in this paper it is applied only to the moving curvilinear coordinates in the flat Minkowski spacetime, since the gravity part is still Newtonian. The numerical aspect of the implementation is also described in detail. Employing the axisymmetric two-dimensional version of the code, we conduct two test computations: oscillations and runaways of a proto-neutron star (PNS). We show that our new method works fine, tracking the motions of the PNS correctly. We believe that this is a major advancement toward the realistic simulation of CCSNe.
NASA Astrophysics Data System (ADS)
Chaillat, Stéphanie; Desiderio, Luca; Ciarlet, Patrick
2017-12-01
In this work, we study the accuracy and efficiency of hierarchical matrix (H-matrix) based fast methods for solving dense linear systems arising from the discretization of the 3D elastodynamic Green's tensors. It is well known in the literature that standard H-matrix based methods, although very efficient tools for asymptotically smooth kernels, are not optimal for oscillatory kernels. H2-matrix and directional approaches have been proposed to overcome this problem. However the implementation of such methods is much more involved than the standard H-matrix representation. The central questions we address are twofold. (i) What is the frequency-range in which the H-matrix format is an efficient representation for 3D elastodynamic problems? (ii) What can be expected of such an approach to model problems in mechanical engineering? We show that even though the method is not optimal (in the sense that more involved representations can lead to faster algorithms) an efficient solver can be easily developed. The capabilities of the method are illustrated on numerical examples using the Boundary Element Method.
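The frequency dependence discussed above can be illustrated with a small experiment: the Python sketch below measures, via truncated SVD (a simple stand-in for the ACA-type compressions used inside H-matrix solvers), the numerical rank of an admissible block of a scalar oscillatory kernel exp(ikR)/R between two well-separated point clouds, showing how the rank needed for a fixed tolerance grows with the wavenumber. The cloud geometry, the tolerance, and the scalar kernel (in place of the full 3D elastodynamic Green's tensor) are assumptions of this sketch.

```python
import numpy as np

def numerical_rank(block, tol=1e-4):
    """Rank needed to approximate a kernel block to a relative tolerance,
    estimated via truncated SVD."""
    s = np.linalg.svd(block, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

rng = np.random.default_rng(1)
# Two well-separated point clouds (an "admissible" block in H-matrix language).
X = rng.uniform(0.0, 1.0, size=(300, 3))
Y = rng.uniform(0.0, 1.0, size=(300, 3)) + np.array([5.0, 0.0, 0.0])
R = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)

# Scalar oscillatory kernel exp(ikR)/R as a proxy for elastodynamic Green's tensors.
for k in (0.0, 5.0, 20.0, 80.0):
    G = np.exp(1j * k * R) / R
    print(f"wavenumber k = {k:5.1f}  ->  numerical rank {numerical_rank(G)}")
```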
NASA Astrophysics Data System (ADS)
Schmidt, Burkhard; Hartmann, Carsten
2018-07-01
WavePacket is an open-source program package for numerical simulations in quantum dynamics. It can solve time-independent or time-dependent linear Schrödinger and Liouville-von Neumann equations in one or more dimensions. Coupled equations can also be treated, which allows, e.g., the simulation of molecular quantum dynamics beyond the Born-Oppenheimer approximation. Optionally accounting for the interaction with external electric fields within the semi-classical dipole approximation, WavePacket can be used to simulate experiments involving tailored light pulses in photo-induced physics or chemistry. Being highly versatile and offering visualization of quantum dynamics 'on the fly', WavePacket is well suited for teaching or research projects in atomic, molecular and optical physics as well as in physical or theoretical chemistry. Building on the previous Part I [Comp. Phys. Comm. 213, 223-234 (2017)], which dealt with closed quantum systems and discrete variable representations, the present Part II focuses on the dynamics of open quantum systems, with Lindblad operators modeling dissipation and dephasing. This part also describes the WavePacket function for optimal control of quantum dynamics, building on rapid monotonically convergent iteration methods. Furthermore, two different approaches to dimension reduction implemented in WavePacket are documented here. In the first, a balancing transformation based on the concepts of controllability and observability Gramians is used to identify states that are neither well controllable nor well observable; those states are either truncated or averaged out. In the other approach, the H2-error for a given reduced dimensionality is minimized by H2 optimal model reduction techniques, utilizing a bilinear iterative rational Krylov algorithm. The present work describes the MATLAB version of WavePacket 5.3.0, which is hosted and further developed at the Sourceforge platform, where extensive wiki documentation as well as numerous worked-out demonstration examples with animated graphics can also be found.
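For readers unfamiliar with the open-system setting of Part II, the sketch below integrates a Lindblad master equation for a driven two-level system with a single decay operator using a fixed-step RK4 scheme; it illustrates the equation of motion being solved, not WavePacket's own MATLAB implementation, and the Hamiltonian, decay rate, and step size are illustrative values.

```python
import numpy as np

# Two-level system (hbar = 1); all values are illustrative.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = 0.5 * sx                                                    # Rabi-type drive
L = np.sqrt(0.1) * np.array([[0, 1], [0, 0]], dtype=complex)    # decay operator

def lindblad_rhs(rho):
    """d(rho)/dt = -i[H, rho] + L rho L^dag - (1/2){L^dag L, rho}."""
    comm = H @ rho - rho @ H
    LdL = L.conj().T @ L
    diss = L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return -1j * comm + diss

def rk4_step(rho, dt):
    k1 = lindblad_rhs(rho)
    k2 = lindblad_rhs(rho + 0.5 * dt * k1)
    k3 = lindblad_rhs(rho + 0.5 * dt * k2)
    k4 = lindblad_rhs(rho + dt * k3)
    return rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rho = np.array([[0, 0], [0, 1]], dtype=complex)    # start in the decaying state
dt = 0.01
for _ in range(2000):
    rho = rk4_step(rho, dt)
print("trace:", rho.trace().real, " upper-level population:", rho[1, 1].real)
```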
Adapted all-numerical correlator for face recognition applications
NASA Astrophysics Data System (ADS)
Elbouz, M.; Bouzidi, F.; Alfalou, A.; Brosseau, C.; Leonard, I.; Benkelfat, B.-E.
2013-03-01
In this study, we suggest and validate an all-numerical implementation of a VanderLugt correlator which is optimized for face recognition applications. The main goal of this implementation is to take advantage of the benefits (detection, localization, and identification of a target object within a scene) of correlation methods and exploit the reconfigurability of numerical approaches. This technique requires a numerical implementation of the optical Fourier transform. We pay special attention to adapting the correlation filter to this numerical implementation. One main goal of this work is to reduce the size of the filter in order to decrease the memory space required for real-time applications. To fulfil this requirement, we code the reference images with 8 bits and study the effect of this coding on the performance of several composite filters (phase-only filter, binary phase-only filter). The saturation effect degrades the correlator's decision performance when filters contain up to nine references. Further, an optimization based on a segmented composite filter is proposed. Based on this approach, we present tests with different faces demonstrating that the above-mentioned saturation effect is significantly reduced while minimizing the size of the learning database.
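A minimal sketch of such an all-numerical correlation, assuming a plain phase-only filter and synthetic 8-bit images (the composite and segmented filters studied above are not reproduced): the reference's spectral phase serves as the filter, the scene is multiplied by it in the Fourier plane, and the inverse transform yields a correlation peak at the target location.

```python
import numpy as np

def phase_only_filter(reference):
    """Phase-only filter (POF): keep only the spectral phase of the reference image."""
    F = np.fft.fft2(reference.astype(float))
    return np.conj(F) / (np.abs(F) + 1e-12)

def correlate(scene, pof):
    """All-numerical VanderLugt-style correlation: multiply in the Fourier
    plane and transform back; the peak location marks the detected target."""
    S = np.fft.fft2(scene.astype(float))
    return np.abs(np.fft.ifft2(S * pof))

rng = np.random.default_rng(2)
reference = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)   # 8-bit reference
scene = np.roll(reference, shift=(10, 20), axis=(0, 1))            # shifted copy
scene = np.clip(scene + rng.normal(0, 10, scene.shape), 0, 255).astype(np.uint8)

plane = correlate(scene, phase_only_filter(reference))
peak = np.unravel_index(np.argmax(plane), plane.shape)
print("correlation peak at", peak)        # expected near (10, 20)
```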
Implementing the UCSD PASCAL system on the MODCOMP computer. [deep space network
NASA Technical Reports Server (NTRS)
Wolfe, T.
1980-01-01
The implementation of an interactive software development system (UCSD PASCAL) on the MODCOMP computer is discussed. The development of an interpreter for the MODCOMP II and the MODCOMP IV computers, written in MODCOMP II assembly language, is described. The complete Pascal programming system was run successfully on a MODCOMP II and MODCOMP IV under both the MAX II/III and MAX IV operating systems. The source code for an 8080 microcomputer version of the interpreter was used as the design for the MODCOMP interpreter. A mapping of the functions within the 8080 interpreter into MODCOMP II assembly language was the method used to code the interpreter.
Numerical Hydrodynamic Study of Hypothetical Levee Setback Scenarios
2018-01-01
ERDC/CHL TR-18-1, Flood and Coastal Systems Research and Development Program, January 2018. Abstract: A numerical hydrodynamic study was conducted to compare multiple levee setback alternatives to the base
Influence of surface rectangular defect winding layer on burst pressure of CNG-II composite cylinder
NASA Astrophysics Data System (ADS)
You, H. X.; Peng, L.; Zhao, C.; Ma, K.; Zhang, S.
2018-01-01
To study the influence of a surface defect in the composite material on the burst pressure of a CNG-II composite cylinder, the surface defect was simplified as a rectangular slot of certain size on the basis of actually investigated shapes of cylinder surface defects. A CNG-II composite cylinder with a rectangular slot defect (2 mm in depth) was used for a burst test, and the numerical simulation software ANSYS was used to calculate its burst pressure. Through comparison between the burst pressure in the test and the numerical analysis result, the correctness of the numerical analysis method was verified. On this basis, the numerical analysis was extended to composite cylinders with surface defects of other depths. The results showed that a surface defect in the form of a rectangular slot had no significant effect on the liner stress of the composite cylinder, but it had a great influence on the stress in the fiber-wrapped layer. The burst pressure of the composite cylinder decreased as the defect depth increased. The hoop stress at the bottom of the rectangular slot defect exceeded the maximum tensile strength of the composite material, which explains the decrease in the burst pressure of the defective cylinders.
NASA Astrophysics Data System (ADS)
Mahlmann, J. F.; Cerdá-Durán, P.; Aloy, M. A.
2018-07-01
The study of the electrodynamics of static, axisymmetric, and force-free Kerr magnetospheres relies vastly on solutions of the so-called relativistic Grad-Shafranov equation (GSE). Different numerical approaches to the solution of the GSE have been introduced in the literature, but none of them has been fully assessed from the numerical point of view in terms of efficiency and quality of the solutions found. We present a generalization of these algorithms and give a detailed background on the algorithmic implementation. We assess the numerical stability of the implemented algorithms and quantify the convergence of the presented methodology for the most established set-ups (split-monopole, paraboloidal, BH disc, uniform).
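The GSE-specific source terms and metric factors are not reproduced here, but the structural core of such relaxation solvers can be sketched: the Python code below applies successive over-relaxation (SOR) to a 2D Poisson problem with a known solution and reports the iteration count and discretization error. The grid size, relaxation factor, and the Poisson stand-in for the Grad-Shafranov operator are assumptions of this sketch.

```python
import numpy as np

def sor_poisson(f, h, omega=1.8, tol=1e-8, max_iter=20000):
    """Successive over-relaxation for u_xx + u_yy = f with homogeneous
    Dirichlet boundaries -- the kind of iterative relaxation at the core of
    Grad-Shafranov-type solvers (the real GSE adds nonlinear source terms)."""
    u = np.zeros_like(f)
    n, m = f.shape
    for it in range(max_iter):
        max_change = 0.0
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                new = 0.25 * (u[i + 1, j] + u[i - 1, j] + u[i, j + 1] + u[i, j - 1]
                              - h * h * f[i, j])
                change = omega * (new - u[i, j])
                u[i, j] += change
                max_change = max(max_change, abs(change))
        if max_change < tol:
            return u, it
    return u, max_iter

n = 41
h = 1.0 / (n - 1)
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
f = -2.0 * np.pi**2 * np.sin(np.pi * x) * np.sin(np.pi * y)   # exact u = sin(pi x) sin(pi y)
u, iters = sor_poisson(f, h)
err = np.max(np.abs(u - np.sin(np.pi * x) * np.sin(np.pi * y)))
print(f"converged in {iters} iterations, max error {err:.2e}")
```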
NASA Astrophysics Data System (ADS)
Mahlmann, J. F.; Cerdá-Durán, P.; Aloy, M. A.
2018-04-01
The study of the electrodynamics of static, axisymmetric and force-free Kerr magnetospheres relies vastly on solutions of the so called relativistic Grad-Shafranov equation (GSE). Different numerical approaches to the solution of the GSE have been introduced in the literature, but none of them has been fully assessed from the numerical point of view in terms of efficiency and quality of the solutions found. We present a generalization of these algorithms and give detailed background on the algorithmic implementation. We assess the numerical stability of the implemented algorithms and quantify the convergence of the presented methodology for the most established setups (split-monopole, paraboloidal, BH-disk, uniform).
Numerical Methods for Analysis of Charged Vacancy Diffusion in Dielectric Solids
2006-12-01
A theory for charged vacancy diffusion in elastic dielectric materials is formulated and implemented numerically in a finite difference code, extending earlier work by one of the co-authors on neutral vacancy kinetics (Grinfeld and Hazzledine, 1997). The spatial discretization has accuracy of order (Δx)², using a central finite difference approximation (Hoffman, 1992) for the second spatial derivative of φ, i.e. (φ_(i+1) - 2φ_i + φ_(i-1))/(Δx)².
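As a quick check of the formula reconstructed above, the snippet below verifies the second-order convergence of the central-difference second derivative on a smooth test function; the test function and grid sizes are arbitrary choices.

```python
import numpy as np

def second_derivative(phi, dx):
    """Central difference (phi[i+1] - 2*phi[i] + phi[i-1]) / dx**2,
    accurate to order dx**2."""
    return (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2

# Convergence check on phi(x) = sin(x), whose second derivative is -sin(x).
for n in (20, 40, 80, 160):
    x = np.linspace(0.0, np.pi, n)
    dx = x[1] - x[0]
    err = np.max(np.abs(second_derivative(np.sin(x), dx) + np.sin(x[1:-1])))
    print(f"n = {n:4d}  max error = {err:.3e}")   # error shrinks ~4x per grid doubling
```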
NASA Astrophysics Data System (ADS)
Amiraux, Mathieu
Rotorcraft Blade-Vortex Interaction (BVI) remains one of the most challenging flow phenomena to simulate numerically. Over the past decade, the HART-II rotor test and its extensive experimental dataset have served as a major database for validation of CFD codes. Its strong BVI signature, with high levels of intrusive noise and vibrations, makes it a difficult test for computational methods. The main challenge is to accurately capture and preserve the vortices which interact with the rotor, while predicting correct blade deformations and loading. This doctoral dissertation presents the application of a coupled CFD/CSD methodology to the problem of helicopter BVI and compares three levels of fidelity for aerodynamic modeling: a hybrid lifting-line/free-wake (wake coupling) method, with a modified compressible unsteady model; a hybrid URANS/free-wake method; and a URANS-based wake capturing method, using multiple overset meshes to capture the entire flow field. To further increase numerical correlation, three helicopter fuselage models are implemented in the framework. The first is a high resolution 3D GPU panel code; the second is an immersed boundary based method, with 3D elliptic grid adaption; the last one uses a body-fitted, curvilinear fuselage mesh. The main contribution of this work is the implementation and systematic comparison of multiple numerical methods to perform BVI modeling. The trade-offs between solution accuracy and computational cost are highlighted for the different approaches. Various improvements have been made to each code to enhance physical fidelity, while advanced technologies, such as GPU computing, have been employed to increase efficiency. The resulting numerical setup covers all aspects of the simulation, creating a truly multi-fidelity and multi-physics framework. Overall, the wake capturing approach showed the best BVI phasing correlation and good blade deflection predictions, with slightly under-predicted aerodynamic loading magnitudes. However, it proved to be much more expensive than the other two methods. Wake coupling with the RANS solver had very good loading magnitude predictions, and therefore good acoustic intensities, with acceptable computational cost. The lifting-line based technique often over-predicted aerodynamic levels, due to the degree of empiricism of the model, but its very short run-times, thanks to GPU technology, make it a very attractive approach.
NASA Astrophysics Data System (ADS)
Holgate, J. T.; Coppins, M.
2018-04-01
Plasma-surface interactions are ubiquitous in the field of plasma science and technology. Much of the physics of these interactions can be captured with a simple model comprising a cold ion fluid and electrons which satisfy the Boltzmann relation. However, this model permits analytical solutions in a very limited number of cases. This paper presents a versatile and robust numerical implementation of the model for arbitrary surface geometries in Cartesian and axisymmetric cylindrical coordinates. Specific examples of surfaces with sinusoidal corrugations, trenches, and hemi-ellipsoidal protrusions verify this numerical implementation. The application of the code to problems involving plasma-liquid interactions, plasma etching, and electron emission from the surface is discussed.
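In the planar 1D limit this cold-ion/Boltzmann-electron model reduces to a sheath equation that is straightforward to integrate; the hedged sketch below does so in normalized units (lengths in Debye lengths, potential in units of Te/e). The sheath-edge Mach number, wall potential, and initial perturbation are arbitrary choices, and the paper's main contribution, the arbitrary-geometry treatment, is not captured by this reduction.

```python
import numpy as np
from scipy.integrate import solve_ivp

M = 1.2   # ion Mach number at the sheath edge (the Bohm criterion requires M >= 1)

def sheath_rhs(x, y):
    """Normalized 1D planar sheath: y = (phi, dphi/dx).  Electrons follow the
    Boltzmann relation n_e = exp(phi); cold ions obey energy and flux
    conservation, so n_i = M / sqrt(M**2 - 2*phi)."""
    phi, dphi = y
    n_e = np.exp(phi)
    n_i = M / np.sqrt(M**2 - 2.0 * phi)
    return [dphi, n_e - n_i]

def hit_wall(x, y):
    return y[0] + 3.0          # stop when phi reaches the wall potential -3 Te/e
hit_wall.terminal = True

sol = solve_ivp(sheath_rhs, [0.0, 100.0], [-1e-3, -1e-3],
                events=hit_wall, max_step=0.05, rtol=1e-8)
print(f"sheath thickness ~ {sol.t[-1]:.2f} Debye lengths for M = {M}")
```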
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-25
...] Approval and Promulgation of Implementation Plans; Kentucky; Stage II Requirements for Enterprise Holdings... Kentucky Division for Air Quality (KDAQ) on April 25, 2013, for the purpose of exempting an Enterprise... subject Enterprise Holdings, Inc., facility is currently being constructed at the Cincinnati/Northern...
Numerical simulation for turbulent heating around the forebody fairing of H-II rocket
NASA Astrophysics Data System (ADS)
Nomura, Shigeaki; Yamamoto, Yukimitsu; Fukushima, Yukio
Concerning the heat transfer distributions around the nose fairing of the new Japanese launch vehicle, the H-II rocket, numerical simulations have been conducted for the conditions along its nominal ascent trajectory, and some experimental tests have been conducted additionally to confirm the numerical results. The thin-layer approximated Navier-Stokes equations with the Baldwin-Lomax algebraic turbulence model were solved by a time-dependent finite difference method. Results of the numerical simulations showed that a high peak heating would occur near the stagnation point on the spherical nose portion due to the transition to turbulent flow during the period when large stagnation point heating was predicted. The experiments were conducted under the condition of M = 5 and Re = 10^6, which is similar to the flight condition where the maximum stagnation point heating would occur. The experimental results also showed a high peak heating near the stagnation point over the spherical nose portion.
Scratched: World War II Airborne Operations That Never Happened
2014-05-22
Approved for Public Release; Distribution is Unlimited. Scratched: World War II Airborne Operations That Never Happened. A monograph (Master's thesis); dates covered: June 2013 - May 2014. ... Maastricht gap, to get Allied troops through the West Wall. For numerous reasons, the overall Allied airborne effort of World War II provided mixed
43 CFR 3809.202 - Under what conditions will BLM defer to State regulation of operations?
Code of Federal Regulations, 2013 CFR
2013-10-01
... standards on a provision-by-provision basis to determine— (i) Whether non-numerical State standards are functionally equivalent to BLM counterparts; and (ii) Whether numerical State standards are the same as corresponding numerical BLM standards, except that State review and approval time frames do not have to be the...
43 CFR 3809.202 - Under what conditions will BLM defer to State regulation of operations?
Code of Federal Regulations, 2012 CFR
2012-10-01
... standards on a provision-by-provision basis to determine— (i) Whether non-numerical State standards are functionally equivalent to BLM counterparts; and (ii) Whether numerical State standards are the same as corresponding numerical BLM standards, except that State review and approval time frames do not have to be the...
43 CFR 3809.202 - Under what conditions will BLM defer to State regulation of operations?
Code of Federal Regulations, 2011 CFR
2011-10-01
... standards on a provision-by-provision basis to determine— (i) Whether non-numerical State standards are functionally equivalent to BLM counterparts; and (ii) Whether numerical State standards are the same as corresponding numerical BLM standards, except that State review and approval time frames do not have to be the...
43 CFR 3809.202 - Under what conditions will BLM defer to State regulation of operations?
Code of Federal Regulations, 2014 CFR
2014-10-01
... standards on a provision-by-provision basis to determine— (i) Whether non-numerical State standards are functionally equivalent to BLM counterparts; and (ii) Whether numerical State standards are the same as corresponding numerical BLM standards, except that State review and approval time frames do not have to be the...
Numerical study of the Columbia high-beta device: Torus-II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Izzo, R.
1981-01-01
The ionization, heating and subsequent long-time-scale behavior of the helium plasma in the Columbia fusion device, Torus-II, is studied. The purpose of this work is to perform numerical simulations while maintaining a high level of interaction with experimentalists. The device is operated as a toroidal z-pinch to prepare the gas for heating. This ionization of helium is studied using a zero-dimensional, two-fluid code. It is essentially an energy balance calculation that follows the development of the various charge states of the helium and any impurities (primarily silicon and oxygen) that are present. The code is an atomic physics model of Torus-II. In addition to ionization, we include three-body and radiative recombination processes.
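The zero-dimensional charge-state bookkeeping such a code performs can be sketched with a simple rate-equation system: the Python code below evolves the three helium charge states under constant, purely illustrative ionization and recombination rate coefficients. A real model would use temperature-dependent atomic data, include the impurity species, and couple to the energy balance.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder ionization (S) and recombination (A) rate coefficients in cm^3/s;
# illustrative constants only, not real atomic data.
S0, S1 = 1e-8, 2e-9          # He0 -> He+ , He+ -> He2+
A1, A2 = 1e-12, 5e-12        # He+ -> He0 , He2+ -> He+

def rates(t, n):
    """Charge-state balance for helium: n = (He0, He+, He2+) densities."""
    n0, n1, n2 = n
    ne = n1 + 2.0 * n2                      # quasineutral electron density
    dn0 = -S0 * ne * n0 + A1 * ne * n1
    dn1 = S0 * ne * n0 - (S1 + A1) * ne * n1 + A2 * ne * n2
    dn2 = S1 * ne * n1 - A2 * ne * n2
    return [dn0, dn1, dn2]

n_init = [1e14, 1e10, 0.0]                   # mostly neutral helium, cm^-3
sol = solve_ivp(rates, [0.0, 1e-3], n_init, method="LSODA", rtol=1e-8)
print("final charge-state fractions:", sol.y[:, -1] / sum(sol.y[:, -1]))
```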
NASA Astrophysics Data System (ADS)
Heister, Timo; Dannberg, Juliane; Gassmöller, Rene; Bangerth, Wolfgang
2017-08-01
Computations have helped elucidate the dynamics of Earth's mantle for several decades already. The numerical methods that underlie these simulations have greatly evolved within this time span, and today include dynamically changing and adaptively refined meshes, sophisticated and efficient solvers, and parallelization to large clusters of computers. At the same time, many of the methods - discussed in detail in a previous paper in this series - were developed and tested primarily using model problems that lack many of the complexities that are common to the realistic models our community wants to solve today. With several years of experience solving complex and realistic models, we here revisit some of the algorithm designs of the earlier paper and discuss the incorporation of more complex physics. In particular, we re-consider time stepping and mesh refinement algorithms, evaluate approaches to incorporate compressibility, and discuss dealing with strongly varying material coefficients, latent heat, and how to track chemical compositions and heterogeneities. Taken together and implemented in a high-performance, massively parallel code, the techniques discussed in this paper then allow for high resolution, 3-D, compressible, global mantle convection simulations with phase transitions, strongly temperature dependent viscosity and realistic material properties based on mineral physics data.
A Green's function method for local and non-local parallel transport in general magnetic fields
NASA Astrophysics Data System (ADS)
Del-Castillo-Negrete, Diego; Chacón, Luis
2009-11-01
The study of transport in magnetized plasmas is a problem of fundamental interest in controlled fusion and astrophysics research. Three issues make this problem particularly challenging: (i) The extreme anisotropy between the parallel (i.e., along the magnetic field), χ∥, and the perpendicular, χ⊥, conductivities (χ∥/χ⊥ may exceed 10^10 in fusion plasmas); (ii) Magnetic field line chaos, which in general complicates (and may preclude) the construction of magnetic field line coordinates; and (iii) Nonlocal parallel transport in the limit of small collisionality. Motivated by these issues, we present a Lagrangian Green's function method to solve the local and non-local parallel transport equation applicable to integrable and chaotic magnetic fields. The numerical implementation employs a volume-preserving field-line integrator [Finn and Chacón, Phys. Plasmas, 12 (2005)] for an accurate representation of the magnetic field lines regardless of the level of stochasticity. The general formalism and its algorithmic properties are discussed along with illustrative analytical and numerical examples. Problems of particular interest include: the departures from the Rechester-Rosenbluth diffusive scaling in the weak magnetic chaos regime, the interplay between non-locality and chaos, and the robustness of transport barriers in reverse shear configurations.
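In the local (diffusive) limit, parallel transport along a single field line reduces to 1D heat conduction in the arc length, for which the Green's function is the 1D heat kernel; the sketch below evaluates that convolution on a straight, uniformly sampled line. It omits the field-line integration, the chaotic-field machinery, and the non-local closure that are the substance of the method above; the profile, conductivity, and time are illustrative.

```python
import numpy as np

def parallel_heat_green(T0, s, chi_par, t):
    """Diffusive parallel heat transport along one field line: convolve the
    initial temperature profile with the 1D heat kernel
    G(s, t) = exp(-s^2 / (4 chi t)) / sqrt(4 pi chi t).  In a Lagrangian
    Green's function method the arc length s would come from a field-line
    integrator; here s is simply a uniform grid along one straight line."""
    ds = s[1] - s[0]
    offsets = s - s[len(s) // 2]                        # kernel centered on the grid
    G = np.exp(-offsets**2 / (4.0 * chi_par * t)) / np.sqrt(4.0 * np.pi * chi_par * t)
    return ds * np.convolve(T0, G, mode="same")

s = np.linspace(-10.0, 10.0, 2001)                      # arc length along the line
T0 = np.exp(-s**2)                                      # localized hot spot
T = parallel_heat_green(T0, s, chi_par=1.0, t=0.5)
print("peak temperature drops from", T0.max(), "to", round(T.max(), 4))
```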
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amerio, S.; Behari, S.; Boyd, J.
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have approximately 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 and beyond. To achieve this goal, we have implemented a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology and leverages resources available from currently-running experiments at Fermilab. Lastly, these efforts have also provided useful lessons in ensuring long-term data access for numerous experiments, and enable high-quality scientific output for years to come.
Data preservation at the Fermilab Tevatron
NASA Astrophysics Data System (ADS)
Amerio, S.; Behari, S.; Boyd, J.; Brochmann, M.; Culbertson, R.; Diesburg, M.; Freeman, J.; Garren, L.; Greenlee, H.; Herner, K.; Illingworth, R.; Jayatilaka, B.; Jonckheere, A.; Li, Q.; Naymola, S.; Oleynik, G.; Sakumoto, W.; Varnes, E.; Vellidis, C.; Watts, G.; White, S.
2017-04-01
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have approximately 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 and beyond. To achieve this goal, we have implemented a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology and leverages resources available from currently-running experiments at Fermilab. These efforts have also provided useful lessons in ensuring long-term data access for numerous experiments, and enable high-quality scientific output for years to come.
Loop quantum cosmology of Bianchi IX: effective dynamics
NASA Astrophysics Data System (ADS)
Corichi, Alejandro; Montoya, Edison
2017-03-01
We study solutions to the effective equations for the Bianchi IX class of spacetimes within loop quantum cosmology (LQC). We consider Bianchi IX models whose matter content is a massless scalar field, by numerically solving the loop quantum cosmology effective equations, with and without inverse triad corrections. The solutions are classified using certain geometrically motivated classical observables. We show that both effective theories—with lapse N = V and N = 1—resolve the big bang singularity and reproduce the classical dynamics far from the bounce. Moreover, due to the positive spatial curvature, there is an infinite number of bounces and recollapses. We study the limit of large field momentum and show that both effective theories reproduce the same dynamics, thus recovering general relativity. We implement a procedure to identify amongst the Bianchi IX solutions, those that behave like k = 0,1 FLRW as well as Bianchi I, II, and VII0 models. The effective solutions exhibit Bianchi I phases with Bianchi II transitions and also Bianchi VII0 phases, which had not been studied before. We comment on the possible implications of these results for a quantum modification to the classical BKL behaviour.
47 CFR 400.4 - Application requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... proposed to be funded for the implementation and operation of Phase II E-911 services or migration to an IP... telecommunications services in the implementation and delivery of Phase II E-911 services or for migration to an IP...-911 services or for migration to an IP-enabled emergency network. (2) Project budget. A project budget...
47 CFR 400.4 - Application requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... proposed to be funded for the implementation and operation of Phase II E-911 services or migration to an IP... telecommunications services in the implementation and delivery of Phase II E-911 services or for migration to an IP...-911 services or for migration to an IP-enabled emergency network. (2) Project budget. A project budget...
77 FR 40589 - Notice of Proposed Information Collection Requests; Institute of Education Sciences...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-10
... Sciences; Implementation of Title I/II Program Initiatives SUMMARY: This evaluation will examine the implementation of core policies promoted by Title I and Title II at the state, district, and school levels in four...- 401-0920. Please specify the complete title of the information collection and OMB Control Number when...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-22
... Competition Bureau Seeks Updates and Corrections to TelcoMaster Table for Connect America Cost Model AGENCY... centers to particular holding companies for purposes of Connect America Phase II implementation. DATES... companies for purposes of Connect America Phase II implementation. 2. The USF/ICC Transformation Order, 76...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Priimak, Dmitri
2014-12-01
We present a finite difference numerical algorithm for solving two dimensional spatially homogeneous Boltzmann transport equation which describes electron transport in a semiconductor superlattice subject to crossed time dependent electric and constant magnetic fields. The algorithm is implemented both in C language targeted to CPU and in CUDA C language targeted to commodity NVidia GPU. We compare performances and merits of one implementation versus another and discuss various software optimisation techniques.
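A much-reduced sketch of the kind of explicit finite-difference update such a solver performs: the code below treats a 1D, spatially homogeneous Boltzmann equation with an electric field only and a relaxation-time collision term on a periodic miniband (a tight-binding dispersion is assumed), using an upwind difference in momentum. The 2D momentum space, the crossed magnetic field, the actual collision model, and the CUDA implementation are all beyond this toy; every parameter value here is illustrative.

```python
import numpy as np

Np, d = 256, 1.0                              # momentum grid points, lattice period
p = np.linspace(-np.pi / d, np.pi / d, Np, endpoint=False)
dp = p[1] - p[0]
tau, E = 1.0, 0.5                             # relaxation time, field (normalized units)
dt = 0.4 * dp / abs(E)                        # CFL-limited time step

f_eq = np.exp(np.cos(p * d))                  # illustrative equilibrium on the miniband
f_eq /= f_eq.sum() * dp
f = f_eq.copy()

def step(f):
    # Upwind derivative in p for the force term E*df/dp (E > 0), periodic zone.
    dfdp = (f - np.roll(f, 1)) / dp
    return f + dt * (-E * dfdp - (f - f_eq) / tau)

for _ in range(2000):                         # march to a steady state
    f = step(f)
drift_velocity = np.sum(np.sin(p * d) * f) * dp    # miniband group velocity ~ sin(p*d)
print("steady-state drift velocity (arb. units):", round(float(drift_velocity), 4))
```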
Cull, Brooke J; Dzewaltowski, David A; Guagliano, Justin M; Rosenkranz, Sara K; Knutson, Cassandra K; Rosenkranz, Richard R
2018-01-01
To evaluate the effectiveness of in-person versus online Girl Scout leader wellness training for implementation of wellness-promoting practices during troop meetings (phase I) and to assess training adoption and current practices across the council (phase II). Pragmatic superiority trial (phase I) followed by a serial cross-sectional study (phase II). Girl Scout troop meetings in Northeast Kansas. Eighteen troop leaders from 3 counties (phase I); 113 troop leaders from 7 counties (phase II). Phase I: Troop leaders attended 2 wellness training sessions (first in groups, second individually), wherein leaders set wellness-promoting practice implementation goals, self-monitored progress, and received guidance and resources for implementation. Leaders received the intervention in person or online. Phase I: At baseline and postintervention, leaders completed a wellness-promoting practice implementation questionnaire assessing practices during troop meetings (max score = 11). Phase II: Leaders completed a survey about typical troop practices and interest in further training. Phase I: Generalized linear mixed modeling. Phase I: In-person training increased wellness-promoting practice implementation more than online training (in person = 2.1 ± 1.8; online = 0.2 ± 1.2; P = .022). Phase II: Fifty-six percent of leaders adopted the training. For 8 of 11 wellness categories, greater than 50% of leaders employed wellness-promoting practices. In-person training was superior to online training for improvements in wellness-promoting practices. Wellness training was adopted by the majority of leaders across the council.
NASA Astrophysics Data System (ADS)
Gusev, Anatoly; Fomin, Vladimir; Diansky, Nikolay; Korshenko, Evgeniya
2017-04-01
In this paper, we present the improved version of the ocean general circulation sigma-model developed in the Institute of Numerical Mathematics of the Russian Academy of Sciences (INM RAS). The previous version, referred to as INMOM (Institute of Numerical Mathematics Ocean Model), is used as the oceanic component of the IPCC climate system model INMCM (Institute of Numerical Mathematics Climate Model) (Volodin et al 2010, 2013). In addition, INMOM was the only sigma-model used for simulations according to the CORE-II scenario (Danabasoglu et al. 2014, 2016; Downes et al. 2015; Farneti et al. 2015). In general, INMOM results are comparable to those of other OGCMs and were used for investigation of climatic variations in the North Atlantic (Gusev and Diansky 2014). However, detailed analysis of some CORE-II INMOM results revealed some disadvantages of the INMOM leading to considerable errors in reproducing some ocean characteristics. For example, the mass transport in the Antarctic Circumpolar Current (ACC) was overestimated, and there were noticeable errors in reproducing the thermohaline structure of the ocean. After analysing the previous results, a new version of the OGCM was developed. It was decided to entitle it INMSOM (Institute of Numerical Mathematics Sigma Ocean Model). The new title allows one to distinguish the new model, first, from its older version and, second, from another z-model developed in the INM RAS and referred to as INMIO (Institute of Numerical Mathematics and Institute of Oceanology ocean model) (Ushakov et al. 2016). There were numerous modifications in the model, some of which are as follows. 1) Formulation of the ocean circulation problem in terms of a full free surface, taking into account water amount variation. 2) Use of a tensor form of the lateral viscosity operator invariant to rotation. 3) Use of isopycnal diffusion, including Gent-McWilliams mixing. 4) Computation of atmospheric forcing according to the NCAR methodology (Large and Yeager 2009). 5) Improvement of the river runoff algorithm, accounting for the total amount of discharged water. 6) Use of an explicit leapfrog time scheme for all lateral operators and an implicit Euler scheme for vertical diffusion and viscosity. The INMSOM is tested by reproducing World Ocean circulation and thermohaline characteristics using the well-proven CORE dataset. The presentation is devoted to the analysis of the new INMSOM simulation results, estimation of their quality, and comparison to the ones previously obtained with the INMOM. The main aim of the INMSOM development is to use it as the oceanic component of the next version of INMCM. The work was supported by the Russian Foundation for Basic Research (grants № 16-05-00534 and № 15-05-07539). References 1. Danabasoglu, G., Yeager S.G., Bailey D., et al., 2014: North Atlantic simulations in Coordinated Ocean-ice Reference Experiments phase II (CORE-II). Part I: Mean states. Ocean Modelling, 73, 76-107. 2. Danabasoglu, G., Yeager S.G., Kim W.M. et al., 2016: North Atlantic simulations in Coordinated Ocean-ice Reference Experiments phase II (CORE-II). Part II: Inter-annual to decadal variability. Ocean Modelling, 97, 65-90. 3. Downes S.M., Farneti R., Uotila P. et al. An assessment of Southern Ocean water masses and sea ice during 1988-2007 in a suite of interannual CORE-II simulations. Ocean Modelling (2015), 94, 67-94. 4. Farneti R., Downes S.M., Griffies S.M. et al.
An assessment of Antarctic Circumpolar Current and Southern Ocean Meridional Overturning Circulation during 1958-2007 in a suite of interannual CORE-II simulations, Ocean Modelling (2015), 93, 84-120. 5. Gusev A.V. and Diansky N.A. Numerical simulation of the World ocean circulation and its climatic variability for 1948-2007 using the INMOM. Izvestiya, Atmospheric and Oceanic Physics, 2014, V. 50, N. 1, P. 1-12 6. Large, W., Yeager, S., 2009. The global climatology of an interannually varying air-sea flux data set. Clim Dyn, V. 33, P. 341-364. 7. Ushakov K.V., Grankina T.B., Ibraev R.A. Modeling the water circulation in the North Atlantic in the scope of the CORE-II experiment. Izvestiya, Atmospheric and Oceanic Physics. 2016. V. 52, № 4, P. 365-375
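Item 6 of the list above mentions an implicit Euler scheme for vertical diffusion; the hedged sketch below shows what such a step looks like for a single water column with constant diffusivity and zero-flux boundaries, solved as a tridiagonal (banded) system. Grid spacing, diffusivity, and time step are arbitrary illustrative values; a real model applies this column-by-column with spatially varying coefficients.

```python
import numpy as np
from scipy.linalg import solve_banded

def implicit_vertical_diffusion(T, kappa, dz, dt):
    """One implicit-Euler step of dT/dt = d/dz(kappa dT/dz) on a uniform
    vertical grid with zero-flux top and bottom boundaries, reduced to a
    single column and constant kappa for brevity."""
    n = T.size
    r = kappa * dt / dz**2
    ab = np.zeros((3, n))                # banded storage: upper, main, lower diagonals
    ab[0, 1:] = -r
    ab[2, :-1] = -r
    ab[1, :] = 1.0 + 2.0 * r
    ab[1, 0] = ab[1, -1] = 1.0 + r       # zero-flux (Neumann) boundaries
    return solve_banded((1, 1), ab, T)

z = np.linspace(0.0, 1000.0, 101)                  # depth (m)
T = 10.0 + 5.0 * np.exp(-((z - 200.0) / 50.0)**2)  # warm anomaly at 200 m
for _ in range(48):                                # two days of 1-hour steps
    T = implicit_vertical_diffusion(T, kappa=1e-3, dz=z[1] - z[0], dt=3600.0)
print("column mean conserved:", round(float(T.mean()), 4))
```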
Spectral methods in general relativity and large Randall-Sundrum II black holes
NASA Astrophysics Data System (ADS)
Abdolrahimi, Shohreh; Cattoën, Céline; Page, Don N.; Yaghoobpour-Tari, Shima
2013-06-01
Using a novel numerical spectral method, we have found solutions for large static Randall-Sundrum II (RSII) black holes by perturbing a numerical AdS5-CFT4 solution to the Einstein equation with a negative cosmological constant Λ that is asymptotically conformal to the Schwarzschild metric. We used a numerical spectral method independent of the Ricci-DeTurck-flow method used by Figueras, Lucietti, and Wiseman for a similar numerical solution. We have compared our black-hole solution to the one Figueras and Wiseman have derived by perturbing their numerical AdS5-CFT4 solution, showing that our solution agrees closely with theirs. We have obtained a closed-form approximation to the metric of the black hole on the brane. We have also deduced the new results that to first order in 1/(-ΛM²), the Hawking temperature and entropy of an RSII static black hole have the same values as the Schwarzschild metric with the same mass, but the horizon area is increased by about 4.7/(-Λ).
Digital optical interconnects for photonic computing
NASA Astrophysics Data System (ADS)
Guilfoyle, Peter S.; Stone, Richard V.; Zeise, Frederick F.
1994-05-01
A 32-bit digital optical computer (DOC II) has been implemented in hardware utilizing 8,192 free-space optical interconnects. The architecture exploits parallel interconnect technology by implementing microcode at the primitive level. A burst mode of 0.8192 x 10^12 binary operations per second has been reliably demonstrated. The prototype has been successful in demonstrating general purpose computation. In addition to emulating the RISC instruction set within the UNIX operating environment, relational database text search operations have been implemented on DOC II.
Silva, F G A; de Moura, M F S F; Dourado, N; Xavier, J; Pereira, F A M; Morais, J J L; Dias, M I R; Lourenço, P J; Judas, F M
2017-08-01
Fracture characterization of human cortical bone under mode II loading was analyzed using a miniaturized version of the end-notched flexure test. A data reduction scheme based on crack equivalent concept was employed to overcome uncertainties on crack length monitoring during the test. The crack tip shear displacement was experimentally measured using digital image correlation technique to determine the cohesive law that mimics bone fracture behavior under mode II loading. The developed procedure was validated by finite element analysis using cohesive zone modeling considering a trapezoidal with bilinear softening relationship. Experimental load-displacement curves, resistance curves and crack tip shear displacement versus applied displacement were used to validate the numerical procedure. The excellent agreement observed between the numerical and experimental results reveals the appropriateness of the proposed test and procedure to characterize human cortical bone fracture under mode II loading. The proposed methodology can be viewed as a novel valuable tool to be used in parametric and methodical clinical studies regarding features (e.g., age, diseases, drugs) influencing bone shear fracture under mode II loading.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-24
... under the Clean Air Act (CAA), EPA is taking final action to approve state implementation plan (SIP... the background for this action? II. What action is EPA taking? III. Statutory and Executive Order... proposal to approve Indiana's state board provisions. II. What action is EPA taking? For the reasons...
Implementation of Head Start Planned Variation: 1970-1971. Part II.
ERIC Educational Resources Information Center
Lukas, Carol Van Deusen; Wohlleb, Cynthia
This volume of appendices is Part II of a study of program implementation in 12 models of Head Start Planned Variation. It presents details of the data analysis, copies of data collection instruments, and additional analyses and statistics. The appendices are: (A) Analysis of Variance Designs, (B) Copies of Instruments, (C) Additional Analyses,…
Numerical Simulation of Oblique Impacts: Impact Melt and Transient Cavity Size
NASA Technical Reports Server (NTRS)
Artemieva, N. A.; Ivanov, B. A.
2001-01-01
We present 3D hydrocode numerical modeling for oblique impacts (i) to estimate the melt production and (ii) to trace the evolution of the transient cavity shape until crater collapse. Additional information is contained in the original extended abstract.
Rapid metabolism of exogenous angiotensin II by catecholaminergic neuronal cells in culture media.
Basu, Urmi; Seravalli, Javier; Madayiputhiya, Nandakumar; Adamec, Jiri; Case, Adam J; Zimmerman, Matthew C
2015-02-01
Angiotensin II (AngII) acts on central neurons to increase neuronal firing and induce sympathoexcitation, which contribute to the pathogenesis of cardiovascular diseases including hypertension and heart failure. Numerous studies have examined the precise AngII-induced intraneuronal signaling mechanism in an attempt to identify new therapeutic targets for these diseases. Considering the technical challenges in studying specific intraneuronal signaling pathways in vivo, especially in the cardiovascular control brain regions, most studies have relied on neuronal cell culture models. However, there are numerous limitations in using cell culture models to study AngII intraneuronal signaling, including the lack of evidence indicating the stability of AngII in culture media. Herein, we tested the hypothesis that exogenous AngII is rapidly metabolized in neuronal cell culture media. Using liquid chromatography-tandem mass spectrometry, we measured levels of AngII and its metabolites, Ang III, Ang IV, and Ang-1-7, in neuronal cell culture media after administration of exogenous AngII (100 nmol/L) to a neuronal cell culture model (CATH.a neurons). AngII levels rapidly declined in the media, returning to near baseline levels within 3 h of administration. Additionally, levels of Ang III and Ang-1-7 acutely increased, while levels of Ang IV remained unchanged. Replenishing the media with exogenous AngII every 3 h for 24 h resulted in a consistent and significant increase in AngII levels for the duration of the treatment period. These data indicate that AngII is rapidly metabolized in neuronal cell culture media, and replenishing the media at least every 3 h is needed to sustain chronically elevated levels. © 2015 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of the American Physiological Society and The Physiological Society.
1982-01-29
Computer Program User's Manual for FIREFINDER Digital Topographic Data Verification Library Dubbing System, Volume II: Dubbing, 29 January 1982. Keywords: Software Library; FIREFINDER; Dubbing. This manual describes the computer
Marom, Gil; Bluestein, Danny
2016-01-01
This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be taken in the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses, similar to the ones presented here, should be employed.
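A minimal sketch of the Lagrangian post-processing step discussed above: given a scalar stress history sampled along a pathline, accumulate stress over a single device passage and over repeated passages. The synthetic stress samples, the sampling interval, and the (linear by default) accumulation exponent are assumptions of this sketch, not the specific blood damage model evaluated in the paper.

```python
import numpy as np

def stress_accumulation(tau, dt, power=1.0):
    """Linear (power = 1) or power-law stress accumulation along a pathline:
    SA = sum(tau**power * dt), with tau the scalar stress at each Lagrangian
    time step dt; the exponent is model-dependent."""
    return np.sum(tau**power * dt)

rng = np.random.default_rng(3)
dt = 1e-3                                                 # s, sampling interval
tau_single = rng.gamma(shape=2.0, scale=5.0, size=500)    # synthetic stress history (Pa)

# Single passage vs. the same passage repeated 10 times (repeated-passage option).
sa_single = stress_accumulation(tau_single, dt)
sa_repeated = sum(stress_accumulation(tau_single, dt) for _ in range(10))
print(f"single passage SA = {sa_single:.3f} Pa*s, 10 passages SA = {sa_repeated:.3f} Pa*s")
```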
Fusing Symbolic and Numerical Diagnostic Computations
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
X-2000 Anomaly Detection Language denotes a developmental computing language, and the software that establishes and utilizes the language, for fusing two diagnostic computer programs, one implementing a numerical analysis method, the other implementing a symbolic analysis method into a unified event-based decision analysis software system for realtime detection of events (e.g., failures) in a spacecraft, aircraft, or other complex engineering system. The numerical analysis method is performed by beacon-based exception analysis for multi-missions (BEAMs), which has been discussed in several previous NASA Tech Briefs articles. The symbolic analysis method is, more specifically, an artificial-intelligence method of the knowledge-based, inference engine type, and its implementation is exemplified by the Spacecraft Health Inference Engine (SHINE) software. The goal in developing the capability to fuse numerical and symbolic diagnostic components is to increase the depth of analysis beyond that previously attainable, thereby increasing the degree of confidence in the computed results. In practical terms, the sought improvement is to enable detection of all or most events, with no or few false alarms.
40 CFR 52.1671 - Classification of regions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED), New York, § 52.1671 Classification of regions. The New York plans were evaluated on the basis of the following classifications: [table of New York air quality control regions with Roman-numeral classifications by pollutant, e.g., Central New York Intrastate: I II III I I; Genesee-Finger Lakes Intrastate: II II III III ...]
EFFECTS OF ELECTROOSMOSIS ON SOIL TEMPERATURE AND HYDRAULIC HEAD: II. NUMERICAL SIMULATION
A numerical model to simulate the distributions of voltage, soil temperature, and hydraulic head during the field test of electroosmosis was developed. The two-dimensional governing equations for the distributions of voltage, soil temperature, and hydraulic head within a cylindri...
Large Black Holes in the Randall-Sundrum II Model
NASA Astrophysics Data System (ADS)
Yaghoobpour Tari, Shima
The Einstein equation with a negative cosmological constant Λ in five dimensions for the Randall-Sundrum II model, which includes a black hole, has been solved numerically. We have constructed an AdS5-CFT4 solution numerically, using a spectral method to minimize the integral of the square of the error of the Einstein equation, with 210 parameters to be determined by optimization. This metric is conformal to the Schwarzschild metric at an AdS5 boundary with an infinite scale factor. So, we consider this solution as an infinite-mass black hole solution. We have rewritten the infinite-mass black hole in the Fefferman-Graham form and obtained the numerical components of the CFT energy-momentum tensor. Using them, we have perturbed the metric to relocate the brane from infinity and obtained a large static black hole solution for the Randall-Sundrum II model. The changes of mass, entropy, temperature and area of the large black hole from the Schwarzschild metric are studied up to first order in the perturbation parameter 1/(-Λ5M²). The Hawking temperature and entropy for our large black hole have the same values as the Schwarzschild metric with the same mass, but the horizon area is increased by about 4.7/(-Λ5). Figueras, Lucietti, and Wiseman found an AdS5-CFT4 solution using an independent and different method from ours, called the Ricci-DeTurck-flow method. Then, Figueras and Wiseman perturbed this solution in the same way as we have done and obtained the solution for the large black hole in the Randall-Sundrum II model. These two numerical solutions are the first mathematical proofs for having a large black hole in the Randall-Sundrum II model. We have compared their results with ours for the CFT energy-momentum tensor components and the perturbed metric. We have shown that the results are closely in agreement, which can be considered as evidence that the solution for the large black hole in the Randall-Sundrum II model exists.
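The core numerical idea, expanding the unknown in a spectral basis and minimizing the integrated squared residual of the field equation over the expansion coefficients, can be illustrated on a far simpler problem. The hedged sketch below applies it to a linear ODE with a known solution using a 10-coefficient Chebyshev expansion and a least-squares optimizer; the basis size, collocation points, and boundary-condition weighting are arbitrary, and nothing of the actual 210-parameter Einstein-equation setup is reproduced.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.optimize import least_squares

# Toy spectral least-squares solve of u'' + u = 0, u(0) = 0, u'(0) = 1 on
# [0, pi/2] (exact solution sin x), as a structural stand-in for minimizing
# the squared error of a field equation over spectral coefficients.
L = 0.5 * np.pi
t = np.cos(np.linspace(0.0, np.pi, 200))        # Chebyshev-type points in [-1, 1]
x = 0.5 * L * (t + 1.0)                         # mapped physical coordinate
s = 2.0 / L                                     # dt/dx for the chain rule

def residuals(c):
    u = C.chebval(t, c)
    upp = C.chebval(t, C.chebder(c, 2)) * s**2          # u''(x)
    bc1 = C.chebval(-1.0, c)                             # u(0) = 0
    bc2 = C.chebval(-1.0, C.chebder(c, 1)) * s - 1.0     # u'(0) = 1
    return np.concatenate([upp + u, 10.0 * np.array([bc1, bc2])])

sol = least_squares(residuals, np.zeros(10))
print("max error vs sin(x):", float(np.max(np.abs(C.chebval(t, sol.x) - np.sin(x)))))
```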
The AGORA High-resolution Galaxy Simulations Comparison Project II: Isolated disk test
Kim, Ji-hoon; Agertz, Oscar; Teyssier, Romain; ...
2016-12-20
Using an isolated Milky Way-mass galaxy simulation, we compare results from 9 state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt-Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly-formed stellar clump mass functions show more significant variation (difference by up to a factor of ~3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low density region, and between more diffusive and less diffusive schemes in the high density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Lastly, our experiment reassures that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.
The AGORA High-resolution Galaxy Simulations Comparison Project II: Isolated disk test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Ji-hoon; Agertz, Oscar; Teyssier, Romain
Using an isolated Milky Way-mass galaxy simulation, we compare results from 9 state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt-Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly-formed stellar clump mass functions show more significant variation (difference by up to a factor of ~3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low density region, and between more diffusive and less diffusive schemes in the high density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Lastly, our experiment reassures that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.
THE AGORA HIGH-RESOLUTION GALAXY SIMULATIONS COMPARISON PROJECT. II. ISOLATED DISK TEST
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Ji-hoon; Agertz, Oscar; Teyssier, Romain
Using an isolated Milky Way-mass galaxy simulation, we compare results from nine state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt–Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly formed stellar clump mass functions show more significant variation (difference by up to a factor of ∼3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low-density region, and between more diffusive and less diffusive schemes in the high-density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Our experiment reassures that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyd, J.; Herner, K.; Jayatilaka, B.
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. Furthermore, these efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.
NASA Technical Reports Server (NTRS)
Acikmese, Behcet A.; Carson, John M., III
2005-01-01
A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. The control consists of two components: (i) a feedforward part and (ii) a feedback part. The feedforward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives, and derivatives in polytopes. An illustrative numerical example is also provided.
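A minimal sketch of the feedforward-plus-feedback receding-horizon structure described above, reduced to a linear double-integrator with an additive disturbance standing in for the model uncertainty: the feedforward input comes from an online finite-horizon LQ solve (backward Riccati recursion) on the nominal dynamics, and an offline-designed gain feeds back the deviation from the nominal trajectory. System matrices, weights, horizon, and disturbance level are illustrative; this is not the paper's algorithm or its resolvability guarantees.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])          # double-integrator dynamics
B = np.array([[0.005], [0.1]])
Q, R, N = np.diag([10.0, 1.0]), np.array([[0.1]]), 20

# Offline feedback gain (here an infinite-horizon LQR gain).
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def finite_horizon_ff(x_nom):
    """Feedforward: first input of a finite-horizon LQ problem for the
    nominal dynamics, via a backward Riccati recursion of length N."""
    S = Q.copy()
    for _ in range(N):
        Kk = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
        S = Q + A.T @ S @ (A - B @ Kk)
    return -Kk @ x_nom

rng = np.random.default_rng(4)
x = np.array([1.0, 0.0])                         # true state
x_nom = x.copy()                                 # nominal (disturbance-free) state
for _ in range(100):
    u_ff = finite_horizon_ff(x_nom)
    u = u_ff - K @ (x - x_nom)                   # feedforward + feedback correction
    x_nom = A @ x_nom + B @ u_ff
    x = A @ x + B @ u + rng.normal(0, 0.01, 2)   # true system with disturbance
print("final true state:", np.round(x, 3))
```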
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simakov, Andrei N., E-mail: simakov@lanl.gov; Molvig, Kim
2016-03-15
Paper I [A. N. Simakov and K. Molvig, Phys. Plasmas 23, 032115 (2016)] obtained a fluid description for an unmagnetized collisional plasma with multiple ion species. To evaluate collisional plasma transport fluxes, required for such a description, two linear systems of equations need to be solved to obtain corresponding transport coefficients. In general, this should be done numerically. Herein, the general formalism is used to obtain analytical expressions for such fluxes for several specific cases of interest: a deuterium-tritium plasma; a plasma containing two ion species with strongly disparate masses, which agrees with previously obtained results; and a three ion species plasma made of deuterium, tritium, and gold. These results can be used for understanding the behavior of the aforementioned plasmas, or for verifying a code implementation of the general multi-ion formalism.
Liu, Jian; Miller, William H
2011-03-14
We show the exact expression of the quantum mechanical time correlation function in the phase space formulation of quantum mechanics. The trajectory-based dynamics that conserves the quantum canonical distribution, equilibrium Liouville dynamics (ELD) proposed in Paper I, is then used to approximately evaluate the exact expression. It gives exact thermal correlation functions (even of nonlinear operators, i.e., nonlinear functions of position or momentum operators) in the classical, high temperature, and harmonic limits. Various methods have been presented for the implementation of ELD. Numerical tests of the ELD approach in the Wigner or Husimi phase space have been made for a harmonic oscillator and two strongly anharmonic model problems; for each potential, autocorrelation functions of both linear and nonlinear operators have been calculated. The results suggest that ELD can be a potentially useful approach for describing quantum effects in complex systems in the condensed phase.
Data preservation at the Fermilab Tevatron
Amerio, S.; Behari, S.; Boyd, J.; ...
2017-01-22
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have approximately 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 and beyond. To achieve this goal, we have implemented a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology and leverages resources available from currently-running experiments at Fermilab. Lastly, these efforts have also provided useful lessons in ensuring long-term data access for numerous experiments, and enable high-quality scientific output for years to come.
Data preservation at the Fermilab Tevatron
Boyd, J.; Herner, K.; Jayatilaka, B.; ...
2015-12-23
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. Furthermore, these efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.
Simakov, Andrei Nikolaevich; Molvig, Kim
2016-03-17
Paper I [A. N. Simakov and K. Molvig, Phys. Plasmas 23, 032115 (2016)] obtained a fluid description for an unmagnetized collisional plasma with multiple ion species. To evaluate collisional plasma transport fluxes, required for such a description, two linear systems of equations need to be solved to obtain corresponding transport coefficients. In general, this should be done numerically. Herein, the general formalism is used to obtain analytical expressions for such fluxes for several specific cases of interest: a deuterium-tritium plasma; a plasma containing two ion species with strongly disparate masses, which agrees with previously obtained results; and a three ion species plasma made of deuterium, tritium, and gold. We find that these results can be used for understanding the behavior of the aforementioned plasmas, or for verifying a code implementation of the general multi-ion formalism.
A technique for global monitoring of net solar irradiance at the ocean surface. II - Validation
NASA Technical Reports Server (NTRS)
Chertock, Beth; Frouin, Robert; Gautier, Catherine
1992-01-01
The generation and validation of the first satellite-based long-term record of surface solar irradiance over the global oceans are addressed. The record is generated using Nimbus-7 earth radiation budget (ERB) wide-field-of-view planetary-albedo data as input to a numerical algorithm designed and implemented based on radiative transfer theory. The mean monthly values of net surface solar irradiance are computed on a 9-deg latitude-longitude spatial grid for November 1978-October 1985. The new data set is validated in comparisons with short-term, regional, high-resolution, satellite-based records. The ERB-based values of net surface solar irradiance are compared with corresponding values based on radiance measurements taken by the Visible-Infrared Spin Scan Radiometer aboard GOES series satellites. Errors in the new data set are estimated to lie between 10 and 20 W/sq m on monthly time scales.
Data preservation at the Fermilab Tevatron
NASA Astrophysics Data System (ADS)
Boyd, J.; Herner, K.; Jayatilaka, B.; Roser, R.; Sakumoto, W.
2015-12-01
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. These efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-16
... the December 12, 2006, EPA policy memorandum from Stephen D. Page, entitled ``Removal of Stage II... memorandum from Stephen D. Page entitled, ``Removal of Stage II Vapor Recovery in Situations Where Widespread...
Beam position monitor gate functionality implementation and applications
Cheng, Weixing; Ha, Kiman; Li, Yongjun; ...
2018-06-14
We introduce a novel technique to implement gate functionality for the beam position monitors (BPM) at the National Synchrotron Light Source II (NSLS-II). The functionality, now implemented in FPGA, allows us to acquire two separated bunch-trains’ synchronized turn-by-turn (TBT) data simultaneously with the NSLS-II in-house developed BPM system. The gated position resolution is improved about 3 times by narrowing the sampling width. Experimentally we demonstrated that the machine lattice could be transparently characterized with the gated TBT data of a short diagnostic bunch-train [Cheng et al., 2017; Li et al., 2017]. Other applications, for example, precisely characterizing the storage ring impedance/wake-field through recording the beam positions of two separated bunch trains, have been experimentally demonstrated.
Beam position monitor gate functionality implementation and applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Weixing; Ha, Kiman; Li, Yongjun
We introduce a novel technique to implement gate functionality for the beam position monitors (BPM) at the National Synchrotron Light Source II (NSLS-II). The functionality, now implemented in FPGA, allows us to acquire two separated bunch-trains’ synchronized turn-by-turn (TBT) data simultaneously with the NSLS-II in-house developed BPM system. The gated position resolution is improved about 3 times by narrowing the sampling width. Experimentally we demonstrated that the machine lattice could be transparently characterized with the gated TBT data of a short diagnostic bunch-train [Cheng et al., 2017; Li et al., 2017]. Other applications, for example, precisely characterizing the storage ring impedance/wake-field through recording the beam positions of two separated bunch trains, have been experimentally demonstrated.
compuGUT: An in silico platform for simulating intestinal fermentation
NASA Astrophysics Data System (ADS)
Moorthy, Arun S.; Eberl, Hermann J.
The microbiota inhabiting the colon and its effect on health is a topic of significant interest. In this paper, we describe the compuGUT - a simulation tool developed to assist in exploring interactions between intestinal microbiota and their environment. The primary numerical machinery is implemented in C, and the accessory scripts for loading and visualization are prepared in bash (LINUX) and R. SUNDIALS libraries are employed for numerical integration, and googleVis API for interactive visualization. Supplementary material includes a concise description of the underlying mathematical model, and detailed characterization of numerical errors and computing times associated with implementation parameters.
Effect of Accessory Power Take-off Variation on a Turbofan Engine Performance
2012-09-26
amount of energy from the low pressure spool shaft. A high bypass turbofan engine was modeled using the Numerical Propulsion System Simulation (NPSS). [Remainder of the record is table-of-contents residue; recoverable headings: II.2 Power Extraction Techniques; II.3 NPSS; III. Methodology and Simulation Setup; III.1 Engine NPSS Model.]
An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices, part 2
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Nachtigal, Noel M.
1990-01-01
It is shown how the look-ahead Lanczos process (combined with a quasi-minimal residual (QMR) approach) can be used to develop a robust black box solver for large sparse non-Hermitian linear systems. Details of an implementation of the resulting QMR algorithm are presented. It is demonstrated that the QMR method is closely related to the biconjugate gradient (BCG) algorithm; however, unlike BCG, the QMR algorithm has smooth convergence curves and good numerical properties. We report numerical experiments with our implementation of the look-ahead Lanczos algorithm, both for eigenvalue problems and linear systems. Also, program listings of FORTRAN implementations of the look-ahead algorithm and the QMR method are included.
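For readers who want to experiment with QMR versus BCG without the FORTRAN listings, a minimal sketch using SciPy's generic Krylov solvers is given below. Note that scipy.sparse.linalg.qmr is a general QMR implementation, not the look-ahead Lanczos variant described in the report, and the non-symmetric tridiagonal test matrix is an arbitrary placeholder.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import qmr, bicg

# Arbitrary sparse, non-Hermitian test matrix (convection-diffusion-like).
n = 500
A = diags([-1.3, 2.0, -0.7], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x_qmr, info_qmr = qmr(A, b)     # quasi-minimal residual
x_bcg, info_bcg = bicg(A, b)    # biconjugate gradients, for comparison

print(info_qmr, info_bcg)                                    # 0 means convergence
print(np.linalg.norm(A @ x_qmr - b), np.linalg.norm(A @ x_bcg - b))
```

On ill-conditioned non-symmetric systems the QMR residual history is typically much smoother than BCG's, which is the behavior the abstract refers to.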
NASA Technical Reports Server (NTRS)
Rodriguez, Ernesto; Kim, Yunjin; Durden, Stephen L.
1992-01-01
A numerical evaluation is presented of the regime of validity for various rough surface scattering theories against numerical results obtained by employing the method of moments. The contribution of each theory is considered up to second order in the perturbation expansion for the surface current. Considering both vertical and horizontal polarizations, the unified perturbation method provides the best results among all of the theories considered.
Numerical computation of gravitational field for general axisymmetric objects
NASA Astrophysics Data System (ADS)
Fukushima, Toshio
2016-10-01
We developed a numerical method to compute the gravitational field of a general axisymmetric object. The method (I) numerically evaluates a double integral of the ring potential by the split quadrature method using the double exponential rules, and (II) derives the acceleration vector by numerically differentiating the numerically integrated potential by Ridder's algorithm. Numerical comparison with the analytical solutions for a finite uniform spheroid and an infinitely extended object of the Miyamoto-Nagai density distribution confirmed the 13- and 11-digit accuracy of the potential and the acceleration vector computed by the method, respectively. By using the method, we present the gravitational potential contour map and/or the rotation curve of various axisymmetric objects: (I) finite uniform objects covering rhombic spindles and circular toroids, (II) infinitely extended spheroids including Sérsic and Navarro-Frenk-White spheroids, and (III) other axisymmetric objects such as an X/peanut-shaped object like NGC 128, a power-law disc with a central hole like the protoplanetary disc of TW Hya, and a tear-drop-shaped toroid like an axisymmetric equilibrium solution of plasma charge distribution in an International Thermonuclear Experimental Reactor-like tokamak. The method is directly applicable to the electrostatic field and will be easily extended for the magnetostatic field. The FORTRAN 90 programs of the new method and some test results are electronically available.
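A minimal Python sketch of the two-step idea, (I) a double integral of the ring potential and (II) numerical differentiation of the integrated potential, is shown below for a uniform sphere so the result can be checked against the exact exterior value -GM/r. It uses SciPy's dblquad and a simple central difference in place of the split quadrature with double exponential rules and Ridder's algorithm described above; G, the density, and the radius are arbitrary code units.

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import ellipk

G = 1.0          # gravitational constant (code units, assumed)
RS = 1.0         # radius of the test body: a uniform sphere of density RHO
RHO = 1.0

def ring_kernel(a, zp, R, z):
    """Potential contribution of a thin ring of radius a at height zp."""
    denom = np.sqrt((R + a)**2 + (z - zp)**2)
    m = 4.0 * R * a / denom**2          # scipy's ellipk takes the parameter m = k^2
    return -4.0 * G * RHO * a * ellipk(m) / denom

def potential(R, z):
    """Double integral of the ring potential over the body's cross-section."""
    val, _ = dblquad(ring_kernel, -RS, RS,
                     lambda zp: 0.0, lambda zp: np.sqrt(RS**2 - zp**2),
                     args=(R, z))
    return val

def acceleration(R, z, h=1e-4):
    """Acceleration from numerical differentiation of the potential
    (central differences here, standing in for Ridder's algorithm)."""
    aR = -(potential(R + h, z) - potential(R - h, z)) / (2 * h)
    az = -(potential(R, z + h) - potential(R, z - h)) / (2 * h)
    return aR, az

# Exterior check against the exact point-mass values.
M = 4.0 / 3.0 * np.pi * RHO * RS**3
print(potential(2.0, 0.0), -G * M / 2.0)          # both approximately -2.0944
print(acceleration(2.0, 0.0), (-G * M / 4.0, 0.0))
```

The published method reaches 13- and 11-digit accuracy with specialized quadrature and differentiation; this sketch only reproduces the structure of the computation, not that accuracy.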
Marom, Gil; Bluestein, Danny
2016-01-01
This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be taken in the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and a sensitivity analysis, similar to the ones presented here, should be employed. PMID:26679833
Implementation of a partitioned algorithm for simulation of large CSI problems
NASA Technical Reports Server (NTRS)
Alvin, Kenneth F.; Park, K. C.
1991-01-01
The implementation of a partitioned numerical algorithm for determining the dynamic response of coupled structure/controller/estimator finite-dimensional systems is reviewed. The partitioned approach leads to a set of coupled first and second-order linear differential equations which are numerically integrated with extrapolation and implicit step methods. The present software implementation, ACSIS, utilizes parallel processing techniques at various levels to optimize performance on a shared-memory concurrent/vector processing system. A general procedure for the design of controller and filter gains is also implemented, which utilizes the vibration characteristics of the structure to be solved. Also presented are: example problems; a user's guide to the software; the procedures and algorithm scripts; a stability analysis for the algorithm; and the source code for the parallel implementation.
Ekegren, Christina L; Donaldson, Alex; Gabbe, Belinda J; Finch, Caroline F
Previous research aimed at improving injury surveillance standards has focused mainly on issues of data quality rather than upon the implementation of surveillance systems. There are numerous settings where injury surveillance is not mandatory and having a better understanding of the barriers to conducting injury surveillance would lead to improved implementation strategies. One such setting is community sport, where a lack of available epidemiological data has impaired efforts to reduce injury. This study aimed to i) evaluate use of an injury surveillance system following delivery of an implementation strategy; and ii) investigate factors influencing the implementation of the system in community sports clubs. A total of 78 clubs were targeted for implementation of an online injury surveillance system (approximately 4000 athletes) in five community Australian football leagues concurrently enrolled in a pragmatic trial of an injury prevention program called FootyFirst. System implementation was evaluated quantitatively, using the RE-AIM framework, and qualitatively, via semi-structured interviews with targeted-users. Across the 78 clubs, there was 69% reach, 44% adoption, 23% implementation and 9% maintenance. Reach and adoption were highest in those leagues receiving concurrent support for the delivery of FootyFirst. Targeted-users identified several barriers and facilitators to implementation including personal (e.g. belief in the importance of injury surveillance), socio-contextual (e.g. understaffing and athlete underreporting) and systems factors (e.g. the time taken to upload injury data into the online system). The injury surveillance system was implemented and maintained by a small proportion of clubs. Outcomes were best in those leagues receiving concurrent support for the delivery of FootyFirst, suggesting that engagement with personnel at all levels can enhance uptake of surveillance systems. Interview findings suggest that increased uptake could also be achieved by educating club personnel on the importance of recording injuries, developing clearer injury surveillance guidelines, increasing club staffing and better remunerating those who conduct surveillance, as well as offering flexible surveillance systems in a range of accessible formats. By increasing the usage of surveillance systems, data will better represent the target population and increase our understanding of the injury problem, and how to prevent it, in specific settings.
NASA Astrophysics Data System (ADS)
Colera, Manuel; Pérez-Saborid, Miguel
2017-09-01
A finite differences scheme is proposed in this work to compute in the time domain the compressible, subsonic, unsteady flow past an aerodynamic airfoil using the linearized potential theory. It improves and extends the original method proposed in this journal by Hariharan, Ping and Scott [1] by considering: (i) a non-uniform mesh, (ii) an implicit time integration algorithm, (iii) a vectorized implementation and (iv) the coupled airfoil dynamics and fluid dynamic loads. First, we have formulated the method for cases in which the airfoil motion is given. The scheme has been tested on well-known problems in unsteady aerodynamics (such as the response to a sudden change of the angle of attack and to a harmonic motion of the airfoil) and has proved to be more accurate and efficient than other finite differences and vortex-lattice methods found in the literature. Secondly, we have coupled our method to the equations governing the airfoil dynamics in order to numerically solve problems where the airfoil motion is unknown a priori as happens, for example, in the cases of the flutter and the divergence of a typical section of a wing or of a flexible panel. Apparently, this is the first self-consistent and easy-to-implement numerical analysis in the time domain of the compressible, linearized coupled dynamics of the (generally flexible) airfoil-fluid system carried out in the literature. The results for the particular case of a rigid airfoil show excellent agreement with those reported by other authors, whereas those obtained for the case of a cantilevered flexible airfoil in compressible flow seem to be original or, at least, not well-known.
A numerical technique for linear elliptic partial differential equations in polygonal domains.
Hashemzadeh, P; Fokas, A S; Smitheman, S A
2015-03-08
Integral representations for the solution of linear elliptic partial differential equations (PDEs) can be obtained using Green's theorem. However, these representations involve both the Dirichlet and the Neumann values on the boundary, and for a well-posed boundary-value problem (BVP) one of these functions is unknown. A new transform method for solving BVPs for linear and integrable nonlinear PDEs, usually referred to as the unified transform (or the Fokas transform), was introduced by the second author in the late nineties. For linear elliptic PDEs, this method can be considered as the analogue of Green's function approach but now it is formulated in the complex Fourier plane instead of the physical plane. It employs two global relations, also formulated in the Fourier plane, which couple the Dirichlet and the Neumann boundary values. These relations can be used to characterize the unknown boundary values in terms of the given boundary data, yielding an elegant approach for determining the Dirichlet-to-Neumann map. The numerical implementation of the unified transform can be considered as the counterpart in the Fourier plane of the well-known boundary integral method which is formulated in the physical plane. For this implementation, one must choose (i) a suitable basis for expanding the unknown functions and (ii) an appropriate set of complex values, which we refer to as collocation points, at which to evaluate the global relations. Here, by employing a variety of examples we present simple guidelines of how the above choices can be made. Furthermore, we provide concrete rules for choosing the collocation points so that the condition number of the matrix of the associated linear system remains low.
ERIC Educational Resources Information Center
Hagermoser Sanetti, Lisa M.; Williamson, Kathleen M.; Long, Anna C. J.; Kratochwill, Thomas R.
2018-01-01
Numerous evidence-based classroom management strategies to prevent and respond to problem behavior have been identified, but research consistently indicates teachers rarely implement them with sufficient implementation fidelity. The purpose of this study was to evaluate the effectiveness of implementation planning, a strategy involving logistical…
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.
NASA Technical Reports Server (NTRS)
Saleeb, A. F.; Chang, T. Y. P.; Wilt, T.; Iskovitz, I.
1989-01-01
The research work performed during the past year on finite element implementation and computational techniques pertaining to high temperature composites is outlined. In the present research, two main issues are addressed: efficient geometric modeling of composite structures and expedient numerical integration techniques dealing with constitutive rate equations. In the first issue, mixed finite elements for modeling laminated plates and shells were examined in terms of numerical accuracy, locking property and computational efficiency. Element applications include (currently available) linearly elastic analysis and future extension to material nonlinearity for damage predictions and large deformations. On the material level, various integration methods to integrate nonlinear constitutive rate equations for finite element implementation were studied. These include explicit, implicit and automatic subincrementing schemes. In all cases, examples are included to illustrate the numerical characteristics of various methods that were considered.
Implementation of Title I and Title II-A Program Initiatives: Results from 2013-14. NCEE 2017-4014
ERIC Educational Resources Information Center
Troppe, Patricia; Milanowski, Anthony T.; Heid, Camilla; Gill, Brian; Ross, Christine
2017-01-01
This report describes the implementation of policies and initiatives supported by Title I and Title II-A of the federal Elementary and Secondary Education Act (ESEA) during the 2013-14 school year. Title I is one of the U.S. Department of Education's largest programs, accounting for $15 billion in the 2016 federal budget. Historically, Title I has…
Implementation of RCCL, a robot control C library on a microVAX II
NASA Technical Reports Server (NTRS)
Lee, Jin S.; Hayati, Samad; Hayward, Vincent; Lloyd, John E.
1987-01-01
The robot control C library (RCCL), a high-level robot programming system which enables a programmer to employ a set of system calls to specify robot manipulator tasks, is discussed. The general structure of RCCL is described, and the implementation of RCCL on a microVAX II is examined. Proposed extensions and improvements of RCCL relevant to NASA's telerobotic system are addressed.
McLaughlin, Douglas B
2012-01-01
The utility of numeric nutrient criteria established for certain surface waters is likely to be affected by the uncertainty that exists in the presence of a causal link between nutrient stressor variables and designated use-related biological responses in those waters. This uncertainty can be difficult to characterize, interpret, and communicate to a broad audience of environmental stakeholders. The US Environmental Protection Agency (USEPA) has developed a systematic planning process to support a variety of environmental decisions, but this process is not generally applied to the development of national or state-level numeric nutrient criteria. This article describes a method for implementing such an approach and uses it to evaluate the numeric total P criteria recently proposed by USEPA for colored lakes in Florida, USA. An empirical, log-linear relationship between geometric mean concentrations of total P (a potential stressor variable) and chlorophyll a (a nutrient-related response variable) in these lakes, which is assumed to be causal in nature, forms the basis for the analysis. The use of the geometric mean total P concentration of a lake to correctly indicate designated use status, defined in terms of a 20 µg/L geometric mean chlorophyll a threshold, is evaluated. Rates of decision errors analogous to the Type I and Type II error rates familiar in hypothesis testing, and a 3rd error rate, E(ni), referred to as the nutrient criterion-based impairment error rate, are estimated. The results show that USEPA's proposed "baseline" and "modified" nutrient criteria approach, in which data on both total P and chlorophyll a may be considered in establishing numeric nutrient criteria for a given lake within a specified range, provides a means for balancing and minimizing designated use attainment decision errors. Copyright © 2011 SETAC.
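The decision-error idea can be illustrated with a small Monte Carlo sketch under an assumed log-linear total P-chlorophyll relationship. All coefficients, distributions, and the candidate criterion below are hypothetical placeholders, not USEPA's fitted values; only the 20 µg/L chlorophyll threshold is taken from the abstract.

```python
import numpy as np

# Hypothetical Monte Carlo sketch: lakes are "impaired" when geometric-mean
# chlorophyll a exceeds 20 ug/L, while the criterion decision is based on
# geometric-mean total P exceeding a candidate numeric criterion.
rng = np.random.default_rng(42)
n_lakes = 100_000
tp = rng.lognormal(mean=np.log(30.0), sigma=0.6, size=n_lakes)    # geomean TP, ug/L (assumed)
b0, b1, resid_sd = -0.5, 0.9, 0.35                                 # assumed log-linear regression
chl = np.exp(b0 + b1 * np.log(tp) + rng.normal(0.0, resid_sd, n_lakes))

tp_criterion, chl_threshold = 40.0, 20.0
flagged = tp > tp_criterion          # decision from the numeric TP criterion
impaired = chl > chl_threshold       # "true" designated-use status

type_i = np.mean(flagged & ~impaired)    # flagged but actually attaining
type_ii = np.mean(~flagged & impaired)   # not flagged but actually impaired
print(f"Type I analogue: {type_i:.3f}, Type II analogue: {type_ii:.3f}")
```

Sweeping the candidate criterion and re-estimating these rates is one simple way to visualize the balancing of decision errors that the article discusses.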
Numerical systems on a minicomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Jr., Roy Leonard
1973-02-01
This thesis defines the concept of a numerical system for a minicomputer and provides a description of the software and computer system configuration necessary to implement such a system. A procedure for creating a numerical system from a FORTRAN program is developed and an example is presented.
Adaptive Wavelet Modeling of Geophysical Data
NASA Astrophysics Data System (ADS)
Plattner, A.; Maurer, H.; Dahmen, W.; Vorloeper, J.
2009-12-01
Despite the ever-increasing power of modern computers, realistic modeling of complex three-dimensional Earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modeling approaches includes either finite difference or non-adaptive finite element algorithms, and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behavior of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the modeled data discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet based approach that is applicable to a large scope of problems, also including nonlinear problems. To the best of our knowledge such algorithms have not yet been applied in geophysics. Adaptive wavelet algorithms offer several attractive features: (i) for a given subsurface model, they allow the forward modeling domain to be discretized with a quasi minimal number of degrees of freedom, (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient, and (iii) the modeling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving three-dimensional geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. Results were compared with a non-adaptive finite element algorithm, which incorporates an unstructured mesh to best fit subsurface boundaries. Such algorithms represent the current state-of-the-art in geoelectrical modeling. An analysis of the numerical accuracy as a function of the number of degrees of freedom revealed that the adaptive wavelet algorithm outperforms the finite element solver for simple and moderately complex models, whereas the results become comparable for models with spatially highly variable electrical conductivities. The linear dependency of the modeling error and the computing time proved to be model-independent. This feature will allow very efficient computations using large-scale models as soon as our experimental code is optimized in terms of its implementation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nurmikko, Arto; Humphrey, Maris
2014-07-10
The goal of this grant was the development of a new type of scanning acoustic microscope for nanometer resolution ultrasound imaging, based on ultrafast optoacoustics (>GHz). In the microscope, subpicosecond laser pulses were used to generate and detect very high frequency ultrasound with nanometer wavelengths. We report here on the outcome of the 3-year DOE/BES grant which involved the design, multifaceted construction, and proof-of-concept demonstration of an instrument that can be used for quantitative imaging of nanoscale material features, including features that may be buried so as to be inaccessible to conventional lightwave or electron microscopies. The research program has produced a prototype scanning optoacoustic microscope which, in combination with advanced computational modeling, is a system-level new technology (two patents issued) which offers novel means for precision metrology of material nanostructures, particularly those that are of contemporary interest to the frontline micro- and optoelectronics device industry. For accomplishing the ambitious technical goals, the research roadmap was designed and implemented in two phases. In Phase I, we constructed a “non-focusing” optoacoustic microscope instrument (“POAM”), with nanometer vertical (z-) resolution, while limited to approximately 10 micrometer scale lateral resolution. The Phase I version of the instrument, which was guided by extensive numerical modeling of the basic underlying acoustic and optical physics, featured nanometer scale closed-loop positioning between the optoacoustic transducer element and a nanostructured material sample under investigation. In Phase II, we implemented and demonstrated a scanning version of the instrument (“SOAM”) where incident acoustic energy is focused, and scanned on a lateral (x-y) spatial scale in the 100 nm range as per the goals of the project. In so doing we developed advanced numerical simulations to provide computational models of the focusing of multi-GHz acoustic waves to the nanometer scale and devised a series of fabrication approaches for a new type of broadband high-frequency acoustic focusing microscope objective, applying nanoimprinting and focused-ion beam techniques. In the following, the Phase I and Phase II instrument development is reported as Section II. The first segment of this section describes the POAM instrument and its development, while including much of the underlying ultrafast acoustic physics which is common to all of our work for this grant. Then, the science and engineering of the SOAM instrument is described, including the methods of fabricating new types of acoustic microlenses. The results section is followed by reports on publications (Section III), Participants (Section IV), and a statement of full use of the allocated grant funds (Section V).
The classical D-type expansion of spherical H II regions
NASA Astrophysics Data System (ADS)
Williams, Robin J. R.; Bibas, Thomas G.; Haworth, Thomas J.; Mackey, Jonathan
2018-06-01
Recent numerical and analytic work has highlighted some shortcomings in our understanding of the dynamics of H II region expansion, especially at late times, when the H II region approaches pressure equilibrium with the ambient medium. Here we reconsider the idealized case of a constant radiation source in a uniform and spherically symmetric ambient medium, with an isothermal equation of state. A thick-shell solution is developed which captures the stalling of the ionization front and the decay of the leading shock to a weak compression wave as it escapes to large radii. An acoustic approximation is introduced to capture the late-time damped oscillations of the H II region about the stagnation radius. Putting these together, a matched asymptotic equation is derived for the radius of the ionization front which accounts for both the inertia of the expanding shell and the finite temperature of the ambient medium. The solution to this equation is shown to agree very well with the numerical solution at all times, and is superior to all previously published solutions. The matched asymptotic solution can also accurately model the variation of H II region radius for a time-varying radiation source.
A reanalysis of the SWP-HI IUE observations of Capella
NASA Technical Reports Server (NTRS)
Wood, Brian E.; Ayres, T. R.
1995-01-01
We have reanalyzed the numerous high-resolution, far-ultraviolet observations of Capella made by the International Ultraviolet Explorer (IUE) in its 16 yr lifetime. Our purpose was to search for long-term profile variations in Capella's ultraviolet emission lines and to complement the analysis of Goddard High Resolution Spectrograph (GHRS) observations of Capella, discussed in a companion paper (Linsky et al. 1995). We implemented a state-of-the-art photometric correction and spectral extraction procedure to improve S/N and control potential sources of systematic errors. Nevertheless, we were unable to find compelling evidence for any significant long-term profile variations. Previous work has shown that the G8 primary star is only a minor contributor to the high-excitation transition region lines but is a significant contributor to the low-excitation chromospheric lines. We have found exceptions to this rule, however. We find that the G8 star is responsible for a significant portion of Capella's N V lambda lambda 1239, 1243 emission, but is not a large contributor to the S I lambda 1296, Cl I lambda 1352, and O I lambda 1356 lines. We suggest possible explanations for these behaviors. We also find evidence that the He II lambda 1640 emission from the G1 star is from the transition region, while the He II lambda 1640 emission from the G8 star is chromospheric, consistent with the findings of Linsky et al. (1994). The C II lambda 1336 line shows a weak central reversal, which is blueshifted by about 9 km/s with respect to the centroid of the emission from the G1 star. While the central reversal of the C II line is blueshifted, the central reversal of the Si III lambda 1207 line discussed by Linsky et al. (1994) is not.
Numerical simulation of circular cylinders in free-fall
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero-Gomez, Pedro; Richmond, Marshall C.
2016-02-01
In this work, we combined the use of (i) overset meshes, (ii) a 6 degree-of-freedom (6-DOF) motion solver, and (iii) an eddy-resolving flow simulation approach to resolve the drag and secondary movement of large-sized cylinders settling in a quiescent fluid at moderate terminal Reynolds numbers (1,500 < Re < 28,000). These three strategies were implemented in a series of computational fluid dynamics (CFD) solutions to describe the fluid-structure interactions and the resulting effects on the cylinder motion. Using the drag coefficient, oscillation period, and maximum angular displacement as baselines, the findings show good agreement between the present CFD results and corresponding data of published laboratory experiments. We discussed the computational expense incurred in using the present modeling approach. We also conducted a preceding simulation of flow past a fixed cylinder at Re = 3,900, which tested the influence of the turbulence approach (time-averaging vs. eddy-resolving) and the meshing strategy (continuous vs. overset) on the numerical results. The outputs indicated a strong effect of the former and an insignificant influence of the latter. The long-term motivation for the present study is the need to understand the motion of an autonomous sensor of cylindrical shape used to measure the hydraulic conditions occurring in operating hydropower turbines.
Modeling antimicrobial tolerance and treatment of heterogeneous biofilms.
Zhao, Jia; Seeluangsawat, Paisa; Wang, Qi
2016-12-01
A multiphasic, hydrodynamic model for spatially heterogeneous biofilms based on the phase field formulation is developed and applied to analyze antimicrobial tolerance of biofilms by acknowledging the existence of persistent and susceptible cells in the total population of bacteria. The model implements a new conversion rate between persistent and susceptible cells and its homogeneous dynamics is bench-marked against a known experiment quantitatively. It is then discretized and solved on graphic processing units (GPUs) in 3-D space and time. With the model, biofilm development and antimicrobial treatment of biofilms in a flow cell are investigated numerically. Model predictions agree qualitatively well with available experimental observations. Specifically, numerical results demonstrate that: (i) in a flow cell, nutrient, diffused in solvent and transported by hydrodynamics, has an apparent impact on persister formation, thereby antimicrobial persistence of biofilms; (ii) dosing antimicrobial agents inside biofilms is more effective than dosing through diffusion in solvent; (iii) periodic dosing is less effective in antimicrobial treatment of biofilms in a nutrient deficient environment than in a nutrient sufficient environment. This model provides us with a simulation tool to analyze mechanisms of biofilm tolerance to antimicrobial agents and to derive potentially optimal dosing strategies for biofilm control and treatment. Copyright © 2016 Elsevier Inc. All rights reserved.
A Summary of the Naval Postgraduate School Research Program.
1979-09-30
[Table-of-contents and abstract fragments; recoverable entries:] Research (M. G. Sovereign); Review of COMWTH II Model (M. G. Sovereign and J. K. Arima); Optimization of Combat Dynamics (J. G. Taylor); ... Studies (R. L. Elsberry); Numerical Models of Ocean Circulation and Climate Interaction: A Review (R. L. Haney); Numerical Studies of the Dynamics ... climatic numerical models to investigate the various mechanisms pertinent to the large-scale interaction between the tropical atmosphere and oceans.
Some Key Factors in Policy Implementation.
ERIC Educational Resources Information Center
Rowen, Henry
Business policy texts identify numerous steps that make up the policy implementation process for private firms. On the surface, these steps also appear applicable to the implementation of public policies. However, the problems of carrying out these implementing steps in the public sector are significantly different than in the private sector due…
Numerical built-in method for the nonlinear JRC/JCS model in rock joint.
Liu, Qunyi; Xing, Wanli; Li, Ying
2014-01-01
The joint surface is widely distributed in the rock, leading to nonlinear characteristics of rock mass strength and limiting the effectiveness of linear models in reflecting these characteristics. The JRC/JCS model is a nonlinear failure criterion and is generally believed to describe the characteristics of joints better than other models. In order to develop a numerical program for the JRC/JCS model, this paper established the relationship between the parameters of the JRC/JCS and Mohr-Coulomb models. Thereafter, the numerical implementation method and implementation process of the JRC/JCS model were discussed, and the reliability of the numerical method was verified by shear tests of jointed rock mass. Finally, the effect of the JRC/JCS model parameters on the shear strength of the joint was analyzed.
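The JRC/JCS (Barton) criterion and its relationship to equivalent Mohr-Coulomb parameters can be sketched in a few lines of Python. The JRC, JCS, residual friction angle, and stress window below are illustrative values, and the straight-line fit is only one simple way of extracting an equivalent cohesion and friction angle over a stress range; it is not the specific parameter mapping derived in the paper.

```python
import numpy as np

def jrc_jcs_shear_strength(sigma_n, jrc, jcs, phi_r_deg):
    """Barton's nonlinear JRC/JCS joint criterion:
    tau = sigma_n * tan(phi_r + JRC * log10(JCS / sigma_n)), angles in degrees."""
    phi = np.radians(phi_r_deg + jrc * np.log10(jcs / sigma_n))
    return sigma_n * np.tan(phi)

# Equivalent Mohr-Coulomb parameters (c, phi) fitted over an assumed stress window.
sigma_n = np.linspace(0.5, 5.0, 50)                          # MPa, illustrative range
tau = jrc_jcs_shear_strength(sigma_n, jrc=10.0, jcs=100.0, phi_r_deg=30.0)
slope, intercept = np.polyfit(sigma_n, tau, 1)               # tau ~ c + sigma_n * tan(phi)
phi_eq = np.degrees(np.arctan(slope))
print(f"equivalent cohesion c = {intercept:.3f} MPa, friction angle = {phi_eq:.1f} deg")
```

Repeating the fit over different stress windows shows how the equivalent Mohr-Coulomb parameters drift with normal stress, which is precisely why a direct numerical implementation of the nonlinear criterion is attractive.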
A versatile embedded boundary adaptive mesh method for compressible flow in complex geometry
NASA Astrophysics Data System (ADS)
Al-Marouf, M.; Samtaney, R.
2017-05-01
We present an embedded ghost fluid method for numerical solutions of the compressible Navier Stokes (CNS) equations in arbitrary complex domains. A PDE multidimensional extrapolation approach is used to reconstruct the solution in the ghost fluid regions and imposing boundary conditions on the fluid-solid interface, coupled with a multi-dimensional algebraic interpolation for freshly cleared cells. The CNS equations are numerically solved by the second order multidimensional upwind method. Block-structured adaptive mesh refinement, implemented with the Chombo framework, is utilized to reduce the computational cost while keeping high resolution mesh around the embedded boundary and regions of high gradient solutions. The versatility of the method is demonstrated via several numerical examples, in both static and moving geometry, ranging from low Mach number nearly incompressible flows to supersonic flows. Our simulation results are extensively verified against other numerical results and validated against available experimental results where applicable. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well.
Parallelization of elliptic solver for solving 1D Boussinesq model
NASA Astrophysics Data System (ADS)
Tarwidi, D.; Adytia, D.
2018-03-01
In this paper, a parallel implementation of an elliptic solver in solving the 1D Boussinesq model is presented. The numerical solution of the Boussinesq model is obtained by implementing a staggered grid scheme for the continuity, momentum, and elliptic equations of the Boussinesq model. The tridiagonal system emerging from the numerical scheme of the elliptic equation is solved by the cyclic reduction algorithm. The parallel implementation of cyclic reduction is executed on multicore processors with shared memory architectures using OpenMP. To measure the performance of the parallel program, the number of grid points is varied from 2^8 to 2^14. Two test cases of numerical experiment, i.e. propagation of a solitary and a standing wave, are proposed to evaluate the parallel program. The numerical results are verified with analytical solutions of solitary and standing waves. The best speedup of the solitary and standing wave test cases is about 2.07 with 2^14 grid points and 1.86 with 2^13 grid points, respectively, both executed using 8 threads. Moreover, the best efficiency of the parallel program is 76.2% and 73.5% for the solitary and standing wave test cases, respectively.
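The serial core of the elliptic step, cyclic reduction of a tridiagonal system, can be sketched as follows. This is plain Python rather than the OpenMP code described above; in the parallel version the inner loop of each reduction level is what gets distributed across threads. The sketch is restricted to systems of size 2^k - 1, the classical formulation, and the Poisson-like test system at the end is only a stand-in for the Boussinesq elliptic equation.

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (all length n, a[0]=c[-1]=0).
    Assumes n = 2**k - 1, the size used in classical cyclic reduction."""
    a, b, c, d = (np.array(v, dtype=float) for v in (a, b, c, d))
    n = len(b)
    k = int(np.log2(n + 1))
    assert n == 2**k - 1, "this sketch requires n = 2**k - 1"

    # Forward reduction: eliminate the in-between unknowns level by level.
    for level in range(1, k):
        step, half = 2**level, 2**(level - 1)
        for i in range(step - 1, n, step):        # parallel loop in the OpenMP version
            im, ip = i - half, i + half
            alpha = -a[i] / b[im]
            gamma = -c[i] / b[ip]
            b[i] += alpha * c[im] + gamma * a[ip]
            d[i] += alpha * d[im] + gamma * d[ip]
            a[i] = alpha * a[im]
            c[i] = gamma * c[ip]

    # Back substitution: recover unknowns from the coarsest level down.
    x = np.zeros(n)
    for level in range(k, 0, -1):
        step, half = 2**level, 2**(level - 1)
        for i in range(half - 1, n, step):
            rhs = d[i]
            if i - half >= 0:
                rhs -= a[i] * x[i - half]
            if i + half < n:
                rhs -= c[i] * x[i + half]
            x[i] = rhs / b[i]
    return x

# Quick check against a dense solve on a small Poisson-like system.
n = 2**5 - 1
a = np.full(n, -1.0); a[0] = 0.0
c = np.full(n, -1.0); c[-1] = 0.0
b = np.full(n, 2.0)
d = np.random.rand(n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
assert np.allclose(cyclic_reduction(a, b, c, d), np.linalg.solve(A, d))
```

Each reduction level halves the number of coupled equations, which is what limits the achievable speedup: the later levels contain too little work to keep all threads busy, consistent with the sub-linear speedups reported above.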
ERIC Educational Resources Information Center
Troppe, Patricia; Milanowski, Anthony T.; Heid, Camilla; Gill, Brian; Ross, Christine
2017-01-01
This report describes the implementation of policies and initiatives supported by Title I and Title II-A of the federal Elementary and Secondary Education Act (ESEA) during the 2013-14 school year. Title I is one of the U.S. Department of Education's largest programs, accounting for $15 billion in the 2016 federal budget. Historically, Title I has…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-12
... 1997 PM 2.5 NAAQS dated October 22, 2008: 110(a)(2)(A), (B), (C), (D)(ii), (E), (F), (G), (H), (J), (K...), (B), (C), (D)(ii), (E), (F), (G), (H), (J), (K), (L), and (M) necessary to implement, maintain, and... September 22, 2008 addressed the section 110(a)(2) requirements for the 1997 PM 2.5 NAAQS; and the submittal...
Challenges and Opportunities in the Discovery of New Therapeutics Targeting the Kynurenine Pathway.
Dounay, Amy B; Tuttle, Jamison B; Verhoest, Patrick R
2015-11-25
The kynurenine pathway is responsible for the metabolism of more than 95% of dietary tryptophan (TRP) and produces numerous bioactive metabolites. Recent studies have focused on three enzymes in this pathway: indoleamine dioxygenase (IDO1), kynurenine monooxygenase (KMO), and kynurenine aminotransferase II (KAT II). IDO1 inhibitors are currently in clinical trials for the treatment of cancer, and these agents may also have therapeutic utility in neurological disorders, including multiple sclerosis. KMO inhibitors are being investigated as potential treatments for neurodegenerative diseases, such as Huntington's and Alzheimer's diseases. KAT II inhibitors have been proposed in new therapeutic approaches toward psychiatric and cognitive disorders, including cognitive impairment associated with schizophrenia. Numerous medicinal chemistry studies are currently aimed at the design of novel, potent, and selective inhibitors for each of these enzymes. The emerging opportunities and significant challenges associated with pharmacological modulation of these enzymes will be explored in this review.
NASA Technical Reports Server (NTRS)
Mikellides, Ioannis G.; Katz, Ira; Goebel, Dan M.; Jameson, Kristina K.
2006-01-01
Numerical simulations with the time-dependent Orificed Cathode (OrCa2D-II) computer code show that classical enhancements of the plasma resistivity cannot account for the elevated electron temperatures and steep plasma potential gradients measured in the plume of a 25-27.5 A discharge hollow cathode. The cathode, which employs a 0.11-in diameter orifice, was operated at 5.5 sccm without an applied magnetic field using two different anode geometries. It is found that anomalous resistivity based on electron-driven instabilities improves the comparison between theory and experiment. It is also estimated that other effects such as the Hall-effect from the self-induced magnetic field, not presently included in OrCa2D-II, may contribute to the constriction of the current density streamlines thus explaining the higher plasma densities observed along the centerline.
Design of two-dimensional channels with prescribed velocity distributions along the channel walls
NASA Technical Reports Server (NTRS)
Stanitz, John D
1953-01-01
A general method of design is developed for two-dimensional unbranched channels with prescribed velocities as a function of arc length along the channel walls. The method is developed for both compressible and incompressible, irrotational, nonviscous flow and applies to the design of elbows, diffusers, nozzles, and so forth. In part I solutions are obtained by relaxation methods; in part II solutions are obtained by a Green's function. Five numerical examples are given in part I including three elbow designs with the same prescribed velocity as a function of arc length along the channel walls but with incompressible, linearized compressible, and compressible flow. One numerical example is presented in part II for an accelerating elbow with linearized compressible flow, and the time required for the solution by a Green's function in part II was considerably less than the time required for the same solution by relaxation methods in part I.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1985-12-01
MARPOL was developed to minimize accidental and operational pollution from ships carrying noxious liquid substances in bulk. Accidental pollution could result from a collision, a grounding, or an overflow of a cargo tank. Operational pollution results from the disposal of cargo tank washings. Major amendments were made to the original Annex II by the International Maritime Organization. The United States and other States party to MARPOL will implement Annex II, as amended, on April 7, 1987. Implementation will affect seagoing ships transporting noxious liquid substances to and from such ships. The attached documents contain internationally agreed requirements, interpretations, and guidelines necessary for the implementation of Annex II. The documents attached include: (1) MARPOL Annex II as amended by amendments adopted by the twenty-second session of the IMO Marine Environment Protection Committee; (2) Unified Interpretations of Annex II; (3) Standards for the Procedures and Arrangements for the Discharge of Noxious Liquid Substances; (4) Amendments to the Bulk Chemical Code and the International Bulk Chemical Code to include marine pollution concerns; (5) Guidelines on the Provision of Adequate Reception Facilities in Ports, Part II (Noxious Liquid Substances). The contents of these documents are being placed in regulations. The purpose of this document is to give members of the interested public advance notification of impending regulations.
NASA Astrophysics Data System (ADS)
Tang, Jingshi; Wang, Haihong; Chen, Qiuli; Chen, Zhonggui; Zheng, Jinjun; Cheng, Haowen; Liu, Lin
2018-07-01
Onboard orbit determination (OD) is often used in space missions, with which mission support can be partially accomplished autonomously, with less dependency on ground stations. In major Global Navigation Satellite Systems (GNSS), inter-satellite link is also an essential upgrade in the future generations. To serve for autonomous operation, sequential OD method is crucial to provide real-time or near real-time solutions. The Extended Kalman Filter (EKF) is an effective and convenient sequential estimator that is widely used in onboard application. The filter requires the solutions of state transition matrix (STM) and the process noise transition matrix, which are always obtained by numerical integration. However, numerically integrating the differential equations is a CPU intensive process and consumes a large portion of the time in EKF procedures. In this paper, we present an implementation that uses the analytical solutions of these transition matrices to replace the numerical calculations. This analytical implementation is demonstrated and verified using a fictitious constellation based on selected medium Earth orbit (MEO) and inclined Geosynchronous orbit (IGSO) satellites. We show that this implementation performs effectively and converges quickly, steadily and accurately in the presence of considerable errors in the initial values, measurements and force models. The filter is able to converge within 2-4 h of flight time in our simulation. The observation residual is consistent with simulated measurement error, which is about a few centimeters in our scenarios. Compared to results implemented with numerically integrated STM, the analytical implementation shows results with consistent accuracy, while it takes only about half the CPU time to filter a 10-day measurement series. The future possible extensions are also discussed to fit in various missions.
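The structural point, replacing numerically integrated transition matrices with closed-form ones inside the filter loop, can be illustrated with a toy example: a 1-D constant-velocity model whose state transition matrix and process-noise matrix are exact analytically (for this linear model the EKF reduces to a plain Kalman filter). The measurement model, noise levels, and time step below are arbitrary assumptions; an orbit-determination filter would substitute the paper's analytical expressions for the MEO/IGSO dynamics in the two analytic_* functions.

```python
import numpy as np

def analytic_stm(dt):
    """Closed-form state transition matrix for a constant-velocity model."""
    return np.array([[1.0, dt],
                     [0.0, 1.0]])

def analytic_process_noise(dt, q=1e-3):
    """Closed-form process-noise transition for white-noise acceleration."""
    return q * np.array([[dt**3 / 3.0, dt**2 / 2.0],
                         [dt**2 / 2.0, dt]])

def kf_step(x, P, z, dt, R=0.05**2):
    H = np.array([[1.0, 0.0]])              # range-like measurement of position
    Phi, Q = analytic_stm(dt), analytic_process_noise(dt)
    # Predict using the analytical matrices (no numerical integration needed).
    x = Phi @ x
    P = Phi @ P @ Phi.T + Q
    # Update with the new measurement.
    S = H @ P @ H.T + R
    K = P @ H.T / S
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Filter a short synthetic track starting from a deliberately wrong state.
rng = np.random.default_rng(0)
truth = np.array([0.0, 1.0])
x, P = np.array([0.5, 0.0]), np.eye(2)
for k in range(50):
    truth = analytic_stm(1.0) @ truth
    z = truth[0] + 0.05 * rng.standard_normal()
    x, P = kf_step(x, P, z, dt=1.0)
print(x, truth)    # the estimate converges toward the true state
```

Because the predict step above is a handful of matrix products rather than an integration of the variational equations, the per-step cost drops sharply, which is the source of the roughly factor-of-two CPU saving reported in the abstract.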
Time-domain simulation of damped impacted plates. II. Numerical model and results.
Lambourg, C; Chaigne, A; Matignon, D
2001-04-01
A time-domain model for the flexural vibrations of damped plates was presented in a companion paper [Part I, J. Acoust. Soc. Am. 109, 1422-1432 (2001)]. In this paper (Part II), the damped-plate model is extended to impact excitation, using Hertz's law of contact, and is solved numerically in order to synthesize sounds. The numerical method is based on the use of a finite-difference scheme of second order in time and fourth order in space. As a consequence of the damping terms, the stability and dispersion properties of this scheme are modified, compared to the undamped case. The numerical model is used for the time-domain simulation of vibrations and sounds produced by impact on isotropic and orthotropic plates made of various materials (aluminum, glass, carbon fiber and wood). The efficiency of the method is validated by comparisons with analytical and experimental data. The sounds produced show a high degree of similarity with real sounds and allow a clear recognition of each constitutive material of the plate without ambiguity.
Health Activities Project (HAP), Trial Edition II.
ERIC Educational Resources Information Center
Buller, Dave; And Others
Contained within this Health Activities Project (HAP) trial edition (set II) are a teacher information folio and numerous student activity folios which center around the idea that students in grades 5-8 can control their own health and safety. Each student folio is organized into a Synopsis, Health Background, Materials, Setting Up, and Activities…
An Oral History Project: World War II Veterans Share Memories in My Classroom
ERIC Educational Resources Information Center
Fuchs, David W.
2004-01-01
This article describes how the author developed and implemented a course on World War II that has an oral history component. The author describes the format of the World War II course and the oral history component within the course framework. The author uses classroom presentations by veterans to enliven his World War II history class and enhance…
NASA Astrophysics Data System (ADS)
Fakhari, Abbas; Li, Yaofa; Bolster, Diogo; Christensen, Kenneth T.
2018-04-01
We implement a phase-field based lattice-Boltzmann (LB) method for numerical simulation of multiphase flows in heterogeneous porous media at pore scales with wettability effects. The present method can handle large density and viscosity ratios, pertinent to many practical problems. As a practical application, we study multiphase flow in a micromodel representative of CO2 invading a water-saturated porous medium at reservoir conditions, both numerically and experimentally. We focus on two flow cases with (i) a crossover from capillary fingering to viscous fingering at a relatively small capillary number, and (ii) viscous fingering at a relatively moderate capillary number. Qualitative and quantitative comparisons are made between numerical results and experimental data for temporal and spatial CO2 saturation profiles, and good agreement is found. In particular, a correlation analysis shows that any differences between simulations and results are comparable to intra-experimental differences from replicate experiments. A key conclusion of this work is that system behavior is highly sensitive to boundary conditions, particularly inlet and outlet ones. We finish with a discussion on small-scale flow features, such as the emergence of strong recirculation zones as well as flow in which the residual phase is trapped, including a close look at the detailed formation of a water cone. Overall, the proposed model yields useful information, such as the spatiotemporal evolution of the CO2 front and instantaneous velocity fields, which are valuable for understanding the mechanisms of CO2 infiltration at the pore scale.
Determinant Computation on the GPU using the Condensation Method
NASA Astrophysics Data System (ADS)
Anisul Haque, Sardar; Moreno Maza, Marc
2012-02-01
We report on a GPU implementation of the condensation method designed by Abdelmalek Salem and Kouachi Said for computing the determinant of a matrix. We consider two types of coefficients: modular integers and floating point numbers. We evaluate the performance of our code by measuring its effective bandwidth and argue that it is numerically stable in the floating point number case. In addition, we compare our code with serial implementations of determinant computation from well-known mathematical packages. Our results suggest that a GPU implementation of the condensation method has a large potential for improving those packages in terms of running time and numerical stability.
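The Salem-Kouachi formulas themselves are not given in the abstract; as a hedged illustration of the general condensation idea, the sketch below implements the closely related Chio condensation, which repeatedly shrinks the matrix using 2 x 2 minors while tracking the accumulated pivot divisor. This is a serial CPU sketch, not the GPU kernel.

```python
import numpy as np

def det_condensation(A):
    """Chio-style condensation: replace an n x n matrix by an (n-1) x (n-1)
    matrix of 2 x 2 minors, tracking the accumulated pivot divisor."""
    A = np.array(A, dtype=float)
    sign = 1.0
    divisor = 1.0
    while A.shape[0] > 1:
        n = A.shape[0]
        # partial pivoting: make the (0, 0) pivot the largest in its column
        p = int(np.argmax(np.abs(A[:, 0])))
        if A[p, 0] == 0.0:
            return 0.0                    # first column is zero -> singular
        if p != 0:
            A[[0, p]] = A[[p, 0]]
            sign = -sign
        pivot = A[0, 0]
        # condensation step: B[i, j] = det([[a00, a0,j+1], [ai+1,0, ai+1,j+1]])
        B = pivot * A[1:, 1:] - np.outer(A[1:, 0], A[0, 1:])
        divisor *= pivot ** (n - 2)       # note: pivot powers grow quickly
        A = B
    return sign * A[0, 0] / divisor

# quick check against NumPy's determinant
M = np.random.rand(6, 6)
print(det_condensation(M), np.linalg.det(M))
```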
Recent Progress in Discrete Dislocation Dynamics and Its Applications to Micro Plasticity
NASA Astrophysics Data System (ADS)
Po, Giacomo; Mohamed, Mamdouh S.; Crosby, Tamer; Erel, Can; El-Azab, Anter; Ghoniem, Nasr
2014-10-01
We present a self-contained review of the discrete dislocation dynamics (DDD) method for the numerical investigation of plasticity in crystals, focusing on recent development and implementation progress. The review covers the theoretical foundations of DDD within the framework of incompatible elasticity, its numerical implementation via the nodal method, the extension of the method to finite domains and several implementation details. Applications of the method to current topics in micro-plasticity are presented, including the size effects in nano-indentation, the evolution of the dislocation microstructure in persistent slip bands, and the phenomenon of dislocation avalanches in micro-pillar compression.
Technology II: Implementation Planning Guide.
ERIC Educational Resources Information Center
California Community Colleges, Sacramento. Office of the Chancellor.
The California Community Colleges (CCC) are facing a number of challenges, including the explosive use of the Internet, the digital divide, the need for integrating technology into teaching and learning, the impact of Tidal Wave II, and the need to ensure that technology is accessible to persons with disabilities. The CCCs' Technology II Strategic…
Articulated, Performance-Based Instruction Guide for Drafting II. Final Document. Revised.
ERIC Educational Resources Information Center
Henderson, William Edward, Jr.
Developed during a project designed to provide continuous, performance-based vocational training at the secondary and postsecondary levels, this instructional guide is intended to help teachers implement a laterally and vertically articulated secondary level drafting II program. Introductory materials include a description of Drafting II,…
NASA Astrophysics Data System (ADS)
Ghezavati, V. R.; Beigi, M.
2016-12-01
During the last decade, stringent pressures from environmental and social requirements have spurred interest in designing reverse logistics (RL) networks. The success of a logistics system may depend on the decisions of facility locations and vehicle routings. The location-routing problem (LRP) simultaneously locates the facilities and designs the travel routes for vehicles among established facilities and existing demand points. In this paper, we consider the location-routing problem with time windows (LRPTW) with a homogeneous fleet, together with the design of a multi-echelon, capacitated reverse logistics network, a combination that arises in many real-life logistics management situations. Our proposed RL network consists of hybrid collection/inspection centers, recovery centers and disposal centers. Here, we present a new bi-objective mathematical programming (BOMP) formulation for the LRPTW in reverse logistics. Since this type of problem is NP-hard, the non-dominated sorting genetic algorithm II (NSGA-II) is proposed to obtain the Pareto frontier for the given problem. Several numerical examples are presented to illustrate the effectiveness of the proposed model and algorithm. The present work also implements the ɛ-constraint method in GAMS to produce Pareto-optimal solutions for the BOMP, and the results of the proposed algorithm are compared with this method. The computational results show that the ɛ-constraint method is able to solve small-size instances to optimality within reasonable computing times, while for medium-to-large-sized problems the proposed NSGA-II performs better.
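As a minimal sketch of the core ingredient of NSGA-II referred to above, the code below performs fast non-dominated sorting for a bi-objective minimisation problem; the objective vectors are illustrative placeholders, not an LRPTW instance.

```python
# Hedged sketch: fast non-dominated sorting (the heart of NSGA-II) for a
# bi-objective minimisation problem, e.g. cost vs. tardiness in an LRPTW.
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Return a list of fronts (lists of indices); front 0 is the Pareto set."""
    n = len(objs)
    S = [[] for _ in range(n)]       # indices dominated by solution i
    counts = [0] * n                 # number of solutions dominating i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objs[i], objs[j]):
                S[i].append(j)
            elif dominates(objs[j], objs[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        k += 1
        fronts.append(nxt)
    return fronts[:-1]

objs = [(10, 4), (8, 6), (12, 3), (9, 5), (11, 7)]   # illustrative objective pairs
print(non_dominated_sort(objs))                      # -> [[0, 1, 2, 3], [4]]
```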
NASA Astrophysics Data System (ADS)
Drabik, Timothy J.; Lee, Sing H.
1986-11-01
The intrinsic parallelism characteristics of easily realizable optical SIMD arrays prompt their present consideration in the implementation of highly structured algorithms for the numerical solution of multidimensional partial differential equations and the computation of fast numerical transforms. Attention is given to a system, comprising several spatial light modulators (SLMs), an optical read/write memory, and a functional block, which performs simple, space-invariant shifts on images with sufficient flexibility to implement the fastest known methods for partial differential equations as well as a wide variety of numerical transforms in two or more dimensions. Either fixed or floating-point arithmetic may be used. A performance projection of more than 1 billion floating point operations/sec using SLMs with 1000 x 1000-resolution and operating at 1-MHz frame rates is made.
Computation of Sound Propagation by Boundary Element Method
NASA Technical Reports Server (NTRS)
Guo, Yueping
2005-01-01
This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable if the gradients are treated as additional unknowns, greatly increasing the sizes of the matrix equation, or if numerical differentiation is used to approximate the gradients, introducing numerical error in the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple for numerical implementation. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation by using sub-triangles so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied. They are respectively the sound scattering by a sphere, the sound reflection by a plate in uniform mean flows and the sound propagation over a hump of irregular shape in uniform flows. The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA), against which the BEM solutions are compared. The comparisons show very good agreement and validate the accuracy of the BEM approach implemented here.
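A hedged sketch of the sub-triangle idea described above: split a surface triangle that has the collocation (field) point at one vertex into sub-triangles at the centroid and apply an ordinary quadrature rule on each, so that no quadrature point coincides with the field point. The kernel and the three-point quadrature rule are illustrative choices, not the report's own.

```python
import numpy as np

def tri_area(tri):
    (x0, y0), (x1, y1), (x2, y2) = tri
    return 0.5 * abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))

def centroid_split(tri):
    """Split a triangle (3 x 2 array of vertices) into three sub-triangles."""
    c = tri.mean(axis=0)
    return [np.array([tri[0], tri[1], c]),
            np.array([tri[1], tri[2], c]),
            np.array([tri[2], tri[0], c])]

def integrate_on_triangle(f, tri):
    """Degree-2 rule (edge midpoints, equal weights) on a single triangle."""
    mids = [(tri[0] + tri[1]) / 2, (tri[1] + tri[2]) / 2, (tri[2] + tri[0]) / 2]
    return tri_area(tri) * sum(f(p) for p in mids) / 3.0

field_point = np.array([0.0, 0.0])
kernel = lambda p: 1.0 / np.linalg.norm(p - field_point)   # weakly singular 1/r

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])       # field point at a vertex
val = sum(integrate_on_triangle(kernel, t) for t in centroid_split(tri))
print(val)   # finite: the quadrature points stay away from the singular vertex
```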
Lax-Friedrichs sweeping scheme for static Hamilton-Jacobi equations
NASA Astrophysics Data System (ADS)
Kao, Chiu Yen; Osher, Stanley; Qian, Jianliang
2004-05-01
We propose a simple, fast sweeping method based on the Lax-Friedrichs monotone numerical Hamiltonian to approximate viscosity solutions of arbitrary static Hamilton-Jacobi equations in any number of spatial dimensions. By using the Lax-Friedrichs numerical Hamiltonian, we can easily obtain the solution at a specific grid point in terms of its neighbors, so that a Gauss-Seidel type nonlinear iterative method can be utilized. Furthermore, by incorporating a group-wise causality principle into the Gauss-Seidel iteration by following a finite group of characteristics, we obtain an easy-to-implement, sweeping-type, and fast-convergent numerical method. However, unlike other methods based on the Godunov numerical Hamiltonian, some computational boundary conditions are needed in the implementation. We give a simple recipe which enforces a version of the discrete min-max principle. Some convergence analysis is done for the one-dimensional eikonal equation. Extensive 2-D and 3-D numerical examples illustrate the efficiency and accuracy of the new approach. To our knowledge, this is the first fast numerical method based on discretizing the Hamilton-Jacobi equation directly without assuming convexity and/or homogeneity of the Hamiltonian.
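A minimal sketch of the scheme for the one-dimensional eikonal equation, the case for which the paper reports convergence analysis. The min-with-previous-value safeguard is a common practical monotonicity device and an assumption here, not necessarily the paper's exact recipe.

```python
import numpy as np

# Hedged sketch: Lax-Friedrichs sweeping for |u'(x)| = 1 on (0, 1) with
# u(0) = u(1) = 0 (distance to the boundary). sigma >= max |H'(p)| = 1 here.
N = 101
h = 1.0 / (N - 1)
sigma = 1.0
u = np.full(N, 1e6)          # large initial guess in the interior
u[0] = u[-1] = 0.0           # Dirichlet boundary data

H = lambda p: abs(p)         # eikonal Hamiltonian

for sweep in range(50):
    order = range(1, N - 1) if sweep % 2 == 0 else range(N - 2, 0, -1)
    for i in order:          # Gauss-Seidel: use freshly updated neighbours
        p_c = (u[i + 1] - u[i - 1]) / (2 * h)
        cand = (h / sigma) * (1.0 - H(p_c)) + 0.5 * (u[i + 1] + u[i - 1])
        u[i] = min(u[i], cand)   # monotone safeguard (common practice)

x = np.linspace(0, 1, N)
print(np.max(np.abs(u - np.minimum(x, 1 - x))))   # should be near machine zero
```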
Distributed numerical controllers
NASA Astrophysics Data System (ADS)
Orban, Peter E.
2001-12-01
While the basic principles of Numerical Controllers (NC) have not changed much over the years, the implementation of NCs has changed tremendously. NC equipment has evolved from yesterday's hard-wired specialty control apparatus to today's graphics-intensive, networked, increasingly PC-based open systems, controlling a wide variety of industrial equipment with positioning needs. One of the newest trends in NC technology is the distributed implementation of the controllers. Distributed implementation promises to offer robustness, lower implementation costs, and a scalable architecture. Historically, partitioning has been done along the hierarchical levels, moving individual modules into self-contained units. The paper discusses various NC architectures, the underlying technology for distributed implementation, and relevant design issues. First the functional requirements of individual NC modules are analyzed. Module functionality, cycle times, and data requirements are examined. Next the infrastructure for distributed node implementation is reviewed. Various communication protocols and distributed real-time operating system issues are investigated and compared. Finally, a different, vertical system partitioning, offering true scalability and reconfigurability, is presented.
NASA Astrophysics Data System (ADS)
Llewellin, E. W.
2010-02-01
LBflow is a flexible, extensible implementation of the lattice Boltzmann method, developed with geophysical applications in mind. The theoretical basis for LBflow, and its implementation, are presented in the companion paper, 'Part I'. This article covers the practical usage of LBflow and presents guidelines for obtaining optimal results from available computing power. The relationships among simulation resolution, accuracy, runtime and memory requirements are investigated in detail. Particular attention is paid to the origin, quantification and minimization of errors. LBflow is validated against analytical, numerical and experimental results for a range of three-dimensional flow geometries. The fluid conductance of prismatic pipes with various cross sections is calculated with LBflow and found to be in excellent agreement with published results. Simulated flow along sinusoidally constricted pipes gives good agreement with experimental data for a wide range of Reynolds number. The permeability of packs of spheres is determined and shown to be in excellent agreement with analytical results. The accuracy of internal flow patterns within the investigated geometries is also in excellent quantitative agreement with published data. The development of vortices within a sinusoidally constricted pipe with increasing Reynolds number is shown, demonstrating the insight that LBflow can offer as a 'virtual laboratory' for fluid flow.
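For readers unfamiliar with the lattice Boltzmann method that LBflow implements, the following is a hedged sketch of a single-relaxation-time (BGK) D2Q9 collide-and-stream update on a periodic lattice; the relaxation time, lattice size and initial condition are illustrative and unrelated to LBflow's own configuration.

```python
import numpy as np

# Hedged sketch of a BGK D2Q9 lattice Boltzmann kernel (illustrative only).
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
e = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
tau = 0.8                                    # relaxation time; nu = (tau - 0.5)/3

def equilibrium(rho, ux, uy):
    eu = e[:, 0, None, None]*ux + e[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)

def collide_and_stream(f):
    rho = f.sum(axis=0)
    ux = (f * e[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * e[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau           # BGK collision
    for i in range(9):                                   # periodic streaming
        f[i] = np.roll(np.roll(f[i], e[i, 0], axis=0), e[i, 1], axis=1)
    return f, rho, ux, uy

# tiny driver: a decaying shear wave on a periodic 64 x 64 lattice
nx = ny = 64
y = np.arange(ny)
ux0 = 0.05 * np.sin(2*np.pi*y/ny)[None, :] * np.ones((nx, ny))
f = equilibrium(np.ones((nx, ny)), ux0, np.zeros((nx, ny)))
for _ in range(200):
    f, rho, ux, uy = collide_and_stream(f)
print(ux.max())   # amplitude decays at a rate set by nu = (tau - 0.5)/3
```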
DOT National Transportation Integrated Search
2014-06-01
The Research and Implementation Manual describes the administrative processes used by : Research Administration to develop and implement the Michigan Department of Transportation : (MDOT) research program. Contents of this manual include a discussion...
John Todd--Numerical Mathematics Pioneer
ERIC Educational Resources Information Center
Albers, Don
2007-01-01
John Todd, now in his mid-90s, began his career as a pure mathematician, but World War II interrupted that. In this interview, he talks about his education, the significant developments in his becoming a numerical analyst, and the journey that concluded at Caltech. Among the interesting stories are how he met his wife-to-be the mathematician Olga…
Block structured adaptive mesh and time refinement for hybrid, hyperbolic + N-body systems
NASA Astrophysics Data System (ADS)
Miniati, Francesco; Colella, Phillip
2007-11-01
We present a new numerical algorithm for the solution of coupled collisional and collisionless systems, based on the block structured adaptive mesh and time refinement strategy (AMR). We describe the issues associated with the discretization of the system equations and the synchronization of the numerical solution on the hierarchy of grid levels. We implement a code based on a higher-order, conservative and directionally unsplit Godunov method for hydrodynamics; a symmetric, time-centered modified symplectic scheme for the collisionless component; and a multilevel, multigrid relaxation algorithm for the elliptic equation coupling the two components. Numerical results that illustrate the accuracy of the code and the relative merit of various implemented schemes are also presented.
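As a hedged illustration of a time-centred symplectic particle update of the kind mentioned for the collisionless component, the sketch below shows the standard kick-drift-kick leapfrog in a fixed external acceleration field; the paper's modified scheme and its AMR/self-gravity coupling are not reproduced.

```python
import numpy as np

# Hedged illustration: kick-drift-kick leapfrog for collisionless particles
# in a prescribed acceleration field (illustrative, not the paper's scheme).
def kick_drift_kick(x, v, accel, dt):
    a = accel(x)
    v_half = v + 0.5 * dt * a                    # half kick
    x_new = x + dt * v_half                      # full drift
    v_new = v_half + 0.5 * dt * accel(x_new)     # half kick with updated field
    return x_new, v_new

# example: harmonic potential, where leapfrog keeps the orbit to O(dt^2)
accel = lambda x: -x
x, v = np.array([1.0]), np.array([0.0])
for _ in range(1000):
    x, v = kick_drift_kick(x, v, accel, dt=0.01)
print(x, v)   # x ~ cos(10), v ~ -sin(10), up to second-order error
```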
NASA Astrophysics Data System (ADS)
Liu, Fenglai; Kong, Jing
2018-07-01
Unique technical challenges and their solutions for implementing semi-numerical Hartree-Fock exchange on the Phi processor are discussed, especially concerning the single-instruction-multiple-data type of processing and the small cache size. Benchmark calculations on a series of buckyball molecules with various Gaussian basis sets on a Phi processor and a six-core CPU show that the Phi processor provides as much as a 12-fold speedup with large basis sets compared with the conventional four-center electron repulsion integration approach performed on the CPU. The accuracy of the semi-numerical scheme is also evaluated and found to be comparable to that of the resolution-of-identity approach.
NASA Astrophysics Data System (ADS)
Gencoglu, Muharrem Tuncay; Baskonus, Haci Mehmet; Bulut, Hasan
2017-01-01
The main aim of this manuscript is to obtain numerical solutions for the nonlinear model of interpersonal relationships with a time-fractional derivative. The variational iteration method is formulated for this model and applied numerically to yield the desired solutions. Numerical simulations of the desired solutions are plotted using Wolfram Mathematica 9.
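A hedged sketch of the variational iteration method on a simple integer-order test problem (u' = u, u(0) = 1, with Lagrange multiplier lambda = -1), showing how successive corrections build the series solution; the time-fractional interpersonal-relationship model itself is not reproduced.

```python
import sympy as sp

# Hedged VIM sketch on u'(t) = u(t), u(0) = 1, whose exact solution is exp(t).
t, s = sp.symbols('t s')
u = sp.Integer(1)                 # initial guess u_0(t) = u(0) = 1
for _ in range(5):
    # correction functional with Lagrange multiplier lambda = -1:
    # u_{n+1}(t) = u_n(t) - Integral_0^t [u_n'(s) - u_n(s)] ds
    integrand = sp.diff(u, t).subs(t, s) - u.subs(t, s)
    u = sp.expand(u - sp.integrate(integrand, (s, 0, t)))
print(u)   # 1 + t + t**2/2 + ... : the partial Taylor sum of exp(t)
```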
Confrontation of the Magnetically Arrested Disc scenario with observations of FR II sources
NASA Astrophysics Data System (ADS)
Rusinek, Katarzyna; Sikora, Marek
2017-10-01
The main aim of our work was to check whether the powers of jets in FR II radio galaxies (RGs) and quasars (QSOs) can be reproduced by the Magnetically Arrested Disc (MAD) scenario. Assuming that the (H/R)^2 dependence of the jet production efficiency established in recent numerical simulations of the MAD scenario is correct, we demonstrate that in order to reproduce the observed jet powers in FR II sources: (i) accretion discs must be geometrically much thicker than the standard ones; and/or (ii) the jet production is strongly modulated.
Magnetic Bianchi type II string cosmological model in loop quantum cosmology
NASA Astrophysics Data System (ADS)
Rikhvitsky, Victor; Saha, Bijan; Visinescu, Mihai
2014-07-01
The loop quantum cosmology of the Bianchi type II string cosmological model in the presence of a homogeneous magnetic field is studied. We present the effective equations which provide modifications to the classical equations of motion due to quantum effects. The numerical simulations confirm that the big bang singularity is resolved by quantum gravity effects.
NASA Astrophysics Data System (ADS)
Tang, Jinyun; Riley, William J.; Niu, Jie
2015-12-01
We implemented the Amenu-Kumar model in the Community Land Model (CLM4.5) to simulate plant Root Hydraulic Redistribution (RHR) and analyzed its influence on CLM hydrology from site to global scales. We evaluated two numerical implementations: the first solved the coupled equations of root and soil water transport concurrently, while the second solved the two equations sequentially. Through sensitivity analysis, we demonstrate that the sequentially coupled implementation (SCI) is numerically incorrect, whereas the tightly coupled implementation (TCI) is numerically robust with numerical time steps varying from 1 to 30 min. At the site level, we found the SCI approach resulted in better agreement with measured evapotranspiration (ET) at the AmeriFlux Blodgett Forest site, California, whereas the two approaches resulted in equally poor agreement between predicted and measured ET at the LBA Tapajos KM67 Mature Forest site in Amazon, Brazil. Globally, the SCI approach overestimated annual land ET by as much as 3.5 mm d⁻¹ in some grid cells when compared to the TCI estimates. These comparisons demonstrate that TCI is a more robust numerical implementation of RHR. However, we found, even with TCI, that incorporating RHR resulted in worse agreement with measured soil moisture at both the Blodgett Forest and Tapajos sites and degraded the agreement between simulated terrestrial water storage anomaly and Gravity Recovery and Climate Experiment (GRACE) observations. We find that including RHR in CLM4.5 improved ET predictions compared with the FLUXNET-MTE estimates north of 20° N but led to poorer predictions in the tropics. The biases in ET were robust and significant regardless of the four different pedotransfer functions or of the two meteorological forcing data sets we applied. We also found that the simulated water table was unrealistically sensitive to RHR. Therefore, we contend that further structural and data improvements are warranted to improve the hydrological dynamics in CLM4.5.
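A deliberately simplified toy, assuming two linearly exchanging water stores, to illustrate why a sequentially coupled update (SCI) can misbehave while a tightly coupled implicit update (TCI) stays robust; the coefficients and equations are illustrative and are not the CLM4.5 root/soil transport equations.

```python
import numpy as np

# Toy model: two linearly exchanging water stores s and r. k, dt and the
# equations are illustrative, not the CLM4.5 root/soil model.
k, dt, nstep = 50.0, 0.1, 100

def tci_step(s, r):
    """Tightly coupled: backward Euler on both stores solved simultaneously."""
    A = np.array([[1 + k*dt, -k*dt],
                  [-k*dt, 1 + k*dt]])
    return np.linalg.solve(A, np.array([s, r]))

def sci_step(s, r):
    """Sequentially coupled: update s with the old r, then r with the new s."""
    s_new = (s + k*dt*r) / (1 + k*dt)
    r_new = (r + k*dt*s_new) / (1 + k*dt)
    return s_new, r_new

s_t, r_t = 1.0, 0.0
s_s, r_s = 1.0, 0.0
for _ in range(nstep):
    s_t, r_t = tci_step(s_t, r_t)
    s_s, r_s = sci_step(s_s, r_s)
print(s_t + r_t, s_s + r_s)   # TCI conserves the total (= 1); SCI does not
```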
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-13
...EPA is proposing a limited approval and a limited disapproval of a revision to the West Virginia State Implementation Plan (SIP) submitted by the State of West Virginia through the West Virginia Department of Environmental Protection (WVDEP) on June 18, 2008, that addresses regional haze for the first implementation period. This revision addresses the requirements of the Clean Air Act (CAA) and EPA's rules that require states to prevent any future, and remedy any existing, anthropogenic impairment of visibility in mandatory Class I areas caused by emissions of air pollutants from numerous sources located over a wide geographic area (also referred to as the ``regional haze program''). States are required to assure reasonable progress toward the national goal of achieving natural visibility conditions in Class I areas. EPA is proposing a limited approval of this SIP revision to implement the regional haze requirements for West Virginia on the basis that the revision, as a whole, strengthens the West Virginia SIP. Also in this action, EPA is proposing a limited disapproval of this same SIP revision because of the deficiencies in the State's June 2008 regional haze SIP submittal arising from the remand by the U.S. Court of Appeals for the District of Columbia (D.C. Circuit) to EPA of the Clean Air Interstate Rule (CAIR). EPA is also proposing to approve this revision as meeting the requirements of 110(a)(2)(D)(i)(II) and 110(a)(2)(J), relating to visibility protection for the 1997 8-Hour Ozone National Ambient Air Quality Standard (NAAQS) and the 1997 and 2006 fine particulate matter (PM2.5) NAAQS.
Evaluation of Proteus as a Tool for the Rapid Development of Models of Hydrologic Systems
NASA Astrophysics Data System (ADS)
Weigand, T. M.; Farthing, M. W.; Kees, C. E.; Miller, C. T.
2013-12-01
Models of modern hydrologic systems can be complex and involve a variety of operators with varying character. The goal is to implement approximations of such models that are efficient both to develop and to compute, two naturally competing objectives. Proteus is a Python-based toolbox that supports prototyping of model formulations as well as a wide variety of modern numerical methods and parallel computing. We used Proteus to develop numerical approximations for three models: Richards' equation, a brine flow model derived using the Thermodynamically Constrained Averaging Theory (TCAT), and a multiphase TCAT-based tumor growth model. For Richards' equation, we investigated discontinuous Galerkin solutions with higher-order time integration based on the backward difference formulas. The TCAT brine flow model was implemented using Proteus and a variety of numerical methods were compared to hand-coded solutions. Finally, an existing tumor growth model was implemented in Proteus to introduce more advanced numerics and allow the code to be run in parallel. From these three example models, Proteus was found to be an attractive open-source option for rapidly developing high-quality code for solving existing and evolving computational science models.
40 CFR 72.73 - State issuance of Phase II permits.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) PERMITS REGULATION Acid Rain Phase II Implementation § 72.73 State issuance of Phase II permits... permit program under part 70 of this chapter and that has a State Acid Rain program accepted by the Administrator under § 72.71 shall be responsible for administering and enforcing Acid Rain permits effective in...
40 CFR 72.73 - State issuance of Phase II permits.
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) PERMITS REGULATION Acid Rain Phase II Implementation § 72.73 State issuance of Phase II permits... permit program under part 70 of this chapter and that has a State Acid Rain program accepted by the Administrator under § 72.71 shall be responsible for administering and enforcing Acid Rain permits effective in...
40 CFR 72.73 - State issuance of Phase II permits.
Code of Federal Regulations, 2012 CFR
2012-07-01
... (CONTINUED) PERMITS REGULATION Acid Rain Phase II Implementation § 72.73 State issuance of Phase II permits... permit program under part 70 of this chapter and that has a State Acid Rain program accepted by the Administrator under § 72.71 shall be responsible for administering and enforcing Acid Rain permits effective in...
40 CFR 72.73 - State issuance of Phase II permits.
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) PERMITS REGULATION Acid Rain Phase II Implementation § 72.73 State issuance of Phase II permits... permit program under part 70 of this chapter and that has a State Acid Rain program accepted by the Administrator under § 72.71 shall be responsible for administering and enforcing Acid Rain permits effective in...
40 CFR 72.74 - Federal issuance of Phase II permits.
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROGRAMS (CONTINUED) PERMITS REGULATION Acid Rain Phase II Implementation § 72.74 Federal issuance of Phase II permits. (a)(1) The Administrator will be responsible for administering and enforcing Acid Rain... and enforcing Acid Rain permits for such sources under § 72.73(a). (2) After and to the extent the...
40 CFR 72.74 - Federal issuance of Phase II permits.
Code of Federal Regulations, 2014 CFR
2014-07-01
... PROGRAMS (CONTINUED) PERMITS REGULATION Acid Rain Phase II Implementation § 72.74 Federal issuance of Phase II permits. (a)(1) The Administrator will be responsible for administering and enforcing Acid Rain... and enforcing Acid Rain permits for such sources under § 72.73(a). (2) After and to the extent the...
40 CFR 72.74 - Federal issuance of Phase II permits.
Code of Federal Regulations, 2012 CFR
2012-07-01
... PROGRAMS (CONTINUED) PERMITS REGULATION Acid Rain Phase II Implementation § 72.74 Federal issuance of Phase II permits. (a)(1) The Administrator will be responsible for administering and enforcing Acid Rain... and enforcing Acid Rain permits for such sources under § 72.73(a). (2) After and to the extent the...
40 CFR 72.74 - Federal issuance of Phase II permits.
Code of Federal Regulations, 2013 CFR
2013-07-01
... PROGRAMS (CONTINUED) PERMITS REGULATION Acid Rain Phase II Implementation § 72.74 Federal issuance of Phase II permits. (a)(1) The Administrator will be responsible for administering and enforcing Acid Rain... and enforcing Acid Rain permits for such sources under § 72.73(a). (2) After and to the extent the...
40 CFR 72.73 - State issuance of Phase II permits.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) PERMITS REGULATION Acid Rain Phase II Implementation § 72.73 State issuance of Phase II permits... permit program under part 70 of this chapter and that has a State Acid Rain program accepted by the Administrator under § 72.71 shall be responsible for administering and enforcing Acid Rain permits effective in...
40 CFR 72.74 - Federal issuance of Phase II permits.
Code of Federal Regulations, 2011 CFR
2011-07-01
... PROGRAMS (CONTINUED) PERMITS REGULATION Acid Rain Phase II Implementation § 72.74 Federal issuance of Phase II permits. (a)(1) The Administrator will be responsible for administering and enforcing Acid Rain... and enforcing Acid Rain permits for such sources under § 72.73(a). (2) After and to the extent the...
2013-09-30
numerical efforts undertaken here implement established aspects of Boussinesq-type modeling, developed by the PI and other researchers. These aspects...the Boussinesq-type framework, and then implement in a numerical model. Once this comprehensive model is developed and tested against established...phenomena that might be observed at New River. WORK COMPLETED In FY13 we have continued the development of a Boussinesq-type formulation that
Parallel Implementation of a High Order Implicit Collocation Method for the Heat Equation
NASA Technical Reports Server (NTRS)
Kouatchou, Jules; Halem, Milton (Technical Monitor)
2000-01-01
We combine a high order compact finite difference approximation and collocation techniques to numerically solve the two dimensional heat equation. The resulting method is implicit and can be parallelized with a strategy that allows parallelization across both time and space. We compare the parallel implementation of the new method with a classical implicit method, namely the Crank-Nicolson method, where the parallelization is done across space only. Numerical experiments are carried out on the SGI Origin 2000.
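A hedged sketch of the classical comparison scheme named above, a Crank-Nicolson step for the one-dimensional heat equation (the paper's own high-order compact collocation scheme and its space-time parallelisation are not reproduced).

```python
import numpy as np

# Hedged sketch: Crank-Nicolson for u_t = alpha * u_xx on (0, 1) with
# homogeneous Dirichlet ends; grid, time step and initial data are illustrative.
N, alpha, dt = 101, 1.0, 1e-4
dx = 1.0 / (N - 1)
r = alpha * dt / (2 * dx**2)

# (I - r*D2) u^{n+1} = (I + r*D2) u^n on the interior points
main = (1 + 2*r) * np.ones(N - 2)
off = -r * np.ones(N - 3)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
B = (np.diag((1 - 2*r) * np.ones(N - 2))
     + np.diag(r * np.ones(N - 3), 1) + np.diag(r * np.ones(N - 3), -1))

x = np.linspace(0, 1, N)
u = np.sin(np.pi * x)                  # exact solution decays as exp(-pi^2 t)
for _ in range(100):
    u[1:-1] = np.linalg.solve(A, B @ u[1:-1])

exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * 100 * dt)
print(np.max(np.abs(u - exact)))       # small second-order discretisation error
```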
Models for twistable elastic polymers in Brownian dynamics, and their implementation for LAMMPS.
Brackley, C A; Morozov, A N; Marenduzzo, D
2014-04-07
An elastic rod model for semi-flexible polymers is presented. Theory for a continuum rod is reviewed, and it is shown that a popular discretised model used in numerical simulations gives the correct continuum limit. Correlation functions relating to both bending and twisting of the rod are derived for both continuous and discrete cases, and results are compared with numerical simulations. Finally, two possible implementations of the discretised model in the multi-purpose molecular dynamics software package LAMMPS are described.
Memoranda from the Chair of EPA's Science Policy Council to the Science Policy Council and the Science Policy Council Steering Committee regarding Implementation of the Cancer Guidelines and Accompanying Supplemental Guidance.
DOT National Transportation Integrated Search
2015-01-01
The Research and Implementation Manual describes the administrative processes used by Research Administration to develop and implement the Michigan Department of Transportation (MDOT) research program. Contents of this manual include a discussion of ...
Efficient Simulation of Secondary Fluorescence Via NIST DTSA-II Monte Carlo.
Ritchie, Nicholas W M
2017-06-01
Secondary fluorescence, the final term in the familiar matrix correction triumvirate Z·A·F, is the most challenging for Monte Carlo models to simulate. In fact, only two implementations of Monte Carlo models commonly used to simulate electron probe X-ray spectra can calculate secondary fluorescence: PENEPMA and NIST DTSA-II (the latter is discussed herein). These two models share many physical models but there are some important differences in the way each implements X-ray emission, including secondary fluorescence. PENEPMA is based on PENELOPE, a general purpose software package for simulation of both relativistic and subrelativistic electron/positron interactions with matter. On the other hand, NIST DTSA-II was designed exclusively for simulation of X-ray spectra generated by subrelativistic electrons. NIST DTSA-II uses variance reduction techniques unsuited to general purpose code. These optimizations help NIST DTSA-II to be orders of magnitude more computationally efficient while retaining detector position sensitivity. Simulations execute in minutes rather than hours and can model differences that result from detector position. Both PENEPMA and NIST DTSA-II are capable of handling complex sample geometries and we will demonstrate that both are of similar accuracy when modeling experimental secondary fluorescence data from the literature.
75 FR 68911 - Regulations Under the Genetic Information Nondiscrimination Act of 2008
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-09
...The Equal Employment Opportunity Commission (``EEOC'' or ``Commission'') is issuing a final rule to implement Title II of the Genetic Information Nondiscrimination Act of 2008 (``GINA''). Congress enacted Title II of GINA to protect job applicants, current and former employees, labor union members, and apprentices and trainees from discrimination based on their genetic information. Title II of GINA requires the EEOC to issue implementing regulations. The Commission issued a proposed rule in the Federal Register on March 2, 2009, for a sixty-day notice and comment period that ended on May 1, 2009. After consideration of the public comments, the Commission has revised portions of both the final rule and the preamble.
NASA Technical Reports Server (NTRS)
Beers, B. L.; Pine, V. W.; Hwang, H. C.; Bloomberg, H. W.; Lin, D. L.; Schmidt, M. J.; Strickland, D. J.
1979-01-01
The model consists of four phases: single electron dynamics, single electron avalanche, negative streamer development, and tree formation. Numerical algorithms and computer code implementations are presented for the first three phases. An approach to developing a code description of the fourth phase is discussed. Numerical results are presented for a crude material model of Teflon.
Numerical Algorithms for Acoustic Integrals - The Devil is in the Details
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.
1996-01-01
The accurate prediction of the aeroacoustic field generated by aerospace vehicles or nonaerospace machinery is necessary for designers to control and reduce source noise. Powerful computational aeroacoustic methods, based on various acoustic analogies (primarily the Lighthill acoustic analogy) and Kirchhoff methods, have been developed for prediction of noise from complicated sources, such as rotating blades. Both methods ultimately predict the noise through a numerical evaluation of an integral formulation. In this paper, we consider three generic acoustic formulations and several numerical algorithms that have been used to compute the solutions to these formulations. Algorithms for retarded-time formulations are the most efficient and robust, but they are difficult to implement for supersonic-source motion. Collapsing-sphere and emission-surface formulations are good alternatives when supersonic-source motion is present, but the numerical implementations of these formulations are more computationally demanding. New algorithms - which utilize solution adaptation to provide a specified error level - are needed.
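A hedged sketch of the retarded-time evaluation that such formulations rely on: for a subsonically moving point source, the emission time solves tau = t - |x_obs - x_src(tau)|/c and can be found by fixed-point iteration. The trajectory and observer below are illustrative.

```python
import numpy as np

# Hedged sketch of retarded-time evaluation for a subsonically moving source.
c = 340.0                                  # speed of sound (m/s)
x_obs = np.array([10.0, 0.0, 0.0])         # observer position (illustrative)

def x_src(tau):
    """Illustrative source trajectory: uniform subsonic motion along x."""
    return np.array([50.0 * tau, 0.0, 0.0])   # Mach ~ 0.15

def retarded_time(t, tol=1e-12, max_iter=100):
    tau = t                                # initial guess: no delay
    for _ in range(max_iter):
        r = np.linalg.norm(x_obs - x_src(tau))
        tau_new = t - r / c                # fixed-point update (contracts for M < 1)
        if abs(tau_new - tau) < tol:
            return tau_new
        tau = tau_new
    return tau

t = 0.1
tau = retarded_time(t)
print(tau, np.linalg.norm(x_obs - x_src(tau)) / c)   # emission time and delay
```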
Turovets, Sergei; Volkov, Vasily; Zherdetsky, Aleksej; Prakonina, Alena; Malony, Allen D
2014-01-01
The Electrical Impedance Tomography (EIT) and electroencephalography (EEG) forward problems in anisotropic inhomogeneous media like the human head belong to the class of three-dimensional boundary value problems for elliptic equations with mixed derivatives. We introduce and explore the performance of several new promising numerical techniques, which seem to be more suitable for solving these problems. The proposed numerical schemes combine the fictitious domain approach together with the finite-difference method and an optimally preconditioned Conjugate Gradient (CG)-type iterative method for treatment of the discrete model. The numerical scheme includes the standard operations of summation and multiplication of sparse matrices and vectors, as well as FFT, making it easy to implement and amenable to effective parallel implementation. Some typical use cases for the EIT/EEG problems are considered, demonstrating the high efficiency of the proposed numerical technique.
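A minimal sketch of a preconditioned conjugate-gradient iteration of the kind referred to above, here with a simple Jacobi (diagonal) preconditioner on a small symmetric positive-definite test matrix; the paper's fictitious-domain operators and FFT-based preconditioning are not reproduced.

```python
import numpy as np

# Hedged sketch: preconditioned CG with a Jacobi preconditioner (illustrative).
def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test problem: a 1-D Laplacian-like matrix with a variable diagonal
n = 200
d = 2.0 + np.linspace(0, 1, n)
A = np.diag(d) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
b = np.ones(n)
M_inv = np.diag(1.0 / np.diag(A))         # Jacobi preconditioner
x = pcg(A, b, M_inv)
print(np.linalg.norm(A @ x - b))          # residual should be ~1e-10
```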
Contribution of the Recent AUSM Schemes to the Overflow Code: Implementation and Validation
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Buning, Pieter G.
2000-01-01
We shall present results of a recent collaborative effort between the authors attempting to implement the numerical flux scheme AUSM+, and its new developments, into a widely used NASA code, OVERFLOW. This paper is intended to give a thorough and systematic documentation of the solutions of default test cases using the AUSM+ scheme. Hence we will address various aspects of the numerical solutions, such as accuracy, convergence rate, and effects of turbulence models, over a variety of geometries and speed regimes. We will briefly describe the numerical schemes employed in the calculations, including the capability of solving for low-speed flows and multiphase flows by employing the concept of a numerical speed of sound. As a bonus, this low-Mach-number formulation also enhances convergence to steady solutions for flows even at transonic speed. Calculations for complex 3D turbulent flows were performed with several turbulence models and the results display excellent agreement with measured data.
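A hedged, textbook-style sketch of the AUSM+ interface flux for the one-dimensional Euler equations, following the commonly published split Mach-number and pressure polynomials; the OVERFLOW-specific implementation, the low-speed numerical-speed-of-sound modification and the multiphase extension are not reproduced.

```python
import numpy as np

# Hedged, textbook-style AUSM+ interface flux for the 1-D Euler equations.
GAMMA = 1.4
ALPHA, BETA = 3.0 / 16.0, 1.0 / 8.0

def mach_split(M, sign):
    """Fourth-degree split Mach number M4+/- (sign = +1 or -1)."""
    if abs(M) >= 1.0:
        return 0.5 * (M + sign * abs(M))
    return sign * 0.25 * (M + sign) ** 2 + sign * BETA * (M ** 2 - 1.0) ** 2

def pressure_split(M, sign):
    """Fifth-degree split pressure P5+/- (sign = +1 or -1)."""
    if abs(M) >= 1.0:
        return 0.5 * (1.0 + sign * np.sign(M))
    return (0.25 * (M + sign) ** 2 * (2.0 - sign * M)
            + sign * ALPHA * M * (M ** 2 - 1.0) ** 2)

def ausm_plus_flux(rhoL, uL, pL, rhoR, uR, pR):
    aL = np.sqrt(GAMMA * pL / rhoL)
    aR = np.sqrt(GAMMA * pR / rhoR)
    a_half = 0.5 * (aL + aR)            # one simple choice of interface sound speed
    ML, MR = uL / a_half, uR / a_half
    m_half = mach_split(ML, +1) + mach_split(MR, -1)
    p_half = pressure_split(ML, +1) * pL + pressure_split(MR, -1) * pR
    HL = GAMMA / (GAMMA - 1.0) * pL / rhoL + 0.5 * uL ** 2
    HR = GAMMA / (GAMMA - 1.0) * pR / rhoR + 0.5 * uR ** 2
    phiL = np.array([rhoL, rhoL * uL, rhoL * HL])
    phiR = np.array([rhoR, rhoR * uR, rhoR * HR])
    # upwind the convected quantity with the interface Mach number
    conv = a_half * (max(m_half, 0.0) * phiL + min(m_half, 0.0) * phiR)
    return conv + np.array([0.0, p_half, 0.0])

print(ausm_plus_flux(1.0, 0.0, 1.0, 0.125, 0.0, 0.1))   # Sod-like interface states
```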
Numerical Modeling of Ablation Heat Transfer
NASA Technical Reports Server (NTRS)
Ewing, Mark E.; Laker, Travis S.; Walker, David T.
2013-01-01
A unique numerical method has been developed for solving one-dimensional ablation heat transfer problems. This paper provides a comprehensive description of the method, along with detailed derivations of the governing equations. This methodology supports solutions for traditional ablation modeling including such effects as heat transfer, material decomposition, pyrolysis gas permeation and heat exchange, and thermochemical surface erosion. The numerical scheme utilizes a control-volume approach with a variable grid to account for surface movement. This method directly supports implementation of nontraditional models such as material swelling and mechanical erosion, extending capabilities for modeling complex ablation phenomena. Verifications of the numerical implementation are provided using analytical solutions, code comparisons, and the method of manufactured solutions. These verifications are used to demonstrate solution accuracy and proper error convergence rates. A simple demonstration of a mechanical erosion (spallation) model is also provided to illustrate the unique capabilities of the method.
77 FR 30212 - Approval and Promulgation of Air Quality Implementation Plans; Vermont; Regional Haze
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-22
... Promulgation of Air Quality Implementation Plans; Vermont; Regional Haze AGENCY: Environmental Protection... Implementation Plan (SIP) that addresses regional haze for the first planning period from 2008 through 2018. The... numerous sources located over a wide geographic area (also referred to as the ``regional haze program...
Volume II: Ecosystem management: principles and applications.
M.E. Jensen; P.S. Bourgeron
1994-01-01
This document provides land managers with practical suggestions for implementing ecosystem management. It contains 28 papers organized into five sections: historical perspectives, ecological principles, sampling design, case studies, and implementation strategies.
NASA Technical Reports Server (NTRS)
Bruno, John
1984-01-01
The results of an investigation into the feasibility of using the MPP for direct and large eddy simulations of the Navier-Stokes equations are presented. A major part of this study was devoted to the implementation of two of the standard numerical algorithms for CFD. These implementations were not run on the Massively Parallel Processor (MPP) since the machine delivered to NASA Goddard does not have sufficient capacity. Instead, a detailed implementation plan was designed, and from it estimates of the time and space requirements of the algorithms on a suitably configured MPP were derived. In addition, other issues related to the practical implementation of these algorithms on an MPP-like architecture were considered; namely, adaptive grid generation, zonal boundary conditions, the table lookup problem, and the software interface. Performance estimates show that the architectural components of the MPP, the Staging Memory and the Array Unit, appear to be well suited to the numerical algorithms of CFD. This, combined with the prospect of building a faster and larger MPP-like machine, holds the promise of achieving the sustained gigaflop rates required for numerical simulations in CFD.
Hierarchical matrices implemented into the boundary integral approaches for gravity field modelling
NASA Astrophysics Data System (ADS)
Čunderlík, Róbert; Vipiana, Francesca
2017-04-01
Boundary integral approaches applied for gravity field modelling have been recently developed to solve the geodetic boundary value problems numerically, or to process satellite observations, e.g. from the GOCE satellite mission. In order to obtain numerical solutions of "cm-level" accuracy, such approaches require a very refined level of discretization or resolution. This leads to enormous memory requirements that need to be reduced. An implementation of Hierarchical Matrices (H-matrices) can significantly reduce the numerical complexity of these approaches. The main idea of H-matrices is to approximate the entire system matrix by splitting it into a family of submatrices. Large submatrices are stored in factorized representation, while small submatrices are stored in standard representation. This allows reducing memory requirements significantly while improving the efficiency. The poster presents our preliminary results of implementations of the H-matrices into the existing boundary integral approaches based on the boundary element method or the method of fundamental solution.
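A hedged numerical illustration of why H-matrices help: an off-diagonal block of a smooth-kernel system matrix coupling two well-separated point clusters admits an accurate low-rank (factorized) representation. The 1/r kernel, cluster geometry and tolerance below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: low-rank compression of an off-diagonal kernel block, the
# building block of an H-matrix representation (illustrative kernel/geometry).
rng = np.random.default_rng(0)
X = rng.random((500, 3))                                 # point cluster A
Y = rng.random((500, 3)) + np.array([5.0, 0.0, 0.0])     # well-separated cluster B

# dense off-diagonal block K[i, j] = 1 / |x_i - y_j|
K = 1.0 / np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)

U, s, Vt = np.linalg.svd(K, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))                         # numerical rank at tol 1e-8
K_lr = (U[:, :k] * s[:k]) @ Vt[:k, :]                    # factorized representation

print("rank:", k, "of", min(K.shape))
print("relative error:", np.linalg.norm(K - K_lr) / np.linalg.norm(K))
print("storage ratio:", (U[:, :k].size + k + Vt[:k].size) / K.size)
```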
NASA Technical Reports Server (NTRS)
Nixon, Douglas D.
2009-01-01
Discrete/Continuous (D/C) control theory is a new generalized theory of discrete-time control that expands the concept of conventional (exact) discrete-time control to create a framework for design and implementation of discrete-time control systems that include a continuous-time command function generator, so that actuator commands need not be constant between control decisions but can be more generally defined and implemented as functions that vary with time across the sample period. Because the plant/control system construct contains two linear subsystems arranged in tandem, a novel dual-kernel counter-flow convolution integral appears in the formulation. As part of the D/C system design and implementation process, numerical evaluation of that integral over the sample period is required. Three fundamentally different evaluation methods and associated algorithms are derived for the constant-coefficient case. Numerical results are matched against three available examples that have closed-form solutions.
Object oriented development of engineering software using CLIPS
NASA Technical Reports Server (NTRS)
Yoon, C. John
1991-01-01
Engineering applications involve numeric complexity and manipulation of large amounts of data. Traditionally, numeric computation has been the main concern in developing engineering software. As engineering application software became larger and more complex, management of resources such as data, rather than numeric complexity, became the major software design problem. Object oriented design and implementation methodologies can improve the reliability, flexibility, and maintainability of the resulting software; however, some tasks are better solved with the traditional procedural paradigm. The C Language Integrated Production System (CLIPS), with its deffunction and defgeneric constructs, supports the procedural paradigm. The natural blending of object oriented and procedural paradigms has been cited as the reason for the popularity of the C++ language. The CLIPS Object Oriented Language's (COOL) object oriented features are more versatile than C++'s. A software design methodology based on object oriented and procedural approaches, appropriate for engineering software and implemented in CLIPS, is outlined. A method for sensor placement for Space Station Freedom is being implemented in COOL as a sample problem.
Terahertz bandwidth all-optical Hilbert transformers based on long-period gratings.
Ashrafi, Reza; Azaña, José
2012-07-01
A novel, all-optical design for implementing terahertz (THz) bandwidth real-time Hilbert transformers is proposed and numerically demonstrated. An all-optical Hilbert transformer can be implemented using a uniform-period long-period grating (LPG) with a properly designed amplitude-only grating apodization profile, incorporating a single π-phase shift in the middle of the grating length. The designed LPG-based Hilbert transformers can be practically implemented using either fiber-optic or integrated-waveguide technologies. As a generalization, photonic fractional Hilbert transformers are also designed based on the same optical platform. In this general case, the resulting LPGs have multiple π-phase shifts along the grating length. Our numerical simulations confirm that all-optical Hilbert transformers capable of processing arbitrary optical signals with bandwidths well in the THz range can be implemented using feasible fiber/waveguide LPG designs.
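As a hedged numerical reminder of what a (fractional) Hilbert transformer does to a signal, the sketch below applies the frequency-domain phase factor digitally; this is only a software check of the target response, not the LPG-based all-optical device.

```python
import numpy as np

# Hedged sketch: a (fractional) Hilbert transformer applies a phase jump of
# P*pi/2 with opposite sign for positive and negative frequencies; P = 1 gives
# the ordinary Hilbert transform. Digital check only, not the optical device.
def fractional_hilbert(x, P=1.0):
    X = np.fft.fft(x)
    f = np.fft.fftfreq(len(x))
    H = np.exp(-1j * P * np.pi / 2 * np.sign(f))   # phase-only transfer function
    return np.fft.ifft(X * H)

t = np.linspace(0, 1, 1024, endpoint=False)
x = np.cos(2 * np.pi * 50 * t)
y = fractional_hilbert(x, P=1.0)
print(np.max(np.abs(y.real - np.sin(2 * np.pi * 50 * t))))   # ~0: cos -> sin
```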
NASA Astrophysics Data System (ADS)
Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.
2017-12-01
This paper describes the details of the parallel implementation of the SIMPLE algorithm for numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. The iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of the grid model decomposition using fictitious cells. We discuss the specific features for the storage of distributed matrices and implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of the multigrid SLAE solver tuning on the general efficiency of the algorithm; the tuning involves the types of the cycles used (V, W, and F), the number of iterations of a smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of efficiency evaluation for parallelization of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by two algorithms. It is shown that the proposed parallel implementation enables efficient computations for the problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.
Impact of implementation choices on quantitative predictions of cell-based computational models
NASA Astrophysics Data System (ADS)
Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.
2017-09-01
'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
The Bean model in superconductivity: Variational formulation and numerical solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prigozhin, L.
The Bean critical-state model describes the penetration of magnetic field into type-II superconductors. Mathematically, this is a free boundary problem and its solution is of interest in applied superconductivity. We derive a variational formulation for the Bean model and use it to solve two-dimensional and axially symmetric critical-state problems numerically. 25 refs., 9 figs., 1 tab.
24 CFR 135.30 - Numerical goals for meeting the greatest extent feasible requirement.
Code of Federal Regulations, 2010 CFR
2010-04-01
... date of this rule. (3) For recipients that do not engage in training, or hiring, but award contracts to... (b) of this section apply to new hires. The numerical goals reflect the aggregate hires. Efforts to... percent of the aggregate number of new hires for the one year period beginning in FY 1995; (ii) 20 percent...
NASA Technical Reports Server (NTRS)
Choi, S. R.; Gyekenyesi, J. P.
2001-01-01
Slow crack growth analysis was performed with three different loading histories including constant stress-rate/constant stress-rate testing (Case I loading), constant stress/constant stress-rate testing (Case II loading), and cyclic stress/constant stress-rate testing (Case III loading). Strength degradation due to slow crack growth and/or damage accumulation was determined numerically as a function of the percentage of interruption time between the two loading sequences for a given loading history. The numerical solutions were examined with the experimental data determined at elevated temperatures using four different advanced ceramic materials, two silicon nitrides, one silicon carbide and one alumina for the Case I loading history, and alumina for the Case II loading history. The numerical solutions were in reasonable agreement with the experimental data, indicating that notwithstanding some degree of creep deformation present for some test materials, slow crack growth was a governing mechanism associated with failure for all the remaining materials.
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Gyekenyesi, John P.
2000-01-01
Slow crack growth analysis was performed with three different loading histories including constant stress-rate/constant stress-rate testing (Case I loading), constant stress/constant stress-rate testing (Case II loading), and cyclic stress/constant stress-rate testing (Case III loading). Strength degradation due to slow crack growth and/or damage accumulation was determined numerically as a function of the percentage of interruption time between the two loading sequences for a given loading history. The numerical solutions were examined with the experimental data determined at elevated temperatures using four different advanced ceramic materials, two silicon nitrides, one silicon carbide and one alumina for the Case I loading history, and alumina for the Case II loading history. The numerical solutions were in reasonable agreement with the experimental data, indicating that notwithstanding some degree of creep deformation present for some test materials, slow crack growth was a governing mechanism associated with failure for all the test materials.
NASA Astrophysics Data System (ADS)
Katsaounis, T. D.
2005-02-01
The scope of this book is to present well known simple and advanced numerical methods for solving partial differential equations (PDEs) and how to implement these methods using the programming environment of the software package Diffpack. A basic background in PDEs and numerical methods is required by the potential reader. Further, a basic knowledge of the finite element method and its implementation in one and two space dimensions is required. The authors claim that no prior knowledge of the package Diffpack is required, which is true, but the reader should be at least familiar with an object oriented programming language like C++ in order to better comprehend the programming environment of Diffpack. Certainly, a prior knowledge or usage of Diffpack would be a great advantage to the reader. The book consists of 15 chapters, each one written by one or more authors. Each chapter is basically divided into two parts: the first part is about mathematical models described by PDEs and numerical methods to solve these models and the second part describes how to implement the numerical methods using the programming environment of Diffpack. Each chapter closes with a list of references on its subject. The first nine chapters cover well known numerical methods for solving the basic types of PDEs. Further, programming techniques on the serial as well as on the parallel implementation of numerical methods are also included in these chapters. The last five chapters are dedicated to applications, modelled by PDEs, in a variety of fields. The first chapter is an introduction to parallel processing. It covers fundamentals of parallel processing in a simple and concrete way and no prior knowledge of the subject is required. Examples of parallel implementation of basic linear algebra operations are presented using the Message Passing Interface (MPI) programming environment. Here, some knowledge of MPI routines is required by the reader. Examples solving in parallel simple PDEs using Diffpack and MPI are also presented. Chapter 2 presents the overlapping domain decomposition method for solving PDEs. It is well known that these methods are suitable for parallel processing. The first part of the chapter covers the mathematical formulation of the method as well as algorithmic and implementational issues. The second part presents a serial and a parallel implementational framework within the programming environment of Diffpack. The chapter closes by showing how to solve two application examples with the overlapping domain decomposition method using Diffpack. Chapter 3 is a tutorial about how to incorporate the multigrid solver in Diffpack. The method is illustrated by examples such as a Poisson solver, a general elliptic problem with various types of boundary conditions and a nonlinear Poisson type problem. In chapter 4 the mixed finite element is introduced. Technical issues concerning the practical implementation of the method are also presented. The main difficulties of the efficient implementation of the method, especially in two and three space dimensions on unstructured grids, are presented and addressed in the framework of Diffpack. The implementational process is illustrated by two examples, namely the system formulation of the Poisson problem and the Stokes problem. Chapter 5 is closely related to chapter 4 and addresses the problem of how to solve efficiently the linear systems arising by the application of the mixed finite element method. The proposed method is block preconditioning. 
Efficient techniques for implementing the method within Diffpack are presented. Optimal block preconditioners are used to solve the system formulation of the Poisson problem, the Stokes problem and the bidomain model for the electrical activity in the heart. The subject of chapter 6 is systems of PDEs. Linear and nonlinear systems are discussed. Fully implicit and operator splitting methods are presented. Special attention is paid to how existing solvers for scalar equations in Diffpack can be used to derive fully implicit solvers for systems. The proposed techniques are illustrated in terms of two applications, namely a system of PDEs modelling pipe flow and a two-phase porous media flow. Stochastic PDEs are the topic of chapter 7. The first part of the chapter is a simple introduction to stochastic PDEs; basic analytical properties are presented for simple models like transport phenomena and viscous drag forces. The second part considers the numerical solution of stochastic PDEs. Two basic techniques are presented, namely Monte Carlo and perturbation methods. The last part explains how to implement and incorporate these solvers into Diffpack. Chapter 8 describes how to operate Diffpack from Python scripts. The main goal here is to provide all the programming and technical details in order to glue the programming environment of Diffpack with visualization packages through Python and in general take advantage of the Python interfaces. Chapter 9 attempts to show how to use numerical experiments to measure the performance of various PDE solvers. The authors gathered a rather impressive list, a total of 14 PDE solvers. Solvers for problems like Poisson, Navier--Stokes, elasticity, two-phase flows and methods such as finite difference, finite element, multigrid, and gradient type methods are presented. The authors provide a series of numerical results combining various solvers with various methods in order to gain insight into their computational performance and efficiency. In Chapter 10 the authors consider a computationally challenging problem, namely the computation of the electrical activity of the human heart. After a brief introduction on the biology of the problem the authors present the mathematical models involved and a numerical method for solving them within the framework of Diffpack. Chapters 11 and 12 are closely related; actually, they could have been combined into a single chapter. Chapter 11 introduces several mathematical models used in finance, based on the Black--Scholes equation. Chapter 12 considers several numerical methods like Monte Carlo, lattice methods, finite difference and finite element methods. Implementation of these methods within Diffpack is presented in the last part of the chapter. Chapter 13 presents how the finite element method is used for the modelling and analysis of elastic structures. The authors describe the structural elements of Diffpack, which include popular elements such as beams and plates, and examples are presented on how to use them to simulate elastic structures. Chapter 14 describes an application problem, namely the extrusion of aluminum. This is a rather complicated process which involves non-Newtonian flow, heat transfer and elasticity. The authors describe the systems of PDEs modelling the underlying process and use a finite element method to obtain a numerical solution. The implementation of the numerical method in Diffpack is presented along with some applications.
The last chapter, chapter 15, focuses on mathematical and numerical models of systems of PDEs governing geological processes in sedimentary basins. The underlying mathematical model is solved using the finite element method within a fully implicit scheme. The authors discuss the implementational issues involved within Diffpack and they present results from several examples. In summary, the book focuses on the computational and implementational issues involved in solving partial differential equations. The potential reader should have a basic knowledge of PDEs and the finite difference and finite element methods. The examples presented are solved within the programming framework of Diffpack, and the reader should have prior experience with this particular software in order to take full advantage of the book. Overall, the book is well written, the subject of each chapter is well presented, and the book can serve as a reference for graduate students, researchers and engineers who are interested in the numerical solution of partial differential equations modelling various applications.
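The multigrid and Poisson material in chapters 3-5 lends itself to a small illustration. The sketch below is not from the book and uses none of Diffpack's C++ machinery; it is a minimal NumPy two-grid correction cycle (weighted-Jacobi smoothing, injection restriction, direct coarse solve, linear prolongation) for the 1D model problem -u'' = f with homogeneous Dirichlet boundaries, assuming a grid of 129 points chosen purely for illustration.

```python
import numpy as np

def residual(u, f, h):
    """Residual r = f - A u for the 3-point discretization of -u'' = f."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    """Weighted-Jacobi smoothing for -u'' = f with Dirichlet boundaries."""
    for _ in range(sweeps):
        u_new = u.copy()
        u_new[1:-1] = ((1.0 - omega) * u[1:-1]
                       + omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1]))
        u = u_new
    return u

def two_grid(u, f, h):
    """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = jacobi(u, f, h, sweeps=3)
    r = residual(u, f, h)
    rc = r[::2].copy()                     # restriction by injection onto the coarse grid
    n_c, hc = rc.size, 2.0 * h
    # direct coarse solve of -e'' = r_c (interior points, standard tridiagonal matrix)
    A = (2.0 * np.eye(n_c - 2) - np.eye(n_c - 2, k=1) - np.eye(n_c - 2, k=-1)) / hc**2
    ec = np.zeros(n_c)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.interp(np.arange(u.size) * h, np.arange(n_c) * hc, ec)   # linear prolongation
    return jacobi(u + e, f, h, sweeps=3)

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)           # exact solution is sin(pi*x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```

Applying the same correction recursively on the coarse level, instead of the direct solve used here, would turn this two-grid cycle into a full V-cycle.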
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ji, Haojie; Dhomkar, Siddharth; Roy, Bidisha
2014-10-28
For submonolayer quantum dot (QD) based photonic devices, size and density of QDs are critical parameters, the probing of which requires indirect methods. We report the determination of the lateral size distribution of type-II ZnTe/ZnSe stacked submonolayer QDs, based on spectral analysis of the optical signature of Aharonov-Bohm (AB) excitons, complemented by photoluminescence studies, secondary-ion mass spectroscopy, and numerical calculations. Numerical calculations are employed to determine the AB transition magnetic field as a function of the type-II QD radius. The study of four samples grown with different tellurium fluxes shows that the lateral size of QDs increases by just 50%, even though the tellurium concentration increases 25-fold. Detailed spectral analysis of the emission of the AB exciton shows that the QD radii take on only certain values due to vertical correlation and the stacked nature of the QDs.
Design of Concurrency Controls for Transaction Processing Systems.
1982-04-02
presented in numerous textbooks (e.g., see [Wiederhold 77], [Date 77], or [Ullman 80]) and elsewhere; furthermore, this is a problem of on-going research...
Numerical simulation of a soft-x-ray Li laser pumped with synchrotron radiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rozsnyai, B.; Watanabe, H.; Csonka, P.L.
1985-07-01
Results of a computer simulation are reported for a lithium soft-x-ray laser pumped by synchro- tron radiation. Coherent stimulated emission of the photons of interest occurs in Li II 1s2p..-->..Li II 1s/sup 2/ transitions. Calculated results include the dominant ion and photon densities and the laser gain.
ERIC Educational Resources Information Center
Gehart, Diane R.
2012-01-01
A continuation of Part I, which introduced mental health recovery concepts to family therapists, Part II of this article outlines a collaborative, appreciative approach for working in recovery-oriented contexts. This approach draws primarily upon postmodern therapies, which have numerous social justice and strength-based practices that are easily…
2015-02-04
dislocation dynamics models ( DDD ), continuum representations). Coupling of these models is difficult. Coupling of atomistics and DDD models has been...explored to some extent, but the coupling between DDD and continuum models of the evolution of large populations of dislocations is essentially unexplored
Numerical Modeling of Contaminant Transport with Rate-Limited Sorption/Desorption in an Aquifer.
1989-12-01
...and to gain further understanding of the physical processes involved in the transport of foreign materials in... representation of the levels of contaminant in both regions... with this restriction, the model could not simulate pulse...
NASA Technical Reports Server (NTRS)
Manobianco, John; Uccellini, Louis W.; Brill, Keith F.; Kuo, Ying-Hwa
1992-01-01
A mesoscale numerical model is combined with a dynamic data assimilation via Newtonian relaxation, or 'nudging', to provide initial conditions for subsequent simulations of the QE II cyclone. Both the nudging technique and the inclusion of supplementary data are shown to have a large positive impact on the simulation of the QE II cyclone during the initial phase of rapid cyclone development. Within the initial development period (from 1200 to 1800 UTC 9 September 1978), the dynamic assimilation of operational and bogus data yields a coherent two-layer divergence pattern that is not well defined in the model run using only the operational data and static initialization. Diagnostic analysis based on the simulations show that the initial development of the QE II storm between 0000 UTC 9 September and 0000 UTC 10 September was embedded within an indirect circulation of an intense 300-hPa jet streak, was related to baroclinic processes extending throughout a deep portion of the troposphere, and was associated with a classic two-layer mass-divergence profile expected for an extratropical cyclone.
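As a hedged illustration of the Newtonian-relaxation ("nudging") idea mentioned above, the sketch below (not the mesoscale model's code) adds a relaxation term G*(x_obs - x) to a toy scalar ODE during an assimilation window and then lets the model run freely; the dynamics F, the coefficient value and the window length are assumptions made purely for illustration.

```python
import numpy as np

def nudged_forecast(x0, obs_times, obs_values, g, dt, t_end, assim_window):
    """Integrate dx/dt = F(x) + G*(x_obs - x), with nudging active only during
    the assimilation window; afterwards the model runs freely (G = 0)."""
    def F(x):                       # toy model dynamics (hypothetical)
        return -0.5 * x + np.sin(x)

    times = np.arange(0.0, t_end + dt, dt)
    x = np.empty_like(times)
    x[0] = x0
    for k in range(1, times.size):
        t = times[k - 1]
        x_obs = np.interp(t, obs_times, obs_values)     # observation valid at time t
        relax = g * (x_obs - x[k - 1]) if t <= assim_window else 0.0
        x[k] = x[k - 1] + dt * (F(x[k - 1]) + relax)    # forward-Euler step
    return times, x

# nudge toward sparse 'observations' for the first 6 hours, then forecast freely
t, x = nudged_forecast(x0=0.0, obs_times=np.array([0.0, 3.0, 6.0]),
                       obs_values=np.array([1.0, 1.4, 1.6]),
                       g=0.5, dt=0.05, t_end=24.0, assim_window=6.0)
print(x[::100])
```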
Mr.CAS-A minimalistic (pure) Ruby CAS for fast prototyping and code generation
NASA Astrophysics Data System (ADS)
Ragni, Matteo
There are Computer Algebra System (CAS) packages on the market with complete solutions for the manipulation of analytical models. However, exporting a model that implements specific algorithms on specific platforms, for particular target languages or numerical libraries, is often a rigid procedure that requires manual post-processing. This work presents a Ruby library that exposes core CAS capabilities, i.e. simplification, substitution, evaluation, etc. The library aims at programmers who need to rapidly prototype and generate numerical code for different target languages, while keeping the mathematical expressions separate from the code generation rules, in which best practices for numerical conditioning are implemented. The library is written in pure Ruby and is compatible with most Ruby interpreters.
Numerical Stimulation of Multicomponent Chromatography Using Spreadsheets.
ERIC Educational Resources Information Center
Frey, Douglas D.
1990-01-01
Illustrated is the use of spreadsheet programs for implementing finite difference numerical simulations of chromatography as an instructional tool in a separations course. Discussed are differential equations, discretization and integration, spreadsheet development, computer requirements, and typical simulation results. (CW)
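As a hedged sketch of the kind of finite-difference march a spreadsheet chromatography simulation performs (the article's specific discretization and column parameters are not reproduced), the code below steps a 1-D advection-dispersion equation for a rectangular injection pulse; every parameter value is an assumed illustrative number.

```python
import numpy as np

def chromatography_column(n_cells=200, n_steps=300, u=1.0, D=0.002,
                          dx=0.005, dt=0.002):
    """Explicit upwind/central finite-difference march of the 1-D advection-
    dispersion equation dc/dt = -u dc/dx + D d2c/dx2, mimicking the row-by-row
    update a spreadsheet simulation would perform."""
    c = np.zeros(n_cells)
    for step in range(n_steps):
        c_in = 1.0 if step * dt < 0.1 else 0.0           # rectangular injection pulse
        left = np.concatenate(([c_in], c[:-1]))           # inlet boundary on the left
        right = np.concatenate((c[1:], [c[-1]]))          # zero-gradient outlet
        c = c + dt * (-u * (c - left) / dx + D * (right - 2.0 * c + left) / dx**2)
    return c

profile = chromatography_column()
print("peak position (cell index):", int(np.argmax(profile)))
```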
2 CFR 1.200 - Purpose of chapters I and II.
Code of Federal Regulations, 2010 CFR
2010-01-01
... (and thereby implement the Federal Financial Assistance Management Improvement Act of 1999, Pub. L. 106... Introduction toSubtitle A § 1.200 Purpose of chapters I and II. (a) Chapters I and II of subtitle A provide OMB... procedures for management of the agencies' grants and agreements. (b) There are two chapters for publication...
Afonine, Pavel V.; Adams, Paul D.; Urzhumtsev, Alexandre
2018-06-08
TLS modelling was developed by Schomaker and Trueblood to describe atomic displacement parameters through concerted (rigid-body) harmonic motions of an atomic group [Schomaker & Trueblood (1968), Acta Cryst. B 24, 63–76]. The results of a TLS refinement are T, L and S matrices that provide individual anisotropic atomic displacement parameters (ADPs) for all atoms belonging to the group. These ADPs can be calculated analytically using a formula that relates the elements of the TLS matrices to atomic parameters. Alternatively, ADPs can be obtained numerically from the parameters of concerted atomic motions corresponding to the TLS matrices. Both procedures are expected to produce the same ADP values and therefore can be used to assess the results of TLS refinement. Here, the implementation of this approach in PHENIX is described and several illustrations, including the use of all models from the PDB that have been subjected to TLS refinement, are provided.
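Not the PHENIX implementation: a minimal NumPy sketch of the analytical route, assuming the commonly quoted convention U = T + A L A^T + A S + (A S)^T, where A is the antisymmetric matrix built from the atom position relative to the TLS origin; sign and transpose conventions for S differ between programs, and the matrices used below are purely illustrative.

```python
import numpy as np

def adp_from_tls(T, L, S, r, origin):
    """Anisotropic ADP (3x3 U matrix) for an atom at position r, from TLS matrices
    defined about `origin`.  L is assumed in squared-radian units; conventions for
    the S matrix vary between programs."""
    x, y, z = np.asarray(r, float) - np.asarray(origin, float)
    A = np.array([[0.0,   z,  -y],
                  [-z,  0.0,   x],
                  [ y,   -x, 0.0]])         # A @ lam reproduces lam x r
    AS = A @ S
    return T + A @ L @ A.T + AS + AS.T      # U = T + A L A^T + A S + (A S)^T

# hypothetical matrices, purely for illustration
T = 0.02 * np.eye(3)
L = 0.001 * np.eye(3)
S = np.zeros((3, 3))
print(adp_from_tls(T, L, S, r=[5.0, 1.0, -2.0], origin=[0.0, 0.0, 0.0]))
```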
Variational discretization of the nonequilibrium thermodynamics of simple systems
NASA Astrophysics Data System (ADS)
Gay-Balmaz, François; Yoshimura, Hiroaki
2018-04-01
In this paper, we develop variational integrators for the nonequilibrium thermodynamics of simple closed systems. These integrators are obtained by a discretization of the Lagrangian variational formulation of nonequilibrium thermodynamics developed in (Gay-Balmaz and Yoshimura 2017a J. Geom. Phys. part I 111 169–93 Gay-Balmaz and Yoshimura 2017b J. Geom. Phys. part II 111 194–212) and thus extend the variational integrators of Lagrangian mechanics, to include irreversible processes. In the continuous setting, we derive the structure preserving property of the flow of such systems. This property is an extension of the symplectic property of the flow of the Euler–Lagrange equations. In the discrete setting, we show that the discrete flow solution of our numerical scheme verifies a discrete version of this property. We also present the regularity conditions which ensure the existence of the discrete flow. We finally illustrate our discrete variational schemes with the implementation of an example of a simple and closed system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bassi, Gabriele; Blednykh, Alexei; Smalyuk, Victor
A novel algorithm for self-consistent simulations of long-range wakefield effects has been developed and applied to the study of both longitudinal and transverse coupled-bunch instabilities at NSLS-II. The algorithm is implemented in the new parallel tracking code space (self-consistent parallel algorithm for collective effects) discussed in the paper. The code is applicable for accurate beam dynamics simulations in cases where both bunch-to-bunch and intrabunch motions need to be taken into account, such as chromatic head-tail effects on the coupled-bunch instability of a beam with a nonuniform filling pattern, or multibunch and single-bunch effects of a passive higher-harmonic cavity. The numerical simulations have been compared with analytical studies. For a beam with an arbitrary filling pattern, intensity-dependent complex frequency shifts have been derived starting from a system of coupled Vlasov equations. The analytical formulas and numerical simulations confirm that the analysis is reduced to the formulation of an eigenvalue problem based on the known formulas of the complex frequency shifts for the uniform filling pattern case.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afonine, Pavel V.; Adams, Paul D.; Urzhumtsev, Alexandre
TLS modelling was developed by Schomaker and Trueblood to describe atomic displacement parameters through concerted (rigid-body) harmonic motions of an atomic group [Schomaker & Trueblood (1968), Acta Cryst. B 24, 63–76]. The results of a TLS refinement are T, L and S matrices that provide individual anisotropic atomic displacement parameters (ADPs) for all atoms belonging to the group. These ADPs can be calculated analytically using a formula that relates the elements of the TLS matrices to atomic parameters. Alternatively, ADPs can be obtained numerically from the parameters of concerted atomic motions corresponding to the TLS matrices. Both procedures are expected to produce the same ADP values and therefore can be used to assess the results of TLS refinement. Here, the implementation of this approach in PHENIX is described and several illustrations, including the use of all models from the PDB that have been subjected to TLS refinement, are provided.
NASA Technical Reports Server (NTRS)
Zhai, Chengxing; Milman, Mark H.; Regehr, Martin W.; Best, Paul K.
2007-01-01
In the companion paper, [Appl. Opt. 46, 5853 (2007)] a highly accurate white light interference model was developed from just a few key parameters characterized in terms of various moments of the source and instrument transmission function. We develop and implement the end-to-end process of calibrating these moment parameters together with the differential dispersion of the instrument and applying them to the algorithms developed in the companion paper. The calibration procedure developed herein is based on first obtaining the standard monochromatic parameters at the pixel level: wavenumber, phase, intensity, and visibility parameters via a nonlinear least-squares procedure that exploits the structure of the model. The pixel level parameters are then combined to obtain the required 'global' moment and dispersion parameters. The process is applied to both simulated scenarios of astrometric observations and to data from the microarcsecond metrology testbed (MAM), an interferometer testbed that has played a prominent role in the development of this technology.
NASA Astrophysics Data System (ADS)
Maschio, Lorenzo; Kirtman, Bernard; Rérat, Michel; Orlando, Roberto; Dovesi, Roberto
2013-10-01
In this work, we validate a new, fully analytical method for calculating Raman intensities of periodic systems, developed and presented in Paper I [L. Maschio, B. Kirtman, M. Rérat, R. Orlando, and R. Dovesi, J. Chem. Phys. 139, 164101 (2013)]. Our validation of this method and its implementation in the CRYSTAL code is done through several internal checks as well as comparison with experiment. The internal checks include consistency of results when increasing the number of periodic directions (from 0D to 1D, 2D, 3D), comparison with numerical differentiation, and a test of the sum rule for derivatives of the polarizability tensor. The choice of basis set as well as the Hamiltonian is also studied. Simulated Raman spectra of α-quartz and of the UiO-66 Metal-Organic Framework are compared with the experimental data.
A GIS-based modeling system for petroleum waste management. Geographical information system.
Chen, Z; Huang, G H; Li, J B
2003-01-01
With an urgent need for effective management of petroleum-contaminated sites, a GIS-aided simulation (GISSIM) system is presented in this study. The GISSIM contains two components: an advanced 3D numerical model and a geographical information system (GIS), which are integrated within a general framework. The modeling component undertakes simulation for the fate of contaminants in subsurface unsaturated and saturated zones. The GIS component is used in three areas throughout the system development and implementation process: (i) managing spatial and non-spatial databases; (ii) linking inputs, model, and outputs; and (iii) providing an interface between the GISSIM and its users. The developed system is applied to a North American case study. Concentrations of benzene, toluene, and xylenes in groundwater under a petroleum-contaminated site are dynamically simulated. Reasonable outputs have been obtained and presented graphically. They provide quantitative and scientific bases for further assessment of site-contamination impacts and risks, as well as decisions on practical remediation actions.
Numerical evaluation of the bispectrum in multiple field inflation—the transport approach with code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dias, Mafalda; Frazer, Jonathan; Mulryne, David J.
2016-12-01
We present a complete framework for numerical calculation of the power spectrum and bispectrum in canonical inflation with an arbitrary number of light or heavy fields. Our method includes all relevant effects at tree-level in the loop expansion, including (i) interference between growing and decaying modes near horizon exit; (ii) correlation and coupling between species near horizon exit and on superhorizon scales; (iii) contributions from mass terms; and (iv) all contributions from coupling to gravity. We track the evolution of each correlation function from the vacuum state through horizon exit and the superhorizon regime, with no need to match quantum and classical parts of the calculation; when integrated, our approach corresponds exactly with the tree-level Schwinger or 'in-in' formulation of quantum field theory. In this paper we give the equations necessary to evolve all two- and three-point correlation functions together with suitable initial conditions. The final formalism is suitable to compute the amplitude, shape, and scale dependence of the bispectrum in models with |f_NL| of order unity or less, which are a target for future galaxy surveys such as Euclid, DESI and LSST. As an illustration we apply our framework to a number of examples, obtaining quantitatively accurate predictions for their bispectra for the first time. Two accompanying reports describe publicly-available software packages that implement the method.
George, David L.; Iverson, Richard M.
2014-01-01
We evaluate a new depth-averaged mathematical model that is designed to simulate all stages of debris-flow motion, from initiation to deposition. A companion paper shows how the model’s five governing equations describe simultaneous evolution of flow thickness, solid volume fraction, basal pore-fluid pressure, and two components of flow momentum. Each equation contains a source term that represents the influence of state-dependent granular dilatancy. Here we recapitulate the equations and analyze their eigenstructure to show that they form a hyperbolic system with desirable stability properties. To solve the equations we use a shock-capturing numerical scheme with adaptive mesh refinement, implemented in an open-source software package we call D-Claw. As tests of D-Claw, we compare model output with results from two sets of large-scale debris-flow experiments. One set focuses on flow initiation from landslides triggered by rising pore-water pressures, and the other focuses on downstream flow dynamics, runout, and deposition. D-Claw performs well in predicting evolution of flow speeds, thicknesses, and basal pore-fluid pressures measured in each type of experiment. Computational results illustrate the critical role of dilatancy in linking coevolution of the solid volume fraction and pore-fluid pressure, which mediates basal Coulomb friction and thereby regulates debris-flow dynamics.
Numerical evaluation of the bispectrum in multiple field inflation—the transport approach with code
NASA Astrophysics Data System (ADS)
Dias, Mafalda; Frazer, Jonathan; Mulryne, David J.; Seery, David
2016-12-01
We present a complete framework for numerical calculation of the power spectrum and bispectrum in canonical inflation with an arbitrary number of light or heavy fields. Our method includes all relevant effects at tree-level in the loop expansion, including (i) interference between growing and decaying modes near horizon exit; (ii) correlation and coupling between species near horizon exit and on superhorizon scales; (iii) contributions from mass terms; and (iv) all contributions from coupling to gravity. We track the evolution of each correlation function from the vacuum state through horizon exit and the superhorizon regime, with no need to match quantum and classical parts of the calculation; when integrated, our approach corresponds exactly with the tree-level Schwinger or `in-in' formulation of quantum field theory. In this paper we give the equations necessary to evolve all two- and three-point correlation functions together with suitable initial conditions. The final formalism is suitable to compute the amplitude, shape, and scale dependence of the bispectrum in models with |fNL| of order unity or less, which are a target for future galaxy surveys such as Euclid, DESI and LSST. As an illustration we apply our framework to a number of examples, obtaining quantitatively accurate predictions for their bispectra for the first time. Two accompanying reports describe publicly-available software packages that implement the method.
Long-range spin coherence in a strongly coupled all-electronic dot-cavity system
NASA Astrophysics Data System (ADS)
Ferguson, Michael Sven; Oehri, David; Rössler, Clemens; Ihn, Thomas; Ensslin, Klaus; Blatter, Gianni; Zilberberg, Oded
2017-12-01
We present a theoretical analysis of spin-coherent electronic transport across a mesoscopic dot-cavity system. Such spin-coherent transport has been recently demonstrated in an experiment with a dot-cavity hybrid implemented in a high-mobility two-dimensional electron gas [C. Rössler et al., Phys. Rev. Lett. 115, 166603 (2015), 10.1103/PhysRevLett.115.166603] and its spectroscopic signatures have been interpreted in terms of a competition between Kondo-type dot-lead and molecular-type dot-cavity singlet formation. Our analysis brings forward all the transport features observed in the experiments and supports the claim that a spin-coherent molecular singlet forms across the full extent of the dot-cavity device. Our model analysis includes (i) a single-particle numerical investigation of the two-dimensional geometry, its quantum-coral-type eigenstates, and associated spectroscopic transport features, (ii) the derivation of an effective interacting model based on the observations of the numerical and experimental studies, and (iii) the prediction of transport characteristics through the device using a combination of a master-equation approach on top of exact eigenstates of the dot-cavity system, and an equation-of-motion analysis that includes Kondo physics. The latter provides additional temperature scaling predictions for the many-body phase transition between molecular- and Kondo-singlet formation and its associated transport signatures.
A numerical algorithm for the explicit calculation of SU(N) and SL(N,C) Clebsch-Gordan coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alex, Arne; Delft, Jan von; Kalus, Matthias
2011-02-15
We present an algorithm for the explicit numerical calculation of SU(N) and SL(N,C) Clebsch-Gordan coefficients, based on the Gelfand-Tsetlin pattern calculus. Our algorithm is well suited for numerical implementation; we include a computer code in an appendix. Our exposition presumes only familiarity with the representation theory of SU(2).
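The Gelfand-Tsetlin code from the paper's appendix is not reproduced here; as a hedged cross-check of the SU(2) special case that the exposition assumes familiarity with, the snippet below evaluates a single Clebsch-Gordan coefficient with SymPy's built-in CG class.

```python
# Not the authors' Gelfand-Tsetlin code: a check of the SU(2) special case using
# SymPy's built-in Clebsch-Gordan coefficients <j1 m1; j2 m2 | j3 m3>.
from sympy import S, sqrt
from sympy.physics.quantum.cg import CG

# couple two spin-1/2 states to total spin 1, projection m = 0
c = CG(S(1)/2, S(1)/2, S(1)/2, -S(1)/2, 1, 0).doit()
print(c)                       # expected sqrt(2)/2
assert c == sqrt(2) / 2
```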
Causes and consequences of anti-infective drug stock-outs.
Luans, C; Cardiet, I; Rogé, P; Baslé, B; Le Corre, P; Revest, M; Michelet, C; Tattevin, P
2014-10-01
Anti-infective drug stock-outs are increasingly frequent, and this is unlikely to change. There are numerous causes for this, mostly related to parameters that are difficult to control: i) 60 to 80% of raw materials or components are produced outside of Europe (compared to 20% thirty years ago), with a subsequent loss of independence for their procurement; ii) the economic crisis drives pharmaceutical companies to stop producing drugs of limited profitability (even among important drugs); iii) the enforcement of regulatory requirements and quality control procedures results in an increasing number of drugs being blocked during production. The therapeutic class most affected by drug stock-outs is that of anti-infective drugs, especially injectable ones, and many therapeutic dead ends have recently occurred. We provide an update on this issue, and suggest two major actions for improvement: i) to implement a group dedicated to anticipating drug stock-outs within the anti-infective committee of each health care center, with the objectives of organizing and coordinating the response whenever a drug stock-out is deemed at risk (i.e., contingency plans, substitution, communication to prescribers); ii) a national reflection led by scientific societies, in collaboration with government agencies, upstream of the most problematic drug stock-outs, to elaborate and disseminate consensus guidelines for the management of these stock-outs. Copyright © 2014. Published by Elsevier SAS.
NASA Astrophysics Data System (ADS)
Plattner, A.; Maurer, H. R.; Vorloeper, J.; Dahmen, W.
2010-08-01
Despite the ever-increasing power of modern computers, realistic modelling of complex 3-D earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modelling approaches includes either finite difference or non-adaptive finite element algorithms and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behaviour of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the modelled data discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet-based approach that is applicable to a large range of problems, also including nonlinear problems. In comparison with earlier applications of adaptive solvers to geophysical problems, we employ here a new adaptive scheme whose core ingredients arose from a rigorous analysis of the overall asymptotically optimal computational complexity, including, in particular, an optimal work/accuracy rate. Our adaptive wavelet algorithm offers several attractive features: (i) for a given subsurface model, it allows the forward modelling domain to be discretized with a quasi-minimal number of degrees of freedom, (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient, and (iii) the modelling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving 3-D geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. Results were compared with a non-adaptive finite element algorithm, which incorporates an unstructured mesh fitted to subsurface boundaries. Such algorithms represent the current state-of-the-art in geoelectric modelling. An analysis of the numerical accuracy as a function of the number of degrees of freedom revealed that the adaptive wavelet algorithm outperforms the finite element solver for simple and moderately complex models, whereas the results become comparable for models with high spatial variability of electrical conductivities. The linear dependence of the modelling error and the computing time proved to be model-independent. This feature will allow very efficient computations using large-scale models as soon as our experimental code is optimized in terms of its implementation.
Status of parallel Python-based implementation of UEDGE
NASA Astrophysics Data System (ADS)
Umansky, M. V.; Pankin, A. Y.; Rognlien, T. D.; Dimits, A. M.; Friedman, A.; Joseph, I.
2017-10-01
The tokamak edge transport code UEDGE has long used the code-development and run-time framework Basis. However, with the support for Basis expected to terminate in the coming years, and with the advent of the modern numerical language Python, it has become desirable to move UEDGE to Python, to ensure its long-term viability. Our new Python-based UEDGE implementation takes advantage of the portable build system developed for FACETS. The new implementation gives access to Python's graphical libraries and numerical packages for pre- and post-processing, and support of HDF5 simplifies exchanging data. The older serial version of UEDGE has used for time-stepping the Newton-Krylov solver NKSOL. The renovated implementation uses backward Euler discretization with nonlinear solvers from PETSc, which has the promise to significantly improve the UEDGE parallel performance. We will report on assessment of some of the extended UEDGE capabilities emerging in the new implementation, and will discuss the future directions. Work performed for U.S. DOE by LLNL under contract DE-AC52-07NA27344.
NASA Technical Reports Server (NTRS)
Atkins, H. L.; Helenbrook, B. T.
2005-01-01
This paper describes numerical experiments with P-multigrid to corroborate analysis, validate the present implementation, and examine issues that arise in the implementations of the various combinations of relaxation schemes, discretizations and P-multigrid methods. The two approaches to implement P-multigrid presented here are equivalent for most high-order discretization methods such as spectral element, SUPG, and discontinuous Galerkin applied to advection; however, it is discovered that the approach that mimics the common geometric multigrid implementation is less robust, and frequently unstable when applied to discontinuous Galerkin discretizations of diffusion. Gauss-Seidel relaxation converges 40% faster than block Jacobi, as predicted by analysis; however, the implementation of Gauss-Seidel is considerably more expensive than one would expect because gradients in most neighboring elements must be updated. A compromise quasi-Gauss-Seidel relaxation method that evaluates the gradient in each element twice per iteration converges at rates similar to those predicted for true Gauss-Seidel.
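As a hedged illustration of the Jacobi-versus-Gauss-Seidel comparison (on a plain 1D Poisson matrix, not on the paper's high-order discretizations), the sketch below applies the same number of sweeps of each relaxation to the same system and prints the remaining residuals; Gauss-Seidel should leave the smaller one.

```python
import numpy as np

def setup(n):
    """1D Poisson test problem -u'' = f with Dirichlet boundaries, n interior points."""
    h = 1.0 / (n + 1)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    x = np.linspace(h, 1.0 - h, n)
    return A, np.pi**2 * np.sin(np.pi * x)

def jacobi(A, f, u, sweeps):
    """Point-Jacobi relaxation: every unknown updated from the previous sweep."""
    D = np.diag(A)
    for _ in range(sweeps):
        u = u + (f - A @ u) / D
    return u

def gauss_seidel(A, f, u, sweeps):
    """Gauss-Seidel relaxation: each unknown uses freshly updated neighbours."""
    n = len(f)
    for _ in range(sweeps):
        for i in range(n):
            u[i] = (f[i] - A[i, :i] @ u[:i] - A[i, i + 1:] @ u[i + 1:]) / A[i, i]
    return u

A, f = setup(63)
for name, method in [("Jacobi", jacobi), ("Gauss-Seidel", gauss_seidel)]:
    u = method(A, f, np.zeros_like(f), sweeps=200)
    print(name, "residual:", np.linalg.norm(f - A @ u))
```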
Numerical implementation of non-local polycrystal plasticity using fast Fourier transforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lebensohn, Ricardo A.; Needleman, Alan
Here, we present the numerical implementation of a non-local polycrystal plasticity theory using the FFT-based formulation of Suquet and co-workers. Gurtin (2002) non-local formulation, with geometry changes neglected, has been incorporated in the EVP-FFT algorithm of Lebensohn et al. (2012). Numerical procedures for the accurate estimation of higher order derivatives of micromechanical fields, required for feedback into single crystal constitutive relations, are identified and applied. A simple case of a periodic laminate made of two fcc crystals with different plastic properties is first used to assess the soundness and numerical stability of the proposed algorithm and to study the influence of different model parameters on the predictions of the non-local model. Different behaviors at grain boundaries are explored, and the one consistent with the micro-clamped condition gives the most pronounced size effect. The formulation is applied next to 3-D fcc polycrystals, illustrating the possibilities offered by the proposed numerical scheme to analyze the mechanical response of polycrystalline aggregates in three dimensions accounting for size dependence arising from plastic strain gradients with reasonable computing times.
Numerical implementation of non-local polycrystal plasticity using fast Fourier transforms
Lebensohn, Ricardo A.; Needleman, Alan
2016-03-28
Here, we present the numerical implementation of a non-local polycrystal plasticity theory using the FFT-based formulation of Suquet and co-workers. Gurtin (2002) non-local formulation, with geometry changes neglected, has been incorporated in the EVP-FFT algorithm of Lebensohn et al. (2012). Numerical procedures for the accurate estimation of higher order derivatives of micromechanical fields, required for feedback into single crystal constitutive relations, are identified and applied. A simple case of a periodic laminate made of two fcc crystals with different plastic properties is first used to assess the soundness and numerical stability of the proposed algorithm and to study the influence of different model parameters on the predictions of the non-local model. Different behaviors at grain boundaries are explored, and the one consistent with the micro-clamped condition gives the most pronounced size effect. The formulation is applied next to 3-D fcc polycrystals, illustrating the possibilities offered by the proposed numerical scheme to analyze the mechanical response of polycrystalline aggregates in three dimensions accounting for size dependence arising from plastic strain gradients with reasonable computing times.
Hyperboloidal evolution of test fields in three spatial dimensions
NASA Astrophysics Data System (ADS)
Zenginoǧlu, Anıl; Kidder, Lawrence E.
2010-06-01
We present the numerical implementation of a clean solution to the outer boundary and radiation extraction problems within the 3+1 formalism for hyperbolic partial differential equations on a given background. Our approach is based on compactification at null infinity in hyperboloidal scri fixing coordinates. We report numerical tests for the particular example of a scalar wave equation on Minkowski and Schwarzschild backgrounds. We address issues related to the implementation of the hyperboloidal approach for the Einstein equations, such as nonlinear source functions, matching, and evaluation of formally singular terms at null infinity.
TRIADS: A phase-resolving model for nonlinear shoaling of directional wave spectra
NASA Astrophysics Data System (ADS)
Sheremet, Alex; Davis, Justin R.; Tian, Miao; Hanson, Jeffrey L.; Hathaway, Kent K.
2016-03-01
We investigate the performance of TRIADS, a numerical implementation of a phase-resolving, nonlinear, spectral model describing directional wave evolution in intermediate and shallow water. TRIADS simulations of shoaling waves generated by Hurricane Bill, 2009 are compared to directional spectral estimates based on observations collected at the Field Research Facility of the US Army Corps Of Engineers, at Duck, NC. Both the ability of the model to capture the processes essential to the nonlinear wave evolution, and the efficiency of the numerical implementations are analyzed and discussed.
NASA Technical Reports Server (NTRS)
Coppolino, R. N.
1974-01-01
Details are presented of the implementation of the new formulation into NASTRAN including descriptions of the DMAP statements required for conversion of the program and details pertaining to problem definition and bulk data considerations. Details of the current 1/8-scale space shuttle external tank mathematical model, numerical results and analysis/test comparisons are also presented. The appendices include a description and listing of a FORTRAN program used to develop harmonic transformation bulk data (multipoint constraint statements) and sample bulk data information for a number of hydroelastic problems.
NASA Astrophysics Data System (ADS)
Hellmich, S.; Mottola, S.; Hahn, G.; Kührt, E.; Hlawitschka, M.
2014-07-01
Simulations of dynamical processes in planetary systems represent an important tool for studying the orbital evolution of the systems [1--3]. Using modern numerical integration methods, it is possible to model systems containing many thousands of objects over timescales of several hundred million years. However, in general, supercomputers are needed to get reasonable simulation results in acceptable execution times [3]. To exploit the ever-growing computation power of Graphics Processing Units (GPUs) in modern desktop computers, we implemented cuSwift, a library of numerical integration methods for studying long-term dynamical processes in planetary systems. cuSwift can be seen as a re-implementation of the famous SWIFT integrator package written by Hal Levison and Martin Duncan. cuSwift is written in C/CUDA and contains different integration methods for various purposes. So far, we have implemented three algorithms: a 15th-order Radau integrator [4], the Wisdom-Holman Mapping (WHM) integrator [5], and the Regularized Mixed Variable Symplectic (RMVS) Method [6]. These algorithms treat only the planets as mutually gravitationally interacting bodies whereas asteroids and comets (or other minor bodies of interest) are treated as massless test particles which are gravitationally influenced by the massive bodies but do not affect each other or the massive bodies. The main focus of this work is on the symplectic methods (WHM and RMVS) which use a larger time step and thus are capable of integrating many particles over a large time span. As an additional feature, we implemented the non-gravitational Yarkovsky effect as described by M. Brož [7]. With cuSwift, we show that the use of modern GPUs makes it possible to speed up these methods by more than one order of magnitude compared to the single-core CPU implementation, thereby enabling modest workstation computers to perform long-term dynamical simulations. We use these methods to study the influence of the Yarkovsky effect on resonant asteroids. We present first results and compare them with integrations done with the original algorithms implemented in SWIFT in order to assess the numerical precision of cuSwift and to demonstrate the speed-up we achieved using the GPU.
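cuSwift's WHM and RMVS machinery and its CUDA kernels are not reproduced here; as a hedged NumPy illustration of the underlying idea of stepping massless test particles in the field of massive bodies, the sketch below uses a plain kick-drift-kick leapfrog with the massive body held fixed, with units and parameters chosen purely for illustration.

```python
import numpy as np

G = 1.0  # gravitational constant in code units (an assumption)

def accel(body_pos, body_mass, test_pos):
    """Acceleration of massless test particles due to the massive bodies only."""
    a = np.zeros_like(test_pos)
    for m, p in zip(body_mass, body_pos):
        d = p - test_pos                                     # (n_test, 3) separations
        r3 = np.linalg.norm(d, axis=1, keepdims=True) ** 3
        a += G * m * d / r3
    return a

def leapfrog_step(body_pos, body_mass, tp, tv, dt):
    """One kick-drift-kick step for the test particles; the massive bodies are held
    fixed here for brevity, whereas a real integrator advances them as well."""
    tv = tv + 0.5 * dt * accel(body_pos, body_mass, tp)      # half kick
    tp = tp + dt * tv                                        # drift
    tv = tv + 0.5 * dt * accel(body_pos, body_mass, tp)      # half kick
    return tp, tv

# one central unit mass and one test particle on a circular orbit (illustrative)
body_pos = np.array([[0.0, 0.0, 0.0]])
body_mass = np.array([1.0])
tp = np.array([[1.0, 0.0, 0.0]])
tv = np.array([[0.0, 1.0, 0.0]])
for _ in range(6283):                                        # roughly one orbital period
    tp, tv = leapfrog_step(body_pos, body_mass, tp, tv, dt=0.001)
print("test particle after ~one orbit:", tp[0])
```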
Chesapeake Bay Watershed Implementation Plans (WIPs)
This page provides an overview of Watershed Implementation Plans (WIP) and how they play an important role in restoring the Chesapeake Bay. The page also provides links to each jurisdiction's Phase I, II, and III WIP.
Implementing a flipped classroom approach in a university numerical methods mathematics course
NASA Astrophysics Data System (ADS)
Johnston, Barbara M.
2017-05-01
This paper describes and analyses the implementation of a 'flipped classroom' approach, in an undergraduate mathematics course on numerical methods. The approach replaced all the lecture contents by instructor-made videos and was implemented in the consecutive years 2014 and 2015. The sequential case study presented here begins with an examination of the attitudes of the 2014 cohort to the approach in general as well as analysing their use of the videos. Based on these responses, the instructor makes a number of changes (for example, the use of 'cloze' summary notes and the introduction of an extra, optional tutorial class) before repeating the 'flipped classroom' approach the following year. The attitudes to the approach and the video usage of the 2015 cohort are then compared with the 2014 cohort and further changes that could be implemented for the next cohort are suggested.
Optimally analyzing and implementing of bolt fittings in steel structure based on ANSYS
NASA Astrophysics Data System (ADS)
Han, Na; Song, Shuangyang; Cui, Yan; Wu, Yongchun
2018-03-01
Owing to its excellent performance, the ANSYS simulation software has become an outstanding member of the Computer-Aided Engineering (CAE) family; it is committed to innovation in engineering simulation and helps users shorten the design process. First, a typical procedure to implement CAE was designed, and a framework for structural numerical analysis based on ANSYS technology was proposed. Then, an optimal analysis of bolt fittings in a beam-column joint of a steel structure was implemented in ANSYS, which displays the contour plots of the XY shear stress, the YZ shear stress and the Y component of stress. Finally, the ANSYS simulation results were compared with the results measured in the experiment. The ANSYS simulation and analysis results are reliable, efficient and optimal. Through the above process, a numerical simulation and analysis model of structural performance was explored for the engineering practice of enterprises.
Implementing a Multiple Criteria Model Base in Co-Op with a Graphical User Interface Generator
1993-09-23
Excerpt from the table of contents: PROMETHEE — the algorithms (basic algorithm of PROMETHEE I and PROMETHEE II, use of the algorithm in PROMETHEE I and in PROMETHEE II, and the algorithm of PROMETHEE V) and screen designs of PROMETHEE (PROMETHEE I and PROMETHEE II).
Poster - Thur Eve - 07: CNSC Update: "What's New in Class II".
Heimann, M
2012-07-01
The Accelerators and Class II Facilities Division (ACFD) of the Canadian Nuclear Safety Commission (CNSC) is responsible for the oversight of radiotherapy facilities containing Class II prescribed equipment in Canada. This poster will highlight a number of new initiatives that the CNSC has implemented recently that have an impact on radiotherapy facility licensees. The presentation will discuss the recent policy decision to regulate particle accelerators of above 1 MeV. Challenges and progress with respect to the implementation of the policy will be presented. Other initiatives which will be described include: • The new ACFD webspace on the CNSC website, with direct links to relevant information on licensing, compliance and Class II prescribed equipment • The improved structure of the Appendix of Licence Documents that is part of every Class II licence • Updated licence application guides • Changes to Annual Compliance reporting requirements and progress on the ACR-Online initiative • Changes to some regulatory expectations related to medical accelerator facilities • Consolidation of Class II facility licences. The poster will also include other initiatives that may be of particular interest to COMP membership. © 2012 American Association of Physicists in Medicine.
Implementing New Non-Chromate Coatings Systems (Briefing Charts)
2011-02-09
Excerpt from the briefing charts: initiate the Cr6+ authorization process for continued Cr6+ use using the form "Authorization to Use Hexavalent Chromium"; applications discussed include aluminum and magnesium anodizing, hard chrome plating, and Type II conversion coating on aluminum alloys under chromated primer; the remaining chart content gives a breakdown of conversion-coating usage by type (Type II, Type III, Type IC) in the context of eliminating hexavalent chromium.
A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor
Tayara, Hilal; Ham, Woonchul; Chong, Kil To
2016-01-01
This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single pass image segmentation and Feature Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in FPGA. Coplanar PosIT algorithm was implemented on the Nios II soft-core processor supplied with floating point hardware for accelerating floating point operations. Trigonometric functions have been approximated using Taylor series and cubic approximation using Lagrange polynomials. Inverse square root method has been implemented for approximating square root computations. Real time results have been achieved and pixel streams have been processed on the fly without any need to buffer the input frame for further implementation. PMID:27983714
A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor.
Tayara, Hilal; Ham, Woonchul; Chong, Kil To
2016-12-15
This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single pass image segmentation and Feature Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in FPGA. Coplanar PosIT algorithm was implemented on the Nios II soft-core processor supplied with floating point hardware for accelerating floating point operations. Trigonometric functions have been approximated using Taylor series and cubic approximation using Lagrange polynomials. Inverse square root method has been implemented for approximating square root computations. Real time results have been achieved and pixel streams have been processed on the fly without any need to buffer the input frame for further implementation.
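The Nios II fixed-point and hardware details are not shown in the abstracts; as a hedged sketch of two of the approximation ideas mentioned (a truncated Taylor series for sine and a Newton-iteration inverse square root), the Python code below mirrors the arithmetic a soft-core implementation would perform, with an assumed crude seed in place of a hardware bit trick or lookup table.

```python
import math

def sin_taylor(x, terms=6):
    """Truncated Taylor series sin(x) = x - x^3/3! + x^5/5! - ...;
    accurate for arguments reduced to a small range around zero."""
    s, term = 0.0, x
    for n in range(terms):
        s += term
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return s

def inv_sqrt(x, iters=3):
    """Inverse square root 1/sqrt(x) via Newton's iteration y <- y*(1.5 - 0.5*x*y*y),
    starting from a crude seed valid near x ~ 1; hardware versions replace the seed
    with a bit manipulation or a small lookup table."""
    y = 1.0 / (1.0 + 0.5 * (x - 1.0))
    for _ in range(iters):
        y = y * (1.5 - 0.5 * x * y * y)
    return y

print(sin_taylor(0.7), math.sin(0.7))
print(inv_sqrt(2.0), 1.0 / math.sqrt(2.0))
```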
Numerical simulation of h-adaptive immersed boundary method for freely falling disks
NASA Astrophysics Data System (ADS)
Zhang, Pan; Xia, Zhenhua; Cai, Qingdong
2018-05-01
In this work, a freely falling disk with aspect ratio 1/10 is directly simulated by using an adaptive numerical model implemented on a parallel computation framework JASMIN. The adaptive numerical model is a combination of the h-adaptive mesh refinement technique and the implicit immersed boundary method (IBM). Our numerical results agree well with the experimental results in all of the six degrees of freedom of the disk. Furthermore, very similar vortex structures observed in the experiment were also obtained.
Numerical Algorithm for Delta of Asian Option
Zhang, Boxiang; Yu, Yang; Wang, Weiguo
2015-01-01
We study the numerical computation of the Greeks of Asian options. In particular, we derive a closed-form solution for the Δ of the geometric Asian option and use this analytical form as a control to numerically calculate the Δ of the arithmetic Asian option, which is known to have no explicit closed-form solution. We implement our proposed numerical method and compare its standard error with that of other classical variance reduction methods. Our method provides an efficient solution to the hedging strategy with Asian options. PMID:26266271
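The paper's closed-form geometric-Asian control variate is not reproduced here; as a hedged sketch of the quantity it is designed to stabilize, the code below implements the plain pathwise Monte Carlo estimator of the arithmetic Asian call's Δ under geometric Brownian motion (all market parameters are assumed illustrative values). The control-variate version would subtract the geometric counterpart's pathwise samples and add back its known closed-form Δ.

```python
import numpy as np

def asian_delta_pathwise(S0, K, r, sigma, T, n_steps, n_paths, seed=0):
    """Plain pathwise Monte Carlo estimator of the delta of an arithmetic-average
    Asian call under GBM: Delta = E[exp(-rT) * 1{A > K} * A / S0]."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(log_paths)                 # monitored prices S_{t_1..t_N}
    A = S.mean(axis=1)                         # arithmetic average per path
    samples = np.exp(-r * T) * (A > K) * A / S0
    return samples.mean(), samples.std(ddof=1) / np.sqrt(n_paths)

delta, stderr = asian_delta_pathwise(S0=100.0, K=100.0, r=0.05, sigma=0.2,
                                     T=1.0, n_steps=50, n_paths=200_000)
print(f"delta ~ {delta:.4f} +/- {stderr:.4f}")
```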
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-10
... the 2008 Lead National Ambient Air Quality Standards AGENCY: Environmental Protection Agency (EPA...) of the Clean Air Act (CAA), necessary to implement, maintain, and enforce the 2008 lead national..., necessary to implement, maintain, and enforce the 2008 lead NAAQS. II. Summary of SIP Revision On October 17...
78 FR 16679 - Center for Drug Evaluation and Research Medical Policy Council; Request for Comments
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-18
... Council to ensure better coordination of medical policy development and implementation within CDER and... and implementation. II. Range of Medical Policy Issues To Be Considered FDA envisions a variety of... to other products; or Strategies for implementation of a new policy. III. Establishment of a Docket...
Study of Statewide Type II Noise Abatement Program for the Texas Department of Transportation
DOT National Transportation Integrated Search
2000-02-01
This project will provide sufficient information to the Texas Department of Transportation and the Texas Transportation Commission to make an informed decision regarding the development and implementation of a statewide Type II Noise Abatement Progra...
DOT National Transportation Integrated Search
2015-05-01
This report documents the System Design and Architecture for the Phase II implementation of the Integrated Dynamic Transit Operations (IDTO) Prototype bundle within the Dynamic Mobility Applications (DMA) portion of the Connected Vehicle Program. Thi...
Verifying the error bound of numerical computation implemented in computer systems
Sawada, Jun
2013-03-12
A verification tool receives a finite precision definition for an approximation of an infinite precision numerical function implemented in a processor in the form of a polynomial of bounded functions. The verification tool receives a domain for verifying outputs of segments associated with the infinite precision numerical function. The verification tool splits the domain into at least two segments, wherein each segment is non-overlapping with any other segment and converts, for each segment, a polynomial of bounded functions for the segment to a simplified formula comprising a polynomial, an inequality, and a constant for a selected segment. The verification tool calculates upper bounds of the polynomial for the at least two segments, beginning with the selected segment and reports the segments that violate a bounding condition.
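As a hedged sketch of the split-and-bound idea (not the patented tool itself), the code below checks an error bound for a degree-5 Taylor polynomial of exp on [0, 0.5]: the domain is split into segments, the polynomial is enclosed on each segment with interval Horner evaluation, and a coarse bound on the deviation from exp is compared against the tolerance; an empty list of violations means the bound is verified by this crude enclosure.

```python
import math

def interval_horner(coeffs, lo, hi):
    """Naive interval evaluation of sum(c_k x^k) over [lo, hi] via Horner's rule
    with interval arithmetic (ignoring floating-point rounding for brevity)."""
    a, b = 0.0, 0.0
    for c in reversed(coeffs):
        prods = [a * lo, a * hi, b * lo, b * hi]
        a, b = min(prods) + c, max(prods) + c
    return a, b

def verify_error_bound(coeffs, lo, hi, eps, n_segments):
    """Split [lo, hi] into segments and check, segment by segment, that the
    polynomial stays within eps of exp(x); exp is enclosed using its monotonicity."""
    width = (hi - lo) / n_segments
    violations = []
    for k in range(n_segments):
        a, b = lo + k * width, lo + (k + 1) * width
        p_lo, p_hi = interval_horner(coeffs, a, b)
        f_lo, f_hi = math.exp(a), math.exp(b)
        err = max(abs(p_hi - f_lo), abs(f_hi - p_lo))    # coarse bound on |p - exp|
        if err > eps:
            violations.append((a, b, err))
    return violations

# degree-5 Taylor polynomial of exp about 0, checked on [0, 0.5]
taylor = [1.0, 1.0, 0.5, 1.0 / 6, 1.0 / 24, 1.0 / 120]
print(verify_error_bound(taylor, 0.0, 0.5, eps=1e-3, n_segments=4096))
```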
40 CFR 63.708 - Implementation and enforcement.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) National Emission Standards for Magnetic Tape Manufacturing Operations § 63.708 Implementation and... §§ 63.701 and 63.703. (2) Approval of major alternatives to test methods under § 63.7(e)(2)(ii) and (f...
40 CFR 52.146 - Particulate matter (PM-10) Group II SIP commitments.
Code of Federal Regulations, 2010 CFR
2010-07-01
... submitted a revision to the State Implementation Plan (SIP) for Casa Grande, Show Low, Safford, Flagstaff... Implementation Plan (SIP) requirements for Casa Grande, Show Low, Safford, Flagstaff and Joseph City as provided...
An Enriched Shell Finite Element for Progressive Damage Simulation in Composite Laminates
NASA Technical Reports Server (NTRS)
McElroy, Mark W.
2016-01-01
A formulation is presented for an enriched shell finite element capable of progressive damage simulation in composite laminates. The element uses a discrete adaptive splitting approach for damage representation that allows for a straightforward model creation procedure based on an initially low fidelity mesh. The enriched element is verified for Mode I, Mode II, and mixed Mode I/II delamination simulation using numerical benchmark data. Experimental validation is performed using test data from a delamination-migration experiment. Good correlation was found between the enriched shell element model results and the numerical and experimental data sets. The work presented in this paper is meant to serve as a first milestone in the enriched element's development with an ultimate goal of simulating three-dimensional progressive damage processes in multidirectional laminates.
Using models in Integrated Ecosystem Assessment of coastal areas
NASA Astrophysics Data System (ADS)
Solidoro, Cosimo; Bandelj, Vinko; Cossarini, Gianpiero; Melaku Canu, Donata; Libralato, Simone
2014-05-01
Numerical models can greatly contribute to integrated ecological assessment of coastal and marine systems. Indeed, models can: i) assist in the identification of an efficient sampling strategy; ii) provide spatial interpolation and temporal extrapolation of experimental data, based on the knowledge of process dynamics and causal relationships that is coded within the model; and iii) provide estimates of indicators that are hard to measure directly. Furthermore, models can provide indications of the potential effects of implementing alternative management policies. Finally, by providing a synthetic representation of an ideal system based on its essential dynamics, models return a picture of the ideal behaviour of a system in the absence of external perturbation, alteration and noise, which might help in the identification of a reference behaviour. As an important example, model-based reanalyses of biogeochemical and ecological properties are an urgent need for estimating the environmental status and assessing the efficacy of conservation and environmental policies, also with reference to the enforcement of the European MSFD. However, the use of numerical models, and particularly of ecological models, in environmental management is still far from being the rule, possibly because of a failure to realize the benefits which a full integration of modeling and monitoring systems might provide, possibly because of a lack of trust in modeling results, or because many problems still exist in the development, validation and implementation of models. For instance, assessing the validity of model results is a complex process that requires the definition of appropriate indicators, metrics and methodologies, and faces the scarcity of real-time in-situ biogeochemical data. Furthermore, biogeochemical models typically consider dozens of variables which are heavily undersampled. Here we show how the integration of mathematical models and monitoring data can support integrated ecosystem assessment of a waterbody by reviewing applications from a complex coastal ecosystem, the Lagoon of Venice, and explore potential applications to other coastal and open-sea systems, up to the scale of the Mediterranean Sea.
Gottschalck, J.; Wheeler, M.; Weickmann, K.; ...
2010-09-01
The U.S. Climate Variability and Predictability (CLIVAR) MJO Working Group (MJOWG) has taken steps to promote the adoption of a uniform diagnostic and set of skill metrics for analyzing and assessing dynamical forecasts of the MJO. Here we describe the framework and initial implementation of the approach using real-time forecast data from multiple operational numerical weather prediction (NWP) centers. The objectives of this activity are to provide a means to i) quantitatively compare skill of MJO forecasts across operational centers, ii) measure gains in forecast skill over time by a given center and the community as a whole, and iii) facilitate the development of a multimodel forecast of the MJO. The MJO diagnostic is based on extensive deliberations among the MJOWG in conjunction with input from a number of operational centers and makes use of the MJO index of Wheeler and Hendon. This forecast activity has been endorsed by the Working Group on Numerical Experimentation (WGNE), the international body that fosters the development of atmospheric models for NWP and climate studies. The Climate Prediction Center (CPC) within the National Centers for Environmental Prediction (NCEP) is hosting the acquisition of the forecast data, application of the MJO diagnostic, and real-time display of the standardized forecasts. The activity has contributed to the production of 1–2-week operational outlooks at NCEP and activities at other centers. Further enhancements of the diagnostic's implementation, including more extensive analysis, comparison, illustration, and verification of the contributions from the participating centers, will increase the usefulness and application of these forecasts and potentially lead to more skillful predictions of the MJO and, indirectly, extratropical and other weather variability (e.g., tropical cyclones) influenced by the MJO. The purpose of this article is to inform the larger scientific and operational forecast communities of the MJOWG forecast effort and invite participation from additional operational centers.
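As a hedged sketch of the kind of skill metric applied to the Wheeler-Hendon index, the code below computes the commonly used bivariate correlation and bivariate RMSE between forecast and observed (RMM1, RMM2) pairs at a fixed lead time, on synthetic data; the exact metric definitions adopted by the MJOWG should be taken from the paper itself.

```python
import numpy as np

def bivariate_skill(rmm1_f, rmm2_f, rmm1_o, rmm2_o):
    """Bivariate correlation and RMSE between forecast and observed (RMM1, RMM2)
    pairs at a fixed lead time, following the commonly used definitions."""
    num = np.sum(rmm1_f * rmm1_o + rmm2_f * rmm2_o)
    den = (np.sqrt(np.sum(rmm1_f**2 + rmm2_f**2)) *
           np.sqrt(np.sum(rmm1_o**2 + rmm2_o**2)))
    cor = num / den
    rmse = np.sqrt(np.mean((rmm1_f - rmm1_o)**2 + (rmm2_f - rmm2_o)**2))
    return cor, rmse

# synthetic example: a 'forecast' that is the observation plus noise
rng = np.random.default_rng(1)
o1, o2 = rng.standard_normal(90), rng.standard_normal(90)
f1, f2 = o1 + 0.3 * rng.standard_normal(90), o2 + 0.3 * rng.standard_normal(90)
print(bivariate_skill(f1, f2, o1, o2))
```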
NASA Astrophysics Data System (ADS)
Eshuis, Henk; Yarkony, Julian; Furche, Filipp
2010-06-01
The random phase approximation (RPA) is an increasingly popular post-Kohn-Sham correlation method, but its high computational cost has limited molecular applications to systems with few atoms. Here we present an efficient implementation of RPA correlation energies based on a combination of resolution of the identity (RI) and imaginary frequency integration techniques. We show that the RI approximation to four-index electron repulsion integrals leads to a variational upper bound to the exact RPA correlation energy if the Coulomb metric is used. Auxiliary basis sets optimized for second-order Møller-Plesset (MP2) calculations are well suited to RPA, as is demonstrated for the HEAT [A. Tajti et al., J. Chem. Phys. 121, 11599 (2004)] and MOLEKEL [F. Weigend et al., Chem. Phys. Lett. 294, 143 (1998)] benchmark sets. Using imaginary frequency integration rather than diagonalization to compute the matrix square root necessary for RPA, evaluation of the RPA correlation energy requires O(N⁴ log N) operations and O(N³) storage only; the price for this dramatic improvement over existing algorithms is a numerical quadrature. We propose a numerical integration scheme that is exact in the two-orbital case and converges exponentially with the number of grid points. For most systems, 30-40 grid points yield μH accuracy in triple zeta basis sets, but much larger grids are necessary for small gap systems. The lowest-order approximation to the present method is a post-Kohn-Sham frequency-domain version of opposite-spin Laplace-transform RI-MP2 [J. Jung et al., Phys. Rev. B 70, 205107 (2004)]. Timings for polyacenes with up to 30 atoms show speed-ups of two orders of magnitude over previous implementations. The present approach makes it possible to routinely compute RPA correlation energies of systems well beyond 100 atoms, as is demonstrated for the octapeptide angiotensin II.
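The imaginary-frequency quadrature at the core of the above scheme can be illustrated with a toy sketch (an assumption-laden stand-in, not the authors' RPA integrand): Gauss-Legendre nodes on [-1, 1] are mapped to [0, ∞) and applied to a smooth, Lorentzian-like test integrand with a known value, showing the rapid convergence such grids achieve. The scale parameter w0 and the test function are arbitrary choices.

```python
# Toy illustration of imaginary-frequency quadrature: map Gauss-Legendre
# nodes from [-1, 1] to [0, inf) and integrate a smooth Lorentzian-type
# integrand, whose exact value is known, for increasing grid sizes.
import numpy as np

def freq_grid(n, w0=1.0):
    """Gauss-Legendre nodes/weights mapped to [0, inf) via w = w0*(1+x)/(1-x)."""
    x, wx = np.polynomial.legendre.leggauss(n)
    w = w0 * (1.0 + x) / (1.0 - x)          # frequencies
    dw = wx * 2.0 * w0 / (1.0 - x) ** 2     # transformed weights
    return w, dw

a = 0.5                                      # "gap"-like parameter (arbitrary)
exact = np.pi / 2.0                          # integral of a/(a^2+w^2) over [0, inf)

for n in (5, 10, 20, 40):
    w, dw = freq_grid(n)
    approx = np.sum(dw * a / (a ** 2 + w ** 2))
    print(f"n = {n:3d}  quadrature = {approx:.12f}  error = {abs(approx - exact):.2e}")
```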
Krause, Lennard; Herbst-Irmer, Regine; Sheldrick, George M; Stalke, Dietmar
2015-02-01
The quality of diffraction data obtained using silver and molybdenum microsources has been compared for six model compounds with a wide range of absorption factors. The experiments were performed on two 30 W air-cooled Incoatec IµS microfocus sources with multilayer optics mounted on a Bruker D8 goniometer with a SMART APEX II CCD detector. All data were analysed, processed and refined using standard Bruker software. The results show that Ag Kα radiation can be beneficial when heavy elements are involved. A numerical absorption correction based on the positions and indices of the crystal faces is shown to be of limited use for the highly focused microsource beams, presumably because the assumption that the crystal is completely bathed in a (top-hat profile) beam of uniform intensity is no longer valid. Fortunately the empirical corrections implemented in SADABS, although originally intended as a correction for absorption, also correct rather well for the variations in the effective volume of the crystal irradiated. In three of the cases studied (two Ag and one Mo) the final SHELXL R1 against all data after application of empirical corrections implemented in SADABS was below 1%. Since such corrections are designed to optimize the agreement of the intensities of equivalent reflections with different paths through the crystal but the same Bragg 2θ angles, a further correction is required for the 2θ dependence of the absorption. For this, SADABS uses the transmission factor of a spherical crystal with a user-defined value of μr (where μ is the linear absorption coefficient and r is the effective radius of the crystal); the best results are obtained when r is biased towards the smallest crystal dimension. The results presented here suggest that the IUCr publication requirement that a numerical absorption correction must be applied for strongly absorbing crystals is in need of revision.
School Counselor Lead Initial Individual Career and Academic Plan Implementation Design
ERIC Educational Resources Information Center
Moeder-Chandler, Markus
2017-01-01
In Fall 2014, Fountain-Fort Carson School District #8 undertook a revamping of graduation and state-mandated ICAP requirements for implementation with the graduating class of 2021. This design and implementation process included numerous stakeholders and several years of planning from Fall of 2014 to Spring of 2017. The design and…
Teaching Modeling with Partial Differential Equations: Several Successful Approaches
ERIC Educational Resources Information Center
Myers, Joseph; Trubatch, David; Winkel, Brian
2008-01-01
We discuss the introduction and teaching of partial differential equations (heat and wave equations) via modeling physical phenomena, using a new approach that encompasses constructing difference equations and implementing these in a spreadsheet, numerically solving the partial differential equations using the numerical differential equation…
Verification of Numerical Programs: From Real Numbers to Floating Point Numbers
NASA Technical Reports Server (NTRS)
Goodloe, Alwyn E.; Munoz, Cesar; Kirchner, Florent; Correnson, Loiec
2013-01-01
Numerical algorithms lie at the heart of many safety-critical aerospace systems. The complexity and hybrid nature of these systems often requires the use of interactive theorem provers to verify that these algorithms are logically correct. Usually, proofs involving numerical computations are conducted in the infinitely precise realm of the field of real numbers. However, numerical computations in these algorithms are often implemented using floating point numbers. The use of a finite representation of real numbers introduces uncertainties as to whether the properties verified in the theoretical setting hold in practice. This short paper describes work in progress aimed at addressing these concerns. Given a formally proven algorithm, written in the Program Verification System (PVS), the Frama-C suite of tools is used to identify sufficient conditions and verify that under such conditions the rounding errors arising in a C implementation of the algorithm do not affect its correctness. The technique is illustrated using an algorithm for detecting loss of separation among aircraft.
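As a minimal illustration of the real-versus-floating-point gap discussed above (not the PVS/Frama-C workflow itself), the sketch below evaluates a simple separation-style predicate in double precision and in exact rational arithmetic and reports the rounding error; the threshold and sample values are invented for the example.

```python
# Compare a floating-point distance-squared predicate with an exact
# rational evaluation to expose the rounding error that a proof over the
# reals would not see. All numbers here are illustrative.
from fractions import Fraction

def sep2_float(dx, dy):
    return dx * dx + dy * dy

def sep2_exact(dx, dy):
    fdx, fdy = Fraction(dx), Fraction(dy)   # exact values of the float inputs
    return fdx * fdx + fdy * fdy

dx, dy = 0.1, 0.3            # relative position components (illustrative units)
D = 0.31622776601683794      # float approximation of a separation threshold

approx = sep2_float(dx, dy)
exact = sep2_exact(dx, dy)
err = abs(Fraction(approx) - exact)

print("float dx^2+dy^2  =", approx)
print("exact dx^2+dy^2  =", float(exact))
print("rounding error   =", float(err))
print("loss of separation (float)?", approx < D * D)
print("loss of separation (exact)?", exact < Fraction(D) * Fraction(D))
```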
NASA Astrophysics Data System (ADS)
Xia, Xilin; Liang, Qiuhua; Ming, Xiaodong; Hou, Jingming
2018-01-01
This document addresses the comments raised by Lu et al. (2017). Lu et al. (2017) proposed an alternative numerical treatment for implementing the fully implicit friction discretization in Xia et al. (2017). The method by Lu et al. (2017) is also effective, but not necessarily easier to implement or more efficient. The numerical wiggles observed by Lu et al. (2017) do not affect the overall solution accuracy of the surface reconstruction method (SRM). SRM introduces an antidiffusion effect, which may also lead to more accurate numerical predictions than hydrostatic reconstruction (HR) but may be the cause of the numerical wiggles. As suggested by Lu et al. (2017), HR may perform equally well if fine enough grids are used, which has been investigated and recognized in the literature. However, the use of refined meshes in simulations will inevitably increase computational cost and the grid sizes as suggested are too small for real-world applications.
Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems
NASA Technical Reports Server (NTRS)
Cerro, J. A.; Scotti, S. J.
1991-01-01
Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the need for stepping in time as required by conventional time integration schemes. The elimination of time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.
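The appeal of computing the response only at selected times, rather than marching, can be sketched for a linear semi-discrete system du/dt = A u by evaluating the matrix exponential directly. This is a conceptual analogue of the transform-based idea, not the transfinite element algorithm itself, and the 1-D conduction matrix and selected times below are illustrative.

```python
# Evaluate a linear transient conduction response at a few selected times
# directly (no time stepping), and compare with explicit Euler marching.
# Illustrative 1-D rod with unit diffusivity and Dirichlet ends.
import numpy as np
from scipy.linalg import expm

n, dx = 50, 1.0 / 51
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx ** 2          # Dirichlet Laplacian
u0 = np.sin(np.pi * dx * np.arange(1, n + 1))          # initial temperature

for t in (0.01, 0.05, 0.1):                            # selected times of interest
    u_direct = expm(A * t) @ u0                        # response without stepping

    steps = int(np.ceil(t / (0.4 * dx ** 2)))          # respect explicit stability
    dt = t / steps
    u_step = u0.copy()
    for _ in range(steps):
        u_step = u_step + dt * (A @ u_step)

    diff = np.max(np.abs(u_direct - u_step))
    print(f"t = {t:5.2f}: {steps:4d} Euler steps, |direct - stepped|_max = {diff:.2e}")
```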
A spectral boundary integral equation method for the 2-D Helmholtz equation
NASA Technical Reports Server (NTRS)
Hu, Fang Q.
1994-01-01
In this paper, we present a new numerical formulation of solving the boundary integral equations reformulated from the Helmholtz equation. The boundaries of the problems are assumed to be smooth closed contours. The solution on the boundary is treated as a periodic function, which is in turn approximated by a truncated Fourier series. A Fourier collocation method is followed in which the boundary integral equation is transformed into a system of algebraic equations. It is shown that in order to achieve spectral accuracy for the numerical formulation, the nonsmoothness of the integral kernels, associated with the Helmholtz equation, must be carefully removed. The emphasis of the paper is on investigating the essential elements of removing the nonsmoothness of the integral kernels in the spectral implementation. The present method is robust for a general boundary contour. Aspects of efficient implementation of the method using FFT are also discussed. A numerical example of wave scattering is given in which the exponential accuracy of the present numerical method is demonstrated.
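The spectral accuracy targeted by the Fourier collocation treatment rests on the fact that equispaced (periodic trapezoidal) quadrature of a smooth periodic integrand converges faster than any power of the grid spacing once kernel non-smoothness has been removed. The sketch below demonstrates that generic property on an analytically integrable periodic function; it is not an implementation of the boundary integral method itself.

```python
# Equispaced (periodic trapezoidal) quadrature of a smooth 2*pi-periodic
# function converges spectrally; the exact value follows from the identity
#   (1/2pi) * integral of exp(a*cos t) dt = I0(a), the modified Bessel function.
import numpy as np
from scipy.special import i0

a = 1.0
exact = i0(a)

for n in (4, 8, 16, 32):
    t = 2.0 * np.pi * np.arange(n) / n          # equispaced collocation points
    approx = np.mean(np.exp(a * np.cos(t)))     # periodic trapezoidal rule
    print(f"n = {n:3d}  error = {abs(approx - exact):.3e}")
```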
NASA Technical Reports Server (NTRS)
Chuang, C.-H.; Goodson, Troy D.; Ledsinger, Laura A.
1995-01-01
This report describes current work in the numerical computation of multiple burn, fuel-optimal orbit transfers and presents an analysis of the second variation for extremal multiple burn orbital transfers as well as a discussion of a guidance scheme which may be implemented for such transfers. The discussion of numerical computation focuses on the use of multivariate interpolation to aid the computation in the numerical optimization. The second variation analysis includes the development of the conditions for the examination of both fixed and free final time transfers. Evaluations for fixed final time are presented for extremal one, two, and three burn solutions of the first variation. The free final time problem is considered for an extremal two burn solution. In addition, corresponding changes of the second variation formulation over thrust arcs and coast arcs are included. The guidance scheme discussed is an implicit scheme which implements a neighboring optimal feedback guidance strategy to calculate both thrust direction and thrust on-off times.
40 CFR 52.2571 - Classification of regions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) Wisconsin § 52.2571 Classification of regions. The Wisconsin plan was evaluated on the basis of the following classifications: Air... Duluth (Minnesota)-Superior (Wisconsin) Interstate I II III III III North Central Wisconsin Intrastate II...
40 CFR 52.2571 - Classification of regions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) Wisconsin § 52.2571 Classification of regions. The Wisconsin plan was evaluated on the basis of the following classifications: Air... Duluth (Minnesota)-Superior (Wisconsin) Interstate I II III III III North Central Wisconsin Intrastate II...
40 CFR 52.2571 - Classification of regions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) Wisconsin § 52.2571 Classification of regions. The Wisconsin plan was evaluated on the basis of the following classifications: Air... Duluth (Minnesota)-Superior (Wisconsin) Interstate I II III III III North Central Wisconsin Intrastate II...
Implementation and Evaluation of Two Design Concepts of the Passive Ring Resonator Laser Gyroscope.
1983-12-01
The cavity mirrors consist of 23 dielectric layers on a Zerodur substrate (Ref 1). The reflectivity of each mirror is 0.99995 (Ref 1). [The remainder of this excerpt is list-of-figures residue; recoverable figure titles include: Conditions at the Cavity Input Mirror; Cavity Power Transmission vs. Frequency; Spatial Phase Distortion of the Reflected Beam; Plano-Spherical Cavity; Astigmatism of a Spherical Mirror in a Ring; Case I: Circular-Circular Mode Match.]
A domain-specific compiler for a parallel multiresolution adaptive numerical simulation environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram
This paper describes the design and implementation of a layered domain-specific compiler to support MADNESS---Multiresolution ADaptive Numerical Environment for Scientific Simulation. MADNESS is a high-level software environment for the solution of integral and differential equations in many dimensions, using adaptive and fast harmonic analysis methods with guaranteed precision. MADNESS uses k-d trees to represent spatial functions and implements operators like addition, multiplication, differentiation, and integration on the numerical representation of functions. The MADNESS runtime system provides global namespace support and a task-based execution model including futures. MADNESS is currently deployed on massively parallel supercomputers and has enabled many science advances. Due to the highly irregular and statically unpredictable structure of the k-d trees representing the spatial functions encountered in MADNESS applications, only purely runtime approaches to optimization have previously been implemented in the MADNESS framework. This paper describes a layered domain-specific compiler developed to address some performance bottlenecks in MADNESS. The newly developed static compile-time optimizations, in conjunction with the MADNESS runtime support, enable significant performance improvement for the MADNESS framework.
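A one-dimensional caricature of the adaptive, precision-guaranteed function representation underlying MADNESS (whose real machinery uses k-d trees, multiwavelet bases and a task-based parallel runtime) is sketched below: a cell is recursively subdivided until a local interpolation error estimate meets a requested tolerance, so sharp features receive a deep tree while smooth regions stay coarse. Every detail here is an assumption made for illustration only.

```python
# Adaptive dyadic refinement of [0, 1]: subdivide a cell while the midpoint
# of a linear interpolant misses the function by more than tol. The result
# is a (depth-limited) binary tree stored as a list of leaf intervals.
import numpy as np

def refine(f, a, b, tol, depth=0, max_depth=20):
    mid = 0.5 * (a + b)
    err = abs(f(mid) - 0.5 * (f(a) + f(b)))      # local interpolation error estimate
    if err < tol or depth >= max_depth:
        return [(a, b)]                          # accept this leaf
    return (refine(f, a, mid, tol, depth + 1, max_depth)
            + refine(f, mid, b, tol, depth + 1, max_depth))

f = lambda x: np.tanh(50.0 * (x - 0.5))          # sharp feature near x = 0.5
leaves = refine(f, 0.0, 1.0, tol=1e-3)

print("number of leaves:", len(leaves))
print("smallest leaf   :", min(b - a for a, b in leaves))
print("largest leaf    :", max(b - a for a, b in leaves))
```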
Fully coupled methods for multiphase morphodynamics
NASA Astrophysics Data System (ADS)
Michoski, C.; Dawson, C.; Mirabito, C.; Kubatko, E. J.; Wirasaet, D.; Westerink, J. J.
2013-09-01
We present numerical methods for a system of equations consisting of the two dimensional Saint-Venant shallow water equations (SWEs) fully coupled to a completely generalized Exner formulation of hydrodynamically driven sediment discharge. This formulation is implemented by way of a discontinuous Galerkin (DG) finite element method, using a Roe Flux for the advective components and the unified form for the dissipative components. We implement a number of Runge-Kutta time integrators, including a family of strong stability preserving (SSP) schemes, and Runge-Kutta Chebyshev (RKC) methods. A brief discussion is provided regarding implementational details for generalizable computer algebra tokenization using arbitrary algebraic fluxes. We then run numerical experiments to show standard convergence rates, and discuss important mathematical and numerical nuances that arise due to prominent features in the coupled system, such as the emergence of nondifferentiable and sharp zero crossing functions, radii of convergence in manufactured solutions, and nonconservative product (NCP) formalisms. Finally we present a challenging application model concerning hydrothermal venting across metalliferous muds in the presence of chemical reactions occurring in low pH environments.
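One of the time integrators mentioned above, the three-stage strong-stability-preserving Runge-Kutta scheme in Shu-Osher form, is compact enough to sketch. The advection operator below is only a stand-in for the assembled DG residual of the coupled SWE-Exner system; the grid, speed and pulse are arbitrary.

```python
# Three-stage SSP Runge-Kutta (Shu-Osher form) advancing du/dt = L(u).
# Here L is a simple periodic upwind advection operator used as a stand-in
# for the DG residual of the coupled flow/morphodynamics system.
import numpy as np

def ssp_rk3_step(L, u, dt):
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

n, dx, c = 200, 1.0 / 200, 1.0
L = lambda u: -c * (u - np.roll(u, 1)) / dx          # first-order upwind advection

x = dx * np.arange(n)
u = np.exp(-200.0 * (x - 0.3) ** 2)                  # initial pulse
dt = 0.5 * dx / c                                    # CFL-limited step

for _ in range(200):
    u = ssp_rk3_step(L, u, dt)

mass_error = abs(u.sum() - np.exp(-200.0 * (x - 0.3) ** 2).sum()) * dx
print("mass conserved to", mass_error)
```

The convex-combination structure of the stages is what lets the scheme inherit the stability properties of forward Euler applied to the same spatial operator.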
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-21
...: (A), (B), (C), (D)(ii), (E), (F), (G), (H), (J), (K), (L), and (M). DATES: This action is effective... elements for the 1997 8-hour ozone NAAQS: (A), (B), (C), (D)(ii), (E), (F), (G), (H), (J), (K), (L), and (M... 1997 ozone NAAQS: (A), (B), (C), (D)(ii), (E), (F), (G), (H), (J), (K), (L), (M). EPA is taking no...
... your coagulation factors. Coagulation factors are known by Roman numerals (I, II, VIII, etc.) or by name ( ... need this test if you have a family history of bleeding disorders. Most bleeding disorders are inherited. ...
Symmetry-breaking instability of quadratic soliton bound states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delqué, Michaël; Département d'Optique P.M. Duffieux, Institut FEMTO-ST, Université de Franche-Comté, CNRS UMR 6174, F-25030 Besançon; Fanjoux, Gil
We study both numerically and experimentally two-dimensional soliton bound states in quadratic media and demonstrate their symmetry-breaking instability. The experiment is performed in a potassium titanyl phosphate crystal in a type-II configuration. The bound state is generated by the copropagation of the antisymmetric fundamental beam locked in phase with the symmetrical second harmonic one. Experimental results are in good agreement with numerical simulations of the nonlinear wave equations.
Influence of Alternative Engine Concepts on LCTR2 Sizing and Mission Profile
2012-01-01
II), and engine performance was estimated with the Numerical Propulsion System Simulation (NPSS). Design trades for the ACE vs. VSPT are presented in terms of vehicle weight empty for variations in mission altitude and...
Numerical Relativity, Black Hole Mergers, and Gravitational Waves: Part II
NASA Technical Reports Server (NTRS)
Centrella, Joan
2012-01-01
This series of 3 lectures will present recent developments in numerical relativity, and their applications to simulating black hole mergers and computing the resulting gravitational waveforms. In this second lecture, we focus on simulations of black hole binary mergers. We highlight the instabilities that plagued the codes for many years, the recent breakthroughs that led to the first accurate simulations, and the current state of the art.
[Clinical practice guidelines in Peru: evaluation of its quality using the AGREE II instrument].
Canelo-Aybar, Carlos; Balbin, Graciela; Perez-Gomez, Ángela; Florez, Iván D
2016-01-01
To evaluate the methodological quality of clinical practice guidelines (CPGs) put into practice by the Peruvian Ministry of Health (MINSA), 17 CPGs from the ministry, published between 2009 and 2014, were independently evaluated by three methodological experts using the AGREE II instrument. The scores of the AGREE II domains were low and very low across all CPGs: scope and purpose (median 44%), clarity of presentation (median 47%), participation of decision-makers (median 8%), methodological rigor (median 5%), applicability (median 5%), and editorial independence (median 8%). In conclusion, the methodological quality of the CPGs implemented by MINSA is low, and consequently their use cannot be recommended. The implementation of the methodology for the development of CPGs described in the recently published CPG methodological preparation manual in Peru is a pressing need.
GLYDE-II: The GLYcan data exchange format
Ranzinger, Rene; Kochut, Krys J.; Miller, John A.; Eavenson, Matthew; Lütteke, Thomas; York, William S.
2017-01-01
Summary The GLYcan Data Exchange (GLYDE) standard has been developed for the representation of the chemical structures of monosaccharides, glycans and glycoconjugates using a connection table formalism formatted in XML. This format allows structures, including those that do not exist in any database, to be unambiguously represented and shared by diverse computational tools. GLYDE implements a partonomy model based on human language along with rules that provide consistent structural representations, including a robust namespace for specifying monosaccharides. This approach facilitates the reuse of data processing software at the level of granularity that is most appropriate for extraction of the desired information. GLYDE-II has already been used as a key element of several glycoinformatics tools. The philosophical and technical underpinnings of GLYDE-II and recent implementation of its enhanced features are described. PMID:28955652
Total Quality Management (TQM). Implementers Workshop
1990-05-15
DEPARTMENT OF DEFENSE, May 15, 1990. TOTAL QUALITY MANAGEMENT (TQM) Implementers Workshop. © Copyright 1990 Booz Allen. ...must be continually performed in order to achieve successful TQM implementation. ...information, please refer to the student manual, Total Quality Management (TQM) Awareness Seminar, that was provided for the Awareness Course. You may...
Spanish methodological approach for biosphere assessment of radioactive waste disposal.
Agüero, A; Pinedo, P; Cancio, D; Simón, I; Moraleda, M; Pérez-Sánchez, D; Trueba, C
2007-10-01
The development of radioactive waste disposal facilities requires implementation of measures that will afford protection of human health and the environment over a specific temporal frame that depends on the characteristics of the wastes. The repository design is based on a multi-barrier system: (i) the near-field or engineered barrier, (ii) far-field or geological barrier and (iii) the biosphere system. Here, the focus is on the analysis of this last system, the biosphere. A description is provided of conceptual developments, methodological aspects and software tools used to develop the Biosphere Assessment Methodology in the context of high-level waste (HLW) disposal facilities in Spain. This methodology is based on the BIOMASS "Reference Biospheres Methodology" and provides a logical and systematic approach with supplementary documentation that helps to support the decisions necessary for model development. It follows a five-stage approach, such that a coherent biosphere system description and the corresponding conceptual, mathematical and numerical models can be built. A discussion on the improvements implemented through application of the methodology to case studies in international and national projects is included. Some facets of this methodological approach still require further consideration, principally an enhanced integration of climatology, geography and ecology into models considering evolution of the environment, some aspects of the interface between the geosphere and biosphere, and an accurate quantification of environmental change processes and rates.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III
2004-01-01
This final report documents the accomplishments of this project. 1) The incremental-iterative (II) form of the reverse-mode (adjoint) method for computing first-order (FO) aerodynamic sensitivity derivatives (SDs) has been successfully implemented and tested in a 2D CFD code (called ANSERS) using the reverse-mode capability of ADIFOR 3.0. These results compared very well with similar SDs computed via a black-box (BB) application of the reverse-mode capability of ADIFOR 3.0, and also with similar SDs calculated via the method of finite differences. 2) Second-order (SO) SDs have been implemented in the 2D ANSERS code using the very efficient strategy that was originally proposed (but not previously tested) in Reference 3, Appendix A. Furthermore, these SO SDs have been validated for accuracy and computational efficiency. 3) Studies were conducted in quasi-1D and 2D concerning the smoothness (or lack of smoothness) of the FO and SO SDs for flows with shock waves. The phenomenon is documented in the publications of this study (listed subsequently); however, the specific numerical mechanism responsible for this unsmoothness was not discovered. 4) The FO and SO derivatives for quasi-1D and 2D flows were applied to predict aerodynamic design uncertainties, and were also applied in robust design optimization studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yueqiang, E-mail: yueqiang.liu@ccfe.ac.uk; Chapman, I. T.; Graves, J. P.
2014-05-15
A non-perturbative magnetohydrodynamic-kinetic hybrid formulation is developed and implemented into the MARS-K code [Liu et al., Phys. Plasmas 15, 112503 (2008)] that takes into account the anisotropy and asymmetry [Graves et al., Nature Commun. 3, 624 (2012)] of the equilibrium distribution of energetic particles (EPs) in particle pitch angle space, as well as first order finite orbit width (FOW) corrections for both passing and trapped EPs. Anisotropic models, which affect both the adiabatic and non-adiabatic drift kinetic energy contributions, are implemented for both neutral beam injection and ion cyclotron resonant heating induced EPs. The first order FOW correction does not contribute to the precessional drift resonance of trapped particles, but generally remains finite for the bounce and transit resonance contributions, as well as for the adiabatic contributions from asymmetrically distributed passing particles. Numerical results for a 9MA steady state ITER plasma suggest that (i) both the anisotropy and FOW effects can be important for the resistive wall mode stability in ITER plasmas; and (ii) the non-perturbative approach predicts less kinetic stabilization of the mode, than the perturbative approach, in the presence of anisotropy and FOW effects for the EPs. The latter may partially be related to the modification of the eigenfunction of the mode by the drift kinetic effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gartling, D.K.
User instructions are given for the finite element, electromagnetics program, TORO II. The theoretical background and numerical methods used in the program are documented in SAND95-2472. The present document also describes a number of example problems that have been analyzed with the code and provides sample input files for typical simulations. 20 refs., 34 figs., 3 tabs.
Paratransit Handbook : a Guide to Paratransit System Implementation volume II - parts 4 and 5
DOT National Transportation Integrated Search
1979-02-01
This Paratransit Handbook has been developed to aid public officials, planners and system operators in planning, designing, implementing, operating and evaluating integrated paratransit systems. The Handbook represents a compendium of techniques and e...
Computationally efficient method for optical simulation of solar cells and their applications
NASA Astrophysics Data System (ADS)
Semenikhin, I.; Zanuccoli, M.; Fiegna, C.; Vyurkov, V.; Sangiorgi, E.
2013-01-01
This paper presents two novel implementations of the Differential method to solve the Maxwell equations in nanostructured optoelectronic solid state devices. The first proposed implementation is based on an improved and computationally efficient T-matrix formulation that adopts multiple-precision arithmetic to tackle the numerical instability problem which arises due to evanescent modes. The second implementation adopts an iterative approach that achieves low computational complexity, O(N log N) or better. The proposed algorithms can handle structures with arbitrary spatial variation of the permittivity. The developed two-dimensional numerical simulator is applied to analyze the dependence of the absorption characteristics of a thin silicon slab on the morphology of the front interface and on the angle of incidence of the radiation with respect to the device surface.
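The instability that motivates the multiple-precision T-matrix variant arises because evanescent modes contribute exponentially growing and decaying terms whose near-cancellation is lost in double precision. The hedged sketch below reproduces that effect on a scalar example with the mpmath library; the decay-constant-times-thickness value is invented, and this is not the paper's formulation.

```python
# Catastrophic cancellation between growing and decaying evanescent terms:
# cosh(k*d) - sinh(k*d) equals exp(-k*d) exactly, but in double precision
# the tiny decaying part is swamped by the large growing part.
import math
import mpmath as mp

kd = 40.0                       # (decay constant) * (layer thickness), arbitrary

double_result = math.cosh(kd) - math.sinh(kd)
reference = math.exp(-kd)

mp.mp.dps = 50                  # 50 significant digits
mp_result = mp.cosh(kd) - mp.sinh(kd)

print("double precision  :", double_result)
print("multiple precision:", mp.nstr(mp_result, 10))
print("true exp(-k*d)    :", reference)
```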
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.
NASA Astrophysics Data System (ADS)
Cordier, Florian; Tassi, Pablo; Claude, Nicolas; Crosato, Alessandra; Rodrigues, Stéphane; Pham van Bang, Damien
2017-04-01
Numerical modelling of graded sediment transport in rivers remains a challenge [Siviglia and Crosato, 2016] and only few studies have considered the non-uniform distribution of sediment, although sediment grading is an inherent characteristic of natural rivers. The present work aims at revisiting the morphodynamics module of the Telemac-Mascaret modelling system and to integrate the latest developments to model the effects of non-uniform sediment on i) the sediment transport capacity estimated at the interface between the flow and the riverbed and on ii) the vertical sorting of sediment deposits in response to sediment supply changes. The implementation of these two processes plays a key role in the modelling of bar dynamics in aggrading/degrading channels [Blom, 2008]. Numerical modelling of graded sediment transport remains a challenge due to the difficulty of reproducing the non-linear interactions between grains of different shape and size. Application of classical bedload equations usually fails in reproducing relevant transport rates [Recking, 2010 and references therein]. In this work, the graded sediment transport model of Wilcock and Crowe [2003] and the active layer concept of Hirano [1971] for the formulation of the exchange layer are implemented. The ability to reproduce the formation and evolution of graded-sediment bars is assessed on the basis of laboratory experiments from the literature. References: Blom, A., Ribberink, J. S., and Parker, G. 2008. Vertical sorting and the morphodynamics of bed form-dominated rivers: A sorting evolution model. Journal of Geophysical Research: Earth Surface, 113(F1). Lauer, J. W., Viparelli, E., and Piégay, H. 2016. Morphodynamics and sediment tracers in 1-d (mast-1d): 1-d sediment transport that includes exchange with an off-channel sediment reservoir. Advances in Water Resources. Recking, A. 2010. A comparison between flume and field bed load transport data and consequences for surface-based bed load transport prediction. Water Resources Research, 46(3). W03518. Siviglia, A. and Crosato, A. 2016. Numerical modelling of river morphodynamics: latest developments and remaining challenges. Advances in Water Resources, 90:1-9. Wilcock, P. R. and Crowe, J. C. 2003. Surface-based transport model for mixed-size sediment. Journal of Hydraulic Engineering, 129(2):120-128.
Focusing Intense Charged Particle Beams with Achromatic Effects for Heavy Ion Fusion
NASA Astrophysics Data System (ADS)
Mitrani, James; Kaganovich, Igor
2012-10-01
Final focusing systems designed to minimize the effects of chromatic aberrations in the Neutralized Drift Compression Experiment (NDCX-II) are described. NDCX-II is a linear induction accelerator, designed to accelerate short bunches at high current. Previous experiments showed that neutralized drift compression significantly compresses the beam longitudinally (˜60x) in the z-direction, resulting in a narrow distribution in z-space, but a wide distribution in pz-space. Using simple lenses (e.g., solenoids, quadrupoles) to focus beam bunches with wide distributions in pz-space results in chromatic aberrations, leading to lower beam intensities (J/cm^2). Therefore, the final focusing system must be designed to compensate for chromatic aberrations. The paraxial ray equations and beam envelope equations are numerically solved for parameters appropriate to NDCX-II. Based on these results, conceptual designs for final focusing systems using a combination of solenoids and/or quadrupoles are optimized to compensate for chromatic aberrations. Lens aberrations and emittance growth will be investigated, and analytical results will be compared with results from numerical particle-in-cell (PIC) simulation codes.
Epidural labour analgesia using Bupivacaine and Clonidine
Syal, K; Dogra, RK; Ohri, A; Chauhan, G; Goel, A
2011-01-01
Background: To compare the effects of the addition of Clonidine (60 μg) to Epidural Bupivacaine (0.125%) for labour analgesia, with regard to duration of analgesia, duration of labour, ambulation, incidence of instrumentation and caesarean section, foetal outcome, patient satisfaction and side effects. Patients & Methods: On-demand epidural labour analgesia was given to 50 nulliparous healthy term parturients (cephalic presentation), randomly divided into two groups. Group I received bupivacaine (0.125%) alone, whereas Group II received bupivacaine (0.125%) along with Clonidine (60 μg). 10 ml of 0.125% bupivacaine was injected as the first dose and further doses were titrated to patient relief (Numerical Rating Scale <3). Top-ups were given whenever the Numerical Rating Scale rose above 5. Results: There was statistically significant prolongation of the duration of analgesia in Group II, with no difference in duration of labour, ambulation, incidence of instrumentation and caesarean section, or foetal outcome. Clonidine also had a dose-sparing effect on bupivacaine, and there was better patient satisfaction without any significant side effects in Group II. Conclusion: Clonidine is a useful adjunct to bupivacaine for epidural labour analgesia and can be considered as an alternative to opioids. PMID:21804714
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moussa, Jonathan E.
2013-05-13
This piece of software is a new feature implemented inside an existing open-source library. Specifically, it is a new implementation of a density functional (HSE, short for Heyd-Scuseria-Ernzerhof) for a repository of density functionals, the libxc library. It fixes some numerical problems with existing implementations, as outlined in a scientific paper recently submitted for publication. Density functionals are components of electronic structure simulations, which model properties of electrons inside molecules and crystals.
U10 : Trusted Truck(R) II (phase B).
DOT National Transportation Integrated Search
2009-01-01
Phase B of the Trusted Truck II project built on the system developed in Phase A (or Year 1). For the implementation portion of the project, systems were added to the trailer to provide additional diagnostic trailer data that can be sent to the TTM...
Field testing of hand-held infrared thermography, phase II TPF-5(247) : final report.
DOT National Transportation Integrated Search
2016-05-01
This report is the second of two volumes that document results from the pooled fund study TPF-5(247), Development of Handheld Infrared Thermography, Phase II. The interim report (volume I) studied the implementation of handheld thermography by p...
TREC Initiative with Cheshire II.
ERIC Educational Resources Information Center
Larson, Ray R.
2001-01-01
Describes the University of California at Berkeley's participation in the TREC (Text Retrieval Conference) interactive track experiments. Highlights include results of searches on two systems, Cheshire II and ZPRISE; system design goals and implementation; precision and recall results; search questions by topic and system; and results of…
Mechanical testing of bones: the positive synergy of finite-element models and in vitro experiments.
Cristofolini, Luca; Schileo, Enrico; Juszczyk, Mateusz; Taddei, Fulvia; Martelli, Saulo; Viceconti, Marco
2010-06-13
Bone biomechanics have been extensively investigated in the past both with in vitro experiments and numerical models. In most cases either approach is chosen, without exploiting synergies. Both experiments and numerical models suffer from limitations relative to their accuracy and their respective fields of application. In vitro experiments can improve numerical models by: (i) preliminarily identifying the most relevant failure scenarios; (ii) improving the model identification with experimentally measured material properties; (iii) improving the model identification with accurately measured actual boundary conditions; and (iv) providing quantitative validation based on mechanical properties (strain, displacements) directly measured from physical specimens being tested in parallel with the modelling activity. Likewise, numerical models can improve in vitro experiments by: (i) identifying the most relevant loading configurations among a number of motor tasks that cannot be replicated in vitro; (ii) identifying acceptable simplifications for the in vitro simulation; (iii) optimizing the use of transducers to minimize errors and provide measurements at the most relevant locations; and (iv) exploring a variety of different conditions (material properties, interface, etc.) that would require enormous experimental effort. By reporting an example of successful investigation of the femur, we show how a combination of numerical modelling and controlled experiments within the same research team can be designed to create a virtuous circle where models are used to improve experiments, experiments are used to improve models and their combination synergistically provides more detailed and more reliable results than can be achieved with either approach singularly.
NASA Technical Reports Server (NTRS)
Duque, Earl P. N.; Johnson, Wayne; vanDam, C. P.; Chao, David D.; Cortes, Regina; Yee, Karen
1999-01-01
Accurate, reliable and robust numerical predictions of wind turbine rotor power remain a challenge to the wind energy industry. The literature reports various methods that compare predictions to experiments. The methods vary from Blade Element Momentum Theory (BEM) and Vortex Lattice (VL) methods to variants of Reynolds-averaged Navier-Stokes (RaNS). The BEM and VL methods consistently show discrepancies in predicting rotor power at higher wind speeds, mainly due to inadequacies of inboard stall and stall delay models. The RaNS methodologies show promise in predicting blade stall. However, inaccurate rotor vortex wake convection, boundary layer turbulence modeling and grid resolution have limited their accuracy. In addition, the inherently unsteady stalled flow conditions become computationally expensive for even the best endowed research labs. Although numerical power predictions have been compared to experiment, the availability of good wind turbine data sufficient for code validation is limited. This paper makes use of experimental data that has been extracted from the IEA Annex XIV download site for the NREL Combined Experiment phase II and phase IV rotor. In addition, the comparisons will show data that has been further reduced into steady wind and zero yaw conditions suitable for comparisons to "steady wind" rotor power predictions. In summary, the paper will present and discuss the capabilities and limitations of the three numerical methods and make available a database of experimental data suitable to help other numerical methods practitioners validate their own work.
NASA Astrophysics Data System (ADS)
Bravo, Agustín; Barham, Richard; Ruiz, Mariano; López, Juan Manuel; De Arcas, Guillermo; Alonso, Jesus
2012-12-01
In part I, the feasibility of using three-dimensional (3D) finite elements (FEs) to model the acoustic behaviour of the IEC 60318-1 artificial ear was studied and the numerical approach compared with classical lumped elements modelling. It was shown that by using a more complex acoustic model that took account of thermo-viscous effects, geometric shapes and dimensions, it was possible to develop a realistic model. This model then had clear advantages in comparison with the models based on equivalent circuits using lumped parameters. In fact results from FE modelling produce a better understanding about the physical phenomena produced inside ear simulator couplers, facilitating spatial and temporal visualization of the sound fields produced. The objective of this study (part II) is to extend the investigation by validating the numerical calculations against measurements on an ear simulator conforming to IEC 60318-1. For this purpose, an appropriate commercially available device is taken and a complete 3D FE model developed for it. The numerical model is based on key dimensional data obtained with a non-destructive x-ray inspection technique. Measurements of the acoustic transfer impedance have been carried out on the same device at a national measurement institute using the method embodied in IEC 60318-1. Having accounted for the actual device dimensions, the thermo-viscous effects inside narrow slots and holes and environmental conditions, the results of the numerical modelling were found to be in good agreement with the measured values.
A generic multi-hazard and multi-risk framework and its application illustrated in a virtual city
NASA Astrophysics Data System (ADS)
Mignan, Arnaud; Euchner, Fabian; Wiemer, Stefan
2013-04-01
We present a generic framework to implement hazard correlations in multi-risk assessment strategies. We consider hazard interactions (process I), time-dependent vulnerability (process II) and time-dependent exposure (process III). Our approach is based on the Monte Carlo method to simulate a complex system, which is defined from assets exposed to a hazardous region. We generate 1-year time series, sampling from a stochastic set of events. Each time series corresponds to one risk scenario and the analysis of multiple time series allows for the probabilistic assessment of losses and for the recognition of more or less probable risk paths. Each sampled event is associated with a time of occurrence, a damage footprint and a loss footprint. The occurrence of an event depends on its rate, which is conditional on the occurrence of past events (process I, concept of correlation matrix). Damage depends on the hazard intensity and on the vulnerability of the asset, which is conditional on previous damage on that asset (process II). Losses are the product of damage and exposure value, this value being the original exposure minus previous losses (process III, no reconstruction considered). The Monte Carlo method allows for a straightforward implementation of uncertainties and for implementation of numerous interactions, which is otherwise challenging in an analytical multi-risk approach. We apply our framework to a synthetic data set, defined by a virtual city within a virtual region. This approach gives the opportunity to perform multi-risk analyses in a controlled environment while not requiring real data, which may be difficult to access or simply unavailable to the public. Based on the heuristic approach, we define a 100 by 100 km region where earthquakes, volcanic eruptions, fluvial floods, hurricanes and coastal floods can occur. All hazards are harmonized to a common format. We define a 20 by 20 km city, composed of 50,000 identical buildings with a fixed economic value. Vulnerability curves are defined in terms of mean damage ratio as a function of hazard intensity. All data are based on simple equations found in the literature and on other simplifications. We show the impact of earthquake-earthquake interaction and hurricane-storm surge coupling, as well as of time-dependent vulnerability and exposure, on aggregated loss curves. One main result is the emergence of low-probability, high-consequence (extreme) events when correlations are implemented. While the concept of a virtual city can suggest the theoretical benefits of multi-risk assessment for decision support, identifying its real-world practicality will require the study of real test sites.
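A heavily simplified sketch of the Monte Carlo machinery described above is given below for two interacting hazards: an earthquake boosts the storm-surge rate (process I), damaged buildings become more vulnerable (process II), and losses are drawn from the remaining, non-reconstructed exposure (process III). All rates, intensities and vulnerability numbers are invented for illustration and are unrelated to the virtual-city data set of the study.

```python
# Toy Monte Carlo multi-risk simulation with the three processes above:
# (I) an event raises the occurrence rate of a second hazard,
# (II) vulnerability increases on already-damaged assets, and
# (III) losses are taken from the remaining (non-reconstructed) exposure.
import numpy as np

rng = np.random.default_rng(0)
N_RUNS, EXPOSURE0 = 20_000, 100.0           # runs and initial asset value (a.u.)
BASE_RATE = {"quake": 0.10, "surge": 0.05}  # events per year (invented)
TRIGGER_BOOST = 4.0                         # surge rate multiplier after a quake (process I)

def vulnerability(intensity, prior_damage):
    mdr = min(1.0, 0.3 * intensity)                  # mean damage ratio (invented curve)
    return min(1.0, mdr * (1.0 + prior_damage))      # weakened asset (process II)

losses = np.zeros(N_RUNS)
for run in range(N_RUNS):
    exposure, damage_state, quake_hit = EXPOSURE0, 0.0, False
    for hazard in ("quake", "surge"):
        rate = BASE_RATE[hazard] * (TRIGGER_BOOST if hazard == "surge" and quake_hit else 1.0)
        for _ in range(rng.poisson(rate)):           # events in the 1-year window
            intensity = rng.exponential(1.0)
            d = vulnerability(intensity, damage_state)
            losses[run] += d * exposure              # process III: deplete exposure
            exposure *= (1.0 - d)
            damage_state = min(1.0, damage_state + d)
            if hazard == "quake":
                quake_hit = True

print("mean annual loss      :", losses.mean())
print("99.5% percentile loss :", np.percentile(losses, 99.5))
print("fraction of runs with any loss:", np.count_nonzero(losses) / N_RUNS)
```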
Ahn, Jeong H.; Rechsteiner, Andreas; Strome, Susan; Kelly, William G.
2016-01-01
The elongation phase of transcription by RNA Polymerase II (Pol II) involves numerous events that are tightly coordinated, including RNA processing, histone modification, and chromatin remodeling. RNA splicing factors are associated with elongating Pol II, and the interdependent coupling of splicing and elongation has been documented in several systems. Here we identify a conserved, multi-domain cyclophilin family member, SIG-7, as an essential factor for both normal transcription elongation and co-transcriptional splicing. In embryos depleted for SIG-7, RNA levels for over a thousand zygotically expressed genes are substantially reduced, Pol II becomes significantly reduced at the 3’ end of genes, marks of transcription elongation are reduced, and unspliced mRNAs accumulate. Our findings suggest that SIG-7 plays a central role in both Pol II elongation and co-transcriptional splicing and may provide an important link for their coordination and regulation. PMID:27541139
Oxidative removal of Mn(II) from solution catalysed by the γ-FeOOH (lepidocrocite) surface
NASA Astrophysics Data System (ADS)
Sung, Windsor; Morgan, James J.
1981-12-01
A laboratory study was undertaken to ascertain the role of surface catalysis in Mn(II) oxidative removal. γ-FeOOH, a ferric oxyhydroxide formed by O2 oxidation of ferrous iron in solution, was studied in the following ways: surface charge characteristics by acid base titration, adsorption of Mn(II) and surface oxidation of Mn(II). A rate law was formulated to account for the effects of pH and the amount of surface on the surface oxidation rate of Mn(II). The presence of milli-molar levels of γ-FeOOH was shown to reduce significantly the half-life of Mn(II) in 0.7 M NaCl from hundreds of hours to hours. The numerical values of the surface rate constants for the γ-FeOOH and that reported for colloidal MnO2 are comparable in order of magnitude.
Chung, Sun Ju; Asgharnejad, Mahnaz; Bauer, Lars; Ramirez, Francisco; Jeon, Beomseok
2016-08-01
To evaluate the dopamine receptor agonist rotigotine for improving depressive symptoms in patients with Parkinson's disease (PD). Patients were randomized 1:1 to rotigotine or placebo, titrated for ≤7 weeks, and maintained at the optimal/maximum dose for 8 weeks. Primary efficacy variable: 17-item Hamilton Depression Rating Scale (HAM-D 17) total score change from baseline to end-of-maintenance. Secondary variables: changes in Beck Depression Inventory-II, Unified Parkinson's Disease Rating Scale (UPDRS) II (activities of daily living [ADL]) and III (motor) subscores, UPDRS II+III total, patient-rated Apathy Scale (AS), and Snaith-Hamilton Pleasure Scale. Of 380 patients randomized, 149/184 (81.0%) rotigotine-treated and 164/196 (83.7%) placebo-treated patients completed the study. Mean (±SD) age was 65.2 (±8.5) years, time since PD diagnosis was 2.74 (±3.08) years, and 42.6% were male. The treatment difference (LS mean [95% CI]) in change from baseline HAM-D 17 was -1.12 (-2.56, 0.33; p = 0.1286). UPDRS II, III, II+III and AS scores improved numerically with rotigotine versus placebo. Common adverse events with higher incidence with rotigotine: nausea, application/instillation site reactions, vomiting, and pruritus. Forty-one (10.8%) patients discontinued owing to adverse events (25 rotigotine/16 placebo). No statistically significant improvement in depressive symptoms was observed with rotigotine versus placebo. ADL, motor function, and patient-rated apathy improved numerically. ClinicalTrials.gov: NCT01523301.
NASA Astrophysics Data System (ADS)
Blakely, Christopher D.
This dissertation thesis has three main goals: (1) To explore the anatomy of meshless collocation approximation methods that have recently gained attention in the numerical analysis community; (2) Numerically demonstrate why the meshless collocation method should clearly become an attractive alternative to standard finite-element methods due to the simplicity of its implementation and its high-order convergence properties; (3) Propose a meshless collocation method for large scale computational geophysical fluid dynamics models. We provide numerical verification and validation of the meshless collocation scheme applied to the rotational shallow-water equations on the sphere and demonstrate computationally that the proposed model can compete with existing high performance methods for approximating the shallow-water equations such as the SEAM (spectral-element atmospheric model) developed at NCAR. A detailed analysis of the parallel implementation of the model, along with the introduction of parallel algorithmic routines for the high-performance simulation of the model will be given. We analyze the programming and computational aspects of the model using Fortran 90 and the message passing interface (mpi) library along with software and hardware specifications and performance tests. Details from many aspects of the implementation in regards to performance, optimization, and stabilization will be given. In order to verify the mathematical correctness of the algorithms presented and to validate the performance of the meshless collocation shallow-water model, we conclude the thesis with numerical experiments on some standardized test cases for the shallow-water equations on the sphere using the proposed method.
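A minimal flavour of meshless collocation, reduced to one dimension with Gaussian radial basis functions rather than the rotational shallow-water system on the sphere treated in the dissertation, is sketched below; the node count and shape parameter are arbitrary choices.

```python
# 1-D meshless RBF collocation: expand f in Gaussian RBFs centred at the
# nodes, solve the collocation system, then differentiate the expansion
# analytically and compare with the exact derivative.
import numpy as np

def phi(r, eps):            # Gaussian RBF
    return np.exp(-(eps * r) ** 2)

def dphi_dx(x, c, eps):     # derivative of phi(|x - c|) with respect to x
    return -2.0 * eps ** 2 * (x - c) * np.exp(-(eps * (x - c)) ** 2)

nodes = np.linspace(0.0, 1.0, 25)          # scattered in general; uniform here
eps = 5.0                                   # shape parameter (arbitrary)
f = np.sin(2.0 * np.pi * nodes)

A = phi(np.abs(nodes[:, None] - nodes[None, :]), eps)   # collocation matrix
coeffs = np.linalg.solve(A, f)                           # interpolation conditions

x_eval = np.linspace(0.05, 0.95, 200)
D = dphi_dx(x_eval[:, None], nodes[None, :], eps)
df_num = D @ coeffs
df_exact = 2.0 * np.pi * np.cos(2.0 * np.pi * x_eval)

print("max derivative error:", np.max(np.abs(df_num - df_exact)))
```

The same pattern, building a collocation matrix from the basis functions, solving for the coefficients, and then applying analytic derivatives of the basis, carries over to higher dimensions without any mesh connectivity, which is the main attraction of meshless methods.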
A spline-based approach for computing spatial impulse responses.
Ellis, Michael A; Guenther, Drake; Walker, William F
2007-05-01
Computer simulations are an essential tool for the design of phased-array ultrasonic imaging systems. FIELD II, which determines the two-way temporal response of a transducer at a point in space, is the current de facto standard for ultrasound simulation tools. However, the need often arises to obtain two-way spatial responses at a single point in time, a set of dimensions for which FIELD II is not well optimized. This paper describes an analytical approach for computing the two-way, far-field, spatial impulse response from rectangular transducer elements under arbitrary excitation. The described approach determines the response as the sum of polynomial functions, making computational implementation quite straightforward. The proposed algorithm, named DELFI, was implemented as a C routine under Matlab and results were compared to those obtained under similar conditions from the well-established FIELD II program. Under the specific conditions tested here, the proposed algorithm was approximately 142 times faster than FIELD II for computing spatial sensitivity functions with similar amounts of error. For temporal sensitivity functions with similar amounts of error, the proposed algorithm was about 1.7 times slower than FIELD II using rectangular elements and 19.2 times faster than FIELD II using triangular elements. DELFI is shown to be an attractive complement to FIELD II, especially when spatial responses are needed at a specific point in time.
Macías-Díaz, J E; Macías, Siegfried; Medina-Ramírez, I E
2013-12-01
In this manuscript, we present a computational model to approximate the solutions of a partial differential equation which describes the growth dynamics of microbial films. The numerical technique reported in this work is an explicit, nonlinear finite-difference methodology which is computationally implemented using Newton's method. Our scheme is compared numerically against an implicit, linear finite-difference discretization of the same partial differential equation, whose computer coding requires an implementation of the stabilized bi-conjugate gradient method. Our numerical results evince that the nonlinear approach results in a more efficient approximation to the solutions of the biofilm model considered, and demands less computer memory. Moreover, the positivity of initial profiles is preserved in the practice by the nonlinear scheme proposed. Copyright © 2013 Elsevier Ltd. All rights reserved.
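To illustrate the role Newton's method plays when a nonlinear finite-difference discretization is solved, the sketch below takes a backward-Euler step of a generic degenerate diffusion equation, u_t = (u² u_x)_x, and solves the resulting nonlinear algebraic system with Newton iterations using a finite-difference Jacobian. The equation, boundary conditions and parameters are choices made for this sketch and are not the biofilm scheme of the paper (which the authors describe as explicit).

```python
# Backward-Euler step of the degenerate diffusion equation u_t = (u^2 u_x)_x
# on [0, 1] with zero-flux boundaries, solved with Newton's method; the
# Jacobian is built column-by-column by finite differences for simplicity.
import numpy as np

n, dx, dt = 40, 1.0 / 40, 1e-3

def residual(u, u_old):
    up = np.concatenate(([u[0]], u, [u[-1]]))            # zero-flux ghost cells
    d = 0.5 * (up[1:] ** 2 + up[:-1] ** 2)                # face diffusivities u^2
    flux = d * (up[1:] - up[:-1]) / dx
    return (u - u_old) / dt - (flux[1:] - flux[:-1]) / dx

def newton_step(u_old, tol=1e-10, max_iter=20):
    u = u_old.copy()
    for _ in range(max_iter):
        F = residual(u, u_old)
        if np.linalg.norm(F, np.inf) < tol:
            break
        J = np.empty((n, n))
        h = 1e-7
        for j in range(n):                               # numerical Jacobian
            du = np.zeros(n)
            du[j] = h
            J[:, j] = (residual(u + du, u_old) - F) / h
        u -= np.linalg.solve(J, F)                       # Newton update
    return u

x = dx * (np.arange(n) + 0.5)
u0 = np.where(np.abs(x - 0.5) < 0.15, 1.0, 0.01)         # initial biofilm "bump"
u = u0.copy()

for _ in range(50):
    u = newton_step(u)

print("mass change over the run:", abs(u.sum() - u0.sum()) * dx)
```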
Tempest - Efficient Computation of Atmospheric Flows Using High-Order Local Discretization Methods
NASA Astrophysics Data System (ADS)
Ullrich, P. A.; Guerra, J. E.
2014-12-01
The Tempest Framework composes several compact numerical methods to easily facilitate intercomparison of atmospheric flow calculations on the sphere and in rectangular domains. This framework includes the implementations of Spectral Elements, Discontinuous Galerkin, Flux Reconstruction, and Hybrid Finite Element methods with the goal of achieving optimal accuracy in the solution of atmospheric problems. Several advantages of this approach are discussed such as: improved pressure gradient calculation, numerical stability by vertical/horizontal splitting, arbitrary order of accuracy, etc. The local numerical discretization allows for high performance parallel computation and efficient inclusion of parameterizations. These techniques are used in conjunction with a non-conformal, locally refined, cubed-sphere grid for global simulations and standard Cartesian grids for simulations at the mesoscale. A complete implementation of the methods described is demonstrated in a non-hydrostatic setting.
Numerical implementation of isolated horizon boundary conditions
NASA Astrophysics Data System (ADS)
Jaramillo, José Luis; Ansorg, Marcus; Limousin, François
2007-01-01
We study the numerical implementation of a set of boundary conditions derived from the isolated horizon formalism, and which characterize a black hole whose horizon is in quasiequilibrium. More precisely, we enforce these geometrical prescriptions as inner boundary conditions on an excised sphere, in the numerical resolution of the conformal thin sandwich equations. As main results, we first establish the consistency of including in the set of boundary conditions a constant surface gravity prescription, interpretable as a lapse boundary condition, and second we assess how the prescriptions presented recently by Dain et al. for guaranteeing the well-posedness of the conformal transverse traceless equations with quasiequilibrium horizon conditions extend to the conformal thin sandwich elliptic system. As a consequence of the latter analysis, we discuss the freedom of prescribing the expansion associated with the ingoing null normal at the horizon.
Human factors for capacity building: lessons learned from the OpenMRS implementers network.
Seebregts, C J; Mamlin, B W; Biondich, P G; Fraser, H S F; Wolfe, B A; Jazayeri, D; Miranda, J; Blaya, J; Sinha, C; Bailey, C T; Kanter, A S
2010-01-01
The overall objective of this project was to investigate ways to strengthen the OpenMRS community by (i) developing capacity and implementing a network focusing specifically on the needs of OpenMRS implementers, (ii) strengthening community-driven aspects of OpenMRS and providing a dedicated forum for implementation-specific issues, and; (iii) providing regional support for OpenMRS implementations as well as mentorship and training. The methods used included (i) face-to-face networking using meetings and workshops; (ii) online collaboration tools, peer support and mentorship programmes; (iii) capacity and community development programmes, and; (iv) community outreach programmes. The community-driven approach, combined with a few simple interventions, has been a key factor in the growth and success of the OpenMRS Implementers Network. It has contributed to implementations in at least twenty-three different countries using basic online tools; and provided mentorship and peer support through an annual meeting, workshops and an internship program. The OpenMRS Implementers Network has formed collaborations with several other open source networks and is evolving regional OpenMRS Centres of Excellence to provide localized support for OpenMRS development and implementation. These initiatives are increasing the range of functionality and sustainability of open source software in the health domain, resulting in improved adoption and enterprise-readiness. Social organization and capacity development activities are important in growing a successful community-driven open source software model.
Fuzzy Variable Speed Limit Device Modification and Testing - Phase II
DOT National Transportation Integrated Search
2001-07-01
In a previous project, Northern Arizona University (NAU) and the Arizona Department of Transportation (ADOT) designed and implemented the prototype of a variable speed limit (VSL) system for rural highways. The VSL system implements a real-time fuzzy...
NASA Astrophysics Data System (ADS)
Park, George Ilhwan; Moin, Parviz
2016-01-01
This paper focuses on numerical and practical aspects associated with a parallel implementation of a two-layer zonal wall model for large-eddy simulation (LES) of compressible wall-bounded turbulent flows on unstructured meshes. A zonal wall model based on the solution of unsteady three-dimensional Reynolds-averaged Navier-Stokes (RANS) equations on a separate near-wall grid is implemented in an unstructured, cell-centered finite-volume LES solver. The main challenge in its implementation is to couple two parallel, unstructured flow solvers for efficient boundary data communication and simultaneous time integrations. A coupling strategy with good load balancing and low processor underutilization is identified. Face mapping and interpolation procedures at the coupling interface are explained in detail. The method of manufactured solution is used for verifying the correct implementation of solver coupling, and parallel performance of the combined wall-modeled LES (WMLES) solver is investigated. The method has successfully been applied to several attached and separated flows, including a transitional flow over a flat plate and a separated flow over an airfoil at an angle of attack.
NASA Technical Reports Server (NTRS)
Joshi, R. P.; Deshpande, M. D. (Technical Monitor)
2003-01-01
A study into the problem of determining electromagnetic solutions at high frequencies for problems involving complex geometries, large sizes and multiple sources (e.g. antennas) has been initiated. Typical applications include the behavior of antennas (and radiators) installed on complex conducting structures (e.g., ships, aircraft), where the strong interactions between the antennas, the radiation patterns, and electromagnetic signals are of great interest for electromagnetic compatibility control. This includes the overall performance evaluation and control of all on-board radiating systems, electromagnetic interference, and personnel radiation hazards. Electromagnetic computational capability exists at NASA LaRC, and many of the codes developed are based on the Moment Method (MM). However, the MM is computationally intensive, and this places a limit on the size of objects and structures that can be modeled. Here, two approaches are proposed: (i) a current-based hybrid scheme that combines the MM with Physical Optics (PO), and (ii) an Alternating Direction Implicit-Finite Difference Time Domain (ADI-FDTD) method. The essence of a hybrid technique is to split the overall scattering surface(s) into two regions: (a) a MM zone (MMZ), which can be used over any part of the given geometry but is most essential over irregular and "non-smooth" geometries, and (b) a PO sub-region (POSR). Currents induced on the scattering and reflecting surfaces can then be computed in two ways depending on whether the region belongs to the MMZ or is part of the POSR. For the MMZ, the current calculations proceed in terms of basis functions with undetermined coefficients (as in the usual MM), and the answer is obtained by solving a system of linear equations. Over the POSR, conduction is obtained as a superposition of two contributions: (i) currents due to the incident magnetic field, and (ii) currents produced by mutual induction from conduction within the MMZ. This effectively reduces the size of the linear system from N to N - Npo, with N being the total number of segments for the entire surface and Npo the number of segments over the POSR. The scheme is appropriate for relatively large, flat surfaces and at high frequencies. The ADI-FDTD scheme provides for both transient and steady state analyses. The restrictive Courant-Friedrichs-Lewy (CFL) condition on the time step is removed, so large time steps can be chosen even when the spatial grids are small. This report includes the problem definition, a detailed discussion of both numerical techniques, and numerical implementations for simple surface geometries. Numerical solutions have been derived for a few simple situations.
NASA Technical Reports Server (NTRS)
Padovan, J.; Adams, M.; Fertis, J.; Zeid, I.; Lam, P.
1982-01-01
Finite element codes are used to model the rotor-bearing-stator structures common to the turbine industry. Engine dynamic simulation is enabled by developing strategies that make use of available finite element codes. The elements developed are benchmarked by incorporating them into a general purpose code (ADINA), and the numerical characteristics of finite element rotor-bearing-stator simulations are evaluated through the use of various types of explicit/implicit numerical integration operators. The overall numerical efficiency of the procedure is also improved.
Numerical simulation of the hydrodynamic instabilities of Richtmyer-Meshkov and Rayleigh-Taylor
NASA Astrophysics Data System (ADS)
Fortova, S. V.; Shepelev, V. V.; Troshkin, O. V.; Kozlov, S. A.
2017-09-01
The paper presents the results of numerical simulation of the development of the Richtmyer-Meshkov and Rayleigh-Taylor hydrodynamic instabilities encountered in experiments [1-3]. For the numerical solution, the TPS (Turbulence Problem Solver) software package is used, which implements a generalized approach to constructing computer programs for a wide range of hydrodynamics problems described by systems of equations of hyperbolic type. The numerical methods used are the method of large particles and a second-order ENO scheme with a Roe solver for the approximate solution of the Riemann problem.
Boundary particle method for Laplace transformed time fractional diffusion equations
NASA Astrophysics Data System (ADS)
Fu, Zhuo-Jia; Chen, Wen; Yang, Hai-Tian
2013-02-01
This paper develops a novel boundary meshless approach, the Laplace transformed boundary particle method (LTBPM), for numerical modeling of time fractional diffusion equations. It implements the Laplace transform technique to obtain the corresponding time-independent inhomogeneous equation in Laplace space and then employs a truly boundary-only meshless boundary particle method (BPM) to solve this Laplace-transformed problem. Unlike other boundary discretization methods, the BPM does not require any inner nodes, since the recursive composite multiple reciprocity technique (RC-MRM) is used to convert the inhomogeneous problem into a higher-order homogeneous problem. Finally, the Stehfest numerical inverse Laplace transform (NILT) is implemented to retrieve the numerical solutions of time fractional diffusion equations from the corresponding BPM solutions. In comparison with finite difference discretization, the LTBPM introduces the Laplace transform and the Stehfest NILT algorithm to deal with the time fractional derivative term, which evades costly convolution integral calculation in the time fractional derivative approximation and avoids the effect of the time step on numerical accuracy and stability. Consequently, it can effectively simulate long time-history fractional diffusion systems. Error analysis and numerical experiments demonstrate that the present LTBPM is highly accurate and computationally efficient for 2D and 3D time fractional diffusion equations.
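As a point of reference for the inversion step described above, the sketch below implements the Gaver-Stehfest inversion formula in Python and checks it against a transform with a known inverse; the number of terms N and the test function F(s) = 1/(s + 1) are assumptions for illustration, and the BPM spatial solver itself is not reproduced.

import math

# Hedged sketch: Gaver-Stehfest numerical inverse Laplace transform (NILT).
def stehfest_coefficients(N):
    """Stehfest weights V_i for an even number of terms N."""
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * math.factorial(2 * k) /
                  (math.factorial(N // 2 - k) * math.factorial(k) *
                   math.factorial(k - 1) * math.factorial(i - k) *
                   math.factorial(2 * k - i)))
        V.append((-1) ** (i + N // 2) * s)
    return V

def stehfest_invert(F, t, N=14):
    """Approximate f(t) from its Laplace transform F(s)."""
    V = stehfest_coefficients(N)
    a = math.log(2.0) / t
    return a * sum(V[i - 1] * F(i * a) for i in range(1, N + 1))

F = lambda s: 1.0 / (s + 1.0)          # Laplace transform of exp(-t), used as a test case
for t in (0.5, 1.0, 2.0):
    print(t, stehfest_invert(F, t), math.exp(-t))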
Time-dependent spectral renormalization method
NASA Astrophysics Data System (ADS)
Cole, Justin T.; Musslimani, Ziad H.
2017-11-01
The spectral renormalization method was introduced by Ablowitz and Musslimani (2005) as an effective way to numerically compute (time-independent) bound states for certain nonlinear boundary value problems. In this paper, we extend those ideas to the time domain and introduce a time-dependent spectral renormalization method as a numerical means to simulate linear and nonlinear evolution equations. The essence of the method is to convert the underlying evolution equation from its partial or ordinary differential form (using Duhamel's principle) into an integral equation. The solution sought is then viewed as a fixed point in both space and time. The resulting integral equation is then numerically solved using a simple renormalized fixed-point iteration method. Convergence is achieved by introducing a time-dependent renormalization factor which is numerically computed from the physical properties of the governing evolution equation. The proposed method has the ability to incorporate physics into the simulations in the form of conservation laws or dissipation rates. This novel scheme is implemented on benchmark evolution equations: the classical nonlinear Schrödinger (NLS), the integrable PT-symmetric nonlocal NLS, and the viscous Burgers' equations, each a prototypical example of a conservative or dissipative dynamical system. Numerical implementation and algorithm performance are also discussed.
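For orientation, the sketch below implements the original time-independent spectral renormalization of Ablowitz and Musslimani, which the abstract extends to the time domain: the NLS bound-state equation is iterated in Fourier space with a renormalization factor obtained from an integral (solvability) condition. The grid, the propagation constant mu and the Gaussian seed are assumptions; the time-dependent extension itself is not reproduced.

import numpy as np

# Time-independent spectral renormalization for the 1D focusing NLS bound state
#   -mu*u + u_xx + |u|^2 u = 0.  Writing u = lam*w, the fixed point
#   (mu + k^2) w_hat = lam^2 F[|w|^2 w]  is iterated, with lam^2 set each step
# by the integral (solvability) condition.
N, L, mu = 512, 40.0, 1.0                      # assumed grid and propagation constant
x = (np.arange(N) - N // 2) * (L / N)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
w = np.exp(-x**2)                              # assumed initial guess

for _ in range(300):
    wh = np.fft.fft(w)
    Nh = np.fft.fft(np.abs(w)**2 * w)          # Fourier transform of the nonlinearity
    lam2 = (np.sum((mu + k**2) * np.abs(wh)**2) /
            np.sum(np.conj(wh) * Nh).real)     # renormalization factor
    w_new = np.fft.ifft(lam2 * Nh / (mu + k**2)).real
    if np.max(np.abs(w_new - w)) < 1e-13:
        w = w_new
        break
    w = w_new

u = np.sqrt(lam2) * w                          # converged bound state
exact = np.sqrt(2.0 * mu) / np.cosh(np.sqrt(mu) * x)
print("max |u - exact sech soliton| =", np.max(np.abs(u - exact)))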
NASA Astrophysics Data System (ADS)
Ohsaku, Tadafumi
2002-08-01
We solve numerically various types of the gap equations developed in the relativistic BCS and generalized BCS framework presented in part I of this paper. We apply the method not only to the usual solid metal but also to other physical systems by using the homogeneous fermion gas approximation. We examine the relativistic effects on the thermal properties and the Meissner effect of the BCS and generalized BCS superconductivity in various cases.
Multi-objective optimal design of sandwich panels using a genetic algorithm
NASA Astrophysics Data System (ADS)
Xu, Xiaomei; Jiang, Yiping; Pueh Lee, Heow
2017-10-01
In this study, an optimization problem concerning sandwich panels is investigated by simultaneously considering the two objectives of minimizing the panel mass and maximizing the sound insulation performance. First of all, the acoustic model of sandwich panels is discussed, which provides a foundation to model the acoustic objective function. Then the optimization problem is formulated as a bi-objective programming model, and a solution algorithm based on the non-dominated sorting genetic algorithm II (NSGA-II) is provided to solve the proposed model. Finally, taking an example of a sandwich panel that is expected to be used as an automotive roof panel, numerical experiments are carried out to verify the effectiveness of the proposed model and solution algorithm. Numerical results demonstrate in detail how the core material, geometric constraints and mechanical constraints impact the optimal designs of sandwich panels.
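To illustrate the bi-objective machinery the abstract relies on, the sketch below performs the non-dominated (Pareto) sorting step at the heart of NSGA-II on a toy population; the design variables, the mass expression and the "insulation" proxy are placeholders, not the acoustic model or constraints of the paper.

import numpy as np

# Non-dominated (Pareto) sorting, the core selection step of NSGA-II, on a toy
# sandwich-panel population.  Objectives: areal mass (minimize) and a proxy
# "insulation" score (maximize, recast as minimizing its negative).
rng = np.random.default_rng(0)
t_core = rng.uniform(0.005, 0.05, 200)              # core thickness [m] (assumed bounds)
t_face = rng.uniform(0.0005, 0.005, 200)            # face-sheet thickness [m]

mass = 30.0 * t_core + 2.0 * 2700.0 * t_face        # f1: areal mass [kg/m^2]
insulation = 20.0 * np.log10(1.0 + 5.0e3 * t_face + 2.0e2 * t_core)   # placeholder score
F = np.column_stack([mass, -insulation])            # two minimization objectives

def pareto_front(F):
    # keep index i if no other point dominates it (<= in all objectives, < in at least one)
    keep = np.ones(F.shape[0], dtype=bool)
    for i in range(F.shape[0]):
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        keep[i] = not dominates_i.any()
    return np.flatnonzero(keep)

front = pareto_front(F)
print(f"{front.size} non-dominated designs out of {F.shape[0]}")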
NASA Astrophysics Data System (ADS)
Romero-Salazar, C.
2016-04-01
A critical-state model is postulated that incorporates, for the first time, the structural anisotropy and flux-line cutting effect in a type-II superconductor. The model is constructed starting from the theoretical scheme of Romero-Salazar and Pérez-Rodríguez to study the anisotropy induced by flux cutting. Here, numerical calculations of the magnetic induction and static magnetization are presented for samples under an alternating magnetic field, orthogonal to a static dc-bias one. The interplay of the two anisotropies is analysed by comparing the numerical results with available experimental data for an yttrium barium copper oxide (YBCO) plate, and a vanadium-titanium (VTi) strip, subjected to a slowly oscillating field H_y (H_z) in the presence of a static field H_z (H_y).
NASA Astrophysics Data System (ADS)
Jain, Sonal
2018-01-01
In this paper, we aim to use the alternative numerical scheme given by Gnitchogna and Atangana for solving partial differential equations with integer and non-integer differential operators. We apply this method to a fractional diffusion model and to fractional Buckmaster models with non-local fading memory. The method yields a powerful and easy-to-implement numerical algorithm for fractional order derivatives. We also present in detail the stability analysis of the numerical method for solving the diffusion equation. The analysis shows that the method is very stable and converges quickly to the exact solution; finally, some numerical simulations are presented.
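Since the Gnitchogna-Atangana scheme is not spelled out in the abstract, the sketch below instead uses the standard implicit L1 finite-difference discretization of the Caputo derivative as a reference scheme for the time-fractional diffusion equation D_t^alpha u = K u_xx; the fractional order, diffusivity, grid and initial condition are assumptions, and this is not the paper's algorithm.

import numpy as np
from math import gamma

# Implicit L1 scheme for the Caputo time-fractional diffusion equation on [0,1]
# with homogeneous Dirichlet boundary conditions (a common reference scheme).
alpha, K = 0.7, 1.0              # assumed fractional order and diffusivity
nx, nt, T = 64, 200, 0.5
x = np.linspace(0.0, 1.0, nx + 1)
dx, dt = x[1] - x[0], T / nt
u = np.sin(np.pi * x)            # assumed initial condition
hist = [u.copy()]

c0 = dt**(-alpha) / gamma(2.0 - alpha)                    # L1 prefactor
b = np.arange(1, nt + 1)**(1 - alpha) - np.arange(0, nt)**(1 - alpha)   # L1 weights b_j

# (c0 I - K D2) u^n = c0 u^{n-1} - c0 * sum_{j=1}^{n-1} b_j (u^{n-j} - u^{n-j-1})
D2 = (np.diag(-2.0 * np.ones(nx - 1)) +
      np.diag(np.ones(nx - 2), 1) + np.diag(np.ones(nx - 2), -1)) / dx**2
A = c0 * np.eye(nx - 1) - K * D2

for n in range(1, nt + 1):
    mem = np.zeros(nx - 1)
    for j in range(1, n):        # memory (history) term of the Caputo derivative
        mem += b[j] * (hist[n - j][1:-1] - hist[n - j - 1][1:-1])
    rhs = c0 * hist[n - 1][1:-1] - c0 * mem
    u = np.zeros(nx + 1)
    u[1:-1] = np.linalg.solve(A, rhs)
    hist.append(u.copy())

print("u(x=0.5, T) =", u[nx // 2])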
A discontinuous Galerkin method for poroelastic wave propagation: The two-dimensional case
NASA Astrophysics Data System (ADS)
Dudley Ward, N. F.; Lähivaara, T.; Eveson, S.
2017-12-01
In this paper, we consider a high-order discontinuous Galerkin (DG) method for modelling wave propagation in coupled poroelastic-elastic media. The upwind numerical flux is derived as an exact solution for the Riemann problem including the poroelastic-elastic interface. Attenuation mechanisms in both Biot's low- and high-frequency regimes are considered. The current implementation supports non-uniform basis orders which can be used to control the numerical accuracy element by element. In the numerical examples, we study the convergence properties of the proposed DG scheme and provide experiments where the numerical accuracy of the scheme under consideration is compared to analytic and other numerical solutions.
Department of Defense Index of Specifications and Standards. Part 1. Alphabetical Listing
1989-07-01
the Basic DODISS Part II. Part II, Numerical Listing, reflects all active documents in document-number sequence within document type. The alphabetic... (NPFC 106) 5801 Tabor Avenue, Philadelphia, PA 19120. "Use of the Index is mandatory on all military activities." This mandatory provision... Class, is also available as follows: Military Activities: Commanding Officer, Naval Publications and Forms Center (ATTN: NPODS), 5801 Tabor Avenue
Bifurcation analysis of parametrically excited bipolar disorder model
NASA Astrophysics Data System (ADS)
Nana, Laurent
2009-02-01
Bipolar II disorder is characterized by alternating hypomanic and major depressive episodes. We model the periodic mood variations of a bipolar II patient with a negatively damped harmonic oscillator. The medications administered to the patient are modeled via a forcing function that is capable of stabilizing the mood variations and of varying their amplitude. We analyze analytically, using a perturbation method, the amplitude and stability of limit cycles and check this analysis with numerical simulations.
NASA Astrophysics Data System (ADS)
Watanabe, Koichi; Kita, Takafumi; Arai, Masao
2006-08-01
We develop an alternative method to solve the Eilenberger equations numerically for the vortex-lattice states of type-II superconductors. Using it, we clarify the magnetic-field and impurity-concentration dependences of the magnetization, the entropy, the Pauli paramagnetism, and the mixing of higher Landau levels in the pair potential for two-dimensional s- and d
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-18
... Proclamation 8332 of December 29, 2008, implemented U.S. tariff commitments under the United States-Oman Free... States Implementing the United States-Oman Free Trade Agreement.'' Annex II to that publication included... to certain goods of Oman under the terms of general note 31 to the HTS, subchapter XVI of chapter 99...
Newtonian nudging for a Richards equation-based distributed hydrological model
NASA Astrophysics Data System (ADS)
Paniconi, Claudio; Marrocu, Marino; Putti, Mario; Verbunt, Mark
The objective of data assimilation is to provide physically consistent estimates of spatially distributed environmental variables. In this study a relatively simple data assimilation method has been implemented in a relatively complex hydrological model. The data assimilation technique is Newtonian relaxation or nudging, in which model variables are driven towards observations by a forcing term added to the model equations. The forcing term is proportional to the difference between simulation and observation (relaxation component) and contains four-dimensional weighting functions that can incorporate prior knowledge about the spatial and temporal variability and characteristic scales of the state variable(s) being assimilated. The numerical model couples a three-dimensional finite element Richards equation solver for variably saturated porous media and a finite difference diffusion wave approximation based on digital elevation data for surface water dynamics. We describe the implementation of the data assimilation algorithm for the coupled model and report on the numerical and hydrological performance of the resulting assimilation scheme. Nudging is shown to be successful in improving the hydrological simulation results, and it introduces little computational cost, in terms of CPU and other numerical aspects of the model's behavior, in some cases even improving numerical performance compared to model runs without nudging. We also examine the sensitivity of the model to nudging term parameters including the spatio-temporal influence coefficients in the weighting functions. Overall the nudging algorithm is quite flexible, for instance in dealing with concurrent observation datasets, gridded or scattered data, and different state variables, and the implementation presented here can be readily extended to any of these features not already incorporated. Moreover the nudging code and tests can serve as a basis for implementation of more sophisticated data assimilation techniques in a Richards equation-based hydrological model.
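To make the relaxation term concrete, the sketch below applies Newtonian nudging to a deliberately simple lumped linear-reservoir model rather than the coupled Richards/diffusion-wave model of the paper: a forcing term G*w(t)*(obs - model) pulls a biased model run towards sparse synthetic observations. The gain, weighting, bias and forcing are all assumptions.

import numpy as np

# Newtonian relaxation (nudging) on a toy lumped reservoir  dS/dt = P - k*S.
# The "model" runs use a biased rainfall input so the open-loop error persists
# while the nudged run is repeatedly corrected at observation times.
k, G, dt, T = 0.05, 0.2, 1.0, 400.0
t = np.arange(0.0, T, dt)
P_true = 2.0 * (np.sin(2.0 * np.pi * t / 100.0) > 0.7)   # synthetic rainfall pulses
P_model = 0.7 * P_true                                   # biased forcing seen by the model

S_true = np.zeros_like(t); S_true[0] = 30.0              # "truth" supplying observations
S_open = np.zeros_like(t); S_open[0] = 30.0              # open-loop run, biased forcing
S_nudg = np.zeros_like(t); S_nudg[0] = 30.0              # nudged run, biased forcing
obs_steps = set(range(0, t.size, 25))                    # sparse observation times

for n in range(t.size - 1):
    S_true[n+1] = S_true[n] + dt * (P_true[n] - k * S_true[n])
    S_open[n+1] = S_open[n] + dt * (P_model[n] - k * S_open[n])
    w = 1.0 if n in obs_steps else 0.0         # 0/1 time weight (stand-in for 4D weights)
    nudge = G * w * (S_true[n] - S_nudg[n])    # relaxation towards the observation
    S_nudg[n+1] = S_nudg[n] + dt * (P_model[n] - k * S_nudg[n] + nudge)

print("mean abs error, open loop :", np.mean(np.abs(S_open - S_true)))
print("mean abs error, nudged run:", np.mean(np.abs(S_nudg - S_true)))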
Newtonian Nudging For A Richards Equation-based Distributed Hydrological Model
NASA Astrophysics Data System (ADS)
Paniconi, C.; Marrocu, M.; Putti, M.; Verbunt, M.
In this study a relatively simple data assimilation method has been implemented in a relatively complex hydrological model. The data assimilation technique is Newtonian relaxation or nudging, in which model variables are driven towards observations by a forcing term added to the model equations. The forcing term is proportional to the difference between simulation and observation (relaxation component) and contains four-dimensional weighting functions that can incorporate prior knowledge about the spatial and temporal variability and characteristic scales of the state variable(s) being assimilated. The numerical model couples a three-dimensional finite element Richards equation solver for variably saturated porous media and a finite difference diffusion wave approximation based on digital elevation data for surface water dynamics. We describe the implementation of the data assimilation algorithm for the coupled model and report on the numerical and hydrological performance of the resulting assimilation scheme. Nudging is shown to be successful in improving the hydrological simulation results, and it introduces little computational cost, in terms of CPU and other numerical aspects of the model's behavior, in some cases even improving numerical performance compared to model runs without nudging. We also examine the sensitivity of the model to nudging term parameters including the spatio-temporal influence coefficients in the weighting functions. Overall the nudging algorithm is quite flexible, for instance in dealing with concurrent observation datasets, gridded or scattered data, and different state variables, and the implementation presented here can be readily extended to any features not already incorporated. Moreover the nudging code and tests can serve as a basis for implementation of more sophisticated data assimilation techniques in a Richards equation-based hydrological model.
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
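The numerical point above can be reproduced on a single nonlinear bucket: the sketch below compares a first-order, explicit, fixed-step Euler integration of dS/dt = P - a*S^b against an adaptive-step, error-controlled reference. The storage-discharge law, parameters and forcing are illustrative only, and the MCMC analysis of the paper is not reproduced.

import numpy as np
from scipy.integrate import solve_ivp

# One nonlinear "bucket": dS/dt = P - Q(S),  Q(S) = a * S**b  (assumed parameters).
a, b, P = 0.01, 2.0, 1.5

def rhs(t, S):
    return P - a * np.maximum(S, 0.0)**b

S0, T = [5.0], 50.0

# Adaptive-step, error-controlled reference solution
ref = solve_ivp(rhs, (0.0, T), S0, rtol=1e-10, atol=1e-12)
S_ref_T = ref.y[0, -1]

# First-order explicit Euler with progressively smaller fixed steps
for dt in (1.0, 0.1, 0.01):
    S = S0[0]
    for _ in range(int(T / dt)):
        S += dt * rhs(0.0, S)
    print(f"dt = {dt:5.2f}:  S(T) = {S:.6f}   error = {abs(S - S_ref_T):.2e}")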
Mansoori, Bahar; Erhard, Karen K; Sunshine, Jeffrey L
2012-02-01
The availability of the Picture Archiving and Communication System (PACS) has revolutionized the practice of radiology in the past two decades and has been shown to increase productivity in radiology and medicine. PACS implementation and integration may bring along numerous unexpected issues, particularly in a large-scale enterprise. To achieve a successful PACS implementation, identifying the critical success and failure factors is essential. This article provides an overview of the process of implementing and integrating PACS in a comprehensive health system comprising an academic core hospital and numerous community hospitals. Important issues are addressed, touching all stages from planning to operation and training. The impact of an enterprise-wide radiology information system and PACS at the academic medical center (four specialty hospitals), in six additional community hospitals, and in all associated outpatient clinics, as well as the implications for the productivity and efficiency of the entire enterprise, are presented. Copyright © 2012 AUR. Published by Elsevier Inc. All rights reserved.
Parallel implementation of geometrical shock dynamics for two dimensional converging shock waves
NASA Astrophysics Data System (ADS)
Qiu, Shi; Liu, Kuang; Eliasson, Veronica
2016-10-01
Geometrical shock dynamics (GSD) theory is an appealing method to predict the shock motion in the sense that it is more computationally efficient than solving the traditional Euler equations, especially for converging shock waves. However, to solve and optimize large scale configurations, the main bottleneck is the computational cost. Among the existing numerical GSD schemes, there is only one that has been implemented on parallel computers, with the purpose of analyzing detonation waves. To extend the computational advantage of the GSD theory to more general applications such as converging shock waves, a numerical implementation using a spatial decomposition method has been coupled with a front tracking approach on parallel computers. In addition, an efficient tridiagonal system solver for massively parallel computers has been applied to resolve the most expensive function in this implementation, resulting in an efficiency of 0.93 while using 32 HPCC cores. Moreover, symmetric boundary conditions have been developed to further reduce the computational cost, achieving a speedup of 19.26 for a 12-sided polygonal converging shock.
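The tridiagonal solves mentioned above have a simple serial building block; as a baseline sketch (the paper's massively parallel solver, typically a cyclic-reduction-type algorithm, is not reproduced), here is the O(n) Thomas algorithm checked against a dense solve. The random, diagonally dominant test system is an assumption.

import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main- and super-diagonals a, b, c
    and right-hand side d (serial Thomas algorithm, O(n))."""
    n = b.size
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):       # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# quick check against a dense solve on a small diagonally dominant system
n = 8
rng = np.random.default_rng(1)
a = rng.uniform(0.1, 1.0, n); a[0] = 0.0     # sub-diagonal (a[0] unused)
c = rng.uniform(0.1, 1.0, n); c[-1] = 0.0    # super-diagonal (c[-1] unused)
b = 4.0 + rng.uniform(0.0, 1.0, n)
d = rng.uniform(-1.0, 1.0, n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))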
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martínez-Sykora, Juan; Pontieu, Bart De; Hansteen, Viggo H.
2017-09-20
We investigate the effects of interactions between ions and neutrals on the chromosphere and overlying corona using 2.5D radiative MHD simulations with the Bifrost code. We have extended the code capabilities implementing ion-neutral interaction effects using the generalized Ohm's law, i.e., we include the Hall term and the ambipolar diffusion (Pedersen dissipation) in the induction equation. Our models span from the upper convection zone to the corona, with the photosphere, chromosphere, and transition region partially ionized. Our simulations reveal that the interactions between ionized particles and neutral particles have important consequences for the magnetothermodynamics of these modeled layers: (1) ambipolar diffusion increases the temperature in the chromosphere; (2) sporadically the horizontal magnetic field in the photosphere is diffused into the chromosphere, due to the large ambipolar diffusion; (3) ambipolar diffusion concentrates electrical currents, leading to more violent jets and reconnection processes, resulting in (3a) the formation of longer and faster spicules, (3b) heating of plasma during the spicule evolution, and (3c) decoupling of the plasma and magnetic field in spicules. Our results indicate that ambipolar diffusion is a critical ingredient for understanding the magnetothermodynamic properties in the chromosphere and transition region. The numerical simulations have been made publicly available, similar to previous Bifrost simulations. This will allow the community to study realistic numerical simulations with a wider range of magnetic field configurations and physics modules than previously possible.
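The structure of the two extra terms can be sketched directly: given current density J, magnetic field B, electron density and an ambipolar diffusivity field, the Hall contribution scales as J x B and the ambipolar (Pedersen) contribution as (J x B) x B / |B|^2. The code below only illustrates this vector algebra with placeholder fields and a generic prefactor convention; it does not reproduce Bifrost's units or its parameterization of the ambipolar diffusivity.

import numpy as np

E_CHARGE = 1.602176634e-19  # elementary charge [C]

def ohms_law_terms(J, B, n_e, eta_amb):
    """Structural sketch of the Hall and ambipolar contributions to the electric
    field in a generalized Ohm's law:
        E_hall ~ (J x B) / (e n_e),
        E_amb  ~ eta_amb * ((J x B) x B) / |B|^2 .
    J and B are (..., 3) arrays; n_e and eta_amb are given fields in consistent
    (assumed) units."""
    jxb = np.cross(J, B)
    b2 = np.sum(B * B, axis=-1, keepdims=True)
    e_hall = jxb / (E_CHARGE * n_e[..., None])
    e_amb = eta_amb[..., None] * np.cross(jxb, B) / b2
    return e_hall, e_amb

# toy fields on a tiny 2D grid (values are placeholders, not solar values)
rng = np.random.default_rng(0)
B = rng.normal(0.0, 1e-3, (4, 4, 3))      # [T]
J = rng.normal(0.0, 1e-4, (4, 4, 3))      # [A m^-2]
n_e = np.full((4, 4), 1e16)               # [m^-3]
eta_amb = np.full((4, 4), 1e6)            # assumed ambipolar diffusivity field
E_hall, E_amb = ohms_law_terms(J, B, n_e, eta_amb)
print(E_hall.shape, E_amb.shape)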
Simulation of Propellant Loading System Senior Design Implement in Computer Algorithm
NASA Technical Reports Server (NTRS)
Bandyopadhyay, Alak
2010-01-01
Propellant loading from the Storage Tank to the External Tank is one of the very important and time consuming pre-launch ground operations for the launch vehicle. The propellant loading system is a complex integrated system involving many physical components such as the storage tank filled with cryogenic fluid at a very low temperature, the long pipe line connecting the storage tank with the external tank, the external tank along with the flare stack, and vent systems for releasing the excess fuel. Some of the very important parameters useful for design purposes are the prediction of pre-chill time, loading time, amount of fuel lost, the maximum pressure rise, etc. The physics involved in the mathematical modeling is quite complex because the process is unsteady, there is a phase change as some of the fuel changes from the liquid to the gas state, and there is conjugate heat transfer in the pipe walls as well as between the solid and fluid regions. The simulation is also very tedious and time consuming. Overall, this is a complex system, and the objective of the work is student involvement in the parametric study and optimization of the numerical modeling towards the design of such a system. The students must first become familiar with and understand the physical process, the related mathematics, and the numerical algorithm. The work involves exploring (i) improved algorithms to make the transient simulation computationally efficient (reduced CPU time) and (ii) parametric studies to evaluate design parameters by changing the operational conditions.
NASA Astrophysics Data System (ADS)
Martínez-Sykora, Juan; De Pontieu, Bart; Carlsson, Mats; Hansteen, Viggo H.; Nóbrega-Siverio, Daniel; Gudiksen, Boris V.
2017-09-01
We investigate the effects of interactions between ions and neutrals on the chromosphere and overlying corona using 2.5D radiative MHD simulations with the Bifrost code. We have extended the code capabilities implementing ion-neutral interaction effects using the generalized Ohm's law, i.e., we include the Hall term and the ambipolar diffusion (Pedersen dissipation) in the induction equation. Our models span from the upper convection zone to the corona, with the photosphere, chromosphere, and transition region partially ionized. Our simulations reveal that the interactions between ionized particles and neutral particles have important consequences for the magnetothermodynamics of these modeled layers: (1) ambipolar diffusion increases the temperature in the chromosphere; (2) sporadically the horizontal magnetic field in the photosphere is diffused into the chromosphere, due to the large ambipolar diffusion; (3) ambipolar diffusion concentrates electrical currents, leading to more violent jets and reconnection processes, resulting in (3a) the formation of longer and faster spicules, (3b) heating of plasma during the spicule evolution, and (3c) decoupling of the plasma and magnetic field in spicules. Our results indicate that ambipolar diffusion is a critical ingredient for understanding the magnetothermodynamic properties in the chromosphere and transition region. The numerical simulations have been made publicly available, similar to previous Bifrost simulations. This will allow the community to study realistic numerical simulations with a wider range of magnetic field configurations and physics modules than previously possible.
Oh, Hyun Jung; Larose, Robert
2015-01-01
In the context of healthy snacking, this study examines whether the quality of mental imagery determines the effectiveness of combining the implementation intention (II) intervention with mental imagery. This study further explores whether providing narrative healthy snacking scenarios prior to forming an II enhances people's mental imagery experience when they are not motivated to snack healthfully. A 2 × 2 factorial design was employed to test the main effect of providing healthy snacking scenarios prior to II formation, and whether such effect depends on people's motivation level. The results from the experiment (N = 148) showed significant main as well as interaction effects of the manipulation (with vs. without reading healthy snacking scenarios prior to II formation) and motivation level on ease and vividness of mental imagery. The regression model with the experiment and follow-up survey data (n = 128) showed a significant relationship between ease of mental imagery and actual snacking behavior after controlling for habit strength. The findings suggest that adding a narrative message to the II intervention can be useful, especially when the intervention involves mental imagery and invites less motivated people.
1984-12-01
Contents include Appendix D: CPESIM II Student Manual; Appendix E: CPESIM II Instructor Manual; Appendix F: The Abridged Report; and a Bibliography. ...operating system is implemented on. A student and instructor user's manual is provided. Development of a User Support Package for CPESIM II (a...was a manual one. The student changes should be collected into a database to ease the instructor workload and to provide a "history" of the evolution of
Hopkins, Laura; Brown-Broderick, Jennifer; Hearn, James; Malcolm, Janine; Chan, James; Hicks-Boucher, Wendy; De Sousa, Filomena; Walker, Mark C; Gagné, Sylvain
2017-08-01
To evaluate the frequency of surgical site infections before and after implementation of a comprehensive, multidisciplinary perioperative glycemic control initiative. As part of a CUSP (Comprehensive Unit-based Safety Program) initiative, between January 5 and December 18, 2015, we implemented a comprehensive, multidisciplinary glycemic control initiative to reduce SSI rates in patients undergoing major pelvic surgery for a gynecologic malignancy ('Group II'). Key components of this quality of care initiative included pre-operative HbA1c measurement with special triage for patients meeting criteria for diabetes or pre-diabetes, standardization of available intraoperative insulin choices, rigorous pre-op/intra-op/post-op glucose monitoring with control targets set to maintain BG ≤ 10 mmol/L (180 mg/dL), and communication/notification with primary care providers. Effectiveness was evaluated against a similar control group of patients ('Group I') undergoing surgery in 2014 prior to implementation of this initiative. We studied a total of 462 patients. Subjects in the screened (Group II) and comparison (Group I) groups were of similar age (avg. 61.0, 60.0 years; p=0.422) and BMI (avg. 31.1, 32.3 kg/m²; p=0.257). Descriptive statistics served to compare surgical site infection (SSI) rates and other characteristics across groups. Women undergoing surgery prior to implementation of this algorithm (n=165) had an infection rate of 14.6%. Group II (n=297) showed an over 2-fold reduction in SSI compared to Group I [5.7%; p=0.001, adjRR: 0.45, 95% CI: (0.25, 0.81)]. Additionally, approximately 19% of Group II patients were newly diagnosed with either prediabetes (HbA1C 6.0-6.4) or diabetes (HbA1C ≥ 6.5) and were referred to family or internal medicine for appropriate management. Implementation of a comprehensive multidisciplinary glycemic control initiative can lead to a significant reduction in surgical site infections in addition to early identification of an important health condition in the gynecologic oncology patient population. Copyright © 2017 Elsevier Inc. All rights reserved.
Efficient algorithms and implementations of entropy-based moment closures for rarefied gases
NASA Astrophysics Data System (ADS)
Schaerer, Roman Pascal; Bansal, Pratyuksh; Torrilhon, Manuel
2017-07-01
We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015) [13], we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by exploiting its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
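As a low-order illustration of the dual-optimization structure described above, the sketch below recovers the Lagrange multipliers of a 1D, three-moment maximum-entropy distribution by minimizing the convex dual objective, with the moment integrals evaluated by quadrature on a velocity grid. The grid, target moments and optimizer are assumptions; the 35-moment system, the GPU acceleration and the semi-implicit time stepping of the paper are not reproduced.

import numpy as np
from scipy.optimize import minimize

# Toy maximum-entropy closure:  f(v) = exp(alpha0 + alpha1*v + alpha2*v^2),
# with multipliers alpha chosen so that the (density, momentum, energy) moments
# match prescribed targets.  The dual objective is convex and its gradient is
# simply moments(alpha) - target, evaluated here by quadrature on a velocity grid.
v = np.linspace(-10.0, 10.0, 401)           # assumed quadrature grid
w = np.full_like(v, v[1] - v[0])            # simple quadrature weights
m = np.vstack([np.ones_like(v), v, v**2])   # moment basis (1, v, v^2)

target = np.array([1.0, 0.3, 1.2])          # assumed (rho, rho*u, energy) targets

def dual(alpha):
    f = np.exp(alpha @ m)
    val = np.sum(w * f) - alpha @ target    # convex dual objective
    grad = m @ (w * f) - target             # = moments(alpha) - target
    return val, grad

alpha0 = np.array([-1.0, 0.0, -0.5])        # any start with alpha2 < 0 is admissible
res = minimize(dual, alpha0, jac=True, method="BFGS", tol=1e-12)
alpha = res.x
f = np.exp(alpha @ m)
print("recovered moments :", m @ (w * f))
print("closing flux <v^3>:", np.sum(w * v**3 * f))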
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-08-24
This study presents a numerical investigation on using the Jacobian-free Newton–Krylov (JFNK) method to solve the two-phase flow four-equation drift flux model with realistic constitutive correlations ('closure models'). The drift flux model is based on Ishii and his collaborators' work. Additional constitutive correlations for vertical channel flow, such as two-phase flow pressure drop, flow regime map, wall boiling and interfacial heat transfer models, were taken from the RELAP5-3D Code Manual and included to complete the model. The staggered grid finite volume method and fully implicit backward Euler method were used for the spatial discretization and time integration schemes, respectively. The Jacobian-free Newton–Krylov method shows no difficulty in solving the two-phase flow drift flux model with a discrete flow regime map. In addition to the Jacobian-free approach, the preconditioning matrix is obtained by using the default finite differencing method provided in the PETSc package, and consequently the labor-intensive implementation of a complex analytical Jacobian matrix is avoided. Extensive and successful numerical verification and validation have been performed to prove the correct implementation of the models and methods. Code-to-code comparison with RELAP5-3D has further demonstrated the successful implementation of the drift flux model.
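The practical appeal of JFNK, namely that only a residual function has to be coded while Jacobian-vector products are formed by finite differences inside the Krylov solver, can be seen on a much smaller problem. The sketch below applies SciPy's newton_krylov to a discretized nonlinear boundary value problem; the test equation is an assumption, and the drift-flux equations and RELAP5-3D closures are not reproduced.

import numpy as np
from scipy.optimize import newton_krylov

# Minimal JFNK demonstration: steady nonlinear reaction-diffusion problem
#   u'' = exp(u)  on (0,1) with u(0) = u(1) = 0, discretized by central differences.
# Only the residual is coded; Jacobian-vector products are approximated by
# finite differences inside the Newton-Krylov solver.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

def residual(u):
    r = np.empty_like(u)
    r[0], r[-1] = u[0], u[-1]                       # Dirichlet boundary conditions
    r[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / h**2 - np.exp(u[1:-1])
    return r

u = newton_krylov(residual, np.zeros(n), f_tol=1e-10)
print("max |residual| =", np.max(np.abs(residual(u))))
print("u(0.5)         =", u[n // 2])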
Agricultural Science and Mechanics I & II. Task Analyses. Competency-Based Education.
ERIC Educational Resources Information Center
Henrico County Public Schools, Glen Allen, VA. Virginia Vocational Curriculum Center.
This task analysis guide is intended to help teachers and administrators develop instructional materials and implement competency-based education in the agricultural science and mechanics courses. Section 1 contains a validated task inventory for agricultural science and mechanics I and II. For each task, applicable information pertaining to…
78 FR 18305 - Notice of Request for Extension of a Currently Approved Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-26
... Identity Verification (PIV) Request for Credential, the USDA Homeland Security Presidential Directive 12... consists of two phases of implementation: Personal Identity Verification phase I (PIV I) and Personal Identity Verification phase II (PIV II). The information requested must be provided by Federal employees...
40 CFR 52.146 - Particulate matter (PM-10) Group II SIP commitments.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 3 2011-07-01 2011-07-01 false Particulate matter (PM-10) Group II SIP commitments. 52.146 Section 52.146 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Arizona § 52.146 Particulate matter...
40 CFR 72.70 - Relationship to title V operating permit program.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) AIR PROGRAMS (CONTINUED) PERMITS REGULATION Acid Rain Phase II Implementation § 72.70 Relationship to... operating permit programs and acceptance of State Acid Rain programs, the procedure for including State Acid... of an accepted State program, to issue Phase II Acid Rain permits. (b) Relationship to operating...
40 CFR 72.70 - Relationship to title V operating permit program.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) AIR PROGRAMS (CONTINUED) PERMITS REGULATION Acid Rain Phase II Implementation § 72.70 Relationship to... operating permit programs and acceptance of State Acid Rain programs, the procedure for including State Acid... of an accepted State program, to issue Phase II Acid Rain permits. (b) Relationship to operating...
40 CFR 72.70 - Relationship to title V operating permit program.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) AIR PROGRAMS (CONTINUED) PERMITS REGULATION Acid Rain Phase II Implementation § 72.70 Relationship to... operating permit programs and acceptance of State Acid Rain programs, the procedure for including State Acid... of an accepted State program, to issue Phase II Acid Rain permits. (b) Relationship to operating...
40 CFR 72.70 - Relationship to title V operating permit program.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) AIR PROGRAMS (CONTINUED) PERMITS REGULATION Acid Rain Phase II Implementation § 72.70 Relationship to... operating permit programs and acceptance of State Acid Rain programs, the procedure for including State Acid... of an accepted State program, to issue Phase II Acid Rain permits. (b) Relationship to operating...
40 CFR 72.70 - Relationship to title V operating permit program.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) AIR PROGRAMS (CONTINUED) PERMITS REGULATION Acid Rain Phase II Implementation § 72.70 Relationship to... operating permit programs and acceptance of State Acid Rain programs, the procedure for including State Acid... of an accepted State program, to issue Phase II Acid Rain permits. (b) Relationship to operating...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 4 2011-07-01 2011-07-01 false Definitions. 1640.2 Section 1640.2 Labor Regulations... Department's title II regulation) to implement and enforce title II of the ADA with respect to the functional... section 504 agency that has jurisdiction under section 504 and with the EEOC, which has jurisdiction under...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 4 2010-07-01 2010-07-01 false Definitions. 1640.2 Section 1640.2 Labor Regulations... Department's title II regulation) to implement and enforce title II of the ADA with respect to the functional... section 504 agency that has jurisdiction under section 504 and with the EEOC, which has jurisdiction under...
Meal-specific dietary changes from Squires Quest! II: A serious video game intervention
USDA-ARS?s Scientific Manuscript database
"Squire's Quest! II: Saving the Kingdom of Fivealot", an online video-game, promotes fruit-vegetable (FV) consumption. An evaluation study varied type of implementation intentions used during the goal setting process (none; Action, Coping, or both Action + Coping plans). Participants who created Ac...
DOT National Transportation Integrated Search
2009-01-01
The objective of the sensitivity study was to evaluate the input parameters related to AC material properties, traffic, and climate that significantly or insignificantly influence the predicted performance for two specific SISSI flexible pavements: W...
A Curriculum Activities Guide to Water Pollution and Environmental Studies, Volume II - Appendices.
ERIC Educational Resources Information Center
Hershey, John T., Ed.; And Others
This publication, Volume II of a two volume set of water pollution studies, contains seven appendices which support the studies. Appendix 1, Water Quality Parameters, consolidates the technical aspects of water quality including chemical, biological, computer program, and equipment information. Appendix 2, Implementation, outlines techniques…
NASA Astrophysics Data System (ADS)
Liu, Lei; Li, Yaning
2018-07-01
A methodology was developed to use a hyperelastic softening model to predict the constitutive behavior and the spatial damage propagation of nonlinear materials with damage-induced softening under mixed-mode loading. A user subroutine (ABAQUS/VUMAT) was developed for numerical implementation of the model. A 3D-printed wavy soft rubbery interfacial layer was used as a material system to verify and validate the methodology. The Arruda–Boyce hyperelastic model is incorporated with the softening model to capture the nonlinear pre- and post-damage behavior of the interfacial layer under mixed Mode I/II loads. To characterize model parameters of the 3D-printed rubbery interfacial layer, a series of scarf-joint specimens were designed, which enabled systematic variation of stress triaxiality via a single geometric parameter, the slant angle. It was found that the important model parameter m is exponentially related to the stress triaxiality. Compact tension specimens of the sinusoidal wavy interfacial layer with different waviness were designed and fabricated via multi-material 3D printing. Finite element (FE) simulations were conducted to predict the spatial damage propagation of the material within the wavy interfacial layer. Compact tension experiments were performed to verify the model prediction. The results show that the model developed is able to accurately predict the damage propagation of the 3D-printed rubbery interfacial layer under complicated stress-states without pre-defined failure criteria.
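To show the "hyperelastic response scaled by (1 - D)" structure in isolation, the sketch below computes a uniaxial, incompressible neo-Hookean stress and degrades it with a simple energy-driven exponential damage law during a load-unload cycle. The neo-Hookean law, the damage law and the parameters are generic assumptions; they stand in for, and do not reproduce, the Arruda-Boyce plus softening model implemented in the paper's VUMAT.

import numpy as np

# 1D illustrative "hyperelastic + damage softening" response:
#   effective stress = (1 - D) * neo-Hookean stress,
#   D driven by the maximum strain energy seen so far (Mullins-like form).
mu, psi0, m = 1.0, 0.2, 0.5        # shear modulus and assumed damage parameters

def neo_hookean_uniaxial(lam):
    """Cauchy stress of an incompressible neo-Hookean solid in uniaxial tension."""
    return mu * (lam**2 - 1.0 / lam)

def strain_energy(lam):
    return 0.5 * mu * (lam**2 + 2.0 / lam - 3.0)

lams = np.concatenate([np.linspace(1.0, 2.5, 200),   # loading
                       np.linspace(2.5, 1.0, 200)])  # unloading
psi_max, records = 0.0, []
for lam in lams:
    psi_max = max(psi_max, strain_energy(lam))
    D = 1.0 - np.exp(-max(psi_max - psi0, 0.0) / m)  # damage grows once psi exceeds psi0
    records.append((lam, (1.0 - D) * neo_hookean_uniaxial(lam)))

lam_arr, sig_arr = np.array(records).T
print("peak damaged stress:", sig_arr.max())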
Health-related interventions among night shift workers: a critical review of the literature.
Neil-Sztramko, Sarah E; Pahwa, Manisha; Demers, Paul A; Gotay, Carolyn C
2014-11-01
Associations between shift work and chronic disease have been observed, but relatively little is known about how to mitigate these adverse health effects. This critical review aimed to (i) synthesize interventions that have been implemented among shift workers to reduce the chronic health effects of shift work and (ii) provide an overall evaluation of study quality. MeSH terms and keywords were created and used to conduct a rigorous search of MEDLINE, CINAHL, and EMBASE for studies published on or before 13 August 2012. Study quality was assessed using a checklist adapted from Downs & Black. Of the 5053 articles retrieved, 44 met the inclusion and exclusion criteria. Over 2354 male and female rotating and permanent night shift workers were included, mostly from the manufacturing, healthcare, and public safety industries. Studies were grouped into four intervention types: (i) shift schedule; (ii) controlled light exposure; (iii) behavioral; and (iv) pharmacological. Results generally support the benefits of fast-forward rotating shifts; simultaneous use of timed bright light and light-blocking glasses; and physical activity, healthy diet, and health promotion. Mixed results were observed for hypnotics. Study quality varied and numerous deficiencies were identified. Except for hypnotics, several types of interventions reviewed had positive overall effects on chronic disease outcomes. There was substantial heterogeneity among studies with respect to study sample, interventions, and outcomes. There is a need for further high-quality, workplace-based prevention research conducted among shift workers.
Methane rising from the Deep: Hydrates, Bubbles, Oil Spills, and Global Warming
NASA Astrophysics Data System (ADS)
Leifer, I.; Rehder, G. J.; Solomon, E. A.; Kastner, M.; Asper, V. L.; Joye, S. B.
2011-12-01
Elevated methane concentrations in near-surface waters and the atmosphere have been reported for seepage from depths of nearly 1 km at the Gulf of Mexico hydrate observatory (MC118), suggesting that for some methane sources, deepsea methane is not trapped and can contribute to atmospheric greenhouse gas budgets. Ebullition is key with important sensitivity to the formation of hydrate skins and oil coatings, high-pressure solubility, bubble size and bubble plume processes. Bubble ROV tracking studies showed survival to near thermocline depths. Studies with a numerical bubble propagation model demonstrated that consideration of structure I hydrate skins transported most methane only to mid-water column depths. Instead, consideration of structure II hydrates, which are stable to far shallower depths and appropriate for natural gas mixtures, allows bubbles to survive to far shallower depths. Moreover, model predictions of vertical methane and alkane profiles and bubble size evolution were in better agreement with observations after consideration of structure II hydrate properties as well as an improved implementation of plume properties, such as currents. These results demonstrate the importance of correctly incorporating bubble hydrate processes in efforts to predict the impact of deepsea seepage as well as to understand the fate of bubble-transported oil and methane from deepsea pipeline leaks and well blowouts. Application to the DWH spill demonstrated the importance of deepsea processes to the fate of spilled subsurface oil. Because several of these parameters vary temporally (bubble flux, currents, temperature), sensitivity studies indicate the importance of real-time monitoring data.
NASA Astrophysics Data System (ADS)
Abedi, Maysam; Gholami, Ali; Norouzi, Gholam-Hossain
2013-03-01
Previous studies have shown that a well-known multi-criteria decision making (MCDM) technique, the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE II), can effectively prioritize ground-based exploratory evidential layers for porphyry copper exploration. In this paper, the PROMETHEE II method is applied to airborne geophysical (potassium radiometry and magnetometry) data, geological layers (fault and host rock zones), and various extracted alteration layers from remote sensing images. The central Iranian volcanic-sedimentary belt is chosen for this study. A stable downward continuation method, posed as an inverse problem in the Fourier domain using Tikhonov and edge-preserving regularizations, is proposed to enhance the magnetic data. Numerical analysis of synthetic models shows that the reconstructed magnetic data at the ground surface exhibit significant enhancement compared to the airborne data. The reduced-to-pole (RTP) and the analytic signal filters are applied to the magnetic data to show better maps of the magnetic anomalies. Four remote sensing evidential layers including argillic, phyllic, propylitic and hydroxyl alterations are extracted from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images in order to map the altered areas associated with porphyry copper deposits. Principal component analysis (PCA) based on six Enhanced Thematic Mapper Plus (ETM+) images is implemented to map the iron oxide layer. The final mineral prospectivity map based on the desired geo-data set indicates adequate matching of high-potential zones with previously worked mines and copper deposits.
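The ranking step itself is compact; the sketch below computes PROMETHEE II positive, negative and net outranking flows for a small synthetic decision matrix using a linear ("V-shape") preference function. The alternatives, criterion weights and thresholds are placeholders, not the paper's geophysical and remote-sensing evidential layers.

import numpy as np

# Minimal PROMETHEE II ranking: alternatives x criteria decision matrix,
# criterion weights, and a linear preference function with threshold p.
X = np.array([[0.8, 0.3, 0.6],      # e.g. 4 prospect cells scored on 3 layers (placeholders)
              [0.5, 0.7, 0.4],
              [0.9, 0.2, 0.8],
              [0.4, 0.6, 0.5]])
w = np.array([0.5, 0.2, 0.3])       # criterion weights (sum to 1)
p = np.array([0.3, 0.3, 0.3])       # linear preference thresholds

n = X.shape[0]
pi = np.zeros((n, n))               # aggregated preference index pi(a, b)
for a in range(n):
    for b in range(n):
        d = X[a] - X[b]                             # higher score = better
        pref = np.clip(d / p, 0.0, 1.0) * (d > 0)   # linear (V-shape) preference function
        pi[a, b] = np.sum(w * pref)

phi_plus = pi.sum(axis=1) / (n - 1)   # positive outranking flow
phi_minus = pi.sum(axis=0) / (n - 1)  # negative outranking flow
phi_net = phi_plus - phi_minus        # PROMETHEE II net flow
print("ranking (best first):", np.argsort(-phi_net))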
Mobile computing initiatives within pharmacy education.
Cain, Jeff; Bird, Eleanora R; Jones, Mikael
2008-08-15
To identify mobile computing initiatives within pharmacy education, including how devices are obtained, supported, and utilized within the curriculum. An 18-item questionnaire was developed and delivered to academic affairs deans (or closest equivalent) of 98 colleges and schools of pharmacy. Fifty-four colleges and schools completed the questionnaire for a 55% completion rate. Thirteen of those schools have implemented mobile computing requirements for students. Twenty schools reported they were likely to formally consider implementing a mobile computing initiative within 5 years. Numerous models of mobile computing initiatives exist in terms of device obtainment, technical support, infrastructure, and utilization within the curriculum. Responders identified flexibility in teaching and learning as the most positive aspect of the initiatives and computer-aided distraction as the most negative. Numerous factors should be taken into consideration when deciding if and how a mobile computing requirement should be implemented.
NASA Astrophysics Data System (ADS)
Lu, Xiao-Ping; Huang, Xiang-Jie; Ip, Wing-Huen; Hsia, Chi-Hao
2018-04-01
In the lightcurve inversion process, in which an asteroid's physical parameters such as rotational period, pole orientation and overall shape are searched for, numerical calculations of the synthetic photometric brightness based on different shape models are frequently implemented. Lebedev quadrature is an efficient method to numerically calculate the surface integral on the unit sphere. By transforming the surface integral on the Cellinoid shape model to one on the unit sphere, the lightcurve inversion process based on the Cellinoid shape model can be remarkably accelerated. Furthermore, Matlab codes of the lightcurve inversion process based on the Cellinoid shape model are available on Github for free downloading. The photometric models, i.e., the scattering laws, also play an important role in the lightcurve inversion process, although the shape variations of asteroids dominate the morphologies of the lightcurves. Derived from radiative transfer theory, the Hapke model can describe light reflectance behaviors from the viewpoint of physics, while there are also many empirical models in numerical applications. Numerical simulations are implemented for the comparison of the Hapke model with three other numerical models: the Lommel-Seeliger, Minnaert, and Kaasalainen models. The results show that the numerical models with simple function expressions can fit the synthetic lightcurves generated based on the Hapke model well; this good fit implies that they can be adopted in the lightcurve inversion process for asteroids to improve the numerical efficiency and derive results similar to those of the Hapke model.
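The surface-integral step that the abstract accelerates with Lebedev quadrature can be sketched with a simpler product rule: the code below integrates the disk-integrated Lommel-Seeliger brightness of a triaxial ellipsoid over its unit-sphere parameterization using Gauss-Legendre nodes in colatitude and a uniform rule in longitude. The semi-axes, the viewing geometry and the use of a plain ellipsoid instead of the Cellinoid shape are assumptions.

import numpy as np
from numpy.polynomial.legendre import leggauss

# Disk-integrated Lommel-Seeliger brightness of a triaxial ellipsoid,
# integrated with a Gauss-Legendre (colatitude) x uniform (longitude) rule.
a, b, c = 1.5, 1.2, 1.0                             # assumed semi-axes
e_obs = np.array([1.0, 0.0, 0.0])                   # unit vector towards the observer
e_sun = np.array([np.cos(0.3), np.sin(0.3), 0.0])   # 0.3 rad solar phase angle (assumed)

n_theta, n_phi = 64, 128
xg, wg = leggauss(n_theta)                          # nodes/weights in cos(theta)
theta = np.arccos(xg)
phi = np.linspace(0.0, 2*np.pi, n_phi, endpoint=False)
TH, PH = np.meshgrid(theta, phi, indexing="ij")
ST, CT, SP, CP = np.sin(TH), np.cos(TH), np.sin(PH), np.cos(PH)

# tangent vectors of r(theta, phi) = (a ST CP, b ST SP, c CT)
r_th = np.stack([a*CT*CP, b*CT*SP, -c*ST], axis=-1)
r_ph = np.stack([-a*ST*SP, b*ST*CP, np.zeros_like(TH)], axis=-1)
cross = np.cross(r_th, r_ph)                        # outward normal times area element
J = np.linalg.norm(cross, axis=-1)
n_hat = cross / J[..., None]

mu = n_hat @ e_obs                                  # cosine towards the observer
mu0 = n_hat @ e_sun                                 # cosine towards the sun
vis = (mu > 0) & (mu0 > 0)                          # lit and visible part of the surface
S = np.where(vis, mu*mu0 / np.where(vis, mu + mu0, 1.0), 0.0)   # Lommel-Seeliger law

w2d = wg[:, None] * (2*np.pi / n_phi)               # product-rule quadrature weights
brightness = np.sum(w2d * S * J / ST)               # divide by sin(theta): nodes are in cos(theta)
print("disk-integrated brightness:", brightness)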
ERIC Educational Resources Information Center
Zhu, Zhiyong
2008-01-01
In the 1990s, numerous primary and secondary schools in China began experimental exploration and research on the implementation of school development planning (SDP). However, there has been a lack of self-criticism and reflection on the actual implementation situations and changes in SDP's concepts in the participating schools. This study assessed…
CADBIT II - Computer-Aided Design for Built-In Test. Volume 1
1993-06-01
data provided in the CADBIT I Final Report, as indicated in Figure 1.2 ("CADBIT II implements system concept, requirements, and data developed during..."). The CADBIT II software was developed using de facto computer standards including Unix, C, and the X Windows-based OSF/Motif graphical user interface... export connectivity information. Design Architect is a package for designers that includes schematic capture, a VHDL editor, and libraries of digital
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-12
... NAAQS dated October 22, 2008: 110(a)(2)(A), (B), (C), (D)(ii), (E), (F), (G), (H), (J), (K), (L), and (M...), (B), (C), (D)(ii), (E), (F), (G), (H), (J), (K), (L), and (M), or portions thereof. DATES: Written..., K, L, M. November 30, 2007 G April 3, 2008 A, B, C, D(ii), E, F, G, H, J, K, L, M. April 16, 2010 G...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-01
...), (G), (H), (J), (K), (L), and (M), or portions thereof; and the following infrastructure elements for the 2006 PM 2.5 NAAQS: 110(a)(2)(A), (B), (C), (D), (E), (F), (G), (H), (J), (K), (L), and (M), or..., K, L, M. G, H, J , K, L, M. December 7, 2007 D(i)(II)PSD D(i)(II)PSD. June 6, 2008 C, D(i)(II)PSD, J...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-04
... 22, 2008: 110(a)(2)(A), (B), (C), (D)(ii), (E), (F), (G), (H), (J), (K), (L), and (M), or portions... CAA section 110(a)(2)(A), (B), (C), (D)(ii), (E), (F), (G), (H), (J), (K), (L), and (M), or portions... sections 110(a)(2)(A), (B), (C), (D)(ii), (E), (F), (G), (H), (J), (K), (L), and (M), or portions thereof...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolan, Sam R.; Barack, Leor
2011-01-15
To model the radiative evolution of extreme mass-ratio binary inspirals (a key target of the LISA mission), the community needs efficient methods for computation of the gravitational self-force (SF) on the Kerr spacetime. Here we further develop a practical 'm-mode regularization' scheme for SF calculations, and give the details of a first implementation. The key steps in the method are (i) removal of a singular part of the perturbation field with a suitable 'puncture' to leave a sufficiently regular residual within a finite worldtube surrounding the particle's worldline, (ii) decomposition in azimuthal (m) modes, (iii) numerical evolution of the m modes in 2+1D with a finite-difference scheme, and (iv) reconstruction of the SF from the mode sum. The method relies on a judicious choice of puncture, based on the Detweiler-Whiting decomposition. We give a working definition for the "order" of the puncture, and show how it determines the convergence rate of the m-mode sum. The dissipative piece of the SF displays an exponentially convergent mode sum, while the m-mode sum for the conservative piece converges with a power law. In the latter case, the individual modal contributions fall off at large m as m^(-n) for even n and as m^(-n+1) for odd n, where n is the puncture order. We describe an m-mode implementation with a 4th-order puncture to compute the scalar-field SF along circular geodesics on Schwarzschild. In a forthcoming companion paper we extend the calculation to the Kerr spacetime.
Hysteretic Models Considering Axial-Shear-Flexure Interaction
NASA Astrophysics Data System (ADS)
Ceresa, Paola; Negrisoli, Giorgio
2017-10-01
Most of the existing numerical models implemented in finite element (FE) software, at the current state of the art, are not capable of describing with sufficient reliability the interaction between axial, shear and flexural actions under cyclic loading (e.g. seismic actions), neglecting effects that are crucial for predicting the nature of the collapse of reinforced concrete (RC) structural elements. Only a few existing 3D volume models or fibre beam models can lead to a reasonably accurate response, but they are still computationally inefficient for typical applications in earthquake engineering and are also characterized by very complex formulations. Thus, discrete models with lumped plasticity hinges may be the preferred choice for modelling the hysteretic behaviour due to cyclic loading conditions, in particular with reference to implementation in a commercial software package. These considerations lead to this research work, which is focused on the development of a model for RC beam-column elements able to consider degradation effects and interaction between the actions under cyclic loading conditions. In order to develop a model for a general 3D discrete hinge element able to take into account the axial-shear-flexural interaction, it is necessary to provide an implementation which involves a predictor-corrector iterative scheme. Furthermore, a reliable constitutive model based on damage plasticity theory is formulated and implemented for its numerical validation. The aim of this research work is to provide the formulation of a numerical model which will allow implementation within an FE software package for nonlinear cyclic analysis of RC structural members. The developed model accounts for stiffness degradation effects and stiffness recovery under loading reversal.
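For readers unfamiliar with the predictor-corrector structure of lumped hinge models, the sketch below advances a 1D bilinear moment-rotation hinge with kinematic hardening through a cyclic rotation history using an elastic predictor and a plastic (return-mapping) corrector. The stiffness values, yield moment and load history are assumptions, and neither degradation nor the axial-shear-flexure interaction of the paper is included.

import numpy as np

# Minimal 1D lumped-hinge hysteresis: bilinear moment-rotation law with
# kinematic hardening, advanced with an elastic-predictor / plastic-corrector update.
k_el, k_h, M_y = 100.0, 10.0, 1.0      # elastic stiffness, hardening stiffness, yield moment

def hinge_response(rotations):
    back, th_p = 0.0, 0.0               # back-stress (kinematic hardening) and plastic rotation
    out = []
    for th in rotations:
        M_trial = k_el * (th - th_p)            # elastic predictor
        f = abs(M_trial - back) - M_y           # yield function
        if f > 0.0:                             # plastic corrector (return mapping)
            dgamma = f / (k_el + k_h)
            s = np.sign(M_trial - back)
            th_p += dgamma * s
            back += k_h * dgamma * s
            M = k_el * (th - th_p)
        else:
            M = M_trial
        out.append(M)
    return np.array(out)

t = np.linspace(0.0, 4*np.pi, 800)
theta = 0.03 * np.sin(t) * (1 + 0.5 * t / t[-1])    # assumed growing cyclic rotation history
moment = hinge_response(theta)
print("peak moment:", moment.max())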
Analytical and numerical solution for wave reflection from a porous wave absorber
NASA Astrophysics Data System (ADS)
Magdalena, Ikha; Roque, Marian P.
2018-03-01
In this paper, wave reflection from a porous wave absorber is investigated theoretically and numerically. The governing equations are based on a shallow-water-type model. The motion inside the absorber is modified by including a linearized friction term in the momentum equation and introducing a filtered velocity. Here, an analytical solution for the wave reflection coefficient from a porous wave absorber over a flat bottom is derived. Numerically, we solve the equations using the finite volume method on a staggered grid. To validate our numerical model, the numerical reflection coefficient is compared against the analytical solution. Further, we apply our numerical scheme to study the evolution of surface waves passing through a porous absorber over varied bottom topography.
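The staggered-grid idea described above can be illustrated with a minimal 1D linear shallow-water sketch in which a linear friction term is switched on inside the absorber region. All geometry and coefficient values below are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

# Minimal 1D linear shallow-water solver on a staggered grid:
# eta (surface elevation) at cell centres, u (velocity) at cell faces.
# Inside the absorber (x > x_p) a linearized friction term -cf*u is added
# to the momentum equation (treated semi-implicitly for robustness).
g, d = 9.81, 1.0                 # gravity, still-water depth
L, nx = 20.0, 400
dx = L / nx
dt = 0.5 * dx / np.sqrt(g * d)   # CFL-limited time step
x_eta = (np.arange(nx) + 0.5) * dx
x_u = np.arange(nx + 1) * dx
x_p, cf = 15.0, 2.0              # absorber start and friction coefficient

eta = np.zeros(nx)
u = np.zeros(nx + 1)
friction = np.where(x_u > x_p, cf, 0.0)

for step in range(2000):
    t = step * dt
    u[0] = 0.05 * np.sin(2 * np.pi * t / 2.0)          # incident wave at the left boundary
    # momentum: du/dt = -g d(eta)/dx - cf*u
    u[1:-1] = (u[1:-1] - dt * g * (eta[1:] - eta[:-1]) / dx) / (1.0 + dt * friction[1:-1])
    u[-1] = 0.0                                        # solid wall behind the absorber
    # continuity: d(eta)/dt = -d * du/dx
    eta -= dt * d * (u[1:] - u[:-1]) / dx

print("max |eta| in front of the absorber:", np.abs(eta[x_eta < x_p]).max())
```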
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This volume contains Biomedical and Environmental Research, Environmental Control Technology Research, and Operational and Environmental Safety Research project listings. The projects are ordered numerically by log number.
NASA Technical Reports Server (NTRS)
Raiszadeh, Ben; Queen, Eric M.
2002-01-01
A capability to simulate trajectories of multiple interacting rigid bodies has been developed. This capability uses the Program to Optimize Simulated Trajectories II (POST II). Previously, POST II had the ability to simulate multiple bodies without interacting forces. The current implementation is used for the simulation of parachute trajectories, in which the parachute and suspended bodies can be treated as rigid bodies. An arbitrary set of connecting lines can be included in the model; these are treated as massless spring-dampers. This paper discusses details of the connection line modeling and results of several test cases used to validate the capability.
ERIC Educational Resources Information Center
Roy, David
2016-01-01
In Drama Education mask work is undertaken and presented as both a methodology and knowledge base. There are numerous workshops and journal articles available for teachers that offer knowledge or implementation of mask work. However, empirical examination of the context or potential implementation of masks as a pedagogical tool remains…
Fostering Integrated Learning and Faculty Collaboration through Curriculum Design: A Case Study
ERIC Educational Resources Information Center
Routhieaux, Robert L.
2015-01-01
Designing and implementing innovative curricula can enhance student learning while simultaneously fostering faculty collaboration. However, innovative curricula can also surface numerous challenges for faculty, staff, students, and administration. This case study documents the design and implementation of an innovative Master of Business…
PETE Preparation for CSPAP at the University of Kentucky
ERIC Educational Resources Information Center
Erwin, Heather E.; Beighle, Aaron; Eckler, Seth
2017-01-01
Numerous strategies to increase physical activity levels among American youth have been recommended and implemented in schools, and physical education teachers have been identified as the logical personnel in schools to spearhead these attempts. Comprehensive school physical activity programs (CSPAPs) are being promoted, implemented and endorsed…
Wang, Ya; Liu, Lu-lu; Gan, Ming-yuan; Tan, Shu-ping; Shum, David; Chan, Raymond
2017-01-01
Abstract Background: Prospective memory (PM) refers to remembering to execute a planned intention in the future; it can be divided into event-based PM (focal, nonfocal) and time-based PM according to the nature of the cue. Focal event-based PM, where the ongoing task requires processing of the characteristics of PM cues, has been found to benefit from implementation intention (II, i.e., an encoding strategy in the format of “if I see X, then I will do Y”). However, to date, it is unclear whether implementation intention can produce a positive effect on nonfocal event-based PM (where the ongoing task is irrelevant to the PM cues) and time-based PM. Moreover, patients with schizophrenia (SCZ) have been found to have impairments in these types of PM, and few studies have examined the effect of II on them. This study investigated whether (and how) implementation intention can improve nonfocal event-based PM and time-based PM performance in patients with SCZ. Methods: Forty-two patients with SCZ and 42 healthy control participants were administered both a computerized nonfocal event-based PM task and a time-based PM task. Patients and healthy controls were further randomly allocated to an implementation intention condition (N = 21) or a typical instruction condition (N = 21). Results: Patients with SCZ in the implementation intention group showed higher PM accuracy than the typical instruction group in both the nonfocal event-based PM task (0.51 ± 0.32 vs 0.19 ± 0.29, t(40) = 3.39, P = .002) and the time-based PM task (0.72 ± 0.31 vs 0.39 ± 0.40, t(40) = 2.98, P = .005). Similarly, healthy controls in the II group also showed better PM performance than the typical instruction group in both tasks (all P’s < .05). Time-check frequency in the time-based PM task was significantly higher in the II group than in the typical instruction group across all participants. Conclusion: Implementation intention is an effective strategy for improving different types of PM performance in patients with schizophrenia and can be applied in clinical settings.
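The reported t(40) values can be reproduced from the group means, standard deviations, and sample sizes with a pooled two-sample t statistic; the short check below does exactly that (using SciPy only for the p-value).

```python
import numpy as np
from scipy import stats

def pooled_t(m1, s1, n1, m2, s2, n2):
    """Independent-samples t statistic with pooled variance and its two-sided p value."""
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df   # pooled variance
    t = (m1 - m2) / np.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
    p = 2 * stats.t.sf(abs(t), df)
    return t, df, p

# Nonfocal event-based PM accuracy: II group vs typical-instruction group (n = 21 each).
print(pooled_t(0.51, 0.32, 21, 0.19, 0.29, 21))   # approx (3.39, 40, 0.002)
# Time-based PM accuracy.
print(pooled_t(0.72, 0.31, 21, 0.39, 0.40, 21))   # approx (2.98, 40, 0.005)
```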
Hardware-Independent Proofs of Numerical Programs
NASA Technical Reports Server (NTRS)
Boldo, Sylvie; Nguyen, Thi Minh Tuyen
2010-01-01
On recent architectures, a numerical program may give different answers depending on the execution hardware and the compilation. Our goal is to formally prove properties about numerical programs that are true for multiple architectures and compilers. We propose an approach that states the rounding error of each floating-point computation whatever the environment. This approach is implemented in the Frama-C platform for static analysis of C code. Small case studies using this approach are entirely and automatically proved.
Fumeaux, Christophe; Lin, Hungyen; Serita, Kazunori; Withayachumnankul, Withawat; Kaufmann, Thomas; Tonouchi, Masayoshi; Abbott, Derek
2012-07-30
The process of terahertz generation through optical rectification in a nonlinear crystal is modeled using discretized equivalent current sources. The equivalent terahertz sources are distributed in the active volume and computed based on a separately modeled near-infrared pump beam. This approach can be used to define an appropriate excitation for full-wave electromagnetic numerical simulations of the generated terahertz radiation. This enables predictive modeling of the near-field interactions of the terahertz beam with micro-structured samples, e.g. in a near-field time-resolved microscopy system. The distributed source model is described in detail, and an implementation in a particular full-wave simulation tool is presented. The numerical results are then validated through a series of measurements on square apertures. The general principle can be applied to other nonlinear processes with possible implementation in any full-wave numerical electromagnetic solver.
Turbulence modeling for hypersonic flight
NASA Technical Reports Server (NTRS)
Bardina, Jorge E.
1992-01-01
The objective of the present work is to develop, verify, and incorporate two-equation turbulence models that account for the effect of compressibility at high speeds into a three-dimensional Reynolds-averaged Navier-Stokes code, and to provide documented model descriptions and numerical procedures so that they can be implemented into the National Aerospace Plane (NASP) codes. A summary of accomplishments is listed: (1) four codes have been tested and evaluated against a flat plate boundary layer flow and an external supersonic flow; (2) a code named RANS was chosen because of its speed, accuracy, and versatility; (3) the code was extended from thin boundary layer to full Navier-Stokes; (4) the k-omega two-equation turbulence model has been implemented into the base code; (5) a 24 degree laminar compression corner flow has been simulated and compared to other numerical simulations; and (6) work is in progress on documenting the numerical method of the base code, including the turbulence model.
Numerical Tests and Properties of Waves in Radiating Fluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, B M; Klein, R I
2009-09-03
We discuss the properties of an analytical solution for waves in radiating fluids, with a view towards its implementation as a quantitative test of radiation hydrodynamics codes. A homogeneous radiating fluid in local thermodynamic equilibrium is periodically driven at the boundary of a one-dimensional domain, and the solution describes the propagation of the waves thus excited. Two modes are excited for a given driving frequency, generally referred to as a radiative acoustic wave and a radiative diffusion wave. While the analytical solution is well known, several features are highlighted here that require care during its numerical implementation. We compare the solution in a wide range of parameter space to a numerical integration with a Lagrangian radiation hydrodynamics code. Our most significant observation is that flux-limited diffusion does not preserve causality for waves on a homogeneous background.
The Construction of 3-d Neutral Density for Arbitrary Data Sets
NASA Astrophysics Data System (ADS)
Riha, S.; McDougall, T. J.; Barker, P. M.
2014-12-01
The Neutral Density variable allows inference of water pathways from thermodynamic properties in the global ocean, and is therefore an essential component of global ocean circulation analysis. The widely used algorithm for the computation of Neutral Density yields accurate results for data sets which are close to the observed climatological ocean. Long-term numerical climate simulations, however, often generate a significant drift from present-day climate, which renders the existing algorithm inaccurate. To remedy this problem, new algorithms which operate on arbitrary data have been developed, which may potentially be used to compute Neutral Density during runtime of a numerical model. We review existing approaches for the construction of Neutral Density in arbitrary data sets, detail their algorithmic structure, and present an analysis of the computational cost for implementations on a single-CPU computer. We discuss possible strategies for the implementation in state-of-the-art numerical models, with a focus on distributed computing environments.
Sandia National Laboratories analysis code data base
NASA Astrophysics Data System (ADS)
Peterson, C. W.
1994-11-01
Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems, and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code 'ownership' and release status, and references describing the physical models and numerical implementation.
Numerical implementation of isolated horizon boundary conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaramillo, Jose Luis; Ansorg, Marcus; Limousin, Francois
2007-01-15
We study the numerical implementation of a set of boundary conditions derived from the isolated horizon formalism, and which characterize a black hole whose horizon is in quasiequilibrium. More precisely, we enforce these geometrical prescriptions as inner boundary conditions on an excised sphere, in the numerical resolution of the conformal thin sandwich equations. As main results, we first establish the consistency of including in the set of boundary conditions a constant surface gravity prescription, interpretable as a lapse boundary condition, and second we assess how the prescriptions presented recently by Dain et al. for guaranteeing the well-posedness of the conformal transverse traceless equations with quasiequilibrium horizon conditions extend to the conformal thin sandwich elliptic system. As a consequence of the latter analysis, we discuss the freedom of prescribing the expansion associated with the ingoing null normal at the horizon.
77 FR 38006 - Approval and Promulgation of Implementation Plans; State of Iowa: Regional Haze
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-26
... Class I Areas'' contained one numerical error. Iowa's 2002 contribution to Voyagers should read 2.16... environmental effects, using practicable and legally permissible methods, under Executive Order 12898 (59 FR... a new entry (39) in numerical order to read as follows: Sec. 52.820 Identification of plan...
Numerical stability of the error diffusion concept
NASA Astrophysics Data System (ADS)
Weissbach, Severin; Wyrowski, Frank
1992-10-01
The error diffusion algorithm is an easily implementable means of handling nonlinearities in signal processing, e.g. in picture binarization and in the coding of diffractive elements. The numerical stability of the algorithm depends on the choice of the diffusion weights. A criterion for the stability of the algorithm is presented and evaluated for some examples.
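A minimal 1D error-diffusion binarization sketch follows; the single-neighbor weight choice is illustrative only, and the stability criterion discussed in the abstract concerns precisely how such weights may be chosen.

```python
import numpy as np

def error_diffusion_1d(signal, weights=(1.0,)):
    """Binarize a signal in [0, 1] by diffusing the quantization error forward.

    With the single weight 1.0 the full error is pushed to the next sample;
    weight sets whose magnitudes are too large let the accumulated error grow,
    which is the kind of instability a diffusion-weight criterion addresses.
    """
    s = signal.astype(float).copy()
    out = np.zeros_like(s)
    for i in range(len(s)):
        out[i] = 1.0 if s[i] >= 0.5 else 0.0
        err = s[i] - out[i]
        for k, w in enumerate(weights, start=1):
            if i + k < len(s):
                s[i + k] += w * err     # push the residual error to later samples
    return out

ramp = np.linspace(0.0, 1.0, 32)
print(error_diffusion_1d(ramp).astype(int))
# The running mean of the binary output tracks the grey-level ramp.
```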
NASA Astrophysics Data System (ADS)
Divakov, Dmitriy; Malykh, Mikhail; Sevastianov, Leonid; Sevastianov, Anton; Tiutiunnik, Anastasiia
2017-04-01
In the paper we construct a method for approximate solution of the waveguide problem for guided modes of an open irregular waveguide transition. The method is based on straightening of the curved waveguide boundaries by introducing new variables and applying the Kantorovich method to the problem formulated in the new variables to get a system of ordinary second-order differential equations. In the method, the boundary conditions are formulated by analogy with the partial radiation conditions in the similar problem for closed waveguide transitions. The method is implemented in the symbolic-numeric form using the Maple computer algebra system. The coefficient matrices of the system of differential equations and boundary conditions are calculated symbolically, and then the obtained boundary-value problem is solved numerically using the finite difference method. The chosen coordinate functions of Kantorovich expansions provide good conditionality of the coefficient matrices. The numerical experiment simulating the propagation of guided modes in the open waveguide transition confirms the validity of the method proposed to solve the problem.
The generalized scattering coefficient method for plane wave scattering in layered structures
NASA Astrophysics Data System (ADS)
Liu, Yu; Li, Chao; Wang, Huai-Yu; Zhou, Yun-Song
2017-02-01
The generalized scattering coefficient (GSC) method is pedagogically derived and employed to study the scattering of plane waves in homogeneous and inhomogeneous layered structures. The numerical stabilities and accuracies of this method and other commonly used numerical methods are discussed and compared. For homogeneous layered structures, concise scattering formulas with clear physical interpretations and strong numerical stability are obtained by introducing the GSCs. For inhomogeneous layered structures, three numerical methods are employed: the staircase approximation method, the power series expansion method, and the differential equation based on the GSCs. We investigate the accuracies and convergence behaviors of these methods by comparing their predictions to the exact results. The conclusions are as follows. The staircase approximation method has a slow convergence in spite of its simple and intuitive implementation, and a fine stratification within the inhomogeneous layer is required for obtaining accurate results. The expansion method results are sensitive to the expansion order, and the treatment becomes very complicated for relatively complex configurations, which restricts its applicability. By contrast, the GSC-based differential equation possesses a simple implementation while providing fast and accurate results.
Analytic Formulation and Numerical Implementation of an Acoustic Pressure Gradient Prediction
NASA Technical Reports Server (NTRS)
Lee, Seongkyu; Brentner, Kenneth S.; Farassat, Fereidoun
2007-01-01
The scattering of rotor noise is an area that has received little attention over the years, yet the limited work that has been done has shown that both the directivity and intensity of the acoustic field may be significantly modified by the presence of scattering bodies. One of the inputs needed to compute the scattered acoustic field is the acoustic pressure gradient on a scattering surface. Two new analytical formulations of the acoustic pressure gradient have been developed and implemented in the PSU-WOPWOP rotor noise prediction code. These formulations are presented in this paper. The first formulation is derived by taking the gradient of Farassat's retarded-time Formulation 1A. Although this formulation is relatively simple, it requires numerical time differentiation of the acoustic integrals. In the second formulation, the time differentiation is taken inside the integrals analytically. The acoustic pressure gradient predicted by these new formulations is validated through comparison with the acoustic pressure gradient determined by a purely numerical approach for two model rotors. The agreement between the analytic formulations and the numerical method is excellent for both stationary and moving observer cases.
Analytic Formulation and Numerical Implementation of an Acoustic Pressure Gradient Prediction
NASA Technical Reports Server (NTRS)
Lee, Seongkyu; Brentner, Kenneth S.; Farassat, F.; Morris, Philip J.
2008-01-01
Two new analytical formulations of the acoustic pressure gradient have been developed and implemented in the PSU-WOPWOP rotor noise prediction code. The pressure gradient is a key ingredient for imposing the boundary condition in acoustic scattering problems. The first formulation is derived from the gradient of the Ffowcs Williams-Hawkings (FW-H) equation and has a form involving the observer time differentiation outside the integrals. In the second formulation, the time differentiation is taken inside the integrals analytically. This formulation avoids the numerical time differentiation with respect to the observer time, which is computationally more efficient. The acoustic pressure gradient predicted by these new formulations is validated through comparison with available exact solutions for stationary and moving monopole sources. The agreement between the predictions and the exact solutions is excellent. The formulations are then applied to rotor noise problems for two model rotors, and a purely numerical approach is compared with the analytical formulations. The agreement between the analytical formulations and the numerical method is excellent for both stationary and moving observer cases.
Numerical equilibrium analysis for structured consumer resource models.
de Roos, A M; Diekmann, O; Getto, P; Kirkilionis, M A
2010-02-01
In this paper, we present methods for a numerical equilibrium and stability analysis for models of a size-structured population competing for an unstructured resource. We concentrate on cases where two model parameters are free, so that existence boundaries for equilibria and stability boundaries can be defined in the (two-parameter) plane. We numerically trace these implicitly defined curves by alternating tangent prediction and Newton correction. Evaluation of the maps defining the curves involves integration over individual size and individual survival probability (and their derivatives) as functions of individual age. Such ingredients are often defined as solutions of ODEs, i.e., in general only implicitly. In our case, the right-hand sides of these ODEs feature discontinuities that are caused by an abrupt change of behavior at the size where juveniles are assumed to turn adult. We therefore combine the numerical solution of these ODEs with curve tracing methods. We have implemented the algorithms for "Daphnia consuming algae" models in C code. The results obtained by way of this implementation are shown in the form of graphs.
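The "tangent prediction / Newton correction" strategy named above can be sketched generically for any implicitly defined curve G(x) = 0 in a two-parameter plane. The toy curve below stands in for an existence or stability boundary; it is an illustration of the continuation idea, not the Daphnia-algae ingredients, which would require the ODE integrations described in the abstract.

```python
import numpy as np

def G(x):
    # Toy implicit curve: a unit circle standing in for an equilibrium boundary.
    return x[0]**2 + x[1]**2 - 1.0

def grad_G(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

def trace(x0, steps=200, h=0.05):
    """Alternate tangent prediction and Newton correction along G(x) = 0."""
    pts = [np.array(x0, float)]
    for _ in range(steps):
        x = pts[-1]
        g = grad_G(x)
        tangent = np.array([-g[1], g[0]])            # direction orthogonal to the gradient
        tangent /= np.linalg.norm(tangent)
        x_pred = x + h * tangent                     # tangent prediction
        for _ in range(10):                          # Newton correction back onto G = 0
            g = grad_G(x_pred)                       # (least-norm update along the gradient)
            x_pred -= G(x_pred) * g / np.dot(g, g)
            if abs(G(x_pred)) < 1e-12:
                break
        pts.append(x_pred)
    return np.array(pts)

curve = trace([1.0, 0.0])
print("max |G| along traced curve:", np.abs([G(p) for p in curve]).max())
```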
The CE/SE Method: a CFD Framework for the Challenges of the New Millennium
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Yu, Sheng-Tao
2001-01-01
The space-time conservation element and solution element (CE/SE) method, which was originated and is continuously being developed at NASA Glenn Research Center, is a high-resolution, genuinely multidimensional and unstructured-mesh compatible numerical method for solving conservation laws. Since its inception in 1991, the CE/SE method has been used to obtain highly accurate numerical solutions for 1D, 2D and 3D flow problems involving shocks, contact discontinuities, acoustic waves, vortices, shock/acoustic wave/vortex interactions, shock/boundary layer interactions and chemical reactions. Without the aid of preconditioning or other special techniques, it has been applied to both steady and unsteady flows with speeds ranging from Mach number = 0.00288 to 10. In addition, the method has unique features that allow for (i) the use of very simple non-reflecting boundary conditions, and (ii) a unified wall boundary treatment for viscous and inviscid flows. The CE/SE method was developed with the conviction that, with a solid foundation in physics, a robust, coherent and accurate numerical framework can be built without involving overly complex mathematics. As a result, the method was constructed using a set of design principles that facilitate simplicity, robustness and accuracy. The most important among them are: (i) enforcing both local and global flux conservation in space and time, with flux evaluation at an interface being an integral part of the solution procedure and requiring no interpolation or extrapolation; (ii) unifying space and time and treating them as a single entity; and (iii) requiring that a numerical scheme be built from a nondissipative core scheme such that the numerical dissipation can be effectively controlled and, as a result, will not overwhelm the physical dissipation. Part I of the workshop will be devoted to a discussion of these principles along with a description of how the 1D, 2D and 3D CE/SE schemes are constructed. In Part II, various applications of the CE/SE method, particularly those involving chemical reactions and acoustics, will be presented. The workshop will be concluded with a sketch of future research directions.
The upwind control volume scheme for unstructured triangular grids
NASA Technical Reports Server (NTRS)
Giles, Michael; Anderson, W. Kyle; Roberts, Thomas W.
1989-01-01
A new algorithm for the numerical solution of the Euler equations is presented. This algorithm is particularly suited to the use of unstructured triangular meshes, allowing geometric flexibility. Solutions are second-order accurate in the steady state. Implementation of the algorithm requires minimal grid connectivity information, resulting in modest storage requirements, and should enhance the implementation of the scheme on massively parallel computers. A novel form of upwind differencing is developed, and is shown to yield sharp resolution of shocks. Two new artificial viscosity models are introduced that enhance the performance of the new scheme. Numerical results for transonic airfoil flows are presented, which demonstrate the performance of the algorithm.
Jackson, M E; Gnadt, J W
1999-03-01
The object-oriented graphical programming language LabView was used to implement the numerical solution to a computational model of saccade generation in primates. The computational model simulates the activity and connectivity of anatomical structures known to be involved in saccadic eye movements. The LabView program provides a graphical user interface to the model that makes it easy to observe and modify the behavior of each element of the model. Essential elements of the source code of the LabView program are presented and explained. A copy of the model is available for download from the internet.
Phase Control in Nonlinear Systems
NASA Astrophysics Data System (ADS)
Zambrano, Samuel; Seoane, Jesús M.; Mariño, Inés P.; Sanjuán, Miguel A. F.; Meucci, Riccardo
The following sections are included: * Introduction * Phase Control of Chaos * Description of the model * Numerical exploration of phase control of chaos * Experimental evidence of phase control of chaos * Phase Control of Intermittency in Dynamical Systems * Crisis-induced intermittency and its control * Experimental setup and implementation of the phase control scheme * Phase control of the laser in the pre-crisis regime * Phase control of the intermittency after the crisis * Phase control of the intermittency in the quadratic map * Phase Control of Escapes in Open Dynamical Systems * Control of open dynamical systems * Model description * Numerical simulations and heuristic arguments * Experimental implementation in an electronic circuit * Conclusions and Discussions * Acknowledgments * References
magnum.fe: A micromagnetic finite-element simulation code based on FEniCS
NASA Astrophysics Data System (ADS)
Abert, Claas; Exl, Lukas; Bruckner, Florian; Drews, André; Suess, Dieter
2013-11-01
We have developed a finite-element micromagnetic simulation code based on the FEniCS package called magnum.fe. Here we describe the numerical methods that are applied as well as their implementation with FEniCS. We apply a transformation method for the solution of the demagnetization-field problem. A semi-implicit weak formulation is used for the integration of the Landau-Lifshitz-Gilbert equation. Numerical experiments show the validity of simulation results. magnum.fe is open source and well documented. The broad feature range of the FEniCS package makes magnum.fe a good choice for the implementation of novel micromagnetic finite-element algorithms.
Interface modeling in incompressible media using level sets in Escript
NASA Astrophysics Data System (ADS)
Gross, L.; Bourgouin, L.; Hale, A. J.; Mühlhaus, H.-B.
2007-08-01
We use a finite element (FEM) formulation of the level set method to model geological fluid flow problems involving interface propagation. Interface problems are ubiquitous in geophysics. Here we focus on a Rayleigh-Taylor instability, namely mantle plume evolution, and on the growth of lava domes. Both problems require the accurate description of the propagation of an interface between heavy and light materials (plume) or between highly viscous lava and low-viscosity air (lava dome), respectively. The implementation of the models is based on Escript, which is a Python module for the solution of partial differential equations (PDEs) using spatial discretization techniques such as FEM. It is designed to describe numerical models in the language of PDEs while using computational components implemented in C and C++ to achieve high performance for time-intensive numerical calculations. A critical step in the solution of geological flow problems is the solution of the velocity-pressure problem. We describe how the Escript module can be used for a high-level implementation of an efficient variant of the well-known Uzawa scheme. We begin with a brief outline of the Escript modules and then present illustrations of their usage for the numerical solution of the problems mentioned above.
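The Uzawa idea mentioned above can be shown at the algebraic level: iterate a velocity solve and a pressure update on the discrete saddle-point system. The dense random matrices below are stand-ins for the FEM operators Escript would assemble; this is the bare iteration, not the paper's PDE-level variant.

```python
import numpy as np

# Uzawa iteration for the saddle-point system
#   A u + B^T p = f,   B u = 0,
# the algebraic form of the velocity-pressure problem.
rng = np.random.default_rng(0)
n_u, n_p = 12, 4
M = rng.standard_normal((n_u, n_u))
A = M @ M.T + n_u * np.eye(n_u)          # symmetric positive definite "viscous" block
B = rng.standard_normal((n_p, n_u))      # full-rank divergence operator
f = rng.standard_normal(n_u)

p = np.zeros(n_p)
# Conservative step size: well below 2 / lambda_max(B A^{-1} B^T), so the iteration converges.
omega = 0.5 / np.linalg.norm(B @ np.linalg.solve(A, B.T), 2)
for it in range(5000):
    u = np.linalg.solve(A, f - B.T @ p)  # velocity solve for the current pressure
    r = B @ u                            # divergence residual
    p += omega * r                       # pressure update (gradient ascent on the dual)
    if np.linalg.norm(r) < 1e-8:
        break

print("iterations:", it + 1, " ||B u|| =", np.linalg.norm(B @ u))
```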
Benchmark of Ab Initio Bethe-Salpeter Equation Approach with Numeric Atom-Centered Orbitals
NASA Astrophysics Data System (ADS)
Liu, Chi; Kloppenburg, Jan; Kanai, Yosuke; Blum, Volker
The Bethe-Salpeter equation (BSE) approach based on the GW approximation has been shown to be successful for optical spectra prediction of solids and recently also of small molecules. We here present an all-electron implementation of the BSE using numeric atom-centered orbital (NAO) basis sets. In this work, we present a benchmark of the BSE implemented in FHI-aims for low-lying excitation energies of a set of small organic molecules, the well-known Thiel set. The difference between our implementation (using an analytic continuation of the GW self-energy on the real axis) and the results generated by a fully frequency-dependent GW treatment on the real axis is on the order of 0.07 eV for the benchmark molecular set. We study the convergence behavior to the complete basis set limit for excitation spectra, using a group of valence correlation consistent NAO basis sets (NAO-VCC-nZ), as well as standard NAO basis sets for ground state DFT with extended augmentation functions (NAO+aug). The BSE results and convergence behavior are compared to linear-response time-dependent DFT, where excellent numerical convergence is shown for NAO+aug basis sets.
Nonlinear Scaling Laws for Parametric Receiving Arrays. Part II. Numerical Analysis
1976-06-30
Excerpt from the report's numerical-methods section: it cites F. T. Krogh, "On testing a subroutine for the numerical integration of ordinary differential equations" (JPL Section 3U subroutine write-up, May 1969), refers to an entirely double-precision version with minor differences in usage, and notes that the ordinary differential equations may involve values of the dependent variables or of auxiliary functions, only the first two of these features being described in the write-up.
Chaotic structures of nonlinear magnetic fields. I - Theory. II - Numerical results
NASA Technical Reports Server (NTRS)
Lee, Nam C.; Parks, George K.
1992-01-01
A study of the evolutionary properties of nonlinear magnetic fields in flowing MHD plasmas is presented to illustrate that nonlinear magnetic fields may involve chaotic dynamics. It is shown how a suitable transformation of the coupled equations leads to Duffing's form, suggesting that the behavior of the general solution can also be chaotic. Numerical solutions of the nonlinear magnetic field equations that have been cast in the form of Duffing's equation are presented.
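Since the coupled equations are reduced to Duffing's form, their chaotic behavior can be illustrated with a standard numerical integration of a driven, damped Duffing oscillator. The parameter values below are common textbook choices used to exhibit chaotic motion, not values taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Driven, damped Duffing oscillator: x'' + delta x' + alpha x + beta x^3 = gamma cos(omega t).
delta, alpha, beta, gamma, omega = 0.3, -1.0, 1.0, 0.5, 1.2

def duffing(t, y):
    x, v = y
    return [v, -delta * v - alpha * x - beta * x**3 + gamma * np.cos(omega * t)]

T = 2 * np.pi / omega
sol = solve_ivp(duffing, (0.0, 400 * T), [0.1, 0.0],
                max_step=T / 200, rtol=1e-8, atol=1e-10)

# Strobe the trajectory once per driving period (Poincare section); a scattered
# cloud of points, rather than a few repeating ones, indicates chaotic motion.
t_strobe = np.arange(100, 400) * T
x_strobe = np.interp(t_strobe, sol.t, sol.y[0])
v_strobe = np.interp(t_strobe, sol.t, sol.y[1])
print(np.c_[x_strobe, v_strobe][:5])
```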
Analysis of the vibration environment induced on spacecraft components by hypervelocity impact
NASA Astrophysics Data System (ADS)
Pavarin, Daniele
2009-06-01
This paper reports the results achieved within the study "Spacecraft Disturbances from Hypervelocity Impact", performed by CISAS and Thales-Alenia Space Italia under European Space Agency contract. The research project investigated the perturbations produced on spacecraft internal components as a consequence of hypervelocity impacts of micrometeoroids and orbital debris on the external walls of the vehicle. The objectives of the study were: (i) to set up a general numerical/experimental procedure to investigate the vibration induced by hypervelocity impact; (ii) to analyze the GOCE mission in order to assess whether the vibration environment induced by the impact of orbital debris and micrometeoroids could jeopardize the mission. The research project was conducted both experimentally and numerically, performing a large number of impact tests on GOCE-like structural configurations and extrapolating the experimental results via numerical simulations based on hydrocode calculations, finite element analysis and statistical energy analysis. As a result, a database was established which correlates the impact conditions in the experimental range (0.6 to 2.3 mm projectiles at 2.5 to 5 km/s) with the shock spectra at selected locations on various types of structural models. The main outcomes of the study are: (i) a wide database reporting acceleration values over a wide range of impact conditions; (ii) a general numerical methodology to investigate disturbances induced by space debris and micrometeoroids on general satellite structures.
Failsafe automation of Phase II clinical trial interim monitoring for stopping rules.
Day, Roger S
2010-02-01
In Phase II clinical trials in cancer, preventing the treatment of patients on a study when current data demonstrate that the treatment is insufficiently active or too toxic has obvious benefits, both in protecting patients and in reducing sponsor costs. Considerable efforts have gone into experimental designs for Phase II clinical trials with flexible sample size, usually implemented by early stopping rules. The intended benefits will not ensue, however, if the design is not followed. Despite the best intentions, failures can occur for many reasons. The main goal is to develop an automated system for interim monitoring, as a backup system supplementing the protocol team, to ensure that patients are protected. A secondary goal is to stimulate timely recording of patient assessments. We developed key concepts and performance needs, then designed, implemented, and deployed a software solution embedded in the clinical trials database system. The system has been in place since October 2007. One clinical trial tripped the automated monitor, resulting in e-mails that initiated statistician/investigator review in a timely fashion. Several essential contributing activities still require human intervention, institutional policy decisions, and institutional commitment of resources. We believe that implementing the concepts presented here will provide greater assurance that interim monitoring plans are followed and that patients are protected from inadequate response or excessive toxicity. This approach may also facilitate wider acceptance and quicker implementation of new interim monitoring algorithms.
ERIC Educational Resources Information Center
BIVONA, WILLIAM A.
A set of guidelines for implementing and operating a replica of a prototype Selective Dissemination of Information (SDI) system tested at U.S. Army Natick Laboratories, and reported in LI 000 273, is given in this manual. Information is supplied which is useful in the initial stages of implementation. The application of specific criteria for…
Integration of Interactive Interfaces with Intelligent Tutoring Systems: An Implementation
1993-09-01
Vasandani, Vijay; Govindaraj, T. (report AD-A273 869; contract N00014-87-K-0482; cited references include Intelligent Tutoring Systems: At the Crossroads of Artificial Intelligence and Education, Ablex Publishing Corp., Norwood, NJ, and Goldstein, I. L., 1986).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hueda, A.U.; Perez, B.L.; Jodra, L.G.
1960-01-01
A presentation is made of calculation methods for ion-exchange installations based on kinetic considerations and on similarity with other unit operations. Factors to be obtained experimentally, as well as difficulties which may occur in their determination, are also given. The calculation procedures most commonly used in industry are included and explained with the numerical resolution of a problem of water demineralization. (auth)
The stellar content of 30 Doradus
NASA Technical Reports Server (NTRS)
Walborn, N. R.
1984-01-01
The components of the supergiant H II region Tarantula are surveyed, noting that 30 Doradus is really only the most active section of the Large Magellanic Cloud. The region contains at least 40 WR stars and numerous non-H II region late spectral type supergiants. Most of the stars are centrally located and presumably feed on the nebulosity. The closeness of the population will require fine spectroscopic scans of all the members to achieve accurate typing. Although the population is mixed, the ionizing radiation emitted by the region is consistent with its classification as part of the H II region. Finally, the brightest objects within Tarantula are suspected of being multiple systems.
Neuro-evolutionary computing paradigm for Painlevé equation-II in nonlinear optics
NASA Astrophysics Data System (ADS)
Ahmad, Iftikhar; Ahmad, Sufyan; Awais, Muhammad; Ul Islam Ahmad, Siraj; Asif Zahoor Raja, Muhammad
2018-05-01
The aim of this study is to investigate the numerical treatment of the Painlevé equation-II arising in physical models of nonlinear optics through artificial intelligence procedures, by incorporating a single-layer neural network structure optimized with genetic algorithms, sequential quadratic programming and active set techniques. We constructed a mathematical model for the nonlinear Painlevé equation-II with the help of the networks by defining an error-based cost function in the mean square sense. The performance of the proposed technique is validated through statistical analyses by means of a one-way ANOVA test conducted on a dataset generated by a large number of independent runs.
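The mean-square residual cost described above can be sketched for Painlevé II, u'' = 2u^3 + xu + alpha, with a small sigmoidal trial expansion. The alpha value, initial conditions, network size, and the use of SciPy's BFGS (in place of the GA/SQP/active-set hybrid of the study) are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Trial solution u_hat(x) = sum_i a_i * sigmoid(w_i x + b_i) fitted by minimizing the
# mean-square Painleve II residual plus a penalty enforcing assumed conditions
# u(0) = 0.5, u'(0) = 0.0.
alpha = 0.0
x = np.linspace(0.0, 1.0, 40)
m = 6                                     # number of hidden neurons

def unpack(theta):
    return theta[:m], theta[m:2 * m], theta[2 * m:]

def model(theta, x):
    a, w, b = unpack(theta)
    z = np.outer(x, w) + b
    s = 1.0 / (1.0 + np.exp(-z))          # sigmoid and its x-derivatives
    ds = s * (1 - s) * w
    d2s = s * (1 - s) * (1 - 2 * s) * w**2
    return s @ a, ds @ a, d2s @ a

def cost(theta):
    u, du, d2u = model(theta, x)
    residual = d2u - 2 * u**3 - x * u - alpha
    u0, du0, _ = model(theta, np.array([0.0]))
    return np.mean(residual**2) + (u0[0] - 0.5)**2 + (du0[0] - 0.0)**2

rng = np.random.default_rng(1)
res = minimize(cost, rng.standard_normal(3 * m), method="BFGS")
print("final cost:", res.fun)
```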
Rangarajan, Srinivas; Maravelias, Christos T.; Mavrikakis, Manos
2017-11-09
Here, we present a general optimization-based framework for (i) ab initio and experimental data driven mechanistic modeling and (ii) optimal catalyst design of heterogeneous catalytic systems. Both cases are formulated as a nonlinear optimization problem that is subject to a mean-field microkinetic model and thermodynamic consistency requirements as constraints, for which we seek sparse solutions through a ridge (L2 regularization) penalty. The solution procedure involves an iterative sequence of forward simulation of the differential algebraic equations pertaining to the microkinetic model using a numerical tool capable of handling stiff systems, sensitivity calculations using linear algebra, and gradient-based nonlinear optimization. A multistart approach is used to explore the solution space, and a hierarchical clustering procedure is implemented for statistically classifying potentially competing solutions. An example of methanol synthesis through hydrogenation of CO and CO2 on a Cu-based catalyst is used to illustrate the framework. The framework is fast, is robust, and can be used to comprehensively explore the model solution and design space of any heterogeneous catalytic system.
NASA Technical Reports Server (NTRS)
Nguyen, H. Lee; Wey, Ming-Jyh
1990-01-01
Two-dimensional calculations were made of spark-ignited premixed-charge combustion and direct-injection stratified-charge combustion in gasoline-fueled piston engines. Results are obtained using either a kinetics-controlled combustion submodel governed by a four-step global chemical reaction or a hybrid laminar-kinetics/mixing-controlled combustion submodel that accounts for laminar kinetics and turbulent mixing effects. The numerical solutions are obtained using the KIVA-2 computer code, which uses a kinetics-controlled combustion submodel governed by a four-step global chemical reaction (i.e., it assumes that the mixing time is smaller than the chemical time). A hybrid laminar/mixing-controlled combustion submodel was implemented into KIVA-2. In this model, chemical species approach their thermodynamic equilibrium with a rate that is a combination of the turbulent-mixing time and the chemical-kinetics time. The combination is formed in such a way that the longer of the two times has more influence on the conversion rate and the energy release. An additional element of the model is that the laminar-flame kinetics strongly influence the early flame development following ignition.
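The "longer of the two times dominates" blending can be shown with a tiny characteristic-time rate sketch; the additive combination of time scales below is an illustrative choice, not KIVA-2's exact expression.

```python
import numpy as np

def conversion_rate(y, y_eq, tau_chem, tau_mix):
    """Relax species mass fraction y toward equilibrium y_eq with a rate set by a
    combined characteristic time in which the longer of the chemical-kinetics and
    turbulent-mixing times dominates (illustrative additive blending)."""
    tau_eff = tau_chem + tau_mix          # the slower process controls tau_eff
    return -(y - y_eq) / tau_eff

# Fast chemistry, slow mixing -> the rate is mixing-limited.
print(conversion_rate(y=0.06, y_eq=0.01, tau_chem=1e-5, tau_mix=2e-3))
# Slow chemistry, fast mixing -> the rate is kinetics-limited.
print(conversion_rate(y=0.06, y_eq=0.01, tau_chem=5e-3, tau_mix=1e-4))
```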
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rangarajan, Srinivas; Maravelias, Christos T.; Mavrikakis, Manos
Here, we present a general optimization-based framework for (i) ab initio and experimental data driven mechanistic modeling and (ii) optimal catalyst design of heterogeneous catalytic systems. Both cases are formulated as a nonlinear optimization problem that is subject to a mean-field microkinetic model and thermodynamic consistency requirements as constraints, for which we seek sparse solutions through a ridge (L2 regularization) penalty. The solution procedure involves an iterative sequence of forward simulation of the differential algebraic equations pertaining to the microkinetic model using a numerical tool capable of handling stiff systems, sensitivity calculations using linear algebra, and gradient-based nonlinear optimization. A multistart approach is used to explore the solution space, and a hierarchical clustering procedure is implemented for statistically classifying potentially competing solutions. An example of methanol synthesis through hydrogenation of CO and CO2 on a Cu-based catalyst is used to illustrate the framework. The framework is fast, is robust, and can be used to comprehensively explore the model solution and design space of any heterogeneous catalytic system.
Least-dependent-component analysis based on mutual information
NASA Astrophysics Data System (ADS)
Stögbauer, Harald; Kraskov, Alexander; Astakhov, Sergey A.; Grassberger, Peter
2004-12-01
We propose to use precise estimators of mutual information (MI) to find the least dependent components in a linearly mixed signal. On the one hand, this seems to lead to better blind source separation than with any other presently available algorithm. On the other hand, it has the advantage, compared to other implementations of “independent” component analysis (ICA), some of which are based on crude approximations for MI, that the numerical values of the MI can be used for (i) estimating residual dependencies between the output components; (ii) estimating the reliability of the output by comparing the pairwise MIs with those of remixed components; and (iii) clustering the output according to the residual interdependencies. For the MI estimator, we use a recently proposed k -nearest-neighbor-based algorithm. For time sequences, we combine this with delay embedding, in order to take into account nontrivial time correlations. After several tests with artificial data, we apply the resulting MILCA (mutual-information-based least dependent component analysis) algorithm to a real-world dataset, the ECG of a pregnant woman.
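The pairwise MI estimates at the heart of this approach can be illustrated with a k-nearest-neighbor estimator; the sketch below uses scikit-learn's implementation (in the spirit of the Kraskov-type estimator the abstract relies on) rather than the authors' MILCA code, and the sources and mixing matrix are made up.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

# Estimate pairwise mutual information between two linearly mixed signals with a
# k-nearest-neighbor estimator; MILCA itself would then rotate the components to
# minimize these MIs.
rng = np.random.default_rng(0)
s1 = rng.laplace(size=5000)                    # independent non-Gaussian sources
s2 = rng.uniform(-1, 1, size=5000)
A = np.array([[1.0, 0.6], [0.4, 1.0]])         # mixing matrix
x1, x2 = A @ np.vstack([s1, s2])

mi_sources = mutual_info_regression(s1.reshape(-1, 1), s2, n_neighbors=4)[0]
mi_mixtures = mutual_info_regression(x1.reshape(-1, 1), x2, n_neighbors=4)[0]
print(f"MI(sources)  ~ {mi_sources:.3f} nats (should be close to 0)")
print(f"MI(mixtures) ~ {mi_mixtures:.3f} nats (mixing introduces dependence)")
```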
Bassi, Gabriele; Blednykh, Alexei; Smalyuk, Victor
2016-02-24
A novel algorithm for self-consistent simulations of long-range wakefield effects has been developed and applied to the study of both longitudinal and transverse coupled-bunch instabilities at NSLS-II. The algorithm is implemented in the new parallel tracking code space (self-consistent parallel algorithm for collective effects) discussed in the paper. The code is applicable for accurate beam dynamics simulations in cases where both bunch-to-bunch and intrabunch motions need to be taken into account, such as chromatic head-tail effects on the coupled-bunch instability of a beam with a nonuniform filling pattern, or multibunch and single-bunch effects of a passive higher-harmonic cavity. The numerical simulations have been compared with analytical studies. For a beam with an arbitrary filling pattern, intensity-dependent complex frequency shifts have been derived starting from a system of coupled Vlasov equations. The analytical formulas and numerical simulations confirm that the analysis is reduced to the formulation of an eigenvalue problem based on the known formulas of the complex frequency shifts for the uniform filling pattern case.
Macrae, Duncan J
2007-05-01
Numerous bodies from many countries, including governments, government regulatory departments, research organizations, medical professional bodies, and health care providers, have issued guidance or legislation on the ethical conduct of clinical trials. It is possible to trace the development of current guidelines back to the post-World War II Nuremberg war crimes trials, more specifically the "Doctors' Trial." From that trial emerged the Nuremberg Code, which set out basic principles to be observed when conducting research involving human subjects and which subsequently formed the basis for comprehensive international guidelines on medical research, such as the Declaration of Helsinki. Most recently, the Council for International Organizations and Medical Sciences (CIOMS) produced detailed guidelines (originally published in 1993 and updated in 2002) on the implementation of the principles outlined in the Declaration of Helsinki. The CIOMS guidelines set in an appropriate context the challenges of present-day clinical research, addressing complex issues including HIV/AIDS research, availability of study treatments after a study ends, women as research subjects, safeguarding confidentiality, and compensation for adverse events, as well as guidelines on consent.
NASA Astrophysics Data System (ADS)
Atmani, O.; Abbès, B.; Abbès, F.; Li, Y. M.; Batkam, S.
2018-05-01
Thermoforming of high impact polystyrene (HIPS) sheets requires technical knowledge of material behavior, mold type, mold material, and process variables. Accurate thermoforming simulations are needed in the optimization process, and determining the behavior of the material under thermoforming conditions is one of the key ingredients of an accurate simulation. The aim of this work is to identify the thermomechanical behavior of HIPS under thermoforming conditions. HIPS behavior is highly dependent on temperature and strain rate. In order to reproduce the behavior of such a material, a thermo-elasto-viscoplastic constitutive law was implemented in the finite element code ABAQUS. The proposed model parameters are considered to be thermo-dependent. The strain-dependence effect is introduced using Prony series. Tensile tests were carried out at different temperatures and strain rates, and the material parameters were then identified using an NSGA-II algorithm. To validate the rheological model, experimental blowing tests were carried out on a thermoforming pilot machine. To compare the numerical results with the experimental ones, the thickness distribution and the bubble shape were investigated.
NASA Astrophysics Data System (ADS)
Lombard, Bruno; Maurel, Agnès; Marigo, Jean-Jacques
2017-04-01
Homogenization of a thin micro-structure yields effective jump conditions that incorporate the geometrical features of the scatterers. These jump conditions apply across a thin but nonzero thickness interface whose interior is disregarded. This paper aims (i) to propose a numerical method able to handle the jump conditions in order to simulate the homogenized problem in the time domain, (ii) to inspect the validity of the homogenized problem when compared to the real one. For this purpose, we adapt the Explicit Simplified Interface Method originally developed for standard jump conditions across a zero-thickness interface. Doing so allows us to handle arbitrary-shaped interfaces on a Cartesian grid with the same efficiency and accuracy of the numerical scheme than those obtained in a homogeneous medium. Numerical experiments are performed to test the properties of the numerical method and to inspect the validity of the homogenization problem.
77 FR 20656 - Postal Service Classification and Price Adjustments
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-05
... noticing a recently-filed Postal Service notice announcing its intent to implement Picture Permit Imprint... implement Picture Permit Imprint Indicia as price categories for First-Class Mail and Standard Mail letters... Dominant Classification and Price Changes for Picture Permit Imprint Indicia, March 28, 2012 (Notice). II...
75 FR 13336 - Notice of Passenger Facility Charge (PFC) Approvals and Disapprovals
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-19
... Approved for Collection at Key West International Airport (EYW) and Use at EYW: Runway safety area design. Runway safety area construction. Approach clearing--design. Runway obstruction clearing--design. Runway obstruction clearing, phase II--construction. Noise implementation plan, phase 6--design. Noise implementation...
CBES--An Efficient Implementation of the Coursewriter Language.
ERIC Educational Resources Information Center
Franks, Edward W.
An extensive computer based education system (CBES) built around the IBM Coursewriter III program product at Ohio State University is described. In this system, numerous extensions have been added to the Coursewriter III language to provide capabilities needed to implement sophisticated instructional strategies. CBES design goals include lower CPU…
Numerical Modelling of Ground Penetrating Radar Antennas
NASA Astrophysics Data System (ADS)
Giannakis, Iraklis; Giannopoulos, Antonios; Pajewski, Lara
2014-05-01
Numerical methods are needed in order to solve Maxwell's equations in complicated and realistic problems. Over the years a number of numerical methods have been developed to do so. Amongst them the most popular are the finite element method, implicit finite-difference techniques, frequency-domain solution of the Helmholtz equation, the method of moments, and the transmission line matrix method. However, the finite-difference time-domain (FDTD) method is considered to be one of the most attractive choices, basically because of its simplicity, speed and accuracy. FDTD was first introduced in 1966 by Kane Yee. Since then, FDTD has been established and developed into a very rigorous and well defined numerical method for solving Maxwell's equations. Its order characteristics, accuracy and limitations are rigorously and mathematically defined. This makes FDTD reliable and easy to use. Numerical modelling of Ground Penetrating Radar (GPR) is a very useful tool which can give us insight into the scattering mechanisms and can also be used as an alternative approach to aid data interpretation. Numerical modelling has been used in a wide range of GPR applications including archeology, geophysics, forensics, landmine detection, etc. In engineering, applications of numerical modelling include estimating the effectiveness of GPR for detecting voids in bridges, detecting metal bars in concrete, estimating shielding effectiveness, etc. The main challenges in numerical modelling of GPR for engineering applications are (A) the realistic implementation of the dielectric properties of the media (soils, concrete, etc.), (B) the implementation of the geometry of the media (soil inhomogeneities, rough surfaces, vegetation, concrete features like fractures and rock fragments, etc.), and (C) the detailed modelling of the antenna units. The main focus of this work (which is part of the COST Action TU1208) is the accurate and realistic implementation of GPR antenna units in the FDTD model. Accurate models based on the general characteristics of the commercial GSSI 1.5 GHz and MALA 1.2 GHz antennas have already been incorporated in GprMax, a free software package which solves Maxwell's equations using an FDTD algorithm of second order in space and time. This work presents the implementation of horn antennas with different parameters, as well as ridged horn antennas, in this FDTD model, and their effectiveness is tested in realistically modelled situations. Accurate models of soils and concrete are used to test and compare different antenna units. Stochastic methods are used in order to realistically simulate the geometrical characteristics of the media. Regarding the dielectric properties, Debye approximations are incorporated in order to realistically simulate the dielectric properties of the media over the frequency range of interest.
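At the core of any FDTD GPR model is the leapfrog Yee update of staggered E and H fields. The minimal 1D vacuum sketch below shows only that update structure; the full 3D kernels, Debye-dispersive soils and detailed antenna geometry of a code such as GprMax are built on top of it, and the grid and pulse parameters here are arbitrary.

```python
import numpy as np

# Minimal 1D FDTD (Yee) update: E and H are staggered in space and leapfrogged in time.
# A Gaussian pulse is injected with a soft source and propagates through vacuum.
c0 = 299792458.0
mu0 = 4e-7 * np.pi
eps0 = 8.854e-12
nz, nsteps = 400, 900
dz = 2e-3
dt = 0.5 * dz / c0                      # Courant-stable time step

Ex = np.zeros(nz)
Hy = np.zeros(nz - 1)
src = nz // 4

for n in range(nsteps):
    Hy += dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])           # update H from the curl of E
    Ex[1:-1] += dt / (eps0 * dz) * (Hy[1:] - Hy[:-1])    # update E from the curl of H
    Ex[src] += np.exp(-((n - 60) / 20.0) ** 2)           # soft Gaussian source

print("peak |Ex| away from the source:", np.abs(Ex[src + 50:]).max())
```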
NASA Astrophysics Data System (ADS)
Isobe, Masaharu
Hard sphere/disk systems are among the simplest models and have been used to address numerous fundamental problems in the field of statistical physics. The pioneering numerical works on the solid-fluid phase transition based on Monte Carlo (MC) and molecular dynamics (MD) methods published in 1957 represent historical milestones, which have had a significant influence on the development of computer algorithms and novel tools to obtain physical insights. This chapter addresses Alder's breakthrough works on hard sphere/disk simulation: (i) event-driven molecular dynamics, (ii) the long-time tail, (iii) the molasses tail, and (iv) two-dimensional melting/crystallization. From a numerical viewpoint, there are serious issues that must be overcome for further breakthroughs. Here, we present a brief review of recent progress in this area.
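The elementary primitive of event-driven MD is the pair collision time, obtained by solving a quadratic in the relative coordinates; a small self-contained sketch (with an arbitrary two-disk example) follows.

```python
import numpy as np

def pair_collision_time(r1, v1, r2, v2, sigma):
    """Time until two hard disks of diameter sigma collide, or inf if they never do.

    Solves |(r2 - r1) + t (v2 - v1)| = sigma for the smallest positive t --
    the elementary event computed over and over in event-driven MD.
    """
    dr = np.asarray(r2, float) - np.asarray(r1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    b = np.dot(dr, dv)
    if b >= 0.0:                       # disks are not approaching each other
        return np.inf
    dv2 = np.dot(dv, dv)
    disc = b * b - dv2 * (np.dot(dr, dr) - sigma**2)
    if disc < 0.0:                     # they miss each other
        return np.inf
    return (-b - np.sqrt(disc)) / dv2

# Two disks on a head-on course: centres 1.0 apart, closing speed 0.5, diameter 0.2.
print(pair_collision_time([0.0, 0.0], [0.25, 0.0], [1.0, 0.0], [-0.25, 0.0], 0.2))
# Expected: (1.0 - 0.2) / 0.5 = 1.6
```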
Anthro-Centric Multisensory Interface for Vision Augmentation/Substitution (ACMI-VAS)
2014-02-01
Argus™ I and II Retinal Prosthesis System epiretinal microelectrode arrays (Second Sight Medical Products, Inc., Sylmar, CA) recently approved for use in... [Figure captions: colour photo of the Argus II epiretinal prosthesis secured to the retina with a retinal tack; subject using the Argus II device perform...] ...in the environment. Alternatively, we have also implemented a touch screen mechanism that allows the user to feel the pixels under his or her
Non-hydrostatic semi-elastic hybrid-coordinate SISL extension of HIRLAM. Part II: numerical testing
NASA Astrophysics Data System (ADS)
Rõõm, Rein; Männik, Aarne; Luhamaa, Andres; Zirk, Marko
2007-10-01
The semi-implicit semi-Lagrangian (SISL), two-time-level, non-hydrostatic numerical scheme, based on the non-hydrostatic, semi-elastic pressure-coordinate equations, is tested in model experiments with flow over given orography (an elliptical hill, a mountain ridge, a system of successive ridges) in a rectangular domain, with emphasis on numerical accuracy and the capability to represent non-hydrostatic effects. Comparison demonstrates good (for strong primary wave generation) to satisfactory (for weak secondary wave reproduction in some cases) consistency of the numerical modelling results with known stationary linear test solutions. The numerical stability of the developed model is investigated with respect to the choice of reference state, modelling the dynamics of a stationary front. The horizontally area-mean reference temperature proves to be the optimal guarantee of stability. The numerical scheme with an explicit residual in the vertical forcing term becomes unstable for cross-frontal temperature differences exceeding 30 K. Stability is restored if the vertical forcing is treated implicitly, which enables the use of time steps comparable with those of the hydrostatic SISL.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-09
... Mississippi Department of Environmental Quality (MDEQ), on July 13, 2012, for parallel processing. This... of Contents I. What is parallel processing? II. Background III. What elements are required under... Executive Order Reviews I. What is parallel processing? Consistent with EPA regulations found at 40 CFR Part...
34 CFR 200.53 - LEA corrective action.
Code of Federal Regulations, 2011 CFR
2011-07-01
... SEA to identify an LEA for corrective action; and (ii) Any underlying staffing, curriculum, or other problems in the LEA; (2) Is designed to meet the goal that each group of students described in § 200.13(b... programmatic funds or reduce administrative funds. (ii) Institute and fully implement a new curriculum based on...
34 CFR 200.53 - LEA corrective action.
Code of Federal Regulations, 2010 CFR
2010-07-01
... SEA to identify an LEA for corrective action; and (ii) Any underlying staffing, curriculum, or other problems in the LEA; (2) Is designed to meet the goal that each group of students described in § 200.13(b... programmatic funds or reduce administrative funds. (ii) Institute and fully implement a new curriculum based on...
Meeting the New College Composition II Course Goals through Original Research
ERIC Educational Resources Information Center
Tompkins, Patrick
2007-01-01
In a College Composition II (ENG 112) class offered during the spring of 2005 at John Tyler Community College (JTCC), the author implemented an information-literacy curriculum whose salient feature is that students collaborated on a semester-long, original-research project. The Metro Richmond Supermarket Price Comparison provided a focused,…
FORUM - FutureTox II: In vitro Data and In Silico Models for Predictive Toxicology
FutureTox II, a Society of Toxicology Contemporary Concepts in Toxicology workshop, was held in January, 2014. The meeting goals were to review and discuss the state of the science in toxicology in the context of implementing the NRC 21st century vision of predicting in vivo resp...
34 CFR 636.21 - What selection criteria does the Secretary use to evaluate an application?
Code of Federal Regulations, 2011 CFR
2011-07-01
...) Agencies of local government. (ii) Public and private elementary and secondary schools. (iii) Business... implementation strategy for each key project component activity is— (i) Comprehensive; (ii) Based on a sound... operation; (5) Describe a time-line chart that relates key evaluation processes and benchmarks to other...
34 CFR 636.21 - What selection criteria does the Secretary use to evaluate an application?
Code of Federal Regulations, 2014 CFR
2014-07-01
...) Agencies of local government. (ii) Public and private elementary and secondary schools. (iii) Business... implementation strategy for each key project component activity is— (i) Comprehensive; (ii) Based on a sound... operation; (5) Describe a time-line chart that relates key evaluation processes and benchmarks to other...
34 CFR 636.21 - What selection criteria does the Secretary use to evaluate an application?
Code of Federal Regulations, 2013 CFR
2013-07-01
...) Agencies of local government. (ii) Public and private elementary and secondary schools. (iii) Business... implementation strategy for each key project component activity is— (i) Comprehensive; (ii) Based on a sound... operation; (5) Describe a time-line chart that relates key evaluation processes and benchmarks to other...
34 CFR 636.21 - What selection criteria does the Secretary use to evaluate an application?
Code of Federal Regulations, 2012 CFR
2012-07-01
...) Agencies of local government. (ii) Public and private elementary and secondary schools. (iii) Business... implementation strategy for each key project component activity is— (i) Comprehensive; (ii) Based on a sound... operation; (5) Describe a time-line chart that relates key evaluation processes and benchmarks to other...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-21
... implementing teacher testing for state certification or licensure, consistent with Title II of the Higher... DEPARTMENT OF EDUCATION [Docket No. ED-2013-ICCD-0040] Agency Information Collection Activities... of Funds Under Title II, Part A: Improving Teacher Quality State Grants--State-Level Activity Funds...
40 CFR 52.331 - Committal SIP for the Colorado Group II PM10 areas.
Code of Federal Regulations, 2013 CFR
2013-07-01
... PM10 areas. 52.331 Section 52.331 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Colorado § 52.331 Committal SIP for the Colorado Group II PM10 areas. On April 14, 1989, the Governor submitted a Committal SIP...
40 CFR 52.331 - Committal SIP for the Colorado Group II PM10 areas.
Code of Federal Regulations, 2014 CFR
2014-07-01
... PM10 areas. 52.331 Section 52.331 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Colorado § 52.331 Committal SIP for the Colorado Group II PM10 areas. On April 14, 1989, the Governor submitted a Committal SIP...
40 CFR 52.331 - Committal SIP for the Colorado Group II PM10 areas.
Code of Federal Regulations, 2012 CFR
2012-07-01
... PM10 areas. 52.331 Section 52.331 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Colorado § 52.331 Committal SIP for the Colorado Group II PM10 areas. On April 14, 1989, the Governor submitted a Committal SIP...
40 CFR 52.331 - Committal SIP for the Colorado Group II PM10 areas.
Code of Federal Regulations, 2011 CFR
2011-07-01
... PM10 areas. 52.331 Section 52.331 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Colorado § 52.331 Committal SIP for the Colorado Group II PM10 areas. On April 14, 1989, the Governor submitted a Committal SIP...
40 CFR 52.331 - Committal SIP for the Colorado Group II PM10 areas.
Code of Federal Regulations, 2010 CFR
2010-07-01
... PM10 areas. 52.331 Section 52.331 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Colorado § 52.331 Committal SIP for the Colorado Group II PM10 areas. On April 14, 1989, the Governor submitted a Committal SIP...
ERIC Educational Resources Information Center
Stanford, Linda
This course curriculum is intended for use by community college instructors and administrators in implementing an advanced information processing course. It builds on the skills developed in the previous information processing course but goes one step further by requiring students to perform in a simulated office environment and improve their…
DOT National Transportation Integrated Search
2016-03-07
Building on the success of developing a UAV based unpaved road assessment system in Phase I, the project team was awarded a Phase II project by the USDOT to focus on outreach and implementation. The project team added Valerie Lefler of Integrated Glo...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-07
... Redesignation Request ii. Subpart 4 Requirements and Illinois' Redesignation Request iii. Subpart 4 and Control... due to permanent and enforceable emission reductions? 1. Permanent and Enforceable Controls a. Federal Emission Control Measures i. Tier 2 Emission Standards for Vehicles and Gasoline Sulfur Standards ii. Heavy...
75 FR 72719 - Approval and Promulgation of Implementation Plans; Idaho
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-26
..., provisions relating to Tier 1 operating permits, facility emissions cap, standards of performance of certain... ENVIRONMENTAL PROTECTION AGENCY 40 CFR Part 52 [EPA-R10-OAR-2008-0482; FRL-9231-1] Approval and... Requirements 7/1/2002 for Tier II Operating Permits. 401 Tier II Operating Permit...... 4/6/2005 Except 401.01...