Sample records for computationally expensive numerical

  1. Multidisciplinary optimization of an HSCT wing using a response surface methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giunta, A.A.; Grossman, B.; Mason, W.H.

    1994-12-31

    Aerospace vehicle design is traditionally divided into three phases: conceptual, preliminary, and detailed. Each of these design phases entails a particular level of accuracy and computational expense. While there are several computer programs which perform inexpensive conceptual-level aircraft multidisciplinary design optimization (MDO), aircraft MDO remains prohibitively expensive using preliminary- and detailed-level analysis tools. This occurs due to the expense of computational analyses and because gradient-based optimization requires the analysis of hundreds or thousands of aircraft configurations to estimate design sensitivity information. A further hindrance to aircraft MDO is the problem of numerical noise which occurs frequently in engineering computations. Computer models produce numerical noise as a result of the incomplete convergence of iterative processes, round-off errors, and modeling errors. Such numerical noise is typically manifested as a high frequency, low amplitude variation in the results obtained from the computer models. Optimization attempted using noisy computer models may result in the erroneous calculation of design sensitivities and may slow or prevent convergence to an optimal design.

  2. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
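To make the cost comparison concrete: a one-sided finite-difference gradient needs one extra full analysis per design parameter, which is exactly the expense the GSE/modal approach avoids once two or more parameters are involved. A minimal sketch, with a toy objective standing in for the expensive coupled aero-structural solve (all names here are illustrative, not from the paper):

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient: n + 1 analyses for n design parameters.

    When each call to f is an expensive coupled aero-structural solve,
    this cost motivates the cheaper GSE/modal alternative.
    """
    x = np.asarray(x, dtype=float)
    f0 = f(x)
    grad = np.empty_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        grad[i] = (f(xp) - f0) / h
    return grad

# Toy stand-in for an expensive lift-coefficient analysis (hypothetical).
CL = lambda x: np.sin(x[0]) + 0.5 * x[1] ** 2
print(fd_gradient(CL, [0.3, 1.2]))   # approximately [cos(0.3), 1.2]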

  3. Numerical Optimization Using Computer Experiments

    NASA Technical Reports Server (NTRS)

    Trosset, Michael W.; Torczon, Virginia

    1997-01-01

    Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
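A minimal sketch of the idea — surrogate-guided search with kriging — under strong simplifying assumptions (one design variable, fixed kernel hyperparameters, plain grid search); it is not the authors' implementation:

```python
import numpy as np

def kriging_fit(X, y, length=0.3, jitter=1e-8):
    """Kriging (GP) interpolator with a fixed squared-exponential kernel.
    Hyperparameters are assumed constants; a real implementation would
    estimate them from the data."""
    K = np.exp(-0.5 * ((X[:, None] - X[None, :]) / length) ** 2)
    alpha = np.linalg.solve(K + jitter * np.eye(X.size), y)
    return lambda Xq: np.exp(
        -0.5 * ((Xq[:, None] - X[None, :]) / length) ** 2) @ alpha

def surrogate_grid_search(f, lo, hi, n_init=5, iters=10):
    """Each round: refit the surrogate on all known values, grid-search
    the surrogate, and spend one true (expensive) evaluation there."""
    X = np.linspace(lo, hi, n_init)
    y = np.array([f(x) for x in X])
    grid = np.linspace(lo, hi, 401)
    for _ in range(iters):
        xq = grid[np.argmin(kriging_fit(X, y)(grid))]
        X, y = np.append(X, xq), np.append(y, f(xq))
    return X[np.argmin(y)], y.min()

f = lambda x: (x - 0.7) ** 2 + 0.1 * np.sin(12 * x)   # toy objective
print(surrogate_grid_search(f, 0.0, 2.0))
```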

  4. Numerical Experiments with a Turbulent Single-Mode Rayleigh-Taylor Instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cloutman, L.D.

    2000-04-01

    Direct numerical simulation is a powerful tool for studying turbulent flows. Unfortunately, it is also computationally expensive and often beyond the reach of the largest, fastest computers. Consequently, a variety of turbulence models have been devised to allow tractable and affordable simulations of averaged flow fields. Unfortunately, these present a variety of practical difficulties, including the incorporation of varying degrees of empiricism and phenomenology, which leads to a lack of universality. This unsatisfactory state of affairs has led to the speculation that one can avoid the expense and bother of using a turbulence model by relying on the grid and numerical diffusion of the computational fluid dynamics algorithm to introduce a spectral cutoff on the flow field and to provide dissipation at the grid scale, thereby mimicking two main effects of a large eddy simulation model. This paper shows numerical examples of a single-mode Rayleigh-Taylor instability in which this procedure produces questionable results. We then show a dramatic improvement when two simple subgrid-scale models are employed. This study also illustrates the extreme sensitivity to initial conditions that is a common feature of turbulent flows.

  5. The Chemical Engineer's Toolbox: A Glass Box Approach to Numerical Problem Solving

    ERIC Educational Resources Information Center

    Coronell, Daniel G.; Hariri, M. Hossein

    2009-01-01

    Computer programming in undergraduate engineering education all too often begins and ends with the freshman programming course. Improvements in computer technology and curriculum revision have improved this situation, but often at the expense of the students' learning due to the use of commercial "black box" software. This paper describes the…

  6. Inner Space Perturbation Theory in Matrix Product States: Replacing Expensive Iterative Diagonalization.

    PubMed

    Ren, Jiajun; Yi, Yuanping; Shuai, Zhigang

    2016-10-11

    We propose an inner space perturbation theory (isPT) to replace the expensive iterative diagonalization in the standard density matrix renormalization group theory (DMRG). The retained reduced density matrix eigenstates are partitioned into an active and a secondary space. The first-order wave function and the second- and third-order energies are easily computed by using a one-step Davidson iteration. Our formulation has several advantages, including (i) keeping a balance between efficiency and accuracy, (ii) capturing more entanglement with the same amount of computational time, and (iii) recovering the standard DMRG when all the basis states belong to the active space. Numerical examples for the polyacenes and periacene show that the efficiency gain is considerable and the accuracy loss due to the perturbation treatment is very small when half of the total basis states belong to the active space. Moreover, the perturbation calculations converge in all our numerical examples.
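For orientation, the standard Rayleigh-Schrödinger expressions for the quantities the abstract mentions (first-order wave function, second-order energy) are sketched below; the notation is assumed, and the isPT-specific partitioning of the renormalized basis into active and secondary spaces is analogous but not reproduced here:

```latex
E^{(2)} = \sum_{k \neq 0}
  \frac{\left|\langle \psi_k^{(0)} \mid \hat{V} \mid \psi_0^{(0)} \rangle\right|^2}
       {E_0^{(0)} - E_k^{(0)}},
\qquad
\lvert \psi_0^{(1)} \rangle = \sum_{k \neq 0}
  \frac{\langle \psi_k^{(0)} \mid \hat{V} \mid \psi_0^{(0)} \rangle}
       {E_0^{(0)} - E_k^{(0)}}\, \lvert \psi_k^{(0)} \rangle
```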

  7. Numerical experience with a class of algorithms for nonlinear optimization using inexact function and gradient information

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low accuracy function and gradient values are frequently much less expensive to obtain than high accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm remains convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.
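A minimal trust-region sketch illustrating the point: the actual-versus-predicted reduction test lets the iteration make progress even when the gradient carries large relative error. The noise model and step rule here are assumptions for illustration, not Carter's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(grad, x, rel_err=0.5):
    """Exact gradient corrupted by 50% relative error, standing in for a
    cheap, low-accuracy sensitivity computation (noise model assumed)."""
    g = grad(x)
    noise = rng.standard_normal(g.size)
    return g + rel_err * np.linalg.norm(g) * noise / np.linalg.norm(noise)

def trust_region(f, grad, x, radius=1.0, iters=200):
    """Steepest-descent trust-region loop with the standard ratio test."""
    for _ in range(iters):
        g = noisy_grad(grad, x)
        p = -radius * g / (np.linalg.norm(g) + 1e-16)   # Cauchy-style step
        predicted = -g @ p                              # linear-model decrease
        rho = (f(x) - f(x + p)) / (predicted + 1e-16)   # actual / predicted
        if rho > 0.1:
            x = x + p                                   # accept the step
        if rho > 0.75:
            radius *= 2.0                               # model trusted: expand
        elif rho < 0.25:
            radius *= 0.5                               # model poor: shrink
    return x

rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
rosen_grad = lambda z: np.array([
    -2 * (1 - z[0]) - 400 * z[0] * (z[1] - z[0]**2),
    200 * (z[1] - z[0]**2)])
print(trust_region(rosen, rosen_grad, np.array([-1.2, 1.0])))
```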

  8. Multidisciplinary propulsion simulation using the numerical propulsion system simulator (NPSS)

    NASA Technical Reports Server (NTRS)

    Claus, Russel W.

    1994-01-01

    Implementing new technology in aerospace propulsion systems is becoming prohibitively expensive. One of the major contributions to the high cost is the need to perform many large scale system tests. The traditional design analysis procedure decomposes the engine into isolated components and focuses attention on each single physical discipline (e.g., fluid or structural dynamics). Consequently, the interactions that naturally occur between components and disciplines can be masked by the limited interactions that occur between individuals or teams doing the design and must be uncovered during expensive engine testing. This overview will discuss a cooperative effort of NASA, industry, and universities to integrate disciplines, components, and high performance computing into a Numerical Propulsion System Simulator (NPSS).

  9. A Parallel Numerical Algorithm To Solve Linear Systems Of Equations Emerging From 3D Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Wichert, Viktoria; Arkenberg, Mario; Hauschildt, Peter H.

    2016-10-01

    Highly resolved state-of-the-art 3D atmosphere simulations will remain computationally extremely expensive for years to come. In addition to the need for more computing power, rethinking coding practices is necessary. We take a dual approach by introducing especially adapted, parallel numerical methods and correspondingly parallelizing critical code passages. In the following, we present our respective work on PHOENIX/3D. With new parallel numerical algorithms, there is a big opportunity for improvement when iteratively solving the system of equations emerging from the operator splitting of the radiative transfer equation J = ΛS. The narrow-banded approximate Λ-operator Λ*, which is used in PHOENIX/3D, occurs in each iteration step. By implementing a numerical algorithm which takes advantage of its characteristic traits, the parallel code's efficiency is further increased and a speed-up in computational time can be achieved.
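The structure of such an iteration can be sketched compactly. Below is a generic accelerated Λ-iteration with a toy dense Λ and a diagonal Λ* (the real Λ* in PHOENIX/3D is narrow-banded, and the paper's solver exploits exactly that structure); all numbers and the source-function closure are illustrative assumptions:

```python
import numpy as np

n, eps = 200, 0.1                        # depth points; destruction probability
k = np.arange(n)
Lam = np.exp(-np.abs(k[:, None] - k[None, :]) / 5.0)
Lam /= Lam.sum(axis=1, keepdims=True)    # toy dense Lambda operator (assumed)
Lstar = np.diag(np.diag(Lam))            # approximate operator: diagonal "band"
B = np.linspace(0.1, 1.0, n)             # thermal source term (assumed profile)

# Two-level-atom closure S = eps*B + (1 - eps)*J with J = Lam S, split as
#   [I - (1 - eps) Lam*] S_new = eps*B + (1 - eps)(Lam - Lam*) S_old,
# so only the cheap approximate operator has to be inverted each step.
S = B.copy()
for it in range(400):
    rhs = eps * B + (1 - eps) * ((Lam - Lstar) @ S)
    S_new = np.linalg.solve(np.eye(n) - (1 - eps) * Lstar, rhs)
    if np.max(np.abs(S_new - S)) < 1e-8:
        break
    S = S_new
print(it, float((Lam @ S).mean()))       # converged mean radiation field J
```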

  10. Improving finite element results in modeling heart valve mechanics.

    PubMed

    Earl, Emily; Mohammadi, Hadi

    2018-06-01

    Finite element analysis is a well-established computational tool which can be used for the analysis of soft tissue mechanics. Due to the structural complexity of the leaflet tissue of the heart valve, the currently available finite element models do not adequately represent the leaflet tissue. A method of addressing this issue is to implement computationally expensive finite element models, characterized by precise constitutive models including high-order and high-density mesh techniques. In this study, we introduce a novel numerical technique that enhances the results obtained from coarse mesh finite element models to provide accuracy comparable to that of fine mesh finite element models while maintaining a relatively low computational cost. Introduced in this study is a method by which the computational expense required to solve linear and nonlinear constitutive models, commonly used in heart valve mechanics simulations, is reduced while continuing to account for large and infinitesimal deformations. This continuum model is developed based on the least square algorithm procedure coupled with the finite difference method adhering to the assumption that the components of the strain tensor are available at all nodes of the finite element mesh model. The suggested numerical technique is easy to implement, practically efficient, and requires less computational time compared to currently available commercial finite element packages such as ANSYS and/or ABAQUS.

  11. Computer numeric control generation of toric surfaces

    NASA Astrophysics Data System (ADS)

    Bradley, Norman D.; Ball, Gary A.; Keller, John R.

    1994-05-01

    Until recently, the manufacture of toric ophthalmic lenses relied largely upon expensive, manual techniques for generation and polishing. Recent gains in computer numeric control (CNC) technology and tooling enable lens designers to employ single- point diamond, fly-cutting methods in the production of torics. Fly-cutting methods continue to improve, significantly expanding lens design possibilities while lowering production costs. Advantages of CNC fly cutting include precise control of surface geometry, rapid production with high throughput, and high-quality lens surface finishes requiring minimal polishing. As accessibility and affordability increase within the ophthalmic market, torics promise to dramatically expand lens design choices available to consumers.

  12. Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.

    1993-01-01

    Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is invoked only to construct and validate a simplified, input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers is considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.

  13. A Fourier-based total-field/scattered-field technique for three-dimensional broadband simulations of elastic targets near a water-sand interface.

    PubMed

    Shao, Yu; Wang, Shumin

    2016-12-01

    The numerical simulation of acoustic scattering from elastic objects near a water-sand interface is critical to underwater target identification. Frequency-domain methods are computationally expensive, especially for large-scale broadband problems. A numerical technique is proposed to enable the efficient use of finite-difference time-domain method for broadband simulations. By incorporating a total-field/scattered-field boundary, the simulation domain is restricted inside a tightly bounded region. The incident field is further synthesized by the Fourier transform for both subcritical and supercritical incidences. Finally, the scattered far field is computed using a half-space Green's function. Numerical examples are further provided to demonstrate the accuracy and efficiency of the proposed technique.

  14. A DIY Ultrasonic Signal Generator for Sound Experiments

    ERIC Educational Resources Information Center

    Riad, Ihab F.

    2018-01-01

    Many physics departments around the world have electronic and mechanical workshops attached to them that can help build experimental setups and instruments for research and the training of undergraduate students. The workshops are usually run by experienced technicians and equipped with expensive lathing, computer numerical control (CNC) machines,…

  15. Application of multi-grid method on the simulation of incremental forging processes

    NASA Astrophysics Data System (ADS)

    Ramadan, Mohamad; Khaled, Mahmoud; Fourment, Lionel

    2016-10-01

    Numerical simulation has become essential in manufacturing large parts by incremental forging processes. It is a splendid tool for revealing the underlying physical phenomena, but behind the scenes an expensive bill must be paid: the computational time. Many techniques have therefore been developed to decrease the computational time of numerical simulation. The Multi-Grid method is a numerical procedure that reduces the computational time by performing the resolution of the system of equations on several meshes of decreasing size, which allows both the low- and high-frequency components of the solution to be smoothed faster. In this paper a Multi-Grid method is applied to a cogging process in the software Forge 3. The study is carried out using an increasing number of degrees of freedom. The results show that the calculation time is divided by two for a mesh of 39,000 nodes. The method is promising, especially if coupled with the Multi-Mesh method.
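The mechanism is easiest to see in a two-grid sketch, the simplest member of the Multi-Grid family: smooth on the fine mesh, solve the residual equation on a mesh of half the size, then prolongate the correction back. This 1D Poisson example is illustrative only, far simpler than a Forge 3 forging model:

```python
import numpy as np

def smooth(u, f, h, sweeps=3):
    """Damped Jacobi smoothing for -u'' = f with Dirichlet boundaries."""
    for _ in range(sweeps):
        u[1:-1] += 0.8 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def two_grid(u, f, h):
    """One cycle: pre-smooth, solve the residual equation on a mesh of
    half the size, prolongate the correction, post-smooth."""
    u = smooth(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2   # residual
    rc = r[::2].copy()                                          # restriction
    ec = np.zeros_like(rc)
    for _ in range(50):                                         # coarse solve
        ec = smooth(ec, rc, 2 * h, sweeps=1)
    e = np.zeros_like(u)                                        # prolongation
    e[::2] = ec
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, f, h)

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
f = np.pi**2 * np.sin(np.pi * x)        # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))   # error vs exact solution
```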

  16. Nonequilibrium hypersonic flows simulations with asymptotic-preserving Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Ren, Wei; Liu, Hong; Jin, Shi

    2014-12-01

    In rarefied gas dynamics, the DSMC method is one of the most popular numerical tools. It performs satisfactorily in simulating hypersonic flows surrounding re-entry vehicles and micro-/nano-flows. However, its computational cost becomes expensive, especially as Kn → 0. Even for flows in the near-continuum regime, pure DSMC simulations require considerable computational effort in most cases. Although several DSMC/NS hybrid methods have been proposed to deal with this, those methods still suffer from the boundary treatment, which may cause nonphysical solutions. Filbet and Jin [1] proposed a framework of new numerical methods for the Boltzmann equation, called asymptotic preserving (AP) schemes, whose computational costs remain affordable as Kn → 0. Recently, Ren et al. [2] realized the AP schemes with Monte Carlo methods (AP-DSMC), which have better performance than counterpart methods. In this paper, AP-DSMC is applied to simulating nonequilibrium hypersonic flows. Several numerical results are computed and analyzed to study the efficiency and capability of capturing complicated flow characteristics.

  17. An approach to the design of wide-angle optical systems with special illumination and IFOV requirements

    NASA Astrophysics Data System (ADS)

    Pravdivtsev, Andrey V.

    2012-06-01

    The article presents an approach to the design of wide-angle optical systems with special illumination and instantaneous field of view (IFOV) requirements. Unevenness of illumination reduces the dynamic range of the system, which negatively influences its ability to perform its task. The resulting illumination on the detector depends, among other factors, on IFOV variations. IFOV must also be considered in the synthesis of data-processing algorithms, as it directly affects the achievable signal-to-background ratio for statistically homogeneous backgrounds. A numerical-analytical approach that simplifies the design of wide-angle optical systems with special illumination and IFOV requirements is presented. The solution can be used for optical systems whose field of view exceeds 180 degrees. Illumination calculation in optical CAD is based on computationally expensive tracing of a large number of rays. The author proposes analytical expressions for some of the characteristics on which illumination depends. The remaining characteristics are determined numerically using less computationally expensive operands, and this calculation is not performed at every optimization step. The results of the analytical calculation are inserted into the merit function of the optical CAD optimizer. As a result, the optimizer load is reduced, since less computationally expensive operands are used. This reduces the time and resources required to develop a system with the desired characteristics. The proposed approach simplifies the creation and understanding of the requirements for the quality of the optical system and allows more efficient EOS to be created.

  18. Parallel implementation of geometrical shock dynamics for two dimensional converging shock waves

    NASA Astrophysics Data System (ADS)

    Qiu, Shi; Liu, Kuang; Eliasson, Veronica

    2016-10-01

    Geometrical shock dynamics (GSD) theory is an appealing method to predict shock motion in the sense that it is more computationally efficient than solving the traditional Euler equations, especially for converging shock waves. However, to solve and optimize large scale configurations, the main bottleneck is the computational cost. Among the existing numerical GSD schemes, only one has been implemented on parallel computers, with the purpose of analyzing detonation waves. To extend the computational advantage of the GSD theory to more general applications such as converging shock waves, a numerical implementation using a spatial decomposition method has been coupled with a front tracking approach on parallel computers. In addition, an efficient tridiagonal system solver for massively parallel computers has been applied to resolve the most expensive function in this implementation, resulting in an efficiency of 0.93 while using 32 HPCC cores. Moreover, symmetric boundary conditions have been developed to further reduce the computational cost, achieving a speedup of 19.26 for a 12-sided polygonal converging shock.
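For reference, the serial baseline that such a parallel tridiagonal solver replaces is the O(n) Thomas algorithm; its forward and backward sweeps are inherently sequential, which is why HPC implementations switch to approaches such as cyclic reduction. The sketch below is generic, not the authors' code:

```python
import numpy as np

def thomas(a, b, c, d):
    """Serial Thomas algorithm for a tridiagonal system.

    a: sub-diagonal (len n-1), b: diagonal (len n),
    c: super-diagonal (len n-1), d: right-hand side (len n).
    """
    n = len(b)
    cp, dp = np.empty(n - 1), np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Quick check against a dense solve.
n = 6
a, b, c = -np.ones(n - 1), 4 * np.ones(n), -np.ones(n - 1)
d = np.arange(1.0, n + 1)
A = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)
print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))
```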

  19. Fast non-overlapping Schwarz domain decomposition methods for solving the neutron diffusion equation

    NASA Astrophysics Data System (ADS)

    Jamelot, Erell; Ciarlet, Patrick

    2013-05-01

    Studying numerically the steady state of a nuclear core reactor is expensive, in terms of memory storage and computational time. In order to address both requirements, one can use a domain decomposition method, implemented on a parallel computer. We present here such a method for the mixed neutron diffusion equations, discretized with Raviart-Thomas-Nédélec finite elements. This method is based on the Schwarz iterative algorithm with Robin interface conditions to handle communications. We analyse this method from the continuous point of view to the discrete point of view, and we give some numerical results in a realistic highly heterogeneous 3D configuration. Computations are carried out with the MINOS solver of the APOLLO3® neutronics code. APOLLO3 is a registered trademark in France.

  20. The role of under-determined approximations in engineering and science application

    NASA Technical Reports Server (NTRS)

    Carpenter, William C.

    1992-01-01

    There is currently a great deal of interest in using response surfaces in the optimization of aircraft performance. The objective function and/or constraint equations involved in these optimization problems may come from numerous disciplines such as structures, aerodynamics, environmental engineering, etc. In each of these disciplines, the mathematical complexity of the governing equations usually dictates that numerical results be obtained from large computer programs such as a finite element method program. Thus, when performing optimization studies, response surfaces are a convenient way of transferring information from the various disciplines to the optimization algorithm as opposed to bringing all the sundry computer programs together in a massive computer code. Response surfaces offer another advantage in the optimization of aircraft structures. A characteristic of these types of optimization problems is that evaluation of the objective function and response equations (referred to as a functional evaluation) can be very expensive in a computational sense. Because of the computational expense in obtaining functional evaluations, the present study was undertaken to investigate under-determined approximations. An under-determined approximation is one in which there are fewer training pairs (pieces of information about a function) than there are undetermined parameters (coefficients or weights) associated with the approximation. Both polynomial approximations and neural net approximations were examined. Three main example problems were investigated: (1) a function of one design variable was considered; (2) a function of two design variables was considered; and (3) a 35 bar truss with 4 design variables was considered.
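A tiny sketch of what "under-determined" means in practice: with fewer training pairs than coefficients, a least-squares solver can still return a unique answer by picking the minimum-norm coefficient vector. The basis and data below are invented for illustration:

```python
import numpy as np

# Under-determined quadratic fit: 6 polynomial coefficients, only 4
# training pairs. np.linalg.lstsq returns the minimum-norm coefficient
# vector, one classical way to pin down the surplus parameters.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (4, 2))                       # 4 samples, 2 design vars
y = 1 + X[:, 0] - 2 * X[:, 1] + X[:, 0] * X[:, 1]    # "expensive" responses

# Full quadratic basis in 2 variables: 1, x1, x2, x1^2, x1*x2, x2^2.
A = np.column_stack([np.ones(4), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 0] * X[:, 1], X[:, 1]**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)    # interpolates the 4 pairs; remaining freedom is minimized
```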

  1. A partitioned model order reduction approach to rationalise computational expenses in nonlinear fracture mechanics

    PubMed Central

    Kerfriden, P.; Goury, O.; Rabczuk, T.; Bordas, S.P.A.

    2013-01-01

    We propose in this paper a reduced order modelling technique based on domain partitioning for parametric problems of fracture. We show that coupling domain decomposition and projection-based model order reduction permits focusing the numerical effort where it is most needed: around the zones where damage propagates. No a priori knowledge of the damage pattern is required, the extraction of the corresponding spatial regions being based solely on algebra. The efficiency of the proposed approach is demonstrated numerically with an example relevant to engineering fracture. PMID:23750055

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaurov, Alexander A., E-mail: kaurov@uchicago.edu

    The methods for studying the epoch of cosmic reionization vary from full radiative transfer simulations to purely analytical models. While numerical approaches are computationally expensive and are not suitable for generating many mock catalogs, analytical methods are based on assumptions and approximations. We explore the interconnection between both methods. First, we ask how the analytical framework of excursion set formalism can be used for statistical analysis of numerical simulations and visual representation of the morphology of ionization fronts. Second, we explore the methods of training the analytical model on a given numerical simulation. We present a new code which emerged from this study. Its main application is to match the analytical model with a numerical simulation. Then, it allows one to generate mock reionization catalogs with volumes exceeding the original simulation quickly and computationally inexpensively, meanwhile reproducing large-scale statistical properties. These mock catalogs are particularly useful for cosmic microwave background polarization and 21 cm experiments, where large volumes are required to simulate the observed signal.

  3. On Using Surrogates with Genetic Programming.

    PubMed

    Hildebrandt, Torsten; Branke, Jürgen

    2015-01-01

    One way to accelerate evolutionary algorithms with expensive fitness evaluations is to combine them with surrogate models. Surrogate models are efficiently computable approximations of the fitness function, derived by means of statistical or machine learning techniques from samples of fully evaluated solutions. But these models usually require a numerical representation, and therefore cannot be used with the tree representation of genetic programming (GP). In this paper, we present a new way to use surrogate models with GP. Rather than using the genotype directly as input to the surrogate model, we propose using a phenotypic characterization. This phenotypic characterization can be computed efficiently and allows us to define approximate measures of equivalence and similarity. Using a stochastic, dynamic job shop scenario as an example of simulation-based GP with an expensive fitness evaluation, we show how these ideas can be used to construct surrogate models and improve the convergence speed and solution quality of GP.
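A schematic of the proposed pre-screening: a rule's phenotype is the vector of decisions it makes on a fixed set of reference situations, and a nearest-neighbour lookup over already-evaluated rules predicts fitness. Everything below (encoding, distance, toy rules) is an illustrative assumption, not the paper's exact setup:

```python
import numpy as np

def phenotype(rule, situations):
    """Phenotypic characterization: the decision a rule makes in each of
    a fixed set of reference situations (encoding assumed)."""
    return np.array([rule(s) for s in situations])

def surrogate_screen(candidates, archive, situations, keep=2):
    """Rank unevaluated rules by the fitness of their nearest phenotypic
    neighbour among fully evaluated rules; only `keep` of them go on to
    the expensive job-shop simulation."""
    P = np.array([phenotype(r, situations) for r, _ in archive])
    fitness = np.array([f for _, f in archive])
    def predicted(rule):
        d = np.linalg.norm(P - phenotype(rule, situations), axis=1)
        return fitness[np.argmin(d)]          # 1-NN fitness estimate
    return sorted(candidates, key=predicted)[:keep]

# Toy "rules" are plain functions standing in for GP trees; lower is better.
sits = [np.array([1.0, 2.0]), np.array([0.5, -1.0]), np.array([3.0, 0.0])]
archive = [(lambda s: s[0] + s[1], 0.9), (lambda s: s[0] * s[1], 0.2)]
cands = [lambda s: 2 * s[0], lambda s: s[0] - s[1], lambda s: s[1]]
chosen = surrogate_screen(cands, archive, sits)
print([cands.index(r) for r in chosen])       # rules sent to the simulator
```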

  4. Parameter Estimation of Computationally Expensive Watershed Models Through Efficient Multi-objective Optimization and Interactive Decision Analytics

    NASA Astrophysics Data System (ADS)

    Akhtar, Taimoor; Shoemaker, Christine

    2016-04-01

    Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process, which include: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit measure / metric based interactive framework for identification of a small subset (typically less than 10) of meaningful and diverse calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual analytics framework for decision support in selection of one parameter combination from the alternatives identified in Stage 2. HAMS is applied for calibration of flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric based analytics can bridge the gap between automatic and manual strategies for parameter estimation of computationally expensive watershed models.

  5. Time-Accurate Solutions of Incompressible Navier-Stokes Equations for Potential Turbopump Applications

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2001-01-01

    Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.

  6. Numerical simulation of ice accretion phenomena on rotor blade of axial blower

    NASA Astrophysics Data System (ADS)

    Matsuura, Taiki; Suzuki, Masaya; Yamamoto, Makoto; Shishido, Shinichiro; Murooka, Takeshi; Miyagawa, Hiroshi

    2012-08-01

    Ice accretion is the phenomenon that super-cooled water droplets impinge and accrete on a body. It is well known that ice accretion on blades and airfoils leads to performance degradation and severe accidents. For this reason, experimental investigations have been carried out using flight tests or icing tunnels. However, it is too expensive, dangerous, and difficult to set actual icing conditions. Hence, computational fluid dynamics is useful to predict ice accretion. A rotor blade is one of jet engine components where ice accretes. Therefore, the authors focus on the ice accretion on a rotor blade in this study. Three-dimensional icing phenomena on the rotor blade of a commercial axial blower are computed here, and ice accretion on the rotor blade is numerically investigated.

  7. Current Grid Generation Strategies and Future Requirements in Hypersonic Vehicle Design, Analysis and Testing

    NASA Technical Reports Server (NTRS)

    Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)

    1998-01-01

    Recent advances in computational power enable computational fluid dynamic modeling of increasingly complex configurations. A review of grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 is presented. In strategizing topological constructs and blocking structures, the factors considered are the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, the physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of the grid is demonstrated through a comparison of computational and experimental results of the aeroheating environment experienced by the X-38 vehicle. Special grid generation topics, such as modeling control surface deflections and material mapping, are also addressed.

  8. Explosion localization and characterization via infrasound using numerical modeling

    NASA Astrophysics Data System (ADS)

    Fee, D.; Kim, K.; Iezzi, A. M.; Matoza, R. S.; Jolly, A. D.; De Angelis, S.; Diaz Moreno, A.; Szuberla, C.

    2017-12-01

    Numerous methods have been applied to locate, detect, and characterize volcanic and anthropogenic explosions using infrasound. Far-field localization techniques typically use back-azimuths from multiple arrays (triangulation) or Reverse Time Migration (RTM, or back-projection). At closer ranges, networks surrounding a source may use Time Difference of Arrival (TDOA), semblance, station-pair double difference, etc. However, at volcanoes and regions with topography or obstructions that block the direct path of sound, recent studies have shown that numerical modeling is necessary to provide an accurate source location. A heterogeneous and moving atmosphere (winds) may also affect the location. The time reversal mirror (TRM) application of Kim et al. (2015) back-propagates the wavefield using a Finite Difference Time Domain (FDTD) algorithm, with the source corresponding to the location of peak convergence. Although it provides high-resolution source localization and can account for complex wave propagation, TRM is computationally expensive and limited to individual events. Here we present a new technique, termed RTM-FDTD, which integrates TRM and FDTD. Travel time and transmission loss information is computed from each station to the entire potential source grid from 3-D Green's functions derived via FDTD. The wave energy is then back-projected and stacked at each grid point, with the maximum corresponding to the likely source. We apply our method to detect and characterize thousands of explosions from Yasur Volcano, Vanuatu and Etna Volcano, Italy, which both provide complex wave propagation and multiple source locations. We compare our results with those from more traditional methods (e.g. semblance), and suggest our method is preferred as it is computationally less expensive than TRM but still integrates numerical modeling. RTM-FDTD could be applied to volcanic and other anthropogenic sources at a wide variety of ranges and scenarios. Kim, K., Lees, J.M., 2015. Imaging volcanic infrasound sources using time reversal mirror algorithm. Geophysical Journal International 202, 1663-1676.
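The stacking step of such an RTM-style method reduces to shift-and-stack over precomputed travel times, sketched below with synthetic data. The travel times, which in RTM-FDTD come from 3-D FDTD Green's functions, are simply assumed inputs here:

```python
import numpy as np

def backproject(waveforms, dt, travel_times, t0_candidates):
    """Shift-and-stack back-projection over a source grid.

    waveforms:    (n_sta, n_samp) signal envelopes.
    travel_times: (n_sta, n_grid) propagation times per grid node.
    Returns the stack power per grid node; its argmax is the likely source.
    """
    n_sta, n_samp = waveforms.shape
    power = np.zeros(travel_times.shape[1])
    for g in range(travel_times.shape[1]):
        best = 0.0
        for t0 in t0_candidates:            # candidate origin times
            idx = np.round((t0 + travel_times[:, g]) / dt).astype(int)
            ok = (idx >= 0) & (idx < n_samp)
            best = max(best, waveforms[np.arange(n_sta)[ok], idx[ok]].sum())
        power[g] = best
    return power

# Synthetic check: 3 stations, impulse arrivals consistent with grid node 1.
dt = 0.01
tt = np.array([[0.10, 0.30], [0.20, 0.25], [0.05, 0.40]])   # (sta, grid)
w = np.zeros((3, 100))
for s in range(3):
    w[s, int(round(tt[s, 1] / dt))] = 1.0
print(np.argmax(backproject(w, dt, tt, t0_candidates=[0.0])))   # -> 1
```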

  9. Accelerating activity coefficient calculations using multicore platforms, and profiling the energy use resulting from such calculations.

    NASA Astrophysics Data System (ADS)

    Topping, David; Alibay, Irfan; Bane, Michael

    2017-04-01

    To predict the evolving concentration, chemical composition and ability of aerosol particles to act as cloud droplets, we rely on numerical modeling. Mechanistic models attempt to account for the movement of compounds between the gaseous and condensed phases at a molecular level. This 'bottom up' approach is designed to increase our fundamental understanding. However, such models rely on predicting the properties of molecules and subsequent mixtures. For partitioning between the gaseous and condensed phases this includes: saturation vapour pressures; Henry's law coefficients; activity coefficients; diffusion coefficients and reaction rates. Current gas phase chemical mechanisms predict the existence of potentially millions of individual species. Within a dynamic ensemble model, this can often be used as justification for neglecting computationally expensive process descriptions. Indeed, we cannot yet quantify the true sensitivity to uncertainties in molecular properties: even at the single aerosol particle level it has been impossible to embed fully coupled representations of process level knowledge for all possible compounds, and heavily parameterised descriptions are typically relied upon. Relying on emerging numerical frameworks designed for the changing landscape of high-performance computing (HPC), in this study we focus specifically on the ability to capture activity coefficients in liquid solutions using the UNIFAC method. Activity coefficients are often neglected under the largely untested hypothesis that they are simply too computationally expensive to include in dynamic frameworks. We present results demonstrating increased computational efficiency for a range of typical scenarios, including a profiling of the energy use resulting from reliance on such computations. As the landscape of HPC changes, the latter aspect is important to consider in future applications.

  10. Reactive transport modeling in the subsurface environment with OGS-IPhreeqc

    NASA Astrophysics Data System (ADS)

    He, Wenkui; Beyer, Christof; Fleckenstein, Jan; Jang, Eunseon; Kalbacher, Thomas; Naumov, Dimitri; Shao, Haibing; Wang, Wenqing; Kolditz, Olaf

    2015-04-01

    Worldwide, sustainable water resource management becomes an increasingly challenging task due to the growth of population and extensive applications of fertilizer in agriculture. Moreover, climate change causes further stresses to both water quantity and quality. Reactive transport modeling in the coupled soil-aquifer system is a viable approach to assess the impacts of different land use and groundwater exploitation scenarios on the water resources. However, the application of this approach is usually limited in spatial scale and to simplified geochemical systems due to the huge computational expense involved. Such computational expense is not only caused by solving the high non-linearity of the initial boundary value problems of water flow in the unsaturated zone numerically with rather fine spatial and temporal discretization for the correct mass balance and numerical stability, but also by the intensive computational task of quantifying geochemical reactions. In the present study, a flexible and efficient tool for large scale reactive transport modeling in variably saturated porous media and its applications are presented. The open source scientific software OpenGeoSys (OGS) is coupled with the IPhreeqc module of the geochemical solver PHREEQC. The new coupling approach makes full use of advantages from both codes: OGS provides a flexible choice of different numerical approaches for simulation of water flow in the vadose zone such as the pressure-based or mixed forms of Richards equation; whereas the IPhreeqc module leads to a simplification of data storage and its communication with OGS, which greatly facilitates the coupling and code updating. Moreover, a parallelization scheme with MPI (Message Passing Interface) is applied, in which the computational task of water flow and mass transport is partitioned through domain decomposition, whereas the efficient parallelization of geochemical reactions is achieved by smart allocation of computational workload over multiple compute nodes. The plausibility of the new coupling is verified by several benchmark tests. In addition, the efficiency of the new coupling approach is demonstrated by its application in a large scale scenario, in which the environmental fate of pesticides in a complex soil-aquifer system is studied.

  12. Numerical propulsion system simulation

    NASA Technical Reports Server (NTRS)

    Lytle, John K.; Remaklus, David A.; Nichols, Lester D.

    1990-01-01

    The cost of implementing new technology in aerospace propulsion systems is becoming prohibitively expensive. One of the major contributors to the high cost is the need to perform many large scale system tests. Extensive testing is used to capture the complex interactions among the multiple disciplines and the multiple components inherent in complex systems. The objective of the Numerical Propulsion System Simulation (NPSS) is to provide insight into these complex interactions through computational simulations. This will allow for comprehensive evaluation of new concepts early in the design phase before a commitment to hardware is made. It will also allow for rapid assessment of field-related problems, particularly in cases where operational problems were encountered during conditions that would be difficult to simulate experimentally. The tremendous progress taking place in computational engineering and the rapid increase in computing power expected through parallel processing make this concept feasible within the near future. However, it is critical that the framework for such simulations be put in place now to serve as a focal point for the continued developments in computational engineering and computing hardware and software. The NPSS concept which is described will provide that framework.

  13. A Simplified Model for Multiphase Leakage through Faults with Applications for CO2 Storage

    NASA Astrophysics Data System (ADS)

    Watson, F. E.; Doster, F.

    2017-12-01

    In the context of geological CO2 storage, faults in the subsurface could affect storage security by acting as high permeability pathways which allow CO2 to flow upwards and away from the storage formation. To assess the likelihood of leakage through faults and the impacts faults might have on storage security numerical models are required. However, faults are complex geological features, usually consisting of a fault core surrounded by a highly fractured damage zone. A direct representation of these in a numerical model would require very fine grid resolution and would be computationally expensive. Here, we present the development of a reduced complexity model for fault flow using the vertically integrated formulation. This model captures the main features of the flow but does not require us to resolve the vertical dimension, nor the fault in the horizontal dimension, explicitly. It is thus less computationally expensive than full resolution models. Consequently, we can quickly model many realisations for parameter uncertainty studies of CO2 injection into faulted reservoirs. We develop the model based on explicitly simulating local 3D representations of faults for characteristic scenarios using the Matlab Reservoir Simulation Toolbox (MRST). We have assessed the impact of variables such as fault geometry, porosity and permeability on multiphase leakage rates.

  14. A fast CT reconstruction scheme for a general multi-core PC.

    PubMed

    Zeng, Kai; Bai, Erwei; Wang, Ge

    2007-01-01

    Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA) that we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and not flexible. The graphics processing unit (GPU) in a current graphic card can only reconstruct images in a reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. In the proposed scheme, several techniques are integrated, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and an Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved by several folds using the latest quad-core processors.
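A minimal sketch of one of the listed techniques — multithreaded backprojection, splitting projection angles across worker threads (NumPy's array kernels release the GIL, so plain threads already help). The geometry is simplified parallel-beam, and the paper's SIMD and symmetry optimizations are not reproduced:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def backproject_chunk(sino, thetas, xs, ys):
    """Accumulate parallel-beam backprojection for a chunk of angles."""
    n_det = sino.shape[1]
    t_axis = np.arange(n_det) - n_det / 2
    img = np.zeros((ys.size, xs.size))
    X, Y = np.meshgrid(xs, ys)
    for p, th in zip(sino, thetas):
        t = X * np.cos(th) + Y * np.sin(th)          # detector coordinate
        img += np.interp(t, t_axis, p, left=0.0, right=0.0)
    return img

def parallel_backprojection(sino, thetas, n=128, workers=4):
    xs = ys = np.arange(n) - n / 2
    chunks = np.array_split(np.arange(len(thetas)), workers)
    with ThreadPoolExecutor(workers) as ex:
        parts = ex.map(lambda c: backproject_chunk(sino[c], thetas[c], xs, ys),
                       chunks)
    return sum(parts) * np.pi / len(thetas)

# Smoke test with a random (unfiltered) sinogram; a real FBP pipeline
# would ramp-filter each projection first.
thetas = np.linspace(0, np.pi, 64, endpoint=False)
sino = np.random.rand(64, 128)
print(parallel_backprojection(sino, thetas).shape)
```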

  16. Numerical investigation of a modified family of centered schemes applied to multiphase equations with nonconservative sources

    NASA Astrophysics Data System (ADS)

    Crochet, M. W.; Gonthier, K. A.

    2013-12-01

    Systems of hyperbolic partial differential equations are frequently used to model the flow of multiphase mixtures. These equations often contain sources, referred to as nozzling terms, that cannot be posed in divergence form, and have proven to be particularly challenging in the development of finite-volume methods. Upwind schemes have recently shown promise in properly resolving the steady wave solution of the associated multiphase Riemann problem. However, these methods require a full characteristic decomposition of the system eigenstructure, which may be either unavailable or computationally expensive. Central schemes, such as the Kurganov-Tadmor (KT) family of methods, require minimal characteristic information, which makes them easily applicable to systems with an arbitrary number of phases. However, the proper implementation of nozzling terms in these schemes has been mathematically ambiguous. The primary objectives of this work are twofold: first, an extension of the KT family of schemes is proposed that formally accounts for the nonconservative nozzling sources. This modification results in a semidiscrete form that retains the simplicity of its predecessor and introduces little additional computational expense. Second, this modified method is applied to multiple, but equivalent, forms of the multiphase equations to perform a numerical study by solving several one-dimensional test problems. Both ideal and Mie-Grüneisen equations of state are used, with the results compared to an analytical solution. This study demonstrates that the magnitudes of the resulting numerical errors are sensitive to the form of the equations considered, and suggests an optimal form to minimize these errors. Finally, a separate modification of the wave propagation speeds used in the KT family is also suggested that can reduce the extent of numerical diffusion in multiphase flows.
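For concreteness, the unmodified semidiscrete KT scheme for a scalar conservation law looks as follows; the paper's contribution is precisely the extra, formally consistent treatment of the nonconservative nozzling sources, which this sketch omits:

```python
import numpy as np

def minmod(a, b):
    return 0.5 * (np.sign(a) + np.sign(b)) * np.minimum(np.abs(a), np.abs(b))

def kt_rhs(u, dx):
    """Semidiscrete Kurganov-Tadmor flux for Burgers' equation
    u_t + (u^2/2)_x = 0 with periodic boundaries (no nozzling source)."""
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
    uL = u + 0.5 * s                                    # left state at j+1/2
    uR = np.roll(u, -1) - 0.5 * np.roll(s, -1)          # right state at j+1/2
    a = np.maximum(np.abs(uL), np.abs(uR))              # local wave speeds
    H = 0.5 * (uL**2 / 2 + uR**2 / 2) - 0.5 * a * (uR - uL)
    return -(H - np.roll(H, 1)) / dx

n = 200
dx = 2 * np.pi / n
u = np.sin(dx * np.arange(n))
dt = 0.4 * dx
for _ in range(int(1.0 / dt)):       # Heun (second-order Runge-Kutta)
    u1 = u + dt * kt_rhs(u, dx)
    u = 0.5 * (u + u1 + dt * kt_rhs(u1, dx))
print(u.min(), u.max())              # solution stays bounded; shock near t = 1
```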

  17. Numerical investigation of the vortex-induced vibration of an elastically mounted circular cylinder at high Reynolds number (Re = 10^4) and low mass ratio using the RANS code.

    PubMed

    Khan, Niaz Bahadur; Ibrahim, Zainah; Nguyen, Linh Tuan The; Javed, Muhammad Faisal; Jameel, Mohammed

    2017-01-01

    This study numerically investigates the vortex-induced vibration (VIV) of an elastically mounted rigid cylinder by using Reynolds-averaged Navier-Stokes (RANS) equations with computational fluid dynamic (CFD) tools. CFD analysis is performed for a fixed-cylinder case with Reynolds number (Re) = 10^4 and for a cylinder that is free to oscillate in the transverse direction and possesses a low mass-damping ratio and Re = 10^4. Previously, similar studies have been performed with 3-dimensional and comparatively expensive turbulent models. In the current study, the capability and accuracy of the RANS model are validated, and the results of this model are compared with those of detached eddy simulation, direct numerical simulation, and large eddy simulation models. All three response branches and the maximum amplitude are well captured. The 2-dimensional case with the RANS shear-stress transport k-ω model, which involves minimal computational cost, is reliable and appropriate for analyzing the characteristics of VIV.

  18. References and benchmarks for pore-scale flow simulated using micro-CT images of porous media and digital rocks

    NASA Astrophysics Data System (ADS)

    Saxena, Nishank; Hofmann, Ronny; Alpak, Faruk O.; Berg, Steffen; Dietderich, Jesse; Agarwal, Umang; Tandon, Kunj; Hunter, Sander; Freeman, Justin; Wilson, Ove Bjorn

    2017-11-01

    We generate a novel reference dataset to quantify the impact of numerical solvers, boundary conditions, and simulation platforms. We consider a variety of microstructures ranging from idealized pipes to digital rocks. Pore throats of the digital rocks considered are large enough to be well resolved with state-of-the-art micro-computerized tomography technology. Permeability is computed using multiple numerical engines, 12 in total, including Lattice-Boltzmann, computational fluid dynamics, voxel based, fast semi-analytical, and known empirical models. Thus, we provide a measure of uncertainty associated with flow computations of digital media. Moreover, the reference and standards dataset generated is the first of its kind and can be used to test and improve new fluid flow algorithms. We find that there is an overall good agreement between solvers for pipes of idealized cross-sectional shape. As expected, the disagreement increases with increasing complexity of the pore space. Numerical solutions for pipes with sinusoidal variation of cross section show larger variability compared to pipes of constant cross-sectional shape. We notice relatively larger variability in the computed permeability of digital rocks, with a coefficient of variation of up to 25% between various solvers. Still, these differences are small given other subsurface uncertainties. The observed differences between solvers can be attributed to several causes including differences in boundary conditions, numerical convergence criteria, and parameterization of fundamental physics equations. Solvers that perform additional meshing of irregular pore shapes require an additional step in practical workflows which involves skill and can introduce further uncertainty. Computation times for digital rocks vary from minutes to several days depending on the algorithm and available computational resources. We find that more stringent convergence criteria can improve solver accuracy but at the expense of longer computation time.

  19. Decision rules for unbiased inventory estimates

    NASA Technical Reports Server (NTRS)

    Argentiero, P. D.; Koch, D.

    1979-01-01

    An efficient and accurate procedure for estimating inventories from remote sensing scenes is presented. In place of the conventional and expensive full dimensional Bayes decision rule, a one-dimensional feature extraction and classification technique was employed. It is shown that this efficient decision rule can be used to develop unbiased inventory estimates and that for large sample sizes typical of satellite derived remote sensing scenes, resulting accuracies are comparable or superior to more expensive alternative procedures. Mathematical details of the procedure are provided in the body of the report and in the appendix. Results of a numerical simulation of the technique using statistics obtained from an observed LANDSAT scene are included. The simulation demonstrates the effectiveness of the technique in computing accurate inventory estimates.
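One standard route to unbiased inventory estimates from a cheap classifier, consistent with the idea described here (though the report's decision rule differs in detail), is to invert the misclassification-probability matrix; a two-class sketch with assumed numbers:

```python
import numpy as np

# Misclassification probabilities P[i, j] = P(classified as j | true class i),
# e.g. estimated from training data (the values here are assumed).
P = np.array([[0.90, 0.10],
              [0.20, 0.80]])

raw_counts = np.array([5200.0, 4800.0])   # pixels per class after classification

# E[raw_counts] = P.T @ true_counts, so solving with P.T de-biases the
# inventory; for the large sample sizes of satellite scenes the raw counts
# are close to their expectation, making the correction effective.
unbiased = np.linalg.solve(P.T, raw_counts)
print(unbiased)                            # estimated true class inventory
```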

  20. Discretization of the induced-charge boundary integral equation.

    PubMed

    Bardhan, Jaydeep P; Eisenberg, Robert S; Gillespie, Dirk

    2009-07-01

    Boundary-element methods (BEMs) for solving integral equations numerically have been used in many fields to compute the induced charges at dielectric boundaries. In this paper, we consider a more accurate implementation of BEM in the context of ions in aqueous solution near proteins, but our results are applicable more generally. The ions that modulate protein function are often within a few angstroms of the protein, which leads to the significant accumulation of polarization charge at the protein-solvent interface. Computing the induced charge accurately and quickly poses a numerical challenge in solving a popular integral equation using BEM. In particular, the accuracy of simulations can depend strongly on seemingly minor details of how the entries of the BEM matrix are calculated. We demonstrate that when the dielectric interface is discretized into flat tiles, the qualocation method of Tausch [IEEE Trans Comput.-Comput.-Aided Des. 20, 1398 (2001)] to compute the BEM matrix elements is always more accurate than the traditional centroid-collocation method. Qualocation is not more expensive to implement than collocation and can save significant computational time by reducing the number of boundary elements needed to discretize the dielectric interfaces.

  1. Discretization of the induced-charge boundary integral equation

    NASA Astrophysics Data System (ADS)

    Bardhan, Jaydeep P.; Eisenberg, Robert S.; Gillespie, Dirk

    2009-07-01

    Boundary-element methods (BEMs) for solving integral equations numerically have been used in many fields to compute the induced charges at dielectric boundaries. In this paper, we consider a more accurate implementation of BEM in the context of ions in aqueous solution near proteins, but our results are applicable more generally. The ions that modulate protein function are often within a few angstroms of the protein, which leads to the significant accumulation of polarization charge at the protein-solvent interface. Computing the induced charge accurately and quickly poses a numerical challenge in solving a popular integral equation using BEM. In particular, the accuracy of simulations can depend strongly on seemingly minor details of how the entries of the BEM matrix are calculated. We demonstrate that when the dielectric interface is discretized into flat tiles, the qualocation method of Tausch [IEEE Trans. Comput.-Aided Des. 20, 1398 (2001)] to compute the BEM matrix elements is always more accurate than the traditional centroid-collocation method. Qualocation is not more expensive to implement than collocation and can save significant computational time by reducing the number of boundary elements needed to discretize the dielectric interfaces.

  2. Multi-Scale Fusion of Information for Uncertainty Quantification and Management in Large-Scale Simulations

    DTIC Science & Technology

    2015-12-02

    simplification of the equations but at the expense of introducing modeling errors. We have shown that the Wick solutions have accuracy comparable to ... the system of equations for the coefficients of formal power series solutions. Moreover, the structure of this propagator is seemingly universal, i.e. ... the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations

  3. Transport of Space Environment Electrons: A Simplified Rapid-Analysis Computational Procedure

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Anderson, Brooke M.; Cucinotta, Francis A.; Wilson, John W.; Katz, Robert; Chang, C. K.

    2002-01-01

    A computational procedure for describing transport of electrons in condensed media has been formulated for application to effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The procedure is based on earlier parameterizations established from numerous electron beam experiments. New parameterizations have been derived that logically extend the domain of application to low molecular weight (high hydrogen content) materials and higher energies (approximately 50 MeV). The production and transport of high energy photons (bremsstrahlung) generated in the electron transport processes have also been modeled using tabulated values of photon production cross sections. A primary purpose for developing the procedure has been to provide a means for rapidly performing numerous repetitive calculations essential for electron radiation exposure assessments for complex space structures. Several favorable comparisons have been made with previous calculations for typical space environment spectra, indicating that accuracy has not been substantially sacrificed for the sake of computational speed.

  4. Thermochemical Modeling of Nonequilibrium Oxygen Flows

    NASA Astrophysics Data System (ADS)

    Neitzel, Kevin Joseph

    The development of hypersonic vehicles leans heavily on computational simulation due to the high-enthalpy flow conditions that are expensive and technically challenging to replicate experimentally. The accuracy of the nonequilibrium modeling in the computer simulations dictates the design margin that is required for the thermal protection system and flight dynamics. Previous hypersonic vehicles, such as Apollo and the Space Shuttle, were primarily concerned with re-entry TPS design. The strong flow conditions of re-entry, involving Mach numbers near 25, quickly dissociate the oxygen molecules in air. Sustained-flight hypersonic vehicles will be designed to operate at Mach numbers of 5 to 10, where oxygen molecules do not quickly dissociate and play an important role in the flow field behavior. The development of nonequilibrium models of oxygen is therefore crucial for limiting modeling uncertainty. Thermochemical nonequilibrium modeling is investigated for oxygen flows. Specifically, the vibrational relaxation and dissociation behavior that dominate the nonequilibrium physics in this flight regime are studied in detail. The widely used two-temperature (2T) approach is compared to the higher fidelity and more computationally expensive state-to-state (STS) approach. This dissertation utilizes a wide range of rate sources, including newly available STS rates, to conduct a comprehensive study of modeling approaches for hypersonic nonequilibrium thermochemical modeling. Additionally, the physical accuracy of the computational methods is assessed by comparing the numerical results with available experimental data. The numerical results and experimental measurements exhibit strong nonequilibrium, and even non-Boltzmann, behavior in the vibrational energy mode for the sustained hypersonic flight regime. The STS approach better captures the behavior observed in the experimental data, especially for stronger nonequilibrium conditions. Additionally, a reduced-order model (ROM) modification to the 2T model is developed to improve the capability of the 2T framework.
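
    For context, 2T models typically drive the vibrational energy toward its local equilibrium value through a Landau-Teller relaxation equation (standard form, with the relaxation time τ_v taken from correlations such as Millikan-White; not a result specific to this dissertation):

      \frac{d e_{v}}{d t} = \frac{e_{v}^{eq}(T) - e_{v}}{\tau_{v}}

    STS models instead solve master equations for the populations of individual vibrational levels, which is where the additional fidelity and the additional computational cost both originate.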

  5. Particle swarm optimization and gravitational wave data analysis: Performance on a binary inspiral testbed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Yan; Mohanty, Soumya D.; Center for Gravitational Wave Astronomy, Department of Physics and Astronomy, University of Texas at Brownsville, 80 Fort Brown, Brownsville, Texas 78520

    2010-03-15

    The detection and estimation of gravitational wave signals belonging to a parameterized family of waveforms requires, in general, the numerical maximization of a data-dependent function of the signal parameters. Because of noise in the data, the function to be maximized is often highly multimodal with numerous local maxima. Searching for the global maximum then becomes computationally expensive, which in turn can limit the scientific scope of the search. Stochastic optimization is one possible approach to reducing computational costs in such applications. We report results from a first investigation of the particle swarm optimization method in this context. The method is applied to a test bed motivated by the problem of detection and estimation of a binary inspiral signal. Our results show that particle swarm optimization works well in the presence of high multimodality, making it a viable candidate method for further applications in gravitational wave data analysis.
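
    A minimal particle swarm optimizer in Python, run on a standard multimodal test function as a stand-in for the noise-induced multimodality of the detection statistic (generic sketch; parameter values are illustrative, not the paper's tuning):

      import numpy as np

      def pso(f, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
          # minimal particle swarm optimizer (minimization)
          rng = np.random.default_rng(seed)
          x = rng.uniform(lo, hi, (n, lo.size))     # particle positions
          v = np.zeros_like(x)                      # particle velocities
          pbest = x.copy()                          # per-particle best positions
          pbest_f = np.apply_along_axis(f, 1, x)    # per-particle best values
          g = pbest[np.argmin(pbest_f)].copy()      # swarm-wide best position
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              fx = np.apply_along_axis(f, 1, x)
              better = fx < pbest_f
              pbest[better], pbest_f[better] = x[better], fx[better]
              g = pbest[np.argmin(pbest_f)].copy()
          return g, pbest_f.min()

      # Rastrigin: a highly multimodal surface standing in for the
      # data-dependent fitness function of a matched-filter search
      def rastrigin(z):
          return 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))

      best_x, best_f = pso(rastrigin, np.full(2, -5.12), np.full(2, 5.12))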

  6. Numerical prediction of the energy efficiency of the three-dimensional fish school using the discretized Adomian decomposition method

    NASA Astrophysics Data System (ADS)

    Lin, Yinwei

    2018-06-01

    A three-dimensional model of a fish school, based on a modified Adomian decomposition method (ADM) discretized by the finite difference method, is proposed. To our knowledge, few studies of fish schools are documented, due to the expense of numerical computing and the tedium of three-dimensional data analysis. Here, we propose a simple model relying on the Adomian decomposition method to estimate the energy-saving efficiency of the flow motion of the fish school. First, analytic solutions of the Navier-Stokes equations are used for numerical validation. The influence of the distance between two side-by-side fish on the energy efficiency of the school is then studied. In addition, a complete error analysis for this method is presented.

  7. Integrated Reconfigurable Intelligent Systems (IRIS) for Complex Naval Systems

    DTIC Science & Technology

    2010-02-21

    RKF45] and Adams Variable Step-Size Predictor-Corrector methods). While such algorithms naturally are usually used to numerically solve differential ... verified by yet another function call. Due to their nature, such methods are referred to as predictor-corrector methods. While computationally expensive ... CONTRACT NUMBER N00014-09-C-0394 5b. GRANT NUMBER N/A 5c. PROGRAM ELEMENT NUMBER N/A 6. Author(s) Dr. Dimitri N. Mavris Dr. Yongchang Li 5d

  8. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed, based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of an appropriate approximation technique as a function of the matrix size, the number of design variables, the number of eigenvalues of interest, and the number of design points at which an approximation is sought.
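
    For reference, the first-order sensitivity of a simple eigenvalue of a general non-hermitian matrix A(p), with right eigenvector u and left eigenvector v, is the textbook identity underlying the methods surveyed here:

      A u = \lambda u, \qquad v^{H} A = \lambda v^{H} \quad \Longrightarrow \quad \frac{\partial \lambda}{\partial p} = \frac{v^{H} \left( \partial A / \partial p \right) u}{v^{H} u}

    Eigenvector derivatives, by contrast, require the normalization condition discussed in the abstract, since u is only defined up to scale.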

  9. Advanced computational simulations of water waves interacting with wave energy converters

    NASA Astrophysics Data System (ADS)

    Pathak, Ashish; Freniere, Cole; Raessi, Mehdi

    2017-03-01

    Wave energy converter (WEC) devices harness the renewable ocean wave energy and convert it into useful forms of energy, e.g. mechanical or electrical. This paper presents an advanced 3D computational framework to study the interaction between water waves and WEC devices. The computational tool solves the full Navier-Stokes equations and considers all important effects impacting the device performance. To enable large-scale simulations with fast turnaround times, the computational solver was developed in an MPI parallel framework. A fast multigrid preconditioned solver is introduced to solve the computationally expensive pressure Poisson equation. The computational solver was applied to two surface-piercing WEC geometries: bottom-hinged cylinder and flap. Their numerically simulated response was validated against experimental data. Additional simulations were conducted to investigate the applicability of Froude scaling in predicting full-scale WEC response from the model experiments.

  10. Graphical Models for Ordinal Data

    PubMed Central

    Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji

    2014-01-01

    A graphical model for ordinal variables is considered, where it is assumed that the data are generated by discretizing the marginal distributions of a latent multivariate Gaussian distribution. The relationships between these ordinal variables are then described by the underlying Gaussian graphical model and can be inferred by estimating the corresponding concentration matrix. Direct estimation of the model is computationally expensive, but an approximate EM-like algorithm is developed to provide an accurate estimate of the parameters at a fraction of the computational cost. Numerical evidence based on simulation studies shows the strong performance of the algorithm, which is also illustrated on data sets on movie ratings and an educational survey. PMID:26120267

  11. A Deterministic Computational Procedure for Space Environment Electron Transport

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamcyk, Anne M.

    2010-01-01

    A deterministic computational procedure for describing the transport of electrons in condensed media is formulated to simulate the effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The primary purpose for developing the procedure is to provide a means of rapidly performing numerous repetitive transport calculations essential for electron radiation exposure assessments for complex space structures. The present code utilizes well-established theoretical representations to describe the relevant interactions and transport processes. A combined mean free path and average trajectory approach is used in the transport formalism. For typical space environment spectra, several favorable comparisons with Monte Carlo calculations are made, indicating that accuracy is not sacrificed for the sake of computational speed.

  12. Optimal implicit 2-D finite differences to model wave propagation in poroelastic media

    NASA Astrophysics Data System (ADS)

    Itzá, Reymundo; Iturrarán-Viveros, Ursula; Parra, Jorge O.

    2016-08-01

    Numerical modeling of seismic waves in heterogeneous porous reservoir rocks is an important tool for the interpretation of seismic surveys in reservoir engineering. We apply globally optimal implicit staggered-grid finite differences (FD) to model 2-D wave propagation in heterogeneous poroelastic media at a low-frequency range (<10 kHz). We validate the numerical solution by comparing it to an analytical-transient solution, obtaining clear seismic wavefields including fast P and slow P and S waves (for a porous medium saturated with fluid). The numerical dispersion and stability conditions are derived using von Neumann analysis, showing that over a wide range of porous materials the Courant condition governs the stability and that this optimal implicit scheme improves the stability of explicit schemes. High-order explicit FD can be replaced by lower-order optimal implicit FD, reducing the computational cost while maintaining accuracy. Here, we compute weights for the optimal implicit FD scheme to attain an accuracy of γ = 10^-8. The implicit spatial differentiation involves solving tridiagonal linear systems of equations through Thomas' algorithm.
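
    The tridiagonal systems produced by the implicit spatial operator are solved in O(n) by Thomas' algorithm. A generic Python sketch (not the authors' implementation), with sub-, main-, and super-diagonals a, b, c, where a[0] and c[-1] are unused:

      import numpy as np

      def thomas(a, b, c, d):
          # forward elimination followed by back substitution
          n = len(d)
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):
              m = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / m if i < n - 1 else 0.0
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x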

  13. Numerical investigation of the vortex-induced vibration of an elastically mounted circular cylinder at high Reynolds number (Re = 10^4) and low mass ratio using the RANS code

    PubMed Central

    2017-01-01

    This study numerically investigates the vortex-induced vibration (VIV) of an elastically mounted rigid cylinder by using Reynolds-averaged Navier-Stokes (RANS) equations with computational fluid dynamic (CFD) tools. CFD analysis is performed for a fixed-cylinder case with Reynolds number (Re) = 10^4 and for a cylinder that is free to oscillate in the transverse direction and possesses a low mass-damping ratio and Re = 10^4. Previously, similar studies have been performed with 3-dimensional and comparatively expensive turbulent models. In the current study, the capability and accuracy of the RANS model are validated, and the results of this model are compared with those of detached eddy simulation, direct numerical simulation, and large eddy simulation models. All three response branches and the maximum amplitude are well captured. The 2-dimensional case with the RANS shear-stress transport k-ω model, which involves minimal computational cost, is reliable and appropriate for analyzing the characteristics of VIV. PMID:28982172

  14. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
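
    Each Levenberg-Marquardt iteration solves a damped normal-equations system for the update δ; it is this solve, with Jacobian J and residual r, that the method projects onto a Krylov subspace (shown below in its standard form, not the projected variant; some variants damp diag(J^T J) instead of I):

      \left( J^{T} J + \lambda I \right) \delta = -J^{T} r, \qquad x \leftarrow x + \delta

    Because only the damping λ changes between successive solves, the Krylov subspace built for the first λ remains a good search space for the others, which is what makes the recycling strategy effective.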

  15. Macro-actor execution on multilevel data-driven architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaudiot, J.L.; Najjar, W.

    1988-12-31

    The data-flow model of computation brings high programmability to multiprocessors at the expense of increased overhead. Applying the model at a higher level leads to better performance but also introduces a loss of parallelism. We demonstrate here syntax-directed program decomposition methods for the creation of large macro-actors in numerical algorithms. In order to alleviate some of the problems introduced by the lower-resolution interpretation, we describe a multi-level resolution approach and analyze the requirements for its actual hardware and software integration.

  16. Metasurface optics for full-color computational imaging.

    PubMed

    Colburn, Shane; Zhan, Alan; Majumdar, Arka

    2018-02-01

    Conventional imaging systems comprise large and expensive optical components that successively mitigate aberrations. Metasurface optics offers a route to miniaturize imaging systems by replacing bulky components with flat and compact implementations. The diffractive nature of these devices, however, induces severe chromatic aberrations, and current multiwavelength and narrowband achromatic metasurfaces cannot support full visible spectrum imaging (400 to 700 nm). We combine principles of both computational imaging and metasurface optics to build a system with a single metalens of numerical aperture ~0.45, which generates in-focus images under white light illumination. Our metalens exhibits a spectrally invariant point spread function that enables computational reconstruction of captured images with a single digital filter. This work connects computational imaging and metasurface optics and demonstrates the capabilities of combining these disciplines by simultaneously reducing aberrations and downsizing imaging systems using simpler optics.
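
    A spectrally invariant point spread function is what allows one filter to serve all wavelengths. A common single-filter choice is Wiener deconvolution, sketched in Python below (generic; the paper's particular filter may differ):

      import numpy as np

      def wiener_deconvolve(image, psf, nsr=1e-2):
          # single Wiener filter; psf assumed centered and same shape as image,
          # nsr is an assumed noise-to-signal power ratio (a tuning constant)
          H = np.fft.fft2(np.fft.ifftshift(psf))
          G = np.conj(H) / (np.abs(H) ** 2 + nsr)   # the single digital filter
          return np.real(np.fft.ifft2(np.fft.fft2(image) * G))

    Because the PSF does not change with wavelength, the same G can be applied to every color channel of the captured image.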

  17. 48 CFR 9904.410-60 - Illustrations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... budgets for the other segment should be removed from B's G&A expense pool and transferred to the other...; all home office expenses allocated to Segment H are included in Segment H's G&A expense pool. (2) This... cost of scientific computer operations in its G&A expense pool. The scientific computer is used...

  18. 48 CFR 9904.410-60 - Illustrations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... budgets for the other segment should be removed from B's G&A expense pool and transferred to the other...; all home office expenses allocated to Segment H are included in Segment H's G&A expense pool. (2) This... cost of scientific computer operations in its G&A expense pool. The scientific computer is used...

  19. A simple parameterization of aerosol emissions in RAMS

    NASA Astrophysics Data System (ADS)

    Letcher, Theodore

    Throughout the past decade, a high degree of attention has been focused on determining the microphysical impact of anthropogenically enhanced concentrations of Cloud Condensation Nuclei (CCN) on orographic snowfall in the mountains of the western United States. This area has garnered attention due to the implications this effect may have on local water resource distribution within the region. Recent advances in computing power and the development of highly advanced microphysical schemes within numerical models have provided estimates of the sensitivity of orographic snowfall to changes in atmospheric CCN concentrations. What is still lacking, however, is a coupling between these advanced microphysical schemes and a real-world representation of CCN sources. Previously, an attempt to represent the heterogeneous evolution of aerosol was made by coupling three-dimensional aerosol output from the WRF Chemistry model to the Colorado State University (CSU) Regional Atmospheric Modeling System (RAMS) (Ward et al. 2011). The biggest problem associated with this scheme was its computational expense, which was so high that it was prohibitive for simulations with fine enough resolution to accurately represent microphysical processes. To improve upon this method, a new parameterization for aerosol emission was developed in such a way that it is fully contained within RAMS. Several assumptions went into generating a computationally efficient aerosol emissions parameterization in RAMS. The most notable was the decision to neglect the chemical processes involved in the formation of Secondary Aerosol (SA) and instead treat SA as primary aerosol via short-term WRF-CHEM simulations. While SA makes up a substantial portion of the total aerosol burden (much of which is organic material), representing its formation is highly complex and highly expensive within a numerical model. Furthermore, SA formation is greatly reduced during the winter months due to the lack of naturally produced organic VOCs. For these reasons, neglecting SA formation within the model was judged the best course of action. The parameterization uses a prescribed source map to add aerosol to the model at two vertical levels that surround an arbitrary height chosen by the user. To best represent the real world, the WRF Chemistry model was run using the National Emissions Inventory (NEI2005) to represent anthropogenic emissions and the Model of Emissions of Gases and Aerosols from Nature (MEGAN) to represent natural contributions to aerosol. WRF Chemistry was run for one hour, after which the aerosol output, along with the hygroscopicity parameter (κ), was saved into a data file that can be interpolated to an arbitrary RAMS grid. Comparison of this parameterization with observations collected at Mesa Verde National Park (MVNP) during the Inhibition of Snowfall from Pollution Aerosol (ISPA-III) field campaign yielded promising results. The model was able to simulate the variability in near-surface aerosol concentration with reasonable accuracy, though with a general low bias. Furthermore, this model compared much better to the observations than did the WRF Chemistry model, at a fraction of the computational expense. This emissions scheme produces reasonable aerosol concentrations and can therefore be used to estimate the seasonal impact of increased CCN on water resources in western Colorado at relatively low computational expense.

  20. A fast numerical method for ideal fluid flow in domains with multiple stirrers

    NASA Astrophysics Data System (ADS)

    Nasser, Mohamed M. S.; Green, Christopher C.

    2018-03-01

    A collection of arbitrarily-shaped solid objects, each moving at a constant speed, can be used to mix or stir ideal fluid, and can give rise to interesting flow patterns. Assuming these systems of fluid stirrers are two-dimensional, the mathematical problem of resolving the flow field—given a particular distribution of any finite number of stirrers of specified shape and speed—can be formulated as a Riemann-Hilbert (R-H) problem. We show that this R-H problem can be solved numerically using a fast and accurate algorithm for any finite number of stirrers based around a boundary integral equation with the generalized Neumann kernel. Various systems of fluid stirrers are considered, and our numerical scheme is shown to handle highly multiply connected domains (i.e. systems of many fluid stirrers) with minimal computational expense.

  1. Computing the Evans function via solving a linear boundary value ODE

    NASA Astrophysics Data System (ADS)

    Wahl, Colin; Nguyen, Rose; Ventura, Nathaniel; Barker, Blake; Sandstede, Bjorn

    2015-11-01

    Determining the stability of traveling wave solutions to partial differential equations can oftentimes be computationally intensive but of great importance to understanding the effects of perturbations on the physical systems (chemical reactions, hydrodynamics, etc.) they model. For waves in one spatial dimension, one may linearize around the wave and form an Evans function - an analytic Wronskian-like function which has zeros that correspond in multiplicity to the eigenvalues of the linearized system. If eigenvalues with a positive real part do not exist, the traveling wave will be stable. Two methods exist for calculating the Evans function numerically: the exterior-product method and the method of continuous orthogonalization. The first is numerically expensive, and the second reformulates the originally linear system as a nonlinear system. We develop a new algorithm for computing the Evans function through appropriate linear boundary-value problems. This algorithm is cheaper than the previous methods, and we prove that it preserves analyticity of the Evans function. We also provide error estimates and implement it on some classical one- and two-dimensional systems, one being the Swift-Hohenberg equation in a channel, to show the advantages.

  2. Closed-form solution of decomposable stochastic models

    NASA Technical Reports Server (NTRS)

    Sjogren, Jon A.

    1990-01-01

    Markov and semi-Markov processes are increasingly being used in the modeling of complex reconfigurable systems (fault tolerant computers). The estimation of the reliability (or some measure of performance) of the system reduces to solving the process for its state probabilities. Such a model may exhibit numerous states and complicated transition distributions, contributing to an expensive and numerically delicate solution procedure. Thus, when a system exhibits a decomposition property, either structurally (autonomous subsystems), or behaviorally (component failure versus reconfiguration), it is desirable to exploit this decomposition in the reliability calculation. In interesting cases there can be failure states which arise from non-failure states of the subsystems. Equations are presented which allow the computation of failure probabilities of the total (combined) model without requiring a complete solution of the combined model. This material is presented within the context of closed-form functional representation of probabilities as utilized in the Symbolic Hierarchical Automated Reliability and Performance Evaluator (SHARPE) tool. The techniques adopted enable one to compute such probability functions for a much wider class of systems at a reduced computational cost. Several examples show how the method is used, especially in enhancing the versatility of the SHARPE tool.

  3. The development and validation of a numerical integration method for non-linear viscoelastic modeling

    PubMed Central

    Ramo, Nicole L.; Puttlitz, Christian M.

    2018-01-01

    Compelling evidence that many biological soft tissues display both strain- and time-dependent behavior has led to the development of fully non-linear viscoelastic modeling techniques to represent the tissue’s mechanical response under dynamic conditions. Since the current stress state of a viscoelastic material is dependent on all previous loading events, numerical analyses are complicated by the requirement of computing and storing the stress at each step throughout the load history. This requirement quickly becomes computationally expensive, and in some cases intractable, for finite element models. Therefore, we have developed a strain-dependent numerical integration approach for capturing non-linear viscoelasticity that enables calculation of the current stress from a strain-dependent history state variable stored from the preceding time step only, which improves both fitting efficiency and computational tractability. This methodology was validated based on its ability to recover non-linear viscoelastic coefficients from simulated stress-relaxation (six strain levels) and dynamic cyclic (three frequencies) experimental stress-strain data. The model successfully fit each data set with average errors in recovered coefficients of 0.3% for stress-relaxation fits and 0.1% for cyclic. The results support the use of the presented methodology to develop linear or non-linear viscoelastic models from stress-relaxation or cyclic experimental data of biological soft tissues. PMID:29293558
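
    For orientation, the classic recursive scheme for linear viscoelasticity with a Prony-series relaxation modulus G(t) = G_inf + sum_i G_i exp(-t/tau_i) updates one internal variable per Prony term from the previous step only (standard recurrence; the paper's contribution is a strain-dependent, non-linear generalization of this idea):

      h_{i}^{n+1} = e^{-\Delta t/\tau_{i}} h_{i}^{n} + G_{i} \frac{1 - e^{-\Delta t/\tau_{i}}}{\Delta t/\tau_{i}} \left( \varepsilon^{n+1} - \varepsilon^{n} \right), \qquad \sigma^{n+1} = G_{\infty} \varepsilon^{n+1} + \sum_{i} h_{i}^{n+1}

    The stress at step n+1 then depends on the full load history only through the stored h_i, which is precisely what removes the need to retain every previous step.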

  4. A theoretical approach for analyzing the restabilization of wakes

    NASA Astrophysics Data System (ADS)

    Hill, D. C.

    1992-04-01

    Recently reported experimental results demonstrate that restabilization of the low-Reynolds-number flow past a circular cylinder can be achieved by the placement of a smaller cylinder in the wake of the first at particular locations. Traditional numerical procedures for modeling such phenomena are computationally expensive. An approach is presented here in which the properties of the adjoint solutions to the linearized equations of motion are exploited to map quickly the best positions for the small cylinder's placement. Comparisons with experiment and previous computations are favorable. The approach is shown to be applicable to general flows, illustrating how strongly control mechanisms that involve sources of momentum couple to unstable (or stable) modes of the system.

  5. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    PubMed

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
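
    For context, the objective being maximized is Otsu's between-class variance. A direct Python sketch of the objective and of the exhaustive two-threshold search whose combinatorial cost motivates stochastic optimizers (generic code, hypothetical histogram):

      import numpy as np
      from itertools import combinations

      def between_class_variance(hist, thresholds):
          # Otsu objective for a sorted tuple of thresholds over a 256-bin histogram
          p = hist / hist.sum()
          levels = np.arange(p.size)
          edges = [0, *thresholds, p.size]
          mu_total = (p * levels).sum()
          var = 0.0
          for lo, hi in zip(edges[:-1], edges[1:]):
              w = p[lo:hi].sum()
              if w > 0:
                  mu = (p[lo:hi] * levels[lo:hi]).sum() / w
                  var += w * (mu - mu_total) ** 2
          return var

      # exhaustive search over two thresholds: O(L^2) objective evaluations,
      # a cost that grows combinatorially with the number of thresholds
      hist = np.random.randint(0, 500, 256).astype(float)  # stand-in histogram
      best = max(combinations(range(1, 256), 2),
                 key=lambda t: between_class_variance(hist, t))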

  6. Interaction Entropy: A New Paradigm for Highly Efficient and Reliable Computation of Protein-Ligand Binding Free Energy.

    PubMed

    Duan, Lili; Liu, Xiao; Zhang, John Z H

    2016-05-04

    Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
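
    The entropic term is extracted from fluctuations of the protein-ligand interaction energy E_int sampled along the MD trajectory. In the interaction entropy formulation it takes an exponential-averaging form (with β = 1/kT and angle brackets denoting the time average over simulation snapshots):

      -T \Delta S = k T \ln \left\langle e^{\beta \Delta E_{int}} \right\rangle, \qquad \Delta E_{int} = E_{int} - \left\langle E_{int} \right\rangle

    Since E_int is already computed at every snapshot of the production run, evaluating this average adds essentially no cost, in contrast to a separate normal-mode calculation.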

  7. GPU-accelerated element-free reverse-time migration with Gauss points partition

    NASA Astrophysics Data System (ADS)

    Zhou, Zhen; Jia, Xiaofeng; Qiang, Xiaodong

    2018-06-01

    An element-free method (EFM) has been demonstrated successfully in elasticity, heat conduction and fatigue crack growth problems. We present the theory of EFM and its numerical applications in seismic modelling and reverse time migration (RTM). Compared with the finite difference method and the finite element method, the EFM has unique advantages: (1) independence of grids in computation and (2) lower expense and more flexibility (because only the information of the nodes and the boundary of the concerned area is required). However, in EFM, the computation and storage of some large sparse matrices, such as the mass matrix and the stiffness matrix, make the method difficult to apply to seismic modelling and RTM for a large velocity model. To solve this problem of storage and computation efficiency, we propose a concept of Gauss points partition and utilise the graphics processing unit to improve the computational efficiency. We employ the compressed sparse row format to compress the intermediate large sparse matrices and attempt to simplify the operations by solving the linear equations with the CULA solver. To improve the computation efficiency further, we introduce the concept of the lumped mass matrix. Numerical experiments indicate that the proposed method is accurate and more efficient than the regular EFM.

  8. Triangular covariance factorizations for. Ph.D. Thesis. - Calif. Univ.

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.

    1976-01-01

    An improved computational form of the discrete Kalman filter is derived using an upper triangular factorization of the error covariance matrix. The covariance P is factored such that P = UDU^T, where U is unit upper triangular and D is diagonal. Recursions are developed for propagating the U-D covariance factors together with the corresponding state estimate. The resulting algorithm, referred to as the U-D filter, combines the superior numerical precision of square root filtering techniques with an efficiency comparable to that of Kalman's original formula. Moreover, this method is easily implemented and involves no more computer storage than the Kalman algorithm. These characteristics make the U-D method an attractive real-time filtering technique. A new covariance error analysis technique is obtained from an extension of the U-D filter equations. This evaluation method is flexible and efficient and may provide significantly improved numerical results. Cost comparisons show that for a large class of problems the U-D evaluation algorithm is noticeably less expensive than conventional error analysis methods.
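
    A minimal sketch of the factorization itself in Python (a Bierman-style backward recursion, not the thesis code):

      import numpy as np

      def udu_factor(P):
          # factor symmetric positive-definite P as P = U @ diag(d) @ U.T,
          # with U unit upper triangular, processing columns from right to left
          P = np.array(P, dtype=float)
          n = P.shape[0]
          U, d = np.eye(n), np.zeros(n)
          for j in range(n - 1, -1, -1):
              d[j] = P[j, j] - np.sum(d[j+1:] * U[j, j+1:] ** 2)
              for i in range(j):
                  U[i, j] = (P[i, j] - np.sum(d[j+1:] * U[i, j+1:] * U[j, j+1:])) / d[j]
          return U, d

      # quick check on a random SPD matrix
      A = np.random.rand(4, 4)
      P = A @ A.T + 4 * np.eye(4)
      U, d = udu_factor(P)
      assert np.allclose(U @ np.diag(d) @ U.T, P)

    Propagating U and d instead of P avoids the loss of positive-definiteness that plagues the standard covariance update in finite precision.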

  9. Development of an upwind, finite-volume code with finite-rate chemistry

    NASA Technical Reports Server (NTRS)

    Molvik, Gregory A.

    1994-01-01

    Under this grant, two numerical algorithms were developed to predict the flow of viscous, hypersonic, chemically reacting gases over three-dimensional bodies. Both algorithms take advantage of the benefits of upwind differencing, total variation diminishing techniques, and a finite-volume framework, but obtain their solution in two separate manners. The first algorithm is a zonal, time-marching scheme, and is generally used to obtain solutions in the subsonic portions of the flow field. The second algorithm is a much less expensive, space-marching scheme and can be used for the computation of the larger, supersonic portion of the flow field. Both codes compute their interface fluxes with a temporal Riemann solver and the resulting schemes are made fully implicit including the chemical source terms and boundary conditions. Strong coupling is used between the fluid dynamic, chemical, and turbulence equations. These codes have been validated on numerous hypersonic test cases and have provided excellent comparison with existing data.

  10. EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.

    PubMed

    Hadinia, M; Jafari, R; Soleimani, M

    2016-06-01

    This paper presents the application of the hybrid finite element-element free Galerkin (FE-EFG) method for the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. Finite element (FE) and element-free Galerkin (EFG) methods are accurate numerical techniques. However, the FE technique has meshing task problems and the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to take both advantages of FE and EFG methods, the complete electrode model of the forward problem is solved, and an iterative regularized Gauss-Newton method is adopted to solve the inverse problem. The proposed method is applied to compute the Jacobian in the inverse problem. Utilizing 2D circular homogeneous models, the numerical results are validated with analytical and experimental results, and the performance of the hybrid FE-EFG method compared with the FE method is illustrated. Results of image reconstruction are presented for a human chest experimental phantom.

  11. An adaptive tau-leaping method for stochastic simulations of reaction-diffusion systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Padgett, Jill M. A.; Ilie, Silvana, E-mail: silvana@ryerson.ca

    2016-03-15

    Stochastic modelling is critical for studying many biochemical processes in a cell, in particular when some reacting species have low population numbers. For many such cellular processes the spatial distribution of the molecular species plays a key role. The evolution of spatially heterogeneous biochemical systems with some species in low amounts is accurately described by the mesoscopic model of the Reaction-Diffusion Master Equation. The Inhomogeneous Stochastic Simulation Algorithm provides an exact strategy to numerically solve this model, but it is computationally very expensive for realistic applications. We propose a novel adaptive time-stepping scheme for the tau-leaping method for approximating the solution of the Reaction-Diffusion Master Equation. This technique combines effective strategies for variable time-stepping with path preservation to reduce the computational cost, while maintaining the desired accuracy. The numerical tests on various examples arising in applications show the improved efficiency achieved by the new adaptive method.
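
    The basic, non-adaptive tau-leap that the adaptive scheme builds on advances every reaction channel at once with Poisson-distributed firing counts. A generic Python sketch for a well-mixed toy system (the paper's method adds diffusion events and adaptive step-size selection):

      import numpy as np

      rng = np.random.default_rng(1)

      def tau_leap_step(x, tau, stoich, propensities):
          # fire each channel j a Poisson(a_j(x) * tau) number of times
          a = propensities(x)
          k = rng.poisson(a * tau)        # firing counts per channel
          return x + stoich.T @ k         # stoich[j] is channel j's state change

      # toy dimerization system: S1 + S1 -> S2 (rate c1), S2 -> S1 + S1 (rate c2)
      stoich = np.array([[-2, 1],
                         [ 2, -1]])
      props = lambda x: np.array([0.001 * x[0] * (x[0] - 1) / 2, 0.01 * x[1]])
      x = np.array([1000, 0])
      for _ in range(100):
          # naive clamp against negative populations; real schemes handle
          # negativity and step-size selection far more carefully
          x = np.maximum(tau_leap_step(x, 0.1, stoich, props), 0)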

  12. A modeling study of the time-averaged electric currents in the vicinity of isolated thunderstorms

    NASA Technical Reports Server (NTRS)

    Driscoll, Kevin T.; Blakeslee, Richard J.; Baginski, Michael E.

    1992-01-01

    A thorough examination of the results of a time-dependent computer model of a dipole thunderstorm revealed that there are numerous similarities between the time-averaged electrical properties and the steady-state properties of an active thunderstorm. Thus, the electrical behavior of the atmosphere in the vicinity of a thunderstorm can be determined with a formulation similar to what was first described by Holzer and Saxon (1952). From the Maxwell continuity equation of electric current, a simple analytical equation was derived that expresses a thunderstorm's average current contribution to the global electric circuit in terms of the generator current within the thundercloud, the intracloud lightning current, the cloud-to-ground lightning current, the altitudes of the charge centers, and the conductivity profile of the atmosphere. This equation was found to be nearly as accurate as the more computationally expensive numerical model, even when it is applied to a thunderstorm with a reduced conductivity thundercloud, a time-varying generator current, a varying flash rate, and a changing lightning mix.

  13. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  14. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  15. Efficient Semiparametric Inference Under Two-Phase Sampling, With Applications to Genetic Association Studies.

    PubMed

    Tao, Ran; Zeng, Donglin; Lin, Dan-Yu

    2017-01-01

    In modern epidemiological and clinical studies, the covariates of interest may involve genome sequencing, biomarker assay, or medical imaging and thus are prohibitively expensive to measure on a large number of subjects. A cost-effective solution is the two-phase design, under which the outcome and inexpensive covariates are observed for all subjects during the first phase and that information is used to select subjects for measurements of expensive covariates during the second phase. For example, subjects with extreme values of quantitative traits were selected for whole-exome sequencing in the National Heart, Lung, and Blood Institute (NHLBI) Exome Sequencing Project (ESP). Herein, we consider general two-phase designs, where the outcome can be continuous or discrete, and inexpensive covariates can be continuous and correlated with expensive covariates. We propose a semiparametric approach to regression analysis by approximating the conditional density functions of expensive covariates given inexpensive covariates with B-spline sieves. We devise a computationally efficient and numerically stable EM-algorithm to maximize the sieve likelihood. In addition, we establish the consistency, asymptotic normality, and asymptotic efficiency of the estimators. Furthermore, we demonstrate the superiority of the proposed methods over existing ones through extensive simulation studies. Finally, we present applications to the aforementioned NHLBI ESP.

  16. Final Report of the Project "From the finite element method to the virtual element method"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manzini, Gianmarco; Gyrya, Vitaliy

    The Finite Element Method (FEM) is a powerful numerical tool that is being used in a large number of engineering applications. The FEM is constructed on triangular/tetrahedral and quadrilateral/hexahedral meshes. Extending the FEM to general polygonal/polyhedral meshes in a straightforward way turns out to be extremely difficult and leads to very complex and computationally expensive schemes. The reason for this failure is that the construction of the basis functions on elements with a very general shape is a non-trivial and complex task. In this project we developed a new family of numerical methods, dubbed the Virtual Element Method (VEM), for the numerical approximation of partial differential equations (PDE) of elliptic type suitable to polygonal and polyhedral unstructured meshes. We successfully formulated, implemented and tested these methods and studied both theoretically and numerically their stability, robustness and accuracy for diffusion problems, convection-reaction-diffusion problems, the Stokes equations and the biharmonic equations.

  17. 76 FR 56401 - Stainless Steel Bar from India: Final Results of the Antidumping Duty Administrative Review, and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-13

    ... Sieves' general and administrative expense rate to exclude from the numerator the portion of the director remuneration expense reported as a selling expense; (11) we increased Hindustan's reported cost of manufacture...

  18. Proctalgia fugax: would you recognize it?

    PubMed

    Babb, R R

    1996-04-01

    Proctalgia fugax is characterized by sudden and sometimes severe rectal pain that occurs by day or night at irregular intervals. The pain results from dysfunction of the internal anal sphincter. Proctalgia fugax has a uniform clinical picture, and it can be easily diagnosed when recognized. The patient can be assured that nothing serious is wrong. Expensive tests, such as computed tomography or magnetic resonance imaging of the pelvis, are not required. Treatment may be difficult, but if the attacks of pain are numerous and severe, a calcium channel blocker such as nifedipine (Adalat, Procardia) should be tried.

  19. Decoupled CFD-based optimization of efficiency and cavitation performance of a double-suction pump

    NASA Astrophysics Data System (ADS)

    Škerlavaj, A.; Morgut, M.; Jošt, D.; Nobile, E.

    2017-04-01

    In this study the impeller geometry of a double-suction pump ensuring the best performance in terms of hydraulic efficiency and resistance to cavitation is determined using an optimization strategy driven by the modeFRONTIER optimization platform. The different impeller shapes (designs) are modified according to the optimization parameters and tested with a computational fluid dynamics (CFD) software, namely ANSYS CFX. The simulations are performed using a decoupled approach, where only the impeller domain region is numerically investigated, for computational convenience. The flow losses in the volute are estimated on the basis of the velocity distribution at the impeller outlet. The best designs are then validated considering the computationally more expensive full-geometry CFD model. The overall results show that the proposed approach is suitable for quick impeller shape optimization.

  20. On modelling three-dimensional piezoelectric smart structures with boundary spectral element method

    NASA Astrophysics Data System (ADS)

    Zou, Fangxin; Aliabadi, M. H.

    2017-05-01

    The computational efficiency of the boundary element method in elastodynamic analysis can be significantly improved by employing high-order spectral elements for boundary discretisation. In this work, for the first time, the so-called boundary spectral element method is utilised to model the piezoelectric smart structures that are widely used in structural health monitoring (SHM) applications. The resultant boundary spectral element formulation has been validated by the finite element method (FEM) and physical experiments. The new formulation has demonstrated a lower demand on computational resources and a higher numerical stability than commercial FEM packages. Compared to the conventional boundary element formulation, a significant reduction in computational expense has been achieved. In summary, the boundary spectral element formulation presented in this paper provides a highly efficient and stable mathematical tool for the development of SHM applications.

  1. Tractable Experiment Design via Mathematical Surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.

    This presentation summarizes the development and implementation of quantitative design criteria motivated by targeted inference objectives for identifying new, potentially expensive computational or physical experiments. The first application is concerned with estimating features of quantities of interest arising from complex computational models, such as quantiles or failure probabilities. A sequential strategy is proposed for iterative refinement of the importance distributions used to efficiently sample the uncertain inputs to the computational model. In the second application, effective use of mathematical surrogates is investigated to help alleviate the analytical and numerical intractability often associated with Bayesian experiment design. This approach allows for the incorporation of prior information into the design process without the need for gross simplification of the design criterion. Illustrative examples of both design problems will be presented as an argument for the relevance of these research problems.

  2. A DIY Ultrasonic Signal Generator for Sound Experiments

    NASA Astrophysics Data System (ADS)

    Riad, Ihab F.

    2018-02-01

    Many physics departments around the world have electronic and mechanical workshops attached to them that can help build experimental setups and instruments for research and the training of undergraduate students. The workshops are usually run by experienced technicians and equipped with expensive lathes, computer numerical control (CNC) machines, electrical measuring instruments, and several other essential tools. However, in developing countries such as Sudan, the lack of qualified technicians and adequately equipped workshops hampers efforts by these departments to supplement their laboratories with the equipment they need. The only other option is to buy the needed equipment from specialized manufacturers. The latter option is not feasible for departments in developing countries, where funding for education and research is scarce and equipment from these manufacturers is typically too expensive. These departments struggle significantly to equip undergraduate teaching laboratories, and here we propose one way to address this.

  3. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    NASA Astrophysics Data System (ADS)

    Schaerer, Roman Pascal; Bansal, Pratyuksh; Torrilhon, Manuel

    2017-07-01

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following an approach similar to that of Garrett et al. (2015) [13], we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by mapping its inherent fine-grained parallelism onto the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
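
    The expensive kernel is the moment evaluation of the maximum-entropy ansatz. For a moment vector ρ and polynomial basis m(v), the closure solves the convex dual problem (standard form of the maximum-entropy closure; the notation here is illustrative):

      \hat{f}(v) = e^{\alpha^{T} m(v)}, \qquad \alpha^{*} = \arg\min_{\alpha} \left\{ \int e^{\alpha^{T} m(v)} \, dv - \alpha^{T} \rho \right\}

    The integrals are evaluated by numerical quadrature at every Newton step in α, and it is exactly this fine-grained, embarrassingly parallel quadrature that maps well onto GPUs.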

  4. Petascale supercomputing to accelerate the design of high-temperature alloys

    DOE PAGES

    Shin, Dongwon; Lee, Sangkeun; Shyam, Amit; ...

    2017-10-25

    Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. As a result, the approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.
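
    The final step, regressing DFT segregation energies on cheap elemental descriptors, can be sketched with synthetic data (the descriptor set and model choice here are illustrative, not the paper's):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Stand-in dataset: rows are solute elements, columns are elemental
# descriptors (radius, electronegativity, cohesive energy, ...); the
# target is the DFT segregation energy. All values here are synthetic.
X = rng.normal(size=(34, 6))
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=34)

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(scores.mean())  # how well cheap descriptors predict the DFT target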

  5. Petascale supercomputing to accelerate the design of high-temperature alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, Dongwon; Lee, Sangkeun; Shyam, Amit

    Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. As a result, the approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.

  6. Petascale supercomputing to accelerate the design of high-temperature alloys

    NASA Astrophysics Data System (ADS)

    Shin, Dongwon; Lee, Sangkeun; Shyam, Amit; Haynes, J. Allen

    2017-12-01

    Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ′-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. The approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.

  7. An O(N²) method for computing the eigensystem of N by N symmetric tridiagonal matrices by the divide and conquer approach

    NASA Technical Reports Server (NTRS)

    Gill, Doron; Tadmor, Eitan

    1988-01-01

    An efficient method is proposed to solve the eigenproblem of N by N Symmetric Tridiagonal (ST) matrices. Unlike the standard eigensolvers, which require O(N³) operations to compute the eigenvectors of such ST matrices, the proposed method computes both the eigenvalues and eigenvectors with only O(N²) operations. The method is based on a serial implementation of the recently introduced Divide and Conquer (DC) algorithm. It exploits the fact that with O(N²) DC operations, one can compute the eigenvalues of an N by N ST matrix and a finite number of pairs of successive rows of its eigenvector matrix. The rest of the eigenvectors (all of them, or one at a time) are computed by linear three-term recurrence relations. Numerical examples are presented which demonstrate the superiority of the proposed method, saving an order of magnitude in execution time at the expense of a few orders of magnitude in accuracy.
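
    The O(N) recovery of one eigenvector from the recurrence can be sketched directly (toy matrix; the seed entries in the paper come from the DC process itself, whereas here the vector is seeded from the first matrix row):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
a = rng.normal(size=n)            # diagonal of the ST matrix
b = rng.uniform(0.5, 1.5, n - 1)  # off-diagonals (nonzero => unreduced)

T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
lam = np.linalg.eigvalsh(T)[0]    # one eigenvalue, e.g. from divide and conquer

# Recover its eigenvector by the three-term recurrence
#   b[i-1]*v[i-1] + a[i]*v[i] + b[i]*v[i+1] = lam * v[i],
# seeded from the first row: a[0]*v[0] + b[0]*v[1] = lam * v[0].
v = np.zeros(n)
v[0] = 1.0
v[1] = (lam - a[0]) * v[0] / b[0]
for i in range(1, n - 1):
    v[i + 1] = ((lam - a[i]) * v[i] - b[i - 1] * v[i - 1]) / b[i]
v /= np.linalg.norm(v)

print(np.max(np.abs(T @ v - lam * v)))  # O(N) work per eigenvector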

  8. A high-order multiscale finite-element method for time-domain acoustic-wave modeling

    NASA Astrophysics Data System (ADS)

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-05-01

    Accurate and efficient wave equation modeling is vital for many applications in fields such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid points in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on the multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss-Lobatto-Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.

  9. A high-order multiscale finite-element method for time-domain acoustic-wave modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    Accurate and efficient wave equation modeling is vital for many applications in fields such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid points in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on the multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss–Lobatto–Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.

  10. A high-order multiscale finite-element method for time-domain acoustic-wave modeling

    DOE PAGES

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-02-04

    Accurate and efficient wave equation modeling is vital for many applications in fields such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid points in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on the multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss–Lobatto–Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.
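
    For orientation, the local problems that generate multiscale basis functions can be stated compactly. The following is the standard first-order construction in our own schematic notation; the paper's high-order variant ties the boundary data g_i to Gauss-Lobatto-Legendre points on the coarse element:

```latex
% Local problem on a coarse element K with fine-scale coefficient kappa;
% one basis function phi_i per boundary datum g_i.
\begin{aligned}
  -\nabla \cdot \bigl( \kappa(\mathbf{x}) \, \nabla \phi_i \bigr) &= 0
    && \text{in } K, \\
  \phi_i &= g_i && \text{on } \partial K .
\end{aligned}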

  11. 24 CFR 990.170 - Computation of utilities expense level (UEL): Overview.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... level (UEL): Overview. 990.170 Section 990.170 Housing and Urban Development Regulations Relating to... Expenses § 990.170 Computation of utilities expense level (UEL): Overview. (a) General. The UEL for each... by the payable consumption level multiplied by the inflation factor. The UEL is expressed in terms of...

  12. Iterative methods for 3D implicit finite-difference migration using the complex Padé approximation

    NASA Astrophysics Data System (ADS)

    Costa, Carlos A. N.; Campos, Itamara S.; Costa, Jessé C.; Neto, Francisco A.; Schleicher, Jörg; Novais, Amélia

    2013-08-01

    Conventional implementations of 3D finite-difference (FD) migration use splitting techniques to accelerate performance and save computational cost. However, such techniques are plagued with numerical anisotropy that jeopardises the correct positioning of dipping reflectors in the directions not used for the operator splitting. We implement 3D downward continuation FD migration without splitting using a complex Padé approximation. In this way, the numerical anisotropy is eliminated at the expense of a computationally more intensive solution of a large-band linear system. We compare the performance of the iterative stabilized biconjugate gradient (BICGSTAB) and that of the multifrontal massively parallel direct solver (MUMPS). It turns out that the use of the complex Padé approximation not only stabilizes the solution, but also acts as an effective preconditioner for the BICGSTAB algorithm, reducing the number of iterations as compared to the implementation using the real Padé expansion. As a consequence, the iterative BICGSTAB method is more efficient than the direct MUMPS method when solving a single term in the Padé expansion. The results of both algorithms, here evaluated by computing the migration impulse response in the SEG/EAGE salt model, are of comparable quality.
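
    The trade-off the authors measure, an iterative Krylov solver against a sparse direct factorization on a single Padé term, can be sketched generically (synthetic matrix values; SuperLU stands in here for MUMPS):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab, splu

# Synthetic banded complex system standing in for one Pade term of the
# downward-continuation operator (values illustrative only).
n = 2000
A = diags([np.full(n - 1, -1.0 + 0.1j),
           np.full(n, 2.5 + 0.3j),
           np.full(n - 1, -1.0 + 0.1j)],
          offsets=[-1, 0, 1], format="csc")
b = np.ones(n, dtype=complex)

x_direct = splu(A).solve(b)       # direct factorisation (MUMPS-like role)
x_iter, info = bicgstab(A, b)     # iterative Krylov solve
print(info, np.max(np.abs(x_iter - x_direct)))  # info == 0 on convergence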

  13. Adaptive error covariances estimation methods for ensemble Kalman filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhen, Yicun, E-mail: zhen@math.psu.edu; Harlim, John, E-mail: jharlim@psu.edu

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics that can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive inversion of error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to that of a recently proposed method by Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger and Berry–Sauer schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of accurate estimates than Berry–Sauer's method on the L-96 example.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ

    Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
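
    The detection idea, estimating local regularity from how wavelet modulus maxima decay across scales, can be sketched with the PyWavelets package (toy signal; the scale-exponent offset depends on the CWT normalisation, so the fitted slope should be read as a relative regularity indicator, not a calibrated Lipschitz exponent):

```python
import numpy as np
import pywt

# Signal with an isolated singularity |t - t0|**alpha whose local
# regularity we estimate from wavelet maxima across scales.
t = np.linspace(0.0, 1.0, 4096)
t0, alpha = 0.5, 0.6
sig = np.abs(t - t0) ** alpha

scales = np.arange(4, 128)
coef, _ = pywt.cwt(sig, scales, "gaus2")

# Track the modulus maximum near the singularity at each scale; the
# log-log slope tracks the local regularity.
i0 = np.argmin(np.abs(t - t0))
maxima = np.max(np.abs(coef[:, i0 - 50:i0 + 50]), axis=1)
slope = np.polyfit(np.log(scales), np.log(maxima), 1)[0]
print(slope)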

  15. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters numerous, conventional methods for solving inverse problems can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, so that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system from scratch for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply the new method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Our new inverse modeling method is therefore a powerful tool for large-scale applications.
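
    A minimal sketch of the recycling idea, in our own notation and with a dense toy Jacobian: one Golub-Kahan bidiagonalisation of J is reused to solve the damped system for every Levenberg-Marquardt parameter in a k-dimensional subspace (the paper's actual projection may differ in detail):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 400, 200, 30
J = rng.normal(size=(m, n))   # Jacobian at the current LM iterate
r = rng.normal(size=m)        # residual vector

# Golub-Kahan bidiagonalisation of J, computed once: J @ V = U @ B,
# with B lower bidiagonal of size (k+1, k).
U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
beta = np.linalg.norm(r); U[:, 0] = r / beta
v = J.T @ U[:, 0]; alpha = np.linalg.norm(v); V[:, 0] = v / alpha
B[0, 0] = alpha
for i in range(k):
    u = J @ V[:, i] - alpha * U[:, i]
    bnext = np.linalg.norm(u); U[:, i + 1] = u / bnext
    B[i + 1, i] = bnext
    if i < k - 1:
        v = J.T @ U[:, i + 1] - bnext * V[:, i]
        alpha = np.linalg.norm(v); V[:, i + 1] = v / alpha
        B[i + 1, i + 1] = alpha

# Recycle the subspace for every damping parameter: each step solves
# min ||B y - beta*e1||^2 + lam*||y||^2 in k dimensions instead of n,
# i.e. the damped normal equations (J^T J + lam I) s = J^T r with s = V y.
e1 = np.zeros(k + 1); e1[0] = beta
for lam in (1e-2, 1e-1, 1.0):
    M = np.vstack([B, np.sqrt(lam) * np.eye(k)])
    rhs = np.concatenate([e1, np.zeros(k)])
    y, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    print(lam, np.linalg.norm(V @ y))   # step length for each damping value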

  16. An efficient and guaranteed stable numerical method for continuous modeling of infiltration and redistribution with a shallow dynamic water table

    NASA Astrophysics Data System (ADS)

    Lai, Wencong; Ogden, Fred L.; Steinke, Robert C.; Talbot, Cary A.

    2015-03-01

    We have developed a one-dimensional numerical method to simulate infiltration and redistribution in the presence of a shallow dynamic water table. This method builds upon the Green-Ampt infiltration with Redistribution (GAR) model and incorporates features from the Talbot-Ogden (T-O) infiltration and redistribution method in a discretized moisture content domain. The redistribution scheme is more physically meaningful than the capillary-weighted redistribution scheme in the T-O method. Groundwater dynamics are considered in this new method instead of assuming a hydrostatic groundwater front, and it is also computationally more efficient than the T-O method. Motion of water in the vadose zone due to infiltration, redistribution, and interactions with capillary groundwater is described by ordinary differential equations. Numerical solutions to these equations are computationally less expensive than solutions of the highly nonlinear Richards (1931) partial differential equation. We present results from numerical tests on 11 soil types using multiple rain pulses with different boundary conditions, with and without a shallow water table, and compare against the numerical solution of Richards' equation (RE). Results from the new method are in satisfactory agreement with RE solutions in terms of ponding time, deponding time, infiltration rate, and cumulative infiltrated depth. The new method, which we call "GARTO", can be used as an alternative to the RE for 1-D coupled surface and groundwater models in general situations with homogeneous soils and a dynamic water table. The GARTO method represents a significant advance in simulating groundwater-surface water interactions because it very closely matches the RE solution while being computationally efficient, with guaranteed mass conservation and none of the stability limitations that can affect RE solvers in the case of a near-surface water table.
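
    For orientation, the kind of inexpensive ODE that replaces the Richards PDE can be illustrated with the classical Green-Ampt infiltration equation that GAR builds on (parameter values illustrative only):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Green-Ampt infiltration, the classical core GAR extends: cumulative
# infiltrated depth F obeys  dF/dt = Ks * (1 + psi * dtheta / F).
Ks, psi, dtheta = 1.0e-6, 0.11, 0.3   # illustrative values (m/s, m, -)

def rhs(t, F):
    return [Ks * (1.0 + psi * dtheta / F[0])]

sol = solve_ivp(rhs, (0.0, 3600.0), [1.0e-4], max_step=10.0)
print(sol.y[0, -1])   # infiltrated depth (m) after one hour of ponding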

  17. An efficient hybrid pseudospectral/finite-difference scheme for solving the TTI pure P-wave equation

    NASA Astrophysics Data System (ADS)

    Zhan, Ge; Pestana, Reynam C.; Stoffa, Paul L.

    2013-04-01

    The pure P-wave equation for modelling and migration in tilted transversely isotropic (TTI) media has attracted more and more attention in imaging seismic data with anisotropy. Its desirable feature is that it is absolutely free of shear-wave artefacts, with the consequent alleviation of the numerical instabilities generally suffered by some systems of coupled equations. However, due to several forward-backward Fourier transforms in the wavefield update at each time step, the computational cost is significant, which hampers its widespread use. We propose to use a hybrid pseudospectral (PS) and finite-difference (FD) scheme to solve the pure P-wave equation. In the hybrid solution, most of the cost-consuming wavenumber terms in the equation are replaced by inexpensive FD operators, which in turn accelerates the computation and reduces the computational cost. To demonstrate the cost saving of the new scheme, 2D and 3D reverse-time migration (RTM) examples using the hybrid solution to the pure P-wave equation are carried out, and the respective runtimes are listed and compared. Numerical results show that the hybrid strategy demands less computation time and is faster than using the PS method alone. Furthermore, this new TTI RTM algorithm with the hybrid method is computationally less expensive than that with the FD solution to conventional TTI coupled equations.

  18. 47 CFR 32.6124 - General purpose computers expense.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...

  19. 47 CFR 32.6124 - General purpose computers expense.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...

  20. 47 CFR 32.6124 - General purpose computers expense.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...

  1. 47 CFR 32.6124 - General purpose computers expense.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...

  2. 47 CFR 32.6124 - General purpose computers expense.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...

  3. Semi-Numerical Studies of the Three-Meter Spherical Couette Experiment Utilizing Data Assimilation

    NASA Astrophysics Data System (ADS)

    Burnett, S. C.; Rojas, R.; Perevalov, A.; Lathrop, D. P.

    2017-12-01

    The model of the Earth's magnetic field has been investigated in recent years through experiments and numerical models. At the University of Maryland, experimental studies are implemented in a three-meter spherical Couette device filled with liquid sodium. The inner and outer spheres of this apparatus mimic the planet's inner core and core-mantle boundary, respectively. These experiments incorporate high velocity flows with Reynolds numbers of 10⁸. In spherical Couette geometry, the numerical scheme applied to this work features finite difference methods in the radial direction and pseudospectral spherical harmonic transforms elsewhere [Schaeffer, N. G3 (2013)]. Adding to the numerical model, data assimilation integrates the experimental outer-layer magnetic field measurements. This semi-numerical model can then be compared to the experimental results as well as forecasting magnetic field changes. Data assimilation makes it possible to get estimates of internal motions of the three-meter experiment that would otherwise be intrusive or impossible to obtain in experiments or too computationally expensive with a purely numerical code. If we can provide accurate models of the three-meter device, it is possible to attempt to model the geomagnetic field. We gratefully acknowledge the support of NSF Grant No. EAR1417148 & DGE1322106.

  4. Semi-Numerical Studies of the Three-Meter Spherical Couette Experiment Utilizing Data Assimilation

    NASA Astrophysics Data System (ADS)

    Burnett, Sarah; Rojas, Ruben; Perevalov, Artur; Lathrop, Daniel; Ide, Kayo; Schaeffer, Nathanael

    2017-11-01

    The model of the Earth's magnetic field has been investigated in recent years through experiments and numerical models. At the University of Maryland, experimental studies are implemented in a three-meter spherical Couette device filled with liquid sodium. The inner and outer spheres of this apparatus mimic the planet's inner core and core-mantle boundary, respectively. These experiments incorporate high velocity flows with Reynolds numbers of 10⁸. In spherical Couette geometry, the numerical scheme applied to this work features finite difference methods in the radial direction and pseudospectral spherical harmonic transforms elsewhere. Adding to the numerical model, data assimilation integrates the experimental outer-layer magnetic field measurements. This semi-numerical model can then be compared to the experimental results as well as forecasting magnetic field changes. Data assimilation makes it possible to get estimates of internal motions of the three-meter experiment that would otherwise be intrusive or impossible to obtain in experiments or too computationally expensive with a purely numerical code. If we can provide accurate models of the three-meter device, it is possible to attempt to model the geomagnetic field. We gratefully acknowledge the support of NSF Grant No. EAR1417148 & DGE1322106.

  5. Permeability Sensitivity Functions and Rapid Simulation of Hydraulic-Testing Measurements Using Perturbation Theory

    NASA Astrophysics Data System (ADS)

    Escobar Gómez, J. D.; Torres-Verdín, C.

    2018-03-01

    Single-well pressure-diffusion simulators enable improved quantitative understanding of hydraulic-testing measurements in the presence of arbitrary spatial variations of rock properties. Simulators of this type implement robust numerical algorithms which are often computationally expensive, thereby making the solution of the forward modeling problem onerous and inefficient. We introduce a time-domain perturbation theory for anisotropic permeable media to efficiently and accurately approximate the transient pressure response of spatially complex aquifers. Although theoretically valid for any spatially dependent rock/fluid property, our single-phase flow study emphasizes arbitrary spatial variations of permeability and anisotropy, which constitute key objectives of hydraulic-testing operations. Contrary to time-honored techniques, the perturbation method invokes pressure-flow deconvolution to compute the background medium's permeability sensitivity function (PSF) with a single numerical simulation run. Subsequently, the first-order term of the perturbed solution is obtained by solving an integral equation that weighs the spatial variations of permeability with the spatial-dependent and time-dependent PSF. Finally, discrete convolution transforms the constant-flow approximation to arbitrary multirate conditions. Multidimensional numerical simulation studies for a wide range of single-well field conditions indicate that perturbed solutions can be computed in less than a few CPU seconds with relative errors in pressure of <5%, corresponding to perturbations in background permeability of up to two orders of magnitude. Our work confirms that the proposed joint perturbation-convolution (JPC) method is an efficient alternative to analytical and numerical solutions for accurate modeling of pressure-diffusion phenomena induced by Neumann or Dirichlet boundary conditions.
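
    In our own schematic notation (the paper's operator definitions may differ), the two ingredients are a first-order correction weighted by the permeability sensitivity function G_k, and a multirate superposition of the unit-rate response p_u:

```latex
% First-order correction via the permeability sensitivity function G_k:
p(t) \approx p_0(t) + \int_V G_k(\mathbf{x}, t)\, \delta k(\mathbf{x}) \,\mathrm{d}V
% Multirate superposition of the unit-rate (constant-flow) response p_u:
p_q(t) = \sum_i \left( q_i - q_{i-1} \right) p_u(t - t_i)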

  6. Efficient analytical implementation of the DOT Riemann solver for the de Saint Venant-Exner morphodynamic model

    NASA Astrophysics Data System (ADS)

    Carraro, F.; Valiani, A.; Caleffi, V.

    2018-03-01

    Within the framework of the de Saint Venant equations coupled with the Exner equation for morphodynamic evolution, this work presents a new efficient implementation of the Dumbser-Osher-Toro (DOT) scheme for non-conservative problems. The DOT path-conservative scheme is a robust upwind method based on a complete Riemann solver, but it has the drawback of requiring expensive numerical computations. Indeed, to compute the non-linear time evolution in each time step, the DOT scheme requires numerical computation of the flux matrix eigenstructure (the totality of eigenvalues and eigenvectors) several times at each cell edge. In this work, an analytical and compact formulation of the eigenstructure for the de Saint Venant-Exner (dSVE) model is introduced and tested in terms of numerical efficiency and stability. Using the original DOT and PRICE-C (a very efficient FORCE-type method) as reference methods, we present a convergence analysis (error against CPU time) to study the performance of the DOT method with our new analytical implementation of eigenstructure calculations (A-DOT). In particular, the numerical performance of the three methods is tested in three test cases: a movable bed Riemann problem with analytical solution; a problem with smooth analytical solution; a test in which the water flow is characterised by subcritical and supercritical regions. For a given target error, the A-DOT method is always the most efficient choice. Finally, two experimental data sets and different transport formulae are considered to test the A-DOT model in more practical case studies.

  7. Methods for simulation-based analysis of fluid-structure interaction.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barone, Matthew Franklin; Payne, Jeffrey L.

    2005-10-01

    Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations point to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.
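
    The POD/Galerkin construction can be sketched on a synthetic linear full-order model: the snapshot SVD supplies the basis, and projection yields a small system that replays the dynamics (toy operator; real FSI ROMs project nonlinear coupled equations):

```python
import numpy as np

rng = np.random.default_rng(4)

# Snapshot matrix: each column is a state saved from a (here, synthetic
# linear) full-order simulation x' = A x, advanced by explicit Euler.
n, steps, dt, r = 300, 200, 1e-3, 10
A = -np.eye(n) + 0.1 * rng.normal(size=(n, n))
X = np.zeros((n, steps)); X[:, 0] = rng.normal(size=n)
for j in range(steps - 1):
    X[:, j + 1] = X[:, j] + dt * (A @ X[:, j])

# POD: left singular vectors of the snapshots give the reduced basis.
Phi = np.linalg.svd(X, full_matrices=False)[0][:, :r]

# Galerkin projection: an r x r operator replaces the n x n one.
Ar = Phi.T @ A @ Phi
a = Phi.T @ X[:, 0]
for j in range(steps - 1):
    a = a + dt * (Ar @ a)

# Relative reconstruction error of the final state.
print(np.linalg.norm(Phi @ a - X[:, -1]) / np.linalg.norm(X[:, -1]))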

  8. Numerical relativity waveform surrogate model for generically precessing binary black hole mergers

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Field, Scott E.; Scheel, Mark A.; Galley, Chad R.; Ott, Christian D.; Boyle, Michael; Kidder, Lawrence E.; Pfeiffer, Harald P.; Szilágyi, Béla

    2017-07-01

    A generic, noneccentric binary black hole (BBH) system emits gravitational waves (GWs) that are completely described by seven intrinsic parameters: the black hole spin vectors and the ratio of their masses. Simulating a BBH coalescence by solving Einstein's equations numerically is computationally expensive, requiring days to months of computing resources for a single set of parameter values. Since theoretical predictions of the GWs are often needed for many different source parameters, a fast and accurate model is essential. We present the first surrogate model for GWs from the coalescence of BBHs including all seven dimensions of the intrinsic noneccentric parameter space. The surrogate model, which we call NRSur7dq2, is built from the results of 744 numerical relativity simulations. NRSur7dq2 covers spin magnitudes up to 0.8 and mass ratios up to 2, includes all ℓ≤4 modes, begins about 20 orbits before merger, and can be evaluated in ~50 ms. We find the largest NRSur7dq2 errors to be comparable to the largest errors in the numerical relativity simulations, and more than an order of magnitude smaller than the errors of other waveform models. Our model, and more broadly the methods developed here, will enable studies that were not previously possible when using highly accurate waveforms, such as parameter inference and tests of general relativity with GW observations.
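
    A toy of the general surrogate construction (not the NRSur7dq2 pipeline itself, which uses reduced bases and fits tuned to NR data): decompose training "waveforms" over a 1-D stand-in parameter, then interpolate the expansion coefficients across parameter space:

```python
import numpy as np
from scipy.interpolate import CubicSpline

t = np.linspace(0.0, 1.0, 500)
q_train = np.linspace(1.0, 10.0, 25)
H = np.array([np.sin(2 * np.pi * (3 + 0.3 * q) * t) * np.exp(-t / q)
              for q in q_train])              # synthetic training set

# SVD basis from the training waveforms; keep r modes.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
r = 12
coeffs = U[:, :r] * s[:r]                     # per-waveform coefficients

# Fit each coefficient as a smooth function of the parameter q.
fits = [CubicSpline(q_train, coeffs[:, j]) for j in range(r)]

def surrogate(q):
    # Millisecond evaluation in place of a full simulation at q.
    return np.array([f(q) for f in fits]) @ Vt[:r]

q_test = 4.3
h_true = np.sin(2 * np.pi * (3 + 0.3 * q_test) * t) * np.exp(-t / q_test)
print(np.max(np.abs(surrogate(q_test) - h_true)))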

  9. Statistical emulation of landslide-induced tsunamis at the Rockall Bank, NE Atlantic

    PubMed Central

    Guillas, S.; Georgiopoulou, A.; Dias, F.

    2017-01-01

    Statistical methods constitute a useful approach to understand and quantify the uncertainty that governs complex tsunami mechanisms. Numerical experiments may often have a high computational cost. This forms a limiting factor for performing uncertainty and sensitivity analyses, where numerous simulations are required. Statistical emulators, as surrogates of these simulators, can provide predictions of the physical process in a much faster and computationally inexpensive way. They can form a prominent solution to explore thousands of scenarios that would be otherwise numerically expensive and difficult to achieve. In this work, we build a statistical emulator of the deterministic codes used to simulate submarine sliding and tsunami generation at the Rockall Bank, NE Atlantic Ocean, in two stages. First we calibrate, against observations of the landslide deposits, the parameters used in the landslide simulations. This calibration is performed under a Bayesian framework using Gaussian Process (GP) emulators to approximate the landslide model, and the discrepancy function between model and observations. Distributions of the calibrated input parameters are obtained as a result of the calibration. In a second step, a GP emulator is built to mimic the coupled landslide-tsunami numerical process. The emulator propagates the uncertainties in the distributions of the calibrated input parameters inferred from the first step to the outputs. As a result, a quantification of the uncertainty of the maximum free surface elevation at specified locations is obtained. PMID:28484339
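
    The emulation step can be sketched with an off-the-shelf Gaussian Process regressor standing in for the tsunami-code emulator (the simulator function and its two inputs here are synthetic):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(6)

# A few expensive simulator runs (stand-in function of two landslide
# parameters) train a GP that predicts the output, with uncertainty,
# anywhere in the input space.
def simulator(theta):            # e.g. (slide volume, friction) -> max elevation
    return np.sin(3 * theta[:, 0]) * np.exp(-theta[:, 1])

X_train = rng.uniform(0.0, 1.0, size=(30, 2))
y_train = simulator(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                              normalize_y=True).fit(X_train, y_train)

X_new = rng.uniform(0.0, 1.0, size=(5, 2))
mean, sd = gp.predict(X_new, return_std=True)
print(np.c_[mean, sd])           # thousands of such queries cost ~nothing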

  10. A hybrid method combining the surface integral equation method and ray tracing for the numerical simulation of high frequency diffraction involved in ultrasonic NDT

    NASA Astrophysics Data System (ADS)

    Bonnet, M.; Collino, F.; Demaldent, E.; Imperiale, A.; Pesudo, L.

    2018-05-01

    Ultrasonic Non-Destructive Testing (US NDT) has become widely used in various fields of application to probe media. Exploiting surface measurements of the echoes of incident ultrasonic waves after their propagation through the medium, it allows the detection of potential defects (cracks and inhomogeneities) and the characterization of the medium. The understanding and interpretation of those experimental measurements is performed with the help of numerical modeling and simulations. However, classical numerical methods can become computationally very expensive for the simulation of wave propagation in the high frequency regime. On the other hand, asymptotic techniques are better suited to model high frequency scattering over large distances but nevertheless do not allow accurate simulation of complex diffraction phenomena. Thus, neither numerical nor asymptotic methods can individually solve high frequency diffraction problems in large media, such as those involved in US NDT inspections, both quickly and accurately, but their advantages and limitations are complementary. Here we propose a hybrid strategy coupling the surface integral equation method and the ray tracing method to simulate high frequency diffraction under speed and accuracy constraints. This strategy is general and applicable to simulate diffraction phenomena in acoustic or elastodynamic media. We provide its implementation and investigate its performance for the 2D acoustic diffraction problem. The main features of this hybrid method are described and results of 2D computational experiments discussed.

  11. Statistical emulation of landslide-induced tsunamis at the Rockall Bank, NE Atlantic.

    PubMed

    Salmanidou, D M; Guillas, S; Georgiopoulou, A; Dias, F

    2017-04-01

    Statistical methods constitute a useful approach to understand and quantify the uncertainty that governs complex tsunami mechanisms. Numerical experiments may often have a high computational cost. This forms a limiting factor for performing uncertainty and sensitivity analyses, where numerous simulations are required. Statistical emulators, as surrogates of these simulators, can provide predictions of the physical process in a much faster and computationally inexpensive way. They can form a prominent solution to explore thousands of scenarios that would be otherwise numerically expensive and difficult to achieve. In this work, we build a statistical emulator of the deterministic codes used to simulate submarine sliding and tsunami generation at the Rockall Bank, NE Atlantic Ocean, in two stages. First we calibrate, against observations of the landslide deposits, the parameters used in the landslide simulations. This calibration is performed under a Bayesian framework using Gaussian Process (GP) emulators to approximate the landslide model, and the discrepancy function between model and observations. Distributions of the calibrated input parameters are obtained as a result of the calibration. In a second step, a GP emulator is built to mimic the coupled landslide-tsunami numerical process. The emulator propagates the uncertainties in the distributions of the calibrated input parameters inferred from the first step to the outputs. As a result, a quantification of the uncertainty of the maximum free surface elevation at specified locations is obtained.

  12. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2012-10-01 2012-10-01 false Computers and data processing equipment (account...

  13. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2013-10-01 2013-10-01 false Computers and data processing equipment (account...

  14. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2011-10-01 2011-10-01 false Computers and data processing equipment (account...

  15. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2014-10-01 2014-10-01 false Computers and data processing equipment (account...

  16. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2010-10-01 2010-10-01 false Computers and data processing equipment (account...

  17. Development of homotopy algorithms for fixed-order mixed H2/H(infinity) controller synthesis

    NASA Technical Reports Server (NTRS)

    Whorton, M.; Buschek, H.; Calise, A. J.

    1994-01-01

    A major difficulty associated with H-infinity and mu-synthesis methods is the order of the resulting compensator. Whereas model and/or controller reduction techniques are sometimes applied, performance and robustness properties are not preserved. By directly constraining compensator order during the optimization process, these properties are better preserved, albeit at the expense of computational complexity. This paper presents a novel homotopy algorithm to synthesize fixed-order mixed H2/H-infinity compensators. Numerical results are presented for a four-disk flexible structure to evaluate the efficiency of the algorithm.
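
    The homotopy template, deforming an easy problem into the target while warm-starting each solve, can be sketched on a small algebraic system (the controller-synthesis version replaces f and g with matrix-valued optimality conditions; this is only the generic continuation idea):

```python
import numpy as np
from scipy.optimize import fsolve

# Homotopy H(x, t) = (1 - t) g(x) + t f(x): track the solution as t
# goes from 0 (easy problem with known root) to 1 (target problem).
def f(x):                         # target system (hard to solve cold)
    return np.array([x[0]**3 - 3 * x[0] * x[1]**2 - 1.0,
                     3 * x[0]**2 * x[1] - x[1]**3])

def g(x):                         # easy start problem with known root
    return x - np.array([2.0, 1.0])

x = np.array([2.0, 1.0])
for t in np.linspace(0.0, 1.0, 21):
    x = fsolve(lambda z: (1 - t) * g(z) + t * f(z), x)  # warm start
print(x, f(x))                    # ends at a root of the target system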

  18. Producing picture-perfect posters.

    PubMed

    Bach, D B; Vellet, A D; Karlik, S J; Downey, D B; Levin, M F; Munk, P L

    1993-06-01

    Scientific posters form an integral part of many radiology meetings. They provide the opportunity for interested parties to read the material at an individualized pace, to study the images in detail, and to return to the exhibit numerous times. Although the content of the poster is undoubtedly its most important component, the visual presentation of the material can enhance or detract from the clarity of the message. With the wide availability of sophisticated computer programs for desktop publishing (DTP), one can now create the poster on a computer monitor with full control of the form as well as the content. This process will result in a professional-appearing poster, yet still allow the author the opportunity to make innumerable revisions, as the poster is visualized in detail on the computer monitor before printing. Furthermore, this process is less expensive than the traditional method of typesetting individual sections separately and mounting them on cardboard for display. The purpose of this article is to present our approach to poster production using commercially available DTP computer programs.

  19. Fast reconstruction of optical properties for complex segmentations in near infrared imaging

    NASA Astrophysics Data System (ADS)

    Jiang, Jingjing; Wolf, Martin; Sánchez Majos, Salvador

    2017-04-01

    The intrinsic ill-posed nature of the inverse problem in near infrared imaging makes the reconstruction of fine details of objects deeply embedded in turbid media challenging even for the large amounts of data provided by time-resolved cameras. In addition, most reconstruction algorithms for this type of measurements are only suitable for highly symmetric geometries and rely on a linear approximation to the diffusion equation since a numerical solution of the fully non-linear problem is computationally too expensive. In this paper, we will show that a problem of practical interest can be successfully addressed making efficient use of the totality of the information supplied by time-resolved cameras. We set aside the goal of achieving high spatial resolution for deep structures and focus on the reconstruction of complex arrangements of large regions. We show numerical results based on a combined approach of wavelength-normalized data and prior geometrical information, defining a fully parallelizable problem in arbitrary geometries for time-resolved measurements. Fast reconstructions are obtained using a diffusion approximation and Monte-Carlo simulations, parallelized on a multicore computer and a GPU, respectively.

  20. Development of an upwind, finite-volume code with finite-rate chemistry

    NASA Technical Reports Server (NTRS)

    Molvik, Gregory A.

    1995-01-01

    Under this grant, two numerical algorithms were developed to predict the flow of viscous, hypersonic, chemically reacting gases over three-dimensional bodies. Both algorithms take advantage of the benefits of upwind differencing, total variation diminishing techniques and of a finite-volume framework, but obtain their solution in two separate manners. The first algorithm is a zonal, time-marching scheme, and is generally used to obtain solutions in the subsonic portions of the flow field. The second algorithm is a much less expensive, space-marching scheme and can be used for the computation of the larger, supersonic portion of the flow field. Both codes compute their interface fluxes with a temporal Riemann solver and the resulting schemes are made fully implicit, including the chemical source terms and boundary conditions. Strong coupling is used between the fluid dynamic, chemical and turbulence equations. These codes have been validated on numerous hypersonic test cases and have provided excellent comparisons with existing data. This report summarizes the research that took place from August 1, 1994 to January 1, 1995.

  1. Efficient and Robust Optimization for Building Energy Simulation

    PubMed Central

    Pourarian, Shokouh; Kearsley, Anthony; Wen, Jin; Pertzborn, Amanda

    2016-01-01

    Efficiently, robustly and accurately solving large sets of structured, non-linear algebraic and differential equations is one of the most computationally expensive steps in the dynamic simulation of building energy systems. Here, the efficiency, robustness and accuracy of two commonly employed solution methods are compared. The comparison is conducted using the HVACSIM+ software package, a component based building system simulation tool. The HVACSIM+ software presently employs Powell's Hybrid method to solve systems of nonlinear algebraic equations that model the dynamics of energy states and interactions within buildings. It is shown here that Powell's method does not always converge to a solution. Since a myriad of other numerical methods are available, the question arises as to which method is most appropriate for building energy simulation. This paper finds that considerable computational benefits result from replacing the Powell's Hybrid method solver in HVACSIM+ with a solver more appropriate for the challenges particular to numerical simulations of buildings. Evidence is provided that a variant of the Levenberg-Marquardt solver has superior accuracy and robustness compared to the Powell's Hybrid method presently used in HVACSIM+. PMID:27325907
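
    The comparison can be reproduced in miniature with SciPy, whose root finder exposes both Powell's hybrid method ("hybr", the MINPACK lineage that HVACSIM+-style solvers build on) and a Levenberg-Marquardt variant ("lm"); the test system is Powell's badly scaled problem, and the printed success flags make the contrast concrete (which solver succeeds can depend on the starting point):

```python
import numpy as np
from scipy.optimize import root

# Powell's badly scaled test system; solvers can behave differently here.
def residuals(x):
    return np.array([1e4 * x[0] * x[1] - 1.0,
                     np.exp(-x[0]) + np.exp(-x[1]) - 1.0001])

x0 = np.array([0.0, 1.0])
for method in ("hybr", "lm"):     # Powell hybrid vs Levenberg-Marquardt
    sol = root(residuals, x0, method=method)
    print(method, sol.success, sol.x, np.abs(residuals(sol.x)).max())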

  2. Efficient and Robust Optimization for Building Energy Simulation.

    PubMed

    Pourarian, Shokouh; Kearsley, Anthony; Wen, Jin; Pertzborn, Amanda

    2016-06-15

    Efficiently, robustly and accurately solving large sets of structured, non-linear algebraic and differential equations is one of the most computationally expensive steps in the dynamic simulation of building energy systems. Here, the efficiency, robustness and accuracy of two commonly employed solution methods are compared. The comparison is conducted using the HVACSIM+ software package, a component based building system simulation tool. The HVACSIM+ software presently employs Powell's Hybrid method to solve systems of nonlinear algebraic equations that model the dynamics of energy states and interactions within buildings. It is shown here that Powell's method does not always converge to a solution. Since a myriad of other numerical methods are available, the question arises as to which method is most appropriate for building energy simulation. This paper finds that considerable computational benefits result from replacing the Powell's Hybrid method solver in HVACSIM+ with a solver more appropriate for the challenges particular to numerical simulations of buildings. Evidence is provided that a variant of the Levenberg-Marquardt solver has superior accuracy and robustness compared to the Powell's Hybrid method presently used in HVACSIM+.

  3. Method of moments comparison for soot population modeling in turbulent combustion

    NASA Astrophysics Data System (ADS)

    Chong, Shao Teng; Im, Hong; Raman, Venkat

    2017-11-01

    Representation of the soot population is an important component in the efficient computational prediction of particulate emissions. However, there are a number of moments-based techniques with varying numerical complexity. In the past, development of such methods has been principally carried out on canonical laminar and 0-D flows. However, their application in realistic solvers developed for turbulent combustion may face challenges ranging from turbulence closure to the selection of moment sets. In this work, the accuracy and relative computational expense of several common soot methods of moments are tested in canonical turbulent flames for different configurations. Large eddy simulation (LES) will be used as the turbulence modeling framework. In grid-filtered LES, the interaction of numerical and modeling errors is a first-order problem that can undermine the accuracy of soot predictions. In the past, special moments-based methods have been developed for solvers that transport high-frequency-content fluid quantities, with the ability to reconstruct the particle size distribution. Here, a similar analysis will be carried out for the moment-based soot modeling approaches above. Specifically, the realizability of moments methods with nonlinear advection schemes will be discussed.
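
    One concrete form of the realizability requirement mentioned last: a transported moment set must remain the moment set of some nonnegative particle-size distribution, which for moments on [0, inf) reduces to positive semidefiniteness of the Hankel matrices (a standard Stieltjes criterion; the tolerance handling below is ours):

```python
import numpy as np

def is_realizable(m):
    """Check a moment set m = [m0..m4] for a nonnegative measure on
    [0, inf) via positive semidefinite Hankel matrices - the property
    that nonlinear advection schemes can break."""
    H0 = np.array([[m[0], m[1], m[2]],
                   [m[1], m[2], m[3]],
                   [m[2], m[3], m[4]]])
    H1 = np.array([[m[1], m[2]],
                   [m[2], m[3]]])
    return (np.linalg.eigvalsh(H0) >= -1e-12).all() and \
           (np.linalg.eigvalsh(H1) >= -1e-12).all()

# Moments of a lognormal-like size distribution are realizable...
x = np.random.default_rng(7).lognormal(size=100_000)
print(is_realizable([np.mean(x**k) for k in range(5)]))   # True
# ...but a corrupted moment set fails the check:
print(is_realizable([1.0, 1.0, 0.8, 1.2, 2.0]))           # False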

  4. Artificial Boundary Conditions Based on the Difference Potentials Method

    NASA Technical Reports Server (NTRS)

    Tsynkov, Semyon V.

    1996-01-01

    While numerically solving a problem initially formulated on an unbounded domain, one typically truncates this domain, which necessitates setting the artificial boundary conditions (ABC's) at the newly formed external boundary. The issue of setting the ABC's appears to be most significant in many areas of scientific computing, for example, in problems originating from acoustics, electrodynamics, solid mechanics, and fluid dynamics. In particular, in computational fluid dynamics (where external problems present a wide class of practically important formulations) the proper treatment of external boundaries may have a profound impact on the overall quality and performance of numerical algorithms. Most of the currently used techniques for setting the ABC's can basically be classified into two groups. The methods from the first group (global ABC's) usually provide high accuracy and robustness of the numerical procedure but often appear to be fairly cumbersome and (computationally) expensive. The methods from the second group (local ABC's) are, as a rule, algorithmically simple, numerically cheap, and geometrically universal; however, they usually lack accuracy of computations. In this paper we first present a survey and provide a comparative assessment of different existing methods for constructing the ABC's. Then, we describe a relatively new ABC's technique of ours and review the corresponding results. This new technique, in our opinion, is currently one of the most promising in the field. It enables one to construct such ABC's that combine the advantages relevant to the two aforementioned classes of existing methods. Our approach is based on application of the difference potentials method attributable to V. S. Ryaben'kii. This approach allows us to obtain highly accurate ABC's in the form of certain (nonlocal) boundary operator equations. The operators involved are analogous to the pseudodifferential boundary projections first introduced by A. P. Calderon and then also studied by R. T. Seeley. The apparatus of the boundary pseudodifferential equations, which has formerly been used mostly in the qualitative theory of integral equations and PDE'S, is now effectively employed for developing numerical methods in the different fields of scientific computing.

  5. Development of a Higher Order Laminate Theory for Modeling Composites with Induced Strain Actuators

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Seeley, Charles E.

    1996-01-01

    A refined higher order plate theory is developed to investigate the actuation mechanism of piezoelectric materials surface bonded or embedded in composite laminates. The current analysis uses a displacement field which accurately accounts for transverse shear stresses. Some higher order terms are identified by using the conditions that shear stresses vanish at all free surfaces. Therefore, all boundary conditions for displacements and stresses are satisfied in the present theory. The analysis is implemented using the finite element method which provides a convenient means to construct a numerical solution due to the discrete nature of the actuators. The higher order theory is computationally less expensive than a full three dimensional analysis. The theory is also shown to agree well with published experimental results. Numerical examples are presented for composite plates with thicknesses ranging from thin to very thick.

  6. Assessment of a high-resolution central scheme for the solution of the relativistic hydrodynamics equations

    NASA Astrophysics Data System (ADS)

    Lucas-Serrano, A.; Font, J. A.; Ibáñez, J. M.; Martí, J. M.

    2004-12-01

    We assess the suitability of a recent high-resolution central scheme developed by Kurganov & Tadmor for the solution of the relativistic hydrodynamic equations. The novelty of this approach lies in the absence of Riemann solvers in the solution procedure. The computations we present are performed in one and two spatial dimensions in Minkowski spacetime. Standard numerical experiments such as shock tubes and the relativistic flat-faced step test are performed. As an astrophysical application, the article includes two-dimensional simulations of the propagation of relativistic jets using both Cartesian and cylindrical coordinates. The simulations reported clearly show the capability of the numerical scheme to yield satisfactory results, with an accuracy comparable to that obtained by the so-called high-resolution shock-capturing schemes based upon Riemann solvers (Godunov-type schemes), even well inside the ultrarelativistic regime. Such a central scheme can be straightforwardly applied to hyperbolic systems of conservation laws for which the characteristic structure is not explicitly known, or in cases where a numerical computation of the exact solution of the Riemann problem is prohibitively expensive. Finally, we present comparisons with results obtained using various Godunov-type schemes as well as with those obtained using other high-resolution central schemes which have recently been reported in the literature.
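
    The Riemann-solver-free character can be seen in the semi-discrete central numerical flux of Kurganov-Tadmor type (schematic, in our notation): only the local spectral radius of the flux Jacobian enters, never a characteristic decomposition.

```latex
F_{j+1/2} = \tfrac{1}{2}\left[ f\!\left(u^{+}_{j+1/2}\right)
          + f\!\left(u^{-}_{j+1/2}\right) \right]
  - \tfrac{a_{j+1/2}}{2}\left( u^{+}_{j+1/2} - u^{-}_{j+1/2} \right),
\qquad
a_{j+1/2} = \max\left\{ \rho\!\left(f'(u^{-}_{j+1/2})\right),
                        \rho\!\left(f'(u^{+}_{j+1/2})\right) \right\}
```

    Here u± are the reconstructed right/left states at the cell interface and ρ(·) denotes the spectral radius.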

  7. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.

    2015-09-01

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic ⁻²Yℓm waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  8. Computational investigations and grid refinement study of 3D transient flow in a cylindrical tank using OpenFOAM

    NASA Astrophysics Data System (ADS)

    Mohd Sakri, F.; Mat Ali, M. S.; Sheikh Salim, S. A. Z.

    2016-10-01

The fluid physics of a liquid draining inside a tank is readily studied using numerical simulation. However, such simulation becomes expensive when the draining involves a multi-phase problem. Since an accurate numerical simulation requires a proper method for error estimation, this paper provides a systematic assessment of the grid convergence error using OpenFOAM. OpenFOAM is an open-source CFD toolbox, well known among researchers and institutions because it is free and ready to use. In this study, three grid resolutions are used: coarse, medium, and fine. The Grid Convergence Index (GCI) is applied to estimate the error due to grid sensitivity. A monotonic convergence condition is obtained, showing that the grid convergence error is progressively reduced. The fine grid has a GCI value below 1%, and the value extrapolated by Richardson extrapolation lies within the range implied by the GCI.
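
    The GCI calculation itself is compact. A hedged sketch of the standard three-grid procedure, assuming a constant refinement ratio r and a safety factor of 1.25 (the solution values below are invented for illustration):

      import math

      def gci(f_fine, f_med, f_coarse, r=2.0, Fs=1.25):
          """Observed order, Richardson-extrapolated value, and fine-grid GCI
          from solutions on three systematically refined grids."""
          p = math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)
          f_exact = f_fine + (f_fine - f_med) / (r**p - 1.0)   # extrapolation
          gci_fine = Fs * abs((f_fine - f_med) / f_fine) / (r**p - 1.0)
          return p, f_exact, gci_fine

      p, f_ext, g = gci(0.971, 0.962, 0.934)   # made-up drain-rate values
      print(f"order={p:.2f}, extrapolated={f_ext:.4f}, GCI_fine={100*g:.2f}%")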

  9. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    PubMed

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  10. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaerer, Roman Pascal, E-mail: schaerer@mathcces.rwth-aachen.de; Bansal, Pratyuksh; Torrilhon, Manuel

We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015), we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution, mapping its inherent fine-grained parallelism onto the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
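
    The expensive kernel in such closures is the repeated quadrature evaluation of moments of the maximum-entropy ansatz inside a Newton iteration on the dual multipliers. The 1D toy below (three moments, not the 35-moment system) sketches that dual Newton solve; the quadrature, initial guess, and targets are illustrative assumptions.

      import numpy as np
      from numpy.polynomial.legendre import leggauss

      # Find multipliers alpha so that f(v) = exp(alpha . m(v)) matches target
      # moments of m(v) = (1, v, v^2) on a fixed quadrature grid.
      nodes, wts = leggauss(64)
      v, w = 10.0 * nodes, 10.0 * wts          # map [-1, 1] -> [-10, 10]
      m = np.vstack([np.ones_like(v), v, v**2])

      def moments_and_jacobian(alpha):
          f = np.exp(alpha @ m)                # maximum-entropy ansatz
          mom = m @ (w * f)                    # integrals of m_i f dv
          jac = (m * (w * f)) @ m.T            # d mom_i / d alpha_j
          return mom, jac

      target = np.array([1.0, 0.0, 1.0])       # density, momentum, energy
      alpha = np.array([-1.0, 0.0, -0.5])      # initial guess
      for _ in range(50):                      # Newton on the dual problem
          mom, jac = moments_and_jacobian(alpha)
          step = np.linalg.solve(jac, target - mom)
          alpha += step
          if np.linalg.norm(step) < 1e-12:
              break
      # alpha -> multipliers of the unit Gaussian exp(-v^2/2)/sqrt(2 pi)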

  11. Reducing the Time and Cost of Testing Engines

    NASA Technical Reports Server (NTRS)

    2004-01-01

Producing a new aircraft engine currently costs approximately $1 billion, with 3 years of development time for a commercial engine and 10 years for a military engine. The high development time and cost make it extremely difficult to transition advanced technologies for cleaner, quieter, and more efficient new engines. To reduce this time and cost, NASA created a vision for the future where designers would use high-fidelity computer simulations early in the design process in order to resolve critical design issues before building the expensive engine hardware. To accomplish this vision, NASA's Glenn Research Center initiated a collaborative effort with the aerospace industry and academia to develop its Numerical Propulsion System Simulation (NPSS), an advanced engineering environment for the analysis and design of aerospace propulsion systems and components. Partners estimate that using NPSS has the potential to dramatically reduce the time, effort, and expense necessary to design and test jet engines by generating sophisticated computer simulations of an aerospace object or system. These simulations will permit an engineer to test various design options without having to conduct costly and time-consuming real-life tests. By accelerating and streamlining the engine system design analysis and test phases, NPSS facilitates bringing the final product to market faster. NASA's NPSS Version (V)1.X effort was a task within the Agency's Computational Aerospace Sciences project of the High Performance Computing and Communication program, which had a mission to accelerate the availability of high-performance computing hardware and software to the U.S. aerospace community for its use in design processes. The technology brings value back to NASA by improving methods of analyzing and testing space transportation components.

  12. Finite Element Simulation of Articular Contact Mechanics with Quadratic Tetrahedral Elements

    PubMed Central

    Maas, Steve A.; Ellis, Benjamin J.; Rawlins, David S.; Weiss, Jeffrey A.

    2016-01-01

Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedral elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness (in terms of convergence behavior), and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. PMID:26900037

  13. Masking Strategies for Image Manifolds.

    PubMed

    Dadkhahi, Hamid; Duarte, Marco F

    2016-07-07

We consider the problem of selecting an optimal mask for an image manifold, i.e., choosing a subset of the pixels of the image that preserves the manifold's geometric structure present in the original data. Such masking implements a form of compressive sensing through emerging imaging sensor platforms for which the power expense grows with the number of pixels acquired. Our goal is for the manifold learned from masked images to resemble its full-image counterpart as closely as possible. More precisely, we show that one can indeed accurately learn an image manifold without having to consider a large majority of the image pixels. In doing so, we consider two masking methods that preserve the local and global geometric structure of the manifold, respectively. In each case, the process of finding the optimal masking pattern can be cast as a binary integer program, which is computationally expensive but can be approximated by a fast greedy algorithm. Numerical experiments show that the relevant manifold structure is preserved through the data-dependent masking process, even for modest mask sizes.
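
    The greedy approximation can be sketched as follows; a simplified max-min criterion on pairwise ("secant") distances stands in for the paper's actual objectives, and the data are synthetic.

      import numpy as np

      def greedy_mask(X, k):
          """Greedily add the pixel that maximizes the worst-preserved squared
          distance between masked images (simplified selection criterion)."""
          n, d = X.shape
          iu, ju = np.triu_indices(n, k=1)
          sec = (X[iu] - X[ju]) ** 2         # squared secant components
          acc = np.zeros(len(iu))            # accumulated masked distances
          chosen, remaining = [], set(range(d))
          for _ in range(k):
              best = max(remaining, key=lambda p: np.min(acc + sec[:, p]))
              chosen.append(best)
              remaining.remove(best)
              acc += sec[:, best]
          return np.array(chosen)

      # Usage: keep 50 of 1024 pixels for 40 images
      X = np.random.default_rng(1).standard_normal((40, 1024))
      mask = greedy_mask(X, 50)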

  14. 3D nonrigid registration via optimal mass transport on the GPU.

    PubMed

    Ur Rehman, Tauseef; Haber, Eldad; Pryor, Gallagher; Melonakos, John; Tannenbaum, Allen

    2009-12-01

In this paper, we present a new computationally efficient numerical scheme for the minimizing flow approach for optimal mass transport (OMT) with applications to non-rigid 3D image registration. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A. Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. Our implementation also employs multigrid and parallel methodologies on a consumer graphics processing unit (GPU) for fast computation. Although computing the optimal map has been shown to be computationally expensive in the past, we show that our approach is orders of magnitude faster than previous work and is capable of finding transport maps with optimality measures (mean curl) previously unattainable by other works (which directly influences the accuracy of registration). We give results where the algorithm was used to compute non-rigid registrations of 3D synthetic data as well as intra-patient pre-operative and post-operative 3D brain MRI datasets.

  15. Effect of Turbulence Modeling on an Excited Jet

    NASA Technical Reports Server (NTRS)

    Brown, Clifford A.; Hixon, Ray

    2010-01-01

The flow dynamics in a high-speed jet are dominated by unsteady turbulent flow structures in the plume. Jet excitation seeks to control these flow structures through the natural instabilities present in the initial shear layer of the jet. Understanding and optimizing the excitation input, for jet noise reduction or plume mixing enhancement, requires many trials, which may be done experimentally or, at a significant cost savings, computationally. Numerical simulations, which model various parts of the unsteady dynamics to reduce the computational expense of the simulation, must adequately capture the unsteady flow dynamics in the excited jet if the results are to be used. Four CFD methods are considered for use in an excited jet problem, including two turbulence models with an Unsteady Reynolds-Averaged Navier-Stokes (URANS) solver, one Large Eddy Simulation (LES) solver, and one URANS/LES hybrid method. Each method is used to simulate a simplified excited jet, and the results are evaluated based on the flow data, computation time, and numerical stability. The knowledge gained about the effect of turbulence modeling and CFD methods from these basic simulations will guide and assist future three-dimensional (3-D) simulations that will be used to understand and optimize a realistic excited jet for a particular application.

  16. Physically weighted approximations of unsteady aerodynamic forces using the minimum-state method

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay; Hoadley, Sherwood Tiffany

    1991-01-01

The Minimum-State Method for rational approximation of unsteady aerodynamic force coefficient matrices, modified to allow physical weighting of the tabulated aerodynamic data, is presented. The approximation formula and the associated time-domain, state-space, open-loop equations of motion are given, and the numerical procedure for calculating the approximation matrices, with weighted data and with various equality constraints, is described. Two data-weighting options are presented. The first normalizes the aerodynamic data to a maximum unit value of each aerodynamic coefficient. The second weights each tabulated coefficient, at each reduced-frequency value, according to the effect of an incremental error in that coefficient on the aeroelastic characteristics of the system. This weighting yields a better fit of the more important terms at the expense of the less important ones. The resulting approximation yields a relatively low number of aerodynamic lag states in the subsequent state-space model. The formulation forms the basis of the MIST computer program, which is written in FORTRAN for use on the MicroVAX computer and interfaces with NASA's Interaction of Structures, Aerodynamics and Controls (ISAC) computer program. The program structure, capabilities, and interfaces are outlined in the appendices, and a numerical example which utilizes Rockwell's Active Flexible Wing (AFW) model is given and discussed.

  17. Application of Wavelet-Based Methods for Accelerating Multi-Time-Scale Simulation of Bistable Heterogeneous Catalysis

    DOE PAGES

    Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ; ...

    2017-02-16

Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
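
    A minimal sketch of Lipschitz-exponent estimation from wavelet magnitudes follows. It uses the decay |W(s, t0)| ~ s^(alpha + 1/2) of an L2-normalized continuous wavelet transform across scales; the Ricker wavelet, scale range, and test signal are illustrative assumptions, not the authors' detection algorithm.

      import numpy as np

      def ricker(points, a):
          # L2-normalized Ricker (Mexican-hat) wavelet of width a
          x = np.arange(points) - (points - 1) / 2.0
          A = 2.0 / (np.sqrt(3.0 * a) * np.pi**0.25)
          return A * (1.0 - (x / a) ** 2) * np.exp(-0.5 * (x / a) ** 2)

      def lipschitz_exponent(sig, t0, scales):
          """Slope of log|W(s, t0)| versus log s, shifted by -1/2."""
          mags = []
          for s in scales:
              w = ricker(min(10 * int(s), len(sig)), s)
              W = np.convolve(sig, w, mode='same')
              mags.append(abs(W[t0]) + 1e-30)
          return np.polyfit(np.log(scales), np.log(mags), 1)[0] - 0.5

      # Usage: a noisy step (Lipschitz exponent near 0) at mid-signal
      t = np.linspace(0.0, 1.0, 2048)
      rng = np.random.default_rng(2)
      sig = np.where(t > 0.5, 1.0, 0.0) + 0.01 * rng.standard_normal(t.size)
      print(lipschitz_exponent(sig, t0=1024, scales=np.geomspace(4, 64, 10)))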

  18. Assessment of physical activity of the human body considering the thermodynamic system.

    PubMed

    Hochstein, Stefan; Rauschenberger, Philipp; Weigand, Bernhard; Siebert, Tobias; Schmitt, Syn; Schlicht, Wolfgang; Převorovská, Světlana; Maršík, František

    2016-01-01

Correctly dosed physical activity is the basis of a vital and healthy life, but its measurement remains rather empirical, resulting in limited individual and custom activity recommendations. Very accurate three-dimensional models of the cardiovascular system exist, but they require the numerical solution of the Navier-Stokes equations for the flow in blood vessels; such models are suitable for research on cardiac diseases but are computationally very expensive, while direct measurements are costly and often not applicable outside laboratories. This paper offers a new approach to assessing physical activity using a thermodynamic-system description and its leading quantity, entropy production, as a compromise between computation time and precise prediction of pressure, volume, and flow variables in blood vessels. Based on a simplified (one-dimensional) model of the cardiovascular system of the human body, we develop and evaluate a setup that calculates the entropy production of the heart to determine the intensity of human physical activity more precisely than previous parameters, e.g., the frequently used energy considerations. The resulting precise real-time measure of physical activity provides the basis for an intelligent human-technology interaction, allowing the degree of physical activity to be steadily adjusted to the actual individual performance level and thus improving training and activity recommendations.

  19. 47 CFR 69.156 - Marketing expenses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...

  20. 47 CFR 69.156 - Marketing expenses.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...

  1. 47 CFR 69.156 - Marketing expenses.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...

  2. 47 CFR 69.156 - Marketing expenses.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...

  3. 47 CFR 69.156 - Marketing expenses.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...

  4. Application of the MacCormack scheme to overland flow routing for high-spatial resolution distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Nan, Zhuotong; Liang, Xu; Xu, Yi; Hernández, Felipe; Li, Lianxia

    2018-03-01

Although process-based distributed hydrological models (PDHMs) have evolved rapidly over the last few decades, their extensive application is still challenged by computational expense. This study attempted, for the first time, to apply the numerically efficient MacCormack algorithm to overland flow routing in a representative high-spatial-resolution PDHM, the distributed hydrology-soil-vegetation model (DHSVM), in order to improve its computational efficiency. Analytical verification indicates that both the semi and full versions of the MacCormack scheme exhibit robust numerical stability and are more computationally efficient than the conventional explicit linear scheme; the full version outperforms the semi version in simulation accuracy when the same time step is adopted. The semi-MacCormack scheme was implemented in DHSVM (version 3.1.2) to solve the kinematic wave equations for overland flow routing. The performance and practicality of the enhanced DHSVM-MacCormack model were assessed in two groups of modeling experiments in the Mercer Creek watershed, a small urban catchment near Bellevue, Washington. The experiments show that DHSVM-MacCormack can considerably improve computational efficiency without compromising the simulation accuracy of the original DHSVM model. More specifically, with the same computational environment and model settings, the computational time required by DHSVM-MacCormack for a three-month simulation period can be reduced to several dozen minutes (compared with a day and a half for the original DHSVM model) without noticeable sacrifice of accuracy. The MacCormack scheme thus proves applicable to overland flow routing in DHSVM, which implies that it can be coupled into other PDHMs to either significantly improve their computational efficiency or make kinematic wave routing computationally feasible for high-resolution modeling.
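
    For reference, a MacCormack predictor-corrector step for the 1D kinematic wave equation h_t + (alpha h^m)_x = r fits in a few lines; the boundary handling and parameters below are simplified assumptions, not the DHSVM-MacCormack implementation.

      import numpy as np

      def maccormack_kinematic(h, dx, dt, alpha, m=5.0 / 3.0, r=0.0):
          """One MacCormack step for overland flow depth h with rainfall r."""
          q = alpha * h**m
          hp = h.copy()                       # predictor: forward differences
          hp[:-1] = h[:-1] - dt / dx * (q[1:] - q[:-1]) + dt * r
          qp = alpha * hp**m
          hc = h.copy()                       # corrector: backward differences
          hc[1:] = 0.5 * (h[1:] + hp[1:] - dt / dx * (qp[1:] - qp[:-1]) + dt * r)
          hc[0] = max(h[0] + dt * (r - alpha * h[0]**m / dx), 0.0)  # no inflow
          return hc

      # Usage: rain on an initially dry 100 m plane (Manning n=0.03, slope 1%)
      alpha = np.sqrt(0.01) / 0.03
      x = np.linspace(0.0, 100.0, 201)
      h = np.full_like(x, 1e-6)
      for _ in range(2000):
          h = maccormack_kinematic(h, x[1] - x[0], 0.05, alpha, r=1e-5)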

  5. The Influence of Inlet Asymmetry on Steam Turbine Exhaust Hood Flows.

    PubMed

    Burton, Zoe; Hogg, Simon; Ingram, Grant L

    2014-04-01

It has been widely recognized for some decades that it is essential to accurately represent the strong coupling between the last stage blades (LSB) and the diffuser inlet in order to correctly capture the flow through the exhaust hoods of steam turbine low-pressure cylinders. This applies to any form of simulation of the flow, numerical or experimental. The exhaust hood flow structure is highly three-dimensional, and appropriate coupling will enable the important influence of this asymmetry to be transferred to the rotor. This, however, presents challenges, as the calculation size grows rapidly when the full annulus is computed, so researchers are constantly searching for methods that reduce the computational effort without compromising solution accuracy. Unsteady full-annulus CFD calculation will remain infeasible for routine design calculations for the foreseeable future; more computationally efficient methods for coupling the unsteady rotor flow to the hood flow are required that bring computational expense within realizable limits while still maintaining sufficient accuracy for meaningful design calculations. A novel approach for coupling the turbine last stage to the exhaust hood employing the nonlinear harmonic (NLH) method is presented in this paper. The generic, IP-free, exhaust hood and last stage blade geometries from Burton et al. (2012, "A Generic Low Pressure Exhaust Diffuser for Steam Turbine Research," Proceedings of the ASME Turbo Expo, Copenhagen, Denmark, Paper No. GT2012-68485), which are representative of modern designs, are used to demonstrate the effectiveness of the method. This is achieved by comparing results obtained with the NLH to those obtained with a more conventional mixing-plane approach. The results show that the circumferential asymmetry can be successfully transferred in both directions between the exhaust hood flow and that through the LSB by using the NLH. This paper also suggests that for exhaust hoods of generous axial length, little change in C_p is observed when the circumferential asymmetry is captured. However, the predicted flow structure is significantly different, which will influence the design and placement of the exhaust hood internal "furniture."

  6. Mathematical model of information process of protection of the social sector

    NASA Astrophysics Data System (ADS)

    Novikov, D. A.; Tsarkova, E. G.; Dubrovin, A. S.; Soloviev, A. S.

    2018-03-01

This work investigates a mathematical model of protecting society from the spread of extremist moods through the influence of media content on mass consciousness. The internal and external channels through which information is disseminated are identified. An optimization problem is solved to find the strategy that uses the media most effectively for disseminating antiterrorist information at minimum financial expense. A numerical algorithm for solving the optimization problem is constructed, and the results of a computational experiment are analyzed.

  7. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    PubMed Central

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling method. In this paper, a new sequential optimization sampling method is proposed. Under this method, metamodels are constructed repeatedly through the addition of sampling points, namely the extrema of the current metamodel and the minima of a density function, yielding progressively more accurate metamodels. The validity and effectiveness of the proposed sampling method are examined on typical numerical examples. PMID:25133206
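
    A minimal sketch of such sequential refinement, using SciPy's RBF interpolator and adding the metamodel's current minimizer to the design at each iteration (the density-function points used in the paper to keep the design space-filling are omitted):

      import numpy as np
      from scipy.interpolate import RBFInterpolator
      from scipy.optimize import minimize

      def expensive_f(x):                    # stand-in for a costly simulation
          return (x[0] - 0.3) ** 2 + np.sin(5 * x[0])

      rng = np.random.default_rng(0)
      X = np.linspace(0.0, 1.0, 5)[:, None]  # initial design
      y = np.array([expensive_f(x) for x in X])
      for _ in range(10):
          rbf = RBFInterpolator(X, y)        # fit the metamodel
          res = minimize(lambda z: rbf(z[None, :])[0],
                         x0=X[np.argmin(y)], bounds=[(0.0, 1.0)])
          x_new = res.x
          if np.min(np.linalg.norm(X - x_new, axis=1)) < 1e-8:
              x_new = rng.uniform(0.0, 1.0, size=1)   # avoid duplicate points
          X = np.vstack([X, x_new[None, :]])
          y = np.append(y, expensive_f(x_new))

      print(X[np.argmin(y)], y.min())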

  8. A coupled ALE-AMR method for shock hydrodynamics

    DOE PAGES

    Waltz, J.; Bakosi, J.

    2018-03-05

We present a numerical method combining adaptive mesh refinement (AMR) with arbitrary Lagrangian-Eulerian (ALE) mesh motion for the simulation of shock hydrodynamics on unstructured grids. The primary goal of the coupled method is to use AMR to reduce numerical error in ALE simulations at reduced computational expense relative to uniform fine mesh calculations, in the same manner that AMR has been used in Eulerian simulations. We also identify deficiencies with ALE methods that AMR is able to mitigate, and discuss the unique coupling challenges. The coupled method is demonstrated using three-dimensional unstructured meshes of up to O(10^7) tetrahedral cells. Convergence of ALE-AMR solutions towards both uniform fine mesh ALE results and analytic solutions is demonstrated. Speed-ups of 5-10× for a given level of error are observed relative to uniform fine mesh calculations.

  9. A coupled ALE-AMR method for shock hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waltz, J.; Bakosi, J.

We present a numerical method combining adaptive mesh refinement (AMR) with arbitrary Lagrangian-Eulerian (ALE) mesh motion for the simulation of shock hydrodynamics on unstructured grids. The primary goal of the coupled method is to use AMR to reduce numerical error in ALE simulations at reduced computational expense relative to uniform fine mesh calculations, in the same manner that AMR has been used in Eulerian simulations. We also identify deficiencies with ALE methods that AMR is able to mitigate, and discuss the unique coupling challenges. The coupled method is demonstrated using three-dimensional unstructured meshes of up to O(10^7) tetrahedral cells. Convergence of ALE-AMR solutions towards both uniform fine mesh ALE results and analytic solutions is demonstrated. Speed-ups of 5-10× for a given level of error are observed relative to uniform fine mesh calculations.

  10. Modeling molecular mixing in a spatially inhomogeneous turbulent flow

    NASA Astrophysics Data System (ADS)

    Meyer, Daniel W.; Deb, Rajdeep

    2012-02-01

    Simulations of spatially inhomogeneous turbulent mixing in decaying grid turbulence with a joint velocity-concentration probability density function (PDF) method were conducted. The inert mixing scenario involves three streams with different compositions. The mixing model of Meyer ["A new particle interaction mixing model for turbulent dispersion and turbulent reactive flows," Phys. Fluids 22(3), 035103 (2010)], the interaction by exchange with the mean (IEM) model and its velocity-conditional variant, i.e., the IECM model, were applied. For reference, the direct numerical simulation data provided by Sawford and de Bruyn Kops ["Direct numerical simulation and lagrangian modeling of joint scalar statistics in ternary mixing," Phys. Fluids 20(9), 095106 (2008)] was used. It was found that velocity conditioning is essential to obtain accurate concentration PDF predictions. Moreover, the model of Meyer provides significantly better results compared to the IECM model at comparable computational expense.
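
    For reference, the IEM model itself is a one-line relaxation of each notional particle's scalar toward the ensemble mean; the velocity-conditional IECM applies the same relaxation within velocity-conditioned sub-ensembles. A minimal sketch with a three-stream initial condition (parameters are illustrative):

      import numpy as np

      def iem_step(phi, dt, omega, c_phi=2.0):
          # Each particle relaxes toward the mean at rate c_phi/2 * omega
          return phi - 0.5 * c_phi * omega * dt * (phi - phi.mean())

      phi = np.concatenate([np.zeros(3000), np.full(3000, 0.5), np.ones(3000)])
      for _ in range(100):
          phi = iem_step(phi, dt=0.01, omega=1.0)
      # The scalar variance decays like exp(-c_phi * omega * t), but pure IEM
      # only contracts the PDF toward the mean without changing its shape.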

  11. Numerical optimization in Hilbert space using inexact function and gradient evaluations

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

Trust region algorithms provide a robust iterative technique for solving non-convex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high-accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as the finite-dimensional problems normally seen in the trust region literature. The conditions concerning allowable error are remarkably relaxed: in particular, the gradient error condition is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.

  12. CONORBIT: constrained optimization by radial basis function interpolation in trust regions

    DOE PAGES

    Regis, Rommel G.; Wild, Stefan M.

    2016-09-26

This paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF) models for the objective and constraint functions, and is an extension of the ORBIT algorithm. It uses a small margin for the RBF constraint models to facilitate the generation of feasible iterates, and extensive numerical tests confirm that such a margin is helpful in improving performance. CONORBIT is compared with other algorithms on 27 test problems, a chemical process optimization problem, and an automotive application. Numerical results show that CONORBIT performs better than COBYLA, a sequential penalty derivative-free method, an augmented Lagrangian method, a direct search method, and another RBF-based algorithm on the test problems and on the automotive application.

  13. Multi-objective Optimization Strategies Using Adjoint Method and Game Theory in Aerodynamics

    NASA Astrophysics Data System (ADS)

    Tang, Zhili

    2006-08-01

There are currently three different game strategies originating in economics: (1) cooperative games (Pareto front), (2) competitive games (Nash game), and (3) hierarchical games (Stackelberg game). Each game achieves a different equilibrium with different performance, and the players play different roles in each. Here, we introduce game concepts into aerodynamic design and combine them with the adjoint method to solve multi-criteria aerodynamic optimization problems. The performance distinction among the equilibria of these three game strategies was investigated by numerical experiments. We computed the Pareto front and the Nash and Stackelberg equilibria of the same optimization problem, with two conflicting and hierarchical targets, under different parameterizations using a deterministic optimization method. The numerical results show clearly that the equilibrium solutions are inferior to the Pareto front. Non-dominated Pareto-front solutions are obtained; however, the CPU cost of capturing a set of solutions makes the Pareto front an expensive tool for the designer.
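
    The competitive (Nash) strategy can be illustrated with a toy two-player problem in which each player repeatedly best-responds on its own design variable; the quadratic objectives are invented for illustration.

      from scipy.optimize import minimize, minimize_scalar

      J1 = lambda x, y: (x - 1.0) ** 2 + 0.5 * x * y   # player 1's criterion
      J2 = lambda x, y: (y + 1.0) ** 2 - 0.5 * x * y   # player 2's criterion

      x, y = 0.0, 0.0
      for _ in range(50):                    # best-response iteration
          x = minimize_scalar(lambda x_: J1(x_, y)).x
          y = minimize_scalar(lambda y_: J2(x, y_)).x
      # (x, y) is now (approximately) the Nash equilibrium

      # A cooperative (Pareto) point for comparison: scalarize the criteria
      w = 0.5
      res = minimize(lambda z: w * J1(*z) + (1.0 - w) * J2(*z), x0=[0.0, 0.0])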

  14. Hybrid Upwinding for Two-Phase Flow in Heterogeneous Porous Media with Buoyancy and Capillarity

    NASA Astrophysics Data System (ADS)

    Hamon, F. P.; Mallison, B.; Tchelepi, H.

    2016-12-01

    In subsurface flow simulation, efficient discretization schemes for the partial differential equations governing multiphase flow and transport are critical. For highly heterogeneous porous media, the temporal discretization of choice is often the unconditionally stable fully implicit (backward-Euler) method. In this scheme, the simultaneous update of all the degrees of freedom requires solving large algebraic nonlinear systems at each time step using Newton's method. This is computationally expensive, especially in the presence of strong capillary effects driven by abrupt changes in porosity and permeability between different rock types. Therefore, discretization schemes that reduce the simulation cost by improving the nonlinear convergence rate are highly desirable. To speed up nonlinear convergence, we present an efficient fully implicit finite-volume scheme for immiscible two-phase flow in the presence of strong capillary forces. In this scheme, the discrete viscous, buoyancy, and capillary spatial terms are evaluated separately based on physical considerations. We build on previous work on Implicit Hybrid Upwinding (IHU) by using the upstream saturations with respect to the total velocity to compute the relative permeabilities in the viscous term, and by determining the directionality of the buoyancy term based on the phase density differences. The capillary numerical flux is decomposed into a rock- and geometry-dependent transmissibility factor, a nonlinear capillary diffusion coefficient, and an approximation of the saturation gradient. Combining the viscous, buoyancy, and capillary terms, we obtain a numerical flux that is consistent, bounded, differentiable, and monotone for homogeneous one-dimensional flow. The proposed scheme also accounts for spatially discontinuous capillary pressure functions. Specifically, at the interface between two rock types, the numerical scheme accurately honors the entry pressure condition by solving a local nonlinear problem to compute the numerical flux. Heterogeneous numerical tests demonstrate that this extended IHU scheme is non-oscillatory and convergent upon refinement. They also illustrate the superior accuracy and nonlinear convergence rate of the IHU scheme compared with the standard phase-based upstream weighting approach.
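
    The hybrid-upwinding idea for the viscous and buoyancy terms can be sketched for a 1D water flux as below; quadratic relative permeabilities are assumed, and capillarity and the entry-pressure treatment are omitted, so this is a simplified illustration rather than the full IHU scheme of the abstract.

      # Viscous part: both mobilities upwinded w.r.t. the total velocity.
      # Buoyancy part: fixed directionality from the density difference
      # (heavier water sinks, so its mobility comes from the cell above).
      mu_w, mu_o = 1.0, 5.0
      rho_w, rho_o = 1000.0, 800.0
      lam_w = lambda s: s**2 / mu_w               # water mobility
      lam_o = lambda s: (1.0 - s) ** 2 / mu_o     # oil mobility

      def ihu_water_flux(sL, sR, u_t, K, g=9.81):
          """Water flux between cells L (above) and R (below), x downward."""
          s_up = sL if u_t >= 0.0 else sR
          lw, lo = lam_w(s_up), lam_o(s_up)
          F_visc = u_t * lw / (lw + lo)
          lw_g, lo_g = lam_w(sL), lam_o(sR)       # water from above, oil below
          F_grav = K * g * (rho_w - rho_o) * lw_g * lo_g / (lw_g + lo_g)
          return F_visc + F_grav

      # Counter-current conditions, where phase-based upwinding struggles:
      print(ihu_water_flux(sL=0.8, sR=0.2, u_t=0.0, K=1e-12))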

  15. 48 CFR 227.7103-6 - Contract clauses.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... private expense). Do not use the clause when the only deliverable items are computer software or computer software documentation (see 227.72), commercial items developed exclusively at private expense (see 227... the clause in architect-engineer and construction contracts. (b)(1) Use the clause at 252.227-7013...

  16. 48 CFR 227.7103-6 - Contract clauses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... private expense). Do not use the clause when the only deliverable items are computer software or computer software documentation (see 227.72), commercial items developed exclusively at private expense (see 227... the clause in architect-engineer and construction contracts. (b)(1) Use the clause at 252.227-7013...

  17. Optimal structure and parameter learning of Ising models

    DOE PAGES

    Lokhov, Andrey; Vuffray, Marc Denis; Misra, Sidhant; ...

    2018-03-16

Reconstruction of the structure and parameters of an Ising model from binary samples is a problem of practical importance in a variety of disciplines, ranging from statistical physics and computational biology to image processing and machine learning. The focus of the research community has shifted toward developing universal reconstruction algorithms that are both computationally efficient and require the minimal amount of expensive data. Here, we introduce a new method, interaction screening, which accurately estimates model parameters using local optimization problems. The algorithm provably achieves perfect graph structure recovery with an information-theoretically optimal number of samples, notably in the low-temperature regime, which is known to be the hardest for learning. The efficacy of interaction screening is assessed through extensive numerical tests on synthetic Ising models of various topologies with different types of interactions, as well as on real data produced by a D-Wave quantum computer. This study shows that the interaction screening method is an exact, tractable, and optimal technique that universally solves the inverse Ising problem.
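
    A minimal sketch of the interaction screening estimator for one node: minimize the empirical Interaction Screening Objective E[exp(-sigma_i sum_j theta_j sigma_j)] over the couplings. External fields and the l1 regularization used for structure recovery are omitted for brevity, and the two-spin sampler stands in for real Gibbs samples.

      import numpy as np
      from scipy.optimize import minimize

      def screen_node(S, i):
          """Estimate couplings J_ij of node i from +/-1 samples S (M x N)."""
          M, N = S.shape
          others = np.delete(np.arange(N), i)
          X = S[:, others] * S[:, [i]]        # sigma_i * sigma_j per sample

          iso = lambda theta: np.mean(np.exp(-X @ theta))  # screening objective
          res = minimize(iso, np.zeros(N - 1), method='L-BFGS-B')
          theta = np.zeros(N)
          theta[others] = res.x
          return theta

      # Exact sampler for a two-spin ferromagnet with J = 0.5
      rng = np.random.default_rng(4)
      J = 0.5
      p_align = np.exp(J) / (np.exp(J) + np.exp(-J))
      s0 = rng.choice([-1, 1], size=20000)
      s1 = np.where(rng.random(20000) < p_align, s0, -s0)
      print(screen_node(np.stack([s0, s1], axis=1), 0))    # ~ [0, 0.5]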

  18. Use of the parameterised finite element method to robustly and efficiently evolve the edge of a moving cell.

    PubMed

    Neilson, Matthew P; Mackenzie, John A; Webb, Steven D; Insall, Robert H

    2010-11-01

    In this paper we present a computational tool that enables the simulation of mathematical models of cell migration and chemotaxis on an evolving cell membrane. Recent models require the numerical solution of systems of reaction-diffusion equations on the evolving cell membrane and then the solution state is used to drive the evolution of the cell edge. Previous work involved moving the cell edge using a level set method (LSM). However, the LSM is computationally very expensive, which severely limits the practical usefulness of the algorithm. To address this issue, we have employed the parameterised finite element method (PFEM) as an alternative method for evolving a cell boundary. We show that the PFEM is far more efficient and robust than the LSM. We therefore suggest that the PFEM potentially has an essential role to play in computational modelling efforts towards the understanding of many of the complex issues related to chemotaxis.

  19. An Online Gravity Modeling Method Applied for High Precision Free-INS

    PubMed Central

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-01-01

For the real-time solution of an inertial navigation system (INS), a high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, so the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear character of the regional disturbing potential. First, deflections of the vertical (DOVs) on dense grids are calculated with the SHM in an external computer. Then the polynomial coefficients are obtained from these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously in the external computer. Compared with the high-degree SHM, the polynomial model takes less storage and computational time at the expense of a minor loss of precision, while remaining more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed method outperforms traditional gravity models applied for high-precision free-INS. PMID:27669261
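
    The regional fit itself reduces to a small least-squares problem. A sketch with a synthetic stand-in for the SHM-derived DOV grid and hypothetical region coordinates:

      import numpy as np

      # dov ~ c0 + c1*dlat + c2*dlon + c3*dlat^2 + c4*dlat*dlon + c5*dlon^2
      lat0, lon0 = 40.0, 116.0                  # region centre (hypothetical)
      LAT, LON = np.meshgrid(np.linspace(39.5, 40.5, 21),
                             np.linspace(115.5, 116.5, 21), indexing='ij')
      dov = np.sin(LAT / 8.0) * np.cos(LON / 9.0)   # stand-in for SHM DOVs

      dlat, dlon = (LAT - lat0).ravel(), (LON - lon0).ravel()
      A = np.column_stack([np.ones_like(dlat), dlat, dlon,
                           dlat**2, dlat * dlon, dlon**2])
      coef, *_ = np.linalg.lstsq(A, dov.ravel(), rcond=None)

      def dov_model(la, lo):
          # Evaluate the polynomial inside its applicable region only
          dla, dlo = la - lat0, lo - lon0
          return coef @ np.array([1.0, dla, dlo, dla**2, dla * dlo, dlo**2])

      print(dov_model(40.1, 116.2))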

  20. Optimal structure and parameter learning of Ising models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lokhov, Andrey; Vuffray, Marc Denis; Misra, Sidhant

Reconstruction of the structure and parameters of an Ising model from binary samples is a problem of practical importance in a variety of disciplines, ranging from statistical physics and computational biology to image processing and machine learning. The focus of the research community has shifted toward developing universal reconstruction algorithms that are both computationally efficient and require the minimal amount of expensive data. Here, we introduce a new method, interaction screening, which accurately estimates model parameters using local optimization problems. The algorithm provably achieves perfect graph structure recovery with an information-theoretically optimal number of samples, notably in the low-temperature regime, which is known to be the hardest for learning. The efficacy of interaction screening is assessed through extensive numerical tests on synthetic Ising models of various topologies with different types of interactions, as well as on real data produced by a D-Wave quantum computer. This study shows that the interaction screening method is an exact, tractable, and optimal technique that universally solves the inverse Ising problem.

  1. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework

    PubMed Central

    2012-01-01

    Background For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. Results We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. Conclusion The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources. PMID:23216909

  2. Quasi-Optimal Elimination Trees for 2D Grids with Singularities

    DOE PAGES

    Paszyńska, A.; Paszyński, M.; Jopek, K.; ...

    2015-01-01

We construct quasi-optimal elimination trees for 2D finite element meshes with singularities. These trees minimize the complexity of the solution of the discrete system. The computational cost estimates of the elimination process model the execution of the multifrontal algorithms in serial and in parallel shared-memory executions. Since the meshes considered are a subspace of all possible mesh partitions, we call these minimizers quasi-optimal. We minimize the cost functionals using dynamic programming. Finding these minimizers is more computationally expensive than solving the original algebraic system. Nevertheless, from the insights provided by the analysis of the dynamic programming minima, we propose a heuristic construction of the elimination trees that has cost O(N_e log N_e), where N_e is the number of elements in the mesh. We show that this heuristic ordering has similar computational cost to the quasi-optimal elimination trees found with dynamic programming and outperforms state-of-the-art alternatives in our numerical experiments.

  3. An Online Gravity Modeling Method Applied for High Precision Free-INS.

    PubMed

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-09-23

For the real-time solution of an inertial navigation system (INS), a high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, so the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear character of the regional disturbing potential. First, deflections of the vertical (DOVs) on dense grids are calculated with the SHM in an external computer. Then the polynomial coefficients are obtained from these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously in the external computer. Compared with the high-degree SHM, the polynomial model takes less storage and computational time at the expense of a minor loss of precision, while remaining more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed method outperforms traditional gravity models applied for high-precision free-INS.

  4. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.

    PubMed

    Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John

    2012-12-05

    For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.

  5. Development of soft-sphere contact models for thermal heat conduction in granular flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, A. B.; Pannala, S.; Ma, Z.

    2016-06-08

Conductive heat transfer to flowing particles occurs when two particles (or a particle and wall) come into contact. The direct conduction between the two bodies depends on the collision dynamics, namely the size of the contact area and the duration of contact. For soft-sphere discrete-particle simulations, it is computationally expensive to resolve the true collision time because doing so would require a restrictively small numerical time step. To improve the computational speed, it is common to increase the 'softness' of the material to artificially increase the collision time, but doing so affects the heat transfer. In this work, two physically-based correction terms are derived to compensate for the increased contact area and time stemming from artificial particle softening. By including both correction terms, the impact that artificial softening has on the conductive heat transfer is removed, thus enabling simulations at greatly reduced computational times without sacrificing physical accuracy.

  6. Quasi-Optimal Elimination Trees for 2D Grids with Singularities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paszyńska, A.; Paszyński, M.; Jopek, K.

We construct quasi-optimal elimination trees for 2D finite element meshes with singularities. These trees minimize the complexity of the solution of the discrete system. The computational cost estimates of the elimination process model the execution of the multifrontal algorithms in serial and in parallel shared-memory executions. Since the meshes considered are a subspace of all possible mesh partitions, we call these minimizers quasi-optimal. We minimize the cost functionals using dynamic programming. Finding these minimizers is more computationally expensive than solving the original algebraic system. Nevertheless, from the insights provided by the analysis of the dynamic programming minima, we propose a heuristic construction of the elimination trees that has cost O(N_e log N_e), where N_e is the number of elements in the mesh. We show that this heuristic ordering has similar computational cost to the quasi-optimal elimination trees found with dynamic programming and outperforms state-of-the-art alternatives in our numerical experiments.

  7. A probabilistic method for constructing wave time-series at inshore locations using model scenarios

    USGS Publications Warehouse

    Long, Joseph W.; Plant, Nathaniel G.; Dalyander, P. Soupy; Thompson, David M.

    2014-01-01

    Continuous time-series of wave characteristics (height, period, and direction) are constructed using a base set of model scenarios and simple probabilistic methods. This approach utilizes an archive of computationally intensive, highly spatially resolved numerical wave model output to develop time-series of historical or future wave conditions without performing additional, continuous numerical simulations. The archive of model output contains wave simulations from a set of model scenarios derived from an offshore wave climatology. Time-series of wave height, period, direction, and associated uncertainties are constructed at locations included in the numerical model domain. The confidence limits are derived using statistical variability of oceanographic parameters contained in the wave model scenarios. The method was applied to a region in the northern Gulf of Mexico and assessed using wave observations at 12 m and 30 m water depths. Prediction skill for significant wave height is 0.58 and 0.67 at the 12 m and 30 m locations, respectively, with similar performance for wave period and direction. The skill of this simplified, probabilistic time-series construction method is comparable to existing large-scale, high-fidelity operational wave models but provides higher spatial resolution output at low computational expense. The constructed time-series can be developed to support a variety of applications including climate studies and other situations where a comprehensive survey of wave impacts on the coastal area is of interest.

  8. Method based on the Laplace equations to reconstruct the river terrain for two-dimensional hydrodynamic numerical modeling

    NASA Astrophysics Data System (ADS)

    Lai, Ruixun; Wang, Min; Yang, Ming; Zhang, Chao

    2018-02-01

The accuracy of widely used two-dimensional hydrodynamic numerical models depends on the quality of the river terrain model, particularly in the main channel. In most cases, however, the bathymetry of the river channel is difficult or expensive to obtain in the field, and data describing the geometry of the river channel are lacking. We introduce a method, originating in elliptic grid generation, to generate streamlines of the river channel. The streamlines are obtained by numerically solving Laplace equations: streamlines in the physical domain are first computed in a computational domain and then transformed back to the physical domain. The interpolated streamlines are integrated with the surrounding topography to reconstruct the entire river terrain model. The approach was applied to a meandering reach of the Qinhe River, a tributary in the middle of the Yellow River, China. Cross-sectional validation and the two-dimensional shallow-water equations are used to test the performance of the generated river terrain. The results show that the approach can reconstruct the river terrain from measured cross-sections. Furthermore, the created river terrain maintains a geometrical shape consistent with the measurements while generating a smooth main channel. Finally, several limitations and opportunities for future research are discussed.
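
    The computational-domain solve at the heart of the method is a plain Laplace problem whose level sets play the role of streamlines. A minimal Jacobi-iteration sketch (the mapping back to the physical domain and the terrain interpolation are omitted):

      import numpy as np

      def laplace_streamlines(nx=200, ny=40, n_iter=5000):
          """Solve Laplace's equation for psi between the banks (psi = 0 and
          psi = 1); level sets of psi then serve as channel streamlines."""
          psi = np.zeros((ny, nx))
          psi[-1, :] = 1.0                                     # far bank
          psi[:, 0] = psi[:, -1] = np.linspace(0.0, 1.0, ny)   # end sections
          for _ in range(n_iter):                              # Jacobi sweep
              psi[1:-1, 1:-1] = 0.25 * (psi[:-2, 1:-1] + psi[2:, 1:-1] +
                                        psi[1:-1, :-2] + psi[1:-1, 2:])
          return psi

      psi = laplace_streamlines()
      # Bed elevations measured on cross-sections can then be interpolated
      # along the level sets of psi to fill in the terrain between sections.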

  9. 24 CFR 990.165 - Computation of project expense level (PEL).

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Computation of project expense level (PEL). 990.165 Section 990.165 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR PUBLIC AND INDIAN HOUSING, DEPARTMENT OF...

  10. An architecture for efficient gravitational wave parameter estimation with multimodal linear surrogate models

    NASA Astrophysics Data System (ADS)

    O'Shaughnessy, Richard; Blackman, Jonathan; Field, Scott E.

    2017-07-01

The recent direct observation of gravitational waves has further emphasized the desire for fast, low-cost, and accurate methods to infer the parameters of gravitational wave sources. Due to expense in waveform generation and data handling, the cost of evaluating the likelihood function limits the computational performance of these calculations. Building on recently developed surrogate models and a novel parameter estimation pipeline, we show how to quickly generate the likelihood function as an analytic, closed-form expression. Using a straightforward variant of a production-scale parameter estimation code, we demonstrate our method using surrogate models of effective-one-body and numerical relativity waveforms. Our study is the first time these models have been used for parameter estimation and one of the first ever parameter estimation calculations with multi-modal numerical relativity waveforms, which include all ℓ ≤ 4 modes. Our grid-free method enables rapid parameter estimation for any waveform with a suitable reduced-order model. The methods described in this paper may also find use in other data analysis studies, such as vetting coincident events or the computation of the coalescing-compact-binary detection statistic.

  11. A two-component Matched Interface and Boundary (MIB) regularization for charge singularity in implicit solvation

    NASA Astrophysics Data System (ADS)

    Geng, Weihua; Zhao, Shan

    2017-12-01

We present a new Matched Interface and Boundary (MIB) regularization method for treating charge singularity in solvated biomolecules whose electrostatics are described by the Poisson-Boltzmann (PB) equation. In a regularization method, by decomposing the potential function into two or three components, the singular component can be analytically represented by the Green's function, while the other components possess a higher regularity. Our new regularization combines the efficiency of two-component schemes with the accuracy of the three-component schemes. Based on this regularization, a new MIB finite difference algorithm is developed for solving both linear and nonlinear PB equations, where the nonlinearity is handled by using the inexact-Newton method. Compared with the existing MIB PB solver based on a three-component regularization, the present algorithm is simpler to implement because it circumvents the need to solve a boundary value Poisson equation inside the molecular interface and to compute the related interface jump conditions numerically. Moreover, the new MIB algorithm is computationally less expensive while maintaining the same second-order accuracy. This is numerically verified by calculating the electrostatic potential and solvation energy on the Kirkwood sphere, for which analytical solutions are available, and on a series of proteins of various sizes.

  12. Renormalization group theory outperforms other approaches in statistical comparison between upscaling techniques for porous media

    NASA Astrophysics Data System (ADS)

    Hanasoge, Shravan; Agarwal, Umang; Tandon, Kunj; Koelman, J. M. Vianney A.

    2017-09-01

Determining the pressure differential required to achieve a desired flow rate in a porous medium requires solving Darcy's law, a Laplace-like equation, with a spatially varying tensor permeability. In various scenarios, the permeability coefficient is sampled at high spatial resolution, which makes solving Darcy's equation numerically prohibitively expensive. As a consequence, much effort has gone into creating upscaled or low-resolution effective models of the coefficient while ensuring that the estimated flow rate is well reproduced, bringing to the fore the classic tradeoff between computational cost and numerical accuracy. Here we perform a statistical study to characterize the relative success of upscaling methods on a large sample of permeability coefficients that are above the percolation threshold. We introduce a technique based on mode-elimination renormalization group theory (MG) to build coarse-scale permeability coefficients. Comparing the results with coefficients upscaled using other methods, we find that MG is consistently more accurate, particularly owing to its ability to address the tensorial nature of the coefficients. As implemented here, MG places a low computational demand and yields accurate flow-rate estimates even for permeabilities that approach or exceed the percolation threshold.
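
    For contrast with MG, the simplest block-renormalization rules coarsen 2x2 cell blocks with series (harmonic) and parallel (arithmetic) averages. The sketch below does this for x-directed flow; it illustrates the upscaling trade-off, not the paper's mode-elimination RG.

      import numpy as np

      def upscale_once(K):
          """Merge 2x2 blocks: harmonic mean along the flow direction (series),
          arithmetic mean across it (parallel)."""
          Kh = 2.0 / (1.0 / K[:, 0::2] + 1.0 / K[:, 1::2])  # series along x
          return 0.5 * (Kh[0::2, :] + Kh[1::2, :])          # parallel along y

      # Usage: coarsen a log-normal permeability field three times
      K = np.exp(np.random.default_rng(5).standard_normal((64, 64)))
      for _ in range(3):
          K = upscale_once(K)
      print(K.shape)   # (8, 8) effective permeabilities for x-directed flow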

  13. Generation of Turbulent Inflow Conditions for Pipe Flow via an Annular Ribbed Turbulator

    NASA Astrophysics Data System (ADS)

    Moallemi, Nima; Brinkerhoff, Joshua

    2016-11-01

    The generation of turbulent inflow conditions adds significant computational expense to direct numerical simulations (DNS) of turbulent pipe flows. Typical approaches involve introducing boxes of isotropic turbulence to the velocity field at the inlet of the pipe. In the present study, an alternative method is proposed that incurs a lower computational cost and allows the anisotropy observed in pipe turbulence to be physically captured. The method is based on a periodic DNS of a ribbed turbulator upstream of the inlet boundary of the pipe. The Reynolds number based on the bulk velocity and pipe diameter is 5300, and the blockage ratio (BR) is 0.06 based on the rib height and pipe diameter. The pitch ratio is defined as the ratio of rib streamwise spacing to rib height and is varied between 1.7 and 5.0. The generation of turbulent flow structures downstream of the ribbed turbulator is identified and discussed. The suitability of this method for accurately representing turbulent inflow conditions is assessed through comparison of the turbulent mean properties, fluctuations, Reynolds stress profiles, and spectra with published pipe flow DNS studies. The DNS results achieve excellent agreement with the numerical and experimental data available in the literature.

  14. Numerically modelling the large scale coronal magnetic field

    NASA Astrophysics Data System (ADS)

    Panja, Mayukh; Nandi, Dibyendu

    2016-07-01

    The solar corona spews out vast amounts of magnetized plasma into the heliosphere, which has a direct impact on the Earth's magnetosphere. Thus it is important that we develop an understanding of the dynamics of the solar corona. With present technology it has not been possible to generate 3D magnetic maps of the solar corona; this warrants the use of numerical simulations to study the coronal magnetic field. A very popular method of doing this is to extrapolate the photospheric magnetic field using NLFF or PFSS codes. However, the extrapolations at different time intervals are completely independent of each other and do not capture the temporal evolution of magnetic fields. On the other hand, full MHD simulations of the global coronal field, apart from being computationally very expensive, would be physically less transparent, owing to the large number of free parameters that are typically used in such codes. This brings us to the magnetofrictional model, which is relatively simpler and computationally more economical. We have developed a magnetofrictional model in 3D spherical polar coordinates to study the large-scale global coronal field. Here we present studies of changing connectivities between active regions in response to photospheric motions.

  15. Finite element simulation of articular contact mechanics with quadratic tetrahedral elements.

    PubMed

    Maas, Steve A; Ellis, Benjamin J; Rawlins, David S; Weiss, Jeffrey A

    2016-03-21

    Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedral elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior, and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both the convergence behavior and the accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements were illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.
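
    For readers unfamiliar with the element families being compared, the sketch below evaluates the standard shape functions of a ten-node quadratic tetrahedron at a point given in barycentric coordinates. The edge-node ordering is a convention that varies between codes (FEBio's may differ), and the check simply verifies the partition of unity.

      import numpy as np

      def tet10_shape(L):
          # TET10 shape functions at barycentric coordinates (L1, L2, L3, L4),
          # with sum(L) == 1: corner nodes Li*(2*Li - 1), edge nodes 4*Li*Lj.
          L1, L2, L3, L4 = L
          corners = [Li * (2.0 * Li - 1.0) for Li in (L1, L2, L3, L4)]
          edges = [4 * L1 * L2, 4 * L2 * L3, 4 * L3 * L1,
                   4 * L1 * L4, 4 * L2 * L4, 4 * L3 * L4]
          return np.array(corners + edges)

      N = tet10_shape((0.25, 0.25, 0.25, 0.25))
      print(N.sum())   # partition of unity: prints 1.0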

  16. Realistic natural atmospheric phenomena and weather effects for interactive virtual environments

    NASA Astrophysics Data System (ADS)

    McLoughlin, Leigh

    Clouds and the weather are important aspects of any natural outdoor scene, but existing dynamic techniques within computer graphics only offer the simplest of cloud representations. The problem that this work looks to address is how to provide a means of simulating clouds and weather features, such as precipitation, that is suitable for virtual environments. Techniques for cloud simulation are available within the area of meteorology, but numerical weather prediction systems are computationally expensive, give more numerical accuracy than we require for graphics, and are restricted to the laws of physics. Within computer graphics, we often need to direct and adjust physical features or to bend reality to meet artistic goals, which is a key difference between the subjects of computer graphics and physical science. Pure physically-based simulations, however, evolve their solutions according to pre-set rules and are notoriously difficult to control. The challenge then is for the solution to be computationally lightweight and able to be directed in some measure, while at the same time producing believable results. This work presents a lightweight physically-based cloud simulation scheme that simulates the dynamic properties of cloud formation and weather effects. The system simulates water vapour, cloud water, cloud ice, rain, snow and hail. The water model incorporates control parameters, and the cloud model uses an arbitrary vertical temperature profile, with a tool described to allow the user to define this. The result of this work is that clouds can now be simulated in near real-time, complete with precipitation. The temperature profile and tool then provide a means of directing the resulting formation.

  17. Hybrid Numerical-Analytical Scheme for Calculating Elastic Wave Diffraction in Locally Inhomogeneous Waveguides

    NASA Astrophysics Data System (ADS)

    Glushkov, E. V.; Glushkova, N. V.; Evdokimov, A. A.

    2018-01-01

    Numerical simulation of traveling wave excitation, propagation, and diffraction in structures with local inhomogeneities (obstacles) is computationally expensive due to the need for mesh-based approximation of extended domains with a rigorous account of the radiation conditions at infinity. Therefore, hybrid numerical-analytic approaches are being developed based on the conjugation of a numerical solution in a local vicinity of the obstacle and/or source with an explicit analytic representation in the remaining semi-infinite external domain. However, standard finite-element software generally does not provide such a coupling with the external field, particularly in the case of multimode expansion. This work proposes a hybrid computational scheme that allows realization of such a conjugation using standard software. The latter is used to construct a set of numerical solutions that serve as the basis for the sought solution in the local internal domain. The unknown expansion coefficients on this basis and on normal modes in the semi-infinite external domain are then determined from the conditions of displacement and stress continuity at the boundary between the two domains. We describe the implementation of this approach in the scalar and vector cases. To evaluate the reliability of the results and the efficiency of the algorithm, we compare it with a semianalytic solution to the problem of traveling wave diffraction by a horizontal obstacle, as well as with a finite-element solution obtained for a limited domain artificially restricted using absorbing boundaries. As an example, we consider the incidence of a fundamental antisymmetric Lamb wave onto surface and partially submerged elastic obstacles. It is noted that the proposed hybrid scheme can also be used to determine the eigenfrequencies and eigenforms of resonance scattering, as well as the characteristics of traveling waves in embedded waveguides.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Xiao; Blazek, Jonathan A.; McEwen, Joseph E.

    Cosmological perturbation theory is a powerful tool to predict the statistics of large-scale structure in the weakly non-linear regime, but even at 1-loop order it results in computationally expensive mode-coupling integrals. Here we present a fast algorithm for computing 1-loop power spectra of quantities that depend on the observer's orientation, thereby generalizing the FAST-PT framework (McEwen et al., 2016) that was originally developed for scalars such as the matter density. This algorithm works for an arbitrary input power spectrum and substantially reduces the time required for numerical evaluation. We apply the algorithm to four examples: intrinsic alignments of galaxies in the tidal torque model; the Ostriker-Vishniac effect; the secondary CMB polarization due to baryon flows; and the 1-loop matter power spectrum in redshift space. Code implementing this algorithm and these applications is publicly available at https://github.com/JoeMcEwen/FAST-PT.

  19. Hybrid discrete/continuum algorithms for stochastic reaction networks

    DOE PAGES

    Safta, Cosmin; Sargsyan, Khachik; Debusschere, Bert; ...

    2014-10-22

    Direct solutions of the Chemical Master Equation (CME) governing Stochastic Reaction Networks (SRNs) are generally prohibitively expensive due to excessive numbers of possible discrete states in such systems. To enhance computational efficiency we develop a hybrid approach where the evolution of states with low molecule counts is treated with the discrete CME model while that of states with large molecule counts is modeled by the continuum Fokker-Planck equation. The Fokker-Planck equation is discretized using a 2nd order finite volume approach with appropriate treatment of flux components to avoid negative probability values. The numerical construction at the interface between the discrete and continuum regions implements the transfer of probability reaction by reaction according to the stoichiometry of the system. As a result, the performance of this novel hybrid approach is explored for a two-species circadian model with computational efficiency gains of about one order of magnitude.
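
    The discrete, low-copy-number side of such a hybrid is typically what a stochastic simulation algorithm samples exactly. As a self-contained illustration (of the discrete regime only, not of the hybrid interface), the sketch below runs Gillespie's algorithm for a single birth-death species with hypothetical rate constants.

      import numpy as np

      def ssa_birth_death(k_birth, k_death, x0, t_end, rng):
          # Gillespie stochastic simulation of 0 -> X (rate k_birth) and
          # X -> 0 (rate k_death * x); returns the copy number at t_end.
          t, x = 0.0, x0
          while t < t_end:
              a = np.array([k_birth, k_death * x])   # reaction propensities
              a0 = a.sum()
              if a0 == 0.0:
                  break
              t += rng.exponential(1.0 / a0)         # time to next reaction
              x += 1 if rng.uniform() < a[0] / a0 else -1
          return x

      rng = np.random.default_rng(42)
      samples = [ssa_birth_death(5.0, 0.1, 0, 100.0, rng) for _ in range(1000)]
      print(np.mean(samples))   # approaches k_birth / k_death = 50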

  20. Gaussian process surrogates for failure detection: A Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Wang, Hongqiao; Lin, Guang; Li, Jinglai

    2016-05-01

    An important task of uncertainty quantification is to identify the probability of undesired events, in particular system failures, caused by various sources of uncertainty. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation in which the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inference of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
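
    The overall idea can be sketched compactly: fit a Gaussian process to a handful of expensive limit-state evaluations, then estimate the failure probability by cheap Monte Carlo on the surrogate. The limit-state function and design below are hypothetical, and the design points are chosen at random rather than by the paper's optimal experimental design.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def g(x):
          # Hypothetical expensive limit state: failure where g(x) < 0.
          return 2.0 - x[:, 0] ** 2 - 0.5 * np.sin(3.0 * x[:, 1])

      rng = np.random.default_rng(0)
      X_train = rng.uniform(-2, 2, size=(40, 2))          # training design
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
      gp.fit(X_train, g(X_train))

      X_mc = rng.uniform(-2, 2, size=(100000, 2))         # cheap surrogate MC
      p_fail = np.mean(gp.predict(X_mc) < 0.0)
      print(f"estimated failure probability: {p_fail:.4f}")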

  1. A New Low Complexity Angle of Arrival Algorithm for 1D and 2D Direction Estimation in MIMO Smart Antenna Systems

    PubMed Central

    Al-Sadoon, Mohammed A. G.; Zuid, Abdulkareim; Jones, Stephen M. R.; Noras, James M.

    2017-01-01

    This paper proposes a new low complexity angle of arrival (AOA) method for signal direction estimation in multi-element smart wireless communication systems. The new method estimates the AOAs of the received signals directly from the received signals with significantly reduced complexity, since it does not need to construct the correlation matrix, invert the matrix or apply eigen-decomposition, which are computationally expensive. A mathematical model of the proposed method is illustrated and then verified using extensive computer simulations. Both linear and circular sensor arrays are studied using various numerical examples. The method is systematically compared with other common and recently introduced AOA methods over a wide range of scenarios. The simulated results show that the new method has several advantages in terms of reduced complexity and improved accuracy under the assumptions of correlated signals and limited numbers of snapshots. PMID:29140313
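
    For context, the classic delay-and-sum (Bartlett) scan below likewise avoids eigendecomposition, and by averaging |a(theta)^H x_t|^2 over snapshots it even skips forming the covariance matrix explicitly. It is shown only as a familiar baseline, not as the authors' projection method.

      import numpy as np

      def bartlett_aoa(snapshots, n_angles=361, d=0.5):
          # Delay-and-sum AOA scan for a uniform linear array with element
          # spacing d (in wavelengths): average matched-filter power over
          # snapshots and pick the peak.
          M, T = snapshots.shape
          thetas = np.linspace(-90.0, 90.0, n_angles)
          phase = -2j * np.pi * d * np.arange(M)
          A = np.exp(np.outer(phase, np.sin(np.deg2rad(thetas))))
          power = np.mean(np.abs(A.conj().T @ snapshots) ** 2, axis=1)
          return thetas[np.argmax(power)]

      rng = np.random.default_rng(0)
      M, T, theta0 = 8, 64, 20.0
      a0 = np.exp(-2j * np.pi * 0.5 * np.arange(M) * np.sin(np.deg2rad(theta0)))
      X = np.outer(a0, rng.normal(size=T))
      X += 0.1 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))
      print(f"estimated AOA: {bartlett_aoa(X):.1f} degrees")   # near 20.0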

  2. Optimal variable-grid finite-difference modeling for porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Li, Haishan

    2014-12-01

    Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derive optimal staggered-grid finite difference schemes with variable grid-spacing and time-step for seismic modeling in porous media. FD operators with small grid-spacing and time-step are adopted for low-velocity or small-scale geological bodies, while FD operators with large grid-spacing and time-step are adopted for high-velocity or large-scale regions. The dispersion relations of the FD schemes were derived based on plane wave theory, and the FD coefficients were then obtained using the Taylor expansion. Dispersion analysis and modeling results demonstrate that the proposed method has higher accuracy with lower computational cost for poroelastic wave simulation in heterogeneous reservoirs.
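
    The Taylor-expansion step mentioned in the abstract amounts to solving a small linear system for the stencil weights. The sketch below reproduces the textbook staggered-grid coefficients (it does not implement the paper's variable-grid scheme):

      import numpy as np
      from math import factorial

      def staggered_fd_coeffs(N):
          # Coefficients c_n of the 2N-point staggered first derivative,
          # f'(0) ~ (1/h) * sum_n c_n * [f((n-1/2)h) - f(-(n-1/2)h)].
          # Matching Taylor terms of order 1, 3, ..., 2N-1 gives N equations.
          M = np.array([[2.0 * (n - 0.5) ** (2 * k - 1) / factorial(2 * k - 1)
                         for n in range(1, N + 1)] for k in range(1, N + 1)])
          rhs = np.zeros(N)
          rhs[0] = 1.0
          return np.linalg.solve(M, rhs)

      print(staggered_fd_coeffs(2))   # [1.125, -0.04166...] i.e. 9/8 and -1/24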

  3. A New Low Complexity Angle of Arrival Algorithm for 1D and 2D Direction Estimation in MIMO Smart Antenna Systems.

    PubMed

    Al-Sadoon, Mohammed A G; Ali, Nazar T; Dama, Yousf; Zuid, Abdulkareim; Jones, Stephen M R; Abd-Alhameed, Raed A; Noras, James M

    2017-11-15

    This paper proposes a new low complexity angle of arrival (AOA) method for signal direction estimation in multi-element smart wireless communication systems. The new method estimates the AOAs of the received signals directly from the received signals with significantly reduced complexity, since it does not need to construct the correlation matrix, invert the matrix or apply eigen-decomposition, which are computationally expensive. A mathematical model of the proposed method is illustrated and then verified using extensive computer simulations. Both linear and circular sensor arrays are studied using various numerical examples. The method is systematically compared with other common and recently introduced AOA methods over a wide range of scenarios. The simulated results show that the new method has several advantages in terms of reduced complexity and improved accuracy under the assumptions of correlated signals and limited numbers of snapshots.

  4. Proceeding On : Parallelisation Of Critical Code Passages In PHOENIX/3D

    NASA Astrophysics Data System (ADS)

    Arkenberg, Mario; Wichert, Viktoria; Hauschildt, Peter H.

    2016-10-01

    Highly resolved state-of-the-art 3D atmosphere simulations will remain computationally extremely expensive for years to come. In addition to the need for more computing power, rethinking coding practices is necessary. We take a dual approach here, by introducing especially adapted, parallel numerical methods and correspondingly parallelising time-critical code passages. In the following, we present our work on PHOENIX/3D. While parallelisation is generally worthwhile, it requires revision of time-consuming subroutines with respect to separability of localised data and variables in order to determine the optimal approach. Of course, the same applies to the code structure. The importance of this ongoing work can be showcased by recently derived benchmark results, which were generated utilising MPI and OpenMP. Furthermore, the need for a careful and thorough choice of an adequate, machine-dependent setup is discussed.

  5. Three-dimensional simulation of helix traveling-wave tube cold-test characteristics using MAFIA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kory, C.L.

    1996-12-31

    A critically important step in the traveling-wave tube (TWT) design process is the cold-testing of the slow-wave circuit for dispersion, beam interaction impedance and RF losses. Experimental cold-tests can be very time-consuming and expensive, thus limiting the freedom to examine numerous variations to the test circuit. This makes the need for computational methods crucial as they can lower cost, reduce tube development time and allow the freedom to introduce novel and improved designs. The cold-test parameters have been calculated for a C-Band Northrop-Grumman helix TWT slow-wave circuit using MAFIA, the three-dimensional electromagnetic finite-integration computer code. Measured and simulated cold-test data for the Northrop-Grumman helix TWT including dispersion, impedance and attenuation will be presented. Close agreement between simulated and measured values of the dispersion, impedance and attenuation has been obtained.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Juliane

    MISO is an optimization framework for solving computationally expensive mixed-integer, black-box, global optimization problems. MISO uses surrogate models to approximate the computationally expensive objective function. Hence, derivative information, which is generally unavailable for black-box simulation objective functions, is not needed. MISO allows the user to choose the initial experimental design strategy, the type of surrogate model, and the sampling strategy.
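
    A schematic of the surrogate-assisted loop that frameworks of this kind run is sketched below; it is not MISO's actual algorithm or API, and the candidate-scoring rule is a hypothetical stand-in for MISO's configurable sampling strategies.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def expensive_black_box(x):
          # Stand-in for a costly simulation-based objective.
          return np.sum((x - 0.3) ** 2, axis=-1)

      rng = np.random.default_rng(0)
      dim, budget = 2, 30
      X = rng.uniform(0, 1, size=(8, dim))            # initial experimental design
      y = expensive_black_box(X)

      while len(X) < budget:
          surrogate = RBFInterpolator(X, y)           # cheap approximation
          cand = rng.uniform(0, 1, size=(500, dim))   # random candidates
          # Trade off predicted value against distance to visited points.
          dists = np.min(np.linalg.norm(cand[:, None] - X[None], axis=-1), axis=1)
          x_new = cand[np.argmin(surrogate(cand) - 0.5 * dists)]
          X = np.vstack([X, x_new])
          y = np.append(y, expensive_black_box(x_new))

      print("best point:", X[np.argmin(y)], "best value:", y.min())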

  7. 48 CFR 9905.506-60 - Illustrations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., installs a computer service center to begin operations on May 1. The operating expense related to the new... operating expenses of the computer service center for the 8-month part of the cost accounting period may be... 48 Federal Acquisition Regulations System 7 2013-10-01 2012-10-01 true Illustrations. 9905.506-60...

  8. Highly Scalable Asynchronous Computing Method for Partial Differential Equations: A Path Towards Exascale

    NASA Astrophysics Data System (ADS)

    Konduri, Aditya

    Many natural and engineering systems are governed by nonlinear partial differential equations (PDEs) which result in multiscale phenomena, e.g. turbulent flows. Numerical simulations of these problems are computationally very expensive and demand extreme levels of parallelism. At realistic conditions, simulations are being carried out on massively parallel computers with hundreds of thousands of processing elements (PEs). It has been observed that communication between PEs as well as their synchronization at these extreme scales take up a significant portion of the total simulation time and result in poor scalability of codes. This issue is likely to pose a bottleneck in the scalability of codes on future Exascale systems. In this work, we propose an asynchronous computing algorithm based on widely used finite difference methods to solve PDEs in which synchronization between PEs due to communication is relaxed at a mathematical level. We show that while stability is conserved when schemes are used asynchronously, accuracy is greatly degraded. Since message arrivals at PEs are random processes, so is the behavior of the error. We propose a new statistical framework in which we show that average errors always drop to first order regardless of the original scheme. We propose new asynchrony-tolerant schemes that maintain accuracy when synchronization is relaxed. The quality of the solution is shown to depend not only on the physical phenomena and numerical schemes, but also on the characteristics of the computing machine. A novel algorithm using remote memory access communications has been developed to demonstrate excellent scalability of the method for large-scale computing. Finally, we present a path to extend this method to solving complex multi-scale problems on Exascale machines.
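
    The effect of relaxed synchronization is easy to demonstrate in miniature: below, an explicit heat-equation update reads randomly stale copies of its neighbor values, a toy model of delayed halo exchanges (real PEs delay only boundary values, and the paper's asynchrony-tolerant stencils are designed to correct the resulting error).

      import numpy as np

      def heat_step_async(history, alpha, rng, max_delay=3):
          # One explicit step of u_t = u_xx on a periodic grid, where the
          # left/right neighbor values may be up to max_delay steps old.
          u = history[-1]
          dl, dr = rng.integers(0, max_delay, size=2)
          left = np.roll(history[-1 - dl], 1)
          right = np.roll(history[-1 - dr], -1)
          return u + alpha * (left - 2.0 * u + right)

      rng = np.random.default_rng(0)
      n, alpha = 128, 0.25
      u = np.sin(2 * np.pi * np.arange(n) / n)
      history = [u.copy()] * 4
      for _ in range(200):
          u = heat_step_async(history, alpha, rng)
          history = history[1:] + [u.copy()]
      print("max |u| after 200 asynchronous steps:", np.abs(u).max())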

  9. Broadband ground motion simulation using a paralleled hybrid approach of Frequency Wavenumber and Finite Difference method

    NASA Astrophysics Data System (ADS)

    Chen, M.; Wei, S.

    2016-12-01

    The serious damage to Mexico City caused by the 1985 Michoacan earthquake 400 km away indicates that urban areas may be affected by remote earthquakes. To assess the earthquake risk that distant earthquakes pose to urban areas, we developed a hybrid Frequency Wavenumber (FK) and Finite Difference (FD) code implemented with MPI, since the computation of seismic wave propagation from a distant earthquake using a single numerical method (e.g. Finite Difference, Finite Element or Spectral Element) is very expensive. In our approach, we compute the incident wave field (ud) at the boundaries of the excitation box, which surrounds the local structure, using a paralleled FK method (Zhu and Rivera, 2002), and compute the total wave field (u) within the excitation box using a paralleled 2D FD method. We apply a perfectly matched layer (PML) absorbing condition to the diffracted wave field (u-ud). Compared to the previous Generalized Ray Theory and Finite Difference (Wen and Helmberger, 1998), Frequency Wavenumber and Spectral Element (Tong et al., 2014), and Direct Solution Method and Spectral Element hybrid methods (Monteiller et al., 2013), our absorbing boundary condition dramatically suppresses the numerical noise. The MPI implementation of our method can greatly speed up the calculation. Besides, our hybrid method also has a potential use in high resolution array imaging similar to Tong et al. (2014).

  10. 26 CFR 1.50B-1 - Definitions of WIN expenses and WIN employees.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... employee. (c) Trade or business expenses. The term “WIN expenses” includes only salaries and wages which... 26 Internal Revenue 1 2010-04-01 2010-04-01 true Definitions of WIN expenses and WIN employees. 1... INCOME TAXES Rules for Computing Credit for Expenses of Work Incentive Programs § 1.50B-1 Definitions of...

  11. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    The aim of this work is to develop a computationally efficient automated method for the optimal choice of regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. The same is effectively deployed via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method was able to overcome the computationally expensive nature of the MRM-based automated way of finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
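
    A compact way to see the two ingredients working together: damped least squares solved by LSQR, with a simplex (Nelder-Mead) search over the damping parameter. The selection criterion below is a hypothetical discrepancy-style objective, not the paper's, and the test problem is synthetic.

      import numpy as np
      from scipy.sparse.linalg import lsqr
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      A = rng.normal(size=(80, 60))
      x_true = np.zeros(60)
      x_true[:5] = 1.0
      b = A @ x_true + 0.05 * rng.normal(size=80)
      noise_norm = 0.05 * np.sqrt(80)

      def criterion(log_lam):
          # Hypothetical Morozov-style criterion: make the residual norm
          # match the noise norm. LSQR's `damp` argument applies
          # Tikhonov-like regularization within the bidiagonalization.
          x = lsqr(A, b, damp=np.exp(log_lam[0]))[0]
          return (np.linalg.norm(A @ x - b) - noise_norm) ** 2

      res = minimize(criterion, x0=[0.0], method="Nelder-Mead")  # simplex search
      print("selected regularization parameter:", np.exp(res.x[0]))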

  12. Evaluating vortex generator jet experiments for turbulent flow separation control

    NASA Astrophysics Data System (ADS)

    von Stillfried, F.; Kékesi, T.; Wallin, S.; Johansson, A. V.

    2011-12-01

    Separating turbulent boundary layers can be energized by streamwise vortices from vortex generators (VGs) that increase the near-wall momentum as well as the overall mixing of the flow, so that flow separation can be delayed or even prevented. In general, two different types of VGs exist: passive vane VGs (VVGs) and active VG jets (VGJs). Even though VGs are already successfully used in engineering applications, it is still time-consuming and computationally expensive to include them in a numerical analysis. Fully resolved VGs in a computational mesh lead to a very high number of grid points and thus, computational costs. In addition, computational parameter studies for such flow control devices take much time to set up. Therefore, much of the research work is still carried out experimentally. KTH Stockholm is developing a novel VGJ model that makes it possible to include only the physical influence, in terms of the additional stresses that originate from the VGJs, without the need to locally refine the computational mesh. Such a modelling strategy enables fast VGJ parameter variations and makes optimization studies easily possible. To that end, VGJ experiments are evaluated in this contribution and the results are used for developing a statistical VGJ model.

  13. Fast inverse scattering solutions using the distorted Born iterative method and the multilevel fast multipole algorithm

    PubMed Central

    Hesford, Andrew J.; Chew, Weng C.

    2010-01-01

    The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438

  14. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    NASA Astrophysics Data System (ADS)

    Mitry, Mina

    Often, computationally expensive engineering simulations can hinder the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
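
    Under stated assumptions (synthetic data, with sklearn's PCA and scipy's radial basis function interpolator standing in for the thesis implementation), the linear variant can be sketched in a few lines: compress the high-dimensional outputs, interpolate the reduced coefficients over the design space, then decompress predictions. The non-linear variant swaps in kernel PCA.

      import numpy as np
      from sklearn.decomposition import PCA
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(0)
      params = rng.uniform(0, 1, size=(50, 2))        # design parameters
      s = np.linspace(0, 1, 1000)                     # output field coordinate
      fields = np.sin(4 * params[:, :1] * s) + params[:, 1:] * s ** 2

      pca = PCA(n_components=5).fit(fields)           # linear dimension reduction
      surrogate = RBFInterpolator(params, pca.transform(fields))

      new_param = np.array([[0.4, 0.7]])
      field_pred = pca.inverse_transform(surrogate(new_param))[0]
      truth = np.sin(4 * 0.4 * s) + 0.7 * s ** 2
      print("prediction error:", np.linalg.norm(field_pred - truth))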

  15. 47 CFR 32.6112 - Motor vehicle expense.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Motor vehicle expense. 32.6112 Section 32.6112 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM SYSTEM OF ACCOUNTS.../or to other Plant Specific Operations Expense accounts. These amounts shall be computed on the basis...

  16. Microplasma fabrication: from semiconductor technology for 2D-chips and microfluidic channels to rapid prototyping and 3D-printing of microplasma devices

    NASA Astrophysics Data System (ADS)

    Shatford, R.; Karanassios, Vassili

    2014-05-01

    Microplasmas are receiving attention in recent conferences and the current scientific literature. In our laboratory, microplasmas-on-chips have proved to be particularly attractive. The 2D- and 3D-chips we developed became hybrid because they were fitted with a quartz plate (quartz was used due to its transparency to UV). Fabrication of 2D- and 3D-chips for microplasma research is described. The fabrication methods described range from semiconductor fabrication technology, to Computer Numerical Control (CNC) machining, to 3D-printing. These methods may prove useful for those contemplating entering microplasma research who have no access to expensive semiconductor fabrication equipment.

  17. Abort Options for Human Missions to Earth-Moon Halo Orbits

    NASA Technical Reports Server (NTRS)

    Jesick, Mark C.

    2013-01-01

    Abort trajectories are optimized for human halo orbit missions about the translunar libration point (L2), with an emphasis on the use of free return trajectories. Optimal transfers from outbound free returns to L2 halo orbits are numerically optimized in the four-body ephemeris model. Circumlunar free returns are used for direct transfers, and cislunar free returns are used in combination with lunar gravity assists to reduce propulsive requirements. Trends in orbit insertion cost and flight time are documented across the southern L2 halo family as a function of halo orbit position and free return flight time. It is determined that the maximum amplitude southern halo incurs the lowest orbit insertion cost for direct transfers but the maximum cost for lunar gravity assist transfers. The minimum amplitude halo is the most expensive destination for direct transfers but the least expensive for lunar gravity assist transfers. The on-orbit abort costs for three halos are computed as a function of abort time and return time. Finally, an architecture analysis is performed to determine launch and on-orbit vehicle requirements for halo orbit missions.

  18. Modelling and analysis of the wetting characteristics of ink for display applications with the surface evolution technique

    NASA Astrophysics Data System (ADS)

    Shin, Dong-Youn; Brakke, Kenneth A.

    2009-06-01

    Piezo drop-on-demand inkjet printing technology has attracted the attention of display industries for the production of colour filters for thin film transistor liquid crystal displays (TFT LCD) because of the opportunity of reducing manufacturing cost. Colourant ink droplets ejected from inkjet nozzles selectively fill subpixels surrounded by black matrix (BM). Surface energy differences between the glass substrate and the BM generally guide this ink filling process. This colourant ink filling process, however, results from the complex hydrodynamic interaction of ink with the substrate and the BM. Neither computationally expensive numerical methods nor time-consuming and costly experiments are suitable for the derivation of optimum surface conditions at the early development stage. In this study, a more concise surface evolution technique is proposed, and ways to find the optimum surface conditions for the fabrication of TFT LCD colour filters and polymer light emitting devices are discussed, which might be useful for chemists and developers of ink and BM material, as well as for process engineers in display industries.

  19. 47 CFR 32.6121 - Land and building expense.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...

  20. 47 CFR 32.6121 - Land and building expense.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...

  1. 47 CFR 32.6121 - Land and building expense.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...

  2. 47 CFR 32.6121 - Land and building expense.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...

  3. 47 CFR 32.6121 - Land and building expense.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...

  4. The advanced role of computational mechanics and visualization in science and technology: analysis of the Germanwings Flight 9525 crash

    NASA Astrophysics Data System (ADS)

    Chen, Goong; Wang, Yi-Ching; Perronnet, Alain; Gu, Cong; Yao, Pengfei; Bin-Mohsin, Bandar; Hajaiej, Hichem; Scully, Marlan O.

    2017-03-01

    Computational mathematics, physics and engineering form a major constituent of modern computational science, which now stands on an equal footing with the established branches of theoretical and experimental sciences. Computational mechanics solves problems in science and engineering based upon mathematical modeling and computing, bypassing the need for expensive and time-consuming laboratory setups and experimental measurements. Furthermore, it allows the numerical simulation of large-scale systems, such as the formation of galaxies, which could not be done in any Earth-bound laboratory. This article is written as part of the 21st Century Frontiers Series to illustrate some state-of-the-art computational science. We emphasize how to do numerical modeling and visualization in the study of a contemporary event, the pulverizing crash of Germanwings Flight 9525 on March 24, 2015, as a showcase. Such numerical modeling and the ensuing simulation of aircraft crashes into land or mountains are complex tasks as they involve both theoretical study and supercomputing of a complex physical system. The most tragic type of crash involves ‘pulverization’ such as the one suffered by this Germanwings flight. Here, we visualize pulverizing airliner crashes through video animations generated from supercomputer applications of the numerical modeling tool LS-DYNA. A sound validation process is challenging but essential for any sophisticated calculations. We achieve this by validation against the experimental data from a crash test done in 1993 of an F4 Phantom II fighter jet into a wall. We have developed a method by hybridizing two primary methods: finite element analysis and smoothed particle hydrodynamics. This hybrid method also enhances visualization by showing a ‘debris cloud’. Based on our supercomputer simulations and the visualization, we point out that prior works on this topic based on ‘hollow interior’ modeling can be quite problematic and, thus, not likely to be correct. We discuss the effects of terrain on pulverization using the information from the recovered flight-data-recorder and show our forensics and assessments of what may have happened during the final moments of the crash. Finally, we point out that our study has potential for being made into real-time flight crash simulators to help the study of crashworthiness and survivability for future aviation safety. Some forward-looking statements are also made.

  5. 7 CFR 1484.53 - What are the requirements for documenting and reporting contributions?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... contribution must be documented by the Cooperator, showing the method of computing non-cash contributions, salaries, and travel expenses. (b) Each Cooperator must keep records of the methods used to compute the value of non-cash contributions, and (1) Copies of invoices or receipts for expenses paid by the U.S...

  6. 26 CFR 1.213-1 - Medical, dental, etc., expenses.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... medical care includes the diagnosis, cure, mitigation, treatment, or prevention of disease. Expenses paid... taxable year for insurance that constitute expenses paid for medical care shall, for purposes of computing... care of the taxpayer, his spouse, or a dependent of the taxpayer and not be compensated for by...

  7. 26 CFR 1.556-2 - Adjustments to taxable income.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... of deductions for trade or business expenses and depreciation which are allocable to the operation... computed without the deduction of the amount disallowed under section 556(b)(5), relating to expenses and... disallowed under section 556(b)(5), relating to expenses and depreciation applicable to property of the...

  8. An algorithm for fast elastic wave simulation using a vectorized finite difference operator

    NASA Astrophysics Data System (ADS)

    Malkoti, Ajay; Vedanti, Nimisha; Tiwari, Ram Krishna

    2018-07-01

    Modern geophysical imaging techniques exploit the full wavefield information, which can be simulated numerically. These numerical simulations are computationally expensive due to several factors, such as a large number of time steps and nodes, a big derivative stencil and a huge model size. Besides these constraints, it is also important to reformulate the numerical derivative operator for improved efficiency. In this paper, we introduce a vectorized derivative operator over the staggered grid with shifted coordinate systems. The operator increases the efficiency of simulation by exploiting the fact that each variable can be represented in the form of a matrix. This operator allows updating all nodes of a variable defined on the staggered grid, in a manner similar to the collocated grid scheme, thereby reducing the computational run-time considerably. Here we demonstrate an application of this operator to simulate seismic wave propagation in elastic media (the Marmousi model), by discretizing the equations on a staggered grid. We have compared the performance of this operator in three programming languages, which reveals that it can increase the execution speed by a factor of at least 2-3 for FORTRAN and MATLAB, and nearly 100 for Python. We have further carried out various tests in MATLAB to analyze the effect of model size and the number of time steps on total simulation run-time. We find that there is an additional, though small, computational overhead for each step, which depends on the total number of time steps used in the simulation. A MATLAB code package, 'FDwave', for the proposed simulation scheme is available upon request.
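
    The flavor of the operator is easy to convey in Python: with the field stored as a matrix, one staggered-grid derivative of every node is a single vectorized expression. The sketch below uses a periodic fourth-order stencil for illustration; the paper's operator additionally handles the shifted coordinate systems of the full elastic staggered grid.

      import numpy as np

      def dx_staggered(f, dx):
          # 4th-order staggered x-derivative, evaluated at points shifted by
          # dx/2: all nodes are updated at once via array shifts (np.roll),
          # with periodic wrap-around.
          c1, c2 = 9.0 / 8.0, -1.0 / 24.0
          return (c1 * (np.roll(f, -1, axis=1) - f)
                  + c2 * (np.roll(f, -2, axis=1) - np.roll(f, 1, axis=1))) / dx

      nx = 256
      x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
      h = x[1] - x[0]
      f = np.tile(np.sin(x), (4, 1))                  # small 2-D test field
      err = np.abs(dx_staggered(f, h) - np.cos(x + 0.5 * h)).max()
      print(f"max error: {err:.2e}")                  # small (4th-order accurate)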

  9. Fast solver for large scale eddy current non-destructive evaluation problems

    NASA Astrophysics Data System (ADS)

    Lei, Naiguang

    Eddy current testing plays a very important role in non-destructive evaluation of conducting test samples. Based on Faraday's law, an alternating magnetic field source generates induced currents, called eddy currents, in an electrically conducting test specimen. The eddy currents generate induced magnetic fields that oppose the direction of the inducing magnetic field in accordance with Lenz's law. In the presence of discontinuities in material properties or defects in the test specimen, the induced eddy current paths are perturbed, and the associated magnetic fields can be detected by coils or magnetic field sensors, such as Hall elements or magneto-resistance sensors. Due to the complexity of test specimens and inspection environments, theoretical simulation models are extremely valuable for studying the basic field/flaw interactions in order to obtain a fuller understanding of non-destructive testing phenomena. Theoretical models of the forward problem are also useful for training and validation of automated defect detection systems, as they generate defect signatures that are expensive to replicate experimentally. In general, modelling methods can be classified into two categories: analytical and numerical. Although analytical approaches offer closed-form solutions, these are generally impossible to obtain, largely due to the complex sample and defect geometries, especially in three-dimensional space. Numerical modelling has become popular with advances in computer technology and computational methods. However, because large-scale problems consume huge amounts of computing time, accelerations and fast solvers are needed to make numerical models practical. This dissertation describes a numerical simulation model for eddy current problems using finite element analysis. The accuracy of this model is validated via comparison with experimental measurements of steam generator tube wall defects. These simulations, which generate two-dimensional raster-scan data, typically take one to two days on a dedicated eight-core PC. A novel direct integral solver for eddy current problems and a GPU-based implementation are also investigated in this research to reduce the computational time.

  10. Comparison of numerical weather prediction based deterministic and probabilistic wind resource assessment methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jie; Draxl, Caroline; Hopson, Thomas

    Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low resolution NWP model in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper performs a comparison of three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is the best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes the wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble computational cost is negligible, whereas the WIND Toolkit requires large computational resources. Future efforts could focus on the combination of the analog ensemble with intermediate resolution (e.g., 10-15 km) NWP estimates, to considerably reduce the computational burden while providing accurate deterministic estimates and reliable probabilistic assessments.

  11. Numerical Modeling of Gas Turbine Combustor Utilizing One-Dimensional Acoustics

    NASA Astrophysics Data System (ADS)

    Caley, Thomas M.

    This study focuses on the numerical modeling of a gas turbine combustor set-up with known regions of thermoacoustic instability. The proposed model takes the form of a hybrid thermoacoustic network, with lumped elements representing boundary conditions and the flame, and 3-dimensional geometry volumes representing the geometry. The model is analyzed using a commercial 3-D finite element method (FEM) software, COMSOL Multiphysics. A great deal of literature is available covering thermoacoustic modeling, but much of it utilizes more computationally expensive techniques such as Large-Eddy Simulations, or relies on analytical modeling that is limited to specific test cases or proprietary software. The present study models the 3-D geometry of a high-pressure combustion chamber accurately, and uses the lumped elements of a thermoacoustic network to represent parts of the combustor system that can be experimentally tested under stable conditions, ensuring that the recorded acoustic responses can be attributed to that element alone. The numerical model has been tested against the experimental model with and without an experimentally-determined impedance boundary condition. Eigenfrequency studies are used to compare the frequency and growth rates (and from that, the thermoacoustic stability) of resonant modes in the combustor. The flame in the combustor is modeled with a flame transfer function that was determined from experimental testing using frequency forcing. The effect of flow rate on the impedance boundary condition is also examined experimentally and numerically to qualify the practice of modeling an orifice plate as an acoustically-closed boundary. Using the experimental flame transfer function and boundary conditions in the numerical model produced results that closely matched previous experimental tests in frequency, but not in stability characteristics. The lightweight nature of the numerical model means additional lumped elements can be easily added when experimental data is available, creating a more accurate model without noticeably increasing the complexity or computational time.

  12. Analysis of the influence of dynamic phenomena on the fracture of a reinforced concrete beam under quasistatic loading (computations and experiment)

    NASA Astrophysics Data System (ADS)

    Bykov, A. A.; Matveenko, V. P.; Serovaev, G. S.; Shardakov, I. N.; Shestakov, A. P.

    2015-07-01

    Construction of numerical models which reliably describe the processes of crack formation and development in reinforced concrete permits estimating the bearing capacity and structural strength of any structural element without using expensive full-scale experiments. In the present paper, the example of four-point bending of a rectangular beam is used to consider a finite-element model of concrete fracture. The results obtained by quasistatic calculations and by solving the problem with inertia forces taken into account are compared. The kinetic energy contribution to the total mechanical energy of the system at the crack origination moment, which is greater than 30%, is estimated to justify the expediency of taking the inertia forces into account. The crack distribution patterns obtained numerically and observed experimentally are compared. It is shown that the leading role in the evolution of the crack formation process is played by the mechanism of fracture of bonds between the reinforcing elements and the concrete.

  13. The use of spectral methods in bidomain studies.

    PubMed

    Trayanova, N; Pilkington, T

    1992-01-01

    A Fourier transform method is developed for solving the bidomain coupled differential equations governing the intracellular and extracellular potentials on a finite sheet of cardiac cells undergoing stimulation. The spectral formulation converts the system of differential equations into a "diagonal" system of algebraic equations. Solving the algebraic equations directly and taking the inverse transform of the potentials proved numerically less expensive than solving the coupled differential equations by means of traditional numerical techniques, such as finite differences; the comparison between the computer execution times showed that the Fourier transform method was about 40 times faster than the finite difference method. By application of the Fourier transform method, transmembrane potential distributions in the two-dimensional myocardial slice were calculated. For a tissue characterized by a ratio of the intra- to extracellular conductivities that is different in all principal directions, the transmembrane potential distribution exhibits a rather complicated geometrical pattern. The influence of the different anisotropy ratios, the finite tissue size, and the stimuli configuration on the pattern of membrane polarization is investigated.
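
    The diagonalization the abstract describes is the familiar property that differentiation becomes multiplication in Fourier space. Below is a minimal model of the idea (a periodic Poisson solve, not the bidomain system itself):

      import numpy as np

      # In Fourier space the Laplacian is diagonal, so the differential
      # system reduces to an independent algebraic equation per mode.
      n = 128
      k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
      KX, KY = np.meshgrid(k, k, indexing="ij")
      ksq = KX ** 2 + KY ** 2
      ksq[0, 0] = 1.0                    # avoid 0/0 for the mean mode

      x = np.linspace(0, 1, n, endpoint=False)
      X, Y = np.meshgrid(x, x, indexing="ij")
      f = np.sin(2 * np.pi * X) * np.cos(4 * np.pi * Y)   # zero-mean source

      u = np.real(np.fft.ifft2(-np.fft.fft2(f) / ksq))    # solves laplacian(u) = f
      residual = np.real(np.fft.ifft2(-ksq * np.fft.fft2(u))) - f
      print(f"max PDE residual: {np.abs(residual).max():.2e}")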

  14. An improved coupled-states approximation including the nearest neighbor Coriolis couplings for diatom-diatom inelastic collision

    NASA Astrophysics Data System (ADS)

    Yang, Dongzheng; Hu, Xixi; Zhang, Dong H.; Xie, Daiqian

    2018-02-01

    Solving the time-independent close coupling equations of a diatom-diatom inelastic collision system by using the rigorous close-coupling approach is numerically difficult because of its expensive matrix manipulation. The coupled-states approximation decouples the centrifugal matrix by neglecting the important Coriolis couplings completely. In this work, a new approximation method based on the coupled-states approximation is presented and applied to time-independent quantum dynamics calculations. This approach only considers the most important Coriolis coupling with the nearest neighbors and ignores weaker Coriolis couplings with farther K channels. As a result, it reduces the computational costs without a significant loss of accuracy. Numerical tests for para-H2+ortho-H2 and para-H2+HD inelastic collisions were carried out and the results showed that the improved method dramatically reduces the errors due to the neglect of the Coriolis couplings in the coupled-states approximation. This strategy should be useful in the quantum dynamics of other systems.

  15. Comprehension and computation in Bayesian problem solving

    PubMed Central

    Johnson, Eric D.; Tubau, Elisabet

    2015-01-01

    Humans have long been characterized as poor probabilistic reasoners when presented with explicit numerical information. Bayesian word problems provide a well-known example of this, where even highly educated and cognitively skilled individuals fail to adhere to mathematical norms. It is widely agreed that natural frequencies can facilitate Bayesian inferences relative to normalized formats (e.g., probabilities, percentages), both by clarifying logical set-subset relations and by simplifying numerical calculations. Nevertheless, between-study performance on “transparent” Bayesian problems varies widely, and generally remains rather unimpressive. We suggest there has been an over-focus on this representational facilitator (i.e., transparent problem structures) at the expense of the specific logical and numerical processing requirements and the corresponding individual abilities and skills necessary for providing Bayesian-like output given specific verbal and numerical input. We further suggest that understanding this task-individual pair could benefit from considerations from the literature on mathematical cognition, which emphasizes text comprehension and problem solving, along with contributions of online executive working memory, metacognitive regulation, and relevant stored knowledge and skills. We conclude by offering avenues for future research aimed at identifying the stages in problem solving at which correct vs. incorrect reasoners depart, and how individual differences might influence this time point. PMID:26283976
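
    To make the contrast concrete, here is a classic-style screening problem (all numbers hypothetical) computed in both formats; the natural-frequency version reduces the same inference to two counts.

      # Normalized (probability) format:
      base_rate, sensitivity, false_alarm = 0.01, 0.80, 0.096
      posterior = (sensitivity * base_rate) / (
          sensitivity * base_rate + false_alarm * (1 - base_rate))
      print(f"P(condition | positive test) = {posterior:.3f}")   # ~0.078

      # Natural-frequency format: out of 1000 people, 10 have the condition
      # and 8 of them test positive; of the 990 without it, about 95 test
      # positive. The set-subset relations do the work:
      sick_pos, healthy_pos = 8, 95
      print(f"P(condition | positive test) = "
            f"{sick_pos / (sick_pos + healthy_pos):.3f}")        # same answer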

  16. A simplified model for TIG-dressing numerical simulation

    NASA Astrophysics Data System (ADS)

    Ferro, P.; Berto, F.; James, M. N.

    2017-04-01

    Irrespective of the mechanical properties of the alloy to be welded, the fatigue strength of welded joints is primarily controlled by the stress concentration associated with the weld toe or weld root. In order to reduce the effects of such notch defects in welds, which are influenced by tensile properties of the alloy, post-weld improvement techniques have been developed. The two most commonly used techniques are weld toe grinding and TIG dressing, which are intended to both remove toe defects such as non-metallic intrusions and to re-profile the weld toe region to give a lower stress concentration. In the case of TIG dressing the weld toe is re-melted to provide a smoother transition between the plate and the weld crown and to beneficially modify the residual stress redistribution. Assessing the changes to weld stress state arising from TIG-dressing is most easily accomplished through a complex numerical simulation that requires coupled thermo-fluid dynamics and solid mechanics. However, this can be expensive in terms of computational cost and time needed to reach a solution. The present paper therefore proposes a simplified numerical model that overcomes such drawbacks and which simulates the remelted toe region by means of the activation and deactivation of elements in the numerical model.

  17. Ground Contact Modeling for the Morpheus Test Vehicle Simulation

    NASA Technical Reports Server (NTRS)

    Cordova, Luis

    2014-01-01

    The Morpheus vertical test vehicle is an autonomous robotic lander being developed at Johnson Space Center (JSC) to test hazard detection technology. Because the initial ground contact simulation model was not very realistic, it was decided to improve the model without making it too computationally expensive. The first development cycle added capability to define vehicle attachment points (AP) and to keep track of their states in the lander reference frame (LFRAME). These states are used with a spring damper model to compute an AP contact force. The lateral force is then overwritten, if necessary, by the Coulomb static or kinetic friction force. The second development cycle added capability to use the PolySurface class as the contact surface. The class can load CAD data in STL (Stereo Lithography) format, and use the data to compute line of sight (LOS) intercepts. A polygon frame (PFRAME) is computed from the facet intercept normal and used to convert the AP state to PFRAME. Three flat plane tests validate the transitions from kinetic to static, static to kinetic, and vertical impact. The hazardous terrain test will be used to test for visual reasonableness. The improved model is numerically inexpensive, robust, and produces results that are reasonable.
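
    A minimal sketch of the contact law described above, for a single attachment point (all constants hypothetical, and the sliding/sticking switch simplified relative to the flight code):

      import numpy as np

      def contact_force(penetration, v_normal, v_lateral, k, c, mu_s, mu_k):
          # Spring-damper normal force plus Coulomb friction: the lateral
          # force is overwritten by the static or kinetic friction law.
          if penetration <= 0.0:
              return 0.0, 0.0                              # no ground contact
          f_n = max(k * penetration - c * v_normal, 0.0)   # no tensile force
          if abs(v_lateral) < 1e-4:                        # sticking
              f_lat = np.clip(-1e5 * v_lateral, -mu_s * f_n, mu_s * f_n)
          else:                                            # sliding
              f_lat = -np.sign(v_lateral) * mu_k * f_n
          return f_n, f_lat

      print(contact_force(0.02, -0.1, 0.3, k=1e5, c=2e3, mu_s=0.8, mu_k=0.6))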

  19. Solving the Coupled System Improves Computational Efficiency of the Bidomain Equations

    PubMed Central

    Southern, James A.; Plank, Gernot; Vigmond, Edward J.; Whiteley, Jonathan P.

    2017-01-01

    The bidomain equations are frequently used to model the propagation of cardiac action potentials across cardiac tissue. At the whole organ level the size of the computational mesh required makes their solution a significant computational challenge. As the accuracy of the numerical solution cannot be compromised, efficiency of the solution technique is important to ensure that the results of the simulation can be obtained in a reasonable time whilst still encapsulating the complexities of the system. In an attempt to increase efficiency of the solver, the bidomain equations are often decoupled into one parabolic equation that is computationally very cheap to solve and an elliptic equation that is much more expensive to solve. In this study the performance of this uncoupled solution method is compared with an alternative strategy in which the bidomain equations are solved as a coupled system. This seems counter-intuitive as the alternative method requires the solution of a much larger linear system at each time step. However, in tests on two 3-D rabbit ventricle benchmarks it is shown that the coupled method is up to 80% faster than the conventional uncoupled method — and that parallel performance is better for the larger coupled problem. PMID:19457741
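
    To make the coupled-versus-uncoupled distinction concrete, here is a hedged 1-D toy (Python/SciPy; the actual benchmarks are 3-D rabbit ventricles with physiological membrane models): a semi-implicit step that solves the transmembrane and extracellular potentials together in one sparse block system, with the uncoupled alternative noted in a comment:

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # Toy 1-D bidomain fibre:  C_m dV/dt = (s_i (V + phi_e)')' - I_ion(V)
        #                          0        = (s_i V')' + ((s_i + s_e) phi_e')'
        n, dx, dt = 200, 0.01, 0.02
        s_i, s_e, C_m = 1.0, 2.0, 1.0

        def lap(s):
            """Zero-flux (Neumann) 1-D Laplacian scaled by conductivity s."""
            e = np.full(n, s / dx**2)
            A = sp.diags([e[1:], -2.0 * e, e[1:]], [-1, 0, 1]).tolil()
            A[0, 0] = A[-1, -1] = -s / dx**2
            return A.tocsr()

        Li, Lie = lap(s_i), lap(s_i + s_e)
        M = sp.identity(n, format="csr") * (C_m / dt)
        I_ion = lambda V: 8.0 * V * (V - 0.15) * (V - 1.0)   # cubic, FitzHugh-style

        V = np.where(np.arange(n) < 20, 1.0, 0.0)            # stimulate the left end

        # Coupled (monolithic) approach: one sparse solve per step for (V, phi_e).
        A = sp.bmat([[M - Li, -Li], [Li, Lie]]).tolil()
        A[n, :] = 0.0
        A[n, n] = 1.0          # pin phi_e at node 0: the elliptic block is singular
        A = A.tocsr()
        for _ in range(50):
            rhs = np.concatenate([M @ V - I_ion(V), np.zeros(n)])
            sol = spla.spsolve(A, rhs)
            V = sol[:n]
        # Uncoupled alternative (two smaller solves per step): first solve
        # Lie * phi = -Li @ V for phi_e (same pinned node), then
        # (M - Li) V_new = M @ V + Li @ phi - I_ion(V).
        print("V range after 50 steps:", V.min(), V.max())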

  20. Hermite regularization of the lattice Boltzmann method for open source computational aeroacoustics.

    PubMed

    Brogi, F; Malaspinas, O; Chopard, B; Bonadonna, C

    2017-10-01

    The lattice Boltzmann method (LBM) is emerging as a powerful engineering tool for aeroacoustic computations. However, the LBM has been shown to present accuracy and stability issues in the medium-low Mach number range, which is of interest for aeroacoustic applications. Several solutions have been proposed but are often too computationally expensive, do not retain the simplicity and the advantages typical of the LBM, or are not described well enough to be usable by the community due to proprietary software policies. An original regularized collision operator is proposed, based on the expansion of Hermite polynomials, that greatly improves the accuracy and stability of the LBM without significantly altering its algorithm. The regularized LBM can be easily coupled with both non-reflective boundary conditions and a multi-level grid strategy, essential ingredients for aeroacoustic simulations. Excellent agreement was found between this approach and both experimental and numerical data on two different benchmarks: the laminar, unsteady flow past a 2D cylinder and a 3D turbulent jet. Finally, most of the aeroacoustic computations with LBM have been done with commercial software, while here the entire theoretical framework is implemented using an open source library (Palabos).
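
    A minimal sketch of the regularization idea on a D2Q9 lattice follows (Python). This is the textbook second-order Hermite projection, not the paper's original operator: the non-equilibrium part of the populations is projected onto second-order Hermite polynomials before collision, discarding destabilizing higher-order content:

        import numpy as np

        # D2Q9 lattice constants
        c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
        w = np.array([4/9] + [1/9]*4 + [1/36]*4)
        cs2 = 1.0 / 3.0

        def equilibrium(rho, u):
            cu = c @ u
            return rho * w * (1 + cu/cs2 + cu**2/(2*cs2**2) - (u @ u)/(2*cs2))

        def regularize(f):
            """Project the non-equilibrium part of f onto 2nd-order Hermite space."""
            rho = f.sum()
            u = (f @ c) / rho
            feq = equilibrium(rho, u)
            fneq = f - feq
            # second-order non-equilibrium moment  Pi = sum_i c_i c_i f_i^neq
            Pi = np.einsum("i,ia,ib->ab", fneq, c, c)
            # Hermite tensor  H2_i = c_i c_i - cs^2 I, contracted with Pi
            H2 = np.einsum("ia,ib->iab", c, c) - cs2 * np.eye(2)
            return feq + w / (2 * cs2**2) * np.einsum("iab,ab->i", H2, Pi)

        f = equilibrium(1.0, np.array([0.05, 0.0])) + 1e-3 * np.random.randn(9)
        f_reg = regularize(f)   # higher-order noise removed before collision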

  1. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    PubMed

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with an inbuilt graphic capture board, provides versatile, easy-to-use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive commercially available imaging programs (e.g., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.

  2. A Parametric Study of Unsteady Rotor-Stator Interaction in a Simplified Francis Turbine

    NASA Astrophysics Data System (ADS)

    Wouden, Alex; Cimbala, John; Lewis, Bryan

    2011-11-01

    CFD analysis is becoming a critical stage in the design of hydroturbines. However, its capability to represent unsteady flow interactions between the rotor and stator is hindered by the need for a 360-degree, mesh-refined model of the turbine passage. For CFD to become a more effective tool in predicting the performance of a hydroturbine, the key interactions between the rotor and stator need to be understood using current numerical methods. As a first step towards evaluating this unsteady behavior without the burden of a computationally expensive domain, the stator and Francis-type rotor blades are reduced to flat plates. Local and global variables are compared using periodic, semi-periodic, and 360-degree geometric models and various turbulence models (k-omega, k-epsilon, and Spalart-Allmaras). The computations take place within the OpenFOAM® environment and utilize a general grid interface (GGI) between the rotor and stator computational domains. The rotor computational domain is capable of dynamic rotation. The results demonstrate some of the strengths and limitations of utilizing CFD for hydroturbine analysis. These case studies will also serve as tutorials to help others learn how to use CFD for turbomachinery. This research is funded by a grant from the DOE.

  3. Sensitivity of Emissions to Uncertainties in Residual Gas Fraction Measurements in Automotive Engines: A Numerical Study

    DOE PAGES

    Aithal, S. M.

    2018-01-01

    Initial conditions of the working fluid (air-fuel mixture) within an engine cylinder, namely, mixture composition and temperature, greatly affect the combustion characteristics and emissions of an engine. In particular, the percentage of residual gas fraction (RGF) in the engine cylinder can significantly alter the temperature and composition of the working fluid as compared with the air-fuel mixture inducted into the engine, thus affecting engine-out emissions. Accurate measurement of the RGF is cumbersome and expensive, thus making it hard to accurately characterize the initial mixture composition and temperature in any given engine cycle. This uncertainty can lead to challenges in accurately interpreting experimental emissions data and in implementing real-time control strategies. Quantifying the effects of the RGF can have important implications for the diagnostics and control of internal combustion engines. This paper reports on the use of a well-validated, two-zone quasi-dimensional model to compute the engine-out NO and CO emissions in a gasoline engine. The effect of varying the RGF on the emissions under lean, near-stoichiometric, and rich engine conditions was investigated. Numerical results show that small uncertainties (~2–4%) in the measured/computed values of the RGF can significantly affect the engine-out NO/CO emissions.

  5. Compressive Sensing with Cross-Validation and Stop-Sampling for Sparse Polynomial Chaos Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

    Compressive sensing is a powerful technique for recovering sparse solutions of underdetermined linear systems, which is often encountered in uncertainty quantification analysis of expensive and high-dimensional physical models. We perform numerical investigations employing several compressive sensing solvers that target the unconstrained LASSO formulation, with a focus on linear systems that arise in the construction of polynomial chaos expansions. With core solvers of l1_ls, SpaRSA, CGIST, FPC_AS, and ADMM, we develop techniques to mitigate overfitting through an automated selection of regularization constant based on cross-validation, and a heuristic strategy to guide the stop-sampling decision. Practical recommendations on parameter settings for these techniques are provided and discussed. The overall method is applied to a series of numerical examples of increasing complexity, including large eddy simulations of supersonic turbulent jet-in-crossflow involving a 24-dimensional input. Through empirical phase-transition diagrams and convergence plots, we illustrate sparse recovery performance under structures induced by polynomial chaos, accuracy and computational tradeoffs between polynomial bases of different degrees, and practicability of conducting compressive sensing for a realistic, high-dimensional physical application. Across test cases studied in this paper, we find ADMM to have demonstrated empirical advantages through consistent lower errors and faster computational times.
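
    As a hedged illustration of regularization selection by cross-validation (using scikit-learn's LassoCV rather than the solvers studied in the paper), on a synthetic underdetermined system standing in for a polynomial-chaos design matrix:

        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(0)
        n, p, k = 80, 400, 10            # underdetermined: more basis terms than samples
        X = rng.standard_normal((n, p))  # stand-in for a polynomial-chaos design matrix
        beta = np.zeros(p)
        beta[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
        y = X @ beta + 0.01 * rng.standard_normal(n)

        # Cross-validation picks the LASSO regularization constant automatically,
        # one way to guard against overfitting in sparse recovery.
        model = LassoCV(cv=5, n_alphas=50).fit(X, y)
        print("chosen alpha:", model.alpha_, "nonzeros:", np.sum(model.coef_ != 0))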

  6. Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

    NASA Astrophysics Data System (ADS)

    Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph

    2017-10-01

    In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and performance. It is customary to consider an ℓ1 penalty to enforce sparsity in such scenarios. Sparsity-enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimensionality. For efficiency, they rely on tuning a parameter that trades data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature: Scaled Lasso, Square-root Lasso, and Concomitant Lasso estimation, for instance, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties for the Concomitant Lasso formulation, we propose a modification, coined the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver with a computational cost no more expensive than that of the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm, combined with safe screening rules, to achieve speed efficiency by eliminating irrelevant features early.
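
    A minimal sketch of the joint optimization idea follows (Python/scikit-learn). The paper's solver uses coordinate descent with safe screening; this alternates a Lasso step with a closed-form noise-level update instead, and the floor sigma_min plays the role of the smoothing device:

        import numpy as np
        from sklearn.linear_model import Lasso

        def smoothed_concomitant_lasso(X, y, lam=0.3, sigma_min=1e-3, iters=20):
            """Alternate between beta (a Lasso step) and the noise level sigma.

            Minimizes  ||y - X b||^2 / (2 n sigma) + sigma / 2 + lam ||b||_1
            with sigma clipped below by sigma_min, which avoids degeneracy
            when the residual vanishes.
            """
            n = len(y)
            sigma = max(np.linalg.norm(y) / np.sqrt(n), sigma_min)
            beta = np.zeros(X.shape[1])
            for _ in range(iters):
                # beta-step: a Lasso with effective penalty lam * sigma
                beta = Lasso(alpha=lam * sigma, fit_intercept=False).fit(X, y).coef_
                # sigma-step: closed form, clipped at sigma_min
                sigma = max(np.linalg.norm(y - X @ beta) / np.sqrt(n), sigma_min)
            return beta, sigma

        X = np.random.default_rng(0).standard_normal((50, 200))
        y = X[:, :5] @ np.ones(5) + 0.1 * np.random.default_rng(1).standard_normal(50)
        beta, sigma = smoothed_concomitant_lasso(X, y)
        print("estimated noise level:", sigma)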

  7. Development and Implementation of a Transport Method for the Transport and Reaction Simulation Engine (TaRSE) based on the Godunov-Mixed Finite Element Method

    USGS Publications Warehouse

    James, Andrew I.; Jawitz, James W.; Munoz-Carpena, Rafael

    2009-01-01

    A model to simulate transport of materials in surface water and ground water has been developed to numerically approximate solutions to the advection-dispersion equation. This model, known as the Transport and Reaction Simulation Engine (TaRSE), uses an algorithm that incorporates a time-splitting technique where the advective part of the equation is solved separately from the dispersive part. An explicit finite-volume Godunov method is used to approximate the advective part, while a mixed-finite element technique is used to approximate the dispersive part. The dispersive part uses an implicit discretization, which allows it to run stably with a larger time step than the explicit advective step. The potential exists to develop algorithms that run several advective steps, and then one dispersive step that encompasses the time interval of the advective steps. Because the dispersive step is computationally most expensive, schemes can be implemented that are more computationally efficient than non-time-split algorithms. This technique enables scientists to solve problems with high grid Peclet numbers, such as transport problems with sharp solute fronts, without spurious oscillations in the numerical approximation to the solution and with virtually no artificial diffusion.
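
    The time-splitting idea can be sketched in a few lines of Python: several CFL-limited explicit advection sub-steps followed by one unconditionally stable implicit dispersion step spanning the same interval. First-order upwind is used here for brevity; TaRSE's Godunov and mixed-finite-element discretizations are of higher quality:

        import numpy as np

        # 1-D advection-dispersion, split: explicit upwind advection sub-steps,
        # then one implicit dispersion step over the same interval.
        nx, dx, u, D = 200, 1.0, 0.5, 0.1
        dt_adv = 0.9 * dx / u            # CFL-limited advective step
        n_sub = 4                        # advective sub-steps per dispersive step
        dt_disp = n_sub * dt_adv

        c = np.zeros(nx); c[:20] = 1.0   # sharp front initial condition

        # Implicit (backward Euler) dispersion matrix, assembled once
        r = D * dt_disp / dx**2
        A = (np.eye(nx) * (1 + 2 * r)
             + np.diag([-r] * (nx - 1), 1) + np.diag([-r] * (nx - 1), -1))
        A[0, 0] = A[-1, -1] = 1 + r      # zero-flux boundaries

        for _ in range(50):
            for _ in range(n_sub):       # explicit first-order upwind advection
                c[1:] -= u * dt_adv / dx * (c[1:] - c[:-1])
            c = np.linalg.solve(A, c)    # one unconditionally stable dispersion step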

  8. Numerical simulation of circular cylinders in free-fall

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero-Gomez, Pedro; Richmond, Marshall C.

    2016-02-01

    In this work, we combined the use of (i) overset meshes, (ii) a 6 degree-of-freedom (6-DOF) motion solver, and (iii) an eddy-resolving flow simulation approach to resolve the drag and secondary movement of large-sized cylinders settling in a quiescent fluid at moderate terminal Reynolds numbers (1,500 < Re < 28,000). These three strategies were implemented in a series of computational fluid dynamics (CFD) solutions to describe the fluid-structure interactions and the resulting effects on the cylinder motion. Using the drag coefficient, oscillation period, and maximum angular displacement as baselines, the findings show good agreement between the present CFD results and corresponding data of published laboratory experiments. We discussed the computational expense incurred in using the present modeling approach. We also conducted a preceding simulation of flow past a fixed cylinder at Re = 3,900, which tested the influence of the turbulence approach (time-averaging vs. eddy-resolving) and the meshing strategy (continuous vs. overset) on the numerical results. The outputs indicated a strong effect of the former and an insignificant influence of the latter. The long-term motivation for the present study is the need to understand the motion of an autonomous sensor of cylindrical shape used to measure the hydraulic conditions occurring in operating hydropower turbines.

  9. Full Coupling Between the Atmosphere, Surface, and Subsurface for Integrated Hydrologic Simulation

    NASA Astrophysics Data System (ADS)

    Davison, Jason Hamilton; Hwang, Hyoun-Tae; Sudicky, Edward A.; Mallia, Derek V.; Lin, John C.

    2018-01-01

    An ever increasing community of earth system modelers is incorporating new physical processes into numerical models. This trend is facilitated by advancements in computational resources, improvements in simulation skill, and the desire to build numerical simulators that represent the water cycle with greater fidelity. In this quest to develop a state-of-the-art water cycle model, we coupled HydroGeoSphere (HGS), a 3-D control-volume finite element surface and variably saturated subsurface flow model that includes evapotranspiration processes, to the Weather Research and Forecasting (WRF) Model, a 3-D finite difference nonhydrostatic mesoscale atmospheric model. The two-way coupled model, referred to as HGS-WRF, exchanges the actual evapotranspiration fluxes and soil saturations calculated by HGS to WRF; conversely, the potential evapotranspiration and precipitation fluxes from WRF are passed to HGS. The flexible HGS-WRF coupling method allows for unique meshes used by each model, while maintaining mass and energy conservation between the domains. Furthermore, the HGS-WRF coupling implements a subtime stepping algorithm to minimize computational expense. As a demonstration of HGS-WRF's capabilities, we applied it to the California Basin and found a strong connection between the depth to the groundwater table and the latent heat fluxes across the land surface.

  10. Numerical simulation and validation of helicopter blade-vortex interaction using coupled CFD/CSD and three levels of aerodynamic modeling

    NASA Astrophysics Data System (ADS)

    Amiraux, Mathieu

    Rotorcraft Blade-Vortex Interaction (BVI) remains one of the most challenging flow phenomena to simulate numerically. Over the past decade, the HART-II rotor test and its extensive experimental dataset have been a major database for the validation of CFD codes. Its strong BVI signature, with high levels of intrusive noise and vibrations, makes it a difficult test for computational methods. The main challenge is to accurately capture and preserve the vortices which interact with the rotor, while predicting correct blade deformations and loading. This doctoral dissertation presents the application of a coupled CFD/CSD methodology to the problem of helicopter BVI and compares three levels of fidelity for aerodynamic modeling: a hybrid lifting-line/free-wake (wake coupling) method, with a modified compressible unsteady model; a hybrid URANS/free-wake method; and a URANS-based wake capturing method, using multiple overset meshes to capture the entire flow field. To further increase numerical correlation, three helicopter fuselage models are implemented in the framework. The first is a high resolution 3D GPU panel code; the second is an immersed boundary based method, with 3D elliptic grid adaption; the last one uses a body-fitted, curvilinear fuselage mesh. The main contribution of this work is the implementation and systematic comparison of multiple numerical methods to perform BVI modeling. The trade-offs between solution accuracy and computational cost are highlighted for the different approaches. Various improvements have been made to each code to enhance physical fidelity, while advanced technologies, such as GPU computing, have been employed to increase efficiency. The resulting numerical setup covers all aspects of the simulation, creating a truly multi-fidelity and multi-physics framework. Overall, the wake capturing approach showed the best BVI phasing correlation and good blade deflection predictions, with slightly under-predicted aerodynamic loading magnitudes. However, it proved to be much more expensive than the other two methods. Wake coupling with the RANS solver had very good loading magnitude predictions, and therefore good acoustic intensities, with acceptable computational cost. The lifting-line based technique often had over-predicted aerodynamic levels, due to the degree of empiricism of the model, but its very short run-times, thanks to GPU technology, make it a very attractive approach.

  11. Numerical simulation of immiscible viscous fingering using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Adam, A.; Salinas, P.; Percival, J. R.; Pavlidis, D.; Pain, C.; Muggeridge, A. H.; Jackson, M.

    2015-12-01

    Displacement of one fluid by another in porous media occurs in various settings including hydrocarbon recovery, CO2 storage and water purification. When the invading fluid is of lower viscosity than the resident fluid, the displacement front is subject to a Saffman-Taylor instability and is unstable to transverse perturbations. These instabilities can grow, leading to fingering of the invading fluid. Numerical simulation of viscous fingering is challenging. The physics is controlled by a complex interplay of viscous and diffusive forces and it is necessary to ensure physical diffusion dominates numerical diffusion to obtain converged solutions. This typically requires the use of high mesh resolution and high order numerical methods. This is computationally expensive. We demonstrate here the use of a novel control-volume finite-element (CVFE) method along with dynamic unstructured mesh adaptivity to simulate viscous fingering with higher accuracy and lower computational cost than conventional methods. Our CVFE method employs a discontinuous representation for both pressure and velocity, allowing the use of smaller control volumes (CVs). This yields higher resolution of the saturation field, which is represented CV-wise. Moreover, dynamic mesh adaptivity allows high mesh resolution to be employed where it is required to resolve the fingers and lower resolution elsewhere. We use our results to re-examine the existing criteria that have been proposed to govern the onset of instability. Mesh adaptivity requires the mapping of data from one mesh to another. Conventional methods such as consistent interpolation do not readily generalise to discontinuous fields and are non-conservative. We further contribute a general framework for interpolation of CV fields by Galerkin projection. The method is conservative, higher order and yields improved results, particularly with higher order or discontinuous elements where existing approaches are often excessively diffusive.

  12. A new unconditionally stable and consistent quasi-analytical in-stream water quality solution scheme for CSTR-based water quality simulators

    NASA Astrophysics Data System (ADS)

    Woldegiorgis, Befekadu Taddesse; van Griensven, Ann; Pereira, Fernando; Bauwens, Willy

    2017-06-01

    Most common numerical solutions used in CSTR-based in-stream water quality simulators are susceptible to instabilities and/or solution inconsistencies. Usually, they cope with instability problems by adopting computationally expensive small time steps. However, some simulators use fixed computation time steps and hence do not have the flexibility to do so. This paper presents a novel quasi-analytical solution for CSTR-based water quality simulators of an unsteady system. The robustness of the new method is compared with the commonly used fourth-order Runge-Kutta methods, the Euler method and three versions of the SWAT model (SWAT2012, SWAT-TCEQ, and ESWAT). The performance of each method is tested for different hypothetical experiments. Besides the hypothetical data, a real case study is used for comparison. The growth factors we derived as stability measures for the different methods and the R-factor—considered as a consistency measure—turned out to be very useful for determining the most robust method. The new method outperformed all the numerical methods used in the hypothetical comparisons. The application for the Zenne River (Belgium) shows that the new method provides stable and consistent BOD simulations whereas the SWAT2012 model is shown to be unstable for the standard daily computation time step. The new method unconditionally simulates robust solutions. Therefore, it is a reliable scheme for CSTR-based water quality simulators that use first-order reaction formulations.
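
    The abstract does not spell out the scheme, but the flavor of a quasi-analytical CSTR update can be sketched as follows (Python, hypothetical reactor values): if inflow and rate coefficients are frozen over the step, the mass balance is linear and can be integrated exactly, so the update is stable for any step size:

        import numpy as np

        def cstr_step(C, C_in, Q, V, k, dt):
            """Exact (hence unconditionally stable) update for one CSTR over one
            step, assuming inflow concentration and rate constants are frozen:
                dC/dt = (Q/V) * (C_in - C) - k * C  =  a - b * C
            """
            b = Q / V + k
            a = Q / V * C_in
            C_eq = a / b                            # steady state for frozen inputs
            return C_eq + (C - C_eq) * np.exp(-b * dt)

        # First-order BOD decay through one reactor; a large step is still stable:
        print(cstr_step(C=10.0, C_in=2.0, Q=5.0, V=50.0, k=0.2, dt=10.0))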

  13. CABS-flex 2.0: a web server for fast simulations of flexibility of protein structures.

    PubMed

    Kuriata, Aleksander; Gierut, Aleksandra Maria; Oleniecki, Tymoteusz; Ciemny, Maciej Pawel; Kolinski, Andrzej; Kurcinski, Mateusz; Kmiecik, Sebastian

    2018-05-14

    Classical simulations of protein flexibility remain computationally expensive, especially for large proteins. A few years ago, we developed a fast method for predicting protein structure fluctuations that uses a single protein model as the input. The method has been made available as the CABS-flex web server and applied in numerous studies of protein structure-function relationships. Here, we present a major update of the CABS-flex web server to version 2.0. The new features include: extension of the method to significantly larger and multimeric proteins, customizable distance restraints and simulation parameters, contact maps and a new, enhanced web server interface. CABS-flex 2.0 is freely available at http://biocomp.chem.uw.edu.pl/CABSflex2.

  14. OMPC: an Open-Source MATLAB®-to-Python Compiler

    PubMed Central

    Jurica, Peter; van Leeuwen, Cees

    2008-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com. PMID:19225577

  15. Development of Computational Aeroacoustics Code for Jet Noise and Flow Prediction

    NASA Astrophysics Data System (ADS)

    Keith, Theo G., Jr.; Hixon, Duane R.

    2002-07-01

    Accurate prediction of jet fan and exhaust plume flow and noise generation and propagation is very important in developing advanced aircraft engines that will pass current and future noise regulations. In jet fan flows as well as exhaust plumes, two major sources of noise are present: large-scale, coherent instabilities and small-scale turbulent eddies. In previous work for the NASA Glenn Research Center, three strategies have been explored in an effort to computationally predict the noise radiation from supersonic jet exhaust plumes. In order of increasing computational expense, these are: 1) Linearized Euler equations (LEE). 2) Very Large Eddy Simulations (VLES). 3) Large Eddy Simulations (LES). The first method solves the linearized Euler equations (LEE). These equations are obtained by linearizing about a given mean flow and neglecting viscous effects. In this way, the noise from large-scale instabilities can be found for a given mean flow. The linearized Euler equations are computationally inexpensive, and have produced good noise results for supersonic jets where the large-scale instability noise dominates, as well as for the tone noise from a jet engine blade row. However, these linear equations do not predict the absolute magnitude of the noise; instead, only the relative magnitude is predicted. Also, the predicted disturbances do not modify the mean flow, removing a physical mechanism by which the amplitude of the disturbance may be controlled. Recent research on isolated airfoils indicates that this may not affect the solution greatly at low frequencies. The second method addresses some of the concerns raised by the LEE method. In this approach, called Very Large Eddy Simulation (VLES), the unsteady Reynolds averaged Navier-Stokes equations are solved directly using a high-accuracy computational aeroacoustics numerical scheme. With the addition of a two-equation turbulence model and the use of a relatively coarse grid, the numerical solution is effectively filtered into a directly calculated mean flow with the small-scale turbulence being modeled, and an unsteady large-scale component that is also being directly calculated. In this way, the unsteady disturbances are calculated in a nonlinear way, with a direct effect on the mean flow. This method is not as fast as the LEE approach, but does have many advantages to recommend it; however, like the LEE approach, only the effect of the largest unsteady structures will be captured. An initial calculation was performed on a supersonic jet exhaust plume, with promising results, but the calculation was hampered by the explicit time marching scheme that was employed. This explicit scheme required a very small time step to resolve the nozzle boundary layer, which caused a long run time. Current work is focused on testing a lower-order implicit time marching method to combat this problem.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    I. W. Ginsberg

    Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analyses consist of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
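
    A hedged sketch of the two approaches on a synthetic signature (Python; illustrative scales and wavelet choice, not the study's exact filters):

        import numpy as np
        from scipy.ndimage import gaussian_filter1d
        import pywt

        rng = np.random.default_rng(1)
        spectrum = (np.exp(-0.5 * ((np.arange(512) - 260) / 15.0) ** 2)
                    + 0.01 * rng.standard_normal(512))   # synthetic signature

        # Current method: convolution with first-derivative-of-Gaussian filters
        # at several scales (one filtering pass per scale).
        fingerprint_conv = [gaussian_filter1d(spectrum, sigma=s, order=1)
                            for s in (2, 4, 8, 16)]

        # Wavelet-based alternative: one multiresolution decomposition yields all
        # scales at once, which is where the ~30x cost reduction comes from.
        fingerprint_wav = pywt.wavedec(spectrum, "db2", level=4)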

  17. Metamodels for Computer-Based Engineering Design: Survey and Recommendations

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.

    1997-01-01

    The use of statistical techniques to build approximations of expensive computer analysis codes pervades much of today's engineering design. These statistical approximations, or metamodels, are used to replace the actual expensive computer analyses, facilitating multidisciplinary, multiobjective optimization and concept exploration. In this paper we review several of these techniques, including design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We survey their existing application in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of statistical approximation techniques in given situations and how common pitfalls can be avoided.
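
    As a minimal illustration of the metamodeling workflow (Python/scikit-learn; the toy function below stands in for an expensive analysis code):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def expensive_analysis(x):           # stand-in for a costly simulation code
            return np.sin(3 * x) + 0.5 * x

        X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)   # small design of experiments
        y_train = expensive_analysis(X_train).ravel()

        # Kriging (Gaussian-process) metamodel fitted to the sampled runs; cheap
        # to evaluate thereafter, with a built-in uncertainty estimate.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X_train, y_train)
        x_new = np.array([[1.37]])
        y_hat, y_std = gp.predict(x_new, return_std=True)
        print(f"prediction {y_hat[0]:.3f} +/- {y_std[0]:.3f}")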

  18. Numerical Uncertainty Quantification for Radiation Analysis Tools

    NASA Technical Reports Server (NTRS)

    Anderson, Brooke; Blattnig, Steve; Clowdsley, Martha

    2007-01-01

    Recently a new emphasis has been placed on engineering applications of space radiation analyses, and thus a systematic effort of Verification, Validation and Uncertainty Quantification (VV&UQ) of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. There are two sources of uncertainty in geometric discretization addressed in this paper that need to be quantified in order to understand the total uncertainty in estimating space radiation exposures. One source of uncertainty is in ray tracing: as the number of rays increases, the associated uncertainty decreases, but the computational expense increases. Thus, a cost-benefit analysis optimizing computational time versus uncertainty is needed and is addressed in this paper. The second source of uncertainty results from the interpolation over the dose vs. depth curves that is needed to determine the radiation exposure. The question, then, is how many shield thicknesses are needed to obtain an accurate result. Convergence testing is therefore performed to quantify the uncertainty associated with interpolating over different shield-thickness spatial grids.

  19. 4D Origami by Smart Embroidery.

    PubMed

    Stoychev, Georgi; Razavi, Mir Jalil; Wang, Xianqiao; Ionov, Leonid

    2017-09-01

    There exist many methods for processing of materials: extrusion, injection molding, fibers spinning, 3D printing, to name a few. In most cases, materials with a static, fixed shape are produced. However, numerous advanced applications require customized elements with reconfigurable shape. The few available techniques capable of overcoming this problem are expensive and/or time-consuming. Here, the use of one of the most ancient technologies for structuring, embroidering, is proposed to generate sophisticated patterns of active materials, and, in this way, to achieve complex actuation. By combining experiments and computational modeling, the fundamental rules that can predict the folding behavior of sheets with a variety of stitch-patterns are elucidated. It is demonstrated that theoretical mechanics analysis is only suitable to predict the behavior of the simplest experimental setups, whereas computer modeling gives better predictions for more complex cases. Finally, the applicability of the rules by designing basic origami structures and wrinkling substrates with controlled thermal insulation properties is shown.

  20. Frozen Gaussian approximation based domain decomposition methods for the linear Schrödinger equation beyond the semi-classical regime

    NASA Astrophysics Data System (ADS)

    Lorin, E.; Yang, X.; Antoine, X.

    2016-06-01

    The paper is devoted to developing efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, where the rescaled Planck constant is not small enough for asymptotic methods (e.g. geometric optics) to produce good accuracy, yet the problem is too computationally expensive for direct methods (e.g. finite differences). This belongs to the category of computing middle-frequency wave propagation, where neither asymptotic nor direct methods can be directly used with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation (SSWR) methods, which are seamless integrations of semiclassical approximation into Schwarz Waveform Relaxation methods. Two versions are proposed, based respectively on Herman-Kluk propagation and geometric optics, and we prove convergence and provide numerical evidence of the efficiency and accuracy of these methods.

  1. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhakal, Tilak Raj

    This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. This method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where stress at each material point is calculated using an MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation used to calculate stress at each material point is performed on a GPU using CUDA to accelerate the computation. The numerical properties of the multi-scale method are investigated, and the results from this multi-scale calculation are compared with direct MD simulation results to demonstrate the feasibility of the method. Also, the multi-scale method is applied to a two-dimensional problem of jet formation around a copper notch under a strong impact.

  2. Numerical modeling of separated flows at moderate Reynolds numbers appropriate for turbine blades and unmanned aero vehicles

    NASA Astrophysics Data System (ADS)

    Castiglioni, Giacomo

    Flows over airfoils and blades in rotating machinery, for unmanned and micro-aerial vehicles, wind turbines, and propellers consist of a laminar boundary layer near the leading edge that is often followed by a laminar separation bubble and transition to turbulence further downstream. Typical Reynolds averaged Navier-Stokes turbulence models are inadequate for such flows. Direct numerical simulation is the most reliable, but is also the most computationally expensive alternative. This work assesses the capability of immersed boundary methods and large eddy simulations to reduce the computational requirements for such flows and still provide high quality results. Two-dimensional and three-dimensional simulations of a laminar separation bubble on a NACA-0012 airfoil at Re_c = 5×10^4 and at 5° of incidence have been performed with an immersed boundary code and a commercial code using body fitted grids. Several sub-grid scale models have been implemented in both codes and their performance evaluated. For the two-dimensional simulations with the immersed boundary method the results show good agreement with the direct numerical simulation benchmark data for the pressure coefficient Cp and the friction coefficient Cf, but only when using dissipative numerical schemes. There is evidence that this behavior can be attributed to the ability of dissipative schemes to damp numerical noise coming from the immersed boundary. For the three-dimensional simulations the results show a good prediction of the separation point, but an inaccurate prediction of the reattachment point unless full direct numerical simulation resolution is used. The commercial code shows good agreement with the direct numerical simulation benchmark data in both two and three-dimensional simulations, but the presence of significant, unquantified numerical dissipation prevents a conclusive assessment of the actual prediction capabilities of very coarse large eddy simulations with low order schemes in general cases. Additionally, a two-dimensional sweep of angles of attack from 0° to 5° is performed showing a qualitative prediction of the jump in lift and drag coefficients due to the appearance of the laminar separation bubble. The numerical dissipation inhibits the predictive capabilities of large eddy simulations whenever it is of the same order of magnitude or larger than the sub-grid scale dissipation. The need to estimate the numerical dissipation is most pressing for low-order methods employed by commercial computational fluid dynamics codes. Following the recent work of Schranner et al., the equations and procedure for estimating the numerical dissipation rate and the numerical viscosity in a commercial code are presented. The method allows for the computation of the numerical dissipation rate and numerical viscosity in the physical space for arbitrary sub-domains in a self-consistent way, using only information provided by the code in question. The method is first tested for a three-dimensional Taylor-Green vortex flow in a simple cubic domain and compared with benchmark results obtained using an accurate, incompressible spectral solver. Afterwards the same procedure is applied for the first time to a realistic flow configuration, specifically to the above discussed laminar separation bubble flow over a NACA 0012 airfoil.
The method appears to be quite robust and its application reveals that for the code and the flow in question the numerical dissipation can be significantly larger than the viscous dissipation or the dissipation of the classical Smagorinsky sub-grid scale model, confirming the previously qualitative finding.

  3. Strategy and methodology for rank-ordering Virginia state agencies regarding solar attractiveness and identification of specific project possibilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hewett, R.

    1997-12-31

    This paper describes the strategy and computer processing system that NREL and the Virginia Department of Mines, Minerals and Energy (DMME), the state energy office, are developing for computing solar attractiveness scores for state agencies and the individual facilities or buildings within each agency. In the case of an agency, solar attractiveness is a measure of the agency's having a significant number of facilities for which solar has the potential to be promising. In the case of a facility, solar attractiveness is a measure of its potential for being a good, economically viable candidate for a solar water heating system. Virginia State agencies are charged with reducing fossil energy and electricity use and expense. DMME is responsible for working with them to achieve these goals and for managing the state's energy consumption and cost monitoring program. This is done using the Fast Accounting System for Energy Reporting (FASER) computerized energy accounting and tracking system and database. Agencies report energy use and expenses (by individual facility and energy type) to DMME quarterly. DMME is also responsible for providing technical and other assistance services to agencies and facilities interested in investigating the use of solar. Since Virginia has approximately 80 agencies operating over 8,000 energy-consuming facilities and since DMME's resources are limited, it is interested in being able to determine: (1) on which agencies to focus; (2) specific facilities on which to focus within each high-priority agency; and (3) irrespective of agency, which facilities are the most promising potential candidates for solar. The computer processing system described in this paper computes numerical solar attractiveness scores for the state's agencies and the individual facilities using the energy use and cost data in the FASER system database and the state's and NREL's experience in implementing, testing and evaluating solar water heating systems in commercial and government facilities.

  4. Using quantum chemistry muscle to flex massive systems: How to respond to something perturbing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertoni, Colleen

    Computational chemistry uses the theoretical advances of quantum mechanics and the algorithmic and hardware advances of computer science to give insight into chemical problems. It is currently possible to do highly accurate quantum chemistry calculations, but the most accurate methods are very computationally expensive. Thus it is only feasible to do highly accurate calculations on small molecules, since typically more computationally efficient methods are also less accurate. The overall goal of my dissertation work has been to try to decrease the computational expense of calculations without decreasing the accuracy. In particular, my dissertation work focuses on fragmentation methods, intermolecular interaction methods, analytic gradients, and taking advantage of new hardware.

  5. 47 CFR 36.311 - Network Support/General Support Expenses-Accounts 6110 and 6120 (Class B Telephone Companies...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... (Class A Telephone Companies). 36.311 Section 36.311 Telecommunication FEDERAL COMMUNICATIONS COMMISSION..., office equipment, and general purpose computers. (b) The expenses in these accounts are apportioned among...

  6. 47 CFR 36.311 - Network Support/General Support Expenses-Accounts 6110 and 6120 (Class B Telephone Companies...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... (Class A Telephone Companies). 36.311 Section 36.311 Telecommunication FEDERAL COMMUNICATIONS COMMISSION..., office equipment, and general purpose computers. (b) The expenses in these accounts are apportioned among...

  7. 47 CFR 36.311 - Network Support/General Support Expenses-Accounts 6110 and 6120 (Class B Telephone Companies...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... (Class A Telephone Companies). 36.311 Section 36.311 Telecommunication FEDERAL COMMUNICATIONS COMMISSION..., office equipment, and general purpose computers. (b) The expenses in these accounts are apportioned among...

  8. Efficient simulation and model reformulation of two-dimensional electrochemical thermal behavior of lithium-ion batteries

    DOE PAGES

    Northrop, Paul W. C.; Pathak, Manan; Rife, Derek; ...

    2015-03-09

    Lithium-ion batteries are an important technology to facilitate efficient energy storage and enable a shift from petroleum-based energy to more environmentally benign sources. Such systems can be utilized most efficiently if a good understanding of performance can be achieved for a range of operating conditions. Mathematical models can be useful to predict battery behavior to allow for optimization of design and control. An analytical solution is ideally preferred to solve the equations of a mathematical model, as it eliminates the error that arises when using numerical techniques and is usually computationally cheap. An analytical solution provides insight into the behavior of the system and also explicitly shows the effects of different parameters on the behavior. However, most engineering models, including the majority of battery models, cannot be solved analytically due to non-linearities in the equations and state-dependent transport and kinetic parameters. The numerical method used to solve the system of equations describing a battery operation can have a significant impact on the computational cost of the simulation. In this paper, a reformulation of the porous electrode pseudo-three-dimensional (P3D) model, which significantly reduces the computational cost of lithium-ion battery simulation while maintaining high accuracy, is discussed. This reformulation enables the use of the P3D model in applications that would otherwise be too computationally expensive to justify its use, such as online control, optimization, and parameter estimation. Furthermore, the P3D model has proven to be robust enough to allow for the inclusion of additional physical phenomena as understanding improves. In this study, the reformulated model is used to allow for more complicated physical phenomena to be considered for study, including thermal effects.

  9. Creating Weather System Ensembles Through Synergistic Process Modeling and Machine Learning

    NASA Astrophysics Data System (ADS)

    Chen, B.; Posselt, D. J.; Nguyen, H.; Wu, L.; Su, H.; Braverman, A. J.

    2017-12-01

    Earth's weather and climate are sensitive to a variety of control factors (e.g., initial state, forcing functions, etc.). Characterizing the response of the atmosphere to a change in initial conditions or model forcing is critical for weather forecasting (ensemble prediction) and climate change assessment. Input-response relationships can be quantified by generating an ensemble of multiple (100s to 1000s) realistic realizations of weather and climate states. Atmospheric numerical models generate simulated data through discretized numerical approximation of the partial differential equations (PDEs) governing the underlying physics. However, the computational expense of running high resolution atmospheric state models makes generation of more than a few simulations infeasible. Here, we discuss an experiment wherein we approximate the numerical PDE solver within the Weather Research and Forecasting (WRF) Model using neural networks trained on a subset of model run outputs. Once trained, these neural nets can produce large numbers of realizations of weather states from a small number of deterministic simulations, with speeds that are orders of magnitude faster than the underlying PDE solver. Our neural network architecture is inspired by the governing partial differential equations. These equations are location-invariant, and consist of first and second derivatives. As such, we use a 3x3 lon-lat grid of atmospheric profiles as the predictor in the neural net to provide the network with the information necessary to estimate the first and second derivatives. Results indicate that the neural network algorithm can approximate the PDE outputs with a high degree of accuracy (less than 1% error), and that this error increases as a function of the prediction time lag.
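
    A hedged miniature of the approach (Python/scikit-learn rather than the authors' setup, with a toy 2-D diffusion update standing in for WRF's PDE solver): 3x3 neighbourhoods are the predictors and the centre's next value is the target:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        def pde_step(u, alpha=0.1):
            """Toy 'PDE solver': one explicit 2-D diffusion update (5-point stencil)."""
            un = u.copy()
            un[1:-1, 1:-1] += alpha * (u[2:, 1:-1] + u[:-2, 1:-1]
                                       + u[1:-1, 2:] + u[1:-1, :-2]
                                       - 4.0 * u[1:-1, 1:-1])
            return un

        # Training pairs: each 3x3 neighbourhood predicts its centre's next value,
        # mirroring the 3x3 lon-lat stencil predictor described in the abstract.
        X, y = [], []
        for _ in range(10):
            u = rng.standard_normal((32, 32))
            un = pde_step(u)
            for i in range(1, 31):
                for j in range(1, 31):
                    X.append(u[i-1:i+2, j-1:j+2].ravel())
                    y.append(un[i, j])

        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=400).fit(X, y)
        print("emulator fit R^2:", net.score(X, y))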

  10. Numerical Predictions of Wind Turbine Power and Aerodynamic Loads for the NREL Phase II and IV Combined Experiment Rotor

    NASA Technical Reports Server (NTRS)

    Duque, Earl P. N.; Johnson, Wayne; vanDam, C. P.; Chao, David D.; Cortes, Regina; Yee, Karen

    1999-01-01

    Accurate, reliable and robust numerical predictions of wind turbine rotor power remain a challenge to the wind energy industry. The literature reports various methods that compare predictions to experiments. The methods vary from Blade Element Momentum Theory (BEM) and Vortex Lattice (VL) methods to variants of the Reynolds-averaged Navier-Stokes (RaNS) equations. The BEM and VL methods consistently show discrepancies in predicting rotor power at higher wind speeds, mainly due to inadequacies with inboard stall and stall delay models. The RaNS methodologies show promise in predicting blade stall. However, inaccurate rotor vortex wake convection, boundary layer turbulence modeling and grid resolution have limited their accuracy. In addition, the inherently unsteady stalled flow conditions become computationally expensive for even the best endowed research labs. Although numerical power predictions have been compared to experiment, the availability of good wind turbine data sufficient for code validation remains limited. This paper presents experimental data that has been extracted from the IEA Annex XIV download site for the NREL Combined Experiment phase II and phase IV rotor. In addition, the comparisons will show data that has been further reduced into steady wind and zero yaw conditions suitable for comparisons to "steady wind" rotor power predictions. In summary, the paper will present and discuss the capabilities and limitations of the three numerical methods and make available a database of experimental data suitable to help other numerical methods practitioners validate their own work.
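
    Since BEM is one of the baseline methods named, a minimal sketch of its fixed-point iteration for a single blade annulus may be useful (Python; linear lift curve, no tip-loss or stall-delay corrections, which are precisely the modelling gaps the abstract points to, and all geometric values are invented):

        import numpy as np

        def bem_annulus(r, B, chord, twist, pitch, V0, omega,
                        cl_alpha=2 * np.pi, cd=0.01):
            """Fixed-point BEM iteration for one annulus; returns the axial (a)
            and tangential (ap) induction factors."""
            sigma = B * chord / (2 * np.pi * r)                      # local solidity
            a, ap = 0.0, 0.0
            for _ in range(200):
                phi = np.arctan2((1 - a) * V0, (1 + ap) * omega * r) # inflow angle
                alpha = phi - (twist + pitch)                        # angle of attack
                cl = cl_alpha * alpha
                cn = cl * np.cos(phi) + cd * np.sin(phi)             # normal coeff.
                ct = cl * np.sin(phi) - cd * np.cos(phi)             # tangential coeff.
                a_new = 1.0 / (4 * np.sin(phi) ** 2 / (sigma * cn) + 1)
                ap_new = 1.0 / (4 * np.sin(phi) * np.cos(phi) / (sigma * ct) - 1)
                if abs(a_new - a) < 1e-8 and abs(ap_new - ap) < 1e-8:
                    break
                a, ap = a_new, ap_new
            return a, ap

        print(bem_annulus(r=3.0, B=3, chord=0.45, twist=0.12, pitch=0.0,
                          V0=7.0, omega=7.5))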

  11. Forecasting database for the tsunami warning regional center for the western Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Gailler, A.; Hebert, H.; Loevenbruck, A.; Hernandez, B.

    2010-12-01

    Improvements in the availability of sea-level observations and advances in numerical modeling techniques are increasing the potential for tsunami warnings to be based on numerical model forecasts. Numerical tsunami propagation and inundation models are well developed, but they present a challenge to run in real time, partly due to computational limitations and partly due to a lack of detailed knowledge of the earthquake rupture parameters. Through the establishment of the tsunami warning regional center for the NE Atlantic and western Mediterranean Sea, the CEA is in particular in charge of rapidly providing a map, with uncertainties, showing zones in the main axis of energy at the Mediterranean scale. The strategy is initially based on a pre-computed tsunami scenario database, as the source parameters available a short time after an earthquake occurs are preliminary and may be somewhat inaccurate. Existing numerical models are good enough to provide useful guidance for warnings to be quickly disseminated. When an event occurs, an appropriate variety of offshore tsunami propagation scenarios, obtained by combining pre-computed propagation solutions (single or multiple sources), may be recalled through an automatic interface. This approach provides quick estimates of offshore tsunami propagation, and aids hazard assessment and evacuation decision-making. As numerical model accuracy is inherently limited by errors in bathymetry and topography, and as inundation map calculation is more complex and expensive in terms of computational time, only offshore tsunami propagation modeling is included in the forecasting database, using a single sparse bathymetric computation grid for the numerical modeling. Because of the large variability in the mechanisms of tsunamigenic earthquakes, not all possible magnitudes can be represented in the scenario database. In principle, however, an infinite number of tsunami propagation scenarios can be constructed by linear combinations of a finite number of pre-computed unit scenarios. The whole notion of a pre-computed forecasting database also requires a historical earthquake and tsunami database, as well as an up-to-date seismotectonic database including fault geometry and a zonation based on seismotectonic syntheses of source zones and tsunamigenic faults. Our forecast strategy is thus based on a unit source function methodology, whereby model runs are combined and scaled linearly to produce any composite tsunami propagation solution. Each unit source function is equivalent to a tsunami generated by a Mo = 1.75E+19 N.m earthquake (Mw ~6.8) with a rectangular fault 25 km by 20 km in size and 1 m of slip. The faults of the unit functions are placed adjacent to each other, following the discretization of the main seismogenic faults bounding the western Mediterranean basin. The number of unit functions involved varies with the magnitude of the desired composite solution, and the combined waveheights are multiplied by a given scaling factor to produce the new arbitrary scenario. Some test-case examples are presented (e.g., Boumerdès 2003 [Algeria, Mw 6.8], Djijel 1856 [Algeria, Mw 7.2], Ligure 1887 [Italy, Mw 6.5-6.7]).
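
    The linear-combination strategy reduces to a few lines (Python; the unit-source fields here are random placeholders for the pre-computed database, and the segment names and slip scaling are hypothetical):

        import numpy as np

        # Hypothetical pre-computed database: offshore waveheight fields for unit
        # sources, each a Mw ~6.8 rupture (25 km x 20 km, 1 m slip), coarse grid.
        rng = np.random.default_rng(2)
        unit_db = {f"unit_{i:02d}": rng.random((60, 90)) for i in range(40)}

        def composite_scenario(segment_ids, slip_m):
            """Linear combination of unit-source solutions: waveheights scale with
            slip, so a larger event is built from adjacent segments times a
            scaling factor."""
            out = np.zeros((60, 90))
            for seg in segment_ids:
                out += slip_m * unit_db[seg]
            return out

        # e.g. a rupture spanning three adjacent unit faults with 2 m of slip
        field = composite_scenario(["unit_10", "unit_11", "unit_12"], slip_m=2.0)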

  12. Finite-difference time-domain modelling of through-the-Earth radio signal propagation

    NASA Astrophysics Data System (ADS)

    Ralchenko, M.; Svilans, M.; Samson, C.; Roper, M.

    2015-12-01

    This research seeks to extend the knowledge of how a very low frequency (VLF) through-the-Earth (TTE) radio signal behaves as it propagates underground, by calculating and visualizing the strength of the electric and magnetic fields for an arbitrary geology through numeric modelling. To achieve this objective, a new software tool has been developed using the finite-difference time-domain method. This technique is particularly well suited to visualizing the distribution of electromagnetic fields in an arbitrary geology. The frequency range of TTE radio (400-9000 Hz) and the geometrical scales involved (1 m resolution for domains a few hundred metres in size) require processing a grid composed of millions of cells for thousands of time steps, which is computationally expensive. Graphics processing unit acceleration was used to reduce execution time from days and weeks to minutes and hours. Results from the new modelling tool were compared to three cases for which an analytic solution is known, and there was good agreement between numeric and analytic results. Two more case studies were done featuring complex geologic environments relevant to TTE communications that cannot be solved analytically. Deviations were likely caused by numeric artifacts from the model boundaries; however, in a TTE application in field conditions, the uncertainty in the conductivity of the various geologic formations will greatly outweigh these small numeric errors.
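
    For readers unfamiliar with the method, a minimal 1-D FDTD update loop (a generic textbook sketch in normalized units, not the authors' GPU code) illustrates the leapfrog time stepping that the GPU parallelizes over cells:

      import numpy as np

      # Minimal 1-D FDTD (Yee) sketch in normalized units (c = 1, fields
      # scaled to a common impedance); illustrative only.
      nx, nt = 400, 1000          # grid cells, time steps
      S = 0.5                     # Courant number c*dt/dx < 1 for stability
      ez = np.zeros(nx)           # electric field at integer grid points
      hy = np.zeros(nx - 1)       # magnetic field, staggered half a cell

      for n in range(nt):
          hy += S * np.diff(ez)          # update H from the curl of E
          ez[1:-1] += S * np.diff(hy)    # update E from the curl of H
          ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source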

  13. Numerical investigation on super-cooled large droplet icing of fan rotor blade in jet engine

    NASA Astrophysics Data System (ADS)

    Isobe, Keisuke; Suzuki, Masaya; Yamamoto, Makoto

    2014-10-01

    Icing (or ice accretion) is a phenomenon in which super-cooled water droplets impinge and accrete on a body. It is well known that ice accretion on blades and vanes leads to performance degradation and has caused severe accidents. Although various anti-icing and deicing systems have been developed, such accidents still occur. Therefore, it is important to clarify the phenomenon of ice accretion on an aircraft and in a jet engine. However, flight tests for ice accretion are very expensive, and in the wind tunnel it is difficult to reproduce all climate conditions where ice accretion can occur. Therefore, it is expected that computational fluid dynamics (CFD), which can estimate ice accretion in various climate conditions, will be a useful way to predict and understand the ice accretion phenomenon. On the other hand, although icing caused by super-cooled large droplets (SLD) is very dangerous, a numerical method for it has not been established yet. This is because SLD icing is characterized by splash and bounce phenomena of the droplets, which are very complex in nature. In the present study, we develop an ice accretion code that considers the splash and bounce phenomena to predict SLD icing, and the code is applied to a fan rotor blade. The numerical results with and without the SLD icing model are compared. Through this study, the influence of the SLD icing model is numerically clarified.

  14. A study of metaheuristic algorithms for high dimensional feature selection on microarray data

    NASA Astrophysics Data System (ADS)

    Dankolo, Muhammad Nasiru; Radzi, Nor Haizan Mohamed; Sallehuddin, Roselina; Mustaffa, Noorfa Haszlinna

    2017-11-01

    Microarray systems enable experts to examine gene profiles at the molecular level using machine learning algorithms. They increase the potential for classification and diagnosis of many diseases at the gene expression level. However, numerous difficulties may affect the efficiency of machine learning algorithms, including the vast number of gene features in the original data, many of which may be unrelated to the intended analysis. Therefore, feature selection must be performed during data pre-processing. Many feature selection algorithms have been developed and applied to microarray data, including metaheuristic optimization algorithms. This paper discusses the application of metaheuristic algorithms for feature selection in microarray datasets. This study reveals that the algorithms yield interesting results with limited resources, thereby saving the computational expense of machine learning algorithms.

  15. Implementation of Implicit Adaptive Mesh Refinement in an Unstructured Finite-Volume Flow Solver

    NASA Technical Reports Server (NTRS)

    Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.

    2013-01-01

    This paper explores the implementation of adaptive mesh refinement in an unstructured, finite-volume solver. Unsteady and steady problems are considered. The effect on the recovery of high-order numerics is explored and the results are favorable. Important to this work is the ability to provide a path for efficient, implicit time advancement. A method using a simple refinement sensor based on undivided differences is discussed and applied to a practical problem: a shock-shock interaction on a hypersonic, inviscid double-wedge. Cases are compared to uniform grids without the use of adapted meshes in order to assess error and computational expense. Discussion of difficulties, advances, and future work prepares this method for additional research. The potential for this method in more complicated flows is described.

  16. Principles of thermoacoustic energy harvesting

    NASA Astrophysics Data System (ADS)

    Avent, A. W.; Bowen, C. R.

    2015-11-01

    Thermoacoustics exploits a temperature gradient to produce powerful acoustic pressure waves. The technology has a key role to play in energy harvesting systems. A time-line in the development of thermoacoustics is presented from its earliest recorded example in glass blowing through to the development of the Sondhauss and Rijke tubes to Stirling engines and pulse-tube cryo-cooling. The review sets the current literature in context, identifies key publications and promising areas of research. The fundamental principles of thermoacoustic phenomena are explained; design challenges and factors influencing efficiency are explored. Thermoacoustic processes involve complex multi-physical coupling and transient, highly non-linear relationships which are computationally expensive to model; appropriate numerical modelling techniques and options for analyses are presented. Potential methods of harvesting the energy in the acoustic waves are also examined.

  17. Computation of Thermodynamic Equilibria Pertinent to Nuclear Materials in Multi-Physics Codes

    NASA Astrophysics Data System (ADS)

    Piro, Markus Hans Alexander

    Nuclear energy plays a vital role in supporting electrical needs and fulfilling commitments to reduce greenhouse gas emissions. Research is a continuing necessity to improve the predictive capabilities of fuel behaviour in order to reduce costs and to meet increasingly stringent safety requirements by the regulator. Moreover, a renewed interest in nuclear energy has given rise to a "nuclear renaissance" and the necessity to design the next generation of reactors. In support of this goal, significant research efforts have been dedicated to the advancement of numerical modelling and computational tools in simulating various physical and chemical phenomena associated with nuclear fuel behaviour. This undertaking in effect is collecting the experience and observations of a past generation of nuclear engineers and scientists in a meaningful way for future design purposes. There is an increasing desire to integrate thermodynamic computations directly into multi-physics nuclear fuel performance and safety codes. A new equilibrium thermodynamic solver is being developed with this matter as a primary objective. This solver is intended to provide thermodynamic material properties and boundary conditions for continuum transport calculations. There are several concerns with the use of existing commercial thermodynamic codes: computational performance; limited capabilities in handling large multi-component systems of interest to the nuclear industry; convenient incorporation into other codes with quality assurance considerations; and, licensing entanglements associated with code distribution. The development of this software in this research is aimed at addressing all of these concerns. The approach taken in this work exploits fundamental principles of equilibrium thermodynamics to simplify the numerical optimization equations. In brief, the chemical potentials of all species and phases in the system are constrained by estimates of the chemical potentials of the system components at each iterative step, and the objective is to minimize the residuals of the mass balance equations. Several numerical advantages are achieved through this simplification. In particular, computational expense is reduced and the rate of convergence is enhanced. Furthermore, the software has demonstrated the ability to solve systems involving as many as 118 component elements. An early version of the code has already been integrated into the Advanced Multi-Physics (AMP) code under development by the Oak Ridge National Laboratory, Los Alamos National Laboratory, Idaho National Laboratory and Argonne National Laboratory. Keywords: Engineering, Nuclear -- 0552, Engineering, Material Science -- 0794, Chemistry, Mathematics -- 0405, Computer Science -- 0984
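
    In standard notation, the computation described here amounts to constrained Gibbs energy minimization (a generic statement of the problem, not the solver's exact formulation):

      \min_{n_i \ge 0} \; G = \sum_i n_i \mu_i
      \quad \text{subject to} \quad
      \sum_i a_{ij} n_i = b_j \;\; \forall j,

    where n_i are the species mole numbers, \mu_i their chemical potentials, a_{ij} the number of atoms of element j in species i, and b_j the total amount of element j. At equilibrium \mu_i = \sum_j a_{ij} \Gamma_j, where \Gamma_j are the element (system component) potentials; constraining the species chemical potentials by iterative estimates of \Gamma_j and driving the mass balance residuals to zero is the simplification exploited in this work.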

  18. A New Biogeochemical Computational Framework Integrated within the Community Land Model

    NASA Astrophysics Data System (ADS)

    Fang, Y.; Li, H.; Liu, C.; Huang, M.; Leung, L.

    2012-12-01

    Terrestrial biogeochemical processes, particularly carbon cycle dynamics, have been shown to significantly influence regional and global climate changes. Modeling terrestrial biogeochemical processes within the land component of Earth System Models such as the Community Land model (CLM), however, faces three major challenges: 1) extensive efforts in modifying modeling structures and rewriting computer programs to incorporate biogeochemical processes with increasing complexity, 2) expensive computational cost to solve the governing equations due to numerical stiffness inherited from large variations in the rates of biogeochemical processes, and 3) lack of an efficient framework to systematically evaluate various mathematical representations of biogeochemical processes. To address these challenges, we introduce a new computational framework to incorporate biogeochemical processes into CLM, which consists of a new biogeochemical module with a generic algorithm and reaction database. New and updated biogeochemical processes can be incorporated into CLM without significant code modification. To address the stiffness issue, algorithms and criteria will be developed to identify fast processes, which will be replaced with algebraic equations and decoupled from slow processes. This framework can serve as a generic and user-friendly platform to test out different mechanistic process representations and datasets and gain new insight on the behavior of the terrestrial ecosystems in response to climate change in a systematic way.

  19. Simple analysis of scattering data with the Ornstein-Zernike equation

    NASA Astrophysics Data System (ADS)

    Kats, E. I.; Muratov, A. R.

    2018-01-01

    In this paper we propose and explore a method of analysis of scattering experimental data for uniform liquidlike systems. In our pragmatic approach we do not try to introduce by hand an artificial small parameter to construct a perturbation theory around known results, e.g., for hard spheres or sticky hard spheres (all the more so since, in agreement with the well-known Landau statement, there is no physical small parameter for liquids). Instead, guided by the experimental data, we solve the Ornstein-Zernike equation with a trial (variational) form of the interparticle interaction potential. To find all the needed correlation functions, this variational input is iterated numerically until it satisfies the Ornstein-Zernike equation supplemented by a closure relation. Our method is developed for spherically symmetric scattering objects, and our numeric code is written for such a case. However, it can be extended (at the expense of more involved computations and a larger amount of required experimental input information) to nonspherical particles. What is important for our approach is that it is sufficient to know experimental data in a relatively narrow range of the scattering wave vectors (q) to compute the static structure factor in a much broader range of q. We illustrate with a few model and real experimental examples of x-ray and neutron scattering data how the approach works.
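
    For reference, the Ornstein-Zernike relation and the resulting structure factor read (standard definitions; the hypernetted-chain closure is shown only as one possible example):

      h(r) = c(r) + \rho \int c\left(\left|\mathbf{r}-\mathbf{r}'\right|\right) h(r')\, d\mathbf{r}', \qquad
      S(q) = 1 + \rho\,\hat{h}(q) = \frac{1}{1-\rho\,\hat{c}(q)},

    where h = g - 1 is the total correlation function, c the direct correlation function, \rho the number density, and hats denote Fourier transforms. A closure such as the hypernetted-chain relation, g(r) = \exp[-\beta u(r) + h(r) - c(r)], supplies the second equation linking the trial potential u to h and c, so the pair (Ornstein-Zernike plus closure) can be iterated numerically as described above.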

  20. Numerical study of Wavy Blade Section for Wind Turbines

    NASA Astrophysics Data System (ADS)

    Kobæk, C. M.; Hansen, M. O. L.

    2016-09-01

    The Wavy Blade concept is inspired by the unique flipper of a humpback whale, characterized by the tubercles located at the leading edge. It has been suggested that this shape may be the result of a natural selection process, since under some circumstances this flipper can produce higher lift than a flipper having a smooth leading edge, and thus could be beneficial when catching food. A thorough literature study of the Wavy Blade concept is made, followed by CFD computations of two wavy blade geometries and a comparison with their baseline S809 airfoil at conditions more relevant for modern wind turbines. The findings in the literature for geometries similar to the humpback whale flipper indicate that the aerodynamic performance can be improved at high angles of attack, but sometimes at the expense of a lower lift slope and increased drag before stall. The numerical results for a blade section based on the S809 airfoil are, however, not as promising as some of the findings reported in the literature for the whale flipper at high angles of attack. These first CFD computations, using a thicker airfoil and a higher Reynolds number than the whale flipper, indicate that the results may very well depend on the actual airfoil geometry and perhaps also the Reynolds number, and future studies are necessary in order to illuminate this further.

  1. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence

    PubMed Central

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-01-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible. PMID:25745272
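
    For illustration, a brute-force Monte Carlo estimate of BME averages the likelihood over samples from the prior; the sketch below uses a hypothetical toy model (Gaussian likelihood, standard normal prior on the mean), not the study's hydrological models:

      import numpy as np

      rng = np.random.default_rng(0)

      def log_likelihood(thetas, data):
          """Toy Gaussian likelihood: theta is the unknown mean."""
          resid = data[None, :] - thetas[:, None]
          return -0.5 * np.sum(resid**2, axis=1) - 0.5 * data.size * np.log(2 * np.pi)

      def log_bme_monte_carlo(data, n_samples=100_000):
          """Brute-force BME: average the likelihood over prior samples."""
          thetas = rng.normal(0.0, 1.0, n_samples)   # standard normal prior
          logL = log_likelihood(thetas, data)
          m = logL.max()                             # log-sum-exp for stability
          return m + np.log(np.mean(np.exp(logL - m)))

      data = rng.normal(0.3, 1.0, 20)
      print("log BME:", log_bme_monte_carlo(data))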

  2. Discontinuous Galerkin Method with Numerical Roe Flux for Spherical Shallow Water Equations

    NASA Astrophysics Data System (ADS)

    Yi, T.; Choi, S.; Kang, S.

    2013-12-01

    In developing the dynamic core of a numerical weather prediction model with the discontinuous Galerkin method, the numerical flux at the boundaries of grid elements plays a vital role, since it preserves the local conservation properties and has a significant impact on the accuracy and stability of numerical solutions. For these reasons, we developed a numerical Roe flux based on an approximate Riemann problem for the spherical shallow water equations in Cartesian coordinates [1] to investigate its stability and accuracy. To compare its performance with a counterpart flux, we used the Lax-Friedrichs flux, which has been used in many dynamic cores such as NUMA [1], CAM-DG [2] and MCore [3] because of its simplicity. The Lax-Friedrichs flux is implemented as a flux difference between left and right states plus the maximum characteristic wave speed across the boundaries of elements. It has been shown that the Lax-Friedrichs flux with the finite volume method is more dissipative and less stable than other numerical fluxes such as HLLC, AUSM+ and Roe. The Roe flux implemented in this study is based on the decomposition of the flux difference over the element boundaries, where the nonlinear equations are linearized. It is rarely used in dynamic cores due to its complexity and thus its computational expense. To compare the stability and accuracy of the Roe flux with the Lax-Friedrichs flux, two- and three-dimensional test cases are performed on a plane and a cubed sphere, respectively, with various numbers of elements and polynomial orders. For the two-dimensional case, a Gaussian bell is simulated on the plane with two different numbers of elements at fixed polynomial orders. In the three-dimensional cases on the cubed sphere, we performed the test cases of a zonal flow over an isolated mountain and a Rossby-Haurwitz wave, whose initial conditions are the same as those of Williamson [4]. This study shows that the Roe flux with the discontinuous Galerkin method is less dissipative and has stronger numerical stability than the Lax-Friedrichs flux. References: [1] Giraldo, F.X., Hesthaven, J.S. and Warburton, T., "Nodal High-Order Discontinuous Galerkin Methods for the Spherical Shallow Water Equations," Journal of Computational Physics, Vol. 181, pp. 499-525, 2002. [2] Nair, R.D., Thomas, S.J. and Loft, R.D., "A Discontinuous Galerkin Transport Scheme on the Cubed Sphere," Monthly Weather Review, Vol. 133, pp. 814-828, 2005. [3] Ullrich, P.A., Jablonowski, C. and van Leer, B., "High-Order Finite-Volume Methods for the Shallow-Water Equations on the Sphere," Journal of Computational Physics, Vol. 229, pp. 6104-6134, 2010. [4] Williamson, D.L., Drake, J.B., Hack, J., Jacob, R. and Swarztrauber, P.N., "A Standard Test Set for Numerical Approximations to the Shallow Water Equations in Spherical Geometry," Journal of Computational Physics, Vol. 102, pp. 211-224, 1992.
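
    For concreteness, the two numerical fluxes compared here take the standard forms (textbook definitions consistent with the description above, not quoted from the abstract):

      \hat{F}_{\mathrm{LF}}(u^-,u^+) = \tfrac{1}{2}\big[F(u^-)+F(u^+)\big] - \tfrac{\lambda_{\max}}{2}\,(u^+-u^-), \qquad
      \hat{F}_{\mathrm{Roe}}(u^-,u^+) = \tfrac{1}{2}\big[F(u^-)+F(u^+)\big] - \tfrac{1}{2}\,\big|\tilde{A}\big|\,(u^+-u^-),

    where u^- and u^+ are the states on either side of the element boundary, \lambda_{\max} is the maximum characteristic wave speed across it, and \tilde{A} is the Roe-averaged flux Jacobian; the eigendecomposition required to form |\tilde{A}| is what makes the Roe flux more expensive but less dissipative than Lax-Friedrichs.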

  3. Nanoinformatics knowledge infrastructures: bringing efficient information management to nanomedical research.

    PubMed

    de la Iglesia, D; Cachau, R E; García-Remesal, M; Maojo, V

    2013-11-27

    Nanotechnology represents an area of particular promise and significant opportunity across multiple scientific disciplines. Ongoing nanotechnology research ranges from the characterization of nanoparticles and nanomaterials to the analysis and processing of experimental data seeking correlations between nanoparticles and their functionalities and side effects. Due to their special properties, nanoparticles are suitable for cellular-level diagnostics and therapy, offering numerous applications in medicine, e.g. development of biomedical devices, tissue repair, drug delivery systems and biosensors. In nanomedicine, recent studies are producing large amounts of structural and property data, highlighting the role for computational approaches in information management. While in vitro and in vivo assays are expensive, the cost of computing is falling. Furthermore, improvements in the accuracy of computational methods (e.g. data mining, knowledge discovery, modeling and simulation) have enabled effective tools to automate the extraction, management and storage of these vast data volumes. Since this information is widely distributed, one major issue is how to locate and access data where it resides (which also poses data-sharing limitations). The novel discipline of nanoinformatics addresses the information challenges related to nanotechnology research. In this paper, we summarize the needs and challenges in the field and present an overview of extant initiatives and efforts.

  4. A comparison of artificial compressibility and fractional step methods for incompressible flow computations

    NASA Technical Reports Server (NTRS)

    Chan, Daniel C.; Darian, Armen; Sindir, Munir

    1992-01-01

    We have applied and compared the efficiency and accuracy of two commonly used numerical methods for the solution of the Navier-Stokes equations. The artificial compressibility method augments the continuity equation with a transient pressure term and allows one to solve the modified equations as a coupled system. Due to its implicit nature, one can take a large temporal integration step at the expense of higher memory requirements and larger operation counts per step. Meanwhile, the fractional step method splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. The memory requirement and operation count per time step are low; however, the restriction on the size of the time-marching step is more severe. To explore the strengths and weaknesses of these two methods, we used them to compute two-dimensional driven cavity flow at Reynolds numbers of 100 and 1000. Three grid sizes, 41 x 41, 81 x 81, and 161 x 161, were used. The computations were considered converged after the L2-norm of the change in the dependent variables between two consecutive time steps fell below 10^-5.
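
    The artificial compressibility modification referred to here augments the continuity equation with a pseudo-time pressure derivative (the standard Chorin form, stated generically rather than as this paper's exact formulation):

      \frac{\partial p}{\partial \tau} + \beta\,\nabla\cdot\mathbf{u} = 0,

    so that pressure and velocity advance together as a coupled hyperbolic system in pseudo-time \tau; the artificial compressibility parameter \beta sets the pseudo-acoustic wave speed, and the incompressible solution is recovered as \partial p/\partial \tau \to 0.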

  5. Nanoinformatics knowledge infrastructures: bringing efficient information management to nanomedical research

    NASA Astrophysics Data System (ADS)

    de la Iglesia, D.; Cachau, R. E.; García-Remesal, M.; Maojo, V.

    2013-01-01

    Nanotechnology represents an area of particular promise and significant opportunity across multiple scientific disciplines. Ongoing nanotechnology research ranges from the characterization of nanoparticles and nanomaterials to the analysis and processing of experimental data seeking correlations between nanoparticles and their functionalities and side effects. Due to their special properties, nanoparticles are suitable for cellular-level diagnostics and therapy, offering numerous applications in medicine, e.g. development of biomedical devices, tissue repair, drug delivery systems and biosensors. In nanomedicine, recent studies are producing large amounts of structural and property data, highlighting the role for computational approaches in information management. While in vitro and in vivo assays are expensive, the cost of computing is falling. Furthermore, improvements in the accuracy of computational methods (e.g. data mining, knowledge discovery, modeling and simulation) have enabled effective tools to automate the extraction, management and storage of these vast data volumes. Since this information is widely distributed, one major issue is how to locate and access data where it resides (which also poses data-sharing limitations). The novel discipline of nanoinformatics addresses the information challenges related to nanotechnology research. In this paper, we summarize the needs and challenges in the field and present an overview of extant initiatives and efforts.

  6. A hybrid finite element-transfer matrix model for vibroacoustic systems with flat and homogeneous acoustic treatments.

    PubMed

    Alimonti, Luca; Atalla, Noureddine; Berry, Alain; Sgard, Franck

    2015-02-01

    Practical vibroacoustic systems involve passive acoustic treatments consisting of highly dissipative media such as poroelastic materials. The numerical modeling of such systems at low to mid frequencies typically relies on substructuring methodologies based on finite element models. Namely, the master subsystems (i.e., structural and acoustic domains) are described by a finite set of uncoupled modes, whereas condensation procedures are typically preferred for the acoustic treatments. However, although accurate, such a methodology is computationally expensive when real-life applications are considered. A potential reduction of the computational burden could be obtained by approximating the effect of the acoustic treatment on the master subsystems without introducing physical degrees of freedom. To do that, the treatment has to be assumed homogeneous, flat, and of infinite lateral extent. Under these hypotheses, simple analytical tools like the transfer matrix method can be employed. In this paper, a hybrid finite element-transfer matrix methodology is proposed. The impact of the limiting assumptions inherent in the analytical framework is assessed for the case of plate-cavity systems involving flat and homogeneous acoustic treatments. The results prove that the hybrid model can capture the qualitative behavior of the vibroacoustic system while reducing the computational effort.

  7. Novel Method for Incorporating Model Uncertainties into Gravitational Wave Parameter Estimates

    NASA Astrophysics Data System (ADS)

    Moore, Christopher J.; Gair, Jonathan R.

    2014-12-01

    Posterior distributions on parameters computed from experimental data using Bayesian techniques are only as accurate as the models used to construct them. In many applications, these models are incomplete, which both reduces the prospects of detection and leads to a systematic error in the parameter estimates. In the analysis of data from gravitational wave detectors, for example, accurate waveform templates can be computed using numerical methods, but the prohibitive cost of these simulations means this can only be done for a small handful of parameters. In this Letter, a novel method to fold model uncertainties into data analysis is proposed; the waveform uncertainty is analytically marginalized over, using a prior distribution constructed by applying Gaussian process regression to interpolate the waveform difference from a small training set of accurate templates. The method is well motivated, easy to implement, and no more computationally expensive than standard techniques. The new method is shown to perform extremely well when applied to a toy problem. While we use the application to gravitational wave data analysis to motivate and illustrate the technique, it can be applied in any context where model uncertainties exist.
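
    A minimal sketch of the interpolation step (toy one-dimensional parameter and hypothetical training data; scikit-learn's Gaussian process regressor stands in for the paper's implementation):

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      # Hypothetical training set: a few accurate (expensive) waveform
      # differences delta(theta) at training parameters theta.
      theta_train = np.linspace(0.0, 1.0, 8)[:, None]   # toy 1-D parameter
      delta_train = np.sin(3.0 * theta_train).ravel()   # placeholder differences

      gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-8)
      gpr.fit(theta_train, delta_train)

      # Interpolated waveform difference and its uncertainty at a new
      # parameter point; the predictive distribution plays the role of the
      # prior that is marginalized over in the parameter estimation.
      theta_new = np.array([[0.37]])
      mean, std = gpr.predict(theta_new, return_std=True)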

  8. 76 FR 9349 - Jim Woodruff Project

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-17

    ... month. Southeastern would compute its purchased power obligation for each delivery point monthly... rates to include a pass-through of purchased power expenses. The capacity and energy charges to preference customers can be reduced because purchased power expenses will be recovered in a separate, pass...

  9. Exact numerical calculation of fixation probability and time on graphs.

    PubMed

    Hindersin, Laura; Möller, Marius; Traulsen, Arne; Bauer, Benedikt

    2016-12-01

    The Moran process on graphs is a popular model to study the dynamics of evolution in a spatially structured population. Exact analytical solutions for the fixation probability and time of a new mutant have been found for only a few classes of graphs so far. Simulations are time-expensive and many realizations are necessary, as the variance of the fixation times is high. We present an algorithm that numerically computes these quantities for arbitrary small graphs by an approach based on the transition matrix. The advantage over simulations is that the calculation has to be executed only once. Building the transition matrix is automated by our algorithm. This enables a fast and interactive study of different graph structures and their effect on fixation probability and time. We provide a fast implementation in C with this note (Hindersin et al., 2016). Our code is very flexible, as it can handle two different update mechanisms (Birth-death or death-Birth), as well as arbitrary directed or undirected graphs. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
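
    To illustrate the transition-matrix idea in its simplest setting (the well-mixed population, i.e., a complete graph; a hedged sketch, not the authors' C implementation), the fixation probability follows from solving one linear system over the states i = number of mutants:

      import numpy as np

      def moran_fixation_probability(N, r):
          """Fixation probability of a single mutant of fitness r in a
          well-mixed Moran process of size N, via the transition matrix."""
          i = np.arange(1, N)                                  # mutants, 1..N-1
          T_plus = (r * i / (r * i + N - i)) * ((N - i) / N)   # i -> i+1
          T_minus = ((N - i) / (r * i + N - i)) * (i / N)      # i -> i-1
          # First-step analysis: phi_i = T+ phi_{i+1} + T- phi_{i-1}
          #                            + (1 - T+ - T-) phi_i,
          # with absorbing boundaries phi_0 = 0 and phi_N = 1.
          A = np.zeros((N - 1, N - 1))
          b = np.zeros(N - 1)
          for k in range(N - 1):
              A[k, k] = T_plus[k] + T_minus[k]
              if k > 0:
                  A[k, k - 1] = -T_minus[k]
              if k < N - 2:
                  A[k, k + 1] = -T_plus[k]
          b[-1] = T_plus[-1]               # coupling to the absorbing state N
          return np.linalg.solve(A, b)[0]  # start from a single mutant

      # Agrees with the closed form (1 - 1/r) / (1 - r**-N) for this
      # well-mixed case, e.g. N = 10, r = 1.5.
      print(moran_fixation_probability(10, 1.5))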

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Haitao, E-mail: liaoht@cae.ac.cn

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.

  11. Efficient sensitivity analysis method for chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao

    2016-05-01

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.

  12. Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds

    NASA Astrophysics Data System (ADS)

    Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail

    2011-01-01

    Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling multispecies processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds on linear correlation coefficients are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are populated here using a "cSigma" parameterization that we introduce based on the aforementioned bounds on correlations. The method has three advantages: (1) the computational expense is tolerable; (2) the correlations are, by construction, guaranteed to be consistent with each other; and (3) the methodology is fairly general and hence may be applicable to other problems. The method is tested noninteractively using simulations of three Arctic mixed-phase cloud cases from two field experiments: the Indirect and Semi-Direct Aerosol Campaign and the Mixed-Phase Arctic Cloud Experiment. Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
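
    To make the construction concrete, a minimal sketch of the Pinheiro and Bates (1996) spherical parameterization (generic form only; the paper's cSigma rule for choosing the angles is not reproduced here):

      import numpy as np

      def correlation_from_angles(theta):
          """Build a valid n x n correlation matrix from n(n-1)/2 angles in
          (0, pi) via the spherical parameterization of its Cholesky factor."""
          m = len(theta)
          n = int((1 + np.sqrt(1 + 8 * m)) / 2)   # infer n from angle count
          L = np.zeros((n, n))
          L[0, 0] = 1.0
          t = iter(theta)
          for i in range(1, n):
              prod = 1.0
              for j in range(i):
                  ang = next(t)
                  L[i, j] = np.cos(ang) * prod
                  prod *= np.sin(ang)
              L[i, i] = prod        # each row has unit norm by construction
          # Symmetric, positive semidefinite, unit diagonal: consistent
          # correlations are guaranteed for any admissible angles.
          return L @ L.T

      # Example: three hydrometeor species, hence three angles.
      C = correlation_from_angles([0.4 * np.pi, 0.3 * np.pi, 0.6 * np.pi])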

  13. OMPC: an Open-Source MATLAB-to-Python Compiler.

    PubMed

    Jurica, Peter; van Leeuwen, Cees

    2009-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose the Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com.

  14. SOP: parallel surrogate global optimization with Pareto center selection for computationally expensive single objective problems

    DOE PAGES

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    2016-02-02

    This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are made tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.

  15. A second-order accurate finite volume scheme with the discrete maximum principle for solving Richards’ equation on unstructured meshes

    DOE PAGES

    Svyatsky, Daniil; Lipnikov, Konstantin

    2017-03-18

    Richards' equation describes steady-state or transient flow in a variably saturated medium. For a medium having multiple layers of soils that are not aligned with the coordinate axes, a mesh fitted to these layers is no longer orthogonal and the classical two-point flux approximation finite volume scheme is no longer accurate. Here, we propose new second-order accurate nonlinear finite volume (NFV) schemes for the head and pressure formulations of Richards' equation. We prove that the discrete maximum principles hold for both formulations at steady state, which mimics similar properties of the continuum solution. The second-order accuracy is achieved using high-order upwind algorithms for the relative permeability. Numerical simulations of water infiltration into a dry soil show a significant advantage of the second-order NFV schemes over the first-order NFV schemes, even on coarse meshes. Since explicit calculation of the Jacobian matrix becomes prohibitively expensive for high-order schemes due to built-in reconstruction and slope-limiting algorithms, we study numerically the preconditioning strategy introduced recently in Lipnikov et al. (2016) that uses a stable approximation of the continuum Jacobian. Lastly, numerical simulations show that the new preconditioner reduces computational cost up to 2-3 times in comparison with conventional preconditioners.
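
    For reference, the head form of Richards' equation being discretized is (standard notation, not quoted from the paper):

      \frac{\partial \theta(\psi)}{\partial t} = \nabla \cdot \left[ K(\psi)\, \nabla (\psi + z) \right],

    where \psi is the pressure head, \theta the volumetric water content, K the hydraulic conductivity weighted by the relative permeability, and z the vertical coordinate; the strong nonlinearity of K(\psi) is what the high-order upwind algorithms for the relative permeability must treat.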

  16. A second-order accurate finite volume scheme with the discrete maximum principle for solving Richards’ equation on unstructured meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svyatsky, Daniil; Lipnikov, Konstantin

    Richards' equation describes steady-state or transient flow in a variably saturated medium. For a medium having multiple layers of soils that are not aligned with the coordinate axes, a mesh fitted to these layers is no longer orthogonal and the classical two-point flux approximation finite volume scheme is no longer accurate. Here, we propose new second-order accurate nonlinear finite volume (NFV) schemes for the head and pressure formulations of Richards' equation. We prove that the discrete maximum principles hold for both formulations at steady state, which mimics similar properties of the continuum solution. The second-order accuracy is achieved using high-order upwind algorithms for the relative permeability. Numerical simulations of water infiltration into a dry soil show a significant advantage of the second-order NFV schemes over the first-order NFV schemes, even on coarse meshes. Since explicit calculation of the Jacobian matrix becomes prohibitively expensive for high-order schemes due to built-in reconstruction and slope-limiting algorithms, we study numerically the preconditioning strategy introduced recently in Lipnikov et al. (2016) that uses a stable approximation of the continuum Jacobian. Lastly, numerical simulations show that the new preconditioner reduces computational cost up to 2-3 times in comparison with conventional preconditioners.

  17. Economic incentives for oak woodland preservation and conservation

    Treesearch

    Rosi Dagit; Cy Carlberg; Christy Cuba; Thomas Scott

    2015-01-01

    Numerous ordinances and laws recognize the value of oak trees and woodlands, and dictate serious and expensive consequences for removing or harming them. Unfortunately, the methods used to calculate these values are equally numerous and often inconsistent. More important, these ordinances typically lack economic incentives to avoid impacts to oak woodland values...

  18. A Case against Computer Symbolic Manipulation in School Mathematics Today.

    ERIC Educational Resources Information Center

    Waits, Bert K.; Demana, Franklin

    1992-01-01

    Presented are two reasons discouraging the use of computer symbolic manipulation systems in school mathematics at present: the cost of computer laboratories or expensive pocket computers, and the impracticality of exact solution representations. Although development with this technology in mathematics education advances, graphing calculators are recommended to…

  19. Application of high-performance computing to numerical simulation of human movement

    NASA Technical Reports Server (NTRS)

    Anderson, F. C.; Ziegler, J. M.; Pandy, M. G.; Whalen, R. T.

    1995-01-01

    We have examined the feasibility of using massively-parallel and vector-processing supercomputers to solve large-scale optimization problems for human movement. Specifically, we compared the computational expense of determining the optimal controls for the single support phase of gait using a conventional serial machine (SGI Iris 4D25), a MIMD parallel machine (Intel iPSC/860), and a parallel-vector-processing machine (Cray Y-MP 8/864). With the human body modeled as a 14 degree-of-freedom linkage actuated by 46 musculotendinous units, computation of the optimal controls for gait could take up to 3 months of CPU time on the Iris. Both the Cray and the Intel are able to reduce this time to practical levels. The optimal solution for gait can be found with about 77 hours of CPU on the Cray and with about 88 hours of CPU on the Intel. Although the overall speeds of the Cray and the Intel were found to be similar, the unique capabilities of each machine are better suited to different portions of the computational algorithm used. The Intel was best suited to computing the derivatives of the performance criterion and the constraints whereas the Cray was best suited to parameter optimization of the controls. These results suggest that the ideal computer architecture for solving very large-scale optimal control problems is a hybrid system in which a vector-processing machine is integrated into the communication network of a MIMD parallel machine.

  20. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, D.; Alfonsi, A.; Talbot, P.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem being addressed, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but run in a much shorter time (microseconds instead of hours/days).

  1. Inverse Regional Modeling with Adjoint-Free Technique

    NASA Astrophysics Data System (ADS)

    Yaremchuk, M.; Martin, P.; Panteleev, G.; Beattie, C.

    2016-02-01

    The ongoing parallelization trend in computer technologies facilitates the use of ensemble methods in geophysical data assimilation. Of particular interest are ensemble techniques that do not require the development of tangent linear numerical models and their adjoints for optimization. These "adjoint-free" methods minimize the cost function within a sequence of subspaces spanned by carefully chosen sets of perturbations of the control variables. In this presentation, an adjoint-free variational technique (a4dVar) is demonstrated in an application estimating initial conditions of two numerical models: the Navy Coastal Ocean Model (NCOM), and the surface wave model (WAM). With the NCOM, the performance of the adjoint and adjoint-free 4dVar data assimilation techniques is compared in application to the hydrographic surveys and velocity observations collected in the Adriatic Sea in 2006. Numerical experiments have shown that a4dVar is capable of providing forecast skill similar to that of conventional 4dVar at comparable computational expense, while being less susceptible to the excitation of ageostrophic modes that are not supported by observations. The adjoint-free technique constrained by the WAM model is tested in a series of data assimilation experiments with synthetic observations in the southern Chukchi Sea. The types of observations considered are directional spectra estimated from point measurements by stationary buoys, significant wave height (SWH) observations by coastal high-frequency radars, and along-track SWH observations by satellite altimeters. The a4dVar forecast skill is shown to be 30-40% better than the skill of the sequential assimilation method based on optimal interpolation which is currently used in operations. Prospects for further development of a4dVar methods in regional applications are discussed.

  2. Base-Calling Algorithm with Vocabulary (BCV) Method for Analyzing Population Sequencing Chromatograms

    PubMed Central

    Fantin, Yuri S.; Neverov, Alexey D.; Favorov, Alexander V.; Alvarez-Figueroa, Maria V.; Braslavskaya, Svetlana I.; Gordukova, Maria A.; Karandashova, Inga V.; Kuleshov, Konstantin V.; Myznikova, Anna I.; Polishchuk, Maya S.; Reshetov, Denis A.; Voiciehovskaya, Yana A.; Mironov, Andrei A.; Chulanov, Vladimir P.

    2013-01-01

    Sanger sequencing is a common method of reading DNA sequences. It is less expensive than high-throughput methods, and it is appropriate for numerous applications including molecular diagnostics. However, sequencing mixtures of similar DNA of pathogens with this method is challenging. This is important because most clinical samples contain such mixtures, rather than pure single strains. The traditional solution is to sequence selected clones of PCR products, a complicated, time-consuming, and expensive procedure. Here, we propose the base-calling with vocabulary (BCV) method that computationally deciphers Sanger chromatograms obtained from mixed DNA samples. The inputs to the BCV algorithm are a chromatogram and a dictionary of sequences that are similar to those we expect to obtain. We apply the base-calling function on a test dataset of chromatograms without ambiguous positions, as well as one with 3–14% sequence degeneracy. Furthermore, we use BCV to assemble a consensus sequence for an HIV genome fragment in a sample containing a mixture of viral DNA variants and to determine the positions of the indels. Finally, we detect drug-resistant Mycobacterium tuberculosis strains carrying frameshift mutations mixed with wild-type bacteria in the pncA gene, and roughly characterize bacterial communities in clinical samples by direct 16S rRNA sequencing. PMID:23382983

  3. Temperature scaling method for Markov chains.

    PubMed

    Crosby, Lonnie D; Windus, Theresa L

    2009-01-22

    The use of ab initio potentials in Monte Carlo simulations aimed at investigating the nucleation kinetics of water clusters is complicated by the computational expense of the potential energy determinations. Furthermore, the common desire to investigate the temperature dependence of kinetic properties leads to an urgent need to reduce the expense of performing simulations at many different temperatures. A method is detailed that allows a Markov chain (obtained via Monte Carlo) at one temperature to be scaled to other temperatures of interest without the need to perform additional large simulations. This Markov chain temperature-scaling (TeS) can be generally applied to simulations geared for numerous applications. This paper shows the quality of results which can be obtained by TeS and the possible quantities which may be extracted from scaled Markov chains. Results are obtained for a 1-D analytical potential for which the exact solutions are known. Also, this method is applied to water clusters consisting of between 2 and 5 monomers, using Dynamical Nucleation Theory to determine the evaporation rate constant for monomer loss. Although ab initio potentials are not utilized in this paper, the benefit of this method is made apparent by using the Dang-Chang polarizable classical potential for water to obtain statistical properties at various temperatures.
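
    The underlying idea is closely related to standard Boltzmann reweighting, sketched below for estimating an average at a new temperature from a chain sampled at another (a generic sketch assuming sufficient energy overlap between the two temperatures; the paper's TeS details are not reproduced):

      import numpy as np

      def reweighted_average(energies, observable, T1, T2, kB=1.0):
          """Estimate <observable> at T2 from a Markov chain sampled at T1
          by reweighting each configuration's Boltzmann factor."""
          beta1, beta2 = 1.0 / (kB * T1), 1.0 / (kB * T2)
          logw = -(beta2 - beta1) * energies
          logw -= logw.max()               # stabilize the exponentials
          w = np.exp(logw)
          return np.sum(w * observable) / np.sum(w)

      # Toy example: energies and an observable from a chain sampled at T1.
      rng = np.random.default_rng(1)
      E = rng.normal(10.0, 1.0, 50000)     # placeholder sampled energies
      A = E**2                             # observable of interest
      print(reweighted_average(E, A, T1=1.0, T2=0.9))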

  4. The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency

    ERIC Educational Resources Information Center

    Oder, Karl; Pittman, Stephanie

    2015-01-01

    Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…

  5. Numerical Modelling of Tsunami Generated by Deformable Submarine Slides: Parameterisation of Slide Dynamics for Coupling to Tsunami Propagation Model

    NASA Astrophysics Data System (ADS)

    Smith, R. C.; Collins, G. S.; Hill, J.; Piggott, M. D.; Mouradian, S. L.

    2015-12-01

    Numerical modelling informs risk assessment of tsunami generated by submarine slides; however, for large-scale slides modelling can be complex and computationally challenging. Many previous numerical studies have approximated slides as rigid blocks that moved according to prescribed motion. However, wave characteristics are strongly dependent on the motion of the slide and previous work has recommended that more accurate representation of slide dynamics is needed. We have used the finite-element, adaptive-mesh CFD model Fluidity, to perform multi-material simulations of deformable submarine slide-generated waves at real world scales for a 2D scenario in the Gulf of Mexico. Our high-resolution approach represents slide dynamics with good accuracy, compared to other numerical simulations of this scenario, but precludes tracking of wave propagation over large distances. To enable efficient modelling of further propagation of the waves, we investigate an approach to extract information about the slide evolution from our multi-material simulations in order to drive a single-layer wave propagation model, also using Fluidity, which is much less computationally expensive. The extracted submarine slide geometry and position as a function of time are parameterised using simple polynomial functions. The polynomial functions are used to inform a prescribed velocity boundary condition in a single-layer simulation, mimicking the effect the submarine slide motion has on the water column. The approach is verified by successful comparison of wave generation in the single-layer model with that recorded in the multi-material, multi-layer simulations. We then extend this approach to 3D for further validation of this methodology (using the Gulf of Mexico scenario proposed by Horrillo et al., 2013) and to consider the effect of lateral spreading. This methodology is then used to simulate a series of hypothetical submarine slide events in the Arctic Ocean (based on evidence of historic slides) and examine the hazard posed to the UK coast.
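
    As a sketch of the parameterization step (hypothetical data and polynomial degree; in the study the fits are made to slide geometry and position extracted from the multi-material Fluidity runs):

      import numpy as np

      # Hypothetical slide front position x(t) extracted from the
      # multi-material simulation, fitted with a low-order polynomial.
      t = np.linspace(0.0, 120.0, 25)            # time (s)
      x = 0.5 * 0.02 * t**2 + 1.2 * t            # placeholder positions (m)

      coeffs = np.polyfit(t, x, deg=3)           # position polynomial
      velocity = np.polyder(coeffs)              # its analytic derivative

      # The fitted velocity then drives the prescribed velocity boundary
      # condition in the cheap single-layer wave-propagation run.
      u_slide = np.polyval(velocity, 60.0)       # slide speed at t = 60 s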

  6. A Two-Step Method to Select Major Surge-Producing Extratropical Cyclones from a 10,000-Year Stochastic Catalog

    NASA Astrophysics Data System (ADS)

    Keshtpoor, M.; Carnacina, I.; Yablonsky, R. M.

    2016-12-01

    Extratropical cyclones (ETCs) are the primary driver of storm surge events along the UK and northwest mainland Europe coastlines. In an effort to evaluate the storm surge risk in coastal communities in this region, a stochastic catalog is developed by perturbing the historical storm seeds of European ETCs to account for 10,000 years of possible ETCs. Numerical simulation of the storm surge generated by the full 10,000-year stochastic catalog, however, is computationally expensive and may take several months to complete with available computational resources. A new statistical regression model is developed to select the major surge-generating events from the stochastic ETC catalog. This regression model is based on the maximum storm surge, obtained via numerical simulations using a calibrated version of the Delft3D-FM hydrodynamic model with a relatively coarse mesh, of 1750 historical ETC events that occurred over the past 38 years in Europe. These numerically simulated surge values were regressed against the local sea level pressure and the U and V components of the wind field at the locations of 196 tide gauge stations near the UK and northwest mainland Europe coastal areas. The regression model suggests that storm surge values in the area of interest are highly correlated with the U and V components of wind speed, as well as with sea level pressure. Based on these correlations, the regression model was then used to select surge-generating storms from the 10,000-year stochastic catalog. Results suggest that roughly 105,000 events out of 480,000 stochastic storms are surge-generating and need to be considered for numerical simulation using a hydrodynamic model. The selected stochastic storms were then simulated in Delft3D-FM, and the final refinement of the storm population was performed based on return period analysis of the 1750 historical event simulations at each of the 196 tide gauges, in preparation for Delft3D-FM fine mesh simulations.
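
    The regression step can be sketched as an ordinary least-squares fit of simulated surge to the local forcing variables (hypothetical synthetic arrays stand in for the 1750-event training set):

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(2)
      n = 1750                                  # historical training events
      U = rng.normal(0, 10, n)                  # wind U-component (m/s)
      V = rng.normal(0, 10, n)                  # wind V-component (m/s)
      slp = rng.normal(990, 15, n)              # sea level pressure (hPa)
      surge = (0.03 * U + 0.05 * V - 0.01 * (slp - 1013)
               + rng.normal(0, 0.05, n))        # synthetic surge maxima (m)

      X = np.column_stack([U, V, slp])
      model = LinearRegression().fit(X, surge)  # one model per tide gauge

      # Screen a stochastic storm: keep it for full hydrodynamic simulation
      # only if the predicted surge exceeds a hazard-relevant threshold.
      keep = model.predict([[25.0, 18.0, 975.0]]) > 0.5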

  7. Variable fidelity robust optimization of pulsed laser orbital debris removal under epistemic uncertainty

    NASA Astrophysics Data System (ADS)

    Hou, Liqiang; Cai, Yuanli; Liu, Jin; Hou, Chongyuan

    2016-04-01

    A variable fidelity robust optimization method for pulsed laser orbital debris removal (LODR) under uncertainty is proposed. Dempster-Shafer theory of evidence (DST), which merges interval-based and probabilistic uncertainty modeling, is used in the robust optimization. The robust optimization method optimizes the performance while at the same time maximizing its belief value. A population-based multi-objective optimization (MOO) algorithm based on a steepest-descent-like strategy with proper orthogonal decomposition (POD) is used to search for robust Pareto solutions. Analytical and numerical lifetime predictors are used to evaluate the debris lifetime after the laser pulses. Trust-region-based fidelity management is designed to reduce the computational cost caused by the expensive model. When the solutions fall into the trust region, the analytical model is used to reduce the computational cost. The proposed robust optimization method is first tested on a set of standard problems and then applied to the removal of Iridium 33 with pulsed lasers. It will be shown that the proposed approach can identify the most robust solutions with minimum lifetime under uncertainty.

  8. Crystallographic Lattice Boltzmann Method

    PubMed Central

    Namburi, Manjusha; Krithivasan, Siddharth; Ansumali, Santosh

    2016-01-01

    Current approaches to Direct Numerical Simulation (DNS) are computationally quite expensive for most realistic scientific and engineering applications of fluid dynamics, such as automobiles or atmospheric flows. The Lattice Boltzmann Method (LBM), with its simplified kinetic descriptions, has emerged as an important tool for simulating hydrodynamics. In a heterogeneous computing environment, it is often preferred due to its flexibility and better parallel scaling. However, direct simulation of realistic applications, without the use of turbulence models, remains a distant dream even with highly efficient methods such as LBM. In LBM, a fictitious lattice with suitable isotropy in velocity space is considered to recover Navier-Stokes hydrodynamics in the macroscopic limit. The same lattice is mapped onto a Cartesian grid for spatial discretization of the kinetic equation. In this paper, we present an inverted argument of the LBM, making spatial discretization the central theme. We argue that the optimal spatial discretization for LBM is a Body-Centered Cubic (BCC) arrangement of grid points. We illustrate an order-of-magnitude gain in efficiency for LBM and thus significant progress towards the feasibility of DNS for realistic flows. PMID:27251098
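
    As a rough illustration of the advocated grid layout, the sketch below generates a BCC point set as two interleaved simple-cubic lattices (a corner grid plus a copy shifted by half a cell). Cell size and extent are arbitrary assumptions, not the paper's lattice construction.

    ```python
    # Hedged sketch: BCC grid points as two interleaved simple-cubic lattices.
    import numpy as np

    def bcc_points(n_cells: int, h: float = 1.0) -> np.ndarray:
        """Return BCC lattice points for an n_cells^3 block with cell size h."""
        idx = np.arange(n_cells)
        corners = np.stack(np.meshgrid(idx, idx, idx, indexing="ij"),
                           axis=-1).reshape(-1, 3) * h
        centres = corners + 0.5 * h   # offset copy at the body centres
        return np.vstack([corners, centres])

    pts = bcc_points(8)
    print(pts.shape)  # (1024, 3): two interleaved cubic grids of 512 points each
    ```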

  9. P-CSI v1.0, an accelerated barotropic solver for the high-resolution ocean model component in the Community Earth System Model v2.0

    NASA Astrophysics Data System (ADS)

    Huang, Xiaomeng; Tang, Qiang; Tseng, Yuheng; Hu, Yong; Baker, Allison H.; Bryan, Frank O.; Dennis, John; Fu, Haohuan; Yang, Guangwen

    2016-11-01

    In the Community Earth System Model (CESM), the ocean model is computationally expensive for high-resolution grids and is often the least scalable component for high-resolution production experiments. The major bottleneck is that the barotropic solver scales poorly at high core counts. We design a new barotropic solver to accelerate the high-resolution ocean simulation. The novel solver adopts a Chebyshev-type iterative method to reduce the global communication cost in conjunction with an effective block preconditioner to further reduce the iterations. The algorithm and its computational complexity are theoretically analyzed and compared with other existing methods. We confirm the significant reduction of the global communication time with a competitive convergence rate using a series of idealized tests. Numerical experiments using the CESM 0.1° global ocean model show that the proposed approach results in a factor of 1.7 speed-up over the original method with no loss of accuracy, achieving 10.5 simulated years per wall-clock day on 16 875 cores.
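
    The communication-avoiding ingredient here is the Chebyshev iteration itself: unlike conjugate gradients, it needs no inner products and hence no global reductions. A minimal serial sketch follows (the classical algorithm as given, e.g., in Saad's "Iterative Methods for Sparse Linear Systems"), with a Jacobi preconditioner standing in for the paper's block preconditioner and spectral bounds computed directly rather than estimated; all of that is assumption, not the P-CSI implementation.

    ```python
    import numpy as np

    def chebyshev_solve(A, b, M_inv, lam_min, lam_max, n_iter=200):
        """Preconditioned Chebyshev iteration: no inner products, so no global
        reductions. Requires spectral bounds of the preconditioned operator."""
        d = 0.5 * (lam_max + lam_min)   # centre of the eigenvalue interval
        c = 0.5 * (lam_max - lam_min)   # half-width of the interval
        x = np.zeros_like(b)
        r = b - A @ x
        p = np.zeros_like(b)
        alpha = 0.0
        for k in range(n_iter):
            z = M_inv(r)                # preconditioner application
            if k == 0:
                p, alpha = z.copy(), 1.0 / d
            elif k == 1:
                beta = 0.5 * (c * alpha) ** 2
                alpha = 1.0 / (d - beta / alpha)
                p = z + beta * p
            else:
                beta = (0.5 * c * alpha) ** 2
                alpha = 1.0 / (d - beta / alpha)
                p = z + beta * p
            x += alpha * p
            r -= alpha * (A @ p)
        return x

    # Tiny SPD demo: 1-D Laplacian with a Jacobi preconditioner (M = 2I).
    rng = np.random.default_rng(0)
    n = 50
    A = (2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1))
    b = rng.normal(size=n)
    lam = np.linalg.eigvalsh(A) / 2.0   # spectrum of M^-1 A, exact here
    x = chebyshev_solve(A, b, lambda r: r / 2.0, lam[0], lam[-1])
    print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
    ```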

  10. A fast algorithm for multiscale electromagnetic problems using interpolative decomposition and multilevel fast multipole algorithm

    NASA Astrophysics Data System (ADS)

    Pan, Xiao-Min; Wei, Jian-Gong; Peng, Zhen; Sheng, Xin-Qing

    2012-02-01

    The interpolative decomposition (ID) is combined with the multilevel fast multipole algorithm (MLFMA), denoted by ID-MLFMA, to handle multiscale problems. The ID-MLFMA first generates ID levels by recursively dividing the boxes at the finest MLFMA level into smaller boxes. It is specifically shown that near-field interactions with respect to the MLFMA, in the form of the matrix vector multiplication (MVM), are efficiently approximated at the ID levels. Meanwhile, computations on far-field interactions at the MLFMA levels remain unchanged. Only a small portion of matrix entries is required to approximate coupling among well-separated boxes at the ID levels, and these submatrices can be filled without computing the complete original coupling matrix. It follows that the matrix filling in the ID-MLFMA becomes much less expensive. The memory consumed is thus greatly reduced and the MVM is accelerated as well. Several factors that may influence the accuracy, efficiency and reliability of the proposed ID-MLFMA are investigated by numerical experiments. Complex targets are calculated to demonstrate the capability of the ID-MLFMA algorithm.
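
    A toy version of the compression idea follows, using SciPy's interpolative-decomposition routines on the coupling matrix between two well-separated point clusters. The 1/r kernel, cluster geometry and tolerance are illustrative assumptions, not the ID-MLFMA code.

    ```python
    # Hedged sketch: low-rank ID compression of a well-separated coupling block.
    import numpy as np
    import scipy.linalg.interpolative as sli

    rng = np.random.default_rng(1)

    # Two well-separated point clusters and a smooth 1/r coupling kernel.
    src = rng.uniform(0.0, 1.0, size=(300, 3))
    obs = rng.uniform(0.0, 1.0, size=(300, 3)) + np.array([10.0, 0.0, 0.0])
    A = 1.0 / np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=-1)

    # Rank-revealing ID to tolerance: only k skeleton columns of A are needed.
    k, idx, proj = sli.interp_decomp(A, 1e-8)
    A_approx = sli.reconstruct_matrix_from_id(A[:, idx[:k]], idx, proj)

    print(f"rank {k} of {A.shape[1]} columns, "
          f"rel. error {np.linalg.norm(A - A_approx) / np.linalg.norm(A):.2e}")
    ```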

  11. Simplified Phased-Mission System Analysis for Systems with Independent Component Repairs

    NASA Technical Reports Server (NTRS)

    Somani, Arun K.

    1996-01-01

    Accurate analysis of system reliability requires accounting for all major variations in a system's operation. Most reliability analyses assume that the system configuration, success criteria, and component behavior remain the same; however, multiple phases are natural in practice. We present a new computationally efficient technique for the analysis of phased-mission systems where the operational states of a system can be described by combinations of component states (such as fault trees or assertions). Moreover, individual components may be repaired, if failed, as part of system operation, but repairs are independent of the system state. For repairable systems, Markov analysis techniques are used, but they suffer from state-space explosion, which limits the size of system that can be analyzed and makes the computation expensive. We avoid the state-space explosion. Phase algebra is used to account for the effects of variable configurations, repairs, and success criteria from phase to phase. Our technique yields exact (as opposed to approximate) results. We demonstrate our technique by means of several examples and present numerical results to show the effects of phases and repairs on system reliability/availability.

  12. Forecasting runout of rock and debris avalanches

    USGS Publications Warehouse

    Iverson, Richard M.; Evans, S.G.; Mugnozza, G.S.; Strom, A.; Hermanns, R.L.

    2006-01-01

    Physically based mathematical models and statistically based empirical equations each may provide useful means of forecasting runout of rock and debris avalanches. This paper compares the foundations, strengths, and limitations of a physically based model and a statistically based forecasting method, both of which were developed to predict runout across three-dimensional topography. The chief advantage of the physically based model results from its ties to physical conservation laws and well-tested axioms of soil and rock mechanics, such as the Coulomb friction rule and effective-stress principle. The output of this model provides detailed information about the dynamics of avalanche runout, at the expense of high demands for accurate input data, numerical computation, and experimental testing. In comparison, the statistical method requires relatively modest computation and no input data except identification of prospective avalanche source areas and a range of postulated avalanche volumes. Like the physically based model, the statistical method yields maps of predicted runout, but it provides no information on runout dynamics. Although the two methods differ significantly in their structure and objectives, insights gained from one method can aid refinement of the other.

  13. Numerical solutions of the complete Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Hassan, H. A.

    1993-01-01

    The objective of this study is to compare the use of assumed pdf (probability density function) approaches for modeling supersonic turbulent reacting flowfields with the more elaborate approach where the pdf evolution equation is solved. Assumed pdf approaches for averaging the chemical source terms require modest increases in CPU time, typically of the order of 20 percent above treating the source terms as 'laminar.' However, it is difficult to assume a form for these pdfs a priori that correctly mimics the behavior of the actual pdf governing the flow. Solving the evolution equation for the pdf is a theoretically sound approach, but because of the large dimensionality of this function, its solution requires a Monte Carlo method which is computationally expensive and slow to converge. Preliminary results show both pdf approaches to yield similar solutions for the mean flow variables.

  14. Partially-Averaged Navier Stokes Model for Turbulence: Implementation and Validation

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Abdol-Hamid, Khaled S.

    2005-01-01

    Partially-averaged Navier-Stokes (PANS) is a suite of turbulence closure models of various modeled-to-resolved scale ratios ranging from Reynolds-averaged Navier-Stokes (RANS) to Navier-Stokes (direct numerical simulation). The objective of PANS, like hybrid models, is to resolve large-scale structures at reasonable computational expense. The modeled-to-resolved scale ratio or the level of physical resolution in PANS is quantified by two parameters: the unresolved-to-total ratios of kinetic energy (f_k) and dissipation (f_epsilon). The unresolved-scale stress is modeled with the Boussinesq approximation, and modeled transport equations are solved for the unresolved kinetic energy and dissipation. In this paper, we first present a brief discussion of the PANS philosophy, followed by a description of the implementation procedure, and finally perform a preliminary evaluation in benchmark problems.

  15. 3D-printed adaptive acoustic lens as a disruptive technology for transcranial ultrasound therapy using single-element transducers.

    PubMed

    Maimbourg, Guillaume; Houdouin, Alexandre; Deffieux, Thomas; Tanter, Mickael; Aubry, Jean-François

    2018-01-16

    The development of multi-element arrays for better control of the shape of ultrasonic beams has opened the way for focusing through highly aberrating media, such as the human skull. As a result, the use of brain therapy with transcranial-focused ultrasound has rapidly grown. Although effective, such technology is expensive. We propose a disruptive, low-cost approach that consists of focusing a 1 MHz ultrasound beam through a human skull with a single-element transducer coupled with a tailored silicone acoustic lens cast in a 3D-printed mold and designed using computed tomography-based numerical acoustic simulation. We demonstrate on N = 3 human skulls that adding lens-based aberration correction to a single-element transducer increases the deposited energy on the target 10-fold.

  16. 3D-printed adaptive acoustic lens as a disruptive technology for transcranial ultrasound therapy using single-element transducers

    NASA Astrophysics Data System (ADS)

    Maimbourg, Guillaume; Houdouin, Alexandre; Deffieux, Thomas; Tanter, Mickael; Aubry, Jean-François

    2018-01-01

    The development of multi-element arrays for better control of the shape of ultrasonic beams has opened the way for focusing through highly aberrating media, such as the human skull. As a result, the use of brain therapy with transcranial-focused ultrasound has rapidly grown. Although effective, such technology is expensive. We propose a disruptive, low-cost approach that consists of focusing a 1 MHz ultrasound beam through a human skull with a single-element transducer coupled with a tailored silicone acoustic lens cast in a 3D-printed mold and designed using computed tomography-based numerical acoustic simulation. We demonstrate on N = 3 human skulls that adding lens-based aberration correction to a single-element transducer increases the deposited energy on the target 10-fold.

  17. An efficient hybrid method for stochastic reaction-diffusion biochemical systems with delay

    NASA Astrophysics Data System (ADS)

    Sayyidmousavi, Alireza; Ilie, Silvana

    2017-12-01

    Many chemical reactions, such as gene transcription and translation in living cells, need a certain time to finish once they are initiated. Simulating stochastic models of reaction-diffusion systems with delay can be computationally expensive. In the present paper, a novel hybrid algorithm is proposed to accelerate the stochastic simulation of delayed reaction-diffusion systems. The delayed reactions may be of consuming or non-consuming delay type. The algorithm is designed for moderately stiff systems in which the events can be partitioned into slow and fast subsets according to their propensities. The proposed algorithm is applied to three benchmark problems and the results are compared with those of the delayed Inhomogeneous Stochastic Simulation Algorithm. The numerical results show that the new hybrid algorithm achieves considerable speed-up in the run time and very good accuracy.

  18. Computer simulations of space-borne meteorological systems on the CYBER 205

    NASA Technical Reports Server (NTRS)

    Halem, M.

    1984-01-01

    Because of the extreme expense involved in developing and flight-testing meteorological instruments, an extensive series of numerical modeling experiments to simulate the performance of meteorological observing systems was performed on the CYBER 205. The studies compare the relative importance of different global measurements of individual and composite systems of the meteorological variables needed to determine the state of the atmosphere. The assessments are made in terms of the systems' ability to improve 12-hour global forecasts. Each experiment involves the daily assimilation of simulated data obtained from a data set called "nature". These data come from two sources: first, a long two-month general circulation integration with the GLAS 4th Order Forecast Model, and second, global analyses prepared by the National Meteorological Center, NOAA, from the current observing systems twice daily.

  19. On a framework for generating PoD curves assisted by numerical simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subair, S. Mohamed; Agrawal, Shweta; Balasubramaniam, Krishnan

    2015-03-31

    The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.

  20. On a framework for generating PoD curves assisted by numerical simulations

    NASA Astrophysics Data System (ADS)

    Subair, S. Mohamed; Agrawal, Shweta; Balasubramaniam, Krishnan; Rajagopal, Prabhu; Kumar, Anish; Rao, Purnachandra B.; Tamanna, Jayakumar

    2015-03-01

    The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.

  1. 78 FR 50374 - Proposed Information Collection; Comment Request; Information and Communication Technology Survey

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-19

    ... expenses (purchases; and operating leases and rental payments) for four types of information and communication technology equipment and software (computers and peripheral equipment; ICT equipment, excluding computers and peripherals; electromedical and electrotherapeutic apparatus; and computer software, including...

  2. Experimental calibration procedures for rotating Lorentz-force flowmeters

    DOE PAGES

    Hvasta, M. G.; Slighton, N. T.; Kolemen, E.; ...

    2017-07-14

    Rotating Lorentz-force flowmeters are a novel and useful technology with a range of applications in a variety of different industries. However, calibrating these flowmeters can be challenging, time-consuming, and expensive. In this paper, simple calibration procedures for rotating Lorentz-force flowmeters are presented. These procedures eliminate the need for expensive equipment, numerical modeling, redundant flowmeters, and system down-time. Finally, the calibration processes are explained in a step-by-step manner and compared to experimental results.

  3. Experimental calibration procedures for rotating Lorentz-force flowmeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hvasta, M. G.; Slighton, N. T.; Kolemen, E.

    Rotating Lorentz-force flowmeters are a novel and useful technology with a range of applications in a variety of different industries. However, calibrating these flowmeters can be challenging, time-consuming, and expensive. In this paper, simple calibration procedures for rotating Lorentz-force flowmeters are presented. These procedures eliminate the need for expensive equipment, numerical modeling, redundant flowmeters, and system down-time. Finally, the calibration processes are explained in a step-by-step manner and compared to experimental results.

  4. Analysis of the Source Physics Experiment SPE4 Prime Using State-of-the-Art Parallel Numerical Tools.

    NASA Astrophysics Data System (ADS)

    Vorobiev, O.; Ezzedine, S. M.; Antoun, T.; Glenn, L.

    2015-12-01

    This work describes a methodology used for large-scale modeling of wave propagation from underground chemical explosions conducted in fractured granitic rock at the Nevada National Security Site (NNSS). We show that the discrete nature of rock masses, as well as the spatial variability in the fabric of rock properties, is very important for understanding ground motions induced by underground explosions. In order to build a credible conceptual model of the subsurface, we integrated the geological, geomechanical and geophysical characterizations conducted during recent tests at the NNSS, as well as historical data from the characterization conducted during the underground nuclear tests at the NNSS. Because detailed site characterization is limited, expensive and, in some instances, impossible, we have numerically investigated the effects of the characterization gaps on the overall response of the system. We performed several computational studies to identify the key geologic features specific to fractured media, mainly the joints characterized at the NNSS. We have also explored features common to both geological environments, such as saturation and topography, and assessed which characteristics most affect the ground motion in the near field and in the far field. Stochastic representations of these features based on the field characterizations have been implemented in LLNL's Geodyn-L hydrocode. Simulations were used to guide site characterization efforts in order to provide the essential data to the modeling community. We validate our computational results by comparing the measured and computed ground motion at various ranges for the recently executed SPE4 prime experiment. We have also conducted a comparative study between SPE4 prime and the previous experiments SPE1 and SPE3 to assess similarities and differences and draw conclusions for designing SPE5.

  5. Imaging and computational considerations for image computed permeability: Operating envelope of Digital Rock Physics

    NASA Astrophysics Data System (ADS)

    Saxena, Nishank; Hows, Amie; Hofmann, Ronny; Alpak, Faruk O.; Freeman, Justin; Hunter, Sander; Appel, Matthias

    2018-06-01

    This study defines the optimal operating envelope of the Digital Rock technology from the perspective of imaging and numerical simulations of transport properties. Imaging larger volumes of rocks for Digital Rock Physics (DRP) analysis improves the chances of achieving a Representative Elementary Volume (REV) at which flow-based simulations (1) do not vary with change in rock volume, and (2) are insensitive to the choice of boundary conditions. However, this often comes at the expense of image resolution. This trade-off exists due to the finiteness of current state-of-the-art imaging detectors. Imaging and analyzing digital rocks that sample the REV and still sufficiently resolve pore throats is critical to ensure simulation quality and robustness of rock property trends for further analysis. We find that at least 10 voxels are needed to sufficiently resolve pore throats for single phase fluid flow simulations. If this condition is not met, additional analyses and corrections may allow for meaningful comparisons between simulation results and laboratory measurements of permeability, but some cases may fall outside the current technical feasibility of DRP. On the other hand, we find that the ratio of field of view and effective grain size provides a reliable measure of the REV for siliciclastic rocks. If this ratio is greater than 5, the coefficient of variation for single-phase permeability simulations drops below 15%. These imaging considerations are crucial when comparing digitally computed rock flow properties with those measured in the laboratory. We find that the current imaging methods are sufficient to achieve both REV (with respect to numerical boundary conditions) and required image resolution to perform digital core analysis for coarse to fine-grained sandstones.

  6. Turbulence modeling for Francis turbine water passages simulation

    NASA Astrophysics Data System (ADS)

    Maruzewski, P.; Hayashi, H.; Munch, C.; Yamaishi, K.; Hashii, T.; Mombelli, H. P.; Sugow, Y.; Avellan, F.

    2010-08-01

    The application of Computational Fluid Dynamics (CFD) to hydraulic machines requires the ability to handle turbulent flows and to take into account the effects of turbulence on the mean flow. Nowadays, Direct Numerical Simulation (DNS) is still not a good candidate for hydraulic machine simulations due to its prohibitive computational cost. Large Eddy Simulation (LES), though in the same category as DNS, could be an alternative, whereby only the small-scale turbulent fluctuations are modeled and the larger-scale fluctuations are computed directly. Nevertheless, Reynolds-Averaged Navier-Stokes (RANS) models have become the widespread standard basis for numerous hydraulic machine design procedures. However, for many applications involving wall-bounded flows and attached boundary layers, various hybrid combinations of LES and RANS are being considered, such as Detached Eddy Simulation (DES), whereby the RANS approximation is kept in the regions where the boundary layers are attached to the solid walls. Furthermore, the accuracy of CFD simulations is highly dependent on grid quality, in terms of grid uniformity in complex configurations. Moreover, any successful structured or unstructured CFD code has to offer a wide range of models, from classic RANS models to complex hybrid models. The aim of this study is to compare the behavior of turbulent simulations for both structured and unstructured grid topologies with two different CFD codes applied to the same Francis turbine. Hence, the study is intended to outline the discrepancies encountered in predicting the wake of turbine blades when using either the standard k-epsilon model or the SST shear stress transport model in a steady CFD simulation. Finally, comparisons are made with experimental data from reduced-scale model measurements at the EPFL Laboratory for Hydraulic Machines.

  7. Numerical Simulations of Hypersonic Boundary Layer Transition

    NASA Astrophysics Data System (ADS)

    Bartkowicz, Matthew David

    Numerical schemes for supersonic flows tend to use large amounts of artificial viscosity for stability, which tends to damp out the small-scale structures in the flow. Recently, some low-dissipation methods have been proposed which selectively eliminate the artificial viscosity in regions that do not require it. This work builds upon the low-dissipation method of Subbareddy and Candler, which uses the flux vector splitting method of Steger and Warming but identifies the dissipation portion in order to eliminate it. Computing accurate fluxes typically relies on large grid stencils or coupled linear systems that become computationally expensive to solve. Unstructured grids allow CFD solutions to be obtained on complex geometries; unfortunately, it then becomes difficult to create a large stencil or the coupled linear system. Accurate solutions require grids that quickly become too large to be feasible. In this thesis, a method is proposed to obtain more accurate solutions using relatively local data, making it suitable for unstructured grids composed of hexahedral elements. Fluxes are reconstructed using local gradients to extend the range of data used. The method is then validated on several test problems. Simulations of boundary layer transition are then performed. An elliptic cone at Mach 8 is simulated based on an experiment at the Princeton Gasdynamics Laboratory. A simulated acoustic noise boundary condition is imposed to model the noisy conditions of the wind tunnel, and the transitioning boundary layer is observed. A computation of an isolated roughness element is performed based on an experiment in Purdue's Mach 6 quiet wind tunnel. The mechanism for transition is identified as an instability in the upstream separation region, and a comparison is made to experimental data. In the CFD, a fully turbulent boundary layer is observed downstream.

  8. Reliability based design optimization: Formulations and methodologies

    NASA Astrophysics Data System (ADS)

    Agarwal, Harish

    Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.

  9. A method for spectral DNS of low Rm channel flows based on the least dissipative modes

    NASA Astrophysics Data System (ADS)

    Kornet, Kacper; Pothérat, Alban

    2015-10-01

    We put forward a new type of spectral method for the direct numerical simulation of flows where anisotropy or very fine boundary layers are present. The main idea is to take advantage of the fact that such structures are dissipative and that their presence should reduce the number of degrees of freedom of the flow when, paradoxically, their fine resolution incurs extra computational cost in most current methods. The principle of this method is to use a functional basis with elements that already include these fine structures so as to avoid these extra costs. This leads us to develop an algorithm to implement a spectral method for arbitrary functional bases, and in particular, non-orthogonal ones. We construct a basic implementation of this algorithm to simulate magnetohydrodynamic (MHD) channel flows with an externally imposed, transverse magnetic field, where very thin boundary layers are known to develop along the channel walls. In this case, the sought functional basis can be built out of the eigenfunctions of the dissipation operator, which incorporate these boundary layers, and it turns out to be non-orthogonal. We validate this new scheme against numerical simulations of freely decaying MHD turbulence based on a finite volume code and it is found to provide accurate results. Its ability to fully resolve wall-bounded turbulence with a number of modes close to that required by the dynamics is demonstrated on a simple example. This opens the way to full-blown simulations of MHD turbulence under very high magnetic fields, which until now were too computationally expensive. In contrast to traditional methods, the computational cost of the proposed method does not depend on the intensity of the magnetic field.

  10. Low-Cost Terminal Alternative for Learning Center Managers. Final Report.

    ERIC Educational Resources Information Center

    Nix, C. Jerome; And Others

    This study established the feasibility of replacing high performance and relatively expensive computer terminals with less expensive ones adequate for supporting specific tasks of the Advanced Instructional System (AIS) at Lowry AFB, Colorado. Surveys of user requirements and available devices were conducted and the results used in a system analysis.…

  11. Delamination detection using methods of computational intelligence

    NASA Astrophysics Data System (ADS)

    Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata

    2012-11-01

    A reliable delamination prediction scheme is indispensable in order to prevent potential risks of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates, and hence such indicators can be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates is presented in this paper that relies on methods based on computational intelligence. Typical changes in the observed vibration characteristics (i.e. changes in natural frequencies) are considered as inputs to identify the existence, location and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying tool for analysis turns out to be computationally expensive. A surrogate-assisted optimization approach is hence introduced to contain the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme, while an improved rate of convergence is achieved using a memetic algorithm. However, building ANN surrogate models usually requires large training datasets. K-means clustering is effectively employed to reduce the size of the datasets. ANN is also used via inverse modeling to determine the position, size and location of delaminations using changes in measured natural frequencies. The results clearly highlight the efficiency and the robustness of the approach.
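
    A minimal sketch of such a surrogate pipeline follows, assuming scikit-learn and wholly synthetic stand-ins for the FE data: k-means picks representative training samples, then a small neural network learns the map from delamination parameters to frequency shifts. Sizes, features and targets are illustrative assumptions.

    ```python
    # Hedged sketch: k-means dataset reduction plus a neural-net surrogate.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)

    # Pretend FE results: inputs (location, size) -> outputs (5 frequency shifts).
    X_full = rng.uniform(0.0, 1.0, size=(5000, 2))
    Y_full = np.column_stack([np.sin((k + 1) * np.pi * X_full[:, 0]) * X_full[:, 1]
                              for k in range(5)])

    # K-means reduction: keep the sample nearest each cluster centre.
    km = KMeans(n_clusters=300, n_init=10, random_state=0).fit(X_full)
    keep = np.array([np.argmin(np.linalg.norm(X_full - c, axis=1))
                     for c in km.cluster_centers_])
    X_train, Y_train = X_full[keep], Y_full[keep]

    # Train the surrogate on the reduced set; FE calls are avoided in the loop.
    surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                             random_state=0).fit(X_train, Y_train)
    print("surrogate R^2 on full set:", surrogate.score(X_full, Y_full))
    ```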

  12. Numerical Simulation of Transitional, Hypersonic Flows using a Hybrid Particle-Continuum Method

    NASA Astrophysics Data System (ADS)

    Verhoff, Ashley Marie

    Analysis of hypersonic flows requires consideration of multiscale phenomena due to the range of flight regimes encountered, from rarefied conditions in the upper atmosphere to fully continuum flow at low altitudes. At transitional Knudsen numbers there are likely to be localized regions of strong thermodynamic nonequilibrium effects that invalidate the continuum assumptions of the Navier-Stokes equations. Accurate simulation of these regions, which include shock waves, boundary and shear layers, and low-density wakes, requires a kinetic theory-based approach where no prior assumptions are made regarding the molecular distribution function. Because of the nature of these types of flows, there is much to be gained in terms of both numerical efficiency and physical accuracy by developing hybrid particle-continuum simulation approaches. The focus of the present research effort is the continued development of the Modular Particle-Continuum (MPC) method, where the Navier-Stokes equations are solved numerically using computational fluid dynamics (CFD) techniques in regions of the flow field where continuum assumptions are valid, and the direct simulation Monte Carlo (DSMC) method is used where strong thermodynamic nonequilibrium effects are present. Numerical solutions of transitional, hypersonic flows are thus obtained with increased physical accuracy relative to CFD alone, and improved numerical efficiency is achieved in comparison to DSMC alone because this more computationally expensive method is restricted to those regions of the flow field where it is necessary to maintain physical accuracy. In this dissertation, a comprehensive assessment of the physical accuracy of the MPC method is performed, leading to the implementation of a non-vacuum supersonic outflow boundary condition in particle domains, and more consistent initialization of DSMC simulator particles along hybrid interfaces. The relative errors between MPC and full DSMC results are greatly reduced as a direct result of these improvements. Next, a new parameter for detecting rotational nonequilibrium effects is proposed and shown to offer advantages over other continuum breakdown parameters, achieving further accuracy gains. Lastly, the capabilities of the MPC method are extended to accommodate multiple chemical species in rotational nonequilibrium, each of which is allowed to equilibrate independently, enabling application of the MPC method to more realistic atmospheric flows.

  13. Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen

    2011-08-16

    Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds (inequalities) on linear correlation coefficients provide useful guidance, but these bounds are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that is based on a blend of theory and empiricism. The method begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are parameterized here using a cosine row-wise formula that is inspired by the aforementioned bounds on correlations. The method has three advantages: 1) the computational expense is tolerable; 2) the correlations are, by construction, guaranteed to be consistent with each other; and 3) the methodology is fairly general and hence may be applicable to other problems. The method is tested non-interactively using simulations of three Arctic mixed-phase cloud cases from two different field experiments: the Indirect and Semi-Direct Aerosol Campaign (ISDAC) and the Mixed-Phase Arctic Cloud Experiment (M-PACE). Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
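
    The consistency-by-construction property follows from the Cholesky route: any lower-triangular factor with unit row norms, built from spherical angles, yields a valid correlation matrix. A minimal sketch of that construction follows; the paper's specific cosine row-wise formula is not reproduced here, and the angles below are arbitrary.

    ```python
    # Hedged sketch: a correlation matrix from spherical angles via Cholesky
    # (the Pinheiro & Bates 1996 framework named above).
    import numpy as np

    def corr_from_angles(theta):
        """theta[i-1] holds the i angles of row i, each in (0, pi); any choice
        yields a symmetric PSD matrix with unit diagonal."""
        n = len(theta) + 1
        L = np.zeros((n, n))
        L[0, 0] = 1.0
        for i in range(1, n):
            ang = np.asarray(theta[i - 1], dtype=float)   # length i
            sin_prod = 1.0
            for j in range(i):
                L[i, j] = np.cos(ang[j]) * sin_prod
                sin_prod *= np.sin(ang[j])
            L[i, i] = sin_prod                            # unit row norm
        return L @ L.T

    # Example: correlations among three hydrometeor mixing ratios (labels
    # illustrative); the angles would come from the paper's formula.
    C = corr_from_angles([np.array([0.4]), np.array([0.9, 1.2])])
    print(np.round(C, 3))
    print("min eigenvalue:", np.linalg.eigvalsh(C).min())
    ```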

  14. Inelastic neutron scattering of large molecular systems: The case of the original benzylic amide [2]catenane

    NASA Astrophysics Data System (ADS)

    Caciuffo, Roberto; Esposti, Alessandra Degli; Deleuze, Michael S.; Leigh, David A.; Murphy, Aden; Paci, Barbara; Parker, Stewart F.; Zerbetto, Francesco

    1998-12-01

    The inelastic neutron scattering (INS) spectrum of the original benzylic amide [2]catenane is recorded and simulated by a semiempirical quantum chemical procedure coupled with the most comprehensive approach available to date, the CLIMAX program. The successful simulation of the spectrum indicates that the modified neglect of differential overlap (MNDO) model can reproduce the intramolecular vibrations of a molecular system as large as a catenane (136 atoms). Because of the computational costs involved and some numerical instabilities, a less expensive approach is attempted which involves the molecular mechanics-based calculation of the INS response in terms of the most basic formulation for the scattering activity. The encouraging results obtained validate the less computationally intensive procedure and allow its extension to the calculation of the INS spectrum for a second, theoretical, co-conformer, which, although structurally and energetically reasonable, is not, in fact, found in the solid state. The second structure was produced by a Monte Carlo simulated annealing method run in the conformational space (a procedure that would have been prohibitively expensive at the semiempirical level) and is characterized by a higher degree of intramolecular hydrogen bonding than the X-ray structure. The two alternative structures yield different simulated spectra, only one of which, the authentic one, is compatible with the experimental data. Comparison of the two simulated and experimental spectra affords the identification of an inelastic neutron scattering spectral signature of the correct hydrogen bonding motif in the region slightly above 700 cm^-1. The study illustrates that combinations of simulated INS data and experimental results can be successfully used to discriminate between different proposed structures or possible hydrogen bonding motifs in large functional molecular systems.

  15. Markov reward processes

    NASA Technical Reports Server (NTRS)

    Smith, R. M.

    1991-01-01

    Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the behavior of the system with a continuous-time Markov chain, where a reward rate is associated with each state. In a reliability/availability model, up states may have reward rate 1 and down states may have reward rate zero associated with them. In a queueing model, the number of jobs of a certain type in a given state may be the reward rate attached to that state. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Expected steady-state reward rate and expected instantaneous reward rate are clearly useful measures of the Markov reward model. More generally, the distribution of accumulated reward or time-averaged reward over a finite time interval may be determined from the solution of the Markov reward model. This information is of great practical significance in situations where the workload can be well characterized (deterministically, or by continuous functions, e.g., distributions). The design process in the development of a computer system is an expensive and long-term endeavor. For aerospace applications the reliability of the computer system is essential, as is the ability to complete critical workloads in a well-defined real-time interval. Consequently, effective modeling of such systems must take into account both performance and reliability. This fact motivates our use of Markov reward models to aid in the development and evaluation of fault-tolerant computer systems.
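
    For the expected steady-state reward rate mentioned above, a minimal sketch: solve pi Q = 0 with the normalization constraint for a two-state availability model, then take the reward-weighted sum. The failure and repair rates are illustrative assumptions.

    ```python
    # Hedged sketch: steady-state reward rate of a tiny CTMC availability model,
    # one up state (reward 1) and one down state (reward 0).
    import numpy as np

    lam, mu = 1e-3, 1e-1            # failure and repair rates per hour (assumed)
    Q = np.array([[-lam,  lam],     # CTMC generator: rows sum to zero
                  [  mu,  -mu]])
    r = np.array([1.0, 0.0])        # reward rate attached to each state

    # Solve pi @ Q = 0 with sum(pi) = 1 by appending the normalisation equation.
    A = np.vstack([Q.T, np.ones(2)])
    b = np.array([0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    print(f"steady-state availability = {pi @ r:.6f}")  # mu/(lam+mu) ~ 0.990099
    ```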

  16. New Developments in Modeling MHD Systems on High Performance Computing Architectures

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Raeder, J.; Larson, D. J.; Bhattacharjee, A.

    2009-04-01

    Modeling the wide range of time and length scales present even in fluid models of plasmas like MHD and X-MHD (Extended MHD including two-fluid effects like the Hall term, electron inertia, and the electron pressure gradient) is challenging even on state-of-the-art supercomputers. In recent years, HPC capacity has continued to grow exponentially, but at the expense of making the computer systems more and more difficult to program in order to get maximum performance. In this paper, we present a new approach to managing the complexity caused by the need to write efficient codes: separating the numerical description of the problem, in our case a discretized right-hand side (r.h.s.), from the actual implementation of efficiently evaluating it. An automatic code generator is used to describe the r.h.s. in a quasi-symbolic form while leaving the translation into efficient and parallelized code to a computer program itself. We implemented this approach for OpenGGCM (Open General Geospace Circulation Model), a model of the Earth's magnetosphere, which was accelerated by a factor of three on regular x86 architecture and a factor of 25 on the Cell BE architecture (commonly known for its deployment in Sony's PlayStation 3).
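
    A toy analogue of that separation, with SymPy's code generator standing in (by assumption) for the paper's much more elaborate tool: the r.h.s. of a 1-D diffusion stencil is stated symbolically, and C code evaluating it is emitted automatically.

    ```python
    # Hedged sketch: a quasi-symbolic r.h.s. translated to C by a generator.
    import sympy as sp
    from sympy.utilities.codegen import codegen

    u, u_im1, u_ip1, dx, nu = sp.symbols("u u_im1 u_ip1 dx nu")

    # Symbolic r.h.s. of a 1-D diffusion stencil: nu * d2u/dx2, central difference.
    rhs = nu * (u_ip1 - 2 * u + u_im1) / dx**2

    [(c_name, c_code), (h_name, h_code)] = codegen(
        ("diffusion_rhs", rhs), language="C", header=False, empty=False)
    print(c_code)  # plain C function evaluating the stencil at one grid point
    ```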

  17. Dual-scale Galerkin methods for Darcy flow

    NASA Astrophysics Data System (ADS)

    Wang, Guoyin; Scovazzi, Guglielmo; Nouveau, Léo; Kees, Christopher E.; Rossi, Simone; Colomés, Oriol; Main, Alex

    2018-02-01

    The discontinuous Galerkin (DG) method has found widespread application in elliptic problems with rough coefficients, of which the Darcy flow equations are a prototypical example. One of the long-standing issues of DG approximations is the overall computational cost, and many different strategies have been proposed, such as the variational multiscale DG method, the hybridizable DG method, the multiscale DG method, the embedded DG method, and the Enriched Galerkin method. In this work, we propose a mixed dual-scale Galerkin method, in which the degrees-of-freedom of a less computationally expensive coarse-scale approximation are linked to the degrees-of-freedom of a base DG approximation. We show that the proposed approach has always similar or improved accuracy with respect to the base DG method, with a considerable reduction in computational cost. For the specific definition of the coarse-scale space, we consider Raviart-Thomas finite elements for the mass flux and piecewise-linear continuous finite elements for the pressure. We provide a complete analysis of stability and convergence of the proposed method, in addition to a study on its conservation and consistency properties. We also present a battery of numerical tests to verify the results of the analysis, and evaluate a number of possible variations, such as using piecewise-linear continuous finite elements for the coarse-scale mass fluxes.

  18. Nanoinformatics knowledge infrastructures: bringing efficient information management to nanomedical research

    PubMed Central

    de la Iglesia, D; Cachau, R E; García-Remesal, M; Maojo, V

    2014-01-01

    Nanotechnology represents an area of particular promise and significant opportunity across multiple scientific disciplines. Ongoing nanotechnology research ranges from the characterization of nanoparticles and nanomaterials to the analysis and processing of experimental data seeking correlations between nanoparticles and their functionalities and side effects. Due to their special properties, nanoparticles are suitable for cellular-level diagnostics and therapy, offering numerous applications in medicine, e.g. development of biomedical devices, tissue repair, drug delivery systems and biosensors. In nanomedicine, recent studies are producing large amounts of structural and property data, highlighting the role for computational approaches in information management. While in vitro and in vivo assays are expensive, the cost of computing is falling. Furthermore, improvements in the accuracy of computational methods (e.g. data mining, knowledge discovery, modeling and simulation) have enabled effective tools to automate the extraction, management and storage of these vast data volumes. Since this information is widely distributed, one major issue is how to locate and access data where it resides (which also poses data-sharing limitations). The novel discipline of nanoinformatics addresses the information challenges related to nanotechnology research. In this paper, we summarize the needs and challenges in the field and present an overview of extant initiatives and efforts. PMID:24932210

  19. Quantum Monte Carlo for large chemical systems: implementing efficient strategies for petascale platforms and beyond.

    PubMed

    Scemama, Anthony; Caffarel, Michel; Oseret, Emmanuel; Jalby, William

    2013-04-30

    Various strategies to efficiently implement quantum Monte Carlo (QMC) simulations for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices. This novel scheme is based on the use of the highly localized character of atomic Gaussian basis functions (not the molecular orbitals as usually done), (ii) the possibility of keeping the memory footprint minimal, (iii) the important enhancement of single-core performance when efficient optimization tools are used, and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC=Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10-80 k computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC=Chem has been shown to be capable of running at the petascale level, thus demonstrating that for this machine a large part of the peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible. Copyright © 2013 Wiley Periodicals, Inc.

  20. Optimal gains for a single polar orbiting satellite

    NASA Technical Reports Server (NTRS)

    Banfield, Don; Ingersoll, A. P.; Keppenne, C. L.

    1993-01-01

    Gains are the spatial weighting of an observation in its neighborhood versus the local values of a model prediction. They are the key to data assimilation, as they are the direct measure of how the data are used to guide the model. As derived in the broad context of data assimilation by Kalman and in the context of meteorology, for example, by Rutherford, the optimal gains are functions of the prediction error covariances between the observation and analysis points. Kalman introduced a very powerful technique that allows one to calculate these optimal gains at the time of each observation. Unfortunately, this technique is both computationally expensive and often numerically unstable for dynamical systems of the magnitude of meteorological models, and thus is unsuited for use in PMIRR data assimilation. However, the optimal gains as calculated by a Kalman filter do reach a steady state for regular observing patterns like that of a satellite. In this steady state, the gains are constants in time, and thus could conceivably be computed off-line. These steady-state Kalman gains (i.e., Wiener gains) would yield optimal performance without the computational burden of true Kalman filtering. We proposed to use this type of constant-in-time Wiener gain for the assimilation of data from PMIRR and Mars Observer.
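
    A minimal sketch of computing such a constant-in-time (Wiener) gain offline, assuming a linear-Gaussian toy system: the steady-state covariance comes from the discrete algebraic Riccati equation, and the resulting gain is then frozen for every analysis step. The 2-state model and all covariances are illustrative assumptions, not the PMIRR setup.

    ```python
    # Hedged sketch: steady-state Kalman (Wiener) gain via the dual DARE.
    import numpy as np
    from scipy.linalg import solve_discrete_are

    A = np.array([[1.0, 0.1],      # state transition (assumed model)
                  [0.0, 0.9]])
    H = np.array([[1.0, 0.0]])     # observe the first state component only
    Q = 0.01 * np.eye(2)           # model-error covariance (assumed)
    R = np.array([[0.1]])          # observation-error covariance (assumed)

    # Steady-state forecast-error covariance from the dual Riccati equation.
    P = solve_discrete_are(A.T, H.T, Q, R)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Wiener gain, fixed in time

    # Analysis step with the frozen gain: no covariance update needed online.
    x_forecast = np.array([0.5, -0.2])
    obs = np.array([0.7])
    x_analysis = x_forecast + K @ (obs - H @ x_forecast)
    print(K, x_analysis)
    ```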

  1. Web-Based Job Submission Interface for the GAMESS Computational Chemistry Program

    ERIC Educational Resources Information Center

    Perri, M. J.; Weber, S. H.

    2014-01-01

    A Web site is described that facilitates use of the free computational chemistry software: General Atomic and Molecular Electronic Structure System (GAMESS). Its goal is to provide an opportunity for undergraduate students to perform computational chemistry experiments without the need to purchase expensive software.

  2. Computing Systems | High-Performance Computing | NREL

    Science.gov Websites

    investigate, build, and test models of complex phenomena or entire integrated systems that cannot be directly observed or manipulated in the lab, or that would be too expensive or time-consuming to study. Models and visualizations

  3. Integrating numerical computation into the undergraduate education physics curriculum using spreadsheet excel

    NASA Astrophysics Data System (ADS)

    Fauzi, Ahmad

    2017-11-01

    Numerical computation has many pedagogical advantages: it develops analytical and problem-solving skills, helps students learn through visualization, and enhances physics education. Unfortunately, numerical computation is not taught to undergraduate physics education students in Indonesia. Incorporating numerical computation into the undergraduate physics education curriculum presents many challenges. The main challenges are the dense curriculum, which makes it difficult to add a new numerical computation course, and the fact that most students have no programming experience. In this research, we used a case study to review how to integrate numerical computation into the undergraduate physics education curriculum. The participants were 54 fourth-semester students in the physics education department. We conclude that numerical computation can be integrated into the undergraduate physics education curriculum using the spreadsheet Excel combined with another course. The results of this research complement studies on how to integrate numerical computation into physics learning using the spreadsheet Excel.

  4. Modeling the evolution of channel shape: Balancing computational efficiency with hydraulic fidelity

    USGS Publications Warehouse

    Wobus, C.W.; Kean, J.W.; Tucker, G.E.; Anderson, R. Scott

    2008-01-01

    The cross-sectional shape of a natural river channel controls the capacity of the system to carry water off a landscape, to convey sediment derived from hillslopes, and to erode its bed and banks. Numerical models that describe the response of a landscape to changes in climate or tectonics therefore require formulations that can accommodate evolution of channel cross-sectional geometry. However, fully two-dimensional (2-D) flow models are too computationally expensive to implement in large-scale landscape evolution models, while available simple empirical relationships between width and discharge do not adequately capture the dynamics of channel adjustment. We have developed a simplified 2-D numerical model of channel evolution in a cohesive, detachment-limited substrate subject to steady, unidirectional flow. Erosion is assumed to be proportional to boundary shear stress, which is calculated using an approximation of the flow field in which log-velocity profiles are assumed to apply along vectors that are perpendicular to the local channel bed. Model predictions of the velocity structure, peak boundary shear stress, and equilibrium channel shape compare well with predictions of a more sophisticated but more computationally demanding ray-isovel model. For example, the mean velocities computed by the two models are consistent to within ~3%, and the predicted peak shear stress is consistent to within ~7%. Furthermore, the shear stress distributions predicted by our model compare favorably with available laboratory measurements for prescribed channel shapes. A modification to our simplified code in which the flow includes a high-velocity core allows the model to be extended to estimate shear stress distributions in channels with large width-to-depth ratios. Our model is efficient enough to incorporate into large-scale landscape evolution codes and can be used to examine how channels adjust both cross-sectional shape and slope in response to tectonic and climatic forcing. Copyright 2008 by the American Geophysical Union.
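
    The sketch below illustrates the log-profile assumption named above: recover the shear velocity from a depth-averaged speed via the law of the wall, then form tau = rho u*^2. The depth-averaging relation, roughness length and lateral velocity profile are illustrative assumptions, not the paper's ray-perpendicular formulation.

    ```python
    # Hedged sketch: boundary shear stress from a law-of-the-wall closure.
    import numpy as np

    KAPPA = 0.41        # von Karman constant
    RHO = 1000.0        # water density [kg/m^3]

    def boundary_shear_stress(U_mean, depth, z0):
        """tau_b from depth-averaged velocity, assuming u(z) = (u*/kappa) ln(z/z0),
        whose depth average is U = (u*/kappa) * (ln(depth/z0) - 1)."""
        u_star = KAPPA * U_mean / (np.log(depth / z0) - 1.0)
        return RHO * u_star**2

    # Shear stress across a half cross-section with depth tapering to the bank.
    depth = np.linspace(0.05, 2.0, 50)            # local flow depth [m]
    U = 1.2 * (depth / depth.max()) ** 0.5        # crude lateral velocity profile
    tau = boundary_shear_stress(U, depth, z0=0.002)
    print(f"peak boundary shear stress: {tau.max():.1f} Pa")
    ```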

  5. Stream-based Hebbian eigenfilter for real-time neuronal spike discrimination

    PubMed Central

    2012-01-01

    Background: Principal component analysis (PCA) has been widely employed for automatic neuronal spike sorting. Calculating principal components (PCs) is computationally expensive, and requires complex numerical operations and large memory resources. Substantial hardware resources are therefore needed for hardware implementations of PCA. The general Hebbian algorithm (GHA) has been proposed for calculating PCs of neuronal spikes in our previous work, which eliminates the need for the computationally expensive covariance analysis and eigenvalue decomposition of conventional PCA algorithms. However, large memory resources are still inherently required for storing a large volume of aligned spikes for training PCs. This large memory consumes substantial hardware resources and contributes significant power dissipation, which makes GHA difficult to implement in portable or implantable multi-channel recording micro-systems. Method: In this paper, we present a new algorithm for PCA-based spike sorting based on GHA, namely the stream-based Hebbian eigenfilter, which eliminates the inherent memory requirements of GHA while keeping the accuracy of spike sorting by utilizing the pseudo-stationarity of neuronal spikes. Because of the reduction of large hardware storage requirements, the proposed algorithm can lead to ultra-low hardware resource usage and power consumption, which is critical for future multi-channel micro-systems. Both clinical and synthetic neural recording data sets were employed to evaluate the accuracy of the stream-based Hebbian eigenfilter. The performance of spike sorting using the stream-based eigenfilter and the computational complexity of the eigenfilter were rigorously evaluated and compared with conventional PCA algorithms. Field-programmable gate arrays (FPGAs) were employed to implement the proposed algorithm, evaluate the hardware implementations and demonstrate the reduction in both power consumption and hardware memory achieved by the streaming computation. Results and discussion: Results demonstrate that the stream-based eigenfilter can achieve the same accuracy and is 10 times more computationally efficient when compared with conventional PCA algorithms. Hardware evaluations show that 90.3% of logic resources, 95.1% of power consumption and 86.8% of computing latency can be reduced by the stream-based eigenfilter when compared with PCA hardware. By utilizing the streaming method, 92% of memory resources and 67% of power consumption can be saved when compared with the direct implementation of GHA. Conclusion: The stream-based Hebbian eigenfilter presents a novel approach to enable real-time spike sorting with reduced computational complexity and hardware costs. This new design can be further utilized for multi-channel neuro-physiological experiments or chronic implants. PMID:22490725
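
    For readers unfamiliar with GHA, the minimal sketch below shows Sanger's-rule updates learning principal components one sample at a time, with synthetic stand-ins for aligned spikes; the streaming eigenfilter's memory-elimination machinery is not reproduced here.

    ```python
    # Hedged sketch: streaming PCA via the Generalized Hebbian Algorithm.
    import numpy as np

    rng = np.random.default_rng(2)

    def gha_update(W, x, lr):
        """One step of Sanger's rule: dW = lr * (y x^T - tril(y y^T) W), y = W x.
        No covariance matrix and no stored sample buffer are needed."""
        y = W @ x
        W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
        return W

    # Synthetic "aligned spikes": 64-sample waveforms spanned by two fixed modes
    # with different variances (so the leading eigenvectors are well separated).
    n_dims, n_pcs = 64, 2
    t = np.linspace(0.0, 1.0, n_dims)
    basis = np.stack([np.sin(2 * np.pi * t), np.sin(4 * np.pi * t)])
    basis /= np.linalg.norm(basis, axis=1, keepdims=True)
    W = rng.normal(0.0, 0.1, (n_pcs, n_dims))

    for _ in range(20000):
        amps = np.array([2.0, 1.0]) * rng.normal(size=2)
        x = amps @ basis + 0.05 * rng.normal(size=n_dims)
        W = gha_update(W, x, lr=1e-3)

    # Rows of W align (up to sign) with the two dominant modes: ~[[1, 0], [0, 1]].
    print(np.round(np.abs(W @ basis.T), 2))
    ```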

  6. 41 CFR 301-10.301 - How do I compute my mileage reimbursement?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 4 2010-07-01 2010-07-01 false How do I compute my...-TRANSPORTATION EXPENSES Privately Owned Vehicle (POV) § 301-10.301 How do I compute my mileage reimbursement? You compute mileage reimbursement by multiplying the distance traveled, determined under § 301-10.302 of this...

  7. Deformation measurements by ESPI of the surface of a heated mirror and comparison with numerical model

    NASA Astrophysics Data System (ADS)

    Languy, Fabian; Vandenrijt, Jean-François; Saint-Georges, Philippe; Georges, Marc P.

    2017-06-01

    The manufacture of mirrors for space applications is expensive, and the requirements on optical performance increase over the years. To achieve higher performance, larger mirrors are manufactured, but the larger the mirror, the higher the sensitivity to temperature variation and therefore the greater the degradation of optical performance. To avoid the use of expensive thermal regulation, we need to develop tools able to predict how optics behave under thermal constraints. This paper presents a comparison between experimental surface mirror deformation and theoretical results from a multiphysics model. The local displacements of the mirror surface have been measured with electronic speckle pattern interferometry (ESPI), and the deformation itself has been calculated by subtracting the rigid-body motion. After validation of the mechanical model, experimental and numerical wavefront errors are compared.
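
    A common way to remove rigid-body motion from an out-of-plane displacement map is to subtract the least-squares plane (piston plus two tilts). The sketch below illustrates this assumed post-processing step on synthetic data; it is not the authors' code, and the bump amplitude playing the role of the true deformation is invented for the example.

    ```python
    import numpy as np

    def remove_rigid_body(x, y, w):
        """Subtract the best-fit plane a + b*x + c*y from displacements w."""
        A = np.column_stack([np.ones_like(x), x, y])
        coef, *_ = np.linalg.lstsq(A, w, rcond=None)  # [piston, tilt_x, tilt_y]
        return w - A @ coef

    # Synthetic field: pure tilt plus a small Gaussian bump (the "deformation")
    x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
    x, y = x.ravel(), y.ravel()
    w = 2e-6 * x + 1e-6 * y + 0.1e-6 * np.exp(-(x**2 + y**2) / 0.1)
    deformation = remove_rigid_body(x, y, w)   # bump remains, plane removed
    ```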

  8. Is Your School Y2K-OK?

    ERIC Educational Resources Information Center

    Bates, Martine G.

    1999-01-01

    The most vulnerable Y2K areas for schools are networked computers, free-standing personal computers, software, and embedded chips in utilities such as telephones and fire alarms. Expensive, time-consuming procedures and software have been developed for testing and bringing most computers into compliance. Districts need a triage prioritization…

  9. Understanding the Internet.

    ERIC Educational Resources Information Center

    Oblinger, Diana

    The Internet is an international network linking hundreds of smaller computer networks in North America, Europe, and Asia. Using the Internet, computer users can connect to a variety of computers with little effort or expense. The potential for use by college faculty is enormous. The largest problem faced by most users is understanding what such…

  10. "Mini", "Midi" and the Student.

    ERIC Educational Resources Information Center

    Edwards, Perry; Broadwell, Bruce

    Mini- and midi-computers have been introduced into the computer science program at Sierra College to afford students more direct contact with computers. The college's administration combined with the Science and Business departments to share the expense and utilization of the program. The National Cash Register Century 100 and the Data General…

  11. 48 CFR 970.5227-1 - Rights in data-facilities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... software. (2) Computer software, as used in this clause, means (i) computer programs which are data... software. The term “data” does not include data incidental to the administration of this contract, such as... this clause, means data, other than computer software, developed at private expense that embody trade...

  12. RenderMan design principles

    NASA Technical Reports Server (NTRS)

    Apodaca, Tony; Porter, Tom

    1989-01-01

    The two worlds of interactive graphics and realistic graphics have remained separate. Fast graphics hardware runs simple algorithms and generates simple looking images. Photorealistic image synthesis software runs slowly on large expensive computers. The time has come for these two branches of computer graphics to merge. The speed and expense of graphics hardware is no longer the barrier to the wide acceptance of photorealism. There is every reason to believe that high quality image synthesis will become a standard capability of every graphics machine, from superworkstation to personal computer. The significant barrier has been the lack of a common language, an agreed-upon set of terms and conditions, for 3-D modeling systems to talk to 3-D rendering systems for computing an accurate rendition of a scene. Pixar has introduced RenderMan to serve as that common language. RenderMan, specifically the extensibility it offers in shading calculations, is discussed.

  13. Extending Strong Scaling of Quantum Monte Carlo to the Exascale

    NASA Astrophysics Data System (ADS)

    Shulenburger, Luke; Baczewski, Andrew; Luo, Ye; Romero, Nichols; Kent, Paul

    Quantum Monte Carlo is one of the most accurate and most computationally expensive methods for solving the electronic structure problem. In spite of its significant computational expense, its massively parallel nature is ideally suited to petascale computers which have enabled a wide range of applications to relatively large molecular and extended systems. Exascale capabilities have the potential to enable the application of QMC to significantly larger systems, capturing much of the complexity of real materials such as defects and impurities. However, both memory and computational demands will require significant changes to current algorithms to realize this possibility. This talk will detail both the causes of the problem and potential solutions. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corp, a wholly owned subsidiary of Lockheed Martin Corp, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  14. Detecting non-orthology in the COGs database and other approaches grouping orthologs using genome-specific best hits.

    PubMed

    Dessimoz, Christophe; Boeckmann, Brigitte; Roth, Alexander C J; Gonnet, Gaston H

    2006-01-01

    Correct orthology assignment is a critical prerequisite of numerous comparative genomics procedures, such as function prediction, construction of phylogenetic species trees and genome rearrangement analysis. We present an algorithm for the detection of non-orthologs that arise by mistake in current orthology classification methods based on genome-specific best hits, such as the COGs database. The algorithm works with pairwise distance estimates, rather than computationally expensive and error-prone tree-building methods. The accuracy of the algorithm is evaluated through verification of the distribution of predicted cases, case-by-case phylogenetic analysis and comparisons with predictions from other projects using independent methods. Our results show that a very significant fraction of the COG groups include non-orthologs: using conservative parameters, the algorithm detects non-orthology in a third of all COG groups. Consequently, sequence analysis sensitive to correct orthology assignments will greatly benefit from these findings.
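
    To make the grouping criterion concrete, the toy sketch below computes genome-specific best hits and bidirectional best-hit pairs from a matrix of pairwise distance estimates. It illustrates the classification scheme the algorithm scrutinizes, not the authors' non-orthology detection method itself; the data and function names are hypothetical.

    ```python
    import numpy as np

    def best_hits(dist, genome_of):
        """For each gene, its closest gene in every other genome."""
        n = dist.shape[0]
        hits = {}
        for i in range(n):
            for g in set(genome_of) - {genome_of[i]}:
                candidates = [j for j in range(n) if genome_of[j] == g]
                hits[(i, g)] = min(candidates, key=lambda j: dist[i, j])
        return hits

    def bidirectional_best_hits(dist, genome_of):
        """Pairs that are each other's best hit: putative orthologs."""
        hits = best_hits(dist, genome_of)
        return {(i, j) for (i, g), j in hits.items()
                if hits[(j, genome_of[i])] == i and i < j}

    dist = np.array([[0.0, 0.1, 0.4],
                     [0.1, 0.0, 0.3],
                     [0.4, 0.3, 0.0]])
    genomes = ["A", "A", "B"]
    print(bidirectional_best_hits(dist, genomes))   # {(1, 2)}
    ```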

  15. Modeling Production Plant Forming Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhee, M; Becker, R; Couch, R

    2004-09-22

    Engineering has simulation tools and experience in modeling forming processes. Y-12 personnel have expressed interest in validating our tools and experience against their manufacturing process activities such as rolling, casting, and forging. We have demonstrated numerical capabilities in a collaborative DOE/OIT project with ALCOA that is nearing successful completion. The goal was to use ALE3D to model Alcoa's slab rolling process in order to demonstrate a computational tool that would allow Alcoa to define a rolling schedule that would minimize the probability of ingot fracture, thus reducing waste and energy consumption. It is intended to lead to long-term collaboration with Y-12 and perhaps involvement with other components of the weapons production complex. Using simulations to aid in design of forming processes can: decrease time to production; reduce forming trials and associated expenses; and guide development of products with greater uniformity and less scrap.

  16. Computational homogenisation for thermoviscoplasticity: application to thermally sprayed coatings

    NASA Astrophysics Data System (ADS)

    Berthelsen, Rolf; Denzer, Ralf; Oppermann, Philip; Menzel, Andreas

    2017-11-01

    Metal forming processes require wear-resistant tool surfaces in order to ensure a long life cycle of the expensive tools together with a constant high quality of the produced components. Thermal spraying is a relatively widely applied coating technique for the deposition of wear-protection coatings. During these coating processes, heterogeneous coatings are deposited at high temperatures followed by quenching, during which residual stresses occur that strongly influence the performance of the coated tools. The objective of this article is to discuss and apply a thermo-mechanically coupled simulation framework which captures the heterogeneity of the deposited coating material. To this end, a two-scale finite element framework for the solution of nonlinear thermo-mechanically coupled problems is elaborated and applied to the simulation of thermoviscoplastic material behaviour, including nonlinear thermal softening, in a geometrically linearised setting. The finite element framework and material model are demonstrated by means of numerical examples.

  17. Simulation of hypersonic rarefied flows with the immersed-boundary method

    NASA Astrophysics Data System (ADS)

    Bruno, D.; De Palma, P.; de Tullio, M. D.

    2011-05-01

    This paper provides a validation of an immersed boundary method for computing hypersonic rarefied gas flows. The method is based on the solution of the Navier-Stokes equations and is validated against numerical results obtained by the DSMC approach. The Navier-Stokes solver employs a flexible local grid refinement technique and is implemented on parallel machines using a domain-decomposition approach. Thanks to the efficient grid generation process, based on the ray-tracing technique, and the use of the METIS software, it is possible to obtain the partitioned grids to be assigned to each processor with minimal effort by the user. This allows one to bypass the expensive (in terms of time and human resources) classical generation process of a body-fitted grid. First-order slip-velocity boundary conditions are employed and tested to account for rarefied gas effects.

  18. Ship detection in optical remote sensing images based on deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Jiang, Zhiguo; Zhang, Haopeng; Zhao, Danpei; Cai, Bowen

    2017-10-01

    Automatic ship detection in optical remote sensing images has attracted wide attention for its broad applications. Major challenges for this task include interference from clouds, waves, and wakes, as well as high computational expense. We propose a fast and robust ship detection algorithm to address these issues. The framework for ship detection is designed based on deep convolutional neural networks (CNNs), which provide the accurate locations of ship targets in an efficient way. First, the deep CNN is designed to extract features. Then, a region proposal network (RPN) is applied to discriminate ship targets and regress the detection bounding boxes, in which the anchors are designed around the intrinsic shape of ship targets. Experimental results on numerous panchromatic images demonstrate that, in comparison with other state-of-the-art ship detection methods, our method is more efficient and achieves higher detection accuracy and more precise bounding boxes in different complex backgrounds.
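
    As a hedged illustration of shape-aware anchor design, the sketch below generates anchor boxes whose aspect ratios follow the elongated geometry typical of ships, in two orientations. The scales and ratios are invented for the example, not the values used in the paper.

    ```python
    import numpy as np

    def make_anchors(base_size=16, scales=(4, 8, 16), ratios=(3.0, 5.0, 8.0)):
        """Return (cx, cy, w, h) anchors centred at the origin."""
        anchors = []
        for s in scales:
            area = (base_size * s) ** 2
            for r in ratios:                     # r = length / width of the ship
                w = np.sqrt(area / r)
                h = w * r
                anchors.append((0.0, 0.0, w, h))
                anchors.append((0.0, 0.0, h, w))  # same shape rotated 90 degrees
        return np.array(anchors)

    anchors = make_anchors()
    print(anchors.shape)   # (18, 4): 3 scales x 3 ratios x 2 orientations
    ```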

  19. TRUSS: An intelligent design system for aircraft wings

    NASA Technical Reports Server (NTRS)

    Bates, Preston R.; Schrage, Daniel P.

    1989-01-01

    Competitive leadership in the international marketplace, superiority in national defense, excellence in productivity, and safety of both private and public systems are all national goals that depend on superior engineering design. In recent years, it has become more evident that early design decisions are critical, and when based only on performance, they often result in products which are too expensive, hard to manufacture, or unsupportable. Better use of computer-aided design tools and information-based technologies is required to produce better quality United States products. A program is outlined here to explore the use of knowledge based expert systems coupled with numerical optimization, database management techniques, and designer interface methods in a networked design environment to improve and assess design changes due to changing emphasis or requirements. The initial structural design of a tiltrotor aircraft wing is used as a representative example to demonstrate the approach being followed.

  20. Multi-Resolution Unstructured Grid-Generation for Geophysical Applications on the Sphere

    NASA Technical Reports Server (NTRS)

    Engwirda, Darren

    2015-01-01

    An algorithm for the generation of non-uniform unstructured grids on ellipsoidal geometries is described. This technique is designed to generate high quality triangular and polygonal meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric and ocean simulation, and numerical weather prediction. Using a recently developed Frontal-Delaunay-refinement technique, a method for the construction of high-quality unstructured ellipsoidal Delaunay triangulations is introduced. A dual polygonal grid, derived from the associated Voronoi diagram, is also optionally generated as a by-product. Compared to existing techniques, it is shown that the Frontal-Delaunay approach typically produces grids with near-optimal element quality and smooth grading characteristics, while imposing relatively low computational expense. Initial results are presented for a selection of uniform and non-uniform ellipsoidal grids appropriate for large-scale geophysical applications. The use of user-defined mesh-sizing functions to generate smoothly graded, non-uniform grids is discussed.

  1. A numerical differentiation library exploiting parallel architectures

    NASA Astrophysics Data System (ADS)

    Voglis, C.; Hadjidoukas, P. E.; Lagaris, I. E.; Papageorgiou, D. G.

    2009-08-01

    We present a software library for numerically estimating first and second order partial derivatives of a function by finite differencing. Various truncation schemes are offered resulting in corresponding formulas that are accurate to order O(h), O(h^2), and O(h^4), h being the differencing step. The derivatives are calculated via forward, backward and central differences. Care has been taken that only feasible points are used in the case where bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared memory architectures and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters, as well as modern multi-core systems and due to the independent character of the derivative computation, the speedup scales almost linearly with the number of available processors/cores. Program summary: Program title: NDL (Numerical Differentiation Library) Catalogue identifier: AEDG_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 73 030 No. of bytes in distributed program, including test data, etc.: 630 876 Distribution format: tar.gz Programming language: ANSI FORTRAN-77, ANSI C, MPI, OPENMP Computer: Distributed systems (clusters), shared memory systems Operating system: Linux, Solaris Has the code been vectorised or parallelized?: Yes RAM: The library uses O(N) internal storage, N being the dimension of the problem Classification: 4.9, 4.14, 6.5 Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, etc. The parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems. Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries. Restrictions: The library uses only double precision arithmetic. Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed. Running time: Running time depends on the function's complexity. The test run took 15 ms for the serial distribution, 0.6 s for the OpenMP and 4.2 s for the MPI parallel distribution on 2 processors.
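
    As a minimal sketch of the balancing act described above (not the NDL code itself), a central-difference gradient with step h on the order of eps^(1/3) roughly minimizes the combined truncation and round-off error for an O(h^2) formula:

    ```python
    import numpy as np

    EPS = np.finfo(float).eps

    def central_gradient(f, x):
        """O(h^2)-accurate gradient; h ~ eps^(1/3) is near-optimal here."""
        x = np.asarray(x, dtype=float)
        g = np.empty_like(x)
        for i in range(x.size):
            h = EPS ** (1.0 / 3.0) * max(1.0, abs(x[i]))
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
        return g

    rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    print(central_gradient(rosen, [1.2, 1.0]))
    ```

    Note that each coordinate's pair of function evaluations is independent of the others, which is exactly why the library's speedup scales almost linearly with the number of processors or cores.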

  2. Comparing the GPR responses of real experiment and simulation of cavity

    NASA Astrophysics Data System (ADS)

    Yu, H.; Nam, M. J.; Kim, C.; Lee, D. K.

    2017-12-01

    Seoul, the capital city of South Korea, has been suffering from ground subsidence mainly caused by cavities beneath roads. Urban subsidence brings serious social problems such as loss of human life, damage to property, and so on. To prevent ground subsidence, the Korean government has invested heavily in developing techniques to detect cavities in advance. Ground penetrating radar (GPR) is known as the most effective geophysical method for exploring underground cavities, although only shallow ones. To study the GPR responses of underground cavities, real-scale physical models have been built and GPR surveys conducted over them. To simulate cavities of various sizes at various depths, spheres of polystyrene have been used, since the electric permittivity of polystyrene is similar to that of air. However, the real-scale experiments have used only simple cavity shapes due to their expensive construction cost, and the shapes of the cavities cannot be changed once they are built. For comparison between field responses of the physical models and numerical responses, as well as for analyzing GPR responses of more varied cavity shapes in numerous environments, we conducted numerical simulations of GPR responses using a three-dimensional (3D) finite-difference time-domain (FDTD) GPR modeling algorithm employing a staggered grid. We first simulate models similar to the physical models to confirm that the radiation pattern is properly considered in the numerical modeling, which is critical to generating responses similar to field GPR data. Further, GPR responses computed for various cavity shapes in several different environments can both inform additional construction of physical cavity models and help analyze the characteristics of GPR responses.
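
    The kind of update scheme such a staggered-grid FDTD code uses can be illustrated in one dimension. The sketch below is a hedged free-space Yee-grid toy (reflecting ends, soft Gaussian source), not the 3D GPR code itself; the grid size, time step, and source parameters are assumptions.

    ```python
    import numpy as np

    # 1D FDTD on a staggered (Yee) grid: E and H live on interleaved nodes
    c0, mu0 = 299792458.0, 4e-7 * np.pi
    eps0 = 1.0 / (mu0 * c0**2)

    nz, nt = 400, 800
    dz = 0.01                       # 1 cm cells
    dt = dz / (2 * c0)              # Courant-stable time step
    Ex = np.zeros(nz)
    Hy = np.zeros(nz - 1)

    for n in range(nt):
        Hy += dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])        # update H from curl E
        Ex[1:-1] += dt / (eps0 * dz) * (Hy[1:] - Hy[:-1]) # update E from curl H
        Ex[200] += np.exp(-((n - 60) / 15.0) ** 2)        # soft Gaussian source
    ```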

  3. Emerge - A Python environment for the modeling of subsurface transfers

    NASA Astrophysics Data System (ADS)

    Lopez, S.; Smai, F.; Sochala, P.

    2014-12-01

    The simulation of subsurface mass and energy transfers often relies on specific codes that were mainly developed using compiled languages, which usually ensure computational efficiency at the expense of relatively long development times and relatively rigid software. Even if a very detailed, possibly graphical, user-interface is developed, the core numerical aspects are rarely accessible, and the smallest modification will always need a compilation step. Thus, user-defined physical laws or alternative numerical schemes may be relatively difficult to use. Over the last decade, Python has emerged as a popular and widely used language in the scientific community. There already exist several libraries for the pre- and post-treatment of input and output files for reservoir simulators (e.g. pytough). Development times in Python are considerably reduced compared to compiled languages, and programs can be easily interfaced with libraries written in compiled languages, with several comprehensive numerical libraries providing sequential and parallel solvers (e.g. PETSc, Trilinos…). The core objective of the Emerge project is to explore the possibility of developing a modeling environment in full Python. Consequently, we are developing an open Python package with the classes/objects necessary to express, discretize and solve the physical problems encountered in the modeling of subsurface transfers. We heavily relied on Python to have a convenient and concise way of manipulating potentially complex concepts with a few lines of code and a high level of abstraction. Our result aims to be a friendly numerical environment targeting both numerical engineers and physicists or geoscientists, with the possibility to quickly specify and handle geometries, arbitrary meshes, spatially or temporally varying properties, PDE formulations, boundary conditions…

  4. [Diagnostic possibilities of digital volume tomography].

    PubMed

    Lemkamp, Michael; Filippi, Andreas; Berndt, Dorothea; Lambrecht, J Thomas

    2006-01-01

    Cone beam computed tomography allows high quality 3D images of cranio-facial structures. Although detail resolution is increased, x-ray exposure is reduced compared to classic computed tomography. The volume is analysed in three orthogonal planes, which can be rotated independently without quality loss. Cone beam computed tomography seems to be a less expensive and lower-exposure alternative to classic computed tomography.

  5. Scheduling structural health monitoring activities for optimizing life-cycle costs and reliability of wind turbines

    NASA Astrophysics Data System (ADS)

    Hanish Nithin, Anu; Omenzetter, Piotr

    2017-04-01

    Optimization of the life-cycle costs and reliability of offshore wind turbines (OWTs) is an area of immense interest due to the widespread increase in wind power generation across the world. Most existing studies have used structural reliability and Bayesian pre-posterior analysis for optimization. This paper proposes an extension to the previous approaches in a framework for probabilistic optimization of the total life-cycle costs and reliability of OWTs, combining elements of structural reliability/risk analysis (SRA) and Bayesian pre-posterior analysis with optimization through a genetic algorithm (GA). The SRA techniques are adopted to compute the probabilities of damage occurrence and failure associated with the deterioration model. The probabilities are used in the decision tree and are updated using Bayesian analysis. The output of this framework determines the optimal structural health monitoring and maintenance schedules to be implemented during the life span of OWTs while maintaining a trade-off between the life-cycle costs and the risk of structural failure. Numerical illustrations with a generic deterioration model for one monitoring exercise in the life cycle of a system are demonstrated. Two case scenarios, namely building an initially expensive but robust structure versus a cheaper but more quickly deteriorating one, and adopting an expensive monitoring system, are presented to aid the decision-making process.
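
    A hedged sketch of the pre-posterior arithmetic behind such a decision tree compares expected life-cycle costs with and without one monitoring exercise under an imperfect inspection; every probability and cost below is an illustrative assumption, not a value from the paper.

    ```python
    # Expected-cost comparison for a single monitor/no-monitor decision
    p_damage = 0.10            # prior probability the structure is damaged
    p_detect = 0.90            # P(alarm | damage): monitoring quality
    p_false = 0.05             # P(alarm | no damage)
    C_monitor, C_repair, C_failure = 5.0, 50.0, 1000.0
    p_fail_if_damaged = 0.30   # failure risk if damage is left unrepaired

    # Without monitoring: damage goes unrepaired
    cost_no_mon = p_damage * p_fail_if_damaged * C_failure

    # With monitoring: repair whenever the alarm fires (simple policy)
    p_alarm = p_damage * p_detect + (1 - p_damage) * p_false
    p_damage_missed = p_damage * (1 - p_detect)
    cost_mon = (C_monitor + p_alarm * C_repair
                + p_damage_missed * p_fail_if_damaged * C_failure)

    print(cost_no_mon, cost_mon)   # monitor if cost_mon < cost_no_mon
    ```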

  6. hPIN/hTAN: Low-Cost e-Banking Secure against Untrusted Computers

    NASA Astrophysics Data System (ADS)

    Li, Shujun; Sadeghi, Ahmad-Reza; Schmitz, Roland

    We propose hPIN/hTAN, a low-cost token-based e-banking protection scheme for the case when the adversary has full control over the user's computer. Compared with existing hardware-based solutions, hPIN/hTAN depends on neither a second trusted channel, nor a secure keypad, nor a computationally expensive encryption module.

  7. Reduced-order modeling with sparse polynomial chaos expansion and dimension reduction for evaluating the impact of CO2 and brine leakage on groundwater

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Zheng, L.; Pau, G. S. H.

    2016-12-01

    A careful assessment of the risk associated with geologic CO2 storage is critical to the deployment of large-scale storage projects. While numerical modeling is an indispensable tool for risk assessment, there has been increasing need to consider and address uncertainties in the numerical models. However, uncertainty analyses have been significantly hindered by the computational complexity of the model. As a remedy, reduced-order models (ROM), which serve as computationally efficient surrogates for high-fidelity models (HFM), have been employed. The ROM is constructed at the expense of an initial set of HFM simulations, and afterwards can be relied upon to predict the model output values at minimal cost. The ROM presented here is part of the National Risk Assessment Program (NRAP) and is intended to predict the water quality change in groundwater in response to hypothetical CO2 and brine leakage. The HFM from which the ROM is derived is a multiphase flow and reactive transport model, with a 3-D heterogeneous flow field and complex chemical reactions including aqueous complexation, mineral dissolution/precipitation, adsorption/desorption via surface complexation, and cation exchange. Reduced-order modeling techniques based on polynomial basis expansion, such as polynomial chaos expansion (PCE), are widely used in the literature. However, the accuracy of such ROMs can be affected by the sparse structure of the coefficients of the expansion. Failing to identify vanishing polynomial coefficients introduces unnecessary sampling errors, the accumulation of which deteriorates the accuracy of the ROMs. To address this issue, we treat the PCE as a sparse Bayesian learning (SBL) problem, and the sparsity is obtained by detecting and including only the non-zero PCE coefficients one at a time, iteratively selecting the most contributing coefficients. The computational complexity due to predicting the entire 3-D concentration fields is further mitigated by a dimension reduction procedure, proper orthogonal decomposition (POD). Our numerical results show that utilizing the sparse structure and POD significantly enhances the accuracy and efficiency of the ROMs, laying the basis for further analyses that necessitate a large number of model simulations.
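
    The flavor of a sparse PCE can be shown with a toy one-dimensional example: fit Hermite chaos coefficients by least squares, then prune near-zero terms and refit. This thresholding is a simple stand-in for the sparse Bayesian learning used in the paper, and the model function is invented for the example.

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermevander

    rng = np.random.default_rng(1)
    xi = rng.standard_normal(200)                 # standard Gaussian input
    y = 1.0 + 2.0 * xi + 0.5 * (xi**2 - 1.0)      # "model output": 1 + 2*He1 + 0.5*He2

    V = hermevander(xi, deg=6)                    # probabilists' Hermite basis
    coef, *_ = np.linalg.lstsq(V, y, rcond=None)  # dense PCE coefficients

    keep = np.abs(coef) > 1e-3 * np.abs(coef).max()   # sparsity by thresholding
    sol, *_ = np.linalg.lstsq(V[:, keep], y, rcond=None)
    coef_sparse = np.zeros_like(coef)
    coef_sparse[keep] = sol                       # ~[1.0, 2.0, 0.5, 0, 0, 0, 0]
    ```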

  8. The AAHA Computer Program. American Animal Hospital Association.

    PubMed

    Albers, J W

    1986-07-01

    The American Animal Hospital Association Computer Program should benefit all small animal practitioners. Through the availability of well-researched and well-developed certified software, veterinarians will have increased confidence in their purchase decisions. With the expansion of computer applications to improve practice management efficiency, veterinary computer systems will further justify their initial expense. The development of the Association's veterinary computer network will provide a variety of important services to the profession.

  9. Multiobjective Aerodynamic Shape Optimization Using Pareto Differential Evolution and Generalized Response Surface Metamodels

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. The DE algorithm has recently been extended to multiobjective optimization problems by using a Pareto-based approach. In this paper, a Pareto DE algorithm is applied to multiobjective aerodynamic shape optimization problems that are characterized by computationally expensive objective function evaluations. To reduce the computational expense, the algorithm is coupled with generalized response surface meta-models based on artificial neural networks. Results are presented for some test optimization problems from the literature to demonstrate the capabilities of the method.
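
    For reference, a hedged sketch of the classic DE/rand/1/bin scheme underlying such algorithms is shown below in its single-objective form (the Pareto extension replaces the greedy selection with non-dominated sorting). Population size, F, and CR are conventional illustrative values.

    ```python
    import numpy as np

    def de_step(pop, f, F=0.8, CR=0.9, rng=np.random.default_rng()):
        """One generation of DE/rand/1/bin with greedy selection."""
        n, d = pop.shape
        new = pop.copy()
        for i in range(n):
            idx = rng.choice([j for j in range(n) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = a + F * (b - c)                 # differential mutation
            cross = rng.random(d) < CR               # binomial crossover mask
            cross[rng.integers(d)] = True            # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            if f(trial) <= f(pop[i]):                # greedy survivor selection
                new[i] = trial
        return new

    sphere = lambda x: float(np.sum(x**2))
    pop = np.random.default_rng(0).uniform(-5, 5, (20, 4))
    for _ in range(200):
        pop = de_step(pop, sphere)
    ```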

  10. Computer Numerical Control: Instructional Manual. The North Dakota High Technology Mobile Laboratory Project.

    ERIC Educational Resources Information Center

    Sinn, John W.

    This instructional manual contains five learning activity packets for use in a workshop on computer numerical control for computer-aided manufacturing. The lessons cover the following topics: introduction to computer-aided manufacturing, understanding the lathe, using the computer, computer numerically controlled part programming, and executing a…

  11. Model Reduction of Computational Aerothermodynamics for Multi-Discipline Analysis in High Speed Flows

    NASA Astrophysics Data System (ADS)

    Crowell, Andrew Rippetoe

    This dissertation describes model reduction techniques for the computation of aerodynamic heat flux and pressure loads for multi-disciplinary analysis of hypersonic vehicles. NASA and the Department of Defense have expressed renewed interest in the development of responsive, reusable hypersonic cruise vehicles capable of sustained high-speed flight and access to space. However, an extensive set of technical challenges have obstructed the development of such vehicles. These technical challenges are partially due to both the inability to accurately test scaled vehicles in wind tunnels and to the time intensive nature of high-fidelity computational modeling, particularly for the fluid using Computational Fluid Dynamics (CFD). The aim of this dissertation is to develop efficient and accurate models for the aerodynamic heat flux and pressure loads to replace the need for computationally expensive, high-fidelity CFD during coupled analysis. Furthermore, aerodynamic heating and pressure loads are systematically evaluated for a number of different operating conditions, including: simple two-dimensional flow over flat surfaces up to three-dimensional flows over deformed surfaces with shock-shock interaction and shock-boundary layer interaction. An additional focus of this dissertation is on the implementation and computation of results using the developed aerodynamic heating and pressure models in complex fluid-thermal-structural simulations. Model reduction is achieved using a two-pronged approach. One prong focuses on developing analytical corrections to isothermal, steady-state CFD flow solutions in order to capture flow effects associated with transient spatially-varying surface temperatures and surface pressures (e.g., surface deformation, surface vibration, shock impingements, etc.). The second prong is focused on minimizing the computational expense of computing the steady-state CFD solutions by developing an efficient surrogate CFD model. The developed two-pronged approach is found to exhibit balanced performance in terms of accuracy and computational expense, relative to several existing approaches. This approach enables CFD-based loads to be implemented into long duration fluid-thermal-structural simulations.

  12. Four-dimensional volume-of-interest reconstruction for cone-beam computed tomography-guided radiation therapy.

    PubMed

    Ahmad, Moiz; Balter, Peter; Pan, Tinsu

    2011-10-01

    Data sufficiency is a major problem in four-dimensional cone-beam computed tomography (4D-CBCT) on linear accelerator-integrated scanners for image-guided radiotherapy. Scan times must be in the range of 4-6 min to avoid undersampling artifacts. Various image reconstruction algorithms have been proposed to accommodate undersampled data acquisitions, but these algorithms are computationally expensive, may require long reconstruction times, and may require algorithm parameters to be optimized. The authors present a novel reconstruction method, 4D volume-of-interest (4D-VOI) reconstruction, which suppresses undersampling artifacts and resolves lung tumor motion for undersampled 1-min scans. The 4D-VOI reconstruction is much less computationally expensive than other 4D-CBCT algorithms. The 4D-VOI method uses respiration-correlated projection data to reconstruct a four-dimensional (4D) image inside a VOI containing the moving tumor, and uncorrelated projection data to reconstruct a three-dimensional (3D) image outside the VOI. Anatomical motion is resolved inside the VOI and blurred outside the VOI. The authors acquired a 1-min scan of an anthropomorphic chest phantom containing a moving water-filled sphere. The authors also used previously acquired 1-min scans for two lung cancer patients who had received CBCT-guided radiation therapy. The same raw data were used to test and compare the 4D-VOI reconstruction with the standard 4D reconstruction and the McKinnon-Bates (MB) reconstruction algorithms. Both the 4D-VOI and the MB reconstructions suppress nearly all the streak artifacts compared with the standard 4D reconstruction, but the 4D-VOI has 3-8 times greater contrast-to-noise ratio than the MB reconstruction. In the dynamic chest phantom study, the 4D-VOI and the standard 4D reconstructions both resolved a moving sphere with an 18 mm displacement. The 4D-VOI reconstruction shows a motion blur of only 3 mm, whereas the MB reconstruction shows a motion blur of 13 mm. With graphics processing unit hardware used to accelerate computations, the 4D-VOI reconstruction required a 40-s reconstruction time. 4D-VOI reconstruction effectively reduces undersampling artifacts and resolves lung tumor motion in 4D-CBCT. The 4D-VOI reconstruction is computationally inexpensive compared with more sophisticated iterative algorithms. Compared with these algorithms, our 4D-VOI reconstruction is an attractive alternative in 4D-CBCT for reconstructing target motion without generating numerous streak artifacts.
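
    The core combination step can be illustrated with a hedged NumPy sketch: inside the volume of interest, take the respiration-correlated (phase-binned) reconstruction; outside it, take the uncorrelated 3D reconstruction. The arrays below are zero-filled placeholders standing in for actual reconstructions, and the VOI bounds are invented.

    ```python
    import numpy as np

    def combine_voi(recon4d, recon3d, voi_mask):
        """recon4d: (phases, nx, ny, nz); recon3d, voi_mask: (nx, ny, nz)."""
        return np.where(voi_mask[None], recon4d, recon3d[None])

    phases, shape = 10, (64, 64, 64)
    recon4d = np.zeros((phases,) + shape)   # phase-binned reconstruction
    recon3d = np.zeros(shape)               # all-projection reconstruction
    mask = np.zeros(shape, dtype=bool)
    mask[24:40, 24:40, 24:40] = True        # VOI around the moving tumor
    image4d = combine_voi(recon4d, recon3d, mask)
    ```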

  13. Four-dimensional volume-of-interest reconstruction for cone-beam computed tomography-guided radiation therapy

    PubMed Central

    Ahmad, Moiz; Balter, Peter; Pan, Tinsu

    2011-01-01

    Purpose: Data sufficiency is a major problem in four-dimensional cone-beam computed tomography (4D-CBCT) on linear accelerator-integrated scanners for image-guided radiotherapy. Scan times must be in the range of 4–6 min to avoid undersampling artifacts. Various image reconstruction algorithms have been proposed to accommodate undersampled data acquisitions, but these algorithms are computationally expensive, may require long reconstruction times, and may require algorithm parameters to be optimized. The authors present a novel reconstruction method, 4D volume-of-interest (4D-VOI) reconstruction, which suppresses undersampling artifacts and resolves lung tumor motion for undersampled 1-min scans. The 4D-VOI reconstruction is much less computationally expensive than other 4D-CBCT algorithms. Methods: The 4D-VOI method uses respiration-correlated projection data to reconstruct a four-dimensional (4D) image inside a VOI containing the moving tumor, and uncorrelated projection data to reconstruct a three-dimensional (3D) image outside the VOI. Anatomical motion is resolved inside the VOI and blurred outside the VOI. The authors acquired a 1-min scan of an anthropomorphic chest phantom containing a moving water-filled sphere. The authors also used previously acquired 1-min scans for two lung cancer patients who had received CBCT-guided radiation therapy. The same raw data were used to test and compare the 4D-VOI reconstruction with the standard 4D reconstruction and the McKinnon-Bates (MB) reconstruction algorithms. Results: Both the 4D-VOI and the MB reconstructions suppress nearly all the streak artifacts compared with the standard 4D reconstruction, but the 4D-VOI has 3–8 times greater contrast-to-noise ratio than the MB reconstruction. In the dynamic chest phantom study, the 4D-VOI and the standard 4D reconstructions both resolved a moving sphere with an 18 mm displacement. The 4D-VOI reconstruction shows a motion blur of only 3 mm, whereas the MB reconstruction shows a motion blur of 13 mm. With graphics processing unit hardware used to accelerate computations, the 4D-VOI reconstruction required a 40-s reconstruction time. Conclusions: 4D-VOI reconstruction effectively reduces undersampling artifacts and resolves lung tumor motion in 4D-CBCT. The 4D-VOI reconstruction is computationally inexpensive compared with more sophisticated iterative algorithms. Compared with these algorithms, our 4D-VOI reconstruction is an attractive alternative in 4D-CBCT for reconstructing target motion without generating numerous streak artifacts. PMID:21992381

  14. Semiclassical initial value representation for the quantum propagator in the Heisenberg interaction representation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petersen, Jakob; Pollak, Eli, E-mail: eli.pollak@weizmann.ac.il

    2015-12-14

    One of the challenges facing on-the-fly ab initio semiclassical time evolution is the large expense needed to converge the computation. In this paper, we suggest that a significant saving in computational effort may be achieved by employing a semiclassical initial value representation (SCIVR) of the quantum propagator based on the Heisenberg interaction representation. We formulate and test numerically a modification and simplification of the previous semiclassical interaction representation of Shao and Makri [J. Chem. Phys. 113, 3681 (2000)]. The formulation is based on the wavefunction form of the semiclassical propagation instead of the operator form, and so is simpler and cheaper to implement. The semiclassical interaction representation has the advantage that the phase and prefactor vary relatively slowly as compared to the “standard” SCIVR methods. This improves its convergence properties significantly. Using a one-dimensional model system, the approximation is compared with Herman-Kluk’s frozen Gaussian and Heller’s thawed Gaussian approximations. The convergence properties of the interaction representation approach are shown to be favorable and indicate that the interaction representation is a viable way of incorporating on-the-fly force field information within a semiclassical framework.

  15. A Simplified Mesh Deformation Method Using Commercial Structural Analysis Software

    NASA Technical Reports Server (NTRS)

    Hsu, Su-Yuen; Chang, Chau-Lyan; Samareh, Jamshid

    2004-01-01

    Mesh deformation in response to redefined or moving aerodynamic surface geometries is a frequently encountered task in many applications. Most existing methods are either mathematically too complex or computationally too expensive for use in practical design and optimization. We propose a simplified mesh deformation method based on linear elastic finite element analyses that can be easily implemented using commercially available structural analysis software. Using a prescribed displacement at the mesh boundaries, a simple structural analysis is constructed based on a spatially varying Young's modulus to move the entire mesh in accordance with the surface geometry redefinitions. A variety of surface movements, such as translation, rotation, or the incremental surface reshaping that often takes place in an optimization procedure, may be handled by the present method. We describe the numerical formulation and implementation using the NASTRAN software in this paper. The use of commercial software bypasses tedious reimplementation and takes advantage of the computational efficiency offered by the vendor. A two-dimensional airfoil mesh and a three-dimensional aircraft mesh were used as test cases to demonstrate the effectiveness of the proposed method. Euler and Navier-Stokes calculations were performed for the deformed two-dimensional meshes.
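
    The idea of a spatially varying stiffness can be shown in a hedged one-dimensional toy: treat mesh edges as elastic elements that are stiffer near the moving boundary, so cells there translate almost rigidly while distortion is absorbed far away. The stiffness profile below is an assumption for the example, not the paper's formula.

    ```python
    import numpy as np

    n = 11                                  # nodes 0..10; node n-1 moves by 1.0
    x = np.linspace(0.0, 1.0, n)
    E = 1.0 / (1.0 - x[:-1] + 0.05)         # element stiffness, large near the wall

    # Assemble the 1D "spring chain" stiffness matrix K for K u = 0
    K = np.zeros((n, n))
    for e in range(n - 1):
        K[e:e+2, e:e+2] += E[e] * np.array([[1.0, -1.0], [-1.0, 1.0]])

    u = np.zeros(n)
    u[-1] = 1.0                             # prescribed boundary displacement
    interior = slice(1, n - 1)
    rhs = -K[interior, [0, n - 1]] @ u[[0, n - 1]]
    u[interior] = np.linalg.solve(K[interior, interior], rhs)
    # Nodes near the stiff wall move nearly as much as the wall itself
    ```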

  16. A Fast Surrogate-facilitated Data-driven Bayesian Approach to Uncertainty Quantification of a Regional Groundwater Flow Model with Structural Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.

    2016-12-01

    Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error. This may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. In this study, Bayesian inference is facilitated using high performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in the real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by classical Bayesian calibration that does not account for model structural error. In addition, the Bayesian method with an error model gives significantly more accurate predictions along with reasonable credible intervals.

  17. Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem

    NASA Astrophysics Data System (ADS)

    Man, J.; Li, W.; Zeng, L.; Wu, L.

    2015-12-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality". When the system nonlinearity is strong and the number of parameters is large, PCKF is even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on unsaturated flow numerical cases. It is shown that RAPCKF is more efficient than EnKF at the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear and high-dimensional problems.
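
    For context, the stochastic EnKF analysis step that collocation-based filters aim to approximate more cheaply can be written in a few lines; the sketch below uses perturbed observations and sample covariances, with ensemble size, observation operator, and noise levels chosen purely for illustration.

    ```python
    import numpy as np

    def enkf_update(ens, obs, H, R, rng=np.random.default_rng()):
        """ens: (n_ens, n_state); obs: (n_obs,); H: (n_obs, n_state)."""
        n_ens = ens.shape[0]
        A = ens - ens.mean(axis=0)                        # state anomalies
        Y = A @ H.T                                       # predicted-obs anomalies
        C = Y.T @ Y / (n_ens - 1) + R                     # innovation covariance
        K = (A.T @ Y / (n_ens - 1)) @ np.linalg.inv(C)    # Kalman gain
        obs_pert = obs + rng.multivariate_normal(np.zeros(len(obs)), R, n_ens)
        return ens + (obs_pert - ens @ H.T) @ K.T         # analysis ensemble

    ens = np.random.default_rng(0).normal(0.0, 1.0, (50, 3))
    H = np.array([[1.0, 0.0, 0.0]])      # observe the first state component
    R = np.array([[0.1]])                # observation-error covariance
    ens = enkf_update(ens, np.array([0.5]), H, R)
    ```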

  18. Efficient Radiative Transfer for Dynamically Evolving Stratified Atmospheres

    NASA Astrophysics Data System (ADS)

    Judge, Philip G.

    2017-12-01

    We present a fast multi-level and multi-atom non-local thermodynamic equilibrium radiative transfer method for dynamically evolving stratified atmospheres, such as the solar atmosphere. The preconditioning method of Rybicki & Hummer (RH92) is adopted. Pressed by the need for speed and stability, however, a “second-order escape probability” scheme is implemented within the framework of the RH92 method, in which frequency- and angle-integrals are carried out analytically. This minimizes the computational work needed, but comes at the expense of numerical accuracy. The iteration scheme is local; the formal solutions for the intensities are the only non-local component. At present the methods have been coded for vertical transport, applicable to atmospheres that are highly stratified. The probabilistic method seems adequately fast, stable, and sufficiently accurate for exploring dynamical interactions between the evolving MHD atmosphere and radiation using current computer hardware. Current 2D and 3D dynamics codes do not include this interaction as consistently as the current method does. The solutions generated may ultimately serve as initial conditions for dynamical calculations including full 3D radiative transfer. The National Center for Atmospheric Research is sponsored by the National Science Foundation.

  19. High Frequency Sampling of TTL Pulses on a Raspberry Pi for Diffuse Correlation Spectroscopy Applications.

    PubMed

    Tivnan, Matthew; Gurjar, Rajan; Wolf, David E; Vishwanath, Karthik

    2015-08-12

    Diffuse Correlation Spectroscopy (DCS) is a well-established optical technique that has been used for non-invasive measurement of blood flow in tissues. Instrumentation for DCS includes a correlation device that computes the temporal intensity autocorrelation of a coherent laser source after it has undergone diffuse scattering through a turbid medium. Typically, the signal acquisition and its autocorrelation are performed by a correlation board. These boards have dedicated hardware to acquire and compute intensity autocorrelations of rapidly varying input signal and usually are quite expensive. Here we show that a Raspberry Pi minicomputer can acquire and store a rapidly varying time-signal with high fidelity. We show that this signal collected by a Raspberry Pi device can be processed numerically to yield intensity autocorrelations well suited for DCS applications. DCS measurements made using the Raspberry Pi device were compared to those acquired using a commercial hardware autocorrelation board to investigate the stability, performance, and accuracy of the data acquired in controlled experiments. This paper represents a first step toward lowering the instrumentation cost of a DCS system and may offer the potential to make DCS become more widely used in biomedical applications.
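
    The numerical post-processing described above amounts to computing the normalized intensity autocorrelation g2(tau) = <I(t) I(t+tau)> / <I>^2 from the sampled trace, the quantity a hardware correlator board would otherwise return. The sketch below is a hedged direct-summation illustration on synthetic data, not the authors' acquisition code.

    ```python
    import numpy as np

    def g2(intensity, max_lag):
        """g2(tau) for tau = 0..max_lag-1 samples, by direct averaging."""
        I = np.asarray(intensity, dtype=float)
        mean_sq = I.mean() ** 2
        return np.array([np.mean(I[:len(I) - tau] * I[tau:]) / mean_sq
                         for tau in range(max_lag)])

    # Synthetic photon-count trace with a short correlation time
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(100000)
    I = 10.0 + np.convolve(noise, np.ones(20) / 20, mode="same")
    curve = g2(I, max_lag=200)     # decays toward 1 as tau grows
    ```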

  20. High Frequency Sampling of TTL Pulses on a Raspberry Pi for Diffuse Correlation Spectroscopy Applications

    PubMed Central

    Tivnan, Matthew; Gurjar, Rajan; Wolf, David E.; Vishwanath, Karthik

    2015-01-01

    Diffuse Correlation Spectroscopy (DCS) is a well-established optical technique that has been used for non-invasive measurement of blood flow in tissues. Instrumentation for DCS includes a correlation device that computes the temporal intensity autocorrelation of a coherent laser source after it has undergone diffuse scattering through a turbid medium. Typically, the signal acquisition and its autocorrelation are performed by a correlation board. These boards have dedicated hardware to acquire and compute intensity autocorrelations of rapidly varying input signal and usually are quite expensive. Here we show that a Raspberry Pi minicomputer can acquire and store a rapidly varying time-signal with high fidelity. We show that this signal collected by a Raspberry Pi device can be processed numerically to yield intensity autocorrelations well suited for DCS applications. DCS measurements made using the Raspberry Pi device were compared to those acquired using a commercial hardware autocorrelation board to investigate the stability, performance, and accuracy of the data acquired in controlled experiments. This paper represents a first step toward lowering the instrumentation cost of a DCS system and may offer the potential to make DCS become more widely used in biomedical applications. PMID:26274961

  1. Bayesian Treed Multivariate Gaussian Process with Adaptive Design: Application to a Carbon Capture Unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konomi, Bledar A.; Karagiannis, Georgios; Sarkar, Avik

    2014-05-16

    Computer experiments (numerical simulations) are widely used in scientific research to study and predict the behavior of complex systems, which usually have responses consisting of a set of distinct outputs. The computational cost of the simulations at high resolution is often expensive and becomes impractical for parametric studies at different input values. To overcome these difficulties we develop a Bayesian treed multivariate Gaussian process (BTMGP) as an extension of the Bayesian treed Gaussian process (BTGP) in order to model and evaluate a multivariate process. A suitable choice of covariance function and the prior distributions facilitates the different Markov chain Monte Carlo (MCMC) movements. We utilize this model to sequentially sample the input space for the most informative values, taking into account model uncertainty and expertise gained. A simulation study demonstrates the use of the proposed method and compares it with alternative approaches. We apply the sequential sampling technique and BTMGP to model the multiphase flow in a full scale regenerator of a carbon capture unit. The application presented in this paper is an important tool for research into carbon dioxide emissions from thermal power plants.

  2. Ensemble modeling of stochastic unsteady open-channel flow in terms of its time-space evolutionary probability distribution - Part 1: theoretical development

    NASA Astrophysics Data System (ADS)

    Dib, Alain; Kavvas, M. Levent

    2018-03-01

    The Saint-Venant equations are commonly used as the governing equations for modeling spatially varied unsteady flow in open channels. The presence of uncertainties in the channel or flow parameters renders these equations stochastic, thus requiring their solution in a stochastic framework in order to quantify the ensemble behavior and the variability of the process. While the Monte Carlo approach can be used for such a solution, its computational expense and its large number of simulations act to its disadvantage. This study proposes, explains, and derives a new methodology for solving the stochastic Saint-Venant equations in only one shot, without the need for a large number of simulations. The proposed methodology is derived by developing the nonlocal Lagrangian-Eulerian Fokker-Planck equation of the characteristic form of the stochastic Saint-Venant equations for an open-channel flow process with an uncertain roughness coefficient. A numerical method for its solution is subsequently devised. The application and validation of this methodology are provided in a companion paper, in which the statistical results computed by the proposed methodology are compared against the results obtained by the Monte Carlo approach.

  3. Off-design Performance Analysis of Multi-Stage Transonic Axial Compressors

    NASA Astrophysics Data System (ADS)

    Du, W. H.; Wu, H.; Zhang, L.

    Because of the complex flow fields and component interactions in modern gas turbine engines, extensive experiments are required to validate their performance and stability. The experimental process can become expensive and complex. Modeling and simulation of gas turbine engines are a way to reduce experimental costs, provide fidelity, and enhance the quality of essential experiments. The flow field of a transonic compressor contains all the flow aspects that are difficult to predict: boundary-layer transition and separation, shock-boundary layer interactions, and large flow unsteadiness. Accurate off-design performance prediction for transonic axial compressors is especially difficult, due in large part to three-dimensional blade design and the resulting flow field. Although recent advancements in computer capacity have brought computational fluid dynamics to the forefront of turbomachinery design and analysis, grid resolution and turbulence models still limit Reynolds-averaged Navier-Stokes (RANS) approximations in the multi-stage transonic axial compressor flow field. Streamline curvature methods therefore remain the dominant numerical approach and an important tool for turbomachinery analysis and design, and it is generally accepted that streamline curvature solution techniques will provide satisfactory flow prediction as long as the losses, deviation and blockage are accurately predicted.

  4. Capturing remote mixing due to internal tides using multi-scale modeling tool: SOMAR-LES

    NASA Astrophysics Data System (ADS)

    Santilli, Edward; Chalamalla, Vamsi; Scotti, Alberto; Sarkar, Sutanu

    2016-11-01

    Internal tides that are generated during the interaction of an oscillating barotropic tide with the bottom bathymetry dissipate only a fraction of their energy near the generation region. The rest is radiated away in the form of low- and high-mode internal tides. These internal tides dissipate energy at remote locations when they interact with the upper ocean pycnocline, continental slopes, and large scale eddies. Capturing the wide range of length and time scales involved during the life cycle of internal tides is computationally very expensive. A recently developed multi-scale modeling tool called SOMAR-LES combines the adaptive grid refinement features of SOMAR with the turbulence modeling features of a Large Eddy Simulation (LES) to capture multi-scale processes at a reduced computational cost. Numerical simulations of internal tide generation at idealized bottom bathymetries are performed to demonstrate this multi-scale modeling technique. Although each of the remote mixing phenomena has been considered independently in previous studies, this work aims to capture remote mixing processes during the life cycle of an internal tide in more realistic settings, by allowing multi-level (coarse and fine) grids to co-exist and exchange information during the time stepping process.

  5. A fast solver for the Helmholtz equation based on the generalized multiscale finite-element method

    NASA Astrophysics Data System (ADS)

    Fu, Shubin; Gao, Kai

    2017-11-01

    Conventional finite-element methods for solving the acoustic-wave Helmholtz equation in highly heterogeneous media usually require a finely discretized mesh to represent the medium property variations with sufficient accuracy. Computational costs for solving the Helmholtz equation can therefore become considerable for complicated and large geological models. Based on the generalized multiscale finite-element theory, we develop a novel continuous Galerkin method to solve the Helmholtz equation in acoustic media with spatially variable velocity and mass density. Instead of using conventional polynomial basis functions, we use multiscale basis functions to form the approximation space on the coarse mesh. The multiscale basis functions are obtained by multiplying the eigenfunctions of a carefully designed local spectral problem with an appropriate multiscale partition of unity. These multiscale basis functions can effectively incorporate the characteristics of the heterogeneous medium's fine-scale variations, thus enabling us to obtain an accurate solution to the Helmholtz equation without directly solving the large discrete system formed on the fine mesh. Numerical results show that our new solver can significantly reduce the dimension of the discrete Helmholtz equation system, and can also markedly reduce the computational time.

  6. A hybrid formalism of aerosol gas phase interaction for 3-D global models

    NASA Astrophysics Data System (ADS)

    Benduhn, F.

    2009-04-01

    Aerosol chemical composition is a relevant factor in the global climate system with respect to both atmospheric chemistry and the aerosol direct and indirect effects. Aerosol chemical composition determines the capacity of aerosol particles to act as cloud condensation nuclei both explicitly via particle size and implicitly via the aerosol hygroscopic property. Due to the primary role of clouds in the climate system and the sensitivity of cloud formation and radiative properties to the cloud droplet number, it is necessary to determine with accuracy the chemical composition of the aerosol. Dissolution, although a formally fairly well known process, may be subject to numerically prohibitive properties that result from the chemical interaction of the species engaged. Approaches so far to modeling the dissolution of inorganics into the aerosol liquid phase in the framework of a 3-D global model were based on an equilibrium, transient or hybrid equilibrium-transient approach. All of these methods present the disadvantage of a priori assumptions with respect to the mechanism and/or are numerically not manageable in the context of a global climate system model. In this paper a new hybrid formalism for aerosol gas phase interaction is presented within the framework of the H2SO4/HNO3/HCl/NH3 system and a modal approach to aerosol size discretisation. The formalism is distinct from prior hybrid approaches inasmuch as no a priori assumption is made about the regime a particular aerosol mode is in. Whether a particular mode is set to be in the equilibrium or the transitory regime is continuously determined during each time increment against relevant criteria considering the estimated equilibration time interval and the interdependence of the aerosol modes relative to the partitioning of the dissolving species. In doing so, the aerosol composition range of numerical stiffness due to species interaction during transient dissolution is effectively avoided, and the numerical expense of dissolution in the transient regime is reduced through the minimisation of the number of modes in this regime and a larger time step. Containment of the numerical expense of the modes in the equilibrium regime is ensured through the usage of either an analytical equilibrium solver that requires iteration among the equilibrium modes, or a simple numerical solver based on a differential approach that requires iteration among the chemical species. Both equilibrium solvers require iteration over the water content and the activity coefficients. The decision to use one or the other solver is made by considering the actual equilibrating mechanism, either chemical interaction or gas phase partial pressure variation, respectively. The formalism should thus combine appropriate process simplification, resulting in reasonable computation time, with a high degree of conformity to the real process, as ensured by a transitory representation of dissolution. The resulting effectiveness and limits of the formalism are illustrated with numerical examples.

  7. Utilizing Adjoint-Based Error Estimates for Surrogate Models to Accurately Predict Probabilities of Events

    DOE PAGES

    Butler, Troy; Wildey, Timothy

    2018-01-01

    In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
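
    A minimal Python sketch of the reliable/unreliable splitting described above, assuming a surrogate with a computable error estimate (e.g. adjoint-based); all function names are placeholders:

        def probability_of_event(samples, surrogate, error_estimate,
                                 high_fidelity, threshold):
            """Estimate P[f(x) > threshold], trusting the surrogate only where
            its error bound cannot flip the sample across the limit state."""
            hits = 0
            for x in samples:
                q = surrogate(x)
                e = error_estimate(x)       # bound on |f(x) - surrogate(x)|
                if abs(q - threshold) > abs(e):
                    hits += q > threshold   # reliable sample
                else:
                    hits += high_fidelity(x) > threshold  # recompute exactly
            return hits / len(samples)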

  9. CORDIC-based digital signal processing (DSP) element for adaptive signal processing

    NASA Astrophysics Data System (ADS)

    Bolstad, Gregory D.; Neeld, Kenneth B.

    1995-04-01

    The High Performance Adaptive Weight Computation (HAWC) processing element is a CORDIC-based, application-specific DSP element that, when connected in a linear array, can perform extremely high throughput (hundreds of GFLOPS) matrix arithmetic operations on linear systems of equations in real time. In particular, it very efficiently performs the numerically intense computation of optimal least squares solutions for large, over-determined linear systems. Most techniques for computing solutions to these types of problems have used either a hard-wired, non-programmable systolic array approach or, more commonly, programmable DSP or microprocessor approaches. The custom logic methods can be efficient but are generally inflexible. Approaches using multiple programmable generic DSP devices are very flexible, but suffer from poor efficiency and high computation latencies, primarily due to the large number of DSP devices that must be utilized to achieve the necessary arithmetic throughput. The HAWC processor is implemented as a highly optimized systolic array, yet retains some of the flexibility of a programmable data-flow system, allowing efficient implementation of algorithm variations. This provides flexible matrix processing capabilities that are one to three orders of magnitude less expensive and more dense than the current state of the art and, more importantly, allows a realizable solution to matrix processing problems that were previously considered impractical to physically implement. HAWC has direct applications in RADAR, SONAR, communications, and image processing, as well as in many other types of systems.
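
    The CORDIC primitive underlying such an element is a shift-and-add rotation; a floating-point Python model of the vectoring mode (which Givens-rotation-based least squares builds on) might look as follows, assuming x > 0 initially:

        import math

        def cordic_vectoring(x, y, n_iter=32):
            """Rotate (x, y) onto the x-axis; return (magnitude, angle)."""
            angle = 0.0
            for i in range(n_iter):
                d = -1.0 if y > 0 else 1.0            # rotate toward the axis
                x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
                angle -= d * math.atan(2.0**-i)
            # Compensate the accumulated CORDIC gain
            K = math.prod(1.0 / math.sqrt(1.0 + 4.0**-i) for i in range(n_iter))
            return x * K, angle

    In hardware the multiplications by 2**-i become bit shifts, which is what makes such an element inexpensive relative to general-purpose DSPs.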

  10. Solving the chemical master equation using sliding windows

    PubMed Central

    2010-01-01

    Background The chemical master equation (CME) is a system of ordinary differential equations that describes the evolution of a network of chemical reactions as a stochastic process. Its solution yields the probability density vector of the system at each point in time. Solving the CME numerically is in many cases computationally expensive or even infeasible as the number of reachable states can be very large or infinite. We introduce the sliding window method, which computes an approximate solution of the CME by performing a sequence of local analysis steps. In each step, only a manageable subset of states is considered, representing a "window" into the state space. In subsequent steps, the window follows the direction in which the probability mass moves, until the time period of interest has elapsed. We construct the window based on a deterministic approximation of the future behavior of the system by estimating upper and lower bounds on the populations of the chemical species. Results In order to show the effectiveness of our approach, we apply it to several examples previously described in the literature. The experimental results show that the proposed method speeds up the analysis considerably, compared to a global analysis, while still providing high accuracy. Conclusions The sliding window method is a novel approach to address the performance problems of numerical algorithms for the solution of the chemical master equation. The method efficiently approximates the probability distributions at the time points of interest for a variety of chemically reacting systems, including systems for which no upper bound on the population sizes of the chemical species is known a priori. PMID:20377904
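
    A toy sketch of the sliding-window idea for a one-species birth-death system (not the paper's implementation): only states inside a moving window enter the truncated generator at each local analysis step.

        import numpy as np
        from scipy.linalg import expm

        def cme_sliding_window(x0, birth, death, t_end, dt, half_width=50):
            """Approximate CME solution; returns {state: probability}."""
            center, p, t = x0, {x0: 1.0}, 0.0
            while t < t_end:
                states = np.arange(max(0, center - half_width),
                                   center + half_width + 1)
                n = len(states)
                Q = np.zeros((n, n))               # truncated generator
                for j, s in enumerate(states):
                    if j + 1 < n: Q[j + 1, j] = birth(s)
                    if j - 1 >= 0: Q[j - 1, j] = death(s)
                    Q[j, j] = -(birth(s) + death(s))
                v = np.array([p.get(int(s), 0.0) for s in states])
                v = expm(Q * dt) @ v               # local analysis step
                p = {int(s): pv for s, pv in zip(states, v)}
                center = int(states @ v / max(v.sum(), 1e-300))  # follow mass
                t += dt
            return p

    For example, cme_sliding_window(0, lambda s: 10.0, lambda s: 0.1 * s, 50.0, 1.0) tracks an immigration-death process whose window drifts toward the steady-state population.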

  11. 26 CFR 1.50B-3 - Estates and trusts.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 1 2010-04-01 2010-04-01 true Estates and trusts. 1.50B-3 Section 1.50B-3... Computing Credit for Expenses of Work Incentive Programs § 1.50B-3 Estates and trusts. (a) General rule—(1) In general. In the case of an estate or trust, WIN expenses (as defined in paragraph (a) of § 1.50B-1...

  12. Paradigm Paralysis and the Plight of the PC in Education.

    ERIC Educational Resources Information Center

    O'Neil, Mick

    1998-01-01

    Examines the varied factors involved in providing Internet access in K-12 education, including expense, computer installation and maintenance, and security, and explores how the network computer could be useful in this context. Operating systems and servers are discussed. (MSE)

  13. Computational Modeling in Concert with Laboratory Studies: Application to B Cell Differentiation

    EPA Science Inventory

    Remediation is expensive, so accurate prediction of dose-response is important to help control costs. Dose response is a function of biological mechanisms. Computational models of these mechanisms improve the efficiency of research and provide the capability for prediction.

  14. Research in applied mathematics, numerical analysis, and computer science

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.

  15. A Talking Computers System for Persons with Vision and Speech Handicaps. Final Report.

    ERIC Educational Resources Information Center

    Visek & Maggs, Urbana, IL.

    This final report contains a detailed description of six software systems designed to assist individuals with blindness and/or speech disorders in using inexpensive, off-the-shelf computers rather than expensive custom-made devices. The developed software is not written in the native machine language of any particular brand of computer, but in the…

  16. High Throughput Screening For Hazard and Risk of Environmental Contaminants

    EPA Science Inventory

    High throughput toxicity testing provides detailed mechanistic information on the concentration response of environmental contaminants in numerous potential toxicity pathways. High throughput screening (HTS) has several key advantages: (1) expense orders of magnitude less than an...

  17. [Cost analysis for navigation in knee endoprosthetics].

    PubMed

    Cerha, O; Kirschner, S; Günther, K-P; Lützner, J

    2009-12-01

    Total knee arthroplasty (TKA) is one of the most frequent procedures in orthopaedic surgery. The outcome depends on a range of factors including alignment of the leg and positioning of the implant, in addition to patient-associated factors. Computer-assisted navigation systems can improve the restoration of a neutral leg alignment. This procedure has become established especially in Europe and North America, but the additional expenses are not reimbursed in the German DRG system (Diagnosis Related Groups). In the present study, a cost analysis of computer-assisted TKA compared to the conventional technique was performed. The acquisition expenses of various navigation systems (5 and 10 year depreciation), annual costs for maintenance and software updates, and the accompanying costs per operation (consumables, additional operating time) were considered. The additional operating time was determined on the basis of a meta-analysis of the current literature. Situations with 25, 50, 100, 200 and 500 computer-assisted TKAs per year were simulated. The size of the incremental costs of computer-assisted TKA depends mainly on the annual volume and the additional operating time. A relevant decrease of the incremental costs was detected between 50 and 100 procedures per year. In a model with 100 computer-assisted TKAs per year, an additional operating time of 14 min, and a 10 year depreciation of the investment costs, the incremental expenses amount to EUR 300-395, depending on the navigation system. Computer-assisted TKA is associated with additional costs. From an economic point of view, a volume of more than 50 procedures per year appears favourable. Cost-effectiveness could be established if long-term results show a reduction in revisions or a better clinical outcome.
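
    To make the arithmetic concrete, the per-case incremental cost can be modeled as annual fixed costs (depreciation plus maintenance) divided by annual volume, plus consumables and the cost of the extra operating time. All figures below are hypothetical placeholders, not the study's data:

        def incremental_cost_per_case(n_cases, acquisition=250000.0,
                                      depreciation_years=10, maintenance=10000.0,
                                      consumables=150.0, extra_or_min=14,
                                      or_cost_per_min=10.0):
            """EUR per navigated TKA above the conventional technique (sketch)."""
            fixed_per_year = acquisition / depreciation_years + maintenance
            return (fixed_per_year / n_cases + consumables
                    + extra_or_min * or_cost_per_min)

        for n in (25, 50, 100, 200, 500):
            print(n, round(incremental_cost_per_case(n), 2))

    The fixed-cost term decays as 1/n, which is why the incremental cost flattens beyond roughly 50-100 procedures per year.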

  18. Use of less expensive cigarettes in six cities in China: findings from the International Tobacco Control (ITC) China Survey.

    PubMed

    Li, Qiang; Hyland, Andrew; Fong, Geoffrey T; Jiang, Yuan; Elton-Marshall, Tara

    2010-10-01

    The existence of less expensive cigarettes in China may undermine public health. The aim of the current study is to examine the use of less expensive cigarettes in six cities in China. Data was from the baseline wave of the International Tobacco Control (ITC) China Survey of 4815 adult urban smokers in 6 cities, conducted between April and August 2006. The percentage of smokers who reported buying less expensive cigarettes (the lowest pricing tertile within each city) at last purchase was computed. Complex sample multivariate logistic regression models were used to identify factors associated with use of less expensive cigarettes. The association between the use of less expensive cigarettes and intention to quit smoking was also examined. Smokers who reported buying less expensive cigarettes at last purchase tended to be older, heavier smokers, to have lower education and income, and to think more about the money spent on smoking in the last month. Smokers who bought less expensive cigarettes at the last purchase and who were less knowledgeable about the health harm of smoking were less likely to intend to quit smoking. Measures need to be taken to minimise the price differential among cigarette brands and to increase smokers' health knowledge, which may in turn increase their intentions to quit.

  19. Using Adaptive Mesh Refinement to Simulate Storm Surge

    NASA Astrophysics Data System (ADS)

    Mandli, K. T.; Dawson, C.

    2012-12-01

    Coastal hazards related to strong storms such as hurricanes and typhoons are among the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the associated rise in sea level. Unfortunately, these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in the design of infrastructure or in forecasting with ensembles of probable storms. One solution to the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR works by decomposing the computational domain into regions which may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows computational resolution to be placed independent of user interaction, of expectations about the dynamics of the flow, and of particular regions of interest such as harbors. Many applications have only been made feasible by AMR-type algorithms, which allow otherwise impractical simulations to be performed at much lower computational expense. Our work studies how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues concerning refinement criteria, optimal resolution and refinement ratios, and inundation.
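
    A schematic of the kind of cell-flagging criterion AMR codes apply (the actual criteria in GeoClaw differ and are more elaborate): refine wherever the sea surface deviates from rest by more than a tolerance.

        import numpy as np

        def flag_cells(eta, sea_level=0.0, tol=0.05):
            """Boolean mask of cells to refine on one patch.

            eta : 2-D array of water-surface elevation (m)
            tol : surface perturbation (m) that triggers refinement
            """
            return np.abs(eta - sea_level) > tol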

  20. Applications of Automation Methods for Nonlinear Fracture Test Analysis

    NASA Technical Reports Server (NTRS)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    Using automated and standardized computer tools to calculate the pertinent test result values has several advantages: 1. allowing high-fidelity solutions to complex nonlinear phenomena that would be impractical to express in written equation form; 2. eliminating errors associated with the interpretation and programming of analysis procedures from the text of test standards; 3. lessening the need for expertise in the areas of solid mechanics, fracture mechanics, numerical methods, and/or finite element modeling to achieve sound results; and 4. providing one computer tool and/or one set of solutions for all users, for a more "standardized" answer. In summary, this approach allows a non-expert with rudimentary training to get the best practical solution based on the latest understanding with minimum difficulty. Other existing ASTM standards that cover complicated phenomena use standard computer programs: 1. ASTM C1340/C1340M-10, Standard Practice for Estimation of Heat Gain or Loss Through Ceilings Under Attics Containing Radiant Barriers by Use of a Computer Program; 2. ASTM F2815, Standard Practice for Chemical Permeation through Protective Clothing Materials: Testing Data Analysis by Use of a Computer Program; 3. ASTM E2807, Standard Specification for 3D Imaging Data Exchange, Version 1.0. The verification, validation, and round-robin processes required of a computer tool closely parallel the methods used to ensure the validity of equations included in test standards. The use of automated analysis tools allows the creation and practical implementation of advanced fracture mechanics test standards that capture the physics of a nonlinear fracture mechanics problem without adding undue burden or expense for the user. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards, which will solve complex problems through analysis automation.

  1. Summary of research in applied mathematics, numerical analysis, and computer sciences

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The major categories of current ICASE research programs addressed include: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effective numerical methods; computational problems in engineering and physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and computer systems and software, especially vector and parallel computers.

  2. A review of recent advances in the spherical harmonics expansion method for semiconductor device simulation.

    PubMed

    Rupp, K; Jungemann, C; Hong, S-M; Bina, M; Grasser, T; Jüngel, A

    The Boltzmann transport equation is commonly considered to be the best semi-classical description of carrier transport in semiconductors, providing precise information about the distribution of carriers with respect to time (one dimension), location (three dimensions), and momentum (three dimensions). However, numerical solutions for the seven-dimensional carrier distribution functions are very demanding. The most common solution approach is the stochastic Monte Carlo method, because the gigabytes of memory required by deterministic direct solution approaches have not been available until recently. As a remedy, the higher accuracy provided by solutions of the Boltzmann transport equation is often exchanged for lower computational expense by using simpler models based on macroscopic quantities such as carrier density and mean carrier velocity. Recent developments for the deterministic spherical harmonics expansion method have reduced the computational cost for solving the Boltzmann transport equation, enabling the computation of carrier distribution functions even for spatially three-dimensional device simulations within minutes to hours. We summarize recent progress for the spherical harmonics expansion method and show that small currents, reasonable execution times, and rare events such as low-frequency noise, which are all hard or even impossible to simulate with the established Monte Carlo method, can be handled in a straightforward manner. The applicability of the method for important practical applications is demonstrated for noise simulation, small-signal analysis, hot-carrier degradation, and avalanche breakdown.

  3. Quadratic String Method for Locating Instantons in Tunneling Splitting Calculations.

    PubMed

    Cvitaš, Marko T

    2018-03-13

    The ring-polymer instanton (RPI) method is an efficient technique for calculating approximate tunneling splittings in high-dimensional molecular systems. In the RPI method, the tunneling splitting is evaluated from the properties of the minimum action path (MAP) connecting the symmetric wells, whereby the extensive sampling of the full potential energy surface required by exact quantum-dynamics methods is avoided. Nevertheless, the search for the MAP is usually the most time-consuming step in the standard numerical procedures. Recently, nudged elastic band (NEB) and string methods, originally developed for locating minimum energy paths (MEPs), were adapted for the purpose of MAP finding with great efficiency gains [J. Chem. Theory Comput. 2016, 12, 787]. In this work, we develop a new quadratic string method for locating instantons. The Euclidean action is minimized by propagating the initial guess (a path connecting two wells) over the quadratic potential energy surface approximated by means of updated Hessians. This allows the algorithm to take many minimization steps between the potential/gradient calls, with further reductions in the computational effort, exploiting the smoothness of the potential energy surface. The approach is general, as it uses Cartesian coordinates, and widely applicable, with the computational effort of finding the instanton usually lower than that of determining the MEP. It can be combined with expensive potential energy surfaces or on-the-fly electronic-structure methods to explore a wide variety of molecular systems.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajagopal, K. R.; Rao, I. J.

    The procedures in place for producing materials in order to optimize their performance with respect to creep characteristics, oxidation resistance, elevation of melting point, thermal and electrical conductivity, and other thermal and electrical properties are essentially trial-and-error experimentation that tends to be tremendously time consuming and expensive. A computational approach has been developed that can replace these trial-and-error procedures so that materials can be efficiently designed and engineered for the application in question, leading to enhanced material performance, significant cost reductions, and shorter production times. The work has relevance to the design and manufacture of turbine blades operating at high operating temperature; the development of armor and missile heads; corrosion-resistant tanks and containers; better conductors of electricity; and the numerous other applications envisaged for specially structured nanocrystalline solids. A robust thermodynamic framework is developed within which the computational approach is formulated. The procedure takes into account microstructural features such as the dislocation density, lattice mismatch, stacking faults, volume fractions of inclusions, and interfacial area. A robust model for single-crystal superalloys that takes into account the microstructure of the alloy within the context of a continuum model is developed. Having developed the model, we then implement it in a computational scheme using the software ABAQUS/STANDARD. The results of the simulation are compared against experimental data in realistic geometries.

  6. Toward giga-pixel nanoscopy on a chip: a computational wide-field look at the nano-scale without the use of lenses.

    PubMed

    McLeod, Euan; Luo, Wei; Mudanyali, Onur; Greenbaum, Alon; Ozcan, Aydogan

    2013-06-07

    The development of lensfree on-chip microscopy in the past decade has opened up various new possibilities for biomedical imaging across ultra-large fields of view using compact, portable, and cost-effective devices. However, until recently, its ability to resolve fine features and detect ultra-small particles has not rivalled the capabilities of the more expensive and bulky laboratory-grade optical microscopes. In this Frontier Review, we highlight the developments over the last two years that have enabled computational lensfree holographic on-chip microscopy to compete with and, in some cases, surpass conventional bright-field microscopy in its ability to image nano-scale objects across large fields of view, yielding giga-pixel phase and amplitude images. Lensfree microscopy has now achieved a numerical aperture as high as 0.92, with a spatial resolution as small as 225 nm across a large field of view, e.g., >20 mm². Furthermore, the combination of lensfree microscopy with self-assembled nanolenses, forming nano-catenoid minimal surfaces around individual nanoparticles, has boosted the image contrast to levels high enough to permit bright-field imaging of individual particles smaller than 100 nm. These capabilities support a number of new applications, including, for example, the detection and sizing of individual virus particles using field-portable computational on-chip microscopes.

  7. Timing and Mode of Landscape Response to Glacial-Interglacial Climate Forcing From Fluvial Fill Terrace Sediments: Humahuaca Basin, E Cordillera, NW Argentina

    NASA Astrophysics Data System (ADS)

    Schildgen, T. F.; Robinson, R. A. J.; Savi, S.; Bookhagen, B.; Tofelde, S.; Strecker, M. R.

    2014-12-01

    Numerical modelling informs risk assessment of tsunamis generated by submarine slides; however, for large-scale slides, modelling can be complex and computationally challenging. Many previous numerical studies have approximated slides as rigid blocks that moved according to prescribed motion. However, wave characteristics are strongly dependent on the motion of the slide, and previous work has recommended that more accurate representation of slide dynamics is needed. We have used the finite-element, adaptive-mesh CFD model Fluidity to perform multi-material simulations of deformable submarine slide-generated waves at real-world scales for a 2D scenario in the Gulf of Mexico. Our high-resolution approach represents slide dynamics with good accuracy, compared to other numerical simulations of this scenario, but precludes tracking of wave propagation over large distances. To enable efficient modelling of further propagation of the waves, we investigate an approach to extract information about the slide evolution from our multi-material simulations in order to drive a single-layer wave propagation model, also using Fluidity, which is much less computationally expensive. The extracted submarine slide geometry and position as a function of time are parameterised using simple polynomial functions. The polynomial functions are used to inform a prescribed velocity boundary condition in a single-layer simulation, mimicking the effect the submarine slide motion has on the water column. The approach is verified by successful comparison of wave generation in the single-layer model with that recorded in the multi-material, multi-layer simulations. We then extend this approach to 3D for further validation of this methodology (using the Gulf of Mexico scenario proposed by Horrillo et al., 2013) and to consider the effect of lateral spreading. This methodology is then used to simulate a series of hypothetical submarine slide events in the Arctic Ocean (based on evidence of historic slides) and examine the hazard posed to the UK coast.
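
    The parameterisation step might be sketched as follows in Python; the sample values are invented for illustration and are not results from the simulations:

        import numpy as np

        t = np.array([0.0, 60.0, 120.0, 180.0, 240.0])            # s (illustrative)
        x_front = np.array([0.0, 410.0, 1540.0, 3100.0, 4900.0])  # m (illustrative)

        x_fit = np.poly1d(np.polyfit(t, x_front, deg=3))   # polynomial x(t)
        u_front = x_fit.deriv()                            # slide velocity

        # u_front(t) would then inform the prescribed velocity boundary
        # condition of the cheaper single-layer wave propagation model
        print(u_front(150.0))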

  8. The Numerical Analysis of a Turbulent Compressible Jet. Degree awarded by Ohio State Univ., 2000

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2001-01-01

    A numerical method to simulate high Reynolds number jet flows was formulated and applied to gain a better understanding of the flow physics. Large-eddy simulation was chosen as the most promising approach to model the turbulent structures due to its compromise between accuracy and computational expense. The filtered Navier-Stokes equations were developed including a total energy form of the energy equation. Subgrid scale models for the momentum and energy equations were adapted from compressible forms of Smagorinsky's original model. The effect of using disparate temporal and spatial accuracy in a numerical scheme was discovered through one-dimensional model problems and a new uniformly fourth-order accurate numerical method was developed. Results from two- and three-dimensional validation exercises show that the code accurately reproduces both viscous and inviscid flows. Numerous axisymmetric jet simulations were performed to investigate the effect of grid resolution, numerical scheme, exit boundary conditions and subgrid scale modeling on the solution and the results were used to guide the three-dimensional calculations. Three-dimensional calculations of a Mach 1.4 jet showed that this LES simulation accurately captures the physics of the turbulent flow. The agreement with experimental data was relatively good and is much better than results in the current literature. Turbulent intensities indicate that the turbulent structures at this level of modeling are not isotropic and this information could lend itself to the development of improved subgrid scale models for LES and turbulence models for RANS simulations. A two point correlation technique was used to quantify the turbulent structures. Two point space correlations were used to obtain a measure of the integral length scale, which proved to be approximately 0.5 D_j. Two point space-time correlations were used to obtain the convection velocity for the turbulent structures. This velocity ranged from 0.57 to 0.71 U_j.
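
    A sketch of the two-point space correlation used to extract the integral length scale (input velocity samples assumed; normalisation follows the standard definition):

        import numpy as np

        def integral_length_scale(u, dx):
            """u : (n_samples, n_points) velocity along a line through the jet."""
            u = u - u.mean(axis=0)                    # fluctuations only
            var = (u * u).mean(axis=0)
            n = u.shape[1]
            R = np.empty(n)
            for r in range(n):                        # correlation R(r)
                prod = (u[:, : n - r] * u[:, r:]).mean(axis=0)
                R[r] = prod.mean() / np.sqrt(var[: n - r] * var[r:]).mean()
            stop = np.argmax(R < 0) if (R < 0).any() else n
            return np.trapz(R[:stop], dx=dx)          # integrate to first zero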

  9. AUTOMATIC CALIBRATION OF A STOCHASTIC-LAGRANGIAN TRANSPORT MODEL (SLAM)

    EPA Science Inventory

    Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...

  10. Algorithm and code development for unsteady three-dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru

    1993-01-01

    In the last two decades, there have been extensive developments in computational aerodynamics, which constitutes a major part of the general area of computational fluid dynamics. Such developments are essential to advance the understanding of the physics of complex flows, to complement expensive wind-tunnel tests, and to reduce the overall design cost of an aircraft, particularly in the area of aeroelasticity. Aeroelasticity plays an important role in the design and development of aircraft, particularly modern aircraft, which tend to be more flexible. Several phenomena that can be dangerous and limit the performance of an aircraft occur because of the interaction of the flow with flexible components. For example, an aircraft with highly swept wings may experience vortex-induced aeroelastic oscillations. Also, undesirable aeroelastic phenomena due to the presence and movement of shock waves occur in the transonic range. Aeroelastically critical phenomena, such as a low transonic flutter speed, have been known to occur through limited wind-tunnel tests and flight tests. Aeroelastic tests entail extensive cost and risk: an aeroelastic wind-tunnel experiment is an order of magnitude more expensive than a parallel experiment involving only aerodynamics. By complementing the wind-tunnel experiments with numerical simulations, the overall cost of the development of aircraft can be considerably reduced. In order to accurately compute aeroelastic phenomena it is necessary to solve the unsteady Euler/Navier-Stokes equations simultaneously with the structural equations of motion. These equations accurately describe the flow phenomena for aeroelastic applications. At Ames, the ENSAERO code is being developed for computing the unsteady aerodynamics and aeroelasticity of aircraft by solving the Euler/Navier-Stokes equations. The purpose of this contract is to continue the algorithm enhancements of ENSAERO and to apply the code to complicated geometries. During the last year, the geometric capability of the code was extended to simulate transonic flows over a wing with an oscillating control surface. Single-grid and zonal approaches were tested. For the zonal approach, a new interpolation technique was introduced. The key development of the algorithm was an interface treatment between moving zones for a control surface using the virtual-zone concept. The work performed during the period 1 Apr. 1992 through 31 Mar. 1993 is summarized. Additional details on the various aspects of the study are given in the Appendices.

  11. 49 CFR Appendix A to Part 1511 - Aviation Security Infrastructure Fee

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... Please also submit the same information in Microsoft Word either on a computer disk or by e-mail to TSA..., including Checkpoint Screening Supervisors. 7. All associated expensed non-labor costs including computers, communications equipment, time management systems, supplies, parking, identification badging, furniture, fixtures...

  12. 49 CFR Appendix A to Part 1511 - Aviation Security Infrastructure Fee

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... Please also submit the same information in Microsoft Word either on a computer disk or by e-mail to TSA..., including Checkpoint Screening Supervisors. 7. All associated expensed non-labor costs including computers, communications equipment, time management systems, supplies, parking, identification badging, furniture, fixtures...

  13. 49 CFR Appendix A to Part 1511 - Aviation Security Infrastructure Fee

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... Please also submit the same information in Microsoft Word either on a computer disk or by e-mail to TSA..., including Checkpoint Screening Supervisors. 7. All associated expensed non-labor costs including computers, communications equipment, time management systems, supplies, parking, identification badging, furniture, fixtures...

  14. Computer-assisted coding and clinical documentation: first things first.

    PubMed

    Tully, Melinda; Carmichael, Angela

    2012-10-01

    Computer-assisted coding tools have the potential to drive improvements in seven areas: Transparency of coding. Productivity (generally by 20 to 25 percent for inpatient claims). Accuracy (by improving specificity of documentation). Cost containment (by reducing overtime expenses, audit fees, and denials). Compliance. Efficiency. Consistency.

  17. Generalized multiscale finite-element method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media

    DOE PAGES

    Gao, Kai; Fu, Shubin; Gibson, Richard L.; ...

    2015-04-14

    It is important to develop fast yet accurate numerical methods for seismic wave propagation to characterize complex geological structures and oil and gas reservoirs. However, the computational cost of conventional numerical modeling methods, such as the finite-difference method and the finite-element method, becomes prohibitively expensive when applied to very large models. We propose a Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media, where we construct basis functions from multiple local problems for both the boundaries and the interior of a coarse node support or coarse element. The application of multiscale basis functions can capture the fine-scale medium property variations, and allows us to greatly reduce the degrees of freedom required to implement the modeling compared with the conventional finite-element method for the wave equation, while restricting the error to low values. We formulate the continuous Galerkin and discontinuous Galerkin formulations of the multiscale method, both of which have pros and cons. Applications of the multiscale method to three heterogeneous models show that our multiscale method can effectively model elastic wave propagation in anisotropic media with a significant reduction in the degrees of freedom in the modeling system.

  18. Star formation in evolving molecular clouds

    NASA Astrophysics Data System (ADS)

    Völschow, M.; Banerjee, R.; Körtgen, B.

    2017-09-01

    Molecular clouds are the principal stellar nurseries of our universe; they thus remain a focus of both observational and theoretical studies. From observations, some of the key properties of molecular clouds are well known, but many questions regarding their evolution and star formation activity remain open. While numerical simulations feature a large number of complex physical processes, this plethora of effects may hide the fundamentals that determine the evolution of molecular clouds and enable the formation of stars. Purely analytical models, on the other hand, tend to suffer from rough approximations or a lack of completeness, limiting their predictive power. In this paper, we present a model that incorporates central concepts of astrophysics as well as reliable results from recent simulations of molecular clouds and their evolutionary paths. Based on that, we construct a self-consistent semi-analytical framework that describes the formation, evolution, and star formation activity of molecular clouds, including a number of feedback effects to account for the complex processes inside those objects. The final equation system is solved numerically but at much lower computational expense than, for example, hydrodynamical descriptions of comparable systems. The model presented in this paper agrees well with a broad range of observational results, showing that molecular cloud evolution can be understood as an interplay between accretion, global collapse, star formation, and stellar feedback.

  19. On the relative intensity of Poisson’s spot

    NASA Astrophysics Data System (ADS)

    Reisinger, T.; Leufke, P. M.; Gleiter, H.; Hahn, H.

    2017-03-01

    The Fresnel diffraction phenomenon referred to as Poisson's spot or the spot of Arago has, besides its historical significance, become relevant in a number of fields. Among them are, for example, fundamental tests of the superposition principle in the transition from quantum to classical physics and the search for extra-solar planets using star shades. Poisson's spot refers to the positive on-axis wave interference in the shadow of any spherical or circular obstacle. While the spot's intensity is equal to the undisturbed field in the plane-wave picture, its intensity in general depends on a number of factors, namely the size and wavelength of the source, the size and surface corrugation of the diffraction obstacle, and the distances between source, obstacle, and detector. The intensity can be calculated by solving the Fresnel-Kirchhoff diffraction integral numerically, which, however, tends to be computationally expensive. We have therefore devised an analytical model for the on-axis intensity of Poisson's spot relative to the intensity of the undisturbed wave field and successfully validated it both using a simple light diffraction setup and numerical methods. The model will be useful for optimizing future Poisson-spot matter-wave diffraction experiments and determining under what experimental conditions the spot can be observed.
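
    For reference, the on-axis Fresnel integral itself can be evaluated numerically in a few lines (plane-wave illumination; a soft Gaussian apodisation stands in for the infinite open region so the oscillatory integral converges; parameters are illustrative):

        import numpy as np

        def on_axis_intensity(R, z, wavelength, r_max_factor=40.0, n=200000):
            """Relative on-axis intensity behind a circular disk of radius R."""
            k = 2.0 * np.pi / wavelength
            r = np.linspace(R, r_max_factor * R, n)       # open region r > R
            apod = np.exp(-(((r - R) / (0.5 * (r[-1] - R))) ** 2))
            integrand = np.exp(1j * k * r**2 / (2.0 * z)) * r * apod
            U = (k / (1j * z)) * np.trapz(integrand, r)
            return np.abs(U) ** 2

        # For an ideal plane wave the result should be close to 1, i.e. the
        # spot is about as bright as the undisturbed field:
        print(on_axis_intensity(R=1e-3, z=1.0, wavelength=633e-9))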

  1. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    NASA Astrophysics Data System (ADS)

    Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang

    2016-06-01

    In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
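
    The Elementary Effect screening measure mentioned above is cheap to evaluate on the metamodel; the sketch below computes one effect per parameter at a single base point (the full Morris method averages such effects over many random trajectories):

        import numpy as np

        def elementary_effects(metamodel, x0, delta=0.05):
            """One elementary effect per input at base point x0 (unit cube)."""
            f0 = metamodel(x0)
            effects = np.empty(len(x0))
            for i in range(len(x0)):
                x = x0.copy()
                x[i] += delta                 # perturb one parameter at a time
                effects[i] = (metamodel(x) - f0) / delta
            return effects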

  2. Lithographic image simulation for the 21st century with 19th-century tools

    NASA Astrophysics Data System (ADS)

    Gordon, Ronald L.; Rosenbluth, Alan E.

    2004-01-01

    Simulation of lithographic processes in semiconductor manufacturing has gone from a crude learning tool 20 years ago to a critical part of yield enhancement strategy today. Although many disparate models, championed by equally disparate communities, exist to describe various photoresist development phenomena, these communities would all agree that the one piece of the simulation picture that can, and must, be computed accurately is the image intensity in the photoresist. The imaging of a photomask onto a thin-film stack is one of the only phenomena in the lithographic process that is described fully by well-known, definitive physical laws. Although many approximations are made in the derivation of the Fourier transform relations between the mask object, the pupil, and the image, these and their impacts are well understood and need little further investigation. The imaging process in optical lithography is modeled as a partially coherent Köhler illumination system. As Hopkins has shown, we can separate the computation into two pieces: one that takes information about the illumination source, the projection lens pupil, the resist stack, and the mask size or pitch, and another that needs only the details of the mask structure. As the latter piece of the calculation can be expressed as a fast Fourier transform, it is the first piece that dominates. This piece involves computation of a potentially large number of quantities called transmission cross-coefficients (TCCs), which are correlations of the pupil function weighted with the illumination intensity distribution. The advantage of performing the image calculations this way is that the computation of these TCCs represents an up-front cost, not to be repeated if one is only interested in changing the mask features, which is the case in Model-Based Optical Proximity Correction (MBOPC). The downside, however, is that the number of these expensive double integrals increases as the square of the mask unit cell area; this number can cause even the fastest computers to balk if one needs to study medium- or long-range effects. One can reduce this computational burden by approximating with a smaller area, but accuracy is usually a concern, especially when building a model that will represent a manufacturing process. This work reviews the current methodologies used to simulate the intensity distribution in air above the resist and addresses the above problems. More to the point, a methodology has been developed to eliminate the expensive numerical integrations in the TCC calculations, as the resulting integrals in many cases of interest can be either evaluated analytically or replaced by analytical functions accurate to within machine precision. With the burden of computing these numbers lightened, more accurate representations of the image field can be realized, and better overall models are then possible.
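
    A brute-force numerical TCC evaluation, shown here only to make the quadratic cost concrete (S is a sampled source intensity map and P a pupil function; both are assumed given):

        import numpy as np

        def tcc(S, P, f1, f2):
            """T(f1, f2) = sum_f S(f) P(f + f1) conj(P(f + f2)) / sum_f S(f)."""
            n = S.shape[0]
            acc = 0.0 + 0.0j
            for i in range(n):
                for j in range(n):
                    if S[i, j] == 0.0:
                        continue              # only bright source points count
                    acc += (S[i, j] * P(i + f1[0], j + f1[1])
                            * np.conj(P(i + f2[0], j + f2[1])))
            return acc / S.sum()

    Since one such double sum is needed for every (f1, f2) pair, the total work grows quadratically with the number of sampled frequencies, which is exactly the burden the analytical replacement discussed above removes.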

  3. Fokker-Planck Equations of Stochastic Acceleration: A Study of Numerical Methods

    NASA Astrophysics Data System (ADS)

    Park, Brian T.; Petrosian, Vahe

    1996-03-01

    Stochastic wave-particle acceleration may be responsible for producing suprathermal particles in many astrophysical situations. The process can be described as a diffusion process through the Fokker-Planck equation. If the acceleration region is homogeneous and the scattering mean free path is much smaller than both the energy change mean free path and the size of the acceleration region, then the Fokker-Planck equation reduces to a simple form involving only the time and energy variables. In an earlier paper (Park & Petrosian 1995, hereafter Paper I), we studied the analytic properties of the Fokker-Planck equation and found analytic solutions for some simple cases. In this paper, we study the numerical methods which must be used to solve more general forms of the equation. Two classes of numerical methods are finite difference methods and Monte Carlo simulations. We examine six finite difference methods, three fully implicit and three semi-implicit, and a stochastic simulation method which uses the exact correspondence between the Fokker-Planck equation and its Itô stochastic differential equation. As discussed in Paper I, Fokker-Planck equations derived under the above approximations are singular, causing problems with boundary conditions and numerical overflow and underflow. We evaluate each method using three sample equations to test its stability, accuracy, efficiency, and robustness for both time-dependent and steady state solutions. We conclude that the most robust finite difference method is the fully implicit Chang-Cooper method, with minor extensions to account for the escape and injection terms. Other methods suffer from stability and accuracy problems when dealing with some Fokker-Planck equations. The stochastic simulation method, although simple to implement, is susceptible to Poisson noise when insufficient test particles are used and is computationally very expensive compared to the finite difference methods.
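
    For concreteness, a compact sketch of a fully implicit Chang-Cooper step for a Fokker-Planck equation written in the flux form df/dt = d/dx [B(x) f + C(x) df/dx], with zero-flux boundaries (grid and coefficient conventions are assumptions, and the escape/injection terms are omitted):

        import numpy as np
        from scipy.linalg import solve_banded

        def chang_cooper_step(f, B_half, C_half, dx, dt):
            """One implicit step; B_half, C_half live on the n-1 cell faces."""
            n = f.size
            w = B_half * dx / C_half                    # assumes B_half != 0
            delta = 1.0 / w - 1.0 / np.expm1(w)         # Chang-Cooper weights
            up = B_half * (1.0 - delta) + C_half / dx   # flux coeff. of f_{j+1}
            dn = B_half * delta - C_half / dx           # flux coeff. of f_j
            r = dt / dx
            main = np.ones(n)
            main[:-1] -= r * dn               # outgoing flux F_{j+1/2}
            main[1:] += r * up                # incoming flux F_{j-1/2}
            ab = np.zeros((3, n))             # banded storage for solve_banded
            ab[0, 1:] = -r * up               # superdiagonal
            ab[1] = main
            ab[2, :-1] = r * dn               # subdiagonal
            return solve_banded((1, 1), ab, f)

    The delta weighting is what preserves positivity and the correct equilibrium spectrum, which is why the scheme tolerates the singular equations mentioned above better than naive centered differencing.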

  4. The Reduced Basis Method in Geosciences: Practical examples for numerical forward simulations

    NASA Astrophysics Data System (ADS)

    Degen, D.; Veroy, K.; Wellmann, F.

    2017-12-01

    Due to the highly heterogeneous character of the earth's subsurface, the complex coupling of thermal, hydrological, mechanical, and chemical processes, and the limited accessibility, we face high-dimensional problems associated with high uncertainties in geosciences. Performing the necessary uncertainty quantifications with a reasonable number of parameters is often not possible due to the high-dimensional character of the problem. Therefore, we present the reduced basis (RB) method, a model order reduction (MOR) technique that constructs low-order approximations to, for instance, the finite element (FE) space. We use the RB method to address these computationally challenging simulations because it significantly reduces the degrees of freedom. The RB method is decomposed into an offline and an online stage, allowing the expensive pre-computations to be made beforehand so that real-time results are available during field campaigns. Generally, the RB approach is most beneficial in the many-query and real-time contexts. We illustrate the advantages of the RB method for the field of geosciences through two examples of numerical forward simulations. The first example, a geothermal conduction problem, demonstrates the implementation of the RB method for a steady-state case. The second example, a Darcy flow problem, shows the benefits for transient scenarios. In both cases, a quality evaluation of the approximations is given. Additionally, the runtimes for both the FE and the RB simulations are compared. We emphasize the advantages of this method for repetitive simulations by showing the speed-up of the RB solution relative to the FE solution. Finally, we demonstrate how the implementation can be used on high-performance computing (HPC) infrastructures and evaluate its performance for such infrastructures, with particular attention to its scalability, yielding optimal usage on both HPC infrastructures and normal workstations.
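
    The offline/online split at the heart of the RB method can be sketched for a parameterised linear problem A(mu) u = b: expensive snapshots are compressed once, and each online query solves only a small projected system (a schematic, not the authors' implementation):

        import numpy as np

        def offline(snapshots, tol=1e-8):
            """Compress full-order snapshot solutions into a reduced basis V."""
            U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
            rank = int(np.sum(s / s[0] > tol))
            return U[:, :rank]                  # (n_full, n_reduced)

        def online(V, A_mu, b):
            """Cheap reduced solve for one new parameter value."""
            A_r = V.T @ A_mu @ V                # tiny (r, r) system
            b_r = V.T @ b
            return V @ np.linalg.solve(A_r, b_r)

    In a full RB implementation the projected operator is assembled from parameter-independent pieces offline, so the online stage never touches full-order dimensions; the sketch keeps the projection explicit for clarity.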

  5. 48 CFR 227.7103-6 - Contract clauses.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... expense). Do not use the clause when the only deliverable items are computer software or computer software... architect-engineer and construction contracts. (b)(1) Use the clause at 252.227-7013 with its Alternate I in... Software Previously Delivered to the Government, in solicitations when the resulting contract will require...

  6. 48 CFR 227.7103-6 - Contract clauses.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... expense). Do not use the clause when the only deliverable items are computer software or computer software... architect-engineer and construction contracts. (b)(1) Use the clause at 252.227-7013 with its Alternate I in... Software Previously Delivered to the Government, in solicitations when the resulting contract will require...

  7. A COMPUTATIONALLY EFFICIENT HYBRID APPROACH FOR DYNAMIC GAS/AEROSOL TRANSFER IN AIR QUALITY MODELS. (R826371C005)

    EPA Science Inventory

    Dynamic mass transfer methods have been developed to better describe the interaction of the aerosol population with semi-volatile species such as nitrate, ammonia, and chloride. Unfortunately, these dynamic methods are computationally expensive. Assumptions are often made to r...

  8. Looking At Display Technologies

    ERIC Educational Resources Information Center

    Bull, Glen; Bull, Gina

    2005-01-01

    A projection system in a classroom with an Internet connection provides a window on the world. Until recently, projectors were expensive and difficult to maintain. Technological advances have resulted in solid-state projectors that require little maintenance and cost no more than a computer. Adding a second or third computer to a classroom…

  9. Large-scale expensive black-box function optimization

    NASA Astrophysics Data System (ADS)

    Rashid, Kashif; Bailey, William; Couët, Benoît

    2012-09-01

    This paper presents the application of an adaptive radial basis function method to a computationally expensive black-box reservoir simulation model of many variables. An iterative proxy-based scheme is used to tune the control variables, distributed for finer control over a varying number of intervals covering the total simulation period, to maximize asset NPV. The method shows that large-scale simulation-based function optimization of several hundred variables is practical and effective.
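
    A hedged sketch of the general idea, alternating between a cheap radial basis function proxy and sparse evaluations of the expensive model (the objective, bounds, and update rule here are illustrative stand-ins, not the authors' reservoir workflow):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def expensive_black_box(x):
    """Stand-in for a long-running reservoir simulation returning -NPV."""
    return np.sum((x - 0.3) ** 2) + 0.1 * np.cos(5.0 * np.sum(x))

dim, budget = 4, 40
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (2 * dim, dim))      # initial space-filling sample
y = np.array([expensive_black_box(x) for x in X])

for _ in range(budget):
    proxy = RBFInterpolator(X, y)              # adaptive, cheap-to-evaluate proxy
    x0 = X[np.argmin(y)]                       # restart from the incumbent best
    res = minimize(lambda z: float(proxy(z[None, :])[0]), x0,
                   bounds=[(0.0, 1.0)] * dim, method="L-BFGS-B")
    cand = np.clip(res.x + 1e-3 * rng.normal(size=dim), 0.0, 1.0)
    X = np.vstack([X, cand])                   # one true evaluation per cycle;
    y = np.append(y, expensive_black_box(cand))  # jitter keeps points distinct

print("best objective:", y.min(), "at", X[np.argmin(y)])
```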

  10. Large Eddy/Reynolds-Averaged Navier-Stokes Simulations of CUBRC Base Heating Experiments

    NASA Technical Reports Server (NTRS)

    Salazar, Giovanni; Edwards, Jack R.; Amar, Adam J.

    2012-01-01

    Even with great advances in computational techniques and computing power during recent decades, the modeling of unsteady separated flows, such as those encountered in the wake of a re-entry vehicle, continues to be one of the most challenging problems in CFD. Of most interest to the aerothermodynamics community is accurately predicting transient heating loads on the base of a blunt body, which would result in reduced uncertainties and safety margins when designing a re-entry vehicle. However, the prediction of heat transfer can vary widely depending on the turbulence model employed. Therefore, selecting a turbulence model which realistically captures as much of the flow physics as possible will result in improved results. Reynolds-Averaged Navier-Stokes (RANS) models have become increasingly popular due to their good performance with attached flows, and the relatively quick turnaround time to obtain results. However, RANS methods cannot accurately simulate unsteady separated wake flows, and running direct numerical simulation (DNS) on such complex flows is currently too computationally expensive. Large Eddy Simulation (LES) techniques allow for the computation of the large eddies, which contain most of the Reynolds stress, while modeling the smaller (subgrid) eddies. This results in models which are more computationally expensive than RANS methods, but not as prohibitive as DNS. By complementing an LES approach with a RANS model, a hybrid LES/RANS method resolves the larger turbulent scales away from surfaces with LES, and switches to a RANS model inside boundary layers. As pointed out by Bertin et al., this type of hybrid approach has shown a lot of promise for predicting turbulent flows, but work is needed to verify that these models work well in hypersonic flows. The very limited amounts of flight and experimental data available present an additional challenge for researchers. Recently, a joint study by NASA and CUBRC has focused on collecting heat transfer data on the backshell of a scaled model of the Orion Multi-Purpose Crew Vehicle (MPCV). Heat augmentation effects due to the presence of cavities and RCS jet firings were also investigated. The high quality data produced by this effort present a new set of data which can be used to assess the performance of CFD methods. In this work, a hybrid LES/RANS model developed at North Carolina State University (NCSU) is used to simulate several runs from these experiments, and to evaluate the performance of high fidelity methods as compared to more typical RANS models.

  11. Advances in Numerical Boundary Conditions for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.

    1997-01-01

    Advances in Computational Aeroacoustics (CAA) depend critically on the availability of accurate, nondispersive, minimally dissipative computation algorithms, as well as high-quality numerical boundary treatments. This paper focuses on the recent developments of numerical boundary conditions. In a typical CAA problem, one often encounters two types of boundaries. Because a finite computation domain is used, there are external boundaries. On the external boundaries, boundary conditions simulating the solution outside the computation domain are to be imposed. Inside the computation domain, there may be internal boundaries. On these internal boundaries, boundary conditions simulating the presence of an object or surface with specific acoustic characteristics are to be applied. Numerical boundary conditions, both external and internal, developed for simple model problems are reviewed and examined. Numerical boundary conditions for real aeroacoustic problems are also discussed through specific examples. The paper concludes with a description of some much needed research in numerical boundary conditions for CAA.

  12. Use of less expensive cigarettes in six cities in China: findings from the International Tobacco Control (ITC) China Survey

    PubMed Central

    Hyland, Andrew; Fong, Geoffrey T; Jiang, Yuan; Elton-Marshall, Tara

    2010-01-01

    Objective The existence of less expensive cigarettes in China may undermine public health. The aim of the current study is to examine the use of less expensive cigarettes in six cities in China. Methods Data was from the baseline wave of the International Tobacco Control (ITC) China Survey of 4815 adult urban smokers in 6 cities, conducted between April and August 2006. The percentage of smokers who reported buying less expensive cigarettes (the lowest pricing tertile within each city) at last purchase was computed. Complex sample multivariate logistic regression models were used to identify factors associated with use of less expensive cigarettes. The association between the use of less expensive cigarettes and intention to quit smoking was also examined. Results Smokers who reported buying less expensive cigarettes at last purchase tended to be older, heavier smokers, to have lower education and income, and to think more about the money spent on smoking in the last month. Smokers who bought less expensive cigarettes at the last purchase and who were less knowledgeable about the health harm of smoking were less likely to intend to quit smoking. Conclusions Measures need to be taken to minimise the price differential among cigarette brands and to increase smokers' health knowledge, which may in turn increase their intentions to quit. PMID:20935199

  13. The Advantages of Using Technology in Second Language Education: Technology Integration in Foreign Language Teaching Demonstrates the Shift from a Behavioral to a Constructivist Learning Approach

    ERIC Educational Resources Information Center

    Wang, Li

    2005-01-01

    With the advent of networked computers and Internet technology, computer-based instruction has been widely used in language classrooms throughout the United States. Computer technologies have dramatically changed the way people gather information, conduct research and communicate with others worldwide. Considering the tremendous startup expenses,…

  14. Integrating Numerical Computation into the Modeling Instruction Curriculum

    ERIC Educational Resources Information Center

    Caballero, Marcos D.; Burk, John B.; Aiken, John M.; Thoms, Brian D.; Douglas, Scott S.; Scanlon, Erin M.; Schatz, Michael F.

    2014-01-01

    Numerical computation (the use of a computer to solve, simulate, or visualize a physical problem) has fundamentally changed the way scientific research is done. Systems that are too difficult to solve in closed form are probed using computation. Experiments that are impossible to perform in the laboratory are studied numerically. Consequently, in…

  15. Numerical Package in Computer Supported Numeric Analysis Teaching

    ERIC Educational Resources Information Center

    Tezer, Murat

    2007-01-01

    At universities in the faculties of Engineering, Sciences, Business and Economics together with higher education in Computing, it is stated that because of the difficulty, calculators and computers can be used in Numerical Analysis (NA). In this study, the learning computer supported NA will be discussed together with important usage of the…

  16. Towards a new method for modeling multicomponent, multiphase flow and transport in porous media

    NASA Astrophysics Data System (ADS)

    Kong, X. Z.; Schaedle, P.; Leal, A. M. M.; Saar, M. O.

    2016-12-01

    The ability to computationally simulate multiphase-multicomponent fluid flow, coupled with geochemical reactions between fluid species and rock minerals, in porous and/or fractured subsurface systems is of major importance to a vast number of applications. These include (1) carbon dioxide storage in geologic formations, (2) geothermal energy extraction, (3) combinations of the latter two applications during CO2-Plume Geothermal energy extraction, (4) waste fluid and waste storage, as well as (5) groundwater and contaminant transport. Modeling these systems with such a wide variety of coupled physical and chemical processes is both challenging and computationally expensive. In this work we present a new approach to developing a simulator for multicomponent-multiphase flow and reactive transport in porous media by using state-of-the-art numerical tools, namely FEniCS (fenicsproject.org) and Reaktoro (reaktoro.org). The governing partial differential equations for fluid flow and transport are solved using FEniCS, which enables fast and efficient implementation of computer codes for the simulation of complex physical phenomena using finite element methods on unstructured meshes. FEniCS supports a wide range of finite element schemes of special interest to porous media flow. In addition, FEniCS interfaces with many sparse linear solvers and provides convenient tools for adaptive mesh refinement and the capability of massively parallel calculations. A fundamental component of our contribution is the coupling of our FEniCS-based flow and transport solver with our chemical reaction simulator, Reaktoro, which implements efficient, robust, and accurate methods for chemical equilibrium and kinetics calculations at every node of the mesh, at every time step. These numerical methods for reaction modeling have been especially developed for performance-critical applications such as reactive transport modeling. Furthermore, Reaktoro is also used for the calculation of thermodynamic properties of rock minerals and fluids. The proposed simulator can, however, be coupled with other back-ends for the calculation of both thermodynamic and thermophysical properties of rock minerals and fluids. We present several example applications of our new approach, demonstrating its capabilities and computation speed.
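
    For a flavor of the flow side alone, a steady geothermal-conduction toy in the legacy FEniCS (dolfin) Python interface looks roughly as follows; this is a generic sketch of FEniCS usage, not the authors' coupled FEniCS-Reaktoro simulator, and the conductivity, source, and boundary values are invented:

```python
from fenics import *

# Steady heat conduction  -div(kappa * grad(T)) = f  on the unit square
mesh = UnitSquareMesh(64, 64)
V = FunctionSpace(mesh, "P", 1)

kappa = Expression("1.0 + 2.0 * x[1]", degree=1)   # depth-varying conductivity
f = Constant(0.1)                                  # volumetric heat source

T = TrialFunction(V)
v = TestFunction(V)
a = kappa * dot(grad(T), grad(v)) * dx             # weak form assembled by FEniCS
L = f * v * dx
bc = DirichletBC(V, Constant(283.0), "on_boundary")  # fixed boundary temperature

T_sol = Function(V)
solve(a == L, T_sol, bc)                           # sparse linear solve
print("max temperature:", T_sol.vector().max())
```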

  17. Robust scalable stabilisability conditions for large-scale heterogeneous multi-agent systems with uncertain nonlinear interactions: towards a distributed computing architecture

    NASA Astrophysics Data System (ADS)

    Manfredi, Sabato

    2016-06-01

    Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environment monitoring, and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, which require ever more computationally demanding methods for analysis and control design as the network size and node system/interaction complexity increase. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved using the MATLAB LMI toolbox. Stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves on some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirements in the case of weakly heterogeneous MASs, which is a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows a move from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the network nodes, thus increasing the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
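
    The flavor of such LMI conditions can be reproduced with any semidefinite solver. Below is a toy Lyapunov-stability LMI for a single node (the paper states its conditions for the MATLAB LMI toolbox; the matrix here is an illustrative example, not taken from the paper, and the real conditions couple many such blocks with interconnection bounds):

```python
import numpy as np
import cvxpy as cp

# One node's dynamics (illustrative matrix, chosen Hurwitz so that the LMI
# A'P + PA < 0, P > 0 is feasible).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-2.0, -3.0, -4.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                  # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]   # Lyapunov decrease condition
prob = cp.Problem(cp.Minimize(0), constraints)        # pure feasibility problem
prob.solve(solver=cp.SCS)

print("status:", prob.status)
print("min eigenvalue of P:", np.linalg.eigvalsh(P.value).min())
```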

  18. Numerical Propulsion System Simulation: An Overview

    NASA Technical Reports Server (NTRS)

    Lytle, John K.

    2000-01-01

    The cost of implementing new technology in aerospace propulsion systems is becoming prohibitively expensive and time-consuming. One of the main contributors to the high cost and lengthy time is the need to perform many large-scale hardware tests and the inability to integrate all appropriate subsystems early in the design process. The NASA Glenn Research Center is developing the technologies required to enable simulations of full aerospace propulsion systems in sufficient detail to resolve critical design issues early in the design process before hardware is built. This concept, called the Numerical Propulsion System Simulation (NPSS), is focused on the integration of multiple disciplines such as aerodynamics, structures and heat transfer with computing and communication technologies to capture complex physical processes in a timely and cost-effective manner. The vision for NPSS is to be a "numerical test cell" that enables full engine simulation overnight on cost-effective computing platforms. There are several key elements within NPSS that are required to achieve this capability: 1) clear data interfaces through the development and/or use of data exchange standards, 2) modular and flexible program construction through the use of object-oriented programming, 3) integrated multiple-fidelity analysis (zooming) techniques that capture the appropriate physics at the appropriate fidelity for the engine systems, 4) multidisciplinary coupling techniques and finally 5) high performance parallel and distributed computing. The current state of development in these five areas focuses on air-breathing gas turbine engines and is reported in this paper. However, many of the technologies are generic and can be readily applied to rocket-based systems and combined cycles currently being considered for low-cost access-to-space applications. Recent accomplishments include: (1) the development of an industry-standard engine cycle analysis program and plug 'n play architecture, called NPSS Version 1. (2) A full engine simulation that combines a 3D low-pressure subsystem with a 0D high-pressure core simulation. This demonstrates the ability to integrate analyses at different levels of detail and to aerodynamically couple components, the fan/booster and low-pressure turbine, through a 3D computational fluid dynamics simulation. (3) Simulation of all of the turbomachinery in a modern turbofan engine on a parallel computing platform for rapid and cost-effective execution. This capability can also be used to generate a full compressor map, requiring both design and off-design simulation. (4) Three levels of coupling characterize the multidisciplinary analysis under NPSS: loosely coupled, process coupled and tightly coupled. The loosely coupled and process coupled approaches require a common geometry definition to link CAD to analysis tools. The tightly coupled approach is currently validating the use of an arbitrary Lagrangian/Eulerian formulation for rotating turbomachinery. The validation includes both centrifugal and axial compression systems. The results of the validation will be reported in the paper. (5) The demonstration of significant computing cost/performance reduction for turbine engine applications using PC clusters. The NPSS Project is supported under the NASA High Performance Computing and Communications Program.

  19. Numerical Analysis of Crack Tip Plasticity and History Effects under Mixed Mode Conditions

    NASA Astrophysics Data System (ADS)

    Lopez-Crespo, Pablo; Pommier, Sylvie

    The plastic behaviour in the crack tip region has a strong influence on the fatigue life of engineering components. In general, residual stresses developed as a consequence of the plasticity being constrained around the crack tip have a significant role in both the direction of crack propagation and the propagation rate. Finite element methods (FEM) are commonly employed in order to model plasticity. However, if millions of cycles need to be modelled to predict the fatigue behaviour of a component, the method becomes computationally too expensive. By employing a multiscale approach, very precise analyses computed by FEM can be brought to a global scale. The data generated using the FEM enable us to identify a global cyclic elastic-plastic model for the crack tip region. Once this model is identified, it can be employed directly, with no need for additional FEM computations, resulting in fast computations. This is done by partitioning local displacement fields computed by FEM into intensity factors (global data) and spatial fields. A Karhunen-Loeve algorithm developed for image processing was employed for this purpose. In addition, the partitioning is done so as to distinguish between elastic and plastic components. Each of these is further divided into opening-mode and shear-mode parts. The plastic flow direction was determined with the above approach on a centre-cracked panel subjected to a wide range of mixed-mode loading conditions. It was found to agree well with the maximum tangential stress criterion developed by Erdogan and Sih, provided that the loading direction is corrected for residual stresses. In this approach, residual stresses are measured at the global scale through internal intensity factors.

  20. Pseudo-point transport technique: a new method for solving the Boltzmann transport equation in media with highly fluctuating cross sections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakhai, B.

    A new method for solving radiation transport problems is presented. The heart of the technique is a new cross-section processing procedure for the calculation of group-to-point and point-to-group cross-section sets. The method is ideally suited for problems which involve media with highly fluctuating cross sections, where the results of the traditional multigroup calculations are beclouded by the group averaging procedures employed. Extensive computational efforts, which would be required to evaluate double integrals in the multigroup treatment numerically, prohibit iteration to optimize the energy boundaries. On the other hand, use of point-to-point techniques (as in the stochastic technique) is often prohibitively expensive due to the large computer storage requirement. The pseudo-point code is a hybrid of the two aforementioned methods (group-to-group and point-to-point) - hence the name pseudo-point - that reduces the computational efforts of the former and the large core requirements of the latter. The pseudo-point code generates the group-to-point or the point-to-group transfer matrices, and can be coupled with the existing transport codes to calculate pointwise energy-dependent fluxes. This approach yields much more detail than is available from the conventional energy-group treatments. Due to the speed of this code, several iterations could be performed (in affordable computing efforts) to optimize the energy boundaries and the weighting functions. The pseudo-point technique is demonstrated by solving six problems, each depicting a certain aspect of the technique. The results are presented as flux vs energy at various spatial intervals. The sensitivity of the technique to the energy grid and the savings in computational effort are clearly demonstrated.

  1. HIGH-FIDELITY SIMULATION-DRIVEN MODEL DEVELOPMENT FOR COARSE-GRAINED COMPUTATIONAL FLUID DYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanna, Botros N.; Dinh, Nam T.; Bolotnov, Igor A.

    Nuclear reactor safety analysis requires identifying various credible accident scenarios and determining their consequences. For a full-scale nuclear power plant system behavior, it is impossible to obtain sufficient experimental data for a broad range of risk-significant accident scenarios. In single-phase flow convective problems, Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) can provide us with high-fidelity results when physical data are unavailable. However, these methods are computationally expensive and cannot be afforded for simulation of long transient scenarios in nuclear accidents despite extraordinary advances in high performance scientific computing over the past decades. The major issue is the inability to make the transient computation parallel, thus making the number of time steps required in high-fidelity methods unaffordable for long transients. In this work, we propose to apply a high-fidelity simulation-driven approach to model sub-grid scale (SGS) effects in Coarse-Grained Computational Fluid Dynamics (CG-CFD). This approach aims to develop a statistical surrogate model instead of a deterministic SGS model. We chose to start with a turbulent natural convection case with volumetric heating in a horizontal fluid layer with a rigid, insulated lower boundary and an isothermal (cold) upper boundary. This scenario of unstable stratification is relevant to turbulent natural convection in a molten corium pool during a severe nuclear reactor accident, as well as in containment mixing and passive cooling. The presented approach demonstrates how to create a correction for the CG-CFD solution by modifying the energy balance equation. A global correction for the temperature equation proves to achieve a significant improvement in the prediction of the steady-state temperature distribution through the fluid layer.

  2. Numerical conformal mapping: Methods, applications, and theory. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeLillo, T.K.

    1995-11-01

    Section 1 of this report briefly summarizes research performed under this grant during the first two years, 1992 to 1994, and makes some overall remarks. Section 2 summarizes research performed during the final year, from September 1994 through May 31, 1995, more fully. The main achievement of the last period has been the application of numerical conformal mapping to the solution of the biharmonic equation. Section 3 summarizes travel, meetings, and other expenses supported by this grant during the final year.

  3. Cloud computing can simplify HIT infrastructure management.

    PubMed

    Glaser, John

    2011-08-01

    Software as a Service (SaaS), built on cloud computing technology, is emerging as the forerunner in IT infrastructure because it helps healthcare providers reduce capital investments. Cloud computing leads to predictable, monthly, fixed operating expenses for hospital IT staff. Outsourced cloud computing facilities are state-of-the-art data centers boasting some of the most sophisticated networking equipment on the market. The SaaS model helps hospitals safeguard against technology obsolescence, minimizes maintenance requirements, and simplifies management.

  4. Current and anticipated uses of the thermal hydraulics codes at the NRC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caruso, R.

    1997-07-01

    The focus of Thermal-Hydraulic computer code usage in nuclear regulatory organizations has undergone a considerable shift since the codes were originally conceived. Less work is being done in the area of "Design Basis Accidents," and much more emphasis is being placed on analysis of operational events, probabilistic risk/safety assessment, and maintenance practices. All of these areas need support from Thermal-Hydraulic computer codes to model the behavior of plant fluid systems, and they all need the ability to perform large numbers of analyses quickly. It is therefore important for the T/H codes of the future to be able to support these needs by providing robust, easy-to-use tools that produce easy-to-understand results for a wider community of nuclear professionals. These tools need to take advantage of the great advances that have occurred recently in computer software by providing users with graphical user interfaces for both input and output. In addition, reduced costs of computer memory and other hardware have removed the need for excessively complex data structures and numerical schemes, which make the codes more difficult and expensive to modify, maintain, and debug, and which increase problem run-times. Future versions of the T/H codes should also be structured in a modular fashion, to allow for the easy incorporation of new correlations, models, or features, and to simplify maintenance and testing. Finally, it is important that future T/H code developers work closely with the code user community, to ensure that the codes meet the needs of those users.

  5. Constrained approximation of effective generators for multiscale stochastic reaction networks and application to conditioned path sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cotter, Simon L., E-mail: simon.cotter@manchester.ac.uk

    2016-10-15

    Efficient analysis and simulation of multiscale stochastic systems of chemical kinetics is an ongoing area of research, and is the source of many theoretical and computational challenges. In this paper, we present a significant improvement to the constrained approach, which is a method for computing effective dynamics of slowly changing quantities in these systems, but which does not rely on the quasi-steady-state assumption (QSSA). The QSSA can cause errors in the estimation of effective dynamics for systems where the difference in timescales between the “fast” and “slow” variables is not so pronounced. This new application of the constrained approach allows us to compute the effective generator of the slow variables, without the need for expensive stochastic simulations. This is achieved by finding the null space of the generator of the constrained system. For complex systems where this is not possible, or where the constrained subsystem is itself multiscale, the constrained approach can then be applied iteratively. This results in breaking the problem down into finding the solutions to many small eigenvalue problems, which can be efficiently solved using standard methods. Since this methodology does not rely on the quasi-steady-state assumption, the effective dynamics that are approximated are highly accurate and, in the case of systems with only monomolecular reactions, are exact. We demonstrate this with numerical examples, and also use the effective generators to sample paths of the slow variables which are conditioned on their endpoints, a task which would be computationally intractable for the generator of the full system.

  6. Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Smith, Curtis L.; Alfonsi, Andrea

    2015-09-01

    The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs is required. An important aspect is that even though computational power is regularly growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution that is being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs to perform and employing surrogate models instead of the actual simulation codes. This report focuses on the use of reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.

  7. Recovery Schemes for Primitive Variables in General-relativistic Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Siegel, Daniel M.; Mösta, Philipp; Desai, Dhruv; Wu, Samantha

    2018-05-01

    General-relativistic magnetohydrodynamic (GRMHD) simulations are an important tool to study a variety of astrophysical systems such as neutron star mergers, core-collapse supernovae, and accretion onto compact objects. A conservative GRMHD scheme numerically evolves a set of conservation equations for “conserved” quantities and requires the computation of certain primitive variables at every time step. This recovery procedure constitutes a core part of any conservative GRMHD scheme and it is closely tied to the equation of state (EOS) of the fluid. In the quest to include nuclear physics, weak interactions, and neutrino physics, state-of-the-art GRMHD simulations employ finite-temperature, composition-dependent EOSs. While different schemes have individually been proposed, the recovery problem still remains a major source of error, failure, and inefficiency in GRMHD simulations with advanced microphysics. The strengths and weaknesses of the different schemes when compared to each other remain unclear. Here we present the first systematic comparison of various recovery schemes used in different dynamical spacetime GRMHD codes for both analytic and tabulated microphysical EOSs. We assess the schemes in terms of (i) speed, (ii) accuracy, and (iii) robustness. We find large variations among the different schemes and that there is not a single ideal scheme. While the computationally most efficient schemes are less robust, the most robust schemes are computationally less efficient. More robust schemes may require an order of magnitude more calls to the EOS, which are computationally expensive. We propose an optimal strategy of an efficient three-dimensional Newton–Raphson scheme and a slower but more robust one-dimensional scheme as a fall-back.
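
    As an illustration of what a robust one-dimensional fall-back looks like, the sketch below recovers (rho, v, p) from conserved (D, S, tau) for special-relativistic hydrodynamics with an analytic ideal-gas EOS, using a bracketing root find on the pressure. This is a generic textbook-style scheme under simplifying assumptions (flat space, one velocity component, no magnetic field), not the code of any of the surveyed schemes:

```python
import numpy as np
from scipy.optimize import brentq

def recover_primitives(D, S, tau, gamma=5.0 / 3.0):
    """1-D pressure-based recovery for special-relativistic hydrodynamics
    with an ideal-gas EOS p = (gamma - 1) * rho * eps. Assumes a physically
    admissible state, i.e. |S| < tau + D + p over the bracket."""
    def residual(p):
        v = S / (tau + D + p)                 # velocity from conserved vars
        W = 1.0 / np.sqrt(1.0 - v * v)        # Lorentz factor
        rho = D / W
        eps = (tau + D * (1.0 - W) + p * (1.0 - W * W)) / (D * W)
        return (gamma - 1.0) * rho * eps - p  # EOS consistency condition
    p = brentq(residual, 1e-12, 1e6)          # robust bracketing root find
    v = S / (tau + D + p)
    W = 1.0 / np.sqrt(1.0 - v * v)
    return D / W, v, p                        # rho, v, p

# Round-trip test: build conserved variables from known primitives
rho, v, p, gamma = 1.0, 0.5, 2.0 / 3.0, 5.0 / 3.0
W = 1.0 / np.sqrt(1.0 - v * v)
h = 1.0 + p / ((gamma - 1.0) * rho) + p / rho  # specific enthalpy
D, S = rho * W, rho * h * W * W * v
tau = rho * h * W * W - p - D
print(recover_primitives(D, S, tau))           # ~ (1.0, 0.5, 0.6667)
```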

  8. Hybrid Rocket Performance Prediction with Coupling Method of CFD and Thermal Conduction Calculation

    NASA Astrophysics Data System (ADS)

    Funami, Yuki; Shimada, Toru

    The final purpose of this study is to develop a design tool for hybrid rocket engines. This tool is a computer code which will be used to investigate rocket performance characteristics and unsteady phenomena lasting through the burning time, such as fuel regression or combustion oscillation. When phenomena inside a combustion chamber, namely boundary layer combustion, are described, it is difficult to use rigorous models because the calculation cost would be prohibitively high. Therefore, simple models are required for this calculation. In this study, quasi-one-dimensional compressible Euler equations for the flowfield inside the chamber and the equation for thermal conduction inside the solid fuel are solved numerically. The energy balance equation at the solid fuel surface is solved to estimate the fuel regression rate. The heat feedback model is Karabeyoglu's model, which depends on the total mass flux. The combustion model is a global single-step reaction model with 4 chemical species or a chemical equilibrium model with 9 chemical species. As a first step, steady-state solutions are reported.

  9. Photoinjector optimization using a derivative-free, model-based trust-region algorithm for the Argonne Wakefield Accelerator

    NASA Astrophysics Data System (ADS)

    Neveu, N.; Larson, J.; Power, J. G.; Spentzouris, L.

    2017-07-01

    Model-based, derivative-free, trust-region algorithms are increasingly popular for optimizing computationally expensive numerical simulations. A strength of such methods is their efficient use of function evaluations. In this paper, we use one such algorithm to optimize the beam dynamics in two cases of interest at the Argonne Wakefield Accelerator (AWA) facility. First, we minimize the emittance of a 1 nC electron bunch produced by the AWA rf photocathode gun by adjusting three parameters: rf gun phase, solenoid strength, and laser radius. The algorithm converges to a set of parameters that yield an emittance of 1.08 μm. Second, we expand the number of optimization parameters to model the complete AWA rf photoinjector (the gun and six accelerating cavities) at 40 nC. The optimization algorithm is used in a Pareto study that compares the trade-off between emittance and bunch length for the AWA 70 MeV photoinjector.

  10. Normal-Gamma-Bernoulli Peak Detection for Analysis of Comprehensive Two-Dimensional Gas Chromatography Mass Spectrometry Data.

    PubMed

    Kim, Seongho; Jang, Hyejeong; Koo, Imhoi; Lee, Joohyoung; Zhang, Xiang

    2017-01-01

    Compared to other analytical platforms, comprehensive two-dimensional gas chromatography coupled with mass spectrometry (GC×GC-MS) has much increased separation power for analysis of complex samples and thus is increasingly used in metabolomics for biomarker discovery. However, accurate peak detection remains a bottleneck for wide application of GC×GC-MS. Therefore, the normal-exponential-Bernoulli (NEB) model is generalized using a gamma distribution, and a new peak detection algorithm based on the normal-gamma-Bernoulli (NGB) model is developed. Unlike the NEB model, the NGB model has no closed-form analytical solution, hampering its practical use in peak detection. To circumvent this difficulty, three numerical approaches are introduced: fast Fourier transform (FFT) and the first-order and second-order delta methods (D1 and D2). The applications to simulated data and two real GC×GC-MS data sets show that the NGB-D1 method performs the best in terms of both computational expense and peak detection performance.

  11. Risk Assessment of Carbon Sequestration into A Naturally Fractured Reservoir at Kevin Dome, Montana

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Minh; Onishi, Tsubasa; Carey, James William

    In this report, we describe risk assessment work done using the National Risk Assessment Partnership (NRAP) applied to CO2 storage at Kevin Dome, Montana. Geologic CO2 sequestration in saline aquifers poses certain risks including CO2/brine leakage through wells or non-sealing faults into groundwater or to the land surface. These risks are difficult to quantify due to data availability and uncertainty. One solution is to explore the consequences of these limitations by running large numbers of numerical simulations on the primary CO2 injection reservoir, shallow reservoirs/aquifers, faults, and wells to assess leakage risks and uncertainties. However, a large number of full-physics simulations is usually too computationally expensive. The NRAP integrated assessment model (NRAP-IAM) uses reduced order models (ROMs) developed from full-physics simulations to address this issue. A powerful stochastic framework allows NRAP-IAM to explore complex interactions among many uncertain variables and evaluate the likely performance of potential sequestration sites.

  12. The small-scale turbulent dynamo in smoothed particle magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Tricco, T. S.; Price, D. J.; Federrath, C.

    2016-05-01

    Supersonic turbulence is believed to be at the heart of star formation. We have performed smoothed particle magnetohydrodynamics (SPMHD) simulations of the small-scale dynamo amplification of magnetic fields in supersonic turbulence. The calculations use isothermal gas driven at an rms velocity of Mach 10 so that conditions are representative of star-forming molecular clouds in the Milky Way. The growth of magnetic energy is followed for 10 orders of magnitude until it reaches saturation, a few percent of the kinetic energy. The results of our dynamo calculations are compared with results from grid-based methods, finding excellent agreement in their statistics and qualitative behaviour. The simulations utilise our latest algorithmic developments, in particular a new divergence cleaning approach to maintain the solenoidal constraint on the magnetic field and a method to reduce the numerical dissipation of the magnetic shock-capturing scheme. We demonstrate that our divergence cleaning method may be used to achieve ∇·B = 0 to machine precision, albeit at significant computational expense.

  13. Onyx-Advanced Aeropropulsion Simulation Framework Created

    NASA Technical Reports Server (NTRS)

    Reed, John A.

    2001-01-01

    The Numerical Propulsion System Simulation (NPSS) project at the NASA Glenn Research Center is developing a new software environment for analyzing and designing aircraft engines and, eventually, space transportation systems. Its purpose is to dramatically reduce the time, effort, and expense necessary to design and test jet engines by creating sophisticated computer simulations of an aerospace object or system (refs. 1 and 2). Through a university grant as part of that effort, researchers at the University of Toledo have developed Onyx, an extensible, Java-based (Sun Microsystems, Inc.), object-oriented simulation framework, to investigate how advanced software design techniques can be successfully applied to aeropropulsion system simulation (refs. 3 and 4). The design of Onyx's architecture enables users to customize and extend the framework to add new functionality or adapt simulation behavior as required. It exploits object-oriented technologies, such as design patterns, domain frameworks, and software components, to develop a modular system in which users can dynamically replace components with others having different functionality.

  14. Imaging and sizing of single DNA molecules on a mobile phone.

    PubMed

    Wei, Qingshan; Luo, Wei; Chiang, Samuel; Kappel, Tara; Mejia, Crystal; Tseng, Derek; Chan, Raymond Yan Lok; Yan, Eddie; Qi, Hangfei; Shabbir, Faizan; Ozkan, Haydar; Feng, Steve; Ozcan, Aydogan

    2014-12-23

    DNA imaging techniques using optical microscopy have found numerous applications in biology, chemistry and physics and are based on relatively expensive, bulky and complicated set-ups that limit their use to advanced laboratory settings. Here we demonstrate imaging and length quantification of single molecule DNA strands using a compact, lightweight and cost-effective fluorescence microscope installed on a mobile phone. In addition to an optomechanical attachment that creates a high contrast dark-field imaging setup using an external lens, thin-film interference filters, a miniature dovetail stage and a laser-diode for oblique-angle excitation, we also created a computational framework and a mobile phone application connected to a server back-end for measurement of the lengths of individual DNA molecules that are labeled and stretched using disposable chips. Using this mobile phone platform, we imaged single DNA molecules of various lengths to demonstrate a sizing accuracy of <1 kilobase-pairs (kbp) for 10 kbp and longer DNA samples imaged over a field-of-view of ∼2 mm².

  15. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-08-12

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.
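
    A hedged sketch of the PCA-then-ANN pipeline on synthetic data (the field model, sensor layout, and network sizes below are invented stand-ins for the 9-sensor experiment, not the authors' setup):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the 9-sensor network: field readings vs. 1-D position
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 100.0, size=(2000, 1))          # ground-truth position (mm)
sensor_x = np.linspace(0.0, 100.0, 9)                  # sensor locations (mm)
# dipole-like falloff plus Gaussian noise as an illustrative field model
fields = 1.0 / ((pos - sensor_x) ** 2 + 25.0) + rng.normal(0, 1e-5, (2000, 9))

model = make_pipeline(PCA(n_components=4),             # pseudo-linear filter
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=2000, random_state=0))
model.fit(fields[:1500], pos[:1500].ravel())           # field-position mapping
pred = model.predict(fields[1500:])
print("RMS error (mm):", np.sqrt(np.mean((pred - pos[1500:].ravel()) ** 2)))
```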

  16. Comparison Between 2D and 3D Simulations of Rate Dependent Friction Using DEM

    NASA Astrophysics Data System (ADS)

    Wang, C.; Elsworth, D.

    2017-12-01

    Rate-state dependent constitutive laws of frictional evolution have been successful in representing many of the first- and second-order components of earthquake rupture. Although this constitutive law has been successfully applied in numerical models, difficulty remains in efficient implementation of this constitutive law in computationally expensive granular mechanics simulations using discrete element methods (DEM). This study introduces a novel approach to implementing a rate-dependent constitutive relation of contact friction in DEM. This is essentially an implementation of a slip-weakening constitutive law onto local particle contacts without sacrificing computational efficiency. This implementation allows the analysis of slip stability of simulated fault gouge materials. Velocity-stepping experiments are reported on both uniform and textured distributions of quartz and talc as 3D analogs of gouge mixtures. Distinct local slip stability parameters (a-b) are assigned to the quartz and talc, respectively. We separately vary talc content from 0 to 100% in the uniform mixtures and talc layer thickness from 1 to 20 particles in the textured mixtures. Applied shear displacements are cycled through velocities of 1 μm/s and 10 μm/s. Frictional evolution data are collected and compared to 2D simulation results. We show that dimensionality significantly impacts the evolution of friction. 3D simulation results are more representative of laboratory-observed behavior, and numerical noise is shown at a magnitude of 0.01 in terms of the friction coefficient. Stability parameters (a-b) can be straightforwardly obtained from analyzing velocity steps, and are different from the locally assigned (a-b) values. Sensitivity studies on normal stress, shear velocity, particle size, local (a-b) values, and characteristic slip distance (Dc) show that the implementation is sensitive to local (a-b) values and to the relation between Dc and particle size.
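
    The contact-scale constitutive law being implemented can be sanity-checked in isolation. The sketch below integrates the rate-and-state "aging" law through a 1 to 10 μm/s velocity step and recovers the steady-state friction change (a - b) ln(V2/V1); the parameter values are generic laboratory-scale assumptions, not those of the DEM runs:

```python
import numpy as np

mu0, a, b = 0.6, 0.010, 0.015          # (a - b) < 0: velocity weakening
Dc, V0 = 10e-6, 1e-6                   # characteristic slip (m), reference velocity
dt, nsteps = 1e-3, 40000               # 40 s of slip history

# Imposed slip velocity: step from 1 um/s to 10 um/s at t = 20 s
V = np.where(np.arange(nsteps) * dt < 20.0, 1e-6, 1e-5)

theta = Dc / V[0]                      # start at steady state
mu = np.empty(nsteps)
for i in range(nsteps):
    theta += dt * (1.0 - V[i] * theta / Dc)       # aging-law state evolution
    mu[i] = mu0 + a * np.log(V[i] / V0) + b * np.log(V0 * theta / Dc)

# Steady-state friction change across the step recovers (a - b) * ln(10)
print("measured d_mu_ss:", mu[-1] - mu[int(19.0 / dt)])
print("theory (a-b)*ln(10):", (a - b) * np.log(10.0))
```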

  17. A coevolution analysis for identifying protein-protein interactions by Fourier transform.

    PubMed

    Yin, Changchuan; Yau, Stephen S-T

    2017-01-01

    Protein-protein interactions (PPIs) play key roles in life processes, such as signal transduction, transcription regulation, and immune response. Identification of PPIs enables better understanding of the functional networks within a cell. Common experimental methods for identifying PPIs are time-consuming and expensive. However, recent developments in computational approaches for inferring PPIs from protein sequences based on coevolution theory avoid these problems. In the coevolution theory model, interacting proteins may show coevolutionary mutations and have similar phylogenetic trees. The existing coevolution methods depend on multiple sequence alignments (MSA); however, the MSA-based coevolution methods often produce high rates of false positive interactions. In this paper, we present a computational method using an alignment-free approach to accurately detect PPIs and reduce false positives. In the method, protein sequences are numerically represented by biochemical properties of amino acids, which reflect the structural and functional differences of proteins. Fourier transform is applied to the numerical representation of protein sequences to capture the dissimilarities of protein sequences in a biophysical context. The method is assessed for predicting PPIs in Ebola virus. The results indicate strong coevolution between the protein pairs (NP-VP24, NP-VP30, NP-VP40, VP24-VP30, VP24-VP40, and VP30-VP40). The method is also validated for PPIs in the influenza and E. coli genomes. Since our method can reduce false positives and increase the specificity of PPI prediction, it offers an effective tool to understand mechanisms of disease pathogens and find potential targets for drug design. The Python programs in this study are available to the public at https://github.com/cyinbox/PPI.

  18. An Immersed Boundary method with divergence-free velocity interpolation and force spreading

    NASA Astrophysics Data System (ADS)

    Bao, Yuanxun; Donev, Aleksandar; Griffith, Boyce E.; McQueen, David M.; Peskin, Charles S.

    2017-10-01

    The Immersed Boundary (IB) method is a mathematical framework for constructing robust numerical methods to study fluid-structure interaction in problems involving an elastic structure immersed in a viscous fluid. The IB formulation uses an Eulerian representation of the fluid and a Lagrangian representation of the structure. The Lagrangian and Eulerian frames are coupled by integral transforms with delta function kernels. The discretized IB equations use approximations to these transforms with regularized delta function kernels to interpolate the fluid velocity to the structure, and to spread structural forces to the fluid. It is well-known that the conventional IB method can suffer from poor volume conservation since the interpolated Lagrangian velocity field is not generally divergence-free, and so this can cause spurious volume changes. In practice, the lack of volume conservation is especially pronounced for cases where there are large pressure differences across thin structural boundaries. The aim of this paper is to greatly reduce the volume error of the IB method by introducing velocity-interpolation and force-spreading schemes with the properties that the interpolated velocity field in which the structure moves is at least C1 and satisfies a continuous divergence-free condition, and that the force-spreading operator is the adjoint of the velocity-interpolation operator. We confirm through numerical experiments in two and three spatial dimensions that this new IB method is able to achieve substantial improvement in volume conservation compared to other existing IB methods, at the expense of a modest increase in the computational cost. Further, the new method provides smoother Lagrangian forces (tractions) than traditional IB methods. The method presented here is restricted to periodic computational domains. Its generalization to non-periodic domains is important future work.
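
    For reference, the conventional IB transfer operators that the paper improves upon are built from a regularized delta kernel. The sketch below uses Peskin's standard 4-point kernel in one dimension and verifies that spreading is the adjoint of interpolation; the grid size, field, and marker position are arbitrary illustration choices:

```python
import numpy as np

def phi4(r):
    """Peskin's standard 4-point regularized delta kernel (1-D)."""
    r = np.abs(r)
    out = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    out[m1] = (3.0 - 2.0 * r[m1] + np.sqrt(1.0 + 4.0 * r[m1] - 4.0 * r[m1] ** 2)) / 8.0
    out[m2] = (5.0 - 2.0 * r[m2] - np.sqrt(-7.0 + 12.0 * r[m2] - 4.0 * r[m2] ** 2)) / 8.0
    return out

h = 1.0 / 64                                   # Eulerian grid spacing
x = np.arange(64) * h                          # grid points (marker kept interior)
X = 0.3712                                     # one Lagrangian marker position
w = phi4((x - X) / h)                          # kernel weights (4-point support)

u = np.sin(2 * np.pi * x)                      # an Eulerian velocity field
U = np.sum(u * w)                              # interpolate u to the marker
F = 2.5                                        # Lagrangian force on the marker
f = F * w / h                                  # spread the force to the grid

# Adjointness of spreading and interpolation: <f, u>_grid * h == F * U
print(np.sum(f * u) * h, F * U)                # the two numbers agree
```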

  19. Shear Heating-Induced Thermal Pressurization During the Nucleation of Earthquakes

    NASA Astrophysics Data System (ADS)

    Schmitt, S. V.; Segall, P.

    2008-12-01

    Shear heating-induced thermal pressurization has long been posited as a weakening mechanism during earthquakes. It is often assumed that thermal pressurization does not become important until earthquakes become moderate to large in magnitude. Schmitt et al. [AGU, 2007] confirmed the estimate of Segall and Rice [JGR, 2006] that thermal pressurization becomes dominant during the quasi-static nucleation phase by conducting 2D numerical simulations that account for full thermomechanical coupling, with rate and state dependent friction. In that work, thermal pressurization becomes the dominant weakening mechanism at slip rates of 10-5 to 10-3 m/s, depending on the fault zone hydraulic diffusivity. Interestingly, the thermal pressurization process leads to a contraction of the nucleation zone, rather than the growing crack (aging law) or unidirectional slip pulse (slip law) associated with drained rate- and state-dependent frictional nucleation. The results of Schmitt et al. [AGU, 2007] had a shortcoming in that the principal slip surface was treated as a zero-width feature, while in reality it should be a finite-width shear zone. We address that shortcoming with a new set of numerical simulations. We assume a finite-width fault governed by rate and state friction with the radiation damping approximation to simulate inertial effects. Both thermal and hydraulic diffusion are computed via finite differences on separate, coupled grids that adaptively remesh to minimize computational expense while maintaining accuracy. New results suggest that the thermal pressurization effect is modestly reduced by including the finite thickness of the shear zone. Despite the reduction in the effect, the new results still indicate that (1) thermal pressurization is important before seismic slip and (2) thermal pressurization restricts growth of the nucleation zone.

  20. The Laguerre finite difference one-way equation solver

    NASA Astrophysics Data System (ADS)

    Terekhov, Andrew V.

    2017-05-01

    This paper presents a new finite difference algorithm for solving the 2D one-way wave equation with a preliminary approximation of a pseudo-differential operator by a system of partial differential equations. As opposed to the existing approaches, the integral Laguerre transform instead of the Fourier transform is used. After carrying out the approximation of spatial variables, it is possible to obtain systems of linear algebraic equations with better computing properties and to reduce computer costs for their solution. High accuracy of calculations is attained at the expense of employing finite difference approximations of higher accuracy order that are based on the dispersion-relationship-preserving method and the Richardson extrapolation in the downward continuation direction. The numerical experiments have verified that, as compared to the spectral difference method based on the Fourier transform, the new algorithm allows one to calculate wave fields with a higher degree of accuracy and a lower level of numerical noise and artifacts, including those for non-smooth velocity models. In the context of solving the geophysical problem, the post-stack migration for velocity models of the types Syncline and Sigsbee2A has been carried out. It is shown that the images obtained contain less noise and are considerably better focused as compared to those obtained by the known Fourier Finite Difference and Phase-Shift Plus Interpolation methods. There is an opinion that purely finite difference approaches do not allow carrying out the seismic migration procedure with sufficient accuracy; however, the results obtained disprove this statement. For the supercomputer implementation, it is proposed to use the parallel dichotomy algorithm when solving systems of linear algebraic equations with block-tridiagonal matrices.

  1. Relative importance of first and second derivatives of nuclear magnetic resonance chemical shifts and spin-spin coupling constants for vibrational averaging.

    PubMed

    Dracínský, Martin; Kaminský, Jakub; Bour, Petr

    2009-03-07

    The relative importance of anharmonic corrections to molecular vibrational energies, nuclear magnetic resonance (NMR) chemical shifts, and J-coupling constants was assessed for a model set of methane derivatives, differently charged alanine forms, and sugar models. Molecular quartic force fields and NMR parameter derivatives were obtained quantum mechanically by numerical differentiation. In most cases the harmonic vibrational function combined with the property second derivatives provided the largest correction of the equilibrium values, while anharmonic corrections (third and fourth energy derivatives) were found less important. The most computationally expensive off-diagonal quartic energy derivatives, involving four different coordinates, provided a negligible contribution. The vibrational corrections of NMR shifts were small and yielded a convincing improvement only for very accurate wave function calculations. For the indirect spin-spin coupling constants the averaging significantly improved already the equilibrium values obtained at the density functional theory level. Both first and complete second shielding derivatives were found important for the shift corrections, while for the J-coupling constants the vibrational parts were dominated by the diagonal second derivatives. The vibrational corrections were also applied to some isotopic effects, where the corrected values reproduced the experiment reasonably well, but only if a full second-order expansion of the NMR parameters was included. Contributions of individual vibrational modes to the averaging are discussed. Similar behavior was found for the methane derivatives and for the larger and polar molecules. The vibrational averaging thus facilitates interpretation of previous experimental results and suggests that it can make future molecular structural studies more reliable. Because of the lengthy numerical differentiation required to compute the NMR parameter derivatives, their analytical implementation in future quantum chemistry packages is desirable.
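
    The numerical differentiation step itself is elementary. Below is a sketch of the central-difference first and diagonal second property derivatives along one normal coordinate; the step size and the toy "shift surface" are illustrative assumptions, and in practice each call to prop() stands for a full quantum-chemical property evaluation:

```python
def property_derivatives(prop, h=0.05):
    """Central finite differences for the first and (diagonal) second
    derivative of a property along one normal coordinate q. `prop(q)`
    is a hypothetical stand-in for a quantum-chemical calculation."""
    p0, pp, pm = prop(0.0), prop(h), prop(-h)
    d1 = (pp - pm) / (2.0 * h)              # first derivative, O(h^2) accurate
    d2 = (pp - 2.0 * p0 + pm) / h ** 2      # second derivative, O(h^2) accurate
    return d1, d2

# Toy stand-in for a chemical shielding surface along one mode
sigma = lambda q: 31.2 - 4.0 * q + 7.5 * q ** 2
print(property_derivatives(sigma))          # ~ (-4.0, 15.0), exact for a quadratic
```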

  2. An intercomparison of methods for solving the stochastic collection equation with a focus on cloud radar Doppler spectra in drizzling stratocumulus

    NASA Astrophysics Data System (ADS)

    Lee, H.; Fridlind, A. M.; Ackerman, A. S.; Kollias, P.

    2017-12-01

    Cloud radar Doppler spectra provide rich information for evaluating the fidelity of particle size distributions from cloud models. The intrinsic simplifications of bulk microphysics schemes generally preclude the generation of plausible Doppler spectra, unlike bin microphysics schemes, which develop particle size distributions more organically at substantial computational expense. However, bin microphysics schemes face the difficulty of numerical diffusion leading to overly rapid large drop formation, particularly while solving the stochastic collection equation (SCE). Because such numerical diffusion can cause an even greater overestimation of radar reflectivity, an accurate method for solving the SCE is essential for bin microphysics schemes to accurately simulate Doppler spectra. While several methods have been proposed to solve the SCE, here we examine those of Berry and Reinhardt (1974, BR74), Jacobson et al. (1994, J94), and Bott (2000, B00). Using a simple box model to simulate drop size distribution evolution during precipitation formation with a realistic kernel, it is shown that each method yields a converged solution as the resolution of the drop size grid increases. However, the BR74 and B00 methods yield nearly identical size distributions in time, whereas the J94 method produces consistently larger drops throughout the simulation. In contrast to an earlier study, the performance of the B00 method is found to be satisfactory; it converges at relatively low resolution and long time steps, and its computational efficiency is the best among the three methods considered here. Finally, a series of idealized stratocumulus large-eddy simulations are performed using the J94 and B00 methods. The reflectivity size distributions and Doppler spectra obtained from the different SCE solution methods are presented and compared with observations.
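
    A useful reference point when comparing such schemes is the naive explicit discretization of the stochastic collection (Smoluchowski) equation on a fixed mass grid, which all three surveyed methods improve upon in accuracy or cost. The sketch below uses a Golovin-type sum kernel; the kernel constant, grid, and time step are illustrative assumptions, and this is not Bott's flux method:

```python
import numpy as np

def sce_step(n, K, dt):
    """One forward-Euler step of the discrete stochastic collection
    equation: bin k holds drops of mass k+1 units. A naive reference
    solver (gain/loss form), not the BR74, J94, or B00 scheme."""
    nbins = len(n)
    gain = np.zeros(nbins)
    for i in range(nbins):
        for j in range(nbins):
            k = i + j + 1               # mass (i+1)+(j+1) lands in bin k
            if k < nbins:
                gain[k] += 0.5 * K[i, j] * n[i] * n[j]
    loss = n * (K @ n)                  # removal of collided drops
    return n + dt * (gain - loss)

nbins = 60
mass = np.arange(1, nbins + 1, dtype=float)
K = 1e-3 * np.add.outer(mass, mass)     # Golovin (sum) kernel, illustrative rate
n = np.exp(-mass); n /= n.sum()         # initial exponential size distribution
for _ in range(200):
    n = sce_step(n, K, dt=0.05)
print("mean drop mass:", (mass * n).sum() / n.sum())  # grows as large drops form
```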

  3. Low-resolution simulations of vesicle suspensions in 2D

    NASA Astrophysics Data System (ADS)

    Kabacaoğlu, Gökberk; Quaife, Bryan; Biros, George

    2018-03-01

    Vesicle suspensions appear in many biological and industrial applications. These suspensions are characterized by rich and complex dynamics of vesicles due to their interaction with the bulk fluid, and their large deformations and nonlinear elastic properties. Many existing state-of-the-art numerical schemes can resolve such complex vesicle flows. However, even when using provably optimal algorithms, these simulations can be computationally expensive, especially for suspensions with a large number of vesicles. These high computational costs can limit the use of simulations for parameter exploration, optimization, or uncertainty quantification. One way to reduce the cost is to use low-resolution discretizations in space and time. However, it is well-known that simply reducing the resolution results in vesicle collisions, numerical instabilities, and often in erroneous results. In this paper, we investigate the effect of a number of algorithmic empirical fixes (which are commonly used by many groups) in an attempt to make low-resolution simulations more stable and more predictive. Based on our empirical studies for a number of flow configurations, we propose a scheme that attempts to integrate these fixes in a systematic way. This low-resolution scheme is an extension of our previous work [51,53]. Our low-resolution correction algorithms (LRCA) include anti-aliasing and membrane reparametrization for avoiding spurious oscillations in vesicles' membranes, adaptive time stepping and a repulsion force for handling vesicle collisions, and correction of vesicles' area and arc-length for maintaining physical vesicle shapes. We perform a systematic error analysis by comparing the low-resolution simulations of dilute and dense suspensions with their high-fidelity, fully resolved, counterparts. We observe that the LRCA enables both efficient and statistically accurate low-resolution simulations of vesicle suspensions, while being 10× to 100× faster.
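
    Two of the listed fixes are simple enough to sketch directly. Assuming a 2D membrane sampled at N marker points (Python; the paper's actual corrections may be formulated differently, e.g. as constrained optimizations), area correction can rescale about the centroid, and reparametrization can redistribute markers uniformly in arc length:

        import numpy as np

        def polygon_area(x, y):
            # Shoelace formula for a closed curve sampled at N marker points.
            return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

        def correct_area(x, y, target_area):
            # Rescale the membrane about its centroid so the enclosed area
            # matches the physically conserved target area.
            cx, cy = x.mean(), y.mean()
            s = np.sqrt(target_area / polygon_area(x, y))
            return cx + s * (x - cx), cy + s * (y - cy)

        def reparametrize(x, y):
            # Redistribute markers uniformly in arc length; marker clustering
            # is one source of spurious membrane oscillations.
            pts = np.column_stack([x, y])
            seg = np.linalg.norm(np.diff(pts, axis=0, append=pts[:1]), axis=1)
            s = np.concatenate([[0.0], np.cumsum(seg)])
            s_new = np.linspace(0.0, s[-1], len(x), endpoint=False)
            return (np.interp(s_new, s, np.append(x, x[0])),
                    np.interp(s_new, s, np.append(y, y[0])))

    In a time stepper these would be applied after each step, with the target area taken from the initial configuration.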

  4. Accuracy and efficiency considerations for wide-angle wavefield extrapolators and scattering operators

    NASA Astrophysics Data System (ADS)

    Thomson, C. J.

    2005-10-01

    Several observations are made concerning the numerical implementation of wide-angle one-way wave equations, using for illustration scalar waves obeying the Helmholtz equation in two space dimensions. This simple case permits clear identification of a sequence of physically motivated approximations of use when the mathematically exact pseudo-differential operator (PSDO) one-way method is applied. As intuition suggests, these approximations largely depend on the medium gradients in the direction transverse to the main propagation direction. A key point is that narrow-angle approximations are to be avoided in the interests of accuracy. Another key consideration stems from the fact that the so-called `standard-ordering' PSDO indicates how lateral interpolation of the velocity structure can significantly reduce computational costs associated with the Fourier or plane-wave synthesis lying at the heart of the calculations. A third important point is that the PSDO theory shows what approximations are necessary in order to generate an exponential one-way propagator for the laterally varying case, representing the intuitive extension of classical integral-transform solutions for a laterally homogeneous medium. This exponential propagator permits larger forward stepsizes. Numerical comparisons with Helmholtz (i.e. full) wave-equation finite-difference solutions are presented for various canonical problems. These include propagation along an interfacial gradient, the effects of a compact inclusion and the formation of extended transmitted and backscattered wave trains by model roughness. The ideas extend to the 3-D, generally anisotropic case and to multiple scattering by invariant embedding. It is concluded that the method is very competitive, striking a new balance between simplifying approximations and computational labour. Complicated wave-scattering effects are retained without the need for expensive global solutions, providing a robust and flexible modelling tool.
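
    In the laterally homogeneous limit the exponential one-way propagator reduces to the classical phase-shift operator, a useful reference point for what the PSDO machinery generalizes. A minimal 2D sketch (Python; constant velocity per depth step, with none of the wide-angle or lateral-interpolation corrections discussed above):

        import numpy as np

        def oneway_step(u, dz, omega, c, dx):
            # Advance a monochromatic wavefield u(x) by one depth step dz with
            # the exact homogeneous-medium one-way propagator:
            #   U(kx, z + dz) = U(kx, z) * exp(i * dz * sqrt(omega^2/c^2 - kx^2))
            kx = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)
            kz2 = (omega / c) ** 2 - kx ** 2
            # Propagating components get a phase shift; evanescent ones decay.
            kz = np.where(kz2 >= 0.0,
                          np.sqrt(np.abs(kz2)), 1j * np.sqrt(np.abs(kz2)))
            return np.fft.ifft(np.fft.fft(u) * np.exp(1j * kz * dz))

        # e.g. marching a 30 Hz field through a 2 km/s medium in 5 m steps:
        u = np.exp(-np.linspace(-500, 500, 1024) ** 2 / 2e3).astype(complex)
        for _ in range(200):
            u = oneway_step(u, dz=5.0, omega=2 * np.pi * 30.0, c=2000.0, dx=1.0)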

  5. Using Machine Learning as a fast emulator of physical processes within the Met Office's Unified Model

    NASA Astrophysics Data System (ADS)

    Prudden, R.; Arribas, A.; Tomlinson, J.; Robinson, N.

    2017-12-01

    The Unified Model is a numerical model of the atmosphere used at the UK Met Office (and numerous partner organisations, including the Korean Meteorological Agency, the Australian Bureau of Meteorology and the US Air Force) for both weather and climate applications. Specifically, dynamical models such as the Unified Model are now a central part of weather forecasting. Starting from basic physical laws, these models make it possible to predict events such as storms before they have even begun to form. The Unified Model can be simply described as having two components: one component solves the Navier-Stokes equations (usually referred to as the "dynamics"); the other solves relevant sub-grid physical processes (usually referred to as the "physics"). Running weather forecasts requires substantial computing resources - for example, the UK Met Office operates the largest operational High Performance Computer in Europe - and the cost of a typical simulation is split roughly 50% in the "dynamics" and 50% in the "physics". There is therefore a strong incentive to reduce the cost of weather forecasts, and Machine Learning is a possible option because, once a machine learning model has been trained, it is often much faster to run than a full simulation. This is the motivation for a technique called model emulation: the idea is to build a fast statistical model which closely approximates a far more expensive simulation. In this paper we discuss the use of Machine Learning as an emulator to replace the "physics" component of the Unified Model. Various approaches and options will be presented, and the implications for further model development, operational running of forecasting systems, development of data assimilation schemes, and development of ensemble prediction techniques will be discussed.
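
    The emulation recipe itself is generic enough to sketch. Assuming training pairs of per-column physics inputs X and physics tendencies Y have already been extracted from full-model runs (replaced below by synthetic placeholders), scikit-learn's MLPRegressor stands in for whatever architecture the operational work would actually use (Python):

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        # Placeholder training set: rows are grid columns, X holds physics-scheme
        # inputs (e.g. T, q, p profiles), Y the tendencies the scheme returned.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(10_000, 60))
        Y = np.tanh(X @ rng.normal(size=(60, 30)))      # stand-in "physics"

        X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2,
                                                  random_state=0)
        emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200)
        emulator.fit(X_tr, Y_tr)

        # At run time the trained network replaces the expensive physics call.
        rmse = np.sqrt(np.mean((emulator.predict(X_te) - Y_te) ** 2))
        print(f"held-out RMSE: {rmse:.3f}")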

  6. A coevolution analysis for identifying protein-protein interactions by Fourier transform

    PubMed Central

    Yin, Changchuan; Yau, Stephen S. -T.

    2017-01-01

    Protein-protein interactions (PPIs) play key roles in life processes, such as signal transduction, transcription regulation, and immune response. Identification of PPIs enables better understanding of the functional networks within a cell. Common experimental methods for identifying PPIs are time consuming and expensive. However, recent developments in computational approaches for inferring PPIs from protein sequences based on coevolution theory avoid these problems. In the coevolution model, interacting proteins may show coevolutionary mutations and have similar phylogenetic trees. Existing coevolution methods depend on multiple sequence alignments (MSA); however, MSA-based coevolution methods often produce high rates of false positive interactions. In this paper, we present a computational method using an alignment-free approach to accurately detect PPIs and reduce false positives. In the method, protein sequences are numerically represented by biochemical properties of amino acids, which reflect the structural and functional differences of proteins. The Fourier transform is applied to the numerical representation of protein sequences to capture the dissimilarities of protein sequences in a biophysical context. The method is assessed for predicting PPIs in Ebola virus. The results indicate strong coevolution between the protein pairs (NP-VP24, NP-VP30, NP-VP40, VP24-VP30, VP24-VP40, and VP30-VP40). The method is also validated for PPIs in the influenza and E. coli genomes. Since our method can reduce false positives and increase the specificity of PPI prediction, it offers an effective tool for understanding the mechanisms of disease pathogens and finding potential targets for drug design. The Python programs used in this study are available to the public at https://github.com/cyinbox/PPI. PMID:28430779
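
    The numerical-representation step can be sketched with a single property scale. The code below (Python) uses Kyte-Doolittle hydropathy as the biochemical encoding and compares normalized Fourier power spectra; the published method may combine several property scales and uses its own spectrum-scaling to a common length, for which zero-padding is a stand-in here:

        import numpy as np

        # Kyte-Doolittle hydropathy as one biochemical encoding (assumed here).
        KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
              'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
              'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
              'Y': -1.3, 'V': 4.2}

        def power_spectrum(seq, n):
            x = np.array([KD[a] for a in seq], dtype=float)
            x -= x.mean()                    # drop the DC component
            return np.abs(np.fft.rfft(x, n=n)) ** 2

        def spectral_distance(seq1, seq2):
            # Zero-pad to a common length so the spectra are comparable (the
            # published method uses its own even-scaling step instead).
            n = max(len(seq1), len(seq2))
            p1, p2 = power_spectrum(seq1, n), power_spectrum(seq2, n)
            return np.linalg.norm(p1 / p1.sum() - p2 / p2.sum())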

  7. Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Y; Glascoe, L

    The computational modeling of the biodegradation of contaminated groundwater systems, accounting for biochemical reactions coupled to contaminant transport, is a valuable tool both for the field engineer/planner with limited computational resources and for the expert computational researcher less constrained by time and computer power. Several analytical and numerical computer models have been and are being developed to cover this spectrum of practical computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information, with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.
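
    The screening-versus-resolution tradeoff is already visible in the simplest biodegradation model, first-order decay dC/dt = -kC: the closed form is free to evaluate, while a numerical integrator buys accuracy with step count (Python sketch; rate and horizon illustrative, and real reactive-transport codes of course solve coupled PDEs):

        import numpy as np

        k, C0, T = 0.05, 100.0, 200.0       # decay rate [1/d], conc., horizon [d]

        def C_exact(t):                     # analytical screening model
            return C0 * np.exp(-k * t)

        # Numerical counterpart: explicit Euler, accuracy bought with step count.
        for dt in (10.0, 1.0, 0.1):
            steps = int(T / dt)
            C = C0
            for _ in range(steps):
                C += dt * (-k * C)
            err = abs(C - C_exact(T)) / C_exact(T)
            print(f"dt={dt:5.1f}  steps={steps:5d}  relative error={err:.2e}")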

  8. A comparison of the costs of laparoscopic myomectomy and open myomectomy at a teaching hospital in southern Taiwan.

    PubMed

    Chang, Chi-Chang

    2013-06-01

    To compare the costs of traditional open myomectomy (OM) with laparoscopic myomectomy (LM). A retrospective review was conducted of the medical records of 155 women who underwent traditional open myomectomy (OM) or laparoscopic myomectomy (LM) in a teaching hospital in Taiwan. The total medical service expense and the patient out-of-pocket expense were significantly higher for women who received LM than for women who received OM. However, the operative time and blood loss during surgery were significantly less in women who received LM than in women who received OM. The overall expense is higher for LM than for OM; however, the laparoscopic approach has numerous advantages for patients, including shorter operative time, less blood loss, and a more rapid recovery time. No difference existed in the rate of complications or in the recurrence of disease for the two procedures.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are made tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
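
    The two-objective sorting at the core of the selection step is compact enough to sketch (Python; notation mine): minimize the expensive function value f while maximizing the minimum distance d to previously evaluated points, and keep the non-dominated candidates.

        import numpy as np

        def first_nondominated_front(f, d):
            # f: expensive objective (minimize); d: min distance to previously
            # evaluated points (maximize). Point j dominates i if it is no worse
            # in both objectives and strictly better in at least one.
            f, d = np.asarray(f), np.asarray(d)
            front = []
            for i in range(len(f)):
                dominated = np.any((f <= f[i]) & (d >= d[i]) &
                                   ((f < f[i]) | (d > d[i])))
                if not dominated:
                    front.append(i)
            return front

        # Candidates survive by having a good value OR by being well separated:
        print(first_nondominated_front(f=[1.0, 2.0, 3.0, 2.5],
                                       d=[0.1, 0.5, 0.9, 0.2]))   # -> [0, 1, 2]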

  10. Supercomputing Aspects for Simulating Incompressible Flow

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, Cetin C.

    2000-01-01

    The primary objective of this research is to support the design of liquid rocket systems for the Advanced Space Transportation System. Since the space launch systems in the near future are likely to rely on liquid rocket engines, increasing the efficiency and reliability of the engine components is an important task. One of the major problems in the liquid rocket engine is to understand the fluid dynamics of fuel and oxidizer flows from the fuel tank to the plume. Understanding the flow through the entire turbo-pump geometry through numerical simulation will be of significant value toward design. One of the milestones of this effort is to develop, apply and demonstrate the capability and accuracy of 3D CFD methods as efficient design analysis tools on high performance computer platforms. The development of the Message Passing Interface (MPI) and Multi Level Parallel (MLP) versions of the INS3D code is currently underway. The serial version of the INS3D code is a multidimensional incompressible Navier-Stokes solver based on overset grid technology. INS3D-MPI is based on explicit message passing across processors and is primarily suited for distributed-memory systems. INS3D-MLP is based on the multi-level parallel method and is suitable for distributed shared-memory systems. For the entire turbo-pump simulations, moving boundary capability and efficient time-accurate integration methods are built into the flow solver. To handle the geometric complexity and moving boundary problems, an overset grid scheme is incorporated with the solver so that new connectivity data are obtained at each time step. The Chimera overlapped grid scheme allows subdomains to move relative to each other, and provides great flexibility when the boundary movement creates large displacements. Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow, which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in the present computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.
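
    The pressure projection idea mentioned above is worth a minimal illustration. On a doubly periodic square grid the pressure Poisson solve can be done spectrally (Python sketch; nothing here is INS3D-specific, which uses overset grids and very different solvers):

        import numpy as np

        def project(u, v, dx, dt, rho=1.0):
            # Make a provisional velocity (u, v) on a periodic grid divergence
            # free: solve lap(p) = (rho/dt) div(u*), then u = u* - (dt/rho) grad(p).
            nx = u.shape[0]
            k = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
            kx, ky = np.meshgrid(k, k, indexing="ij")
            k2 = kx ** 2 + ky ** 2
            k2[0, 0] = 1.0                        # mean pressure is arbitrary

            u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
            div_hat = 1j * kx * u_hat + 1j * ky * v_hat
            p_hat = (rho / dt) * div_hat / (-k2)  # spectral Poisson solve
            p_hat[0, 0] = 0.0
            u = u - dt / rho * np.real(np.fft.ifft2(1j * kx * p_hat))
            v = v - dt / rho * np.real(np.fft.ifft2(1j * ky * p_hat))
            return u, v

    Applying the projection after each advection/diffusion update enforces the incompressibility constraint without the subiterations the artificial compressibility method needs.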

  11. Superintendents' Perceptions of 1:1 Initiative Implementation and Sustainability

    ERIC Educational Resources Information Center

    Cole, Bobby Virgil, Jr.; Sauers, Nicholas J.

    2018-01-01

    One of the fastest growing, most discussed, and most expensive technology initiatives over the last decade has been one-to-one (1:1) computing initiatives. The purpose of this study was to examine key factors that influenced implementing and sustaining 1:1 computing initiatives from the perspective of school superintendents. Nine superintendents…

  12. Data Bases at a State Institution--Costs, Uses and Needs. AIR Forum Paper 1978.

    ERIC Educational Resources Information Center

    McLaughlin, Gerald W.

    The cost-benefit of administrative data at a state college is placed in perspective relative to the institutional involvement in computer use. The costs of computer operations, personnel, and peripheral equipment expenses related to instruction are analyzed. Data bases and systems support institutional activities, such as registration, and aid…

  13. Film Library Information Management System.

    ERIC Educational Resources Information Center

    Minnella, C. Vincent; And Others

    The computer program described not only allows the user to determine rental sources for a particular film title quickly, but also to select the least expensive of the sources. This program developed at SUNY Cortland's Sperry Learning Resources Center and Computer Center is designed to maintain accurate data on rental and purchase films in both…

  14. Appropedia as a Tool for Service Learning in Sustainable Development

    ERIC Educational Resources Information Center

    Pearce, Joshua M.

    2009-01-01

    Numerous studies have demonstrated that university students are capable of contributing to sustainable development while improving their academic skills. Unfortunately for many institutions, the expense of sending large cohorts of students on international service learning trips is prohibitive. Yet, students remain enthusiastic and well equipped…

  15. Algorithm and code development for unsteady three-dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru

    1994-01-01

    Aeroelastic tests involve substantial cost and risk. An aeroelastic wind-tunnel experiment is an order of magnitude more expensive than a parallel experiment involving only aerodynamics. By complementing wind-tunnel experiments with numerical simulations, the overall cost of aircraft development can be considerably reduced. In order to accurately compute aeroelastic phenomena it is necessary to solve the unsteady Euler/Navier-Stokes equations simultaneously with the structural equations of motion. These equations accurately describe the flow phenomena for aeroelastic applications. At ARC a code, ENSAERO, is being developed for computing the unsteady aerodynamics and aeroelasticity of aircraft; it solves the Euler/Navier-Stokes equations. The purpose of this cooperative agreement was to enhance ENSAERO in both algorithm and geometric capabilities. During the last five years, the algorithms of the code have been enhanced extensively by using high-resolution upwind algorithms and efficient implicit solvers. The zonal capability of the code has been extended from a one-to-one grid interface to a mismatching unsteady zonal interface. The geometric capability of the code has been extended from a single oscillating wing case to a full-span wing-body configuration with oscillating control surfaces. Each time a new capability was added, a proper validation case was simulated, and the capability of the code was demonstrated.

  16. Balancing accuracy, efficiency, and flexibility in a radiative transfer parameterization for dynamical models

    NASA Astrophysics Data System (ADS)

    Pincus, R.; Mlawer, E. J.

    2017-12-01

    Radiation is a key process in numerical models of the atmosphere. The problem is well understood and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations, despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine tuning for efficiency. The challenge lies in the coupling of this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages, so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally representative set of atmospheric profiles using a relatively high-resolution spectral discretization.

  17. 2D OR NOT 2D: THE EFFECT OF DIMENSIONALITY ON THE DYNAMICS OF FINGERING CONVECTION AT LOW PRANDTL NUMBER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garaud, Pascale; Brummell, Nicholas

    2015-12-10

    Fingering convection (otherwise known as thermohaline convection) is an instability that occurs in stellar radiative interiors in the presence of unstable compositional gradients. Numerical simulations have been used in order to estimate the efficiency of mixing induced by this instability. However, fully three-dimensional (3D) computations in the parameter regime appropriate for stellar astrophysics (i.e., low Prandtl number) are prohibitively expensive. This raises the question of whether two-dimensional (2D) simulations could be used instead to achieve the same goals. In this work, we address this issue by comparing the outcome of 2D and 3D simulations of fingering convection at low Prandtl number. We find that 2D simulations are never appropriate. However, we also find that the required 3D computational domain does not have to be very wide: the third dimension only needs to contain a minimum of two wavelengths of the fastest-growing linearly unstable mode to capture the essentially 3D dynamics of small-scale fingering. Narrow domains, however, should still be used with caution since they could limit the subsequent development of any large-scale dynamics typically associated with fingering convection.

  18. Data assimilation for unsaturated flow models with restart adaptive probabilistic collocation based Kalman filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Li, Weixuan; Zeng, Lingzao

    2016-06-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality". When the system nonlinearity is strong and the number of parameters is large, PCKF can be even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to eliminate the inconsistency between model parameters and states. The performance of RAPCKF is tested with numerical cases of unsaturated flow models. It is shown that RAPCKF is more efficient than EnKF with the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear and high-dimensional problems.
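
    For reference, the stochastic EnKF analysis step that PCKF-type methods aim to improve upon fits in a few lines (Python sketch; generic, with a linear observation operator H and perturbed observations, and with the forecast covariance formed explicitly for clarity even though production codes avoid that):

        import numpy as np

        def enkf_analysis(X, y, H, R, rng):
            # X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
            # H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs
            # error covariance. Returns the analysis ensemble.
            n_ens = X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)         # ensemble anomalies
            P = A @ A.T / (n_ens - 1)                     # sampled forecast cov.
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
            # Perturbed observations keep the analysis spread consistent.
            Y = y[:, None] + rng.multivariate_normal(np.zeros(y.size), R, n_ens).T
            return X + K @ (Y - H @ X)

    The sampling error the abstract mentions enters through P: small ensembles make the sampled covariance noisy, which is exactly what collocation-based alternatives try to avoid.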

  19. Optimal time points sampling in pathway modelling.

    PubMed

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available sampling data points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the morass of selecting good initial values or from getting stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.

  20. Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems.

    PubMed

    Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A

    2017-12-01

    The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
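
    The sum-of-outer-products block coordinate descent admits closed-form updates per dictionary atom. A sketch of one sweep for an l0-penalized variant (Python; notation mine and simplified, with data Y approximated as D C^T over unit-norm atoms):

        import numpy as np

        def soup_sweep(Y, D, C, lam):
            # One block coordinate descent sweep. Y: (n, N) data, D: (n, J)
            # unit-norm dictionary, C: (N, J) sparse codes, lam: l0 threshold.
            n, J = D.shape
            for j in range(J):
                # Residual with atom j's rank-one contribution restored.
                E = Y - D @ C.T + np.outer(D[:, j], C[:, j])
                # Closed-form code update: hard-threshold the correlations.
                c = E.T @ D[:, j]
                c[np.abs(c) < lam] = 0.0
                # Closed-form atom update: normalized matched direction.
                d = E @ c
                nrm = np.linalg.norm(d)
                D[:, j] = d / nrm if nrm > 0 else np.eye(n)[:, 0]
                C[:, j] = c
            return D, C

    Each update solves its subproblem exactly, which is what sidesteps the expensive generic sparse coding step the abstract refers to.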

  1. Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems

    PubMed Central

    Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A.

    2017-01-01

    The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction. PMID:29376111

  2. Data Structures for Extreme Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahan, Simon

    As computing problems of national importance grow, the government meets the increased demand by funding the development of ever larger systems. The overarching goal of the work supported in part by this grant is to increase the efficiency of programming and performing computations on these large computing systems. In past work, we have demonstrated that some of these computations, once thought to require expensive hardware designs and/or complex, special-purpose programming, may be executed efficiently on low-cost commodity cluster computing systems using a general-purpose "latency-tolerant" programming framework. One important application developed from the ideas underlying this framework is graph database technology supporting social network pattern matching, used by US intelligence agencies to more quickly identify potential terrorist threats. This database application has been spun out by the Pacific Northwest National Laboratory, a Department of Energy laboratory, into a commercial start-up, Trovares Inc. We explore an alternative application of the same underlying ideas to a well-studied challenge arising in engineering: solving unstructured sparse linear equations. Solving these equations is key to predicting the behavior of large electronic circuits before they are fabricated. Predicting that behavior ahead of fabrication means that designs can be optimized and errors corrected ahead of the expense of manufacture.

  3. Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures

    DTIC Science & Technology

    2017-10-04

    Report title: Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures (University of North Carolina at Chapel Hill). The recoverable abstract fragment describes developing algorithms for scientific and geometric computing by exploiting the power and performance efficiency of heterogeneous shared memory architectures.

  4. Modeling convection-diffusion-reaction systems for microfluidic molecular communications with surface-based receivers in Internet of Bio-Nano Things

    PubMed Central

    Kuscu, Murat; Akan, Ozgur B.

    2018-01-01

    We consider a microfluidic molecular communication (MC) system, where the concentration-encoded molecular messages are transported via fluid flow-induced convection and diffusion, and detected by a surface-based MC receiver with ligand receptors placed at the bottom of the microfluidic channel. The overall system is a convection-diffusion-reaction system that can only be solved by numerical methods, e.g., finite element analysis (FEA). However, analytical models are key for the information and communication technology (ICT), as they enable an optimisation framework to develop advanced communication techniques, such as optimum detection methods and reliable transmission schemes. In this direction, we develop an analytical model to approximate the expected time course of bound receptor concentration, i.e., the received signal used to decode the transmitted messages. The model obviates the need for computationally expensive numerical methods by capturing the nonlinearities caused by laminar flow resulting in parabolic velocity profile, and finite number of ligand receptors leading to receiver saturation. The model also captures the effects of reactive surface depletion layer resulting from the mass transport limitations and moving reaction boundary originated from the passage of finite-duration molecular concentration pulse over the receiver surface. Based on the proposed model, we derive closed form analytical expressions that approximate the received pulse width, pulse delay and pulse amplitude, which can be used to optimize the system from an ICT perspective. We evaluate the accuracy of the proposed model by comparing model-based analytical results to the numerical results obtained by solving the exact system model with COMSOL Multiphysics. PMID:29415019
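
    The receiver-saturation nonlinearity comes from standard ligand-receptor kinetics, whose well-mixed limit is a useful zero-dimensional reference for the received signal (Python sketch; a Langmuir-type ODE driven by a finite-duration concentration pulse, with all rates illustrative and not taken from the paper):

        import numpy as np

        # Well-mixed ligand-receptor binding (Langmuir kinetics), a 0-D
        # reference for the received signal; all values illustrative.
        k_on, k_off, R_tot = 2e-4, 1e-2, 1e3
        dt, T = 0.01, 200.0

        def ligand(t):                       # finite-duration concentration pulse
            return 50.0 if 10.0 <= t <= 40.0 else 0.0

        t_grid = np.arange(0.0, T, dt)
        Cb = np.zeros_like(t_grid)           # bound receptors over time
        for i in range(1, t_grid.size):
            c = ligand(t_grid[i - 1])
            # dCb/dt = k_on*c*(R_tot - Cb) - k_off*Cb : saturates at R_tot
            Cb[i] = Cb[i - 1] + dt * (k_on * c * (R_tot - Cb[i - 1])
                                      - k_off * Cb[i - 1])

    The full model layers flow-induced transport, the depletion layer, and the moving reaction boundary on top of this binding kinetics.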

  5. Numerical Modeling of Tube Forming by HPTR Cold Pilgering Process

    NASA Astrophysics Data System (ADS)

    Sornin, D.; Pachón-Rodríguez, E. A.; Vanegas-Márquez, E.; Mocellin, K.; Logé, R.

    2016-09-01

    For new fast-neutron sodium-cooled Generation IV nuclear reactors, the candidate cladding materials for the very strong burn-up are ferritic and martensitic oxide dispersion strengthened grades. Classically, the cladding tube is cold formed by a sequence of cold pilger milling passes with intermediate heat treatments. This process acts upon the geometry and the microstructure of the tubes. Consequently, crystallographic texture, grain sizes and morphologies, and tube integrity are highly dependent on the pilgering parameters. In order to optimize the resulting mechanical properties of cold-rolled cladding tubes, it is essential to have a thorough understanding of the pilgering process. Finite Element Method (FEM) models are used for the numerical predictions of this task; however, the accuracy of the numerical predictions depends not only on the type of constitutive laws but also on the quality of the material parameters identification. Therefore, a Chaboche-type law which parameters have been identified on experimental observation of the mechanical behavior of the material is used here. As a complete three-dimensional FEM mechanical analysis of the high-precision tube rolling (HPTR) cold pilgering of tubes could be very expensive, only the evolution of geometry and deformation is addressed in this work. The computed geometry is compared to the experimental one. It is shown that the evolution of the geometry and deformation is not homogeneous over the circumference. Moreover, it is exposed that the strain is nonhomogeneous in the radial, tangential, and axial directions. Finally, it is seen that the dominant deformation mode of a material point evolves during HPTR cold pilgering forming.

  6. Modeling convection-diffusion-reaction systems for microfluidic molecular communications with surface-based receivers in Internet of Bio-Nano Things.

    PubMed

    Kuscu, Murat; Akan, Ozgur B

    2018-01-01

    We consider a microfluidic molecular communication (MC) system, where the concentration-encoded molecular messages are transported via fluid flow-induced convection and diffusion, and detected by a surface-based MC receiver with ligand receptors placed at the bottom of the microfluidic channel. The overall system is a convection-diffusion-reaction system that can only be solved by numerical methods, e.g., finite element analysis (FEA). However, analytical models are key for the information and communication technology (ICT), as they enable an optimisation framework to develop advanced communication techniques, such as optimum detection methods and reliable transmission schemes. In this direction, we develop an analytical model to approximate the expected time course of bound receptor concentration, i.e., the received signal used to decode the transmitted messages. The model obviates the need for computationally expensive numerical methods by capturing the nonlinearities caused by laminar flow resulting in parabolic velocity profile, and finite number of ligand receptors leading to receiver saturation. The model also captures the effects of reactive surface depletion layer resulting from the mass transport limitations and moving reaction boundary originated from the passage of finite-duration molecular concentration pulse over the receiver surface. Based on the proposed model, we derive closed form analytical expressions that approximate the received pulse width, pulse delay and pulse amplitude, which can be used to optimize the system from an ICT perspective. We evaluate the accuracy of the proposed model by comparing model-based analytical results to the numerical results obtained by solving the exact system model with COMSOL Multiphysics.

  7. A Numerical Analysis on the Effects of Self-Excited Tip Flow Unsteadiness and Upstream Blade Row Interactions on the Performance Predictions of a Transonic Compressor

    NASA Astrophysics Data System (ADS)

    Heberling, Brian

    Computational fluid dynamics (CFD) simulations can offer a detailed view of the complex flow fields within an axial compressor and greatly aid the design process. However, the desire for quick turnaround times raises the question of how exact the model must be. At design conditions, steady CFD simulating an isolated blade row can accurately predict the performance of a rotor. However, as a compressor is throttled and the mass flow rate decreased, axial flow becomes weaker, making the capturing of unsteadiness, wakes, or other flow features more important. The unsteadiness of the tip clearance flow and the upstream blade wake can have a significant impact on a rotor. At off-design conditions, time-accurate simulations or modeling of multiple blade rows can become necessary in order to obtain accurate performance predictions. Unsteady and multi-bladerow simulations are computationally expensive, especially when used in conjunction. It is important to understand which features must be modeled in order to accurately capture a compressor's performance. CFD simulations of a transonic axial compressor throttling from the design point to stall are presented. The importance of capturing the unsteadiness of the rotor tip clearance flow versus capturing upstream blade-row interactions is examined through steady and unsteady, single- and multi-bladerow computations. It is shown that there are significant differences at near-stall conditions between the different types of simulations.

  8. Computational fluid dynamics analysis of cyclist aerodynamics: performance of different turbulence-modelling and boundary-layer modelling approaches.

    PubMed

    Defraeye, Thijs; Blocken, Bert; Koninckx, Erwin; Hespel, Peter; Carmeliet, Jan

    2010-08-26

    This study aims at assessing the accuracy of computational fluid dynamics (CFD) for applications in sports aerodynamics, for example for drag predictions of swimmers, cyclists or skiers, by evaluating the applied numerical modelling techniques by means of detailed validation experiments. In this study, a wind-tunnel experiment on a scale model of a cyclist (scale 1:2) is presented. Apart from three-component forces and moments, also high-resolution surface pressure measurements on the scale model's surface, i.e. at 115 locations, are performed to provide detailed information on the flow field. These data are used to compare the performance of different turbulence-modelling techniques, such as steady Reynolds-averaged Navier-Stokes (RANS), with several k-epsilon and k-omega turbulence models, and unsteady large-eddy simulation (LES), and also boundary-layer modelling techniques, namely wall functions and low-Reynolds number modelling (LRNM). The commercial CFD code Fluent 6.3 is used for the simulations. The RANS shear-stress transport (SST) k-omega model shows the best overall performance, followed by the more computationally expensive LES. Furthermore, LRNM is clearly preferred over wall functions to model the boundary layer. This study showed that there are more accurate alternatives for evaluating flow around bluff bodies with CFD than the standard k-epsilon model combined with wall functions, which is often used in CFD studies in sports.

  9. A Spectral Finite Element Approach to Modeling Soft Solids Excited with High-Frequency Harmonic Loads

    PubMed Central

    Brigham, John C.; Aquino, Wilkins; Aguilo, Miguel A.; Diamessis, Peter J.

    2010-01-01

    An approach for efficient and accurate finite element analysis of harmonically excited soft solids using high-order spectral finite elements is presented and evaluated. The Helmholtz-type equations used to model such systems suffer from additional numerical error known as pollution when excitation frequency becomes high relative to stiffness (i.e. high wave number), which is the case, for example, for soft tissues subject to ultrasound excitations. The use of high-order polynomial elements allows for a reduction in this pollution error, but requires additional consideration to counteract Runge's phenomenon and/or poor linear system conditioning, which has led to the use of spectral element approaches. This work examines in detail the computational benefits and practical applicability of high-order spectral elements for such problems. The spectral elements examined are tensor product elements (i.e. quad or brick elements) of high-order Lagrangian polynomials with non-uniformly distributed Gauss-Lobatto-Legendre nodal points. A shear plane wave example is presented to show the dependence of the accuracy and computational expense of high-order elements on wave number. Then, a convergence study for a viscoelastic acoustic-structure interaction finite element model of an actual ultrasound driven vibroacoustic experiment is shown. The number of degrees of freedom required for a given accuracy level was found to consistently decrease with increasing element order. However, the computationally optimal element order was found to strongly depend on the wave number. PMID:21461402
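
    The non-uniformly distributed Gauss-Lobatto-Legendre nodes these elements rely on are simple to generate: they are the two endpoints together with the N-1 roots of the derivative of the degree-N Legendre polynomial. A sketch using numpy's Legendre utilities (Python):

        import numpy as np
        from numpy.polynomial import legendre

        def gll_nodes(N):
            # Gauss-Lobatto-Legendre points on [-1, 1] for order N: the two
            # endpoints plus the N-1 roots of d/dx P_N(x). Their clustering
            # near the endpoints is what defeats Runge's phenomenon.
            interior = legendre.Legendre.basis(N).deriv().roots()
            return np.concatenate([[-1.0], np.sort(interior.real), [1.0]])

        print(gll_nodes(4))   # 5 nodes of an order-4 element, dense at the ends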

  10. Geometry and Grid Modeling for Numerical Simulation

    DTIC Science & Technology

    2005-06-01

    ...which makes them particularly vexing to beginners. In addition, they are expensive in terms of memory requirements and compile times. 3.1.4 TSTT: The... F77, Python, or Fortran90 (beta). It supports drivers written in C, C++, F77, Python, Java, or Fortran90 (beta). It is supported currently on...

  11. Private Long-Term Care Insurance: Cost, Coverage, and Restrictions.

    ERIC Educational Resources Information Center

    Wiener, Joshua M.; And Others

    1987-01-01

    Conducted descriptive analysis of 31 private long-term care insurance policies. Examined policies for premium rates, extent and levels of coverage, restrictions of eligibility to purchase a policy, and indemnity payment levels. Findings suggest that policies are expensive, impose numerous restrictions, offer limited coverage for certain services,…

  12. Onion and weed response to mustard (Sinapis alba) seed meal

    USDA-ARS?s Scientific Manuscript database

    Weed control in organic onion production is often difficult and expensive, requiring numerous cultivations and extensive hand-weeding. Onion safety and weed control with mustard seed meal (MSM) derived from Sinapis alba was evaluated in greenhouse and field trials. MSM applied at 110, 220, and 440 g...

  13. Interactive numerals

    PubMed Central

    2017-01-01

    Although Arabic numerals (like '2016' and '3.14') are ubiquitous, we show that in interactive computer applications they are often misleading and surprisingly unreliable. We introduce interactive numerals as a new concept and show that, like Roman numerals and Arabic numerals, interactive numerals introduce another way of using and thinking about numbers. Properly understanding interactive numerals is essential for all computer applications that involve numerical data entered by users, including finance, medicine, aviation and science. PMID:28484609

  14. Correlation energy extrapolation by many-body expansion

    DOE PAGES

    Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus; ...

    2017-01-09

    Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines an MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. The method consistently achieves agreement with CI calculations to within a few millihartree, and often to within ~1 millihartree or less, while requiring significantly fewer computational resources.
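
    The many-body expansion at the heart of CEEMBE has the generic inclusion-exclusion form (schematic; in CEEMBE the "bodies" are groups of orbitals rather than atoms, and this notation is not the paper's):

        E \approx \sum_i E_i + \sum_{i<j} \left( E_{ij} - E_i - E_j \right) + \sum_{i<j<k} \left( E_{ijk} - E_{ij} - E_{ik} - E_{jk} + E_i + E_j + E_k \right) + \cdots ,

    truncated at low order, with the discarded tail estimated by the extrapolated correction obtained from the CI calculations on virtual-orbital subsets.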

  15. Correlation energy extrapolation by many-body expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus

    Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines an MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. The method consistently achieves agreement with CI calculations to within a few millihartree, and often to within ~1 millihartree or less, while requiring significantly fewer computational resources.

  16. 45 CFR 2507.5 - How does the Corporation process requests for records?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... compelled to create new records or do statistical computations. For example, the Corporation is not required... feasible way to respond to a request. The Corporation is not required to perform any research for the... duplicating all of them. For example, if it requires less time and expense to provide a computer record as a...

  17. 26 CFR 1.179-5 - Time and manner of making election.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... desktop computer costing $1,500. On Taxpayer's 2003 Federal tax return filed on April 15, 2004, Taxpayer elected to expense under section 179 the full cost of the laptop computer and the full cost of the desktop... provided by the Internal Revenue Code, the regulations under the Code, or other guidance published in the...

  18. Innovative Leaders Take the Phone and Run: Profiles of Four Trailblazing Programs

    ERIC Educational Resources Information Center

    Norris, Cathleen; Soloway, Elliot; Menchhofer, Kyle; Bauman, Billie Diane; Dickerson, Mindy; Schad, Lenny; Tomko, Sue

    2010-01-01

    While the Internet changed everything, mobile will change everything squared. The Internet is just a roadway, and computers--the equivalent of cars for the Internet--have been expensive. The keepers of the information roadway--the telecommunication companies--will give one a "computer," such as cell phone, mobile learning device, or MLD,…

  19. 75 FR 25161 - Defense Federal Acquisition Regulation Supplement; Presumption of Development at Private Expense

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-07

    ... asserted restrictions on technical data and computer software. DATES: Comments on the proposed rule should... restrictions on technical data and computer software. More specifically, the proposed rule affects these...) items (as defined at 41 U.S.C. 431(c)). Since COTS items are a subtype of commercial items, this change...

  20. 17 CFR 240.17a-3 - Records to be made by certain exchange members, brokers and dealers.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... records) reflecting all assets and liabilities, income and expense and capital accounts. (3) Ledger..., and a record of the computation of aggregate indebtedness and net capital, as of the trial balance...) thereof shall make a record of the computation of aggregate indebtedness and net capital as of the trial...

  1. Application of Sequence Comparison Methods to Multisensor Data Fusion and Target Recognition

    DTIC Science & Technology

    1993-06-18

    ...linear comparison). A particularly attractive aspect of the proposed fusion scheme is that it has the potential to work for any object with (1)... radar sensing is a historical custom; however, the reader should keep in mind that the fundamental issue in this research is to explore and exploit... reduce the computationally expensive need to compute partial derivatives. In usual practice, the computationally more attractive filter design is...

  2. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

    NASA Astrophysics Data System (ADS)

    McCullough, Christopher; Bettadpur, Srinivas

    2015-04-01

    In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
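
    The flavor of finite-precision error described above is easy to reproduce: accumulating increments that are small relative to the running sum loses low-order bits in double precision, which compensated (Kahan) summation largely recovers; the sketch below (Python) is only a toy stand-in for the double- versus extended-precision comparison discussed in the abstract:

        # Accumulating increments below half an ulp of the running sum: plain
        # double-precision addition drops them all, compensated summation keeps
        # the lost low-order bits in a correction term.
        big, tiny, n = 1.0e9, 5.0e-8, 1_000_000

        naive = big
        for _ in range(n):
            naive += tiny                    # rounds back to 1e9 every time

        total, c = big, 0.0
        for _ in range(n):
            y = tiny - c                     # Kahan compensated summation
            t = total + y
            c = (t - total) - y              # bits that failed to land in t
            total = t

        exact = big + n * tiny               # 1e9 + 0.05
        print(f"naive error: {naive - exact:+.2e}")   # about -5e-02
        print(f"kahan error: {total - exact:+.2e}")   # near zero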

  3. 7 CFR 3560.102 - Housing project management.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., unless the machine becomes the property of the project after purchase. (iii) Determining if Expenses are... computer learning center activities benefiting tenants are not covered in this prohibition. (viii) It is...

  4. 7 CFR 3560.102 - Housing project management.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., unless the machine becomes the property of the project after purchase. (iii) Determining if Expenses are... computer learning center activities benefiting tenants are not covered in this prohibition. (viii) It is...

  5. 7 CFR 3560.102 - Housing project management.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., unless the machine becomes the property of the project after purchase. (iii) Determining if Expenses are... computer learning center activities benefiting tenants are not covered in this prohibition. (viii) It is...

  6. 7 CFR 3560.102 - Housing project management.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., unless the machine becomes the property of the project after purchase. (iii) Determining if Expenses are... computer learning center activities benefiting tenants are not covered in this prohibition. (viii) It is...

  7. Time-varying metamaterials based on graphene-wrapped microwires: Modeling and potential applications

    NASA Astrophysics Data System (ADS)

    Salary, Mohammad Mahdi; Jafar-Zanjani, Samad; Mosallaei, Hossein

    2018-03-01

    The successful realization of metamaterials and metasurfaces requires the judicious choice of constituent elements. In this paper, we demonstrate the implementation of time-varying metamaterials in the terahertz frequency regime by utilizing graphene-wrapped microwires as building blocks and modulation of graphene conductivity through exterior electrical gating. These elements enable enhancement of light-graphene interaction by utilizing optical resonances associated with Mie scattering, yielding a large tunability and modulation depth. We develop a semianalytical framework based on transition-matrix formulation for modeling and analysis of periodic and aperiodic arrays of such time-varying building blocks. The proposed method is validated against full-wave numerical results obtained using the finite-difference time-domain method. It provides an ideal tool for mathematical synthesis and analysis of space-time gradient metamaterials, eliminating the need for computationally expensive numerical models. Moreover, it allows for a wider exploration of exotic space-time scattering phenomena in time-modulated metamaterials. We apply the method to explore the role of modulation parameters in the generation of frequency harmonics and their emerging wavefronts. Several potential applications of such platforms are demonstrated, including frequency conversion, holographic generation of frequency harmonics, and spatiotemporal manipulation of light. The presented results provide key physical insights to design time-modulated functional metadevices using various building blocks and open up new directions in the emerging paradigm of time-modulated metamaterials.

  8. A dynamic wall model for Large-Eddy simulations of wind turbine dedicated airfoils

    NASA Astrophysics Data System (ADS)

    Calafell, J.; Lehmkuhl, O.; Carmona, A.; Pérez-Segarra, C. D.; Oliva, A.

    2014-06-01

    This work aims at modelling the flow behavior past a wind turbine dedicated airfoil at high Reynolds number and large angle of attack (AoA). The DU-93-W-210 airfoil has been selected. To do this, Large Eddy Simulations (LES) have been performed. The momentum equations have been solved with a parallel unstructured symmetry-preserving formulation, while the wall-adapting local-eddy viscosity model within a variational multi-scale framework (VMS-WALE) is used as the subgrid-scale model. Since LES calculations are still very expensive at high Reynolds number, especially in the near-wall region, a dynamic wall model has been implemented in order to overcome this limitation. The model has been validated with a coarsely resolved channel flow case at Re_tau = 2000. Afterwards, the model is also tested with the Ahmed car case, which, from the flow-physics point of view, is more similar to a stalled airfoil than the channel flow is, including flow features such as boundary layer detachment and recirculations. This case has been selected because experimental results for mean velocity profiles are available. Finally, the flow around a DU-93-W-210 airfoil is computed at Re = 3 x 10^6 and an AoA of 15°. Numerical results are presented in comparison with Direct Numerical Simulation (DNS) or experimental data for all cases.

  9. Optimise inlet condition and design parameters of a new sewer overflow screening device using numerical model.

    PubMed

    Aziz, M A; Imteaz, M A; Huda, Nazmul; Naser, J

    2014-01-01

    After heavy rainfall, sewer overflow spills to receiving water bodies cause serious concern for the environment, aesthetics and public health. To overcome these problems, this study investigated a new self-cleansing sewer overflow screening device. The device has a sewer overflow chamber, a rectangular tank and a slotted ogee weir to capture the gross pollutants. To design an efficient screening device, a numerical computational fluid dynamics (CFD) model was used. A plausibility check of the CFD model was done using a one-dimensional analytical model. Results showed that an inlet parallel to the weir ensured better self-cleansing than an inlet perpendicular to the weir. Perforations should be at the bottom of the weir to obtain the increased velocity and shear stress that create a favourable self-cleansing effect. Increasing the inlet length from 0.3 to 1.5 m reduced wave reflection by up to 10%, which increased flow uniformity downstream and improved the self-cleansing effect. Of the tested orientations of the ogee weir within the rectangular tank, a 1:3 (horizontal:vertical) slope produced the most uniform flow. These results will help to maximise the functional efficiency of the new sewer overflow screening device, which would otherwise be too expensive to alter after installation and at times difficult to customise to existing urban drainage systems.
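
    The abstract does not state the one-dimensional analytical model used for the plausibility check. A standard head-discharge relation for ogee-type weirs, of the kind such a check might employ, is

      Q = C L H^(3/2),

    where Q is the discharge, L the effective crest length, H the head above the crest, and C a dimensional discharge coefficient; applying such a relation along the weir gives a one-dimensional estimate against which the CFD fluxes can be compared.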

  10. A facile top-down etching to create a Cu2O jagged polyhedron covered with numerous {110} edges and {111} corners with enhanced photocatalytic activity.

    PubMed

    Shang, Yang; Sun, Du; Shao, Yiming; Zhang, Dongfeng; Guo, Lin; Yang, Shihe

    2012-11-05

    Cutting edge: A Cu2O jagged polyhedron, with numerous {110} edges and {111} corners, has been developed through a top-down selective oxidative etching process at the expense of the original {111} facets (see figure). The as-prepared nanocrystals exhibited higher photocatalytic activities for the degradation of methyl orange, which may be primarily ascribed to the increased number of edges and corners. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Semiannual report, 1 April - 30 September 1991

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The major categories of the current Institute for Computer Applications in Science and Engineering (ICASE) research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification problems, with emphasis on effective numerical methods; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software for parallel computers. Research in these areas is discussed.

  12. Dynamics of Numerics & Spurious Behaviors in CFD Computations. Revised

    NASA Technical Reports Server (NTRS)

    Yee, Helen C.; Sweby, Peter K.

    1997-01-01

    The global nonlinear behavior of finite discretizations, for constant time steps and fixed or adaptive grid spacings, is studied using tools from dynamical systems theory. A detailed analysis of commonly used temporal and spatial discretizations for simple model problems is presented. The role of dynamics in understanding the long-time behavior of numerical integration, and the nonlinear stability, convergence, and reliability of time-marching approaches for obtaining steady-state numerical solutions in computational fluid dynamics (CFD), is explored. The study is complemented with examples of spurious behavior observed in steady and unsteady CFD computations. The CFD examples were chosen to illustrate non-apparent spurious behavior that is difficult to detect without extensive grid and temporal refinement studies and some knowledge of dynamical systems theory. These studies revealed the possible dangers of misinterpreting numerical simulations of realistic complex flows that are constrained by available computing power. In large-scale computations where the physics of the problem under study is not well understood and numerical simulations are the only viable means of solution, extreme care must be taken in both the computation and the interpretation of the numerical data. The goal of this paper is to explore the important role that dynamical systems theory can play in understanding the global nonlinear behavior of numerical algorithms and to aid in identifying the sources of numerical uncertainty in CFD.
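
    A minimal example in the spirit of the model problems studied in this line of work (our construction, not necessarily one of the paper's own cases) is explicit Euler applied to the logistic ODE u' = u(1 - u), whose true solutions all approach the steady state u = 1:

      def euler_logistic(u0, dt, nsteps):
          # Explicit Euler on u' = u(1 - u); every true solution with
          # u0 > 0 tends monotonically to the steady state u = 1.
          u = u0
          for _ in range(nsteps):
              u = u + dt * u * (1.0 - u)
          return u

      # Below dt = 2 the iterates settle on u = 1; just above it the scheme
      # locks onto a spurious period-2 orbit that exists only numerically.
      for dt in (0.5, 1.9, 2.1, 2.5):
          print(dt, [round(euler_logistic(0.3, dt, n), 4) for n in (4000, 4001)])

    The spurious orbit is an attractor of the discrete map, not of the ODE, which is exactly the kind of artifact that grid and time-step refinement studies are needed to expose.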

  13. An iterative method for hydrodynamic interactions in Brownian dynamics simulations of polymer dynamics

    NASA Astrophysics Data System (ADS)

    Miao, Linling; Young, Charles D.; Sing, Charles E.

    2017-07-01

    Brownian Dynamics (BD) simulations are a standard tool for understanding the dynamics of polymers in and out of equilibrium. Quantitative comparison can be made to rheological measurements of dilute polymer solutions, as well as direct visual observations of fluorescently labeled DNA. The primary computational challenge with BD is the expensive calculation of hydrodynamic interactions (HI), which are necessary to capture physically realistic dynamics. The full HI calculation, performed via a Cholesky decomposition every time step, scales with the length of the polymer as O(N^3). This limits the calculation to a few hundred simulated particles. A number of approximations in the literature can lower this scaling to O(N^2)-O(N^2.25), and explicit solvent methods scale as O(N); however both incur a significant constant per-time step computational cost. Despite this progress, there remains a need for new or alternative methods of calculating hydrodynamic interactions; large polymer chains or semidilute polymer solutions remain computationally expensive. In this paper, we introduce an alternative method for calculating approximate hydrodynamic interactions. Our method relies on an iterative scheme to establish self-consistency between a hydrodynamic matrix that is averaged over simulation and the hydrodynamic matrix used to run the simulation. Comparison to standard BD simulation and polymer theory results demonstrates that this method quantitatively captures both equilibrium and steady-state dynamics after only a few iterations. The use of an averaged hydrodynamic matrix allows the computationally expensive Brownian noise calculation to be performed infrequently, so that it is no longer the bottleneck of the simulation calculations. We also investigate limitations of this conformational averaging approach in ring polymers.
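
    For reference, the sketch below shows why the full-HI update is expensive: each step draws correlated Brownian noise through a Cholesky factor of the diffusion matrix D. It is an illustrative fragment with kT = 1 and D assumed given (e.g. from a Rotne-Prager-type tensor), not the authors' iterative scheme; their contribution is precisely that D can be replaced by a conformationally averaged matrix whose factorization is refreshed only occasionally:

      import numpy as np

      def brownian_step(x, D, forces, dt, rng):
          # One BD update with hydrodynamic correlations (kT = 1):
          #   dx = D F dt + sqrt(2 dt) L xi,  with  D = L L^T.
          L = np.linalg.cholesky(D)      # O(N^3): the per-step bottleneck
          xi = rng.standard_normal(len(x))
          return x + D @ forces * dt + np.sqrt(2.0 * dt) * (L @ xi)

    Reusing L from an averaged D amortizes the O(N^3) factorization over many steps, which is what moves the bottleneck away from the Brownian noise calculation.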

  14. Automatic Data Filter Customization Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Mandrake, Lukas

    2013-01-01

    This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as Fisher discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can be simply filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire dataset and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds outside of which all data are rejected was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the tossed-out data's yield (proper vs. improper run removal) and the number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values, to save needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.
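
    As an illustration of the genetic search described above, the sketch below evolves per-dimension [left, right] threshold pairs against labeled training runs. All names, the fitness definition, and the GA operators are our simplified stand-ins for the JPL cluster implementation:

      import numpy as np

      rng = np.random.default_rng(0)

      def fitness(genome, X, good):
          # genome: (ndim, 2) array of [left, right] cuts; a sounding is
          # kept only if every dimension lies inside its interval. Reward
          # rejecting bad runs, penalize losing good ones.
          keep = np.all((X >= genome[:, 0]) & (X <= genome[:, 1]), axis=1)
          return np.sum(keep & good) - np.sum(keep & ~good)

      def evolve(X, good, pop_size=40, gens=200):
          ndim = X.shape[1]
          lo, hi = X.min(axis=0), X.max(axis=0)
          pop = rng.uniform(lo[:, None], hi[:, None], (pop_size, ndim, 2))
          pop.sort(axis=2)                         # enforce left <= right
          for _ in range(gens):
              scores = np.array([fitness(g, X, good) for g in pop])
              parents = pop[np.argsort(scores)[-pop_size // 2:]]
              mutation = rng.normal(0.0, 0.05, parents.shape) * (hi - lo)[:, None]
              children = parents + mutation
              children.sort(axis=2)
              pop = np.concatenate([parents, children])
          scores = np.array([fitness(g, X, good) for g in pop])
          return pop[scores.argmax()]

    Because each genome is just an axis-aligned box in the 50-dimensional input space, the learned filter is trivially cheap to apply before any expensive retrieval is launched.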

  15. The design of a light aircraft automated dropsonde launcher

    NASA Astrophysics Data System (ADS)

    Pasken, Gregory R.

    The use of the National Center for Atmospheric Research's dropsonde system is currently limited to large NASA-, NSF- and NOAA-operated research aircraft, which are expensive to fly and are over-subscribed. Designing a new dropsonde system for a smaller, less expensive to operate light aircraft will make the dropsonde system available to a much wider research community. To test this concept, a dropsonde launch system designed to fit in the cargo door of a twin-engine Piper Seminole is developed and tested. Although the launch system for the light aircraft dropsonde launcher has gone through many designs, a prototype is built from the final design and tested using SC Tetra for the computational fluid dynamics and stress testing, as SC Tetra has material properties for solids as well as fluids. The design is further tested in the wind tunnel. These tests show that the new design is a viable alternative for light aircraft, thus allowing dropsondes to be more widely used. The ABAQUS and SC Tetra simulation results, together with the wind tunnel results for the final design, are covered and discussed. The settings used for the ABAQUS and SC Tetra simulations are described in detail. ABAQUS simulations are conducted to perform stress testing, and SC Tetra is used for CFD simulations. The SC Tetra simulations provide a more comprehensive picture of the design, as SC Tetra is able to perform stress testing as well as pressure testing, allowing for more accurate results. The limitations of the ABAQUS simulations require numerous loading assumptions that may or may not be realistic.

  16. Accelerating Monte Carlo molecular simulations by reweighting and reconstructing Markov chains: Extrapolation of canonical ensemble averages and second derivatives to different temperature and density conditions

    NASA Astrophysics Data System (ADS)

    Kadoura, Ahmad; Sun, Shuyu; Salama, Amgad

    2014-08-01

    Accurate determination of the thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters yet better predictive capability; however, it is well known that molecular simulation is very CPU-expensive compared to equation-of-state approaches. We have recently introduced an efficient, thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation by more than a factor of a million, making the regenerated molecular simulation almost as fast as equation-of-state approaches. In this paper, the technique is first briefly reviewed and then numerically investigated for its capability of predicting ensemble averages of primary quantities at thermodynamic conditions neighboring the originally simulated MCMCs. Moreover, the extrapolation technique is extended to predict second-derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing the generated MCMCs in the canonical ensemble for Lennard-Jones particles. The system's potential energy, pressure, isochoric heat capacity and isothermal compressibility were extrapolated along isochors, isotherms and paths of changing temperature and density from the originally simulated points. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models is proposed for methane, nitrogen and carbon monoxide.
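
    The core reweighting step is compact enough to state concretely. The sketch below (our notation; the paper additionally reconstructs the chains themselves and treats second derivatives) shifts canonical-ensemble averages of a sampled quantity A from inverse temperature beta0 to a neighboring beta:

      import numpy as np

      def reweight_average(A, U, beta0, beta):
          # Boltzmann reweighting of samples generated at beta0:
          #   <A>_beta = sum_i A_i w_i / sum_i w_i,
          #   w_i = exp(-(beta - beta0) * U_i)
          # Reliable only while the two ensembles overlap (beta near beta0).
          expo = -(beta - beta0) * np.asarray(U)
          w = np.exp(expo - expo.max())    # shift exponents for stability
          return np.sum(np.asarray(A) * w) / np.sum(w)

    Since the weights reuse energies U_i that the expensive simulation has already produced, evaluating an average at a neighboring state point costs almost nothing compared with generating a new chain.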

  17. Coordinate Systems, Numerical Objects and Algorithmic Operations of Computational Experiment in Fluid Mechanics

    NASA Astrophysics Data System (ADS)

    Degtyarev, Alexander; Khramushin, Vasily

    2016-02-01

    The paper deals with the computer implementation of direct computational experiments in fluid mechanics, constructed on the basis of the approach developed by the authors. The proposed approach allows the use of explicit numerical schemes, an important condition for increasing the efficiency of the developed algorithms through numerical procedures with natural parallelism. The paper examines the main objects and operations that allow one to manage computational experiments and monitor the status of the computation process. Special attention is given to (a) the realization of tensor representations of numerical schemes for direct simulation; (b) the representation of the motion of large particles of a continuous medium in two coordinate systems (global and mobile); and (c) computing operations in the projections of coordinate systems, with direct and inverse transformations between these systems. Particular attention is paid to the use of the hardware and software of modern computer systems.

  18. Busting the Limits of Science Laboratory Economics

    ERIC Educational Resources Information Center

    Bush, Robert C.

    2008-01-01

    This article discusses the trend facing today's scientific laboratories: that the more specialized the lab, the more expensive it is, and the less accessible it becomes. Or conversely, the more accessible a lab needs to be, the fewer resources can be dedicated per capita, and the less specialized it becomes. From a numerical standpoint, "real"…

  19. Numerically stable algorithm for combining census and sample estimates with the multivariate composite estimator

    Treesearch

    R. L. Czaplewski

    2009-01-01

    The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...
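
    The basic building block of such a composite estimator, stated in its simplest two-source form (a generic inverse-variance combination under an independence assumption, not Czaplewski's full sequential formulation), is sketched below:

      import numpy as np

      def composite(x1, V1, x2, V2):
          # Minimum-variance combination of two independent unbiased
          # estimates with covariance matrices V1 and V2:
          #   x = (V1^-1 + V2^-1)^-1 (V1^-1 x1 + V2^-1 x2)
          P1, P2 = np.linalg.inv(V1), np.linalg.inv(V2)
          V = np.linalg.inv(P1 + P2)       # covariance of the composite
          return V @ (P1 @ x1 + P2 @ x2), V

    The cheap remotely sensed sources shrink the covariance of the expensive field-sample estimate, which is the economy the abstract alludes to.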

  20. STD Installation and Maintenance Procedures. Satellite Technology Demonstration, Technical Report No. 0421.

    ERIC Educational Resources Information Center

    Lane, Billy D.

    The ground terminals for the Satellite Technology Demonstration were more expensive to install than was anticipated because crews were forced to make numerous visits to each site. Delays were caused by delinquent deliveries and by malfunctions in some of the subsystems. Insufficient time was allowed for trouble-shooting, and consequently, additional…
