Direct Numerical Simulation of Automobile Cavity Tones
NASA Technical Reports Server (NTRS)
Kurbatskii, Konstantin; Tam, Christopher K. W.
2000-01-01
The Navier-Stokes equations are solved computationally by the Dispersion-Relation-Preserving (DRP) scheme for the flow and acoustic fields associated with a laminar boundary layer flow over an automobile door cavity. In this work, the flow Reynolds number is restricted to R_δ* < 3400, the range of Reynolds number for which laminar flow may be maintained. This investigation focuses on two aspects of the problem: the effect of boundary layer thickness on the cavity tone frequency and intensity, and the effect of the size of the computation domain on the accuracy of the numerical simulation. It is found that the tone frequency decreases with an increase in boundary layer thickness. When the boundary layer is thicker than a certain critical value, depending on the flow speed, no tone is emitted by the cavity. Computationally, solutions of aeroacoustics problems are known to be sensitive to the size of the computation domain. Numerical experiments indicate that the use of a small domain could result in normal-mode-type acoustic oscillations in the entire computation domain, leading to an increase in tone frequency and intensity. When the computation domain is expanded so that the boundaries are at least one wavelength away from the noise source, the computed tone frequency and intensity are found to be independent of the computation domain size.
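The DRP scheme referenced above replaces standard central-difference coefficients with values optimized to reduce dispersion error. Below is a minimal sketch of such a 7-point optimized first-derivative stencil; the coefficient values are those commonly quoted for the Tam-Webb DRP stencil and should be treated as an assumption to verify against the original reference.

```python
import numpy as np

# Antisymmetric coefficients of the 7-point Dispersion-Relation-Preserving
# (DRP) stencil; values as commonly tabulated for the Tam-Webb scheme.
# Treat them as an assumption to check against the original reference.
A = [0.770882380518, -0.166705904415, 0.020843142770]

def drp_derivative(f, dx):
    """First derivative of a periodic sample f with the 7-point DRP stencil."""
    df = np.zeros_like(f)
    for j, a in enumerate(A, start=1):
        df += a * (np.roll(f, -j) - np.roll(f, j))
    return df / dx

# Quick check on a well-resolved sine wave.
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
dx = x[1] - x[0]
err = np.max(np.abs(drp_derivative(np.sin(x), dx) - np.cos(x)))
print(f"max derivative error: {err:.2e}")
```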
How to Build an AppleSeed: A Parallel Macintosh Cluster for Numerically Intensive Computing
NASA Astrophysics Data System (ADS)
Decyk, V. K.; Dauger, D. E.
We have constructed a parallel cluster consisting of a mixture of Apple Macintosh G3 and G4 computers running the Mac OS, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. A subset of the MPI message-passing library was implemented in Fortran77 and C. This library enabled us to port code, without modification, from other parallel processors to the Macintosh cluster. Unlike Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.
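As an illustration of the message-passing style described, here is a minimal particle-in-cell-flavored reduction written with the modern mpi4py package (an assumption for illustration; the cluster in the paper used the authors' own Fortran77/C MPI subset):

```python
# Minimal sketch of a message-passing pattern common in parallel
# particle-in-cell codes, written with mpi4py rather than the authors'
# Fortran77/C MPI subset. Run with: mpiexec -n 4 python pic_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank deposits charge from its own particles onto a local grid...
local_density = np.random.rand(128)

# ...then the field solve needs the global charge density on every rank.
global_density = np.empty_like(local_density)
comm.Allreduce(local_density, global_density, op=MPI.SUM)

if rank == 0:
    print(f"{size} ranks, total charge = {global_density.sum():.3f}")
```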
Py4CAtS - Python tools for line-by-line modelling of infrared atmospheric radiative transfer
NASA Astrophysics Data System (ADS)
Schreier, Franz; García, Sebastián Gimeno
2013-05-01
Py4CAtS — Python scripts for Computational ATmospheric Spectroscopy — is a Python re-implementation of the Fortran infrared radiative transfer code GARLIC, where compute-intensive code sections utilize the Numeric/Scientific Python modules for highly optimized array processing. The individual steps of an infrared or microwave radiative transfer computation are implemented in separate scripts: extracting the lines of relevant molecules in the spectral range of interest, computing line-by-line cross sections for given pressure(s) and temperature(s), combining cross sections into absorption coefficients and optical depths, and integrating along the line of sight to obtain transmission and radiance/intensity. The basic design of the package, numerical and computational aspects relevant for optimization, and a sketch of the typical workflow are presented.
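The workflow lends itself to a compact sketch. The following toy line-by-line calculation (plain NumPy, with made-up line parameters; not the actual Py4CAtS API) mirrors the steps listed above: line list, cross sections, optical depth, transmission.

```python
import numpy as np

# Toy line-by-line workflow: line list -> cross sections -> optical depth
# -> transmission. Line parameters are invented for illustration; the real
# Py4CAtS scripts read HITRAN/GEISA data and use proper line shapes.
nu = np.linspace(2000.0, 2100.0, 5000)          # wavenumber grid [cm-1]
lines = [(2020.0, 1.0e-20), (2055.0, 5.0e-21)]  # (position, strength)
gamma = 0.07                                    # Lorentz half-width [cm-1]

# Cross sections: sum of area-normalized Lorentzian profiles.
xs = sum(S * gamma / np.pi / ((nu - nu0) ** 2 + gamma ** 2)
         for nu0, S in lines)

n_col = 1.0e21                 # column density [molec cm-2], assumed
tau = xs * n_col               # optical depth along the line of sight
transmission = np.exp(-tau)    # Beer-Lambert law
print(f"minimum transmission: {transmission.min():.3f}")
```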
Efficient Parallel Algorithm For Direct Numerical Simulation of Turbulent Flows
NASA Technical Reports Server (NTRS)
Moitra, Stuti; Gatski, Thomas B.
1997-01-01
A distributed algorithm for a high-order-accurate finite-difference approach to the direct numerical simulation (DNS) of transition and turbulence in compressible flows is described. This work has two major objectives. The first objective is to demonstrate that parallel and distributed-memory machines can be used successfully and efficiently to execute the computationally intensive and input/output-intensive algorithms of the DNS class. The second objective is to show that the computational complexity involved in solving the tridiagonal systems inherent in the DNS algorithm can be reduced by algorithm innovations that obviate the need for a parallelized tridiagonal solver.
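For context, the tridiagonal systems in question are the kind solved line-by-line by the O(n) Thomas algorithm sketched below; the paper's contribution is restructuring the computation so that such solves need not be parallelized. (A Python sketch for illustration; the DNS code itself is a high-order finite-difference solver.)

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a=sub-, b=main-, c=super-diagonal, d=rhs.
    Classic O(n) Thomas algorithm; a[0] and c[-1] are ignored."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Check against a dense solve.
n = 6
a, b, c = np.full(n, 1.0), np.full(n, 4.0), np.full(n, 1.0)
d = np.arange(1.0, n + 1)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))
```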
NASA Technical Reports Server (NTRS)
Sharma, Naveen
1992-01-01
In this paper we briefly describe a combined symbolic and numeric approach for solving mathematical models on parallel computers. An experimental software system, PIER, is being developed in Common Lisp to synthesize computationally intensive and domain formulation dependent phases of finite element analysis (FEA) solution methods. Quantities for domain formulation like shape functions, element stiffness matrices, etc., are automatically derived using symbolic mathematical computations. The problem specific information and derived formulae are then used to generate (parallel) numerical code for FEA solution steps. A constructive approach to specify a numerical program design is taken. The code generator compiles application oriented input specifications into (parallel) FORTRAN77 routines with the help of built-in knowledge of the particular problem, numerical solution methods and the target computer.
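A minimal modern re-creation of the symbolic-then-numeric idea (using Python's SymPy rather than the paper's Common Lisp system PIER) might look like this:

```python
import sympy as sp

# Symbolic derivation of a 1D linear-element stiffness matrix, in the
# spirit of PIER's symbolic phase (PIER itself was Common Lisp; this is
# an illustrative re-creation, not its actual code).
x, L = sp.symbols("x L", positive=True)
N = sp.Matrix([1 - x / L, x / L])          # linear shape functions
B = N.diff(x)                              # shape-function derivatives
K = (B * B.T).integrate((x, 0, L))         # stiffness (unit material)
print(K)            # Matrix([[1/L, -1/L], [-1/L, 1/L]])

# "Code generation" step: emit a numerical routine from the formula.
k_fun = sp.lambdify(L, K, "numpy")
print(k_fun(2.0))
```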
In Vivo Validation of Numerical Prediction for Turbulence Intensity in an Aortic Coarctation
Arzani, Amirhossein; Dyverfeldt, Petter; Ebbers, Tino; Shadden, Shawn C.
2013-01-01
This paper compares numerical predictions of turbulence intensity with in vivo measurement. Magnetic resonance imaging (MRI) was carried out on a 60-year-old female with a restenosed aortic coarctation. Time-resolved three-directional phase-contrast (PC) MRI data was acquired to enable turbulence intensity estimation. A contrast-enhanced MR angiography (MRA) and a time-resolved 2D PCMRI measurement were also performed to acquire data needed to perform subsequent image-based computational fluid dynamics (CFD) modeling. A 3D model of the aortic coarctation and surrounding vasculature was constructed from the MRA data, and physiologic boundary conditions were modeled to match 2D PCMRI and pressure pulse measurements. Blood flow velocity data was subsequently obtained by numerical simulation. Turbulent kinetic energy (TKE) was computed from the resulting CFD data. Results indicate relative agreement (error ≈10%) between the in vivo measurements and the CFD predictions of TKE. The discrepancies in modeled vs. measured TKE values were within expectations due to modeling and measurement errors. PMID:22016327
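For reference, TKE follows from the velocity fluctuations in a straightforward way; a sketch with synthetic data (density value assumed typical for blood):

```python
import numpy as np

# Turbulent kinetic energy from resolved velocity fluctuations, as used
# to compare CFD with PC-MRI turbulence estimates. Synthetic data here.
rho = 1060.0                        # blood density [kg/m^3], assumed
u = np.random.randn(1000, 3) * 0.1  # velocity samples at one voxel [m/s]
u_fluct = u - u.mean(axis=0)        # subtract the phase-averaged mean
tke = 0.5 * rho * (u_fluct ** 2).mean(axis=0).sum()   # [J/m^3]
print(f"TKE = {tke:.1f} J/m^3")
```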
NASA Technical Reports Server (NTRS)
Roberti, Dino; Ludwig, Reinhold; Looft, Fred J.
1988-01-01
A 3-D computer model of a piston radiator with lenses for focusing and defocusing is presented. To achieve high-resolution imaging, the frequency of the transmitted and received ultrasound must be as high as 10 MHz. Current ultrasonic transducers produce an extremely narrow beam at these high frequencies and thus are not appropriate for imaging schemes such as synthetic-aperture focus techniques (SAFT). Consequently, a numerical analysis program has been developed to determine field intensity patterns that are radiated from ultrasonic transducers with lenses. Lens shapes are described and the field intensities are numerically predicted and compared with experimental results.
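A field-intensity program of the kind described typically evaluates the Rayleigh integral over the transducer face. A minimal sketch (assumed 1 MHz, 5 mm radius piston in water; the lenses discussed in the paper are omitted) with a closed-form on-axis check:

```python
import numpy as np

# Numerical Rayleigh integral for a baffled circular piston, checked
# on-axis against the closed-form solution (assumed parameter values).
rho_c, U = 1.5e6, 1.0            # water impedance [rayl], piston velocity [m/s]
f, c, a = 1.0e6, 1500.0, 5e-3    # frequency, sound speed, piston radius
k = 2 * np.pi * f / c

# Quadrature nodes over the disc (polar grid, midpoint rule).
nr, nt = 400, 200
r = (np.arange(nr) + 0.5) * a / nr
t = (np.arange(nt) + 0.5) * 2 * np.pi / nt
R, T = np.meshgrid(r, t, indexing="ij")
dS = (a / nr) * (2 * np.pi / nt) * R

z = 40e-3                            # field point on the axis [m]
dist = np.sqrt(z**2 + R**2)          # source-to-field distance
p = (1j * rho_c * k * U / (2 * np.pi)) * np.sum(np.exp(1j * k * dist) / dist * dS)

p_exact = 2 * rho_c * U * np.sin(0.5 * k * (np.sqrt(z**2 + a**2) - z))
print(abs(p), abs(p_exact))          # should agree to a few digits
```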
Applications of Massive Mathematical Computations
1990-04-01
particles from the first principles of QCD. This problem is under intensive numerical study using special-purpose parallel supercomputers in... several places around the world. The method used here is Monte Carlo integration on fixed 3-D plus time lattices. Reliable results are still years... mathematical and theoretical physics, but its most promising applications are in the numerical realization of QCD computations. Our programs for the solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sepke, Scott M.
In this note, the laser focal plane intensity profile for a beam modeled using the 3D ray trace package in HYDRA is determined. First, the analytical model is developed, followed by a practical numerical model for evaluating the resulting computationally intensive normalization factor for all possible input parameters.
NASA Astrophysics Data System (ADS)
Parvin, Salma; Sultana, Aysha
2017-06-01
The influence of High Intensity Focused Ultrasound (HIFU) on an obstacle in a blood vessel is studied numerically. A three-dimensional acoustics-thermal-fluid coupling model is employed to compute the temperature field around the obstacle. The model construction is based on the linear Westervelt and conjugate heat transfer equations. The system of equations is solved using the Finite Element Method (FEM). We found from this three-dimensional numerical study that the rate of heat transfer from the obstacle increases and that both convective cooling and acoustic streaming can considerably change the temperature field.
NASA Astrophysics Data System (ADS)
Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu
2015-07-01
The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed THC-MP, a high-performance code for massively parallel computers that greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structure, implemented the data initialization and exchange between the computing nodes, and built the core solving module on a hybrid parallel iterative and direct solver. Numerical accuracy of THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from parallel computing with those from sequential computing (the original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results demonstrate the enhanced performance achieved by THC-MP on parallel computing facilities.
Unsteady thermal blooming of intense laser beams
NASA Astrophysics Data System (ADS)
Ulrich, J. T.; Ulrich, P. B.
1980-01-01
A four dimensional (three space plus time) computer program has been written to compute the nonlinear heating of a gas by an intense laser beam. Unsteady, transient cases are capable of solution and no assumption of a steady state need be made. The transient results are shown to asymptotically approach the steady-state results calculated by the standard three dimensional thermal blooming computer codes. The report discusses the physics of the laser-absorber interaction, the numerical approximation used, and comparisons with experimental data. A flowchart is supplied in the appendix to the report.
NASA Astrophysics Data System (ADS)
MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.
2015-09-01
Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives, in which all previous time points contribute to the current iteration. In general, numerical approaches that depend on truncating part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
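To make the trade-off concrete, the sketch below evaluates the Grünwald-Letnikov sum for f(t) = t with the full history and with a naively truncated one; the adaptive-memory method described above is designed to avoid the truncation error visible here while keeping cost low. (Illustrative code, not the authors' implementation.)

```python
import numpy as np
from math import gamma

def gl_weights(alpha, n):
    """Grunwald-Letnikov binomial weights w_k = (-1)^k C(alpha, k)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

alpha, h, T = 0.5, 1e-3, 1.0
t = np.arange(0.0, T + h, h)
f = t.copy()                          # test function f(t) = t
n = len(t) - 1
w = gl_weights(alpha, n)

d_full = h ** -alpha * np.dot(w[: n + 1], f[n::-1])        # full history
L = 200                                                    # truncated memory
d_short = h ** -alpha * np.dot(w[: L + 1], f[n : n - L - 1 : -1])
exact = t[n] ** (1 - alpha) / gamma(2 - alpha)
print(f"exact {exact:.5f}  full {d_full:.5f}  truncated {d_short:.5f}")
```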
NASA Astrophysics Data System (ADS)
Kuzmina, K. S.; Marchevsky, I. K.; Ryatina, E. P.
2017-11-01
We consider the methodology of numerical scheme development for two-dimensional vortex methods. We describe two different approaches to deriving the integral equation for the unknown vortex sheet intensity. The velocity on the surface line of an airfoil is modeled as the influence of attached vortex and source sheets. We consider a polygonal approximation of the airfoil and assume the intensity distributions of the free and attached vortex sheets and the attached source sheet to be approximated with piecewise-constant or piecewise-linear (continuous or discontinuous) functions. We describe several specific numerical schemes that provide different accuracy and have different computational cost. The study shows that a Galerkin-type approach to solving the boundary integral equation requires computing several integrals and double integrals over the panels. We obtain exact analytical formulae for all the necessary integrals, which makes it possible to significantly raise the accuracy of vortex sheet intensity computation and improve the quality of the velocity and vorticity field representation, especially in proximity to the surface line of the airfoil. All the formulae are written down in invariant form and depend only on the geometric relationship between the positions of the beginnings and ends of the panels.
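The integrals in question are of the kind sketched below: the velocity induced at a field point by a constant-intensity straight vortex panel, here evaluated by brute-force quadrature of the point-vortex kernel. The closed-form arctangent/logarithm expressions referenced in the abstract replace exactly this kind of costly numerical integration. (Sign convention assumed; conventions differ across texts.)

```python
import numpy as np

# Velocity induced at a field point by a straight vortex-sheet panel of
# constant intensity gamma, evaluated by midpoint quadrature of the 2D
# point-vortex kernel. A slow reference; the paper's analytic formulae
# remove the need for this quadrature.
def panel_velocity(p1, p2, gamma, xp, n=2000):
    s = (np.arange(n) + 0.5) / n                  # midpoint rule
    pts = p1 + s[:, None] * (p2 - p1)             # points on the panel
    ds = np.linalg.norm(p2 - p1) / n
    d = xp - pts
    r2 = (d ** 2).sum(axis=1)
    # Positive gamma gives clockwise swirl with this sign choice.
    u = gamma * ds / (2 * np.pi) * (d[:, 1] / r2)
    v = -gamma * ds / (2 * np.pi) * (d[:, 0] / r2)
    return u.sum(), v.sum()

p1, p2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
print(panel_velocity(p1, p2, 1.0, np.array([0.5, 0.5])))
```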
NASA Astrophysics Data System (ADS)
Sarojkumar, K.; Krishna, S.
2016-08-01
Online dynamic security assessment (DSA) is a computationally intensive task. In order to reduce the amount of computation, screening of contingencies is performed. Screening involves analyzing the contingencies with the system described by a simpler model so that the computation requirement is reduced. Screening identifies those contingencies which are sure not to cause instability and hence can be eliminated from further scrutiny. The numerical method and the step size used for screening should be chosen as a compromise between speed and accuracy. This paper proposes the use of an energy function as a measure of error in the numerical solution used for screening contingencies. The proposed measure of error can be used to determine the most accurate numerical method satisfying the time constraint of online DSA. Case studies on a 17-generator system are reported.
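The idea of using an energy function to expose numerical error can be sketched on a single-machine-infinite-bus system: the energy below is conserved along exact trajectories of the lossless swing equation, so its drift isolates the integration error (illustrative model and parameters, not the paper's 17-generator case):

```python
import numpy as np

# Single-machine-infinite-bus swing equation; the energy function is
# conserved along exact trajectories, so its drift measures numerical error.
M, Pm, Pmax = 0.1, 0.8, 1.5
delta_s = np.arcsin(Pm / Pmax)            # stable equilibrium angle

def rhs(x):
    d, w = x
    return np.array([w, (Pm - Pmax * np.sin(d)) / M])

def energy(x):
    d, w = x
    return 0.5 * M * w**2 - Pm * (d - delta_s) - Pmax * (np.cos(d) - np.cos(delta_s))

def step_euler(x, h):
    return x + h * rhs(x)

def step_rk4(x, h):
    k1 = rhs(x); k2 = rhs(x + h/2*k1); k3 = rhs(x + h/2*k2); k4 = rhs(x + h*k3)
    return x + h/6 * (k1 + 2*k2 + 2*k3 + k4)

h, steps = 0.01, 500
for name, step in [("euler", step_euler), ("rk4", step_rk4)]:
    x = np.array([delta_s + 0.5, 0.0])    # post-fault-like initial state
    E0 = energy(x)
    for _ in range(steps):
        x = step(x, h)
    print(f"{name}: |energy drift| = {abs(energy(x) - E0):.2e}")
```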
Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions
NASA Astrophysics Data System (ADS)
McCullough, Christopher; Bettadpur, Srinivas
2015-04-01
In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
The M-Integral for Computing Stress Intensity Factors in Generally Anisotropic Materials
NASA Technical Reports Server (NTRS)
Warzynek, P. A.; Carter, B. J.; Banks-Sills, L.
2005-01-01
The objective of this project is to develop and demonstrate a capability for computing stress intensity factors in generally anisotropic materials. These objectives have been met. The primary deliverable of this project is this report and the information it contains. In addition, we have delivered the source code for a subroutine that will compute stress intensity factors for anisotropic materials, encoded in both the C and Python programming languages, and made available a version of the FRANC3D program that incorporates this subroutine. Single-crystal superalloys are commonly used for components in the hot sections of contemporary jet and rocket engines. Because these components have a uniform atomic lattice orientation throughout, they exhibit anisotropic material behavior. This means that stress intensity solutions developed for isotropic materials are not appropriate for the analysis of crack growth in these materials. Until now, a general numerical technique did not exist for computing stress intensity factors of cracks in anisotropic materials, and cubic materials in particular. Such a capability was developed during the project and is described and demonstrated herein.
Tan, Sisi; Wu, Zhao; Lei, Lei; Hu, Shoujin; Dong, Jianji; Zhang, Xinliang
2013-03-25
We propose and experimentally demonstrate an all-optical differentiator-based computation system used for solving constant-coefficient first-order linear ordinary differential equations. It consists of an all-optical intensity differentiator and a wavelength converter, both based on a semiconductor optical amplifier (SOA) and an optical filter (OF). The equation is solved for various values of the constant-coefficient and two considered input waveforms, namely, super-Gaussian and Gaussian signals. An excellent agreement between the numerical simulation and the experimental results is obtained.
Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers
NASA Technical Reports Server (NTRS)
Patera, Anthony T.
1993-01-01
Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is invoked only to construct and validate a simplified input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers is considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.
Computation of Anisotropic Bi-Material Interfacial Fracture Parameters and Delamination Criteria
NASA Technical Reports Server (NTRS)
Chow, W-T.; Wang, L.; Atluri, S. N.
1998-01-01
This report documents recent developments in methodologies for the evaluation of the integrity and durability of composite structures, including i) the establishment of a stress-intensity-factor-based fracture criterion for bimaterial interfacial cracks in anisotropic materials (see Sec. 2); and ii) the development of a virtual crack closure integral method for the evaluation of the mixed-mode stress intensity factors for a bimaterial interfacial crack (see Sec. 3). Analytical and numerical results show that the proposed fracture criterion characterizes bimaterial interfacial cracks better than the total energy release rate criterion. The proposed virtual crack closure integral method is an efficient and accurate numerical method for the evaluation of mixed-mode stress intensity factors.
Numerical Simulation of the Generation of Axisymmetric Mode Jet Screech Tones
NASA Technical Reports Server (NTRS)
Shen, Hao; Tam, Christopher K. W.
1998-01-01
An imperfectly expanded supersonic jet, invariably, radiates both broadband noise and discrete frequency sound called screech tones. Screech tones are known to be generated by a feedback loop driven by the large scale instability waves of the jet flow. Inside the jet plume is a quasi-periodic shock cell structure. The interaction of the instability waves and the shock cell structure, as the former propagates through the latter, is responsible for the generation of the tones. Presently, there are formulas that can predict the tone frequency fairly accurately. However, there is no known way to predict the screech tone intensity. In this work, the screech phenomenon of an axisymmetric jet at low supersonic Mach number is reproduced by numerical simulation. The computed mean velocity profiles and the shock cell pressure distribution of the jet are found to be in good agreement with experimental measurements. The same is true with the simulated screech frequency. Calculated screech tone intensity and directivity at selected jet Mach number are reported in this paper. The present results demonstrate that numerical simulation using computational aeroacoustics methods offers not only a reliable way to determine the screech tone intensity and directivity but also an opportunity to study the physics and detailed mechanisms of the phenomenon by an entirely new approach.
Seismic waveform modeling over cloud
NASA Astrophysics Data System (ADS)
Luo, Cong; Friederich, Wolfgang
2016-04-01
With fast-growing computational technologies, numerical simulation of seismic wave propagation has achieved huge success. Obtaining synthetic waveforms through numerical simulation receives an increasing amount of attention from seismologists. However, computational seismology is a data-intensive research field, and the numerical packages usually come with a steep learning curve. Users are expected to master a considerable amount of computer knowledge and data processing skills. Training users to use the numerical packages and to correctly access and utilize the computational resources is a troublesome task. In addition, access to HPC is a common difficulty for many users. To solve these problems, a cloud-based solution dedicated to shallow seismic waveform modeling has been developed with state-of-the-art web technologies. It is a web platform integrating both software and hardware in a multilayer architecture: a well-designed SQL database serves as the data layer, and HPC with a dedicated pipeline forms the business layer. Through this platform, users no longer need to compile and manipulate various packages on a local machine within a local network to perform a simulation. By providing users professional access to the computational code through its interfaces and delivering our computational resources to users over the cloud, users can customize a simulation at expert level and submit and run the job through the platform.
NASA Technical Reports Server (NTRS)
Swedlow, J. L.
1976-01-01
An approach is described for singularity computations based on a numerical method for elastoplastic flow to delineate radial and angular distribution of field quantities and measure the intensity of the singularity. The method is applicable to problems in solid mechanics and lends itself to certain types of heat flow and fluid motion studies. Its use is not limited to linear, elastic, small strain, or two-dimensional situations.
Flow in curved ducts of varying cross-section
NASA Astrophysics Data System (ADS)
Sotiropoulos, F.; Patel, V. C.
1992-07-01
Two numerical methods for solving the incompressible Navier-Stokes equations are compared with each other by applying them to calculate laminar and turbulent flows through curved ducts of regular cross-section. Detailed comparisons, between the computed solutions and experimental data, are carried out in order to validate the two methods and to identify their relative merits and disadvantages. Based on the conclusions of this comparative study a numerical method is developed for simulating viscous flows through curved ducts of varying cross-sections. The proposed method is capable of simulating the near-wall turbulence using fine computational meshes across the sublayer in conjunction with a two-layer k-epsilon model. Numerical solutions are obtained for: (1) a straight transition duct geometry, and (2) a hydroturbine draft-tube configuration at model scale Reynolds number for various inlet swirl intensities. The report also provides a detailed literature survey that summarizes all the experimental and computational work in the area of duct flows.
Castaño-Díez, Daniel
2017-01-01
Dynamo is a package for the processing of tomographic data. As a tool for subtomogram averaging, it includes different alignment and classification strategies. Furthermore, its data-management module allows experiments to be organized in groups of tomograms, while offering specialized three-dimensional tomographic browsers that facilitate visualization, location of regions of interest, modelling and particle extraction in complex geometries. Here, a technical description of the package is presented, focusing on its diverse strategies for optimizing computing performance. Dynamo is built upon mbtools (middle layer toolbox), a general-purpose MATLAB library for object-oriented scientific programming specifically developed to underpin Dynamo but usable as an independent tool. Its structure intertwines a flexible MATLAB codebase with precompiled C++ functions that carry the burden of numerically intensive operations. The package can be delivered as a precompiled standalone ready for execution without a MATLAB license. Multicore parallelization on a single node is directly inherited from the high-level parallelization engine provided for MATLAB, automatically imparting a balanced workload among the threads in computationally intense tasks such as alignment and classification, but also in logistic-oriented tasks such as tomogram binning and particle extraction. Dynamo supports the use of graphical processing units (GPUs), yielding considerable speedup factors both for native Dynamo procedures (such as the numerically intensive subtomogram alignment) and procedures defined by the user through its MATLAB-based GPU library for three-dimensional operations. Cloud-based virtual computing environments supplied with a pre-installed version of Dynamo can be publicly accessed through the Amazon Elastic Compute Cloud (EC2), enabling users to rent GPU computing time on a pay-as-you-go basis, thus avoiding upfront investments in hardware and long-term software maintenance. PMID:28580909
Computational approaches to computational aero-acoustics
NASA Technical Reports Server (NTRS)
Hardin, Jay C.
1996-01-01
The various techniques by which the goal of computational aeroacoustics (the calculation and noise prediction of a fluctuating fluid flow) may be achieved are reviewed. The governing equations for compressible fluid flow are presented. The direct numerical simulation approach is shown to be computationally intensive for high Reynolds number viscous flows. Therefore, other approaches, such as the acoustic analogy, vortex models and various perturbation techniques that aim to break the analysis into a viscous part and an acoustic part are presented. The choice of the approach is shown to be problem dependent.
NASA Astrophysics Data System (ADS)
Decyk, Viktor K.; Dauger, Dean E.
We have constructed a parallel cluster consisting of Apple Macintosh G4 computers running both Classic Mac OS as well as the Unix-based Mac OS X, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. Unlike other Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.
Digital image processing for information extraction.
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1973-01-01
The modern digital computer has made practical image processing techniques for handling nonlinear operations in both the geometrical and the intensity domains, various types of nonuniform noise cleanup, and the numerical analysis of pictures. An initial requirement is that a number of anomalies caused by the camera (e.g., geometric distortion, MTF roll-off, vignetting, and nonuniform intensity response) must be taken into account or removed to avoid their interference with the information extraction process. Examples illustrating these operations are discussed along with computer techniques used to emphasize details, perform analyses, classify materials by multivariate analysis, detect temporal differences, and aid in human interpretation of photos.
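The removal of camera anomalies mentioned above is, at its simplest, a dark-subtraction and flat-field division; a minimal synthetic sketch:

```python
import numpy as np

# Removing camera-induced intensity anomalies (dark level and vignetting)
# before information extraction: a minimal flat-field correction sketch.
rng = np.random.default_rng(0)
scene = rng.uniform(100, 200, size=(64, 64))          # "true" radiance
yy, xx = np.mgrid[0:64, 0:64]
vignette = 1.0 - 0.3 * (((xx - 32)**2 + (yy - 32)**2) / 32**2)
dark = 10.0
raw = scene * vignette + dark                          # camera output

flat = vignette                   # measured from a uniform target
corrected = (raw - dark) / flat
print(np.allclose(corrected, scene))                   # True
```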
Toward a superconducting quantum computer
Tsai, Jaw-Shen
2010-01-01
Intensive research on the construction of superconducting quantum computers has produced numerous important achievements. The quantum bit (qubit), based on the Josephson junction, is at the heart of this research. This macroscopic system has the ability to control quantum coherence. This article reviews the current state of quantum computing as well as its history, and discusses its future. Although progress has been rapid, the field remains beset with unsolved issues, and there are still many new research opportunities open to physicists and engineers. PMID:20431256
Structural optimization with approximate sensitivities
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.
1994-01-01
Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.
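In the same spirit, the sketch below swaps exact constraint gradients for cheap finite-difference approximations inside a standard nonlinear programming routine (a generic SciPy illustration, not the report's structural code or its specific gradient approximation):

```python
import numpy as np
from scipy.optimize import minimize

# Optimization with approximate (finite-difference) constraint gradients in
# place of exact closed-form ones: the trade-off studied in the report.
def weight(x):            # objective: a structural-weight surrogate
    return x[0] + 2 * x[1]

def g(x):                 # behavior constraint g(x) >= 0 (e.g. stress margin)
    return x[0] * x[1] - 1.0

def g_grad_approx(x, h=1e-6):   # forward-difference gradient, cheap to form
    g0 = g(x)
    return np.array([(g(x + h * e) - g0) / h for e in np.eye(2)])

res = minimize(weight, x0=[2.0, 2.0],
               constraints=[{"type": "ineq", "fun": g, "jac": g_grad_approx}])
print(res.x)    # -> roughly [sqrt(2), sqrt(2)/2]
```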
Specialized computer architectures for computational aerodynamics
NASA Technical Reports Server (NTRS)
Stevenson, D. K.
1978-01-01
In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relatively high cost of performing these computations on commercially available general-purpose computers, a cost high with respect to both dollar expenditure and elapsed time. Today's computing technology will support a program designed to create specialized computing facilities dedicated to the important problems of computational aerodynamics. One of the still-unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.
Gaussian representation of high-intensity focused ultrasound beams.
Soneson, Joshua E; Myers, Matthew R
2007-11-01
A method for fast numerical simulation of high-intensity focused ultrasound beams is derived. The method is based on the frequency-domain representation of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and assumes for each harmonic a Gaussian transverse pressure distribution at all distances from the transducer face. The beamwidths of the harmonics are constrained to vary inversely with the square root of the harmonic number, and as such this method may be viewed as an extension of a quasilinear approximation. The technique is capable of determining pressure or intensity fields of moderately nonlinear high-intensity focused ultrasound beams in water or biological tissue, usually requiring less than a minute of computer time on a modern workstation. Moreover, this method is particularly well suited to high-gain simulations since, unlike traditional finite-difference methods, it is not subject to resolution limitations in the transverse direction. Results are shown to be in reasonable agreement with numerical solutions of the full KZK equation in both tissue and water for moderately nonlinear beams.
Experimental investigation and numerical modelling of positive corona discharge: ozone generation
NASA Astrophysics Data System (ADS)
Yanallah, K; Pontiga, F; Fernández-Rueda, A; Castellanos, A
2009-03-01
The spatial distribution of the species generated in a wire-cylinder positive corona discharge in pure oxygen has been computed using a plasma chemistry model that includes the most significant reactions between electrons, ions, atoms and molecules. The plasma chemistry model is included in the continuity equations of each species, which are coupled with Poisson's equation for the electric field and the energy conservation equation for the gas temperature. The current-voltage characteristic measured in the experiments has been used as an input data to the numerical simulation. The numerical model is able to reproduce the basic structure of the positive corona discharge and highlights the importance of Joule heating on ozone generation. The average ozone density has been computed as a function of current intensity and compared with the experimental measurements of ozone concentration determined by UV absorption spectroscopy.
Data-intensive computing on numerically-insensitive supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahrens, James P; Fasel, Patricia K; Habib, Salman
2010-12-03
With the advent of the era of petascale supercomputing, via the delivery of the Roadrunner supercomputing platform at Los Alamos National Laboratory, there is a pressing need to address the problem of visualizing massive petascale-sized results. In this presentation, I discuss progress on a number of approaches including in-situ analysis, multi-resolution out-of-core streaming and interactive rendering on the supercomputing platform. These approaches are placed in context by the emerging area of data-intensive supercomputing.
NASA Astrophysics Data System (ADS)
Perrin, A.; Ndao, M.; Manceron, L.
2017-10-01
A recent paper [1] presents a high-resolution, high-temperature version of the Nitrogen Dioxide Spectroscopic Databank called NDSD-1000. The NDSD-1000 database contains line parameters (positions, intensities, self- and air-broadening coefficients, exponents of the temperature dependence of self- and air-broadening coefficients) for numerous cold and hot bands of the 14N16O2 isotopomer of nitrogen dioxide. The parameters used for the line position and intensity calculations were generated through a global modeling of experimental data collected in the literature within the framework of the method of effective operators. However, the form of the effective dipole moment operator used to compute the NO2 line intensities in the NDSD-1000 database differs from the classical one used for line intensity calculations in the NO2 infrared literature [12]. Using Fourier transform spectra recorded at high resolution in the 6.3 μm region, it is shown here that the NDSD-1000 formulation is incorrect, since the computed intensities do not properly account for the (Int(+)/Int(-)) intensity ratio between the (+) (J = N + 1/2) and (-) (J = N - 1/2) electron spin-rotation subcomponents of the computed vibration-rotation transitions. On the other hand, in the HITRAN and GEISA spectroscopic databases the NO2 line intensities were computed using the classical theoretical approach, and it is shown here that these data lead to significantly better agreement between the observed and calculated spectra.
The CALL-SLA Interface: Insights from a Second-Order Synthesis
ERIC Educational Resources Information Center
Plonsky, Luke; Ziegler, Nicole
2016-01-01
The relationship between computer-assisted language learning (CALL) and second language acquisition (SLA) has been studied both extensively, covering numerous subdomains, and intensively, resulting in hundreds of primary studies. It is therefore no surprise that CALL researchers, as in other areas of applied linguistics, have turned in recent…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aliaga, José I., E-mail: aliaga@uji.es; Alonso, Pedro; Badía, José M.
We introduce a new iterative Krylov subspace-based eigensolver for the simulation of macromolecular motions on desktop multithreaded platforms equipped with multicore processors and, possibly, a graphics accelerator (GPU). The method consists of two stages, with the original problem first reduced into a simpler band-structured form by means of a high-performance compute-intensive procedure. This is followed by a memory-intensive but low-cost Krylov iteration, which is off-loaded to be computed on the GPU by means of an efficient data-parallel kernel. The experimental results reveal the performance of the new eigensolver. Concretely, when applied to the simulation of macromolecules with a few thousand degrees of freedom, and when the number of eigenpairs to be computed is small to moderate, the new solver outperforms other methods implemented as part of high-performance numerical linear algebra packages for multithreaded architectures.
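As a rough illustration of the Krylov stage, the snippet below computes a few low-frequency eigenpairs of a stiffness-like symmetric matrix with SciPy's Krylov eigensolver (the band-reduction first stage and the GPU off-loading of the paper are not reproduced):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# Krylov-subspace computation of a few low-frequency eigenpairs of a
# stiffness-like matrix, standing in for a macromolecular Hessian.
n = 500
rng = np.random.default_rng(1)
B = rng.standard_normal((n, n)) * 0.01
H = B @ B.T + np.diag(np.linspace(1.0, 10.0, n))   # symmetric positive definite

vals, vecs = eigsh(H, k=6, which="SA")             # 6 smallest eigenpairs
print(vals)
```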
Electromagnetic field scattering by a triangular aperture.
Harrison, R E; Hyman, E
1979-03-15
The multiple Laplace transform has been applied to analysis and computation of scattering by a double triangular aperture. Results are obtained which match far-field intensity distributions observed in experiments. Arbitrary polarization components, as well as in-phase and quadrature-phase components, may be determined, in the transform domain, as a continuous function of distance from near to far-field for any orientation, aperture, and transformable waveform. Numerical results are obtained by application of numerical multiple inversions of the fully transformed solution.
NASA Technical Reports Server (NTRS)
Chackerian, C., Jr.; Farreng, R.; Guelachvili, G.; Rossetti, C.; Urban, W.
1984-01-01
Experimental intensity information is combined with numerically obtained vibrational wave functions in a nonlinear least squares fitting procedure to obtain the ground electronic state electric-dipole-moment function of carbon monoxide, valid in the range of nuclear oscillation (0.87 to 1.01 A) of about the v = 38th vibrational level. Mechanical anharmonicity intensity factors, H, are computed from this function for Δv = 1, 2, 3, with v ≤ 38.
Numerical simulation of stress amplification induced by crack interaction in human femur bone
NASA Astrophysics Data System (ADS)
Alia, Noor; Daud, Ruslizam; Ramli, Mohammad Fadzli; Azman, Wan Zuki; Faizal, Ahmad; Aisyah, Siti
2015-05-01
This research concerns numerical simulation, using a computational method, of stress amplification induced by crack interaction in the human femur. Cracks in the human femur usually occur because of a large load or stress applied to it, and the fracture usually takes a long time to heal. At present, crack interaction is still not well understood due to the complexity of bone; thus, the brittle fracture behavior of bone may be underestimated and inaccurately predicted. This study aims to investigate the geometrical effect of double co-planar edge cracks on the stress intensity factor (K) in the femur. The research focuses on analyzing the amplification effect of double co-planar edge cracks on fracture behavior, where a numerical model is developed using a computational method. The concepts of fracture mechanics and the finite element method (FEM) are used to solve the interacting-crack problems within linear elastic fracture mechanics (LEFM) theory. As a result, this study has identified the crack interaction limit (CIL) and crack unification limit (CUL) that exist in the human femur model developed. In future research, several improvements will be made, such as varying the load, applying thickness to the model, and using different theories or methods for calculating the stress intensity factor (K).
NASA Astrophysics Data System (ADS)
Nehar, K. C.; Hachi, B. E.; Cazes, F.; Haboussi, M.
2017-12-01
The aim of the present work is to investigate the numerical modeling of interfacial cracks that may appear at the interface between two isotropic elastic materials. The extended finite element method is employed to analyze brittle and bi-material interfacial fatigue crack growth by computing the mixed mode stress intensity factors (SIF). Three different approaches are introduced to compute the SIFs. In the first one, mixed mode SIF is deduced from the computation of the contour integral as per the classical J-integral method, whereas a displacement method is used to evaluate the SIF by using either one or two displacement jumps located along the crack path in the second and third approaches. The displacement jump method is rather classical for mono-materials, but has to our knowledge not been used up to now for a bi-material. Hence, use of displacement jump for characterizing bi-material cracks constitutes the main contribution of the present study. Several benchmark tests including parametric studies are performed to show the effectiveness of these computational methodologies for SIF considering static and fatigue problems of bi-material structures. It is found that results based on the displacement jump methods are in a very good agreement with those of exact solutions, such as for the J-integral method, but with a larger domain of applicability and a better numerical efficiency (less time consuming and less spurious boundary effect).
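For the mono-material case, the displacement-jump route to the SIF is compact enough to sketch: the plane-strain crack-face jump obeys Δu_y = (8 K_I / E') √(r / 2π), so K_I can be read off from one or two jump samples. The round-trip check below uses synthetic data; the bi-material version modifies the effective modulus and adds an oscillatory index, which is not reproduced here.

```python
import numpy as np

# Mode-I SIF extracted from the crack-face displacement jump, the idea
# behind displacement-jump approaches (mono-material, plane-strain form).
E, nu = 200e9, 0.3
Ep = E / (1 - nu**2)              # plane-strain effective modulus
K_target = 2.0e6                  # Pa*sqrt(m), assumed for the test

r = np.array([1e-4, 2e-4])        # sample points behind the crack tip [m]
du = (8 * K_target / Ep) * np.sqrt(r / (2 * np.pi))   # synthetic jump data

K_est = Ep * du / 8 * np.sqrt(2 * np.pi / r)
print(K_est)                      # recovers ~2.0e6 at both points
```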
Finite element techniques applied to cracks interacting with selected singularities
NASA Technical Reports Server (NTRS)
Conway, J. C.
1975-01-01
The finite-element method for computing the extensional stress-intensity factor for cracks approaching selected singularities of varied geometry is described. Stress-intensity factors are generated using both displacement and J-integral techniques, and numerical results are compared to those obtained experimentally in a photoelastic investigation. The selected singularities considered are a collinear crack, a circular penetration, and a notched circular penetration. Results indicate that singularities greatly influence the crack-tip stress-intensity factor as the crack approaches the singularity. In addition, the degree of influence can be regulated by varying the overall geometry of the singularity. Local changes in singularity geometry have little effect on the stress-intensity factor for the cases investigated.
NASA Astrophysics Data System (ADS)
Nair, B. G.; Winter, N.; Daniel, B.; Ward, R. M.
2016-07-01
Direct measurement of the flow of electric current during VAR is extremely difficult due to the aggressive environment, as the arc process itself controls the distribution of current. In previous studies the technique of “magnetic source tomography” was presented; this was shown to be effective, but it used a computationally intensive iterative method to analyse the distribution of arc centre position. In this paper we present faster computational methods requiring less numerical optimisation to determine the centre position of a single distributed arc, both numerically and experimentally. Numerical validation of the algorithms was done on models, and experimental validation on measurements of titanium and nickel alloys (Ti6Al4V and INCONEL 718). The results are used to comment on the effects of process parameters on arc behaviour during VAR.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohsuga, Ken; Takahashi, Hiroyuki R.
2016-02-20
We develop a numerical scheme for solving the equations of fully special relativistic radiation magnetohydrodynamics (MHD), in which the frequency-integrated, time-dependent radiation transfer equation is solved to calculate the specific intensity. The radiation energy density, the radiation flux, and the radiation stress tensor are obtained by angular quadrature of the intensity. In the present method, conservation of the total mass, momentum, and energy of the radiation magnetofluids is guaranteed. We treat not only isotropic scattering but also Thomson scattering. The numerical method for the MHD part is the same as that of our previous work. The advection terms are explicitly solved, and the source terms, which describe the gas-radiation interaction, are implicitly integrated. Our code is suitable for massively parallel computing. We show that our code produces reasonable results in numerical tests for propagating radiation and radiation hydrodynamics. In particular, the correct solution is given even in the optically very thin or moderately thin regimes, and the special relativistic effects are nicely reproduced.
Meshfree and efficient modeling of swimming cells
NASA Astrophysics Data System (ADS)
Gallagher, Meurig T.; Smith, David J.
2018-05-01
Locomotion in Stokes flow is an intensively studied problem because it describes important biological phenomena such as the motility of many species' sperm, bacteria, algae, and protozoa. Numerical computations can be challenging, particularly in three dimensions, due to the presence of moving boundaries and complex geometries; methods which combine ease of implementation and computational efficiency are therefore needed. A recently proposed method to discretize the regularized Stokeslet boundary integral equation without the need for a connected mesh is applied to the inertialess locomotion problem in Stokes flow. The mathematical formulation and key aspects of the computational implementation in matlab® or GNU Octave are described, followed by numerical experiments with biflagellate algae and multiple uniflagellate sperm swimming between no-slip surfaces, for which both swimming trajectories and flow fields are calculated. These computational experiments required minutes of time on modest hardware; an extensible implementation is provided in a GitHub repository. The nearest-neighbor discretization dramatically improves convergence and robustness, a key challenge in extending the regularized Stokeslet method to complicated three-dimensional biological fluid problems.
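The kernel at the heart of the method is simple to state; below is a sketch of the velocity of a single regularized Stokeslet with a Cortez-type blob (unit viscosity assumed; the formula is the one commonly quoted in the regularized-Stokeslet literature and should be checked against the cited references):

```python
import numpy as np

# Velocity field of a single regularized Stokeslet (Cortez-type blob),
# the kernel underlying the method of regularized Stokeslets.
def reg_stokeslet_velocity(x, x0, F, eps, mu=1.0):
    r = x - x0
    r2 = np.dot(r, r)
    d = np.sqrt(r2 + eps**2)          # regularized distance
    return (F * (r2 + 2 * eps**2) / d**3
            + r * np.dot(F, r) / d**3) / (8 * np.pi * mu)

x0 = np.zeros(3)                   # force location
F = np.array([0.0, 0.0, 1.0])      # point-force strength
print(reg_stokeslet_velocity(np.array([1.0, 0.0, 0.0]), x0, F, eps=0.01))
# At unit perpendicular distance this approaches the classical Stokeslet
# value (0, 0, 1/(8*pi)) as eps -> 0.
```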
Vaporization of irradiated droplets
NASA Astrophysics Data System (ADS)
Armstrong, R. L.; O'Rourke, P. J.; Zardecki, A.
1986-11-01
The vaporization of a spherically symmetric liquid droplet subject to a high-intensity laser flux is investigated on the basis of a hydrodynamic description of the system composed of the vapor and ambient gas. In the limit of convective vaporization, the boundary conditions at the fluid-gas interface are formulated by using the notion of a Knudsen layer in which translational equilibrium is established. This leads to approximate jump conditions at the interface. For homogeneous energy deposition, the hydrodynamic equations are solved numerically with the aid of the CON1D computer code ("CON1D: A computer program for calculating spherically symmetric droplet combustion," Los Alamos National Laboratory Report No. LA-10269-MS, December, 1984), based on the implicit continuous-fluid Eulerian (ICE) [J. Comput. Phys. 8, 197 (1971)] and arbitrary Lagrangian-Eulerian (ALE) [J. Comput. Phys. 14, 1227 (1974)] numerical methods. The solutions exhibit the existence of two shock waves propagating in opposite directions with respect to the contact discontinuity surface that separates the ambient gas and vapor.
Computational attributes of the integral form of the equation of transfer
NASA Technical Reports Server (NTRS)
Frankel, J. I.
1991-01-01
Difficulties can arise in radiative and neutron transport calculations when a highly anisotropic scattering phase function is present. In the presence of anisotropy, currently used numerical solutions are based on the integro-differential form of the linearized Boltzmann transport equation. This paper departs from classical thought and presents an alternative numerical approach based on application of the integral form of the transport equation. Use of the integral formalism facilitates the following steps: a reduction in dimensionality of the system prior to discretization, the use of symbolic manipulation to augment the computational procedure, and the direct determination of key physical quantities which are derivable through the various Legendre moments of the intensity. The approach is developed in the context of radiative heat transfer in a plane-parallel geometry, and results are presented and compared with existing benchmark solutions. Encouraging results are presented to illustrate the potential of the integral formalism for computation. The integral formalism appears to possess several computational attributes which are well-suited to radiative and neutron transport calculations.
Calculation of heat sink around cracks formed under pulsed heat load
NASA Astrophysics Data System (ADS)
Lazareva, G. G.; Arakcheev, A. S.; Kandaurov, I. V.; Kasatov, A. A.; Kurkuchekov, V. V.; Maksimova, A. G.; Popov, V. A.; Shoshin, A. A.; Snytnikov, A. V.; Trunev, Yu A.; Vasilyev, A. A.; Vyacheslavov, L. N.
2017-10-01
Experimental and numerical simulations of the conditions causing the intensive erosion expected to be realized in a fusion reactor were carried out. The influence of relevant pulsed heat loads on tungsten was simulated using a powerful electron beam source at BINP. Mechanical destruction, melting and splashing of the material were observed. The laboratory experiments are accompanied by computational ones. The computational experiment made it possible to quantitatively describe the overheating near the surface caused by cracks parallel to it.
Quantitative computational infrared imaging of buoyant diffusion flames
NASA Astrophysics Data System (ADS)
Newale, Ashish S.
Studies of infrared radiation from turbulent buoyant diffusion flames impinging on structural elements have applications to the development of fire models. A numerical and experimental study of radiation from buoyant diffusion flames with and without impingement on a flat plate is reported. Quantitative images of the radiation intensity from the flames are acquired using a high-speed infrared camera. Large eddy simulations are performed using the Fire Dynamics Simulator (FDS version 6). The species concentrations and temperatures from the simulations are used in conjunction with a narrow-band radiation model (RADCAL) to solve the radiative transfer equation. The computed infrared radiation intensities are rendered in the form of images and compared with the measurements. The measured and computed radiation intensities reveal necking and bulging with a characteristic frequency of 7.1 Hz, which is in agreement with previous empirical correlations. The results demonstrate the effects of the stagnation-point boundary layer on the upstream buoyant shear layer. The coupling between these two shear layers presents a model problem for the sub-grid scale modeling necessary for future large eddy simulations.
Computation of transmitted and received B1 fields in magnetic resonance imaging.
Milles, Julien; Zhu, Yue Min; Chen, Nan-Kuei; Panych, Lawrence P; Gimenez, Gérard; Guttmann, Charles R G
2006-05-01
Computation of B1 fields is a key issue for determination and correction of intensity nonuniformity in magnetic resonance images. This paper presents a new method for computing transmitted and received B1 fields. Our method combines a modified MRI acquisition protocol and an estimation technique based on the Levenberg-Marquardt algorithm and spatial filtering. It enables accurate estimation of transmitted and received B1 fields for both homogeneous and heterogeneous objects. The method is validated using numerical simulations and experimental data from phantom and human scans. The experimental results are in agreement with theoretical expectations.
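A generic illustration of the estimation step (not the paper's modified acquisition protocol): fit the flip-angle-dependent signal model S(α) = M0 sin(b1 α) with a Levenberg-Marquardt solver to recover the transmit scaling b1.

```python
import numpy as np
from scipy.optimize import least_squares

# Levenberg-Marquardt estimation of the transmit B1 scaling from
# multi-flip-angle signal magnitudes, S(alpha) = M0 * sin(b1 * alpha).
# A generic sketch, not the paper's exact acquisition protocol.
alphas = np.deg2rad([30, 60, 90, 120])
b1_true, M0_true = 0.85, 100.0
data = M0_true * np.sin(b1_true * alphas)      # synthetic measurements

def residual(p):
    M0, b1 = p
    return M0 * np.sin(b1 * alphas) - data

fit = least_squares(residual, x0=[80.0, 1.0], method="lm")
print(fit.x)   # -> approximately [100.0, 0.85]
```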
NASA Technical Reports Server (NTRS)
Hersh, Alan S.; Tam, Christopher
2009-01-01
Two significant advances have been made in the application of computational aeroacoustics methodology to acoustic liner technology. The first is that temperature effects for discrete sound are not the same as for broadband noise. For discrete sound, the normalized resistance appears to be insensitive to temperature except at high SPL. However, the reactance is lower, significantly lower in absolute value, at high temperature. The second is the numerical investigation of the acoustic performance of a liner by direct numerical simulation. Liner impedance is affected by the non-uniformity of the incident sound waves, which identifies the importance of the pressure gradient. Preliminary one- and two-dimensional impedance models were developed to design sound-absorbing liners in the presence of intense sound and grazing flow. The two-dimensional model offers the potential to empirically determine the incident sound pressure at face-plate distances from the resonator orifices. This represents an important initial step in improving our understanding of how to effectively use the Dean two-microphone impedance measurement method.
On the relative intensity of Poisson’s spot
NASA Astrophysics Data System (ADS)
Reisinger, T.; Leufke, P. M.; Gleiter, H.; Hahn, H.
2017-03-01
The Fresnel diffraction phenomenon referred to as Poisson's spot or the spot of Arago has, besides its historical significance, become relevant in a number of fields. Among them are, for example, fundamental tests of the superposition principle in the transition from quantum to classical physics and the search for extra-solar planets using star shades. Poisson's spot refers to the positive on-axis wave interference in the shadow of any spherical or circular obstacle. While the spot's intensity is equal to that of the undisturbed field in the plane-wave picture, its intensity in general depends on a number of factors, namely the size and wavelength of the source, the size and surface corrugation of the diffraction obstacle, and the distances between source, obstacle and detector. The intensity can be calculated by solving the Fresnel-Kirchhoff diffraction integral numerically, which however tends to be computationally expensive. We have therefore devised an analytical model for the on-axis intensity of Poisson's spot relative to the intensity of the undisturbed wave field and successfully validated it both using a simple light diffraction setup and numerical methods. The model will be useful for optimizing future Poisson-spot matter-wave diffraction experiments and for determining under what experimental conditions the spot can be observed.
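A minimal numerical check of the plane-wave limit can be sketched in a few lines: evaluating the paraxial Fresnel integral on axis behind an opaque disc via Babinet's principle recovers a relative spot intensity of about 1. The wavelength, disc radius, and distance below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

lam = 532e-9   # wavelength (m), assumed
R = 1e-3       # disc radius (m), assumed
L = 1.0        # disc-to-detector distance (m), assumed
k = 2 * np.pi / lam

# On-axis field behind a circular *aperture* of radius R (unit-amplitude
# plane wave, paraxial Fresnel integral, radial trapezoidal quadrature):
r = np.linspace(0.0, R, 20001)
f = np.exp(1j * k * r**2 / (2 * L)) * r
U_aperture = (k / (1j * L)) * np.sum((f[1:] + f[:-1]) / 2) * (r[1] - r[0])

# Babinet's principle: disc field = free field (normalized to 1) - aperture field
U_disc = 1.0 - U_aperture
print("relative on-axis intensity:", abs(U_disc)**2)  # ~1.0: Poisson's spot
```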
Computation of Nonlinear Backscattering Using a High-Order Numerical Method
NASA Technical Reports Server (NTRS)
Fibich, G.; Ilan, B.; Tsynkov, S.
2001-01-01
The nonlinear Schrodinger equation (NLS) is the standard model for propagation of intense laser beams in Kerr media. The NLS is derived from the nonlinear Helmholtz equation (NLH) by employing the paraxial approximation and neglecting the backscattered waves. In this study we use a fourth-order finite-difference method supplemented by special two-way artificial boundary conditions (ABCs) to solve the NLH as a boundary value problem. Our numerical methodology allows for a direct comparison of the NLH and NLS models and for an accurate quantitative assessment of the backscattered signal.
ERIC Educational Resources Information Center
Facao, M.; Lopes, A.; Silva, A. L.; Silva, P.
2011-01-01
We propose an undergraduate numerical project for simulating the results of the second-order correlation function as obtained by an intensity interference experiment for two kinds of light, namely bunched light with Gaussian or Lorentzian power density spectrum and antibunched light obtained from single-photon sources. While the algorithm for…
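The bunched-light part of such a project can be sketched as follows (all parameters are illustrative assumptions): chaotic light with a Lorentzian power density spectrum is synthesized as exponentially correlated complex Gaussian noise, and the second-order correlation function is estimated from the intensity record, giving g2(0) close to 2.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, tau_c = 200_000, 1.0, 50.0     # samples, time step, coherence time (a.u.)

# Exponentially correlated complex Gaussian field -> Lorentzian power spectrum
a = np.exp(-dt / tau_c)
kick = (rng.normal(size=n) + 1j * rng.normal(size=n)) * np.sqrt(1 - a**2)
E = np.empty(n, dtype=complex)
E[0] = kick[0]
for i in range(1, n):
    E[i] = a * E[i - 1] + kick[i]

I = np.abs(E) ** 2
for lag in (0, 25, 50, 100, 200):
    g2 = np.mean(I[: n - lag] * I[lag:]) / np.mean(I) ** 2
    print(f"g2({lag}) = {g2:.3f}")    # g2(0) ~ 2 (bunching), -> 1 at large lag
```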
Open source data logger for low-cost environmental monitoring
2014-01-01
The increasing transformation of biodiversity into a data-intensive science has seen numerous independent systems linked and aggregated into the current landscape of biodiversity informatics. This paper outlines how we can move forward with this programme, incorporating real-time environmental monitoring into our methodology using low-power and low-cost computing platforms.
Numerical simulation of electron scattering by nanotube junctions
NASA Astrophysics Data System (ADS)
Brüning, J.; Grikurov, V. E.
2008-03-01
We demonstrate the possibility of computing the intensity of electronic transport through various junctions of three-dimensional metallic nanotubes. In particular, we observe that a magnetic field can be used to control the switching of electrons in Y-type junctions. Keeping in mind the asymptotic modeling of realistic nanostructures by quantum graphs, we conjecture that the scattering matrix of the graph should be the same as the scattering matrix of its nanosize prototype. The numerical computation of the latter gives a method for determining the "gluing" conditions at a graph. Exploring this conjecture, we show that the Kirchhoff conditions (which are commonly used on graphs) cannot be applied to model realistic junctions. This work is a natural extension of the paper [1], but it is written in a self-contained manner.
Modeling of shock wave propagation in large amplitude ultrasound.
Pinton, Gianmarco F; Trahey, Gregg E
2008-01-01
The Rankine-Hugoniot relation for shock wave propagation describes the shock speed of a nonlinear wave. This paper investigates time-domain numerical methods that solve the nonlinear parabolic wave equation, or the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and the conditions they require to satisfy the Rankine-Hugoniot relation. Two numerical methods commonly used for hyperbolic conservation laws are adapted to solve the KZK equation: Godunov's method and the monotonic upwind scheme for conservation laws (MUSCL). It is shown that they satisfy the Rankine-Hugoniot relation regardless of attenuation. These two methods are compared with the current implicit-solution-based method. When the attenuation is small, such as in water, the current method requires a degree of grid refinement that is computationally impractical. All three numerical methods are compared in simulations of lithotripters and high-intensity focused ultrasound (HIFU), where the attenuation is small compared to the nonlinearity because much of the propagation occurs in water. The simulations are performed on grid sizes that are consistent with present-day computational resources but are not sufficiently refined for the current method to satisfy the Rankine-Hugoniot condition. It is shown that satisfying the Rankine-Hugoniot conditions has a significant impact on metrics relevant to lithotripsy (such as peak pressures) and HIFU (intensity). Because the Godunov and MUSCL schemes satisfy the Rankine-Hugoniot conditions on coarse grids, they are particularly advantageous for three-dimensional simulations.
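The shock-capturing property at issue can be illustrated with a first-order Godunov scheme for the inviscid Burgers equation (a standard model problem, not the paper's KZK solver); for the step initial data below the Rankine-Hugoniot shock speed is (uL + uR)/2 = 0.5, and the computed shock lands at the predicted position even on a coarse grid.

```python
import numpy as np

nx = 400
dx, cfl = 1.0 / nx, 0.4
x = (np.arange(nx) + 0.5) * dx
u = np.where(x < 0.25, 1.0, 0.0)       # step: uL=1, uR=0 -> shock speed 0.5

def godunov_flux(ul, ur):
    # Exact Riemann flux for Burgers' equation, f(u) = u^2 / 2
    fl, fr = 0.5 * ul**2, 0.5 * ur**2
    shock = np.where((ul + ur) / 2 > 0, fl, fr)
    rarefaction = np.where(ul > 0, fl, np.where(ur < 0, fr, 0.0))
    return np.where(ul > ur, shock, rarefaction)

t, t_end = 0.0, 0.5
while t < t_end:
    dt = min(cfl * dx / max(np.abs(u).max(), 1e-12), t_end - t)
    f = godunov_flux(u[:-1], u[1:])                 # interface fluxes
    u[1:-1] -= dt / dx * (f[1:] - f[:-1])           # conservative update
    t += dt

print("computed shock position ~", x[np.argmin(np.abs(u - 0.5))])
print("Rankine-Hugoniot prediction:", 0.25 + 0.5 * t_end)
```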
GPU-accelerated computation of electron transfer.
Höfinger, Siegfried; Acocella, Angela; Pop, Sergiu C; Narumi, Tetsu; Yasuoka, Kenji; Beu, Titus; Zerbetto, Francesco
2012-11-05
Electron transfer is a fundamental process that can be studied with the help of computer simulation. The underlying quantum mechanical description renders the problem a computationally intensive application. In this study, we probe the graphics processing unit (GPU) for suitability to this type of problem. Time-critical components are identified via profiling of an existing implementation and several different variants are tested involving the GPU at increasing levels of abstraction. A publicly available library supporting basic linear algebra operations on the GPU turns out to accelerate the computation approximately 50-fold with minor dependence on actual problem size. The performance gain does not compromise numerical accuracy and is of significant value for practical purposes.
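The offloading pattern described, handing the dominant dense linear-algebra kernel to a GPU BLAS library, can be sketched as below. CuPy is used here purely as an illustrative stand-in (an assumption; the paper does not name the library), and the speedup will vary with hardware.

```python
import time
import numpy as np

try:
    import cupy as xp                  # GPU array library with BLAS-backed matmul
    on_gpu = True
except ImportError:
    xp = np                            # fall back to CPU so the sketch still runs
    on_gpu = False

n = 2000
A = xp.random.rand(n, n)
B = xp.random.rand(n, n)

t0 = time.perf_counter()
C = A @ B                              # dispatched to cuBLAS gemm on the GPU
if on_gpu:
    xp.cuda.Stream.null.synchronize()  # wait for the asynchronous GPU kernel
print(("GPU" if on_gpu else "CPU") + " matmul: %.3f s" % (time.perf_counter() - t0))
```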
Computation of acoustic pressure fields produced in feline brain by high-intensity focused ultrasound
NASA Astrophysics Data System (ADS)
Omidi, Nazanin
In 1975, Dunn et al. (JASA 58:512-514) showed that a simple relation describes the ultrasonic threshold for cavitation-induced changes in the mammalian brain. The thresholds for tissue damage were estimated for a variety of acoustic parameters in exposed feline brain. The goal of this study was to improve the estimates for the acoustic pressures and intensities present in vivo during those experimental exposures by estimating them using nonlinear rather than linear theory. In our current project, the acoustic pressure waveforms produced in the brains of anesthetized felines were numerically simulated for a spherically focused, nominally f/1 transducer (focal length = 13 cm) at increasing values of the source pressure at frequencies of 1, 3, and 9 MHz. The corresponding focal intensities were correlated with the experimental data of Dunn et al. The focal pressure waveforms were also computed at the location of the true maximum. For low source pressures, the computed waveforms were the same as those determined using linear theory, and the focal intensities matched experimentally determined values. For higher source pressures, the focal pressure waveforms became increasingly distorted, with the compressional amplitude of the wave becoming greater, and the rarefactional amplitude becoming lower, than the values calculated using linear theory. The implications of these results for clinical exposures are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haidar, Azzam; Jagode, Heike; Vaccaro, Phil; ...
2018-03-22
The emergence of power efficiency as a primary constraint in processor and system design poses new challenges concerning power and energy awareness for numerical libraries and scientific applications. Power consumption also plays a major role in the design of data centers, which may house petascale or exascale-level computing systems. At these extreme scales, understanding and improving the energy efficiency of numerical libraries and their related applications becomes a crucial part of the successful implementation and operation of the computing system. In this paper, we study and investigate the practice of controlling a compute system's power usage, and we explore how different power caps affect the performance of numerical algorithms with different computational intensities. Further, we determine the impact, in terms of performance and energy usage, that these caps have on a system running scientific applications. This analysis will enable us to characterize the types of algorithms that benefit most from these power management schemes. Our experiments are performed using a set of representative kernels and several popular scientific benchmarks. Lastly, we quantify a number of power and performance measurements and draw observations and conclusions that can be viewed as a roadmap to achieving energy efficiency in the design and execution of scientific algorithms.
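On Linux systems, one common mechanism for the kind of package power capping studied here is the intel-rapl powercap sysfs interface; the sketch below reads and writes the long-term package limit. The specific paths are an assumption about the platform (they vary by kernel and hardware, and writing requires root), not a detail taken from the paper.

```python
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")   # package-0 RAPL domain, assumed

def read_power_cap_watts() -> float:
    """constraint_0 is typically the long-term package power limit."""
    microwatts = int((RAPL / "constraint_0_power_limit_uw").read_text())
    return microwatts / 1e6

def set_power_cap_watts(watts: float) -> None:
    """Lower or raise the package power cap (needs root privileges)."""
    (RAPL / "constraint_0_power_limit_uw").write_text(str(int(watts * 1e6)))

if RAPL.exists():
    print("current package power cap:", read_power_cap_watts(), "W")
else:
    print("intel-rapl powercap interface not available on this system")
```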
NASA Astrophysics Data System (ADS)
Colaïtis, A.; Chapman, T.; Strozzi, D.; Divol, L.; Michel, P.
2018-03-01
A three-dimensional laser propagation model for computation of laser-plasma interactions is presented. It is focused on indirect-drive geometries in inertial confinement fusion and formulated for use at large temporal and spatial scales. A modified tessellation-based estimator and a relaxation scheme are used to estimate the intensity distribution in plasma from geometrical optics rays. Comparisons with reference solutions show that this approach is well-suited to reproduce realistic 3D intensity field distributions of beams smoothed by phase plates. It is shown that the method requires a reduced number of rays compared to traditional rigid-scale intensity estimation. Using this field estimator, we have implemented laser refraction, inverse-bremsstrahlung absorption, and steady-state crossed-beam energy transfer with a linear kinetic model in the numerical code Vampire. Probe beam amplification and laser spot shapes are compared with experimental results and pf3d paraxial simulations. These results are promising for the efficient and accurate computation of laser intensity distributions in hohlraums, which is of importance for determining the capsule implosion shape and the risks of laser-plasma instabilities such as hot electron generation and backscatter in multi-beam configurations.
Paranoia.Ada: A diagnostic program to evaluate Ada floating-point arithmetic
NASA Technical Reports Server (NTRS)
Hjermstad, Chris
1986-01-01
Many essential software functions in the mission-critical computer resource application domain depend on floating point arithmetic. Numerically intensive functions associated with the Space Station project, such as ephemeris generation or the implementation of Kalman filters, are likely to employ the floating point facilities of Ada. Paranoia.Ada appears to be a valuable program to ensure that Ada environments and their underlying hardware exhibit the precision and correctness required to satisfy mission computational requirements. As a diagnostic tool, Paranoia.Ada reveals many essential characteristics of an Ada floating point implementation. Equipped with such knowledge, programmers need not tremble before the complex task of floating point computation.
Dynamic array processing for computationally intensive expert systems in CLIPS
NASA Technical Reports Server (NTRS)
Athavale, N. N.; Ragade, R. K.; Fenske, T. E.; Cassaro, M. A.
1990-01-01
This paper puts forth an architecture for implementing looping over advanced array data structures in CLIPS. An attempt is made to use multi-field variables in such an architecture to process a set of data during the decision-making cycle. Current limitations of expert system shells are also briefly discussed. The resulting architecture is designed to circumvent the limitations set by the expert system shell and by the operating environment. Such advanced data structures are needed for tightly coupling symbolic and numeric computation modules.
NASA Technical Reports Server (NTRS)
Nosenchuck, D. M.; Littman, M. G.
1986-01-01
The Navier-Stokes computer (NSC) has been developed for solving problems in fluid mechanics involving complex flow simulations that require more speed and capacity than provided by current and proposed Class VI supercomputers. The machine is a parallel processing supercomputer with several new architectural elements which can be programmed to address a wide range of problems meeting the following criteria: (1) the problem is numerically intensive, and (2) the code makes use of long vectors. A simulation of two-dimensional nonsteady viscous flows is presented to illustrate the architecture, programming, and some of the capabilities of the NSC.
Zhan, X.
2005-01-01
A parallel Fortran-MPI (Message Passing Interface) software package for numerical inversion of the Laplace transform, based on a Fourier series method, was developed to meet the need of solving computationally intensive problems involving the oscillatory water-level response to hydraulic tests in a groundwater environment. The software is a parallel version of ACM (Association for Computing Machinery) Transactions on Mathematical Software (TOMS) Algorithm 796. Running 38 test examples indicated that implementing MPI techniques on a distributed-memory architecture speeds up the processing and improves efficiency. Applications to oscillatory water levels in a well during aquifer tests are presented to illustrate how this package can be applied to complicated environmental problems involving differential and integral equations. The package is free and easy to use for people with little or no previous experience with MPI who wish to get a quick start in parallel computing.
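The Fourier series inversion idea behind Algorithm 796 can be sketched compactly (this is a generic serial Python illustration under assumed parameters, not the paper's Fortran-MPI package):

```python
import numpy as np

def invert_laplace(F, t, T=10.0, a=1.0, N=4000):
    """Fourier-series inversion of F(s); reasonable for 0 < t < 2T."""
    k = np.arange(N + 1)
    s = a + 1j * np.pi * k / T                  # samples along a vertical line
    terms = (F(s) * np.exp(1j * np.pi * k * t / T)).real
    return (np.exp(a * t) / T) * (terms[0] / 2 + terms[1:].sum())

# Check against a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
F = lambda s: 1.0 / (s + 1.0)
for t in (0.5, 1.0, 2.0):
    print(t, invert_laplace(F, t), np.exp(-t))
```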
Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas
2012-08-01
In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.
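A quick Monte Carlo cross-check of such an analysis can be set up with a generic textbook model; the unit-mean negative exponential fading and the conditional bit error probability Q(h*sqrt(gamma)) for on-off keying are assumptions for illustration, not the Letter's exact system model.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(2)
snr_db = 20.0                                   # average electrical SNR, assumed
gamma = 10 ** (snr_db / 10)

h = rng.exponential(scale=1.0, size=1_000_000)  # negative exponential irradiance
# Conditional BEP for OOK in AWGN: Q(x) = 0.5 * erfc(x / sqrt(2))
bep = np.mean(0.5 * erfc(h * np.sqrt(gamma) / np.sqrt(2)))
print("average bit error probability ~", bep)
```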
NASA Technical Reports Server (NTRS)
Kim, Jonnathan H.
1995-01-01
Humans can perform many complicated tasks without explicit rules. This inherent and advantageous capability becomes a hurdle when a task is to be automated. Modern computers and numerical calculations require explicit rules and discrete numerical values. In order to bridge the gap between human knowledge and automating tools, a knowledge model is proposed. Knowledge modeling techniques are discussed and utilized to automate a labor- and time-intensive task of detecting anomalous bearing wear patterns in the Space Shuttle Main Engine (SSME) High Pressure Oxygen Turbopump (HPOTP).
Camboulives, A-R; Velluet, M-T; Poulenard, S; Saint-Antonin, L; Michau, V
2018-02-01
The performance of an optical communication link between the ground and a geostationary satellite can be impaired by scintillation, beam wandering, and beam spreading caused by propagation through atmospheric turbulence. These effects on the link performance can be mitigated by tracking and by error correction codes coupled with interleaving. Precise numerical tools capable of describing the irradiance fluctuations statistically and of creating irradiance time series are needed to characterize the benefits of these techniques and to optimize them. Wave-optics propagation methods have proven their capability of modeling the effects of atmospheric turbulence on a beam, but they are known to be computationally intensive. We present an analytical-numerical model which provides good results for the probability density functions of irradiance fluctuations, as well as time series, with substantial savings in time and computational resources.
System on a chip with MPEG-4 capability
NASA Astrophysics Data System (ADS)
Yassa, Fathy; Schonfeld, Dan
2002-12-01
Current products supporting video communication applications rely on existing computer architectures. RISC processors have been used successfully in numerous applications over several decades. DSP processors have become ubiquitous in signal processing and communication applications. Real-time applications such as speech processing in cellular telephony rely extensively on the computational power of these processors. Video processors designed to implement the computationally intensive codec operations have also been used to address the high demands of video communication applications (e.g., cable set-top boxes and DVDs). This paper presents an overview of a system-on-chip (SOC) architecture used for real-time video in wireless communication applications. The SOC specification responds to the system requirements imposed by the application environment. A CAM-based video processor is used to accelerate data-intensive video compression tasks such as motion estimation and filtering. Other components are dedicated to system-level data processing and audio processing. A rich set of I/Os allows the SOC to communicate with other system components such as baseband and memory subsystems.
Back focal plane microscopic ellipsometer with internal reflection geometry
NASA Astrophysics Data System (ADS)
Otsuki, Soichi; Murase, Norio; Kano, Hiroshi
2013-05-01
A back focal plane (BFP) ellipsometer is presented to measure a thin film on a cover glass using an oil-immersion high-numerical-aperture objective lens. The internal reflection geometry lowers the pseudo-Brewster angle (ϕB) to the range over which the light distribution is observed in the BFP of the objective. A calculation based on the Mueller matrix formalism was developed to compute ellipsometric parameters from the intensity distribution in the BFP. The center and radius of the partial reflection region below the critical angle were determined and used to define a polar coordinate system in the BFP. Harmonic components were computed from the intensities along the azimuthal direction and transformed to ellipsometric parameters at multiple incident angles around ϕB. The refractive index and thickness of the film and the contributions of the objective effect were estimated simultaneously by fitting.
Terascale direct numerical simulations of turbulent combustion using S3D
NASA Astrophysics Data System (ADS)
Chen, J. H.; Choudhary, A.; de Supinski, B.; DeVries, M.; Hawkes, E. R.; Klasky, S.; Liao, W. K.; Ma, K. L.; Mellor-Crummey, J.; Podhorszki, N.; Sankaran, R.; Shende, S.; Yoo, C. S.
2009-01-01
Computational science is paramount to the understanding of underlying processes in internal combustion engines of the future that will utilize non-petroleum-based alternative fuels, including carbon-neutral biofuels, and burn in new combustion regimes that will attain high efficiency while minimizing emissions of particulates and nitrogen oxides. Next-generation engines will likely operate at higher pressures, with greater amounts of dilution, and utilize alternative fuels that exhibit a wide range of chemical and physical properties. Therefore, there is a significant role for high-fidelity simulations, direct numerical simulations (DNS) specifically designed to capture key turbulence-chemistry interactions in these relatively uncharted combustion regimes, and in particular, that can discriminate the effects of differences in fuel properties. In DNS, all of the relevant turbulence and flame scales are resolved numerically using high-order accurate numerical algorithms. As a consequence, terascale DNS is computationally intensive, requires massive amounts of computing power, and generates tens of terabytes of data. Recent results from terascale DNS of turbulent flames are presented here, illustrating its role in elucidating flame stabilization mechanisms in a lifted turbulent hydrogen/air jet flame in a hot air coflow, and the flame structure of a fuel-lean turbulent premixed jet flame. Computing at this scale requires close collaborations between computer and combustion scientists to provide optimized scaleable algorithms and software for terascale simulations, efficient collective parallel I/O, tools for volume visualization of multiscale, multivariate data, and automation of the combustion workflow. The enabling computer science, applied here to combustion science, is also required in many other terascale physics and engineering simulations. In particular, performance monitoring is used to identify the performance of key kernels in the DNS code S3D, and especially memory-intensive loops in the code. Through the careful application of loop transformations, data reuse in cache is exploited, thereby reducing memory bandwidth needs and hence improving S3D's nodal performance. To enhance collective parallel I/O in S3D, an MPI-I/O caching design is used to construct a two-stage write-behind method for improving the performance of write-only operations. The simulations generate tens of terabytes of data requiring analysis. Interactive exploration of the simulation data is enabled by multivariate time-varying volume visualization. The visualization highlights spatial and temporal correlations between multiple reactive scalar fields using an intuitive user interface based on parallel coordinates and time histograms. Finally, an automated combustion workflow is designed using Kepler to manage large-scale data movement, data morphing, and archival, and to provide a graphical display of run-time diagnostics.
NASA Astrophysics Data System (ADS)
Rouvinskaya, Ekaterina; Kurkin, Andrey; Kurkina, Oxana
2017-04-01
Intense internal gravity waves influence bottom topography in the coastal zone. They induce substantial flows in the bottom layer that are essential for the formation of suspension and for sediment transport. A mathematical model predicting the state of the seabed near the coastline is needed to assess and ensure safety during the construction and operation of hydraulic engineering structures. There are many models used to predict the impact of storm waves on sediment transport processes; models of the impact of tsunami waves are also actively developing. In recent years, the influence of intense internal waves on sedimentation processes has also attracted special interest. In this study we adapt one such model, based on the advection-diffusion equation, which allows the study of resuspension processes under the influence of internal gravity waves in the coastal zone, to the solution of specific practical problems. During the numerical simulation, precomputed velocity values are substituted into the advection-diffusion equation for sediment concentration at each time step and each node of the computational grid. The velocity values are obtained by simulating the internal wave dynamics with the IGW Research software package, which numerically integrates the fully nonlinear two-dimensional (vertical plane) equations of hydrodynamics of an inviscid incompressible stratified fluid in the Boussinesq approximation, taking into account the impact of the barotropic tide. To carry out numerical calculations in IGW Research, it is necessary to set the initial velocity and density distributions in the computational domain, the bottom topography, the value of the Coriolis parameter and, if necessary, the parameters of the tidal wave. To initialize the background conditions of the numerical model we used data records obtained in summer in the southern part of the shelf zone of Sakhalin Island from 1999 to 2003, provided by SakhNIRO, Russia. The process of assimilating field data into the numerical model is described in detail in our previous studies. It is shown that the process of suspension formation is quite intense for the investigated conditions. The concentration of suspended particles increases significantly during the tide, especially over naturally uneven bottom relief and near the right boundary of the computational domain (near the shoreline). A pronounced nepheloid layer about 5.6 m thick is produced. At the low-tide phase, suspension production stops and the suspended particles begin to settle because of the small vertical velocities; the thickness of the nepheloid layer is rapidly reduced. Obviously, this should lead to a change in the bottom relief. The presented results were obtained with the support of the Russian President's scholarship for young scientists and graduate students SP-2311.2016.5.
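The advection-diffusion building block of such a model can be sketched in one vertical column (all coefficients below are illustrative assumptions, not the calibrated Sakhalin values): suspended sediment settles toward the bed while turbulent diffusion mixes it upward, and the steady state approaches the classical exponential (Rouse-type) profile C(z) = Cb exp(-ws z / K).

```python
import numpy as np

nz, H = 101, 10.0                  # vertical grid points, water depth (m)
dz = H / (nz - 1)
z = np.linspace(0.0, H, nz)        # z = 0 at the bed, z = H at the surface
ws, K = 1e-3, 1e-3                 # settling velocity (m/s), eddy diffusivity (m^2/s)
dt = 0.4 * dz**2 / K               # stable explicit time step
Cb = 1.0                           # reference concentration at the bed, assumed

C = np.zeros(nz)
for step in range(30000):
    C[0] = Cb                                          # bed boundary condition
    adv = ws * (C[2:] - C[1:-1]) / dz                  # upwind settling (downward)
    dif = K * (C[2:] - 2 * C[1:-1] + C[:-2]) / dz**2   # central diffusion
    C[1:-1] += dt * (adv + dif)
    C[-1] = C[-2] / (1 + ws * dz / K)                  # zero net flux at surface

print("max deviation from Rouse profile:",
      np.abs(C - Cb * np.exp(-ws * z / K)).max())
```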
NASA Astrophysics Data System (ADS)
Kruis, Nathanael J. F.
Heat transfer from building foundations varies significantly in all three spatial dimensions and has important dynamic effects at all timescales, from one hour to several years. With the additional consideration of moisture transport, ground freezing, evapotranspiration, and other physical phenomena, the estimation of foundation heat transfer becomes increasingly sophisticated and computationally intensive to the point where accuracy must be compromised for reasonable computation time. The tools currently available to calculate foundation heat transfer are often either too limited in their capabilities to draw meaningful conclusions or too sophisticated to use in common practices. This work presents Kiva, a new foundation heat transfer computational framework. Kiva provides a flexible environment for testing different numerical schemes, initialization methods, spatial and temporal discretizations, and geometric approximations. Comparisons within this framework provide insight into the balance of computation speed and accuracy relative to highly detailed reference solutions. The accuracy and computational performance of six finite difference numerical schemes are verified against established IEA BESTEST test cases for slab-on-grade heat conduction. Of the schemes tested, the Alternating Direction Implicit (ADI) scheme demonstrates the best balance between accuracy, performance, and numerical stability. Kiva features four approaches of initializing soil temperatures for an annual simulation. A new accelerated initialization approach is shown to significantly reduce the required years of presimulation. Methods of approximating three-dimensional heat transfer within a representative two-dimensional context further improve computational performance. A new approximation called the boundary layer adjustment method is shown to improve accuracy over other established methods with a negligible increase in computation time. This method accounts for the reduced heat transfer from concave foundation shapes, which has not been adequately addressed to date. Within the Kiva framework, three-dimensional heat transfer that can require several days to simulate is approximated in two-dimensions in a matter of seconds while maintaining a mean absolute deviation within 3%.
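The ADI scheme singled out above can be illustrated with a generic Peaceman-Rachford step for two-dimensional heat conduction (a sketch under assumed grid and material values, not Kiva's actual implementation): each half step is implicit in one coordinate direction, so only tridiagonal systems need to be solved.

```python
import numpy as np
from scipy.linalg import solve_banded

n, alpha, dx, dt = 64, 1.0e-6, 0.05, 600.0   # grid, soil diffusivity (m^2/s), m, s
r = alpha * dt / dx**2
T = np.zeros((n, n))
T[0, :] = 20.0                               # fixed-temperature boundary (deg C)

# Tridiagonal matrix for (I - r/2 * d2/dx2) on interior points, banded storage
ab = np.zeros((3, n - 2))
ab[0, 1:], ab[1, :], ab[2, :-1] = -r / 2, 1 + r, -r / 2

def half_step(U):
    """One ADI half step: implicit along axis 0, explicit along axis 1."""
    V = U.copy()
    for j in range(1, n - 1):
        rhs = U[1:-1, j] + (r / 2) * (U[1:-1, j - 1] - 2 * U[1:-1, j] + U[1:-1, j + 1])
        rhs[0] += (r / 2) * U[0, j]          # fold Dirichlet boundaries into rhs
        rhs[-1] += (r / 2) * U[-1, j]
        V[1:-1, j] = solve_banded((1, 1), ab, rhs)
    return V

for step in range(100):
    T = half_step(T)        # implicit in x
    T = half_step(T.T).T    # implicit in y, via the transpose
print("domain-average temperature:", T.mean())
```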
Improved Method Of Bending Concentric Pipes
NASA Technical Reports Server (NTRS)
Schroeder, James E.
1995-01-01
Proposed method for bending two concentric pipes simultaneously while maintaining void between them replaces present tedious, messy, and labor-intensive method. Array of rubber tubes inserted in gap between concentric pipes. Tubes then inflated with relatively incompressible liquid to fill gap. Enables bending to be done faster and more cleanly, and is amenable to automation of a significant portion of the bending process on computer-numerically-controlled (CNC) tube-bending machinery.
NASA Astrophysics Data System (ADS)
Hossa, Robert; Górski, Maksymilian
2010-09-01
In the paper we analyze the influence of RF channels mismatch and mutual coupling effect on the performance of the multistatic passive radar with Uniform Circular Array (UCA) configuration. The problem was tested intensively in numerous different scenarios with a reference virtual multistatic passive radar. Finally, exemplary results of the computer software simulations are provided and discussed.
Diffraction of Harmonic Flexural Waves in a Cracked Elastic Plate Carrying Electrical Current
NASA Technical Reports Server (NTRS)
Ambur, Damodar R.; Hasanyan, Davresh; Librescu, Liviu; Qin, Zhanming
2005-01-01
The scattering effect of harmonic flexural waves at a through crack in an elastic plate carrying electrical current is investigated. In this context, the Kirchhoff bending plate theory is extended so as to include magnetoelastic interactions. An incident wave giving rise to bending moments symmetric about the longitudinal z-axis of the crack is applied. A Fourier transform technique reduces the problem to dual integral equations, which are then cast into a system of two singular integral equations. Efficient numerical computation is implemented to obtain the bending-moment intensity factor for arbitrary frequency of the incident wave and arbitrary electrical current intensity. The asymptotic behaviour of the bending-moment intensity factor is analysed and parametric studies are conducted.
Numerical Simulation of DC Coronal Heating
NASA Astrophysics Data System (ADS)
Dahlburg, Russell B.; Einaudi, G.; Taylor, Brian D.; Ugarte-Urra, Ignacio; Warren, Harry; Rappazzo, A. F.; Velli, Marco
2016-05-01
Recent research on observational signatures of turbulent heating of a coronal loop will be discussed. The evolution of the loop is studied by means of numerical simulations of the fully compressible three-dimensional magnetohydrodynamic equations using the HYPERION code. HYPERION calculates the full energy cycle involving footpoint convection, magnetic reconnection, nonlinear thermal conduction, and optically thin radiation. The footpoints of the loop magnetic field are convected by random photospheric motions. As a consequence the magnetic field in the loop is energized and develops turbulent nonlinear dynamics characterized by the continuous formation and dissipation of field-aligned current sheets: energy is deposited at small scales where heating occurs. Dissipation is non-uniformly distributed, so that only a fraction of the coronal mass and volume gets heated at any time. Temperature and density are highly structured at scales which, in the solar corona, remain observationally unresolved: the plasma of the simulated loop is multithermal, with highly dynamical hotter and cooler plasma strands scattered throughout the loop at sub-observational scales. Typical simulated coronal loops are 50,000 km in length and have axial magnetic field intensities ranging from 0.01 to 0.04 Tesla. To connect these simulations to observations, the computed number densities and temperatures are used to synthesize the intensities expected in emission lines typically observed with the Extreme ultraviolet Imaging Spectrometer (EIS) on Hinode. These intensities are then employed to compute differential emission measure distributions, which are found to be very similar to those derived from observations of solar active regions.
A parallel computing engine for a class of time critical processes.
Nabhan, T M; Zomaya, A Y
1997-01-01
This paper focuses on the efficient parallel implementation of systems of a numerically intensive nature over loosely coupled multiprocessor architectures. These analytical models are of significant importance to many real-time systems that have to meet severe time constraints. A parallel computing engine (PCE) has been developed in this work for the efficient simplification and near-optimal scheduling of numerical models over the different cooperating processors of the parallel computer. First, the analytical system is efficiently coded in its general form. The model is then simplified by using any available information (e.g., constant parameters). A task graph representing the interconnections among the different components (or equations) is generated. The graph can then be compressed to control the computation/communication requirements. The task scheduler employs a graph-based iterative scheme, based on the simulated annealing algorithm, to map the vertices of the task graph onto a Multiple-Instruction-stream Multiple-Data-stream (MIMD) type of architecture. The algorithm uses a nonanalytical cost function that properly considers the computation capability of the processors, the network topology, the communication time, and congestion possibilities. Moreover, the proposed technique is simple, flexible, and computationally viable. The efficiency of the algorithm is demonstrated by two case studies with good results.
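The scheduling core can be sketched as a toy simulated-annealing mapping of task-graph vertices onto processors. The cost function below (maximum processor load plus a communication penalty) is a simple stand-in assumption, not the paper's nonanalytical cost function.

```python
import math
import random

random.seed(0)
n_tasks, n_procs = 12, 4
weight = {t: random.uniform(1.0, 5.0) for t in range(n_tasks)}        # compute cost
edges = [(random.randrange(n_tasks), random.randrange(n_tasks)) for _ in range(20)]

def cost(mapping):
    loads = [0.0] * n_procs
    for t, p in mapping.items():
        loads[p] += weight[t]
    cut = sum(1 for a, b in edges if a != b and mapping[a] != mapping[b])
    return max(loads) + 0.5 * cut          # load imbalance + communication penalty

mapping = {t: random.randrange(n_procs) for t in range(n_tasks)}
current, temp = cost(mapping), 10.0
while temp > 1e-3:
    t = random.randrange(n_tasks)
    old = mapping[t]
    mapping[t] = random.randrange(n_procs)
    new = cost(mapping)
    # Metropolis criterion: accept improvements, sometimes accept uphill moves
    if new <= current or random.random() < math.exp(-(new - current) / temp):
        current = new
    else:
        mapping[t] = old
    temp *= 0.999

print("final cost:", round(current, 2), "assignment:", mapping)
```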
NASA Technical Reports Server (NTRS)
Leonard, A.
1980-01-01
Three recent simulations of turbulent shear flow bounded by a wall using the Illiac computer are reported. These are: (1) vibrating-ribbon experiments; (2) a study of the evolution of a spot-like disturbance in a laminar boundary layer; and (3) an investigation of turbulent channel flow. A number of persistent flow structures were observed, including streamwise and vertical vorticity distributions near the wall, low-speed and high-speed streaks, and local regions of intense vertical velocity. The role of these structures in, for example, the growth or maintenance of turbulence is discussed. The problem of representing the large range of turbulent scales in a computer simulation is also discussed.
Xu, Minzhong; Ye, Shufeng; Lawler, Ronald; Turro, Nicholas J; Bačić, Zlatko
2013-09-13
We report rigorous quantum calculations of the inelastic neutron scattering (INS) spectra of HD@C₆₀, over a range of temperatures from 0 to 240 K and for two incident neutron wavelengths used in recent experimental investigations. The computations were performed using our newly developed methodology, which incorporates the coupled five-dimensional translation-rotation (T-R) eigenstates of the guest molecule as the initial and final states of the INS transitions, and yields highly detailed spectra. Depending on the incident neutron wavelength, the number of computed INS transitions varies from almost 500 to over 2000. The low-temperature INS spectra display the fingerprints of the coupling between the translational and rotational motions of the entrapped HD molecule, which is responsible for the characteristic splitting patterns of the T-R energy levels. INS transitions from the ground T-R state of HD to certain sublevels of excited T-R multiplets have zero intensity and are absent from the spectra. This surprising finding is explained by the new INS selection rule introduced here. The calculated spectra exhibit strong temperature dependence. As the temperature increases, numerous new peaks appear, arising from the transitions originating in excited T-R states which become populated. Our calculations show that the higher temperature features typically comprise two or more transitions close in energy and with similar intensities, interspersed with numerous other transitions whose intensities are negligible. This implies that accurately calculated energies and intensities of INS transitions which our methodology provides will be indispensable for reliable interpretation and assignment of the experimental spectra of HD@C₆₀ and related systems at higher temperatures.
Early years of Computational Statistical Mechanics
NASA Astrophysics Data System (ADS)
Mareschal, Michel
2018-05-01
Evidence that a model of hard spheres exhibits a first-order solid-fluid phase transition was provided in the late fifties by two new numerical techniques known as Monte Carlo and Molecular Dynamics. This result can be considered as the starting point of computational statistical mechanics: at the time, it was a confirmation of a counter-intuitive (and controversial) theoretical prediction by J. Kirkwood. It necessitated an intensive collaboration between the Los Alamos team, with Bill Wood developing the Monte Carlo approach, and the Livermore group, where Berni Alder was inventing Molecular Dynamics. This article tells how it happened.
Study of Wind Effects on Unique Buildings
NASA Astrophysics Data System (ADS)
Olenkov, V.; Puzyrev, P.
2017-11-01
The article deals with a numerical simulation of wind effects on the building of the Church of the Intercession of the Holy Virgin in the village of Bulzi, Chelyabinsk region. We present a calculation algorithm and the resulting pressure fields, velocity fields, fields of kinetic energy of the wind stream, and streamlines. Computational fluid dynamics (CFD) evolved three decades ago at the interface of numerical mathematics and theoretical hydromechanics and has become a separate branch of science whose subject is the numerical simulation of fluid and gas flows and the solution of the associated problems with the help of computer systems. This scientific field, which is of great practical value, is developing intensively. The growth of CFD calculations is driven by improvements in computer technology and by the creation of multipurpose, easy-to-use CFD packages that are available to a wide community of researchers and can cope with varied tasks. Such programs are not only competitive with physical experiments, but sometimes provide the only way to answer the research questions. The following advantages of computer simulation can be pointed out: a) reduction in the time spent on design and development of a model in comparison with a real experiment (variation of boundary conditions); b) a numerical experiment allows the simulation of conditions that cannot be reproduced in environmental tests (e.g., use of an ideal gas as the medium); c) computational gas dynamics methods provide the researcher with the complete and ample information needed to fully describe the different processes of the experiment; d) the economic efficiency of computer calculations is more attractive than that of an experiment; e) the possibility of modifying the computational model, which ensures efficient turnaround (e.g., changing the sizes of wall-layer cells in accordance with the chosen turbulence model).
Numerical Investigation of Flow Around Rectangular Cylinders with and Without Jets
NASA Technical Reports Server (NTRS)
Tiwari, S. N .; Pidugu, S. B.
1999-01-01
The problem of flow past bluff bodies has been studied extensively in the past. Drag reduction is very important in many high-speed flow applications, and considerable work has been done in this subject area for circular cylinders. The present study investigates the feasibility of drag reduction on a rectangular cylinder by flow injection from the rear stagnation region. The physical problem is modeled as a two-dimensional body and numerical analysis is carried out with and without trailing jets. A commercial code is used for this purpose. Unsteady computation is performed for rectangular cylinders with no trailing jets, whereas steady-state computation is performed when the jet is introduced. It is found that drag can be reduced by introducing jets of small intensity in the rear stagnation region of the rectangular cylinders.
Load management strategy for Particle-In-Cell simulations in high energy particle acceleration
NASA Astrophysics Data System (ADS)
Beck, A.; Frederiksen, J. T.; Dérouillat, J.
2016-09-01
In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performances. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.
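The load-imbalance problem identified here can be made concrete with a toy one-dimensional repartition: given per-cell particle counts (strongly peaked, as for an accelerated bunch), cutting the cumulative load into near-equal chunks markedly lowers the imbalance factor. This is a generic illustration under assumed numbers, not the algorithm proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cells, n_ranks = 1024, 8
# Background plasma plus a sharply peaked particle bunch around cell 700
bunch = (500 * np.exp(-((np.arange(n_cells) - 700) / 30.0) ** 2)).astype(int)
particles = rng.poisson(5, n_cells) + bunch

def imbalance(counts, splits):
    loads = [chunk.sum() for chunk in np.split(counts, splits)]
    return max(loads) / (sum(loads) / len(loads))   # max load / mean load

uniform = [k * (n_cells // n_ranks) for k in range(1, n_ranks)]
# Greedy repartition: cut the cumulative particle count into equal-load chunks
cum = np.cumsum(particles)
targets = cum[-1] * np.arange(1, n_ranks) / n_ranks
balanced = np.searchsorted(cum, targets).tolist()

print("imbalance, uniform domains: %.2f" % imbalance(particles, uniform))
print("imbalance, load-balanced:   %.2f" % imbalance(particles, balanced))
```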
Computational Investigation of Soot and Radiation in Turbulent Reacting Flows
NASA Astrophysics Data System (ADS)
Lalit, Harshad
This study delves into computational modeling of soot and infrared radiation for turbulent reacting flows, detailed understanding of both of which is paramount in the design of cleaner engines and pollution control. In the first part of the study, the concept of Stochastic Time and Space Series Analysis (STASS) as a numerical tool to compute time dependent statistics of radiation intensity is introduced for a turbulent premixed flame. In the absence of high fidelity codes for large eddy simulation or direct numerical simulation of turbulent flames, the utility of STASS for radiation imaging of reacting flows to understand the flame structure is assessed by generating images of infrared radiation in spectral bands dominated by radiation from gas phase carbon dioxide and water vapor using an assumed PDF method. The study elucidates the need for time dependent computation of radiation intensity for validation with experiments and the need for accounting for turbulence radiation interactions for correctly predicting radiation intensity and consequently the flame temperature and NOx in a reacting fluid flow. Comparison of single point statistics of infrared radiation intensity with measurements show that STASS can not only predict the flame structure but also estimate the dynamics of thermochemical scalars in the flame with reasonable accuracy. While a time series is used to generate realizations of thermochemical scalars in the first part of the study, in the second part, instantaneous realizations of resolved scale temperature, CO2 and H2O mole fractions and soot volume fractions are extracted from a large eddy simulation (LES) to carry out quantitative imaging of radiation intensity (QIRI) for a turbulent soot generating ethylene diffusion flame. A primary motivation of the study is to establish QIRI as a computational tool for validation of soot models, especially in the absence of conventional flow field and measured scalar data for sooting flames. Realizations of scalars from the LES are used in conjunction with the radiation heat transfer equation and a narrow band radiation model to compute time dependent and time averaged images of infrared radiation intensity in spectral bands corresponding to molecular radiation from gas phase carbon dioxide and soot particles exclusively. While qualitative and quantitative comparisons with measured images in the CO2 radiation band show that the flame structure is correctly computed, images computed in the soot radiation band illustrate that the soot volume fraction is under predicted by the computations. The effect of the soot model and cause of under prediction is investigated further by correcting the soot volume fraction using an empirical state relationship. By comparing default simulations with computations using the state relation, it is shown that while the soot model under-estimates the soot concentration, it correctly computes the intermittency of soot in the flame. The study of sooting flames is extended further by performing a parametric analysis of physical and numerical parameters that affect soot formation and transport in two laboratory scale turbulent sooting flames, one fueled by natural gas and the other by ethylene. The study is focused on investigating the effect of molecular diffusion of species, dilution of fuel with hydrogen gas and the effect of chemical reaction mechanism on the soot concentration in the flame. 
The effect of species Lewis numbers on soot evolution and transport is investigated by carrying out simulations, first with the default equal diffusivity (ED) assumption and then by incorporating a differential diffusion (DD) model. Computations using the DD model over-estimate the concentration of the soot precursor and soot oxidizer species, leading to inconsistencies in the estimate of the soot concentration. The linear differential diffusion (LDD) model, reported previously to consistently model differential diffusion effects is implemented to correct the over prediction effect of the DD model. It is shown that the effect of species Lewis number on soot evolution is a secondary phenomenon and that soot is primarily transported by advection of the fluid in a turbulent flame. The effect of hydrogen dilution on the soot formation and transport process is also studied. It is noted that the decay of soot volume fraction and flame length with hydrogen addition follows trends observed in laminar sooting flame measurements. While hydrogen enhances mixing shown by the laminar flamelet solutions, the mixing effect does not significantly contribute to differential molecular diffusion effects in the soot nucleation regions downstream of the flame and has a negligible effect on soot transport. The sensitivity of computations of soot volume fraction towards the chemical reaction mechanism is shown. It is concluded that modeling reaction pathways of C3 and C4 species that lead up to Polycyclic Aromatic Hydrocarbon (PAH) molecule formation is paramount for accurate predictions of soot in the flame. (Abstract shortened by ProQuest.).
NASA Astrophysics Data System (ADS)
Naumov, D.; Fischer, T.; Böttcher, N.; Watanabe, N.; Walther, M.; Rink, K.; Bilke, L.; Shao, H.; Kolditz, O.
2014-12-01
OpenGeoSys (OGS) is a scientific open-source code for the numerical simulation of thermo-hydro-mechanical-chemical processes in porous and fractured media. Its basic concept is to provide a flexible numerical framework for solving multi-field problems in geoscience and hydrology, e.g., for CO2 storage, geothermal power plant forecast simulation, salt water intrusion, and water resources management. Advances in computational mathematics have revolutionized the variety and nature of the problems that can be addressed by environmental scientists and engineers, and intensive code development in recent years now enables the solution of much larger numerical problems and applications. However, solving environmental processes along the water cycle at large scales, such as for complete catchments or reservoirs, remains a computationally challenging task. Therefore, we started a new OGS code development with a focus on execution speed and parallelization. In the new version, a local data structure concept improves the instruction and data cache performance through a tight bundling of data with an element-wise numerical integration loop. Dedicated analysis methods enable the investigation of memory-access patterns in the local and global assembler routines, which leads to further data structure optimization for an additional performance gain. The concept is presented together with a technical code analysis of the recent development and a large case study including transient flow simulation in the unsaturated/saturated zone of the Thuringian Syncline, Germany. The analysis is performed on a high-resolution mesh (up to 50M elements) with embedded fault structures.
Unsteady numerical simulation of a round jet with impinging microjets for noise suppression
Lew, Phoi-Tack; Najafi-Yazdi, Alireza; Mongeau, Luc
2013-01-01
The objective of this study was to determine the feasibility of a lattice-Boltzmann method (LBM)-Large Eddy Simulation methodology for the prediction of sound radiation from a round jet-microjet combination. The distinct advantage of LBM over traditional computational fluid dynamics methods is its ease of handling problems with complex geometries. Numerical simulations of an isothermal Mach 0.5, ReD = 1 × 10^5 circular jet (Dj = 0.0508 m) with and without the presence of 18 microjets (Dmj = 1 mm) were performed. The presence of microjets resulted in a decrease in the axial turbulence intensity and turbulent kinetic energy. The associated decrease in radiated sound pressure level was around 1 dB. The far-field sound was computed using the porous Ffowcs Williams-Hawkings surface integral acoustic method. The trend obtained is in qualitative agreement with experimental observations. The results of this study support the accuracy of LBM based numerical simulations for predictions of the effects of noise suppression devices on the radiated sound power.
seismo-live: Training in Computational Seismology using Jupyter Notebooks
NASA Astrophysics Data System (ADS)
Igel, H.; Krischer, L.; van Driel, M.; Tape, C.
2016-12-01
Practical training in computational methodologies is still underrepresented in Earth science curricula, despite the increasing use of sometimes highly sophisticated simulation technologies in research projects. At the same time, well-engineered community codes make it easy to produce simulation-based results, yet with the danger that the inherent traps of numerical solutions are not well understood. It is our belief that training with highly simplified numerical solutions (here, to the equations describing elastic wave propagation) with carefully chosen elementary ingredients of simulation technologies (e.g., finite-differencing, function interpolation, spectral derivatives, numerical integration) could substantially improve this situation. For this purpose we have initiated a community platform (www.seismo-live.org) where Python-based Jupyter notebooks can be accessed and run without any downloads or local software installation. The increasingly popular Jupyter notebooks allow combining markup language, graphics, and equations with interactive, executable Python code. We demonstrate the potential with training notebooks for the finite-difference method, pseudospectral methods, finite/spectral element methods, the finite-volume method, and the discontinuous Galerkin method. The platform already includes general Python training, an introduction to the ObsPy library for seismology, as well as seismic data processing and noise analysis. Submission of Jupyter notebooks for general seismology is encouraged. The platform can be used for complementary teaching in Earth science courses on compute-intensive research areas.
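A minimal example in the spirit of such training notebooks, a second-order finite-difference solution of the 1-D acoustic wave equation with a point source (all parameters are illustrative assumptions):

```python
import numpy as np

nx, dx = 1000, 1.0                   # grid points, spacing (m)
c, dt, nt = 334.0, 0.002, 1000       # wave speed (m/s), time step (s), steps
assert c * dt / dx <= 1.0            # CFL stability condition

p = np.zeros(nx); p_old = np.zeros(nx)
src, f0, t0 = nx // 2, 10.0, 0.1     # source index, peak frequency (Hz), delay (s)

for it in range(nt):
    d2p = np.zeros(nx)
    d2p[1:-1] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dx**2
    p_new = 2 * p - p_old + (c * dt) ** 2 * d2p
    t = it * dt                      # Gaussian-derivative source time function
    p_new[src] += -2 * f0**2 * (t - t0) * np.exp(-(f0 * (t - t0)) ** 2) * dt**2
    p_old, p = p, p_new

print("peak pressure amplitude after %d steps: %.3e" % (nt, np.abs(p).max()))
```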
NASA Technical Reports Server (NTRS)
Wang, C. R.; Papell, S. S.
1983-01-01
Three dimensional mixing length models of a flow field immediately downstream of coolant injection through a discrete circular hole at a 30 deg angle into a crossflow were derived from the measurements of turbulence intensity. To verify their effectiveness, the models were used to estimate the anisotropic turbulent effects in a simplified theoretical and numerical analysis to compute the velocity and temperature fields. With small coolant injection mass flow rate and constant surface temperature, numerical results of the local crossflow streamwise velocity component and surface heat transfer rate are consistent with the velocity measurement and the surface film cooling effectiveness distributions reported in previous studies.
Wave propagation in a plate after impact by a projectile
NASA Technical Reports Server (NTRS)
El-Raheb, M.; Wagner, P.
1987-01-01
The wave propagation in a circular plate after impact by a cylindrical projectile is studied. In the vicinity of the impact, the pressure is computed numerically. An intense pressure pulse is generated that peaks 0.2 microseconds after impact, then drops sharply to a plateau. The response of the plate is determined by adopting a modal solution of Mindlin's equations. Velocity and acceleration histories display both propagating and dispersive features.
Nonlinear derating of high-intensity focused ultrasound beams using Gaussian modal sums.
Dibaji, Seyed Ahmad Reza; Banerjee, Rupak K; Soneson, Joshua E; Myers, Matthew R
2013-11-01
A method is introduced for using measurements made in water of the nonlinear acoustic pressure field produced by a high-intensity focused ultrasound transducer to compute the acoustic pressure and temperature rise in a tissue medium. The acoustic pressure harmonics generated by nonlinear propagation are represented as a sum of modes having a Gaussian functional dependence in the radial direction. While the method is derived in the context of Gaussian beams, final results are applicable to general transducer profiles. The focal acoustic pressure is obtained by solving an evolution equation in the axial variable. The nonlinear term in the evolution equation for tissue is modeled using modal amplitudes measured in water and suitably reduced using a combination of "source derating" (experiments in water performed at a lower source acoustic pressure than in tissue) and "endpoint derating" (amplitudes reduced at the target location). Numerical experiments showed that, with proper combinations of source derating and endpoint derating, direct simulations of acoustic pressure and temperature in tissue could be reproduced by derating within 5% error. Advantages of the derating approach presented include applicability over a wide range of gains, ease of computation (a single numerical quadrature is required), and readily obtained temperature estimates from the water measurements.
Kinetic energy budgets in areas of convection
NASA Technical Reports Server (NTRS)
Fuelberg, H. E.
1979-01-01
Synoptic scale budgets of kinetic energy are computed using 3 and 6 h data from three of NASA's Atmospheric Variability Experiments (AVE's). Numerous areas of intense convection occurred during the three experiments. Large kinetic energy variability, with periods as short as 6 h, is observed in budgets computed over each entire experiment area and over limited volumes that barely enclose the convection and move with it. Kinetic energy generation and transport processes in the smaller volumes are often a maximum when the enclosed storms are near peak intensity, but the nature of the various energy processes differs between storm cases and seems closely related to the synoptic conditions. A commonly observed energy budget for peak storm intensity indicates that generation of kinetic energy by cross-contour flow is the major energy source while dissipation to subgrid scales is the major sink. Synoptic scale vertical motion transports kinetic energy from lower to upper levels of the atmosphere while low-level horizontal flux convergence and upper-level horizontal divergence also occur. Spatial fields of the energy budget terms show that the storm environment is a major center of energy activity for the entire area.
Computation of Feedback Aeroacoustic System by the CE/SE Method
NASA Technical Reports Server (NTRS)
Loh, Ching Y.; Wang, Xiao Y.; Chang, Sin-Chung; Jorgenson, Philip C. E.
2000-01-01
It is well known that intense noise may be generated by vortex shedding in high-speed flow over cutouts, cavities, and gaps. Strong tonal oscillations occur in a feedback cycle in which the vortices shed from the upstream edge of the cavity convect downstream and impinge on the cavity lip, generating acoustic waves that propagate upstream to excite new vortices. Numerical simulation of such a complicated process requires a scheme that can: (1) resolve acoustic waves with low dispersion and numerical dissipation, (2) handle nonlinear and discontinuous waves (e.g., shocks), and (3) have an effective (near-field) nonreflecting boundary condition (NRBC). The new space-time conservation element and solution element method, or CE/SE for short, is a numerical method that meets the above requirements.
Graphics processing unit (GPU)-based computation of heat conduction in thermally anisotropic solids
NASA Astrophysics Data System (ADS)
Nahas, C. A.; Balasubramaniam, Krishnan; Rajagopal, Prabhu
2013-01-01
Numerical modeling of anisotropic media is a computationally intensive task because the physical properties differ with direction, adding complexity to the field problem. Widely used in the aerospace industry because of their light weight, composite materials are a very good example of thermally anisotropic media. With advancements in video gaming technology, parallel processors are much cheaper today, and accessibility to higher-end graphics processing devices has increased dramatically over the past couple of years. Since these massively parallel GPUs are very good at handling floating point arithmetic, they provide a new platform for engineers and scientists to accelerate their numerical models using commodity hardware. In this paper we implement a parallel finite difference model of thermal diffusion through anisotropic media using NVIDIA CUDA (Compute Unified Device Architecture). We use the NVIDIA GeForce GTX 560 Ti, which consists of 384 CUDA cores clocked at 1645 MHz, as our primary computing device, with a standard desktop PC as the host platform. We compare the results with a standard CPU implementation in terms of accuracy and speed and draw implications for simulation using the GPU paradigm.
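As a rough illustration of the numerical kernel such a GPU implementation parallelizes, the following minimal CPU sketch (NumPy, not the authors' CUDA code) advances one explicit finite-difference step of 2-D heat conduction with an anisotropic diffusivity tensor; the tensor components, grid, and time step are illustrative assumptions.

```python
import numpy as np

# Minimal CPU reference sketch (NumPy, not the authors' CUDA kernel) of one
# explicit finite-difference step for 2-D heat conduction in a thermally
# anisotropic solid: dT/dt = kxx*T_xx + 2*kxy*T_xy + kyy*T_yy. The
# diffusivity tensor, grid, and time step are illustrative assumptions.
def step(T, kxx, kyy, kxy, dx, dy, dt):
    Tn = T.copy()
    Txx = (T[2:, 1:-1] - 2*T[1:-1, 1:-1] + T[:-2, 1:-1]) / dx**2
    Tyy = (T[1:-1, 2:] - 2*T[1:-1, 1:-1] + T[1:-1, :-2]) / dy**2
    Txy = (T[2:, 2:] - T[2:, :-2] - T[:-2, 2:] + T[:-2, :-2]) / (4*dx*dy)
    Tn[1:-1, 1:-1] += dt * (kxx*Txx + 2*kxy*Txy + kyy*Tyy)
    return Tn                                   # fixed (Dirichlet) boundaries

T = np.zeros((128, 128)); T[64, 64] = 1.0       # point heat source
for _ in range(100):                            # dt respects explicit stability
    T = step(T, kxx=1.0, kyy=0.25, kxy=0.1, dx=1.0, dy=1.0, dt=0.2)
```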
Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A
2017-04-01
In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 Gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine was a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
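The abstract does not reproduce the analytical model itself; the following hedged sketch shows an Amdahl-style estimate of multi-GPU acceleration consistent with the reported behavior (speedup approaching the number of GPUs when compute dominates transfers). All names and timing values are illustrative.

```python
# Hedged Amdahl-style sketch consistent with the reported behavior (the
# paper's actual analytical model is not reproduced here): per-task compute
# time parallelizes across GPUs, while transfer/memory overhead does not.
def estimated_speedup(t_compute, t_transfer, n_gpus):
    t_single = t_compute + t_transfer
    t_multi = t_compute / n_gpus + t_transfer
    return t_single / t_multi

for n in (1, 2, 4, 8, 14):
    print(n, round(estimated_speedup(10.0, 0.5, n), 2))
# When t_compute >> t_transfer the speedup approaches n, matching the
# reported limit of acceleration proportional to the number of GPUs.
```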
Saad, Akram; Cho, Yonghyun; Ahmed, Farid; Jun, Martin Byung-Guk
2016-01-01
A 3D finite element model constructed to predict the intensity-dependent refractive index profile induced by femtosecond laser radiation is presented. A fiber core irradiated by a pulsed laser is modeled as a cylinder subject to predefined boundary conditions using the COMSOL 5.2 Multiphysics commercial package. The numerically obtained refractive index change is used to numerically design and experimentally fabricate a long-period fiber grating (LPFG) in pure-silica-core single-mode fiber employing identical laser conditions. To reduce the high computational requirements, the beam envelope method is utilized in the aforementioned numerical models. The number of periods, grating length, and grating period considered in this work are numerically quantified. The numerically obtained spectral growth of the modeled LPFG is consistent with the transmission of the experimentally fabricated LPFG in single-mode fiber. The sensing capabilities of the modeled LPFG are tested by varying the refractive index of the surrounding medium. The numerically obtained spectrum corresponding to the varied refractive index shows good agreement with the experimental findings. PMID:28774060
NASA Technical Reports Server (NTRS)
Tanner, John A.
1996-01-01
A computational procedure is presented for the solution of frictional contact problems for aircraft tires. A Space Shuttle nose-gear tire is modeled using a two-dimensional laminated anisotropic shell theory which includes the effects of variations in material and geometric parameters, transverse-shear deformation, and geometric nonlinearities. Contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with both contact and friction conditions. The contact-friction algorithm is based on a modified Coulomb friction law. A modified two-field, mixed-variational principle is used to obtain elemental arrays. This modification consists of augmenting the functional of that principle by two terms: the Lagrange multiplier vector associated with normal and tangential node contact-load intensities and a regularization term that is quadratic in the Lagrange multiplier vector. These capabilities and computational features are incorporated into an in-house computer code. Experimental measurements were taken to define the response of the Space Shuttle nose-gear tire to inflation-pressure loads and to inflation-pressure loads combined with normal static loads against a rigid flat plate. These experimental results describe the meridional growth of the tire cross section caused by inflation loading, the static load-deflection characteristics of the tire, the geometry of the tire footprint under static loading conditions, and the normal and tangential load-intensity distributions in the tire footprint for the various static vertical loading conditions. Numerical results were obtained for the Space Shuttle nose-gear tire subjected to inflation pressure loads and combined inflation pressure and contact loads against a rigid flat plate. The experimental measurements and the numerical results are compared.
Methods for analysis of cracks in three-dimensional solids
NASA Technical Reports Server (NTRS)
Raju, I. S.; Newman, J. C., Jr.
1984-01-01
Analytical and numerical methods for evaluating the stress-intensity factors of three-dimensional cracks in solids are presented, with reference to fatigue failure in aerospace structures. The exact solutions for embedded elliptical and circular cracks in infinite solids and the approximate methods, including the finite-element, the boundary-integral equation, the line-spring, and the mixed methods, are discussed. Among the mixed methods, the superposition of analytical and finite element methods, the stress-difference, the discretization-error, the alternating, and the finite element-alternating methods are reviewed. Comparison of the stress-intensity factor solutions for some three-dimensional crack configurations showed good agreement. Thus, the choice of a particular method for evaluating the stress-intensity factor is limited only by the availability of resources and computer programs.
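One of the exact solutions referred to above can be checked directly: for an embedded circular (penny-shaped) crack of radius a in an infinite solid under remote tension sigma, the mode-I stress-intensity factor is K_I = 2*sigma*sqrt(a/pi). The load and crack size in the sketch below are illustrative.

```python
import math

# Worked check of one exact solution mentioned above: the mode-I stress-
# intensity factor of an embedded circular (penny-shaped) crack of radius a
# in an infinite solid under remote tension sigma, K_I = 2*sigma*sqrt(a/pi).
sigma = 100e6                                   # Pa, remote tension (illustrative)
a = 0.005                                       # m, crack radius (illustrative)
K_I = 2.0 * sigma * math.sqrt(a / math.pi)
print(f"K_I = {K_I / 1e6:.2f} MPa*m^0.5")       # ~7.98 MPa*m^0.5
```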
NASA Technical Reports Server (NTRS)
1981-01-01
Progress in the study of the intensity of the urban heat island is reported. The intensity of the heat island is commonly defined as the temperature difference between the center of the city and the surrounding suburban and rural regions. The intensity is considered as a function of changes in the season and changes in meteorological conditions in order to derive various parameters which may be used in numerical models for urban climate. Twelve case studies were selected and CCT's were ordered. In situ data was obtained from sixteen stations scattered about the city of St. Louis. Upper-air meteorological data were obtained and the water vapor and the temperature data were processed. Atmospheric transmissivities were computed for each of the case studies.
Computing the Evans function via solving a linear boundary value ODE
NASA Astrophysics Data System (ADS)
Wahl, Colin; Nguyen, Rose; Ventura, Nathaniel; Barker, Blake; Sandstede, Bjorn
2015-11-01
Determining the stability of traveling wave solutions to partial differential equations can oftentimes be computationally intensive but of great importance to understanding the effects of perturbations on the physical systems (chemical reactions, hydrodynamics, etc.) they model. For waves in one spatial dimension, one may linearize around the wave and form an Evans function - an analytic Wronskian-like function which has zeros that correspond in multiplicity to the eigenvalues of the linearized system. If eigenvalues with a positive real part do not exist, the traveling wave will be stable. Two methods exist for calculating the Evans function numerically: the exterior-product method and the method of continuous orthogonalization. The first is numerically expensive, and the second reformulates the originally linear system as a nonlinear system. We develop a new algorithm for computing the Evans function through appropriate linear boundary-value problems. This algorithm is cheaper than the previous methods, and we prove that it preserves analyticity of the Evans function. We also provide error estimates and implement it on some classical one- and two-dimensional systems, one being the Swift-Hohenberg equation in a channel, to show the advantages.
Distributed Computing Architecture for Image-Based Wavefront Sensing and 2-D FFTs
NASA Technical Reports Server (NTRS)
Smith, Jeffrey S.; Dean, Bruce H.; Haghani, Shadan
2006-01-01
Image-based wavefront sensing (WFS) provides significant advantages over interferometric wavefront sensors, such as optical design simplicity and stability. However, the image-based approach is computationally intensive, and therefore specialized high-performance computing architectures are required in applications utilizing it. The development and testing of these high-performance computing architectures are essential to such missions as the James Webb Space Telescope (JWST), Terrestrial Planet Finder-Coronagraph (TPF-C and CorSpec), and Spherical Primary Optical Telescope (SPOT). These specialized computing architectures require numerous two-dimensional Fourier transforms, which necessitate an all-to-all communication when applied on a distributed computational architecture. Several solutions for distributed computing are presented, with an emphasis on a 64-node cluster of DSPs, multiple DSP FPGAs, and an application of low-diameter graph theory. Timing results and performance analysis are presented. The solutions offered could be applied to other all-to-all communication and scientifically complex computational problems.
OBSERVATIONAL SIGNATURES OF CORONAL LOOP HEATING AND COOLING DRIVEN BY FOOTPOINT SHUFFLING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dahlburg, R. B.; Taylor, B. D.; Einaudi, G.
The evolution of a coronal loop is studied by means of numerical simulations of the fully compressible three-dimensional magnetohydrodynamic equations using the HYPERION code. The footpoints of the loop magnetic field are advected by random motions. As a consequence, the magnetic field in the loop is energized and develops turbulent nonlinear dynamics characterized by the continuous formation and dissipation of field-aligned current sheets: energy is deposited at small scales where heating occurs. Dissipation is nonuniformly distributed so that only a fraction of the coronal mass and volume gets heated at any time. Temperature and density are highly structured at scales that, in the solar corona, remain observationally unresolved: the plasma of our simulated loop is multithermal, where highly dynamical hotter and cooler plasma strands are scattered throughout the loop at sub-observational scales. Numerical simulations of coronal loops of 50,000 km length and axial magnetic field intensities ranging from 0.01 to 0.04 T are presented. To connect these simulations to observations, we use the computed number densities and temperatures to synthesize the intensities expected in emission lines typically observed with the Extreme Ultraviolet Imaging Spectrometer on Hinode. These intensities are used to compute differential emission measure distributions using the Monte Carlo Markov Chain code, which are very similar to those derived from observations of solar active regions. We conclude that coronal heating is strongly intermittent in space and time, with only small portions of the coronal loop being heated: in fact, at any given time, most of the corona is cooling down.
Defining the computational structure of the motion detector in Drosophila
Clark, Damon A.; Bursztyn, Limor; Horowitz, Mark; Schnitzer, Mark J.; Clandinin, Thomas R.
2011-01-01
Many animals rely on visual motion detection for survival. Motion information is extracted from spatiotemporal intensity patterns on the retina, a paradigmatic neural computation. A phenomenological model, the Hassenstein-Reichardt Correlator (HRC), relates visual inputs to neural and behavioral responses to motion, but the circuits that implement this computation remain unknown. Using cell-type specific genetic silencing, minimal motion stimuli, and in vivo calcium imaging, we examine two critical HRC inputs. These two pathways respond preferentially to light and dark moving edges. We demonstrate that these pathways perform overlapping but complementary subsets of the computations underlying the HRC. A numerical model implementing differential weighting of these operations displays the observed edge preferences. Intriguingly, these pathways are distinguished by their sensitivities to a stimulus correlation that corresponds to an illusory percept, “reverse phi”, that affects many species. Thus, this computational architecture may be widely used to achieve edge selectivity in motion detection. PMID:21689602
Acoustic intensity calculations for axisymmetrically modeled fluid regions
NASA Technical Reports Server (NTRS)
Hambric, Stephen A.; Everstine, Gordon C.
1992-01-01
An algorithm for calculating acoustic intensities from a time harmonic pressure field in an axisymmetric fluid region is presented. Acoustic pressures are computed in a mesh of NASTRAN triangular finite elements of revolution (TRIAAX) using an analogy between the scalar wave equation and elasticity equations. Acoustic intensities are then calculated from pressures and pressure derivatives taken over the mesh of TRIAAX elements. Intensities are displayed as vectors indicating the directions and magnitudes of energy flow at all mesh points in the acoustic field. A prolate spheroidal shell is modeled with axisymmetric shell elements (CONEAX) and submerged in a fluid region of TRIAAX elements. The model is analyzed to illustrate the acoustic intensity method and the usefulness of energy flow paths in the understanding of the response of fluid-structure interaction problems. The structural-acoustic analogy used is summarized for completeness. This study uncovered a NASTRAN limitation involving numerical precision issues in the CONEAX stiffness calculation causing large errors in the system matrices for nearly cylindrical cones.
Boudreau, François; Walthouwer, Michel Jean Louis; de Vries, Hein; Dagenais, Gilles R; Turbide, Ginette; Bourlaud, Anne-Sophie; Moreau, Michel; Côté, José; Poirier, Paul
2015-10-09
The relationship between physical activity (PA) and cardiovascular disease (CVD) protection is well documented. Numerous factors (e.g., patient motivation, lack of facilities, physician time constraints) can contribute to poor PA adherence. Web-based computer-tailored interventions offer an innovative way to provide tailored feedback and to empower adults to engage in regular moderate- to vigorous-intensity PA. The aim of this study is to describe the rationale, design, and content of a web-based computer-tailored PA intervention for Canadian adults enrolled in a randomized controlled trial (RCT). A total of 244 men and women aged between 35 and 70 years, without CVD or physical disability, not participating in regular moderate- to vigorous-intensity PA, and familiar with and having access to a computer at home, were recruited from the Quebec City Prospective Urban and Rural Epidemiological (PURE) study centre. Participants were randomized into two study arms: 1) an experimental group receiving the intervention and 2) a waiting-list control group. The fully automated web-based computer-tailored PA intervention consists of seven 10- to 15-min sessions over an 8-week period. The theoretical underpinning of the intervention is the I-Change Model. The aim of the intervention is to reach a total of 150 min per week of moderate- to vigorous-intensity aerobic PA. This study will provide useful information before engaging in a large RCT to assess the long-term participation and maintenance of PA, the potential impact of regular PA on CVD risk factors, and the cost-effectiveness of a web-based computer-tailored intervention. ISRCTN36353353, registered on 24/07/2014.
Numerical solution of the exact cavity equations of motion for an unstable optical resonator.
Bowers, M S; Moody, S E
1990-09-20
We solve numerically, we believe for the first time, the exact cavity equations of motion for a realistic unstable resonator with a simple gain saturation model. The cavity equations of motion, first formulated by Siegman ["Exact Cavity Equations for Lasers with Large Output Coupling," Appl. Phys. Lett. 36, 412-414 (1980)], and which we term the dynamic coupled modes (DCM) method of solution, solve for the full 3-D time dependent electric field inside the optical cavity by expanding the field in terms of the actual diffractive transverse eigenmodes of the bare (gain free) cavity with time varying coefficients. The spatially varying gain serves to couple the bare cavity transverse modes and to scatter power from mode to mode. We show that the DCM method numerically converges with respect to the number of eigenmodes in the basis set. The intracavity intensity in the numerical example shown reaches a steady state, and this steady state distribution is compared with that computed from the traditional Fox and Li approach using a fast Fourier transform propagation algorithm. The output wavefronts from both methods are quite similar, and the computed output powers agree to within 10%. The usefulness and advantages of using this method for predicting the output of a laser, especially pulsed lasers used for coherent detection, are discussed.
High-order conservative finite difference GLM-MHD schemes for cell-centered MHD
NASA Astrophysics Data System (ADS)
Mignone, Andrea; Tzeferacos, Petros; Bodo, Gianluigi
2010-08-01
We present and compare third- as well as fifth-order accurate finite difference schemes for the numerical solution of the compressible ideal MHD equations in multiple spatial dimensions. The selected methods lean on four different reconstruction techniques based on recently improved versions of the weighted essentially non-oscillatory (WENO) schemes, monotonicity preserving (MP) schemes as well as slope-limited polynomial reconstruction. The proposed numerical methods are highly accurate in smooth regions of the flow, avoid loss of accuracy in proximity of smooth extrema and provide sharp non-oscillatory transitions at discontinuities. We suggest a numerical formulation based on a cell-centered approach where all of the primary flow variables are discretized at the zone center. The divergence-free condition is enforced by augmenting the MHD equations with a generalized Lagrange multiplier yielding a mixed hyperbolic/parabolic correction, as in Dedner et al. [J. Comput. Phys. 175 (2002) 645-673]. The resulting family of schemes is robust, cost-effective and straightforward to implement. Compared to previous existing approaches, it completely avoids the CPU intensive workload associated with an elliptic divergence cleaning step and the additional complexities required by staggered mesh algorithms. Extensive numerical testing demonstrates the robustness and reliability of the proposed framework for computations involving both smooth and discontinuous features.
Singularity computations. [finite element methods for elastoplastic flow]
NASA Technical Reports Server (NTRS)
Swedlow, J. L.
1978-01-01
Direct descriptions of the structure of a singularity would describe the radial and angular distributions of the field quantities as explicitly as practicable, along with some measure of the intensity of the singularity. This paper discusses such an approach based on recent developments in numerical methods for elastoplastic flow. Attention is restricted to problems where one variable or set of variables is finite at the origin of the singularity but a second set is not.
Physical mechanism and numerical simulation of the inception of the lightning upward leader
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Qingmin; Lu Xinchang; Shi Wei
2012-12-15
The upward leader is a key physical process of the leader progression model of lightning shielding. The inception mechanism and criterion of the upward leader need further understanding and clarification. Based on leader discharge theory, this paper proposes the critical electric field intensity of the stable upward leader (CEFISUL) and characterizes it by the valve electric field intensity on the conductor surface, E_L, which is the basis of a new inception criterion for the upward leader. Through numerical simulation under various physical conditions, we verified that E_L is mainly related to the conductor radius, and data fitting yields the mathematical expression of E_L. We further establish a computational model for the lightning shielding performance of transmission lines based on the proposed CEFISUL criterion, which reproduces the shielding failure rate of typical UHV transmission lines. The model-based calculation results agree well with statistical data from on-site operations, which shows the effectiveness and validity of the CEFISUL criterion.
Boost-phase discrimination research activities
NASA Technical Reports Server (NTRS)
Cooper, David M.; Deiwert, George S.
1989-01-01
Theoretical research in two areas was performed. The aerothermodynamics research focused on hard-body and rocket plume flows. Analytical real gas models to describe finite rate chemistry were developed and incorporated into the three-dimensional flow codes. New numerical algorithms capable of treating multi-species reacting gas equations and flows with large gradients were also developed. The computational chemistry research focused on the determination of spectral radiative intensity factors, transport properties, and reaction rates. Ab initio solutions to the Schrodinger equation provided potential energy curves, transition moments (radiative probabilities and strengths), and potential energy surfaces. These surfaces were then coupled with classical particle reactive trajectories to compute reaction cross-sections and rates.
Singular boundary method for global gravity field modelling
NASA Astrophysics Data System (ADS)
Cunderlik, Robert
2014-05-01
The singular boundary method (SBM) and the method of fundamental solutions (MFS) are meshless boundary collocation techniques that use the fundamental solution of a governing partial differential equation (e.g. the Laplace equation) as their basis functions. They have been developed to avoid singular numerical integration as well as mesh generation in the traditional boundary element method (BEM). SBM has been proposed to overcome a main drawback of MFS - its controversial fictitious boundary outside the domain. The key idea of SBM is to introduce a concept of origin intensity factors that isolate the singularities of the fundamental solution and its derivatives using appropriate regularization techniques. Consequently, the source points can be placed directly on the real boundary and coincide with the collocation nodes. In this study we deal with SBM applied to high-resolution global gravity field modelling. The first numerical experiment presents a numerical solution to the fixed gravimetric boundary value problem. The achieved results are compared with the numerical solutions obtained by MFS or the direct BEM, indicating the efficiency of all methods. In the second numerical experiment, SBM is used to derive the geopotential and its first derivatives from the Tzz components of the gravity disturbing tensor observed by the GOCE satellite mission. A determination of the origin intensity factors allows evaluation of the disturbing potential and gravity disturbances directly on the Earth's surface where the source points are located. To achieve high-resolution numerical solutions, large-scale parallel computations are performed on a cluster with 1 TB of distributed memory, and an iterative elimination of far zones' contributions is applied.
Application of geometric approximation to the CPMG experiment: Two- and three-site exchange.
Chao, Fa-An; Byrd, R Andrew
2017-04-01
The Carr-Purcell-Meiboom-Gill (CPMG) experiment is one of the most classical and well-known relaxation dispersion experiments in NMR spectroscopy, and it has been successfully applied to characterize biologically relevant conformational dynamics in many cases. Although the data analysis of the CPMG experiment for the 2-site exchange model can be facilitated by analytical solutions, the data analysis for a more complex exchange model generally requires computationally intensive numerical analysis. Recently, a powerful computational strategy, geometric approximation, has been proposed to provide approximate numerical solutions for the adiabatic relaxation dispersion experiments where analytical solutions are neither available nor feasible. Here, we demonstrate the general potential of geometric approximation by providing a data analysis solution of the CPMG experiment for both the traditional 2-site model and a linear 3-site exchange model. The approximate numerical solution deviates less than 0.5% from the numerical solution on average, and the new approach is computationally 60,000-fold more efficient than the numerical approach. Moreover, we find that accurate dynamic parameters can be determined in most cases, and, for a range of experimental conditions, the relaxation can be assumed to follow mono-exponential decay. The method is general and applicable to any CPMG relaxation dispersion experiment (e.g. N, C', Cα, Hα). The approach forms a foundation for building solution surfaces to analyze the CPMG experiment for different models of 3-site exchange. Thus, the geometric approximation is a general strategy to analyze relaxation dispersion data in any system (biological or chemical) if the appropriate library can be built in a physically meaningful domain.
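Under the mono-exponential-decay assumption noted above, measured CPMG peak intensities map to effective relaxation rates through the standard constant-time relation R2eff = -ln(I/I0)/T_relax; the sketch below illustrates that conversion with made-up intensities, not data from the paper.

```python
import numpy as np

# Sketch of the standard constant-time CPMG conversion from peak intensities
# to effective relaxation rates, R2eff = -ln(I/I0) / T_relax, valid under
# the mono-exponential-decay assumption discussed above. The intensity
# values and relaxation delay are illustrative.
def r2eff(I, I0, T_relax):
    return -np.log(I / I0) / T_relax

I0 = 1.00                              # reference (no CPMG relaxation block)
I = np.array([0.70, 0.75, 0.82])       # intensities at rising pulsing rates
print(r2eff(I, I0, T_relax=0.04))      # s^-1, for a 40 ms relaxation delay
```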
Randomized algorithms for high quality treatment planning in volumetric modulated arc therapy
NASA Astrophysics Data System (ADS)
Yang, Yu; Dong, Bin; Wen, Zaiwen
2017-02-01
In recent years, volumetric modulated arc therapy (VMAT) has become an increasingly important radiation technique widely used in clinical application for cancer treatment. One of the key problems in VMAT is treatment plan optimization, which is complicated due to the constraints imposed by the equipment involved. In this paper, we consider a model with four major constraints: the bound on the beam intensity, an upper bound on the rate of change of the beam intensity, the moving speed of the leaves of the multi-leaf collimator (MLC), and its directional convexity. We solve the model by a two-stage algorithm: performing minimization with respect to the shapes of the aperture and the beam intensities alternately. Specifically, the shapes of the aperture are obtained by a greedy algorithm whose performance is enhanced by random sampling in the leaf pairs with a decremental rate. The beam intensity is optimized using a gradient projection method with non-monotonic line search. We further improve the proposed algorithm by an incremental random importance sampling of the voxels to reduce the computational cost of the energy functional. Numerical simulations on two clinical cancer data sets demonstrate that our method is highly competitive with the state-of-the-art algorithms in terms of both computational time and quality of treatment planning.
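A minimal sketch of the gradient-projection idea used in the second stage, with simple bound constraints on the beam intensities; the quadratic objective, toy dose matrix, and fixed step size are stand-ins for the clinical model, and the non-monotonic line search is omitted for brevity.

```python
import numpy as np

# Minimal sketch of a gradient-projection update for beam intensities x
# under bound constraints 0 <= x <= x_max, in the spirit of the second
# stage described above; the objective ||Ax - d||^2 and all sizes are
# stand-ins, and the non-monotonic line search is replaced by a fixed step.
def gradient_projection(grad_f, x0, x_max, step, iters=500):
    x = np.clip(x0, 0.0, x_max)
    for _ in range(iters):
        x = np.clip(x - step * grad_f(x), 0.0, x_max)  # project onto bounds
    return x

rng = np.random.default_rng(0)
A = rng.random((50, 20)); d = np.ones(50)        # toy dose matrix and target
grad = lambda x: 2.0 * A.T @ (A @ x - d)         # gradient of ||Ax - d||^2
step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)   # 1/L for this quadratic
x = gradient_projection(grad, np.zeros(20), x_max=5.0, step=step)
print(np.linalg.norm(A @ x - d))
```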
NASA Astrophysics Data System (ADS)
Bittencourt, Tulio N.; Barry, Ahmabou; Ingraffea, Anthony R.
This paper presents a comparison among stress-intensity factors for mixed-mode two-dimensional problems obtained through three different approaches: displacement correlation, the J-integral, and the modified crack-closure integral. All of the procedures involve only one analysis step and are incorporated in the post-processor of a finite element computer code for fracture mechanics analysis (FRANC). Results are presented for a problem with a closed-form solution under mixed-mode conditions. The accuracy of the described methods is then discussed and analyzed in the framework of their numerical results. The influence of the differences among the three methods on the predicted crack trajectory of general problems is also discussed.
García-Martínez, L; Rosete-Aguilar, M; Garduño-Mejia, J
2012-01-20
We analyze the spatio-temporal intensity of sub-20-femtosecond pulses with a carrier wavelength of 810 nm along the optical axis of low-numerical-aperture achromatic and apochromatic doublets designed in the IR region by using scalar diffraction theory. The diffraction integral is solved by expanding the wave number around the carrier frequency of the pulse in a Taylor series up to third order, and the integral over the frequencies is then solved by using the Gauss-Legendre quadrature method. The numerical errors in this method are negligible with 96 nodes, and the computational time is reduced by 95% compared to an integration method based on rectangles. We show that the third-order group velocity dispersion (GVD) is not negligible for 10 fs pulses at 810 nm propagating through the low-numerical-aperture doublets, and its effect is more important than the propagation time difference (PTD). This last effect, however, is also significant. For sub-20-femtosecond pulses, these two effects make it necessary to use a pulse shaper to correct for second- and higher-order GVD terms and to use apochromatic optics to correct the PTD effect. The design of an apochromatic doublet is presented in this paper, and the spatio-temporal intensity of the pulse at the focal region of this doublet is compared to that given by the achromatic doublet.
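The frequency-integration strategy described above can be illustrated with NumPy's Gauss-Legendre nodes; the sketch below integrates a toy Gaussian spectrum over a finite band around an 810 nm carrier with 96 nodes, as in the abstract. It is not the paper's diffraction integral, and the carrier and bandwidth values are illustrative.

```python
import numpy as np

# Sketch of a 96-node Gauss-Legendre rule over a finite frequency band
# around the carrier. The integrand is a toy Gaussian spectrum, not the
# paper's diffraction integral; all values are illustrative.
nodes, weights = np.polynomial.legendre.leggauss(96)
w0 = 2.33e15                      # rad/s, ~810 nm carrier frequency
dw = 2.0e14                       # rad/s, half-bandwidth of the band
w = w0 + dw * nodes               # map [-1, 1] onto [w0-dw, w0+dw]
spectrum = np.exp(-((w - w0) / (dw / 3.0)) ** 2)
integral = dw * np.sum(weights * spectrum)
print(integral)                   # ~sqrt(pi)*dw/3 for this toy spectrum
```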
Determination of the Fracture Parameters in a Stiffened Composite Panel
NASA Technical Reports Server (NTRS)
Lin, Chung-Yi
2000-01-01
A modified J-integral, namely the equivalent domain integral, is derived for a three-dimensional anisotropic cracked solid to evaluate the stress intensity factor along the crack front using the finite element method. Based on the equivalent domain integral method with auxiliary fields, an interaction integral is also derived to extract the second fracture parameter, the T-stress, from the finite element results. The auxiliary fields are the two-dimensional plane strain solutions of monoclinic materials with the plane of symmetry at x_3 = 0 under point loads applied at the crack tip. These solutions are expressed in a compact form based on the Stroh formalism. Both integrals can be implemented into a single numerical procedure to determine the distributions of the stress intensity factor and T-stress components, T11, T13, and thus T33, along a three-dimensional crack front. The effects of plate thickness and crack length on the variation of the stress intensity factor and T-stresses through the thickness are investigated in detail for through-thickness center-cracked plates (isotropic and orthotropic) and orthotropic stiffened panels under pure mode-I loading conditions. For all the cases studied, T11 remains negative. For plates with the same dimensions, a larger crack yields a larger magnitude of the normalized stress intensity factor and normalized T-stresses. The results in orthotropic stiffened panels exhibit an opposite trend in general. As expected, for the thicker panels, the fracture parameters evaluated through the thickness, except in the region near the free surfaces, approach the two-dimensional plane strain solutions. In summary, the numerical methods presented in this research demonstrate their high computational effectiveness and good numerical accuracy in extracting these fracture parameters from the finite element results in three-dimensional cracked solids.
NASA Astrophysics Data System (ADS)
Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg
2016-02-01
We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods in unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.
Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)
NASA Technical Reports Server (NTRS)
Dalton, Shelly D.; Daley, Philip C.
1988-01-01
As the hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge based systems (KBS) will often require an ability to perform both numerically intensive vector computations and real time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which will provide the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scalar, and multiprocessors. High speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It is an open question whether or not, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.
High-Performance Java Codes for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2001-01-01
The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center) and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.
Lattice Boltzmann Method for 3-D Flows with Curved Boundary
NASA Technical Reports Server (NTRS)
Mei, Renwei; Shyy, Wei; Yu, Dazhi; Luo, Li-Shi
2002-01-01
In this work, we investigate two issues that are important to computational efficiency and reliability in fluid dynamics applications of the lattice Boltzmann equation (LBE): (1) the computational stability and accuracy of different lattice Boltzmann models and (2) the treatment of boundary conditions on curved solid boundaries and their 3-D implementations. Three athermal 3-D LBE models (D3Q15, D3Q19, and D3Q27) are studied and compared in terms of efficiency, accuracy, and robustness. The boundary treatment recently developed by Filippova and Hanel and Mei et al. in 2-D is extended to and implemented for 3-D. The convergence, stability, and computational efficiency of the 3-D LBE models with the boundary treatment for curved boundaries were tested in simulations of four 3-D flows: (1) fully developed flows in a square duct, (2) flow in a 3-D lid-driven cavity, (3) fully developed flows in a circular pipe, and (4) a uniform flow over a sphere. We found that while the fifteen-velocity 3-D (D3Q15) model is more prone to numerical instability and the D3Q27 model is more computationally intensive, the D3Q19 model provides a balance between computational reliability and efficiency. Through numerical simulations, we demonstrated that the boundary treatment for 3-D arbitrary curved geometry has second-order accuracy and possesses satisfactory stability characteristics.
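For reference, a hedged sketch of the standard D3Q19 velocity set and weights (textbook values consistent with the model named above): one rest particle with weight 1/3, six face neighbors with weight 1/18, and twelve edge neighbors with weight 1/36.

```python
import numpy as np

# Standard D3Q19 lattice velocities and weights (textbook values, consistent
# with the D3Q19 model compared above): 1 rest particle, 6 face neighbors,
# 12 edge neighbors.
face = [[1,0,0], [-1,0,0], [0,1,0], [0,-1,0], [0,0,1], [0,0,-1]]
edge = ([[x, y, 0] for x in (1, -1) for y in (1, -1)]
        + [[x, 0, z] for x in (1, -1) for z in (1, -1)]
        + [[0, y, z] for y in (1, -1) for z in (1, -1)])
c = np.array([[0, 0, 0]] + face + edge)          # 19 x 3 velocity set
w = np.array([1/3] + [1/18]*6 + [1/36]*12)       # matching weights
assert c.shape == (19, 3) and np.isclose(w.sum(), 1.0)
assert np.allclose(w @ c, 0.0)                   # zero net lattice momentum
```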
Advanced Computational Aeroacoustics Methods for Fan Noise Prediction
NASA Technical Reports Server (NTRS)
Envia, Edmane (Technical Monitor); Tam, Christopher
2003-01-01
Direct computation of fan noise is presently not possible. One of the major difficulties is the geometrical complexity of the problem. In the case of fan noise, the blade geometry is critical to the loading on the blade and hence the intensity of the radiated noise. The precise geometry must be incorporated into the computation. In computational fluid dynamics (CFD), there are two general ways to handle problems with complex geometry. One way is to use unstructured grids. The other is to use body-fitted overset grids. In the overset grid method, accurate data transfer is of utmost importance. For acoustic computation, it is not clear that the currently used data transfer methods are sufficiently accurate as not to contaminate the very small amplitude acoustic disturbances. In CFD, low-order schemes are, invariably, used in conjunction with unstructured grids. However, low-order schemes are known to be numerically dispersive and dissipative; dispersive and dissipative errors are extremely undesirable for acoustic wave problems. The objective of this project is to develop a high-order unstructured-grid Dispersion-Relation-Preserving (DRP) scheme that would minimize numerical dispersion and dissipation errors. This report contains the results of the funded portion of the project. A DRP scheme on an unstructured grid has been developed; it is constructed in the wave number space. The characteristics of the scheme can be improved by the inclusion of additional constraints. Stability of the scheme has been investigated. Stability can be improved by adopting an upwinding strategy.
Tivnan, Matthew; Gurjar, Rajan; Wolf, David E; Vishwanath, Karthik
2015-08-12
Diffuse Correlation Spectroscopy (DCS) is a well-established optical technique that has been used for non-invasive measurement of blood flow in tissues. Instrumentation for DCS includes a correlation device that computes the temporal intensity autocorrelation of a coherent laser source after it has undergone diffuse scattering through a turbid medium. Typically, the signal acquisition and its autocorrelation are performed by a correlation board. These boards have dedicated hardware to acquire and compute intensity autocorrelations of a rapidly varying input signal and usually are quite expensive. Here we show that a Raspberry Pi minicomputer can acquire and store a rapidly varying time-signal with high fidelity. We show that this signal collected by a Raspberry Pi device can be processed numerically to yield intensity autocorrelations well suited for DCS applications. DCS measurements made using the Raspberry Pi device were compared to those acquired using a commercial hardware autocorrelation board to investigate the stability, performance, and accuracy of the data acquired in controlled experiments. This paper represents a first step toward lowering the instrumentation cost of a DCS system and may offer the potential to make DCS more widely used in biomedical applications.
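The numerical processing step referred to above amounts to computing the normalized intensity autocorrelation g2(tau) = <I(t)I(t+tau)>/<I(t)>^2 from the sampled trace; a minimal sketch on a synthetic photon-count signal (illustrative, not the authors' acquisition code) follows.

```python
import numpy as np

# Sketch of the numerical autocorrelation step referred to above:
# g2(tau) = <I(t) I(t+tau)> / <I(t)>^2 computed directly from a sampled
# trace. The synthetic photon-count signal is illustrative only.
def g2(signal, max_lag):
    mean_sq = signal.mean() ** 2
    return np.array([np.mean(signal[:-k] * signal[k:]) / mean_sq
                     for k in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
trace = rng.poisson(lam=20, size=100_000).astype(float)   # stand-in counts
print(g2(trace, max_lag=5))       # ~1.0 at all lags for uncorrelated light
```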
Electrohydrodynamic coalescence of droplets using an embedded potential flow model
NASA Astrophysics Data System (ADS)
Garzon, M.; Gray, L. J.; Sethian, J. A.
2018-03-01
The coalescence, and subsequent satellite formation, of two inviscid droplets is studied numerically. The initial drops are taken to be of equal and different sizes, and simulations have been carried out with and without the presence of an electrical field. The main computational challenge is the tracking of a free surface that changes topology. Coupling level set and boundary integral methods with an embedded potential flow model, we seamlessly compute through these singular events. As a consequence, the various coalescence modes that appear depending upon the relative ratio of the parent droplets can be studied. Computations of first stage pinch-off, second stage pinch-off, and complete engulfment are analyzed and compared to recent numerical studies and laboratory experiments. Specifically, we study the evolution of bridge radii and the related scaling laws, the minimum drop radii evolution from coalescence to satellite pinch-off, satellite sizes, and the upward stretching of the near cylindrical protrusion at the droplet top. Clear evidence of partial coalescence self-similarity is presented for parent droplet ratios between 1.66 and 4. This has been possible due to the fact that computational initial conditions only depend upon the mother droplet size, in contrast with laboratory experiments where the difficulty in establishing the same initial physical configuration is well known. The presence of electric forces changes the coalescence patterns, and it is possible to control the satellite droplet size by tuning the electrical field intensity. All of the numerical results are in very good agreement with recent laboratory experiments for water droplet coalescence.
Influence of Computational Drop Representation in LES of a Droplet-Laden Mixing Layer
NASA Technical Reports Server (NTRS)
Bellan, Josette; Radhakrishnan, Senthilkumaran
2013-01-01
Multiphase turbulent flows are encountered in many practical applications including turbine engines or natural phenomena involving particle dispersion. Numerical computations of multiphase turbulent flows are important because they provide a cheaper alternative to performing experiments during an engine design process or because they can provide predictions of pollutant dispersion. Two-phase flows contain millions and sometimes billions of particles. For flows with volumetrically dilute particle loading, the most accurate method of numerically simulating the flow is direct numerical simulation (DNS) of the governing equations, in which all scales of the flow, including the small scales responsible for the overwhelming amount of dissipation, are resolved. DNS, however, has a high computational cost and cannot be used in engineering design applications where iterations among several design conditions are necessary. Because of this cost, numerical simulations of such flows cannot track all these drops. The objective of this work is to quantify the influence of the number of computational drops and grid spacing on the accuracy of predicted flow statistics, and to possibly identify the minimum number, or, if not possible, the optimal number of computational drops that provides minimal error in flow prediction. For this purpose, several Large Eddy Simulations (LES) of a mixing layer with evaporating drops have been performed using coarse, medium, and fine grid spacings and computational drops rather than physical drops. To define computational drops, an integer NR is introduced that represents the ratio of the number of existing physical drops to the desired number of computational drops; for example, if NR=8, a computational drop represents 8 physical drops in the flow field. The desired number of computational drops is determined by the available computational resources; the larger NR is, the less computationally intensive the simulation. A set of first-order and second-order flow statistics and of drop statistics is extracted from the LES predictions and compared to results obtained by filtering a DNS database. First-order statistics, such as the Favre-averaged streamwise velocity, the Favre-averaged vapor mass fraction, and the drop streamwise velocity, are predicted accurately independent of the number of computational drops and grid spacing. Second-order flow statistics depend both on the number of computational drops and on grid spacing. The scalar variance and turbulent vapor flux are predicted accurately by the fine-mesh LES only when NR is less than 32, and by the coarse-mesh LES reasonably accurately for all NR values. This is attributed to the fact that when the grid spacing is coarsened, the number of drops in a computational cell must not be significantly lower than that in the DNS.
Numerical Study of Solar Storms from the Sun to Earth
NASA Astrophysics Data System (ADS)
Feng, Xueshang; Jiang, Chaowei; Zhou, Yufen
2017-04-01
As solar storms sweep past the Earth, adverse changes occur in the geospace environment. How humans can mitigate and avoid the destructive damage caused by solar storms has become an important frontier issue in the high-tech era. It is of scientific significance to understand the dynamic process of a solar storm's propagation in interplanetary space, and of practical value to conduct physics-based numerical research on the three-dimensional evolution of solar storms in interplanetary space, with the aid of powerful computing capacity, to predict the arrival times, intensities, and probable geoeffectiveness of solar storms at the Earth. So far, numerical studies based on magnetohydrodynamics (MHD) have gone through the transition from initial qualitative principle studies to systematic quantitative studies of concrete events and numerical predictions. The numerical modeling community has a common goal to develop an end-to-end physics-based modeling system for forecasting the Sun-Earth relationship. It is hoped that the transition of these models to operational use will follow from the availability of computational resources at reasonable cost, and that the models' prediction capabilities may be improved by incorporating observational findings and constraints into physics-based models, combining observations, empirical models, and MHD simulations in organic ways. In this talk, we briefly focus on our recent progress in using solar observations to produce realistic magnetic configurations of CMEs as they leave the Sun and in coupling data-driven simulations of CMEs to heliospheric simulations that then propagate the CME configuration to 1 AU, and we outline the important numerical issues and their possible solutions in numerical space weather modeling from the Sun to Earth for future research.
Iterative methods for 3D implicit finite-difference migration using the complex Padé approximation
NASA Astrophysics Data System (ADS)
Costa, Carlos A. N.; Campos, Itamara S.; Costa, Jessé C.; Neto, Francisco A.; Schleicher, Jörg; Novais, Amélia
2013-08-01
Conventional implementations of 3D finite-difference (FD) migration use splitting techniques to accelerate performance and save computational cost. However, such techniques are plagued with numerical anisotropy that jeopardises the correct positioning of dipping reflectors in the directions not used for the operator splitting. We implement 3D downward continuation FD migration without splitting using a complex Padé approximation. In this way, the numerical anisotropy is eliminated at the expense of a computationally more intensive solution of a large-band linear system. We compare the performance of the iterative stabilized biconjugate gradient (BICGSTAB) and that of the multifrontal massively parallel direct solver (MUMPS). It turns out that the use of the complex Padé approximation not only stabilizes the solution, but also acts as an effective preconditioner for the BICGSTAB algorithm, reducing the number of iterations as compared to the implementation using the real Padé expansion. As a consequence, the iterative BICGSTAB method is more efficient than the direct MUMPS method when solving a single term in the Padé expansion. The results of both algorithms, here evaluated by computing the migration impulse response in the SEG/EAGE salt model, are of comparable quality.
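As a hedged stand-in for solving one Padé term's large-band linear system with BICGSTAB, the sketch below applies SciPy's iterative solver to a toy complex tridiagonal operator; the matrix is illustrative, not the migration operator, and no Padé-based preconditioner is shown.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

# Hedged stand-in for one Pade term's large-band linear system: a toy
# complex tridiagonal operator solved with BICGSTAB. This is not the
# migration operator, and no preconditioner is applied.
n = 2000
A = diags([np.full(n - 1, -1.0),
           np.full(n, 2.0 + 0.5j),        # complex diagonal, as arises from
           np.full(n - 1, -1.0)],         # a complex Pade discretization
          offsets=[-1, 0, 1], format='csr')
b = np.ones(n, dtype=complex)
x, info = bicgstab(A, b)
print(info, np.linalg.norm(A @ x - b))    # info == 0 signals convergence
```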
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo
McDaniel, Tyler; D’Azevedo, Ed F.; Li, Ying Wai; Wong, K.; Kent, P. R. C.
2017-11-07
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly evaluated iteratively using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is therefore formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple-rank delayed update scheme. This strategy enables probability evaluation with application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order-of-magnitude improvements in the update time can be obtained on both multi-core CPUs and GPUs.
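As a minimal NumPy sketch of the contrast the abstract draws (our own illustration, not the authors' code), the rank-1 Sherman-Morrison update can be compared with an en bloc rank-K update via the Woodbury identity:

```python
import numpy as np

def sherman_morrison(Ainv, u, v):
    """Rank-1 update of A^-1 after A -> A + u v^T: O(N^2) matrix-vector
    work per accepted move."""
    Au = Ainv @ u
    vA = v @ Ainv
    return Ainv - np.outer(Au, vA) / (1.0 + v @ Au)

def delayed_block_update(Ainv, U, V):
    """Apply K accumulated rank-1 updates at once (A -> A + U V^T) via the
    Woodbury identity: matrix-matrix operations with higher arithmetic
    intensity, in the spirit of the delayed scheme described above."""
    AU = Ainv @ U                          # N x K
    S = np.eye(U.shape[1]) + V.T @ AU      # small K x K system
    return Ainv - AU @ np.linalg.solve(S, V.T @ Ainv)
```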
The QuakeSim Project: Numerical Simulations for Active Tectonic Processes
NASA Technical Reports Server (NTRS)
Donnellan, Andrea; Parker, Jay; Lyzenga, Greg; Granat, Robert; Fox, Geoffrey; Pierce, Marlon; Rundle, John; McLeod, Dennis; Grant, Lisa; Tullis, Terry
2004-01-01
In order to develop a solid earth science framework for understanding and studying active tectonic and earthquake processes, this task develops simulation and analysis tools to study the physics of earthquakes using state-of-the-art modeling, data manipulation, and pattern recognition technologies. We develop clearly defined, accessible data formats and code protocols as inputs to the simulations. These are adapted to high-performance computers because the solid earth system is extremely complex and nonlinear, resulting in computationally intensive problems with millions of unknowns. With these tools it will be possible to construct the more complex models and simulations necessary to develop hazard assessment systems critical for reducing future losses from major earthquakes.
Remote Numerical Simulations of the Interaction of High Velocity Clouds with Random Magnetic Fields
NASA Astrophysics Data System (ADS)
Santillan, Alfredo; Hernandez-Cervantes, Liliana; Gonzalez-Ponce, Alejandro; Kim, Jongsoo
The numerical simulations associated with the interaction of High Velocity Clouds (HVC) with the Magnetized Galactic Interstellar Medium (ISM) are a powerful tool for describing the evolution of these objects in our Galaxy. In this work we present a new project, referred to as Theoretical Virtual Observatories, oriented toward performing numerical simulations in real time through a Web page. It is a powerful astrophysical computational tool consisting of an intuitive graphical user interface (GUI) and a database produced by numerical calculations. On this website the user can make use of the existing numerical simulations in the database or run a new simulation by introducing initial conditions such as temperatures, densities, velocities, and magnetic field intensities for both the ISM and the HVC. The prototype is programmed using Linux, Apache, MySQL, and PHP (LAMP), based on the open-source philosophy. All simulations were performed with the MHD code ZEUS-3D, which solves the ideal MHD equations by finite differences on a fixed Eulerian mesh. Finally, we present typical results that can be obtained with this tool.
Localization of intense electromagnetic waves in plasmas.
Shukla, Padma Kant; Eliasson, Bengt
2008-05-28
We present theoretical and numerical studies of the interaction between relativistically intense laser light and a two-temperature plasma consisting of one relativistically hot and one cold component of electrons. Such plasmas are frequently encountered in intense laser-plasma experiments, where collisionless heating via Raman instabilities leads to a high-energy tail in the electron distribution function. The electromagnetic waves (EMWs) are governed by the Maxwell equations, and the plasma is governed by the relativistic Vlasov and hydrodynamic equations. Owing to the interaction between the laser light and the plasma, electrons can become trapped in the intense wakefield of the laser pulse, forming relativistic electron holes (REHs) in which laser light is trapped. Such electron holes are characterized by a non-Maxwellian electron distribution with both trapped and free electron populations. We present a model for the interaction between laser light and REHs, and computer simulations that show the stability and dynamics of the coupled electron-hole and EMW envelopes.
NASA Astrophysics Data System (ADS)
Ardalan, A. A.; Safari, A.
2004-09-01
An operational algorithm for the computation of terrain correction (or local gravity field modeling), based on the application of the closed-form solution of the Newton integral in terms of Cartesian coordinates in a multi-cylindrical equal-area map projection of the reference ellipsoid, is presented. The multi-cylindrical equal-area map projection of the reference ellipsoid has been derived and is described in detail for the first time. Ellipsoidal mass elements of various sizes on the surface of the reference ellipsoid are selected, and the gravitational potential and vector of gravitational intensity (i.e. gravitational acceleration) of the mass elements are computed via numerical solution of the Newton integral in terms of geodetic coordinates {λ, ϕ, h}. The four base-edge points of the ellipsoidal mass elements are transformed into the multi-cylindrical equal-area map projection surface to build Cartesian mass elements by associating the height of the corresponding ellipsoidal mass elements with the transformed area elements. Using the closed-form solution of the Newton integral in terms of Cartesian coordinates, the gravitational potential and vector of gravitational intensity of the transformed Cartesian mass elements are computed and compared with those of the numerical solution of the Newton integral for the ellipsoidal mass elements in terms of geodetic coordinates. Numerical tests indicate that the difference between the two computations, i.e. the numerical solution of the Newton integral for ellipsoidal mass elements in terms of geodetic coordinates and the closed-form solution of the Newton integral in terms of Cartesian coordinates in the multi-cylindrical equal-area map projection, is less than 1.6×10^-8 m^2/s^2 for a mass element with a cross-section of 10 m × 10 m and a height of 10,000 m. For a mass element with a cross-section of 1 km × 1 km and a height of 10,000 m, the difference is less than 1.5×10^-4 m^2/s^2. Since 1.5×10^-4 m^2/s^2 is equivalent to 1.5×10^-5 m in the vertical direction, it can be concluded that a method for terrain correction (or local gravity field modeling) based on the closed-form solution of the Newton integral in terms of Cartesian coordinates in a multi-cylindrical equal-area map projection of the reference ellipsoid has been developed which has the accuracy of terrain correction (or local gravity field modeling) based on the Newton integral in terms of ellipsoidal coordinates.
Twisting Anderson pseudospins with light: Quench dynamics in THz-pumped BCS superconductors
NASA Astrophysics Data System (ADS)
Chou, Yang-Zhi; Liao, Yunxiang; Foster, Matthew
We study the preparation and detection of coherent far-from-equilibrium BCS superconductor dynamics in THz pump-probe experiments. In a recent experiment, an intense monocycle THz pulse with center frequency ω = Δ was injected into a superconductor with BCS gap Δ; the post-pump evolution was detected via the optical conductivity. It was argued that nonlinear coupling of the pump to the Anderson pseudospins of the superconductor induces coherent dynamics of the Higgs mode Δ(t). We validate this picture in a 2D BCS model with a combination of exact numerics and the Lax reduction, and we compute the dynamical phase diagram. The main effect of the pump is to scramble the orientations of Anderson pseudospins along the Fermi surface by twisting them in the xy-plane. We show that more intense pulses can induce a far-from-equilibrium gapless phase (phase I), originally predicted in the context of interaction quenches. We show that the THz pump can reach phase I at much lower energy densities than an interaction quench, and we demonstrate that the Lax reduction provides a quantitative tool for computing coherent BCS dynamics. We also compute the optical conductivity for the states discussed here.
Physical and numerical modeling of hydrophysical processes on the site of underwater pipelines
NASA Astrophysics Data System (ADS)
Garmakova, M. E.; Degtyarev, V. V.; Fedorova, N. N.; Shlychkov, V. A.
2018-03-01
The paper outlines issues related to ensuring the operational safety of underwater pipelines that are at risk of accidents. The research is based on physical and mathematical modeling of local bottom erosion in the area of the pipeline. The experimental studies were performed in the Hydraulics Laboratory of the Department of Hydraulic Engineering Construction, Safety and Ecology of NSUACE (Sibstrin). The physical experiments revealed that the intensity of bottom soil reforming depends on the depth at which the pipeline is buried. The ANSYS software was used for numerical modeling of the erosion of the sandy bottom beneath the pipeline. Computational results were compared at various mass flow rates.
NASA Technical Reports Server (NTRS)
Chackerian, C., Jr.; Farrenq, R.; Guelachvili, G.; Rossetti, C.; Urban, W.
1984-01-01
Experimental intensity information is combined with numerically obtained vibrational wave functions in a nonlinear least-squares fitting procedure to obtain the ground-electronic-state electric dipole moment function of carbon monoxide, valid over the range of nuclear oscillation (0.87-1.91 Å) corresponding to about the v = 38th vibrational level. Vibrational transition matrix elements are computed from this function for Δv = 1, 2, 3 with v not more than 38.
Thermal acoustic oscillations, volume 2. [cryogenic fluid storage
NASA Technical Reports Server (NTRS)
Spradley, L. W.; Sims, W. H.; Fan, C.
1975-01-01
A number of thermal acoustic oscillation phenomena and their effects on cryogenic systems were studied. The conditions which cause or suppress oscillations, the frequency, amplitude and intensity of oscillations when they exist, and the heat loss they induce are discussed. Methods of numerical analysis utilizing the digital computer were developed for use in cryogenic systems design. In addition, an experimental verification program was conducted to study oscillation wave characteristics and boiloff rate. The data were then reduced and compared with the analytical predictions.
Further studies using matched filter theory and stochastic simulation for gust loads prediction
NASA Technical Reports Server (NTRS)
Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III
1993-01-01
This paper describes two analysis methods -- one deterministic, the other stochastic -- for computing maximized and time-correlated gust loads for aircraft with nonlinear control systems. The first method is based on matched filter theory; the second is based on stochastic simulation. The paper summarizes the methods, discusses the selection of gust intensity for each method and presents numerical results. A strong similarity between the results from the two methods is seen to exist for both linear and nonlinear configurations.
Reconstruction of structural damage based on reflection intensity spectra of fiber Bragg gratings
NASA Astrophysics Data System (ADS)
Huang, Guojun; Wei, Changben; Chen, Shiyuan; Yang, Guowei
2014-12-01
We present an approach for structural damage reconstruction based on the reflection intensity spectra of fiber Bragg gratings (FBGs). Our approach incorporates the finite element method, transfer matrix (T-matrix), and genetic algorithm to solve the inverse photo-elastic problem of damage reconstruction, i.e. to identify the location, size, and shape of a defect. By introducing a parameterized characterization of the damage information, the inverse photo-elastic problem is reduced to an optimization problem, and a relevant computational scheme was developed. The scheme iteratively searches for the solution to the corresponding direct photo-elastic problem until the simulated and measured (or target) reflection intensity spectra of the FBGs near the defect coincide within a prescribed error. Proof-of-concept validations of our approach were performed numerically and experimentally using both holed and cracked plate samples as typical cases of plane-stress problems. The damage identifiability was simulated by changing the deployment of the FBG sensors, including the total number of sensors and their distance to the defect. Both the numerical and experimental results demonstrate that our approach is effective and promising. It provides us with a photo-elastic method for developing a remote, automatic damage-imaging technique that substantially improves damage identification for structural health monitoring.
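The inverse-problem loop the abstract describes can be sketched with SciPy's evolutionary optimizer (a sketch under our own assumptions: differential evolution stands in for the paper's genetic algorithm, and `forward_spectrum` is a toy stand-in for the FEM + T-matrix forward solver, not the authors' code):

```python
import numpy as np
from scipy.optimize import differential_evolution

def forward_spectrum(params, grid):
    """Toy stand-in for the direct photo-elastic solver: maps defect
    parameters (center x, y and size r) to a synthetic 'spectrum'."""
    x, y, r = params
    return np.exp(-((grid - x) ** 2 + (grid - y) ** 2) / (2 * r ** 2))

grid = np.linspace(0.0, 1.0, 64)
measured = forward_spectrum([0.4, 0.6, 0.1], grid)  # 'measured' FBG spectra

def mismatch(params):
    # Discrepancy between simulated and measured reflection intensity spectra.
    return np.sum((forward_spectrum(params, grid) - measured) ** 2)

result = differential_evolution(mismatch, seed=0, tol=1e-8,
                                bounds=[(0, 1), (0, 1), (0.01, 0.5)])
print(result.x)  # recovered defect location and size
```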
End-to-end learning for digital hologram reconstruction
NASA Astrophysics Data System (ADS)
Xu, Zhimin; Zuo, Si; Lam, Edmund Y.
2018-02-01
Digital holography is a well-known method to perform three-dimensional imaging by recording the light wavefront information originating from the object. Not only the intensity but also the phase distribution of the wavefront can then be computed from the recorded hologram in the numerical reconstruction process. However, reconstructions via the traditional methods suffer from various artifacts caused by the twin image, the zero-order term, and noise from image sensors. Here we demonstrate that an end-to-end deep neural network (DNN) can learn to perform both intensity and phase recovery directly from an intensity-only hologram. We show experimentally that the artifacts can be effectively suppressed. Meanwhile, our network does not need any preprocessing for initialization and is comparably fast to train and test relative to a recently published learning-based method. In addition, we validate that further performance improvement can be achieved by introducing a prior on sparsity.
Large-scale atomistic calculations of clusters in intense x-ray pulses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Phay J.; Knight, Chris
Here, we present the methodology of our recently developed Monte Carlo/molecular dynamics method for studying the fundamental ultrafast dynamics induced by high-fluence, high-intensity x-ray free-electron laser (XFEL) pulses in clusters. The quantum nature of the initiating ionization process is accounted for by a Monte Carlo method that calculates the probabilities of electronic transitions, including photoabsorption, inner-shell relaxation, photon scattering, electron collision, and recombination dynamics, and thus tracks the transient electronic configurations explicitly. The freed electrons and ions are followed by classical particle trajectories using a molecular dynamics algorithm. These calculations reveal the surprising role of electron-ion recombination processes that lead to the development of nonuniform spatial charge density profiles in x-ray-excited clusters over femtosecond timescales. In the high-intensity limit, it is important to include the recombination dynamics in the calculated scattering response even for a 2-fs pulse. We also demonstrate that our numerical codes and algorithms can make efficient use of the computational power of massively parallel supercomputers to investigate the intense-field dynamics in systems of increasing complexity and size at ultrafast timescales and in nonlinear x-ray interaction regimes. In particular, picosecond trajectories of XFEL clusters with attosecond time resolution containing millions of particles can be efficiently computed on upwards of 262,144 processes.
Defining the computational structure of the motion detector in Drosophila.
Clark, Damon A; Bursztyn, Limor; Horowitz, Mark A; Schnitzer, Mark J; Clandinin, Thomas R
2011-06-23
Many animals rely on visual motion detection for survival. Motion information is extracted from spatiotemporal intensity patterns on the retina, a paradigmatic neural computation. A phenomenological model, the Hassenstein-Reichardt correlator (HRC), relates visual inputs to neural activity and behavioral responses to motion, but the circuits that implement this computation remain unknown. By using cell-type specific genetic silencing, minimal motion stimuli, and in vivo calcium imaging, we examine two critical HRC inputs. These two pathways respond preferentially to light and dark moving edges. We demonstrate that these pathways perform overlapping but complementary subsets of the computations underlying the HRC. A numerical model implementing differential weighting of these operations displays the observed edge preferences. Intriguingly, these pathways are distinguished by their sensitivities to a stimulus correlation that corresponds to an illusory percept, "reverse phi," that affects many species. Thus, this computational architecture may be widely used to achieve edge selectivity in motion detection.
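The HRC itself is compact enough to sketch directly (a minimal textbook form of the correlator, not the paper's weighted numerical model):

```python
import numpy as np

def hrc_output(left, right, tau=1):
    """Minimal Hassenstein-Reichardt correlator: each arm multiplies one
    input by a delayed copy of its neighbor; subtracting the mirror arm
    yields a signed, direction-selective motion signal."""
    return np.mean(np.roll(left, tau) * right - left * np.roll(right, tau))

# Rightward-moving sinusoid sampled at two neighboring photoreceptors:
t = np.arange(200)
left = np.sin(0.2 * t)          # the signal reaches the left input first...
right = np.sin(0.2 * (t - 3))   # ...and the right input after a delay
print(hrc_output(left, right))  # positive: rightward motion detected
```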
NASA Astrophysics Data System (ADS)
Fauzi, Ahmad
2017-11-01
Numerical computation has many pedagogical advantages: it develops analytical and problem-solving skills, supports learning through visualization, and enhances physics education. Unfortunately, numerical computation is not taught to undergraduate physics education students in Indonesia. Incorporating numerical computation into the undergraduate physics education curriculum presents many challenges. The main challenges are a dense curriculum that makes it difficult to add a new numerical computation course, and the fact that most students have no programming experience. In this research, we used a case study to review how to integrate numerical computation into the undergraduate physics education curriculum. The participants of this research were 54 fourth-semester students of the physics education department. We concluded that numerical computation can be integrated into the undergraduate physics education curriculum by using Excel spreadsheets combined with another course. The results of this research complement studies on how to integrate numerical computation into physics learning using Excel spreadsheets.
Finite element analysis of hysteresis effects in piezoelectric transducers
NASA Astrophysics Data System (ADS)
Simkovics, Reinhard; Landes, Hermann; Kaltenbacher, Manfred; Hoffelner, Johann; Lerch, Reinhard
2000-06-01
The design of ultrasonic transducers for high-power applications, e.g. in medical therapy or production engineering, calls for effective computer-aided design tools to analyze the nonlinear effects that occur. In this paper the finite-element/boundary-element package CAPA is presented, which allows different types of electromechanical sensors and actuators to be modeled. These transducers are based on various physical coupling effects, such as piezoelectricity or magneto-mechanical interactions. Their computer modeling requires the numerical solution of a multifield problem, such as coupled electric-mechanical fields or magnetic-mechanical fields as well as coupled mechanical-acoustic fields. With the reported software environment we are able to compute the dynamic behavior of electromechanical sensors and actuators by taking into account geometric nonlinearities, nonlinear wave propagation, and ferroelectric as well as magnetic material nonlinearities. After a short introduction to the basic theory of the numerical calculation schemes, two practical examples demonstrate the applicability of the numerical simulation tool. As a first example, an ultrasonic thickness-mode transducer consisting of a piezoceramic material used for high-power ultrasound production is examined. Due to ferroelectric hysteresis, higher-order harmonics can be detected in the actuator's input current. Also, in the case of electrical and mechanical prestressing, a resonance frequency shift occurs, caused by ferroelectric hysteresis and nonlinear dependencies of the material coefficients on electric field and mechanical stresses. As a second example, a power ultrasound transducer used in HIFU therapy (high-intensity focused ultrasound) is presented. Due to the compressibility and losses in the propagating fluid, nonlinear shock wave generation can be observed. For both examples good agreement between numerical simulation and experimental data has been achieved.
NASA Astrophysics Data System (ADS)
Arifler, Dizem; MacAulay, Calum; Follen, Michele; Guillaud, Martial
2013-06-01
Dysplastic progression is known to be associated with changes in morphology and internal structure of cells. A detailed assessment of the influence of these changes on cellular scattering response is needed to develop and optimize optical diagnostic techniques. In this study, we first analyzed a set of quantitative histopathologic images from cervical biopsies and we obtained detailed information on morphometric and photometric features of segmented epithelial cell nuclei. Morphometric parameters included average size and eccentricity of the best-fit ellipse. Photometric parameters included optical density measures that can be related to dielectric properties and texture characteristics of the nuclei. These features enabled us to construct realistic three-dimensional computational models of basal, parabasal, intermediate, and superficial cell nuclei that were representative of four diagnostic categories, namely normal (or negative for dysplasia), mild dysplasia, moderate dysplasia, and severe dysplasia or carcinoma in situ. We then employed the finite-difference time-domain method, a popular numerical tool in electromagnetics, to compute the angle-resolved light scattering properties of these representative models. Results indicated that a high degree of variability can characterize a given diagnostic category, but scattering from moderately and severely dysplastic or cancerous nuclei was generally observed to be stronger compared to scattering from normal and mildly dysplastic nuclei. Simulation results also pointed to significant intensity level variations among different epithelial depths. This suggests that intensity changes associated with dysplastic progression need to be analyzed in a depth-dependent manner.
Adaptive Numerical Dissipation Control in High Order Schemes for Multi-D Non-Ideal MHD
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, B.
2005-01-01
The required type and amount of numerical dissipation/filter to accurately resolve all relevant multiscales of complex MHD unsteady high-speed shock/shear/turbulence/combustion problems are not only physical problem dependent, but also vary from one flow region to another. In addition, proper and efficient control of the divergence of the magnetic field (Div(B)) numerical error for high order shock-capturing methods poses extra requirements for the considered type of CPU intensive computations. The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that include viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free from numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears), and redundant multiresolution wavelets (WAV) (for the above types of flow feature). These filters also provide a natural and efficient way for the minimization of Div(B) numerical error.
A Numerical Model of Viscoelastic Layer Entrainment by Airflow in Cough
NASA Astrophysics Data System (ADS)
Mitran, Sorin M.
2008-07-01
Coughing is an alternative mode of ensuring mucus clearance in the lung when normal cilia induced flow breaks down. A numerical model of this process is presented with the following aspects. (1) A portion of the airway comprising the first three bronchus generations is modeled as radially reinforced elastic tubes. Elasticity equations are solved to predict airway deformation under effect of airway pressure. (2) The compressible, turbulent flow induced by rapid lung contraction is modeled by direct numerical simulation for Reynolds numbers in the range 5,000-10,000 and by Large Eddy Simulation for Reynolds numbers in the range 5,000-40,000. (3) A two-layer model of the airway surface liquid (ASL) covering the airway epithelial layer is used. The periciliary liquid (PCL) in direct contact with the epithelial layer is considered to be a Newtonian fluid. Forces modeling cilia beating can act upon this layer. The mucus layer between the PCL and the interior airflow is modeled as an Oldroyd-B fluid. The overall computation is a fluid-structure interaction simulation that tracks changes in ASL thickness and airway diameters that result from impulsive airflow boundary conditions imposed at bronchi ends. In particular, the amount of mucus that is evacuated from the system is computed as a function of cough intensity and mucus rheological properties.
Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units
NASA Astrophysics Data System (ADS)
Kemal, Jonathan Yashar
For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.
A multiplicative regularization for force reconstruction
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2017-02-01
Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter, that can be numerically computed using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this particular reason, it could be of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach in providing consistent reconstructions.
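The spirit of such a multiplicative scheme can be sketched in a few lines (our own generic illustration with the functional J(x) = ||Ax-b||^2 · ||Lx||^2, not the specific formulation of the paper):

```python
import numpy as np

def multiplicative_reg(A, b, L, n_iter=50):
    """Sketch of a multiplicative regularization: the stationarity condition
    of J(x) = ||Ax-b||^2 * ||Lx||^2 is (A^T A + alpha L^T L) x = A^T b with
    alpha = ||Ax-b||^2 / ||Lx||^2, so the regularization weight is
    re-computed from the current iterate instead of being fixed beforehand."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # unregularized start
    for _ in range(n_iter):
        F = np.sum((A @ x - b) ** 2)              # data-fit functional
        R = np.sum((L @ x) ** 2) + 1e-30          # regularization functional
        x = np.linalg.solve(A.T @ A + (F / R) * (L.T @ L), A.T @ b)
    return x
```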
Edge detection based on computational ghost imaging with structured illuminations
NASA Astrophysics Data System (ADS)
Yuan, Sheng; Xiang, Dong; Liu, Xuemei; Zhou, Xin; Bing, Pibin
2018-03-01
Edge detection is one of the most important tools to recognize the features of an object. In this paper, we propose an optical edge detection method based on computational ghost imaging (CGI) with structured illuminations which are generated by an interference system. The structured intensity patterns are designed to make the edge of an object be directly imaged from detected data in CGI. This edge detection method can extract the boundaries for both binary and grayscale objects in any direction at one time. We also numerically test the influence of distance deviations in the interference system on edge extraction, i.e., the tolerance of the optical edge detection system to distance deviation. Hopefully, it may provide a guideline for scholars to build an experimental system.
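The underlying CGI correlation is simple to sketch (a sketch of plain random-pattern CGI followed by a numerical gradient; the paper's interference-generated structured patterns instead yield the edge directly from the detected data):

```python
import numpy as np

rng = np.random.default_rng(0)
obj = np.zeros((32, 32))
obj[8:24, 8:24] = 1.0                              # simple binary object

# Computational ghost imaging: project known random patterns, record the
# bucket (total intensity) values, then correlate patterns with buckets.
n = 4000
patterns = rng.random((n, 32, 32))
bucket = patterns.reshape(n, -1) @ obj.ravel()     # single-pixel detector
image = np.tensordot(bucket - bucket.mean(),
                     patterns - patterns.mean(axis=0), axes=1) / n
gy, gx = np.gradient(image)
edges = np.abs(gy) + np.abs(gx)                    # post-hoc edge map
```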
A Numerical Simulation and Statistical Modeling of High Intensity Radiated Fields Experiment Data
NASA Technical Reports Server (NTRS)
Smith, Laura J.
2004-01-01
Tests are conducted on a quad-redundant fault-tolerant flight control computer to establish the upset characteristics of an avionics system in an electromagnetic field. A numerical simulation and a statistical model are described in this work to analyze the open-loop experiment data collected in the reverberation chamber at NASA LaRC as part of an effort to examine the effects of electromagnetic interference on fly-by-wire aircraft control systems. By comparing thousands of simulation and model outputs, the models that best describe the data are first identified, and a systematic statistical analysis is then performed on the data. These combined efforts culminate in an extrapolation of values that are, in turn, used to support previous efforts in evaluating the data.
Computation of the bluff-body sound generation by a self-consistent mean flow formulation
NASA Astrophysics Data System (ADS)
Fani, A.; Citro, V.; Giannetti, F.; Auteri, F.
2018-03-01
The sound generated by the flow around a circular cylinder is numerically investigated using a finite-element method. In particular, we study the acoustic emissions generated by the flow past the bluff body at low Mach and Reynolds numbers. We perform a global stability analysis using the compressible linearized Navier-Stokes equations. The resulting direct global mode provides detailed information on the underlying hydrodynamic instability and on the acoustic field generated. In order to recover the intensity of the produced sound, we apply the self-consistent model for nonlinear saturation proposed by Mantič-Lugo, Arratia, and Gallaire ["Self-consistent mean flow description of the nonlinear saturation of the vortex shedding in the cylinder wake," Phys. Rev. Lett. 113, 084501 (2014)]. The application of this model allows us to compute the amplitude of the resulting linear mode and the effects of saturation on the mode structure and acoustic field. Our results show excellent agreement with those obtained by a full compressible direct numerical simulation and with those derived from classical acoustic analogy formulations.
Al-Ruqaie, I.; Al-Khalifah, N.S.; Shanavaskhan, A.E.
2015-01-01
Varietal identification of olives is an intrinsic and empirical exercise owing to the large number of synonyms and homonyms, intensive exchange of genotypes, presence of varietal clones, and lack of proper certification in nurseries. A comparative study of the morphological characters of eight olive cultivars grown in Saudi Arabia was carried out and analyzed using the NTSYSpc (Numerical Taxonomy System for personal computer) system, which segregated smaller fruits into one clade and the rest into two clades. Koroneiki, a Greek cultivar with small fruit, shared an arm with the Spanish variety Arbosana. Morphologic analysis using NTSYSpc revealed that the biometrics of leaves, fruits, and seeds are reliable morphologic characters to distinguish between varieties, except for a few morphologically very similar olive cultivars. The proximate analysis showed significant variations in the protein, fiber, crude fat, ash, and moisture content of the different cultivars. The study also showed that neither the size of the fruit nor the fruit pulp thickness is a limiting factor determining the crude fat content of olives. PMID:26858547
Constrained evolution in numerical relativity
NASA Astrophysics Data System (ADS)
Anderson, Matthew William
The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.
NASA Technical Reports Server (NTRS)
Poinsot, Thierry J.
1993-01-01
Understanding and modeling of turbulent combustion are key problems in the computation of numerous practical systems. Because of the lack of analytical theories in this field and of the difficulty of performing precise experiments, direct numerical simulation (DNS) appears to be one of the most attractive tools to use in addressing this problem. The general objective of DNS of reacting flows is to improve our knowledge of turbulent combustion but also to use this information for turbulent combustion models. For the foreseeable future, numerical simulation of the full three-dimensional governing partial differential equations with variable density and transport properties as well as complex chemistry will remain intractable; thus, various levels of simplification will remain necessary. On one hand, the requirement to simplify is not necessarily a handicap: numerical simulations allow the researcher a degree of control in isolating specific physical phenomena that is inaccessible in experiments. CTR has pursued an intensive research program in the field of DNS for turbulent reacting flows since 1987. DNS of reacting flows is quite different from DNS of non-reacting flows: without reaction, the equations to solve are clearly the five conservation equations of the Navier Stokes system for compressible situations (four for incompressible cases), and the limitation of the approach is the Reynolds number (or in other words the number of points in the computation). For reacting flows, the choice of the equations, the species (each species will require one additional conservation equation), the chemical scheme, and the configuration itself is more complex.
ERIC Educational Resources Information Center
Sinn, John W.
This instructional manual contains five learning activity packets for use in a workshop on computer numerical control for computer-aided manufacturing. The lessons cover the following topics: introduction to computer-aided manufacturing, understanding the lathe, using the computer, computer numerically controlled part programming, and executing a…
Optimization methods and silicon solar cell numerical models
NASA Technical Reports Server (NTRS)
Girardini, K.; Jacobsen, S. E.
1986-01-01
An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.
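The coupling described here can be sketched with a derivative-free optimizer wrapping an expensive model (a sketch under our own assumptions: `cell_efficiency` is a hypothetical analytic stand-in for one simulator run, not the SCAP1D model or the authors' custom algorithm):

```python
from scipy.optimize import minimize

def cell_efficiency(x):
    """Hypothetical stand-in for one expensive simulator call: maps design
    variables (log10 doping, junction depth, thickness; arbitrary units)
    to a predicted efficiency. In practice this is the numerical model."""
    doping, depth, thickness = x
    return (0.18 - 0.01 * (doping - 16.5) ** 2
                 - 0.02 * (depth - 0.30) ** 2
                 - 0.005 * (thickness - 0.20) ** 2)

# Derivative-free search keeps the number of simulator calls (res.nfev) low,
# which is the whole point when each evaluation is computationally intensive.
res = minimize(lambda x: -cell_efficiency(x), x0=[16.0, 0.5, 0.3],
               method="Nelder-Mead", options={"xatol": 1e-4, "fatol": 1e-7})
print(res.x, -res.fun, res.nfev)
```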
Analytic Formulation and Numerical Implementation of an Acoustic Pressure Gradient Prediction
NASA Technical Reports Server (NTRS)
Lee, Seongkyu; Brentner, Kenneth S.; Farassat, Fereidoun
2007-01-01
The scattering of rotor noise is an area that has received little attention over the years, yet the limited work that has been done has shown that both the directivity and intensity of the acoustic field may be significantly modified by the presence of scattering bodies. One of the inputs needed to compute the scattered acoustic field is the acoustic pressure gradient on a scattering surface. Two new analytical formulations of the acoustic pressure gradient have been developed and implemented in the PSU-WOPWOP rotor noise prediction code. These formulations are presented in this paper. The first formulation is derived by taking the gradient of Farassat's retarded-time Formulation 1A. Although this formulation is relatively simple, it requires numerical time differentiation of the acoustic integrals. In the second formulation, the time differentiation is taken inside the integrals analytically. The acoustic pressure gradient predicted by these new formulations is validated through comparison with the acoustic pressure gradient determined by a purely numerical approach for two model rotors. The agreement between analytic formulations and numerical method is excellent for both stationary and moving observers case.
NASA Technical Reports Server (NTRS)
King, H. F.; Komornicki, A.
1986-01-01
Formulas are presented relating Taylor series expansion coefficients of three functions of several variables: the energy of the trial wave function (W), the energy computed using the optimized variational wave function (E), and the response function (lambda), under certain conditions. Partial derivatives of lambda are obtained through the solution of a recursive system of linear equations, and solution through order n yields derivatives of E through order 2n + 1, extending Pulay's application of Wigner's 2n + 1 rule to partial derivatives in coupled perturbation theory. An examination of numerical accuracy shows that the usual two-term second-derivative formula is less stable than an alternative four-term formula, and that previous claims that energy derivatives are stationary properties of the wave function are fallacious. The results have application to quantum theoretical methods for the computation of derivative properties such as infrared frequencies and intensities.
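Schematically, the standard 2n + 1 rule that this work extends can be stated as follows (a schematic of the familiar rule, not the paper's generalized result):

```latex
% Wigner's 2n+1 rule: response (wave-function) derivatives through order n
% determine energy derivatives through order 2n+1.
\left\{\lambda^{(0)}, \lambda^{(1)}, \dots, \lambda^{(n)}\right\}
\;\Longrightarrow\;
\left\{E^{(0)}, E^{(1)}, \dots, E^{(2n+1)}\right\}
```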
Probabilistic Structural Analysis Theory Development
NASA Technical Reports Server (NTRS)
Burnside, O. H.
1985-01-01
The objective of the Probabilistic Structural Analysis Methods (PSAM) project is to develop analysis techniques and computer programs for predicting the probabilistic response of critical structural components for current and future space propulsion systems. This technology will play a central role in establishing system performance and durability. The first year's technical activity is concentrating on probabilistic finite element formulation strategy and code development. Work is also in progress to survey critical materials and Space Shuttle main engine components. The probabilistic finite element computer program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) is being developed. The final probabilistic code will have, in the general case, the capability of performing nonlinear dynamic analysis of stochastic structures. It is the goal of the approximate methods effort to increase problem-solving efficiency relative to finite element methods by using energy methods to generate trial solutions that satisfy the structural boundary conditions. These approximate methods will be less computer-intensive than the finite element approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fabain, R.T.
1994-05-16
A rock strength analysis program, through intensive log analysis, can quantify rock hardness in terms of confined compressive strength to identify intervals suited for drilling with polycrystalline diamond compact (PDC) bits. Additionally, knowing the confined compressive strength helps determine the optimum PDC bit for the intervals. Computing rock strength as confined compressive strength can characterize a rock's actual hardness downhole more accurately than other methods. The information can be used to improve bit selection and to help adjust drilling parameters to reduce drilling costs. Empirical data compiled from numerous field strength analyses have provided a guide to selecting PDC drill bits. A computer analysis program has been developed to aid in PDC bit selection. The program more accurately defines rock hardness in terms of confined strength, which approximates the in situ rock hardness downhole. Unconfined compressive strength is rock hardness at atmospheric pressure. The program uses sonic and gamma ray logs as well as numerous input data from mud logs. Within the range of lithologies for which the program is valid, rock hardness can be determined with improved accuracy. The program's output is typically graphed in a log format displaying raw data traces from well logs, computer-interpreted lithology, the calculated values of confined compressive strength, and various optional rock mechanics outputs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, Kelly; Budge, Kent; Lowrie, Rob
2016-03-03
Draco is an object-oriented component library geared towards numerically intensive, radiation (particle) transport applications built for parallel computing hardware. It consists of semi-independent packages and a robust build system. The packages in Draco provide a set of components that can be used by multiple clients to build transport codes. The build system can also be extracted for use in clients. Software includes smart pointers, Design-by-Contract assertions, unit test framework, wrapped MPI functions, a file parser, unstructured mesh data structures, a random number generator, root finders and an angular quadrature component.
Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan
2016-01-01
Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity, and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical modeling. PMID:27044039
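For contrast with the three algorithms above, a naive cost-balanced strip decomposition can be written in a few lines (a baseline sketch of our own, not any of the published ILP, K&K, or ASRG methods):

```python
import numpy as np

def strip_partition(cost_per_row, n_nodes):
    """Greedy 1-D strip decomposition: assign contiguous rows of the study
    area to nodes so each node's summed computing cost is roughly equal.
    Ignores communication cost, which the published allocations optimize."""
    frac = np.cumsum(cost_per_row) / np.sum(cost_per_row)     # in (0, 1]
    return np.minimum((frac * n_nodes).astype(int), n_nodes - 1)

rows = np.random.default_rng(2).uniform(1.0, 3.0, size=240)   # per-row cost
print(np.bincount(strip_partition(rows, 8)))  # rows assigned to each node
```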
A revised ground-motion and intensity interpolation scheme for shakemap
Worden, C.B.; Wald, D.J.; Allen, T.I.; Lin, K.; Garcia, D.; Cua, G.
2010-01-01
We describe a weighted-average approach for incorporating various types of data (observed peak ground motions and intensities, and estimates from ground-motion prediction equations) into the ShakeMap ground motion and intensity mapping framework. This approach represents a fundamental revision of our existing ShakeMap methodology. In addition, the increased availability of near-real-time macroseismic intensity data, the development of new relationships between intensity and peak ground motions, and new relationships to directly predict intensity from earthquake source information have facilitated the inclusion of intensity measurements directly into ShakeMap computations. Our approach allows for the combination of (1) direct observations (ground-motion measurements or reported intensities), (2) observations converted from intensity to ground motion (or vice versa), and (3) estimated ground motions and intensities from prediction equations or numerical models. Critically, each of the aforementioned data types must include an estimate of its uncertainties, including those caused by scaling the influence of observations to surrounding grid points and those associated with estimates given an unknown fault geometry. The ShakeMap ground-motion and intensity estimates are an uncertainty-weighted combination of these various data and estimates. A natural by-product of this interpolation process is an estimate of total uncertainty at each point on the map, which can be vital for comprehensive inventory loss calculations. We perform a number of tests to validate this new methodology and find that it produces a substantial improvement in the accuracy of ground-motion predictions over empirical prediction equations alone.
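The core of an uncertainty-weighted combination is standard inverse-variance weighting, sketched below (a minimal sketch; the actual ShakeMap scheme additionally scales each observation's influence with distance to the grid point):

```python
import numpy as np

def combine(values, sigmas):
    """Inverse-variance (uncertainty-weighted) combination of co-located
    estimates; the combined sigma is the 'total uncertainty' by-product."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    v = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
    return v, np.sqrt(1.0 / np.sum(w))

# A direct PGA observation, a converted-intensity estimate, and a GMPE value:
print(combine([0.21, 0.26, 0.18], [0.05, 0.12, 0.20]))
```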
Proposal for nanoscale cascaded plasmonic majority gates for non-Boolean computation.
Dutta, Sourav; Zografos, Odysseas; Gurunarayanan, Surya; Radu, Iuliana; Soree, Bart; Catthoor, Francky; Naeemi, Azad
2017-12-19
Surface-plasmon-polariton waves propagating at the interface between a metal and a dielectric hold the key to future high-bandwidth, dense on-chip integrated logic circuits that overcome the diffraction limit of photonics. While recent advances in plasmonic logic have witnessed the demonstration of basic and universal logic gates, these CMOS-oriented digital logic gates cannot fully utilize the expressive power of this novel technology. Here, we aim at unraveling the true potential of plasmonics by exploiting an enhanced native functionality, the majority voter. Contrary to state-of-the-art plasmonic logic devices, we use the phase of the wave instead of the intensity as the state or computational variable. We propose and demonstrate, via numerical simulations, a comprehensive scheme for building a nanoscale cascadable plasmonic majority logic gate, along with a novel referencing scheme that can directly translate the information encoded in the amplitude and phase of the wave into electric field intensity at the output. Our MIM-based 3-input majority gate displays a highly improved overall area of only 0.636 μm² for a single stage compared with previous works on plasmonic logic. The proposed device demonstrates non-Boolean computational capability and can find direct utility in highly parallel real-time signal processing applications such as pattern recognition.
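The phase-encoded majority principle can be checked with simple wave superposition (an idealized sketch of the encoding, not a simulation of the plasmonic device): mapping logic 0/1 to phase 0/π, three unit-amplitude waves sum to amplitude ±1 or ±3, whose sign carries the majority vote.

```python
import numpy as np
from itertools import product

def phase_majority(bits):
    """Phase-encoded majority vote: logic 0/1 as phase 0/pi. The sign
    (phase) of the superposed wave equals the majority input's phase."""
    total = sum(np.exp(1j * np.pi * b) for b in bits)
    return int(np.real(total) < 0)

# Verify against the Boolean majority truth table:
for bits in product([0, 1], repeat=3):
    assert phase_majority(bits) == (sum(bits) >= 2)
```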
NASA Astrophysics Data System (ADS)
Shaari, M. S.; Akramin, M. R. M.; Ariffin, A. K.; Abdullah, S.; Kikuchi, M.
2018-02-01
This paper presents the fatigue crack growth (FCG) behavior of semi-elliptical surface cracks in an API X65 gas pipeline using the S-version FEM. A global-local overlay technique was used to predict the fatigue behavior, involving two separate meshes, one for the global geometry and one for the local crack. A pre/post-processing program known as FAST was used to model the global geometry (coarser mesh), including the material and boundary conditions. The local crack model (finer mesh) then defines the exact crack location and the mesh control accordingly. The local mesh was overlaid on the global mesh before the numerical computation takes place to solve the engineering problem. The stress intensity factors were computed using the virtual crack closure-integral method (VCCM). The most important result is the behavior of the fatigue crack growth, which comprises the crack depth (a), the crack length (c), and the stress intensity factors (SIF). The correlation between the fatigue crack growth and the SIF shows steady growth in the crack depth (a) but dissimilar behavior in the crack length (c), where stunted growth resulted. The S-version FEM benefits the user through the overlay technique, which shortens the computation process.
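For orientation, the generic relation between SIF range and crack growth can be integrated cycle by cycle (a textbook Paris-law sketch under our own assumed parameter values, not the S-version FEM / VCCM computation of the paper):

```python
import numpy as np

def paris_crack_growth(a0, cycles, C=1e-11, m=3.0, dsigma=100.0, Y=1.12):
    """Euler integration of the Paris law da/dN = C * (dK)^m with
    dK = Y * dsigma * sqrt(pi * a). Assumed units: a in m, dsigma in MPa,
    C in m/cycle per (MPa*sqrt(m))^m; all values are illustrative."""
    a = a0
    history = [a]
    for _ in range(cycles):
        dK = Y * dsigma * np.sqrt(np.pi * a)   # stress intensity factor range
        a += C * dK ** m                       # crack increment this cycle
        history.append(a)
    return np.array(history)

a = paris_crack_growth(a0=1e-3, cycles=200_000)
print(a[-1])  # final crack depth in metres
```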
Efficient variable time-stepping scheme for intense field-atom interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cerjan, C.; Kosloff, R.
1993-03-01
The recently developed Residuum method [Tal-Ezer, Kosloff, and Cerjan, J. Comput. Phys. 100, 179 (1992)], a Krylov subspace technique with variable time-step integration for the solution of the time-dependent Schroedinger equation, is applied to the frequently used soft Coulomb potential in an intense laser field. This one-dimensional potential has asymptotic Coulomb dependence with a "softened" singularity at the origin; thus it models more realistic phenomena. Two of the more important quantities usually calculated in this idealized system are the photoelectron and harmonic photon generation spectra. These quantities are shown to be sensitive to the choice of a numerical integration scheme: some spectral features are incorrectly calculated or missing altogether. Furthermore, the Residuum method allows much larger grid spacings for equivalent or higher accuracy in addition to the advantages of variable time stepping. Finally, it is demonstrated that enhanced high-order harmonic generation accompanies intense field stabilization and that preparation of the atom in an intermediate Rydberg state leads to stabilization at much lower laser intensity.
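A minimal sketch of a Krylov (Lanczos) short-time propagator for the 1D soft-Coulomb TDSE is shown below, under simplifying assumptions (atomic units, a 3-point Laplacian on a periodic grid, fixed Krylov dimension). It illustrates the Krylov idea only; the Residuum method's adaptive time-step and error control are not reproduced.

```python
import numpy as np

# 1D grid and the soft Coulomb potential V(x) = -1/sqrt(x^2 + a^2)
N, L, a = 512, 100.0, 1.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = -1.0 / np.sqrt(x**2 + a**2)

def apply_H(psi, E_field=0.0):
    """H*psi with H = -0.5 d2/dx2 + V(x) + E(t)*x (periodic 3-point Laplacian)."""
    lap = (np.roll(psi, 1) - 2.0 * psi + np.roll(psi, -1)) / dx**2
    return -0.5 * lap + (V + E_field * x) * psi

def lanczos_step(psi, dt, k=12, E_field=0.0):
    """Advance psi by exp(-i*H*dt) approximated in a k-dimensional Krylov space."""
    norm0 = np.linalg.norm(psi)
    Q = np.zeros((k, psi.size), dtype=complex)
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    Q[0] = psi / norm0
    for j in range(k):
        w = apply_H(Q[j], E_field)
        alpha[j] = np.vdot(Q[j], w).real
        w = w - alpha[j] * Q[j]
        if j > 0:
            w = w - beta[j - 1] * Q[j - 1]
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[j + 1] = w / beta[j]
    # exponentiate the small tridiagonal projection of H
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)
    coeffs = evecs @ (np.exp(-1j * evals * dt) * evecs[0])
    return norm0 * (Q.T @ coeffs)
```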
NASA Astrophysics Data System (ADS)
Zeng, Chunhua; Zhou, Xiaofeng; Tao, Shufen
2009-12-01
The transient properties of a tumor cell growth model with immune surveillance driven by cross-correlated multiplicative and additive noises are investigated. The explicit expression of the extinction rate from the state of a stable tumor to the state of extinction is obtained. Based on the numerical computations, we find the following: (i) the intensity of the multiplicative noise D and the intensity of the additive noise α enhance the extinction rate for the case of λ <= 0 (where λ denotes the cross-correlation intensity between the two noises); for the case of λ > 0, however, a critical noise intensity D or α exists at which the extinction rate is smallest, i.e. D and α at first weaken the extinction rate and then enhance it. (ii) The immune rate β and the cross-correlation intensity λ play opposite roles on the extinction rate: β enhances the extinction rate of the tumor cell, while λ weakens it. Namely, the immune rate can enhance the extinction of the tumor cell, and the cross-correlation between the two noises can enhance the stability of the cancer state.
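A hedged sketch of how such a model can be simulated is given below: an Euler-Maruyama integration of a generic tumor-growth Langevin equation with cross-correlated multiplicative and additive noises. The drift term and parameter names are illustrative, not the exact model of the paper.

```python
import numpy as np

def simulate_tumor(x0, T, dt, D, alpha, lam, beta, theta=0.1, seed=0):
    """Euler-Maruyama for an illustrative tumor-growth Langevin model
    dx = [x(1 - theta*x) - beta*x/(1 + x)] dt + x dW1 + dW2,
    with <dW1 dW2> = 2*lam*sqrt(D*alpha) dt and |lam| <= 1.
    """
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        z1, z2 = rng.standard_normal(2)       # two correlated increments
        dw1 = np.sqrt(2 * D * dt) * z1
        dw2 = np.sqrt(2 * alpha * dt) * (lam * z1 + np.sqrt(1 - lam**2) * z2)
        drift = x[i] * (1 - theta * x[i]) - beta * x[i] / (1 + x[i])
        x[i + 1] = max(x[i] + drift * dt + x[i] * dw1 + dw2, 0.0)
    return x
```

The first hitting time of x = 0 over many realizations gives a Monte Carlo estimate of the extinction statistics discussed in the abstract.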
Numerical investigation of solid mixing in a fluidized bed coating process
NASA Astrophysics Data System (ADS)
Kenche, Venkatakrishna; Feng, Yuqing; Ying, Danyang; Solnordal, Chris; Lim, Seng; Witt, Peter J.
2013-06-01
Fluidized beds are widely used in many process industries including the food and pharmaceutical sectors. Despite intensive research in this area, there are no design rules or correlations that can be used to quantitatively predict the solid mixing in a specific system for a given set of operating conditions. This paper presents a numerical study of the gas and solid dynamics in a laboratory-scale fluidized bed coating process used in the food and pharmaceutical industries. An Eulerian-Eulerian model (EEM) with kinetic theory of granular flow is selected as the modeling technique, with the commercial computational fluid dynamics (CFD) software package ANSYS/Fluent being the numerical platform. The flow structure is investigated in terms of the spatial distribution of gas and solid flow. The solid mixing has been evaluated under different operating conditions. It was found that the solid mixing rate in the horizontal direction is similar to that in the vertical direction under the current design and operating conditions. It takes about 5 s to achieve good mixing.
Research in applied mathematics, numerical analysis, and computer science
NASA Technical Reports Server (NTRS)
1984-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.
Analysis of deformable image registration accuracy using computational modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong Hualiang; Kim, Jinkoo; Chetty, Indrin J.
2010-03-15
Computer aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: one, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations has been quantitatively evaluated using numerical phantoms. The results show that parameter selection for optimal accuracy is closely related to the intensity gradients of the underlying images. Also, the result that the DIR algorithms produce much lower errors in heterogeneous lung regions relative to homogeneous (low intensity gradient) regions suggests that feature-based evaluation of deformable image registration accuracy must be viewed cautiously.
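A benchmark comparison of this kind reduces to a simple error metric once the FEM displacement field is available. A minimal sketch follows; the array shapes are assumptions for illustration, not from the paper.

```python
import numpy as np

def registration_error(u_dir, u_benchmark, mask=None):
    """Mean displacement error (mm) of a DIR result against a FEM benchmark.

    u_dir, u_benchmark : arrays of shape (nx, ny, nz, 3), displacements in mm
    mask : optional boolean array selecting a region (e.g. lung only)
    """
    err = np.linalg.norm(u_dir - u_benchmark, axis=-1)  # per-voxel error magnitude
    return err[mask].mean() if mask is not None else err.mean()
```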
Summary of research in applied mathematics, numerical analysis, and computer sciences
NASA Technical Reports Server (NTRS)
1986-01-01
The major categories of current ICASE research programs addressed include: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effective numerical methods; computational problems in engineering and physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and computer systems and software, especially vector and parallel computers.
Investigation of Positively Curved Blade in Compressor Cascade Based on Transition Model
NASA Astrophysics Data System (ADS)
Chen, Shaowen; Lan, Yunhe; Zhou, Zhihua; Wang, Songtao
2016-06-01
Experiments and numerical simulations of flow transition in a compressor cascade with a positively curved blade are carried out at low speed. In the experimental investigation, the outlet aerodynamic parameters are measured using a five-hole aerodynamic probe, and ink-trace flow visualization is applied to the cascade surface. The effects of transition flow on the boundary layer development, three-dimensional flow separation and aerodynamic performance are studied. The feasibility of a commercial computational fluid dynamics code is validated and the numerical results show good agreement with the experimental data. The positive blade curving intensifies the radial force from the endwalls to the mid-span near the suction surface, which leads to a smaller extent of the intermittent region, lower turbulence intensity and a shorter radial height of the separation bubble near the endwalls, but has little influence on the flow near the mid-span. The large passage vortex is divided into two smaller shedding vortices under the impact of the radial pressure gradient due to the positively curved blade. The new concentrated shedding vortex results in an increase in the turbulence intensity and secondary flow loss of the corresponding region.
Theoretical investigations on a class of double-focus planar lens on the anisotropic material
NASA Astrophysics Data System (ADS)
Bozorgi, Mahdieh; Atlasbaf, Zahra
2017-05-01
We study a double-focus lens composed of V-shaped plasmonic nano-antennas (VSPNAs) on an anisotropic TiO2 thin film. The phase and amplitude variations of the cross-polarized scattered wave from a unit cell are computed with a fast Method of Moments (MoM) developed here, in which the dyadic Green's function is evaluated with the transmission line model in the spectral domain. Using the calculated phase and amplitude diagrams, a double-focus lens on the anisotropic thin film is designed at 2 μm. To validate the numerical results, the designed lens is analysed using a full-wave EM-solver. The obtained results show a tunable asymmetric behavior in the focusing intensity of the focal spots for different incident polarizations. It is shown that changing the thickness of the anisotropic thin film changes this asymmetric behavior as well as the intensity ratio of the two focal spots. In addition, the lens performance is examined over the broadband wavelength range from 1.76 to 2.86 μm. It is found that increasing the wavelength decreases the focal distances of the designed lens and increases its numerical aperture (NA).
Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis
LI, HUAMIN; LINDERMAN, GEORGE C.; SZLAM, ARTHUR; STANTON, KELLY P.; KLUGER, YUVAL; TYGERT, MARK
2017-01-01
Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. This article presents an essentially black-box, foolproof implementation for MathWorks' MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138
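The core idea of such randomized methods can be sketched in a few lines. The following is a generic randomized SVD in the Halko-Martinsson-Tropp style, not the tuned Algorithm 971 implementation; parameter names are illustrative.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, n_iter=2, seed=0):
    """Basic randomized low-rank SVD: project onto a random subspace,
    orthonormalize, then take a small exact SVD of the projected matrix."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))  # random test matrix
    Y = A @ Omega
    for _ in range(n_iter):            # power iterations sharpen the spectrum
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)             # orthonormal basis for the range of A
    B = Q.T @ A                        # small (k + oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

U, s, Vt = randomized_svd(np.random.default_rng(1).standard_normal((500, 300)), k=10)
```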
Energy transfer by radiation in non-grey atomic gases in isothermal and non-isothermal slabs
NASA Technical Reports Server (NTRS)
Poon, P. T. Y.
1975-01-01
A multiband model for the absorption coefficient of atomic hydrogen-helium plasmas is constructed which includes continuum and line contributions. Emission from the 28 strongest of 106 screened lines is considered, of which 21 are from hydrogen and 7 from helium, with reabsorption due to line-line and line-continuum overlap accurately accounted for. The model is utilized in the computation of intensities and fluxes from shock-heated slabs of 85% H2-15% He mixtures for slab thicknesses from 1 to 30 cm, temperatures from 10,000 to 20,000 K, and for different densities. In conjunction with the multiband model, simple numerical schemes have been devised which provide a quick and comprehensive way of computing radiative energy transfer in nonisothermal and nongrey gases.
NASA Technical Reports Server (NTRS)
Huang, Junji; Duan, Lian; Choudhari, Meelan M.
2017-01-01
The acoustic radiation from the turbulent boundary layer on the nozzle wall of a Mach 6 Ludwieg Tube is simulated using Direct Numerical Simulations (DNS), with the flow conditions falling within the operational range of the Mach 6 Hypersonic Ludwieg Tube, Braunschweig (HLB). The mean and turbulence statistics of the nozzle-wall boundary layer show good agreement with those predicted by Pate's correlation and Reynolds Averaged Navier-Stokes (RANS) computations. The rms pressure fluctuation P'_rms/T_w plateaus in the freestream core of the nozzle. The intensity of the freestream noise within the nozzle is approximately 20% higher than that radiated from a single flat plate with a similar freestream Mach number, potentially because of the contributions to the acoustic radiation from multiple azimuthal segments of the nozzle wall.
NASA Astrophysics Data System (ADS)
Mokem Fokou, I. S.; Nono Dueyou Buckjohn, C.; Siewe Siewe, M.; Tchawoua, C.
2018-03-01
In this manuscript, a hybrid energy harvesting system combining piezoelectric and electromagnetic transduction, subjected to colored noise, is investigated. Using the stochastic averaging method, the stationary probability density functions of the amplitudes are obtained and reveal interesting dynamics related to the long-term behavior of the device. From the stationary probability densities, we discuss the stochastic bifurcation through the qualitative changes, showing that the noise intensity, correlation time and other system parameters can be treated as bifurcation parameters. Numerical simulations are performed for comparison with the analytical findings. The mean first passage time (MFPT) is computed numerically in order to investigate the system stability. By computing the mean residence time (TMR), we explore the stochastic resonance phenomenon and show how it is related to the correlation time of the colored noise and to high output power.
NASA Astrophysics Data System (ADS)
Liu, Chao; Wang, Famei; Zheng, Shijie; Sun, Tao; Lv, Jingwei; Liu, Qiang; Yang, Lin; Mu, Haiwei; Chu, Paul K.
2016-07-01
A highly birefringent photonic crystal fibre based surface plasmon resonance sensor is proposed and characterized. The birefringence of the sensor is numerically analyzed by the finite-element method. In the numerical simulation, the resonance wavelength can be located directly at the abrupt change point of the birefringence, and the depth of this abrupt change reflects the intensity of the excited surface plasmon. Consequently, the novel approach can accurately locate the resonance peak of the system without analyzing the loss spectrum. The simulated average sensitivity is as high as 1131 nm/RIU, corresponding to a resolution of 1 × 10-4 RIU for this sensor. Results obtained via this approach not only show polarization independence and lower noble metal consumption, but also reveal better performance in terms of accuracy and computational efficiency.
A probabilistic method for constructing wave time-series at inshore locations using model scenarios
Long, Joseph W.; Plant, Nathaniel G.; Dalyander, P. Soupy; Thompson, David M.
2014-01-01
Continuous time-series of wave characteristics (height, period, and direction) are constructed using a base set of model scenarios and simple probabilistic methods. This approach utilizes an archive of computationally intensive, highly spatially resolved numerical wave model output to develop time-series of historical or future wave conditions without performing additional, continuous numerical simulations. The archive of model output contains wave simulations from a set of model scenarios derived from an offshore wave climatology. Time-series of wave height, period, direction, and associated uncertainties are constructed at locations included in the numerical model domain. The confidence limits are derived using statistical variability of oceanographic parameters contained in the wave model scenarios. The method was applied to a region in the northern Gulf of Mexico and assessed using wave observations at 12 m and 30 m water depths. Prediction skill for significant wave height is 0.58 and 0.67 at the 12 m and 30 m locations, respectively, with similar performance for wave period and direction. The skill of this simplified, probabilistic time-series construction method is comparable to existing large-scale, high-fidelity operational wave models but provides higher spatial resolution output at low computational expense. The constructed time-series can be developed to support a variety of applications including climate studies and other situations where a comprehensive survey of wave impacts on the coastal area is of interest.
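The essence of reusing an archive of precomputed scenarios can be sketched as a lookup. The following minimal Python stand-in returns the archived inshore output of the nearest offshore scenario; it omits the paper's probabilistic weighting and uncertainty propagation, and all names and shapes are assumptions for illustration.

```python
import numpy as np

def inshore_wave(offshore_obs, scenario_inputs, scenario_outputs):
    """Look up inshore wave conditions from a precomputed scenario archive.

    offshore_obs     : (Hs, Tp, Dir) observed offshore at one time step
    scenario_inputs  : (n_scenarios, 3) offshore conditions that drove the model
    scenario_outputs : (n_scenarios, ...) archived inshore model output
    """
    # Euclidean distance in (normalized) offshore-parameter space
    d = np.linalg.norm(scenario_inputs - np.asarray(offshore_obs), axis=1)
    return scenario_outputs[np.argmin(d)]
```

Repeating the lookup for every time step in an offshore record yields a continuous inshore time-series without any additional wave-model runs, which is the computational saving the abstract describes.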
Modeling and Validation of Microwave Ablations with Internal Vaporization
Chiang, Jason; Birla, Sohan; Bedoya, Mariajose; Jones, David; Subbiah, Jeyam; Brace, Christopher L.
2014-01-01
Numerical simulation is increasingly being utilized for computer-aided design of treatment devices, analysis of ablation growth, and clinical treatment planning. Simulation models to date have incorporated electromagnetic wave propagation and heat conduction, but not other relevant physics such as water vaporization and mass transfer. Such physical changes are particularly noteworthy during the intense heat generation associated with microwave heating. In this work, a numerical model was created that integrates microwave heating with water vapor generation and transport by using porous media assumptions in the tissue domain. The heating physics of the water vapor model was validated through temperature measurements taken at locations 5, 10 and 20 mm away from the heating zone of the microwave antenna in a homogenized ex vivo bovine liver setup. Cross-sectional area of water vapor transport was validated through intra-procedural computed tomography (CT) during microwave ablations in homogenized ex vivo bovine liver. Iso-density contours from CT images were compared to vapor concentration contours from the numerical model at intermittent time points using the Jaccard Index. In general, there was an improving correlation in ablation size dimensions as the ablation procedure proceeded, with Jaccard Indices of 0.27, 0.49, 0.61, 0.67 and 0.69 at 1, 2, 3, 4, and 5 minutes. This study demonstrates the feasibility and validity of incorporating water vapor concentration into thermal ablation simulations and validating such models experimentally. PMID:25330481
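The Jaccard Index used above is straightforward to compute from two binary masks; a minimal sketch, with the mask names as illustrative placeholders:

```python
import numpy as np

def jaccard_index(mask_a, mask_b):
    """Jaccard index |A and B| / |A or B| of two binary masks, e.g. a CT
    iso-density contour versus a simulated vapor-concentration contour."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```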
NASA Technical Reports Server (NTRS)
Chao, L. Y.; Singh, D.; Shetty, D. K.
1988-01-01
A numerical computational study was carried out to assess the effects of subcritical crack growth on crack stability in chevron-notched three-point bend specimens. A power-law relationship between the subcritical crack velocity and the applied stress intensity was used along with compliance and stress-intensity relationships for the chevron-notched bend specimen to calculate the load response under a fixed deflection rate and machine compliance. The results indicate that the maximum load during the test occurs at the same crack length for all the deflection rates; the maximum load, however, is dependent on the deflection rate for rates below the critical rate. The resulting dependence of the apparent fracture toughness on the deflection rate is compared to experimental results on soda-lime glass and polycrystalline alumina.
Geocoronal imaging with Dynamics Explorer
NASA Technical Reports Server (NTRS)
Rairden, R. L.; Frank, L. A.; Craven, J. D.
1986-01-01
The ultraviolet photometer of the University of Iowa spin-scan auroral imaging instrumentation on board Dynamics Explorer-1 has returned numerous hydrogen Lyman alpha images of the geocorona from altitudes of 570 km to 23,300 km (1.09 R_E to 4.66 R_E geocentric radial distance). The hydrogen density gradient is shown by a plot of the zenith intensities throughout this range, which decrease to near celestial background values as the spacecraft approaches apogee. Characterizing the upper geocorona as optically thin (single-scattering), the zenith intensity is converted directly to vertical column density. This approximation loses its validity deeper in the geocorona, where the hydrogen is demonstrated to be optically thick in that there is no Lyman alpha limb brightening. Further study of the geocoronal hydrogen distribution will require computer modeling of the radiative transfer.
Propagation of an ultra-short, intense laser in a relativistic fluid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritchie, A.B.; Decker, C.D.
1997-12-31
A Maxwell-relativistic fluid model is developed to describe the propagation of an ultrashort, intense laser pulse through an underdense plasma. The model makes use of numerically stabilizing fast Fourier transform (FFT) computational methods for both the Maxwell and fluid equations, and it is benchmarked against particle-in-cell (PIC) simulations. Strong fields generated in the wake of the laser are calculated, and the authors observe coherent wake-field radiation generated at harmonics of the plasma frequency due to nonlinearities in the laser-plasma interaction. For a plasma whose density is 10% of critical, the highest members of the plasma harmonic series begin to overlap with the first laser harmonic, suggesting that the widely used multiple-scales theory, by which the laser and plasma frequencies are assumed to be separable, ceases to be a useful approximation.
Understanding light scattering by a coated sphere part 2: time domain analysis.
Laven, Philip; Lock, James A
2012-08-01
Numerical computations were made of scattering of an incident electromagnetic pulse by a coated sphere that is large compared to the dominant wavelength of the incident light. The scattered intensity was plotted as a function of the scattering angle and delay time of the scattered pulse. For fixed core and coating radii, the Debye series terms that most strongly contribute to the scattered intensity in different regions of scattering angle-delay time space were identified and analyzed. For a fixed overall radius and an increasing core radius, the first-order rainbow was observed to evolve into three separate components. The original component faded away, while the two new components eventually merged together. The behavior of surface waves generated by grazing incidence at the core/coating and coating/exterior interfaces was also examined and discussed.
NASA Astrophysics Data System (ADS)
Gupta, S. R. D.; Gupta, Santanu D.
1991-10-01
The flow of laser radiation in a plane-parallel cylindrical slab of active amplifying medium with axial symmetry is treated as a problem in radiative transfer. The appropriate one-dimensional transfer equation describing the transfer of laser radiation has been derived by an appeal to Einstein's A, B coefficients (describing the processes of stimulated line absorption, spontaneous line emission, and stimulated line emission sustained by population inversion in the medium) and considering the 'rate equations' to completely establish the rationale of the transfer equation obtained. The equation is then exactly solved and the angular distribution of the emergent laser beam intensity is obtained; its numerically computed values are given in tables and plotted in graphs showing the nature of the peaks of the emerging laser beam intensity about the axis of the laser cylinder.
Numerical calibration of the stable poisson loaded specimen
NASA Technical Reports Server (NTRS)
Ghosn, Louis J.; Calomino, Anthony M.; Brewer, Dave N.
1992-01-01
An analytical calibration of the Stable Poisson Loaded (SPL) specimen is presented. The specimen configuration is similar to the ASTM E-561 compact-tension specimen with displacement controlled wedge loading used for R-Curve determination. The crack mouth opening displacements (CMOD's) are produced by the diametral expansion of an axially compressed cylindrical pin located in the wake of a machined notch. Due to the unusual loading configuration, a three-dimensional finite element analysis was performed with gap elements simulating the contact between the pin and specimen. In this report, stress intensity factors, CMOD's, and crack displacement profiles are reported for different crack lengths and different contacting conditions. It was concluded that the computed stress intensity factor decreases sharply with increasing crack length, thus making the SPL specimen configuration attractive for fracture testing of brittle, high modulus materials.
Approximate Bayesian Computation in the estimation of the parameters of the Forbush decrease model
NASA Astrophysics Data System (ADS)
Wawrzynczak, A.; Kopka, P.
2017-12-01
Realistic modeling of complicated phenomena such as the Forbush decrease of the galactic cosmic ray intensity is quite a challenging task. One aspect is the numerical solution of the Fokker-Planck equation in five-dimensional space (three spatial variables, the time and the particle energy). The second difficulty arises from a lack of detailed knowledge about the spatial and time profiles of the parameters responsible for the creation of the Forbush decrease. Among these parameters, the diffusion coefficient plays the central role. Assessment of the correctness of the proposed model can be done only by comparison of the model output with the experimental observations of the galactic cosmic ray intensity. We apply the Approximate Bayesian Computation (ABC) methodology to match the Forbush decrease model to experimental data. The ABC method is becoming increasingly exploited for dynamic complex problems in which the likelihood function is costly to compute. The main idea of all ABC methods is to accept a sample as an approximate posterior draw if its associated modeled data are close enough to the observed data. In this paper, we present an application of the Sequential Monte Carlo Approximate Bayesian Computation algorithm scanning the space of the diffusion coefficient parameters. The proposed algorithm is adopted to create the model of the Forbush decrease observed by neutron monitors at the Earth in March 2002. The model of the Forbush decrease is based on the stochastic approach to the solution of the Fokker-Planck equation.
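The basic ABC idea, accepting a parameter sample if its simulated data are close enough to the observation, can be sketched as rejection sampling; the SMC variant used in the paper refines this by evolving a particle population with a shrinking tolerance. All function arguments below are user-supplied placeholders.

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sample, distance, eps, n_draws=10000):
    """Basic ABC rejection sampler.

    simulate(theta) -> synthetic data for parameters theta (e.g. a
                       Fokker-Planck/stochastic model of GCR intensity)
    prior_sample()  -> one draw of theta from the prior
    distance(x, y)  -> scalar discrepancy between data sets
    eps             -> acceptance tolerance
    """
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return np.array(accepted)  # approximate posterior sample
```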
Reactive transport modeling in the subsurface environment with OGS-IPhreeqc
NASA Astrophysics Data System (ADS)
He, Wenkui; Beyer, Christof; Fleckenstein, Jan; Jang, Eunseon; Kalbacher, Thomas; Naumov, Dimitri; Shao, Haibing; Wang, Wenqing; Kolditz, Olaf
2015-04-01
Worldwide, sustainable water resource management becomes an increasingly challenging task due to the growth of population and extensive applications of fertilizer in agriculture. Moreover, climate change causes further stresses to both water quantity and quality. Reactive transport modeling in the coupled soil-aquifer system is a viable approach to assess the impacts of different land use and groundwater exploitation scenarios on the water resources. However, the application of this approach is usually limited in spatial scale and to simplified geochemical systems due to the huge computational expense involved. Such computational expense is not only caused by solving the high non-linearity of the initial boundary value problems of water flow in the unsaturated zone numerically with rather fine spatial and temporal discretization for the correct mass balance and numerical stability, but also by the intensive computational task of quantifying geochemical reactions. In the present study, a flexible and efficient tool for large scale reactive transport modeling in variably saturated porous media and its applications are presented. The open source scientific software OpenGeoSys (OGS) is coupled with the IPhreeqc module of the geochemical solver PHREEQC. The new coupling approach makes full use of advantages from both codes: OGS provides a flexible choice of different numerical approaches for simulation of water flow in the vadose zone such as the pressure-based or mixed forms of Richards equation; whereas the IPhreeqc module leads to a simplification of data storage and its communication with OGS, which greatly facilitates the coupling and code updating. Moreover, a parallelization scheme with MPI (Message Passing Interface) is applied, in which the computational task of water flow and mass transport is partitioned through domain decomposition, whereas the efficient parallelization of geochemical reactions is achieved by smart allocation of computational workload over multiple compute nodes. The plausibility of the new coupling is verified by several benchmark tests. In addition, the efficiency of the new coupling approach is demonstrated by its application in a large scale scenario, in which the environmental fate of pesticides in a complex soil-aquifer system is studied.
NASA Astrophysics Data System (ADS)
Amiraux, Mathieu
Rotorcraft Blade-Vortex Interaction (BVI) remains one of the most challenging flow phenomena to simulate numerically. Over the past decade, the HART-II rotor test and its extensive experimental dataset has been a major database for validation of CFD codes. Its strong BVI signature, with high levels of intrusive noise and vibrations, makes it a difficult test for computational methods. The main challenge is to accurately capture and preserve the vortices which interact with the rotor, while predicting correct blade deformations and loading. This doctoral dissertation presents the application of a coupled CFD/CSD methodology to the problem of helicopter BVI and compares three levels of fidelity for aerodynamic modeling: a hybrid lifting-line/free-wake (wake coupling) method, with modified compressible unsteady model; a hybrid URANS/free-wake method; and a URANS-based wake capturing method, using multiple overset meshes to capture the entire flow field. To further increase numerical correlation, three helicopter fuselage models are implemented in the framework. The first is a high resolution 3D GPU panel code; the second is an immersed boundary based method, with 3D elliptic grid adaption; the last one uses a body-fitted, curvilinear fuselage mesh. The main contribution of this work is the implementation and systematic comparison of multiple numerical methods to perform BVI modeling. The trade-offs between solution accuracy and computational cost are highlighted for the different approaches. Various improvements have been made to each code to enhance physical fidelity, while advanced technologies, such as GPU computing, have been employed to increase efficiency. The resulting numerical setup covers all aspects of the simulation creating a truly multi-fidelity and multi-physics framework. Overall, the wake capturing approach showed the best BVI phasing correlation and good blade deflection predictions, with slightly under-predicted aerodynamic loading magnitudes. However, it proved to be much more expensive than the other two methods. Wake coupling with the RANS solver had very good loading magnitude predictions, and therefore good acoustic intensities, with acceptable computational cost. The lifting-line based technique often had over-predicted aerodynamic levels, due to the degree of empiricism of the model, but its very short run-times, thanks to GPU technology, make it a very attractive approach.
Advances in Numerical Boundary Conditions for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.
1997-01-01
Advances in Computational Aeroacoustics (CAA) depend critically on the availability of accurate, nondispersive, least dissipative computation algorithms as well as high quality numerical boundary treatments. This paper focuses on the recent developments of numerical boundary conditions. In a typical CAA problem, one often encounters two types of boundaries. Because a finite computation domain is used, there are external boundaries. On the external boundaries, boundary conditions simulating the solution outside the computation domain are to be imposed. Inside the computation domain, there may be internal boundaries. On these internal boundaries, boundary conditions simulating the presence of an object or surface with specific acoustic characteristics are to be applied. Numerical boundary conditions, both external and internal, developed for simple model problems are reviewed and examined. Numerical boundary conditions for real aeroacoustic problems are also discussed through specific examples. The paper concludes with a description of some much needed research in numerical boundary conditions for CAA.
Multidisciplinary optimization in aircraft design using analytic technology models
NASA Technical Reports Server (NTRS)
Malone, Brett; Mason, W. H.
1991-01-01
An approach to multidisciplinary optimization is presented which combines the Global Sensitivity Equation method, parametric optimization, and analytic technology models. The result is a powerful yet simple procedure for identifying key design issues. It can be used both to investigate technology integration issues very early in the design cycle, and to establish the information flow framework between disciplines for use in multidisciplinary optimization projects using much more computationally intensive representations of each technology. To illustrate the approach, an examination of the optimization of a short takeoff heavy transport aircraft is presented for numerous combinations of performance and technology constraints.
Practical adaptive quantum tomography
NASA Astrophysics Data System (ADS)
Granade, Christopher; Ferrie, Christopher; Flammia, Steven T.
2017-11-01
We introduce a fast and accurate heuristic for adaptive tomography that addresses many of the limitations of prior methods. Previous approaches were either too computationally intensive or tailored to handle special cases such as single qubits or pure states. By contrast, our approach combines the efficiency of online optimization with generally applicable and well-motivated data-processing techniques. We numerically demonstrate these advantages in several scenarios including mixed states, higher-dimensional systems, and restricted measurements. Complete data and source code for this work are available online [1] (http://cgranade.com) and can be previewed at https://goo.gl/koiWxR.
NASA Technical Reports Server (NTRS)
Potgieter, M. S.; Le Roux, J. A.; Burlaga, L. F.; Mcdonald, F. B.
1993-01-01
Voyager 2 magnetic field measurements are used to simulate merged interaction and rarefaction regions (MIRs and RRs) for 1985-1989 via numerical solutions of the time-dependent, axially symmetric transport equation of cosmic rays in the heliosphere, together with the concurrent use of the wavy neutral sheet as a time-dependent drift parameter. This drift approach was found to be more successful, because it was able to reproduce the intensity levels, the modulation factor, and latitudinal gradients for 1 GeV protons at 23 AU.
Numerical Simulation of Convective Heat and Mass Transfer in a Two-Layer System
NASA Astrophysics Data System (ADS)
Myznikova, B. I.; Kazaryan, V. A.; Tarunin, E. L.; Wertgeim, I. I.
The results of mathematical and computer modeling of natural convection in a “liquid-gas” two-layer system filling a vertical cylinder surrounded by a solid heat-conducting tract are presented. The model approximately describes the conjugate heat and mass transfer in an underground oil product storage facility, partially filled with a hydrocarbon liquid and with a natural gas layer above the liquid surface. The geothermal gradient in the rock mass gives rise to intensive convection in the liquid-gas system. Laminar flows, laminar-turbulent transitional regimes, and developed turbulent flows are considered.
Resolution enhancement using simultaneous couple illumination
NASA Astrophysics Data System (ADS)
Hussain, Anwar; Martínez Fuentes, José Luis
2016-10-01
A super-resolution technique based on structured illumination created by a liquid crystal on silicon spatial light modulator (LCOS-SLM) is presented. Single and simultaneous pairs of tilted beams are generated to illuminate a target object. Resolution enhancement of an optical 4f system is demonstrated by using numerical simulations. The resulting intensity images are recorded at a charge-coupled device (CCD) and stored in the computer memory for further processing. One-dimensional enhancement can be performed with only 15 images, while complete two-dimensional improvement requires 153 different images. The resolution of the optical system is extended three times compared to the band-limited system.
A numerical study of mixing enhancement in supersonic reacting flow fields. [in scramjets
NASA Technical Reports Server (NTRS)
Drummond, J. Philip; Mukunda, H. S.
1988-01-01
NASA Langley has intensively investigated the components of ramjet and scramjet systems for endoatmospheric, airbreathing hypersonic propulsion; attention is presently given to the optimization of scramjet combustor fuel-air mixing and reaction characteristics. A supersonic, spatially developing and reacting mixing layer has been found to serve as an excellent physical model for the mixing and reaction process. Attention is presently given to techniques that have been applied to the enhancement of the mixing processes and the overall combustion efficiency of the mixing layer. A fuel injector configuration has been computationally designed which significantly increases mixing and reaction rates.
Ionospheric Alfvén resonator and aurora: Modeling of MICA observations
NASA Astrophysics Data System (ADS)
Tulegenov, B.; Streltsov, A. V.
2017-07-01
We present results from a numerical study of small-scale, intense magnetic field-aligned currents observed in the vicinity of the discrete auroral arc by the Magnetosphere-Ionosphere Coupling in the Alfvén Resonator (MICA) sounding rocket launched from Poker Flat, Alaska, on 19 February 2012. The goal of the MICA project was to investigate the hypothesis that such currents can be produced inside the ionospheric Alfvén resonator by the ionospheric feedback instability (IFI) driven by the system of large-scale magnetic field-aligned currents interacting with the ionosphere. The trajectory of the MICA rocket crossed two discrete auroral arcs and detected packages of intense, small-scale currents at the edges of these arcs, in the most favorable location for the development of the ionospheric feedback instability, predicted by the IFI theory. Simulations of the reduced MHD model derived in the dipole magnetic field geometry with realistic background parameters confirm that IFI indeed generates small-scale ULF waves inside the ionospheric Alfvén resonator with frequency, scale size, and amplitude showing a good, quantitative agreement with the observations. The comparison between numerical results and observations was performed by "flying" a virtual MICA rocket through the computational domain, and this comparison shows that, for example, the waves generated in the numerical model have frequencies in the range from 0.30 to 0.45 Hz, and the waves detected by the MICA rocket have frequencies in the range from 0.18 to 0.50 Hz.
A Bayesian and Physics-Based Ground Motion Parameters Map Generation System
NASA Astrophysics Data System (ADS)
Ramirez-Guzman, L.; Quiroz, A.; Sandoval, H.; Perez-Yanez, C.; Ruiz, A. L.; Delgado, R.; Macias, M. A.; Alcántara, L.
2014-12-01
We present the Ground Motion Parameters Map Generation (GMPMG) system developed by the Institute of Engineering at the National Autonomous University of Mexico (UNAM). The system delivers estimates of information associated with the social impact of earthquakes, engineering ground motion parameters (gmp), and macroseismic intensity maps. The gmp calculated are peak ground acceleration and velocity (pga and pgv) and response spectral acceleration (SA). The GMPMG relies on real-time data received from strong ground motion stations belonging to UNAM's networks throughout Mexico. Data are gathered via satellite and internet service providers, and managed with the data acquisition software Earthworm. The system is self-contained and can perform all calculations required for estimating gmp and intensity maps due to earthquakes, automatically or manually. An initial data processing, by baseline correcting and removing records containing glitches or low signal-to-noise ratio, is performed. The system then assigns a hypocentral location using first arrivals and a simplified 3D model, followed by a moment tensor inversion, which is performed using a pre-calculated Receiver Green's Tensors (RGT) database for a realistic 3D model of Mexico. A backup system to compute epicentral location and magnitude is in place. A Bayesian Kriging is employed to combine recorded values with grids of computed gmp. The latter are obtained by using appropriate ground motion prediction equations (for pgv, pga and SA with T=0.3, 0.5, 1 and 1.5 s) and numerical simulations performed in real time, using the aforementioned RGT database (for SA with T=2, 2.5 and 3 s). Estimated intensity maps are then computed using SA(T=2 s) to Modified Mercalli Intensity correlations derived for central Mexico. The maps are made available to the institutions in charge of the disaster prevention systems. In order to analyze the accuracy of the maps, we compare them against observations not considered in the computations, and present some examples of recent earthquakes. We conclude that the system provides information with a fair goodness-of-fit against observations. This project is partially supported by DGAPA-PAPIIT (UNAM) project TB100313-RR170313.
Integrating Numerical Computation into the Modeling Instruction Curriculum
ERIC Educational Resources Information Center
Caballero, Marcos D.; Burk, John B.; Aiken, John M.; Thoms, Brian D.; Douglas, Scott S.; Scanlon, Erin M.; Schatz, Michael F.
2014-01-01
Numerical computation (the use of a computer to solve, simulate, or visualize a physical problem) has fundamentally changed the way scientific research is done. Systems that are too difficult to solve in closed form are probed using computation. Experiments that are impossible to perform in the laboratory are studied numerically. Consequently, in…
Numerical Package in Computer Supported Numeric Analysis Teaching
ERIC Educational Resources Information Center
Tezer, Murat
2007-01-01
At universities in the faculties of Engineering, Sciences, Business and Economics together with higher education in Computing, it is stated that because of the difficulty, calculators and computers can be used in Numerical Analysis (NA). In this study, the learning computer supported NA will be discussed together with important usage of the…
Leal Neto, Viriato; Vieira, José Wilson; Lima, Fernando Roberto de Andrade
2014-01-01
This article presents a way to obtain dose estimates for patients submitted to radiotherapy based on the analysis of regions of interest in nuclear medicine images. A software tool called DoRadIo (Dosimetria das Radiações Ionizantes [Ionizing Radiation Dosimetry]) was developed to receive information about source organs and target organs, generating graphical and numerical results. The nuclear medicine images utilized in the present study were obtained from catalogs provided by medical physicists. The simulations were performed with computational exposure models consisting of voxel phantoms coupled with the Monte Carlo EGSnrc code. The software was developed with Microsoft Visual Studio 2010 Service Pack and the Windows Presentation Foundation project template for the C# programming language. With these tools, the authors obtained the file for optimization of Monte Carlo simulations using EGSnrc; organization and compaction of dosimetry results for all radioactive sources; selection of regions of interest; evaluation of grayscale intensity in regions of interest; the file of weighted sources; and, finally, all the charts and numerical results. The user interface may be adapted for use in clinical nuclear medicine as a computer-aided tool to estimate the administered activity.
Z2Pack: Numerical implementation of hybrid Wannier centers for identifying topological materials
NASA Astrophysics Data System (ADS)
Gresch, Dominik; Autès, Gabriel; Yazyev, Oleg V.; Troyer, Matthias; Vanderbilt, David; Bernevig, B. Andrei; Soluyanov, Alexey A.
2017-02-01
The intense theoretical and experimental interest in topological insulators and semimetals has established band structure topology as a fundamental material property. Consequently, identifying band topologies has become an important, but often challenging, problem, with no exhaustive solution at the present time. In this work we compile a series of techniques, some previously known, that allow for a solution to this problem for a large set of the possible band topologies. The method is based on tracking hybrid Wannier charge centers computed for relevant Bloch states, and it works at all levels of materials modeling: continuous k·p models, tight-binding models, and ab initio calculations. We apply the method to compute and identify Chern, Z2, and crystalline topological insulators, as well as topological semimetal phases, using real material examples. Moreover, we provide a numerical implementation of this technique (the Z2Pack software package) that is ideally suited for high-throughput screening of materials databases for compounds with nontrivial topologies. We expect that our work will allow researchers to (a) identify topological materials optimal for experimental probes, (b) classify existing compounds, and (c) reveal materials that host novel, not yet described, topological states.
Transformation of tsunami waves passing through the Straits of the Kuril Islands
NASA Astrophysics Data System (ADS)
Kostenko, Irina; Kurkin, Andrey; Pelinovsky, Efim; Zaytsev, Andrey
2015-04-01
The Pacific Ocean and the Kuril Islands themselves are located in a zone of high seismic activity, where underwater earthquakes cause tsunamis. These propagate across the Pacific Ocean and penetrate into the Sea of Okhotsk. It is natural to expect that the Kuril Islands shield the Sea of Okhotsk from Pacific tsunami waves. It has long been noted that historical tsunamis appeared less intense in the Sea of Okhotsk than on the Pacific coast of the Kuril Islands. Although earthquakes with magnitude greater than 8 occur in the area of the Kuril Islands and in the Pacific Ocean, no catastrophic tsunami has been registered on the Okhotsk coast in the entire history of observations. The propagation of historical and hypothetical tsunamis in the north-eastern part of the Pacific Ocean was studied in order to quantify the effect of the Kuril Islands and their straits. Tsunami sources were located in the Sea of Okhotsk and in the Pacific Ocean. For this purpose, we performed a series of computational experiments using two bathymetries: (1) with the Kuril Islands and (2) without them. The tsunami magnitudes and intensities obtained from the numerically simulated wave heights were analyzed, and the simulation results were compared with observations. The numerical experiments show that, in simulations without the Kuril Islands, tsunamis in the Sea of Okhotsk produce higher waves, and in the central part of the sea they damp relatively quickly compared to reality. The shallow-water-equation tsunami code NAMI DANCE was used for the numerical simulations. This work was supported by the ASTARTE project.
Simulating pad-electrodes with high-definition arrays in transcranial electric stimulation
NASA Astrophysics Data System (ADS)
Kempe, René; Huang, Yu; Parra, Lucas C.
2014-04-01
Objective. Research studies on transcranial electric stimulation, including direct current, often use a computational model to provide guidance on the placing of sponge-electrode pads. However, the expertise and computational resources needed for finite element modeling (FEM) make modeling impractical in a clinical setting. Our objective is to make the exploration of different electrode configurations accessible to practitioners. We provide an efficient tool to estimate current distributions for arbitrary pad configurations while obviating the need for complex simulation software. Approach. To efficiently estimate current distributions for arbitrary pad configurations we propose to simulate pads with an array of high-definition (HD) electrodes and use an efficient linear superposition to then quickly evaluate different electrode configurations. Main results. Numerical results on ten different pad configurations on a normal individual show that electric field intensity simulated with the sampled array deviates from the solutions with pads by only 5% and the locations of peak magnitude fields have a 94% overlap when using a dense array of 336 electrodes. Significance. Computationally intensive FEM modeling of the HD array needs to be performed only once, perhaps on a set of standard heads that can be made available to multiple users. The present results confirm that by using these models one can now quickly and accurately explore and select pad-electrode montages to match a particular clinical need.
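Because the underlying FEM problem is linear in the injected currents, a pad montage can be approximated as a weighted sum of precomputed single-electrode solutions. A minimal sketch of that superposition follows; the array shapes and names are assumptions for illustration, not the authors' code.

```python
import numpy as np

def pad_field(lead_fields, electrode_currents):
    """Approximate a sponge-pad montage by superposing precomputed
    high-definition (HD) electrode solutions.

    lead_fields        : (n_electrodes, nx, ny, nz, 3) E-field per unit current
    electrode_currents : (n_electrodes,) currents assigned to the HD array;
                         they must sum to zero (current conservation)
    """
    c = np.asarray(electrode_currents, dtype=float)
    assert np.isclose(c.sum(), 0.0), "injected currents must balance"
    # weighted sum over the electrode axis
    return np.tensordot(c, lead_fields, axes=1)
```

The expensive FEM solve per electrode is done once; evaluating a new montage then costs only this weighted sum, which is the speed-up the abstract describes.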
NASA Astrophysics Data System (ADS)
Pitta, S.; Rojas, J. I.; Crespo, D.
2017-05-01
Aircraft lap joints play an important role in minimizing the operational cost of airlines; hence, airlines pay close attention to these technologies to improve efficiency. A major time-consuming and costly process is aircraft maintenance between flights, for instance detecting early crack formation, monitoring crack growth, and repairing the affected parts with joints if necessary. This work is focused on the study of repairs of cracked aluminium alloy (AA) 2024-T3 plates to regain their original strength; particularly, cracked AA 2024-T3 substrate plates repaired with doublers of AA 2024-T3 in two configurations (riveted and adhesively bonded) are analysed. The fatigue life of the substrate plates with cracks of 1, 2, 5, 10 and 12.7 mm is computed using the Fracture Analysis 3D (FRANC3D) tool. The stress intensity factors for the repaired AA 2024-T3 plates are computed for different crack lengths and compared using the commercial FEA tool ABAQUS. The results for the bonded repairs showed significantly lower stress intensity factors compared with the riveted repairs. This improves the overall fatigue life of the bonded joint.
Cone-beam x-ray luminescence computed tomography based on x-ray absorption dosage
NASA Astrophysics Data System (ADS)
Liu, Tianshuai; Rong, Junyan; Gao, Peng; Zhang, Wenli; Liu, Wenlei; Zhang, Yuanke; Lu, Hongbing
2018-02-01
With advances in x-ray excitable nanophosphors, x-ray luminescence computed tomography (XLCT) has become a promising hybrid imaging modality. In particular, a cone-beam XLCT (CB-XLCT) system has demonstrated its potential for in vivo imaging with the advantage of fast imaging speed over other XLCT systems. Currently, the imaging models of most XLCT systems assume that nanophosphors emit light based on the intensity distribution of x-rays within the object, not completely reflecting the nature of the x-ray excitation process. To improve the imaging quality of CB-XLCT, an imaging model that adopts an excitation model of nanophosphors based on x-ray absorption dosage is proposed in this study. To solve the ill-posed inverse problem, a reconstruction algorithm that combines the adaptive Tikhonov regularization method with the imaging model is implemented for CB-XLCT reconstruction. Numerical simulations and phantom experiments indicate that compared with the traditional forward model based on x-ray intensity, the proposed dose-based model could improve the image quality of CB-XLCT significantly in terms of target shape, localization accuracy, and image contrast. In addition, the proposed model behaves better in distinguishing closer targets, demonstrating its advantage in improving spatial resolution.
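For reference, a minimal sketch of a Tikhonov-regularized reconstruction step of the kind referred to above might look as follows; it uses a fixed rather than adaptive regularization parameter, and the matrix and vector names are illustrative.

```python
import numpy as np

def tikhonov_reconstruct(A, b, lam):
    """Solve min ||A x - b||^2 + lam ||x||^2 via the normal equations.

    A   : forward-model system matrix (e.g. a dose-based excitation model)
    b   : boundary light measurements
    lam : fixed regularization parameter (adapted iteratively in the paper)
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```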
Presumed PDF Modeling of Early Flame Propagation in Moderate to Intense Turbulence Environments
NASA Technical Reports Server (NTRS)
Carmen, Christina; Feikema, Douglas A.
2003-01-01
The present paper describes the results obtained from a one-dimensional time dependent numerical technique that simulates early flame propagation in a moderate to intense turbulent environment. Attention is focused on the development of a spark-ignited, premixed, lean methane/air mixture with the unsteady spherical flame propagating in homogeneous and isotropic turbulence. A Monte-Carlo particle tracking method, based upon the method of fractional steps, is utilized to simulate the phenomena represented by a probability density function (PDF) transport equation. Gaussian distributions of fluctuating velocity and fuel concentration are prescribed. Attention is focused on three primary parameters that influence the initial flame kernel growth: the detailed ignition system characteristics, the mixture composition, and the nature of the flow field. The computational results for moderate and intense isotropic turbulence suggest that flames within the distributed reaction zone are not as vulnerable, as traditionally believed, to the adverse effects of increased turbulence intensity. It is also shown that the magnitude of the flame front thickness significantly impacts the turbulent consumption flame speed. Flame conditions studied have fuel equivalence ratios in the range phi = 0.6 to 0.9 at standard temperature and pressure.
Meng, Bo; Cong, Wenxiang; Xi, Yan; De Man, Bruno; Yang, Jian; Wang, Ge
2017-01-01
Contrast-enhanced computed tomography (CECT) helps enhance the visibility of tumors in imaging. When a high-Z contrast agent interacts with X-rays across its K-edge, X-ray photoelectric absorption experiences a sudden increment, resulting in a significant difference in the transmitted X-ray intensity between the energy windows just below and just above the K-edge. Using photon-counting detectors, the X-ray intensity data in the two windows can be measured simultaneously. The differential information between the two kinds of intensity data reflects the contrast-agent concentration distribution. K-edge differences between various materials allow opportunities for the identification of contrast agents in biomedical applications. In this paper, a generalized Radon transform is established to link the contrast-agent concentration to the X-ray intensity measurement data. An iterative algorithm is proposed to reconstruct a contrast-agent distribution and the tissue attenuation background simultaneously. Comprehensive numerical simulations are performed to demonstrate the merits of the proposed method over existing K-edge imaging methods. Our results show that the proposed method accurately quantifies a distribution of a contrast agent, optimizing the contrast-to-noise ratio at a high dose efficiency. PMID:28437900
Observational Signatures of a Kink-unstable Coronal Flux Rope Using Hinode /EIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snow, B.; Botha, G. J. J.; Régnier, S.
The signatures of energy release and energy transport for a kink-unstable coronal flux rope are investigated via forward modeling. Synthetic intensity and Doppler maps are generated from a 3D numerical simulation. The CHIANTI database is used to compute intensities for three Hinode/EIS emission lines that cover the thermal range of the loop. The intensities and Doppler velocities at simulation resolution are spatially degraded to the Hinode/EIS pixel size (1″), convolved using a Gaussian point-spread function (3″), and exposed for a characteristic time of 50 s. The synthetic images generated for rasters (moving slit) and sit-and-stare (stationary slit) are analyzed to find the signatures of the twisted flux and the associated instability. We find that there are several qualities of a kink-unstable coronal flux rope that can be detected observationally using Hinode/EIS, namely the growth of the loop radius, the increase in intensity toward the radial edge of the loop, and the Doppler velocity following an internal twisted magnetic field line. However, EIS cannot resolve the small, transient features present in the simulation, such as sites of small-scale reconnection (e.g., nanoflares).
Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Y; Glascoe, L
The computational modeling of the biodegradation of contaminated groundwater systems, accounting for biochemical reactions coupled to contaminant transport, is a valuable tool both for the field engineer/planner with limited computational resources and for the expert computational researcher less constrained by time and computer power. Several analytical and numerical computer models have been and are being developed to cover the practical needs put forth by users across this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information, with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.
Cryptanalysis and security enhancement of optical cryptography based on computational ghost imaging
NASA Astrophysics Data System (ADS)
Yuan, Sheng; Yao, Jianbin; Liu, Xuemei; Zhou, Xin; Li, Zhongyang
2016-04-01
Optical cryptography based on computational ghost imaging (CGI) has attracted much attention from researchers because it encrypts plaintext into a random intensity vector rather than a complex-valued function. This promising feature of CGI-based cryptography reduces the amount of data to be transmitted and stored and therefore brings convenience in practice. However, we find that this cryptography is vulnerable to chosen-plaintext attack because of the linear relationship between the input and output of the encryption system, and three feasible strategies are proposed to break it in this paper. Even though a large number of plaintexts must be chosen in these attack methods, they show that this cryptography still carries security risks. To avoid these attacks, a security enhancement method utilizing an invertible matrix modulation is further discussed and its feasibility is verified by numerical simulations.
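To make the linearity weakness concrete, here is a toy numpy sketch (not the paper's CGI system) of a chosen-plaintext attack on a purely linear encryption map: querying the oracle with unit-vector plaintexts exposes the secret operator column by column, after which any ciphertext can be inverted.

```python
import numpy as np

# Toy chosen-plaintext attack on a linear encryption map E(p) = M p.
# Probing with unit vectors recovers the secret matrix M column by
# column; this illustrates the vulnerability, it is not the actual
# CGI encryption scheme.
rng = np.random.default_rng(1)
n = 16
M = rng.standard_normal((n, n))          # secret linear operator

def encrypt(p):                           # oracle the attacker may query
    return M @ p

M_rec = np.column_stack([encrypt(np.eye(n)[:, k]) for k in range(n)])
plaintext = rng.standard_normal(n)
print(np.allclose(np.linalg.solve(M_rec, encrypt(plaintext)), plaintext))  # True
```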
The Application of High Energy Resolution Green's Functions to Threat Scenario Simulation
NASA Astrophysics Data System (ADS)
Thoreson, Gregory G.; Schneider, Erich A.
2012-04-01
Radiation detectors installed at key interdiction points provide defense against nuclear smuggling attempts by scanning vehicles and traffic for illicit nuclear material. These hypothetical threat scenarios may be modeled using radiation transport simulations. However, high-fidelity models are computationally intensive. Furthermore, the range of smuggler attributes and detector technologies create a large problem space not easily overcome by brute-force methods. Previous research has demonstrated that decomposing the scenario into independently simulated components using Green's functions can simulate photon detector signals with coarse energy resolution. This paper extends this methodology by presenting physics enhancements and numerical treatments which allow for an arbitrary level of energy resolution for photon transport. As a result, spectroscopic detector signals produced from full forward transport simulations can be replicated while requiring multiple orders of magnitude less computation time.
Ray, Nilanjan
2011-10-01
Fluid motion estimation from time-sequenced images is a significant image analysis task. Its application is widespread in experimental fluidics research and in many related areas such as biomedical engineering and the atmospheric sciences. In this paper, we present a novel flow computation framework to estimate the flow velocity vectors from two consecutive image frames. In an energy minimization-based flow computation, we propose a novel data fidelity term, which: 1) can accommodate various measures, such as cross-correlation or the sum of absolute or squared differences of pixel intensities between image patches; 2) has a global mechanism to control the adverse effect of outliers arising from motion discontinuities or proximity to image borders; and 3) can go hand-in-hand with various spatial smoothness terms. Further, the proposed data term and related regularization schemes apply to both dense and sparse flow vector estimation. We validate these claims by numerical experiments on benchmark flow data sets.
Exploiting data representation for fault tolerance
Hoemmen, Mark Frederick; Elliott, J.; Sandia National Lab.; ...
2015-01-06
Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. Then, we apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
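The bit-position dependence of the error is easy to demonstrate directly. The snippet below (an illustration, not the paper's analytic model) flips individual bits of an IEEE 754 double via its integer representation; low mantissa bits perturb the value negligibly, while high exponent bits can change it enormously.

```python
import struct

# Flip a chosen bit of an IEEE 754 double by round-tripping through
# its 64-bit integer representation, then inspect the absolute error.
def flip_bit(x: float, bit: int) -> float:
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    return struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))[0]

x = 1.0
for bit in (0, 32, 52, 62):   # low mantissa, high mantissa, low exponent, high exponent
    y = flip_bit(x, bit)
    print(f"bit {bit:2d}: {x} -> {y}   abs err = {abs(y - x):.3e}")
```

For x = 1.0 the exponent field is 0x3FF, so flipping bit 62 yields an exponent of 0x7FF and the result is +inf: the "very large" regime the abstract refers to, while flipping bit 0 changes the value by only one unit in the last place.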
NASA Technical Reports Server (NTRS)
Kraft, R. E.
1996-01-01
A computational method to predict modal reflection coefficients in cylindrical ducts has been developed based on the work of Homicz, Lordi, and Rehm, which uses the Wiener-Hopf method to account for the boundary conditions at the termination of a thin cylindrical pipe. The purpose of this study is to develop a computational routine to predict the reflection coefficients of higher-order acoustic modes impinging on the unflanged termination of a cylindrical duct. This effort was conducted under Task Order 5 of the NASA Lewis LET Program, Active Noise Control of Aircraft Engines: Feasibility Study, and will be used as part of the development of an integrated source noise, acoustic propagation, ANC actuator coupling, and control system algorithm simulation. The reflection coefficient prediction will be incorporated into an existing cylindrical duct modal analysis to account for the reflection of modes from the duct termination. This will provide a more accurate, rapid-computation design tool for evaluating the effect of reflected waves on active noise control systems mounted in the duct, as well as a tool for the design of acoustic treatment in inlet ducts. As an active noise control system design tool, the method can be used as a preliminary to more accurate but more numerically intensive acoustic propagation models such as finite element methods. The resulting computer program has been shown to give reasonable results, some examples of which are presented. Reliable data to use for comparison are scarce, so complete checkout is difficult, and further checkout is needed over a wider range of system parameters. In future efforts the method will be adapted as a subroutine to the GEAE segmented cylindrical duct modal analysis program.
NASA Astrophysics Data System (ADS)
Yang, Minglin; Wu, Yueqian; Sheng, Xinqing; Ren, Kuan Fang
2017-12-01
Computation of the scattering of shaped beams by large nonspherical particles is a challenge in both the optics and electromagnetics domains, since it concerns many research fields. In this paper, we report our new progress in the numerical computation of scattering diagrams. Our algorithm permits the calculation of scattering by a particle as large as 110 wavelengths, or 700 in size parameter. The particle can be transparent or absorbing, of arbitrary shape, with a smooth or sharp surface, such as Chebyshev particles or ice crystals. To illustrate the capacity of the algorithm, a zero-order Bessel beam is taken as the incident beam, and the scattering of ellipsoidal and Chebyshev particles is taken as an example. Some special phenomena have been revealed and examined. The scattering problem is formulated with the combined tangential formulation and solved iteratively with the aid of the multilevel fast multipole algorithm, which is well parallelized with the message passing interface on a distributed-memory computer platform using a hybrid partitioning strategy. The numerical predictions are compared with the results of the rigorous method for a spherical particle to validate the accuracy of the approach. The scattering diagrams of large ellipsoidal particles with various parameters are examined. The effects of the aspect ratio, the half-cone angle of the incident zero-order Bessel beam, and the off-axis distance on the scattered intensity are studied. Scattering by an asymmetric Chebyshev particle with size parameter larger than 700 is also given to show the capability of the method for computing scattering by arbitrarily shaped particles.
A virtual observatory for photoionized nebulae: the Mexican Million Models database (3MdB).
NASA Astrophysics Data System (ADS)
Morisset, C.; Delgado-Inglada, G.; Flores-Fajardo, N.
2015-04-01
Photoionization models obtained with numerical codes are widely used to study the physics of the interstellar medium (planetary nebulae, HII regions, etc.). Grids of models are computed to understand the effects of the different parameters used to describe these regions on the observables (mainly emission line intensities). Most of the time, only a small part of the computed results of such grids is published, and the results are sometimes hard to obtain in a user-friendly format. We present here the Mexican Million Models dataBase (3MdB), an effort to resolve both of these issues in the form of a database of photoionization models, easily accessible through the MySQL protocol, and containing many useful outputs from the models, such as the intensities of 178 emission lines, the ionic fractions of all the ions, etc. Some examples of the use of the 3MdB are also presented.
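Since the database is exposed over the MySQL protocol, queries can be issued from any standard client. The sketch below is hypothetical: the host, credentials, table name (`grid`) and column names (`logU`, `O3_5007`, `Hbeta`) are illustrative placeholders, not the actual 3MdB schema, which should be taken from the project's documentation.

```python
import mysql.connector  # pip install mysql-connector-python

# Hypothetical query against a photoionization-model database served
# over MySQL. Connection details, table and column names below are
# placeholders, not the real 3MdB schema.
conn = mysql.connector.connect(host="db.example.org", user="guest",
                               password="guest", database="models_db")
cur = conn.cursor()
cur.execute(
    "SELECT id, O3_5007, Hbeta FROM grid WHERE logU BETWEEN %s AND %s",
    (-3.5, -2.5),
)
for model_id, o3, hb in cur.fetchall():
    print(model_id, o3 / hb)   # e.g. an [O III] 5007 / H-beta line ratio
conn.close()
```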
Electric field numerical simulation of disc type electrostatic spinning spinneret
NASA Astrophysics Data System (ADS)
Wei, L.; Deng, ZL; Qin, XH; Liang, ZY
2018-01-01
Electrospinning is a type of free-surface spinning driven by an electric field. Unlike a traditional single-needle spinneret, the new disc-type free-surface spinneret used in this study produces multiple jets, which greatly improves the production efficiency of nanofibers. The electric-field distribution of the spinneret is crucial to the formation and trajectory of the jets. In order to probe the electric field intensity of the disc-type spinneret, the computational software Ansoft Maxwell 12 is adopted for a precise and intuitive analysis. The results show that the rounded cambered surface of the spinning solution at the edge of each layer of the spinneret, where the curvature is greatest, has the highest electric field intensity. The electric field distribution is also simulated for different spinneret parameters, such as the number of layers and the height and radius of the spinneret, and the influence of these parameters on electrostatic spinning is obtained.
NASA Technical Reports Server (NTRS)
Robertson, J. S.; Siegman, W. L.; Jacobson, M. J.
1989-01-01
There is substantial interest in the analytical and numerical modeling of low-frequency, long-range atmospheric acoustic propagation. Ray-based models, because of frequency limitations, do not always give an adequate prediction of quantities such as sound pressure or intensity levels. However, the parabolic approximation method, widely used in ocean acoustics, and often more accurate than ray models for lower frequencies of interest, can be applied to acoustic propagation in the atmosphere. Modifications of an existing implicit finite-difference implementation for computing solutions to the parabolic approximation are discussed. A locally-reacting boundary is used together with a one-parameter impedance model. Intensity calculations are performed for a number of flow resistivity values in both quiescent and windy atmospheres. Variations in the value of this parameter are shown to have substantial effects on the spatial variation of the acoustic signal.
Geocoronal imaging with Dynamics Explorer - A first look
NASA Technical Reports Server (NTRS)
Rairden, R. L.; Frank, L. A.; Craven, J. D.
1983-01-01
The ultraviolet photometer of the University of Iowa spin-scan auroral imaging instrumentation on board Dynamics Explorer-1 has returned numerous hydrogen Lyman alpha images of the geocorona from altitudes of 570 km to 23,300 km (1.09 R sub E to 4.66 R sub E geocentric radial distance). The hydrogen density gradient is shown by a plot of the zenith intensities throughout this range, which decrease to near celestial background values as the spacecraft approaches apogee. Characterizing the upper geocorona as optically thin (single-scattering), the zenith intensity is converted directly to vertical column density. This approximation loses its validity deeper in the geocorona, where the hydrogen is demonstrated to be optically thick in that there is no Lyman alpha limb brightening. Further study of the geocoronal hydrogen distribution will require computer modeling of the radiative transfer. Previously announced in STAR as N83-20889
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Binienda, W. K.; Tan, H. Q.; Xu, M. H.
1992-01-01
Analytical derivations of stress intensity factors (SIFs) of a multicracked plate can be complex and tedious. Recent advances, however, in the intelligent application of symbolic computation can overcome these difficulties and provide the means to rigorously and efficiently analyze this class of problems. Here, the symbolic algorithm required to implement the methodology described in Part 1 is presented. The special problem-oriented symbolic functions used to derive the fundamental kernels are described, and the associated automatically generated FORTRAN subroutines are given. As a result, a symbolic/FORTRAN package named SYMFRAC, capable of providing accurate SIFs at each crack tip, was developed and validated. Simple illustrative examples using SYMFRAC show the potential of the present approach for predicting the macrocrack propagation path due to existing microcracks in the vicinity of a macrocrack tip, when the influence of microcrack location, orientation, size, and interaction is taken into account.
Optical fiber plasmonic lens for near-field focusing fabricated through focused ion beam
NASA Astrophysics Data System (ADS)
Sloyan, Karen; Melkonyan, Henrik; Moreira, Paulo; Dahlem, Marcus S.
2017-02-01
We report on numerical simulations and fabrication of an optical fiber plasmonic lens for near-field focusing applications. The plasmonic lens consists of an Archimedean spiral structure etched through a 100 nm-thick Au layer on the tip of a single-mode SM600 optical fiber operating at a wavelength of 632.8 nm. Three-dimensional finite-difference time-domain computations show that the relative electric field intensity of the focused spot increases 2.1 times when the number of turns increases from 2 to 12. Furthermore, a reduction of the intensity is observed when the initial inner radius is increased. The optimized plasmonic lens focuses light into a spot with a full-width at half-maximum of 182 nm, beyond the diffraction limit. The lens was fabricated by focused ion beam milling, with a 200 nm slit width.
NASA Astrophysics Data System (ADS)
Das Gupta, Santanu; Das Gupta, S. R.
1991-10-01
The flow of laser radiation in a plane-parallel cylindrical slab of active amplifying medium with axial symmetry is treated as a problem in radiative transfer. The appropriate one-dimensional transfer equation describing the transfer of laser radiation is derived by appealing to Einstein's A and B coefficients (describing the processes of stimulated line absorption, spontaneous line emission, and stimulated line emission sustained by population inversion in the medium) and by considering the 'rate equations' to fully establish the rationale of the transfer equation obtained. The equation is then solved exactly, and the angular distribution of the emergent laser beam intensity is obtained; its numerically computed values are given in tables and plotted in graphs showing the nature of the peaks of the emerging laser beam intensity about the axis of the laser cylinder.
A mixed-mode crack analysis of isotropic solids using conservation laws of elasticity
NASA Technical Reports Server (NTRS)
Yau, J. F.; Wang, S. S.; Corten, H. T.
1980-01-01
A simple and convenient method of analysis for studying two-dimensional mixed-mode crack problems is presented. The analysis is formulated on the basis of the conservation laws of elasticity and of fundamental relationships in fracture mechanics. The problem is reduced to the determination of mixed-mode stress-intensity factor solutions in terms of conservation integrals involving known auxiliary solutions. One of the salient features of the present analysis is that the stress-intensity solutions can be determined directly by using information extracted in the far field. Several examples with solutions available in the literature are solved to examine the accuracy and other characteristics of the current approach. This method is demonstrated to be superior in its numerical simplicity and computational efficiency to other approaches. Solutions of more complicated and practical engineering fracture problems, dealing with cracks emanating from a circular hole, are also presented to illustrate the capability of this method.
Numerical Analysis of Crack Tip Plasticity and History Effects under Mixed Mode Conditions
NASA Astrophysics Data System (ADS)
Lopez-Crespo, Pablo; Pommier, Sylvie
The plastic behaviour in the crack tip region has a strong influence on the fatigue life of engineering components. In general, the residual stresses that develop as a consequence of the plasticity being constrained around the crack tip play a significant role in both the direction of crack propagation and the propagation rate. Finite element methods (FEM) are commonly employed to model plasticity. However, if millions of cycles must be modelled to predict the fatigue behaviour of a component, the method becomes computationally too expensive. By employing a multiscale approach, very precise analyses computed by FEM can be brought to a global scale. The data generated using the FEM enable us to identify a global cyclic elastic-plastic model for the crack tip region. Once this model is identified, it can be employed directly, with no need for additional FEM computations, resulting in fast computations. This is done by partitioning local displacement fields computed by FEM into intensity factors (global data) and spatial fields. A Karhunen-Loeve algorithm developed for image processing was employed for this purpose. In addition, the partitioning is done so as to separate the elastic and plastic components, each of which is further divided into opening-mode and shear-mode parts. The plastic flow direction was determined with the above approach on a centre-cracked panel subjected to a wide range of mixed-mode loading conditions. It was found to agree well with the maximum tangential stress criterion developed by Erdogan and Sih, provided that the loading direction is corrected for residual stresses. In this approach, residual stresses are measured at the global scale through internal intensity factors.
Accelerating numerical solution of stochastic differential equations with CUDA
NASA Astrophysics Data System (ADS)
Januszewski, M.; Kostur, M.
2010-01-01
Numerical integration of stochastic differential equations is commonly used in many branches of science. In this paper we present how to accelerate this kind of numerical calculation with popular NVIDIA Graphics Processing Units using the CUDA programming environment. We address general aspects of numerical programming on stream processors and illustrate them with two examples: the noisy phase dynamics in a Josephson junction and the noisy Kuramoto model. In the presented cases the measured speedup can be as high as 675× compared to a typical CPU, which corresponds to several billion integration steps per second. This means that calculations which took weeks can now be completed in less than one hour. This brings stochastic simulation to a completely new level, opening up for research a whole new range of problems which can now be solved interactively.
Program summary:
Program title: SDE
Catalogue identifier: AEFG_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFG_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU GPL v3
No. of lines in distributed program, including test data, etc.: 978
No. of bytes in distributed program, including test data, etc.: 5905
Distribution format: tar.gz
Programming language: CUDA C
Computer: any system with a CUDA-compatible GPU
Operating system: Linux
RAM: 64 MB of GPU memory
Classification: 4.3
External routines: The program requires the NVIDIA CUDA Toolkit Version 2.0 or newer and the GNU Scientific Library v1.0 or newer. Optionally, gnuplot is recommended for quick visualization of the results.
Nature of problem: Direct numerical integration of stochastic differential equations is a computationally intensive problem, due to the necessity of calculating multiple independent realizations of the system. We exploit the inherent parallelism of this problem and perform the calculations on GPUs using the CUDA programming environment. The GPU's ability to execute hundreds of threads simultaneously makes it possible to speed up the computation by over two orders of magnitude compared to a typical modern CPU.
Solution method: The stochastic Runge-Kutta method of the second order is applied to integrate the equation of motion. Ensemble-averaged quantities of interest are obtained through averaging over multiple independent realizations of the system.
Unusual features: The numerical solution of the stochastic differential equations in question is performed on a GPU using the CUDA environment.
Running time: < 1 minute
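The parallelization strategy — one thread per realization of the SDE — can be imitated on a CPU by vectorizing over the ensemble axis. The sketch below is a numpy stand-in under that assumption; it uses plain Euler-Maruyama on a noisy overdamped phase equation for brevity, whereas the distributed program implements a second-order stochastic Runge-Kutta scheme in CUDA C.

```python
import numpy as np

# Integrate many independent realizations of the SDE
#   dx = -sin(x) dt + sigma dW
# in parallel by vectorizing over the ensemble axis -- the same
# parallelism a GPU exploits with one thread per realization.
# Euler-Maruyama is used here for brevity (the paper uses SRK2).
rng = np.random.default_rng(2)
n_real, n_steps, dt, sigma = 10_000, 1_000, 1e-3, 0.5
x = np.zeros(n_real)
for _ in range(n_steps):
    x += -np.sin(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_real)
print("ensemble mean:", x.mean(), " ensemble variance:", x.var())
```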
NASA Astrophysics Data System (ADS)
Maggi, Matteo; Cianfarra, Paola; Salvini, Francesco
2013-04-01
Faults have a (brittle) deformation zone that can be described as two distinctive zones: an internal fault core (FC) and an external fault damage zone (FDZ). The FC is characterized by grinding processes that comminute the rock grains to a final grain-size distribution in which smaller grains prevail over larger ones, represented by high fractal dimensions (up to 3.4). The FDZ, on the other hand, is characterized by a network of fracture sets with characteristic attitudes (e.g., Riedel cleavages). This deformation pattern has important consequences for rock permeability: the FC often represents a hydraulic barrier, while the FDZ, with its connected fractures, represents a zone of higher permeability. Observation of faults reveals that the dimensions and characteristics of the FC and FDZ vary both in intensity and in extent along them. One of the controlling factors in FC and FDZ development is the fault plane geometry. Through changes in its attitude, the fault plane geometry locally alters the stress components produced by the fault kinematics, and its combination with the bulk boundary conditions (regional stress field, fluid pressure, rock rheology) is responsible for the development of zones of higher and lower fracture intensity with variable extension along the fault planes. Furthermore, the displacement along faults produces a cumulative deformation pattern that varies through time; modeling the fault evolution through time (4D modeling) is therefore required to fully describe the fracturing and hence the permeability. In this presentation we show a methodology developed to predict the distribution of fracture intensity by integrating seismic data and numerical modeling. The fault geometry is carefully reconstructed by interpolating stick lines from interpreted seismic sections converted to depth. The modeling is based on a mixed numerical/analytical method. The fault surface is discretized into cells with their geometric and rheological characteristics. For each cell, the acting stress and strength are computed by analytical laws (Coulomb failure). The total brittle deformation for each cell is then computed by cumulating the brittle failure values along the path of each cell belonging to one side onto the facing one. The brittle failure value is provided by the DF function, the difference between the computed shear stress and the strength of the cell at each step along its path, using the in-house developed software Frap. The widths of the FC and the FDZ are computed as a function of the DF distribution and the displacement around the fault. This methodology has been successfully applied to model the brittle deformation pattern of the Vignanotica normal fault (Gargano, Southern Italy), where fracture intensity is expressed by the dimensionless H/S ratio, the ratio between the dimension and the spacing of homologous fracture sets (i.e., groups of parallel fractures that can be ascribed to the same event/stage/stress field).
Optimizing ion channel models using a parallel genetic algorithm on graphical processors.
Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon
2012-01-01
We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high-performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm to work on a graphical processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphic computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80-node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (ordinary differential equations) so as to massively reduce memory transfers to and from the GPU. This approach may be applied to speed up other data-intensive applications requiring iterative solutions of ODEs.
Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures
2017-10-04
Final report. The project developed parallel algorithms for scientific and geometric computing by exploiting the power and performance efficiency of heterogeneous shared memory architectures.
A Novel Polygonal Finite Element Method: Virtual Node Method
NASA Astrophysics Data System (ADS)
Tang, X. H.; Zheng, C.; Zhang, J. H.
2010-05-01
Polygonal finite element methods (PFEM), which construct shape functions on polygonal elements, provide greater flexibility in mesh generation. However, the non-polynomial form of traditional PFEM, such as the Wachspress method and the Mean Value method, leads to inexact numerical integration, since integration techniques for non-polynomial functions are immature. To overcome this shortcoming, a great number of integration points must be used to obtain sufficiently exact results, which increases the computational cost. In this paper, a novel polygonal finite element method, called the virtual node method (VNM), is proposed. The features of the present method can be listed as follows: (1) it is a PFEM with polynomial form, so Hammer and Gauss integration can be used naturally to obtain exact numerical integration; (2) the shape functions of VNM satisfy all the requirements of the finite element method. To test the performance of VNM, intensive numerical tests are carried out. It is found that, in the standard patch test, VNM achieves significantly better results than the Wachspress method and the Mean Value method. Moreover, VNM achieves better results than triangular 3-node elements in the accuracy test.
Orbegoso, Elder Mendoza; Saavedra, Rafael; Marcelo, Daniel; La Madrid, Raúl
2017-12-01
In the northern coastal and jungle areas of Peru, cocoa beans are dried using artisanal methods, such as direct exposure to sunlight. This traditional process is time intensive, leading to a reduction in productivity and, therefore, delays in delivery times. The present study was intended to numerically characterise the thermal behaviour of three configurations of solar air heating collectors in order to determine which demonstrated the best thermal performance under several controlled operating conditions. For this purpose, a computational fluid dynamics model was developed to describe the simultaneous convective and radiative heat transfer phenomena under several operating conditions. The computational fluid dynamics model was first validated through comparison with data measurements from a one-step solar air heating collector. We then simulated two further three-step solar air heating collectors in order to identify which demonstrated the best thermal performance in terms of outlet air temperature and thermal efficiency. The numerical results show that, under the same solar irradiation exposure area and operating conditions, the three-step solar air heating collector with the collector plate mounted between the second and third channels was 67% more thermally efficient than the one-step solar air heating collector. This is because the air's exposure to the collector plate surface in this three-step device was twice that in the one-step collector.
Multifractal Characteristics of Axisymmetric Jet Turbulence Intensity from Rans Numerical Simulation
NASA Astrophysics Data System (ADS)
Seo, Yongwon; Ko, Haeng Sik; Son, Sangyoung
A turbulent jet bears diverse physical characteristics that have not yet been fully unveiled. Of particular interest is the turbulence intensity, which has been a key factor in assessing and determining turbulent jet performance, since diffusive and mixing conditions depend largely on it. Multifractal measures are useful for identifying the characteristics of a physical quantity distributed over a spatial domain. This study examines the multifractal exponents of jet turbulence intensities obtained through numerical simulation. We acquired the turbulence intensities from numerical jet discharge experiments, in which two types of nozzle geometry were tested based on the Reynolds-averaged Navier-Stokes (RANS) equations. The k-𝜀 and k-ω models were used as turbulence closure models. The results show that the RANS model successfully reproduces the transverse velocity profile, which is almost identical to an analytical solution. The RANS model also shows the decay of turbulence intensity in the longitudinal direction, although this depends on the outfall nozzle length. The results indicate the existence of a common multifractal spectrum for turbulence intensity obtained from numerical simulation. Although the transverse velocity profiles are similar for the two turbulence models, the minimum Lipschitz-Hölder exponent (αmin) and the entropy dimension (α1) differ. These results suggest that the multifractal exponents capture the difference in turbulence structures of the hierarchical turbulence intensities produced by different turbulence models.
2017-01-01
Although Arabic numerals (like '2016' and '3.14') are ubiquitous, we show that in interactive computer applications they are often misleading and surprisingly unreliable. We introduce interactive numerals as a new concept and show that, like Roman numerals and Arabic numerals, interactive numerals introduce another way of using and thinking about numbers. Properly understanding interactive numerals is essential for all computer applications that involve numerical data entered by users, including finance, medicine, aviation and science. PMID:28484609
Diffusion approximation-based simulation of stochastic ion channels: which method to use?
Pezo, Danilo; Soudry, Daniel; Orio, Patricio
2014-01-01
To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of a high number of channels. Many recent works aim to speed simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties—such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired in granular cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC—which is the fastest and most accurate method. For a high number of channels, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models—in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels. PMID:25404914
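As a concrete picture of the diffusion approximation being compared here, the sketch below integrates a Langevin equation for a single Hodgkin-Huxley potassium gate fraction, clipping the state to [0,1] — one simple choice among the boundary treatments the review evaluates. The rate expressions are the standard HH potassium-gate forms; the channel count, voltage, and time step are illustrative assumptions.

```python
import numpy as np

# Langevin (diffusion-approximation) update for the HH potassium gate
# fraction n with N channels, using the standard HH rate functions.
# The state is clipped to [0, 1] -- one of several boundary
# treatments compared in the review. Parameters are illustrative.
rng = np.random.default_rng(3)
N, dt, V = 500, 0.01, -50.0                    # channels, ms, mV
alpha = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
beta = 0.125 * np.exp(-(V + 65) / 80)
n = alpha / (alpha + beta)                     # start at steady state
for _ in range(1000):                          # 10 ms of simulated time
    drift = alpha * (1 - n) - beta * n
    noise = np.sqrt((alpha * (1 - n) + beta * n) / N)
    n = np.clip(n + drift * dt + noise * np.sqrt(dt) * rng.standard_normal(), 0.0, 1.0)
print("gate fraction after 10 ms:", n)
```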
Numerical characteristics of quantum computer simulation
NASA Astrophysics Data System (ADS)
Chernyavskiy, A.; Khamitov, K.; Teplov, A.; Voevodin, V.; Voevodin, Vl.
2016-12-01
The simulation of quantum circuits is significantly important for the implementation of quantum information technologies. The main difficulty of such modeling is the exponential growth of dimensionality; thus, the use of modern high-performance parallel computation is relevant. As is well known, an arbitrary quantum computation in the circuit model can be composed of only single- and two-qubit gates, and we analyze the computational structure and properties of the simulation of such gates. We investigate how the unique properties of quantum physics lead to the computational properties of the considered algorithms: quantum parallelism makes the simulation of quantum gates highly parallel while, on the other hand, quantum entanglement leads to a problem of computational locality during simulation. We use the methodology of the AlgoWiki project (algowiki-project.org) to analyze the algorithm. This methodology consists of theoretical (sequential and parallel complexity, macro structure, and visual informational graph) and experimental (locality and memory access, scalability, and more specific dynamic characteristics) parts. The experimental part was carried out on the petascale Lomonosov supercomputer (Moscow State University, Russia). We show that the simulation of quantum gates is a good basis for research and testing of development methods for data-intensive parallel software, and the considered analysis methodology can be successfully used for the improvement of algorithms in quantum information science.
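To see where the parallelism comes from, consider applying one single-qubit gate to an n-qubit state vector: after reshaping so the target qubit is an explicit axis, every pair of amplitudes is updated independently. A minimal numpy sketch (not the authors' code) follows.

```python
import numpy as np

# Apply a single-qubit gate to an n-qubit state vector. Reshaping the
# 2**n amplitudes into n binary axes exposes the target qubit; each
# 2-amplitude pair is then transformed independently, which is the
# parallelism discussed above.
def apply_1q(state, gate, target, n):
    psi = np.moveaxis(state.reshape((2,) * n), target, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, target).reshape(-1)

n = 10
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                  # |00...0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = apply_1q(state, H, target=3, n=n)
print("norm preserved:", np.vdot(state, state).real)   # 1.0
```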
Conjugate gradient minimisation approach to generating holographic traps for ultracold atoms.
Harte, Tiffany; Bruce, Graham D; Keeling, Jonathan; Cassettari, Donatella
2014-11-03
Direct minimisation of a cost function can in principle provide a versatile and highly controllable route to computational hologram generation. Here we show that the careful design of cost functions, combined with numerically efficient conjugate gradient minimisation, establishes a practical method for the generation of holograms for a wide range of target light distributions. This results in a guided optimisation process, with a crucial advantage illustrated by the ability to circumvent optical vortex formation during hologram calculation. We demonstrate the implementation of the conjugate gradient method for both discrete and continuous intensity distributions and discuss its applicability to optical trapping of ultracold atoms.
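A hedged sketch of the general idea — direct minimisation of a hologram cost function with conjugate gradients — is given below, using scipy's generic CG optimiser and a numerically differentiated cost on a tiny grid. The paper's contribution lies in carefully designed cost functions and efficient gradients (including vortex-avoiding designs), which this toy omits.

```python
import numpy as np
from scipy.optimize import minimize

# Toy hologram optimisation: find an SLM phase phi such that the
# far-field intensity |FFT(exp(i*phi))|^2 matches a target spot,
# minimising a quadratic cost with the conjugate gradient method.
# Grid size and cost are illustrative stand-ins.
N = 16
target = np.zeros((N, N))
target[N // 2, N // 2] = 1.0
target /= target.sum()

def cost(phi_flat):
    field = np.fft.fft2(np.exp(1j * phi_flat.reshape(N, N))) / N
    intensity = np.abs(field) ** 2
    intensity /= intensity.sum()
    return np.sum((intensity - target) ** 2)

res = minimize(cost, np.zeros(N * N), method="CG", options={"maxiter": 50})
print("final cost:", res.fun)
```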
A numerical study of biofilm growth in a microgravity environment
NASA Astrophysics Data System (ADS)
Aristotelous, A. C.; Papanicolaou, N. C.
2017-10-01
A mathematical model is proposed to investigate the effect of microgravity on biofilm growth. We examine the case of biofilm suspended in a quiescent aqueous nutrient solution contained in a rectangular tank. The bacterial colony is assumed to follow logistic growth whereas nutrient absorption is assumed to follow Monod kinetics. The problem is modeled by a coupled system of nonlinear partial differential equations in two spatial dimensions solved using the Discontinuous Galerkin Finite Element method. Nutrient and biofilm concentrations are computed in microgravity and normal gravity conditions. A preliminary quantitative relationship between the biofilm concentration and the gravity field intensity is derived.
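Stripped of the spatial dimensions and the Discontinuous Galerkin discretisation, the growth kinetics described here reduce to two coupled ODEs: logistic growth of biofilm modulated by Monod uptake of nutrient. The forward-Euler sketch below uses illustrative parameter values and is only a zero-dimensional caricature of the paper's model.

```python
# Zero-dimensional caricature of the kinetics: logistic biofilm growth
# with Monod-limited nutrient uptake. Diffusion, gravity effects and
# the DG discretisation are omitted; all parameters are illustrative.
mu_max, K_s, B_max, Y = 0.4, 0.2, 1.0, 0.5   # growth rate, half-saturation, capacity, yield
dt, B, S = 0.01, 0.05, 1.0                    # time step, initial biofilm, initial nutrient
for _ in range(5000):                         # forward-Euler time stepping
    growth = mu_max * S / (K_s + S) * B       # Monod kinetics
    B += dt * growth * (1.0 - B / B_max)      # logistic growth
    S = max(S - dt * growth / Y, 0.0)         # nutrient consumption
print(f"biofilm B = {B:.3f}, nutrient S = {S:.3f}")
```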
C0 continuity elements by the Hybrid Stress method. M.S. Thesis, 1982 Final Report
NASA Technical Reports Server (NTRS)
Kang, David Sung-Soo
1991-01-01
An intensive study of the assumed variable distributions necessary for the Assumed Displacement Formulation, the Hellinger-Reissner Formulation, and the Hu-Washizu Formulation is made in a unified manner. With emphasis on physical explanation, a systematic method for Hybrid Stress element construction is outlined. The numerical examples use four- and eight-node plane stress elements and eight- and twenty-node solid elements. A computation cost study indicates that the hybrid stress element derived using the recently developed Uncoupled Stress Formulation is comparable in CPU time to the Assumed Displacement element. Overall, the main emphasis is placed on providing a broader understanding of the Hybrid Stress Formulation.
Fast and precise processing of material by means of an intensive electron beam
NASA Astrophysics Data System (ADS)
Beisswenger, S.
1984-07-01
For engraving a picture-carrying screen of cells into the copper surface of gravure cylinders, an electron beam system was developed. Numerical computations of the power density in the image planes of the electron beam determined the design of the electron-optical assembly. A highly stable electron beam of high power density is generated by a ribbon-like cathode. A system of magnetic lenses is used for fast control of the engraving processes and for dynamic changing of the electron-optical demagnification. The electron beam engraving system is capable of engraving up to 150,000 gravure cells per second.
Tempest: Tools for Addressing the Needs of Next-Generation Climate Models
NASA Astrophysics Data System (ADS)
Ullrich, P. A.; Guerra, J. E.; Pinheiro, M. C.; Fong, J.
2015-12-01
Tempest is a comprehensive simulation-to-science infrastructure that tackles the needs of next-generation, high-resolution, data intensive climate modeling activities. This project incorporates three key components: TempestDynamics, a global modeling framework for experimental numerical methods and high-performance computing; TempestRemap, a toolset for arbitrary-order conservative and consistent remapping between unstructured grids; and TempestExtremes, a suite of detection and characterization tools for identifying weather extremes in large climate datasets. In this presentation, the latest advances with the implementation of this framework will be discussed, and a number of projects now utilizing these tools will be featured.
NASA Astrophysics Data System (ADS)
Belfort, Benjamin; Weill, Sylvain; Lehmann, François
2017-07-01
A novel, non-invasive imaging technique is proposed that determines 2D maps of water content in unsaturated porous media. The method directly relates digitally measured intensities to the water content of the porous medium and requires the classical image analysis steps, i.e., normalization, filtering, background subtraction, scaling, and calibration. The main advantages of this approach are that no separate calibration experiment is needed, because the calibration curve relating water content and reflected light intensity is established during the main monitoring phase of each experiment, and that no tracer or dye is injected into the flow tank. The procedure enables effective processing of a large number of photographs and thus produces 2D water content maps at high temporal resolution. A drainage/imbibition experiment in a 2D flow tank with inner dimensions of 40 cm × 14 cm × 6 cm (L × W × D) was carried out to validate the methodology. The accuracy of the proposed approach is assessed using a statistical framework to perform an error analysis, together with numerical simulations using a state-of-the-art computational code that solves the Richards equation. Comparison of the cumulative mass leaving and entering the flow tank and of the water content maps produced by the photographic measurement technique and the numerical simulations demonstrates the efficiency and high accuracy of the proposed method for investigating vadose-zone flow processes. Finally, the photometric procedure has been developed expressly with its extension to heterogeneous media in mind; other processes may be investigated through different laboratory experiments, which will serve as benchmarks for the validation of numerical codes.
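The processing chain lends itself to a compact sketch: normalise each frame against a dry reference, then map normalised intensity to water content through a calibration curve assembled during the experiment itself. The arrays and calibration pairs below are synthetic stand-ins, not the paper's data.

```python
import numpy as np

# Sketch of the photometric chain: background-normalise a frame, then
# convert intensity to water content via a calibration curve built
# during the monitoring phase. All values are synthetic stand-ins.
rng = np.random.default_rng(4)
dry = 0.8 + 0.01 * rng.standard_normal((50, 50))     # dry reference frame
frame = dry * (1.0 - 0.4 * rng.random((50, 50)))     # wetter pixels reflect less
ratio = frame / dry                                  # normalised intensity

# Calibration pairs (normalised intensity -> volumetric water content);
# np.interp requires ascending abscissae.
cal_I = np.array([0.60, 0.70, 0.80, 0.90, 1.00])
cal_theta = np.array([0.35, 0.28, 0.20, 0.12, 0.05])
theta = np.interp(ratio, cal_I, cal_theta)
print("mean water content:", theta.mean())
```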
Robust numerical electromagnetic eigenfunction expansion algorithms
NASA Astrophysics Data System (ADS)
Sainath, Kamalesh
This thesis summarizes developments in rigorous, full-wave, numerical spectral-domain (integral plane wave eigenfunction expansion [PWE]) evaluation algorithms concerning time-harmonic electromagnetic (EM) fields radiated by generally-oriented and positioned sources within planar and tilted-planar layered media exhibiting general anisotropy, thickness, layer number, and loss characteristics. The work is motivated by the need to accurately and rapidly model EM fields radiated by subsurface geophysical exploration sensors probing layered, conductive media, where complex geophysical and man-made processes can lead to micro-laminate and micro-fractured geophysical formations exhibiting, at the lower (sub-2MHz) frequencies typically employed for deep EM wave penetration through conductive geophysical media, bulk-scale anisotropic (i.e., directional) electrical conductivity characteristics. When the planar-layered approximation (layers of piecewise-constant material variation and transversely-infinite spatial extent) is locally, near the sensor region, considered valid, numerical spectral-domain algorithms are suitable due to their strong low-frequency stability characteristic, and ability to numerically predict time-harmonic EM field propagation in media with response characterized by arbitrarily lossy and (diagonalizable) dense, anisotropic tensors. If certain practical limitations are addressed, PWE can robustly model sensors with general position and orientation that probe generally numerous, anisotropic, lossy, and thick layers. The main thesis contributions, leading to a sensor and geophysical environment-robust numerical modeling algorithm, are as follows: (1) Simple, rapid estimator of the region (within the complex plane) containing poles, branch points, and branch cuts (critical points) (Chapter 2), (2) Sensor and material-adaptive azimuthal coordinate rotation, integration contour deformation, integration domain sub-region partition and sub-region-dependent integration order (Chapter 3), (3) Integration partition-extrapolation-based (Chapter 3) and Gauss-Laguerre Quadrature (GLQ)-based (Chapter 4) evaluations of the deformed, semi-infinite-length integration contour tails, (4) Robust in-situ-based (i.e., at the spectral-domain integrand level) direct/homogeneous-medium field contribution subtraction and analytical curbing of the source current spatial spectrum function's ill behavior (Chapter 5), and (5) Analytical re-casting of the direct-field expressions when the source is embedded within a NBAM, short for non-birefringent anisotropic medium (Chapter 6). The benefits of these contributions are, respectively, (1) Avoiding computationally intensive critical-point location and tracking (computation time savings), (2) Sensor and material-robust curbing of the integrand's oscillatory and slow decay behavior, as well as preventing undesirable critical-point migration within the complex plane (computation speed, precision, and instability-avoidance benefits), (3) sensor and material-robust reduction (or, for GLQ, elimination) of integral truncation error, (4) robustly stable modeling of scattered fields and/or fields radiated from current sources modeled as spatially distributed (10 to 1000-fold compute-speed acceleration also realized for distributed-source computations), and (5) numerically stable modeling of fields radiated from sources within NBAM layers. Having addressed these limitations, are PWE algorithms applicable to modeling EM waves in tilted planar-layered geometries too? 
This question is explored in Chapter 7 using a Transformation Optics-based approach, which allows one to model wave propagation through layered media that (in the sensor's vicinity) possess tilted planar interfaces. The technique leads to spurious wave scattering, however, and the computation accuracy degradation it induces requires analysis. The mathematical exposition, and the exhaustive simulation-based study and analysis of the limitations, of this novel tilted-layer modeling formulation are Chapter 7's main contributions.
Yassin, Mohamed F
2013-06-01
Due to heavy traffic emissions within the urban environment, air quality has worsened year by year over the last decade and has become a hazard to public health. In the present work, the flow and dispersion of gaseous emissions from vehicle exhaust in a street canyon were numerically modeled under changes of aspect ratio and wind direction. The three-dimensional flow and dispersion of gaseous pollutants were modeled using a computational fluid dynamics (CFD) model based on the Reynolds-averaged Navier-Stokes (RANS) equations. The diffusion flow field in the atmospheric boundary layer within the street canyon was studied for different aspect ratios (W/H = 1/2, 3/4, and 1) and wind directions (θ = 90°, 112.5°, 135°, and 157.5°). The numerical models were validated against wind tunnel results to optimize the turbulence model, and the numerical results agreed well with the wind tunnel results. The simulations demonstrated that the minimum concentration at human respiration height within the street canyon was on the windward side for aspect ratios W/H = 1/2 and 1 and wind directions θ = 112.5°, 135°, and 157.5°. The pollutant concentration level decreases as the wind direction and aspect ratio increase, while the wind velocity and turbulence intensity increase as the aspect ratio and wind direction increase.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lisitsa, Vadim, E-mail: lisitsavv@ipgg.sbras.ru; Novosibirsk State University, Novosibirsk; Tcheverda, Vladimir
We present an algorithm for the numerical simulation of seismic wave propagation in models with a complex near-surface part and free-surface topography. The approach is based on the combination of finite differences with the discontinuous Galerkin method. The discontinuous Galerkin method can be used on polyhedral meshes; thus, it is easy to handle the complex surfaces in the models. However, this approach is computationally intense in comparison with finite differences. Finite differences are computationally efficient, but in general they require rectangular grids, leading to a stair-step approximation of the interfaces, which causes strong diffraction of the wavefield. In this research we present a hybrid algorithm where the discontinuous Galerkin method is used in a relatively small upper part of the model and finite differences are applied to the main part of the model.
Spatiotemporal video deinterlacing using control grid interpolation
NASA Astrophysics Data System (ADS)
Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin
2015-03-01
With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
NASA Technical Reports Server (NTRS)
1983-01-01
Experimental work in support of stress studies in high speed silicon sheet growth has been emphasized in this quarter. Creep experiments utilizing four-point bending have been made in the temperature range from 1000 C to 1360 C in CZ silicon as well as on EFG ribbon. A method to measure residual stress over large areas using laser interferometry to map strain distributions under load is under development. A fiber optics sensor to measure ribbon temperature profiles has been constructed and is being tested in a ribbon growth furnace environment. Stress and temperature field modeling work has been directed toward improving various aspects of the finite element computing schemes. Difficulties in computing stress distributions with a very high creep intensity and with non-zero interface stress have been encountered and additional development of the numerical schemes to cope with these problems is required. Temperature field modeling has been extended to include the study of heat transfer effects in the die and meniscus regions.
Organization of the Drosophila larval visual circuit
Gendre, Nanae; Neagu-Maier, G Larisa; Fetter, Richard D; Schneider-Mizell, Casey M; Truman, James W; Zlatic, Marta; Cardona, Albert
2017-01-01
Visual systems transduce, process and transmit light-dependent environmental cues. Computation of visual features depends on photoreceptor neuron types (PR) present, organization of the eye and wiring of the underlying neural circuit. Here, we describe the circuit architecture of the visual system of Drosophila larvae by mapping the synaptic wiring diagram and neurotransmitters. By contacting different targets, the two larval PR-subtypes create two converging pathways potentially underlying the computation of ambient light intensity and temporal light changes already within this first visual processing center. Locally processed visual information then signals via dedicated projection interneurons to higher brain areas including the lateral horn and mushroom body. The stratified structure of the larval optic neuropil (LON) suggests common organizational principles with the adult fly and vertebrate visual systems. The complete synaptic wiring diagram of the LON paves the way to understanding how circuits with reduced numerical complexity control wide ranges of behaviors.
Filtering with Marked Point Process Observations via Poisson Chaos Expansion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun Wei, E-mail: wsun@mathstat.concordia.ca; Zeng Yong, E-mail: zengy@umkc.edu; Zhang Shu, E-mail: zhangshuisme@hotmail.com
2013-06-15
We study a general filtering problem with marked point process observations. The motivation comes from modeling financial ultra-high-frequency data. First, we rigorously derive the unnormalized filtering equation with marked point process observations under mild assumptions, in particular relaxing the bounded condition on the stochastic intensity. Then, we derive the Poisson chaos expansion for the unnormalized filter. Based on the chaos expansion, we establish the uniqueness of solutions of the unnormalized filtering equation. Moreover, we derive the Poisson chaos expansion for the unnormalized filter density under additional conditions. To explore the computational advantage, we further construct a new consistent recursive numerical scheme based on the truncation of the chaos density expansion for a simple case. The new algorithm divides the computations into those containing solely system coefficients and those including the observations, and assigns the former off-line.
Computer vision research with new imaging technology
NASA Astrophysics Data System (ADS)
Hou, Guangqi; Liu, Fei; Sun, Zhenan
2015-12-01
Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both the intensity values and the directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods perform well only on strongly textured surfaces, and the resulting depth maps contain numerous holes and large ambiguities in textureless or low-textured regions. In this paper, we apply light field imaging technology to 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. Then the depth map is estimated through the epipolar plane image (EPI) method. Finally, the high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera in different poses.
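The EPI step admits a compact numerical illustration: on an epipolar plane image, scene depth shows up as the slope of lines, which can be estimated with a 2D structure tensor. A minimal sketch (the smoothing scale and variable names are illustrative, not the authors' implementation):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def epi_slopes(epi, sigma=1.5):
        """Local line slopes on an EPI via the structure tensor orientation."""
        gy, gx = np.gradient(epi.astype(float))          # gy: view axis, gx: spatial axis
        jxx = gaussian_filter(gx * gx, sigma)
        jyy = gaussian_filter(gy * gy, sigma)
        jxy = gaussian_filter(gx * gy, sigma)
        angle = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)   # dominant local orientation
        return np.tan(angle)                             # slope ~ disparity per view step

    epi = np.random.rand(9, 128)                         # placeholder (n_views, width) slice
    slopes = epi_slopes(epi)                             # depth follows via camera geometry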
NASA Astrophysics Data System (ADS)
Miedzinska, Danuta; Boczkowska, Anna; Zubko, Konrad
2010-07-01
In this article a method of numerical verification of experimental results for magnetorheological elastomer (MRE) samples is presented. The samples were shaped into cylinders with a diameter of 8 mm and a height of 20 mm with various carbonyl iron volume shares (1.5%, 11.5% and 33%). The diameter of the soft ferromagnetic particles ranged from 6 to 9 μm. During the experiment, initially bent samples were exposed to magnetic fields with intensities of 0.1 T, 0.3 T, 0.5 T, 0.7 T and 1 T. The reaction of the sample to the field was measured as a displacement of the specimen. Numerical calculations were carried out with the MSC Patran/Marc computer code. For the purpose of the numerical analysis, an orthotropic material model was applied, with the material properties of the magnetorheological elastomer along the iron chains and those of the pure elastomer along the other directions. The material properties were obtained from the experimental tests. During the numerical analysis, the initial mechanical load resulting from cylinder deflection was set. Then the equivalent external force, set on the basis of analytical calculations of the intermolecular reaction within the iron chains in the specific magnetic field, was applied to the bent sample. The correspondence of this numerical model with the results of the experiment was verified. The similarity of the experimental results to both the theoretical and FEM analyses indicates that macroscopic modeling of the magnetorheological elastomer's mechanical properties as an orthotropic material delivers a sufficiently accurate description of the material's behavior.
Shi, Yulin; Veidenbaum, Alexander V; Nicolau, Alex; Xu, Xiangmin
2015-01-15
Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post hoc processing and analysis. Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab with GPU-enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU, with up to a 22× speedup depending on the computational task. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. To the best of our knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Together, GPU-enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. Copyright © 2014 Elsevier B.V. All rights reserved.
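The co-processing pattern described here (ship the array to the GPU, run the heavy kernel there, bring the result back to the CPU) can be sketched in Python with CuPy; this is an illustration of the pattern only, not the authors' Matlab implementation, and the filtering step is a placeholder:

    import numpy as np
    import cupy as cp  # assumes a CUDA-capable GPU with CuPy installed

    def gpu_filter_stack(frames):
        """FFT-based filtering of an image stack on the GPU, result back on the CPU."""
        d = cp.asarray(frames)                   # host -> device transfer
        spec = cp.fft.fft2(d, axes=(-2, -1))     # batched 2D FFT runs on the GPU
        spec[..., :4, :4] = 0                    # crude low-frequency suppression (toy)
        out = cp.fft.ifft2(spec, axes=(-2, -1)).real
        return cp.asnumpy(out)                   # device -> host transfer

    frames = np.random.rand(64, 512, 512).astype(np.float32)  # placeholder LSPS-like stack
    filtered = gpu_filter_stack(frames)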
NASA Technical Reports Server (NTRS)
Giassi, D.; Cao, S.; Stocker, D. P.; Takahashi, F.; Bennett, B. A.; Smooke, M. D.; Long, M. B.
2015-01-01
With the conclusion of the SLICE campaign aboard the ISS in 2012, a large amount of data was made available for the analysis of the effect of microgravity on laminar coflow diffusion flames. Previous work focused on the study of sooty flames in microgravity as well as the ability of numerical models to predict soot formation in a simplified buoyancy-free environment. The current work shifts the investigation to soot-free flames, putting an emphasis on the chemiluminescence emission from electronically excited CH (CH*). This radical species is of significant interest in combustion studies: it has been shown that the CH* spatial distribution is indicative of the flame front position and, given the relatively simple diagnostics involved in its measurement, several studies have sought to establish the ability of CH* chemiluminescence to predict the total and local flame heat release rate. In this work, a subset of the SLICE nitrogen-diluted methane flames has been considered, and the effect of fuel and coflow velocity on CH* concentration is discussed and compared with both normal-gravity results and numerical simulations. Experimentally, the spectral characterization of the DSLR color camera used to acquire the flame images allowed the signal collected by the blue channel to be considered representative of the CH* emission centered around 431 nm. Due to the axisymmetric flame structure, an Abel deconvolution of the line-of-sight chemiluminescence was used to obtain the radial intensity profile and, thanks to an absolute light intensity calibration, a quantification of the CH* concentration was possible. Results show that, in microgravity, the maximum flame CH* concentration increases with the coflow velocity but is weakly dependent on the fuel velocity; normal-gravity flames, if not lifted, tend to follow the same trend, albeit with different peak concentrations. Comparisons with numerical simulations display reasonably good agreement between measured and computed flame lengths and radii, and it is shown that the integrated CH* emission scales proportionally to the computed total heat release rate; the two-dimensional CH* spatial distribution, however, does not appear to be a good marker for the local heat release rate.
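The Abel deconvolution step can be illustrated with a simple onion-peeling scheme: for an axisymmetric emitter, each line-of-sight signal is a chord integral through concentric rings, so a matrix of chord lengths maps the radial profile to the projection and can be inverted. A minimal sketch assuming uniform radial bins (not the authors' specific implementation):

    import numpy as np

    def abel_matrix(n, dr=1.0):
        """Chord-length matrix: projection = A @ radial_profile (onion peeling)."""
        A = np.zeros((n, n))
        r = np.arange(n + 1) * dr                       # ring boundaries
        for i in range(n):                              # line of sight at height y = r[i]
            for j in range(i, n):
                A[i, j] = 2.0 * (np.sqrt(r[j + 1]**2 - r[i]**2)
                                 - np.sqrt(r[j]**2 - r[i]**2))
        return A

    n = 64
    A = abel_matrix(n)
    f_true = (np.arange(n) < 20).astype(float)          # uniform emitting disk, radius 20 bins
    projection = A @ f_true                             # synthetic line-of-sight data
    f_rec = np.linalg.solve(A, projection)              # Abel inversion (upper triangular)
    assert np.allclose(f_rec, f_true)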
Semiannual report, 1 April - 30 September 1991
NASA Technical Reports Server (NTRS)
1991-01-01
The major categories of the current Institute for Computer Applications in Science and Engineering (ICASE) research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification problems, with emphasis on effective numerical methods; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software for parallel computers. Research in these areas is discussed.
Dynamics of Numerics & Spurious Behaviors in CFD Computations. Revised
NASA Technical Reports Server (NTRS)
Yee, Helen C.; Sweby, Peter K.
1997-01-01
The global nonlinear behavior of finite discretizations for constant time steps and fixed or adaptive grid spacings is studied using tools from dynamical systems theory. Detailed analysis of commonly used temporal and spatial discretizations for simple model problems is presented. The role of dynamics in the understanding of long time behavior of numerical integration and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in computational fluid dynamics (CFD) is explored. The study is complemented with examples of spurious behavior observed in steady and unsteady CFD computations. The CFD examples were chosen to illustrate non-apparent spurious behavior that was difficult to detect without extensive grid and temporal refinement studies and some knowledge from dynamical systems theory. Studies revealed the various possible dangers of misinterpreting numerical simulation of realistic complex flows that are constrained by available computing power. In large scale computations where the physics of the problem under study is not well understood and numerical simulations are the only viable means of solution, extreme care must be taken in both computation and interpretation of the numerical data. The goal of this paper is to explore the important role that dynamical systems theory can play in the understanding of the global nonlinear behavior of numerical algorithms and to aid the identification of the sources of numerical uncertainties in CFD.
Applications in Data-Intensive Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Anuj R.; Adkins, Joshua N.; Baxter, Douglas J.
2010-04-01
This book chapter, to be published in Advances in Computers, Volume 78 (2010), describes applications of data-intensive computing (DIC). This is an invited chapter resulting from a previous publication on DIC. The work summarizes efforts coming out of PNNL's Data Intensive Computing Initiative. Advances in technology have empowered individuals with the ability to generate digital content with mouse clicks and voice commands. Digital pictures, emails, text messages, home videos, audio, and webpages are common examples of digital content that are generated on a regular basis. Data-intensive computing facilitates human understanding of complex problems. Data-intensive applications provide timely and meaningful analytical results in response to exponentially growing data complexity and associated analysis requirements through the development of new classes of software, algorithms, and hardware.
NASA Astrophysics Data System (ADS)
Degtyarev, Alexander; Khramushin, Vasily
2016-02-01
The paper deals with the computer implementation of direct computational experiments in fluid mechanics, constructed on the basis of the approach developed by the authors. The proposed approach allows the use of explicit numerical schemes, an important condition for increasing the efficiency of the developed algorithms through numerical procedures with natural parallelism. The paper examines the main objects and operations that allow one to manage computational experiments and monitor the status of the computation process. Special attention is given to a) realization of tensor representations of numerical schemes for direct simulation; b) realization of the representation of large particles of a continuous medium motion in two coordinate systems (global and mobile); c) computing operations in the projections of coordinate systems and direct and inverse transformations between these systems. Particular attention is paid to the use of the hardware and software of modern computer systems.
Leal Neto, Viriato; Vieira, José Wilson; Lima, Fernando Roberto de Andrade
2014-01-01
Objective This article presents a way to obtain dose estimates for patients undergoing radiotherapy, based on the analysis of regions of interest in nuclear medicine images. Materials and Methods A software tool called DoRadIo (Dosimetria das Radiações Ionizantes [Ionizing Radiation Dosimetry]) was developed to receive information about source organs and target organs, generating graphical and numerical results. The nuclear medicine images utilized in the present study were obtained from catalogs provided by medical physicists. The simulations were performed with computational exposure models consisting of voxel phantoms coupled with the Monte Carlo EGSnrc code. The software was developed with the Microsoft Visual Studio 2010 Service Pack and the Windows Presentation Foundation project template for the C# programming language. Results With these tools, the authors obtained the file for optimization of Monte Carlo simulations using EGSnrc; organization and compaction of dosimetry results for all radioactive sources; selection of regions of interest; evaluation of grayscale intensity in regions of interest; the file of weighted sources; and, finally, all the charts and numerical results. Conclusion The user interface may be adapted for use in clinical nuclear medicine as a computer-aided tool to estimate the administered activity.
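The "grayscale intensity in regions of interest" step reduces to masked averaging over pixels; a minimal sketch (array names and the circular ROI are illustrative):

    import numpy as np

    def roi_mean_intensity(image, mask):
        """Mean grayscale intensity inside a boolean region of interest."""
        return float(image[mask].mean())

    image = np.random.randint(0, 256, (256, 256))      # placeholder scintigraphy frame
    yy, xx = np.mgrid[:256, :256]
    mask = (yy - 128)**2 + (xx - 128)**2 < 40**2       # circular ROI over a source organ
    print(roi_mean_intensity(image, mask))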
Progress Toward an Efficient and General CFD Tool for Propulsion Design/Analysis
NASA Technical Reports Server (NTRS)
Cox, C. F.; Cinnella, P.; Westmoreland, S.
1996-01-01
The simulation of propulsive flows inherently involves chemical activity. Recent years have seen substantial strides made in the development of numerical schemes for reacting flowfields, in particular those involving finite-rate chemistry. However, finite-rate calculations are computationally intensive and require knowledge of the actual kinetics, which are not always known with sufficient accuracy. Alternatively, flow simulations based on the assumption of local chemical equilibrium are capable of obtaining physically reasonable results at far less computational cost. The present study summarizes the development of efficient numerical techniques for the simulation of flows in local chemical equilibrium, whereby a 'black box' chemical equilibrium solver is coupled to the usual gasdynamic equations. The generality of the methods enables the modelling of any arbitrary mixture of thermally perfect gases, including air, combustion mixtures, and plasmas. As a demonstration of the potential of these methodologies, several solutions involving reacting and perfect gas flows are presented, including a preliminary simulation of the SSME startup transient. Future enhancements to the proposed techniques are discussed, including more efficient finite-rate and hybrid (partial equilibrium) schemes. The algorithms that have been developed and are being optimized provide an efficient and general tool for the design and analysis of propulsion systems.
… Computing Project, Marc develops high-fidelity turbulence models to enhance simulation accuracy and efficient numerical algorithms for future high-performance computing hardware architectures. Research interests: high performance computing; high-order numerical methods for computational fluid dynamics; fluid …
NASA Technical Reports Server (NTRS)
1987-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science during the period April, 1986 through September 30, 1986 is summarized.
Numerical investigations on cavitation intensity for 3D homogeneous unsteady viscous flows
NASA Astrophysics Data System (ADS)
Leclercq, C.; Archer, A.; Fortes-Patella, R.
2016-11-01
Cavitation erosion remains an industrial issue. In this paper, we deal with cavitation intensity, which can be described as the aggressiveness - or erosive capacity - of a cavitating flow. The estimation of this intensity is a challenging problem both in terms of modelling the cavitating flow and of predicting the erosion due to cavitation. For this purpose, a model was proposed to estimate cavitation intensity from 3D unsteady cavitating flow simulations. An intensity model based on pressure and void fraction derivatives was developed and applied to a NACA 65012 hydrofoil tested at LMH-EPFL (École Polytechnique Fédérale de Lausanne) [1]. 2D and 3D unsteady cavitating simulations were performed using a homogeneous model with a void fraction transport equation included in Code_Saturne with its cavitating module [2]. The article presents a description of the numerical code and the physical approach considered. Comparisons between 2D and 3D simulations, as well as between numerical and experimental results obtained by pitting tests, are analyzed in the paper.
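As a toy illustration of an indicator built from the two quantities the abstract names (pressure and void-fraction derivatives), one can accumulate the power released where vapor collapses under high pressure; this is a hedged placeholder for the idea, and the exact functional form of the paper's model is not reproduced:

    import numpy as np

    def collapse_power(p, alpha, dt):
        """Toy aggressiveness indicator: pressure work during vapor collapse."""
        dalpha_dt = np.gradient(alpha, dt, axis=0)      # void-fraction time derivative
        collapse = np.maximum(-dalpha_dt, 0.0)          # only collapsing regions count
        return collapse * np.maximum(p, 0.0)            # power density up to a model constant

    # p and alpha: (n_steps, nx) time series from a cavitating-flow solver (synthetic here).
    t = np.linspace(0.0, 1e-2, 200)[:, None]
    x = np.linspace(0.0, 1.0, 64)[None, :]
    alpha = 0.5 * (1 + np.cos(2 * np.pi * (x - 10 * t))) * np.exp(-5 * t)
    p = 1e5 * (1 + 0.5 * np.sin(2 * np.pi * (x - 10 * t)))
    power = collapse_power(p, alpha, dt=t[1, 0] - t[0, 0])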
An optical flow-based method for velocity field of fluid flow estimation
NASA Astrophysics Data System (ADS)
Głomb, Grzegorz; Świrniak, Grzegorz; Mroczka, Janusz
2017-06-01
The aim of this paper is to present a method for estimating flow-velocity vector fields using the Lucas-Kanade algorithm. The optical flow measurements are based on the Particle Image Velocimetry (PIV) technique, which is commonly used in fluid mechanics laboratories in both research institutes and industry. Common approaches to the optical characterization of velocity fields are based on the computation of partial derivatives of the image intensity using finite differences. Nevertheless, the accuracy of velocity field computations is low because an exact estimation of spatial derivatives is very difficult in the presence of rapid intensity changes in the PIV images, caused by particles having small diameters. The method discussed in this paper solves this problem by interpolating the PIV images using Gaussian radial basis functions. This provides a significant improvement in the accuracy of the velocity estimation but, more importantly, allows for the evaluation of the derivatives at intermediate points between pixels. Numerical analysis proves that the method is able to estimate a separate vector for each particle with a 5×5 px² window, whereas a classical correlation-based method needs at least 4 particle images. With the use of a specialized multi-step hybrid approach to data analysis, the method improves the estimation of particle displacements far above 1 px.
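For reference, a minimal dense Lucas-Kanade solver (the standard algorithm, not the paper's RBF-interpolated variant; the finite-difference derivatives below are exactly the piece the authors replace with Gaussian radial basis function interpolation):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def lucas_kanade(im1, im2, win=5):
        """Dense optical flow: per-pixel 2x2 least-squares over a window."""
        iy, ix = np.gradient(im1.astype(float))       # spatial derivatives (rows, cols)
        it = im2.astype(float) - im1.astype(float)    # temporal derivative
        sxx = uniform_filter(ix * ix, win); syy = uniform_filter(iy * iy, win)
        sxy = uniform_filter(ix * iy, win)
        sxt = uniform_filter(ix * it, win); syt = uniform_filter(iy * it, win)
        det = sxx * syy - sxy**2
        det = np.where(np.abs(det) < 1e-9, np.nan, det)   # flag ill-conditioned windows
        u = (-syy * sxt + sxy * syt) / det                # flow in x
        v = (sxy * sxt - sxx * syt) / det                 # flow in y
        return u, v

    # Usage: a synthetic 1-px horizontal shift of a smooth speckle pattern.
    rng = np.random.default_rng(0)
    im1 = uniform_filter(rng.random((128, 128)), 3)
    im2 = np.roll(im1, 1, axis=1)
    u, v = lucas_kanade(im1, im2)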
NASA Astrophysics Data System (ADS)
Bury, Yannick; Lucas, Matthieu; Bonnaud, Cyril; Joly, Laurent; ISAE Team; Airbus Team
2014-11-01
We study numerically and experimentally the vortices that develop past a model geometry of a wing equipped with a pylon-mounted engine at low-speed/moderate-incidence flight conditions. For such a configuration, the presence of the powerplant installation under the wing initiates a complex, unsteady vortical flow field at the nacelle/pylon/wing junctions. Its interaction with the upper wing boundary layer causes a drop in aircraft performance. In order to decipher the underlying physics, this study is initially conducted on a simplified geometry at a Reynolds number of 200000, based on the wing chord and the freestream velocity. Two configurations of angle of attack and side-slip angle are investigated. This work relies on unsteady Reynolds-averaged Navier-Stokes computations, oil flow visualizations, and stereoscopic Particle Image Velocimetry measurements. The vortex dynamics thus produced is described in terms of vortex core position, intensity, size, and turbulent intensity thanks to a vortex tracking approach. In addition, the analysis of the velocity flow fields obtained from PIV highlights the influence of the longitudinal vortex initiated at the pylon/wing junction on the separation process of the boundary layer near the upper wing leading edge.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Victor Ho Fun, E-mail: vhflee@hku.hk; Ng, Sherry Chor Yi; Kwong, Dora Lai Wan
The aim of this study was to investigate whether intravenous contrast injection affected the radiation doses to the carotid arteries and thyroid during intensity-modulated radiation therapy (IMRT) planning for nasopharyngeal carcinoma (NPC). Thirty consecutive patients with NPC underwent plain computed tomography (CT) followed by repeated scanning after contrast injection. Carotid arteries (common, external, internal), thyroid, target volumes, and other organs-at-risk (OARs), as well as IMRT planning, were based on contrast-enhanced CT (CE-CT) images. All these structures and the IMRT plans were then copied and transferred to the non-contrast-enhanced CT (NCE-CT) images, and dose calculation without optimization was performed again. The radiation doses to the carotid arteries and the thyroid based on CE-CT and NCE-CT were then compared. Based on CE-CT, no statistical differences, despite minute numerical decreases, were noted in any of the dosimetric parameters (minimum, maximum, mean, median, D05, and D01) of the target volumes, the OARs, the carotid arteries, and the thyroid compared with NCE-CT. Our results suggest that CE-CT scanning should be performed during IMRT planning for better target and OAR delineation, without discernible change in radiation doses compared with NCE-CT planning.
Cone-beam x-ray luminescence computed tomography based on x-ray absorption dosage.
Liu, Tianshuai; Rong, Junyan; Gao, Peng; Zhang, Wenli; Liu, Wenlei; Zhang, Yuanke; Lu, Hongbing
2018-02-01
With advances in x-ray-excitable nanophosphors, x-ray luminescence computed tomography (XLCT) has become a promising hybrid imaging modality. In particular, a cone-beam XLCT (CB-XLCT) system has demonstrated its potential for in vivo imaging, with the advantage of faster imaging speed over other XLCT systems. Currently, the imaging models of most XLCT systems assume that nanophosphors emit light according to the distribution of x-ray intensity within the object, which does not completely reflect the nature of the x-ray excitation process. To improve the imaging quality of CB-XLCT, an imaging model that adopts an excitation model of nanophosphors based on x-ray absorption dosage is proposed in this study. To solve the ill-posed inverse problem, a reconstruction algorithm that combines the adaptive Tikhonov regularization method with the imaging model is implemented for CB-XLCT reconstruction. Numerical simulations and phantom experiments indicate that, compared with the traditional forward model based on x-ray intensity, the proposed dose-based model improves the image quality of CB-XLCT significantly in terms of target shape, localization accuracy, and image contrast. In addition, the proposed model behaves better in distinguishing closely spaced targets, demonstrating its advantage in improving spatial resolution. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
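The reconstruction step pairs a linear forward model with Tikhonov regularization; a generic fixed-lambda sketch (the paper's adaptive update of the regularization parameter is not reproduced, and the system matrix below is a random stand-in):

    import numpy as np

    def tikhonov(A, b, lam):
        """Solve min ||Ax - b||^2 + lam ||x||^2 via the normal equations."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    # Toy system: 200 measurements, 100 voxels of nanophosphor concentration.
    rng = np.random.default_rng(0)
    A = rng.random((200, 100))                 # illustrative sensitivity matrix
    x_true = np.zeros(100); x_true[40:45] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    x_rec = tikhonov(A, b, lam=1e-2)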
NASA Technical Reports Server (NTRS)
1988-01-01
This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science during the period April l, 1988 through September 30, 1988.
NASA Technical Reports Server (NTRS)
1984-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis and computer science during the period October 1, 1983 through March 31, 1984 is summarized.
NASA Technical Reports Server (NTRS)
1987-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science during the period October 1, 1986 through March 31, 1987 is summarized.
In Praise of Numerical Computation
NASA Astrophysics Data System (ADS)
Yap, Chee K.
Theoretical Computer Science has developed an almost exclusively discrete/algebraic persona. We have effectively shut ourselves off from half of the world of computing: a host of problems in Computational Science & Engineering (CS&E) are defined on the continuum, and, for them, the discrete viewpoint is inadequate. The computational techniques in such problems are well known to numerical analysis and applied mathematics, but are rarely discussed in theoretical algorithms: iteration, subdivision and approximation. By various case studies, I will indicate how our discrete/algebraic view of computing has many shortcomings in CS&E. We want to embrace the continuous/analytic view, but in a new synthesis with the discrete/algebraic view. I will suggest a pathway, by way of an exact numerical model of computation, that allows us to incorporate iteration and approximation into our algorithms’ design. Some recent results give a peek into what this view of algorithmic development might look like, and its distinctive form suggests the name “numerical computational geometry” for such activities.
NASA Technical Reports Server (NTRS)
Ameri, Ali A.; Rigby, David L.; Steinthorsson, Erlendur; Heidmann, James D.; Fabian, John C.
2008-01-01
The effect of the upstream wake on blade heat transfer has been numerically examined. The geometry and flow conditions of the first-stage turbine blade of GE's E3 engine with a tip clearance equal to 2 percent of the span were utilized. Based on numerical calculations of the vane, a set of wake boundary conditions was approximated and subsequently imposed upon the downstream blade. This set consisted of the momentum and thermal wakes as well as the variation in the modeled turbulence quantities of turbulence intensity and length scale. Using a one-blade periodic domain, the distributions of the unsteady heat transfer rate on the turbine blade and its tip, as affected by the wake, were determined. The heat transfer coefficient distribution was computed using the wall heat flux and the adiabatic wall temperature to desensitize the heat transfer coefficient to the wall temperature. For the determination of the wall heat flux and the adiabatic wall temperature, two sets of computations were required. The results were used in a phase-locked manner to compute the unsteady or steady heat transfer coefficients. It was found that the unsteady wake has some effect on the distribution of the time-averaged heat transfer coefficient on the blade and that this distribution differs from the distribution obtained from a steady computation. This difference was found to be as large as 20 percent of the average heat transfer on the blade surface. On the tip surface, the difference is comparatively smaller and can be as large as four percent of the average.
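The desensitization described here is the standard adiabatic-wall definition of the heat transfer coefficient,

\[
h = \frac{q''_{w}}{T_{aw} - T_{w}},
\]

which is why two sets of computations are needed per phase: one supplying the wall heat flux q''_w at a prescribed wall temperature T_w, and one supplying the adiabatic wall temperature T_aw.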
A Computational and Experimental Study of Resonators in Three Dimensions
NASA Technical Reports Server (NTRS)
Tam, C. K. W.; Ju, H.; Jones, Michael G.; Watson, Willie R.; Parrott, Tony L.
2009-01-01
In a previous work by the present authors, a computational and experimental investigation of the acoustic properties of two-dimensional slit resonators was carried out. The present paper reports the results of a study extending the previous work to three dimensions. This investigation has two basic objectives. The first is to validate the computed results from direct numerical simulations of the flow and acoustic fields of slit resonators in three dimensions by comparing with experimental measurements in a normal incidence impedance tube. The second objective is to study the flow physics of resonant liners responsible for sound wave dissipation. Extensive comparisons are provided between computed and measured acoustic liner properties with both discrete frequency and broadband sound sources. Good agreements are found over a wide range of frequencies and sound pressure levels. Direct numerical simulation confirms the previous finding in two dimensions that vortex shedding is the dominant dissipation mechanism at high sound pressure intensity. However, it is observed that the behavior of the shed vortices in three dimensions is quite different from those of two dimensions. In three dimensions, the shed vortices tend to evolve into ring (circular in plan form) vortices, even though the slit resonator opening from which the vortices are shed has an aspect ratio of 2.5. Under the excitation of discrete frequency sound, the shed vortices align themselves into two regularly spaced vortex trains moving away from the resonator opening in opposite directions. This is different from the chaotic shedding of vortices found in two-dimensional simulations. The effect of slit aspect ratio at a fixed porosity is briefly studied. For the range of liners considered in this investigation, it is found that the absorption coefficient of a liner increases when the open area of the single slit is subdivided into multiple, smaller slits.
GPU-based acceleration of computations in nonlinear finite element deformation analysis.
Mafi, Ramin; Sirouspour, Shahin
2014-03-01
The physics of deformation for biological soft tissue is best described by nonlinear continuum mechanics-based models, which can then be discretized by the FEM for a numerical solution. However, the computational complexity of such models has limited their use in applications requiring real-time or fast response. In this work, we propose a graphics processing unit-based implementation of the FEM using implicit time integration for dynamic nonlinear deformation analysis. This is the most general formulation of deformation analysis: it is valid for large deformations and strains and can account for material nonlinearities. The data-parallel nature and intense arithmetic computations of nonlinear FEM equations make them particularly suitable for implementation on a parallel computing platform such as a graphics processing unit. In this work, we present and compare two different designs, based on the matrix-free and conventional preconditioned conjugate gradients algorithms, for solving the FEM equations arising in deformation analysis. The speedup achieved with the proposed parallel implementations of the algorithms will be instrumental in the development of advanced surgical simulators and medical image registration methods involving soft-tissue deformation. Copyright © 2013 John Wiley & Sons, Ltd.
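The matrix-free design can be sketched as a conjugate gradients loop that needs only a matrix-vector product callback, which is what makes it amenable to GPU kernels (an illustration of the idea; the paper's preconditioning and GPU implementation are not reproduced):

    import numpy as np

    def cg_matfree(matvec, b, tol=1e-8, max_iter=1000):
        """Conjugate gradients using only a matvec callback (no assembled matrix)."""
        x = np.zeros_like(b)
        r = b - matvec(x)
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            Ap = matvec(p)
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    # Toy SPD operator standing in for the FEM action assembled on the fly.
    n = 200
    M = np.random.rand(n, n); K = M @ M.T + n * np.eye(n)
    x = cg_matfree(lambda v: K @ v, np.ones(n))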
Visual analysis of fluid dynamics at NASA's numerical aerodynamic simulation facility
NASA Technical Reports Server (NTRS)
Watson, Velvin R.
1991-01-01
A study is presented that describes and illustrates the visualization tools used in computational fluid dynamics (CFD) and indicates how these tools are likely to change, based on a projected evolution of the human-computer interface. The following are outlined using a graphically based format: the evolution of human-computer environments for CFD research; comparison of current environments; comparison of current environments with the ideal; predictions for future CFD environments; and what can be done to accelerate the improvements. The following comments are given: when acquiring visualization tools, potentially rapid changes must be considered; the environmental changes over the next ten years due to the human-computer interface cannot be fathomed; data flow packages such as AVS, apE, Explorer, and Data Explorer are easy to learn and use for small problems and excellent for prototyping, but not so efficient for large problems; the approximation techniques used in visualization software must be appropriate for the data; it has become more cost-effective to move jobs that fit onto workstations and to run only memory-intensive jobs on the supercomputer; and the use of three-dimensional skills will be maximized when the three-dimensional environment is built in from the start.
Development and application of theoretical models for Rotating Detonation Engine flowfields
NASA Astrophysics Data System (ADS)
Fievisohn, Robert
As turbine and rocket engine technology matures, performance increases between successive generations of engine development are becoming smaller. One means of accomplishing significant gains in thermodynamic performance and power density is to use detonation-based heat release instead of deflagration. This work is focused on developing and applying theoretical models to aid in the design and understanding of Rotating Detonation Engines (RDEs). In an RDE, a detonation wave travels circumferentially along the bottom of an annular chamber where continuous injection of fresh reactants sustains the detonation wave. RDEs are currently being designed, tested, and studied as a viable option for developing a new generation of turbine and rocket engines that make use of detonation heat release. One of the main challenges in the development of RDEs is to understand the complex flowfield inside the annular chamber. While simplified models are desirable for obtaining timely performance estimates for design analysis, one-dimensional models may not be adequate as they do not provide flow structure information. In this work, a two-dimensional physics-based model is developed, which is capable of modeling the curved oblique shock wave, exit swirl, counter-flow, detonation inclination, and varying pressure along the inflow boundary. This is accomplished by using a combination of shock-expansion theory, Chapman-Jouguet detonation theory, the Method of Characteristics (MOC), and other compressible flow equations to create a shock-fitted numerical algorithm and generate an RDE flowfield. This novel approach provides a numerically efficient model that can provide performance estimates as well as details of the large-scale flow structures in seconds on a personal computer. Results from this model are validated against high-fidelity numerical simulations that may require a high-performance computing framework to provide similar performance estimates. This work provides a designer a new tool to conduct large-scale parametric studies to optimize a design space before conducting computationally-intensive, high-fidelity simulations that may be used to examine additional effects. The work presented in this thesis not only bridges the gap between simple one-dimensional models and high-fidelity full numerical simulations, but it also provides an effective tool for understanding and exploring RDE flow processes.
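For a feel of the Chapman-Jouguet ingredient of the model, a common one-gamma perfect-gas closed form can be used (a textbook approximation, not the thesis' full method; the heat release q and temperature below are illustrative numbers only):

    import numpy as np

    def cj_mach(q, T1, gamma=1.3, R=287.0):
        """Chapman-Jouguet Mach number for a one-gamma perfect gas."""
        H = (gamma**2 - 1.0) * q / (2.0 * gamma * R * T1)
        return np.sqrt(1.0 + H) + np.sqrt(H)

    q = 3.0e6                                  # J/kg, illustrative heat release
    T1 = 300.0                                 # K, reactant temperature
    M_cj = cj_mach(q, T1)
    D_cj = M_cj * np.sqrt(1.3 * 287.0 * T1)    # detonation speed, m/s
    print(M_cj, D_cj)                          # roughly M ~ 6, D ~ 2 km/s here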
NASA Technical Reports Server (NTRS)
1989-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science during the period October 1, 1988 through March 31, 1989 is summarized.
Scaling and efficiency of PRISM in adaptive simulations of turbulent premixed flames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tonse, Shaheen R.; Bell, J.B.; Brown, N.J.
1999-12-01
The dominant computational cost in modeling turbulent combustion phenomena numerically with high-fidelity chemical mechanisms is the time required to solve the ordinary differential equations associated with chemical kinetics. One approach to reducing that computational cost is to develop an inexpensive surrogate model that accurately represents the evolution of the chemical kinetics. One such approach, PRISM, develops a polynomial representation of the chemistry evolution in a local region of chemical composition space. This representation is then stored for later use. As the computation proceeds, the chemistry evolution for other points within the same region is computed by evaluating these polynomials instead of calling an ordinary differential equation solver. If initial data for advancing the chemistry are encountered that lie in no region for which a polynomial is defined, the methodology dynamically samples that region and constructs a new representation for it. The utility of this approach is determined by the size of the regions over which the representation provides a good approximation to the kinetics and by the number of regions necessary to model the subset of composition space that is active during a simulation. In this paper, we assess the PRISM methodology in the context of a turbulent premixed flame in two dimensions. We consider a range of turbulent intensities, from weak turbulence that has little effect on the flame to strong turbulence that tears pockets of burning fluid from the main flame. For each case, we explore a range of sizes for the local regions and determine the scaling behavior as a function of region size and turbulent intensity.
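The caching pattern described above can be sketched in one composition dimension (the region size, polynomial degree, and toy kinetics below are illustrative stand-ins, not PRISM's actual settings):

    import numpy as np
    from scipy.integrate import solve_ivp

    def slow_chem_advance(y0, dt):
        """Stand-in for the expensive kinetics ODE advance."""
        rhs = lambda t, y: -y * (1.0 - y)               # toy one-species kinetics
        return solve_ivp(rhs, (0.0, dt), [y0]).y[0, -1]

    cache, h, dt = {}, 0.05, 1e-2                        # region size h in composition space

    def advance(y):
        key = int(y // h)                                # hypercube (here: interval) id
        if key not in cache:                             # sample region, fit polynomial once
            ys = np.linspace(key * h, (key + 1) * h, 5)
            cache[key] = np.polyfit(ys, [slow_chem_advance(v, dt) for v in ys], 2)
        return np.polyval(cache[key], y)                 # cheap surrogate evaluation

    traj = [0.9]
    for _ in range(50):
        traj.append(advance(traj[-1]))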
NASA Technical Reports Server (NTRS)
Pratt, D. T.
1984-01-01
An interactive computer code for simulation of a high-intensity turbulent combustor as a single-point inhomogeneous stirred reactor was developed from an existing batch-processing computer code, CDPSR. The interactive CDPSR code was used as a guide for the interpretation and direction of DOE-sponsored companion experiments utilizing a Xenon tracer with optical laser diagnostic techniques to experimentally determine the appropriate mixing frequency, and for validation of CDPSR as a mixing-chemistry model for a laboratory jet-stirred reactor. The coalescence-dispersion model for finite-rate mixing was incorporated into an existing interactive code, AVCO-MARK I, to enable simulation of a combustor as a modular array of stirred-flow and plug-flow elements, each having a prescribed finite mixing frequency, or axial distribution of mixing frequency, as appropriate. The speed and reliability of the batch kinetics integrator code CREKID were further increased by rewriting it in vectorized form for execution on a vector or parallel processor, and by incorporating numerical techniques that enhance execution speed by permitting specification of a very low accuracy tolerance.
Two-dimensional nonsteady viscous flow simulation on the Navier-Stokes computer miniNode
NASA Technical Reports Server (NTRS)
Nosenchuck, Daniel M.; Littman, Michael G.; Flannery, William
1986-01-01
The needs of large-scale scientific computation are outpacing the growth in performance of mainframe supercomputers. In particular, problems in fluid mechanics involving complex flow simulations require far more speed and capacity than that provided by current and proposed Class VI supercomputers. To address this concern, the Navier-Stokes Computer (NSC) was developed. The NSC is a parallel-processing machine, comprised of individual Nodes, each comparable in performance to current supercomputers. The global architecture is that of a hypercube, and a 128-Node NSC has been designed. New architectural features, such as a reconfigurable many-function ALU pipeline and a multifunction memory-ALU switch, have provided the capability to efficiently implement a wide range of algorithms. Efficient algorithms typically involve numerically intensive tasks, which often include conditional operations. These operations may be efficiently implemented on the NSC without, in general, sacrificing vector-processing speed. To illustrate the architecture, programming, and several of the capabilities of the NSC, the simulation of two-dimensional, nonsteady viscous flows on a prototype Node, called the miniNode, is presented.
NASA Technical Reports Server (NTRS)
1994-01-01
This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, fluid mechanics, and computer science during the period October 1, 1993 through March 31, 1994. The major categories of the current ICASE research program are: (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and (4) computer science.
Magnetic Field Suppression of Flow in Semiconductor Melt
NASA Technical Reports Server (NTRS)
Fedoseyev, A. I.; Kansa, E. J.; Marin, C.; Volz, M. P.; Ostrogorsky, A. G.
2000-01-01
One of the most promising approaches for the reduction of convection during the crystal growth of conductive melts (semiconductor crystals) is the application of magnetic fields. Current technology allows experimentation with very intense static fields (up to 80 kGauss), for which nearly convection-free results are expected from simple scaling analysis in stabilized systems (vertical Bridgman method with axial magnetic field). However, controversial experimental results have been obtained. Computational methods are, therefore, a fundamental tool in understanding the phenomena occurring during the solidification of semiconductor materials. Moreover, effects like the bending of the isomagnetic lines, different aspect ratios, and misalignments between the directions of the gravity and magnetic field vectors cannot be analyzed with analytical methods. The earliest numerical results led to conflicting conclusions and were not able to explain the experimental results. Although the generated flows are extremely weak, the computational task is complicated by the thin boundary layers; that is one of the reasons for the discrepancies in the results that numerical studies have reported. Modeling of these magnetically damped crystal growth experiments requires advanced numerical methods. We used, for comparison, three different approaches to obtain the solution of the problem of thermal convection flows: (1) a spectral method in a spectral superelement implementation, (2) a finite element method with regularization for boundary layers, and (3) the multiquadric method, a novel method with global radial basis functions that is proven to have exponential convergence. The results obtained by these three methods are presented for a wide range of Rayleigh and Hartmann numbers. A comparison and discussion of accuracy, efficiency, reliability, and agreement with experimental results is presented as well.
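The multiquadric method mentioned as approach (3) interpolates with globally supported basis functions phi(r) = sqrt(r^2 + c^2); a minimal 1D interpolation sketch (the shape parameter c and test profile are illustrative):

    import numpy as np

    def mq_interpolate(x_nodes, f_nodes, x_eval, c=0.5):
        """Multiquadric RBF interpolation: solve for weights, then evaluate."""
        phi = lambda r: np.sqrt(r**2 + c**2)
        A = phi(x_nodes[:, None] - x_nodes[None, :])   # interpolation matrix
        w = np.linalg.solve(A, f_nodes)
        return phi(x_eval[:, None] - x_nodes[None, :]) @ w

    x = np.linspace(0.0, 1.0, 15)
    f = np.exp(-10 * (x - 0.4)**2)                     # sharp, boundary-layer-like profile
    xe = np.linspace(0.0, 1.0, 200)
    fe = mq_interpolate(x, f, xe)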
ICASE semiannual report, April 1 - September 30, 1989
NASA Technical Reports Server (NTRS)
1990-01-01
The Institute conducts unclassified basic research in applied mathematics, numerical analysis, and computer science in order to extend and improve problem-solving capabilities in science and engineering, particularly in aeronautics and space. The major categories of the current Institute for Computer Applications in Science and Engineering (ICASE) research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification problems, with emphasis on effective numerical methods; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers. ICASE reports are considered to be primarily preprints of manuscripts that have been submitted to appropriate research journals or that are to appear in conference proceedings.
A History of Computer Numerical Control.
ERIC Educational Resources Information Center
Haggen, Gilbert L.
Computer numerical control (CNC) has evolved from the first significant counting method--the abacus. Babbage had perhaps the greatest impact on the development of modern day computers with his analytical engine. Hollerith's functioning machine with punched cards was used in tabulating the 1890 U.S. Census. In order for computers to become a…
Nonlinear Computational Aeroelasticity: Formulations and Solution Algorithms
2003-03-01
problem is proposed. Fluid-structure coupling algorithms are then discussed with some emphasis on distributed computing strategies. Numerical results...the structure and the exchange of structure motion to the fluid. The computational fluid dynamics code PFES is our finite element code for the numerical ...unstructured meshes). It was numerically demonstrated [1-3] that EBS can be less diffusive than SUPG [4-6] and the standard Finite Volume schemes
Intensity changes in future extreme precipitation: A statistical event-based approach.
NASA Astrophysics Data System (ADS)
Manola, Iris; van den Hurk, Bart; de Moel, Hans; Aerts, Jeroen
2017-04-01
Short-lived precipitation extremes are often responsible for hazards in urban and rural environments with economic and environmental consequences. Precipitation intensity is expected to increase by about 7% per degree of warming, according to the Clausius-Clapeyron (CC) relation. However, observations often show a much stronger increase in sub-daily values. In particular, the behavior of hourly summer precipitation from radar observations as a function of dew point temperature (the Pi-Td relation) for the Netherlands suggests that for moderate to warm days the intensification of precipitation can exceed 21% per degree of warming, that is, three times the expected CC rate. The rate of change depends on the initial precipitation intensity: low percentiles increase at a rate below CC, medium percentiles at 2CC, and moderate-high and high percentiles at 3CC. This non-linear statistical Pi-Td relation is suggested as a delta-transformation to project how a historic extreme precipitation event would intensify under future, warmer conditions. Here, the Pi-Td relation is applied to a selected historic extreme precipitation event to 'up-scale' its intensity to warmer conditions. Additionally, the selected historic event is simulated in the high-resolution, convection-permitting weather model Harmonie, with the initial and boundary conditions altered to represent future conditions. The comparison between the statistical and the numerical methods of projecting the historic event to future conditions showed comparable intensity changes which, depending on the initial percentile intensity, range from below CC to a 3CC rate of change per degree of warming. The model tends to overestimate the future intensities for the low and the very high percentiles, and the clouds are somewhat displaced due to small changes in wind and convection. The total spatial cloud coverage remains unchanged in the model, as in the statistical method. The advantage of the suggested Pi-Td method of projecting future precipitation events from historic events is that it is simple to use and less expensive in time, computation, and resources than a numerical model. The outcome can be used directly for hydrological and climatological studies and for impact analyses such as flood risk assessments.
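A hedged sketch of the delta-transformation idea: scale each precipitation value by a percentile-dependent rate per degree of dew-point warming. The breakpoints and rates below only mimic the qualitative below-CC/2CC/3CC behavior described in the abstract; the paper's fitted Pi-Td relation should be substituted in practice:

    import numpy as np

    def pi_td_transform(precip, dT):
        """Scale an event's intensities by percentile-dependent CC multiples."""
        cc = 0.07                                   # Clausius-Clapeyron, ~7% per K
        pct = np.argsort(np.argsort(precip)) / max(len(precip) - 1, 1)
        rate = np.where(pct < 0.5, 0.5 * cc,        # low percentiles: below CC (toy value)
               np.where(pct < 0.9, 2.0 * cc,        # medium percentiles: ~2CC
                                   3.0 * cc))       # high percentiles: ~3CC
        return precip * (1.0 + rate) ** dT

    event = np.random.gamma(2.0, 3.0, size=1000)    # placeholder hourly intensities
    future = pi_td_transform(event, dT=2.0)         # +2 K dew point temperature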
Interface modeling in incompressible media using level sets in Escript
NASA Astrophysics Data System (ADS)
Gross, L.; Bourgouin, L.; Hale, A. J.; Mühlhaus, H.-B.
2007-08-01
We use a finite element (FEM) formulation of the level set method to model geological fluid flow problems involving interface propagation. Interface problems are ubiquitous in geophysics. Here we focus on a Rayleigh-Taylor instability, namely mantle plume evolution, and on the growth of lava domes. Both problems require the accurate description of the propagation of an interface between heavy and light materials (plume) or between highly viscous lava and low-viscosity air (lava dome), respectively. The implementation of the models is based on Escript, a Python module for the solution of partial differential equations (PDEs) using spatial discretization techniques such as FEM. It is designed to describe numerical models in the language of PDEs while using computational components implemented in C and C++ to achieve high performance for time-intensive numerical calculations. A critical step in the solution of geological flow problems is the solution of the velocity-pressure problem. We describe how the Escript module can be used for a high-level implementation of an efficient variant of the well-known Uzawa scheme. We begin with a brief outline of the Escript modules and then present illustrations of their usage for the numerical solution of the problems mentioned above.
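The interface-propagation core is the level-set advection equation phi_t + u.grad(phi) = 0; a minimal first-order upwind update in plain numpy for illustration (Escript itself solves this variationally with FEM, and the velocity field below is a placeholder):

    import numpy as np

    def advect_level_set(phi, u, v, dx, dt):
        """One first-order upwind step of phi_t + u phi_x + v phi_y = 0."""
        dpx_m = (phi - np.roll(phi, 1, axis=1)) / dx    # backward x-difference
        dpx_p = (np.roll(phi, -1, axis=1) - phi) / dx   # forward x-difference
        dpy_m = (phi - np.roll(phi, 1, axis=0)) / dx
        dpy_p = (np.roll(phi, -1, axis=0) - phi) / dx
        dphix = np.where(u > 0, dpx_m, dpx_p)           # upwind selection
        dphiy = np.where(v > 0, dpy_m, dpy_p)
        return phi - dt * (u * dphix + v * dphiy)

    n = 128; dx = 1.0 / n
    yy, xx = np.mgrid[:n, :n] * dx
    phi = np.sqrt((xx - 0.5)**2 + (yy - 0.3)**2) - 0.15   # signed distance to a blob
    u = np.zeros((n, n)); v = np.full((n, n), 0.2)        # rising "plume" velocity (toy)
    for _ in range(100):
        phi = advect_level_set(phi, u, v, dx, dt=0.5 * dx / 0.2)   # CFL-limited step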
Numerical algorithms for cold-relativistic plasma models in the presence of discontinuities
NASA Astrophysics Data System (ADS)
Hakim, Ammar; Cary, John; Bruhwiler, David; Geddes, Cameron; Leemans, Wim; Esarey, Eric
2006-10-01
A numerical algorithm is presented to solve the cold-relativistic electron fluid equations in the presence of sharp gradients and discontinuities. The intended application is laser wake-field accelerator simulations, in which the laser induces accelerating fields thousands of times those achievable in conventional RF accelerators. The relativistic cold-fluid equations are formulated as a non-classical system of hyperbolic balance laws. It is shown that the flux Jacobian for this system cannot be diagonalized, which causes numerical difficulties when developing shock-capturing algorithms. Further, the system is shown to admit generalized delta-shock solutions, first discovered in the context of sticky-particle dynamics (Bouchut, Ser. Adv. Math. App. Sci., 22 (1994) pp. 171-190). A new approach, based on the relaxation schemes proposed by Jin and Xin (Comm. Pure Appl. Math. 48 (1995) pp. 235-276) and LeVeque and Pelanti (J. Comput. Phys. 172 (2001) pp. 572-591), is developed to solve this system of equations. The method consists of finding an exact solution to a Riemann problem at each cell interface and coupling these to advance the solution in time. Applications to an intense laser propagating in an under-dense plasma are presented.
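For reference, the Jin-Xin relaxation construction underlying the approach replaces a conservation law \(\partial_t u + \partial_x F(u) = 0\) with the semilinear system

\[
\partial_t u + \partial_x v = 0, \qquad
\partial_t v + a\,\partial_x u = \frac{1}{\varepsilon}\left(F(u) - v\right),
\]

so that \(v \to F(u)\) as \(\varepsilon \to 0\), provided the subcharacteristic condition \(a \ge F'(u)^2\) holds; the contribution of this work lies in adapting such a framework to a system whose flux Jacobian cannot be diagonalized and which supports delta shocks.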
Symmetry, stability, and computation of degenerate lasing modes
NASA Astrophysics Data System (ADS)
Liu, David; Zhen, Bo; Ge, Li; Hernandez, Felipe; Pick, Adi; Burkhardt, Stephan; Liertzer, Matthias; Rotter, Stefan; Johnson, Steven G.
2017-02-01
We present a general method to obtain the stable lasing solutions for the steady-state ab initio lasing theory (SALT) for the case of a degenerate symmetric laser in two dimensions (2D). We find that under most regimes (with one pathological exception), the stable solutions are clockwise and counterclockwise circulating modes, generalizing previously known results of ring lasers to all 2D rotational symmetry groups. Our method uses a combination of semianalytical solutions close to lasing threshold and numerical solvers to track the lasing modes far above threshold. Near threshold, we find closed-form expressions for both circulating modes and other types of lasing solutions as well as for their linearized Maxwell-Bloch eigenvalues, providing a simple way to determine their stability without having to do a full nonlinear numerical calculation. Above threshold, we show that a key feature of the circulating mode is its "chiral" intensity pattern, which arises from spontaneous symmetry breaking of mirror symmetry, and whose symmetry group requires that the degeneracy persists even when nonlinear effects become important. Finally, we introduce a numerical technique to solve the degenerate SALT equations far above threshold even when spatial discretization artificially breaks the degeneracy.
The impact of turbulent fluctuations on light propagation in a controlled environment
NASA Astrophysics Data System (ADS)
Matt, Silvia; Hou, Weilin; Goode, Wesley
2014-05-01
Underwater temperature and salinity microstructure can lead to localized changes in the index of refraction and can be a limiting factor in oceanic environments. This optical turbulence can affect electro-optical (EO) signal transmission, with impacts on applications ranging from diver visibility to active and passive remote sensing. To quantify the scope of the impacts of turbulent flows on EO signal transmission, and to examine and mitigate turbulence effects, we perform experiments in a controlled turbulence environment allowing the variation of turbulence intensity. This controlled turbulence setup is implemented at the Naval Research Laboratory Stennis Space Center (NRLSSC). Convective turbulence is generated in a classical Rayleigh-Benard tank and the turbulent flow is quantified using a state-of-the-art suite of sensors that includes high-resolution Acoustic Doppler Velocimeter profilers and fast thermistor probes. The measurements are complemented by very high-resolution non-hydrostatic numerical simulations. These computational fluid dynamics simulations allow for a more complete characterization of the convective flow in the laboratory tank than would be provided by measurements alone. Optical image degradation in the tank is assessed in relation to turbulence intensity. This unique approach of integrating optical techniques, turbulence measurements, and numerical simulations helps advance our understanding of how to mitigate the effects of turbulence on underwater optical signal transmission, as well as of the use of optical techniques to probe oceanic processes.
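For reference, the control parameter of the classical Rayleigh-Benard configuration used here is the Rayleigh number,

\[
\mathrm{Ra} = \frac{g\,\alpha\,\Delta T\,H^{3}}{\nu\,\kappa},
\]

with g gravity, alpha the thermal expansion coefficient, Delta T the imposed temperature difference, H the fluid depth, nu the kinematic viscosity, and kappa the thermal diffusivity; varying Delta T is the natural way to vary turbulence intensity in such a tank.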
Numerical simulation of base flow of a long range flight vehicle
NASA Astrophysics Data System (ADS)
Saha, S.; Rathod, S.; Chandra Murty, M. S. R.; Sinha, P. K.; Chakraborty, Debasis
2012-05-01
Numerical exploration of the base flow of a long-range flight vehicle is presented for different flight conditions. Three-dimensional Navier-Stokes equations are solved along with a k-ɛ turbulence model using commercial CFD software. The simulation captured all essential flow features, including flow separation at the base shoulder, shear layer formation at the jet boundary, and recirculation in the base region. With the increase in altitude, the plume of the rocket exhaust is seen to bulge more and more, causing more intense interaction between the free stream and the rocket plume and leading to higher gas temperatures in the base cavity. The flow field in the base cavity is investigated in more detail and is found to be fairly uniform at different instants of time. The presence of the heat shield is seen to reduce hot gas entry into the cavity region due to a different recirculation pattern in the base region. The computed temperature history obtained from conjugate heat transfer analysis is found to compare very well with flight-measured data.
THz Beam Shaper Realizing Fan-Out Patterns
NASA Astrophysics Data System (ADS)
Liebert, K.; Rachon, M.; Siemion, A.; Suszek, J.; But, D.; Knap, W.; Sypek, M.
2017-08-01
Fan-out elements create an array of beams radiating at particular angles about the propagation axis. Therefore, they are able to form a matrix of equidistant spots in the far-field diffraction region. In this work, we report on the first fan-out structures designed for the THz range of radiation. Two types of light-dividing fan-out structures are demonstrated: (i) a 3×1 fan-out structure based on an optimized binary phase grating and (ii) a 3×3 fan-out structure designed on the basis of the well-known Dammann grating. The structures were generated numerically and manufactured using the 3D printing technique with polyamide PA12. To obtain equal powers and symmetry of the diffracted beams, a computer-aided optimization algorithm was used. Diffractive optical elements designed for 140 and 282 GHz were evaluated experimentally at both frequencies using illumination with a wavefront coming from a point-like source. The described fan-out elements formed uniform-intensity, equidistant energy distributions, in agreement with the numerical simulations.
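The far-field fan-out of a binary phase grating can be previewed numerically with a scalar thin-element model; a sketch only, since the duty cycle and phase depth below are placeholders, not the optimized values that (per the abstract) come from the computer-aided algorithm equalizing the diffraction orders:

    import numpy as np

    period, n_periods = 64, 32
    duty, depth = 0.5, np.pi             # placeholder grating parameters (unoptimized)
    cell = np.where(np.arange(period) < duty * period, 0.0, depth)
    field = np.exp(1j * np.tile(cell, n_periods))        # unit-amplitude phase screen
    intensity = np.abs(np.fft.fft(field))**2 / field.size**2
    orders = {m: intensity[(m * n_periods) % field.size] for m in (-2, -1, 0, 1, 2)}
    print(orders)                        # relative power per diffraction order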
Robust numerical solution of the reservoir routing equation
NASA Astrophysics Data System (ADS)
Fiorentini, Marcello; Orlandini, Stefano
2013-09-01
The robustness of numerical methods for the solution of the reservoir routing equation is evaluated. The methods considered in this study are: (1) the Laurenson-Pilgrim method, (2) the fourth-order Runge-Kutta method, and (3) the fixed-order Cash-Karp method. Method (1) is unable to handle nonmonotonic outflow rating curves. Method (2) is found to fail under critical conditions, occurring especially at the end of inflow recession limbs, when large time steps (greater than 12 min in this application) are used. Method (3) is computationally intensive and does not resolve the limitations of method (2). The limitations of method (2) can be efficiently overcome by reducing the time step in the critical phases of the simulation so as to ensure that the water level remains inside the domains of the storage function and the outflow rating curve. Incorporating a simple backstepping procedure implementing this control into method (2) yields a robust and accurate reservoir routing method that can be safely used in distributed time-continuous catchment models.
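A minimal sketch of the described fix: an RK4 step wrapped in a backstepping control that halves the step whenever the new storage would leave the admissible domain. Function names and the sample inflow/rating curves are illustrative, not the paper's test cases:

    import numpy as np

    def rk4_step(f, t, s, dt):
        k1 = f(t, s); k2 = f(t + dt/2, s + dt*k1/2)
        k3 = f(t + dt/2, s + dt*k2/2); k4 = f(t + dt, s + dt*k3)
        return s + dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0

    def route(inflow, outflow, s0, t_end, dt, s_max):
        """Reservoir routing dS/dt = I(t) - Q(S) with step backstepping."""
        f = lambda t, s: inflow(t) - outflow(s)
        t, s, out = 0.0, s0, [(0.0, s0)]
        while t < t_end:
            h = min(dt, t_end - t)
            s_new = rk4_step(f, t, s, h)
            while not (0.0 <= s_new <= s_max) and h > 1e-6:
                h *= 0.5                                  # backstep into the valid domain
                s_new = rk4_step(f, t, s, h)
            t, s = t + h, s_new
            out.append((t, s))
        return out

    inflow = lambda t: 50.0 * np.exp(-((t - 3600.0) / 900.0)**2)  # Gaussian hydrograph
    outflow = lambda s: 0.02 * max(s, 0.0)**0.5                   # toy rating curve
    hydro = route(inflow, outflow, s0=100.0, t_end=7200.0, dt=60.0, s_max=1e6)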
Multitemperature compaction model of a magma melt in the asthenosphere: A numerical approach
NASA Astrophysics Data System (ADS)
Pak, V. V.
2007-09-01
A numerical compaction model of a fluid in a viscous skeleton is developed, taking a phase transition into account. The temperatures of the phases are different. The solution is found by the method of asymptotic expansion relative to the incompressible variant, which removes a number of computational problems related to the weak compressibility of the skeleton. For each approximation, the problem is solved by the finite element method. The process of 2-D compaction of a magmatic melt in the asthenosphere under a fault zone is examined for one- and two-temperature cases. The magmatic flow concentrates in this region due to a lower pore pressure. Higher-temperature magma entering from lower levels causes local heating of the skeleton and intense melting of its fusible component. In the two-temperature model, a magma concentration anomaly develops under the fault zone. The fundamental limitations substantially complicating the corresponding calculations within the framework of a one-temperature model are pointed out, and the necessity of applying a multitemperature variant is substantiated.
Hidden Markov Model-Based CNV Detection Algorithms for Illumina Genotyping Microarrays.
Seiser, Eric L; Innocenti, Federico
2014-01-01
Somatic alterations in DNA copy number have been well studied in numerous malignancies, yet the role of germline DNA copy number variation in cancer is still emerging. Genotyping microarrays generate allele-specific signal intensities to determine genotype, but may also be used to infer DNA copy number using additional computational approaches. Numerous tools have been developed to analyze Illumina genotype microarray data for copy number variant (CNV) discovery, although commonly utilized algorithms freely available to the public employ approaches based upon the use of hidden Markov models (HMMs). QuantiSNP, PennCNV, and GenoCN utilize HMMs with six copy number states but vary in how transition and emission probabilities are calculated. Performance of these CNV detection algorithms has been shown to be variable between both genotyping platforms and data sets, although HMM approaches generally outperform other current methods. Low sensitivity is prevalent with HMM-based algorithms, suggesting the need for continued improvement in CNV detection methodologies.
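For orientation, a minimal sketch of the shared HMM machinery, with placeholder emission and transition log-probabilities standing in for the quantities that QuantiSNP, PennCNV, and GenoCN each compute differently:

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Most probable path through the six copy-number states.
    log_emit: (n_probes, 6) per-probe log-likelihoods from the signal
    model; log_trans: (6, 6); log_init: (6,). All placeholders here."""
    n, k = log_emit.shape
    score = log_init + log_emit[0]
    back = np.zeros((n, k), dtype=int)
    for i in range(1, n):
        cand = score[:, None] + log_trans      # cand[from_state, to_state]
        back[i] = np.argmax(cand, axis=0)
        score = cand[back[i], np.arange(k)] + log_emit[i]
    path = np.empty(n, dtype=int)
    path[-1] = int(np.argmax(score))
    for i in range(n - 1, 0, -1):              # backtrace
        path[i - 1] = back[i, path[i]]
    return path
```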
Twisting Anderson pseudospins with light: Quench dynamics in terahertz-pumped BCS superconductors
NASA Astrophysics Data System (ADS)
Chou, Yang-Zhi; Liao, Yunxiang; Foster, Matthew S.
2017-03-01
We study the preparation (pump) and the detection (probe) of far-from-equilibrium BCS superconductor dynamics in THz pump-probe experiments. In a recent experiment [R. Matsunaga, Y. I. Hamada, K. Makise, Y. Uzawa, H. Terai, Z. Wang, and R. Shimano, Phys. Rev. Lett. 111, 057002 (2013), 10.1103/PhysRevLett.111.057002], an intense monocycle THz pulse with center frequency ω ≃ Δ was injected into a superconductor with BCS gap Δ; the subsequent postpump evolution was detected via the optical conductivity. It was argued that nonlinear coupling of the pump to the Anderson pseudospins of the superconductor induces coherent dynamics of the Higgs (amplitude) mode Δ(t). We validate this picture in a two-dimensional BCS model with a combination of exact numerics and the Lax reduction method, and we compute the nonequilibrium phase diagram as a function of the pump intensity. The main effect of the pump is to scramble the orientations of Anderson pseudospins along the Fermi surface by twisting them in the xy plane. We show that more intense pump pulses can induce a far-from-equilibrium phase of gapless superconductivity ("phase I"), originally predicted in the context of interaction quenches in ultracold atoms. We show that the THz pump method can reach phase I at much lower energy densities than an interaction quench, and we demonstrate that Lax reduction (tied to the integrability of the BCS Hamiltonian) provides a general quantitative tool for computing coherent BCS dynamics. We also calculate the Mattis-Bardeen optical conductivity for the nonequilibrium states discussed here.
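A schematic of the pseudospin dynamics invoked above, under assumed sign and normalization conventions (the paper's pump coupling and Lax analysis are not reproduced): each Anderson pseudospin precesses in a self-consistent field set by the instantaneous gap.

```python
import numpy as np

def pseudospin_rhs(s, eps, g):
    """ds_k/dt = b_k x s_k with b_k = (-2 Re D, -2 Im D, 2 eps_k) and the
    self-consistent gap D = g * sum_k (s^x_k - i s^y_k); conventions are
    schematic."""
    d = g * (s[:, 0].sum() - 1j * s[:, 1].sum())
    b = np.column_stack([np.full(eps.size, -2.0 * d.real),
                         np.full(eps.size, -2.0 * d.imag),
                         2.0 * eps])
    return np.cross(b, s)

def rk4_evolve(s, eps, g, dt, n_steps):
    """Explicit RK4 stepping of the pseudospin array s of shape (N, 3)."""
    for _ in range(n_steps):
        k1 = pseudospin_rhs(s, eps, g)
        k2 = pseudospin_rhs(s + dt / 2 * k1, eps, g)
        k3 = pseudospin_rhs(s + dt / 2 * k2, eps, g)
        k4 = pseudospin_rhs(s + dt * k3, eps, g)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return s

# half-filled band of 200 levels, fully x-polarized initial state
eps = np.linspace(-1.0, 1.0, 200)
s0 = np.column_stack([np.ones_like(eps), np.zeros_like(eps), np.zeros_like(eps)])
s_final = rk4_evolve(s0, eps, g=0.02, dt=0.01, n_steps=1000)
```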
NASA Astrophysics Data System (ADS)
Warsta, L.; Karvonen, T.
2017-12-01
There are currently 25 shooting and training areas in Finland managed by the Finnish Defence Forces (FDF), where military activities can cause contamination of open waters and groundwater reservoirs. In the YMPYRÄ project, a computer software framework is being developed that combines existing open environmental data and proprietary information collected by FDF with computational models to investigate current and prevent future environmental problems. A data-centric philosophy is followed in the development of the system, i.e. the models are updated and extended to handle available data from different areas. The results generated by the models are summarized as easily understandable flow and risk maps that can be opened in GIS programs and used in environmental assessments by experts. Substances investigated with the system include explosives and metals such as lead, and both surface- and groundwater-dominated areas can be simulated. The YMPYRÄ framework is composed of a three-dimensional soil and groundwater flow model, several solute transport models, and an uncertainty assessment system. Solute transport models in the framework include particle-based, stream tube, and finite volume based approaches. The models can be used to simulate solute dissolution from the source area, transport in the unsaturated layers to groundwater, and finally migration in groundwater to water extraction wells and springs. The processes represented include advection, dispersion, equilibrium adsorption on soil particles, solubility and dissolution from the solute phase, and dendritic solute decay chains. Correct numerical solutions were confirmed by comparing results to analytical 1D and 2D solutions and by comparing the numerical solutions to each other. The particle-based and stream tube type solute transport models were useful because they complement the traditional finite volume approach, which in certain circumstances produced numerical dispersion due to the piecewise solution of the governing equations in computational grids, and which involved computationally intensive and in some cases unstable iterative solutions. The YMPYRÄ framework is being developed by the WaterHope, Gain Oy, and SITO Oy consulting companies and funded by FDF.
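A minimal sketch of the particle-based alternative mentioned above, assuming a 1D column with illustrative parameter values: random-walk particle tracking reproduces advection, dispersion, and linear equilibrium sorption without the grid-induced numerical dispersion of finite-volume schemes.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_walk_step(x, v, D, R, dt):
    """One step for the advection-dispersion equation with retardation R:
    drift at the retarded velocity v/R plus a dispersive jump drawn from
    N(0, 2*D/R*dt)."""
    return x + (v / R) * dt + rng.normal(0.0, np.sqrt(2.0 * D / R * dt), x.size)

# 1000 particles released at the source, tracked hourly for 100 steps
x = np.zeros(1000)
for _ in range(100):
    x = random_walk_step(x, v=1e-5, D=1e-7, R=2.0, dt=3600.0)
```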
NASA Astrophysics Data System (ADS)
Velazquez, Antonio; Swartz, R. Andrew
2013-04-01
Renewable energy sources like wind are important technologies, useful to alleviate the current fossil-fuel crisis. Capturing wind energy more efficiently has resulted in the emergence of more sophisticated wind turbine designs, particularly Horizontal-Axis Wind Turbines (HAWTs). To promote efficiency, traditional finite element methods have been widely used to characterize the aerodynamics of these types of multi-body systems and improve their design. Given their aeroelastic behavior, tapered-swept blades offer the potential to optimize energy capture and decrease fatigue loads. Nevertheless, modeling such complex geometries requires great computational effort, necessitating tradeoffs between faster computation times at lower cost and reliability and numerical accuracy. Indeed, the computational cost and numerical effort invested in reproducing dependable aerodynamics of these complex-shape beams with traditional FE methods are sometimes prohibitive. A condensed Spinning Finite Element (SFE) method is presented in this study to alleviate this issue by modeling wind-turbine rotor blades with tapered-swept cross-section variations of arbitrary order via Lagrangian equations. Axial-flexural-torsional coupling is carried out on axial deformation, torsion, in-plane bending, and out-of-plane bending using super-convergent elements. Special attention is paid to the case of damped yaw effects, expressed within the described skew-symmetric damped gyroscopic matrix. The dynamics of the model are analyzed through modal analysis with complex-number eigenfrequencies. By means of mass, damped gyroscopic, and stiffness (axial-flexural-torsional coupling) matrix condensation (order reduction), numerical analysis is carried out for several prototypes with different tapered, swept, and curved variation intensities, and for a practical range of spinning velocities at different rotation angles. A convergence study of the resulting natural frequencies is performed to evaluate the dynamic collateral effects of tapered-swept blade profiles in spinning motion using this new model. A stability analysis of the boundary conditions of the postulated model is carried out to test the convergence and integrity of the mathematical model. The proposed framework promises to be particularly suitable for characterizing models with complex-shape cross-sections at low computational cost.
Effect of a Diffusion Zone on Fatigue Crack Propagation in Layered FGMs
NASA Astrophysics Data System (ADS)
Hauber, Brett; Brockman, Robert; Paulino, Glaucio
2008-02-01
Research into functionally graded materials (FGMs) has led to advances in our ability to analyze cracks. However, two prominent aspects remain relatively unexplored: 1) development and validation of modeling methods for fatigue crack propagation in FGMs, and 2) experimental validation of stress intensity models in engineered materials such as two-phase monolithic and graded materials. This work addresses some of these problems for a limited set of conditions, material systems (e.g., Ti/TiB), and material gradients. Numerical analyses are conducted for single edge notch bend (SENB) specimens. Stress intensity factors are computed using the specialized finite element code I-Franc (Illinois Fracture Analysis Code), which is tailored for both homogeneous and graded materials, as well as Franc2DL and ABAQUS. Crack extension is considered by means of specified crack increments, together with fatigue evaluations to predict crack propagation life. Results will be used to determine linear material gradient parameters that are significant for the prediction of fatigue crack growth behavior.
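A sketch of the crack-increment bookkeeping under a Paris-type growth law with position-dependent coefficients, one simple way to represent a graded zone; the stress intensity solution itself would come from the finite element codes named above, and every function and value below is illustrative.

```python
import numpy as np

def fatigue_life(a0, a_crit, delta_K, C, m, da=1e-5):
    """Cycle count from integrating da/dN = C(a) * dK(a)^m(a); passing
    C, m, delta_K as callables of crack length a lets the Paris
    constants vary through a graded (FGM) region."""
    a, N = a0, 0.0
    while a < a_crit:
        N += da / (C(a) * delta_K(a) ** m(a))
        a += da
    return N

# illustrative gradient: C varies linearly across the diffusion zone
N_f = fatigue_life(a0=1e-3, a_crit=5e-3,
                   delta_K=lambda a: 10.0 * np.sqrt(a / 1e-3),
                   C=lambda a: 1e-11 * (1.0 + 50.0 * a),
                   m=lambda a: 3.0)
```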
Apyari, V V; Dmitrienko, S G; Ostrovskaya, V M; Anaev, E K; Zolotov, Y A
2008-07-01
Polyurethane foam (PUF) has been suggested as a solid polymeric reagent for the determination of nitrite. The determination is based on the diazotization of the terminal toluidine groups of PUF with nitrite in acidic medium, followed by coupling of the polymeric diazonium cation with 3-hydroxy-7,8-benzo-1,2,3,4-tetrahydroquinoline. The intensely colored polymeric azo dye formed in this reaction can be used as a convenient analytical form for the determination of nitrite by diffuse reflectance spectroscopy (c_min = 0.7 ng mL^-1). The possibility of using a desktop scanner, digital camera, and computer data processing for the numerical evaluation of the color intensity of the polymeric azo dye has been investigated. A scanner and digital camera can be used for the determination of nitrite with the same sensitivity and reproducibility as diffuse reflectance spectroscopy. The approach developed was applied to the determination of nitrite in river water and human exhaled breath condensate.
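A minimal sketch of the scanner/camera data-processing step, assuming a scanned image of the colored foam; treating one RGB channel as a reflectance and taking -log10 is an assumed transform, not necessarily the authors' exact procedure.

```python
import numpy as np
from PIL import Image

def color_intensity(path, channel=1):
    """Absorbance-like signal from a scanned PUF image: average the
    chosen RGB channel (green by default) over the spot and take
    -log10 of the value normalized to white (255)."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    refl = np.clip(img[..., channel] / 255.0, 1e-3, 1.0)
    return float(-np.log10(refl).mean())

# calibration would plot this signal against nitrite standards
```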
Laser-plasma interactions for fast ignition
NASA Astrophysics Data System (ADS)
Kemp, A. J.; Fiuza, F.; Debayle, A.; Johzaki, T.; Mori, W. B.; Patel, P. K.; Sentoku, Y.; Silva, L. O.
2014-05-01
In the electron-driven fast-ignition (FI) approach to inertial confinement fusion, petawatt laser pulses are required to generate MeV electrons that deposit several tens of kilojoules in the compressed core of an imploded DT shell. We review recent progress in the understanding of intense laser-plasma interactions (LPI) relevant to FI. Increases in computational and modelling capabilities, as well as algorithmic developments, have enhanced our ability to perform multi-dimensional particle-in-cell simulations of LPI at relevant scales. We discuss the physics of the interaction in terms of the laser absorption fraction, the laser-generated electron spectra, divergence, and their temporal evolution. Scaling with irradiation conditions such as laser intensity is considered, as well as the dependence on plasma parameters. Different numerical modelling approaches and configurations are addressed, providing an overview of the modelling capabilities and limitations. In addition, we discuss the comparison of simulation results with experimental observables. In particular, we address the question of the surrogacy of today's experiments for the full-scale FI problem.
NASA Astrophysics Data System (ADS)
Rebelo, Marina de Sá; Aarre, Ann Kirstine Hummelgaard; Clemmesen, Karen-Louise; Brandão, Simone Cristina Soares; Giorgi, Maria Clementina; Meneghetti, José Cláudio; Gutierrez, Marco Antonio
2009-12-01
A method to compute three-dimensional (3D) left ventricle (LV) motion and a color-coded visualization scheme for its qualitative analysis in SPECT images is proposed. It is used to investigate some aspects of Cardiac Resynchronization Therapy (CRT). The method was applied to 3D gated-SPECT image sets from normal subjects and patients with severe idiopathic heart failure, before and after CRT. Color-coded visualization maps representing the LV regional motion showed significant differences between patients and normal subjects. Moreover, they indicated a difference between the two groups. Numerical results of regional mean values representing the intensity and direction of movement in the radial direction are presented. A difference of one order of magnitude in the intensity of the movement of patients in relation to normal subjects was observed. Quantitative and qualitative parameters gave good indications of the potential application of the technique to the diagnosis and follow-up of patients submitted to CRT.
Health Informatics for Neonatal Intensive Care Units: An Analytical Modeling Perspective
Mench-Bressan, Nadja; McGregor, Carolyn; Pugh, James Edward
2015-01-01
The effective use of data within intensive care units (ICUs) has great potential to create new cloud-based health analytics solutions for disease prevention or earlier detection of condition onset. The Artemis project aims to achieve these goals in the area of neonatal ICUs (NICU). In this paper, we propose an analytical model for the Artemis cloud project, which will be deployed at McMaster Children’s Hospital in Hamilton. We collect not only physiological data but also data from the infusion pumps attached to NICU beds. Using the proposed analytical model, we predict the amount of storage, memory, and computational power required for the system. Capacity planning and tradeoff analysis become more accurate and systematic when the proposed analytical model is applied. Numerical results are obtained using real inputs acquired from McMaster Children’s Hospital and a pilot deployment of the system at The Hospital for Sick Children (SickKids) in Toronto. PMID:27170907
Probabilistic analysis of tsunami hazards
Geist, E.L.; Parsons, T.
2006-01-01
Determining the likelihood of a disaster is a key component of any comprehensive hazard assessment. This is particularly true for tsunamis, even though most tsunami hazard assessments have in the past relied on scenario or deterministic models. We discuss probabilistic tsunami hazard analysis (PTHA) from the standpoint of integrating computational methods with empirical analysis of past tsunami runup. PTHA is derived from probabilistic seismic hazard analysis (PSHA), with the main difference being that PTHA must account for far-field sources. The computational methods rely on numerical tsunami propagation models, rather than the empirical attenuation relationships used in PSHA, to determine ground motions. Because a number of source parameters affect local tsunami runup height, PTHA can become complex and computationally intensive. Empirical analysis can function in one of two ways, depending on the length and completeness of the tsunami catalog. For site-specific studies where sufficient tsunami runup data are available, hazard curves can be derived primarily from empirical analysis, with computational methods used to highlight deficiencies in the tsunami catalog. For region-wide analyses and sites with little to no tsunami data, a computationally based method such as Monte Carlo simulation is the primary means of establishing tsunami hazards. Two case studies that describe how computational and empirical methods can be integrated are presented for Acapulco, Mexico (site-specific) and the U.S. Pacific Northwest coastline (region-wide analysis).
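A toy Monte Carlo sketch in the spirit of the region-wide approach: Poisson source occurrences over a synthetic catalog, a stand-in sampler in place of a numerical propagation model, and Poissonian conversion of exceedance rates to annual probabilities. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def hazard_curve(n_years, rate, sample_runup, thresholds):
    """Annual exceedance probability of each runup threshold from a
    synthetic n_years catalog of events occurring at `rate` per year."""
    n_events = rng.poisson(rate * n_years)
    runups = np.array([sample_runup() for _ in range(n_events)])
    annual_rate = np.array([(runups > h).sum() / n_years for h in thresholds])
    return 1.0 - np.exp(-annual_rate)      # Poissonian exceedance

curve = hazard_curve(100000, rate=0.01,
                     sample_runup=lambda: rng.lognormal(0.5, 0.8),
                     thresholds=np.array([1.0, 2.0, 5.0]))
```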
Petersson, K J F; Friberg, L E; Karlsson, M O
2010-10-01
Computer models of biological systems grow more complex as computing power increases. Often these models are defined as differential equations for which no analytical solutions exist. Numerical integration is used to approximate the solution; this can be computationally intensive and time-consuming, and it can account for a large proportion of the total computer runtime. The performance of different integration methods depends on the mathematical properties of the differential equation system at hand. In this paper we investigate the possibility of runtime gains by calculating parts of, or the whole, differential equation system at given time intervals, outside of the differential equation solver. This approach was tested on nine models defined as differential equations, with the goal of reducing runtime while maintaining model fit, based on the objective function value. The software used was NONMEM. In four models the computational runtime was successfully reduced (by 59-96%). The differences in parameter estimates, compared to using only the differential equation solver, were less than 12% for all fixed-effects parameters. For the variance parameters, estimates were within 10% for the majority of the parameters. Population and individual predictions were similar, and the differences in OFV were between 1 and -14 units. When computational runtime seriously affects the usefulness of a model, we suggest evaluating this approach for repetitive elements of model building and evaluation, such as covariate inclusions or bootstraps.
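A toy illustration of the idea, assuming a one-compartment model (NONMEM specifics are not reproduced): a costly time-varying factor is tabulated once on a fixed grid and interpolated inside the right-hand side, instead of being re-evaluated at every point the solver visits.

```python
import numpy as np
from scipy.integrate import solve_ivp

# stand-in for an expensive model sub-expression, tabulated every 0.5 h
t_grid = np.linspace(0.0, 24.0, 49)
costly_grid = np.exp(-0.3 * t_grid) * (1.0 + 0.2 * np.sin(t_grid))

def rhs(t, y, ke=0.1):
    # table lookup replaces re-evaluation inside the ODE solver
    return -ke * np.interp(t, t_grid, costly_grid) * y

sol = solve_ivp(rhs, (0.0, 24.0), [100.0], rtol=1e-8)
```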
Numerical estimation of cavitation intensity
NASA Astrophysics Data System (ADS)
Krumenacker, L.; Fortes-Patella, R.; Archer, A.
2014-03-01
Cavitation may appear in turbomachinery and in hydraulic orifices, venturis, or valves, leading to performance losses, vibrations, and material erosion. This study proposes a new method to predict the cavitation intensity of the flow, based on post-processing of unsteady CFD calculations. The paper presents analyses of the evolution of cavitating structures at two different scales: • A macroscopic one, in which the growth of cavitating structures is calculated using URANS software based on a homogeneous model. Simulations of cavitating flows are computed using a barotropic law that accounts for the presence of air and interfacial tension, with Reboud's correction applied to the turbulence model. • A small one, in which a Rayleigh-Plesset solver calculates the acoustic energy generated by the implosion of the vapor/gas bubbles, with input parameters taken from the macroscopic scale. The volume damage rate of the material during the incubation time is assumed to be a fraction of the cumulated acoustic energy received by the solid wall. The proposed analysis method is applied to calculations on hydrofoil and orifice geometries. Comparisons between model results and experimental works concerning flow characteristics (cavity size, pressure, velocity) as well as pitting (erosion area, relative cavitation intensity) are presented.
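A sketch of the small-scale step, assuming a standard Rayleigh-Plesset form with polytropic gas content and an illustrative driving pressure standing in for the macroscopic URANS output; the paper's air-content and interfacial-tension treatment is not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rayleigh_plesset(t, y, p_inf, rho=998.0, sigma=0.0728, mu=1e-3,
                     p_v=2339.0, R0=50e-6, p_g0=101325.0, k=1.4):
    """RHS for bubble radius R and wall velocity dR/dt, with gas content
    following the polytropic law p_g = p_g0 * (R0/R)**(3k)."""
    R, Rdot = y
    p_B = p_v + p_g0 * (R0 / R) ** (3.0 * k)
    Rddot = ((p_B - p_inf(t)) / rho - 1.5 * Rdot**2
             - 4.0 * mu * Rdot / (rho * R) - 2.0 * sigma / (rho * R)) / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 1e-4), [50e-6, 0.0],
                args=(lambda t: 101325.0 * (1.0 + 0.5 * np.sin(2e5 * t)),),
                method="LSODA", rtol=1e-8)
```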
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Yi; Zhang Jingtao; Xu Zhizhan
2010-07-15
The exact algebraic solution recently obtained by Guo, Wu, and Van Woerkom (Phys. Rev. A 73 (2006) 023419) made possible accurate calculations of the quasienergies of a driven two-level atom with an arbitrary original energy spacing and laser intensity. Due to the complication of the analytic solutions, which involve an infinite number of infinite determinants, many mathematical difficulties must be overcome to obtain precise values of the quasienergies. In this paper, with a further developed algebraic method, we show how to solve the computational problem completely, and results are presented in a data table. With this table, one can easily obtain all quasienergies of a driven two-level atom with an arbitrary original energy spacing and arbitrary intensity and frequency of the driving laser. The numerical solution technique developed here can be applied to the calculation of Freeman resonances in photoelectron energy spectra. As an example application, we show how to use the data table to calculate the peak laser intensity at which a Freeman resonance occurs in the transition between the ground Xe 5p P_3/2 state and the Rydberg state Xe 8p P_3/2.
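For orientation, a purely numerical alternative to the algebraic route: quasienergies of a driven two-level system from a truncated Floquet matrix (hbar = 1, drive (Ω/2)σx·cos ωt). The conventions and truncation are assumptions, and this does not reproduce the paper's exact-determinant method.

```python
import numpy as np

def quasienergies(omega0, omega, rabi, n_max=40):
    """Eigenvalues of the truncated Floquet matrix for
    H(t) = (omega0/2) sigma_z + (rabi/2) sigma_x cos(omega t):
    diagonal blocks (omega0/2) sigma_z + n*omega, off-diagonal blocks
    (rabi/4) sigma_x coupling photon sectors n and n+1."""
    sz = 0.5 * omega0 * np.diag([1.0, -1.0])
    coup = 0.25 * rabi * np.array([[0.0, 1.0], [1.0, 0.0]])
    nblk = 2 * n_max + 1
    H = np.zeros((2 * nblk, 2 * nblk))
    for i, n in enumerate(range(-n_max, n_max + 1)):
        b = 2 * i
        H[b:b + 2, b:b + 2] = sz + n * omega * np.eye(2)
        if i + 1 < nblk:
            H[b:b + 2, b + 2:b + 4] = coup
            H[b + 2:b + 4, b:b + 2] = coup
    ev = np.linalg.eigvalsh(H)
    return np.sort(((ev + omega / 2) % omega) - omega / 2)  # fold to one zone
```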
NASA Astrophysics Data System (ADS)
Mei, Dongcheng; Xie, Chongwei; Zhang, Li
2003-11-01
We study the effects of correlations between additive and multiplicative noise on the relaxation time in a bistable system driven by cross-correlated noise. Using the projection-operator method, we derive an analytic expression for the relaxation time Tc of the system as a function of the additive (α) and multiplicative (D) noise intensities, the correlation intensity λ of the noise, and the correlation time τ of the noise. After introducing the dimensionless noise-intensity ratio R = D/α and performing numerical computations, we find the following: (i) For R < 1, the relaxation time Tc increases as R increases. (ii) For R ⩾ 1, there is a one-peak structure in the Tc-R plot, and the effects of cross-correlated noise on the relaxation time are very notable. (iii) For R < 1, Tc hardly changes with either λ or τ, while for R ⩾ 1, Tc decreases as λ increases but increases as τ increases. Thus λ and τ play opposite roles in Tc: λ enhances the fluctuation decay of the dynamical variable, and τ slows it down.
Framework of distributed coupled atmosphere-ocean-wave modeling system
NASA Astrophysics Data System (ADS)
Wen, Yuanqiao; Huang, Liwen; Deng, Jian; Zhang, Jinfeng; Wang, Sisi; Wang, Lijun
2006-05-01
In order to study the interactions between the atmosphere and ocean, as well as their important role in the intense weather systems of coastal areas, and to improve the forecasting of hazardous weather processes in coastal areas, a coupled atmosphere-ocean-wave modeling system has been developed. The agent-based environment framework for linking models allows flexible and dynamic information exchange between models. For the purpose of flexibility, portability, and scalability, the framework of the whole system takes a multi-layer architecture that includes a user interface layer, a computational layer, and a service-enabling layer. The numerical experiment presented in this paper demonstrates the performance of the distributed coupled modeling system.
Propagation of quasifracture in viscoelastic media under low-cycle repeated stressing
NASA Technical Reports Server (NTRS)
Liu, X. P.; Hsiao, C. C.
1985-01-01
The propagation of a craze as a quasifracture under repeated cyclic stressing in polymeric systems has recently been under intensive investigation. Based upon a time-dependent crazing theory, the governing differential equation describing the propagation of a single craze as a quasifracture in an infinite viscoelastic plate has been solved for sinusoidal stresses. Numerical methods have been employed to obtain the normalized craze length as a function of time. The computed results indicate that the growth of a quasifracture may decelerate and its length may decrease, indicating that the propagation velocity can reverse. This behavior may be consistent with the observed and much discussed craze healing and the enclosure model in the fatigue and fracture of solids.
NASA Technical Reports Server (NTRS)
Brosius, Jeffrey W. (Principal Investigator)
1996-01-01
The plasma properties and magnetic field structure of the solar corona were determined using coordinated observations obtained with NASA/GSFC's Solar EUV Rocket Telescope and Spectrograph (SERTS), the Very Large Array (VLA), and Kitt Peak photospheric longitudinal magnetograms. A problem was identified with the SERTS calibration as determined from laboratory measurements. A revised calibration curve was derived by requiring that the numerous available measured line intensity ratios agree with their respective theoretical values. Densities were derived from line intensity ratios, and active region densities were found to typically exceed quiet Sun densities by factors of only about 2. The active region density was found to remain constant across the SERTS slit, despite the fact that the emission line intensities vary significantly. This indicates that the product of the path length and the volume filling factor must vary significantly from the active region outskirts to the central core. Filling factors were derived and found to range from much less than one to nearly unity. Wavelength shifts were examined along the SERTS slit in the spatially resolved spectra, but no evidence was found for significant Doppler shifts in active region 7563 or in the quiet Sun. The numerical procedure developed by Monsignori-Fossi and Landini was used to derive the active region and quiet Sun differential emission measure (DEM) from the spatially averaged spectra. A DEM was estimated for each spatial pixel in the two-dimensional active region images by scaling the averaged active region DEM according to the corresponding pixel intensities of SERTS Mg IX, Fe XV, and Fe XVI images. These results, along with density measurements, were used in an IDL computer code which calculated the temperature dependence of the coronal magnetic field in each spatial pixel by minimizing the difference between the observed and calculated 20 and 6 cm microwave brightness temperatures.
NASA Astrophysics Data System (ADS)
Daniluk, Andrzej
2010-03-01
Scientific computing is the field of study concerned with constructing mathematical models and numerical solution techniques and with using computers to analyse and solve scientific and engineering problems. Model-Driven Development (MDD) has been proposed as a means of supporting the software development process through the use of a model-centric approach. This paper surveys the core MDD technology that was used to develop an application that computes the RHEED intensities dynamically for a disordered surface.
New version program summary
Program title: RHEED1DProcess
Catalogue identifier: ADUY_v4_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUY_v4_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 31 971
No. of bytes in distributed program, including test data, etc.: 3 039 820
Distribution format: tar.gz
Programming language: Embarcadero C++ Builder
Computer: Intel Core Duo-based PC
Operating system: Windows XP, Vista, 7
RAM: more than 1 GB
Classification: 4.3, 7.2, 6.2, 8, 14
Catalogue identifier of previous version: ADUY_v3_0
Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2394
Does the new version supersede the previous version?: No
Nature of problem: An application that implements numerical simulations should be constructed according to the CSFAR rules: clear and well-documented, simple, fast, accurate, and robust. A clearly written, externally and internally documented program is much easier to understand and modify. A simple program is much less prone to error and is more easily modified than one that is complicated. Simplicity and clarity also help make the program flexible. Making the program fast has economic benefits. It also allows flexibility because some of the features that make a program efficient can be traded off for greater accuracy. Making the program fast also has the benefit of allowing longer calculations with better resolution. The compromise between speed and accuracy has always posed one of the most troublesome challenges for the programmer. Almost all advances in numerical analysis have come about from trying to reach these twin goals. Changes in the basic algorithms will give greater improvements in accuracy and speed than using special numerical tricks or changing programming language. A robust program works correctly over a broad spectrum of input data.
Solution method: The computational model of the program is based on a dynamical diffraction theory in which the electrons are taken to be diffracted by a potential that is periodic in the dimension perpendicular to the surface. In the case of a disordered surface, the proportional model of the scattering potential can be used, in which the potential of a partially filled layer is taken to be the product of the coverage of that layer and the potential of a fully filled layer: U(θ, z) = Σ_n θ_n(t/τ) U_n(1, z), where U_n(1, z) stands for the potential of the fully filled nth layer and U(θ, z) for the potential of the growing layer.
Reasons for new version: Responding to user feedback, the RHEEDGr_09 program has been upgraded to a standard that allows carrying out computations of the RHEED intensities for a disordered surface. The functionality and documentation of the program have also been improved.
Summary of revisions: The logical structure of the Platform-Specific Model of the RHEEDGr_09 program has been modified according to the scheme shown in Fig. 1*. The class diagram in Fig. 1* is a static view of the main platform-specific elements of the RHEED1DProcess architecture. Fig. 2* provides a dynamic view by showing the simplified creation and destruction sequence diagram for the process. Fig. 3* shows the RHEED1DProcess use case model. As can be seen in Figs. 2-3*, the RHEED1DProcess has been designed as a slave process that runs as a separate thread inside each transaction generated by the master Growth09 program (see pii:S0010-4655(09)00386-5, A. Daniluk, Model-Driven Development for scientific computing. Computations of RHEED intensities for a disordered surface. Part II). The RHEED1DProcess requires the user to provide the appropriate parameters for the crystal structure under investigation. These parameters are loaded from the parameters.ini file at run-time. Instructions on the preparation of the .ini files can be found in the new distribution. The RHEED1DProcess also requires the user to provide the appropriate values of the layer coverage profiles. These values are loaded at run-time from the CoverageProfiles.dat file (generated by the Growth09 master application). The RHEED1DProcess enables carrying out one-dimensional dynamical calculations for the fcc lattice with a two-atom basis and the fcc lattice with a one-atom basis, but the zeroth Fourier component of the scattering potential in the TRHEED1D::crystPotUg() function can be modified according to users' specific application requirements. * The figures mentioned can be downloaded; see "Supplementary material" below.
Unusual features: The program is distributed in the form of the main projects RHEED1DProcess.cbproj and Graph2D0x.cbproj with associated files, and should be compiled using Embarcadero RAD Studio 2010 along with the Together visual-modelling platform. The program should be compiled with English/USA regional and language options.
Additional comments: This version of the RHEED program is designed to run in conjunction with the GROWTH09 (ADVL_v3_0) program. It does not replace the previous, stand-alone, RHEEDGR-09 (ADUY_v3_0) version.
Running time: The typical running time is machine and user-parameter dependent.
References: [1] OMG, Model Driven Architecture Guide Version 1.0.1, 2003.
NASA Technical Reports Server (NTRS)
Gibson, G.; Miller, M.
1967-01-01
Computer program ETC improves the computation of elastic transfer matrices of the Legendre polynomials P_0 and P_1. Rather than carrying out a double integration numerically, one of the integrations is accomplished analytically, and the numerical integration need only be carried out over one variable.
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Shi, J.; Chen, S. S.
2007-01-01
Advances in computing power allow atmospheric prediction models to be run at progressively finer scales of resolution, using increasingly more sophisticated physical parameterizations and numerical methods. The representation of cloud microphysical processes is a key component of these models. Over the past decade, both research and operational numerical weather prediction models have started using more complex microphysical schemes that were originally developed for high-resolution cloud-resolving models (CRMs). A recent report to the United States Weather Research Program (USWRP) Science Steering Committee specifically calls for the replacement of implicit cumulus parameterization schemes with explicit bulk schemes in numerical weather prediction (NWP) as part of a community effort to improve quantitative precipitation forecasts (QPF). An improved Goddard bulk microphysical parameterization is implemented into a state-of-the-art next-generation Weather Research and Forecasting (WRF) model. High-resolution model simulations are conducted to examine the impact of microphysical schemes on two different weather events (a midlatitude linear convective system and an Atlantic hurricane). The results suggest that microphysics has a major impact on the organization and precipitation processes associated with a summer midlatitude convective line system. The 3ICE scheme with a cloud ice-snow-hail configuration led to better agreement with observations in terms of the simulated narrow convective line and rainfall intensity. This is because the 3ICE-hail scheme includes a dense precipitating ice (hail) category with a very fast fall speed (over 10 m/s). For the Atlantic hurricane case, varying the microphysical schemes had no significant impact on the track forecast but did affect the intensity (important for air-sea interaction).
NASA Astrophysics Data System (ADS)
Wang, Jinting; Lu, Liqiao; Zhu, Fei
2018-01-01
The finite element (FE) method is a powerful tool that has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integration in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure. In this way, the task execution time (TET) decreases, so that the scale of the numerical substructure model can increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method, and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes obvious as the mass ratio increases, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even with a large time step and a large time delay.
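A minimal sketch of why CDM wins when damping is diagonal: with lumped M and C the update needs no linear solve, only a sparse product with K. The three-DOF system below is a toy, not the paper's substructure.

```python
import numpy as np
import scipy.sparse as sp

def cdm_step(u, u_prev, f, M_diag, C_diag, K, dt):
    """One central-difference step for M u'' + C u' + K u = f with
    diagonal (lumped) M and C and sparse K; the division is elementwise."""
    a = M_diag / dt**2 + C_diag / (2.0 * dt)
    rhs = (f - K @ u + (2.0 * M_diag / dt**2) * u
           - (M_diag / dt**2 - C_diag / (2.0 * dt)) * u_prev)
    return rhs / a

K = sp.csr_matrix(1e4 * np.array([[2.0, -1, 0], [-1, 2, -1], [0, -1, 2]]))
u_prev = u = np.zeros(3)
for _ in range(1000):
    u, u_prev = cdm_step(u, u_prev, np.array([0.0, 0.0, 1.0]),
                         np.ones(3), 0.05 * np.ones(3), K, 1e-3), u
```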
Numerical developments for short-pulsed Near Infra-Red laser spectroscopy. Part I: direct treatment
NASA Astrophysics Data System (ADS)
Boulanger, Joan; Charette, André
2005-03-01
This two-part study is devoted to the numerical treatment of short-pulsed near-infrared laser spectroscopy. The overall goal is to address the possibility of numerical inverse treatment based on a recently developed direct model for solving the transient radiative transfer equation. This model has been constructed to incorporate the latest improvements in the treatment of short-pulsed laser interaction with semi-transparent media; it combines a discrete-ordinates computation of the implicit source term appearing in the radiative transfer equation with an explicit treatment of the transport of the light intensity using advection schemes, a method encountered in reactive flow dynamics. The incident collimated beam is treated analytically through the Bouguer-Beer-Lambert extinction law. In this first part, the direct model is extended to fully non-homogeneous materials and tested with two different spatial schemes in order to be adapted to the inversion methods presented in the second part. First, the fundamental methods and schemes used in the direct model are presented. Then, tests are conducted by comparison with numerical simulations given as references. Finally, multi-dimensional extensions of the code are provided. This allows presentation of numerical results for short-pulse propagation in 1, 2, and 3D homogeneous and non-homogeneous materials, together with parametric studies of medium properties and pulse shape. For comparison, an integral method adapted to non-homogeneous media irradiated by a pulsed laser beam is also developed for the 3D case.
Geomagnetic Cutoff Rigidity Computer Program: Theory, Software Description and Example
NASA Technical Reports Server (NTRS)
Smart, D. F.; Shea, M. A.
2001-01-01
The access of charged particles to the earth from space through the geomagnetic field has been of interest since the discovery of cosmic radiation. Early cosmic ray measurements found that cosmic ray intensity was ordered by magnetic latitude, and the concept of cutoff rigidity was developed. The pioneering work of Stoermer resulted in the theory of particle motion in the geomagnetic field, but the fundamental mathematical equations developed have 'no solution in closed form'. This difficulty has forced researchers to use the 'brute force' technique of numerically integrating individual trajectories to ascertain the behavior of trajectory families or groups. Many trajectories must be traced in order to determine what energy (or rigidity) a charged particle must have to penetrate the magnetic field and arrive at a specified position. It turned out that the cutoff rigidity was not a simple quantity but had many unanticipated complexities, whose resolution required many hundreds if not thousands of individual trajectory calculations. The accurate calculation of particle trajectories in the earth's magnetic field is a fundamental problem that limited the efficient utilization of cosmic ray measurements during the early years of cosmic ray research. As the power of computers has improved over the decades, the numerical integration procedure has grown more tractable, and magnetic field models of increasing accuracy and complexity have been utilized. This report documents a general FORTRAN computer program to trace the trajectory of a charged particle of specified rigidity from a specified position and direction through a model of the geomagnetic field.
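A sketch of the 'brute force' tracing in a centered-dipole stand-in for the real field models: a test particle of given rigidity is integrated with the charge sign flipped (the usual backward-tracing trick), and escape versus re-entry classifies the rigidity and arrival direction as allowed or forbidden. Constants, thresholds, and the dipole itself are simplifying assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

C, RE = 2.998e8, 6.371e6                     # m/s, m

def b_dipole(r):
    """Centered dipole (~31 uT equatorial), stand-in for IGRF-class models."""
    m = np.array([0.0, 0.0, -8.0e15])        # T m^3
    rn = np.linalg.norm(r)
    return 3.0 * r * (m @ r) / rn**5 - m / rn**3

def allowed(rigidity_V, r0, v_dir, beta=0.999):
    """Backward trace via dv/dt = -(c |v| / R) v x B; reaching 25 Re
    means this rigidity/arrival direction is allowed."""
    v0 = beta * C * np.asarray(v_dir) / np.linalg.norm(v_dir)
    rhs = lambda t, y: np.concatenate(
        [y[3:], -C * np.linalg.norm(y[3:]) / rigidity_V
                * np.cross(y[3:], b_dipole(y[:3]))])
    hit = lambda t, y: np.linalg.norm(y[:3]) - RE
    esc = lambda t, y: np.linalg.norm(y[:3]) - 25.0 * RE
    hit.terminal = esc.terminal = True
    sol = solve_ivp(rhs, (0.0, 10.0), np.concatenate([r0, v0]),
                    events=[hit, esc], rtol=1e-8)
    return sol.t_events[1].size > 0          # escaped -> allowed

# vertical arrival at 450 km over the equator: launch outward, 10 GV
print(allowed(10e9, np.array([RE + 450e3, 0.0, 0.0]),
              np.array([1.0, 0.0, 0.0])))
```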
Numerical Simulation of Transit-Time Ultrasonic Flowmeters by a Direct Approach.
Luca, Adrian; Marchiano, Regis; Chassaing, Jean-Camille
2016-06-01
This paper deals with the development of a computational code for the numerical simulation of wave propagation through domains with complex geometry consisting of both solids and moving fluids. The emphasis is on the numerical simulation of ultrasonic flowmeters (UFMs), modeling wave propagation in solids with the equations of linear elasticity (ELE) and in fluids with the linearized Euler equations (LEEs). This approach requires high-performance computing because of the high number of degrees of freedom and the long propagation distances; therefore, the numerical method should be chosen with care. In order to minimize the numerical dissipation which may occur in this kind of configuration, the numerical method employed here is the nodal discontinuous Galerkin (DG) method, which is also well suited for parallel computing. To speed up the code, almost all the computational stages have been implemented to run on graphics processing units (GPUs) using the compute unified device architecture (CUDA) programming model from NVIDIA. This approach has been validated and then used for the two-dimensional simulation of gas UFMs. The large contrast in acoustic impedance characteristic of gas UFMs makes their simulation a real challenge.
A 3D staggered-grid finite difference scheme for poroelastic wave equation
NASA Astrophysics Data System (ADS)
Zhang, Yijie; Gao, Jinghuai
2014-10-01
Three-dimensional numerical modeling has become a viable tool for understanding wave propagation in real media. Poroelastic media describe the phenomena of hydrocarbon reservoirs better than acoustic and elastic media. However, numerical modeling in 3D poroelastic media demands significantly more computational capacity, in both computational time and memory. In this paper, we present a 3D poroelastic staggered-grid finite difference (SFD) scheme. Parallel computing is implemented to reduce the computational time. Parallelization is based on domain decomposition, and communication between processors is performed using the message passing interface (MPI). Parallel analysis shows that the parallelized SFD scheme significantly improves simulation efficiency and that 3D domain decomposition is the most efficient. We also analyze the numerical dispersion and stability condition of the 3D poroelastic SFD method. Numerical results show that the 3D simulation can provide a realistic description of wave propagation.
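A minimal sketch of the MPI layer under a 1D slab decomposition (the paper finds 3D decomposition most efficient, but the ghost-plane exchange looks the same along each axis); grid sizes and the mpi4py usage are illustrative.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# each rank owns nz_local z-planes plus one ghost plane per side
nz_local, ny, nx = 32, 64, 64
field = np.zeros((nz_local + 2, ny, nx))

def exchange_halos(f):
    """Swap ghost planes with the z-neighbours before a stencil update."""
    if rank + 1 < size:                       # neighbour above
        comm.Sendrecv(np.ascontiguousarray(f[-2]), dest=rank + 1,
                      recvbuf=f[-1], source=rank + 1)
    if rank - 1 >= 0:                         # neighbour below
        comm.Sendrecv(np.ascontiguousarray(f[1]), dest=rank - 1,
                      recvbuf=f[0], source=rank - 1)

exchange_halos(field)
```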
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pardini, Tom; Aquila, Andrew; Boutet, Sebastien
2017-06-15
Numerical simulations of the current and future pulse intensity distributions at selected locations along the Far Experimental Hall, the hard X-ray section of the Linac Coherent Light Source (LCLS), are provided. Estimates are given for the pulse fluence, energy and size in and out of focus, taking into account effects due to the experimentally measured divergence of the X-ray beam, and measured figure errors of all X-ray optics in the beam path. Out-of-focus results are validated by comparison with experimental data. Previous work is expanded on, providing quantitatively correct predictions of the pulse intensity distribution. Numerical estimates in focus are particularly important given that the latter cannot be measured with direct imaging techniques due to detector damage. Finally, novel numerical estimates of improvements to the pulse intensity distribution expected as part of the on-going upgrade of the LCLS X-ray transport system are provided. As a result, we suggest how the new generation of X-ray optics to be installed would outperform the old one, satisfying the tight requirements imposed by X-ray free-electron laser facilities.
Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce.
Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel
2013-08-01
Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS - a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on-demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and high scalability to run on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive. PMID:24187650
Electron transport theory in magnetic nanostructures
NASA Astrophysics Data System (ADS)
Choy, Tat-Sang
Magnetic nanostructures have become a new trend because of their applications in magnetic sensors, magnetic memories, and the magnetic read heads of hard disk drives. Although a variety of nanostructures have been realized in experiments in recent years by innovative sample growth techniques, the theoretical study of these devices remains a challenge. On one hand, atomic-scale modeling is often required for studying magnetic nanostructures; on the other, these structures often have dimensions on the order of one micrometer, which makes the calculations numerically intensive. In this work, we have studied electron transport theory in magnetic nanostructures, with special attention to the giant magnetoresistance (GMR) structure. We have developed a model that includes the details of the band structure and disorder, both of which are important in obtaining the conductivity. We have also developed an efficient algorithm to compute the conductivity in magnetic nanostructures. The model and the algorithm are general and can be applied to complicated structures. We have applied the theory to current-perpendicular-to-plane GMR structures, and the results agree with experiments. Finally, we have searched for the atomic configuration with the highest GMR using the simulated annealing algorithm. This method is computationally intensive because we have to compute the GMR for 10^3 to 10^4 configurations. However, it is still very efficient because the number of steps it takes to find the maximum is much smaller than the number of all possible GMR structures. We found that ultra-thin NiCu superlattices have surprisingly large GMR even at the moderate disorder present in experiments. This finding may be useful in improving GMR technology.
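A generic sketch of the annealing loop described above; the expensive gmr(config) evaluation (the transport calculation) and the propose mutation are placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

def anneal(config0, gmr, propose, t0=1.0, cooling=0.995, n_steps=5000):
    """Maximize gmr(config) by simulated annealing; Metropolis acceptance
    of downhill moves lets the search escape local maxima."""
    config, score = config0, gmr(config0)
    best, best_score, T = config, score, t0
    for _ in range(n_steps):
        cand = propose(config)
        s = gmr(cand)
        if s > score or rng.random() < np.exp((s - score) / T):
            config, score = cand, s
            if s > best_score:
                best, best_score = cand, s
        T *= cooling
    return best, best_score
```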
Development and testing of painometer: a smartphone app to assess pain intensity.
de la Vega, Rocío; Roset, Roman; Castarlenas, Elena; Sánchez-Rodríguez, Elisabet; Solé, Ester; Miró, Jordi
2014-10-01
Electronic and information technologies are increasingly being used to assess pain. This study aims to 1) introduce Painometer, a smartphone app that helps users to assess pain intensity, and 2) report on its usability (ie, user performance and satisfaction) and acceptability (ie, the willingness to use it) when it is made available to health care professionals and nonprofessionals. Painometer includes 4 well-known pain intensity scales: the Faces Pain Scale-Revised, the numerical rating scale-11, the Coloured Analogue Scale, and the visual analog scale. Scores reported with these scales, when used in their traditional format, have been shown to be valid and reliable. The app was tested in a sample of 24 health care professionals and 30 nonprofessionals. Two iterative usability cycles were conducted with a qualitative usability testing approach and a semistructured interview. The participants had an average of 10 years' experience in using computers. The domains measured were ease of use, errors in usage, most popular characteristics, suggested changes, and acceptability. Adding instructions and changing format and layout details solved the usability problems reported in cycle 1. No further problems were reported in cycle 2. Painometer has been found to be a useful, user-friendly app that may help to improve the accuracy of pain intensity assessment. Painometer, a smartphone app to assess pain intensity, shows good usability and acceptability properties when used by health care professionals and nonprofessionals. Copyright © 2014 American Pain Society. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Casalegno, Mosè; Bernardi, Andrea; Raos, Guido
2013-07-01
Numerical approaches can provide useful information about the microscopic processes underlying photocurrent generation in organic solar cells (OSCs). Among them, the Kinetic Monte Carlo (KMC) method is conceptually the simplest but computationally the most intensive. A less demanding alternative is potentially represented by so-called Master Equation (ME) approaches, in which the equations describing particle dynamics rely on the mean-field approximation and are solved numerically rather than stochastically. The description of charge separation dynamics, the treatment of electrostatic interactions, and numerical stability are some of the key issues that have prevented the application of these methods to OSC modelling, despite their successes in the study of charge transport in disordered systems. Here we describe a three-dimensional ME approach to photocurrent generation in OSCs that attempts to deal with these issues. The reliability of the proposed method is tested against reference KMC simulations of bilayer heterojunction solar cells. Comparison of the current-voltage curves shows that the model approximates the exact result well for most devices. The largest deviations in current densities are mainly due to the adoption of the mean-field approximation for electrostatic interactions. The presence of deep traps, in devices characterized by strong energy disorder, may also affect the quality of the results. Comparison of the simulation times reveals that the ME algorithm runs, on average, one order of magnitude faster than KMC.
An efficient and accurate 3D displacements tracking strategy for digital volume correlation
NASA Astrophysics Data System (ADS)
Pan, Bing; Wang, Bo; Wu, Dafang; Lubineau, Gilles
2014-07-01
Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved through three improvements. First, to eliminate the need to update the Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure that the 3D IC-GN algorithm converges accurately and rapidly and to avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer an accurate and complete initial guess of deformation to each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexity of the proposed fast DVC and of existing typical DVC algorithms is first analyzed quantitatively in terms of the necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost.
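A sketch of the reliability-guided ordering alone, with the IC-GN correlation kernel abstracted behind a hypothetical compute(point, guess) callable: a max-heap on the correlation coefficient ensures each new point inherits its initial guess from its most reliable computed neighbour.

```python
import heapq
import numpy as np

def reliability_guided_scan(shape, compute, seed, seed_guess):
    """compute(point, guess) -> (zncc, displacement) is the registration
    kernel (not shown); points are processed in decreasing reliability."""
    done = np.zeros(shape, dtype=bool)
    disp, heap = {}, []
    cc, d = compute(seed, seed_guess)
    done[seed], disp[seed] = True, d
    heapq.heappush(heap, (-cc, seed))
    while heap:
        _, p = heapq.heappop(heap)
        for ax in range(3):
            for step in (-1, 1):
                q = list(p); q[ax] += step; q = tuple(q)
                if all(0 <= q[i] < shape[i] for i in range(3)) and not done[q]:
                    cc_q, d_q = compute(q, disp[p])   # neighbour's guess
                    done[q], disp[q] = True, d_q
                    heapq.heappush(heap, (-cc_q, q))
    return disp
```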
ERIC Educational Resources Information Center
Buche, Mari W.; Davis, Larry R.; Vician, Chelley
2007-01-01
Computers are pervasive in business and education, and it would be easy to assume that all individuals embrace technology. However, evidence shows that roughly 30 to 40 percent of individuals experience some level of computer anxiety. Many academic programs involve computing-intensive courses, but the actual effects of this exposure on computer…
Pulse cleaning flow models and numerical computation of candle ceramic filters.
Tian, Gui-shan; Ma, Zhen-ji; Zhang, Xin-yi; Xu, Ting-xiang
2002-04-01
Analytical and numerical models are developed for the reverse pulse cleaning system of candle ceramic filters. A standard turbulence model is shown, from the experimental and one-dimensional computational results, to be suitable for the design computation of the reverse pulse cleaning system. The computed results can be used to guide the design of the reverse pulse cleaning system, in particular the optimal Venturi geometry. From the computed results, general conclusions and design methods are obtained.
NASA Astrophysics Data System (ADS)
Ardalan, A.; Safari, A.; Grafarend, E.
2003-04-01
An operational algorithm for computing the ellipsoidal terrain correction, based on the application of the closed-form solution of the Newton integral in terms of Cartesian coordinates on the cylindrical equal-area map projection surface of a reference ellipsoid, has been developed. As the first step, the mapping of points on the surface of a reference ellipsoid onto the cylindrical equal-area map projection of a cylinder tangent to a point on the ellipsoid surface is closely studied and the map projection formulas are derived. Ellipsoidal mass elements of various sizes on the surface of the reference ellipsoid are considered, and the gravitational potential and the gravitational intensity vector of these mass elements are computed via the solution of the Newton integral in terms of ellipsoidal coordinates. The geographical cross-sectional areas of the selected ellipsoidal mass elements are transferred into the cylindrical equal-area map projection and, based on the transformed area elements, Cartesian mass elements with the same height as the ellipsoidal mass elements are constructed. Using the closed-form solution of the Newton integral in terms of Cartesian coordinates, the potential of the Cartesian mass elements is computed and compared with the corresponding results based on the application of the ellipsoidal Newton integral over the ellipsoidal mass elements. The numerical computations show that the difference between the computed gravitational potentials of the ellipsoidal mass elements and of the Cartesian mass elements in the cylindrical equal-area map projection is of the order of 1.6 × 10^-8 m^2/s^2 for a mass element with a cross-section size of 10 km × 10 km and a height of 1000 m. For a 1 km × 1 km mass element of the same height, this difference is less than 1.5 × 10^-4 m^2/s^2. These results indicate that a new method for computing the terrain correction, based on the closed-form solution of the Newton integral in terms of Cartesian coordinates and with the accuracy of the ellipsoidal terrain correction, has been achieved. In this way one can enjoy the simplicity of the solution of the Newton integral in terms of Cartesian coordinates and, at the same time, the accuracy of the ellipsoidal terrain correction, which is needed for the modern theory of geoid computations.
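For orientation, the Newton integral for the potential of a homogeneous mass element is simply V = Gρ ∭ dV/ℓ. A brute-force numerical cubature of this integral for a flat Cartesian prism (a toy stand-in for the ellipsoidal elements above, with assumed dimensions and density) takes only a few lines:

```python
from scipy.integrate import tplquad

G   = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
rho = 2670.0           # assumed crustal density, kg m^-3

# Potential of a 1 km x 1 km x 1000 m prism at a point P above its centre.
xp, yp, zp = 0.0, 0.0, 2000.0   # observation point (m)

def integrand(z, y, x):
    return 1.0 / ((x - xp)**2 + (y - yp)**2 + (z - zp)**2) ** 0.5

V, err = tplquad(integrand,
                 -500.0, 500.0,                              # x limits
                 lambda x: -500.0, lambda x: 500.0,          # y limits
                 lambda x, y: 0.0, lambda x, y: 1000.0)      # z limits
print(f"V = {G * rho * V:.6e} m^2/s^2 (cubature error ~ {G * rho * err:.1e})")
```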
Three-Dimensional Analysis and Modeling of a Wankel Engine
NASA Technical Reports Server (NTRS)
Raju, M. S.; Willis, E. A.
1991-01-01
A new computer code, AGNI-3D, has been developed for the modeling of combustion, spray, and flow properties in a stratified-charge rotary engine (SCRE). The mathematical and numerical details of the new code are described by the first author in a separate NASA publication. The solution procedure is based on an Eulerian-Lagrangian approach where the unsteady, three-dimensional Navier-Stokes equations for a perfect gas-mixture with variable properties are solved in generalized, Eulerian coordinates on a moving grid by making use of an implicit finite-volume, Steger-Warming flux vector splitting scheme. The liquid-phase equations are solved in Lagrangian coordinates. The engine configuration studied was similar to existing rotary engine flow-visualization and hot-firing test rigs. The results of limited test cases indicate a good degree of qualitative agreement between the predicted and measured pressures. It is conjectured that the impulsive nature of the torque generated by the observed pressure nonuniformity may be one of the mechanisms responsible for the excessive wear of the timing gears observed during the early stages of the rotary combustion engine (RCE) development. The turbulence intensities near top-dead-center were found to be dominated by the compression process and only slightly influenced by the intake and exhaust processes. Slow mixing, resulting from small turbulence intensities within the rotor pocket and from the lack of formation of any significant recirculation regions within the rotor pocket, was identified as the major factor leading to incomplete combustion. Detailed flowfield results during exhaust and intake, fuel injection, fuel vaporization, combustion, mixing and expansion processes are also presented. The numerical procedure is very efficient as it takes 7 to 10 CPU hours on a CRAY Y-MP for one entire engine cycle when the computations are performed over a 31 x 16 x 20 grid.
Multiple pinhole collimator based X-ray luminescence computed tomography
Zhang, Wei; Zhu, Dianwen; Lun, Michael; Li, Changqing
2016-01-01
X-ray luminescence computed tomography (XLCT) is an emerging hybrid imaging modality, which is able to improve the spatial resolution of optical imaging to hundreds of micrometers for deep targets by using superfine X-ray pencil beams. However, due to the low X-ray photon utilization efficiency of single-pinhole-collimator-based XLCT, it takes a long time to acquire measurement data. Herein, we propose a multiple pinhole collimator based XLCT, in which multiple X-ray beams are generated to scan a sample at multiple positions simultaneously. Compared with the single pinhole based XLCT, the multiple X-ray beam scanning method requires much less measurement time. Numerical simulations and phantom experiments have been performed to demonstrate the feasibility of the multiple X-ray beam scanning method. In one numerical simulation, we used four X-ray beams to scan a cylindrical object with 6 deeply embedded targets. With measurements from 6 angular projections, all 6 targets were reconstructed successfully. In the phantom experiment, we generated two X-ray pencil beams with a collimator manufactured in-house. Two capillary targets with 0.6 mm edge-to-edge distance embedded in a cylindrical phantom were reconstructed successfully. With two-beam scanning, we reduced the data acquisition time by 50%. From the reconstructed XLCT images, we found that the Dice similarity of targets is 85.11% and the distance error between the two targets is less than 3%. We measured the radiation dose during an XLCT scan and found that the radiation dose, 1.475 mSv, is in the range of a typical CT scan. We also measured the changes in collimated X-ray beam size and intensity at different distances from the collimator, and studied the effects of beam size and intensity on XLCT reconstruction. PMID:27446686
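The Dice similarity quoted above is a standard overlap metric between a reconstructed target mask and the ground-truth mask; a minimal implementation (our illustration, not the authors' code) is:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```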
Algorithms for computing the geopotential using a simple density layer
NASA Technical Reports Server (NTRS)
Morrison, F.
1976-01-01
Several algorithms have been developed for computing the potential and attraction of a simple density layer. These are numerical cubature, Taylor series, and a mixed analytic and numerical integration using a singularity-matching technique. A computer program has been written to combine these techniques for computing the disturbing acceleration on an artificial earth satellite. A total of 1640 equal-area, constant surface density blocks on an oblate spheroid are used. The singularity-matching algorithm is used in the subsatellite region, Taylor series in the surrounding zone, and numerical cubature on the rest of the earth.
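As a hedged illustration of the far-zone part of such a computation, the blocks away from the subsatellite point can be treated as point masses on the surface; the near-zone refinements described above (Taylor series, singularity matching) are what the paper adds on top of this crude cubature. All names and values below are illustrative:

```python
import numpy as np

G = 6.674e-11   # m^3 kg^-1 s^-2

def layer_potential(obs, blocks):
    """Potential of a simple (surface) density layer, far-zone cubature:
    each block of area A and surface density sigma acts as a point mass."""
    obs = np.asarray(obs, float)
    V = 0.0
    for centre, area, sigma in blocks:
        r = np.linalg.norm(obs - np.asarray(centre, float))
        V += G * sigma * area / r
    return V

# one equal-area block of 500 km^2 with sigma = 1000 kg/m^2, seen from 7000 km
print(layer_potential([7.0e6, 0.0, 0.0],
                      [(np.array([6.378e6, 0.0, 0.0]), 5.0e8, 1000.0)]))
```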
Computing Evans functions numerically via boundary-value problems
NASA Astrophysics Data System (ADS)
Barker, Blake; Nguyen, Rose; Sandstede, Björn; Ventura, Nathaniel; Wahl, Colin
2018-03-01
The Evans function has been used extensively to study spectral stability of travelling-wave solutions in spatially extended partial differential equations. To compute Evans functions numerically, several shooting methods have been developed. In this paper, an alternative scheme for the numerical computation of Evans functions is presented that relies on an appropriate boundary-value problem formulation. Convergence of the algorithm is proved, and several examples, including the computation of eigenvalues for a multi-dimensional problem, are given. The main advantage of the scheme proposed here compared with earlier methods is that the scheme is linear and scalable to large problems.
Nonlinear dynamics and numerical uncertainties in CFD
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.
1996-01-01
The application of nonlinear dynamics to improve the understanding of numerical uncertainties in computational fluid dynamics (CFD) is reviewed. Elementary examples of the use of dynamics to explain the nonlinear phenomena and spurious behavior that occur in numerics are given. The role of dynamics in understanding the long-time behavior of numerical integrations and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in CFD is explained. The study is complemented with examples of spurious behavior observed in CFD computations.
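A classic example of the phenomena reviewed here is explicit Euler applied to the logistic ODE du/dt = u(1 - u): beyond the linearized stability limit, the scheme settles onto spurious period-2 (and eventually chaotic) numerical "steady states" that the ODE does not possess. A toy sketch along those lines (our reconstruction, not the authors' cases):

```python
import numpy as np

def euler_tail(dt, u0=0.5, n_transient=2000, keep=6):
    """Long-time iterates of explicit Euler for du/dt = u(1 - u)."""
    u = u0
    for _ in range(n_transient):
        u += dt * u * (1.0 - u)
    tail = []
    for _ in range(keep):
        u += dt * u * (1.0 - u)
        tail.append(u)
    return np.array(tail)

for dt in (0.5, 1.9, 2.3, 2.6):
    # dt < 2: converges to the true steady state u = 1
    # dt > 2: spurious period-2 orbit, then chaos -- pure numerics
    print(f"dt = {dt}:", euler_tail(dt).round(4))
```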
First-order analytic propagation of satellites in the exponential atmosphere of an oblate planet
NASA Astrophysics Data System (ADS)
Martinusi, Vladimir; Dell'Elce, Lamberto; Kerschen, Gaëtan
2017-04-01
The paper offers a fully analytic solution to the motion of a satellite orbiting under the influence of the two major perturbations, due to the oblateness and the atmospheric drag. The solution is presented in a time-explicit form, and takes into account an exponential distribution of the atmospheric density, an assumption that is reasonably close to reality. The approach involves two essential steps. The first one concerns a new approximate mathematical model that admits a closed-form solution with respect to a set of new variables. The second step is the determination of an infinitesimal contact transformation that allows one to navigate between the new and the original variables. This contact transformation is obtained in exact form, and afterwards a Taylor series approximation is proposed in order to make all the computations explicit. The aforementioned transformation accommodates both perturbations, improving the accuracy of the orbit predictions by one order of magnitude with respect to the case when the atmospheric drag is absent from the transformation. Numerical simulations are performed for a low Earth orbit starting at an altitude of 350 km, and they show that the incorporation of drag terms into the contact transformation reduces the error in the position vector by a factor of 7. The proposed method aims at improving the accuracy of analytic orbit propagation and transforming it into a viable alternative to the computationally intensive numerical methods.
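The exponential density assumption mentioned above is simply ρ(h) = ρ0 exp(-(h - h0)/H); combined with a standard drag law it gives the perturbing acceleration. A small sketch with assumed spacecraft and atmosphere parameters (not the paper's contact-transformation machinery):

```python
import numpy as np

RE = 6378137.0    # Earth radius (m), spherical approximation

def drag_acceleration(r_vec, v_vec, rho0=7e-12, h0=350e3, H=55e3,
                      Cd=2.2, A=1.0, m=100.0):
    """Drag acceleration for rho(h) = rho0 * exp(-(h - h0)/H), SI units.

    rho0, h0, H and the ballistic parameters Cd*A/m are assumed values.
    """
    h = np.linalg.norm(r_vec) - RE                # altitude
    rho = rho0 * np.exp(-(h - h0) / H)
    v = np.linalg.norm(v_vec)
    return -0.5 * rho * Cd * A / m * v * v_vec    # opposes the velocity

a = drag_acceleration(np.array([RE + 350e3, 0.0, 0.0]),
                      np.array([0.0, 7697.0, 0.0]))
print(a)   # ~5e-6 m/s^2 at 350 km with these assumed values
```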
Data-driven non-linear elasticity: constitutive manifold construction and problem discretization
NASA Astrophysics Data System (ADS)
Ibañez, Ruben; Borzacchiello, Domenico; Aguado, Jose Vicente; Abisset-Chavanne, Emmanuelle; Cueto, Elias; Ladeveze, Pierre; Chinesta, Francisco
2017-11-01
The use of constitutive equations calibrated from data has been implemented into standard numerical solvers for successfully addressing a variety of problems encountered in simulation-based engineering sciences (SBES). However, the complexity keeps increasing due to the need for increasingly detailed models as well as the use of engineered materials. Data-driven simulation constitutes a potential change of paradigm in SBES. Standard simulation in computational mechanics is based on the use of two very different types of equations. The first one, of axiomatic character, is related to balance laws (momentum, mass, energy, ...), whereas the second one consists of models that scientists have extracted from collected, either natural or synthetic, data. Data-driven (or data-intensive) simulation consists of directly linking experimental data to computers in order to perform numerical simulations. These simulations will employ laws, universally recognized as epistemic, while minimizing the need for explicit, often phenomenological, models. The main drawback of such an approach is the large amount of required data, some of which are inaccessible to today's testing facilities. Such difficulty can be circumvented in many cases, and in any case alleviated, by considering complex tests, collecting as many data as possible and then using a data-driven inverse approach in order to generate the whole constitutive manifold from few complex experimental tests, as discussed in the present work.
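In its simplest form, the data-driven idea replaces a fitted constitutive law by a direct search over the sampled constitutive manifold. A toy sketch with synthetic data and a nearest-neighbour lookup (real data-driven solvers couple this search with the balance equations; everything below is illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

# Toy "constitutive manifold": sampled (strain, stress) pairs standing in
# for an explicit model.
strain = np.linspace(0.0, 0.2, 400)
stress = 2.0e9 * strain / (1.0 + 25.0 * strain)    # synthetic nonlinear data

tree = cKDTree(strain[:, None])

def material_state(eps):
    """Return the stress of the sampled state closest to the queried strain."""
    _, idx = tree.query([[eps]])
    return stress[idx[0]]

print(material_state(0.05))
```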
Simulations of relativistic quantum plasmas using real-time lattice scalar QED
NASA Astrophysics Data System (ADS)
Shi, Yuan; Xiao, Jianyuan; Qin, Hong; Fisch, Nathaniel J.
2018-05-01
Real-time lattice quantum electrodynamics (QED) provides a unique tool for simulating plasmas in the strong-field regime, where collective plasma scales are not well separated from relativistic-quantum scales. As a toy model, we study scalar QED, which describes self-consistent interactions between charged bosons and electromagnetic fields. To solve this model on a computer, we first discretize the scalar-QED action on a lattice, in a way that respects the geometric structures of exterior calculus and U(1)-gauge symmetry. The lattice scalar QED can then be solved, in the classical-statistics regime, by advancing an ensemble of statistically equivalent initial conditions in time, using classical field equations obtained by extremizing the discrete action. To demonstrate the capability of our numerical scheme, we apply it to two example problems. The first example is the propagation of linear waves, where we recover the analytic wave dispersion relations from numerical spectra. The second example is an intense laser interacting with a one-dimensional plasma slab, where we demonstrate the natural transition from wakefield acceleration to pair production when the wave amplitude exceeds the Schwinger threshold. Our real-time lattice scheme is fully explicit and respects local conservation laws, making it reliable for long-time dynamics. The algorithm is readily parallelized using domain decomposition, and the ensemble may be computed using quantum parallelism in the future.
Geospace simulations on the Cell BE processor
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Raeder, J.; Larson, D.
2008-12-01
OpenGGCM (Open Geospace General Circulation Model) is an established numerical code that simulates the Earth's space environment. The most compute-intensive part is the MHD (magnetohydrodynamics) solver that models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the sun. Like other global magnetosphere codes, OpenGGCM's realism is limited by computational constraints on grid resolution. We investigate porting the MHD solver to the Cell BE architecture, a novel inhomogeneous multicore architecture capable of up to 230 GFlops per processor. Realizing this high performance on the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallel approach: on the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory-limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the vector/SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We obtained excellent performance numbers: a speed-up of a factor of 25 compared to just using the main processor, while still keeping the numerical implementation details of the code maintainable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reich, N.H.; van Sark, W.G.J.H.M.; Turkenburg, W.C.
2010-08-15
In this paper, we show that photovoltaic (PV) energy yields can be simulated using standard rendering and ray-tracing features of Computer Aided Design (CAD) software. To this end, three-dimensional (3-D) sceneries are ray-traced in CAD. The PV power output is then modeled by translating irradiance intensity data of rendered images back into numerical data. To ensure accurate results, the solar irradiation data used as input is compared to numerical data obtained from rendered images, showing excellent agreement. As expected, ray-tracing precision in the CAD software also proves to be very high. To demonstrate PV energy yield simulations using this innovative concept, solar radiation time course data of a few days was modeled in 3-D to simulate distributions of irradiance incident on flat, single- and double-bend shapes and a PV powered computer mouse located on a window sill. Comparisons of measured to simulated PV output of the mouse show that simulation accuracies can be very high in practice as well. Theoretically, this concept has great potential, as it can be adapted to suit a wide range of solar energy applications, such as sun-tracking and concentrator systems, Building Integrated PV (BIPV) or Product Integrated PV (PIPV). However, graphical user interfaces of 'CAD-PV' software tools are not yet available.
Symbolic-numeric interface: A review
NASA Technical Reports Server (NTRS)
Ng, E. W.
1980-01-01
A survey of the use of a combination of symbolic and numerical calculations is presented. Symbolic calculations primarily refer to the computer processing of procedures from classical algebra, analysis, and calculus. Numerical calculations refer to both numerical mathematics research and scientific computation. This survey is intended to point out a large number of problem areas where cooperation between symbolic and numerical methods is likely to bear much fruit. These areas include such classical operations as differentiation and integration, such diverse activities as function approximation and qualitative analysis, and such contemporary topics as finite element calculations and computational complexity. It is contended that other less obvious topics such as the fast Fourier transform, linear algebra, nonlinear analysis and error analysis would also benefit from a synergistic approach.
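A modern, minimal illustration of this symbolic-numeric cooperation (using today's SymPy and NumPy, which of course postdate the survey): differentiate exactly in the symbolic layer, then compile the result into a fast numerical function.

```python
import numpy as np
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-x**2) * sp.sin(3 * x)
df = sp.diff(f, x)                     # exact symbolic derivative

df_num = sp.lambdify(x, df, 'numpy')   # hand off to the numeric layer
print(df_num(np.linspace(0.0, 1.0, 5)))
```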
Numerical Simulation of Screech Tones from Supersonic Jets: Physics and Prediction
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Zaman, Khairul Q. (Technical Monitor)
2002-01-01
The objectives of this project are to: (1) perform a numerical simulation of the jet screech phenomenon; and (2) use the data of the simulations to obtain a better understanding of the physics of jet screech. The original grant period was for three years. This was extended at no cost for an extra year to allow the principal investigator time to publish the results. We would like to report that our research work and results (supported by this grant) have fulfilled both objectives of the grant. The following is a summary of the important accomplishments: (1) We have now demonstrated that it is possible to perform accurate numerical simulations of the jet screech phenomenon. Both the axisymmetric case and the fully three-dimensional case were carried out successfully. It is worthwhile to note that this is the first time the screech tone phenomenon has been successfully simulated numerically; (2) All four screech modes were reproduced in the simulation. The computed screech frequencies and intensities were in good agreement with the NASA Langley Research Center data; (3) The staging phenomenon was reproduced in the simulation; (4) The effects of nozzle lip thickness and jet temperature were studied. Simulated tone frequencies at various nozzle lip thickness and jet temperature were found to agree well with experiments; (5) The simulated data were used to explain, for the first time, why there are two axisymmetric screech modes and two helical/flapping screech modes; (6) The simulated data were used to show that when two tones are observed, they co-exist rather than switching from one mode to the other, back and forth, as some previous investigators have suggested; and (7) Some resources of the grant were used to support the development of new computational aeroacoustics (CAA) methodology. (Our screech tone simulations have benefited because of the availability of these improved methods.)
Vector spherical quasi-Gaussian vortex beams
NASA Astrophysics Data System (ADS)
Mitri, F. G.
2014-02-01
Model equations for describing and efficiently computing the radiation profiles of tightly spherically focused higher-order electromagnetic beams of vortex nature are derived, stemming from a vectorial analysis with the complex-source-point method. This solution, termed a high-order quasi-Gaussian (qG) vortex beam, exactly satisfies the vector Helmholtz and Maxwell's equations. It is characterized by a nonzero integer degree and order (n,m), respectively, an arbitrary waist w0, a diffraction convergence length known as the Rayleigh range zR, and an azimuthal phase dependency in the form of a complex exponential corresponding to a vortex beam. An attractive feature of the high-order solution is the rigorous description of strongly focused (or strongly divergent) vortex wave fields without the need for either higher-order corrections or numerically intensive methods. Closed-form expressions and computational results illustrate the analysis and some properties of the high-order qG vortex beams based on the axial and transverse polarization schemes of the vector potentials, with emphasis on the beam waist.
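The complex-source-point trick underlying such solutions can be stated compactly: displacing a point source to a complex location turns a spherical wave into a beam. In its lowest-order scalar form (our summary of the standard device, not the paper's full vector vortex solution),

$$ U(\mathbf{r}) = \frac{e^{ik\tilde{R}}}{\tilde{R}}, \qquad \tilde{R} = \sqrt{x^{2} + y^{2} + (z - i z_{R})^{2}}, $$

which satisfies the Helmholtz equation exactly and reduces to a paraxial Gaussian beam with Rayleigh range $z_R = k w_0^2/2$ in the weakly focused limit.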
Development of a General Form CO2 and Brine Flux Input Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mansoor, K.; Sun, Y.; Carroll, S.
2014-08-01
The National Risk Assessment Partnership (NRAP) project is developing a science-based toolset for the quantitative analysis of the potential risks associated with changes in groundwater chemistry from CO2 injection. In order to address uncertainty probabilistically, NRAP is developing efficient, reduced-order models (ROMs) as part of its approach. These ROMs are built from detailed, physics-based process models to provide confidence in the predictions over a range of conditions. The ROMs are designed to reproduce accurately the predictions from the computationally intensive process models at a fraction of the computational time, thereby allowing the utilization of Monte Carlo methods to probe variability in key parameters. This report presents the procedures used to develop a generalized model for CO2 and brine leakage fluxes based on the output of a numerical wellbore simulation. The resulting generalized parameters and ranges reported here will be used for the development of third-generation groundwater ROMs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi
2015-08-24
This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets, stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry, which makes it a good alternative for evaluating prospective designs of TFM compared to finite element solvers that are numerically intensive and require more computation time. A single-phase, 1-kW, 400-rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF, and torque are verified with finite element analysis. The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.
Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi
2015-09-02
This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets (PM), stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry, which makes it a good alternative for evaluating prospective designs of TFM as compared to finite element solvers, which are numerically intensive and require more computation time. A single-phase, 1 kW, 400 rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF and torque are verified with Finite Element Analysis (FEA). The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.
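The MEC idea reduces the field problem to a lumped network of flux-tube reluctances, R = l/(μ0 μr A), solved like an electric circuit. A deliberately tiny, hypothetical series-parallel example (the paper's network is far larger and nonlinear through saturation; all values below are assumptions):

```python
import numpy as np

MU0 = 4e-7 * np.pi

def reluctance(length, area, mu_r=1.0):
    """Reluctance of one flux tube: R = l / (mu0 * mu_r * A)."""
    return length / (MU0 * mu_r * area)

# Toy magnetic circuit: a PM mmf source driving a stator flux tube in
# series with two parallel air-gap tubes (a series-parallel MEC in miniature).
F_pm    = 800.0                                   # A-turns, assumed PM mmf
R_core  = reluctance(0.10, 4e-4, mu_r=2000.0)     # saturable in the real model
R_gap   = reluctance(0.001, 4e-4)                 # linear air gap
R_total = R_core + 1.0 / (1.0 / R_gap + 1.0 / R_gap)

flux = F_pm / R_total
print(f"flux = {flux * 1e3:.3f} mWb")
```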
Distributed Two-Dimensional Fourier Transforms on DSPs with an Application for Phase Retrieval
NASA Technical Reports Server (NTRS)
Smith, Jeffrey Scott
2006-01-01
Many applications of two-dimensional Fourier transforms require fixed timing as defined by system specifications. One example is image-based wavefront sensing. The image-based approach has many benefits, yet it is a computationally intensive solution for adaptive optics correction, where optical adjustments are made in real time to correct for external (atmospheric turbulence) and internal (stability) aberrations, which cause image degradation. For phase retrieval, a type of image-based wavefront sensing, numerous two-dimensional Fast Fourier Transforms (FFTs) are used. To meet the required real-time specifications, a distributed system is needed, and thus the 2-D FFT necessitates an all-to-all communication among the computational nodes. The 1-D floating-point FFT is very efficient on a digital signal processor (DSP). In this study, several architectures and analyses thereof are presented which address the all-to-all communication with DSPs. The emphasis of this research is on a 64-node cluster of Analog Devices TigerSharc TS-101 DSPs.
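The row-column decomposition that forces the all-to-all exchange is easy to see in serial form: FFT the local rows, perform a corner turn (the transpose that becomes all-to-all communication on a distributed machine), then FFT rows again. A numpy sketch of the data flow (not the DSP implementation):

```python
import numpy as np

x = np.random.rand(64, 64) + 1j * np.random.rand(64, 64)

step1 = np.fft.fft(x, axis=1)        # each node transforms its local rows
step2 = step1.T                      # corner turn: the all-to-all exchange
step3 = np.fft.fft(step2, axis=1)    # transform the (former) columns

assert np.allclose(step3.T, np.fft.fft2(x))   # matches the direct 2-D FFT
```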
3D multiscale crack propagation using the XFEM applied to a gas turbine blade
NASA Astrophysics Data System (ADS)
Holl, Matthias; Rogge, Timo; Loehnert, Stefan; Wriggers, Peter; Rolfes, Raimund
2014-01-01
This work presents a new multiscale technique to investigate advancing cracks in three-dimensional space. This fully adaptive multiscale technique is designed to take into account cracks of different length scales efficiently, by enabling fine-scale domains locally in regions of interest, i.e. where stress concentrations and high stress gradients occur. Due to crack propagation, these regions change during the simulation process. Cracks are modeled using the extended finite element method, such that an accurate and powerful numerical tool is achieved. Restricting ourselves to linear elastic fracture mechanics, the interaction integral yields an accurate solution for the stress intensity factors, and the criterion of maximum hoop stress yields a precise direction of growth. If necessary, the crack surface computed on the finest scale is finally transferred to the corresponding coarser scale. In a final step, the model is applied to a quadrature point of a gas turbine blade, to compute crack growth on the microscale of a real structure.
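For reference, the maximum hoop stress criterion fixes the local kink angle θc from the mode-I and mode-II stress intensity factors; in its commonly quoted two-dimensional form (our addition, stated as the standard Erdogan-Sih relation),

$$ K_{I}\,\sin\theta_{c} + K_{II}\,\bigl(3\cos\theta_{c} - 1\bigr) = 0, $$

so that θc = 0 for pure mode I, and the crack kinks increasingly away from the shear direction as |K_II| grows.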
NASA Technical Reports Server (NTRS)
1992-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, fluid mechanics including fluid dynamics, acoustics, and combustion, aerodynamics, and computer science during the period 1 Apr. 1992 - 30 Sep. 1992 is summarized.
Numerical Relativity, Black Hole Mergers, and Gravitational Waves: Part I
NASA Technical Reports Server (NTRS)
Centrella, Joan
2012-01-01
This series of 3 lectures will present recent developments in numerical relativity, and their applications to simulating black hole mergers and computing the resulting gravitational waveforms. In this first lecture, we introduce the basic ideas of numerical relativity, highlighting the challenges that arise in simulating gravitational wave sources on a computer.
Computation of rare transitions in the barotropic quasi-geostrophic equations
NASA Astrophysics Data System (ADS)
Laurie, Jason; Bouchet, Freddy
2015-01-01
We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exists. By means of large deviations and instanton theory, using an Onsager-Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors, analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.
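The core of a minimum action method fits in a few lines for a toy gradient system: discretize the Freidlin-Wentzell action of a path between the two attractors and hand it to a generic optimizer. A hypothetical one-dimensional double-well sketch (the paper applies this machinery to quasi-geostrophic dynamics):

```python
import numpy as np
from scipy.optimize import minimize

# Minimum action method for dx = b(x) dt + sqrt(eps) dW with b(x) = x - x^3:
# the most probable transition path between the attractors x = -1 and x = +1
# minimizes the action S = 0.5 * int |xdot - b(x)|^2 dt.
b = lambda x: x - x**3
T, n = 20.0, 100
dt = T / n

def action(interior):
    x = np.concatenate(([-1.0], interior, [1.0]))   # endpoints clamped
    xdot = np.diff(x) / dt
    xmid = 0.5 * (x[1:] + x[:-1])
    return 0.5 * dt * np.sum((xdot - b(xmid)) ** 2)

x0 = np.linspace(-1.0, 1.0, n + 1)[1:-1]            # straight-line initial path
res = minimize(action, x0, method="L-BFGS-B")
print("minimal action:", res.fun)   # approaches 2*DeltaV = 0.5 as T grows
```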
Computational Aspects of Data Assimilation and the ESMF
NASA Technical Reports Server (NTRS)
daSilva, A.
2003-01-01
Developing advanced data assimilation applications is a daunting scientific challenge. Independently developed components may have incompatible interfaces or may be written in different computer languages. The high-performance computer (HPC) platforms required by numerically intensive Earth system applications are complex, varied, rapidly evolving and multi-part systems themselves. Since the market for high-end platforms is relatively small, there is little robust middleware available to buffer the modeler from the difficulties of HPC programming. To complicate matters further, the collaborations required to develop large Earth system applications often span initiatives, institutions and agencies, involve geoscience, software engineering, and computer science communities, and cross national borders. The Earth System Modeling Framework (ESMF) project is a concerted response to these challenges. Its goal is to increase software reuse, interoperability, ease of use and performance in Earth system models through the use of a common software framework, developed in an open manner by leaders in the modeling community. The ESMF addresses the technical, and to some extent the cultural, aspects of Earth system modeling, laying the groundwork for addressing the more difficult scientific aspects, such as the physical compatibility of components, in the future. In this talk we will discuss the general philosophy and architecture of the ESMF, focusing on those capabilities useful for developing advanced data assimilation applications.
Accelerated Compressed Sensing Based CT Image Reconstruction
Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R.; Paul, Narinder S.; Cobbold, Richard S. C.
2015-01-01
In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum-a-posterior approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization. PMID:26167200
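Although the paper's speed comes from the pseudopolar Fourier Radon transform and rebinning, the underlying CS reconstruction is an l1-regularized least-squares problem; a generic (and much slower) iterative soft-thresholding sketch of that problem, with illustrative parameters, is:

```python
import numpy as np

def ista(A, b, lam=0.01, step=None, iters=500):
    """Generic CS solver: min_x 0.5*||Ax - b||^2 + lam*||x||_1 by iterative
    soft thresholding (a stand-in for the accelerated method above)."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - b)             # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x

# Tiny demo: recover a sparse signal from underdetermined measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0
x_hat = ista(A, A @ x_true, lam=0.02, iters=2000)
print("support recovered:", np.nonzero(np.abs(x_hat) > 0.1)[0])
```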
Efficient visibility-driven medical image visualisation via adaptive binned visibility histogram.
Jung, Younhyun; Kim, Jinman; Kumar, Ashnil; Feng, David Dagan; Fulham, Michael
2016-07-01
'Visibility' is a fundamental optical property that represents the proportion of the voxels in a volume observable by users during interactive volume rendering. The manipulation of this 'visibility' improves the volume rendering processes; for instance by ensuring the visibility of regions of interest (ROIs) or by guiding the identification of an optimal rendering view-point. The construction of visibility histograms (VHs), which represent the distribution of the visibilities of all voxels in the rendered volume, enables users to explore the volume with real-time feedback about occlusion patterns among spatially related structures during volume rendering manipulations. Volume rendered medical images have been a primary beneficiary of VHs, given the need to ensure that specific ROIs are visible relative to the surrounding structures, e.g. the visualisation of tumours that may otherwise be occluded by neighbouring structures. VH construction and its subsequent manipulations, however, are computationally expensive due to the histogram binning of the visibilities. This limits the real-time application of VHs to medical images, which have large intensity ranges and volume dimensions and thus require a large number of histogram bins. In this study, we introduce an efficient adaptive binned visibility histogram (AB-VH) in which a smaller number of histogram bins are used to represent the visibility distribution of the full VH. We adaptively bin medical images by using a cluster analysis algorithm that groups the voxels according to their intensity similarities into a smaller subset of bins while preserving the distribution of the intensity range of the original images. We increase efficiency by exploiting the parallel computation and multiple render targets (MRT) extension of modern graphics processing units (GPUs), enabling efficient computation of the histogram. We show the application of our method to single-modality computed tomography (CT), magnetic resonance (MR) imaging and multi-modality positron emission tomography-CT (PET-CT). In our experiments, the AB-VH markedly improved the computational efficiency of the VH construction and thus improved the subsequent VH-driven volume manipulations. This efficiency was achieved without major visual degradation of the VH and with only minor numerical differences between the AB-VH and its full-bin counterpart. We applied several variants of the K-means clustering algorithm with varying K (the number of clusters) and found that higher values of K resulted in better performance at the cost of lower computational gain. The AB-VH also outperformed the conventional method of down-sampling the histogram bins (equal binning) for volume rendering visualisation.
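The adaptive-binning step can be prototyped directly with an off-the-shelf clusterer: group voxel intensities into K data-driven bins instead of thousands of equal-width bins. A toy sketch with synthetic intensities (the real AB-VH then accumulates per-bin visibilities on the GPU rather than simple counts):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
# Toy bimodal CT-like intensity distribution (values are illustrative)
volume = np.concatenate([rng.normal(100.0, 10.0, 50_000),   # soft tissue
                         rng.normal(400.0, 40.0, 10_000)])  # bone

K = 16                                       # a few adaptive bins
centroids, labels = kmeans2(volume[:, None], K, minit='++', seed=0)
counts = np.bincount(labels, minlength=K)    # per-bin voxel counts
print(np.sort(centroids.ravel()).round(1))
```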
Data intensive computing at Sandia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Andrew T.
2010-09-01
Data-Intensive Computing is parallel computing where you design your algorithms and your software around efficient access and traversal of a data set; where hardware requirements are dictated by data size as much as by desired run times; and where the goal is usually to distill compact results from massive data.
CT to Cone-beam CT Deformable Registration With Simultaneous Intensity Correction
Zhen, Xin; Gu, Xuejun; Yan, Hao; Zhou, Linghong; Jia, Xun; Jiang, Steve B.
2012-01-01
Computed tomography (CT) to cone-beam computed tomography (CBCT) deformable image registration (DIR) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as demons, may fail in the context of CT-CBCT DIR because of inconsistent intensities between the two modalities. In this paper, we propose a variant of demons, called Deformation with Intensity Simultaneously Corrected (DISC), to deal with CT-CBCT DIR. DISC distinguishes itself from the original demons algorithm by performing an adaptive intensity correction step on the CBCT image at every iteration step of the demons registration. Specifically, the intensity correction of a voxel in CBCT is achieved by matching the first and the second moments of the voxel intensities inside a patch around the voxel with those in the CT image. It is expected that such a strategy can remove artifacts in the CBCT image, as well as ensure intensity consistency between the two modalities. DISC is implemented on graphics processing units (GPUs) in the compute unified device architecture (CUDA) programming environment. The performance of DISC is evaluated on a simulated patient case and six clinical head-and-neck cancer patient datasets. It is found that DISC is robust against the CBCT artifacts and intensity inconsistency and significantly improves the registration accuracy when compared with the original demons. PMID:23032638
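The per-voxel correction described above amounts to patch-wise moment matching. A single correction step might look as follows (our sketch with an assumed patch size; DISC interleaves this with the demons iterations on the GPU):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def moment_match(cbct, ct, patch=9, eps=1e-6):
    """Patch-wise intensity correction in the spirit of DISC: match the
    first and second local moments of the CBCT to the (registered) CT."""
    m_cb = uniform_filter(cbct, patch)            # local means
    m_ct = uniform_filter(ct, patch)
    v_cb = uniform_filter(cbct**2, patch) - m_cb**2   # local variances
    v_ct = uniform_filter(ct**2, patch) - m_ct**2
    s_cb = np.sqrt(np.maximum(v_cb, 0.0)) + eps
    s_ct = np.sqrt(np.maximum(v_ct, 0.0))
    return (cbct - m_cb) / s_cb * s_ct + m_ct
```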
Numerical Computation of Sensitivities and the Adjoint Approach
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael
1997-01-01
We discuss the numerical computation of sensitivities via the adjoint approach in optimization problems governed by differential equations. We focus on the adjoint problem in its weak form. We show how one can avoid some of the problems with the adjoint approach, such as deriving suitable boundary conditions for the adjoint equation. We discuss the convergence of numerical approximations of the costate computed via the weak form of the adjoint problem and show the significance for the discrete adjoint problem.
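In the standard discrete form (stated here for orientation; the paper works with the weak, continuous form), for a state equation R(u, p) = 0 and objective J(u, p), one adjoint solve gives the full gradient:

$$ \left(\frac{\partial R}{\partial u}\right)^{T} \lambda = \left(\frac{\partial J}{\partial u}\right)^{T}, \qquad \frac{dJ}{dp} = \frac{\partial J}{\partial p} - \lambda^{T}\,\frac{\partial R}{\partial p}, $$

at a cost essentially independent of the number of parameters p.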
Comments on the Development of Computational Mathematics in Czechoslovakia and in the USSR.
1987-03-01
The talk is an invited lecture at the ACM Conference on the History of Scientific and Numeric Computations, May 13-15, 1987, Princeton, New Jersey. It presents some basic subjective observations about the history of numerical methods in Czechoslovakia and in the USSR.
NASA Astrophysics Data System (ADS)
Alvanos, Michail; Christoudias, Theodoros
2017-10-01
This paper presents an application of GPU accelerators in Earth system modeling. We focus on atmospheric chemical kinetics, one of the most computationally intensive tasks in climate-chemistry model simulations. We developed a software package that automatically generates CUDA kernels to numerically integrate atmospheric chemical kinetics in the global climate model ECHAM/MESSy Atmospheric Chemistry (EMAC), used to study climate change and air quality scenarios. A source-to-source compiler outputs a CUDA-compatible kernel by parsing the FORTRAN code generated by the Kinetic PreProcessor (KPP) general analysis tool. All Rosenbrock methods that are available in the KPP numerical library are supported. Performance evaluation, using Fermi and Pascal CUDA-enabled GPU accelerators, shows achieved speed-ups of 4.5× and 20.4×, respectively, of the kernel execution time. A node-to-node real-world production performance comparison shows a 1.75× speed-up over the non-accelerated application using the KPP three-stage Rosenbrock solver. We provide a detailed description of the code optimizations used to improve the performance, including memory optimizations, control code simplification, and reduction of idle time. The accuracy and correctness of the accelerated implementation are evaluated by comparing the output of the accelerated kernel with that of the CPU-only code; the median relative difference is found to be less than 0.000000001%. The approach followed, including the computational workload division and the developed GPU solver code, can potentially be used as the basis for hardware acceleration of numerous geoscientific models that rely on KPP for atmospheric chemical kinetics applications.
Micromagnetic computer simulations of spin waves in nanometre-scale patterned magnetic elements
NASA Astrophysics Data System (ADS)
Kim, Sang-Koog
2010-07-01
Current needs for further advances in the nanotechnologies of information-storage and -processing devices have attracted a great deal of interest in spin (magnetization) dynamics in nanometre-scale patterned magnetic elements. For instance, the unique dynamic characteristics of non-uniform magnetic microstructures such as various types of domain walls, magnetic vortices and antivortices, as well as spin wave dynamics in laterally restricted thin-film geometries, have been at the centre of extensive and intensive research. Understanding the fundamentals of their unique spin structure as well as their robust and novel dynamic properties allows us to implement new functionalities into existing or future devices. Although experimental tools and theoretical approaches are effective means of understanding the fundamentals of spin dynamics and of gaining new insights into them, the limitations of those same tools and approaches have left gaps of unresolved questions in the pertinent physics. As an alternative, however, micromagnetic modelling and numerical simulation have recently emerged as a powerful tool for the study of a variety of phenomena related to the spin dynamics of nanometre-scale magnetic elements. In this review paper, I summarize the recent results of simulations of the excitation, propagation and other novel wave characteristics of spin waves, highlighting how the micromagnetic computer simulation approach contributes to an understanding of the spin dynamics of nanomagnetism and considering some of the merits of numerical simulation studies. Many examples of micromagnetic modelling for numerical calculations, employing various dimensions and shapes of patterned magnetic elements, are given. The current limitations of continuum micromagnetic modelling and of simulations based on the Landau-Lifshitz-Gilbert equation of motion of magnetization are also discussed, along with further research directions for spin-wave studies.
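Such simulations integrate the Landau-Lifshitz-Gilbert equation for the unit magnetization field m(r, t),

$$ \frac{\partial \mathbf{m}}{\partial t} = -\gamma\,\mathbf{m}\times\mathbf{H}_{\mathrm{eff}} + \alpha\,\mathbf{m}\times\frac{\partial \mathbf{m}}{\partial t}, $$

with γ the gyromagnetic ratio, α the Gilbert damping constant, and H_eff the effective field collecting exchange, anisotropy, magnetostatic and Zeeman contributions.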
A Eulerian-Lagrangian Model to Simulate Two-Phase/Particulate Flows
NASA Technical Reports Server (NTRS)
Apte, S. V.; Mahesh, K.; Lundgren, T.
2003-01-01
Figure 1 shows a snapshot of liquid fuel spray coming out of an injector nozzle in a realistic gas-turbine combustor. Here the spray atomization was simulated using a stochastic secondary breakup model (Apte et al. 2003a) with a point-particle approximation for the droplets. Very close to the injector, it is observed that the spray density is large and the droplets cannot be treated as point-particles. The volume displaced by the liquid in this region is significant and can alter the gas-phase flow and spray evolution. In order to address this issue, one can compute the dense spray regime by an Eulerian-Lagrangian technique using advanced interface tracking/level-set methods (Sussman et al. 1994; Tryggvason et al. 2001; Herrmann 2003). This, however, is computationally intensive and may not be viable in realistic complex configurations. We therefore plan to develop a methodology based on an Eulerian-Lagrangian technique which will allow us to capture the essential features of primary atomization using models for the interactions between the fluid and droplets, and which can be directly applied to the standard atomization models used in practice. The numerical scheme for unstructured grids developed by Mahesh et al. (2003) for incompressible flows is modified to take into account the droplet volume fraction. The numerical framework is directly applicable to realistic combustor geometries. Our main objectives in this work are: to develop a numerical formulation based on Eulerian-Lagrangian techniques, with models for the interaction terms between the fluid and particles, to capture the Kelvin-Helmholtz type instabilities observed during primary atomization; to validate this technique for various two-phase and particulate flows; and to assess its applicability to capture primary atomization of liquid jets in conjunction with secondary atomization models.
Computer-intensive simulation of solid-state NMR experiments using SIMPSON.
Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas
2014-09-01
Conducting large-scale solid-state NMR simulations requires fast computer software, potentially in combination with efficient computational resources, to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scans, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we here present an improved version of the widely distributed open-source SIMPSON NMR simulation software package, adapted to contemporary high-performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation, including internal matrix manipulations, propagator setups and acquisition strategies. For efficient calculation of powder averages, we implemented the interpolation method of Alderman, Solum, and Grant, as well as the recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher-precision gradients in combination with the efficient optimization algorithm known as limited-memory Broyden-Fletcher-Goldfarb-Shanno. In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON thus reflects current knowledge in the field of numerical simulations of solid-state NMR experiments. The efficiency and novel features are demonstrated on representative simulations.
Mobile Cloud Computing with SOAP and REST Web Services
NASA Astrophysics Data System (ADS)
Ali, Mushtaq; Fadli Zolkipli, Mohamad; Mohamad Zain, Jasni; Anwar, Shahid
2018-05-01
Mobile computing in conjunction with mobile Web services is a promising approach to tackling the limitations of mobile devices. Mobile Web services are based on two technologies, SOAP and REST, which work with existing protocols to develop Web services. Both approaches have their own distinct features, yet with the constrained resources of mobile devices in mind, the better of the two is the one that minimizes computation and transmission overhead while offloading. Transferring load from a mobile device to remote servers for execution is called computational offloading. There are numerous approaches that make computational offloading a viable solution for overcoming the resource constraints of mobile devices, yet a dynamic method of computational offloading is always required for smooth and simple migration of complex tasks. The intention of this work is to present a distinctive approach which does not engage mobile resources for long periods. The concept of Web services is utilized in our work to delegate computationally intensive tasks for remote execution. We tested both the SOAP and REST Web service approaches for mobile computing. Two parameters were considered in our lab experiments: execution time and energy consumption. The results show that RESTful Web service execution is far better than executing the same application via the SOAP approach, in terms of both execution time and energy consumption. In experiments with the developed prototype matrix multiplication app, REST execution time is about 200% better than the SOAP approach. In the case of energy consumption, REST execution is about 250% better than SOAP.
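The offloading pattern itself is a plain HTTP round trip. The endpoint, payload layout and response format below are entirely hypothetical, sketched only to show the RESTful shape of such a matrix multiplication offload:

```python
import requests   # hypothetical endpoint below; illustrates the REST pattern

# Offload a matrix multiplication to a remote server, in the spirit of the
# prototype app described above. Endpoint and payload layout are assumptions.
payload = {
    "a": [[1, 2], [3, 4]],
    "b": [[5, 6], [7, 8]],
}
resp = requests.post("http://example.com/api/matmul", json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())   # e.g. {"product": [[19, 22], [43, 50]]}
```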
NASA Astrophysics Data System (ADS)
Lewy, Serge; Polacsek, Cyril; Barrier, Raphael
2014-12-01
Tone noise radiated through the inlet of a turbofan is mainly due to rotor-stator interactions at subsonic regimes (approach flight), and to the shock waves attached to each blade at supersonic helical tip speeds (takeoff). The axial compressor of a helicopter turboshaft engine is transonic as well and can be studied like turbofans at takeoff. The objective of the paper is to predict the sound power at the inlet radiating into the free field, with a focus on transonic conditions because sound levels are much higher there. Direct numerical computation of tone acoustic power is based on a RANS (Reynolds-averaged Navier-Stokes) solver followed by an integration of acoustic intensity over specified inlet cross-sections, derived from the Cantrell and Hart equations (valid in irrotational flows). In transonic regimes, sound power decreases along the intake because of nonlinear propagation, which must be discriminated from numerical dissipation. This is one of the reasons why an analytical approach is also suggested. It is based on three steps: (i) appraisal of the initial pressure jump of the shock waves; (ii) the 2D nonlinear propagation model of Morfey and Fisher; (iii) calculation of the sound power of the 3D ducted acoustic field. In this model, all the blades are assumed to be identical, such that only the blade passing frequency and its harmonics are predicted (as in the present numerical simulations). However, the transfer from the blade passing frequency to multiple pure tones can be evaluated in a fourth step through a statistical analysis of irregularities between blades. The interest of the analytical method is that it provides a good estimate of nonlinear acoustic propagation in the upstream duct while being easy and fast to compute. The various methods are applied to two turbofan models, respectively in approach (subsonic) and takeoff (transonic) conditions, and to a Turbomeca turboshaft engine (transonic case). The analytical method in the transonic regime appears to be quite reliable by comparison with the numerical solution and with available experimental data.
NASA Astrophysics Data System (ADS)
Ishii, Katsuya
2011-08-01
This issue includes a special section on computational fluid dynamics (CFD) in memory of the late Professor Kunio Kuwahara, who passed away on 15 September 2008, at the age of 66. In this special section, five articles are included that are based on the lectures and discussions at `The 7th International Nobeyama Workshop on CFD: To the Memory of Professor Kuwahara' held in Tokyo on 23 and 24 September 2009. Professor Kuwahara started his research in fluid dynamics under Professor Imai at the University of Tokyo. His first paper was published in 1969 with the title 'Steady Viscous Flow within Circular Boundary', with Professor Imai. In this paper, he combined theoretical and numerical methods in fluid dynamics. Since that time, he made significant and seminal contributions to computational fluid dynamics. He undertook pioneering numerical studies on the vortex method in 1970s. From then to the early nineties, he developed numerical analyses on a variety of three-dimensional unsteady phenomena of incompressible and compressible fluid flows and/or complex fluid flows using his own supercomputers with academic and industrial co-workers and members of his private research institute, ICFD in Tokyo. In addition, a number of senior and young researchers of fluid mechanics around the world were invited to ICFD and the Nobeyama workshops, which were held near his villa, and they intensively discussed new frontier problems of fluid physics and fluid engineering at Professor Kuwahara's kind hospitality. At the memorial Nobeyama workshop held in 2009, 24 overseas speakers presented their papers, including the talks of Dr J P Boris (Naval Research Laboratory), Dr E S Oran (Naval Research Laboratory), Professor Z J Wang (Iowa State University), Dr M Meinke (RWTH Aachen), Professor K Ghia (University of Cincinnati), Professor U Ghia (University of Cincinnati), Professor F Hussain (University of Houston), Professor M Farge (École Normale Superieure), Professor J Y Yong (National Taiwan University), and Professor H S Kwak (Kumoh National Institute of Technology). For his contributions to CFD, Professor Kuwahara received Awards from the Japan Society of Automobile Engineers and the Japan Society of Mechanical Engineers in 1992, the Computational Mechanics Achievement Award from the Japan Society of Mechanical Engineers in 1993, and the Max Planck Research Award in 1993. He received the Computational Mechanics Award from the Japan Society of Mechanical Engineers again in 2008. Professor Kuwahara also supported the development of the Japan Society of Fluid Mechanics, whose office is located in the same building as ICFD. In the proceedings of the 6th International Nobeyama Workshop on CFD to commemorate the 60th birthday of Professor Kuwahara, Professor Jae Min Hyun of KAIST wrote 'The major professional achievement of Professor Kuwahara may be compressed into two main categories. First and foremost, Professor Kuwahara will long be recorded as the front-line pioneer in using numerical computations to tackle complex problems in fluid mechanics. ...Another important contribution of Professor Kuwahara was in the training and fostering of talented manpower of computational mechanics research.'[1] Among the various topics of the five papers in this special section are examples of Professor Kuwahara's works mentioned by Professor Hyun. The main authors of all papers have grown up in the research circle of Professor Kuwahara. 
All the papers demonstrate the challenge of new aspects of computational fluid dynamics: a new numerical method for compressible flows, thermo-acoustic flows of helium gas in a small tube, electro-osmotic flows in a micro/nano channel, MHD flows over a wavy disk, and a new extraction method for multi-objective aircraft design rules. Last but not least, this special section is cordially dedicated to the late Professor Kuwahara and his family. Reference [1] Hyun J M 2005 Preface of New Developments in Computational Fluid Dynamics vol 90 Notes on Numerical Fluid Mechanics and Multidisciplinary Design ed K Fujii et al (Berlin: Springer)
NASA Technical Reports Server (NTRS)
Giassi, D.; Cao, S.; Stocker, D. P.; Takahashi, F.; Bennett, B. A. V.; Smooke, M. D.; Long, M. B.
2015-01-01
With the conclusion of the SLICE campaign aboard the ISS in 2012, a large amount of data was made available for the analysis of the effect of microgravity on laminar coflow diffusion flames. Previous work focused on the study of sooty flames in microgravity as well as the ability of numerical models to predict soot formation in a simplified buoyancy-free environment. The current work shifts the investigation to soot-free flames, putting an emphasis on the chemiluminescence emission from electronically excited CH (CH*). This radical species is of significant interest in combustion studies: it has been shown that the electronically excited CH spatial distribution is indicative of the flame front position and, given the relatively simple diagnostics involved in its measurement, several studies have sought to determine the ability of electronically excited CH chemiluminescence to predict the total and local flame heat release rate. In this work, a subset of the SLICE nitrogen-diluted methane flames has been considered, and the effect of fuel and coflow velocity on electronically excited CH concentration is discussed and compared with both normal gravity results and numerical simulations. Experimentally, the spectral characterization of the DSLR color camera used to acquire the flame images allowed the signal collected by the blue channel to be considered representative of the electronically excited CH emission centered around 431 nm. Due to the axisymmetric flame structure, an Abel deconvolution of the line-of-sight chemiluminescence was used to obtain the radial intensity profile and, thanks to an absolute light intensity calibration, a quantification of the electronically excited CH concentration was possible. Results show that, in microgravity, the maximum flame electronically excited CH concentration increases with the coflow velocity, but is weakly dependent on the fuel velocity; normal gravity flames, if not lifted, tend to follow the same trend, albeit with different peak concentrations. Comparisons with numerical simulations display reasonably good agreement between measured and computed flame lengths and radii, and it is shown that the integrated electronically excited CH emission scales proportionally to the computed total heat release rate; the two-dimensional electronically excited CH spatial distribution, however, does not appear to be a good marker for the local heat release rate.
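The Abel deconvolution step mentioned above can be sketched compactly. Below is a minimal Python illustration using the onion-peeling discretization (one of several standard Abel-inversion schemes; the paper does not specify which variant was used): a half-profile of line-of-sight intensities, ordered from centerline to edge, is inverted into radial emission coefficients by solving an upper-triangular path-length system.

```python
import numpy as np

def abel_onion_peel(projection, dr=1.0):
    """Invert a half-profile of line-of-sight intensities (centerline -> edge)
    into radial emission coefficients via onion peeling."""
    n = len(projection)
    edges = np.arange(n + 1) * dr            # ring boundaries r_0 .. r_n
    L = np.zeros((n, n))
    for i in range(n):                       # chord at lateral position y_i
        y = edges[i]
        for j in range(i, n):                # rings the chord crosses
            L[i, j] = 2.0 * (np.sqrt(edges[j + 1]**2 - y**2)
                             - np.sqrt(edges[j]**2 - y**2))
    # projection = L @ f with L upper triangular; solve for radial profile f
    return np.linalg.solve(L, np.asarray(projection, dtype=float))
```

A quick sanity check is to project a known radial profile through the same path-length matrix and confirm the inversion recovers it.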
Probabilistic numerics and uncertainty in computations
Hennig, Philipp; Osborne, Michael A.; Girolami, Mark
2015-01-01
We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321
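As a concrete instance of a numerical method that returns its own uncertainty, here is a minimal one-dimensional Bayesian quadrature sketch (an illustration of the probabilistic-numerics idea, not the authors' code). A Gaussian-process prior with an RBF kernel of length scale `ell` (an assumed hyperparameter) is conditioned on function evaluations; the posterior over the integral is then Gaussian, with a mean estimate and a standard deviation quantifying the numerical uncertainty.

```python
import numpy as np
from scipy.special import erf

def bayes_quadrature(f, a, b, nodes, ell=0.3, noise=1e-10):
    """Estimate integral of f over [a, b] and its posterior uncertainty
    under a GP prior with RBF kernel of length scale ell."""
    x = np.asarray(nodes, dtype=float)
    y = f(x)
    K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)
    K += noise * np.eye(len(x))            # jitter for numerical stability
    # kernel mean embedding z_i = integral of k(x_i, t) over [a, b] (closed form)
    z = ell * np.sqrt(np.pi / 2) * (erf((b - x) / (np.sqrt(2) * ell))
                                    - erf((a - x) / (np.sqrt(2) * ell)))
    # double integral of the kernel over [a, b]^2 (closed form)
    L = b - a
    zz = (2 * L * ell * np.sqrt(np.pi / 2) * erf(L / (np.sqrt(2) * ell))
          - 2 * ell**2 * (1 - np.exp(-L**2 / (2 * ell**2))))
    mean = z @ np.linalg.solve(K, y)
    var = zz - z @ np.linalg.solve(K, z)
    return mean, np.sqrt(max(var, 0.0))

# e.g. estimate the integral of sin(3t) over [0, 1] from a handful of nodes
m, s = bayes_quadrature(lambda t: np.sin(3 * t), 0.0, 1.0, np.linspace(0, 1, 8))
```

Adding nodes shrinks the reported standard deviation, which is exactly the "uncertainty in the calculation" the abstract advocates exposing.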
Intensity noise coupling in soliton fiber oscillators.
Wan, Chenchen; Schibli, Thomas R; Li, Peng; Bevilacqua, Carlo; Ruehl, Axel; Hartl, Ingmar
2017-12-15
We present an experimental and numerical study on the spectrally resolved pump-to-output intensity noise coupling in soliton fiber oscillators. In our study, we observe a strong pump noise coupling to the Kelly sidebands, while the coupling to the soliton pulse is damped. This behavior is observed in erbium-doped as well as holmium-doped fiber oscillators and confirmed by numerical modeling. It can be seen as a general feature of laser oscillators in which soliton pulse formation is dominant. We show that spectral blocking of the Kelly sidebands outside the laser cavity can improve the intensity noise performance of the laser dramatically.
Spurious Numerical Solutions Of Differential Equations
NASA Technical Reports Server (NTRS)
Lafon, A.; Yee, H. C.
1995-01-01
Paper presents detailed study of spurious steady-state numerical solutions of differential equations that contain nonlinear source terms. Main objectives of this study are (1) to investigate how well numerical steady-state solutions of model nonlinear reaction/convection boundary-value problem mimic true steady-state solutions and (2) to relate findings of this investigation to implications for interpretation of numerical results from computational-fluid-dynamics algorithms and computer codes used to simulate reacting flows.
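The flavor of such spurious behavior can be reproduced in a few lines. The sketch below (a generic illustration, not the paper's reaction/convection problem) applies explicit Euler to the logistic source term u' = u(1 - u): below dt = 2 the iteration converges to the true steady state u = 1, while above it the scheme settles onto spurious period-2 and period-4 oscillations that are artifacts of the discretization.

```python
import numpy as np

def euler_long_time(dt, u0=0.9, steps=2000):
    """Iterate explicit Euler on u' = u(1 - u) and return the final iterate."""
    u = u0
    for _ in range(steps):
        u = u + dt * u * (1.0 - u)
    return u

for dt in (0.5, 1.5, 2.2, 2.5):
    # consecutive long-time iterates reveal true fixed points vs. spurious cycles
    tail = [euler_long_time(dt, steps=2000 + k) for k in range(4)]
    print(f"dt = {dt}: {np.round(tail, 4)}")
```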
Object-oriented numerical computing C++
NASA Technical Reports Server (NTRS)
Vanrosendale, John
1994-01-01
An object-oriented language is one that allows users to create a set of related types and then intermix and manipulate values of those types. This paper discusses object-oriented numerical computing using C++.
DOE Office of Scientific and Technical Information (OSTI.GOV)
BAILEY, DAVID H.; BORWEIN, JONATHAN M.
A recent paper by the present authors, together with mathematical physicists David Broadhurst and M. Larry Glasser, explored Bessel moment integrals, namely definite integrals of the general form $\int_0^\infty t^m f^n(t)\,dt$, where the function f(t) is one of the classical Bessel functions. In that paper, numerous previously unknown analytic evaluations were obtained, using a combination of analytic methods together with some fairly high-powered numerical computations, often performed on highly parallel computers. In several instances, while we were able to numerically discover what appears to be a solid analytic identity, based on extremely high-precision numerical computations, we were unable to find a rigorous proof. Thus we present here a brief list of some of these unproven but numerically confirmed identities.
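The kind of high-precision numerical experiment described above is easy to reproduce at small scale. The sketch below uses mpmath (not the authors' parallel machinery) to evaluate moments $\int_0^\infty t^m K_0^n(t)\,dt$; it checks one classical identity and prints a higher moment without asserting a closed form.

```python
import mpmath as mp

mp.mp.dps = 30  # decimal digits of working precision

def bessel_moment(m, n):
    """c(m, n) = integral of t^m * K_0(t)^n over (0, inf), via mpmath quadrature."""
    f = lambda t: t**m * mp.besselk(0, t)**n
    return mp.quad(f, [0, 1, mp.inf])   # split at 1: K_0 has a log singularity at 0

print(bessel_moment(0, 1) - mp.pi / 2)  # classical identity: residual ~ 1e-30
print(bessel_moment(1, 3))              # high-precision value for further analysis
```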
NASA Astrophysics Data System (ADS)
Petrov, L.
2017-12-01
Processing satellite altimetry data requires the computation of path delay in the neutral atmosphere that is used for correcting ranges. The path delay is computed using numerical weather models, and the accuracy of its computation depends on the accuracy of those models. The accuracy of numerical weather models over Antarctica and Greenland, where the network of ground stations is very sparse, is not well known. I used a dataset of GPS RO L1 data, computed the predicted path delay for RO observations using the numerical weather model GEOS-FPIT, formed the differences with the observed path delay, and used these differences to compute corrections to the a priori refractivity profile. These profiles were used for computing corrections to the a priori zenith path delay. The systematic pattern of these corrections is used for de-biasing the satellite altimetry results and for characterizing the systematic errors caused by mismodeling the atmosphere.
NASA Astrophysics Data System (ADS)
Kucera, P. A.; Burek, T.; Halley-Gotway, J.
2015-12-01
NCAR's Joint Numerical Testbed Program (JNTP) focuses on the evaluation of experimental forecasts of tropical cyclones (TCs) with the goal of developing new research tools and diagnostic evaluation methods that can be transitioned to operations. Recent activities include the development of new TC forecast verification methods and the development of an adaptable TC display and diagnostic system. The next-generation display and diagnostic system is being developed to support the evaluation needs of the U.S. National Hurricane Center (NHC) and the broader TC research community. The new hurricane display and diagnostic capabilities allow forecasters and research scientists to examine more deeply the performance of operational and experimental models. The system is built upon modern and flexible technology that includes platform-independent OpenLayers mapping tools. The forecast track and intensity, along with the associated observed track information, are stored in an efficient MySQL database. The system provides an easy-to-use interactive display and diagnostic tools to examine forecast tracks stratified by intensity. Consensus forecasts can be computed and displayed interactively. The system is designed to display information for both real-time and historical TCs. The display configurations are easily adaptable to meet end-user preferences. Ongoing enhancements include improved capabilities for stratification and evaluation of historical best tracks, development and implementation of additional methods to stratify and compute consensus hurricane track and intensity forecasts, and improved graphical display tools. The display is also being enhanced to incorporate gridded forecast, satellite, and sea surface temperature fields. The presentation will provide an overview of the display and diagnostic system development and a demonstration of the current capabilities.
Cumulative reports and publications through December 31, 1989
NASA Technical Reports Server (NTRS)
1990-01-01
A complete list of reports from the Institute for Computer Applications in Science and Engineering (ICASE) is presented. The major categories of the current ICASE research program are: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effectual numerical methods; computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, structural analysis, and chemistry; computer systems and software, especially vector and parallel computers, microcomputers, and data management. Since ICASE reports are intended to be preprints of articles that will appear in journals or conference proceedings, the published reference is included when it is available.
Numerical simulations of stripping effects in high-intensity hydrogen ion linacs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carneiro, J.-P.; Mustapha, B.
2008-12-01
Numerical simulations of H⁻ stripping losses from blackbody radiation, electromagnetic fields, and residual gas have been implemented in the beam dynamics code TRACK. Estimates of the stripping losses along two high-intensity H⁻ linacs are presented: the Spallation Neutron Source linac currently being operated at Oak Ridge National Laboratory and an 8 GeV superconducting linac currently being designed at Fermi National Accelerator Laboratory.
Linear elastic fracture mechanics primer
NASA Technical Reports Server (NTRS)
Wilson, Christopher D.
1992-01-01
This primer is intended to remove the black-box perception of fracture mechanics computer software by structural engineers. The fundamental concepts of linear elastic fracture mechanics are presented with emphasis on the practical application of fracture mechanics to real problems. Numerous rules of thumb are provided. Recommended texts for additional reading and a discussion of the significance of fracture mechanics in structural design are given. Griffith's criterion for crack extension, Irwin's elastic stress field near the crack tip, and the influence of small-scale plasticity are discussed. Common stress intensity factor solutions and methods for determining them are included. Fracture toughness and subcritical crack growth are discussed. The application of fracture mechanics to damage tolerance and fracture control is discussed. Several example problems and a practice set of problems are given.
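For instance, the central LEFM relation between applied stress and stress intensity, K = Yσ√(πa), can be inverted for a critical crack size. A small worked sketch (the material numbers are illustrative, not taken from the primer):

```python
import math

def critical_crack_length(K_Ic, sigma, Y=1.0):
    """Invert K = Y * sigma * sqrt(pi * a) for the critical crack length a_c
    at fracture toughness K_Ic. Units: K_Ic in MPa*sqrt(m), sigma in MPa,
    result in m; Y is the geometry factor."""
    return (K_Ic / (Y * sigma))**2 / math.pi

# e.g. an aluminium alloy with K_Ic ~ 24 MPa*sqrt(m) under 200 MPa applied stress
print(critical_crack_length(24.0, 200.0))   # ~ 0.0046 m, i.e. a few millimetres
```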
Computer modeling of interferograms of flowing plasma and determination of the phase shift
NASA Astrophysics Data System (ADS)
Blažek, J.; Kříž, P.; Stach, V.
2000-03-01
Interferograms of the flowing gas contain information about the phase shift between the object and the reference beams. The determination of the phase shift is the first step in obtaining information about the inner distribution of the density in cylindrically symmetric discharges. A slightly modified Takeda method, based on the Fourier transformation, is applied to extract the phase information from the interferogram. A least-squares spline approximation is used for approximating and smoothing the intensity profiles. At the same time, cubic splines with their end-knot conditions naturally realize "hanning windows" that eliminate unwanted edge effects. For the purpose of numerical testing of the method, we developed a code that, for a density given in advance, reconstructs the corresponding interferogram.
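The core Fourier-transform step of the Takeda method can be illustrated in one dimension. In the sketch below (a simplified stand-in for the authors' spline-windowed implementation), the +1 spectral lobe around an assumed-known carrier frequency is isolated, shifted to DC, and the argument of the inverse transform gives the wrapped phase:

```python
import numpy as np

def takeda_phase(intensity, carrier_bin):
    """1-D Takeda-style phase extraction: bandpass the +1 lobe around the
    fringe carrier, demodulate to DC, and unwrap the recovered phase."""
    spec = np.fft.fft(intensity - intensity.mean())
    lobe = np.zeros_like(spec)
    half = carrier_bin // 2
    sl = slice(carrier_bin - half, carrier_bin + half + 1)
    lobe[sl] = spec[sl]                                # keep only the +1 lobe
    analytic = np.fft.ifft(np.roll(lobe, -carrier_bin))  # shift carrier to DC
    return np.unwrap(np.angle(analytic))

# synthetic interferogram row: 32-cycle carrier plus a smooth object phase
x = np.linspace(0, 1, 1024, endpoint=False)
phi = 3.0 * np.sin(2 * np.pi * x)                  # "object" phase
I = 1.0 + 0.8 * np.cos(2 * np.pi * 32 * x + phi)   # fringe intensity profile
print(np.allclose(takeda_phase(I, 32), phi, atol=0.05))
```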
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, K.; Petersson, N. A.; Rodgers, A.
Acoustic waveform modeling is a computationally intensive task, and full three-dimensional simulations are often impractical for some geophysical applications such as long-range wave propagation and high-frequency sound simulation. In this study, we develop a two-dimensional high-order accurate finite-difference code for acoustic wave modeling. We solve the linearized Euler equations by discretizing them with sixth-order accurate finite difference stencils away from the boundary and a third-order summation-by-parts (SBP) closure near the boundary. Non-planar topographic boundaries are resolved by formulating the governing equations in curvilinear coordinates following the interface. We verify the implementation of the algorithm by numerical examples and demonstrate the capability of the proposed method for practical acoustic wave propagation problems in the atmosphere.
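The interior discretization referred to above is the standard sixth-order central first-derivative stencil; a minimal sketch (with crude low-order ends rather than the paper's SBP closures) is:

```python
import numpy as np

def d1_sixth_order(f, dx):
    """Sixth-order central first derivative in the interior; the first and
    last three points fall back to np.gradient's one-sided differences
    (the paper instead uses third-order summation-by-parts closures)."""
    df = np.gradient(f, dx)
    df[3:-3] = (-f[:-6] + 9*f[1:-5] - 45*f[2:-4]
                + 45*f[4:-2] - 9*f[5:-1] + f[6:]) / (60.0 * dx)
    return df

x = np.linspace(0.0, 2 * np.pi, 200)
print(np.abs(d1_sixth_order(np.sin(x), x[1] - x[0]) - np.cos(x)).max())
```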
Hypothesis testing for band size detection of high-dimensional banded precision matrices.
An, Baiguo; Guo, Jianhua; Liu, Yufeng
2014-06-01
Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is determination of the band size of the precision matrix. In practice, crossvalidation is commonly used; however, we show that crossvalidation not only is computationally intensive but can be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.
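A compact version of the Cholesky-based construction the test is built around: regressing each variable on its k immediate predecessors yields a unit lower-triangular T and residual variances D, and the estimate Omega = (I - T)' D^{-1} (I - T) is banded with bandwidth k. The sketch below is a generic textbook implementation, not the authors' code:

```python
import numpy as np

def banded_cholesky_precision(X, k):
    """Modified-Cholesky estimate of a banded precision matrix: each column
    of the (n x p) data matrix X is regressed on its k predecessors."""
    n, p = X.shape
    T = np.zeros((p, p))
    d = np.zeros(p)
    d[0] = X[:, 0].var()
    for j in range(1, p):
        lo = max(0, j - k)
        A = X[:, lo:j]
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        T[j, lo:j] = coef
        d[j] = (X[:, j] - A @ coef).var()     # residual variance
    I_T = np.eye(p) - T
    return I_T.T @ np.diag(1.0 / d) @ I_T     # banded precision estimate
```

Fitting this estimator for a range of candidate band sizes is exactly the step whose selection the paper's hypothesis test replaces crossvalidation for.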
NASA Astrophysics Data System (ADS)
Kolbin, A. I.; Shimansky, V. V.
2014-04-01
We developed a code for imaging the surfaces of spotted stars by a set of circular spots with a uniform temperature distribution. The flux from the spotted surface is computed by partitioning the spots into elementary areas. The code takes into account the passing of spots behind the visible stellar limb, limb darkening, and overlapping of spots. Modeling of light curves includes the use of recent results of the theory of stellar atmospheres needed to take into account the temperature dependence of flux intensity and limb darkening coefficients. The search for spot parameters is based on the analysis of several light curves obtained in different photometric bands. We test our technique by applying it to HII 1883.
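A toy version of the forward model conveys the ingredients: tile the surface into elementary areas, assign the spot temperature inside a circular cap, test visibility against the limb, and sum limb-darkened intensities as the star rotates. The T⁴ weighting and linear limb-darkening law below are simplifications of the model-atmosphere fluxes the authors use:

```python
import numpy as np

def spot_lightcurve(spot_lat, spot_lon, spot_rad, inc, u=0.6,
                    t_phot=5800.0, t_spot=4000.0, n=180):
    """Single circular spot, uniform temperature, linear limb darkening."""
    lats = np.linspace(-np.pi / 2, np.pi / 2, n)
    lons = np.linspace(-np.pi, np.pi, 2 * n, endpoint=False)
    LAT, LON = np.meshgrid(lats, lons, indexing="ij")
    dA = np.cos(LAT)                                    # cell area element
    # angular distance of each cell from the spot centre (law of cosines)
    cosd = (np.sin(LAT) * np.sin(spot_lat)
            + np.cos(LAT) * np.cos(spot_lat) * np.cos(LON - spot_lon))
    temp = np.where(cosd > np.cos(spot_rad), t_spot, t_phot)
    intens = (temp / t_phot)**4                         # crude flux ratio
    phases = np.linspace(0.0, 2 * np.pi, 100)
    flux = []
    for ph in phases:
        # cosine of the viewing angle for each cell at this rotation phase
        mu = (np.sin(LAT) * np.cos(inc)
              + np.cos(LAT) * np.sin(inc) * np.cos(LON + ph))
        vis = mu > 0                                    # behind-the-limb test
        ld = 1.0 - u * (1.0 - mu)                       # linear limb darkening
        flux.append(np.sum(intens * ld * mu * dA * vis))
    flux = np.array(flux)
    return phases, flux / flux.max()

ph, fl = spot_lightcurve(np.radians(30), 0.0, np.radians(15), np.radians(70))
```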
NASA Technical Reports Server (NTRS)
Iida, H. T.
1966-01-01
Computational procedure reduces the numerical effort whenever the method of finite differences is used to solve ablation problems for which the surface recession is large relative to the initial slab thickness. The number of numerical operations required for a given maximum space mesh size is reduced.
A new fourth-order Fourier-Bessel split-step method for the extended nonlinear Schroedinger equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nash, Patrick L.
2008-01-10
Fourier split-step techniques are often used to compute soliton-like numerical solutions of the nonlinear Schroedinger equation. Here, a new fourth-order implementation of the Fourier split-step algorithm is described for problems possessing azimuthal symmetry in 3+1 dimensions. This implementation is based, in part, on a finite difference approximation $\delta_\perp^{\mathrm{FDA}}$ of $\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial}{\partial r}\right)$ that possesses an associated exact unitary representation of $e^{\frac{i}{2}\lambda\,\delta_\perp^{\mathrm{FDA}}}$. The matrix elements of this unitary matrix are given by special functions known as the associated Bessel functions, hence the attribute Fourier-Bessel for the method. The Fourier-Bessel algorithm is shown to be unitary and unconditionally stable. The Fourier-Bessel algorithm is employed to simulate the propagation of a periodic series of short laser pulses through a nonlinear medium. This numerical simulation calculates waveform intensity profiles in a sequence of planes that are transverse to the general propagation direction, and labeled by the cylindrical coordinate z. These profiles exhibit a series of isolated pulses that are offset from the time origin by characteristic times, and provide evidence for a physical effect that may be loosely termed normal mode condensation. Normal mode condensation is consistent with experimentally observed pulse filamentation into a packet of short bursts, which may occur as a result of short, intense irradiation of a medium.
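For orientation, here is the basic second-order (Strang) split-step Fourier scheme for the one-dimensional cubic NLS, the workhorse that the paper's fourth-order Fourier-Bessel variant refines for the axisymmetric 3+1-dimensional case:

```python
import numpy as np

def split_step_nls(u0, x, dt, steps, kappa=1.0):
    """Strang-split integrator for i u_t + u_xx/2 + kappa |u|^2 u = 0:
    half a linear step in Fourier space, a full nonlinear phase rotation,
    then the second linear half step."""
    k = 2 * np.pi * np.fft.fftfreq(len(x), x[1] - x[0])
    half_linear = np.exp(-1j * k**2 * dt / 4)        # exp(-i k^2 (dt/2) / 2)
    u = u0.astype(complex)
    for _ in range(steps):
        u = np.fft.ifft(half_linear * np.fft.fft(u))
        u *= np.exp(1j * kappa * np.abs(u)**2 * dt)  # |u| is constant here
        u = np.fft.ifft(half_linear * np.fft.fft(u))
    return u

# the fundamental soliton sech(x) should propagate with constant |u|
x = np.linspace(-20, 20, 1024, endpoint=False)
u = split_step_nls(1 / np.cosh(x), x, dt=0.01, steps=500)
```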
Cost-Benefit Analysis of Computer Resources for Machine Learning
Champion, Richard A.
2007-01-01
Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
NASA Astrophysics Data System (ADS)
Clay, M. P.; Buaria, D.; Yeung, P. K.; Gotoh, T.
2018-07-01
This paper reports on the successful implementation of a massively parallel GPU-accelerated algorithm for the direct numerical simulation of turbulent mixing at high Schmidt number. The work stems from a recent development (Comput. Phys. Commun., vol. 219, 2017, 313-328), in which a low-communication algorithm was shown to attain high degrees of scalability on the Cray XE6 architecture when overlapping communication and computation via dedicated communication threads. An even higher level of performance has now been achieved using OpenMP 4.5 on the Cray XK7 architecture, where on each node the 16 integer cores of an AMD Interlagos processor share a single Nvidia K20X GPU accelerator. In the new algorithm, data movements are minimized by performing virtually all of the intensive scalar field computations in the form of combined compact finite difference (CCD) operations on the GPUs. A memory layout in departure from usual practices is found to provide much better performance for a specific kernel required to apply the CCD scheme. Asynchronous execution enabled by adding the OpenMP 4.5 NOWAIT clause to TARGET constructs improves scalability when used to overlap computation on the GPUs with computation and communication on the CPUs. On the 27-petaflops supercomputer Titan at Oak Ridge National Laboratory, USA, a GPU-to-CPU speedup factor of approximately 5 is consistently observed at the largest problem size of 8192³ grid points for the scalar field computed with 8192 XK7 nodes.
Trivariate characteristics of intensity fluctuations for heavily saturated optical systems.
Das, Biman; Drake, Eli; Jack, John
2004-02-01
Trivariate cumulants of intensity fluctuations have been computed starting from a trivariate intensity probability distribution function, which rests on the assumption that the variation of intensity has a maximum-entropy distribution with the constraint that the total intensity is constant. The assumption holds for optical systems such as a thin, long, mirrorless gas laser amplifier where, under heavy gain saturation, the total output approaches a constant intensity, although the intensity of any mode fluctuates rapidly about the average intensity. The relations between trivariate cumulants and central moments needed for the computation were derived. The results of the computation show that the cumulants have characteristic values that depend on the number of interacting modes in the system. The cumulant values approach zero when the number of modes is infinite, as expected. The results will be useful for comparison with the experimental trivariate statistics of heavily saturated optical systems such as the output from a thin, long, bidirectional gas laser amplifier.
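As an illustration of the moment-to-cumulant relations involved, the third-order joint cumulant of three samples can be formed directly from sample moments (this is the standard general formula; the paper derives the analogous relations for mode intensities):

```python
import numpy as np

def third_joint_cumulant(x, y, z):
    """kappa(X, Y, Z) from sample moments via the standard relation
    k3 = E[xyz] - E[xy]E[z] - E[xz]E[y] - E[yz]E[x] + 2 E[x]E[y]E[z]."""
    Ex, Ey, Ez = x.mean(), y.mean(), z.mean()
    return (np.mean(x * y * z) - np.mean(x * y) * Ez - np.mean(x * z) * Ey
            - np.mean(y * z) * Ex + 2.0 * Ex * Ey * Ez)

rng = np.random.default_rng(1)
g = rng.standard_normal(100_000)
print(third_joint_cumulant(g, g, g))   # ~0: Gaussian cumulants beyond 2nd vanish
```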
Termination of the solar wind in the hot, partially ionized interstellar medium. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Lombard, C. K.
1974-01-01
Theoretical foundations for understanding the problem of the termination of the solar wind are reexamined in the light of the most recent findings concerning the states of the solar wind and the local interstellar medium. The investigation suggests that a simple extension of Parker's (1961) analytical model provides a useful approximate description of the combined solar wind, interstellar wind plasma flowfield under conditions presently thought to occur. A linear perturbation solution exhibiting both the effects of photoionization and charge exchange is obtained for the supersonic solar wind. A numerical algorithm is described for computing moments of the non-equilibrium hydrogen distribution function and associated source terms for the MHD equations. Computed using the algorithm in conjunction with the extended Parker solution to approximate the plasma flowfield, profiles of hydrogen number density are given in the solar wind along the upstream and downstream axes of flow with respect to the direction of the interstellar wind. Predictions of solar Lyman-alpha backscatter intensities to be observed at 1 a.u. have been computed, in turn, from a set of such hydrogen number density profiles varied over assumed conditions of the interstellar wind.
Fuzzy logic based robotic controller
NASA Technical Reports Server (NTRS)
Attia, F.; Upadhyaya, M.
1994-01-01
Existing Proportional-Integral-Derivative (PID) robotic controllers rely on an inverse kinematic model to convert user-specified cartesian trajectory coordinates to joint variables. These joints experience friction, stiction, and gear backlash effects. Due to the lack of proper linearization of these effects, modern control theory based on state-space methods cannot provide adequate control for robotic systems. In the presence of loads, the dynamic behavior of robotic systems is complex and nonlinear, especially where mathematical models must be evaluated in real time. Fuzzy Logic Control is a fast-emerging alternative to conventional control systems in situations where it may not be feasible to formulate an analytical model of the complex system. Fuzzy logic techniques track a user-defined trajectory without requiring the host computer to explicitly solve the nonlinear inverse kinematic equations. The goal is to provide a rule-based approach, which is closer to human reasoning. The approach used expresses end-point error, location of manipulator joints, and proximity to obstacles as fuzzy variables. The resulting decisions are based upon linguistic and non-numerical information. This paper presents an alternative to the conventional robot controller that is independent of computationally intensive kinematic equations. Computer simulation results of this approach as obtained from software implementation are also discussed.
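A minimal sketch of the rule-based idea (with illustrative membership functions and consequents, not the authors' controller): fuzzify a scalar end-point error with triangular memberships, fire three linguistic rules, and defuzzify by centroid:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_correction(error):
    """Map end-point error (fuzzy negative/zero/positive) to a joint-speed
    correction by centroid defuzzification; breakpoints are illustrative."""
    mu = {"neg": tri(error, -2.0, -1.0, 0.0),
          "zero": tri(error, -1.0, 0.0, 1.0),
          "pos": tri(error, 0.0, 1.0, 2.0)}
    consequents = {"neg": -1.0, "zero": 0.0, "pos": 1.0}   # rule outputs
    num = sum(mu[k] * consequents[k] for k in mu)
    den = sum(mu.values())
    return num / den if den else 0.0

print(fuzzy_correction(0.4))   # a small positive error yields a small correction
```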
High-efficiency wavefunction updates for large scale Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed
Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally evaluated iteratively using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order-of-magnitude speedups can be obtained on both multi-core CPUs and GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
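The traditional per-move update the paper builds on is easy to state: replacing row k of the Slater matrix A with a new orbital row r is a rank-1 change, so the new inverse and the determinant ratio (the quantity the acceptance test needs) follow from the Sherman-Morrison formula. The proposed scheme delays k such updates and applies them en bloc; a minimal single-update sketch:

```python
import numpy as np

def replace_row(Ainv, r, k):
    """Sherman-Morrison update of A^{-1} when row k of A becomes r.
    Returns the new inverse and the ratio det(A')/det(A)."""
    ratio = r @ Ainv[:, k]          # determinant ratio for the MC accept test
    w = r @ Ainv
    w[k] -= 1.0                     # w = (r - A[k, :]) @ Ainv
    return Ainv - np.outer(Ainv[:, k], w) / ratio, ratio

# quick check against a freshly computed inverse
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)); Ainv = np.linalg.inv(A)
r = rng.standard_normal(6)
Ainv2, _ = replace_row(Ainv, r, 2)
A2 = A.copy(); A2[2] = r
print(np.allclose(Ainv2, np.linalg.inv(A2)))
```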
NASA Astrophysics Data System (ADS)
Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin
2018-04-01
We present an analysis for measurement-device-independent quantum key distribution with correlated source-light-intensity errors. Numerical results show that the results here can greatly improve the key rate especially with large intensity fluctuations and channel attenuation compared with prior results if the intensity fluctuations of different sources are correlated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pant, Nidhi; Das, Santanu; Mitra, Sanjit
Mild, unavoidable deviations from circular symmetry of instrumental beams, along with the scan strategy, can give rise to measurable Statistical Isotropy (SI) violation in Cosmic Microwave Background (CMB) experiments. If not properly accounted for, this spurious signal can complicate the extraction of other SI violation signals (if any) in the data. However, estimation of this effect through exact numerical simulation is computationally intensive and time consuming. A generalized analytical formalism not only provides a quick way of estimating this signal, but also gives a detailed understanding connecting the leading beam anisotropy components to a measurable BipoSH characterisation of SI violation. In this paper, we provide an approximate generic analytical method for estimating the SI violation generated due to a non-circular (NC) beam and arbitrary scan strategy, in terms of the Bipolar Spherical Harmonic (BipoSH) spectra. Our analytical method can predict almost all the features introduced by a NC beam in a complex scan and thus reduces the need for extensive numerical simulation worth tens of thousands of CPU hours to minutes-long calculations. As an illustrative example, we use WMAP beams and scanning strategy to demonstrate the usability and efficiency of our method. We test all our analytical results against those from exact numerical simulations.
Multi-fidelity uncertainty quantification in large-scale predictive simulations of turbulent flow
NASA Astrophysics Data System (ADS)
Geraci, Gianluca; Jofre-Cruanyes, Lluis; Iaccarino, Gianluca
2017-11-01
The performance characterization of complex engineering systems often relies on accurate, but computationally intensive, numerical simulations. It is also well recognized that in order to obtain a reliable numerical prediction the propagation of uncertainties needs to be included. Therefore, Uncertainty Quantification (UQ) plays a fundamental role in building confidence in predictive science. Despite great improvements in recent years, even the more advanced UQ algorithms are still limited to fairly simplified applications and only moderate parameter dimensionality. Moreover, in the case of extremely large dimensionality, sampling methods, i.e. Monte Carlo (MC) based approaches, appear to be the only viable alternative. In this talk we describe and compare a family of approaches which aim to accelerate the convergence of standard MC simulations. These methods are based on hierarchies of generalized numerical resolutions (multi-level) or model fidelities (multi-fidelity), and attempt to leverage the correlation between Low- and High-Fidelity (HF) models to obtain a more accurate statistical estimator without introducing additional HF realizations. The performance of these methods is assessed on an irradiated particle-laden turbulent flow (PSAAP II solar energy receiver). This investigation was funded by the United States Department of Energy's (DoE) National Nuclear Security Administration (NNSA) under the Predictive Science Academic Alliance Program (PSAAP) II at Stanford University.
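The control-variate idea behind such multi-fidelity estimators fits in a few lines. The sketch below is generic, with hypothetical stand-in models rather than the solar-receiver solver: a small set of paired high-/low-fidelity evaluations estimates the optimal coefficient, and cheap extra low-fidelity samples then sharpen the mean without new high-fidelity solves.

```python
import numpy as np

def two_fidelity_estimate(hf, lf, n_hf=50, n_lf=5000, seed=0):
    """Two-fidelity control-variate Monte Carlo estimate of E[hf(X)]."""
    rng = np.random.default_rng(seed)
    xp = rng.standard_normal(n_hf)            # paired inputs
    yh, yl = hf(xp), lf(xp)
    C = np.cov(yh, yl)
    alpha = C[0, 1] / C[1, 1]                 # optimal control-variate weight
    xl = rng.standard_normal(n_lf)            # cheap extra LF-only inputs
    return yh.mean() + alpha * (lf(xl).mean() - yl.mean())

# hypothetical HF model and a cheaper correlated surrogate of it
print(two_fidelity_estimate(lambda x: np.sin(x) + 0.1 * x**2, np.sin))
```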
Instability, rupture and fluctuations in thin liquid films: Theory and computations
NASA Astrophysics Data System (ADS)
Gvalani, Rishabh; Duran-Olivencia, Miguel; Kalliadasis, Serafim; Pavliotis, Grigorios
2017-11-01
Thin liquid films are ubiquitous in natural phenomena and technological applications. They are commonly studied via deterministic hydrodynamic equations, but thermal fluctuations often play a crucial role that still needs to be understood. An example of this is dewetting, which involves the rupture of a thin liquid film and the formation of droplets. Such a process is thermally activated and requires fluctuations to be taken into account self-consistently. Here we present an analytical and numerical study of a stochastic thin-film equation derived from first principles. We scrutinise the behaviour of the stochastic thin film equation in the limit of perfectly correlated noise along the wall-normal direction. We also perform Monte Carlo simulations of the stochastic equation by adopting a numerical scheme based on a spectral collocation method. The numerical scheme allows us to explore the fluctuating dynamics of the thin film and the behaviour of the system's free energy close to rupture. Finally, we also study the effect of the noise intensity on the rupture time, which is in good agreement with previous works. Imperial College London (ICL) President's PhD Scholarship; European Research Council Advanced Grant No. 247031; EPSRC Grants EP/L025159, EP/L020564, EP/P031587, EP/L024926, and EP/L016230/1.
Modeling the periodic stratification and gravitational circulation in San Francisco Bay, California
Cheng, Ralph T.; Casulli, Vincenzo
1996-01-01
A high resolution, three-dimensional (3-D) hydrodynamic numerical model is applied to San Francisco Bay, California to simulate the periodic tidal stratification caused by tidal straining and stirring and their long-term effects on gravitational circulation. The numerical model is formulated using fixed levels in the vertical and uniform computational mesh on horizontal planes. The governing conservation equations, the 3-D shallow water equations, are solved by a semi-implicit finite-difference scheme. Numerical simulations for estuarine flows in San Francisco Bay have been performed to reproduce the hydrodynamic properties of tides, tidal and residual currents, and salt transport. All simulations were carried out to cover at least 30 days, so that the spring-neap variance in the model results could be analyzed. High grid resolution used in the model permits the use of a simple turbulence closure scheme which has been shown to be sufficient to reproduce the tidal cyclic stratification and well-mixed conditions in the water column. Low-pass filtered 3-D time-series reveals the classic estuarine gravitational circulation with a surface layer flowing down-estuary and an up-estuary flow near the bottom. The intensity of the gravitational circulation depends upon the amount of freshwater inflow, the degree of stratification, and spring-neap tidal variations.
Static and moving solid/gas interface modeling in a hybrid rocket engine
NASA Astrophysics Data System (ADS)
Mangeot, Alexandre; William-Louis, Mame; Gillard, Philippe
2018-07-01
A numerical model was developed with CFD-ACE software to study the working conditions of an oxygen-nitrogen/polyethylene hybrid rocket combustor. As a first approach, a simplified numerical model is presented. It includes a compressible transient gas phase in which a two-step combustion mechanism is implemented, coupled to a radiative model. The solid phase from the fuel grain is a semi-opaque material whose degradation process is modeled by an Arrhenius-type law. Two versions of the model were tested. The first considers the solid/gas interface with a static grid, while the second uses grid deformation during the computation to follow the asymmetrical regression. The numerical results are obtained with two different regression kinetics, originating from thermogravimetric analysis and from test bench results. In each case, the fuel surface temperature is retrieved within a range of 5% error. However, good results are only found using kinetics from the test bench. The regression rate is found to within 0.03 mm s⁻¹, and the average combustor pressure and its variation over time have the same magnitude as the measurements conducted on the test bench. The simulation that uses grid deformation to follow the regression shows good stability over 10 s of simulated time.
NASA Astrophysics Data System (ADS)
Akhtar, Taimoor; Shoemaker, Christine
2016-04-01
Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision-making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process, which include: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from the numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit measure/metric based interactive framework for identification of a small subset (typically fewer than 10) of meaningful and diverse calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual analytics framework for decision support in the selection of one parameter combination from the alternatives identified in Stage 2. HAMS is applied for calibration of the flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric-based analytics can bridge the gap between the effective use of both automatic and manual strategies for parameter estimation of computationally expensive watershed models.
Simultaneous computation of jet turbulence and noise
NASA Technical Reports Server (NTRS)
Berman, C. H.; Ramos, J. I.
1989-01-01
The existing flow computation methods, wave computation techniques, and theories based on noise source models are reviewed in order to assess the capabilities of numerical techniques to compute jet turbulence noise and understand the physical mechanisms governing it over a range of subsonic and supersonic nozzle exit conditions. In particular, attention is given to (1) methods for extrapolating near field information, obtained from flow computations, to the acoustic far field and (2) the numerical solution of the time-dependent Lilley equation.
NASA Technical Reports Server (NTRS)
Chesler, L.; Pierce, S.
1971-01-01
Generalized, cyclic, and modified multistep numerical integration methods are developed and evaluated for application to problems of satellite orbit computation. Generalized methods are compared with the presently utilized Cowell methods; new cyclic methods are developed for special second-order differential equations; and several modified methods are developed and applied to orbit computation problems. Special computer programs were written to generate coefficients for these methods, and subroutines were written which allow use of these methods with NASA's GEOSTAR computer program.
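For concreteness, a two-step Adams-Bashforth integrator applied to the planar Kepler problem shows the multistep structure such reports compare against Cowell-type methods (a generic sketch, not the GEOSTAR coefficients):

```python
import numpy as np

def ab2_kepler(r0, v0, dt, steps, mu=1.0):
    """Two-step Adams-Bashforth, y_{n+1} = y_n + dt*(3/2 f_n - 1/2 f_{n-1}),
    bootstrapped with one Euler step, for the planar two-body problem."""
    def f(y):
        r = y[:2]
        return np.concatenate([y[2:], -mu * r / np.linalg.norm(r)**3])
    y = np.concatenate([r0, v0]).astype(float)
    f_prev = f(y)
    y = y + dt * f_prev                       # starter step
    for _ in range(steps - 1):
        f_cur = f(y)
        y, f_prev = y + dt * (1.5 * f_cur - 0.5 * f_prev), f_cur
    return y

# circular orbit (radius 1, period 2*pi): the radius should stay near 1
y = ab2_kepler(np.array([1.0, 0.0]), np.array([0.0, 1.0]), 1e-3, 6283)
print(np.linalg.norm(y[:2]))
```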
Kinematical calculations of RHEED intensity oscillations during the growth of thin epitaxial films
NASA Astrophysics Data System (ADS)
Daniluk, Andrzej
2005-08-01
A practical computing algorithm working in real time has been developed for calculating the reflection high-energy electron diffraction (RHEED) intensity from a surface growing by molecular beam epitaxy (MBE). The calculations are based on kinematical diffraction theory. Simple mathematical models are used for the growth simulation in order to investigate the fundamental behavior of the reflectivity change during the growth of thin epitaxial films prepared by MBE. Program summary: Title of program: GROWTH. Catalogue identifier: ADVL. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Distribution format: tar.gz. Computer for which the program is designed and others on which it has been tested: Pentium-based PC. Operating systems under which the program has been tested: Windows 9x, XP, NT. Programming language used: Object Pascal. Memory required to execute with typical data: more than 1 MB. Number of bits in a word: 64. Number of processors used: 1. Number of lines in distributed program, including test data, etc.: 10 989. Number of bytes in distributed program, including test data, etc.: 103 048. Nature of the physical problem: Reflection high-energy electron diffraction (RHEED) is a very useful technique for studying growth and surface analysis of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The simplest approach to calculating the RHEED intensity during the growth of thin epitaxial films is kinematical diffraction theory (often called the kinematical approximation), in which only a single scattering event is taken into account. The biggest advantage of this approach is that the RHEED intensity can be calculated in real time. The approach also facilitates an intuitive understanding of the growth mechanism and surface morphology [P.I. Cohen, G.S. Petrich, P.R. Pukite, G.J. Whaley, A.S. Arrott, Surf. Sci. 216 (1989) 222]. Method of solution: Epitaxial growth of thin films is modeled by a set of non-linear differential equations [P.I. Cohen, G.S. Petrich, P.R. Pukite, G.J. Whaley, A.S. Arrott, Surf. Sci. 216 (1989) 222]. The Runge-Kutta method with adaptive stepsize control is used for solving the initial value problem for the non-linear differential equations [W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes in Pascal: The Art of Scientific Computing, first ed., Cambridge University Press, 1989; see also: Numerical Recipes in C++, second ed., Cambridge University Press, 1992]. Typical running time: machine and user-parameter dependent. Unusual features of the program: The program is distributed in the form of a main project Growth.dpr file and an independent Rhd.pas file and should be compiled using Object Pascal compilers, including Borland Delphi.
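The structure of such a calculation can be sketched independently of the Pascal sources: integrate a simple birth-death layer-coverage model with an adaptive Runge-Kutta method (SciPy's solve_ivp below, standing in for the program's own Runge-Kutta routine) and form the out-of-phase kinematical intensity from the exposed layer fractions; one oscillation appears per monolayer.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rheed_oscillations(n_layers=10, rate=1.0, t_end=5.0):
    """Layer coverages obey d(theta_n)/dt = rate*(theta_{n-1} - theta_n),
    with theta_0 = 1 (the substrate). The anti-Bragg kinematical intensity
    is |sum_n (-1)^n s_n|^2 over exposed fractions s_n = theta_n - theta_{n+1}."""
    def rhs(t, theta):
        below = np.concatenate(([1.0], theta[:-1]))
        return rate * (below - theta)

    sol = solve_ivp(rhs, (0.0, t_end), np.zeros(n_layers),
                    dense_output=True, rtol=1e-8)
    t = np.linspace(0.0, t_end, 500)
    theta = np.vstack([np.ones_like(t), sol.sol(t), np.zeros_like(t)])
    exposed = theta[:-1] - theta[1:]                 # s_0 .. s_{n_layers}
    signs = (-1.0) ** np.arange(exposed.shape[0])
    return t, (signs @ exposed) ** 2

t, intensity = rheed_oscillations()   # intensity oscillates once per monolayer
```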
High resolution simulations of a variable HH jet
NASA Astrophysics Data System (ADS)
Raga, A. C.; de Colle, F.; Kajdič, P.; Esquivel, A.; Cantó, J.
2007-04-01
Context: In many papers, the flows in Herbig-Haro (HH) jets have been modeled as collimated outflows with a time-dependent ejection. In particular, a supersonic variability of the ejection velocity leads to the production of "internal working surfaces" which (for appropriate forms of the time-variability) can produce emitting knots that resemble the chains of knots observed along HH jets. Aims: In this paper, we present axisymmetric simulations of an "internal working surface" in a radiative jet (produced by an ejection velocity variability). We concentrate on a given parameter set (i.e., on a jet with a constant ejection density, and a sinusoidal velocity variability with a 20 yr period and a 40 km s-1 half-amplitude), and carry out a study of the behaviour of the solution for increasing numerical resolutions. Methods: In our simulations, we solve the gasdynamic equations together with a 17-species atomic/ionic network, and we are therefore able to compute emission coefficients for different emission lines. Results: We compute 3 adaptive grid simulations, with 20, 163 and 1310 grid points (at the highest grid resolution) across the initial jet radius. From these simulations we see that successively more complex structures are obtained for increasing numerical resolutions. Such an effect is seen in the stratifications of the flow variables as well as in the predicted emission line intensity maps. Conclusions: We find that, while the detailed structure of an internal working surface depends on resolution, the predicted emission line luminosities (integrated over the volume of the working surface) are surprisingly stable. This is definitely good news for the future computation of predictions from radiative jet models for carrying out comparisons with observations of HH objects.
A method for spectral DNS of low Rm channel flows based on the least dissipative modes
NASA Astrophysics Data System (ADS)
Kornet, Kacper; Pothérat, Alban
2015-10-01
We put forward a new type of spectral method for the direct numerical simulation of flows where anisotropy or very fine boundary layers are present. The main idea is to take advantage of the fact that such structures are dissipative and that their presence should reduce the number of degrees of freedom of the flow, when, paradoxically, their fine resolution incurs extra computational cost in most current methods. The principle of this method is to use a functional basis with elements that already include these fine structures so as to avoid these extra costs. This leads us to develop an algorithm to implement a spectral method for arbitrary functional bases, and in particular, non-orthogonal ones. We construct a basic implementation of this algorithm to simulate magnetohydrodynamic (MHD) channel flows with an externally imposed, transverse magnetic field, where very thin boundary layers are known to develop along the channel walls. In this case, the sought functional basis can be built out of the eigenfunctions of the dissipation operator, which incorporate these boundary layers, and it turns out to be non-orthogonal. We validate this new scheme against numerical simulations of freely decaying MHD turbulence based on a finite volume code, and it is found to provide accurate results. Its ability to fully resolve wall-bounded turbulence with a number of modes close to that required by the dynamics is demonstrated on a simple example. This opens the way to full-blown simulations of MHD turbulence under very high magnetic fields, which until now were too computationally expensive. In contrast to traditional methods, the computational cost of the proposed method does not depend on the intensity of the magnetic field.
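The key implementation point, projection onto a non-orthogonal functional basis, reduces to a Gram-matrix solve. A generic sketch (simple uniform-weight quadrature; the polynomial basis below merely stands in for the dissipation-operator eigenfunctions used by the authors):

```python
import numpy as np

def expansion_coefficients(basis, f, x):
    """Coefficients a solving G a = <phi_i, f> for a (possibly non-orthogonal)
    basis sampled on grid x; for an orthogonal basis G is diagonal and this
    reduces to ordinary spectral projection."""
    phi = np.array([b(x) for b in basis])
    w = np.full_like(x, x[1] - x[0])          # uniform quadrature weights
    G = (phi * w) @ phi.T                     # Gram matrix <phi_i, phi_j>
    return np.linalg.solve(G, (phi * w) @ f(x))

# a deliberately non-orthogonal polynomial basis on [0, 1]
x = np.linspace(0.0, 1.0, 2001)
coeffs = expansion_coefficients(
    [lambda t: 1 + 0 * t, lambda t: t, lambda t: t**2],
    lambda t: 3 * t**2 - t, x)
print(np.round(coeffs, 3))                    # ~ [0, -1, 3]
```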
Lithographic image simulation for the 21st century with 19th-century tools
NASA Astrophysics Data System (ADS)
Gordon, Ronald L.; Rosenbluth, Alan E.
2004-01-01
Simulation of lithographic processes in semiconductor manufacturing has gone from a crude learning tool 20 years ago to a critical part of yield enhancement strategy today. Although many disparate models, championed by equally disparate communities, exist to describe various photoresist development phenomena, these communities would all agree that the one piece of the simulation picture that can, and must, be computed accurately is the image intensity in the photoresist. The imaging of a photomask onto a thin-film stack is one of the only phenomena in the lithographic process that is described fully by well-known, definitive physical laws. Although many approximations are made in the derivation of the Fourier transform relations between the mask object, the pupil, and the image, these and their impacts are well understood and need little further investigation. The imaging process in optical lithography is modeled as a partially coherent, Köhler illumination system. As Hopkins has shown, we can separate the computation into two pieces: one that takes information about the illumination source, the projection lens pupil, the resist stack, and the mask size or pitch, and the other that needs only the details of the mask structure. As the latter piece of the calculation can be expressed as a fast Fourier transform, it is the first piece that dominates. This piece involves computation of a potentially large number of quantities called Transmission Cross-Coefficients (TCCs), which are correlations of the pupil function weighted with the illumination intensity distribution. The advantage of performing the image calculations this way is that the computation of these TCCs represents an up-front cost, not to be repeated if one is only interested in changing the mask features, which is the case in Model-Based Optical Proximity Correction (MBOPC). The downside, however, is that the number of these expensive double integrals that must be performed increases as the square of the mask unit cell area; this number can cause even the fastest computers to balk if one needs to study medium- or long-range effects. One can reduce this computational burden by approximating with a smaller area, but accuracy is usually a concern, especially when building a model that will purportedly represent a manufacturing process. This work reviews the current methodologies used to simulate the intensity distribution in air above the resist and addresses the above problems. More to the point, a methodology has been developed to eliminate the expensive numerical integrations in the TCC calculations, as the resulting integrals in many cases of interest can be either evaluated analytically or replaced by analytical functions accurate to within machine precision. With the burden of computing these numbers lightened, more accurate representations of the image field can be realized, and better overall models are then possible.
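In the Hopkins formulation referenced above, each transmission cross-coefficient is an integral of the source-weighted pupil overlap, TCC(f1, f2) = ∫ J(f) P(f + f1) P*(f + f2) df. A brute-force one-dimensional sketch (top-hat source of coherence factor sigma and an ideal binary pupil; a schematic of the expensive double-integral tabulation, not the analytic evaluations the paper develops):

```python
import numpy as np

def tcc_table(freqs, sigma=0.5, NA=1.0, nsrc=4001):
    """Tabulate TCC(f1, f2) on a frequency grid by direct quadrature over a
    uniform (top-hat) source of extent sigma and an ideal pupil |f| <= NA."""
    fs = np.linspace(-sigma, sigma, nsrc)            # source sample points
    P = lambda g: (np.abs(g) <= NA).astype(float)    # ideal binary pupil
    T = np.empty((len(freqs), len(freqs)))
    for i, f1 in enumerate(freqs):
        Pi = P(fs + f1)
        for j, f2 in enumerate(freqs):
            T[i, j] = np.mean(Pi * P(fs + f2))       # uniform-weight integral
    return T

T = tcc_table(np.linspace(-2.0, 2.0, 41))
```

Even this toy version shows why the cost grows with the square of the frequency grid: every (f1, f2) pair needs its own quadrature.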
Preface to advances in numerical simulation of plasmas
NASA Astrophysics Data System (ADS)
Parker, Scott E.; Chacon, Luis
2016-10-01
This Journal of Computational Physics Special Issue, titled "Advances in Numerical Simulation of Plasmas," presents a snapshot of the international state of the art in the field of computational plasma physics. The articles herein are a subset of the topics presented as invited talks at the 24th International Conference on the Numerical Simulation of Plasmas (ICNSP), August 12-14, 2015 in Golden, Colorado. The choice of papers was highly selective. The ICNSP is held every other year and is the premier scientific meeting in the field of computational plasma physics.
Numerical computation of linear instability of detonations
NASA Astrophysics Data System (ADS)
Kabanov, Dmitry; Kasimov, Aslan
2017-11-01
We propose a method to study linear stability of detonations by direct numerical computation. The linearized governing equations together with the shock-evolution equation are solved in the shock-attached frame using a high-resolution numerical algorithm. The computed results are processed by the Dynamic Mode Decomposition technique to generate dispersion relations. The method is applied to the reactive Euler equations with simple-depletion chemistry as well as more complex multistep chemistry. The results are compared with those known from normal-mode analysis. We acknowledge financial support from King Abdullah University of Science and Technology.
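The Dynamic Mode Decomposition post-processing step admits a compact generic form: given snapshot matrices X and Y = X advanced one time step, project the propagator onto the leading POD subspace and read growth rates and frequencies from the eigenvalues. A plain sketch, not the authors' implementation:

```python
import numpy as np

def dmd_eigenvalues(X, Y, r):
    """Leading DMD eigenvalues of the linear map best relating snapshot
    matrices X and Y (= X one time step later), truncated to rank r."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ Y @ Vh.conj().T / s     # projected propagator
    return np.linalg.eigvals(Atilde)

# growth rate sigma and angular frequency omega of each mode follow from
#   sigma + i*omega = log(lam) / dt
```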
NASA Technical Reports Server (NTRS)
Kutler, Paul; Yee, Helen
1987-01-01
Topics addressed include: numerical aerodynamic simulation; computational mechanics; supercomputers; aerospace propulsion systems; computational modeling in ballistics; turbulence modeling; computational chemistry; computational fluid dynamics; and computational astrophysics.
NASA Astrophysics Data System (ADS)
Reddy, S. R.; Kwembe, T.; Zhang, Z.
2016-12-01
We investigated the possible relationship between large-scale heat fluxes and the intensity change associated with the landfall of Hurricane Katrina. After reaching category 5 intensity on August 28, 2005 over the central Gulf of Mexico, Katrina weakened to category 3 before making landfall (August 29, 2005) on the Louisiana coast with maximum sustained winds of over 110 knots. We also examined the vertical motions associated with the intensity change of the hurricane. The data for Convective Available Potential Energy (CAPE), sea level pressure and wind speed were obtained from atmospheric soundings and the NOAA National Hurricane Center (NHC), respectively, for the period August 24 to September 3, 2005. We also computed vertical motions using CAPE values. The study showed that the large-scale heat fluxes reached a maximum (7960 W/m²) with a central pressure of 905 mb. The Convective Available Potential Energy and the vertical motions peaked 3-5 days before landfall. The large atmospheric vertical motions associated with the landfalling Hurricane Katrina produced severe weather including thunderstorms, tornadoes, storm surge and floods. A numerical model (WRF/ARW) with data assimilation has been used in this research to investigate the model's performance on the hurricane track and intensity associated with Katrina, which strengthened until reaching category 5 on 28 August 2005. The model was run on a doubly nested domain centered over the central Gulf of Mexico, with grid spacings of 90 km and 30 km, for 6 hr periods from August 28 to August 30. The model output was compared with the observations, and the model is capable of simulating the surface features, intensity change and track associated with Hurricane Katrina.
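The parcel-theory estimate connecting CAPE to peak vertical motion in such analyses is w_max = sqrt(2·CAPE); for example:

```python
import math

def wmax_from_cape(cape):
    """Parcel-theory upper bound on updraft speed: w_max = sqrt(2 * CAPE),
    with CAPE in J/kg and the result in m/s (entrainment and water loading,
    which reduce real updrafts, are neglected)."""
    return math.sqrt(2.0 * cape)

print(wmax_from_cape(2500.0))   # ~70.7 m/s for a CAPE of 2500 J/kg
```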
Validation of numerical model for cook stove using Reynolds averaged Navier-Stokes based solver
NASA Astrophysics Data System (ADS)
Islam, Md. Moinul; Hasan, Md. Abdullah Al; Rahman, Md. Mominur; Rahaman, Md. Mashiur
2017-12-01
Biomass-fired cook stoves have, for many years, been the main cooking appliance for the rural people of developing countries. Several studies have been carried out to find efficient stoves. In the present study, a numerical model of an improved household cook stove is developed to analyze the heat transfer and flow behavior of the gas during operation. The numerical model is validated against experimental results. The computation uses a non-premixed combustion model, and the Reynolds-averaged Navier-Stokes (RANS) equations with the κ-ε model govern the turbulent flow within the computational domain. The computational results are in good agreement with the experiment. The developed numerical model can be used to predict the effect of different biomasses on the efficiency of the cook stove.
Vectorization on the star computer of several numerical methods for a fluid flow problem
NASA Technical Reports Server (NTRS)
Lambiotte, J. J., Jr.; Howser, L. M.
1974-01-01
Some numerical methods are reexamined in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady-state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes, and a comparison is made of the methods for serial computation.
ERIC Educational Resources Information Center
Wilkinson-Riddle, G. J.; Patel, Ashok
1998-01-01
Discusses courseware development, including intelligent tutoring systems, under the Teaching and Learning Technology Programme and the Byzantium project that was designed to define computer-aided learning performance standards suitable for numerate business subjects; examine reasons to use computer-aided learning; and improve access to educational…
ERIC Educational Resources Information Center
Gonzalez-Vega, Laureano
1999-01-01
Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM)
NASA Technical Reports Server (NTRS)
1994-01-01
This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in the areas of (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving Langley facilities and scientists; and (4) computer science.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heckman, B.K.; Chinn, V.K.
1981-01-01
The development and use of computer programs written to produce the paper tape needed for the automation, or numeric control, of drill presses employed to fabricate computer-designed printed circuit boards are described. (LCL)
Alards-Tomalin, Doug; Walker, Alexander C; Nepon, Hillary; Leboe-McGowan, Launa C
2017-09-01
In the current study, cross-task interactions between number order and sound intensity judgments were assessed using a dual-task paradigm. Participants first categorized numerical sequences composed of Arabic digits as either ordered (ascending, descending) or non-ordered. Following each number sequence, participants then had to judge the intensity level of a target sound. Experiment 1 emphasized processing the two tasks independently (serial processing), while Experiments 2 and 3 emphasized processing the two tasks simultaneously (parallel processing). Cross-task interference occurred only when the task required parallel processing and was specific to ascending numerical sequences, which led to a higher proportion of louder sound intensity judgments. In Experiment 4 we examined whether this unidirectional interaction was the result of participants misattributing enhanced processing fluency experienced on ascending sequences as indicating a louder target sound. The unidirectional finding could not be entirely attributed to misattributed processing fluency, and may also be connected to experientially derived conceptual associations between ascending number sequences and greater magnitude, consistent with conceptual mapping theory.
Quasi-matched propagation of an ultrashort and intense laser pulse in a plasma channel
NASA Astrophysics Data System (ADS)
Benedetti, Carlo; Schroeder, Carl; Esarey, Eric; Leemans, Wim
2011-10-01
The propagation of an ultrashort and relativistically-intense laser pulse in a preformed parabolic plasma channel is investigated. The nonlinear paraxial wave equation is solved both analytically and numerically. Numerical solutions are obtained using the 2D cylindrical, envelope, ponderomotive, hybrid PIC/fluid code INF&RNO, recently developed at LBNL. For an arbitrary laser pulse profile with a given power for each longitudinal slice (less than the critical power for self-focusing), we determine the laser intensity distribution ensuring matched propagation in the channel, neglecting non-paraxial effects (self-steepening, red-shifting, etc.). Similarly, in the case of a Gaussian pulse profile, we determine the optimal channel depth yielding quasi-matched laser propagation, including the plasma density modification induced by the laser pulse. The analytical results obtained for both cases in the weakly-relativistic intensity regime are presented and validated through comparison with numerical simulations. Work supported by the Office of Science, Office of High Energy Physics, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
Evaluating a linearized Euler equations model for strong turbulence effects on sound propagation.
Ehrhardt, Loïc; Cheinet, Sylvain; Juvé, Daniel; Blanc-Benon, Philippe
2013-04-01
Sound propagation outdoors is strongly affected by atmospheric turbulence. Under strongly perturbed conditions or long propagation paths, the sound fluctuations reach their asymptotic behavior, e.g., the intensity variance progressively saturates. The present study evaluates the ability of a numerical propagation model, based on finite-difference time-domain solving of the linearized Euler equations, to quantitatively reproduce the wave statistics under strong and saturated intensity fluctuations. It is the continuation of a previous study where weak intensity fluctuations were considered. The numerical propagation model is presented and tested with two-dimensional harmonic sound propagation over long paths and strong atmospheric perturbations. The results are compared to quantitative theoretical or numerical predictions available on the wave statistics, including the log-amplitude variance and the probability density functions of the complex acoustic pressure. The match is excellent for the evaluated source frequencies and all sound fluctuation strengths. Hence, this model captures these many aspects of strong atmospheric turbulence effects on sound propagation. Finally, the model results for the intensity probability density function are compared with a standard fit by a generalized gamma function.
Numerical modeling of magnetic moments for UXO applications
Sanchez, V.; Li, Y.; Nabighian, M.; Wright, D.
2006-01-01
The surface magnetic anomaly observed in UXO clearance is mainly dipolar and, consequently, the dipole is the only magnetic moment regularly recovered in UXO applications. The dipole moment contains information about intensity of magnetization but lacks information about shape. In contrast, higher-order moments, such as quadrupole and octupole, encode asymmetry properties of the magnetization distribution within the buried targets. In order to improve our understanding of magnetization distribution within UXO and non-UXO objects and its potential utility in UXO clearance, we present a 3D numerical modeling study for highly susceptible metallic objects. The basis for the modeling is the solution of a nonlinear integral equation describing magnetization within isolated objects. A solution for magnetization distribution then allows us to compute magnetic moments of the object, analyze their relationships, and provide a depiction of the surface anomaly produced by different moments within the object. Our modeling results show significant high-order moments for more asymmetric objects situated at depths typical of UXO burial, and suggest that the increased relative contribution to magnetic gradient data from these higher-order moments may provide a practical tool for improved UXO discrimination.
Attentional bias in math anxiety.
Rubinsten, Orly; Eidlin, Hili; Wohl, Hadas; Akibli, Orly
2015-01-01
Cognitive theory from the field of general anxiety suggests that the tendency to display attentional bias toward negative information results in anxiety. Accordingly, the current study aims to investigate whether attentional bias is involved in math anxiety (MA) as well (i.e., a persistent negative reaction to math). Twenty-seven participants (14 with high levels of MA and 13 with low levels of MA) were presented with a novel computerized numerical version of the well-established dot probe task. One of six types of prime stimuli, either math related or typically neutral, was presented on one side of a computer screen. The prime was followed by a probe (either one or two asterisks) that appeared in either the prime location or the opposite location. Participants had to discriminate probe identity (one or two asterisks). Math-anxious individuals reacted faster when the probe was at the location of the numerical related stimuli. This suggests the existence of attentional bias in MA. That is, for math-anxious individuals, the cognitive system selectively favored the processing of emotionally negative information (i.e., math-related words). These findings suggest that attentional bias is linked to unduly intense MA symptoms.
The Use of Convolutional Neural Network in Relating Precipitation to Circulation
NASA Astrophysics Data System (ADS)
Pan, B.; Hsu, K. L.; AghaKouchak, A.; Sorooshian, S.
2017-12-01
Precipitation prediction in dynamical weather and climate models depends on 1) the predictability of pressure or geopotential height for the forecasting period and 2) the subsequent work of interpreting the pressure field in terms of precipitation events. The latter task is represented as parameterization schemes in numerical models, where detailed computing inevitably blurs the hidden cause-and-effect relationship in precipitation generation. The "big data" provided by numerical simulation, reanalysis, and observation networks require better causation analysis for researchers to digest and exploit them. While classic synoptic analysis methods are often insufficient for spatially distributed high-dimensional data, a Convolutional Neural Network (CNN) is developed here to directly relate precipitation with circulation. A case study carried out over the west coast of the United States during boreal winter showed that the CNN can locate and capture key pressure zones of different structures to project precipitation spatial distribution with high accuracy across hourly to monthly scales. This direct connection between atmospheric circulation and precipitation offers a probe for attributing precipitation to the coverage, location, intensity, and spatial structure of characteristic pressure zones, which can be used for model diagnosis and improvement.
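The abstract does not specify the network architecture; the PyTorch sketch below is only a minimal illustration of the stated idea of mapping a gridded circulation field directly to a gridded precipitation field.

import torch
import torch.nn as nn

class CircToPrecip(nn.Module):
    """Toy fully convolutional map from a pressure field to precipitation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),  # detect pressure structures
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),             # project to precipitation
        )

    def forward(self, pressure):  # shape (batch, 1, ny, nx)
        return self.net(pressure)

model = CircToPrecip()
dummy = torch.randn(4, 1, 64, 64)  # four synthetic circulation fields
print(model(dummy).shape)          # torch.Size([4, 1, 64, 64])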
NASA Astrophysics Data System (ADS)
Saleh, F.; Garambois, P. A.; Biancamaria, S.
2017-12-01
Floods are considered the major natural threat to human societies across all continents. Consequences of floods in highly populated areas are more dramatic, with losses of human lives and substantial property damage. This risk is projected to increase with the effects of climate change, particularly sea-level rise, increasing storm frequencies and intensities, and increasing population and economic assets in such urban watersheds. Despite advances in computational resources and modeling techniques, significant gaps exist in predicting complex processes and accurately representing the initial state of the system. Improving flood prediction models and data assimilation chains through satellite data has become an absolute priority to produce accurate flood forecasts with sufficient lead times. The overarching goal of this work is to assess the benefits of Surface Water and Ocean Topography (SWOT) satellite data from a flood prediction perspective. The near-real-time methodology is based on combining satellite data from a simulator that mimics the future SWOT data, numerical models, high-resolution elevation data, and real-time local measurements in the New York/New Jersey area.
NASA Astrophysics Data System (ADS)
Yi, Dake; Wang, TzuChiang
2018-06-01
In the paper, a new procedure is proposed to investigate three-dimensional fracture problems of a thin elastic plate with a long through-the-thickness crack under remote uniform tensile loading. The new procedure includes a new analytical method and highly accurate finite element simulations. In the theoretical analysis, three-dimensional Maxwell stress functions are employed in order to derive the three-dimensional crack tip fields. Based on the theoretical analysis, an equation is first derived which describes the relationship among the three-dimensional J-integral J(z), the stress intensity factor K(z), and the tri-axial stress constraint level T_z(z). In the finite element simulations, a fine mesh including 153,360 elements is constructed to compute the stress field near the crack front, J(z), and T_z(z). Numerical results show that in the plane very close to the free surface, the K-field solution is still valid for in-plane stresses. Comparison with the numerical results shows that the analytical results are valid.
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.
1991-01-01
Computations from two Navier-Stokes codes, NSS and F3D, are presented for a tangent-ogive-cylinder body at high angle of attack. Features of this steady flow include a pair of primary vortices on the leeward side of the body as well as secondary vortices. The topological and physical plausibility of this vortical structure is discussed. The accuracy of these codes is assessed by comparison of the numerical solutions with experimental data. The effects of turbulence model, numerical dissipation, and grid refinement are presented. The overall efficiency of these codes is also assessed by examining their convergence rates, computational time per time step, and maximum allowable time step for time-accurate computations. Overall, the numerical results from both codes compared equally well with experimental data; however, the NSS code was found to be significantly more efficient than the F3D code.
The development and application of CFD technology in mechanical engineering
NASA Astrophysics Data System (ADS)
Wei, Yufeng
2017-12-01
Computational Fluid Dynamics (CFD) is the analysis of physical phenomena involving fluid flow and heat conduction by computer-based numerical calculation and graphical display. The complexity of the physical problems that can be simulated and the precision of the numerical solution are directly related to computer hardware such as processor speed and memory. With the continuous improvement of computer performance and CFD technology, CFD has been widely applied in the fields of water conservancy engineering, environmental engineering, and industrial engineering. This paper summarizes the development process of CFD, its theoretical basis, and the governing equations of fluid mechanics, and introduces the various methods of numerical calculation and the related development of CFD technology. Finally, applications of CFD technology in mechanical engineering are summarized. It is hoped that this review will help researchers in the field of mechanical engineering.
Tight focusing of radially polarized circular Airy vortex beams
NASA Astrophysics Data System (ADS)
Chen, Musheng; Huang, Sujuan; Shao, Wei
2017-11-01
Tight focusing properties of radially polarized circular Airy vortex beams (CAVB) are studied numerically. The light field expressions for the focused fields are derived based on vectorial Debye theory. We also study the relationship between focal profiles, such as light intensity distribution, radius of focal spot and focal length, and the parameters of CAVB. Numerical results demonstrate that we can generate a radially polarized CAVB with super-long focal length, super-strong longitudinal intensity or subwavelength focused spot at the focal plane by properly choosing the parameters of incident light and high numerical aperture (NA) lens. These results have potential applications for optical trapping, optical storage and particle acceleration.
Numerical calculations of two dimensional, unsteady transonic flows with circulation
NASA Technical Reports Server (NTRS)
Beam, R. M.; Warming, R. F.
1974-01-01
The feasibility of obtaining two-dimensional, unsteady transonic aerodynamic data by numerically integrating the Euler equations is investigated. An explicit, third-order-accurate, noncentered, finite-difference scheme is used to compute unsteady flows about airfoils. Solutions for lifting and nonlifting airfoils are presented and compared with subsonic linear theory. The applicability and efficiency of the numerical indicial function method are outlined. Numerically computed subsonic and transonic oscillatory aerodynamic coefficients are presented and compared with those obtained from subsonic linear theory and transonic wind-tunnel data.
NASA Astrophysics Data System (ADS)
Zlotnik, Sergio
2017-04-01
Information provided by visualisation environments can be greatly increased if the data shown are combined with some relevant physical processes and the user is allowed to interact with those processes. This is particularly interesting in VR environments, where the user has a deep interplay with the data. For example, a geological seismic line in a 3D "cave" shows information on the geological structure of the subsoil. The available information could be enhanced with the thermal state of the region under study, with water-flow patterns in porous rocks, or with rock displacements under some stress conditions. The information added by the physical processes is usually the output of some numerical technique applied to solve a Partial Differential Equation (PDE) that describes the underlying physics. Many techniques are available to obtain numerical solutions of PDEs (e.g. Finite Elements, Finite Volumes, Finite Differences, etc.). However, all these traditional techniques require very large computational resources (particularly in 3D), making them useless in a real-time visualization environment such as VR, because the time required to compute a solution is measured in minutes or even hours. We present here a novel alternative for the resolution of PDE-based problems that is able to provide 3D solutions for a very large family of problems in real time. That is, the solution is evaluated in one thousandth of a second, making the solver ideal to be embedded into VR environments. Based on Model Order Reduction ideas, the proposed technique divides the computational work into a computationally intensive "offline" phase, which is run only once, and an "online" phase that allows the real-time evaluation of any solution within a family of problems. Preliminary examples of real-time solutions of complex PDE-based problems will be presented, including thermal problems, flow problems, wave problems and some simple coupled problems.
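The abstract does not name a specific reduction method; the NumPy sketch below assumes proper orthogonal decomposition (POD) via the SVD to illustrate the offline/online split.

import numpy as np

# Offline phase (expensive, run once): collect full-order solution snapshots
# and extract a low-dimensional basis.
snapshots = np.random.rand(10000, 200)   # stand-in for 200 precomputed solutions
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :20]                        # keep the 20 dominant modes

# Online phase (cheap, real time): a new solution is represented by only
# 20 coefficients and expanded on demand.
def reconstruct(reduced_coeffs):
    """Expand reduced coordinates back to the full-order field."""
    return basis @ reduced_coeffs

field = reconstruct(np.random.rand(20))  # evaluated in a fraction of a millisecond
print(field.shape)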
NASA Astrophysics Data System (ADS)
Kasprak, A.; Brasington, J.; Hafen, K.; Wheaton, J. M.
2015-12-01
Numerical models that predict channel evolution through time are an essential tool for investigating processes that occur over timescales which render field observation intractable. However, available morphodynamic models generally take one of two approaches to the complex problem of computing morphodynamics, resulting in oversimplification of the relevant physics (e.g. cellular models) or faithful, yet computationally intensive, representations of the hydraulic and sediment transport processes at play. The practical implication of these approaches is that river scientists must often choose between unrealistic results, in the case of the former, or computational demands that render modeling realistic spatiotemporal scales of channel evolution impossible. Here we present a new modeling framework that operates at the timescale of individual competent flows (e.g. floods), and uses a highly-simplified sediment transport routine that moves volumes of material according to morphologically-derived characteristic transport distances, or path lengths. Using this framework, we have constructed an open-source morphodynamic model, termed MoRPHED, which is here applied, and its validity investigated, at timescales ranging from a single event to a decade on two braided rivers in the UK and New Zealand. We do not purport that MoRPHED is the best, nor even an adequate, tool for modeling braided river dynamics at this range of timescales. Rather, our goal in this research is to explore the utility, feasibility, and sensitivity of an event-scale, path-length-based modeling framework for predicting braided river dynamics. To that end, we further explore (a) which processes are naturally emergent and which must be explicitly parameterized in the model, (b) the sensitivity of the model to the choice of particle travel distance, and (c) whether an event-scale model timestep is adequate for producing braided channel dynamics. The results of this research may inform techniques for future morphodynamic modeling that seeks to maximize computational resources while modeling fluvial dynamics at the timescales of change.
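A minimal NumPy sketch of the path-length idea described above: eroded volume is translated a characteristic transport distance downstream and deposited. The grid spacing, path length, and erosion field are illustrative placeholders, not values from MoRPHED.

import numpy as np

dx = 10.0                             # streamwise cell size (m), assumed
path_length = 150.0                   # characteristic transport distance (m), assumed
shift = int(round(path_length / dx))  # cells each eroded volume travels

eroded = np.zeros(500)                # eroded volume per cell for one flood (m^3)
eroded[100:110] = 2.0                 # a localized erosion patch

deposited = np.roll(eroded, shift)    # translate volumes downstream
deposited[:shift] = 0.0               # nothing wraps around the upstream boundary

bed_change = deposited - eroded       # net change per competent-flow event
print(bed_change[95:130])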
Toward a Big Data Science: A challenge of "Science Cloud"
NASA Astrophysics Data System (ADS)
Murata, Ken T.; Watanabe, Hidenobu
2013-04-01
During the last 50 years, along with the appearance and development of high-performance computers (and supercomputers), numerical simulation has come to be considered a third methodology for science, following the theoretical (first) and experimental and/or observational (second) approaches. The data yielded by the second approach have been getting more and more varied, owing to the progress of experimental and observational technologies. The amount of data generated by the third methodology has been getting larger and larger, because of the tremendous development of supercomputers and their programming techniques. Most of the data files created by both experiments/observations and numerical simulations are saved in digital formats and analyzed on computers. The researchers (domain experts) are interested not only in how to make experiments and/or observations or perform numerical simulations, but in what information (new findings) to extract from the data. However, data do not usually tell anything about the science by themselves; the science is implicitly hidden in the data. Researchers have to extract information from the data files to find new science. This is a basic concept of data-intensive (data-oriented) science for Big Data. As the scales of experiments and/or observations and numerical simulations get larger, new techniques and facilities are required to extract information from large numbers of data files. This technique is called informatics, a fourth methodology for new sciences. Any methodology must work on its own facilities: for example, the space environment is observed via spacecraft, and numerical simulations are performed on supercomputers. The facility of informatics, which deals with large-scale data, is a computational cloud system for science. This paper proposes a cloud system for informatics, which has been developed at NICT (National Institute of Information and Communications Technology), Japan. The NICT science cloud, named OneSpaceNet (OSN), is the first open cloud system for scientists who are going to carry out informatics for their own science. The science cloud is not for simple uses. Many functions are expected of the science cloud, such as data standardization, data collection and crawling, large and distributed data storage systems, security and reliability, databases and meta-databases, data stewardship, long-term data preservation, data rescue and preservation, data mining, parallel processing, data publication and provision, the semantic web, 3D and 4D visualization, out-reach and in-reach, and capacity building. A schematic picture of the NICT science cloud (figure not shown here) shows both types of data, from observation and simulation, stored in the storage system of the science cloud. It should be noted that there are two types of observational data. One comes from archive sites outside the cloud: data downloaded to the cloud through the Internet. The other comes from equipment directly connected to the science cloud; such configurations are often called sensor clouds. In the present talk, we first introduce the NICT science cloud. We next demonstrate the efficiency of the science cloud, showing several scientific results achieved with this cloud system. Through these discussions and demonstrations, the potential performance of the science cloud will be revealed for any research field.
Hasan, Nusair; Farouk, Bakhtier
2015-10-01
Flow and transport induced by resonant acoustic waves in a near-critical-fluid-filled cylindrical enclosure are investigated both experimentally and numerically. Supercritical carbon dioxide (near the critical or pseudo-critical states) in a confined resonator is subjected to an acoustic field created by an electro-mechanical acoustic transducer, and the induced pressure waves are measured by a fast-response pressure field microphone. The frequency of the acoustic transducer is chosen such that the lowest acoustic mode propagates along the enclosure. For numerical simulations, a real-fluid computational fluid dynamics model representing the thermo-physical and transport properties of the supercritical fluid is considered. The simulated acoustic field in the resonator is compared with measurements. The formation of acoustic streaming structures in the highly compressible medium is revealed by time-averaging the numerical solutions over a given period. Due to the diverging thermo-physical properties of the supercritical fluid near the critical point, large-scale oscillations are generated even for small sound field intensity. The strength of the acoustic wave field is found to be directly related to the thermodynamic state of the fluid. The effects of near-critical property variations and the operating pressure on the formation process of the streaming structures are also investigated. Irregular streaming patterns with significantly higher streaming velocities are observed for near-pseudo-critical states at operating pressures close to the critical pressure. However, these structures quickly re-orient to the typical Rayleigh streaming patterns with increasing operating pressure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, X.; Florinski, V.
We present a new model that couples galactic cosmic-ray (GCR) propagation with magnetic turbulence transport and the MHD background evolution in the heliosphere. The model is applied to the problem of the formation of corotating interaction regions (CIRs) during the last solar minimum, for the period between 2007 and 2009. The numerical model simultaneously calculates the large-scale supersonic solar wind properties and its small-scale turbulent content from 0.3 au to the termination shock. Cosmic rays are then transported through the background thus computed, with diffusion coefficients derived from the solar wind turbulent properties, using a stochastic Parker approach. Our results demonstrate that GCR variations depend on the ratio of diffusion coefficients in the fast and slow solar winds. Stream interfaces inside the CIRs always lead to depressions of the GCR intensity. On the other hand, heliospheric current sheet (HCS) crossings do not appreciably affect GCR intensities in the model, which is consistent with observations under quiet solar wind conditions. Therefore, variations in diffusion coefficients associated with CIR stream interfaces are more important for GCR propagation than the drift effects of the HCS during a negative solar minimum.
Eisenlohr, William Stewart; Stewart, J.E.
1952-01-01
During the night of August 4-5, 1943, a violent thunderstorm of unusual intensity occurred in parts of Braxton, Calhoun, Gilmer, Ritchie, and Wirt Counties in the Little Kanawha River Basin in central West Virginia. Precipitation amounted to as much as 15 inches in 2 hours in some sections. As a result, many small streams and a reach of the Little Kanawha River in the vicinity of Burnsville and Gilmer reached the highest stages known. Computations based on special surveys made at suitable sites on representative small streams in the areas of intense flooding indicate that peak discharges closely approached 50 percent of the Jarvis scale. Twenty-three lives were lost on the small tributaries as numerous homes were swept away by the flood, which developed with incredible rapidity during the early morning hours. Damage estimated at $1,300,000 was done to farm buildings, crops, land, livestock, railroads, highways, and gas- and oil-producing facilities. Considerable permanent land damage resulted from erosion and deposition of sand and gravel.
Laser–plasma interactions for fast ignition
Kemp, A. J.; Fiuza, F.; Debayle, A.; ...
2014-04-17
In the electron-driven fast-ignition approach to inertial confinement fusion, petawatt laser pulses are required to generate MeV electrons that deposit several tens of kilojoules in the compressed core of an imploded DT shell. We review recent progress in the understanding of intense laser-plasma interactions (LPI) relevant to fast ignition. Increases in computational and modeling capabilities, as well as algorithmic developments, have enhanced our ability to perform multidimensional particle-in-cell (PIC) simulations of LPI at relevant scales. We discuss the physics of the interaction in terms of the laser absorption fraction, the laser-generated electron spectra, divergence, and their temporal evolution. Scaling with irradiation conditions such as laser intensity, f-number, and wavelength is considered, as well as the dependence on plasma parameters. Different numerical modeling approaches and configurations are addressed, providing an overview of the modeling capabilities and limitations. In addition, we discuss the comparison of simulation results with experimental observables. In particular, we address the question of the surrogacy of today's experiments for the full-scale fast-ignition problem.
NASA Astrophysics Data System (ADS)
Guo, Yongfeng; Shen, Yajun; Tan, Jianguo
2016-09-01
The phenomenon of stochastic resonance (SR) in a piecewise nonlinear model driven by a periodic signal and correlated noises for the cases of a multiplicative non-Gaussian noise and an additive Gaussian white noise is investigated. Applying the path integral approach, the unified colored noise approximation and the two-state model theory, the analytical expression of the signal-to-noise ratio (SNR) is derived. It is found that conventional stochastic resonance exists in this system. From numerical computations we obtain that: (i) As a function of the non-Gaussian noise intensity, the SNR is increased when the non-Gaussian noise deviation parameter q is increased. (ii) As a function of the Gaussian noise intensity, the SNR is decreased when q is increased. This demonstrates that the effect of the non-Gaussian noise on SNR is different from that of the Gaussian noise in this system. Moreover, we further discuss the effect of the correlation time of the non-Gaussian noise, cross-correlation strength, the amplitude and frequency of the periodic signal on SR.
A mixed-mode crack analysis of rectilinear anisotropic solids using conservation laws of elasticity
NASA Technical Reports Server (NTRS)
Wang, S. S.; Yau, J. F.; Corten, H. T.
1980-01-01
A very simple and convenient method of analysis for studying two-dimensional mixed-mode crack problems in rectilinear anisotropic solids is presented. The analysis is formulated on the basis of conservation laws of anisotropic elasticity and of fundamental relationships in anisotropic fracture mechanics. The problem is reduced to a system of linear algebraic equations in mixed-mode stress intensity factors. One of the salient features of the present approach is that it can determine directly the mixed-mode stress intensity solutions from the conservation integrals evaluated along a path removed from the crack-tip region without the need of solving the corresponding complex near-field boundary value problem. Several examples with solutions available in the literature are solved to ensure the accuracy of the current analysis. This method is further demonstrated to be superior to other approaches in its numerical simplicity and computational efficiency. Solutions of more complicated and practical engineering problems dealing with the crack emanating from a circular hole in composites are presented also to illustrate the capacity of this method.
NASA Astrophysics Data System (ADS)
Rahman, M. Mostaqur; Hasan, A. B. M. Toufique; Rabbi, M. S.
2017-06-01
In transonic flow conditions, self-sustained shock wave oscillation on biconvex airfoils is initiated by the complex shock wave boundary layer interaction which is frequently observed in several modern internal aeronautical applications such as turbine cascades, compressor blades, butterfly valves, fans, nozzles, diffusers, and so on. Shock wave boundary layer interaction often generates serious problems such as unsteady boundary layer separation, self-excited shock wave oscillation with large pressure fluctuations, buffeting excitations, aeroacoustic noise, nonsynchronous vibration, high cycle fatigue failure, and intense drag rise. Recently, the control of self-excited shock oscillation around an airfoil using passive control techniques has been receiving intense interest. Among the passive means, control using an open cavity has been found promising. In this study, the effect of cavity size on the control of self-sustained shock oscillation was investigated numerically. The present computations are validated with available experimental results. The results showed that the average root mean square (RMS) of the pressure oscillation around the airfoil with an open cavity is reduced significantly compared to the airfoil without a cavity (clean airfoil).
Fermilab computing at the Intensity Frontier
Group, Craig; Fuess, S.; Gutsche, O.; ...
2015-12-23
The Intensity Frontier refers to a diverse set of particle physics experiments using high-intensity beams. In this paper I will focus the discussion on the computing requirements and solutions of a set of neutrino and muon experiments in progress or planned to take place at the Fermi National Accelerator Laboratory located near Chicago, Illinois. The experiments face unique challenges, but also have overlapping computational needs. In principle, by exploiting the commonality and utilizing centralized computing tools and resources, requirements can be satisfied efficiently and scientists of individual experiments can focus more on the science and less on the development of tools and infrastructure.
Cost-effective computational method for radiation heat transfer in semi-crystalline polymers
NASA Astrophysics Data System (ADS)
Boztepe, Sinan; Gilblas, Rémi; de Almeida, Olivier; Le Maoult, Yannick; Schmidt, Fabrice
2018-05-01
This paper introduces a cost-effective numerical model for infrared (IR) heating of semi-crystalline polymers. For the numerical and experimental studies presented here, semi-crystalline polyethylene (PE) was used. The optical properties of PE were experimentally analyzed under varying temperature, and the obtained results were used as input in the numerical studies. The model was built on an optically homogeneous medium assumption, whereas the strong variation in the thermo-optical properties of semi-crystalline PE under heating was taken into account. Thus, the change in the amount of radiative energy absorbed by the PE medium, induced by its temperature-dependent thermo-optical properties, was introduced in the model. The computational study was carried out as an iterative closed-loop computation, where the absorbed radiation was computed using an in-house developed radiation heat transfer algorithm, RAYHEAT, and the computed results were transferred into the commercial software COMSOL Multiphysics for solving the transient heat transfer problem to predict the temperature field. The predicted temperature field was used to iterate the thermo-optical properties of PE that vary under heating. In order to analyze the accuracy of the numerical model, experimental analyses were carried out performing IR-thermographic measurements during the heating of a PE plate. The applicability of the model in terms of computational cost, number of numerical inputs, and accuracy was highlighted.
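A minimal Python sketch of the iterative closed loop described above: absorption is recomputed whenever the temperature-dependent optical properties change and fed back into the thermal update. All functions here are crude placeholders, not the RAYHEAT or COMSOL implementations.

import numpy as np

N = 50
depth = np.linspace(0.0, 0.005, N)        # 1-D points through a 5 mm plate

def optical_properties(T):
    """Placeholder temperature-dependent absorption coefficient (1/m)."""
    return 50.0 + 0.1 * T.mean()

def solve_radiation(kappa):
    """Placeholder Beer-Lambert absorption of a 10 kW/m^2 IR source."""
    return 1.0e4 * kappa * np.exp(-kappa * depth)

def advance_heat(T, q, dt, rho_cp=2.0e6):
    """Explicit thermal update with a volumetric source (conduction omitted)."""
    return T + dt * q / rho_cp

T = np.full(N, 293.0)
for _ in range(100):                      # closed loop: optics <-> thermal field
    q_abs = solve_radiation(optical_properties(T))
    T = advance_heat(T, q_abs, dt=0.1)
print(T.max())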
NASA Astrophysics Data System (ADS)
Romanowicz, Barbara; Yuan, Kaiqing; Masson, Yder; Adourian, Sevan
2017-04-01
We have recently constructed the first global whole mantle radially anisotropic shear wave velocity model based on time domain full waveform inversion and numerical wavefield computations using the Spectral Element Method (French et al., 2013; French and Romanowicz, 2014). This model's most salient features are broad chimney-like low velocity conduits, rooted within the large-low-shear-velocity provinces (LLSVPs) at the base of the mantle, and extending from the core-mantle boundary up through most of the lower mantle, projecting to the earth's surface in the vicinity of major hotspots. The robustness of these features is confirmed through several non-linear synthetic tests, which we present here, including several iterations of inversion using a different starting model than that which served for the published model. The roots of these not-so-classical "plumes" are regions of more pronounced low shear velocity. While the detailed structure is not yet resolvable tomographically, at least two of them contain large (>800 km diameter) ultra-low-velocity zones (ULVZs), one under Hawaii (Cottaar and Romanowicz, 2012) and the other one under Samoa (Thorne et al., 2013). Through 3D numerical forward modelling of Sdiff phases down to 10s period, using data from broadband arrays illuminating the base of the Iceland plume from different directions, we show that such a large ULVZ also exists at the root of this plume, embedded within a taller region of moderately reduced low shear velocity, such as proposed by He et al. (2015). We also show that such a wide, but localized ULVZ is unique in a broad region around the base of the Iceland Plume. Because of the intense computational effort required for forward modelling of trial structures, to first order this ULVZ is represented by a cylindrical structure of diameter 900 km, height 20 km and velocity reduction 20%. To further refine the model, we have developed a technique which we call "tomographic telescope", in which we are able to compute the teleseismic wavefield down to periods of 10s only once, while subsequent iterations require numerical wavefield computations only within the target region, in this case, around the base of the Iceland plume. We describe the method and preliminary results of its implementation.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.
NASA Astrophysics Data System (ADS)
Řidký, V.; Šidlof, P.; Vlček, V.
2013-04-01
The work is devoted to comparing measured data with the results of numerical simulations. The mathematical model used was a laminar (turbulence-free) model for incompressible flow. The experiment observed the behavior of a NACA0015 airfoil in airflow. For the numerical solution the OpenFOAM computational package was used; this is open-source software based on the finite volume method. In the numerical solution the displacement of the airfoil is prescribed, corresponding to the experiment. The velocity at a point close to the airfoil surface is compared with the experimental data obtained from interferographic measurements of the velocity field. The numerical solution is computed on a 3D mesh composed of about 1 million orthogonal hexahedron elements. The time step is limited by the Courant number. Parallel computations are run on supercomputers of the CIV at the Technical University in Prague (HAL and FOX) and on a computer cluster of the Faculty of Mechatronics in Liberec (HYDRA). The run time is fixed at five periods; the results from the fifth period and the average over all periods are then compared with experiment.
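Since the time step is limited by the Courant number Co = U·Δt/Δx, a one-line Python helper (with illustrative values, not those of the simulation) makes the constraint explicit:

def max_time_step(u_max: float, dx: float, co_max: float = 1.0) -> float:
    """Largest stable time step for Courant number Co = u * dt / dx <= co_max."""
    return co_max * dx / u_max

# Example: u_max = 50 m/s on a 1 mm cell gives dt <= 2e-5 s.
print(max_time_step(50.0, 1.0e-3))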
Tensor methodology and computational geometry in direct computational experiments in fluid mechanics
NASA Astrophysics Data System (ADS)
Degtyarev, Alexander; Khramushin, Vasily; Shichkina, Julia
2017-07-01
The paper considers a generalized functional and algorithmic construction of direct computational experiments in fluid dynamics. The notation of tensor mathematics is naturally embedded in the finite-element operations used in the construction of numerical schemes. A large fluid particle, which has a finite size, its own weight, and internal displacement and deformation, is considered as an elementary computing object. The tensor representation of computational objects becomes a straightforward and unique approximation of elementary volumes and the fluid particles inside them. The proposed approach allows the use of explicit numerical schemes, which is an important condition for increasing the efficiency of the developed algorithms through numerical procedures with natural parallelism. Advantages of the proposed approach are achieved, among other things, by representing the motion of large particles of a continuous medium in dual coordinate systems and computing operations in the projections of these two coordinate systems with direct and inverse transformations. Thus a new method for the mathematical representation and synthesis of computational experiments based on the large particle method is proposed.
High order parallel numerical schemes for solving incompressible flows
NASA Technical Reports Server (NTRS)
Lin, Avi; Milner, Edward J.; Liou, May-Fun; Belch, Richard A.
1992-01-01
The use of parallel computers for numerically solving flow fields has gained much importance in recent years. This paper introduces a new high order numerical scheme for computational fluid dynamics (CFD) specifically designed for parallel computational environments. A distributed MIMD system gives the flexibility of treating different elements of the governing equations with totally different numerical schemes in different regions of the flow field. The parallel decomposition of the governing operator to be solved is the primary parallel split. The primary parallel split was studied using a hypercube like architecture having clusters of shared memory processors at each node. The approach is demonstrated using examples of simple steady state incompressible flows. Future studies should investigate the secondary split because, depending on the numerical scheme that each of the processors applies and the nature of the flow in the specific subdomain, it may be possible for a processor to seek better, or higher order, schemes for its particular subcase.
An efficient technique for the numerical solution of the bidomain equations.
Whiteley, Jonathan P
2008-08-01
Computing the numerical solution of the bidomain equations is widely accepted to be a significant computational challenge. In this study we extend a previously published semi-implicit numerical scheme with good stability properties that has been used to solve the bidomain equations (Whiteley, J.P. IEEE Trans. Biomed. Eng. 53:2139-2147, 2006). A new, efficient numerical scheme is developed which utilizes the observation that the only component of the ionic current that must be calculated on a fine spatial mesh and updated frequently is the fast sodium current. Other components of the ionic current may be calculated on a coarser mesh and updated less frequently, and then interpolated onto the finer mesh. Use of this technique to calculate the transmembrane potential and extracellular potential induces very little error in the solution. For the simulations presented in this study an increase in computational efficiency of over two orders of magnitude over standard numerical techniques is obtained.
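A minimal 1-D NumPy sketch of the mesh-splitting idea: the fast sodium current is evaluated on the fine mesh every step, while slower currents are evaluated on a coarse mesh, updated infrequently, and interpolated. The ionic currents are crude placeholders, not the paper's cell model.

import numpy as np

fine_x = np.linspace(0.0, 1.0, 401)
coarse_x = fine_x[::8]                       # 8x coarser spatial mesh

def i_na_fast(v):                            # placeholder fast sodium current
    return 10.0 * np.tanh(v)

def i_slow(v):                               # placeholder slow ionic currents
    return 0.1 * v

v = np.zeros_like(fine_x)
v[:40] = 1.0                                 # initial stimulus at one end
dt, slow_every = 0.01, 10
i_slow_fine = np.zeros_like(v)
for step in range(1000):
    if step % slow_every == 0:               # infrequent coarse-mesh update
        i_slow_fine = np.interp(fine_x, coarse_x, i_slow(v[::8]))
    v -= dt * (i_na_fast(v) + i_slow_fine)   # fine-mesh update every step
print(v.max())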
Computational methods for aerodynamic design using numerical optimization
NASA Technical Reports Server (NTRS)
Peeters, M. F.
1983-01-01
Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.
Macías-Díaz, J E; Macías, Siegfried; Medina-Ramírez, I E
2013-12-01
In this manuscript, we present a computational model to approximate the solutions of a partial differential equation which describes the growth dynamics of microbial films. The numerical technique reported in this work is an explicit, nonlinear finite-difference methodology which is computationally implemented using Newton's method. Our scheme is compared numerically against an implicit, linear finite-difference discretization of the same partial differential equation, whose computer coding requires an implementation of the stabilized bi-conjugate gradient method. Our numerical results evince that the nonlinear approach results in a more efficient approximation to the solutions of the biofilm model considered, and demands less computer memory. Moreover, the positivity of initial profiles is preserved in the practice by the nonlinear scheme proposed. Copyright © 2013 Elsevier Ltd. All rights reserved.
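A minimal NumPy sketch of a Newton iteration for a nonlinear finite-difference step, as the abstract describes; the residual below is a generic logistic-growth discretization used purely for illustration, not the biofilm model itself.

import numpy as np

def newton(residual, jacobian, u0, tol=1e-10, max_iter=50):
    """Solve residual(u) = 0 by Newton's method starting from u0."""
    u = u0.copy()
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        u -= np.linalg.solve(jacobian(u), r)
    return u

dt = 0.1
u_old = np.array([0.2, 0.4, 0.6])

def residual(u):                  # one implicit step of u' = u * (1 - u)
    return u - u_old - dt * u * (1.0 - u)

def jacobian(u):
    return np.eye(u.size) - dt * np.diag(1.0 - 2.0 * u)

print(newton(residual, jacobian, u_old))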
NASA Astrophysics Data System (ADS)
Bornyakov, V. G.; Boyda, D. L.; Goy, V. A.; Molochkov, A. V.; Nakamura, Atsushi; Nikolaev, A. A.; Zakharov, V. I.
2017-05-01
We propose and test a new approach to the computation of canonical partition functions in lattice QCD at finite density. We suggest a procedure consisting of a few steps. We first compute numerically the quark number density for imaginary chemical potential iμ_q^I. Then we restore the grand canonical partition function for imaginary chemical potential using a fitting procedure for the quark number density. Finally we compute the canonical partition functions using a high-precision numerical Fourier transformation. Additionally we compute the canonical partition functions using the known method of the hopping parameter expansion and compare the results obtained by the two methods in the deconfining as well as the confining phase. The agreement between the two methods indicates the validity of the new method. Our numerical results are obtained in two-flavor lattice QCD with clover-improved Wilson fermions.
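The final step, recovering canonical partition functions from the grand canonical one at imaginary chemical potential, is a Fourier projection; the NumPy sketch below uses a toy Z_GC (whose exact coefficients are modified Bessel functions) rather than lattice data.

import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)

def Z_GC(th):
    """Toy grand canonical partition function at imaginary chemical potential."""
    return np.exp(2.0 * np.cos(th))

def Z_canonical(n):
    """Fourier coefficient Z_C(n) = (1/2pi) * Int e^{-i n theta} Z_GC(theta) d theta."""
    return float(np.real(np.mean(np.exp(-1j * n * theta) * Z_GC(theta))))

# For this toy Z_GC the exact answer is the modified Bessel function I_n(2).
print([round(Z_canonical(n), 6) for n in range(4)])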
Ida, Masato; Taniguchi, Nobuyuki
2003-09-01
This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, the application of Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian-filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and the subgrid-scale models employed, but the applied filtering process alone, can be a seed of this numerical instability. An investigation concerning the relationship between turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy, which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question whether a numerically stable subgrid-scale model can be ideally accurate.
Classical problems in computational aero-acoustics
NASA Technical Reports Server (NTRS)
Hardin, Jay C.
1996-01-01
In relation to the expected problems in the development of computational aeroacoustics (CAA), the preliminary applications were to classical problems where the known analytical solutions could be used to validate the numerical results. Such comparisons were used to overcome the numerical problems inherent in these calculations. Comparisons were made between the various numerical approaches to the problems such as direct simulations, acoustic analogies and acoustic/viscous splitting techniques. The aim was to demonstrate the applicability of CAA as a tool in the same class as computational fluid dynamics. The scattering problems that occur are considered and simple sources are discussed.
NASA Astrophysics Data System (ADS)
Lou, Yang; Zhou, Weimin; Matthews, Thomas P.; Appleton, Catherine M.; Anastasio, Mark A.
2017-04-01
Photoacoustic computed tomography (PACT) and ultrasound computed tomography (USCT) are emerging modalities for breast imaging. As in all emerging imaging technologies, computer-simulation studies play a critically important role in developing and optimizing the designs of hardware and image reconstruction methods for PACT and USCT. Using computer-simulations, the parameters of an imaging system can be systematically and comprehensively explored in a way that is generally not possible through experimentation. When conducting such studies, numerical phantoms are employed to represent the physical properties of the patient or object to-be-imaged that influence the measured image data. It is highly desirable to utilize numerical phantoms that are realistic, especially when task-based measures of image quality are to be utilized to guide system design. However, most reported computer-simulation studies of PACT and USCT breast imaging employ simple numerical phantoms that oversimplify the complex anatomical structures in the human female breast. We develop and implement a methodology for generating anatomically realistic numerical breast phantoms from clinical contrast-enhanced magnetic resonance imaging data. The phantoms will depict vascular structures and the volumetric distribution of different tissue types in the breast. By assigning optical and acoustic parameters to different tissue structures, both optical and acoustic breast phantoms will be established for use in PACT and USCT studies.
GPU accelerated dynamic functional connectivity analysis for functional MRI data.
Akgün, Devrim; Sakoğlu, Ünal; Esquivel, Johnny; Adinoff, Bryon; Mete, Mutlu
2015-07-01
Recent advances in multi-core processors and graphics-card-based computational technologies have paved the way for an improved and dynamic utilization of parallel computing techniques. Numerous applications have been implemented for the acceleration of computationally-intensive problems in various computational science fields including bioinformatics, in which big data problems are prevalent. In neuroimaging, dynamic functional connectivity (DFC) analysis is a computationally demanding method used to investigate dynamic functional interactions among different brain regions or networks identified with functional magnetic resonance imaging (fMRI) data. In this study, we implemented and analyzed a parallel DFC algorithm based on thread-based and block-based approaches. The thread-based approach was designed to parallelize DFC computations and was implemented in both the Open Multi-Processing (OpenMP) and Compute Unified Device Architecture (CUDA) programming platforms. Another approach developed in this study, to better utilize the CUDA architecture, is the block-based approach, where parallelization involves smaller parts of fMRI time courses obtained by sliding windows. Experimental results showed that the proposed parallel design solutions enabled by the GPUs significantly reduce the computation time for DFC analysis. Multicore implementation using OpenMP on an 8-core processor provides up to 7.7× speed-up. GPU implementation using CUDA yielded substantial accelerations ranging from 18.5× to 157× speed-up once thread-based and block-based approaches were combined in the analysis. The proposed parallel programming solutions showed that multi-core processor and CUDA-supported GPU implementations accelerate DFC analyses significantly. The developed algorithms make DFC analyses more practical for multi-subject studies with more dynamic analyses. Copyright © 2015 Elsevier Ltd. All rights reserved.
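A minimal serial NumPy version of the sliding-window correlation at the core of DFC analysis; the window loop below is the part the paper distributes across OpenMP threads and CUDA blocks (signal lengths and window size here are illustrative).

import numpy as np

def sliding_window_dfc(x, y, window):
    """Correlation of two time courses within each sliding window."""
    n = len(x) - window + 1
    out = np.empty(n)
    for start in range(n):        # the loop parallelized on CPU/GPU
        out[start] = np.corrcoef(x[start:start + window],
                                 y[start:start + window])[0, 1]
    return out

t = np.arange(300)
x = np.sin(0.1 * t) + 0.3 * np.random.randn(300)
y = np.sin(0.1 * t + 0.5) + 0.3 * np.random.randn(300)
print(sliding_window_dfc(x, y, window=30)[:5])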
Scaling effects in spiral capsule robots.
Liang, Liang; Hu, Rong; Chen, Bai; Tang, Yong; Xu, Yan
2017-04-01
Spiral capsule robots can be applied to human gastrointestinal tracts and blood vessels. Because of significant variations in the inner diameters of the intestines as well as blood vessels, previous research has been unable to meet the requirements for medical applications. By applying the fluid dynamic equations, using the computational fluid dynamics method, to robot axial lengths ranging from 10^-5 to 10^-2 m, the operational performance indicators (axial driving force, load torque, and maximum fluid pressure on the pipe wall) of the spiral capsule robot and the fluid turbulent intensity around the robot spiral surfaces were numerically calculated in a straight rigid pipe filled with fluid. The reasonableness and validity of the calculation method adopted in this study were verified by the consistency of the values calculated by the computational fluid dynamics method with the experimental values from the relevant literature. The results show that the greater the fluid turbulent intensity, the greater the impact of the fluid turbulence on the driving performance of the spiral capsule robot and the higher the energy consumption of the robot. For the same size of robot, the axial driving force, the load torque, and the maximum fluid pressure on the pipe wall of the outer spiral robot were larger than those of the inner spiral robot. For different requirements of the operating environment, a suitable kind of spiral capsule robot can be chosen. This study provides a theoretical foundation for spiral capsule robots.
Extended optical theorem in isotropic solids and its application to the elastic radiation force
NASA Astrophysics Data System (ADS)
Leão-Neto, J. P.; Lopes, J. H.; Silva, G. T.
2017-04-01
In this article, we derive the extended optical theorem for elastic-wave scattering by a spherical inclusion (with and without absorption) in a solid matrix. This theorem expresses the extinction cross-section, i.e., the time-averaged power extracted from the incoming beam per unit incident intensity, in terms of the partial-wave expansion coefficients of the incident and scattered waves. We also establish the connection between the optical theorem and the elastic radiation force exerted by a plane wave in a linear and isotropic solid. We obtain the absorption, scattering, and extinction efficiencies (the corresponding power per characteristic incident intensity per sphere cross-section area) for a plane wave and a spherically focused beam. We discuss to what extent the radiation force theory for plane waves can be used in the focused beam case. Considering an iron sphere embedded in an aluminum matrix, we numerically compute the scattering and elastic radiation force efficiencies. The radiation force on a stainless steel sphere embedded in a tissue-like medium (soft solid) is also computed. In this case, resonances are observed in the force as a function of the sphere size parameter (the wavenumber times the sphere radius). Remarkably, the relative difference between our findings and previous lossless liquid models is about 100% in the long-wavelength limit. Regarding applications, the obtained results have a direct impact on ultrasound-based elastography techniques and ultrasonic nondestructive testing, as well as implantable devices activated by ultrasound.
NASA Astrophysics Data System (ADS)
Dombrovsky, Leonid A.; Reviznikov, Dmitry L.; Kryukov, Alexei P.; Levashov, Vladimir Yu
2017-10-01
The effect of shielding a solar probe from intense solar radiation by micron-sized SiC particles, generated during ablation of a composite thermal protection material, is estimated on the basis of a numerical solution to a combined radiative and heat transfer problem. The radiative properties of the particles are calculated using the Mie theory, and the spectral two-flux model is employed in radiative transfer calculations for non-uniform particle clouds. A computational model for the generation and evolution of the cloud is based on a conjugate heat transfer problem taking into account heating and thermal destruction of the matrix of the thermal protection material and sublimation of SiC particles in the generated cloud. The effect of light pressure, which is especially important for small particles, is also taken into account. The computed mass loss due to particle cloud sublimation is low, about 1 kg/m2 per hour, at a distance between the vehicle and the solar surface of about four solar radii. This indicates that embedding silicon carbide or other particles into a thermal protection layer, with the resulting generation of a particle cloud, can be considered a promising way to extend space missions through a significant decrease in the vehicle's working distance from the solar photosphere.
Ronald E. Coleman
1977-01-01
SEMTAP (Serpentine End Match TApe Program) is an easy and inexpensive method of programming a numerically controlled router for the manufacture of SEM (Serpentine End Matching) joints. The SEMTAP computer program allows the user to issue commands that will accurately direct a numerically controlled router along any SEM path. The user need not be a computer programmer to...
Vision-Based UAV Flight Control and Obstacle Avoidance
2006-01-01
denoted it by Vb = (Vb1, Vb2, Vb3). Fig. 2 shows the block diagram of the proposed vision-based motion analysis and obstacle avoidance system. We denote... structure analysis often involves computation-intensive computer vision tasks, such as feature extraction and geometric modeling. Computation-intensive... 1) First, we extract a set of features from each block. 2) Second, we compute the distance between these two sets of features. In conventional motion
Some Aspects of Nonlinear Dynamics and CFD
NASA Technical Reports Server (NTRS)
Yee, Helen C.; Merriam, Marshal (Technical Monitor)
1996-01-01
The application of nonlinear dynamics to improve the understanding of numerical uncertainties in computational fluid dynamics (CFD) is reviewed. Elementary examples in the use of dynamics to explain the nonlinear phenomena and spurious behavior that occur in numerics are given. The role of dynamics in the understanding of long time behavior of numerical integrations and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in CFD is explained. The study is complemented with examples of spurious behavior observed in CFD computations.
From cosmos to connectomes: the evolution of data-intensive science.
Burns, Randal; Vogelstein, Joshua T; Szalay, Alexander S
2014-09-17
The analysis of data requires computation: originally by hand and more recently by computers. Different models of computing are designed and optimized for different kinds of data. In data-intensive science, the scale and complexity of data exceeds the comfort zone of local data stores on scientific workstations. Thus, cloud computing emerges as the preeminent model, utilizing data centers and high-performance clusters, enabling remote users to access and query subsets of the data efficiently. We examine how data-intensive computational systems originally built for cosmology, the Sloan Digital Sky Survey (SDSS), are now being used in connectomics, at the Open Connectome Project. We list lessons learned and outline the top challenges we expect to face. Success in computational connectomics would drastically reduce the time between idea and discovery, as SDSS did in cosmology. Copyright © 2014 Elsevier Inc. All rights reserved.
The COBAIN (COntact Binary Atmospheres with INterpolation) Code for Radiative Transfer
NASA Astrophysics Data System (ADS)
Kochoska, Angela; Prša, Andrej; Horvat, Martin
2018-01-01
Standard binary star modeling codes make use of pre-existing solutions of the radiative transfer equation in stellar atmospheres. The various model atmospheres available today are consistently computed for single stars, under different assumptions - plane-parallel or spherical atmosphere approximation, local thermodynamical equilibrium (LTE) or non-LTE (NLTE), etc. However, they are nonetheless being applied to contact binary atmospheres by populating the surface corresponding to each component separately and neglecting any mixing that would typically occur at the contact boundary. In addition, single stellar atmosphere models do not take into account irradiance from a companion star, which can pose a serious problem when modeling close binaries. 1D atmosphere models are also solved under the assumption of an atmosphere in hydrodynamical equilibrium, which is not necessarily the case for contact atmospheres, as the potentially different densities and temperatures can give rise to flows that play a key role in the heat and radiation transfer. To resolve the issue of erroneous modeling of contact binary atmospheres using single star atmosphere tables, we have developed a generalized radiative transfer code for computation of the normal emergent intensity of a stellar surface, given its geometry and internal structure. The code uses a regular mesh of equipotential surfaces in a discrete set of spherical coordinates, which are then used to interpolate the values of the structural quantities (density, temperature, opacity) at any given point inside the mesh. The radiative transfer equation is numerically integrated in a set of directions spanning the unit sphere around each point and iterated until the intensity values for all directions and all mesh points converge within a given tolerance. We have found that this approach, albeit computationally expensive, is the only one that can reproduce the intensity distribution of the non-symmetric contact binary atmosphere, and it can be used with any existing or new model of the structure of contact binaries. We present results on several test objects and future prospects of the implementation in state-of-the-art binary star modeling software.
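As a rough illustration of the innermost operation such a scheme repeats for every mesh point and direction, the following sketch integrates the transfer equation dI/ds = -χ(I - S) to first order along one sampled ray; the array layout and function name are assumptions for illustration, not COBAIN's interface:

    import numpy as np

    def integrate_ray(s, chi, S, I0=0.0):
        # s, chi, S: 1-D arrays of path length, opacity, and source function
        # sampled at interpolation points along one ray; I0 is the boundary intensity
        I = I0
        for k in range(1, len(s)):
            dtau = 0.5 * (chi[k] + chi[k - 1]) * (s[k] - s[k - 1])   # optical-depth step
            S_mid = 0.5 * (S[k] + S[k - 1])
            I = I * np.exp(-dtau) + S_mid * (1.0 - np.exp(-dtau))    # formal solution per step
        return I

Sweeping such integrations over all directions and mesh points, and iterating until the intensities stop changing, is what makes the method converge, and also what makes it computationally expensive.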
Mechanics of the acoustic radiation force in tissue-like solids
NASA Astrophysics Data System (ADS)
Dontsov, Egor V.
The acoustic radiation force (ARF) is a phenomenon affiliated with the nonlinear effects of high-intensity wave propagation. It represents the mean momentum transfer from the sound wave to the medium, and allows for an effective computation of the mean motion (e.g. acoustic streaming in fluids) induced by a high-intensity sound wave. Nowadays, high-intensity focused ultrasound is frequently used in medical diagnostic applications due to its ability to "push" inside the tissue with the radiation body force and thus facilitate the local quantification of the tissue's viscoelastic properties. The main objectives of this study include: i) the theoretical investigation of the ARF in fluids and tissue-like solids generated, respectively, by an amplitude-modulated plane wave and focused ultrasound; ii) computation of the nonlinear acoustic wave propagation when the amplitude of the focused ultrasound field is modulated by a low-frequency signal; and iii) modeling of the ARF-induced motion in tissue-like solids for the purpose of quantifying their nonlinear elasticity via the magnitude of the ARF. Regarding the first part, a comparison with the existing theory of the ARF reveals a number of key features that are brought to light by the new formulation, including the contributions to the ARF of ultrasound modulation and thermal expansion, as well as the precise role of constitutive nonlinearities in generating the sustained body force in tissue-like solids by a focused ultrasound beam. In the second part, a hybrid time-frequency domain algorithm for the numerical analysis of the nonlinear wave equation is proposed. The approach is validated by comparing the results to finite-difference modeling in the time domain. Regarding the third objective, the Fourier transform approach is used to compute the ARF-induced shear wave motion in tissue-mimicking phantoms. A comparison between experiment (tests performed at the Mayo Clinic) and model permitted the estimation of a particular coefficient of nonlinear tissue elasticity from the amplitude of the ARF-generated shear waves. For completeness, the ARF estimates of this coefficient are verified via an established technique known as acoustoelasticity.
Kopans, D B
2000-07-01
Clearly, the cost of double reading varies with the approach used. The Massachusetts General Hospital method can only lead to an increase in recalls and the costs that these engender (anxiety for the women recalled, trauma from any biopsies obtained, and the actual monetary costs of additional imaging and interventions). It is of interest that one potential cost, the concern that recalled women may be reluctant to participate again in screening, does not seem to materialize: women who are recalled appear to be more likely to participate in future screening. Double interpretation, in which the interpreting radiologists must reach a consensus and, failing that, a third arbiter decides, is the most labor intensive, but it can reduce the number of recalls in a double-reading system. Computer systems have been developed to act as a second reader. The films must be digitized and then fed through the reader, but studies suggest that the computer can identify cancers that may be overlooked by a human reader. The challenge is to do this without too many false-positive calls. If the radiologist finds the false-positives too numerous and distracting, then the system is not used. As digital mammographic systems proliferate and computer algorithms become more sophisticated, the second human reader will likely be replaced by a computer-aided detection system, and double reading will become the norm.
NASA Astrophysics Data System (ADS)
Baba, J. S.; Koju, V.; John, D.
2015-03-01
The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computational modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute-force solutions to stochastic light-matter interactions entailing scattering, by facilitating timely propagation of sufficient (>10^7) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model, which works well for isotropic scatter but fails for anisotropic scatter, the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing the anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization-sensitive Monte Carlo method of Ramella-Roman et al. to include the computationally intensive tracking of photon trajectory, in addition to polarization state, at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated to the photon penetration depth, thus potentiating the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.
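A stripped-down sketch of the per-event bookkeeping involved: each scattering event samples a deflection from a phase function and rotates the photon's local reference frame, and this per-photon loop is what OpenMP parallelizes. The Henyey-Greenstein phase function and all parameters here are illustrative assumptions, not the specifics of the extended code:

    import numpy as np

    def hg_cosine(g, rng):
        # sample cos(theta) from the Henyey-Greenstein phase function
        if g == 0.0:
            return 2.0 * rng.random() - 1.0
        s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
        return (1.0 + g * g - s * s) / (2.0 * g)

    def photon_depth(mu_s=10.0, g=0.9, n_events=200, seed=0):
        rng = np.random.default_rng(seed)
        d = np.array([0.0, 0.0, 1.0])                    # current propagation direction
        z = 0.0                                          # penetration depth
        for _ in range(n_events):
            z += d[2] * rng.exponential(1.0 / mu_s)      # free flight along d
            ct = hg_cosine(g, rng)
            st = np.sqrt(max(0.0, 1.0 - ct * ct))
            phi = 2.0 * np.pi * rng.random()
            # build a frame orthogonal to d, then deflect d by (theta, phi)
            a = np.array([1.0, 0.0, 0.0]) if abs(d[2]) > 0.9 else np.array([0.0, 0.0, 1.0])
            e1 = np.cross(d, a); e1 /= np.linalg.norm(e1)
            e2 = np.cross(d, e1)
            d = st * np.cos(phi) * e1 + st * np.sin(phi) * e2 + ct * d
        return z

Tracking the Berry phase additionally requires accumulating the geometric (solid-angle) contribution of these frame rotations along the whole trajectory, which is the computationally intensive part the report refers to.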
Numerical simulations of self-focusing of ultrafast laser pulses
NASA Astrophysics Data System (ADS)
Fibich, Gadi; Ren, Weiqing; Wang, Xiao-Ping
2003-05-01
Simulation of nonlinear propagation of intense ultrafast laser pulses is a hard problem, because of the steep spatial gradients and the temporal shocks that form during the propagation. In this study we adapt the iterative grid distribution method of Ren and Wang [J. Comput. Phys. 159, 246 (2000)] to solve the two-dimensional nonlinear Schrödinger equation with normal time dispersion, space-time focusing, and self-steepening. Our simulations show that, after the asymmetric temporal pulse splitting, the rear peak self-focuses faster than the front one. As a result, the collapse of the rear peak is arrested before that of the front peak. Unlike what has sometimes been conjectured, however, collapse of the two peaks is not arrested through multiple splittings, but rather through temporal dispersion.
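Schematically, the simplest member of the family of models simulated here is the two-dimensional nonlinear Schrödinger equation with normal time dispersion; space-time focusing and self-steepening add further correction terms. The notation below is generic rather than the paper's:

    i\,\partial_z \psi + \nabla_{\perp}^{2}\psi - \beta\,\partial_{tt}\psi + |\psi|^{2}\psi = 0, \qquad \beta > 0 ,

where z is the propagation distance, t the retarded time, and β > 0 encodes normal group-velocity dispersion.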
Self-organizing maps for learning the edit costs in graph matching.
Neuhaus, Michel; Bunke, Horst
2005-06-01
Although graph matching and graph edit distance computation have become areas of intensive research recently, the automatic inference of the cost of edit operations has remained an open problem. In the present paper, we address the issue of learning graph edit distance cost functions for numerically labeled graphs from a corpus of sample graphs. We propose a system of self-organizing maps (SOMs) that represent the distance measuring spaces of node and edge labels. Our learning process is based on the concept of self-organization. It adapts the edit costs in such a way that the similarity of graphs from the same class is increased, whereas the similarity of graphs from different classes decreases. The learning procedure is demonstrated on two different applications involving line drawing graphs and graphs representing diatoms, respectively.
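To make the self-organization step concrete, here is a minimal sketch of one SOM update for numeric node labels and a substitution cost read off from the learned map (1-D map, Gaussian neighborhood; the names and parameters are illustrative, not those of the paper's system):

    import numpy as np

    def som_update(weights, x, lr=0.1, sigma=1.0):
        # weights: (units, label_dim) prototype labels; x: one observed label
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))       # best-matching unit
        idx = np.arange(len(weights))
        h = np.exp(-((idx - bmu) ** 2) / (2.0 * sigma ** 2))       # neighborhood kernel
        weights += lr * h[:, None] * (x - weights)                 # pull neighborhood toward x
        return weights

    def substitution_cost(weights, a, b):
        # cost of substituting label a by label b: distance between their prototypes
        wa = weights[np.argmin(np.linalg.norm(weights - a, axis=1))]
        wb = weights[np.argmin(np.linalg.norm(weights - b, axis=1))]
        return float(np.linalg.norm(wa - wb))

Training on same-class label pairs contracts the map where substitutions should be cheap, which is the intuition behind the class-driven cost adaptation described above.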
NASA Technical Reports Server (NTRS)
Yesilyurt, Serhat; Vujisic, Ljubomir; Motakef, Shariar; Szofran, F. R.; Volz, Martin P.
1998-01-01
Thermoelectric currents at the growth interface of GeSi during Bridgman growth are shown to promote convection when a low-intensity axial magnetic field is applied. Thermoelectromagnetic convection (TEMC) is typically characterized by a meridional flow driven by the rotation of the fluid; meridional convection substantially alters the composition of the melt and the shape of the growth interface. The TEMC effect is more important in a micro-gravity environment than in a terrestrial one, and can be used to control convection during the growth of GeSi. In this work, the coupled thermo-solutal flow equations (energy, scalar transport, momentum and mass) are solved in tandem with Maxwell's equations to compute the thermo-solutal flow field, the electric currents, and the growth-interface shape.
Synchronization in complex networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arenas, A.; Diaz-Guilera, A.; Moreno, Y.
Synchronization processes in populations of locally interacting elements are the focus of intense research in physical, biological, chemical, technological and social systems. The many efforts devoted to understanding synchronization phenomena in natural systems now take advantage of the recent theory of complex networks. In this review, we report the advances in the comprehension of synchronization phenomena when oscillating elements are constrained to interact in a complex network topology. We also overview the new emergent features coming out of the interplay between the structure and the function of the underlying pattern of connections. Extensive numerical work as well as analytical approaches to the problem are presented. Finally, we review several applications of synchronization in complex networks to different disciplines: biological systems and neuroscience, engineering and computer science, and economy and social sciences.
Plasma Physics Calculations on a Parallel Macintosh Cluster
NASA Astrophysics Data System (ADS)
Decyk, Viktor; Dauger, Dean; Kokelaar, Pieter
2000-03-01
We have constructed a parallel cluster consisting of 16 Apple Macintosh G3 computers running the MacOS, and achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. A subset of the MPI message-passing library was implemented in Fortran77 and C. This library enabled us to port code, without modification, from other parallel processors to the Macintosh cluster. For large problems where message packets are large and relatively few in number, performance of 50-150 MFlops/node is possible, depending on the problem. This is fast enough that 3D calculations can be routinely done. Unlike Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. Full details are available on our web site: http://exodus.physics.ucla.edu/appleseed/.
NASA Astrophysics Data System (ADS)
Campos-García, Manuel; Granados-Agustín, Fermín.; Cornejo-Rodríguez, Alejandro; Estrada-Molina, Amilcar; Avendaño-Alejo, Maximino; Moreno-Oliva, Víctor Iván.
2013-11-01
In order to obtain a clearer interpretation of the Intensity Transport Equation (ITE), in this work we propose an algorithm to solve it for some particular wavefronts and their corresponding intensity distributions. By simulating intensity distributions in some planes, the ITE turns into a Poisson equation with Neumann boundary conditions. The Poisson equation is solved by means of the iterative algorithm SOR (Successive Over-Relaxation).
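A minimal sketch of such an SOR solver on a uniform grid, with mirror indexing implementing the homogeneous Neumann boundary condition (the grid spacing, relaxation factor, and tolerance are illustrative; the right-hand side f must satisfy the usual Neumann compatibility condition):

    import numpy as np

    def sor_poisson_neumann(f, h=1.0, omega=1.8, tol=1e-6, max_iter=20000):
        # solve laplacian(phi) = f with zero normal derivative on the boundary
        n, m = f.shape
        phi = np.zeros_like(f, dtype=float)
        for _ in range(max_iter):
            err = 0.0
            for i in range(n):
                for j in range(m):
                    im = i - 1 if i > 0 else 1             # mirror indices: Neumann BC
                    ip = i + 1 if i < n - 1 else n - 2
                    jm = j - 1 if j > 0 else 1
                    jp = j + 1 if j < m - 1 else m - 2
                    gs = 0.25 * (phi[im, j] + phi[ip, j] + phi[i, jm] + phi[i, jp]
                                 - h * h * f[i, j])        # Gauss-Seidel value
                    err = max(err, abs(gs - phi[i, j]))
                    phi[i, j] += omega * (gs - phi[i, j])  # over-relaxed update
            if err < tol:
                break
        return phi - phi.mean()                            # pin the free additive constant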
NASA Astrophysics Data System (ADS)
Deryabin, M. S.; Kasyanov, D. A.; Kurin, V. V.; Garasyov, M. A.
2016-05-01
We show that a significant energy redistribution occurs in the spectrum of reflected nonlinear waves when an intense acoustic beam is reflected from an acoustically soft boundary, which manifests itself within short, wavelength-scale distances from the reflecting boundary. This effect leads to the appearance of extrema in the distributions of the amplitude and intensity of the field of the reflected acoustic beam near the reflecting boundary. The results of physical experiments are confirmed by numerical modeling of the process of transformation of nonlinear waves reflected from an acoustically soft boundary. Numerical modeling was performed by means of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation.
Courant number and unsteady flow computation
Lai, Chintu; ,
1993-01-01
The Courant number C, the key to unsteady flow computation, is a ratio of physical wave velocity, λ, to computational signal-transmission velocity, σ, i.e., C = λ/σ. In this way, it uniquely relates a physical quantity to a mathematical quantity. Because most unsteady open-channel flows are describable by a set of n characteristic equations along n characteristic paths, each represented by velocity λi, i = 1, 2, ..., n, there exist as many as n components for the numerator of C. To develop a numerical model, a numerical integration must be made on each characteristic curve from an earlier point to a later point on the curve. Different numerical methods are available in unsteady flow computation due to the different paths along which the numerical integration is actually performed. For the denominator of C, the σ defined as σ = σ0 = Δx/Δt has been customarily used; thus, the Courant number has the familiar form Cσ = λ/σ0. This form will be referred to as the "common Courant number" in this paper. The commonly used numerical criteria on Cσ for stability, neutral stability and instability are imprecise or not universal, in the sense that σ0 does not always reflect the true maximum computational data-transmission speed of the scheme at hand, i.e., Cσ is no indication of the Courant constraint. In view of this, a new Courant number, called the "natural Courant number", Cn, that truly reflects the Courant constraint, has been defined. However, considering the numerous advantages inherent in the traditional Cσ, a useful and meaningful composite Courant number, denoted by Cσ*, has been formulated from Cσ. It is hoped that the new aspects of the Courant number discussed herein afford the hydraulician a broader perspective, consistent criteria, and unified guidelines with which to model various unsteady flows.
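As a worked example of the definitions above (all values illustrative):

    # common Courant number: C_sigma = lambda / sigma_0, with sigma_0 = dx/dt
    wave_speed = 2.5              # lambda: physical wave celerity, m/s
    dx, dt = 100.0, 30.0          # grid spacing (m) and time step (s)
    sigma_0 = dx / dt             # computational signal-transmission velocity, m/s
    C_sigma = wave_speed / sigma_0
    print(C_sigma)                # 0.75; the classic explicit-scheme constraint is C <= 1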
Aeroacoustics of Turbulent High-Speed Jets
NASA Technical Reports Server (NTRS)
Rao, Ram Mohan; Lundgren, Thomas S.
1996-01-01
Aeroacoustic noise generation in a supersonic round jet is studied to understand, in particular, the effect of turbulence structure on the noise without numerically compromising the turbulence itself. This means that direct numerical simulations (DNS's) are needed. In order to use DNS at high enough Reynolds numbers to get sufficient turbulence structure, we have decided to solve the temporal jet problem, using periodicity in the direction of the jet axis. Physically this means that turbulent structures in the jet are repeated in successive downstream cells instead of being gradually modified downstream into a jet plume. Therefore, in order to answer some questions about the turbulence, we will partially compromise the overall structure of the jet. The first section of chapter 1 describes some work on the linear stability of a supersonic round jet and the implications of this for the jet noise problem. In the second section we present preliminary work done using a TVD numerical scheme on a CM5. This work is only two-dimensional (plane) but shows very interesting results, including weak shock waves. However, this is an inviscid computation, and the method resolves the shocks by adding extra numerical dissipation where the gradients are large. One wonders whether the extra dissipation would influence small turbulent structures like small intense vortices. The second chapter is an extensive discussion of preliminary numerical work using the spectral method to solve the compressible Navier-Stokes equations to study turbulent jet flows. The method uses Fourier expansions in the azimuthal and streamwise directions and a 1-D B-spline basis representation in the radial direction. The B-spline basis is locally supported, and this ensures block-diagonal matrix equations which are solved in O(N) steps. A very accurate, highly resolved DNS of a turbulent jet flow is expected.
Numerical methods for engine-airframe integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murthy, S.N.B.; Paynter, G.C.
1986-01-01
Various papers on numerical methods for engine-airframe integration are presented. The individual topics considered include: scientific computing environment for the 1980s, overview of prediction of complex turbulent flows, numerical solutions of the compressible Navier-Stokes equations, elements of computational engine/airframe integration, computational requirements for efficient engine installation, application of CAE and CFD techniques to complete tactical missile design, CFD applications to engine/airframe integration, and application of a second-generation low-order panel method to powerplant installation studies. Also addressed are: three-dimensional flow analysis of turboprop inlet and nacelle configurations, application of computational methods to the design of large turbofan engine nacelles, comparison of full potential and Euler solution algorithms for aeropropulsive flow field computations, subsonic/transonic and supersonic nozzle flows and nozzle integration, subsonic/transonic prediction capabilities for nozzle/afterbody configurations, three-dimensional viscous design methodology of supersonic inlet systems for advanced technology aircraft, and a user's technology assessment.
The Construction of 3-d Neutral Density for Arbitrary Data Sets
NASA Astrophysics Data System (ADS)
Riha, S.; McDougall, T. J.; Barker, P. M.
2014-12-01
The Neutral Density variable allows inference of water pathways from thermodynamic properties in the global ocean, and is therefore an essential component of global ocean circulation analysis. The widely used algorithm for the computation of Neutral Density yields accurate results for data sets which are close to the observed climatological ocean. Long-term numerical climate simulations, however, often generate a significant drift from present-day climate, which renders the existing algorithm inaccurate. To remedy this problem, new algorithms which operate on arbitrary data have been developed, which may potentially be used to compute Neutral Density during runtime of a numerical model. We review existing approaches for the construction of Neutral Density in arbitrary data sets, detail their algorithmic structure, and present an analysis of the computational cost for implementations on a single-CPU computer. We discuss possible strategies for the implementation in state-of-the-art numerical models, with a focus on distributed computing environments.
Fusing Symbolic and Numerical Diagnostic Computations
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
X-2000 Anomaly Detection Language denotes a developmental computing language, and the software that establishes and utilizes the language, for fusing two diagnostic computer programs, one implementing a numerical analysis method, the other implementing a symbolic analysis method into a unified event-based decision analysis software system for realtime detection of events (e.g., failures) in a spacecraft, aircraft, or other complex engineering system. The numerical analysis method is performed by beacon-based exception analysis for multi-missions (BEAMs), which has been discussed in several previous NASA Tech Briefs articles. The symbolic analysis method is, more specifically, an artificial-intelligence method of the knowledge-based, inference engine type, and its implementation is exemplified by the Spacecraft Health Inference Engine (SHINE) software. The goal in developing the capability to fuse numerical and symbolic diagnostic components is to increase the depth of analysis beyond that previously attainable, thereby increasing the degree of confidence in the computed results. In practical terms, the sought improvement is to enable detection of all or most events, with no or few false alarms.
NASA Technical Reports Server (NTRS)
Chuang, C.-H.; Goodson, Troy D.; Ledsinger, Laura A.
1995-01-01
This report describes current work in the numerical computation of multiple burn, fuel-optimal orbit transfers and presents an analysis of the second variation for extremal multiple burn orbital transfers as well as a discussion of a guidance scheme which may be implemented for such transfers. The discussion of numerical computation focuses on the use of multivariate interpolation to aid the computation in the numerical optimization. The second variation analysis includes the development of the conditions for the examination of both fixed and free final time transfers. Evaluations for fixed final time are presented for extremal one, two, and three burn solutions of the first variation. The free final time problem is considered for an extremal two burn solution. In addition, corresponding changes of the second variation formulation over thrust arcs and coast arcs are included. The guidance scheme discussed is an implicit scheme which implements a neighboring optimal feedback guidance strategy to calculate both thrust direction and thrust on-off times.
NASA Technical Reports Server (NTRS)
Sreenivas, Kidambi; Whitfield, David L.
1995-01-01
Two linearized solvers (time and frequency domain) based on a high resolution numerical scheme are presented. The basic approach is to linearize the flux vector by expressing it as a sum of a mean and a perturbation. This allows the governing equations to be maintained in conservation law form. A key difference between the time and frequency domain computations is that the frequency domain computations require only one grid block irrespective of the interblade phase angle for which the flow is being computed. As a result of this and due to the fact that the governing equations for this case are steady, frequency domain computations are substantially faster than the corresponding time domain computations. The linearized equations are used to compute flows in turbomachinery blade rows (cascades) arising due to blade vibrations. Numerical solutions are compared to linear theory (where available) and to numerical solutions of the nonlinear Euler equations.
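In symbols, the shared linearization can be sketched as follows (the notation is assumed here for illustration): writing the state vector as a mean plus a small perturbation,

    Q = \bar{Q} + Q', \qquad
    F(Q) \;\approx\; F(\bar{Q}) + A(\bar{Q})\,Q', \qquad
    A(\bar{Q}) = \left.\frac{\partial F}{\partial Q}\right|_{\bar{Q}} ,

which keeps the governing equations in conservation-law form while making them linear in Q'; in the frequency domain the resulting problem is steady, which is why a single grid block suffices there.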
ERIC Educational Resources Information Center
Everhart, Julie M.; Alber-Morgan, Sheila R.; Park, Ju Hee
2011-01-01
This study investigated the effects of computer-based practice on the acquisition and maintenance of basic academic skills for two children with moderate to intensive disabilities. The special education teacher created individualized computer games that enabled the participants to independently practice academic skills that corresponded with their…
Implementation and adherence issues in a workplace treadmill desk intervention.
Tudor-Locke, Catrine; Hendrick, Chelsea A; Duet, Megan T; Swift, Damon L; Schuna, John M; Martin, Corby K; Johnson, William D; Church, Timothy S
2014-10-01
We report experiences, observations, and general lessons learned, specifically with regards to participant recruitment and adherence, while implementing a 6-month randomized controlled treadmill desk intervention (the WorkStation Pilot Study) in a real-world office-based health insurance workplace. Despite support from the company's upper administration, relatively few employees responded to the company-generated e-mail to participate in the study. Ultimately only 41 overweight/obese participants were deemed eligible and enrolled from a recruitment pool of 728 workers. Participants allocated to the Treadmill Desk Group found the treadmill desk difficult to use for 45 min twice a day as scheduled. Overall attendance averaged 45%-50% of all possible scheduled sessions. The most frequently reported reasons for missing sessions included work conflict (35%), out of office (30%), and illness/injury/drop-out (20%). Although focus groups indicated consistently positive comments about treadmill desks, an apparent challenge was fitting a rigid schedule of shared use to an equally rigid and demanding work schedule punctuated with numerous tasks and obligations that could not easily be interrupted. Regardless, we documented that sedentary office workers average ∼43 min of light-intensity (∼2 METs) treadmill walking daily in response to a scheduled, facilitated, and shared access workplace intervention. Workstation alternatives that combine computer-based work with light-intensity physical activity are a potential solution to health problems associated with excessive sedentary behavior; however, there are numerous administrative, capital, and human resource challenges confronting employers considering providing treadmill desks to workers in a cost-effective and equitable manner.
NASA Astrophysics Data System (ADS)
Li, Jiaji; Chen, Qian; Zhang, Jialin; Zuo, Chao
2017-10-01
Optical diffraction tomography (ODT) is an effective label-free technique for quantitative refractive index imaging, which enables long-term monitoring of the internal three-dimensional (3D) structures and molecular composition of biological cells with minimal perturbation. However, existing optical tomographic methods generally rely on an interferometric configuration for phase measurement and sophisticated mechanical systems for sample rotation or beam scanning. The measurement is therefore susceptible to phase errors arising from coherent speckle, environmental vibrations, and mechanical error during the data acquisition process. To overcome these limitations, we present a new ODT technique based on non-interferometric phase retrieval and programmable illumination emitted from a light-emitting diode (LED) array. The experimental system is built on a traditional bright-field microscope, with the light source replaced by a programmable LED array, which provides angle-variable quasi-monochromatic illumination with an angular coverage of +/-37 degrees in both x and y directions (corresponding to an illumination numerical aperture of ~0.6). The transport of intensity equation (TIE) is utilized to recover the phase at different illumination angles, and the refractive index distribution is reconstructed within the ODT framework under the first Rytov approximation. The missing-cone problem in ODT is addressed by using an iterative non-negative constraint algorithm, and the misalignment of the LED array is further numerically corrected to improve the accuracy of refractive index quantification. Experiments on polystyrene beads and thick biological specimens show that the proposed approach allows accurate refractive index reconstruction while greatly reducing the system complexity and environmental sensitivity compared to conventional interferometric ODT approaches.
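The TIE relates the axial variation of intensity to the transverse phase gradient; in its standard form, with k the wavenumber, I the intensity, and φ the phase in the plane transverse to the optic axis z,

    -k\,\frac{\partial I}{\partial z} \;=\; \nabla_{\perp}\cdot\left( I\,\nabla_{\perp}\phi \right),

so measuring the intensity in a few axially displaced planes allows the phase to be recovered by solving an elliptic equation rather than by interferometry.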
NASA Astrophysics Data System (ADS)
Žumer, Slobodan; Čančula, Miha; Čopar, Simon; Ravnik, Miha
2015-10-01
Geometrical constraints and intrinsic chirality in nematic mesophases enable the formation of stable and metastable complex defect structures. Recently, selected knotted and linked disclinations have been formed using laser manipulation of nematic braids entangling colloidal particles in nematic colloids [Tkalec et al., Science 2011; Copar et al., PNAS 2015]. In unwound chiral nematic phases, stable and metastable toron and hopfion defects have been implemented by laser tweezers [Smalyukh et al., Nature Materials 2010; Chen et al., PRL 2013], and in chiral nematic colloids particles dressed by solitonic deformations have been realized [Porenta et al., Sci. Rep. 2014]. Modelling studies based on the numerical minimisation of the phenomenological free energy, supported by the adapted topological theory [Copar and Zumer, PRL 2011; Copar, Phys. Rep. 2014], allow the observed nematic defect structures to be described, and also predict numerous structures in confined blue phases [Fukuda and Zumer, Nature Comms 2011 and PRL 2011] and stable knotted disclinations in cholesteric droplets with homeotropic boundary [Sec et al., Nature Comms 2014]. Coupling the modelling with finite-difference time-domain light-field computation enables understanding of light propagation and light-induced restructuring in these mesophases. The method was recently demonstrated for the description of changes in low-intensity light beams during propagation along disclination lines [Brasselet et al., PRL 2009; Cancula et al., PRE 2014]. At high light intensities, restructuring of the order is also induced [Porenta et al., Soft Matter 2012; Cancula et al., 2015]. These approaches help to uncover the potential of topological structures for beyond-display optical and photonic applications.
Lucke-Wold, Brandon P.; Phillips, Michael; Turner, Ryan C.; Logsdon, Aric F.; Smith, Kelly E.; Huber, Jason D.; Rosen, Charles L.; Regele, Jonathan D.
2016-01-01
Three million concussions occur each year in the United States. The mechanisms linking acute injury to chronic deficits are poorly understood. Mild traumatic brain injury has been described clinically in terms of acute functional deficits, but the underlying histopathologic changes that occur are relatively unknown due to limited high-function imaging modalities. In order to improve our understanding of acute injury mechanisms, appropriately designed preclinical models must be utilized. The clinical relevance of compression wave injury models revolves around the ability to produce consistent histopathologic deficits. Repetitive mild traumatic brain injuries activate similar neuroinflammatory cascades, cell death markers, and increases in amyloid precursor protein in both humans and rodents. Humans, however, infrequently succumb to mild traumatic brain injuries, and therefore the intensity and magnitude of impacts must be inferred. Understanding compression wave properties and mechanical loading could help link the histopathologic deficits seen in rodents to what might be happening in human brains following repetitive concussions. Advances in mathematical and computer modeling can help characterize the wave properties generated by the compression wave model. While this concept of linking the duration and intensity of impact to subsequent histopathologic deficits makes sense, numerical modeling of compression waves has not been performed in this context. In this collaborative interdisciplinary work, numerical simulations were performed to study the creation of compression waves in our experimental model. This work was conducted in conjunction with a repetitive compression wave injury paradigm in rats in order to better understand how the wave generation correlates with validated histopathologic deficits. PMID:27880054
The Russian effort in establishing large atomic and molecular databases
NASA Astrophysics Data System (ADS)
Presnyakov, Leonid P.
1998-07-01
The database activities in Russia have been developed in connection with UV and soft X-ray spectroscopic studies of extraterrestrial and laboratory (magnetically confined and laser-produced) plasmas. Two forms of database production are used: i) a set of computer programs to calculate radiative and collisional data for a general atom or ion, and ii) development of numeric database systems with the data stored in the computer. The first form is preferable for collisional data. At the Lebedev Physical Institute, an appropriate set of codes has been developed. It includes all electronic processes at collision energies from the threshold up to the relativistic limit. The ion-atom (and ion-ion) collisional data are calculated with recently developed methods. The program for calculating level populations and line intensities is used for spectral diagnostics of transparent plasmas. The second form of database production is widely used at the Institute of Physico-Technical Measurements (VNIIFTRI) and the Troitsk Center: the Institute of Spectroscopy and TRINITI. The main results obtained at the centers above are reviewed. Plans for future developments jointly with international collaborations are discussed.
Methods of recording and analysing cough sounds.
Subburaj, S; Parvez, L; Rajagopalan, T G
1996-01-01
Efforts have been directed to evolve a computerized system for acquisition and multi-dimensional analysis of the cough sound. The system consists of a PC-AT486 computer with an ADC board having 12 bit resolution. The audio cough sound is acquired using a sensitive miniature microphone at a sampling rate of 8 kHz in the computer and simultaneously recorded in real time using a digital audio tape recorder which also serves as a back up. Analysis of the cough sound is done in time and frequency domains using the digitized data which provide numerical values for key parameters like cough counts, bouts, their intensity and latency. In addition, the duration of each event and cough patterns provide a unique tool which allows objective evaluation of antitussive and expectorant drugs. Both on-line and off-line checks ensure error-free performance over long periods of time. The entire system has been evaluated for sensitivity, accuracy, precision and reliability. Successful use of this system in clinical studies has established what perhaps is the first integrated approach for the objective evaluation of cough.
Deploying electromagnetic particle-in-cell (EM-PIC) codes on Xeon Phi accelerators boards
NASA Astrophysics Data System (ADS)
Fonseca, Ricardo
2014-10-01
The complexity of the phenomena involved in several relevant plasma physics scenarios, where highly nonlinear and kinetic processes dominate, makes purely theoretical descriptions impossible. Further understanding of these scenarios requires detailed numerical modeling, but fully relativistic particle-in-cell codes such as OSIRIS are computationally intensive. The quest towards Exaflop computer systems has led to the development of HPC systems based on add-on accelerator cards, such as GPGPUs and, more recently, the Xeon Phi accelerators that power the current number 1 system in the world. These cards, also referred to as Intel Many Integrated Core Architecture (MIC), offer peak theoretical performances of >1 TFlop/s for general purpose calculations in a single board, and are receiving significant attention as an attractive alternative to CPUs for plasma modeling. In this work we report on our efforts towards the deployment of an EM-PIC code on a Xeon Phi architecture system. We will focus on the parallelization and vectorization strategies followed, and present a detailed evaluation of code performance in comparison with the CPU code.
New variational bounds on convective transport. II. Computations and implications
NASA Astrophysics Data System (ADS)
Souza, Andre; Tobasco, Ian; Doering, Charles R.
2016-11-01
We study the maximal rate of scalar transport between parallel walls separated by distance h, by an incompressible fluid with scalar diffusion coefficient κ. Given a velocity vector field u with intensity measured by the Péclet number Pe = h^2 <|∇u|^2>^{1/2} / κ (where <·> denotes the space-time average), the challenge is to determine the largest enhancement of wall-to-wall scalar flux over purely diffusive transport, i.e., the Nusselt number Nu. Variational formulations of the problem are studied numerically and optimizing flow fields are computed over a range of Pe. Implications of this optimal wall-to-wall transport problem for the classical problem of Rayleigh-Bénard convection are discussed: the maximal scaling Nu ~ Pe^{2/3} corresponds, via the identity Pe^2 = Ra (Nu - 1), where Ra is the usual Rayleigh number, to Nu ~ Ra^{1/2} as Ra → ∞. Supported in part by National Science Foundation Graduate Research Fellowship DGE-0813964, awards OISE-0967140, PHY-1205219, DMS-1311833, and DMS-1515161, and the John Simon Guggenheim Memorial Foundation.
A semi-analytical study of positive corona discharge in wire-plane electrode configuration
NASA Astrophysics Data System (ADS)
Yanallah, K.; Pontiga, F.; Chen, J. H.
2013-08-01
Wire-to-plane positive corona discharge in air has been studied using an analytical model of two species (electrons and positive ions). The spatial distributions of electric field and charged species are obtained by integrating Gauss's law and the continuity equations of the species along the Laplacian field lines. The experimental values of corona current intensity and applied voltage, together with Warburg's law, have been used to formulate the boundary condition for the electron density on the corona wire. To test the accuracy of the model, the approximate electric field distribution has been compared with the exact numerical solution obtained from a finite element analysis. A parametric study of wire-to-plane corona discharge has then been undertaken using the approximate semi-analytical solutions. Thus, the spatial distributions of electric field and charged particles have been computed for different values of the gas pressure, wire radius and electrode separation. Also, the two-dimensional distribution of ozone density has been obtained using a simplified plasma chemistry model. The approximate semi-analytical solutions can be evaluated in negligible computational time, yet provide precise estimates of the corona discharge variables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adur, Rohan, E-mail: adur@physics.osu.edu; Du, Chunhui; Manuilov, Sergei A.
2015-05-07
The dipole field from a probe magnet can be used to localize a discrete spectrum of standing spin wave modes in a continuous ferromagnetic thin film without lithographic modification to the film. Obtaining the resonance field for a localized mode is not trivial due to the effect of the confined and inhomogeneous magnetization precession. We compare the results of micromagnetic and analytic methods to find the resonance field of localized modes in a ferromagnetic thin film, and investigate the accuracy of these methods by comparing with a numerical minimization technique that assumes Bessel function modes with pinned boundary conditions. We find that the micromagnetic technique, while computationally more intensive, reveals that the true magnetization profiles of localized modes are similar to Bessel functions with gradually decaying dynamic magnetization at the mode edges. We also find that an analytic solution, which is simple to implement and computationally much faster than other methods, accurately describes the resonance field of localized modes when exchange fields are negligible, demonstrating the accessibility of localized mode analysis.
Optimizing spectral CT parameters for material classification tasks
NASA Astrophysics Data System (ADS)
Rigie, D. S.; La Rivière, P. J.
2016-06-01
In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way, with direct implications for radiation dose reduction. In light of this, we adapt Hotelling Observer formalisms typically applied to signal detection tasks to the spectral CT material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POCs) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POCs predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies.
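The Hotelling-observer machinery behind such a metric reduces, for a two-material discrimination task, to a prewhitened separability measure. A minimal sketch (the channelized means and covariance are stand-ins for whatever system and object models are in play):

    import numpy as np

    def hotelling_snr2(mu1, mu2, cov):
        # Hotelling observer SNR^2 = (mu1 - mu2)^T K^{-1} (mu1 - mu2)
        d = mu1 - mu2
        return float(d @ np.linalg.solve(cov, d))

    # e.g., two-channel (dual-kVp) measurements of two candidate materials:
    # hotelling_snr2(np.array([5.0, 3.0]), np.array([4.6, 3.5]),
    #                np.array([[0.04, 0.01], [0.01, 0.09]]))

Because such a figure of merit is cheap to evaluate, it can be recomputed across a dense sweep of tube settings or energy thresholds to trace out the POCs described above.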
Exploiting the chaotic behaviour of atmospheric models with reconfigurable architectures
NASA Astrophysics Data System (ADS)
Russell, Francis P.; Düben, Peter D.; Niu, Xinyu; Luk, Wayne; Palmer, T. N.
2017-12-01
Reconfigurable architectures are becoming mainstream: Amazon, Microsoft and IBM are supporting such architectures in their data centres. The computationally intensive nature of atmospheric modelling is an attractive target for hardware acceleration using reconfigurable computing. Performance of hardware designs can be improved through the use of reduced-precision arithmetic, but maintaining appropriate accuracy is essential. We explore reduced-precision optimisation for simulating chaotic systems, targeting atmospheric modelling, in which even minor changes in arithmetic behaviour will cause simulations to diverge quickly. The possibility of equally valid simulations having differing outcomes means that standard techniques for comparing numerical accuracy are inappropriate. We use the Hellinger distance to compare statistical behaviour between reduced-precision CPU implementations to guide reconfigurable designs of a chaotic system, then analyse accuracy, performance and power efficiency of the resulting implementations. Our results show that with only a limited loss in accuracy corresponding to less than 10% uncertainty in input parameters, the throughput and energy efficiency of a single-precision chaotic system implemented on a Xilinx Virtex-6 SX475T Field Programmable Gate Array (FPGA) can be more than doubled.
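For reference, the Hellinger distance between discrete distributions p and q is H(p, q) = (1/sqrt(2)) ||sqrt(p) - sqrt(q)||_2, which is 0 for identical distributions and 1 for distributions with disjoint support. A minimal sketch over normalized histograms of model output:

    import numpy as np

    def hellinger(p, q):
        # Hellinger distance between two histograms (normalized internally)
        p = np.asarray(p, dtype=float); q = np.asarray(q, dtype=float)
        p = p / p.sum(); q = q / q.sum()
        return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

Comparing histograms rather than trajectories is what sidesteps the divergence problem: two chaotic runs never agree pointwise, but equally valid ones should agree statistically.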