Zhang, Yong-Tao; Shi, Jing; Shu, Chi-Wang; Zhou, Ye
2003-10-01
A quantitative study is carried out in this paper to investigate the size of numerical viscosities and the resolution power of high-order weighted essentially nonoscillatory (WENO) schemes for solving one- and two-dimensional Navier-Stokes equations for compressible gas dynamics with high Reynolds numbers. A one-dimensional shock tube problem, a one-dimensional example with parameters motivated by supernova and laser experiments, and a two-dimensional Rayleigh-Taylor instability problem are used as numerical test problems. For the two-dimensional Rayleigh-Taylor instability problem, or similar problems with small-scale structures, the details of the small structures are determined by the physical viscosity (and therefore the Reynolds number) in the Navier-Stokes equations. Thus, to obtain faithful resolution of these small-scale structures, the numerical viscosity inherent in the scheme must be small enough that the physical viscosity dominates. A careful mesh refinement study is performed to capture the threshold mesh for full resolution, for specific Reynolds numbers, when WENO schemes of different orders of accuracy are used. It is demonstrated that high-order WENO schemes are more CPU-time efficient in reaching the same resolution, for both the one-dimensional and two-dimensional test problems.
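The reconstruction step at the core of such schemes can be illustrated with a small sketch. The following is a generic fifth-order WENO reconstruction with the classical Jiang-Shu smoothness indicators and linear weights (1/10, 6/10, 3/10), not the authors' production implementation:

```python
import numpy as np

def weno5_reconstruct(v1, v2, v3, v4, v5, eps=1e-6):
    """Fifth-order WENO reconstruction of the interface value at i+1/2
    from five cell averages v1..v5 (classical Jiang-Shu formulation)."""
    # Smoothness indicators for the three candidate stencils
    b0 = 13/12*(v1 - 2*v2 + v3)**2 + 1/4*(v1 - 4*v2 + 3*v3)**2
    b1 = 13/12*(v2 - 2*v3 + v4)**2 + 1/4*(v2 - v4)**2
    b2 = 13/12*(v3 - 2*v4 + v5)**2 + 1/4*(3*v3 - 4*v4 + v5)**2
    # Nonlinear weights built from the linear weights (0.1, 0.6, 0.3)
    a0 = 0.1/(eps + b0)**2
    a1 = 0.6/(eps + b1)**2
    a2 = 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    # Third-order candidate reconstructions on each stencil
    p0 = (2*v1 - 7*v2 + 11*v3)/6
    p1 = (-v2 + 5*v3 + 2*v4)/6
    p2 = (2*v3 + 5*v4 - v5)/6
    return (a0*p0 + a1*p1 + a2*p2)/s
```

On smooth data all three candidate stencils agree and the convex combination reproduces the high-order value; near a discontinuity the weights suppress the stencils with large smoothness indicators, which is the mechanism that keeps numerical viscosity localized.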
NASA Astrophysics Data System (ADS)
Parker, Robert L.; Booker, John R.
1996-12-01
The properties of the log of the admittance in the complex frequency plane lead to an integral representation for one-dimensional magnetotelluric (MT) apparent resistivity and impedance phase similar to that found previously for complex admittance. The inverse problem of finding a one-dimensional model for MT data can then be solved using the same techniques as for complex admittance, with similar results. For instance, the one-dimensional conductivity model that minimizes the χ2 misfit statistic for noisy apparent resistivity and phase is a series of delta functions. One of the most important applications of the delta function solution to the inverse problem for complex admittance has been answering the question of whether or not a given set of measurements is consistent with the modeling assumption of one-dimensionality. The new solution allows this test to be performed directly on standard MT data. Recently, it has been shown that induction data must pass the same one-dimensional consistency test if they correspond to the polarization in which the electric field is perpendicular to the strike of two-dimensional structure. This greatly magnifies the utility of the consistency test. The new solution also allows one to compute the upper and lower bounds permitted on phase or apparent resistivity at any frequency given a collection of MT data. Applications include testing the mutual consistency of apparent resistivity and phase data and placing bounds on missing phase or resistivity data. Examples presented demonstrate detection and correction of equipment and processing problems and verification of compatibility with two-dimensional B-polarization for MT data after impedance tensor decomposition and for continuous electromagnetic profiling data.
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2009-01-01
The quality of simulated hypersonic stagnation region heating on tetrahedral meshes is investigated by using a three-dimensional, upwind reconstruction algorithm for the inviscid flux vector. Two test problems are investigated: hypersonic flow over a three-dimensional cylinder with special attention to the uniformity of the solution in the spanwise direction and hypersonic flow over a three-dimensional sphere. The tetrahedral cells used in the simulation are derived from a structured grid where cell faces are bisected across the diagonal resulting in a consistent pattern of diagonals running in a biased direction across the otherwise symmetric domain. This grid is known to accentuate problems in both shock capturing and stagnation region heating encountered with conventional, quasi-one-dimensional inviscid flux reconstruction algorithms. Therefore the test problem provides a sensitive test for algorithmic effects on heating. This investigation is believed to be unique in its focus on three-dimensional, rotated upwind schemes for the simulation of hypersonic heating on tetrahedral grids. This study attempts to fill the void left by the inability of conventional (quasi-one-dimensional) approaches to accurately simulate heating in a tetrahedral grid system. Results show significant improvement in spanwise uniformity of heating with some penalty of ringing at the captured shock. Issues with accuracy near the peak shear location are identified and require further study.
A fast numerical method for the valuation of American lookback put options
NASA Astrophysics Data System (ADS)
Song, Haiming; Zhang, Qi; Zhang, Ran
2015-10-01
A fast and efficient numerical method is proposed and analyzed for the valuation of American lookback options. The American lookback option pricing problem is essentially a two-dimensional unbounded nonlinear parabolic problem. We reformulate it as a two-dimensional parabolic linear complementarity problem (LCP) on an unbounded domain. A numeraire transformation and a domain truncation technique are employed to convert the two-dimensional unbounded LCP into a one-dimensional bounded one. Furthermore, the variational inequality (VI) form corresponding to the one-dimensional bounded LCP is then derived. The resulting bounded VI is discretized by a finite element method. Moreover, the stability of the semi-discrete solution and the symmetric positive definiteness of the fully discrete matrix are established for the bounded VI. The discretized VI is solved by a projection and contraction method. Numerical experiments are conducted to test the performance of the proposed method.
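A discretized LCP of the kind described above can be solved by several projection-type iterations. As a minimal illustration, the sketch below uses projected SOR (a simpler relative of the projection and contraction method named in the abstract) on a small symmetric positive definite system; the matrix and vector are hypothetical, not data from the paper:

```python
import numpy as np

def projected_sor(M, q, omega=1.0, tol=1e-10, max_iter=10_000):
    """Solve the LCP  x >= 0,  Mx + q >= 0,  x . (Mx + q) = 0
    by projected SOR; converges for symmetric positive definite M."""
    x = np.zeros(len(q))
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(q)):
            r = q[i] + M[i] @ x            # residual of row i at current iterate
            x[i] = max(0.0, x[i] - omega * r / M[i, i])  # relax, then project
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x
```

When the unconstrained solution is nonnegative the iteration recovers it exactly; otherwise it returns the complementary solution with some components pinned at zero, which is exactly the early-exercise structure an option LCP encodes.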
Simulation and Analysis of Converging Shock Wave Test Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramsey, Scott D.; Shashkov, Mikhail J.
2012-06-21
Results and analysis pertaining to the simulation of the Guderley converging shock wave test problem (and associated code verification hydrodynamics test problems involving converging shock waves) in the LANL ASC radiation-hydrodynamics code xRAGE are presented. One-dimensional (1D) spherical and two-dimensional (2D) axi-symmetric geometric setups are utilized and evaluated in this study, as is an instantiation of the xRAGE adaptive mesh refinement capability. For the 2D simulations, a 'Surrogate Guderley' test problem is developed and used to obviate subtleties inherent to the true Guderley solution's initialization on a square grid, while still maintaining a high degree of fidelity to the original problem, and minimally straining the general credibility of associated analysis and conclusions.
Benchmark problems in computational aeroacoustics
NASA Technical Reports Server (NTRS)
Porter-Locklear, Freda
1994-01-01
A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with a high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations as applied to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of codes. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given the variable and flux vectors. We experienced a minor problem with inflow and outflow boundary conditions. Next, we solved the quasi-one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10^-6. The small-amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is the time required for a sound wave to traverse the nozzle from one end to the other).
Verification and benchmark testing of the NUFT computer code
NASA Astrophysics Data System (ADS)
Lee, K. H.; Nitao, J. J.; Kulshrestha, A.
1993-10-01
This interim report presents results of work completed in the ongoing verification and benchmark testing of the NUFT (Nonisothermal Unsaturated-saturated Flow and Transport) computer code. NUFT is a suite of multiphase, multicomponent models for numerical solution of thermal and isothermal flow and transport in porous media, with application to subsurface contaminant transport problems. The code simulates the coupled transport of heat, fluids, and chemical components, including volatile organic compounds. Grid systems may be Cartesian or cylindrical, with one-, two-, or fully three-dimensional configurations possible. In this initial phase of testing, the NUFT code was used to solve seven one-dimensional unsaturated flow and heat transfer problems. Three verification and four benchmarking problems were solved. In the verification testing, excellent agreement was observed between NUFT results and the analytical or quasianalytical solutions. In the benchmark testing, results of the code intercomparison were very satisfactory. From these testing results, it is concluded that the NUFT code is ready for application to field and laboratory problems similar to those addressed here. Multidimensional problems, including those dealing with chemical transport, will be addressed in a subsequent report.
Fast Implicit Methods For Elliptic Moving Interface Problems
2015-12-11
analyzed, and tested for the Fourier transform of piecewise polynomials given on d-dimensional simplices in D-dimensional Euclidean space. These transforms...evaluation, and one to three orders of magnitude slower than the classical uniform Fast Fourier Transform. Second, bilinear quadratures ---which...a fast algorithm was derived, analyzed, and tested for the Fourier transform of piecewise polynomials given on d-dimensional simplices in D
Students' Conceptual Difficulties in Quantum Mechanics: Potential Well Problems
ERIC Educational Resources Information Center
Ozcan, Ozgur; Didis, Nilufer; Tasar, Mehmet Fatih
2009-01-01
In this study, students' conceptual difficulties about some basic concepts in quantum mechanics like one-dimensional potential well problems and probability density of tunneling particles were identified. For this aim, a multiple choice instrument named Quantum Mechanics Conceptual Test has been developed by one of the researchers of this study…
One-dimensional Gromov minimal filling problem
NASA Astrophysics Data System (ADS)
Ivanov, Alexandr O.; Tuzhilin, Alexey A.
2012-05-01
The paper is devoted to a new branch in the theory of one-dimensional variational problems with branching extremals, the investigation of one-dimensional minimal fillings introduced by the authors. On the one hand, this problem is a one-dimensional version of a generalization of Gromov's minimal fillings problem to the case of stratified manifolds. On the other hand, this problem is interesting in itself and also can be considered as a generalization of another classical problem, the Steiner problem on the construction of a shortest network connecting a given set of terminals. Besides the statement of the problem, we discuss several properties of the minimal fillings and state several conjectures. Bibliography: 38 titles.
Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction
NASA Technical Reports Server (NTRS)
Oliver, A Brandon; Amar, Adam J.
2016-01-01
Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of specifying boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation nuances will be discussed, and alternative hybrid methods that are permitted by the implementation will be described. Results will be presented for a number of one-dimensional and multi-dimensional problems.
2-dimensional implicit hydrodynamics on adaptive grids
NASA Astrophysics Data System (ADS)
Stökl, A.; Dorfi, E. A.
2007-12-01
We present a numerical scheme for two-dimensional hydrodynamics computations using a 2D adaptive grid together with an implicit discretization. The combination of these techniques has offered favorable numerical properties applicable to a variety of one-dimensional astrophysical problems, which motivated us to generalize this approach for two-dimensional applications. Due to the different topological nature of 2D grids compared to 1D problems, grid adaptivity has to avoid severe grid distortions, which necessitates additional smoothing parameters to be included in the formulation of a 2D adaptive grid. The concept of adaptivity is described in detail, and several test computations demonstrate the effectiveness of smoothing. The coupled solution of this grid equation together with the equations of hydrodynamics is illustrated by computation of a 2D shock tube problem.
Assessment of numerical techniques for unsteady flow calculations
NASA Technical Reports Server (NTRS)
Hsieh, Kwang-Chung
1989-01-01
The characteristics of unsteady flow motions have long been a serious concern in the study of various fluid dynamic and combustion problems. With the advancement of computer resources, numerical approaches to these problems appear to be feasible. The objective of this paper is to assess the accuracy of several numerical schemes for unsteady flow calculations. In the present study, Fourier error analysis is performed for various numerical schemes based on a two-dimensional wave equation. Four methods selected from the error analysis are then adopted for further assessment. Model problems include unsteady quasi-one-dimensional inviscid flows, two-dimensional wave propagations, and unsteady two-dimensional inviscid flows. According to the comparison between numerical and exact solutions, although the second-order upwind scheme captures the unsteady flow and wave motions quite well, it is more dissipative than the sixth-order central difference scheme. Among the numerical approaches tested in this paper, the best performer is the Runge-Kutta method for time integration combined with sixth-order central differences for spatial discretization.
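The Fourier (von Neumann) error analysis mentioned above compares each scheme's modified wavenumber against the exact one. The sketch below does this for the two schemes the abstract contrasts; the stencil coefficients are the standard ones, assumed rather than taken from the paper:

```python
import numpy as np

def modified_wavenumber_upwind2(theta):
    """Second-order upwind  f'_i ~ (3f_i - 4f_{i-1} + f_{i-2}) / (2h):
    complex effective wavenumber k*h for the Fourier mode exp(i k x).
    A negative imaginary part signals numerical dissipation."""
    return (3 - 4*np.exp(-1j*theta) + np.exp(-2j*theta)) / 2j

def modified_wavenumber_central6(theta):
    """Sixth-order central difference
    f'_i ~ (45(f_{i+1}-f_{i-1}) - 9(f_{i+2}-f_{i-2}) + (f_{i+3}-f_{i-3})) / (60h):
    purely real k*h, i.e. dispersive error only, no dissipation."""
    return (45*np.sin(theta) - 9*np.sin(2*theta) + np.sin(3*theta)) / 30
```

Plotting both against the exact line k*h = theta reproduces the abstract's conclusion: the upwind scheme acquires an imaginary (dissipative) component at moderate wavenumbers, while the central scheme stays dissipation-free and tracks the exact wavenumber much further into the resolved range.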
One-Dimensional Modelling of Internal Ballistics
NASA Astrophysics Data System (ADS)
Monreal-González, G.; Otón-Martínez, R. A.; Velasco, F. J. S.; García-Cascáles, J. R.; Ramírez-Fernández, F. J.
2017-10-01
A one-dimensional model is introduced in this paper for problems of internal ballistics involving solid propellant combustion. First, the work presents the physical approach and the equations adopted. Closure relationships accounting for the physical phenomena taking place during combustion (interfacial friction, interfacial heat transfer, combustion) are discussed in depth. Secondly, the numerical method proposed is presented. Finally, numerical results provided by this code (UXGun) are compared with results of experimental tests and with the outcome of a well-known zero-dimensional code. The model provides successful results in firing tests of artillery guns, predicting the maximum chamber pressure and muzzle velocity with good accuracy, which highlights its capabilities as a prediction/design tool for internal ballistics.
Phase-space finite elements in a least-squares solution of the transport equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drumm, C.; Fan, W.; Pautz, S.
2013-07-01
The linear Boltzmann transport equation is solved using a least-squares finite element approximation in the space, angular and energy phase-space variables. The method is applied to both neutral particle transport and also to charged particle transport in the presence of an electric field, where the angular and energy derivative terms are handled with the energy/angular finite elements approximation, in a manner analogous to the way the spatial streaming term is handled. For multi-dimensional problems, a novel approach is used for the angular finite elements: mapping the surface of a unit sphere to a two-dimensional planar region and using a meshing tool to generate a mesh. In this manner, much of the spatial finite-elements machinery can be easily adapted to handle the angular variable. The energy variable and the angular variable for one-dimensional problems make use of edge/beam elements, also building upon the spatial finite elements capabilities. The methods described here can make use of either continuous or discontinuous finite elements in space, angle and/or energy, with the use of continuous finite elements resulting in a smaller problem size and the use of discontinuous finite elements resulting in more accurate solutions for certain types of problems. The work described in this paper makes use of continuous finite elements, so that the resulting linear system is symmetric positive definite and can be solved with a highly efficient parallel preconditioned conjugate gradients algorithm. The phase-space finite elements capability has been built into the Sceptre code and applied to several test problems, including a simple one-dimensional problem with an analytic solution available, a two-dimensional problem with an isolated source term, showing how the method essentially eliminates ray effects encountered with discrete ordinates, and a simple one-dimensional charged-particle transport problem in the presence of an electric field. (authors)
A dimensionally split Cartesian cut cell method for hyperbolic conservation laws
NASA Astrophysics Data System (ADS)
Gokhale, Nandan; Nikiforakis, Nikos; Klein, Rupert
2018-07-01
We present a dimensionally split method for solving hyperbolic conservation laws on Cartesian cut cell meshes. The approach combines local geometric and wave speed information to determine a novel stabilised cut cell flux, and we provide a full description of its three-dimensional implementation in the dimensionally split framework of Klein et al. [1]. The convergence and stability of the method are proved for the one-dimensional linear advection equation, while its multi-dimensional numerical performance is investigated through the computation of solutions to a number of test problems for the linear advection and Euler equations. When compared to the cut cell flux of Klein et al., it was found that the new flux alleviates the problem of oscillatory boundary solutions produced by the former at higher Courant numbers, and also enables the computation of more accurate solutions near stagnation points. Being dimensionally split, the method is simple to implement and extends readily to multiple dimensions.
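The dimensionally split structure referred to above means a multi-dimensional update is composed of successive one-dimensional sweeps. The sketch below shows the idea on a periodic grid with a first-order upwind sweep, not the authors' stabilised cut cell flux:

```python
import numpy as np

def upwind_sweep_1d(u, c, axis):
    """One first-order upwind update along a single axis,
    for positive advection speed with Courant number c."""
    return u - c * (u - np.roll(u, 1, axis=axis))

def split_advect_2d(u, cx, cy, steps):
    """Dimensionally split 2D advection: alternate x sweeps and y sweeps
    of the 1D scheme. Each sweep reuses the 1D machinery unchanged,
    which is the simplicity the dimensionally split approach buys."""
    for _ in range(steps):
        u = upwind_sweep_1d(u, cx, axis=0)
        u = upwind_sweep_1d(u, cy, axis=1)
    return u
```

Because each sweep is conservative in its own direction, the composed update conserves the total of `u` on a periodic domain, and extending to three dimensions is just one more sweep per step.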
Three dimensional elements with Lagrange multipliers for the modified couple stress theory
NASA Astrophysics Data System (ADS)
Kwon, Young-Rok; Lee, Byung-Chai
2018-07-01
Three dimensional mixed elements for the modified couple stress theory are proposed. The C1 continuity for the displacement field, which is required because of the curvature term in the variational form of the theory, is satisfied weakly by introducing a supplementary rotation as an independent variable and constraining the relation between the rotation and the displacement with a Lagrange multiplier vector. An additional constraint on the deviatoric curvature is also considered for three dimensional problems. Weak forms with one constraint and with two constraints are derived, and four elements satisfying the convergence criteria are developed by applying different approximations to each field of independent variables. The elements pass a patch test for three dimensional problems. Numerical examples show that the additional constraint could be considered essential for the three dimensional elements, and one of the elements is recommended for practical applications via a comparison of the performances of the elements. In addition, all the proposed elements can represent the size effect well.
Discontinuous dual-primal mixed finite elements for elliptic problems
NASA Technical Reports Server (NTRS)
Bottasso, Carlo L.; Micheletti, Stefano; Sacco, Riccardo
2000-01-01
We propose a novel discontinuous mixed finite element formulation for the solution of second-order elliptic problems. Fully discontinuous piecewise polynomial finite element spaces are used for the trial and test functions. The discontinuous nature of the test functions at the element interfaces allows the introduction of new boundary unknowns that, on the one hand, enforce the weak continuity of the trial functions and, on the other, avoid the need to define a priori algorithmic fluxes as in standard discontinuous Galerkin methods. Static condensation is performed at the element level, leading to a solution procedure based on the interface unknowns alone. The resulting family of discontinuous dual-primal mixed finite element methods is presented in the one- and two-dimensional cases. In the one-dimensional case, we show the equivalence of the method with implicit Runge-Kutta schemes of the collocation type exhibiting optimal behavior. Numerical experiments in one and two dimensions demonstrate the order of accuracy of the new method, confirming the results of the analysis.
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.
2002-01-01
A multifunctional interface method with capabilities for variable-fidelity modeling and multiple method analysis is presented. The methodology provides an effective capability by which domains with diverse idealizations can be modeled independently to exploit the advantages of one approach over another. The multifunctional method is used to couple independently discretized subdomains, and it is used to couple the finite element and the finite difference methods. The method is based on a weighted residual variational method and is presented for two-dimensional scalar-field problems. A verification test problem and a benchmark application are presented, and the computational implications are discussed.
Multigrid one shot methods for optimal control problems: Infinite dimensional control
NASA Technical Reports Server (NTRS)
Arian, Eyal; Taasan, Shlomo
1994-01-01
The multigrid one shot method for optimal control problems, governed by elliptic systems, is introduced for the infinite dimensional control space. In this case, the control variable is a function whose discrete representation involves an increasing number of variables with grid refinement. The minimization algorithm uses Lagrange multipliers to calculate sensitivity gradients. A preconditioned gradient descent algorithm is accelerated by a set of coarse grids. It optimizes for different scales in the representation of the control variable on different discretization levels. An analysis which reduces the problem to the boundary is introduced. It is used to approximate the two-level asymptotic convergence rate, to determine the amplitude of the minimization steps, and to choose a high pass filter to be used when necessary. The effectiveness of the method is demonstrated on a series of test problems. The new method enables the solution of optimal control problems at the cost of solving the corresponding analysis problems just a few times.
Focusing on the golden ball metaheuristic: an extended study on a wider set of problems.
Osaba, E; Diaz, F; Carballedo, R; Onieva, E; Perallos, A
2014-01-01
Nowadays, the development of new metaheuristics for solving optimization problems is a topic of interest in the scientific community, and a large number of techniques of this kind can be found in the literature. Many have been proposed recently, such as the artificial bee colony and the imperialist competitive algorithm. This paper is focused on one recently published technique, the one called Golden Ball (GB). The GB is a multiple-population metaheuristic based on soccer concepts. Although it was designed to solve combinatorial optimization problems, until now it has only been tested on two simple routing problems: the traveling salesman problem and the capacitated vehicle routing problem. In this paper, the GB is applied to four different combinatorial optimization problems. Two of them are routing problems that are more complex than the previously used ones: the asymmetric traveling salesman problem and the vehicle routing problem with backhauls. Additionally, one constraint satisfaction problem (the n-queens problem) and one combinatorial design problem (the one-dimensional bin packing problem) have also been used. The outcomes obtained by GB are compared with those obtained by two different genetic algorithms and two distributed genetic algorithms. Additionally, two statistical tests are conducted to compare these results.
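For the one-dimensional bin packing problem named above, a common deterministic baseline against which metaheuristics like GB are judged is first-fit decreasing. The sketch below is that classical heuristic, not the Golden Ball algorithm itself:

```python
def first_fit_decreasing(items, capacity):
    """First-fit-decreasing heuristic for one-dimensional bin packing:
    sort items largest first, place each in the first bin with room,
    opening a new bin only when none fits. Returns the bins' contents."""
    free = []      # remaining capacity of each open bin
    packing = []   # items placed in each bin
    for item in sorted(items, reverse=True):
        for i, room in enumerate(free):
            if item <= room:
                free[i] -= item
                packing[i].append(item)
                break
        else:                       # no existing bin fits: open a new one
            free.append(capacity - item)
            packing.append([item])
    return packing
```

First-fit decreasing is guaranteed to use at most roughly 11/9 of the optimal number of bins, which makes it a reasonable yardstick for judging whether a population-based search is earning its extra cost.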
Phase-shifting point diffraction interferometer mask designs
Goldberg, Kenneth Alan
2001-01-01
In a phase-shifting point diffraction interferometer, different image-plane mask designs can improve the operation of the interferometer. By keeping the test beam window of the mask small compared to the separation distance between the beams, the problem of energy from the reference beam leaking through the test beam window is reduced. By rotating the grating and mask 45°, only a single one-dimensional translation stage is required for phase-shifting. By keeping two reference pinholes in the same orientation about the test beam window, only a single grating orientation, and thus a single one-dimensional translation stage, is required. The use of a two-dimensional grating allows for a multiplicity of pinholes to be used about the pattern of diffracted orders of the grating at the mask. Orientation marks on the mask can be used to orient the device and indicate the position of the reference pinholes.
Finite element solution of lubrication problems
NASA Technical Reports Server (NTRS)
Reddi, M. M.
1971-01-01
A variational formulation of the transient lubrication problem is presented, and the corresponding finite element equations are derived for three- and six-point triangles and for four- and eight-point quadrilaterals. Test solutions for a one-dimensional slider bearing, used in validating the computer program, are given. The utility of the method is demonstrated by a solution of the shrouded step bearing.
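The assembly of finite element equations of the kind derived above can be illustrated on the simplest 1D case. The sketch below assembles linear elements for a plain Poisson problem with homogeneous Dirichlet ends; it is a generic textbook example, not the paper's lubrication (Reynolds equation) formulation:

```python
import numpy as np

def solve_poisson_fem(n, f):
    """Linear finite elements for -u'' = f on (0,1) with u(0)=u(1)=0,
    on a uniform mesh of n elements. Returns node coordinates and u."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    K = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for e in range(n):                            # element-by-element assembly
        ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h   # element stiffness
        xm = 0.5 * (x[e] + x[e + 1])
        fe = f(xm) * h / 2.0 * np.ones(2)         # midpoint-rule load vector
        K[e:e + 2, e:e + 2] += ke
        b[e:e + 2] += fe
    K[0, :] = K[-1, :] = 0.0                      # impose Dirichlet conditions
    K[0, 0] = K[-1, -1] = 1.0
    b[0] = b[-1] = 0.0
    return x, np.linalg.solve(K, b)
```

For a constant load the nodal values coincide with the exact solution, a well-known property of 1D linear elements, which makes this a convenient validation case analogous to the paper's slider bearing test.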
NASA Astrophysics Data System (ADS)
Heuzé, Thomas
2017-10-01
We present in this work two finite volume methods for the simulation of unidimensional impact problems, both for bars and plane waves, on elastic-plastic solid media within the small strain framework. First, an extension of Lax-Wendroff to elastic-plastic constitutive models with linear and nonlinear hardenings is presented. Second, a high-order TVD method based on flux-difference splitting [1] and the Superbee flux limiter [2] is coupled with an approximate elastic-plastic Riemann solver for nonlinear hardenings, and follows that of Fogarty [3] for linear ones. Thermomechanical coupling is accounted for through dissipation heating and thermal softening, and adiabatic conditions are assumed. This paper essentially focuses on one-dimensional problems, since analytical solutions exist or can easily be developed. Accordingly, these two numerical methods are compared to analytical solutions and to the explicit finite element method on test cases involving discontinuous and continuous solutions. This allows their respective performance during the loading, unloading and reloading stages to be studied in more detail. Particular emphasis is also paid to the accuracy of the computed plastic strains, some differences being found according to the numerical method used. A Lax-Wendroff two-dimensional discretization of a one-dimensional problem is also appended at the end to demonstrate the extensibility of such numerical schemes to multidimensional problems.
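The Lax-Wendroff scheme that the paper extends to elastic-plastic media can be sketched in its simplest scalar form. The following applies it to linear advection on a periodic grid; the elastic-plastic closure and Riemann solver of the paper are not reproduced here:

```python
import numpy as np

def lax_wendroff_advect(u0, c, steps):
    """Lax-Wendroff update for u_t + a u_x = 0 on a periodic grid,
    with Courant number c = a*dt/dx. Second-order accurate in space
    and time; stable for |c| <= 1."""
    u = u0.copy()
    for _ in range(steps):
        up = np.roll(u, -1)   # u_{i+1}
        um = np.roll(u, 1)    # u_{i-1}
        u = u - 0.5*c*(up - um) + 0.5*c**2*(up - 2*u + um)
    return u
```

Advecting a smooth profile once around the periodic domain and comparing with the initial data exhibits the scheme's second-order accuracy; on discontinuous data it instead produces the oscillations that motivate the TVD/Superbee alternative discussed in the paper.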
An Advanced One-Dimensional Finite Element Model for Incompressible Thermally Expandable Flow
Hu, Rui
2017-03-27
Here, this paper provides an overview of a new one-dimensional finite element flow model for incompressible but thermally expandable flow. The flow model was developed for use in system analysis tools for whole-plant safety analysis of sodium fast reactors. Although the pressure-based formulation was implemented, the use of integral equations in the conservative form ensured the conservation laws of the fluid. A stabilization scheme based on streamline-upwind/Petrov-Galerkin and pressure-stabilizing/Petrov-Galerkin formulations is also introduced. The flow model and its implementation have been verified by many test problems, including density wave propagation, steep gradient problems, discharging between tanks, and the conjugate heat transfer in a heat exchanger.
Multi-Dimensional, Non-Pyrolyzing Ablation Test Problems
NASA Technical Reports Server (NTRS)
Risch, Tim; Kostyk, Chris
2016-01-01
Non-pyrolyzing carbonaceous materials represent a class of candidate materials for hypersonic vehicle components, providing both structural and thermal protection system capabilities. Two problems relevant to this technology are presented. The first considers the one-dimensional ablation of a carbon material subject to convective heating. The second considers two-dimensional conduction in a rectangular block subject to radiative heating. Surface thermochemistry for both problems includes finite-rate surface kinetics at low temperatures, diffusion-limited ablation at intermediate temperatures, and vaporization at high temperatures. The first problem requires the solution of both the steady-state thermal profile with respect to the ablating surface and the transient thermal history for a one-dimensional ablating planar slab with temperature-dependent material properties. The slab front face is convectively heated and also reradiates to a room-temperature environment. The back face is adiabatic. The steady-state temperature profile and steady-state mass loss rate should be predicted. Time-dependent front- and back-face temperatures, surface recession, and recession rate, along with the final temperature profile, should be predicted for the time-dependent solution. The second problem requires the solution for the transient temperature history of an ablating, two-dimensional rectangular solid with anisotropic, temperature-dependent thermal properties. The front face is radiatively heated, convectively cooled, and also reradiates to a room-temperature environment. The back face and sidewalls are adiabatic. The solution should include the following items: the final surface recession profile; the time-dependent temperature history of both the front face and back face, at both the centerline and sidewall; and the time-dependent surface recession and recession rate on the front face, at both the centerline and sidewall.
The results of the problems from all submitters will be collected, summarized, and presented at a later conference.
Numerical applications of the advective-diffusive codes for the inner magnetosphere
NASA Astrophysics Data System (ADS)
Aseev, N. A.; Shprits, Y. Y.; Drozdov, A. Y.; Kellerman, A. C.
2016-11-01
In this study we present analytical solutions for convection and diffusion equations: the one-dimensional convection equation, the two-dimensional convection problem, and the one- and two-dimensional diffusion equations. Using the obtained analytical solutions, we test the four-dimensional Versatile Electron Radiation Belt code (the VERB-4D code), which solves the modified Fokker-Planck equation with additional convection terms. The ninth-order upwind numerical scheme for the one-dimensional convection equation shows much more accurate results than those obtained with the third-order scheme. The universal limiter eliminates unphysical oscillations generated by high-order linear upwind schemes. Decreasing the space step leads to convergence of the numerical solution of the two-dimensional diffusion equation with mixed terms to the analytical solution. We compare the results of the third- and ninth-order schemes applied to magnetospheric convection modeling. The results show significant differences in electron fluxes near geostationary orbit when different numerical schemes are used.
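The benchmarking strategy above, comparing an upwind scheme against the analytical solution u(x, t) = u0(x - a t), can be sketched with the simplest (first-order) member of the upwind family. This is an illustrative toy, not the VERB-4D schemes:

```python
import numpy as np

# First-order upwind sketch for u_t + a u_x = 0 with a > 0.
# The analytical solution is u(x, t) = u0(x - a t); all parameters invented.
def upwind_step(u, a, dt, dx):
    return u - a * dt / dx * (u - np.roll(u, 1))

n = 400
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)   # Gaussian pulse centered at x = 0.3
u = u0.copy()
a, dx = 1.0, 1.0 / n
dt = 0.5 * dx / a                      # Courant number 0.5
for _ in range(160):                   # advect by a * dt * 160 = 0.2
    u = upwind_step(u, a, dt, dx)
exact = np.exp(-200.0 * (x - 0.5) ** 2)
l1_err = np.mean(np.abs(u - exact))    # dominated by numerical diffusion
```

Higher-order upwind variants reduce the numerical diffusion visible here, at the cost of oscillations that a limiter must then suppress.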
Comment on "Calculations for the one-dimensional soft Coulomb problem and the hard Coulomb limit".
Carrillo-Bernal, M A; Núñez-Yépez, H N; Salas-Brito, A L; Solis, Didier A
2015-02-01
In the referred paper, the authors use a numerical method for solving ordinary differential equations and a softened Coulomb potential -1/√(x² + β²) to study the one-dimensional Coulomb problem by letting the parameter β approach zero. We note that even though their numerical findings in the soft-potential scenario are correct, their conclusions do not extend to the one-dimensional Coulomb problem (β = 0). Their claims regarding the possible existence of an even ground state with energy -∞ with a Dirac-δ eigenfunction and of well-defined parity eigenfunctions in the one-dimensional hydrogen atom are questioned.
CAFE: A New Relativistic MHD Code
NASA Astrophysics Data System (ADS)
Lora-Clavijo, F. D.; Cruz-Osorio, A.; Guzmán, F. S.
2015-06-01
We introduce CAFE, a new independent code designed to solve the equations of relativistic ideal magnetohydrodynamics (RMHD) in three dimensions. We present the standard tests for an RMHD code and for the relativistic hydrodynamics regime because we have not reported them before. The tests include the one-dimensional Riemann problems related to blast waves, head-on collisions of streams, and states with transverse velocities, with and without magnetic field, which is aligned or transverse, constant or discontinuous across the initial discontinuity. Among the two-dimensional (2D) and 3D tests without magnetic field, we include the 2D Riemann problem, a one-dimensional shock tube along a diagonal, the high-speed Emery wind tunnel, the Kelvin-Helmholtz (KH) instability, a set of jets, and a 3D spherical blast wave, whereas in the presence of a magnetic field we show the magnetic rotor, the cylindrical explosion, a case of Kelvin-Helmholtz instability, and a 3D magnetic field advection loop. The code uses high-resolution shock-capturing methods, and we present the error analysis for a combination that uses the Harten, Lax, van Leer, and Einfeldt (HLLE) flux formula combined with a linear, piecewise parabolic method and fifth-order weighted essentially nonoscillatory reconstructors. We use the flux-constrained transport and the divergence cleaning methods to control the divergence-free magnetic field constraint.
NASA Astrophysics Data System (ADS)
Di Nucci, Carmine
2018-05-01
This note examines the two-dimensional unsteady isothermal free surface flow of an incompressible fluid in a non-deformable, homogeneous, isotropic, and saturated porous medium (with zero recharge and neglecting capillary effects). Coupling a Boussinesq-type model for nonlinear water waves with Darcy's law, the two-dimensional flow problem is solved using one-dimensional model equations including vertical effects and seepage face. In order to take into account the seepage face development, the system equations (given by the continuity and momentum equations) are completed by an integral relation (deduced from the Cauchy theorem). After testing the model against data sets available in the literature, some numerical simulations, concerning the unsteady flow through a rectangular dam (with an impermeable horizontal bottom), are presented and discussed.
Creation of problem-dependent Doppler-broadened cross sections in the KENO Monte Carlo code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Shane W. D.; Celik, Cihangir; Maldonado, G. Ivan
2015-11-06
In this paper, we introduce a quick method for improving the accuracy of Monte Carlo simulations by generating one- and two-dimensional cross sections at a user-defined temperature before performing transport calculations. A finite difference method is used to Doppler-broaden cross sections to the desired temperature, and unit-base interpolation is used to generate the probability distributions for double-differential two-dimensional thermal moderator cross sections at any arbitrary user-defined temperature. The accuracy of these methods is tested using a variety of contrived problems. In addition, various benchmarks at elevated temperatures are modeled and compared with published benchmark results. The problem-dependent cross sections are observed to produce eigenvalue estimates that are closer to the benchmark results than those obtained without them.
Guide to the Revised Ground-Water Flow and Heat Transport Simulator: HYDROTHERM - Version 3
Kipp, Kenneth L.; Hsieh, Paul A.; Charlton, Scott R.
2008-01-01
The HYDROTHERM computer program simulates multi-phase ground-water flow and associated thermal energy transport in three dimensions. It can handle high fluid pressures, up to 1 × 10⁹ pascals (10⁴ atmospheres), and high temperatures, up to 1,200 degrees Celsius. This report documents the release of Version 3, which includes various additions, modifications, and corrections that have been made to the original simulator. Primary changes to the simulator include: (1) the ability to simulate unconfined ground-water flow, (2) a precipitation-recharge boundary condition, (3) a seepage-surface boundary condition at the land surface, (4) the removal of the limitation that a specified-pressure boundary also have a specified temperature, (5) a new iterative solver for the linear equations based on a generalized minimum-residual method, (6) the ability to use time- or depth-dependent functions for permeability, (7) the conversion of the program code to Fortran 90 to employ dynamic allocation of arrays, and (8) the incorporation of a graphical user interface (GUI) for input and output. The graphical user interface has been developed for defining a simulation, running the HYDROTHERM simulator interactively, and displaying the results. The combination of the graphical user interface and the HYDROTHERM simulator forms the HYDROTHERM INTERACTIVE (HTI) program. HTI can be used for two-dimensional simulations only. New features in Version 3 of the HYDROTHERM simulator have been verified using four test problems. Three problems come from the published literature and one problem was simulated by another partially saturated flow and thermal transport simulator. The test problems include: transient partially saturated vertical infiltration, transient one-dimensional horizontal infiltration, two-dimensional steady-state drainage with a seepage surface, and two-dimensional drainage with coupled heat transport.
An example application to a hypothetical stratovolcano system with unconfined ground-water flow is presented in detail. It illustrates the use of HTI with the combination precipitation-recharge and seepage-surface boundary condition, and functions as a tutorial example problem for the new user.
Learning Relative Motion Concepts in Immersive and Non-immersive Virtual Environments
NASA Astrophysics Data System (ADS)
Kozhevnikov, Michael; Gurlitt, Johannes; Kozhevnikov, Maria
2013-12-01
The focus of the current study is to understand which unique features of an immersive virtual reality environment have the potential to improve learning relative motion concepts. Thirty-seven undergraduate students learned relative motion concepts using computer simulation either in immersive virtual environment (IVE) or non-immersive desktop virtual environment (DVE) conditions. Our results show that after the simulation activities, both IVE and DVE groups exhibited a significant shift toward a scientific understanding in their conceptual models and epistemological beliefs about the nature of relative motion, and also a significant improvement on relative motion problem-solving tests. In addition, we analyzed students' performance on one-dimensional and two-dimensional questions in the relative motion problem-solving test separately and found that after training in the simulation, the IVE group performed significantly better than the DVE group on solving two-dimensional relative motion problems. We suggest that egocentric encoding of the scene in IVE (where the learner constitutes a part of a scene they are immersed in), as compared to allocentric encoding on a computer screen in DVE (where the learner is looking at the scene from "outside"), is more beneficial than DVE for studying more complex (two-dimensional) relative motion problems. Overall, our findings suggest that such aspects of virtual realities as immersivity, first-hand experience, and the possibility of changing different frames of reference can facilitate understanding abstract scientific phenomena and help in displacing intuitive misconceptions with more accurate mental models.
One-dimensional high-order compact method for solving Euler's equations
NASA Astrophysics Data System (ADS)
Mohamad, M. A. H.; Basri, S.; Basuno, B.
2012-06-01
In the field of computational fluid dynamics, many numerical algorithms have been developed to simulate inviscid, compressible flow problems. Among the most famous and relevant are those based on flux-vector splitting and Godunov-type schemes. This system was previously developed through computational studies by Mawlood [1]; however, new test cases for compressible flows, namely the shock tube problems of receding flow and shock waves, were not investigated by Mawlood [1]. Thus, the objective of this study is to develop a high-order compact (HOC) finite difference solver for the one-dimensional Euler equations. Before developing the solver, a detailed investigation was conducted to assess the performance of the basic third-order compact central discretization schemes. Spatial discretization of the Euler equations is based on flux-vector splitting. Discretization of the convective flux terms is based on a hybrid flux-vector splitting, known as the advection upstream splitting method (AUSM), which combines the accuracy of flux-difference splitting and the robustness of flux-vector splitting. The AUSM scheme, combined with the third-order compact approximation of the finite difference equations, was then analyzed in detail. For the first-order schemes in the one-dimensional problem, an explicit time integration method is adopted. In addition, the developed and modified source code for one-dimensional flow is validated with four test cases, namely the unsteady shock tube, quasi-one-dimensional supersonic-subsonic nozzle flow, receding flow, and shock waves in shock tubes. These results were also used to verify that the underlying Riemann problem is correctly identified.
Further analysis compared the characteristics of the AUSM scheme against experimental results obtained from previous works, as well as against computational results generated by the van Leer, KFVS, and AUSMPW schemes. Furthermore, there is a remarkable improvement with the extension of the AUSM scheme from first-order to third-order accuracy in the resolution of shocks, contact discontinuities, and rarefaction waves.
Device-Independent Tests of Classical and Quantum Dimensions
NASA Astrophysics Data System (ADS)
Gallego, Rodrigo; Brunner, Nicolas; Hadley, Christopher; Acín, Antonio
2010-12-01
We address the problem of testing the dimensionality of classical and quantum systems in a “black-box” scenario. We develop a general formalism for tackling this problem. This allows us to derive lower bounds on the classical dimension necessary to reproduce given measurement data. Furthermore, we generalize the concept of quantum dimension witnesses to arbitrary quantum systems, allowing one to place a lower bound on the Hilbert space dimension necessary to reproduce certain data. Illustrating these ideas, we provide simple examples of classical and quantum dimension witnesses.
NASA Technical Reports Server (NTRS)
Misiakos, K.; Lindholm, F. A.
1986-01-01
Several parameters of certain three-dimensional semiconductor devices including diodes, transistors, and solar cells can be determined without solving the actual boundary-value problem. The recombination current, transit time, and open-circuit voltage of planar diodes are emphasized here. The resulting analytical expressions enable determination of the surface recombination velocity of shallow planar diodes. The method involves introducing corresponding one-dimensional models having the same values of these parameters.
Applications of an exponential finite difference technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Handschuh, R.F.; Keith, T.G. Jr.
1988-07-01
An exponential finite difference scheme, first presented by Bhattacharya for one-dimensional unsteady heat conduction problems in Cartesian coordinates, was extended. The finite difference algorithm developed was used to solve the unsteady diffusion equation in one-dimensional cylindrical coordinates and was applied to two- and three-dimensional conduction problems in Cartesian coordinates. Heat conduction involving variable thermal conductivity was also investigated. The method was used to solve nonlinear partial differential equations in one- and two-dimensional Cartesian coordinates. Predicted results are compared to exact solutions where available, or to results obtained by other numerical methods.
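For context, the conventional baseline that exponential finite-difference variants are usually measured against is the standard explicit (FTCS) scheme for 1-D unsteady heat conduction. The sketch below shows that baseline only, not Bhattacharya's exponential scheme, with invented parameters:

```python
import numpy as np

# Standard explicit (FTCS) discretization of u_t = alpha * u_xx with fixed
# end temperatures. NOT the exponential scheme of the paper; a reference
# method with illustrative parameters.
def ftcs_heat(u, alpha, dt, dx, steps):
    r = alpha * dt / dx**2          # must satisfy r <= 0.5 for stability
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

# Cooling of a sine profile on [0, 1] with zero-temperature ends:
# the exact solution decays as exp(-alpha * pi^2 * t).
n = 101
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
alpha, steps = 1.0, 500
dt = 0.4 * dx**2                    # r = 0.4, inside the stability limit
u = ftcs_heat(np.sin(np.pi * x), alpha, dt, dx, steps)
t = dt * steps
exact = np.exp(-np.pi**2 * alpha * t) * np.sin(np.pi * x)
max_err = np.max(np.abs(u - exact))
```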
Comparative study of high-resolution shock-capturing schemes for a real gas
NASA Technical Reports Server (NTRS)
Montagne, J.-L.; Yee, H. C.; Vinokur, M.
1987-01-01
Recently developed second-order explicit shock-capturing methods, in conjunction with generalized flux-vector splittings, and a generalized approximate Riemann solver for a real gas are studied. The comparisons are made on different one-dimensional Riemann (shock-tube) problems for equilibrium air with various ranges of Mach numbers, densities, and pressures. Six different Riemann problems are considered. These tests provide a check on the validity of the generalized formulas, since theoretical prediction of their properties appears to be difficult because of the non-analytical form of the state equation. The numerical results in the supersonic and low-hypersonic regimes indicate that these methods provide good shock-capturing capability and that the shock resolution is only slightly affected by the state equation of equilibrium air. The difference in shock resolution between the various methods varies slightly from one Riemann problem to the other, but the overall accuracy is very similar. For the one-dimensional case, the relative efficiency in terms of operation count for the different methods is within 30%. The main difference between the methods lies in their versatility in being extended to multidimensional problems with efficient implicit solution procedures.
On l(1): Optimal decentralized performance
NASA Technical Reports Server (NTRS)
Sourlas, Dennis; Manousiouthakis, Vasilios
1993-01-01
In this paper, the Manousiouthakis parametrization of all decentralized stabilizing controllers is employed in mathematically formulating the l(sup 1) optimal decentralized controller synthesis problem. The resulting optimization problem is infinite dimensional and therefore not directly amenable to computations. It is shown that finite dimensional optimization problems that have value arbitrarily close to the infinite dimensional one can be constructed. Based on this result, an algorithm that solves the l(sup 1) decentralized performance problems is presented. A global optimization approach to the solution of the infinite dimensional approximating problems is also discussed.
Application of Central Upwind Scheme for Solving Special Relativistic Hydrodynamic Equations
Yousaf, Muhammad; Ghaffar, Tayabia; Qamar, Shamsul
2015-01-01
The accurate modeling of various features in high energy astrophysical scenarios requires the solution of the Einstein equations together with those of special relativistic hydrodynamics (SRHD). Such models are more complicated than the non-relativistic ones due to the nonlinear relations between the conserved and state variables. A high-resolution shock-capturing central upwind scheme is implemented to solve the given set of equations. The proposed technique uses the precise information of local propagation speeds to avoid the excessive numerical diffusion. The second order accuracy of the scheme is obtained with the use of MUSCL-type initial reconstruction and Runge-Kutta time stepping method. After a discussion of the equations solved and of the techniques employed, a series of one and two-dimensional test problems are carried out. To validate the method and assess its accuracy, the staggered central and the kinetic flux-vector splitting schemes are also applied to the same model. The scheme is robust and efficient. Its results are comparable to those obtained from the sophisticated algorithms, even in the case of highly relativistic two-dimensional test problems. PMID:26070067
Using Betweenness Centrality to Identify Manifold Shortcuts
Cukierski, William J.; Foran, David J.
2010-01-01
High-dimensional data presents a challenge to tasks of pattern recognition and machine learning. Dimensionality reduction (DR) methods remove the unwanted variance and make these tasks tractable. Several nonlinear DR methods, such as the well known ISOMAP algorithm, rely on a neighborhood graph to compute geodesic distances between data points. These graphs can contain unwanted edges which connect disparate regions of one or more manifolds. This topological sensitivity is well known [1], [2], [3], yet handling high-dimensional, noisy data in the absence of a priori manifold knowledge, remains an open and difficult problem. This work introduces a divisive, edge-removal method based on graph betweenness centrality which can robustly identify manifold-shorting edges. The problem of graph construction in high dimension is discussed and the proposed algorithm is fit into the ISOMAP workflow. ROC analysis is performed and the performance is tested on synthetic and real datasets. PMID:20607142
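The core idea, ranking edges by betweenness centrality so that manifold-shorting "shortcut" edges stand out, can be sketched on a toy graph (two dense clusters joined by a single shortcut), not on the paper's ISOMAP neighborhood graphs:

```python
import networkx as nx

# Two dense clusters joined by a single shortcut edge. Nearly all
# cross-cluster shortest paths funnel through that edge, so it has the
# highest edge betweenness and is the first edge a divisive method removes.
G = nx.complete_graph(10)                                   # cluster A: 0..9
G.add_edges_from((u + 10, v + 10)
                 for u, v in nx.complete_graph(10).edges)   # cluster B: 10..19
G.add_edge(0, 10)                                           # the shortcut

bc = nx.edge_betweenness_centrality(G)
worst = max(bc, key=bc.get)      # edge carrying the most shortest paths
G.remove_edge(*worst)            # divisive step: cut the suspected shortcut
```

Removing the top-ranked edge disconnects the two clusters, which is exactly the desired outcome when the clusters belong to separate manifolds.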
NASA Astrophysics Data System (ADS)
Rezeau, L.; Belmont, G.; Manuzzo, R.; Aunai, N.; Dargent, J.
2018-01-01
We explore the structure of the magnetopause using a crossing observed by the Magnetospheric Multiscale (MMS) spacecraft on 16 October 2015. Several methods (minimum variance analysis, BV method, and constant velocity analysis) are first applied to compute the normal to the magnetopause considered as a whole. The different results obtained are not identical, and we show that the whole boundary is not stationary and not planar, so that basic assumptions of these methods are not well satisfied. We then analyze more finely the internal structure for investigating the departures from planarity. Using the basic mathematical definition of what is a one-dimensional physical problem, we introduce a new single spacecraft method, called LNA (local normal analysis) for determining the varying normal, and we compare the results so obtained with those coming from the multispacecraft minimum directional derivative (MDD) tool developed by Shi et al. (2005). This last method gives the dimensionality of the magnetic variations from multipoint measurements and also allows estimating the direction of the local normal when the variations are locally 1-D. This study shows that the magnetopause does include approximate one-dimensional substructures but also two- and three-dimensional structures. It also shows that the dimensionality of the magnetic variations can differ from the variations of other fields so that, at some places, the magnetic field can have a 1-D structure although all the plasma variations do not verify the properties of a global one-dimensional problem. A generalization of the MDD tool is proposed.
A revised version of the transfer matrix method to analyze one-dimensional structures
NASA Technical Reports Server (NTRS)
Nitzsche, F.
1983-01-01
A new and general method to analyze both free and forced vibration characteristics of one-dimensional structures is discussed in this paper. This scheme links for the first time the classical transfer matrix method with the recently developed integrating matrix technique to integrate systems of differential equations. Two alternative approaches to the problem are presented. The first is based upon the lumped parameter model to account for the inertia properties of the structure. The second releases that constraint allowing a more precise description of the physical system. The free vibration of a straight uniform beam under different support conditions is analyzed to test the accuracy of the two models. Finally some results for the free vibration of a 12th order system representing a curved, rotating beam prove that the present method is conveniently extended to more complicated structural dynamics problems.
An approximate Riemann solver for magnetohydrodynamics (that works in more than one dimension)
NASA Technical Reports Server (NTRS)
Powell, Kenneth G.
1994-01-01
An approximate Riemann solver is developed for the governing equations of ideal magnetohydrodynamics (MHD). The Riemann solver has an eight-wave structure, where seven of the waves are those used in previous work on upwind schemes for MHD, and the eighth wave is related to the divergence of the magnetic field. The structure of the eighth wave is not immediately obvious from the governing equations as they are usually written, but arises from a modification of the equations that is presented in this paper. The addition of the eighth wave allows multidimensional MHD problems to be solved without the use of staggered grids or a projection scheme, one or the other of which was necessary in previous work on upwind schemes for MHD. A test problem made up of a shock tube with rotated initial conditions is solved to show that the two-dimensional code yields answers consistent with the one-dimensional methods developed previously.
Application of the finite element groundwater model FEWA to the engineered test facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craig, P.M.; Davis, E.C.
1985-09-01
A finite element model for water transport through porous media (FEWA) has been applied to the unconfined aquifer at the Oak Ridge National Laboratory Solid Waste Storage Area 6 Engineered Test Facility (ETF). The model was developed in 1983 as part of the Shallow Land Burial Technology - Humid Task (ONL-WL14) and was previously verified using several general hydrologic problems for which analytic solutions exist. Model application and calibration, as described in this report, consisted of modeling the ETF water table for three specialized cases: a one-dimensional steady-state simulation, a one-dimensional transient simulation, and a two-dimensional transient simulation. In the one-dimensional steady-state simulation, the FEWA output accurately predicted the water table during a long period in which there were no man-induced or natural perturbations to the system. The input parameters of most importance for this case were hydraulic conductivity and aquifer bottom elevation. In the two transient cases, the FEWA output matched observed water table responses to a single rainfall event occurring in February 1983, yielding a calibrated finite element model that is useful for further study of additional precipitation events as well as contaminant transport at the experimental site.
Implicit and semi-implicit schemes in the Versatile Advection Code: numerical tests
NASA Astrophysics Data System (ADS)
Toth, G.; Keppens, R.; Botchev, M. A.
1998-04-01
We describe and evaluate various implicit and semi-implicit time integration schemes applied to the numerical simulation of hydrodynamical and magnetohydrodynamical problems. The schemes were implemented recently in the software package Versatile Advection Code, which uses modern shock capturing methods to solve systems of conservation laws with optional source terms. The main advantage of implicit solution strategies over explicit time integration is that the restrictive constraint on the allowed time step can be (partially) eliminated, thus the computational cost is reduced. The test problems cover one and two dimensional, steady state and time accurate computations, and the solutions contain discontinuities. For each test, we confront explicit with implicit solution strategies.
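The central claim, that implicit integration relaxes the explicit time-step restriction, can be illustrated on the scalar stiff model problem u' = -λu. This is a toy, not the Versatile Advection Code schemes themselves:

```python
# For u' = -lam * u, forward Euler is stable only for dt < 2/lam,
# while backward Euler is unconditionally stable. Parameters invented
# to put dt far above the explicit limit.
lam, dt, steps = 1000.0, 0.01, 100      # explicit limit is 2/lam = 0.002

u_exp = 1.0
u_imp = 1.0
for _ in range(steps):
    u_exp = u_exp + dt * (-lam * u_exp)  # forward (explicit) Euler: blows up
    u_imp = u_imp / (1.0 + lam * dt)     # backward (implicit) Euler: decays

# u_exp grows without bound (|1 - lam*dt| = 9 per step);
# u_imp decays monotonically toward the true solution's limit of 0.
```

The implicit step costs a solve (here trivial, in PDEs a linear system), which is the trade-off the paper's test problems quantify.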
Automated modal parameter estimation using correlation analysis and bootstrap sampling
NASA Astrophysics Data System (ADS)
Yaghoubi, Vahid; Vakilzadeh, Majid K.; Abrahamsson, Thomas J. S.
2018-02-01
The estimation of modal parameters from a set of noisy measured data is a highly judgmental task, with user expertise playing a significant role in distinguishing between estimated physical and noise modes of a test-piece. Various methods have been developed to automate this procedure. The common approach is to identify models with different orders and cluster similar modes together. However, most proposed methods based on this approach suffer from high-dimensional optimization problems in either the estimation or clustering step. To overcome this problem, this study presents an algorithm for autonomous modal parameter estimation in which the only required optimization is performed in a three-dimensional space. To this end, a subspace-based identification method is employed for the estimation and a non-iterative correlation-based method is used for the clustering. This clustering is at the heart of the paper. The keys to success are correlation metrics that are able to treat the problems of spatial eigenvector aliasing and nonunique eigenvectors of coalescent modes simultaneously. The algorithm commences by the identification of an excessively high-order model from frequency response function test data. The high number of modes of this model provides bases for two subspaces: one for likely physical modes of the tested system and one for its complement dubbed the subspace of noise modes. By employing the bootstrap resampling technique, several subsets are generated from the same basic dataset and for each of them a model is identified to form a set of models. Then, by correlation analysis with the two aforementioned subspaces, highly correlated modes of these models which appear repeatedly are clustered together and the noise modes are collected in a so-called Trashbox cluster. Stray noise modes attracted to the mode clusters are trimmed away in a second step by correlation analysis. 
The final step of the algorithm is a fuzzy c-means clustering procedure applied to a three-dimensional feature space to assign a degree of physicalness to each cluster. The proposed algorithm is applied to two case studies: one with synthetic data and one with real test data obtained from a hammer impact test. The results indicate that the algorithm successfully clusters similar modes and gives a reasonable quantification of the extent to which each cluster is physical.
NASA Astrophysics Data System (ADS)
Jiang, Jie; Zheng, Songmu
2012-12-01
In this paper, we study a Neumann and free boundary problem for the one-dimensional viscous radiative and reactive gas. We prove that under rather general assumptions on the heat conductivity κ, for arbitrarily large smooth initial data, the problem admits a unique global classical solution. Our global existence results improve those of Umehara and Tani ["Global solution to the one-dimensional equations for a self-gravitating viscous radiative and reactive gas," J. Differ. Equations 234(2), 439-463 (2007); "Global solvability of the free-boundary problem for one-dimensional motion of a self-gravitating viscous radiative and reactive gas," Proc. Jpn. Acad., Ser. A: Math. Sci. 84(7), 123-128 (2008)] and of Qin, Hu, and Wang ["Global smooth solutions for the compressible viscous and heat-conductive gas," Q. Appl. Math. 69(3), 509-528 (2011)]. Moreover, we analyze the asymptotic behavior of the global solutions to our problem, and we prove that the global solution converges to an equilibrium as time goes to infinity. This is the first such result for this problem in the literature.
FeynArts model file for MSSM transition counterterms from DREG to DRED
NASA Astrophysics Data System (ADS)
Stöckinger, Dominik; Varšo, Philipp
2012-02-01
The FeynArts model file MSSMdreg2dred implements MSSM transition counterterms which can convert one-loop Green functions from dimensional regularization to dimensional reduction. They correspond to a slight extension of the well-known Martin/Vaughn counterterms, specialized to the MSSM, and can serve also as supersymmetry-restoring counterterms. The paper provides full analytic results for the counterterms and gives one- and two-loop usage examples. The model file can simplify combining MS¯-parton distribution functions with supersymmetric renormalization or avoiding the renormalization of ɛ-scalars in dimensional reduction. Program summaryProgram title:MSSMdreg2dred.mod Catalogue identifier: AEKR_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKR_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: LGPL-License [1] No. of lines in distributed program, including test data, etc.: 7600 No. of bytes in distributed program, including test data, etc.: 197 629 Distribution format: tar.gz Programming language: Mathematica, FeynArts Computer: Any, capable of running Mathematica and FeynArts Operating system: Any, with running Mathematica, FeynArts installation Classification: 4.4, 5, 11.1 Subprograms used: Cat Id Title Reference ADOW_v1_0 FeynArts CPC 140 (2001) 418 Nature of problem: The computation of one-loop Feynman diagrams in the minimal supersymmetric standard model (MSSM) requires regularization. Two schemes, dimensional regularization and dimensional reduction are both common but have different pros and cons. In order to combine the advantages of both schemes one would like to easily convert existing results from one scheme into the other. Solution method: Finite counterterms are constructed which correspond precisely to the one-loop scheme differences for the MSSM. They are provided as a FeynArts [2] model file. 
Using this model file together with FeynArts, the (ultraviolet) regularization of any MSSM one-loop Green function is switched automatically from dimensional regularization to dimensional reduction. In particular, the counterterms serve as supersymmetry-restoring counterterms for dimensional regularization. Restrictions: the counterterms are restricted to the one-loop level and the MSSM. Running time: a few seconds to generate typical Feynman graphs with FeynArts.
Confined One Dimensional Harmonic Oscillator as a Two-Mode System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gueorguiev, V G; Rau, A P; Draayer, J P
2005-07-11
The one-dimensional harmonic oscillator in a box problem is possibly the simplest example of a two-mode system. This system has two exactly solvable limits, the harmonic oscillator and a particle in a (one-dimensional) box. Each of the two limits has a characteristic spectral structure describing the two different excitation modes of the system. Near each of these limits, one can use perturbation theory to achieve an accurate description of the eigenstates. Away from the exact limits, however, one has to carry out a matrix diagonalization, because the basis-state mixing that occurs is typically too large to be reproduced in any other way. An alternative to casting the problem in terms of one or the other basis set consists of using an "oblique" basis that uses both sets. Through a study of this alternative in this one-dimensional problem, we are able to illustrate practical solutions and infer the applicability of the concept for more complex systems, such as the study of complex nuclei, where oblique-basis calculations have been successful.
Highly Parallel Alternating Directions Algorithm for Time Dependent Problems
NASA Astrophysics Data System (ADS)
Ganzha, M.; Georgiev, K.; Lirkov, I.; Margenov, S.; Paprzycki, M.
2011-11-01
In our work, we consider the time-dependent Stokes equation on a finite time interval and on a uniform rectangular mesh, written in terms of velocity and pressure. For this problem, a parallel algorithm based on a novel direction-splitting approach is developed. Here, the pressure equation is derived from a perturbed form of the continuity equation, in which the incompressibility constraint is penalized in a negative norm induced by the direction splitting. The scheme used in the algorithm is composed of two parts: (i) velocity prediction, and (ii) pressure correction. This is a Crank-Nicolson-type two-stage time integration scheme for two- and three-dimensional parabolic problems, in which the second-order derivative with respect to each space variable is treated implicitly while the other variables are made explicit at each time sub-step. In order to achieve good parallel performance, the solution of the Poisson problem for the pressure correction is replaced by solving a sequence of one-dimensional second-order elliptic boundary value problems in each spatial direction. The parallel code is implemented using the standard MPI functions and tested on two modern parallel computer systems. The numerical tests performed demonstrate a good level of parallel efficiency and scalability of the studied direction-splitting-based algorithm.
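The payoff of the splitting described above is that each pressure-correction sweep reduces to tridiagonal linear systems. As a rough illustration (not the authors' code, and for a scalar model problem rather than the Stokes system), a one-dimensional second-order elliptic boundary value problem can be solved in O(n) operations with the Thomas algorithm:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (Thomas algorithm, O(n))."""
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1-D model problem: -u'' = f on (0,1), u(0) = u(1) = 0,
# with f = pi^2 sin(pi x) so the exact solution is sin(pi x).
n = 100
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)
a = np.full(n, -1.0 / h**2)   # sub-diagonal (a[0] unused)
b = np.full(n,  2.0 / h**2)   # diagonal
c = np.full(n, -1.0 / h**2)   # super-diagonal (c[-1] unused)
u = thomas_solve(a, b, c, f)
err = np.max(np.abs(u - np.sin(np.pi * x)))   # O(h^2) discretization error
```

In a direction-splitting method, one such solve is performed per grid line per spatial direction, which is what makes the approach easy to distribute across processors.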
Health-related needs of people with multiple chronic diseases: differences and underlying factors.
Hopman, Petra; Schellevis, François G; Rijken, Mieke
2016-03-01
To examine the health-related needs of people with multiple chronic diseases in the Netherlands compared to people with one chronic disease, and to identify different subgroups of multimorbid patients based on differences in their health problems. Participants were 1092 people with one or more chronic diseases from a nationwide prospective panel study on the consequences of chronic illness in the Netherlands. They completed the EQ-6D, a multi-dimensional questionnaire on health problems (October 2013). Chi-square tests and analyses of variance were performed to test for differences between multimorbid patients and patients with one chronic disease. To identify subgroups of multimorbid patients, cluster analysis was performed, and differences in EQ-6D scores between clusters were tested with Chi-square tests. Multimorbid patients (51% of the total sample) experience more problems in most health domains than patients with one chronic disease. Almost half (44%) of the multimorbid patients had many health problems across different domains. These patients were more often female, had a smaller household size, had lower health literacy, and suffered from more chronic diseases. Remarkably, a small subgroup of multimorbid patients (4%, mostly elderly males) is characterized by the fact that all of its members have cognitive problems. Based on the problems they experience, we conclude that patients with multimorbidity have relatively many and diverse health-related needs. Extensive health-related needs among people with multimorbidity may relate not only to the number of chronic diseases they suffer from, but also to their patient characteristics. This should be taken into account when identifying target groups for comprehensive support programmes.
Nonlinear Conservation Laws and Finite Volume Methods
NASA Astrophysics Data System (ADS)
Leveque, Randall J.
Contents: Introduction; Software; Notation; Classification of Differential Equations; Derivation of Conservation Laws; The Euler Equations of Gas Dynamics; Dissipative Fluxes; Source Terms; Radiative Transfer and Isothermal Equations; Multi-dimensional Conservation Laws; The Shock Tube Problem; Mathematical Theory of Hyperbolic Systems; Scalar Equations; Linear Hyperbolic Systems; Nonlinear Systems; The Riemann Problem for the Euler Equations; Numerical Methods in One Dimension; Finite Difference Theory; Finite Volume Methods; Importance of Conservation Form - Incorrect Shock Speeds; Numerical Flux Functions; Godunov's Method; Approximate Riemann Solvers; High-Resolution Methods; Other Approaches; Boundary Conditions; Source Terms and Fractional Steps; Unsplit Methods; Fractional Step Methods; General Formulation of Fractional Step Methods; Stiff Source Terms; Quasi-stationary Flow and Gravity; Multi-dimensional Problems; Dimensional Splitting; Multi-dimensional Finite Volume Methods; Grids and Adaptive Refinement; Computational Difficulties; Low-Density Flows; Discrete Shocks and Viscous Profiles; Start-Up Errors; Wall Heating; Slow-Moving Shocks; Grid Orientation Effects; Grid-Aligned Shocks; Magnetohydrodynamics; The MHD Equations; One-Dimensional MHD; Solving the Riemann Problem; Nonstrict Hyperbolicity; Stiffness; The Divergence of B; Riemann Problems in Multi-dimensional MHD; Staggered Grids; The 8-Wave Riemann Solver; Relativistic Hydrodynamics; Conservation Laws in Spacetime; The Continuity Equation; The 4-Momentum of a Particle; The Stress-Energy Tensor; Finite Volume Methods; Multi-dimensional Relativistic Flow; Gravitation and General Relativity; References
On the theory of oscillating airfoils of finite span in subsonic compressible flow
NASA Technical Reports Server (NTRS)
Reissner, Eric
1950-01-01
The problem of an oscillating lifting surface of finite span in subsonic compressible flow is reduced to an integral equation. The kernel of the integral equation is approximated by a simpler expression on the basis of the assumption of sufficiently large aspect ratio. With this approximation, the double integral occurring in the formulation of the problem is reduced to two single integrals, one taken over the chord and the other over the span of the lifting surface. On the basis of this reduction, the three-dimensional problem separates into two two-dimensional problems, one of them being effectively the problem of two-dimensional flow and the other the problem of spanwise circulation distribution. Earlier results concerning the oscillating lifting surface of finite span in incompressible flow are contained in the present, more general results.
Teaching the Falling Ball Problem with Dimensional Analysis
ERIC Educational Resources Information Center
Sznitman, Josué; Stone, Howard A.; Smits, Alexander J.; Grotberg, James B.
2013-01-01
Dimensional analysis is often a subject reserved for students of fluid mechanics. However, the principles of scaling and dimensional analysis are applicable to various physical problems, many of which can be introduced early on in a university physics curriculum. Here, we revisit one of the best-known examples from a first course in classic…
NASA Astrophysics Data System (ADS)
Balsara, Dinshaw S.; Vides, Jeaniffer; Gurski, Katharine; Nkonga, Boniface; Dumbser, Michael; Garain, Sudip; Audit, Edouard
2016-01-01
Just as the quality of a one-dimensional approximate Riemann solver is improved by the inclusion of internal sub-structure, the quality of a multidimensional Riemann solver is also similarly improved. Such multidimensional Riemann problems arise when multiple states come together at the vertex of a mesh. The interaction of the resulting one-dimensional Riemann problems gives rise to a strongly-interacting state. We wish to endow this strongly-interacting state with physically-motivated sub-structure. The self-similar formulation of Balsara [16] proves especially useful for this purpose. While that work is based on a Galerkin projection, in this paper we present an analogous self-similar formulation that is based on a different interpretation. In the present formulation, we interpret the shock jumps at the boundary of the strongly-interacting state quite literally. The enforcement of the shock jump conditions is done with a least squares projection (Vides, Nkonga and Audit [67]). With that interpretation, we again show that the multidimensional Riemann solver can be endowed with sub-structure. However, we find that the most efficient implementation arises when we use a flux vector splitting and a least squares projection. An alternative formulation that is based on the full characteristic matrices is also presented. The multidimensional Riemann solvers that are demonstrated here use one-dimensional HLLC Riemann solvers as building blocks. Several stringent test problems drawn from hydrodynamics and MHD are presented to show that the method works. Results from structured and unstructured meshes demonstrate the versatility of our method. The reader is also invited to watch a video introduction to multidimensional Riemann solvers on http://www.nd.edu/ dbalsara/Numerical-PDE-Course.
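For orientation, the one-dimensional building blocks mentioned above belong to the HLL family of approximate Riemann solvers. The sketch below shows the simpler two-wave HLL flux for the 1-D Euler equations rather than the three-wave HLLC variant the paper actually uses; the wave-speed estimates and the conserved-variable layout (rho, rho*u, E) are illustrative assumptions:

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats for an ideal gas

def euler_flux(U):
    """Exact 1-D Euler flux of a conserved state U = (rho, rho*u, E)."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return np.array([mom, mom * u + p, (E + p) * u])

def hll_flux(UL, UR):
    """Two-wave HLL approximate Riemann flux between left/right states."""
    def prim(U):
        rho, mom, E = U
        u = mom / rho
        p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
        return rho, u, p
    rhoL, uL, pL = prim(UL)
    rhoR, uR, pR = prim(UR)
    cL = np.sqrt(GAMMA * pL / rhoL)   # sound speeds
    cR = np.sqrt(GAMMA * pR / rhoR)
    sL = min(uL - cL, uR - cR)        # simple wave-speed estimates
    sR = max(uL + cL, uR + cR)
    FL, FR = euler_flux(UL), euler_flux(UR)
    if sL >= 0.0:
        return FL                     # supersonic to the right
    if sR <= 0.0:
        return FR                     # supersonic to the left
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)

# Sod-like interface states: (rho, u, p) = (1, 0, 1) and (0.125, 0, 0.1)
UL = np.array([1.0, 0.0, 2.5])
UR = np.array([0.125, 0.0, 0.25])
F = hll_flux(UL, UR)
```

HLLC adds the contact wave that HLL averages away; the multidimensional solvers of the paper assemble such one-dimensional fluxes into a strongly-interacting vertex state.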
Exact Analytical Solutions for Elastodynamic Impact
2015-11-30
…corroborated by derivation of exact discrete solutions from recursive equations for the impact problems. Subject terms: one-dimensional impact; elastic wave propagation; Laplace transform; floor function; discrete solutions. We consider the one-dimensional impact problem in which a semi…
Pressure distribution under flexible polishing tools. II - Cylindrical (conical) optics
NASA Astrophysics Data System (ADS)
Mehta, Pravin K.
1990-10-01
A previously developed eigenvalue model is extended to determine polishing pressure distribution by rectangular tools with unequal stiffness in two directions on cylindrical optics. Tool misfit is divided into two simplified one-dimensional problems and one simplified two-dimensional problem. Tools with nonuniform cross-sections are treated with a new one-dimensional eigenvalue algorithm, permitting evaluation of tool designs where the edge is more flexible than the interior. This maintains edge pressure variations within acceptable parameters. Finite element modeling is employed to resolve upper bounds, which handle pressure changes in the two-dimensional misfit element. Paraboloids and hyperboloids from the NASA AXAF system are treated with the AXAFPOD software for this method, and are verified with NASTRAN finite element analyses. The maximum deviation from the one-dimensional azimuthal pressure variation is predicted to be 10 percent and 20 percent for paraboloids and hyperboloids, respectively.
NASA Astrophysics Data System (ADS)
Kharibegashvili, S. S.; Jokhadze, O. M.
2014-04-01
A mixed problem for a one-dimensional semilinear wave equation with nonlinear boundary conditions is considered. Conditions of this type occur, for example, in the description of the longitudinal oscillations of a spring fastened elastically at one end, but not in accordance with Hooke's linear law. Uniqueness and existence questions are investigated for global and blowup solutions to this problem, in particular how they depend on the nature of the nonlinearities involved in the equation and the boundary conditions. Bibliography: 14 titles.
Glazoff, Michael V.; Gering, Kevin L.; Garnier, John E.; Rashkeev, Sergey N.; Pyt'ev, Yuri Petrovich
2016-05-17
Embodiments discussed herein in the form of methods, systems, and computer-readable media deal with the application of advanced "projectional" morphological algorithms for solving a broad range of problems. In a method of performing projectional morphological analysis, an N-dimensional input signal is supplied. At least one N-dimensional form indicative of at least one feature in the N-dimensional input signal is identified. The N-dimensional input signal is filtered relative to the at least one N-dimensional form and an N-dimensional output signal is generated indicating results of the filtering at least as differences in the N-dimensional input signal relative to the at least one N-dimensional form.
Simulation-based hypothesis testing of high dimensional means under covariance heterogeneity.
Chang, Jinyuan; Zheng, Chao; Zhou, Wen-Xin; Zhou, Wen
2017-12-01
In this article, we study the problem of testing the mean vectors of high-dimensional data in both one-sample and two-sample cases. The proposed testing procedures employ maximum-type statistics and parametric bootstrap techniques to compute the critical values. Different from the existing tests that rely heavily on structural conditions on the unknown covariance matrices, the proposed tests allow general covariance structures of the data and therefore enjoy a wide scope of applicability in practice. To enhance the power of the tests against sparse alternatives, we further propose two-step procedures with a preliminary feature-screening step. Theoretical properties of the proposed tests are investigated. Through extensive numerical experiments on synthetic data sets and a human acute lymphoblastic leukemia gene expression data set, we illustrate the performance of the new tests and how they may provide assistance in detecting disease-associated gene sets. The proposed methods have been implemented in an R package, HDtest, and are available on CRAN. © 2017, The International Biometric Society.
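A minimal sketch of the general idea, assuming a one-sample test of H0: mean = 0 with a maximum-type statistic and a Gaussian parametric bootstrap; the paper's actual procedures, screening step, and theory are not reproduced here, and the data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def max_stat(X):
    """Maximum-type statistic: largest studentized coordinate of the mean."""
    n = X.shape[0]
    return np.max(np.abs(np.sqrt(n) * X.mean(axis=0) / X.std(axis=0, ddof=1)))

def bootstrap_pvalue(X, B=300):
    """One-sample test of H0: mean = 0.  The parametric bootstrap draws
    from N(0, Sigma_hat), preserving the estimated covariance structure,
    and compares the observed statistic against the bootstrap quantiles."""
    n, p = X.shape
    t_obs = max_stat(X)
    S = np.cov(X, rowvar=False)
    boot = np.empty(B)
    for b in range(B):
        Xb = rng.multivariate_normal(np.zeros(p), S, size=n)
        boot[b] = max_stat(Xb)
    return np.mean(boot >= t_obs)

# Under H0 the p-value is roughly uniform; under a sparse mean shift
# (here 5 of 20 coordinates shifted by 1) it should be very small.
X_null = rng.normal(size=(50, 20))
X_alt = X_null + np.concatenate([np.full(5, 1.0), np.zeros(15)])
p_null = bootstrap_pvalue(X_null)
p_alt = bootstrap_pvalue(X_alt)
```

Maximum-type statistics are well suited to sparse alternatives because a shift in even one coordinate drives the maximum up, whereas sum-type statistics dilute it across all coordinates.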
Evaluation of a wave-vector-frequency-domain method for nonlinear wave propagation
Jing, Yun; Tao, Molei; Clement, Greg T.
2011-01-01
A wave-vector-frequency-domain method is presented to describe one-directional forward or backward acoustic wave propagation in a nonlinear homogeneous medium. Starting from a frequency-domain representation of the second-order nonlinear acoustic wave equation, an implicit solution for the nonlinear term is proposed by employing the Green’s function. Its approximation, which is more suitable for numerical implementation, is used. An error study is carried out to test the efficiency of the model by comparing the results with the Fubini solution. It is shown that the error grows as the propagation distance and step-size increase. However, for the specific case tested, even at a step size as large as one wavelength, sufficient accuracy for plane-wave propagation is observed. A two-dimensional steered transducer problem is explored to verify the nonlinear acoustic field directional independence of the model. A three-dimensional single-element transducer problem is solved to verify the forward model by comparing it with an existing nonlinear wave propagation code. Finally, backward-projection behavior is examined. The sound field over a plane in an absorptive medium is backward projected to the source and compared with the initial field, where good agreement is observed. PMID:21302985
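The forward model evaluated above builds on propagation in the wave-vector domain. The sketch below shows only the linear, one-way propagation step (transform across the transverse coordinate, advance each plane-wave component by its axial phase, transform back); the implicit nonlinear Green's-function term of the paper is omitted, and the grid and medium parameters are invented:

```python
import numpy as np

def angular_spectrum_step(p, dx, dz, k0):
    """Advance a time-harmonic transverse field p by one step dz in the
    wave-vector domain.  Each transverse wavenumber kx propagates with
    axial wavenumber kz = sqrt(k0^2 - kx^2); evanescent components
    (kx > k0) get an imaginary kz and decay."""
    n = len(p)
    kx = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kz = np.sqrt((k0**2 - kx**2).astype(complex))
    return np.fft.ifft(np.fft.fft(p) * np.exp(1j * kz * dz))

# A uniform (plane-wave) field only acquires the axial phase exp(i*k0*dz).
n, dx = 256, 1e-4
k0 = 2.0 * np.pi * 1e6 / 1500.0   # 1 MHz in a water-like medium (c = 1500 m/s)
p0 = np.ones(n, dtype=complex)
p1 = angular_spectrum_step(p0, dx, dz=1e-3, k0=k0)
```

In the full method, a nonlinear source term is evaluated between such linear steps; running the steps with a negated dz is what enables the backward projection tested in the paper.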
Parallel solution of sparse one-dimensional dynamic programming problems
NASA Technical Reports Server (NTRS)
Nicol, David M.
1989-01-01
Parallel computation offers the potential for quickly solving large computational problems. However, it is often a non-trivial task to effectively use parallel computers. Solution methods must sometimes be reformulated to exploit parallelism; the reformulations are often more complex than their slower serial counterparts. We illustrate these points by studying the parallelization of sparse one-dimensional dynamic programming problems, those which do not obviously admit substantial parallelization. We propose a new method for parallelizing such problems, develop analytic models which help us to identify problems which parallelize well, and compare the performance of our algorithm with existing algorithms on a multiprocessor.
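For concreteness, a one-dimensional dynamic programming problem of the kind discussed has the serial recurrence f[j] = min over predecessors i of f[i] + cost(i, j), with sparsity meaning each state has only a few successors. The sketch below shows only the serial recurrence that a parallel formulation must restructure; the state graph is a made-up toy example, not one of the paper's benchmarks:

```python
def min_cost_path(costs):
    """Serial 1-D dynamic program.  costs[i] maps state i to a dict
    {j: cost of transition i -> j}; sparsity means these dicts are small.
    Returns f, where f[j] is the minimum cost of reaching state j from 0."""
    n = len(costs) + 1
    INF = float("inf")
    f = [INF] * n
    f[0] = 0.0
    for i in range(n - 1):          # this sequential sweep is the
        if f[i] == INF:             # dependence a parallel method
            continue                # must break up
        for j, c in costs[i].items():
            f[j] = min(f[j], f[i] + c)
    return f

# tiny example: states 0..3 with sparse transitions
costs = [{1: 2.0, 2: 5.0}, {2: 1.0, 3: 7.0}, {3: 1.0}]
f = min_cost_path(costs)
```

The difficulty the abstract alludes to is visible here: f[j] depends on earlier f[i], so a parallel version must reformulate the sweep (e.g. over stages or partial orders) rather than simply distribute the loop.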
Secure positioning technique based on the encrypted visible light map
NASA Astrophysics Data System (ADS)
Lee, Y. U.; Jung, G.
2017-01-01
To overcome the performance degradation of conventional visible light (VL) positioning systems, caused by co-channel interference from adjacent lights and by the irregularity of the VL reception position in the three-dimensional (3-D) VL channel, a secure positioning technique based on a two-dimensional (2-D) encrypted VL map is proposed, implemented as a prototype on a specific embedded positioning system, and verified by performance tests in this paper. The test results show that the proposed technique performs more than 21.7% better than the conventional one in a real positioning environment, and that the well-known PN code is the optimal stream-encryption key for good VL positioning.
An adaptive grid algorithm for one-dimensional nonlinear equations
NASA Technical Reports Server (NTRS)
Gutierrez, William E.; Hills, Richard G.
1990-01-01
Richards' equation, which models the flow of liquid through unsaturated porous media, is highly nonlinear and difficult to solve. Steep gradients in the field variables require the use of fine grids and small time step sizes. The numerical instabilities caused by the nonlinearities often require the use of iterative methods such as Picard or Newton iteration. These difficulties result in large CPU requirements for solving Richards' equation. With this in mind, adaptive and multigrid methods are investigated for use with nonlinear equations such as Richards' equation. Attention is focused on one-dimensional transient problems. To investigate the use of multigrid and adaptive grid methods, a series of problems is studied. First, a multigrid program is developed and used to solve an ordinary differential equation, demonstrating the efficiency with which low- and high-frequency errors are smoothed out. The multigrid algorithm and an adaptive grid algorithm are then used to solve one-dimensional transient partial differential equations, such as the diffusion and convection-diffusion equations. The performance of these programs is compared to that of the Gauss-Seidel and tridiagonal methods. The adaptive and multigrid schemes outperformed the Gauss-Seidel algorithm, but were not as fast as the tridiagonal method. The adaptive grid scheme solved the problems slightly faster than the multigrid method. To solve nonlinear problems, Picard iterations are introduced into the adaptive grid and tridiagonal methods. Burgers' equation is used as a test problem for the two algorithms. Both methods obtain solutions of comparable accuracy for similar time increments. For the Burgers' equation, the adaptive grid method finds the solution approximately three times faster than the tridiagonal method. Finally, both schemes are used to solve the water content formulation of Richards' equation.
For this problem, the adaptive grid method obtains a more accurate solution in fewer work units and less computation time than required by the tridiagonal method. The performance of the adaptive grid method tends to degrade as the solution process proceeds in time, but still remains faster than the tridiagonal scheme.
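As a rough sketch of the Picard approach described above (on a fixed uniform grid with a dense solver, not the adaptive grid or tridiagonal code of the paper), each backward-Euler step of viscous Burgers' equation can be Picard-linearized by freezing the advection velocity at the previous iterate, so that every iteration is a linear solve:

```python
import numpy as np

def burgers_step(u, dt, dx, nu, n_picard=10, tol=1e-10):
    """One Picard-linearized backward-Euler step for viscous Burgers'
    equation u_t + u u_x = nu u_xx with Dirichlet boundaries.  The
    advection velocity is frozen at the previous Picard iterate, so
    each iteration solves a linear system."""
    n = len(u)
    u_new = u.copy()
    for _ in range(n_picard):
        A = np.eye(n)               # boundary rows stay identity
        rhs = u.copy()              # old time level on the right-hand side
        for i in range(1, n - 1):
            a = u_new[i]            # frozen (Picard) advection velocity
            A[i, i - 1] += dt * (-a / (2 * dx) - nu / dx**2)
            A[i, i]     += dt * (2 * nu / dx**2)
            A[i, i + 1] += dt * ( a / (2 * dx) - nu / dx**2)
        u_next = np.linalg.solve(A, rhs)
        if np.max(np.abs(u_next - u_new)) < tol:
            u_new = u_next
            break                   # Picard iteration has converged
        u_new = u_next
    return u_new

x = np.linspace(0.0, 1.0, 51)
u0 = np.sin(np.pi * x)              # zero at both boundaries
u1 = burgers_step(u0, dt=0.01, dx=x[1] - x[0], nu=0.1)
```

In a banded implementation each Picard iteration is a tridiagonal solve, which is why the tridiagonal and adaptive-grid variants compared in the paper are natural hosts for this linearization.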
Formulation for Simultaneous Aerodynamic Analysis and Design Optimization
NASA Technical Reports Server (NTRS)
Hou, G. W.; Taylor, A. C., III; Mani, S. V.; Newman, P. A.
1993-01-01
An efficient approach for simultaneous aerodynamic analysis and design optimization is presented. This approach does not require the performance of many flow analyses at each design optimization step, which can be an expensive procedure. Thus, this approach brings us one step closer to meeting the challenge of incorporating computational fluid dynamic codes into gradient-based optimization techniques for aerodynamic design. An adjoint-variable method is introduced to nullify the effect of the increased number of design variables in the problem formulation. The method has been successfully tested on one-dimensional nozzle flow problems, including a sample problem with a normal shock. Implementations of the above algorithm are also presented that incorporate Newton iterations to secure a high-quality flow solution at the end of the design process. Implementations with iterative flow solvers are possible and will be required for large, multidimensional flow problems.
Solution of Radiation and Convection Heat-Transfer Problems
NASA Technical Reports Server (NTRS)
Oneill, R. F.
1986-01-01
Computer program P5399B developed to accommodate variety of fin-type heat-conduction applications involving radiative or convective boundary conditions with additionally imposed local heat flux. Program also accommodates significant variety of one-dimensional heat-transfer problems not corresponding specifically to fin-type applications. Program easily accommodates all but few specialized one-dimensional heat-transfer analyses as well as many two-dimensional analyses.
Uniform high order spectral methods for one and two dimensional Euler equations
NASA Technical Reports Server (NTRS)
Cai, Wei; Shu, Chi-Wang
1991-01-01
Uniform high-order spectral methods to solve multi-dimensional Euler equations for gas dynamics are discussed. Uniform high-order spectral approximations with spectral accuracy in smooth regions of solutions are constructed by introducing the idea of Essentially Non-Oscillatory (ENO) polynomial interpolation into the spectral methods. The authors present numerical results for the inviscid Burgers' equation and for the one-dimensional Euler equations, including the interaction between a shock wave and a density disturbance, Sod's and Lax's shock tube problems, and the blast wave problem. The interaction between a Mach 3 two-dimensional shock wave and a rotating vortex is simulated.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Kojima, Fumio
1988-01-01
The identification of the geometrical structure of the system boundary for a two-dimensional diffusion system is reported. The domain identification problem treated here is converted into an optimization problem based on a fit-to-data criterion and theoretical convergence results for approximate identification techniques are discussed. Results of numerical experiments to demonstrate the efficacy of the theoretical ideas are reported.
Vigelius, Matthias; Meyer, Bernd
2012-01-01
For many biological applications, a macroscopic (deterministic) treatment of reaction-drift-diffusion systems is insufficient. Instead, one has to properly handle the stochastic nature of the problem and generate true sample paths of the underlying probability distribution. Unfortunately, stochastic algorithms are computationally expensive and, in most cases, the large number of participating particles renders the relevant parameter regimes inaccessible. In an attempt to address this problem, we present a genuinely stochastic, multi-dimensional algorithm that solves the inhomogeneous, non-linear drift-diffusion problem on a mesoscopic level. Our method improves on existing implementations in being multi-dimensional and in handling inhomogeneous drift and diffusion. The algorithm is well suited for an implementation on data-parallel hardware architectures such as general-purpose graphics processing units (GPUs). We integrate the method into an operator-splitting approach that decouples chemical reactions from the spatial evolution. We demonstrate the validity and applicability of our algorithm with a comprehensive suite of standard test problems that also serve to quantify the numerical accuracy of the method. We provide a freely available, fully functional GPU implementation. Integration into Inchman, a user-friendly web service that allows researchers to perform parallel simulations of reaction-drift-diffusion systems on GPU clusters, is underway. PMID:22506001
Corrected simulations for one-dimensional diffusion processes with naturally occurring boundaries.
Shafiey, Hassan; Gan, Xinjun; Waxman, David
2017-11-01
To simulate a diffusion process, a usual approach is to discretize the time in the associated stochastic differential equation. This is the approach used in the Euler method. In the present work we consider a one-dimensional diffusion process where the terms occurring, within the stochastic differential equation, prevent the process entering a region. The outcome is a naturally occurring boundary (which may be absorbing or reflecting). A complication occurs in a simulation of this situation. The term involving a random variable, within the discretized stochastic differential equation, may take a trajectory across the boundary into a "forbidden region." The naive way of dealing with this problem, which we refer to as the "standard" approach, is simply to reset the trajectory to the boundary, based on the argument that crossing the boundary actually signifies achieving the boundary. In this work we show, within the framework of the Euler method, that such resetting introduces a spurious force into the original diffusion process. This force may have a significant influence on trajectories that come close to a boundary. We propose a corrected numerical scheme, for simulating one-dimensional diffusion processes with naturally occurring boundaries. This involves correcting the standard approach, so that an exact property of the diffusion process is precisely respected. As a consequence, the proposed scheme does not introduce a spurious force into the dynamics. We present numerical test cases, based on exactly soluble one-dimensional problems with one or two boundaries, which suggest that, for a given value of the discrete time step, the proposed scheme leads to substantially more accurate results than the standard approach. Alternatively, the standard approach needs considerably more computation time to obtain a comparable level of accuracy to the proposed scheme, because the standard approach requires a significantly smaller time step.
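A minimal sketch of the "standard" resetting scheme the paper analyzes (the corrected scheme itself is not reproduced here); the square-root-diffusion example, its parameters, and the ensemble size are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def euler_maruyama_reset(x0, drift, diffusion, dt, n_steps,
                         n_paths=10000, lower=0.0):
    """Euler-Maruyama discretization of dX = drift(X) dt + diffusion(X) dW
    with the 'standard' treatment of a natural lower boundary: any step
    that lands below the boundary is simply reset onto it.  As the paper
    shows, this resetting acts like a spurious outward force on
    trajectories near the boundary."""
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)
        x = x + drift(x) * dt + diffusion(x) * dw
        x = np.maximum(x, lower)   # naive reset to the boundary
    return x

# Example: square-root diffusion dX = -X dt + sqrt(X) dW,
# which has a naturally occurring boundary at X = 0.
paths = euler_maruyama_reset(
    x0=1.0,
    drift=lambda x: -x,
    diffusion=lambda x: np.sqrt(np.maximum(x, 0.0)),
    dt=0.01, n_steps=200)
```

The corrected scheme of the paper replaces the bare `np.maximum` reset with a rule chosen so that an exact property of the underlying diffusion is respected, removing the spurious force without shrinking the time step.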
Iterative spectral methods and spectral solutions to compressible flows
NASA Technical Reports Server (NTRS)
Hussaini, M. Y.; Zang, T. A.
1982-01-01
A spectral multigrid scheme is described which can solve pseudospectral discretizations of self-adjoint elliptic problems in O(N log N) operations. An iterative technique for efficiently implementing semi-implicit time-stepping for pseudospectral discretizations of the Navier-Stokes equations is discussed. This approach can handle variable-coefficient terms in an effective manner. Pseudospectral solutions of compressible flow problems are presented. These include one-dimensional problems and two-dimensional Euler solutions. Results are given both for shock-capturing approaches and for shock-fitting ones.
Mixing Regimes in a Spatially Confined, Two-Dimensional, Supersonic Shear Layer
1992-07-31
Contents fragment: Model; The Model Problems; The One-Dimensional Problem. …the effects of the numerical diffusion on the spectrum. Guirguis et al. and Farouk et al. have studied spatially evolving mixing layers for equal…approximations. Physical and Numerical Model, General Formulation: We solve the time-dependent, two-dimensional, compressible Navier-Stokes equations for a
Determination of the temperature field of shell structures
NASA Astrophysics Data System (ADS)
Rodionov, N. G.
1986-10-01
A stationary heat conduction problem is formulated for the case of shell structures, such as those found in gas-turbine and jet engines. A two-dimensional elliptic differential equation of stationary heat conduction is obtained which allows, in an approximate manner, for temperature changes along a third variable, i.e., the shell thickness. The two-dimensional problem is reduced to a series of one-dimensional problems which are then solved using efficient difference schemes. The approach proposed here is illustrated by a specific example.
NASA Astrophysics Data System (ADS)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality; that is, their computational costs become intractable for problems involving a large number of uncertainty parameters. In these situations, classic Monte Carlo (MC) often remains the method of choice, because its convergence rate O(n^(-1/2)), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via polynomial chaos expansion. The first algorithm, for situations where an exact SDR subspace exists, is proved to converge at rate O(n^(-1)), hence much faster than MC. The second algorithm, which does not require an exact SDR subspace, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain can still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several additional practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
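The two ingredients above can be seen in a toy experiment: plain MC error decays like O(n^(-1/2)) independent of dimension, and a reduced model used as a control variate lowers the error constant. The QoI and the crude low-dimensional surrogate below are invented for illustration and are not the paper's examples:

```python
import numpy as np

rng = np.random.default_rng(2)

def mc_estimate(f, n, d):
    """Plain Monte Carlo estimate of E[f(X)] for X ~ N(0, I_d)."""
    x = rng.normal(size=(n, d))
    return f(x).mean()

def cv_estimate(f, g, g_mean, n, d):
    """Control-variate estimate: subtract beta * (mean(g) - E[g]), where
    g is a cheap 'reduced model' correlated with the QoI f and beta is
    the regression coefficient that minimizes the variance."""
    x = rng.normal(size=(n, d))
    fx, gx = f(x), g(x)
    beta = np.cov(fx, gx)[0, 1] / np.var(gx)
    return fx.mean() - beta * (gx.mean() - g_mean)

d = 20
f = lambda x: np.sum(x**2, axis=1)             # QoI, exact mean = d
g = lambda x: np.sum(x[:, :10]**2, axis=1)     # surrogate using 10 of 20 dims
est_mc = np.array([mc_estimate(f, 2000, d) for _ in range(200)])
est_cv = np.array([cv_estimate(f, g, 10.0, 2000, d) for _ in range(200)])
# est_cv scatters less around the true mean of 20 than est_mc does
```

The better the reduced model tracks the QoI, the closer beta pushes the residual variance toward zero, which is exactly the mechanism the second IRUQ algorithm exploits.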
Simulators for Maintenance Training: Some Issues, Problems and Areas for Future Research
1978-07-01
trainer into a full-scale, three-dimensional simulation of one cabinet of the NIKE HIPAR system. Test points for troubleshooting were located on simulated...described was used to teach maintenance of the NIKE HIPAR system. It too was considered to be a general purpose trainer in that its basic features could be...types of maintenance simulators based on a detailed task analysis of the NIKE HIPAR system as it existed one year before it was scheduled to become
A method of smoothed particle hydrodynamics using spheroidal kernels
NASA Technical Reports Server (NTRS)
Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.
1995-01-01
We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited for modeling systems which retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.
Maximizing kinetic energy transfer in one-dimensional many-body collisions
NASA Astrophysics Data System (ADS)
Ricardo, Bernard; Lee, Paul
2015-03-01
The main problem discussed in this paper involves a simple one-dimensional two-body collision, which can be extended into a chain of one-dimensional many-body collisions. The result is quite interesting, as it provides us with a thorough mathematical understanding that will help in designing a chain system for maximum energy transfer for a range of collision types. In this paper, we will show that there is a way to improve the kinetic energy transfer between two masses, and the idea can be applied recursively. However, this method only works for a certain range of collision types, which is indicated by a range of coefficients of restitution. Although the concepts of momentum, elastic and inelastic collisions, and Newton's laws are taught in junior college physics, especially in Singapore schools, students at this level are not expected to be able to solve this problem quantitatively, as it requires rigorous mathematics, including calculus. Nevertheless, this paper provides nice analytical steps that address some common misconceptions in students' way of thinking about one-dimensional collisions.
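The improvement from inserting an intermediate mass can be checked with elementary elastic-collision kinematics (a perfectly elastic special case only; the paper treats a whole range of restitution coefficients):

```python
import math

# Fraction of kinetic energy transferred in a head-on elastic collision
# with the target initially at rest: 4*m_in*m_out / (m_in + m_out)**2.
def transfer_fraction(m_in, m_out):
    return 4 * m_in * m_out / (m_in + m_out) ** 2

m1, m3 = 1.0, 9.0
direct = transfer_fraction(m1, m3)          # m1 hits m3 directly

m2 = math.sqrt(m1 * m3)                     # optimal intermediate mass
chained = transfer_fraction(m1, m2) * transfer_fraction(m2, m3)
# chained > direct: the geometric-mean intermediate mass improves transfer
```

With m1 = 1 and m3 = 9, the direct transfer is 36% of the kinetic energy, while routing through m2 = 3 raises it to about 56%, and the idea can be applied recursively with further intermediate masses.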
One-Dimensional Czedli-Type Islands
ERIC Educational Resources Information Center
Horvath, Eszter K.; Mader, Attila; Tepavcevic, Andreja
2011-01-01
The notion of an island has surfaced in recent algebra and coding theory research. Discrete versions provide interesting combinatorial problems. This paper presents the one-dimensional case with finitely many heights, a topic convenient for student research.
Plane Poiseuille flow of a rarefied gas in the presence of strong gravitation.
Doi, Toshiyuki
2011-02-01
Plane Poiseuille flow of a rarefied gas, which flows horizontally in the presence of strong gravitation, is studied based on the Boltzmann equation. Applying the asymptotic analysis for a small variation in the flow direction [Y. Sone, Molecular Gas Dynamics (Birkhäuser, 2007)], the two-dimensional problem is reduced to a one-dimensional problem, as in the case of a Poiseuille flow in the absence of gravitation, and the solution is obtained in a semianalytical form. The reduced one-dimensional problem is solved numerically for a hard sphere molecular gas over a wide range of the gas-rarefaction degree and the gravitational strength. The presence of gravitation reduces the mass flow rate, and the effect of gravitation is significant for large Knudsen numbers. To verify the validity of the asymptotic solution, a two-dimensional problem of a flow through a long channel is directly solved numerically, and the validity of the asymptotic solution is confirmed. ©2011 American Physical Society
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doebling, Scott William
This paper documents the escape of high explosive (HE) products problem. The problem, first presented by Fickett & Rivard, tests the implementation and numerical behavior of a high explosive detonation and energy release model and its interaction with an associated compressible hydrodynamics simulation code. The problem simulates the detonation of a finite-length, one-dimensional piece of HE that is driven by a piston from one end and adjacent to a void at the other end. The HE equation of state is modeled as a polytropic ideal gas. The HE detonation is assumed to be instantaneous with an infinitesimal reaction zone. Via judicious selection of the material specific heat ratio, the problem has an exact solution with linear characteristics, enabling a straightforward calculation of the physical variables as a function of time and space. Lastly, implementation of the exact solution in the Python code ExactPack is discussed, as are verification cases for the exact solution code.
HUFF, a One-Dimensional Hydrodynamics Code for Strong Shocks
1978-12-01
results for two sample problems. The first problem discussed is a one-kiloton nuclear burst in infinite sea level air. The second problem is the one...of HUFF as an effective first order hydrodynamic computer code. 1 KT Explosion The one-kiloton nuclear explosion in infinite sea level air was
NASA Technical Reports Server (NTRS)
Morozov, S. K.; Krasitskiy, O. P.
1978-01-01
A computational scheme and a standard program are proposed for solving systems of nonstationary, spatially one-dimensional nonlinear differential equations using Newton's method. The proposed scheme is universal in its applicability and reduces the work of programming to a minimum. The program is written in the FORTRAN language and can be used without change on electronic computers of type YeS and BESM-6. The standard program described permits the identification of nonstationary (or stationary) solutions to systems of spatially one-dimensional nonlinear (or linear) partial differential equations. The proposed method may be used to solve a series of geophysical problems which take chemical reactions, diffusion, and heat conductivity into account; to evaluate nonstationary thermal fields in two-dimensional structures when a small number of discrete levels suffices in one of the geometrical directions; and to solve problems in nonstationary gas dynamics.
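Newton's method for a spatially one-dimensional nonlinear problem of this kind can be sketched as follows (a minimal sketch: the model equation u'' = u^3 with fixed ends and the grid are assumptions, and Python stands in for the FORTRAN program described):

```python
import numpy as np

# Newton iteration for the two-point problem u'' = u**3, u(0)=0, u(1)=1,
# discretized by central differences on n interior grid points.
n = 99
h = 1.0 / (n + 1)
u = np.linspace(0.0, 1.0, n + 2)[1:-1]   # initial guess: linear profile

for _ in range(20):
    # residual F_i = (u[i-1] - 2 u[i] + u[i+1]) / h^2 - u[i]^3
    up = np.concatenate(([0.0], u, [1.0]))
    F = (up[:-2] - 2 * up[1:-1] + up[2:]) / h**2 - u**3
    # tridiagonal Jacobian dF/du
    J = (np.diag(-2.0 / h**2 - 3 * u**2)
         + np.diag(np.full(n - 1, 1.0 / h**2), 1)
         + np.diag(np.full(n - 1, 1.0 / h**2), -1))
    du = np.linalg.solve(J, -F)          # Newton correction
    u += du
    if np.max(np.abs(du)) < 1e-12:
        break
```

The diagonally dominant tridiagonal Jacobian makes each Newton step a cheap linear solve, and the iteration converges in a handful of steps from the linear initial guess.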
Action-minimizing solutions of the one-dimensional N-body problem
NASA Astrophysics Data System (ADS)
Yu, Xiang; Zhang, Shiqing
2018-05-01
We supplement the following result of C. Marchal on the Newtonian N-body problem: A path minimizing the Lagrangian action functional between two given configurations is always a true (collision-free) solution when the dimension d of the physical space R^d satisfies d≥2. The focus of this paper is on the fixed-ends problem for the one-dimensional Newtonian N-body problem. We prove that a path minimizing the action functional in the set of paths joining two given configurations and having all the time the same order is always a true (collision-free) solution. Considering the one-dimensional N-body problem with equal masses, we prove that (i) collision instants are isolated for a path minimizing the action functional between two given configurations, (ii) if the particles at two endpoints have the same order, then the path minimizing the action functional is always a true (collision-free) solution and (iii) when the particles at two endpoints have different order, although there must be collisions for any path, we can prove that there are at most N! - 1 collisions for any action-minimizing path.
A discontinuous Galerkin method for two-dimensional PDE models of Asian options
NASA Astrophysics Data System (ADS)
Hozman, J.; Tichý, T.; Cvejnová, D.
2016-06-01
In our previous research we have focused on the problem of plain vanilla option valuation using discontinuous Galerkin method for numerical PDE solution. Here we extend a simple one-dimensional problem into two-dimensional one and design a scheme for valuation of Asian options, i.e. options with payoff depending on the average of prices collected over prespecified horizon. The algorithm is based on the approach combining the advantages of the finite element methods together with the piecewise polynomial generally discontinuous approximations. Finally, an illustrative example using DAX option market data is provided.
A Numerical Investigation of the Burnett Equations Based on the Second Law
NASA Technical Reports Server (NTRS)
Comeaux, Keith A.; Chapman, Dean R.; MacCormack, Robert W.; Edwards, Thomas A. (Technical Monitor)
1995-01-01
The Burnett equations have been shown to potentially violate the second law of thermodynamics. The objective of this investigation is to correlate the numerical problems experienced by the Burnett equations with the negative production of entropy. The equations have had a long history of numerical instability to small-wavelength disturbances. Recently, Zhong corrected the instability problem and made solutions attainable for one-dimensional shock waves and hypersonic blunt bodies. Difficulties still exist when attempting to solve hypersonic flat plate boundary layers and blunt body wake flows, however. Numerical experiments will include one-dimensional shock waves, quasi-one-dimensional nozzles, and expanding Prandtl-Meyer flows, and will specifically examine the entropy production for these cases.
Classification Objects, Ideal Observers & Generative Models
ERIC Educational Resources Information Center
Olman, Cheryl; Kersten, Daniel
2004-01-01
A successful vision system must solve the problem of deriving geometrical information about three-dimensional objects from two-dimensional photometric input. The human visual system solves this problem with remarkable efficiency, and one challenge in vision research is to understand how neural representations of objects are formed and what visual…
Current status of one- and two-dimensional numerical models: Successes and limitations
NASA Technical Reports Server (NTRS)
Schwartz, R. J.; Gray, J. L.; Lundstrom, M. S.
1985-01-01
The capabilities of one- and two-dimensional numerical solar cell modeling programs (SCAP1D and SCAP2D) are described. The occasions when a two-dimensional model is required are discussed. The application of the models to design, analysis, and prediction is presented, along with a discussion of problem areas for solar cell modeling.
NASA Technical Reports Server (NTRS)
Chen, T.; Raju, I. S.
2002-01-01
A coupled finite element (FE) method and meshless local Petrov-Galerkin (MLPG) method for analyzing two-dimensional potential problems is presented in this paper. The analysis domain is subdivided into two regions, a finite element (FE) region and a meshless (MM) region. A single weighted residual form is written for the entire domain. Independent trial and test functions are assumed in the FE and MM regions. A transition region is created between the two regions. The transition region blends the trial and test functions of the FE and MM regions. The trial function blending is achieved using a technique similar to the 'Coons patch' method that is widely used in computer-aided geometric design. The test function blending is achieved by using either FE or MM test functions on the nodes in the transition element. The technique was evaluated by applying the coupled method to two potential problems governed by the Poisson equation. The coupled method passed all the patch test problems and gave accurate solutions for the problems studied.
Advanced numerical methods for three dimensional two-phase flow calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toumi, I.; Caruge, D.
1997-07-01
This paper is devoted to new numerical methods developed for both one and three dimensional two-phase flow calculations. These methods are finite volume numerical methods and are based on the use of approximate Riemann solver concepts to define convective fluxes versus mean cell quantities. The first part of the paper presents the numerical method for a one dimensional hyperbolic two-fluid model including differential terms such as added mass and interface pressure. This numerical solution scheme makes use of the Riemann problem solution to define backward and forward differencing to approximate spatial derivatives. The construction of this approximate Riemann solver uses an extension of Roe's method that has been successfully used to solve gas dynamic equations. Since the two-fluid model is hyperbolic, this numerical method seems very efficient for the numerical solution of two-phase flow problems. The scheme was applied both to shock tube problems and to standard tests for two-fluid computer codes. The second part describes the numerical method in the three dimensional case. The authors also discuss some improvements performed to obtain a fully implicit solution method that provides fast running steady state calculations. Such a scheme is now implemented in a thermal-hydraulic computer code devoted to 3-D steady-state and transient computations. Some results obtained for Pressurised Water Reactors concerning upper plenum calculations and a steady state flow in the core with rod bow effect evaluation are presented. In practice these new numerical methods have proved to be stable on non staggered grids and capable of generating accurate non oscillating solutions for two-phase flow calculations.
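The Roe-type flux idea can be illustrated on a scalar analogue (an assumption for illustration: Burgers' equation stands in for the two-fluid system, so this is a generic Roe-flux sketch, not the paper's solver):

```python
import numpy as np

# Roe-type approximate Riemann flux for Burgers' equation u_t + (u^2/2)_x = 0:
# the convective flux at each cell face is defined from the mean cell
# quantities on either side, upwinded by the Roe-averaged wave speed.
def roe_flux(uL, uR):
    a = 0.5 * (uL + uR)                   # Roe-averaged speed for f(u) = u^2/2
    return 0.5 * (0.5 * uL**2 + 0.5 * uR**2) - 0.5 * np.abs(a) * (uR - uL)

def step(u, dt, dx):
    f = roe_flux(u[:-1], u[1:])           # fluxes at interior faces
    u[1:-1] -= dt / dx * (f[1:] - f[:-1]) # finite-volume update of interior cells
    return u

# shock-tube-like test: a right-moving shock of speed 1/2
x = np.linspace(0.0, 1.0, 201)
u = np.where(x < 0.5, 1.0, 0.0)
dx, dt = x[1] - x[0], 0.002
for _ in range(100):                      # advance to t = 0.2; shock reaches x = 0.6
    u = step(u, dt, dx)
```

The Roe flux reduces to exact upwinding across the shock, so the discontinuity is captured sharply and without oscillation, which is the property that makes such solvers attractive for two-phase shock tube tests.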
Monolithic multigrid methods for two-dimensional resistive magnetohydrodynamics
Adler, James H.; Benson, Thomas R.; Cyr, Eric C.; ...
2016-01-06
Magnetohydrodynamic (MHD) representations are used to model a wide range of plasma physics applications and are characterized by a nonlinear system of partial differential equations that strongly couples a charged fluid with the evolution of electromagnetic fields. The resulting linear systems that arise from discretization and linearization of the nonlinear problem are generally difficult to solve. In this paper, we investigate multigrid preconditioners for this system. We consider two well-known multigrid relaxation methods for incompressible fluid dynamics: Braess-Sarazin relaxation and Vanka relaxation. We first extend these to the context of steady-state one-fluid viscoresistive MHD. Then we compare the two relaxation procedures within a multigrid-preconditioned GMRES method employed within Newton's method. To isolate the effects of the different relaxation methods, we use structured grids, inf-sup stable finite elements, and geometric interpolation. Furthermore, we present convergence and timing results for a two-dimensional, steady-state test problem.
Positivity-preserving numerical schemes for multidimensional advection
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Macvean, M. K.; Lock, A. P.
1993-01-01
This report describes the construction of an explicit, single time-step, conservative, finite-volume method for multidimensional advective flow, based on a uniformly third-order polynomial interpolation algorithm (UTOPIA). Particular attention is paid to the problem of flow-to-grid angle-dependent, anisotropic distortion typical of one-dimensional schemes used component-wise. The third-order multidimensional scheme automatically includes certain cross-difference terms that guarantee good isotropy (and stability). However, above first-order, polynomial-based advection schemes do not preserve positivity (the multidimensional analogue of monotonicity). For this reason, a multidimensional generalization of the first author's universal flux-limiter is sought. This is a very challenging problem. A simple flux-limiter can be found; but this introduces strong anisotropic distortion. A more sophisticated technique, limiting part of the flux and then restoring the isotropy-maintaining cross-terms afterwards, gives more satisfactory results. Test cases are confined to two dimensions; three-dimensional extensions are briefly discussed.
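The role of a flux limiter in preserving positivity can be illustrated with a generic limited one-dimensional advection step (a standard MUSCL-type minmod sketch for illustration only; this is neither the UTOPIA scheme nor the universal flux-limiter of the report):

```python
import numpy as np

# Minmod slope limiter: zero at extrema, so no new over/undershoots appear.
def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

# One conservative finite-volume step for u_t + a u_x = 0 on a periodic grid,
# Courant number c in (0, 1], flow left-to-right.
def advect_step(q, c):
    dq = minmod(q - np.roll(q, 1), np.roll(q, -1) - q)  # limited slope per cell
    q_face = q + 0.5 * (1 - c) * dq                     # upwind face values
    flux = c * q_face
    return q - (flux - np.roll(flux, 1))                # conservative update

q = np.zeros(100)
q[40:60] = 1.0                    # positive square wave
for _ in range(50):
    q = advect_step(q, 0.5)
```

Because the limited scheme is total-variation diminishing for this linear problem, the advected square wave stays within its initial bounds and never goes negative, while the conservative flux form preserves the total mass exactly.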
Asteroseismic Constraints on the Models of Hot B Subdwarfs: Convective Helium-Burning Cores
NASA Astrophysics Data System (ADS)
Schindler, Jan-Torge; Green, Elizabeth M.; Arnett, W. David
2017-10-01
Asteroseismology of non-radial pulsations in Hot B Subdwarfs (sdB stars) offers a unique view into the interior of core-helium-burning stars. Ground-based and space-borne high precision light curves allow for the analysis of pressure and gravity mode pulsations to probe the structure of sdB stars deep into the convective core. As such asteroseismological analysis provides an excellent opportunity to test our understanding of stellar evolution. In light of the newest constraints from asteroseismology of sdB and red clump stars, standard approaches of convective mixing in 1D stellar evolution models are called into question. The problem lies in the current treatment of overshooting and the entrainment at the convective boundary. Unfortunately no consistent algorithm of convective mixing exists to solve the problem, introducing uncertainties to the estimates of stellar ages. Three dimensional simulations of stellar convection show the natural development of an overshooting region and a boundary layer. In search for a consistent prescription of convection in one dimensional stellar evolution models, guidance from three dimensional simulations and asteroseismological results is indispensable.
Aerodynamic Shape Optimization Using A Real-Number-Encoded Genetic Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2001-01-01
A new method for aerodynamic shape optimization using a genetic algorithm with real number encoding is presented. The algorithm is used to optimize three different problems: a simple hill-climbing problem; a quasi-one-dimensional nozzle problem using an Euler equation solver; and a three-dimensional transonic wing problem using a nonlinear potential solver. Results indicate that the genetic algorithm is easy to implement and extremely reliable, being relatively insensitive to design space noise.
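A real-number-encoded genetic algorithm of the kind described can be sketched roughly as follows (the blend crossover, Gaussian mutation, truncation selection, and the toy hill function are all assumptions for illustration; the paper's actual operators are not reproduced here):

```python
import numpy as np

# Real-coded GA maximizing a simple "hill" f(x) = -sum(x^2), optimum at x = 0.
# Designs are vectors of real genes; no binary encoding/decoding is needed.
rng = np.random.default_rng(1)

def fitness(pop):
    return -np.sum(pop**2, axis=1)

pop = rng.uniform(-5, 5, size=(40, 3))            # 40 designs, 3 real genes each
for gen in range(200):
    order = np.argsort(fitness(pop))[::-1]
    parents = pop[order[:20]]                     # truncation selection (elitist)
    a = rng.random((20, 1))
    i = rng.integers(0, 20, 20)
    j = rng.integers(0, 20, 20)
    children = a * parents[i] + (1 - a) * parents[j]   # blend crossover
    children += rng.normal(0.0, 0.1, children.shape)   # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax(fitness(pop))]
```

Keeping the parents in the next population makes the best-so-far design monotone in fitness, and operating directly on real genes avoids the precision limits of binary encodings.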
Development of a particle method of characteristics (PMOC) for one-dimensional shock waves
NASA Astrophysics Data System (ADS)
Hwang, Y.-H.
2018-03-01
In the present study, a particle method of characteristics is put forward to simulate the evolution of one-dimensional shock waves in barotropic gaseous, closed-conduit, open-channel, and two-phase flows. All these flow phenomena can be described with the same set of governing equations. The proposed scheme is established on the basis of the characteristic equations and formulated by assigning the computational particles to move along the characteristic curves. Both the right- and left-running characteristics are traced and represented by their associated computational particles. It inherits the computational merits of the conventional method of characteristics (MOC) and the moving particle method, but without their individual deficiencies. In addition, special particles with dual states, deduced from the enforcement of the Rankine-Hugoniot relation, are deliberately imposed to emulate the shock structure. Numerical tests are carried out by solving some benchmark problems, and the computational results are compared with available analytical solutions. From the derivation procedure and the obtained computational results, it is concluded that the proposed PMOC will be a useful tool for replicating one-dimensional shock waves.
2D and 3D Traveling Salesman Problem
ERIC Educational Resources Information Center
Haxhimusa, Yll; Carpenter, Edward; Catrambone, Joseph; Foldes, David; Stefanov, Emil; Arns, Laura; Pizlo, Zygmunt
2011-01-01
When a two-dimensional (2D) traveling salesman problem (TSP) is presented on a computer screen, human subjects can produce near-optimal tours in linear time. In this study we tested human performance on a real and virtual floor, as well as in a three-dimensional (3D) virtual space. Human performance on the real floor is as good as that on a…
Updates to Multi-Dimensional Flux Reconstruction for Hypersonic Simulations on Tetrahedral Grids
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2010-01-01
The quality of simulated hypersonic stagnation region heating with tetrahedral meshes is investigated by using an updated three-dimensional, upwind reconstruction algorithm for the inviscid flux vector. An earlier implementation of this algorithm provided improved symmetry characteristics on tetrahedral grids compared to conventional reconstruction methods. The original formulation however displayed quantitative differences in heating and shear that were as large as 25% compared to a benchmark, structured-grid solution. The primary cause of this discrepancy is found to be an inherent inconsistency in the formulation of the flux limiter. The inconsistency is removed by employing a Green-Gauss formulation of primitive gradients at nodes to replace the previous Gram-Schmidt algorithm. Current results are now in good agreement with benchmark solutions for two challenge problems: (1) hypersonic flow over a three-dimensional cylindrical section with special attention to the uniformity of the solution in the spanwise direction and (2) hypersonic flow over a three-dimensional sphere. The tetrahedral cells used in the simulation are derived from a structured grid where cell faces are bisected across the diagonal resulting in a consistent pattern of diagonals running in a biased direction across the otherwise symmetric domain. This grid is known to accentuate problems in both shock capturing and stagnation region heating encountered with conventional, quasi-one-dimensional inviscid flux reconstruction algorithms. Therefore the test problems provide a sensitive indicator for algorithmic effects on heating. Additional simulations on a sharp, double cone and the shuttle orbiter are then presented to demonstrate the capabilities of the new algorithm on more geometrically complex flows with tetrahedral grids. 
These results provide the first indication that pure tetrahedral elements utilizing the updated, three-dimensional, upwind reconstruction algorithm may be used for the simulation of heating and shear in hypersonic flows in upwind, finite volume formulations.
Constructing space difference schemes which satisfy a cell entropy inequality
NASA Technical Reports Server (NTRS)
Merriam, Marshal L.
1989-01-01
A numerical methodology for solving convection problems is presented, using finite difference schemes which satisfy the second law of thermodynamics on a cell-by-cell basis in addition to the usual conservation laws. It is shown that satisfaction of a cell entropy inequality is sufficient, in some cases, to guarantee nonlinear stability. Some details are given for several one-dimensional problems, including the quasi-one-dimensional Euler equations applied to flow in a nozzle.
Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem
NASA Astrophysics Data System (ADS)
Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang
2015-09-01
A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. The computational results of benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.
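For contrast with the column-generation-based PSG, a cutting-pattern heuristic for the single-stock-size special case can be sketched (first-fit decreasing on toy data; this is a simple baseline of the kind the PSG is compared against, not the PSG itself):

```python
# First-fit decreasing for one-dimensional cutting stock with a single
# stock length: each opened bar accumulates pieces into a cutting pattern.
def first_fit_decreasing(stock_len, items):
    bars = []       # remaining length in each opened stock bar
    patterns = []   # pieces assigned to each bar (one cutting pattern per bar)
    for piece in sorted(items, reverse=True):
        for k, rem in enumerate(bars):
            if piece <= rem:            # piece fits in an already-open bar
                bars[k] -= piece
                patterns[k].append(piece)
                break
        else:                           # no open bar fits: open a new one
            bars.append(stock_len - piece)
            patterns.append([piece])
    return patterns

patterns = first_fit_decreasing(10, [6, 6, 5, 4, 4, 3, 2])
```

On this toy instance the heuristic happens to find a zero-waste solution with three bars; in general such heuristics give only upper bounds, which is why pattern-set generation plus integer linear programming can improve on them.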
Finite element meshing approached as a global minimization process
DOE Office of Scientific and Technical Information (OSTI.GOV)
WITKOWSKI,WALTER R.; JUNG,JOSEPH; DOHRMANN,CLARK R.
2000-03-01
The ability to generate a suitable finite element mesh in an automatic fashion is becoming the key to being able to automate the entire engineering analysis process. However, placing an all-hexahedron mesh in a general three-dimensional body continues to be an elusive goal. The approach investigated in this research is fundamentally different from any other known to the authors. A physical analogy viewpoint is used to formulate the actual meshing problem, which constructs a global mathematical description of the problem. The analogy used was that of minimizing the electrical potential of a system of charged particles within a charged domain. The particles in the presented analogy represent duals to mesh elements (i.e., quads or hexes). Particle movement is governed by a mathematical functional which accounts for inter-particle repulsive, attractive, and alignment forces. This functional is minimized to find the optimal location and orientation of each particle. Once the particles are connected, a mesh can easily be resolved. The mathematical description for this problem is as easy to formulate in three dimensions as it is in two or one. The meshing algorithm was developed within CoMeT. It can solve the two-dimensional meshing problem for convex and concave geometries in a purely automated fashion. Investigation of the robustness of the technique has shown a success rate of approximately 99% for the two-dimensional geometries tested. Run times to mesh a 100-element complex geometry were typically in the 10 minute range. Efficiency of the technique is still an issue that needs to be addressed. Performance is an issue that is critical for most engineers generating meshes; it was not for this project. The primary focus of this work was to investigate and evaluate a meshing algorithm/philosophy, with efficiency issues being secondary. The algorithm was also extended to mesh three-dimensional geometries.
Unfortunately, only simple geometries were tested before this project ended. The primary complexity in the extension was in the connectivity problem formulation. Defining all of the interparticle interactions that occur in three dimensions and expressing them in mathematical relationships is very difficult.
NASA Astrophysics Data System (ADS)
Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.
2018-01-01
Although high performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere commonplace, run times for large complex basin models can still be on the order of days to weeks, thus limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin hypercube sampling (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos expansion (PCE) for uncertainty quantification, which in contrast to random sampling methods (e.g., LHS and MCS) represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics as well as local and global sensitivity measures is straightforward and computationally inexpensive, thus making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems.
It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at 241 different times each. Numerical experiments show that polynomial chaos is an effective and robust method for quantifying uncertainty in fully-integrated hydrologic simulations, which provides a rich set of features and is computationally efficient. Our approach has the potential for significant speedup over existing sampling based methods when the number of uncertain model parameters is modest ( ≤ 20). To our knowledge, this is the first implementation of the algorithm in a comprehensive, fully-integrated, physically-based three-dimensional hydrosystem model.
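The step of reading statistics directly off the chaos coefficients can be illustrated with a one-parameter Hermite chaos toy problem (an assumption for illustration: the model y = exp(Z) with Z ~ N(0,1) stands in for a hydrologic model run, and coefficients are computed by Gauss-Hermite quadrature rather than the authors' implementation):

```python
import numpy as np
from math import factorial

# Probabilists' Hermite quadrature nodes/weights (weight exp(-x^2/2)),
# normalized so that sum(w * f(x)) approximates E[f(Z)] for Z ~ N(0, 1).
x, w = np.polynomial.hermite_e.hermegauss(20)
w = w / np.sqrt(2 * np.pi)

def He(k, pts):
    # probabilists' Hermite polynomial He_k evaluated at pts
    c = np.zeros(k + 1)
    c[k] = 1.0
    return np.polynomial.hermite_e.hermeval(pts, c)

y = np.exp(x)   # "model runs" at the quadrature nodes

# projection: c_k = E[y * He_k] / k!  (He_k are orthogonal with norm k!)
coef = [np.sum(w * y * He(k, x)) / factorial(k) for k in range(6)]

mean_pce = coef[0]                                    # E[y] is just c_0
var_pce = sum(coef[k] ** 2 * factorial(k) for k in range(1, 6))
```

Once the coefficients are in hand, the mean is the zeroth coefficient and the variance is a weighted sum of squared coefficients, with no further model runs required; this is the "computationally inexpensive" post-processing the abstract refers to.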
Preliminary results on the fracture analysis of multi-site cracking of lap joints in aircraft skins
NASA Astrophysics Data System (ADS)
Beuth, J. L., Jr.; Hutchinson, John W.
1992-07-01
Results of a fracture mechanics analysis relevant to fatigue crack growth at rivets in lap joints of aircraft skins are presented. Multi-site damage (MSD) is receiving increased attention within the context of problems of aging aircraft. Fracture analyses previously carried out include small-scale modeling of rivet/skin interactions, larger-scale two-dimensional models of lap joints similar to that developed here, and full scale three-dimensional models of large portions of the aircraft fuselage. Fatigue testing efforts have included flat coupon specimens, two-dimensional lap joint tests, and full scale tests on specimens designed to closely duplicate aircraft sections. Most of this work is documented in the proceedings of previous symposia on the aging aircraft problem. The effect MSD has on the ability of skin stiffeners to arrest the growth of long skin cracks is a particularly important topic that remains to be addressed. One of the most striking features of MSD observed in joints of some test sections and in the joints of some of the older aircraft fuselages is the relative uniformity of the fatigue cracks from rivet to rivet along an extended row of rivets. This regularity suggests that nucleation of the cracks must not be overly difficult. Moreover, it indicates that there is some mechanism which keeps longer cracks from running away from shorter ones, or, equivalently, a mechanism for shorter cracks to catch up with longer cracks. This basic mechanism has not been identified, and one of the objectives of the work is to see to what extent the mechanism is revealed by a fracture analysis of the MSD cracks. Another related aim is to present accurate stress intensity factor variations with crack length which can be used to estimate fatigue crack growth lifetimes once cracks have been initiated. Results are presented which illustrate the influence of load shedding from rivets with long cracks to neighboring rivets with shorter cracks. 
Results are also included for the effect of residual stress due to the riveting process itself.
Preliminary results on the fracture analysis of multi-site cracking of lap joints in aircraft skins
NASA Technical Reports Server (NTRS)
Beuth, J. L., Jr.; Hutchinson, John W.
1992-01-01
Results of a fracture mechanics analysis relevant to fatigue crack growth at rivets in lap joints of aircraft skins are presented. Multi-site damage (MSD) is receiving increased attention within the context of problems of aging aircraft. Fracture analyses previously carried out include small-scale modeling of rivet/skin interactions, larger-scale two-dimensional models of lap joints similar to the one developed here, and full-scale three-dimensional models of large portions of the aircraft fuselage. Fatigue testing efforts have included flat coupon specimens, two-dimensional lap joint tests, and full-scale tests on specimens designed to closely duplicate aircraft sections. Most of this work is documented in the proceedings of previous symposia on the aging aircraft problem. The effect MSD has on the ability of skin stiffeners to arrest the growth of long skin cracks is a particularly important topic that remains to be addressed. One of the most striking features of MSD observed in joints of some test sections and in the joints of some of the older aircraft fuselages is the relative uniformity of the fatigue cracks from rivet to rivet along an extended row of rivets. This regularity suggests that nucleation of the cracks must not be overly difficult. Moreover, it indicates that there is some mechanism which keeps longer cracks from running away from shorter ones, or, equivalently, a mechanism for shorter cracks to catch up with longer cracks. This basic mechanism has not been identified, and one of the objectives of the work is to see to what extent the mechanism is revealed by a fracture analysis of the MSD cracks. Another related aim is to present accurate stress intensity factor variations with crack length which can be used to estimate fatigue crack growth lifetimes once cracks have been initiated. Results are presented which illustrate the influence of load shedding from rivets with long cracks to neighboring rivets with shorter cracks.
Results are also included for the effect of residual stress due to the riveting process itself.
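Once stress intensity factor variations with crack length are available, fatigue lifetimes are commonly estimated from a crack-growth law. The sketch below is a hedged illustration of that step, not the paper's analysis: a Paris-law model da/dN = C (ΔK)^m integrated numerically, where the coefficients, stress range, and the simple ΔK = Δσ√(πa) geometry factor are hypothetical placeholder values.

```python
import math

# Hypothetical Paris-law crack-growth life estimate (illustrative values only).
def delta_k(ds, a):
    """Stress intensity factor range for a simple through crack: dK = ds*sqrt(pi*a)."""
    return ds * math.sqrt(math.pi * a)

def paris_life(a0, af, ds, C=1e-11, m=4.0, steps=20000):
    """Cycles to grow a crack from a0 to af: N = integral of da / (C * dK^m),
    evaluated with the midpoint rule."""
    h = (af - a0) / steps
    total = 0.0
    for i in range(steps):
        a = a0 + (i + 0.5) * h
        total += h / (C * delta_k(ds, a) ** m)
    return total

# Grow a 1 mm crack to 10 mm under a 100 (stress-unit) range.
N = paris_life(1e-3, 1e-2, ds=100.0)
```

For m = 4 this integral has the closed form N = (1/a0 - 1/af) / (C Δσ⁴ π²), which the midpoint rule reproduces closely.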
An Improved Zero Potential Circuit for Readout of a Two-Dimensional Resistive Sensor Array
Wu, Jian-Feng; Wang, Feng; Wang, Qi; Li, Jian-Qing; Song, Ai-Guo
2016-01-01
In the traditional zero potential circuit, a single operational amplifier (op-amp) in negative feedback accesses one element of a two-dimensional (2-D) resistive sensor array in a shared row-column fashion, but the circuit suffers from crosstalk because the bypass currents of the non-scanned elements are injected into the array's non-scanned electrodes held at zero potential. To suppress this crosstalk, we first designed an improved zero potential circuit that uses one additional op-amp in negative feedback to sample the total bypass current and uses it to calculate a precise resistance for the element being tested (EBT). The improved setting non-scanned-electrode zero potential circuit (S-NSE-ZPC) is given as an example for analyzing and verifying the performance of the improved zero potential circuit. Second, the effects of different parameters of the resistive sensor array and its readout circuit on the EBT's measurement accuracy were simulated in NI Multisim 12 for both the S-NSE-ZPC and the improved S-NSE-ZPC. Third, some features of the improved circuit were verified with experiments on a prototype circuit. The results were then discussed and conclusions drawn. The experimental results show that the improved circuit, though it requires one more op-amp, one more resistor, and one more sampling channel, accesses the EBT in the 2-D resistive sensor array more accurately. PMID:27929410
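The correction the second op-amp enables can be sketched as simple arithmetic. This is an illustrative model, not the paper's circuit: the virtual ground holds the scanned electrode at zero potential, so subtracting the sampled total bypass current from the driven current leaves the current through the element being tested. All component values are hypothetical.

```python
# Illustrative sketch (hypothetical values, not the paper's S-NSE-ZPC):
# correct the measured element resistance by subtracting the total bypass
# current sampled by a second op-amp.
def ebt_resistance(v_drive, i_total, i_bypass):
    """Resistance of the element being tested (EBT), assuming the op-amp
    virtual ground keeps the scanned electrode at zero potential."""
    return v_drive / (i_total - i_bypass)

# Example: 1 V drive, 1.2 mA total current, of which 0.2 mA bypasses the EBT.
r = ebt_resistance(1.0, 1.2e-3, 0.2e-3)   # approximately 1000 ohms
```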
Surface matching for correlation of virtual models: Theory and application
NASA Technical Reports Server (NTRS)
Caracciolo, Roberto; Fanton, Francesco; Gasparetto, Alessandro
1994-01-01
Virtual reality can enable a robot user to generate and test off-line, in a virtual environment, a sequence of operations to be executed by the robot in an assembly cell. Virtual models of objects must be correlated to the real entities they represent by means of a suitable transformation. A solution to the correlation problem, which is basically a problem of three-dimensional adjustment, has been found by exploiting surface matching theory. An iterative algorithm has been developed which matches the geometric surface representing the shape of the virtual model of an object with a set of points measured on that surface in the real world. A peculiar feature of the algorithm is that it works even when there is no one-to-one correspondence between the measured points and those representing the surface model. Furthermore, the problem of convergence to local minima is avoided by defining a starting point that ensures convergence to the global minimum. The developed algorithm has been tested by simulation. Finally, this paper proposes a specific application: correlating a robot cell equipped for biomedical use with its virtual representation.
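The core of such a correlation is a least-squares rigid alignment between model and measured points. The sketch below is not the paper's algorithm (which handles unknown correspondences iteratively); it shows the simpler known-correspondence building block (the Kabsch/Procrustes solution) that ICP-style surface matching typically repeats after re-pairing points.

```python
import numpy as np

# Minimal sketch: least-squares rigid alignment of two point sets with KNOWN
# one-to-one correspondence (Kabsch/Procrustes). ICP-style surface matching
# repeats a step like this after re-pairing measured and model points.
def rigid_align(P, Q):
    """Return R, t minimizing sum ||R @ p + t - q||^2 over corresponding rows."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(0)
P = rng.normal(size=(20, 3))                    # "virtual model" points
a = np.pi / 6                                   # known 30-degree rotation about z
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
Q = P @ R_true.T + t_true                       # "measured" points

R_est, t_est = rigid_align(P, Q)
err = np.max(np.abs(P @ R_est.T + t_est - Q))
```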
One-dimensional Coulomb problem in Dirac materials
NASA Astrophysics Data System (ADS)
Downing, C. A.; Portnoi, M. E.
2014-11-01
We investigate the one-dimensional Coulomb potential with application to a class of quasirelativistic systems, so-called Dirac-Weyl materials, described by matrix Hamiltonians. We obtain the exact solution of the shifted and truncated Coulomb problems, with the wave functions expressed in terms of special functions (namely, Whittaker functions), while the energy spectrum must be determined via solutions to transcendental equations. Most notably, there are critical band gaps below which certain low-lying quantum states are missing in a manifestation of atomic collapse.
NASA Astrophysics Data System (ADS)
Nazarov, Anton
2012-11-01
In this paper we present Affine.m, a program for computations in the representation theory of finite-dimensional and affine Lie algebras, and describe the implemented algorithms. The algorithms are based on the properties of weights and Weyl symmetry. Computation of weight multiplicities in irreducible and Verma modules, branching of representations, and tensor product decomposition are the most important problems for us. These problems have numerous applications in physics, and we provide some examples of these applications. The program is implemented in the popular computer algebra system Mathematica and works with finite-dimensional and affine Lie algebras.
Catalogue identifier: AENA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENB_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, UK
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 24 844
No. of bytes in distributed program, including test data, etc.: 1 045 908
Distribution format: tar.gz
Programming language: Mathematica
Computer: i386-i686, x86_64
Operating system: Linux, Windows, Mac OS, Solaris
RAM: 5-500 Mb
Classification: 4.2, 5
Nature of problem: The representation theory of finite-dimensional Lie algebras has many applications in different branches of physics, including elementary particle physics, molecular physics, and nuclear physics. Representations of affine Lie algebras appear in string theories and in the two-dimensional conformal field theory used for the description of critical phenomena in two-dimensional systems. Lie symmetries also play a major role in the study of quantum integrable systems.
Solution method: We work with weights and roots of finite-dimensional and affine Lie algebras and use Weyl symmetry extensively. The central problems, which are the computations of weight multiplicities and of branching and fusion coefficients, are solved using one general recurrent algorithm based on a generalization of the Weyl character formula. We also offer an alternative implementation based on the Freudenthal multiplicity formula, which can be faster in some cases.
Restrictions: Computational complexity grows quickly with the rank of an algebra, so computations for algebras of rank greater than 8 are not practical.
Unusual features: We offer the possibility of using traditional mathematical notation for the objects of the representation theory of Lie algebras if Affine.m is used in the Mathematica notebook interface.
Running time: From seconds to days, depending on the rank of the algebra and the complexity of the representation.
Numerical analysis of singular solutions of two-dimensional problems of asymmetric elasticity
NASA Astrophysics Data System (ADS)
Korepanov, V. V.; Matveenko, V. P.; Fedorov, A. Yu.; Shardakov, I. N.
2013-07-01
An algorithm for the numerical analysis of singular solutions of two-dimensional problems of asymmetric elasticity is considered. The algorithm is based on separation of a power-law dependence from the finite-element solution in a neighborhood of singular points in the domain under study, where singular solutions are possible. The obtained power-law dependencies allow one to conclude whether the stresses have singularities and what the character of these singularities is. The algorithm was tested for problems of classical elasticity by comparing the stress singularity exponents obtained by the proposed method and from known analytic solutions. Problems with various cases of singular points, namely, body surface points at which either the smoothness of the surface is violated, or the type of boundary conditions is changed, or distinct materials are in contact, are considered as applications. The stress singularity exponents obtained by using the models of classical and asymmetric elasticity are compared. It is shown that, in the case of cracks, the stress singularity exponents are the same for the elasticity models under study, but for other cases of singular points, the stress singularity exponents obtained on the basis of asymmetric elasticity have insignificant quantitative distinctions from the solutions of the classical elasticity.
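The separation of a power-law dependence from a finite-element solution near a singular point amounts to fitting σ(r) ≈ C r^m to sampled near-point stresses. A hedged sketch of that fitting step, on synthetic data with the classical crack exponent m = -0.5 (the paper's own data and smoothing are not reproduced here):

```python
import math

# Recover a stress-singularity exponent from sampled near-point stresses
# sigma(r) ~ C * r**m by a log-log least-squares fit. Synthetic data uses
# the classical crack value m = -0.5.
def fit_power_law(rs, sigmas):
    """Fit sigma = C * r**m; return (C, m) via least squares in log space."""
    xs = [math.log(r) for r in rs]
    ys = [math.log(s) for s in sigmas]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    m = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
    C = math.exp(ybar - m * xbar)
    return C, m

rs = [0.01 * k for k in range(1, 11)]        # radii sampled near the point
sigmas = [2.0 * r ** (-0.5) for r in rs]     # synthetic FE-like stresses
C, m = fit_power_law(rs, sigmas)
```

A fitted m < 0 indicates a stress singularity; m ≈ -0.5 is the classical crack-tip value the paper uses for verification.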
The PAC-MAN model: Benchmark case for linear acoustics in computational physics
NASA Astrophysics Data System (ADS)
Ziegelwanger, Harald; Reiter, Paul
2017-10-01
Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well known example for such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut-out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.
NASA Technical Reports Server (NTRS)
Stein, M.; Stein, P. A.
1978-01-01
Approximate solutions for three nonlinear orthotropic plate problems are presented: (1) a thick plate attached to a pad having nonlinear material properties which, in turn, is attached to a substructure which is then deformed; (2) a long plate loaded in inplane longitudinal compression beyond its buckling load; and (3) a long plate loaded in inplane shear beyond its buckling load. For all three problems, the two dimensional plate equations are reduced to one dimensional equations in the y-direction by using a one dimensional trigonometric approximation in the x-direction. Each problem uses different trigonometric terms. Solutions are obtained using an existing algorithm for simultaneous, first order, nonlinear, ordinary differential equations subject to two point boundary conditions. Ordinary differential equations are derived to determine the variable coefficients of the trigonometric terms.
Improved finite element methodology for integrated thermal structural analysis
NASA Technical Reports Server (NTRS)
Dechaumphai, P.; Thornton, E. A.
1982-01-01
An integrated thermal-structural finite element approach for efficient coupling of thermal and structural analysis is presented. New thermal finite elements which yield exact nodal and element temperatures for one dimensional linear steady state heat transfer problems are developed. A nodeless variable formulation is used to establish improved thermal finite elements for one dimensional nonlinear transient and two dimensional linear transient heat transfer problems. The thermal finite elements provide detailed temperature distributions without using additional element nodes and permit a common discretization with lower order congruent structural finite elements. The accuracy of the integrated approach is evaluated by comparisons with analytical solutions and conventional finite element thermal structural analyses for a number of academic and more realistic problems. Results indicate that the approach provides a significant improvement in the accuracy and efficiency of thermal stress analysis for structures with complex temperature distributions.
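The benchmark property the new elements build on can be seen already with standard linear elements: for one-dimensional linear steady-state conduction, the finite element solution is exact at the nodes. A minimal sketch (ordinary linear elements, not the paper's improved or nodeless-variable elements):

```python
# 1-D steady conduction -k u'' = q on [0,1] with u(0) = u(1) = 0, discretized
# with n standard linear finite elements; nodal temperatures come out EXACT.
# Pure-Python tridiagonal (Thomas) solve.
def fe_heat_1d(n, k=1.0, q=1.0):
    h = 1.0 / n
    m = n - 1                      # number of interior nodes
    a = [-k / h] * m               # sub-diagonal of the stiffness matrix
    b = [2.0 * k / h] * m          # diagonal
    c = [-k / h] * m               # super-diagonal
    d = [q * h] * m                # consistent load vector for constant q
    for i in range(1, m):          # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * m                  # back substitution
    u[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return [0.0] + u + [0.0]       # include the fixed boundary nodes

u = fe_heat_1d(8)
exact = [x * (1 - x) / 2 for x in [i / 8 for i in range(9)]]   # analytic solution
```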
Spatial visualization in physics problem solving.
Kozhevnikov, Maria; Motes, Michael A; Hegarty, Mary
2007-07-08
Three studies were conducted to examine the relation of spatial visualization to solving kinematics problems that involved either predicting the two-dimensional motion of an object, translating from one frame of reference to another, or interpreting kinematics graphs. In Study 1, 60 physics-naïve students were administered kinematics problems and spatial visualization ability tests. In Study 2, 17 (8 high- and 9 low-spatial ability) additional students completed think-aloud protocols while they solved the kinematics problems. In Study 3, the eye movements of fifteen (9 high- and 6 low-spatial ability) students were recorded while the students solved kinematics problems. In contrast to high-spatial students, most low-spatial students did not combine two motion vectors, were unable to switch frames of reference, and tended to interpret graphs literally. The results of the study suggest an important relationship between spatial visualization ability and solving kinematics problems with multiple spatial parameters. 2007 Cognitive Science Society, Inc.
Solution to the sign problem in a frustrated quantum impurity model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hann, Connor T., E-mail: connor.hann@yale.edu; Huffman, Emilie; Chandrasekharan, Shailesh
2017-01-15
In this work we solve the sign problem of a frustrated quantum impurity model consisting of three quantum spin-half chains interacting through an anti-ferromagnetic Heisenberg interaction at one end. We first map the model into a repulsive Hubbard model of spin-half fermions hopping on three independent one dimensional chains that interact through a triangular hopping at one end. We then convert the fermion model into an inhomogeneous one dimensional model and express the partition function as a weighted sum over fermion worldline configurations. By imposing a pairing of fermion worldlines in half the space we show that all negative weight configurations can be eliminated. This pairing naturally leads to the original frustrated quantum spin model at half filling and thus solves its sign problem.
NASA Astrophysics Data System (ADS)
Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi
2017-06-01
In numerical modeling of subsurface flow and transport problems, formation properties may not be deterministically characterized, which leads to uncertainty in simulation results. In this study, we propose a sparse grid collocation method, which adopts nested quadrature rules with delay and transformation to quantify the uncertainty of model solutions. We show that the nested Kronrod-Patterson-Hermite quadrature is more efficient than the unnested Gauss-Hermite quadrature. We compare the convergence rates of various quadrature rules including the domain truncation and domain mapping approaches. To further improve accuracy and efficiency, we present a delayed process in selecting quadrature nodes and a transformed process for approximating unsmooth or discontinuous solutions. The proposed method is tested by an analytical function and in one-dimensional single-phase and two-phase flow problems with different spatial variances and correlation lengths. An additional example is given to demonstrate its applicability to three-dimensional black-oil models. It is found from these examples that the proposed method provides a promising approach for obtaining satisfactory estimation of the solution statistics and is much more efficient than the Monte-Carlo simulations.
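The unnested baseline the study compares against can be sketched in a few lines: a Gauss-Hermite rule estimating the mean of a model output under a Gaussian input. This hedged sketch is not the paper's nested Kronrod-Patterson-Hermite construction (whose levels reuse nodes); it only shows the quadrature idea.

```python
import math
import numpy as np

# Gauss-Hermite estimate of E[f(X)] for X ~ N(0, 1). The weight function of
# hermgauss is e^{-x^2}, so nodes are rescaled by sqrt(2) and weights by
# 1/sqrt(pi) to match the standard normal density.
def gauss_hermite_mean(f, n):
    x, w = np.polynomial.hermite.hermgauss(n)
    return float(np.sum(w * f(math.sqrt(2.0) * x)) / math.sqrt(math.pi))

# Example "model output": f(x) = e^x, whose exact mean is e^{1/2} (lognormal mean).
est = gauss_hermite_mean(np.exp, 10)
```

Ten nodes already reproduce the exact value to many digits for this smooth integrand; the paper's delayed/transformed node selection targets exactly the unsmooth cases where such fast convergence breaks down.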
Computer analysis of multicircuit shells of revolution by the field method
NASA Technical Reports Server (NTRS)
Cohen, G. A.
1975-01-01
The field method, presented previously for the solution of even-order linear boundary value problems defined on one-dimensional open branch domains, is extended to boundary value problems defined on one-dimensional domains containing circuits. This method converts the boundary value problem into two successive numerically stable initial value problems, which may be solved by standard forward integration techniques. In addition, a new method for the treatment of singular boundary conditions is presented. This method, which amounts to a partial interchange of the roles of force and displacement variables, is problem independent with respect to both accuracy and speed of execution. This method was implemented in a computer program to calculate the static response of ring stiffened orthotropic multicircuit shells of revolution to asymmetric loads. Solutions are presented for sample problems which illustrate the accuracy and efficiency of the method.
Solution methods for one-dimensional viscoelastic problems
NASA Technical Reports Server (NTRS)
Stubstad, John M.; Simitses, George J.
1987-01-01
A recently developed differential methodology for the solution of one-dimensional nonlinear viscoelastic problems is presented. Using the example of an eccentrically loaded cantilever beam-column, the results from the differential formulation are compared to results generated using a previously published integral solution technique. It is shown that the results obtained from these distinct methodologies exhibit a surprisingly high degree of correlation with one another. A discussion of the various factors affecting the numerical accuracy and rate of convergence of these two procedures is also included. Finally, the influences of some 'higher order' effects, such as straining along the centroidal axis, are discussed.
NASA Technical Reports Server (NTRS)
Pizzo, Michelle; Daryabeigi, Kamran; Glass, David
2015-01-01
The ability to solve the heat conduction equation is needed when designing materials to be used on vehicles exposed to extremely high temperatures, e.g., vehicles used for atmospheric entry or hypersonic flight. When using test and flight data, computational methods such as finite difference schemes may be used to solve both the direct heat conduction problem, i.e., solving between internal temperature measurements, and the inverse heat conduction problem, i.e., using the direct solution to march forward in space to the surface of the material to estimate both surface temperature and heat flux. This research first discusses the methods used in developing a computational code to solve both the direct and inverse heat transfer problems, using one-dimensional, centered, implicit finite volume schemes and one-dimensional, centered, explicit space marching techniques. The developed code assumed the boundary conditions to be specified time-varying temperatures and also considered temperature-dependent thermal properties. The research then discusses the results of analyzing temperature data measured while radiantly heating a carbon/carbon specimen up to 1920 °F. The temperature was measured using thermocouple (TC) plugs (small carbon/carbon material specimens), with four TC plugs embedded in the larger carbon/carbon specimen. The purpose of analyzing the test data was to estimate the surface heat flux and temperature values from the internal temperature measurements using direct and inverse heat transfer methods, thus aiding in the thermal and structural design and analysis of high-temperature vehicles.
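The direct problem alone can be illustrated compactly. This is a hedged sketch with constant properties and a simple explicit scheme, not the paper's implicit finite volume code: central differences in space, forward marching in time, with prescribed boundary temperatures.

```python
import math

# Explicit (FTCS) marching for u_t = alpha * u_xx on [0,1] with prescribed
# boundary temperatures u(0,t) = u(1,t) = 0 and a sine initial profile,
# which decays analytically as sin(pi x) * exp(-pi^2 * alpha * t).
def heat_ftcs(nx, dt, steps, alpha=1.0):
    dx = 1.0 / (nx - 1)
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme stability requires r <= 1/2"
    u = [math.sin(math.pi * i * dx) for i in range(nx)]
    for _ in range(steps):
        u = [0.0] + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                     for i in range(1, nx - 1)] + [0.0]
    return u

u = heat_ftcs(nx=21, dt=1e-3, steps=100)                 # march to t = 0.1
exact_mid = math.sin(math.pi * 0.5) * math.exp(-math.pi ** 2 * 0.1)
```

The inverse problem in the abstract runs the complementary direction: it marches in space from interior measurements toward the surface, which is far more sensitive to measurement noise than this direct march.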
NASA Technical Reports Server (NTRS)
Yee, H. C.; Warming, R. F.; Harten, A.
1985-01-01
First-order, second-order, and implicit total variation diminishing (TVD) schemes are reviewed using the modified flux approach. Some transient and steady-state calculations are then carried out to illustrate the applicability of these schemes to the Euler equations. It is shown that the second-order explicit TVD schemes generate good shock resolution for both transient and steady-state one-dimensional and two-dimensional problems. Numerical experiments for a quasi-one-dimensional nozzle problem show that the second-order implicit TVD scheme produces a fairly rapid convergence rate and remains stable even when running with a Courant number of 10^6.
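The defining TVD property is that the total variation of the solution never increases, which is what suppresses spurious oscillations at shocks. A hedged sketch, not the paper's modified-flux formulation: a second-order slope-limited (minmod/MUSCL-type) update for linear advection, with the total variation checked directly.

```python
import numpy as np

# Second-order slope-limited scheme for u_t + a u_x = 0, a > 0, on a periodic
# grid. Total variation sum |u_{i+1} - u_i| must not increase (TVD property).
def minmod(a, b):
    return np.where(a * b <= 0.0, 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)))

def tvd_step(u, c):
    """One step with Courant number c = a*dt/dx (0 < c <= 1)."""
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited cell slopes
    ul = u + 0.5 * (1.0 - c) * s                        # left state at i+1/2
    return u - c * (ul - np.roll(ul, 1))

def total_variation(u):
    return float(np.sum(np.abs(np.roll(u, -1) - u)))

u = np.where(np.arange(100) < 50, 1.0, 0.0)             # square wave (a shock pair)
tv0 = total_variation(u)
for _ in range(200):
    u = tvd_step(u, 0.5)
tv = total_variation(u)
```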
A transformation method for constrained-function minimization
NASA Technical Reports Server (NTRS)
Park, S. K.
1975-01-01
A direct method for constrained-function minimization is discussed. The method involves the construction of an appropriate function mapping all of one finite dimensional space onto the region defined by the constraints. Functions which produce such a transformation are constructed for a variety of constraint regions including, for example, those arising from linear and quadratic inequalities and equalities. In addition, the computational performance of this method is studied in the situation where the Davidon-Fletcher-Powell algorithm is used to solve the resulting unconstrained problem. Good performance is demonstrated for 19 test problems by achieving rapid convergence to a solution from several widely separated starting points.
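The transformation idea can be shown with a one-dimensional box constraint. This is a hypothetical example, not one of the paper's 19 test problems, and it substitutes plain finite-difference gradient descent for the Davidon-Fletcher-Powell algorithm the paper uses: map all of the real line onto [lo, hi] with a tanh change of variables, then minimize the composed function without constraints.

```python
import math

# Map all of R onto the interval [lo, hi]; any y yields a feasible x.
def to_box(y, lo, hi):
    return lo + (hi - lo) * 0.5 * (math.tanh(y) + 1.0)

# Simple unconstrained minimizer (finite-difference gradient descent),
# standing in for DFP purely for illustration.
def minimize_unconstrained(g, y0, lr=0.5, iters=2000, h=1e-6):
    y = y0
    for _ in range(iters):
        grad = (g(y + h) - g(y - h)) / (2.0 * h)
        y -= lr * grad
    return y

f = lambda x: (x - 0.3) ** 2            # constrained problem: min f on 0 <= x <= 1
g = lambda y: f(to_box(y, 0.0, 1.0))    # unconstrained composition
x_opt = to_box(minimize_unconstrained(g, 0.0), 0.0, 1.0)
```

Every iterate is feasible by construction, which is the point of the transformation approach; the trade-off is that boundary optima are only approached asymptotically through the mapping.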
Turbine blade and vane heat flux sensor development, phase 1
NASA Technical Reports Server (NTRS)
Atkinson, W. H.; Cyr, M. A.; Strange, R. R.
1984-01-01
Heat flux sensors available for installation in the hot section airfoils of advanced aircraft gas turbine engines were developed. Two heat flux sensors were designed, fabricated, calibrated, and tested. Measurement techniques are compared in an atmospheric pressure combustor rig test. Sensors, embedded thermocouple and the Gordon gauge, were fabricated that met the geometric and fabricability requirements and could withstand the hot section environmental conditions. Calibration data indicate that these sensors yielded repeatable results and have the potential to meet the accuracy goal of measuring local heat flux to within 5%. Thermal cycle tests and thermal soak tests indicated that the sensors are capable of surviving extended periods of exposure to the environment conditions in the turbine. Problems in calibration of the sensors caused by severe non-one dimensional heat flow were encountered. Modifications to the calibration techniques are needed to minimize this problem and proof testing of the sensors in an engine is needed to verify the designs.
Numerical computations on one-dimensional inverse scattering problems
NASA Technical Reports Server (NTRS)
Dunn, M. H.; Hariharan, S. I.
1983-01-01
An approximate method to determine the index of refraction of a dielectric obstacle is presented. For simplicity one dimensional models of electromagnetic scattering are treated. The governing equations yield a second order boundary value problem, in which the index of refraction appears as a functional parameter. The availability of reflection coefficients yield two additional boundary conditions. The index of refraction by a k-th order spline which can be written as a linear combination of B-splines is approximated. For N distinct reflection coefficients, the resulting N boundary value problems yield a system of N nonlinear equations in N unknowns which are the coefficients of the B-splines.
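The approximation space used for the index of refraction can be sketched directly. A hedged illustration: Cox-de Boor evaluation of B-spline basis functions, whose linear combinations represent the unknown coefficient function. The knot vector and degree below are arbitrary illustrative choices, not the paper's.

```python
# Cox-de Boor recursion for the i-th B-spline of degree p on knot vector t.
def bspline_basis(i, p, x, t):
    if p == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0
    if t[i + p] != t[i]:
        left = (x - t[i]) / (t[i + p] - t[i]) * bspline_basis(i, p - 1, x, t)
    right = 0.0
    if t[i + p + 1] != t[i + 1]:
        right = (t[i + p + 1] - x) / (t[i + p + 1] - t[i + 1]) \
                * bspline_basis(i + 1, p - 1, x, t)
    return left + right

# Clamped quadratic (p = 2) basis on [0, 3]: len(knots) - p - 1 = 5 functions.
knots = [0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 3.0, 3.0]
nbasis = len(knots) - 2 - 1

def refractive_index(coeffs, x):
    """n(x) as a linear combination of B-splines, as in the inverse problem."""
    return sum(c * bspline_basis(i, 2, x, knots) for i, c in enumerate(coeffs))
```

In the inverse problem, the N coefficients of this combination are the unknowns determined by matching the N measured reflection coefficients.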
NASA Technical Reports Server (NTRS)
Chan, S. T. K.; Lee, C. H.; Brashears, M. R.
1975-01-01
A finite element algorithm for solving unsteady, three-dimensional high velocity impact problems is presented. A computer program was developed based on the Eulerian hydroelasto-viscoplastic formulation and the utilization of the theorem of weak solutions. The equations solved consist of conservation of mass, momentum, and energy, equation of state, and appropriate constitutive equations. The solution technique is a time-dependent finite element analysis utilizing three-dimensional isoparametric elements, in conjunction with a generalized two-step time integration scheme. The developed code was demonstrated by solving one-dimensional as well as three-dimensional impact problems for both the inviscid hydrodynamic model and the hydroelasto-viscoplastic model.
Intertwined Hamiltonians in two-dimensional curved spaces
NASA Astrophysics Data System (ADS)
Aghababaei Samani, Keivan; Zarei, Mina
2005-04-01
The problem of intertwined Hamiltonians in two-dimensional curved spaces is investigated. Explicit results are obtained for Euclidean plane, Minkowski plane, Poincaré half plane (AdS2), de Sitter plane (dS2), sphere, and torus. It is shown that the intertwining operator is related to the Killing vector fields and the isometry group of corresponding space. It is shown that the intertwined potentials are closely connected to the integral curves of the Killing vector fields. Two problems are considered as applications of the formalism presented in the paper. The first one is the problem of Hamiltonians with equispaced energy levels and the second one is the problem of Hamiltonians whose spectrum is like the spectrum of a free particle.
NASA Astrophysics Data System (ADS)
Paardekooper, S.-J.
2017-08-01
We present a new method for numerical hydrodynamics which uses a multidimensional generalization of the Roe solver and operates on an unstructured triangular mesh. The main advantage over traditional methods based on Riemann solvers, which commonly use one-dimensional flux estimates as building blocks for a multidimensional integration, is its inherently multidimensional nature, and as a consequence its ability to recognize multidimensional stationary states that are not hydrostatic. A second novelty is the focus on graphics processing units (GPUs). By tailoring the algorithms specifically to GPUs, we are able to get speedups of 100-250 compared to a desktop machine. We compare the multidimensional upwind scheme to a traditional, dimensionally split implementation of the Roe solver on several test problems, and we find that the new method significantly outperforms the Roe solver in almost all cases. This comes with increased computational costs per time-step, which makes the new method approximately a factor of 2 slower than a dimensionally split scheme acting on a structured grid.
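The Roe solver underlying both schemes is easiest to see in one dimension. A hedged scalar sketch (the paper's solver is the multidimensional matrix version for the hydrodynamic system): Roe's numerical flux for Burgers' equation, where the Roe average (uL + uR)/2 satisfies the defining linearization property f(uR) - f(uL) = a (uR - uL).

```python
# Roe's numerical flux for the scalar Burgers equation u_t + (u^2/2)_x = 0.
def f(u):
    return 0.5 * u * u

def roe_flux(ul, ur):
    a = 0.5 * (ul + ur)   # Roe-averaged wave speed for this flux
    return 0.5 * (f(ul) + f(ur)) - 0.5 * abs(a) * (ur - ul)
```

Two properties make this a building block for upwind finite-volume schemes: it is consistent (roe_flux(u, u) = f(u)), and for a genuinely one-signed wave speed it reduces to the pure upwind flux.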
NASA Astrophysics Data System (ADS)
Rochette, D.; Clain, S.; André, P.; Bussière, W.; Gentils, F.
2007-05-01
Medium voltage (MV) cells have to respect standards (for example IEC ones (IEC TC 17C 2003 IEC 62271-200 High Voltage Switchgear and Controlgear—Part 200 1st edn)) that define security levels against internal arc faults such as an accidental electrical arc occurring in the apparatus. New protection filters based on porous materials are developed to provide better energy absorption properties and a higher protection level for people. To study the filter behaviour during a major electrical accident, a two-dimensional model is proposed. The main point is the use of a dedicated numerical scheme for a non-conservative hyperbolic problem. We present a numerical simulation of the process during the first 0.2 s when the safety valve bursts and we compare the numerical results with tests carried out in a high power test laboratory on real electrical apparatus.
NASA Astrophysics Data System (ADS)
Fauzi, Ahmad; Ratna Kawuri, Kunthi; Pratiwi, Retno
2017-01-01
Researchers of students' conceptual change usually collect data from written tests and interviews. Moreover, reports of conceptual change often simply refer to changes in concepts, such as on a test, without any identification of the learning processes that have taken place. Research has shown that students have difficulties with vectors in university introductory physics courses and in high school physics courses. In this study, we intended to explore students' understanding of one-dimensional and two-dimensional vectors from multiple perspectives, namely through a test perspective and an interview perspective. Our research study adopted a mixed-methodology design. The participants were sixty third-semester students of a physics education department. The data were collected by test and interviews. We divided students' understanding of one-dimensional and two-dimensional vectors into two categories, namely vector skills for the addition of one-dimensional and two-dimensional vectors, and the relation between vector skills and conceptual understanding. From the investigation, only 44% of the students provided correct answers for vector skills for the addition of one-dimensional and two-dimensional vectors, and only 27% provided correct answers for the relation between vector skills and conceptual understanding.
NASA Astrophysics Data System (ADS)
Regis, Rommel G.
2014-02-01
This article develops two new algorithms for constrained expensive black-box optimization that use radial basis function surrogates for the objective and constraint functions. These algorithms are called COBRA and Extended ConstrLMSRBF and, unlike previous surrogate-based approaches, they can be used for high-dimensional problems where all initial points are infeasible. They both follow a two-phase approach where the first phase finds a feasible point while the second phase improves this feasible point. COBRA and Extended ConstrLMSRBF are compared with alternative methods on 20 test problems and on the MOPTA08 benchmark automotive problem (D.R. Jones, Presented at MOPTA 2008), which has 124 decision variables and 68 black-box inequality constraints. The alternatives include a sequential penalty derivative-free algorithm, a direct search method with kriging surrogates, and two multistart methods. Numerical results show that COBRA algorithms are competitive with Extended ConstrLMSRBF and they generally outperform the alternatives on the MOPTA08 problem and most of the test problems.
Application of SEAWAT to select variable-density and viscosity problems
Dausman, Alyssa M.; Langevin, Christian D.; Thorne, Danny T.; Sukop, Michael C.
2010-01-01
SEAWAT is a combined version of MODFLOW and MT3DMS, designed to simulate three-dimensional, variable-density, saturated groundwater flow. The most recent version of the SEAWAT program, SEAWAT Version 4 (or SEAWAT_V4), supports equations of state for fluid density and viscosity. In SEAWAT_V4, fluid density can be calculated as a function of one or more MT3DMS species, and optionally, fluid pressure. Fluid viscosity is calculated as a function of one or more MT3DMS species, and the program also includes additional functions for representing the dependence of fluid viscosity on temperature. This report documents testing of and experimentation with SEAWAT_V4 with six previously published problems that include various combinations of density-dependent flow due to temperature variations and/or concentration variations of one or more species. Some of the problems also include variations in viscosity that result from temperature differences in water and oil. Comparisons between the results of SEAWAT_V4 and other published results are generally consistent with one another, with minor differences considered acceptable.
NASA Astrophysics Data System (ADS)
Safaei, S.; Haghnegahdar, A.; Razavi, S.
2016-12-01
Complex environmental models are now the primary tools used to inform decision makers about the current and future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that must be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding model behavior, but also helps reduce the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in reducing computational burden when applied to systems with an extra-large number of input factors ( 100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method to reduce problem dimensionality by distinguishing important from unimportant input factors.
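As an illustrative sketch of the variogram idea behind VARS (not the authors' implementation; the toy model, sample size, and perturbation scale are assumptions), a directional variogram of the response surface separates an influential input factor from a negligible one:

```python
import random

random.seed(0)

def model(x):
    # Toy response: strongly sensitive to x[0], barely sensitive to x[1]
    return 10.0 * x[0] ** 2 + 0.01 * x[1]

def directional_variogram(i, h, n=2000):
    # gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2], sampled along factor i
    total = 0.0
    for _ in range(n):
        x = [random.random(), random.random()]
        xp = list(x)
        xp[i] += h
        total += (model(xp) - model(x)) ** 2
    return 0.5 * total / n

g0 = directional_variogram(0, 0.1)
g1 = directional_variogram(1, 0.1)
assert g0 > g1  # factor 0 dominates the response surface; factor 1 is unimportant
```

A flat variogram along a factor's direction marks it as a candidate for exclusion from calibration.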
NASA Astrophysics Data System (ADS)
Dimitriadis, Panayiotis; Tegos, Aristoteles; Oikonomou, Athanasios; Pagana, Vassiliki; Koukouvinos, Antonios; Mamassis, Nikos; Koutsoyiannis, Demetris; Efstratiadis, Andreas
2016-03-01
One-dimensional and quasi-two-dimensional hydraulic freeware models (HEC-RAS, LISFLOOD-FP and FLO-2d) are widely used for flood inundation mapping. These models are tested on a benchmark case with a mixed rectangular-triangular channel cross-section. Using a Monte Carlo approach, we perform an extended sensitivity analysis by simultaneously varying the input discharge, the longitudinal and lateral gradients and the roughness coefficients, as well as the grid cell size. Based on statistical analysis of three output variables of interest, i.e. the water depths at the inflow and outflow locations and the total flood volume, we investigate the uncertainty enclosed in different model configurations and flow conditions, without the influence of errors and other assumptions on topography, channel geometry and boundary conditions. Moreover, we estimate the uncertainty associated with each input variable and compare it to the overall uncertainty. The outcomes of the benchmark analysis are further highlighted by applying the three models to real-world flood propagation problems, in the context of two challenging case studies in Greece.
Flight control with adaptive critic neural network
NASA Astrophysics Data System (ADS)
Han, Dongchen
2001-10-01
In this dissertation, the adaptive critic neural network technique is applied to solve complex nonlinear system control problems. Based on dynamic programming, the adaptive critic neural network can embed the optimal solution into a neural network. Though trained off-line, the neural network forms a real-time feedback controller. Because of its general interpolation properties, the neurocontroller has inherent robustness. The problems solved here are agile missile control for the U.S. Air Force and a midcourse guidance law for the U.S. Navy. In the first three papers, the neural network was used to control an air-to-air agile missile implementing a minimum-time heading reversal in a vertical plane under the following conditions: a system without constraints, a system with a control inequality constraint, and a system with a state inequality constraint. While the agile missile is a one-dimensional problem, the midcourse guidance law is the first test bed for a multi-dimensional problem. In the fourth paper, the neurocontroller is synthesized to guide a surface-to-air missile to a fixed final condition, and to a flexible final condition from a variable initial condition. In order to evaluate the adaptive critic neural network approach, the numerical solutions for these cases are also obtained by solving the two-point boundary value problem with a shooting method. All of the results showed that the adaptive critic neural network can solve complex nonlinear system control problems.
λ elements for one-dimensional singular problems with known strength of singularity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, K.K.; Surana, K.S.
1996-10-01
This paper presents a new and general procedure for designing special elements, called λ elements, for one-dimensional singular problems where the strength of the singularity is known. The λ elements presented here are of type C^0. These elements also provide inter-element C^0 continuity with p-version elements. The λ elements do not require precise knowledge of the extent of the singular zone, i.e., their use may be extended beyond the singular zone. When λ elements are used at the singularity, a singular problem behaves like a smooth problem, thereby eliminating the need for h- and p-adaptive processes altogether. One-dimensional steady-state radial flow of an upper convected Maxwell fluid is considered as a sample problem. A least-squares approach (the least-squares finite element formulation, LSFEF) is used to construct the integral form (error functional I) from the differential equations. Numerical results presented for radially inward flow with inner radius r_i = 0.1, 0.01, 0.001, 0.0001, 0.00001, and a Deborah number of 2 (De = 2) demonstrate the accuracy, faster convergence of the iterative solution procedure, faster convergence rate of the error functional, and mesh-independent characteristics of the λ elements regardless of the severity of the singularity.
High Performance Parallel Analysis of Coupled Problems for Aircraft Propulsion
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Lanteri, S.; Maman, N.; Piperno, S.; Gumaste, U.
1994-01-01
In order to predict the dynamic response of a flexible structure in a fluid flow, the equations of motion of the structure and the fluid must be solved simultaneously. In this paper, we present several partitioned procedures for time-integrating this coupled problem and discuss their merits in terms of accuracy, stability, heterogeneous computing, I/O transfers, subcycling, and parallel processing. All theoretical results are derived for a one-dimensional piston model problem with a compressible flow, because the complete three-dimensional aeroelastic problem is difficult to analyze mathematically. However, the insight gained from the analysis of the coupled piston problem and the conclusions drawn from its numerical investigation are confirmed with the numerical simulation of the two-dimensional transient aeroelastic response of a flexible panel in a transonic nonlinear Euler flow regime.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benseghir, Rym, E-mail: benseghirrym@ymail.com, E-mail: benseghirrym@ymail.com; Benchettah, Azzedine, E-mail: abenchettah@hotmail.com; Raynaud de Fitte, Paul, E-mail: prf@univ-rouen.fr
2015-11-30
A stochastic equation system corresponding to the description of the motion of a barotropic viscous gas in a discretized one-dimensional domain, with a weight regularizing the density, is considered. In [2], the existence of an invariant measure was established for this discretized problem in the stationary case. In this paper, applying a slightly modified version of Khas'minskii's theorem [5], we generalize this result to the periodic case by proving the existence of a periodic measure for this problem.
Fourier analysis of finite element preconditioned collocation schemes
NASA Technical Reports Server (NTRS)
Deville, Michel O.; Mund, Ernest H.
1990-01-01
The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions for the eigenvalues are obtained with the use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.
An exact solution of solute transport by one-dimensional random velocity fields
Cvetkovic, V.D.; Dagan, G.; Shapiro, A.M.
1991-01-01
The problem of one-dimensional transport of passive solute by a random steady velocity field is investigated. This problem is representative of solute movement in porous media, for example, in vertical flow through a horizontally stratified formation of variable porosity with a constant flux at the soil surface. Relating moments of particle travel time and displacement, exact expressions for the advection and dispersion coefficients in the Fokker-Planck equation are compared with the perturbation results for large distances. The first- and second-order approximations for the dispersion coefficient are robust for a lognormal velocity field. The mean Lagrangian velocity is the harmonic mean of the Eulerian velocity for large distances. This is an artifact of one-dimensional flow, where the continuity equation provides for a divergence-free fluid flux rather than a divergence-free fluid velocity. © 1991 Springer-Verlag.
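The harmonic-mean claim can be checked numerically: for steady one-dimensional flow at constant flux, a particle's travel time is additive across layers, so the effective (mean Lagrangian) velocity over the whole column is exactly the harmonic mean of the layer (Eulerian) velocities. A minimal sketch (the lognormal parameters are illustrative, not taken from the paper):

```python
import random

random.seed(1)
n, dx = 1000, 0.1
# Lognormal Eulerian velocity field, sampled layer by layer
v = [random.lognormvariate(0.0, 0.5) for _ in range(n)]

# Travel time accumulates layer by layer at constant flux
t = sum(dx / vi for vi in v)
L = n * dx
v_lagrangian = L / t                         # effective (mean Lagrangian) velocity
v_harmonic = n / sum(1.0 / vi for vi in v)   # harmonic mean of Eulerian velocities

assert abs(v_lagrangian - v_harmonic) < 1e-12
```

The identity is exact because L/t = n*dx / (dx * sum(1/v_i)) = n / sum(1/v_i); only floating-point rounding separates the two expressions.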
Discontinuous finite element method for vector radiative transfer
NASA Astrophysics Data System (ADS)
Wang, Cun-Hai; Yi, Hong-Liang; Tan, He-Ping
2017-03-01
The discontinuous finite element method (DFEM) is applied to solve vector radiative transfer in participating media. A derivation of the discrete form of the vector radiation governing equations is presented, in which the angular space is discretized by the discrete-ordinates approach with a local refinement modification, and the spatial domain is discretized into finite non-overlapping discontinuous elements. The elements in the solution domain are connected by modelling the boundary numerical flux between adjacent elements, which makes the DFEM numerically stable for solving radiative transfer equations. Various problems of vector radiative transfer are tested to verify the performance of the developed DFEM, including vector radiative transfer in a one-dimensional parallel slab containing a Mie/Rayleigh/strongly forward-scattering medium and in a two-dimensional square medium. The DFEM results agree very well with benchmark solutions in published references, showing that the DFEM developed in this paper is accurate and effective for solving vector radiative transfer problems.
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1992-01-01
The nonlinear stability of compact schemes for shock calculations is investigated. In recent years, compact schemes have been used in various numerical simulations, including direct numerical simulation of turbulence. However, to apply them to problems containing shocks, one has to resolve the problems of spurious numerical oscillation and nonlinear instability. A framework to apply nonlinear limiting to a local mean is introduced. The resulting scheme can be proven total variation stable (1D) or maximum norm stable (multi-D) and produces good numerical results in the test cases. The result is summarized in the preprint entitled 'Nonlinearly Stable Compact Schemes for Shock Calculations', which was submitted to the SIAM Journal on Numerical Analysis. Research was continued on issues related to two- and three-dimensional essentially non-oscillatory (ENO) schemes. The main research topics include: parallel implementation of ENO schemes on Connection Machines; boundary conditions; shock interaction with hydrogen bubbles, in preparation for the full combustion simulation; and direct numerical simulation of compressible sheared turbulence.
Design of an image encryption scheme based on a multiple chaotic map
NASA Astrophysics Data System (ADS)
Tong, Xiao-Jun
2013-07-01
In order to solve the problems that chaos degenerates under limited computer precision and that the Cat map has a small key space, this paper presents a chaotic map based on topological conjugacy, whose chaotic characteristics are proved by the Devaney definition. In order to produce a large key space, a Cat map named the block Cat map is also designed for the permutation process, based on multiple-dimensional chaotic maps. The image encryption algorithm is based on permutation-substitution, and each key is controlled by a different chaotic map. Entropy analysis, differential analysis, weak-key analysis, statistical analysis, cipher randomness analysis, and cipher sensitivity analysis depending on key and plaintext are introduced to test the security of the new image encryption scheme. Through comparison of the proposed scheme with the AES, DES and Logistic encryption methods, we conclude that the image encryption method solves the problem of the low precision of one-dimensional chaotic functions and has higher speed and higher security.
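The paper's block Cat map is not specified here; as a hedged sketch, the standard Arnold Cat map illustrates the kind of area-preserving, invertible permutation step such image encryption schemes use (the grid size N is an assumption for illustration):

```python
N = 8  # illustrative image side length

def cat_map(x, y, n=N):
    # Standard Arnold Cat map: (x, y) -> (x + y, x + 2y) mod n.
    # The matrix [[1, 1], [1, 2]] has determinant 1, so the map is invertible.
    return (x + y) % n, (x + 2 * y) % n

# Because the map is a bijection on the N x N pixel grid, it is a valid
# permutation step: no two pixels collide.
positions = [(x, y) for x in range(N) for y in range(N)]
permuted = [cat_map(x, y) for x, y in positions]
assert len(set(permuted)) == N * N
```

In a full scheme this permutation stage would be followed by a substitution (diffusion) stage keyed by the chaotic maps, as the abstract describes.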
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1998-01-01
This project concerns the development of high-order, non-oscillatory schemes for computational fluid dynamics. Algorithm analysis, implementation, and applications are performed. Collaborations with NASA scientists have been carried out to ensure that the research is relevant to NASA objectives. The combination of the ENO finite difference method with a spectral method in two space dimensions is considered, jointly with Cai [3]. The resulting scheme behaves well for the two-dimensional test problems with or without shocks. Jointly with Cai and Gottlieb, we have also considered one-sided filters for spectral approximations to discontinuous functions [2]. We proved theoretically the existence of filters that recover spectral accuracy up to the discontinuity, and we constructed such filters for practical calculations.
Parallel Visualization of Large-Scale Aerodynamics Calculations: A Case Study on the Cray T3E
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Crockett, Thomas W.
1999-01-01
This paper reports the performance of a parallel volume rendering algorithm for visualizing a large-scale, unstructured-grid dataset produced by a three-dimensional aerodynamics simulation. This dataset, containing over 18 million tetrahedra, allows us to extend our performance results to a problem which is more than 30 times larger than the one we examined previously. This high resolution dataset also allows us to see fine, three-dimensional features in the flow field. All our tests were performed on the Silicon Graphics Inc. (SGI)/Cray T3E operated by NASA's Goddard Space Flight Center. Using 511 processors, a rendering rate of almost 9 million tetrahedra/second was achieved with a parallel overhead of 26%.
Modelling of Heat and Moisture Loss Through NBC Ensembles
1991-11-01
the heat and moisture transport through various NBC clothing ensembles. The analysis involves simplifying the three-dimensional physical problem of... clothing on a person to that of a one-dimensional problem of flow through parallel layers of clothing and air. Body temperatures are calculated based on... prescribed work rates, ambient conditions and clothing properties. Sweat response and respiration rates are estimated based on empirical data to
Comparison of four approaches to a rock facies classification problem
Dubois, M.K.; Bohling, Geoffrey C.; Chakrabarti, S.
2007-01-01
In this study, seven classifiers based on four different approaches were tested on a rock facies classification problem: classical parametric methods using Bayes' rule, and non-parametric methods using fuzzy logic, k-nearest neighbor, and a feed-forward back-propagating artificial neural network. The objective was to determine the most effective classifier for geologic facies prediction in wells without cores in the Panoma gas field in southwest Kansas. The study data include 3600 samples of known rock facies class (from core), each sample having either four or five measured properties (wire-line log curves) and two derived geologic properties (geologic constraining variables). The sample set was divided into two subsets, one for training and one for testing the ability of the trained classifier to correctly assign classes. Artificial neural networks clearly outperformed all other classifiers and are effective tools for this particular classification problem. Classical parametric models were inadequate due to the nature of the predictor variables (high-dimensional and not linearly correlated) and the feature space of the classes (overlapping). The other non-parametric methods tested, k-nearest neighbor and fuzzy logic, would need considerable improvement to match the neural network's effectiveness, but further work, possibly combining certain aspects of the three non-parametric methods, may be justified. © 2006 Elsevier Ltd. All rights reserved.
Walters, Glenn D; Diamond, Pamela M; Magaletta, Philip R
2010-03-01
Three indicators derived from the Personality Assessment Inventory (PAI) Alcohol Problems scale (ALC), namely tolerance/high consumption, loss of control, and negative social and psychological consequences, were subjected to taxometric analysis (mean above minus below a cut, MAMBAC; maximum covariance, MAXCOV; and latent mode factor analysis, L-Mode) in 1,374 federal prison inmates (905 males, 469 females). Whereas the total sample yielded ambiguous results, the male subsample produced dimensional results and the female subsample produced taxonic results. Interpreting these findings in light of previous taxometric research on alcohol abuse and dependence, it is speculated that while alcohol use disorders may be taxonic in female offenders, they are probably both taxonic and dimensional in male offenders. Two models of alcohol use disorder in males are considered: one in which the diagnostic features are categorical and the severity of symptomatology is dimensional, and one in which some diagnostic features (e.g., withdrawal) are taxonic and other features (e.g., social problems) are dimensional.
Conformal mapping and bound states in bent waveguides
NASA Astrophysics Data System (ADS)
Sadurní, E.; Schleich, W. P.
2010-12-01
Is it possible to trap a quantum particle in an open geometry? In this work we deal with the boundary value problem of the stationary Schroedinger (or Helmholtz) equation within a waveguide with straight segments and a rectangular bend. The problem can be reduced to a one-dimensional matrix Schroedinger equation using two descriptions: oblique modes and conformal coordinates. We use a corner-corrected WKB formalism to find the energies of the one-dimensional problem. It is shown that the presence of bound states is an effect due to the boundary alone, with no classical counterpart for this geometry. The conformal description proves to be simpler, as the coupling of transverse modes is not essential in this case.
The program FANS-3D (finite analytic numerical simulation 3-dimensional) and its applications
NASA Technical Reports Server (NTRS)
Bravo, Ramiro H.; Chen, Ching-Jen
1992-01-01
In this study, the program named FANS-3D (Finite Analytic Numerical Simulation-3 Dimensional) is presented. FANS-3D was designed to solve problems of incompressible fluid flow and combined modes of heat transfer. It solves problems with conduction and convection modes of heat transfer in laminar flow, with provisions for radiation and turbulent flows. It can solve singular or conjugate modes of heat transfer. It also solves problems in natural convection, using the Boussinesq approximation. FANS-3D was designed to solve heat transfer problems inside one, two and three dimensional geometries that can be represented by orthogonal planes in a Cartesian coordinate system. It can solve internal and external flows using appropriate boundary conditions such as symmetric, periodic and user specified.
Artificial viscosity to cure the carbuncle phenomenon: The three-dimensional case
NASA Astrophysics Data System (ADS)
Rodionov, Alexander V.
2018-05-01
The carbuncle phenomenon (also known as the shock instability) has remained a serious computational challenge since it was first noticed and described [1,2]. In [3] the author presented a summary on this subject and proposed a new technique for curing the problem. Its idea is to introduce some dissipation in the form of right-hand sides of the Navier-Stokes equations into the basic method of solving Euler equations; in so doing, the molecular viscosity coefficient is replaced by the artificial viscosity coefficient. The new cure for the carbuncle flaw was tested and tuned for the case of using first-order schemes in two-dimensional simulations. Its efficiency was demonstrated on several well-known test problems. In this paper we extend the technique of [3] to the case of three-dimensional simulations.
A gyrokinetic one-dimensional scrape-off layer model of an edge-localized mode heat pulse
Shi, E. L.; Hakim, A. H.; Hammett, G. W.
2015-02-03
An electrostatic gyrokinetic-based model is applied to simulate parallel plasma transport in the scrape-off layer to a divertor plate. We focus on a test problem that has been studied previously, using parameters chosen to model a heat pulse driven by an edge-localized mode in JET. Previous work has used direct particle-in-cell equations with full dynamics, or Vlasov or fluid equations with only parallel dynamics. With the use of the gyrokinetic quasineutrality equation and logical sheath boundary conditions, spatial and temporal resolution requirements are no longer set by the electron Debye length and plasma frequency, respectively. Finally, this test problem also helps illustrate some of the physics contained in the Hamiltonian form of the gyrokinetic equations and some of the numerical challenges in developing an edge gyrokinetic code.
On some structure-turbulence interaction problems
NASA Technical Reports Server (NTRS)
Maekawa, S.; Lin, Y. K.
1976-01-01
The interactions between a turbulent flow and a structure responding to its excitation were studied. The turbulence was typical of that associated with a boundary layer, having a cross-spectral density indicative of convection and statistical decay. A number of structural models were considered. Among the one-dimensional models were an unsupported infinite beam and a periodically supported infinite beam. The fuselage construction of an aircraft was then considered. For the two-dimensional case, a simple membrane was used to illustrate the type of formulation applicable to most two-dimensional structures. Both the one-dimensional and two-dimensional structures studied were backed by a cavity filled with an initially quiescent fluid, to simulate the acoustic environment when the structure forms one side of a cabin of a sea vessel or aircraft.
Scaling between Wind Tunnels-Results Accuracy in Two-Dimensional Testing
NASA Astrophysics Data System (ADS)
Rasuo, Bosko
The establishment of exact two-dimensional flow conditions in wind tunnels is a very difficult problem. This has been evident for wind tunnels of all types and scales. In this paper, the principal factors that influence the accuracy of two-dimensional wind tunnel test results are analyzed. The influences of the Reynolds number, Mach number and wall interference with reference to solid and flow blockage (blockage of wake) as well as the influence of side-wall boundary layer control are analyzed. Interesting results are brought to light regarding the Reynolds number effects of the test model versus the Reynolds number effects of the facility in subsonic and transonic flow.
A new Lagrangian method for three-dimensional steady supersonic flows
NASA Technical Reports Server (NTRS)
Loh, Ching-Yuen; Liou, Meng-Sing
1993-01-01
In this report, the new Lagrangian method introduced by Loh and Hui is extended to three-dimensional, steady supersonic flow computation. The derivation of the conservation form and the solution of the local Riemann problem using the Godunov and high-resolution TVD (total variation diminishing) schemes are presented. This new approach is accurate and robust, capable of handling complicated geometry and interactions between discontinuous waves. Test problems show that the extended Lagrangian method retains all the advantages of the two-dimensional method (e.g., crisp resolution of a slip surface (contact discontinuity) and automatic grid generation). In this report, we also suggest a novel three-dimensional Riemann problem in which interesting and intricate flow features are present.
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1982-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems.
Approximate Approaches to the One-Dimensional Finite Potential Well
ERIC Educational Resources Information Center
Singh, Shilpi; Pathak, Praveen; Singh, Vijay A.
2011-01-01
The one-dimensional finite well is a textbook problem. We propose approximate approaches to obtain the energy levels of the well. The finite well is also encountered in semiconductor heterostructures, where the carrier mass inside the well (m_i) is taken to be distinct from the mass outside (m_o). A relevant parameter is the mass…
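For the standard equal-mass well (m_i = m_o), the even-parity bound states follow from the textbook quantization condition z tan z = sqrt(z0^2 - z^2), where z0 encodes the well strength. A minimal bisection sketch (the value z0 = 5 is illustrative, not from the article):

```python
import math

z0 = 5.0  # dimensionless well strength, z0 = a*sqrt(2*m*V0)/hbar

def f(z):
    # Even-parity quantization condition: z*tan(z) - sqrt(z0^2 - z^2) = 0
    return z * math.tan(z) - math.sqrt(z0 * z0 - z * z)

def bisect(a, b):
    # Plain bisection; assumes f changes sign on [a, b]
    fa = f(a)
    for _ in range(100):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# One root per branch of tan(z) that fits below z0: two even states here
roots = [bisect(1e-9, math.pi / 2 - 1e-9),
         bisect(math.pi + 1e-9, 1.5 * math.pi - 1e-9)]
energies_over_V0 = [(z / z0) ** 2 for z in roots]  # E_n / V0 < 1 for bound states
assert all(e < 1.0 for e in energies_over_V0)
```

Counting the tan branches below z0 immediately gives the number of even bound states, which is the kind of estimate the approximate approaches target.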
Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data
NASA Astrophysics Data System (ADS)
Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.
2017-10-01
The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower-bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements mean windowing and singular value decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least-squares problem with multiparameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing more deeply the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
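As a sketch of the Tikhonov mechanism underlying 2DUPEN-type methods (not the I2DUPEN algorithm itself), regularization acts in the SVD basis through filter factors sigma^2/(sigma^2 + lambda): well-determined components pass almost unchanged while noise-dominated small singular values are suppressed (the values of sigma and lambda below are illustrative):

```python
def tikhonov_filter(sigma, lam):
    # Tikhonov filter factor applied to each singular component of the kernel:
    # the regularized solution is x = sum_i f_i * (u_i . b / sigma_i) * v_i
    # with f_i = sigma_i^2 / (sigma_i^2 + lambda).
    return sigma * sigma / (sigma * sigma + lam)

lam = 1e-2
factors = [tikhonov_filter(s, lam) for s in (1.0, 0.1, 0.01)]

# Large singular values pass nearly unchanged; small ones are damped out,
# which is what stabilizes a first-kind Fredholm (ill-posed) inversion.
assert factors[0] > 0.99 and factors[-1] < 0.01
```

The SVD filter mentioned in the abstract goes one step further and truncates the smallest components outright before the Tikhonov step.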
Location of acoustic emission sources generated by air flow
Kosel; Grabec; Muzic
2000-03-01
The location of continuous acoustic emission sources is a difficult problem in non-destructive testing. This article describes one-dimensional location of continuous acoustic emission sources using an intelligent locator. The intelligent locator solves a location problem based on learning from examples. To verify whether continuous acoustic emission caused by leakage air flow can be located accurately by the intelligent locator, an experiment on a thin aluminum band was performed. Results show that it is possible to determine an accurate location by using a combination of a cross-correlation function with an appropriate bandpass filter. With this combination, discrete and continuous acoustic emission sources can be located, using discrete acoustic emission sources for locator learning.
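The cross-correlation step can be sketched as follows: the lag that maximizes the cross-correlation of two sensor signals estimates the arrival-time difference, which (given the wave speed) fixes the one-dimensional source position along the band. The signals and the 7-sample delay below are invented for illustration:

```python
def cross_correlation_lag(x, y):
    # Return the lag at which the (brute-force) cross-correlation of x and y peaks
    n = len(x)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-n + 1, n):
        val = sum(x[i] * y[i + lag] for i in range(n) if 0 <= i + lag < n)
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

pulse = [1.0, 2.0, -1.0, 0.5]
src = [0.0] * 64
delayed = [0.0] * 64
for i, v in enumerate(pulse):
    src[20 + i] = v       # signal at sensor 1
    delayed[27 + i] = v   # same burst arriving 7 samples later at sensor 2

lag = cross_correlation_lag(src, delayed)
assert lag == 7  # recovered arrival-time difference in samples
```

In practice a bandpass filter is applied first, as the article notes, so that the correlation peak is not smeared by out-of-band noise.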
Oscillations and stability of numerical solutions of the heat conduction equation
NASA Technical Reports Server (NTRS)
Kozdoba, L. A.; Levi, E. V.
1976-01-01
The mathematical model and results of numerical solutions are given for the one-dimensional problem in which the linear equations are written in a rectangular coordinate system. All the computations are easily realizable for two- and three-dimensional problems when the equations are written in any coordinate system. Explicit and implicit schemes are shown in tabular form together with stability and oscillation criteria; the initial temperature distribution is taken to be uniform.
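The explicit scheme's stability behaviour can be illustrated directly: for u_t = alpha*u_xx, the forward-time centred-space (FTCS) update obeys the discrete maximum principle, hence stays bounded and oscillation-free, when r = alpha*dt/dx**2 <= 1/2. A minimal sketch (grid size and r = 0.4 are illustrative):

```python
def ftcs_step(u, r):
    # One explicit FTCS update of the 1-D heat equation with fixed ends;
    # interior update: u_i += r*(u_{i+1} - 2*u_i + u_{i-1})
    return [u[0]] + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                     for i in range(1, len(u) - 1)] + [u[-1]]

u = [0.0] * 21
u[10] = 1.0   # initial hot spot, ends held at zero
r = 0.4       # satisfies the stability criterion r <= 1/2
for _ in range(200):
    u = ftcs_step(u, r)

# For r <= 1/2 every new value is a convex combination of old neighbours,
# so the solution respects the discrete maximum principle.
assert 0.0 <= min(u) and max(u) <= 1.0
```

With r > 1/2 the coefficient 1 - 2r turns negative and the same loop produces growing oscillations, which is the instability the tabulated criteria rule out.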
One-Dimensional Forward–Forward Mean-Field Games
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gomes, Diogo A., E-mail: diogo.gomes@kaust.edu.sa; Nurbekyan, Levon; Sedjro, Marc
While the general theory for the terminal-initial value problem for mean-field games (MFGs) has achieved substantial progress, the corresponding forward–forward problem is still poorly understood, even in the one-dimensional setting. Here, we consider one-dimensional forward–forward MFGs and study the existence of solutions and their long-time convergence. First, we discuss the relation between these models and systems of conservation laws. In particular, we identify new conserved quantities and study some qualitative properties of these systems. Next, we introduce a class of wave-like equations that are equivalent to forward–forward MFGs, and we derive a novel formulation as a system of conservation laws. For first-order logarithmic forward–forward MFGs, we establish the existence of a global solution. Then, we consider a class of explicit solutions and show the existence of shocks. Finally, we examine parabolic forward–forward MFGs and establish the long-time convergence of the solutions.
Dissipative closures for statistical moments, fluid moments, and subgrid scales in plasma turbulence
NASA Astrophysics Data System (ADS)
Smith, Stephen Andrew
1997-11-01
Closures are necessary in the study of physical systems with large numbers of degrees of freedom when it is only possible to compute a small number of modes. The modes that are to be computed, the resolved modes, are coupled to unresolved modes that must be estimated. This thesis focuses on dissipative closure models for two problems that arise in the study of plasma turbulence: the fluid moment closure problem and the subgrid scale closure problem. The fluid moment closures of Hammett and Perkins (1990) were originally applied to a one-dimensional kinetic equation, the Vlasov equation. These closures are generalized in this thesis and applied to the stochastic oscillator problem, a standard paradigm problem for statistical closures. The linear theory of the Hammett-Perkins closures is shown to converge with increasing numbers of moments. A novel parameterized hyperviscosity is proposed for two-dimensional drift-wave turbulence. The magnitude and exponent of the hyperviscosity are expressed as functions of the large-scale advection velocity. Traditionally, hyperviscosities are applied to simulations with a fixed exponent that must be chosen arbitrarily. Expressing the exponent as a function of the simulation parameters eliminates this ambiguity. These functions are parameterized by comparing the hyperviscous dissipation to the subgrid dissipation calculated from direct numerical simulations. Tests of the parameterization demonstrate that it performs better than using no additional damping term or a standard hyperviscosity. Heuristic arguments are presented to extend this hyperviscosity model to three-dimensional (3D) drift-wave turbulence, where eddies are highly elongated along the field line. Preliminary results indicate that this generalized 3D hyperviscosity is capable of reducing the resolution requirements for 3D gyrofluid turbulence simulations.
Singh, Brajesh K; Srivastava, Vineet K
2015-04-01
The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations.
NASA Astrophysics Data System (ADS)
Siripatana, Chairat; Thongpan, Hathaikarn; Promraksa, Arwut
2017-03-01
This article explores a volumetric approach in formulating differential equations for a class of engineering flow problems involving component transfer within or between two phases. In contrast to the conventional formulation, which is based on linear velocities, this work proposes a slightly different approach based on volumetric flow rate, which is essentially constant in many industrial processes. In effect, many multi-dimensional flow problems found industrially can be simplified into multi-component or multi-phase, but one-dimensional, flow problems. The formulation is largely generic, covering counter-current, concurrent or batch, fixed and fluidized bed arrangements. It is also intended for use in start-up, shut-down, control and steady-state simulation. Since many realistic industrial operations are dynamic, with velocity and porosity varying with position, analytical solutions are rare and limited to only very simple cases. Thus we also provide a numerical solution using the Crank-Nicolson finite difference scheme. This solution is inherently stable, as tested against a few cases published in the literature. However, it is anticipated that, for unconfined flow or non-constant flow rate, the traditional formulation should be applied.
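The Crank-Nicolson scheme mentioned above can be sketched for the simplest constant-coefficient diffusion case; the domain, coefficients, and boundary conditions below are illustrative assumptions, not the paper's multi-phase formulation:

```python
import numpy as np

# Crank-Nicolson for u_t = D u_xx on [0, 1] with u = 0 at both ends.
# The scheme averages the implicit and explicit operators, which makes it
# second-order accurate in time and unconditionally stable.
n, D, dt, dx = 51, 1.0, 0.01, 1.0 / 50
x = np.linspace(0.0, 1.0, n)
u = np.sin(np.pi * x)        # exact solution: exp(-pi^2 D t) sin(pi x)

r = D * dt / dx**2
A = np.eye(n) * (1 + r)      # implicit (left-hand) operator
B = np.eye(n) * (1 - r)      # explicit (right-hand) operator
for k in range(1, n - 1):
    A[k, k - 1] = A[k, k + 1] = -r / 2
    B[k, k - 1] = B[k, k + 1] = r / 2
A[0], A[-1] = np.eye(n)[0], np.eye(n)[-1]   # pin boundary rows: u = 0
B[0], B[-1] = np.eye(n)[0], np.eye(n)[-1]

t = 0.0
for _ in range(10):
    u = np.linalg.solve(A, B @ u)
    t += dt
exact = np.exp(-np.pi**2 * D * t) * np.sin(np.pi * x)
err = np.abs(u - exact).max()
```

A production code would use a banded (tridiagonal) solver rather than a dense solve; the dense form is kept here for readability.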
Quantum field between moving mirrors: A three dimensional example
NASA Technical Reports Server (NTRS)
Hacyan, S.; Jauregui, Roco; Villarreal, Carlos
1995-01-01
The scalar quantum field between uniformly moving plates in three-dimensional space is studied. Field equations for Dirichlet boundary conditions are solved exactly. Comparison of the resulting wavefunctions with their instantaneous static counterparts is performed via Bogolubov coefficients. Unlike the one-dimensional problem, 'particle' creation as well as squeezing may occur. The time-dependent Casimir energy is also evaluated.
On the existence of solutions to a one-dimensional degenerate nonlinear wave equation
NASA Astrophysics Data System (ADS)
Hu, Yanbo
2018-07-01
This paper is concerned with the degenerate initial-boundary value problem for the one-dimensional nonlinear wave equation u_tt = ((1 + u)^a u_x)_x, which arises in a number of physical contexts. The global existence of smooth solutions to the degenerate problem is established under relaxed conditions on the initial-boundary data by the characteristic decomposition method. Moreover, we show that the solution is uniformly C^{1,α} continuous up to the degenerate boundary and that the degenerate curve is C^{1,α} continuous for α ∈ (0, min{a/(1+a), 1/(1+a)}).
Linearized compressible-flow theory for sonic flight speeds
NASA Technical Reports Server (NTRS)
Heaslet, Max A; Lomax, Harvard; Spreiter, John R
1950-01-01
The partial differential equation for the perturbation velocity potential is examined for free-stream Mach numbers close to and equal to one. It is found that, under the assumptions of linearized theory, solutions can be found consistent with the theory for lifting-surface problems both in stationary three-dimensional flow and in unsteady two-dimensional flow. Several examples are solved including a three dimensional swept-back wing and two dimensional harmonically-oscillating wing, both for a free stream Mach number equal to one. Momentum relations for the evaluation of wave and vortex drag are also discussed. (author)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, K.K.; Surana, K.S.
1996-10-01
This paper presents a new and general procedure for designing hierarchical and non-hierarchical special elements called λ elements for one-dimensional singular problems where the strength of the singularity is unknown. The λ element formulations presented here permit correct numerical simulation of linear as well as non-linear singular problems without a priori knowledge of the strength of the singularity. A procedure is also presented for determining the exact strength of the singularity using the converged solution. It is shown that in special instances, the general formulation of λ elements can also be made hierarchical. The λ elements presented here are of type C^0 and provide C^0 inter-element continuity with p-version elements. One-dimensional steady state radial flow of an upper convected Maxwell fluid is considered as a sample problem. Since in this case the λ_i are known, this problem provides a good example for investigating the performance of the formulation proposed here. A least squares approach (Least Squares Finite Element Formulation: LSFEF) is used to construct the integral form (error functional I) from the differential equations. Numerical studies are presented for radially inward flow of an upper convected Maxwell fluid with inner radius r_i = 0.1, 0.01, etc. and Deborah number De = 2.
An Exact, Compressible One-Dimensional Riemann Solver for General, Convex Equations of State
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamm, James Russell
2015-03-05
This note describes an algorithm with which to compute numerical solutions to the one-dimensional, Cartesian Riemann problem for compressible flow with general, convex equations of state. While high-level descriptions of this approach are to be found in the literature, this note contains most of the necessary details required to write software for this problem. This explanation corresponds to the approach used in the source code that evaluates solutions for the 1D, Cartesian Riemann problem with a JWL equation of state in the ExactPack package [16, 29]. Numerical examples are given with the proposed computational approach for a polytropic equation of state and for the JWL equation of state.
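For the polytropic special case, the heart of such an exact solver is a scalar root-finding problem for the star-state pressure. The sketch below applies a bracketing solve to Toro's two-wave velocity functions for the standard Sod shock tube (an assumption: exact solvers often use a Newton iteration instead):

```python
import numpy as np
from scipy.optimize import brentq

g = 1.4  # ratio of specific heats (polytropic gas)

def f_side(p, rho, pk, ak):
    """Velocity change across one wave as a function of star pressure p:
    shock branch for p > pk, rarefaction branch otherwise (Toro, Ch. 4)."""
    if p > pk:
        A = 2.0 / ((g + 1.0) * rho)
        B = (g - 1.0) / (g + 1.0) * pk
        return (p - pk) * np.sqrt(A / (p + B))
    return 2.0 * ak / (g - 1.0) * ((p / pk) ** ((g - 1.0) / (2.0 * g)) - 1.0)

# Sod shock tube: left state (rho, u, p) = (1, 0, 1), right = (0.125, 0, 0.1)
rhol, ul, pl = 1.0, 0.0, 1.0
rhor, ur, pr = 0.125, 0.0, 0.1
al = np.sqrt(g * pl / rhol)   # left sound speed
ar = np.sqrt(g * pr / rhor)   # right sound speed

pstar = brentq(lambda p: f_side(p, rhol, pl, al) + f_side(p, rhor, pr, ar)
               + (ur - ul), 1e-8, 10.0)
ustar = 0.5 * (ul + ur) + 0.5 * (f_side(pstar, rhor, pr, ar)
                                 - f_side(pstar, rhol, pl, al))
# Reference star-state values for Sod's problem: p* ≈ 0.30313, u* ≈ 0.92745
```

A general convex equation of state replaces the closed-form shock and rarefaction branches with numerically evaluated Hugoniot and isentrope integrals, which is the main subject of the note.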
A cubic spline approximation for problems in fluid mechanics
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Graves, R. A., Jr.
1975-01-01
A cubic spline approximation is presented which is suited for many fluid-mechanics problems. This procedure provides a high degree of accuracy, even with a nonuniform mesh, and leads to an accurate treatment of derivative boundary conditions. The truncation errors and stability limitations of several implicit and explicit integration schemes are presented. For two-dimensional flows, a spline-alternating-direction-implicit method is evaluated. The spline procedure is assessed, and results are presented for the one-dimensional nonlinear Burgers' equation, as well as the two-dimensional diffusion equation and the vorticity-stream function system describing the viscous flow in a driven cavity. Comparisons are made with analytic solutions for the first two problems and with finite-difference calculations for the cavity flow.
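The claimed behaviour, high accuracy even on a nonuniform mesh, is easy to check for plain interpolation with an off-the-shelf cubic spline (a generic check, not the paper's spline-alternating-direction-implicit solver):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Interpolate sin(x) on a random, nonuniform mesh and measure the error of
# the spline and of its first derivative on a fine evaluation grid.
rng = np.random.default_rng(1)
x = np.sort(np.concatenate([[0.0, 2 * np.pi],
                            rng.uniform(0, 2 * np.pi, 60)]))
s = CubicSpline(x, np.sin(x))

xf = np.linspace(0, 2 * np.pi, 1000)
err_f = np.abs(s(xf) - np.sin(xf)).max()        # O(h^4) interpolation error
err_d = np.abs(s(xf, 1) - np.cos(xf)).max()     # O(h^3) derivative error
```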
NASA Technical Reports Server (NTRS)
Cothran, E. K.
1982-01-01
The computer program written in support of one dimensional analytical approach to thermal modeling of Bridgman type crystal growth is presented. The program listing and flow charts are included, along with the complete thermal model. Sample problems include detailed comments on input and output to aid the first time user.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kavanagh, D.L.; Antchagno, M.J.; Egawa, E.K.
1960-12-31
Operating instructions are presented for DMM, a Remington Rand 1103A program using one-space-dimensional multigroup diffusion theory to calculate the reactivity or critical conditions and flux distribution of a multiregion reactor. Complete descriptions of the routines and problem input and output specifications are also included. (D.L.C.)
Lenarda, P; Paggi, M
A comprehensive computational framework based on the finite element method for the simulation of coupled hygro-thermo-mechanical problems in photovoltaic laminates is herein proposed. While the thermo-mechanical problem takes place in the three-dimensional space of the laminate, moisture diffusion occurs in a two-dimensional domain represented by the polymeric layers and by the vertical channel cracks in the solar cells. Therefore, a geometrical multi-scale solution strategy is pursued by solving the partial differential equations governing heat transfer and thermo-elasticity in the three-dimensional space, and the partial differential equation for moisture diffusion in the two-dimensional domains. By exploiting a staggered scheme, the thermo-mechanical problem is solved first via a fully implicit solution scheme in space and time, with a specific treatment of the polymeric layers as zero-thickness interfaces whose constitutive response is governed by a novel thermo-visco-elastic cohesive zone model based on fractional calculus. Temperature and relative displacements along the domains where moisture diffusion takes place are then projected to the finite element model of diffusion, coupled with the thermo-mechanical problem by the temperature- and crack-opening-dependent diffusion coefficient. The application of the proposed method to photovoltaic modules pinpoints two important physical aspects: (i) moisture diffusion in humidity freeze tests with a temperature-dependent diffusivity is a much slower process than in the case of a constant diffusion coefficient; (ii) channel cracks through silicon solar cells significantly enhance moisture diffusion and electric degradation, as confirmed by experimental tests.
Jonsson, Jakob; Munck, Ingrid; Volberg, Rachel; Carlbring, Per
2017-06-01
Recent increases in the number of online gambling sites have made gambling more available, which may contribute to an increase in gambling problems. At the same time, online gambling provides opportunities to introduce measures intended to prevent problem gambling. GamTest is an online test of gambling behavior that provides information that can be used to give players individualized feedback and recommendations for action. The aim of this study is to explore the dimensionality of GamTest and validate it against the Problem Gambling Severity Index (PGSI) and the gambler's own perceived problems. A recent psychometric approach, exploratory structural equation modeling (ESEM) is used. Well-defined constructs are identified in a two-step procedure fitting a traditional exploratory factor analysis model as well as a so-called bifactor model. Using data collected at four Nordic gambling sites in the autumn of 2009 (n = 10,402), the GamTest ESEM analyses indicate high correspondence with the players' own understanding of their problems and with the PGSI, a validated measure of problem gambling. We conclude that GamTest captures five dimensions of problematic gambling (i.e., overconsumption of money and time, and monetary, social and emotional negative consequences) with high reliability, and that the bifactor approach, composed of a general factor and specific residual factors, reproduces all these factors except one, the negative consequences emotional factor, which contributes to the dominant part of the general factor. The results underscore the importance of tailoring feedback and support to online gamblers with a particular focus on how to handle emotions in relation to their gambling behavior.
Parallel DSMC Solution of Three-Dimensional Flow Over a Finite Flat Plate
NASA Technical Reports Server (NTRS)
Nance, Robert P.; Wilmoth, Richard G.; Moon, Bongki; Hassan, H. A.; Saltz, Joel
1994-01-01
This paper describes a parallel implementation of the direct simulation Monte Carlo (DSMC) method. Runtime library support is used for scheduling and execution of communication between nodes, and domain decomposition is performed dynamically to maintain a good load balance. Performance tests are conducted using the code to evaluate various remapping and remapping-interval policies, and it is shown that a one-dimensional chain-partitioning method works best for the problems considered. The parallel code is then used to simulate the Mach 20 nitrogen flow over a finite-thickness flat plate. It is shown that the parallel algorithm produces results which compare well with experimental data. Moreover, it yields significantly faster execution times than the scalar code, as well as very good load-balance characteristics.
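The one-dimensional chain-partitioning idea can be sketched independently of the DSMC code: cut a row of per-cell workloads into contiguous blocks, one per processor, so that the heaviest block is as light as possible. The weights below are made up for illustration; the paper's runtime system is not reproduced:

```python
def chain_partition(w, k):
    """Smallest achievable maximum block weight when the integer weight
    list w is cut into at most k contiguous blocks (binary search on the
    answer plus a greedy feasibility check)."""
    lo, hi = max(w), sum(w)
    while lo < hi:
        mid = (lo + hi) // 2
        blocks, acc = 1, 0
        for x in w:
            if acc + x > mid:        # current block full: start a new one
                blocks, acc = blocks + 1, x
            else:
                acc += x
        if blocks <= k:              # mid is feasible: try smaller
            hi = mid
        else:                        # infeasible: need a larger bound
            lo = mid + 1
    return lo

best = chain_partition([7, 2, 5, 10, 8], 2)   # blocks [7,2,5] and [10,8]
```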
A Maximum Entropy Method for Particle Filtering
NASA Astrophysics Data System (ADS)
Eyink, Gregory L.; Kim, Sangil
2006-06-01
Standard ensemble or particle filtering schemes do not properly represent states of low prior probability when the number of available samples is too small, as is often the case in practical applications. We introduce here a set of parametric resampling methods to solve this problem. Motivated by a general H-theorem for relative entropy, we construct parametric models for the filter distributions as maximum-entropy/minimum-information models consistent with moments of the particle ensemble. When the prior distributions are modeled as mixtures of Gaussians, our method naturally generalizes the ensemble Kalman filter to systems with highly non-Gaussian statistics. We apply the new particle filters presented here to two simple test cases: a one-dimensional diffusion process in a double-well potential and the three-dimensional chaotic dynamical system of Lorenz.
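The moment-matching idea can be sketched in its simplest form: the Gaussian is the maximum-entropy density with a given mean and variance, so a depleted ensemble can be refreshed by resampling from the Gaussian fitted to its first two moments (the paper's filters use richer mixture models than this single-Gaussian toy):

```python
import numpy as np

# A small, noisy particle ensemble (illustrative numbers only).
rng = np.random.default_rng(2)
particles = rng.standard_normal(30) * 2.0 + 1.0

# Fit the first two moments and draw a large max-entropy resample.
m, s = particles.mean(), particles.std()
refreshed = rng.normal(m, s, 100_000)

# The refreshed ensemble reproduces the fitted moments.
dm = abs(refreshed.mean() - m)
ds = abs(refreshed.std() - s)
```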
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589
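The finite-dimensional approximation idea can be illustrated on the scalar delay equation x'(t) = -x(t-1) (a standard textbook example, not one of the paper's control problems): the infinite-dimensional state is approximated by a finite buffer of past values, carried alongside the current value as extra state components.

```python
# Euler discretisation of x'(t) = -x(t - 1) with history x(t) = 1 on [-1, 0].
# The exact solution by the method of steps gives x(2) = -0.5.
h = 0.001
n_delay = round(1.0 / h)          # history buffer = the extra "state"
hist = [1.0] * (n_delay + 1)      # samples of x on [-1, 0]

steps = round(2.0 / h)            # integrate to t = 2
for _ in range(steps):
    x_now, x_lag = hist[-1], hist[-1 - n_delay]
    hist.append(x_now - h * x_lag)

x_at_2 = hist[-1]                 # should be close to -0.5
```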
NASA Astrophysics Data System (ADS)
Liu, Changying; Wu, Xinyuan
2017-07-01
In this paper we explore arbitrarily high-order Lagrange collocation-type time-stepping schemes for effectively solving high-dimensional nonlinear Klein-Gordon equations with different boundary conditions. We begin with one-dimensional periodic boundary problems and first formulate an abstract ordinary differential equation (ODE) on a suitable infinite-dimensional function space based on operator spectrum theory. We then introduce an operator-variation-of-constants formula which is essential for the derivation of our arbitrarily high-order Lagrange collocation-type time-stepping schemes for the nonlinear abstract ODE. The nonlinear stability and convergence are rigorously analysed once the spatial differential operator is approximated by an appropriate positive semi-definite matrix under suitable smoothness assumptions. With regard to two-dimensional Dirichlet or Neumann boundary problems, our new time-stepping schemes coupled with the discrete Fast Sine/Cosine Transformation can be applied to simulate the two-dimensional nonlinear Klein-Gordon equations effectively. All essential features of the methodology are present in the one-dimensional and two-dimensional cases, although the schemes lend themselves equally well to the higher-dimensional case. The numerical simulation is implemented and the numerical results clearly demonstrate the advantage and effectiveness of our new schemes in comparison with existing numerical methods for solving nonlinear Klein-Gordon equations in the literature.
NASA Astrophysics Data System (ADS)
Felipe-Sesé, Luis; López-Alba, Elías; Siegmann, Philip; Díaz, Francisco A.
2016-12-01
A low-cost approach for three-dimensional (3-D) full-field displacement measurement is applied to the analysis of large displacements involved in two different mechanical events. The method is based on a combination of fringe projection and two-dimensional digital image correlation (DIC) techniques. The two techniques have been employed simultaneously using an RGB camera and a color encoding method; therefore, it is possible to measure in-plane and out-of-plane displacements at the same time with only one camera, even at high speed rates. The potential of the proposed methodology has been demonstrated in the analysis of large displacements during contact experiments on a soft material block. Displacement results have been successfully compared with those obtained using a 3D-DIC commercial system. Moreover, the analysis of displacements during an impact test on a metal plate was performed to emphasize the application of the methodology to dynamic events. Results show a good level of agreement, highlighting the potential of FP + 2D DIC as a low-cost alternative for the analysis of large-deformation problems.
NASA Astrophysics Data System (ADS)
Huyakorn, Peter S.; Springer, Everett P.; Guvanasen, Varut; Wadsworth, Terry D.
1986-12-01
A three-dimensional finite-element model for simulating water flow in variably saturated porous media is presented. The model formulation is general and capable of accommodating complex boundary conditions associated with seepage faces and infiltration or evaporation on the soil surface. Included in this formulation is an improved Picard algorithm designed to cope with severely nonlinear soil moisture relations. The algorithm is formulated for both rectangular and triangular prism elements. The element matrices are evaluated using an "influence coefficient" technique that avoids costly numerical integration. Spatial discretization of a three-dimensional region is performed using a vertical slicing approach designed to accommodate complex geometry with irregular boundaries, layering, and/or lateral discontinuities. Matrix solution is achieved using a slice successive overrelaxation scheme that permits a fairly large number of nodal unknowns (on the order of several thousand) to be handled efficiently on small minicomputers. Six examples are presented to verify and demonstrate the utility of the proposed finite-element model. The first four examples concern one- and two-dimensional flow problems used as sample problems to benchmark the code. The remaining examples concern three-dimensional problems. These problems are used to illustrate the performance of the proposed algorithm in three-dimensional situations involving seepage faces and anisotropic soil media.
Central Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present new, efficient central schemes for multi-dimensional Hamilton-Jacobi equations. These non-oscillatory, non-staggered schemes are first- and second-order accurate and are designed to scale well with an increasing dimension. Efficiency is obtained by carefully choosing the location of the evolution points and by using a one-dimensional projection step. First- and second-order accuracy is verified for a variety of multi-dimensional, convex and non-convex problems.
NASA Astrophysics Data System (ADS)
Guinot, Vincent
2017-11-01
The validity of flux and source term formulae used in shallow water models with porosity for urban flood simulations is assessed by solving the two-dimensional shallow water equations over computational domains representing periodic building layouts. The models under assessment are the Single Porosity (SP), the Integral Porosity (IP) and the Dual Integral Porosity (DIP) models. Nine different geometries are considered, and 18 two-dimensional initial value problems and 6 two-dimensional boundary value problems are defined. This results in a set of 96 fine grid simulations. Analysing the simulation results leads to the following conclusions: (i) the DIP flux and source term models outperform those of the SP and IP models when the Riemann problem is aligned with the main street directions; (ii) all models give erroneous flux closures when the Riemann problem is not aligned with one of the main street directions or when the main street directions are not orthogonal; (iii) the solution of the Riemann problem is self-similar in space-time when the street directions are orthogonal and the Riemann problem is aligned with one of them; (iv) a momentum balance confirms the existence of the transient momentum dissipation mechanism presented in the DIP model; (v) none of the source term models presented so far in the literature allows all flow configurations to be accounted for; (vi) future laboratory experiments aiming at the validation of flux and source term closures should focus on high-resolution, two-dimensional monitoring of both water depth and flow velocity fields.
ERIC Educational Resources Information Center
Ellison, Mark D.
2008-01-01
The one-dimensional particle-in-a-box model used to introduce quantum mechanics to students suffers from a tenuous connection to a real physical system. This article presents a two-dimensional model, the particle confined within a ring, that directly corresponds to observations of surface electrons in a metal trapped inside a circular barrier.…
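A closely related textbook computation: for a particle confined inside a hard circular wall of radius R, the energies are E_{l,n} = ħ² j_{l,n}²/(2mR²), with j_{l,n} the n-th zero of the Bessel function J_l. A sketch in natural units (ħ = m = R = 1; this is the standard result, not code from the article):

```python
import numpy as np
from scipy.special import jn_zeros

# First few energy levels of a particle in a circular hard-wall well,
# E = j_{l,n}^2 / 2 in units with hbar = m = R = 1.
energies = sorted(
    0.5 * z**2
    for l in range(3)          # angular momentum quantum numbers 0, 1, 2
    for z in jn_zeros(l, 3)    # first three zeros of J_l
)
ground = energies[0]           # 0.5 * j_{0,1}^2, with j_{0,1} ≈ 2.4048
```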
Assessment of WENO-extended two-fluid modelling in compressible multiphase flows
NASA Astrophysics Data System (ADS)
Kitamura, Keiichi; Nonomura, Taku
2017-03-01
The two-fluid modelling based on an advection-upwind-splitting-method (AUSM)-family numerical flux function, AUSM+-up, following the work by Chang and Liou [Journal of Computational Physics 2007; 225:840-873], has been successfully extended to fifth order by weighted-essentially-non-oscillatory (WENO) schemes. Its performance is then surveyed in several numerical tests. The results showed the desired performance in one-dimensional benchmark test problems: without relying upon an anti-diffusion device, the higher-order two-fluid method captures the phase interface within fewer grid points than the conventional second-order method, as well as a rarefaction wave and a very weak shock. At a high pressure ratio (e.g. 1,000), the interpolated variables appeared to affect the performance: the conservative-variable-based characteristic-wise WENO interpolation showed less sharp but more robust representations of the shocks and expansions than the primitive-variable-based counterpart. In the two-dimensional shock/droplet test case, however, only the primitive-variable-based WENO with a huge void fraction realised a stable computation.
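The scalar WENO5 reconstruction kernel (Jiang-Shu smoothness indicators and weights) that such an extension builds on can be sketched and verified on smooth data; the grid and test function below are illustrative:

```python
import numpy as np

def weno5_face(v):
    """Fifth-order WENO reconstruction of the left-biased interface value
    at x_{i+1/2} from the five cell averages v = [v_{i-2}, ..., v_{i+2}]."""
    eps = 1e-6
    # Jiang-Shu smoothness indicators of the three candidate stencils
    b0 = 13/12 * (v[0] - 2*v[1] + v[2])**2 + 0.25 * (v[0] - 4*v[1] + 3*v[2])**2
    b1 = 13/12 * (v[1] - 2*v[2] + v[3])**2 + 0.25 * (v[1] - v[3])**2
    b2 = 13/12 * (v[2] - 2*v[3] + v[4])**2 + 0.25 * (3*v[2] - 4*v[3] + v[4])**2
    a = np.array([0.1 / (eps + b0)**2, 0.6 / (eps + b1)**2, 0.3 / (eps + b2)**2])
    w = a / a.sum()            # nonlinear weights (linear weights 0.1/0.6/0.3)
    q = np.array([(2*v[0] - 7*v[1] + 11*v[2]) / 6,    # stencil {i-2, i-1, i}
                  ( -v[1] + 5*v[2] +  2*v[3]) / 6,    # stencil {i-1, i, i+1}
                  (2*v[2] + 5*v[3] -    v[4]) / 6])   # stencil {i, i+1, i+2}
    return w @ q

# Smooth test: exact cell averages of sin(x) reproduce the interface point
# values to high accuracy.
n = 100
dx = 2 * np.pi / n
edges = np.linspace(0, 2 * np.pi, n + 1)
avg = (np.cos(edges[:-1]) - np.cos(edges[1:])) / dx   # exact cell means
err = max(abs(weno5_face(avg[i-2:i+3]) - np.sin(edges[i + 1]))
          for i in range(2, n - 2))
```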
Numerical solutions of acoustic wave propagation problems using Euler computations
NASA Technical Reports Server (NTRS)
Hariharan, S. I.
1984-01-01
This paper reports solution procedures for problems arising from the study of engine inlet wave propagation. The first problem is the study of sound waves radiated from cylindrical inlets. The second one is a quasi-one-dimensional problem to study the effect of nonlinearities and the third one is the study of nonlinearities in two dimensions. In all three problems Euler computations are done with a fourth-order explicit scheme. For the first problem results are shown in agreement with experimental data and for the second problem comparisons are made with an existing asymptotic theory. The third problem is part of an ongoing work and preliminary results are presented for this case.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1995-01-01
This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
Cellular automatons applied to gas dynamic problems
NASA Technical Reports Server (NTRS)
Long, Lyle N.; Coopersmith, Robert M.; Mclachlan, B. G.
1987-01-01
This paper compares the results of a relatively new computational fluid dynamics method, cellular automatons, with experimental data and analytical results. This technique has been shown to qualitatively predict fluidlike behavior; however, there have been few published comparisons with experiment or other theories. Comparisons are made for a one-dimensional supersonic piston problem, Stokes' first problem, and the flow past a normal flat plate. These comparisons are used to assess the ability of the method to accurately model fluid dynamic behavior and to point out its limitations. Reasonable results were obtained for all three test cases, but the fundamental limitations of cellular automatons are numerous. It may be misleading, at this time, to say that cellular automatons are a computationally efficient technique: other methods, based on continuum or kinetic theory, would also be very efficient if they included as little of the physics.
Improving Audio Quality in Distance Learning Applications.
ERIC Educational Resources Information Center
Richardson, Craig H.
This paper discusses common causes of problems encountered with audio systems in distance learning networks and offers practical suggestions for correcting the problems. Problems and discussions are divided into nine categories: (1) acoustics, including reverberant classrooms leading to distorted or garbled voices, as well as one-dimensional audio…
Asymptotic theory of circular polarization memory.
Dark, Julia P; Kim, Arnold D
2017-09-01
We establish a quantitative theory of circular polarization memory, which is the unexpected persistence of the incident circular polarization state in a strongly scattering medium. Using an asymptotic analysis of the three-dimensional vector radiative transfer equation (VRTE) in the limit of strong scattering, we find that circular polarization memory must occur in a boundary layer near the portion of the boundary on which polarized light is incident. The boundary layer solution satisfies a one-dimensional conservative scattering VRTE. Through a spectral analysis of this boundary layer problem, we introduce the dominant mode, which is the slowest-decaying mode in the boundary layer. To observe circular polarization memory for a particular set of optical parameters, we find that this dominant mode must pass three tests: (1) this dominant mode is given by the largest, discrete eigenvalue of a reduced problem that corresponds to Fourier mode k=0 in the azimuthal angle, and depends only on Stokes parameters U and V; (2) the polarization state of this dominant mode is largely circular polarized so that |V|≫|U|; and (3) the circular polarization of this dominant mode is maintained for all directions so that V is sign-definite. By applying these three tests to numerical calculations for monodisperse distributions of Mie scatterers, we determine the values of the size and relative refractive index when circular polarization memory occurs. In addition, we identify a reduced, scalar-like problem that provides an accurate approximation for the dominant mode when circular polarization memory occurs.
Solving time-dependent two-dimensional eddy current problems
NASA Technical Reports Server (NTRS)
Lee, Min Eig; Hariharan, S. I.; Ida, Nathan
1988-01-01
Results of transient eddy current calculations are reported. For simplicity, a two-dimensional transverse magnetic field incident on an infinitely long conductor is considered. The conductor is assumed to be a good but not perfect conductor. The resulting problem is an interface initial boundary value problem, with the boundary of the conductor as the interface. A finite difference method is used to march the solution explicitly in time, with special consideration given to the treatment of appropriate radiation conditions. Results are validated against approximate analytic solutions. Two stringent test cases, with high- and low-frequency incident waves, are considered.
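Inside a good (but not perfect) conductor, the transverse field obeys a magnetic diffusion equation that can be marched explicitly in time much as described above. The sketch below is our own illustration, not the paper's scheme; all parameter values are nondimensional and illustrative.

```python
import numpy as np

# Inside a good conductor the transverse field obeys a diffusion equation
#   dB/dt = D * d2B/dx2,  D = 1/(mu*sigma),
# marched explicitly (FTCS); stability requires dt <= 0.5 * dx^2 / D.
D = 1.0                      # magnetic diffusivity (nondimensional)
nx, dx = 101, 0.01
dt = 0.4 * dx**2 / D
B = np.zeros(nx)
B[0] = 1.0                   # surface field held by the incident wave

for _ in range(200):
    B[1:-1] += dt * D * (B[2:] - 2.0 * B[1:-1] + B[:-2]) / dx**2
    B[0] = 1.0

# The field decays monotonically into the conductor (skin effect).
print(B[10], B[20], B[40])
```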
NASA Astrophysics Data System (ADS)
Brenner, Konstantin; Hennicker, Julian; Masson, Roland; Samier, Pierre
2018-03-01
In this work, we extend, to two-phase flow, the single-phase Darcy flow model proposed in [26], [12], in which the (d - 1)-dimensional flow in the fractures is coupled with the d-dimensional flow in the matrix. Three types of so-called hybrid-dimensional two-phase Darcy flow models are proposed. They all account for fractures acting either as drains or as barriers, since they allow pressure jumps at the matrix-fracture interfaces. The models also permit treating gravity-dominated flow as well as discontinuous capillary pressure at the material interfaces. The three models differ in their transmission conditions at the matrix-fracture interfaces: while the first model accounts for the nonlinear two-phase Darcy flux conservation, the second and third are based on the linear single-phase Darcy flux conservation combined with different approximations of the mobilities. We adapt the Vertex Approximate Gradient (VAG) scheme to this problem in order to account for anisotropy and heterogeneity as well as for applicability on general meshes. Several test cases are presented to compare our hybrid-dimensional models with the generic equi-dimensional model, in which fractures have the same dimension as the matrix, providing insight into the quality of the proposed reduced models.
Lee, Seungyeoun; Kim, Yongkang; Kwon, Min-Seok; Park, Taesung
2015-01-01
Genome-wide association studies (GWAS) have extensively analyzed single-SNP effects on a wide variety of common and complex diseases and found many genetic variants associated with diseases. However, a large portion of the genetic variants remains unexplained. This missing-heritability problem might be due to the analytical strategy of limiting analyses to single SNPs. One possible approach to the missing-heritability problem is to identify multi-SNP effects or gene-gene interactions. The multifactor dimensionality reduction (MDR) method has been widely used to detect gene-gene interactions; based on constructive induction, it classifies high-dimensional genotype combinations into a one-dimensional variable with two attributes, high risk and low risk, for the case-control study. Many modifications of MDR have been proposed and also extended to the survival phenotype. In this study, we propose several extensions of MDR for the survival phenotype and compare the proposed extensions with earlier MDR methods through comprehensive simulation studies. PMID:26339630
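The core MDR pooling step described above can be sketched in a few lines: each multi-locus genotype cell is labeled high- or low-risk by comparing its case:control ratio to the overall ratio. This is a hedged toy version on synthetic data; the planted interaction and all variable names are our own.

```python
import numpy as np

# Synthetic case-control data with a planted interaction at the
# two-SNP genotype cell (2, 2); genotypes are coded 0/1/2.
rng = np.random.default_rng(0)
n = 2000
snp1 = rng.integers(0, 3, n)
snp2 = rng.integers(0, 3, n)
p = np.where((snp1 == 2) & (snp2 == 2), 0.8, 0.3)   # disease probability
case = rng.random(n) < p

# MDR pooling: a cell is "high risk" when its case:control ratio
# exceeds the overall case:control ratio.
threshold = case.sum() / (~case).sum()
high_risk = set()
for g1 in range(3):
    for g2 in range(3):
        cell = (snp1 == g1) & (snp2 == g2)
        n_case = (case & cell).sum()
        n_ctrl = (~case & cell).sum()
        if n_ctrl > 0 and n_case / n_ctrl > threshold:
            high_risk.add((g1, g2))

print(high_risk)   # the planted cell (2, 2) should be flagged
```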
NASA Astrophysics Data System (ADS)
Hetmaniok, Edyta; Hristov, Jordan; Słota, Damian; Zielonka, Adam
2017-05-01
The paper presents a procedure for solving the inverse problem of binary alloy solidification in a two-dimensional space. This is a continuation of previous works of the authors investigating a similar problem in a one-dimensional domain. The goal is to identify the heat transfer coefficient on the boundary of the region and to reconstruct the temperature distribution inside the considered region when temperature measurements at selected points of the alloy are known. The mathematical model of the problem is based on the heat conduction equation with the substitute thermal capacity and with the liquidus and solidus temperatures varying with the concentration of the alloy component. The Scheil model is used to describe this concentration. The investigated procedure also involves a parallelized Ant Colony Optimization algorithm applied to minimize a functional expressing the error of the approximate solution.
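As a rough illustration of the optimization layer only, the sketch below minimizes a stand-in misfit functional with a heavily simplified population-based search in the spirit of continuous ant colony optimization (an archive of good solutions, Gaussian sampling around them, shrinking width). The functional, the "true" coefficient value, and all algorithm parameters are our own assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)

def J(h):
    """Toy misfit functional; the true coefficient is h* = 1200."""
    return (h - 1200.0) ** 2

ants, archive_size, iters = 20, 5, 60
archive = sorted(rng.uniform(0.0, 5000.0, archive_size), key=J)
sigma = 1000.0
for _ in range(iters):
    # each ant samples near a randomly chosen archived solution;
    # the archive keeps the best solutions found so far
    samples = [rng.normal(rng.choice(archive), sigma) for _ in range(ants)]
    archive = sorted(list(archive) + samples, key=J)[:archive_size]
    sigma *= 0.85              # narrow the search around good solutions

print(archive[0])              # close to the true value 1200
```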
Analysis of the Hessian for Aerodynamic Optimization: Inviscid Flow
NASA Technical Reports Server (NTRS)
Arian, Eyal; Ta'asan, Shlomo
1996-01-01
In this paper we analyze inviscid aerodynamic shape optimization problems governed by the full potential and the Euler equations in two and three dimensions. The analysis indicates that minimization of pressure dependent cost functions results in Hessians whose eigenvalue distributions are identical for the full potential and the Euler equations. However the optimization problems in two and three dimensions are inherently different. While the two dimensional optimization problems are well-posed the three dimensional ones are ill-posed. Oscillations in the shape up to the smallest scale allowed by the design space can develop in the direction perpendicular to the flow, implying that a regularization is required. A natural choice of such a regularization is derived. The analysis also gives an estimate of the Hessian's condition number which implies that the problems at hand are ill-conditioned. Infinite dimensional approximations for the Hessians are constructed and preconditioners for gradient based methods are derived from these approximate Hessians.
One-dimensional hybrid model of plasma-solid interaction in argon plasma at higher pressures
NASA Astrophysics Data System (ADS)
Jelínek, P.; Hrach, R.
2007-04-01
One important problem in present-day plasma science is the surface treatment of materials at higher pressures, including atmospheric-pressure plasma. Theoretical analysis of processes in such plasmas is difficult, because theories derived for collisionless or slightly collisional plasmas lose their validity at medium and high pressures; therefore, methods of computational physics are widely used. There are two basic ways to model the physical processes taking place during the interaction of plasma with immersed solids: the particle approach and fluid modelling. Both approaches have their limitations, namely the low efficiency of particle modelling and the limited accuracy of fluid models. Hybrid modelling combines the two approaches in order to exploit the advantages of each. In our work a one-dimensional hybrid model of plasma-solid interaction has been developed for an electropositive plasma at higher pressures. We use the hybrid model for this problem as a test for our subsequent applications, e.g. pulsed discharges and RF discharges. The hybrid model consists of a combined molecular dynamics-Monte Carlo model for fast electrons and a fluid model for slow electrons and positive argon ions. The latter model also contains Poisson's equation, to obtain a self-consistent electric field distribution. The derived results include the spatial distributions of the electric potential and of the concentrations and fluxes of individual charged species near the substrate for various pressures and probe voltage biases.
NASA Astrophysics Data System (ADS)
Dolimont, Adrien; Rivière-Lorphèvre, Edouard; Ducobu, François; Backaert, Stéphane
2018-05-01
Additive manufacturing is growing rapidly, which leads us to study the functionalization of the parts produced by these processes. Electron beam melting (EBM) is one of these technologies: a powder-based additive manufacturing (AM) method with which it is possible to manufacture high-density metal parts with complex topology. One of the big problems with these technologies is the surface finish, and finishing operations are needed to improve surface quality. In this study, the focus is set on chemical polishing. The goal is to determine how chemical etching impacts the dimensional accuracy and the surface roughness of EBM parts. To this end, an experimental campaign was carried out on the most widely used material in EBM, Ti6Al4V. Different exposure times were tested and their impact on surface quality was evaluated. To help predict the excess thickness to be provided, the dimensional impact of chemical polishing on EBM parts was estimated: fifteen parts were measured before and after chemical machining. The improvement of surface quality was also evaluated after each treatment.
Computer aided photographic engineering
NASA Technical Reports Server (NTRS)
Hixson, Jeffrey A.; Rieckhoff, Tom
1988-01-01
High speed photography is an excellent source of engineering data but only provides a two-dimensional representation of a three-dimensional event. Multiple cameras can be used to provide data for the third dimension but camera locations are not always available. A solution to this problem is to overlay three-dimensional CAD/CAM models of the hardware being tested onto a film or photographic image, allowing the engineer to measure surface distances, relative motions between components, and surface variations.
NASA Technical Reports Server (NTRS)
Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.
1992-01-01
The Penn State Finite Difference Time Domain Electromagnetic Code Version B is a three dimensional numerical electromagnetic scattering code based upon the Finite Difference Time Domain Technique (FDTD). The supplied version of the code is one version of our current three dimensional FDTD code set. This manual provides a description of the code and corresponding results for several scattering problems. The manual is organized into 14 sections: introduction, description of the FDTD method, operation, resource requirements, Version B code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file, a discussion of radar cross section computations, a discussion of some scattering results, a sample problem setup section, a new problem checklist, references and figure titles.
Franić, Sanja; Dolan, Conor V; Borsboom, Denny; Hudziak, James J; van Beijsterveldt, Catherina E M; Boomsma, Dorret I
2013-09-01
In the present article, we discuss the role that quantitative genetic methodology may play in assessing and understanding the dimensionality of psychological (psychometric) instruments. Specifically, we study the relationship between the observed covariance structures, on the one hand, and the underlying genetic and environmental influences giving rise to such structures, on the other. We note that this relationship may be such that it hampers obtaining a clear estimate of dimensionality using standard tools for dimensionality assessment alone. One situation in which dimensionality assessment may be impeded is that in which genetic and environmental influences, of which the observed covariance structure is a function, differ from each other in structure and dimensionality. We demonstrate that in such situations settling dimensionality issues may be problematic, and propose using quantitative genetic modeling to uncover the (possibly different) dimensionalities of the underlying genetic and environmental structures. We illustrate using simulations and an empirical example on childhood internalizing problems.
NASA Astrophysics Data System (ADS)
Ren, Wenjie; Li, Hongnan; Song, Gangbing; Huo, Linsheng
2009-03-01
The problem of optimizing an absorber system for three-dimensional seismic structures is addressed. The objective is to determine the number and position of absorbers that minimize the coupling effects of translation-torsion of structures at minimum cost. A procedure for this multi-objective optimization problem is developed by integrating a dominance-based selection operator and a dominance-based penalty function method. Based on the two-branch tournament genetic algorithm, the selection operator is constructed by evaluating individuals according to their dominance in one run. The technique guarantees that the better-performing individual wins its competition, provides a slight selection pressure toward better individuals, and maintains diversity in the population. Moreover, because the evaluation of individuals in each generation is finished in one run, less computational effort is required. Penalty function methods are generally used to transform a constrained optimization problem into an unconstrained one. The dominance-based penalty function contains the necessary information on the non-dominated character and infeasible position of an individual, essential for success in seeking a Pareto optimal set. The proposed approach is used to obtain a set of non-dominated designs for a six-storey three-dimensional building with shape memory alloy dampers subjected to an earthquake.
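The dominance notions used by both the selection operator and the penalty function rest on the standard Pareto-dominance test, which can be sketched as follows (minimization in every objective; function and variable names are ours).

```python
# Pareto dominance for minimization in every objective.
def dominates(a, b):
    """True if objective vector a dominates b: a <= b everywhere, < somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the non-dominated (Pareto-optimal) subset of a list of vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

designs = [(1.0, 5.0), (2.0, 2.0), (3.0, 4.0), (4.0, 1.0)]
print(non_dominated(designs))   # → [(1.0, 5.0), (2.0, 2.0), (4.0, 1.0)]
```

Only (3.0, 4.0) is dominated here, since (2.0, 2.0) is at least as good in both objectives and strictly better in one.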
ERIC Educational Resources Information Center
Monaghan, James M.; Clement, John
1999-01-01
Presents evidence for students' qualitative and quantitative difficulties with apparently simple one-dimensional relative-motion problems, students' spontaneous visualization of relative-motion problems, the visualizations facilitating solution of these problems, and students' memories of the online computer simulation used as a framework for…
Spatial Visualization in Physics Problem Solving
ERIC Educational Resources Information Center
Kozhevnikov, Maria; Motes, Michael A.; Hegarty, Mary
2007-01-01
Three studies were conducted to examine the relation of spatial visualization to solving kinematics problems that involved either predicting the two-dimensional motion of an object, translating from one frame of reference to another, or interpreting kinematics graphs. In Study 1, 60 physics-naive students were administered kinematics problems and…
NASA Astrophysics Data System (ADS)
Bogiatzis, P.; Ishii, M.; Davis, T. A.
2016-12-01
Seismic tomography inverse problems are among the largest high-dimensional parameter estimation tasks in Earth science. We show how combinatorics and graph theory can be used to analyze the structure of such problems, and to effectively decompose them into smaller ones that can be solved efficiently by means of the least squares method. In combination with recent high performance direct sparse algorithms, this reduction in dimensionality allows for an efficient computation of the model resolution and covariance matrices using limited resources. Furthermore, we show that a new sparse singular value decomposition method can be used to obtain the complete spectrum of the singular values. This procedure provides the means for more objective regularization and further dimensionality reduction of the problem. We apply this methodology to a moderate size, non-linear seismic tomography problem to image the structure of the crust and the upper mantle beneath Japan using local deep earthquakes recorded by the High Sensitivity Seismograph Network stations.
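The decomposition idea can be illustrated on a toy system: if no row of the design matrix couples two groups of model parameters, the least-squares problem splits and each group can be solved independently. The small union-find grouping below is our own stand-in for the graph analysis described in the abstract.

```python
import numpy as np

# Toy design matrix whose columns split into two uncoupled groups:
# {0, 1} and {2, 3}.  No row has nonzeros in both groups.
G = np.array([[1.0, 2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 3.0, 1.0],
              [0.0, 0.0, 1.0, 2.0]])
d = np.array([3.0, 1.0, 7.0, 5.0])

# Union-find over columns that co-occur in a row.
parent = list(range(G.shape[1]))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

for row in G:
    cols = np.nonzero(row)[0]
    for c in cols[1:]:
        parent[find(c)] = find(cols[0])

groups = {}
for j in range(G.shape[1]):
    groups.setdefault(find(j), []).append(j)

# Solve each independent sub-problem by least squares.
m = np.empty(G.shape[1])
for cols in groups.values():
    rows = np.nonzero(G[:, cols].any(axis=1))[0]
    m[cols], *_ = np.linalg.lstsq(G[np.ix_(rows, cols)], d[rows], rcond=None)

# Same answer as solving the full problem at once.
full, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.allclose(m, full))    # → True
```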
Study of multi-dimensional radiative energy transfer in molecular gases
NASA Technical Reports Server (NTRS)
Liu, Jiwen; Tiwari, S. N.
1993-01-01
The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. Validation of the Monte Carlo formulations has been conducted by comparing results of this method with other solutions. Extension of a one-dimensional problem to a multi-dimensional problem requires some special treatments in the Monte Carlo analysis. Use of different assumptions results in different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.
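A minimal sanity check for any Monte Carlo radiative formulation, far simpler than the nongray narrow-band validation described above, is a gray, purely absorbing slab: the fraction of photon bundles whose sampled optical path exceeds the slab optical depth must approach the analytic transmissivity exp(-tau). This toy check is ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
tau = 1.5                        # slab optical depth
n = 200_000
path = -np.log(rng.random(n))    # optical path sampled per bundle (Beer's law)
transmitted = np.mean(path > tau)

# The Monte Carlo estimate converges to the analytic transmissivity.
print(transmitted, np.exp(-tau))
```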
On numerical modeling of one-dimensional geothermal histories
Haugerud, R.A.
1989-01-01
Numerical models of one-dimensional geothermal histories are one way of understanding the relations between tectonics and transient thermal structure in the crust. Such models can be powerful tools for interpreting geochronologic and thermobarometric data. A flexible program to calculate these models on a microcomputer is available, and examples of its use are presented. Potential problems with this approach include the simplifying assumptions that are made, limitations of the numerical techniques, and the neglect of convective heat transfer. © 1989.
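A minimal sketch of the kind of model described above: explicit finite-difference conduction in a one-dimensional crustal column with a fixed surface temperature and a constant basal heat flux. All parameter values are illustrative, not from the paper.

```python
import numpy as np

kappa = 1e-6             # thermal diffusivity, m^2/s
k = 2.5                  # conductivity, W/m/K
q_base = 0.03            # basal heat flux, W/m^2
nz, dz = 101, 300.0      # 30 km column
dt = 0.4 * dz**2 / kappa          # explicit stability limit is 0.5*dz^2/kappa

T = np.zeros(nz)                  # start isothermal at surface temperature
for _ in range(5000):
    T[1:-1] += dt * kappa * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dz**2
    T[0] = 0.0                    # fixed surface temperature, deg C
    T[-1] = T[-2] + q_base * dz / k   # constant basal heat flux

# Basal temperature, still relaxing toward the steady linear geotherm
# dT/dz = q/k, i.e. (q_base/k) * 30 km = 360 deg C at the base.
print(T[-1])
```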
Zhang, Duan Z.; Padrino, Juan C.
2017-06-01
The ensemble averaging technique is applied to model mass transport by diffusion in random networks. The system consists of an ensemble of random networks, where each network is made of pockets connected by tortuous channels. Inside a channel, fluid transport is assumed to be governed by the one-dimensional diffusion equation. Mass balance leads to an integro-differential equation for the pocket mass density. The so-called dual-porosity model is found to be equivalent to the leading order approximation of the integration kernel when the diffusion time scale inside the channels is small compared to the macroscopic time scale. As a test problem, we consider the one-dimensional mass diffusion in a semi-infinite domain. Because of the time required to establish the linear concentration profile inside a channel, for early times the similarity variable is xt^{-1/4} rather than xt^{-1/2} as in the traditional theory. We find that this early-time similarity can be explained by random walk theory through the network.
Diagnosis and treatment of unconsummated marriage in an Iranian couple.
Bokaie, Mahshid; Khalesi, Zahra Bostani; Yasini-Ardekani, Seyed Mojtaba
2017-09-01
Unconsummated marriage is a problem among couples who are unable to perform natural sexual intercourse with vaginal penetration. This disorder is more common in developing countries, and couples sometimes resort to non-technical and non-scientific methods to overcome their problem. A multi-dimensional approach and narrative exposure therapy were used in this case. This study reports the case of a couple whose marriage remained unconsummated after 6 years. The main problems of this couple were vaginismus and post-traumatic stress. Treatment with the multi-dimensional approach included narrative exposure therapy, educating the couple on the anatomy of the female and male reproductive systems, correcting misconceptions, teaching foreplay, body exploration, and non-sexual and sexual massage, and penetrating the vagina first with the woman's finger and then the man's after relaxation. The entire treatment lasted four sessions, and at the one-month follow-up the couple's satisfaction was good. Unconsummated marriage is one of the main sexual problems; it is more common in developing countries than in developed countries, and cultural factors can intensify this disorder. The use of the multi-dimensional approach in this study expedited the diagnosis and treatment of vaginismus.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
A well-known challenge in uncertainty quantification (UQ) is the "curse of dimensionality". However, many high-dimensional UQ problems are essentially low-dimensional, because the randomness of the quantity of interest (QoI) is caused only by uncertain parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace. Motivated by this observation, we propose and demonstrate in this paper an inverse regression-based UQ approach (IRUQ) for high-dimensional problems. Specifically, we use an inverse regression procedure to estimate the SDR subspace and then convert the original problem to a low-dimensional one, which can be efficiently solved by building a response surface model such as a polynomial chaos expansion. The novelty and advantages of the proposed approach are seen in its computational efficiency and practicality. Compared with Monte Carlo, the traditionally preferred approach for high-dimensional UQ, IRUQ with a comparable cost generally gives much more accurate solutions even for high-dimensional problems, and even when the dimension reduction is not exactly sufficient. Theoretically, IRUQ is proved to converge twice as fast as the method it uses to seek the SDR subspace. For example, while a sliced inverse regression method converges to the SDR subspace at the rate of $O(n^{-1/2})$, the corresponding IRUQ converges at $O(n^{-1})$. IRUQ also provides several desired conveniences in practice. It is non-intrusive, requiring only a simulator to generate realizations of the QoI, and there is no need to compute the high-dimensional gradient of the QoI. Finally, error bars can be derived for the estimation results reported by IRUQ.
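The sliced-inverse-regression step that IRUQ builds on can be sketched as follows: slice the sorted response, average the centered inputs within each slice, and take the leading eigenvectors of the weighted covariance of the slice means. The toy QoI below, which truly depends on a single input direction, is our own example.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 5000, 10
X = rng.standard_normal((n, d))
w = np.zeros(d)
w[0] = 1.0                            # true one-dimensional SDR direction
y = (X @ w) ** 3 + 0.1 * rng.standard_normal(n)

Z = X - X.mean(axis=0)                # inputs are ~N(0, I), so centering
                                      # suffices for standardization here
order = np.argsort(y)
slices = np.array_split(order, 20)    # 20 slices of the sorted response
means = np.stack([Z[s].mean(axis=0) for s in slices])
weights = np.array([len(s) for s in slices]) / n
M = (means * weights[:, None]).T @ means   # weighted slice-mean covariance

evals, evecs = np.linalg.eigh(M)
direction = evecs[:, -1]              # leading SIR direction
print(abs(direction @ w))             # close to 1: true direction recovered
```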
Hiestand, Laurie
2011-11-01
In this study I tested Benson Ginsburg's theory that dogs should show diminished ability, compared to wolves, in orienting in three-dimensional space and manipulating objects sequentially. Dogs of all ages and juvenile wolves should do poorly on these measures, but at some time before sexual maturity the juvenile wolves should begin improving to the level of adult wolves. Two adult and seven juvenile wolves were compared with 40 adult German shepherds. The initial task was to pull a single rope suspended from the ceiling; complexity was increased by adding ropes and by changing spatial configurations. Adult wolf performance was consistently successful across all tests and requirements. Juvenile wolves had little difficulty with the one- and two-rope tests but did more poorly in the three-rope tests. The behavior of the dogs grouped into four profiles (number of dogs): non-responders (6), one-rope (15), two-rope (14), and three-rope responders (5).
Analysis of a Two-Dimensional Thermal Cloaking Problem on the Basis of Optimization
NASA Astrophysics Data System (ADS)
Alekseev, G. V.
2018-04-01
For a two-dimensional model of thermal scattering, inverse problems arising in the development of tools for cloaking material bodies on the basis of a mixed thermal cloaking strategy are considered. By applying the optimization approach, these problems are reduced to optimization ones in which the role of controls is played by variable parameters of the medium occupying the cloaking shell and by the heat flux through a boundary segment of the basic domain. The solvability of the direct and optimization problems is proved, and an optimality system is derived. Based on its analysis, sufficient conditions on the input data are established that ensure the uniqueness and stability of optimal solutions.
A semi-implicit level set method for multiphase flows and fluid-structure interaction problems
NASA Astrophysics Data System (ADS)
Cottet, Georges-Henri; Maitre, Emmanuel
2016-06-01
In this paper we present a novel semi-implicit time-discretization of the level set method introduced in [8] for fluid-structure interaction problems. The idea stems from a linear stability analysis derived on a simplified one-dimensional problem. The semi-implicit scheme relies on a simple filter operating as a pre-processing step on the level set function. It applies to multiphase flows driven by surface tension as well as to fluid-structure interaction problems. The semi-implicit scheme avoids the stability constraints that explicit schemes must satisfy and significantly reduces the computational cost. It is validated through comparisons with the original explicit scheme and refinement studies on two-dimensional benchmarks.
An Alternative Approach to Identifying a Dimension in Second Language Proficiency.
ERIC Educational Resources Information Center
Griffin, Patrick E.; And Others
Current practice in language testing has not yet integrated classical test theory with assessment of language skills. In addition, language testing needs to be part of theory development. Lack of sound testing procedures can lead to problems in research design and ultimately, inappropriate theory development. The debate over dimensionality of…
BEST3D user's manual: Boundary Element Solution Technology, 3-Dimensional Version 3.0
NASA Technical Reports Server (NTRS)
1991-01-01
The theoretical basis and programming strategy utilized in the construction of the computer program BEST3D (boundary element solution technology - three dimensional) and detailed input instructions are provided for the use of the program. An extensive set of test cases and sample problems is included in the manual and is also available for distribution with the program. The BEST3D program was developed under the 3-D Inelastic Analysis Methods for Hot Section Components contract (NAS3-23697). The overall objective of this program was the development of new computer programs allowing more accurate and efficient three-dimensional thermal and stress analysis of hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The BEST3D program allows both linear and nonlinear analysis of static and quasi-static elastic problems and transient dynamic analysis for elastic problems. Calculation of elastic natural frequencies and mode shapes is also provided.
Constrained-transport Magnetohydrodynamics with Adaptive Mesh Refinement in CHARM
NASA Astrophysics Data System (ADS)
Miniati, Francesco; Martin, Daniel F.
2011-07-01
We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.
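The reason a constrained-transport update preserves a solenoidal field can be seen in a two-dimensional toy version (ours, far simpler than the 3-D CTU+CT scheme above): both faces of a cell see the same edge-centered EMF, so its contributions cancel exactly in the discrete divergence.

```python
import numpy as np

nx, ny, dx, dy, dt = 16, 16, 1.0, 1.0, 0.1
rng = np.random.default_rng(3)

# Start from an exactly divergence-free face-centered field B = curl(Az),
# with the vector potential Az stored at cell corners.
az = rng.standard_normal((nx + 1, ny + 1))
bx = (az[:, 1:] - az[:, :-1]) / dy        # Bx on x-faces, shape (nx+1, ny)
by = -(az[1:, :] - az[:-1, :]) / dx       # By on y-faces, shape (nx, ny+1)

def divergence(bx, by):
    return (bx[1:, :] - bx[:-1, :]) / dx + (by[:, 1:] - by[:, :-1]) / dy

# Constrained-transport update: dB/dt = -curl E, with an arbitrary
# edge-centered EMF Ez, discretized via Stokes' theorem on cell faces.
ez = rng.standard_normal((nx + 1, ny + 1))
bx -= dt * (ez[:, 1:] - ez[:, :-1]) / dy
by += dt * (ez[1:, :] - ez[:-1, :]) / dx

print(np.abs(divergence(bx, by)).max())   # remains at round-off level
```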
NASA Astrophysics Data System (ADS)
Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial
2016-09-01
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data.
To train the model, we design a two-step maximum likelihood optimization procedure that ensures the orthogonality of the projection matrix by exploiting recent results on the Stiefel manifold, i.e., the manifold of matrices with orthogonal columns. An additional benefit of our probabilistic formulation is that it allows us to select the dimensionality of the AS via the Bayesian information criterion. We validate our approach by showing that it can discover the right AS in synthetic examples without gradient information using both noiseless and noisy observations. We demonstrate that our method is able to discover the same AS as the classical approach in a challenging one-hundred-dimensional problem involving an elliptic stochastic partial differential equation with random conductivity. Finally, we use our approach to study the effect of geometric and material uncertainties in the propagation of solitary waves in a one-dimensional granular system.
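The classic, gradient-based active-subspace construction that this abstract contrasts with can be sketched in a few lines: eigendecompose the Monte Carlo average of gradient outer products, C = E[grad f grad f^T], and keep the leading eigenvectors. The toy test function below is our own.

```python
import numpy as np

rng = np.random.default_rng(4)
d, n = 10, 2000
w = np.linspace(1.0, 2.0, d)
w /= np.linalg.norm(w)               # f varies only along direction w

def grad_f(x):
    """Gradient of f(x) = sin(w.x), i.e. cos(w.x) * w."""
    return np.cos(w @ x) * w

# Monte Carlo estimate of C = E[grad f grad f^T].
X = rng.standard_normal((n, d))
C = np.zeros((d, d))
for x in X:
    g = grad_f(x)
    C += np.outer(g, g) / n

evals, evecs = np.linalg.eigh(C)
active = evecs[:, -1]                # one-dimensional active subspace
print(abs(active @ w))               # ≈ 1 up to round-off: direction found
```

Here C is exactly rank one, so the leading eigenvector recovers w; for generic responses one keeps as many eigenvectors as there are dominant eigenvalues.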
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tripathy, Rohit, E-mail: rtripath@purdue.edu; Bilionis, Ilias, E-mail: ibilion@purdue.edu; Gonzalez, Marcial, E-mail: marcial-gonzalez@purdue.edu
2016-09-15
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed, if the response exhibits some special structure that can be discovered and exploited. A wide range ofmore » physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance to gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. 
To train the model, we design a two-step maximum likelihood optimization procedure that ensures the orthogonality of the projection matrix by exploiting recent results on the Stiefel manifold, i.e., the manifold of matrices with orthogonal columns. An additional benefit of our probabilistic formulation is that it allows us to select the dimensionality of the AS via the Bayesian information criterion. We validate our approach by showing that it can discover the right AS in synthetic examples without gradient information, using both noiseless and noisy observations. We demonstrate that our method is able to discover the same AS as the classical approach in a challenging one-hundred-dimensional problem involving an elliptic stochastic partial differential equation with random conductivity. Finally, we use our approach to study the effect of geometric and material uncertainties on the propagation of solitary waves in a one-dimensional granular system.
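For context, the classic gradient-based recipe that the abstract contrasts against identifies the AS by eigendecomposing the empirical covariance of the gradient. A minimal sketch on a synthetic response (all names, sizes, and the test function are illustrative assumptions, not the paper's method):

```python
import numpy as np

# Synthetic 5-D response f(x) = sin(w.x): it varies only along w,
# so its active subspace is the one-dimensional span of w.
rng = np.random.default_rng(0)
dim = 5
w = np.array([1.0, 2.0, 0.0, 0.0, 0.0])
w /= np.linalg.norm(w)

def grad_f(x):
    # gradient of sin(w.x) is cos(w.x) * w
    return np.cos(x @ w) * w

# Classic (gradient-based) approach: eigendecompose C = E[grad f grad f^T]
# and read the AS off the dominant eigenvectors.
X = rng.standard_normal((2000, dim))
G = np.array([grad_f(x) for x in X])
C = G.T @ G / len(X)
eigvals, eigvecs = np.linalg.eigh(C)

w_hat = eigvecs[:, -1]                # leading eigenvector (eigh sorts ascending)
alignment = abs(float(w_hat @ w))
print(round(alignment, 3))            # near 1.0: the AS direction is recovered
```

The sharp drop after the first eigenvalue is what signals a one-dimensional AS; the gradient-free approach of the paper replaces this construction with a projection-matrix hyper-parameter inside a GP covariance.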
NASA Technical Reports Server (NTRS)
Hawk, J. D.; Stockman, N. O.; Farrell, C. A., Jr.
1978-01-01
Calculations of incompressible potential flow, corrected for compressibility, in two-dimensional inlets at arbitrary operating conditions are presented. Included are a statement of the problem to be solved, a description of each of the computer programs, and sufficient documentation, including a test case, to enable a user to run the program.
Computer programs for calculating two-dimensional potential flow through deflected nozzles
NASA Technical Reports Server (NTRS)
Hawk, J. D.; Stockman, N. O.
1979-01-01
Computer programs to calculate the incompressible potential flow, corrected for compressibility, in two-dimensional nozzles at arbitrary operating conditions are presented. A statement of the problem to be solved, a description of each of the computer programs, and sufficient documentation, including a test case, to enable a user to run the program are included.
NASA Astrophysics Data System (ADS)
Magee, Daniel J.; Niemeyer, Kyle E.
2018-03-01
The expedient design of precision components in aerospace and other high-tech industries requires simulations of physical phenomena often described by partial differential equations (PDEs) without exact solutions. Modern design problems require simulations with a level of resolution difficult to achieve in reasonable amounts of time, even in effectively parallelized solvers. Though the scale of the problem relative to available computing power is the greatest impediment to accelerating these applications, significant performance gains can be achieved through careful attention to the details of memory communication and access. The swept time-space decomposition rule reduces communication between sub-domains by exhausting the domain of influence before communicating boundary values. Here we present a GPU implementation of the swept rule, which modifies the algorithm for improved performance on this processing architecture by prioritizing use of private (shared) memory, avoiding interblock communication, and overwriting unnecessary values. It shows significant improvement in the execution time of finite-difference solvers for one-dimensional unsteady PDEs, producing speedups of 2-9× over a range of problem sizes compared with simple GPU versions, and of 7-300× compared with parallel CPU versions. However, for a more sophisticated one-dimensional system of equations discretized with a second-order finite-volume scheme, the swept rule performs 1.2-1.9× worse than a standard implementation for all problem sizes.
NASA Astrophysics Data System (ADS)
Krasnitckii, S. A.; Kolomoetc, D. R.; Smirnov, A. M.; Gutkin, M. Yu
2017-05-01
The boundary-value problem in the classical theory of elasticity for a core-shell nanowire with an eccentric parallelepipedal core of an arbitrary rectangular cross section is solved. The core is subjected to one-dimensional cross dilatation eigenstrain. The misfit stresses are given in a closed analytical form suitable for theoretical modeling of misfit accommodation in relevant heterostructures.
LETTERS AND COMMENTS: Energy in one-dimensional linear waves in a string
NASA Astrophysics Data System (ADS)
Burko, Lior M.
2010-09-01
We consider the energy density and energy transfer in small-amplitude, one-dimensional waves on a string and find that the common expressions used in textbooks for the calculus-based introductory physics course give wrong results in some cases, including standing waves. We discuss the origin of the problem and how it can be corrected in a way appropriate for the introductory calculus-based physics course.
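For reference, the standard local energy density and energy flux for small transverse waves on a string of linear mass density μ and tension T (the general forms, not the letter's specific critique) can be written as:

```latex
% Local energy density: kinetic plus elastic (potential) contributions
u(x,t) = \frac{1}{2}\,\mu\left(\frac{\partial y}{\partial t}\right)^{2}
       + \frac{1}{2}\,T\left(\frac{\partial y}{\partial x}\right)^{2}

% Power transmitted rightward past the point x
P(x,t) = -\,T\,\frac{\partial y}{\partial x}\,\frac{\partial y}{\partial t}
```

For a travelling wave y = f(x - vt) the two terms of u are equal pointwise, so the common textbook shortcut of doubling the kinetic term is harmless; for a standing wave y = A sin(kx) cos(ωt) the kinetic term varies as sin²(kx) sin²(ωt) while the elastic term varies as cos²(kx) cos²(ωt), so shortcuts that rely on that equality break down.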
Quantum Hamilton equations of motion for bound states of one-dimensional quantum systems
NASA Astrophysics Data System (ADS)
Köppe, J.; Patzold, M.; Grecksch, W.; Paul, W.
2018-06-01
On the basis of Nelson's stochastic mechanics derivation of the Schrödinger equation, a formal mathematical structure of non-relativistic quantum mechanics equivalent to the one in classical analytical mechanics has been established in the literature. We recently were able to augment this structure by deriving quantum Hamilton equations of motion by finding the Nash equilibrium of a stochastic optimal control problem, which is the generalization of Hamilton's principle of classical mechanics to quantum systems. We showed that these equations allow a description and numerical determination of the ground state of quantum problems without using the Schrödinger equation. We extend this approach here to deliver the complete discrete energy spectrum and related eigenfunctions for bound states of one-dimensional stationary quantum systems. We exemplify this analytically for the one-dimensional harmonic oscillator and numerically by analyzing a quartic double-well potential, a model of broad importance in many areas of physics. We furthermore point out a relation between the tunnel splitting of such models and mean first passage time concepts applied to Nelson's diffusion paths in the ground state.
An Implicit Characteristic Based Method for Electromagnetics
NASA Technical Reports Server (NTRS)
Beggs, John H.; Briley, W. Roger
2001-01-01
An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.
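The explicit FDTD algorithm used above as the reference for comparison can be sketched in one dimension. This is a generic, minimal free-space Yee update in normalized units, not the paper's LU/AF scheme; grid sizes, Courant number, and the source are illustrative:

```python
import numpy as np

# Minimal 1-D free-space FDTD (Yee) reference in normalized units
# (c = 1, dx = 1); S is the Courant number, stable here since S <= 1.
nx, nt, S = 200, 300, 0.5
Ez = np.zeros(nx)          # electric field at integer grid points
Hy = np.zeros(nx - 1)      # magnetic field at staggered half points

for n in range(nt):
    Hy += S * np.diff(Ez)                            # half-step H update
    Ez[1:-1] += S * np.diff(Hy)                      # E update; ends stay 0 (PEC walls)
    Ez[nx // 2] += np.exp(-((n - 30) / 8.0) ** 2)    # soft Gaussian source

print(round(float(np.max(np.abs(Ez))), 3))
```

The explicit scheme's Courant limit (S ≤ 1 in one dimension) is exactly the restriction the implicit LU/AF approach is designed to relax.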
Sparse learning of stochastic dynamical equations
NASA Astrophysics Data System (ADS)
Boninsegna, Lorenzo; Nüske, Feliks; Clementi, Cecilia
2018-06-01
With the rapid increase of available data for complex systems, there is great interest in the extraction of physically relevant information from massive datasets. Recently, a framework called Sparse Identification of Nonlinear Dynamics (SINDy) has been introduced to identify the governing equations of dynamical systems from simulation data. In this study, we extend SINDy to stochastic dynamical systems which are frequently used to model biophysical processes. We prove the asymptotic correctness of stochastic SINDy in the infinite data limit, both in the original and projected variables. We discuss algorithms to solve the sparse regression problem arising from the practical implementation of SINDy and show that cross validation is an essential tool to determine the right level of sparsity. We demonstrate the proposed methodology on two test systems, namely, the diffusion in a one-dimensional potential and the projected dynamics of a two-dimensional diffusion process.
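The flavor of stochastic SINDy can be illustrated on the simplest case mentioned above: identifying the drift of diffusion in a one-dimensional quadratic potential (an Ornstein-Uhlenbeck process). The sketch below uses a plain Kramers-Moyal drift estimate and sequentially thresholded least squares; the library, threshold, and sizes are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

# Euler-Maruyama sample path of an Ornstein-Uhlenbeck process:
# dx = -x dt + dW (drift -x, unit diffusion), i.e., a quadratic potential.
rng = np.random.default_rng(1)
dt, n = 0.01, 200_000
x = np.empty(n)
x[0] = 0.0
noise = rng.standard_normal(n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - x[i] * dt + np.sqrt(dt) * noise[i]

# Kramers-Moyal drift estimates regressed onto a polynomial library with
# sequentially thresholded least squares (the core idea behind SINDy).
X = x[:-1]
Y = (x[1:] - x[:-1]) / dt                    # noisy pointwise drift values
library = np.column_stack([np.ones_like(X), X, X**2, X**3])

coef = np.linalg.lstsq(library, Y, rcond=None)[0]
for _ in range(5):                           # zero out small terms, refit the rest
    small = np.abs(coef) < 0.2
    coef[small] = 0.0
    keep = ~small
    coef[keep] = np.linalg.lstsq(library[:, keep], Y, rcond=None)[0]

print(np.round(coef, 2))                     # approximately [0, -1, 0, 0]
```

The thresholding step is where sparsity enters; as the abstract notes, in practice the threshold (the level of sparsity) should be chosen by cross validation rather than fixed by hand as here.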
NASA Astrophysics Data System (ADS)
Franck, I. M.; Koutsourelakis, P. S.
2017-01-01
This paper is concerned with the numerical solution of model-based, Bayesian inverse problems. We are particularly interested in cases where the cost of each likelihood evaluation (forward-model call) is expensive and the number of unknown (latent) variables is high. This is the setting in many problems in computational physics where forward models with nonlinear PDEs are used and the parameters to be calibrated involve spatio-temporally varying coefficients, which upon discretization give rise to a high-dimensional vector of unknowns. One of the consequences of the well-documented ill-posedness of inverse problems is the possibility of multiple solutions. While such information is contained in the posterior density in Bayesian formulations, the discovery of a single mode, let alone multiple, poses a formidable computational task. The goal of the present paper is two-fold. On one hand, we propose approximate, adaptive inference strategies using mixture densities to capture multi-modal posteriors. On the other, we extend our work in [1] with regard to effective dimensionality reduction techniques that reveal low-dimensional subspaces where the posterior variance is mostly concentrated. We validate the proposed model by employing Importance Sampling which confirms that the bias introduced is small and can be efficiently corrected if the analyst wishes to do so. We demonstrate the performance of the proposed strategy in nonlinear elastography where the identification of the mechanical properties of biological materials can inform non-invasive, medical diagnosis. The discovery of multiple modes (solutions) in such problems is critical in achieving the diagnostic objectives.
NASA Astrophysics Data System (ADS)
Bilyeu, David
This dissertation presents an extension of the Conservation Element Solution Element (CESE) method from second- to higher-order accuracy. The new method retains the favorable characteristics of the original second-order CESE scheme, including (i) the use of the space-time integral equation for conservation laws, (ii) a compact mesh stencil, (iii) stability up to a CFL number of unity, (iv) a fully explicit, time-marching integration scheme, (v) true multidimensionality without using directional splitting, and (vi) the ability to handle two- and three-dimensional geometries by using unstructured meshes. This algorithm has been thoroughly tested in one, two and three spatial dimensions and has been shown to obtain the desired order of accuracy for solving both linear and non-linear hyperbolic partial differential equations. The scheme has also shown its ability to accurately resolve discontinuities in the solutions. Higher-order unstructured methods such as the Discontinuous Galerkin (DG) method and the Spectral Volume (SV) method have been developed for one-, two- and three-dimensional applications. Although these schemes have seen extensive development and use, certain drawbacks of these methods have been well documented. For example, the explicit versions of these two methods have a very stringent stability criterion, which requires that the time step be reduced as the order of the solver increases for a given simulation on a given mesh. The research presented in this dissertation builds upon the work of Chang, who developed a fourth-order CESE scheme to solve a scalar one-dimensional hyperbolic partial differential equation. The completed research has resulted in two key deliverables. The first is a detailed derivation of high-order CESE methods on unstructured meshes for solving the conservation laws in two- and three-dimensional spaces. The second is the implementation of these numerical methods in a computer code.
For code development, a one-dimensional solver for the Euler equations was developed. This work is an extension of Chang's work on the fourth-order CESE method for solving a one-dimensional scalar convection equation. A generic formulation for the nth-order CESE method, where n ≥ 4, was derived, and numerical implementation of the scheme confirmed that the order of convergence was consistent with the order of the scheme. For the two- and three-dimensional solvers, SOLVCON was used as the basic framework for code implementation. A new solver kernel for the fourth-order CESE method has been developed and integrated into the framework provided by SOLVCON. The main part of SOLVCON, which deals with unstructured meshes and parallel computing, remains intact; this includes the SOLVCON code for data transmission between computer nodes for high-performance computing (HPC). To validate and verify the newly developed high-order CESE algorithms, several one-, two- and three-dimensional simulations were conducted. For the arbitrary-order, one-dimensional CESE solver, three sets of governing equations were selected for simulation: (i) the linear convection equation, (ii) the linear acoustic equations, and (iii) the nonlinear Euler equations. All three systems of equations were used to verify the order of convergence through mesh refinement. In addition, the Euler equations were used to solve the Shu-Osher and blast-wave problems. These two simulations demonstrated that the new high-order CESE methods can accurately resolve discontinuities in the flow field. For the two-dimensional, fourth-order CESE solver, the Euler equations were employed in four different test cases. The first case was used to verify the order of convergence through mesh refinement. The next three cases demonstrated the ability of the new solver to accurately resolve discontinuities in the flows.
This was demonstrated through: (i) the interaction between acoustic waves and an entropy pulse, (ii) supersonic flow over a circular blunt body, and (iii) supersonic flow over a guttered wedge. To validate and verify the three-dimensional, fourth-order CESE solver, two different simulations were selected. The first used the linear convection equations to demonstrate fourth-order convergence. The second used the Euler equations to simulate supersonic flow over a spherical body to demonstrate the scheme's ability to accurately resolve shocks. All test cases used are well-known benchmark problems, and as such there are multiple sources available to validate the numerical results. Furthermore, the simulations showed that the high-order CESE solver was stable at a CFL number near unity.
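The mesh-refinement verification used throughout such studies rests on the standard observed-order formula p = log(e_coarse/e_fine) / log(h_coarse/h_fine). A minimal sketch, with illustrative errors that follow a fourth-order trend:

```python
import numpy as np

# Observed order of accuracy from a mesh-refinement study:
# p = log(e_coarse / e_fine) / log(h_coarse / h_fine).
# Illustrative errors following a fourth-order trend e = 2 * h**4.
h = np.array([1 / 10, 1 / 20, 1 / 40, 1 / 80])
e = 2.0 * h**4

p = np.log(e[:-1] / e[1:]) / np.log(h[:-1] / h[1:])
print(np.round(p, 2))   # each successive mesh pair gives 4.0
```

In practice e comes from a norm of the difference against an exact or reference solution, and p approaching 4 under refinement is the evidence of fourth-order convergence.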
NASA Astrophysics Data System (ADS)
Sandoval, J. H.; Bellotti, F. F.; Yamashita, M. T.; Frederico, T.; Fedorov, D. V.; Jensen, A. S.; Zinner, N. T.
2018-03-01
The quantum mechanical three-body problem is a source of continuing interest due to its complexity and not least due to the presence of fascinating solvable cases. The prime example is the Efimov effect, where infinitely many bound states of identical bosons can arise at the threshold where the two-body problem has zero binding energy. An important aspect of the Efimov effect is the role of spatial dimensionality; it has been observed in three-dimensional systems, yet it is believed to be impossible in two dimensions. Using modern experimental techniques, it is possible to engineer trap geometry and thus address the intricate nature of quantum few-body physics as a function of dimensionality. Here we present a framework for studying the three-body problem as one (continuously) changes the dimensionality of the system all the way from three, through two, and down to a single dimension. This is done by considering the Efimov-favorable case of a mass-imbalanced system and with an external confinement provided by a typical experimental case with a (deformed) harmonic trap.
A Two-Dimensional Linear Bicharacteristic FDTD Method
NASA Technical Reports Server (NTRS)
Beggs, John H.
2002-01-01
The linear bicharacteristic scheme (LBS) was originally developed to improve unsteady solutions in computational acoustics and aeroacoustics. The LBS has previously been extended to treat lossy materials for one-dimensional problems. It is a classical leapfrog algorithm, but is combined with upwind bias in the spatial derivatives. This approach preserves the time-reversibility of the leapfrog algorithm, which results in no dissipation, and it permits more flexibility through the ability to adopt a characteristic-based method. The use of characteristic variables allows the LBS to include the Perfectly Matched Layer boundary condition with no added storage or complexity. The LBS offers a central storage approach with lower dispersion than the Yee algorithm, and it generalizes much more easily to nonuniform grids. It has previously been applied to two- and three-dimensional free-space electromagnetic propagation and scattering problems. This paper extends the LBS to the two-dimensional case. Results are presented for point source radiation problems, and the FDTD algorithm is chosen as a convenient reference for comparison.
Two-Dimensional Failure Waves and Ignition Fronts in Premixed Combustion
NASA Technical Reports Server (NTRS)
Vedarajan, T. G.; Buckmaster, J.; Ronney, P.
1998-01-01
This paper is a continuation of our work on edge-flames in premixed combustion. An edge-flame is a two-dimensional structure constructed from a one-dimensional configuration that has two stable solutions (bistable equilibrium). Edge-flames can display wavelike behavior, advancing as ignition fronts or retreating as failure waves. Here we consider two one-dimensional configurations: twin deflagrations in a straining flow generated by the counterflow of fresh streams of mixture, and a single deflagration subject to radiation losses. The edge-flames constructed from the first configuration have positive or negative speeds, according to the value of the strain rate, but our numerical solutions strongly suggest that only positive speeds (corresponding to ignition fronts) can exist for the second configuration. We show that this phenomenon can also occur in diffusion flames when the Lewis numbers are small. We also discuss the asymptotics of the one-dimensional twin-deflagration configuration, an overlooked problem from the 1970s.
Knopman, Debra S.; Voss, Clifford I.; Garabedian, Stephen P.
1991-01-01
Tests of a one-dimensional sampling design methodology on measurements of bromide concentration collected during the natural gradient tracer test conducted by the U.S. Geological Survey on Cape Cod, Massachusetts, demonstrate its efficacy for field studies of solute transport in groundwater and the utility of one-dimensional analysis. The methodology was applied to design of sparse two-dimensional networks of fully screened wells typical of those often used in engineering practice. In one-dimensional analysis, designs consist of the downstream distances to rows of wells oriented perpendicular to the groundwater flow direction and the timing of sampling to be carried out on each row. The power of a sampling design is measured by its effectiveness in simultaneously meeting objectives of model discrimination, parameter estimation, and cost minimization. One-dimensional models of solute transport, differing in processes affecting the solute and assumptions about the structure of the flow field, were considered for description of tracer cloud migration. When fitting each model using nonlinear regression, additive and multiplicative error forms were allowed for the residuals which consist of both random and model errors. The one-dimensional single-layer model of a nonreactive solute with multiplicative error was judged to be the best of those tested. Results show the efficacy of the methodology in designing sparse but powerful sampling networks. Designs that sample five rows of wells at five or fewer times in any given row performed as well for model discrimination as the full set of samples taken up to eight times in a given row from as many as 89 rows. Also, designs for parameter estimation judged to be good by the methodology were as effective in reducing the variance of parameter estimates as arbitrary designs with many more samples. 
Results further showed that estimates of velocity and longitudinal dispersivity in one-dimensional models based on data from only five rows of fully screened wells each sampled five or fewer times were practically equivalent to values determined from moments analysis of the complete three-dimensional set of 29,285 samples taken during 16 sampling times.
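The moments analysis referred to above estimates velocity from the growth of the plume's first spatial moment and dispersion from the growth of its variance. A minimal one-dimensional sketch with an analytical Gaussian plume (all parameter values illustrative, not the Cape Cod data):

```python
import numpy as np

# One-dimensional advective-dispersive plume from an instantaneous point
# source: c(x,t) = M / sqrt(4*pi*D*t) * exp(-(x - v*t)^2 / (4*D*t)).
v_true, D_true, M = 0.4, 0.05, 1.0        # illustrative velocity, dispersion, mass
x = np.linspace(-5.0, 40.0, 4001)
dx = x[1] - x[0]

def plume(t):
    return M / np.sqrt(4 * np.pi * D_true * t) * np.exp(
        -(x - v_true * t) ** 2 / (4 * D_true * t))

def moments(c):
    m0 = np.sum(c) * dx                   # zeroth moment (mass)
    mu = np.sum(x * c) * dx / m0          # center of mass
    var = np.sum((x - mu) ** 2 * c) * dx / m0
    return mu, var

t1, t2 = 20.0, 50.0
mu1, var1 = moments(plume(t1))
mu2, var2 = moments(plume(t2))

v_est = (mu2 - mu1) / (t2 - t1)           # first moment grows as v*t
D_est = (var2 - var1) / (2 * (t2 - t1))   # variance grows as 2*D*t
print(round(v_est, 3), round(D_est, 3))
```

With field data the integrals become sums over sampled concentrations, which is why sparse but well-placed sampling rows can recover v and D nearly as well as an exhaustive survey.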
Uncluttered Single-Image Visualization of Vascular Structures using GPU and Integer Programming
Won, Joong-Ho; Jeon, Yongkweon; Rosenberg, Jarrett; Yoon, Sungroh; Rubin, Geoffrey D.; Napel, Sandy
2013-01-01
Direct projection of three-dimensional branching structures, such as networks of cables, blood vessels, or neurons, onto a 2D image creates the illusion of intersecting structural parts and poses challenges for understanding and communication. We present a method for visualizing such structures, and demonstrate its utility in visualizing the abdominal aorta and its branches, whose tomographic images might be obtained by computed tomography or magnetic resonance angiography, in a single two-dimensional stylistic image, without overlaps among branches. The visualization method, termed uncluttered single-image visualization (USIV), involves optimization of geometry. This paper proposes a novel optimization technique that exploits an interesting connection between the optimization problem underlying USIV and the protein structure prediction problem. Adopting the integer linear programming-based formulation for the protein structure prediction problem, we tested the proposed technique using 30 visualizations produced from five patient scans with representative anatomical variants in the abdominal aortic vessel tree. The novel technique can exploit commodity-level parallelism, enabling use of general-purpose graphics processing unit (GPGPU) technology that yields a significant speedup. Comparison with the optimization technique previously reported suggests that, in most aspects, the quality of the visualization is comparable, with a significant gain in the computation time of the algorithm. PMID:22291148
NASA Technical Reports Server (NTRS)
Datta, Anubhav; Johnson, Wayne R.
2009-01-01
This paper has two objectives. The first is to formulate a three-dimensional finite element model for the dynamic analysis of helicopter rotor blades. The second is to implement and analyze a parallel, scalable Krylov solver based on dual-primal iterative substructuring for the solution of the 3-D FEM analysis. The numerical and parallel scalability of the solver is studied using two prototype problems: one for ideal hover (symmetric) and one for transient forward flight (non-symmetric), both carried out on up to 48 processors. In both hover and forward flight conditions, a perfect linear speed-up is observed, for a given problem size, up to the point of substructure optimality. Substructure optimality and the linear parallel speed-up range are both shown to depend on the problem size as well as on the selection of the coarse problem. With a larger problem size, linear speed-up is restored up to the new substructure optimality. The solver also scales with problem size, even though this conclusion is premature given the small prototype grids considered in this study.
Spillover, nonlinearity, and flexible structures
NASA Technical Reports Server (NTRS)
Bass, Robert W.; Zes, Dean
1991-01-01
Many systems whose evolution in time is governed by partial differential equations (PDEs) are linearized around a known equilibrium before Computer Aided Control Engineering (CACE) is considered. In this case, there are infinitely many independent vibrational modes, and it is intuitively evident on physical grounds that infinitely many actuators would be needed in order to control all modes. A more precise, general formulation of this grave difficulty (the spillover problem) is due to A.V. Balakrishnan. A possible route to circumvention of this difficulty lies in leaving the PDE in its original nonlinear form, and adding the essentially finite-dimensional control action prior to linearization. One possibly applicable technique is the Liapunov-Schmidt rigorous reduction of singular infinite-dimensional implicit function problems to finite-dimensional implicit function problems. Omitting details of Banach space rigor, the formalities of this approach are given.
NASA Technical Reports Server (NTRS)
Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.
1991-01-01
The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Version C is a three dimensional numerical electromagnetic scattering code based upon the Finite Difference Time Domain Technique (FDTD). The supplied version of the code is one version of our current three dimensional FDTD code set. This manual provides a description of the code and corresponding results for several scattering problems. The manual is organized into fourteen sections: introduction, description of the FDTD method, operation, resource requirements, Version C code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file (COMMONC.FOR), a section briefly discussing Radar Cross Section (RCS) computations, a section discussing some scattering results, a sample problem setup section, a new problem checklist, references and figure titles.
NASA Technical Reports Server (NTRS)
Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.
1991-01-01
The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Version D is a three dimensional numerical electromagnetic scattering code based upon the Finite Difference Time Domain Technique (FDTD). The supplied version of the code is one version of our current three dimensional FDTD code set. This manual provides a description of the code and corresponding results for several scattering problems. The manual is organized into fourteen sections: introduction, description of the FDTD method, operation, resource requirements, Version D code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file (COMMOND.FOR), a section briefly discussing Radar Cross Section (RCS) computations, a section discussing some scattering results, a sample problem setup section, a new problem checklist, references and figure titles.
NASA Technical Reports Server (NTRS)
Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.
1992-01-01
The Penn State Finite Difference Time Domain (FDTD) Electromagnetic Scattering Code Version A is a three dimensional numerical electromagnetic scattering code based on the Finite Difference Time Domain technique. The supplied version of the code is one version of our current three dimensional FDTD code set. The manual provides a description of the code and the corresponding results for the default scattering problem. The manual is organized into 14 sections: introduction, description of the FDTD method, operation, resource requirements, Version A code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file (COMMONA.FOR), a section briefly discussing radar cross section (RCS) computations, a section discussing the scattering results, a sample problem setup section, a new problem checklist, references, and figure titles.
NASA Technical Reports Server (NTRS)
Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.
1991-01-01
The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Version B is a three dimensional numerical electromagnetic scattering code based upon the Finite Difference Time Domain Technique (FDTD). The supplied version of the code is one version of our current three dimensional FDTD code set. This manual provides a description of the code and corresponding results for several scattering problems. The manual is organized into fourteen sections: introduction, description of the FDTD method, operation, resource requirements, Version B code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file (COMMONB.FOR), a section briefly discussing Radar Cross Section (RCS) computations, a section discussing some scattering results, a sample problem setup section, a new problem checklist, references and figure titles.
A selection principle for Benard-type convection
NASA Technical Reports Server (NTRS)
Knightly, G. H.; Sather, D.
1985-01-01
In a Benard-type convection problem, the stationary flows of an infinite layer of fluid lying between two rigid horizontal walls and heated uniformly from below are determined. As the temperature difference across the layer increases beyond a certain value, other convective motions appear. These motions are often cellular in character in that their streamlines are confined to certain well-defined cells having, for example, the shape of rolls or hexagons. A selection principle that explains why hexagonal cells seem to be preferred for certain ranges of the parameters is formulated. An operator-theoretical formulation of one generalized Benard problem is given. The infinite dimensional problem is reduced to one of solving a finite dimensional system of equations, namely, the selection equations. These equations are solved and a linearized stability analysis of the resultant stationary flows is presented.
A selection principle in Benard-type convection
NASA Technical Reports Server (NTRS)
Knightly, G. H.; Sather, D.
1983-01-01
In a Benard-type convection problem, the stationary flows of an infinite layer of fluid lying between two rigid horizontal walls and heated uniformly from below are determined. As the temperature difference across the layer increases beyond a certain value, other convective motions appear. These motions are often cellular in character in that their streamlines are confined to certain well-defined cells having, for example, the shape of rolls or hexagons. A selection principle that explains why hexagonal cells seem to be preferred for certain ranges of the parameters is formulated. An operator-theoretical formulation of one generalized Benard problem is given. The infinite dimensional problem is reduced to one of solving a finite dimensional system of equations, namely, the selection equations. These equations are solved and a linearized stability analysis of the resultant stationary flows is presented.
NASA Technical Reports Server (NTRS)
Anderson, B. H.; Benson, T. J.
1983-01-01
A supersonic three-dimensional viscous forward-marching computer design code called PEPSIS is used to obtain a numerical solution of the three-dimensional problem of the interaction of a glancing sidewall oblique shock wave and a turbulent boundary layer. Very good results are obtained for a test case that was run to investigate the use of the wall-function boundary-condition approximation for a highly complex three-dimensional shock-boundary layer interaction. Two additional test cases (coarse mesh and medium mesh) are run to examine the question of near-wall resolution when no-slip boundary conditions are applied. A comparison with experimental data shows that the PEPSIS code gives excellent results in general and is practical for three-dimensional supersonic inlet calculations.
Optimization of the lithium/thionyl chloride battery
NASA Technical Reports Server (NTRS)
White, Ralph E.
1987-01-01
The progress which has been made in modeling the lithium/thionyl chloride cell over the past year and proposed research for the coming year are discussed. A one-dimensional mathematical model for a lithium/thionyl chloride cell has been developed and used to investigate methods of improving cell performance. During the course of the work a problem was detected with the banded solver being used, and it was replaced with a more reliable one. Future work may take one of two directions. The one-dimensional model could be augmented to include additional features and to investigate the cell temperature behavior in more detail, or a simplified two-dimensional model for the spirally wound design of this battery could be developed to investigate the heat flow within the cell.
Asymptotics of empirical eigenstructure for high dimensional spiked covariance.
Wang, Weichen; Fan, Jianqing
2017-06-01
We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.
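The upward bias of the leading sample eigenvalue described in the abstract can be reproduced in a few lines. The sketch below (all parameter choices are illustrative, not from the paper) compares the average top eigenvalue of a spiked sample covariance against the standard first-order prediction lambda * (1 + gamma/(lambda - 1)), valid for a spike above the phase-transition threshold 1 + sqrt(gamma), where gamma = p/n:

```python
import numpy as np

# Spiked covariance: one population eigenvalue `spike`, the rest equal to 1.
p, n, spike = 400, 200, 25.0
gamma = p / n

rng = np.random.default_rng(0)
tops = []
for _ in range(20):
    X = rng.standard_normal((n, p))
    X[:, 0] *= np.sqrt(spike)          # inflate the variance along one axis
    S = X.T @ X / n                    # sample covariance (p > n: rank-deficient)
    tops.append(np.linalg.eigvalsh(S)[-1])

avg_top = np.mean(tops)
# First-order asymptotic prediction for the top sample eigenvalue:
predicted = spike * (1 + gamma / (spike - 1))
```

With these numbers the sample eigenvalue overshoots the population spike of 25 by roughly two units, which is exactly the kind of bias the S-POET correction targets.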
Decentralized Dimensionality Reduction for Distributed Tensor Data Across Sensor Networks.
Liang, Junli; Yu, Guoyang; Chen, Badong; Zhao, Minghua
2016-11-01
This paper develops a novel decentralized dimensionality reduction algorithm for the distributed tensor data across sensor networks. The main contributions of this paper are as follows. First, conventional centralized methods, which utilize entire data to simultaneously determine all the vectors of the projection matrix along each tensor mode, are not suitable for the network environment. Here, we relax the simultaneous processing manner into the one-vector-by-one-vector (OVBOV) manner, i.e., determining the projection vectors (PVs) related to each tensor mode one by one. Second, we prove that in the OVBOV manner each PV can be determined without modifying any tensor data, which simplifies corresponding computations. Third, we cast the decentralized PV determination problem as a set of subproblems with consensus constraints, so that it can be solved in the network environment only by local computations and information communications among neighboring nodes. Fourth, we introduce the null space and transform the PV determination problem with complex orthogonality constraints into an equivalent hidden convex one without any orthogonality constraint, which can be solved by the Lagrange multiplier method. Finally, experimental results are given to show that the proposed algorithm is an effective dimensionality reduction scheme for the distributed tensor data across the sensor networks.
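The one-vector-by-one-vector idea can be illustrated in a centralized toy form (the paper's contribution is the decentralized, consensus-based version, which this sketch does not attempt): each projection vector is extracted in turn by power iteration, and the covariance is deflated before the next vector is sought. The function name and parameters are mine:

```python
import numpy as np

def ovbov_pvs(X, k, iters=300):
    """Determine k projection vectors one by one (OVBOV-style):
    each new vector is the leading eigenvector of the sample covariance
    deflated by the directions already found."""
    n, d = X.shape
    C = X.T @ X / n
    rng = np.random.default_rng(1)
    vs = []
    for _ in range(k):
        v = rng.standard_normal(d)
        v /= np.linalg.norm(v)
        for _ in range(iters):                  # power iteration
            v = C @ v
            v /= np.linalg.norm(v)
        vs.append(v)
        C -= (v @ C @ v) * np.outer(v, v)       # deflate the found direction
    return np.array(vs)
```

The appeal of the OVBOV ordering is that each step only needs matrix-vector products, which is also what makes a distributed, message-passing variant conceivable.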
FEMFLOW3D; a finite-element program for the simulation of three-dimensional aquifers; version 1.0
Durbin, Timothy J.; Bond, Linda D.
1998-01-01
This document also includes model validation, source code, and example input and output files. Model validation was performed using four test problems. For each test problem, the results of a model simulation with FEMFLOW3D were compared with either an analytic solution or the results of an independent numerical approach. The source code, written in the ANSI X3.9-1978 FORTRAN standard, and the complete input and output of an example problem are listed in the appendixes.
NASA Technical Reports Server (NTRS)
Lakin, W. D.
1981-01-01
The use of integrating matrices in solving differential equations associated with rotating beam configurations is examined. In vibration problems, by expressing the equations of motion of the beam in matrix notation, utilizing the integrating matrix as an operator, and applying the boundary conditions, the spatial dependence is removed from the governing partial differential equations and the resulting ordinary differential equations can be cast into standard eigenvalue form. Integrating matrices are derived based on two dimensional rectangular grids with arbitrary grid spacings allowed in one direction. The derivation of higher dimensional integrating matrices is the initial step in the generalization of the integrating matrix methodology to vibration and stability problems involving plates and shells.
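A minimal sketch of an integrating matrix on a one-dimensional grid may help fix the idea (trapezoidal weights; the construction and names here are illustrative, not the paper's higher-order operators):

```python
import numpy as np

def integrating_matrix(x):
    """Integrating matrix M on grid x (trapezoidal rule):
    (M @ f)[j] approximates the integral of f from x[0] to x[j].
    Arbitrary (non-uniform) spacing is allowed."""
    n = len(x)
    M = np.zeros((n, n))
    for j in range(1, n):
        M[j] = M[j - 1]            # running sum of the panels so far
        h = x[j] - x[j - 1]
        M[j, j - 1] += h / 2       # trapezoidal weights for panel j
        M[j, j] += h / 2
    return M
```

Applying such a matrix repeatedly as an operator, and then imposing the boundary conditions, is what removes the spatial derivatives from the governing equations and leaves a standard algebraic eigenvalue problem.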
Numerical Recovering of a Speed of Sound by the BC-Method in 3D
NASA Astrophysics Data System (ADS)
Pestov, Leonid; Bolgova, Victoria; Danilin, Alexandr
We develop a numerical algorithm for solving the inverse problem for the wave equation by the Boundary Control method. The problem, which we refer to as the forward one, is an initial boundary value problem for the wave equation with zero initial data in a bounded domain. The inverse problem is to find the speed of sound c(x) from measurements of waves induced by a set of boundary sources. The time of observation is assumed to be greater than twice the acoustic radius of the domain. The numerical algorithm for sound reconstruction is based on two steps. The first one is to find a (sufficiently large) number of controls {f_j} (the basic control is defined by the position of the source and some time delay) that generate the same number of known harmonic functions, i.e. Δ {u_j}(.,T) = 0, where {u_j} is the wave generated by the control {f_j}. After that, a linear integral equation with respect to the speed of sound is obtained. A piecewise constant model of the speed is used. Results of numerical testing of a three-dimensional model are presented.
Analytical solutions of the two-dimensional Dirac equation for a topological channel intersection
NASA Astrophysics Data System (ADS)
Anglin, J. R.; Schulz, A.
2017-01-01
Numerical simulations in a tight-binding model have shown that an intersection of topologically protected one-dimensional chiral channels can function as a beam splitter for noninteracting fermions on a two-dimensional lattice [Qiao, Jung, and MacDonald, Nano Lett. 11, 3453 (2011), 10.1021/nl201941f; Qiao et al., Phys. Rev. Lett. 112, 206601 (2014), 10.1103/PhysRevLett.112.206601]. Here we confirm this result analytically in the corresponding continuum k .p model, by solving the associated two-dimensional Dirac equation, in the presence of a "checkerboard" potential that provides a right-angled intersection between two zero-line modes. The method by which we obtain our analytical solutions is systematic and potentially generalizable to similar problems involving intersections of one-dimensional systems.
An adaptive front tracking technique for three-dimensional transient flows
NASA Astrophysics Data System (ADS)
Galaktionov, O. S.; Anderson, P. D.; Peters, G. W. M.; van de Vosse, F. N.
2000-01-01
An adaptive technique, based on both surface stretching and surface curvature analysis, for tracking strongly deforming fluid volumes in three-dimensional flows is presented. The efficiency and accuracy of the technique are demonstrated for two- and three-dimensional flow simulations. For the two-dimensional test example, the results are compared with results obtained using a different tracking approach based on the advection of a passive scalar. Although both techniques find roughly the same structures, the resolution of the front tracking technique is much higher. In the three-dimensional test example, a spherical blob is tracked in a chaotic mixing flow. For this problem, the accuracy of the adaptive tracking is demonstrated by the volume conservation of the advected blob. Adaptive front tracking is suitable for simulation of the initial stages of fluid mixing, where the interfacial area can grow exponentially with time. The efficiency of the algorithm benefits significantly from parallelization of the code.
Approximation and Numerical Analysis of Nonlinear Equations of Evolution.
1980-01-31
dominant convective terms, or Stefan type problems such as the flow of fluids through porous media or the melting and freezing of ice. Such problems...means of formulating time-dependent Stefan problems was initiated. Classes of problems considered here include the one-phase and two-phase Stefan ...some new numerical methods were developed for two dimensional, two-phase Stefan problems with time dependent boundary conditions. A variety of example
Quantum solution for the one-dimensional Coulomb problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nunez-Yepez, H. N.; Salas-Brito, A. L.; Solis, Didier A.
2011-06-15
The one-dimensional hydrogen atom has been a much studied system with a wide range of applications. Since the pioneering work of Loudon [R. Loudon, Am. J. Phys. 27, 649 (1959)], a number of different features related to the nature of the eigenfunctions have been found. However, many of the claims made throughout the years in this regard are not correct, such as the existence of only odd eigenstates or of an infinite binding-energy ground state. We explicitly show that the one-dimensional hydrogen atom does not admit a ground state of infinite binding energy and that the one-dimensional Coulomb potential is not its own supersymmetric partner. Furthermore, we argue that at the root of many such false claims lies the omission of a superselection rule that effectively separates the right side from the left side of the singularity of the Coulomb potential.
Multiexponential models of (1+1)-dimensional dilaton gravity and Toda-Liouville integrable models
NASA Astrophysics Data System (ADS)
de Alfaro, V.; Filippov, A. T.
2010-01-01
We study general properties of a class of two-dimensional dilaton gravity (DG) theories with potentials containing several exponential terms. We isolate and thoroughly study a subclass of such theories in which the equations of motion reduce to Toda and Liouville equations. We show that the equation parameters must satisfy a certain constraint, which we find and solve for the most general multiexponential model. It follows from the constraint that integrable Toda equations in DG theories generally cannot appear without accompanying Liouville equations. The most difficult problem in the two-dimensional Toda-Liouville (TL) DG is to solve the energy and momentum constraints. We discuss this problem using the simplest examples and identify the main obstacles to solving it analytically. We then consider a subclass of integrable two-dimensional theories where scalar matter fields satisfy the Toda equations and the two-dimensional metric is trivial. We consider the simplest case in some detail. In this example, we show how to obtain the general solution. We also show how to simply derive wavelike solutions of general TL systems. In the DG theory, these solutions describe nonlinear waves coupled to gravity and also static states and cosmologies. For static states and cosmologies, we propose and study a more general one-dimensional TL model typically emerging in one-dimensional reductions of higher-dimensional gravity and supergravity theories. We pay special attention to making the analytic structure of the solutions of the Toda equations as simple and transparent as possible.
Propagation in and scattering from a matched metamaterial having a zero index of refraction.
Ziolkowski, Richard W
2004-10-01
Planar metamaterials that exhibit a zero index of refraction have been realized experimentally by several research groups. Their existence stimulated the present investigation, which details the properties of a passive, dispersive metamaterial that is matched to free space and has an index of refraction equal to zero. Thus, unlike previous zero-index investigations, both the permittivity and permeability are zero here at a specified frequency. One-, two-, and three-dimensional source problems are treated analytically. The one- and two-dimensional source problem results are confirmed numerically with finite difference time domain (FDTD) simulations. The FDTD simulator is also used to treat the corresponding one- and two-dimensional scattering problems. It is shown that in both the source and scattering configurations the electromagnetic fields in a matched zero-index medium take on a static character in space, yet remain dynamic in time, in such a manner that the underlying physics remains associated with propagating fields. Zero phase variation at various points in the zero-index medium is demonstrated once steady-state conditions are obtained. These behaviors are used to illustrate why a zero-index metamaterial, such as a zero-index electromagnetic band-gap structured medium, significantly narrows the far-field pattern associated with an antenna located within it. They are also used to show how a matched zero-index slab could be used to transform curved wave fronts into planar ones.
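The matched zero-index behavior at normal incidence can be checked with an elementary Fresnel/Airy calculation rather than FDTD. The sketch below (conventions and names are mine) shows that a slab with equal permittivity and permeability reflects nothing, and that its transmitted phase advance n*k0*d vanishes as n approaches 0:

```python
import numpy as np

def slab_response(eps, mu, k0, d):
    """Normal-incidence reflection r and transmission t of a uniform slab
    (thickness d) in vacuum, via Fresnel coefficients plus the Airy sum."""
    n = np.sqrt(eps * mu + 0j)          # refractive index
    Z = np.sqrt((mu + 0j) / eps)        # relative wave impedance (vacuum: 1)
    r12 = (Z - 1) / (Z + 1)             # vacuum -> slab Fresnel coefficient
    ph = np.exp(1j * n * k0 * d)        # one-pass phase factor
    denom = 1 - r12**2 * ph**2          # multiple-reflection (Airy) sum
    r = r12 * (1 - ph**2) / denom
    t = (1 - r12**2) * ph / denom
    return r, t
```

When eps = mu the impedance is exactly 1, so r12 = 0 and t reduces to the bare phase factor: the slab is reflectionless at any index, and the phase accumulated across it goes to zero with n, which is the static-in-space, dynamic-in-time behavior the paper describes.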
NASA Astrophysics Data System (ADS)
Khachaturov, R. V.
2014-06-01
A mathematical model of X-ray reflection and scattering by multilayered nanostructures in the quasi-optical approximation is proposed. X-ray propagation and the electric field distribution inside the multilayered structure are considered with allowance for refraction, which is taken into account via the second derivative with respect to the depth of the structure. This model is used to demonstrate the possibility of solving inverse problems in order to determine the characteristics of irregularities not only over the depth (as in the one-dimensional problem) but also over the length of the structure. An approximate combinatorial method for system decomposition and composition is proposed for solving the inverse problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anton, Luis; MartI, Jose M; Ibanez, Jose M
2010-05-01
We obtain renormalized sets of right and left eigenvectors of the flux vector Jacobians of the relativistic MHD equations, which are regular and span a complete basis in any physical state, including degenerate ones. The renormalization procedure relies on the characterization of the degeneracy types in terms of the normal and tangential components of the magnetic field to the wave front in the fluid rest frame. Proper expressions of the renormalized eigenvectors in conserved variables are obtained through the corresponding matrix transformations. Our work completes previous analyses that present different sets of right eigenvectors for non-degenerate and degenerate states, and can be seen as a relativistic generalization of earlier work performed in classical MHD. Based on the full wave decomposition (FWD) provided by the renormalized set of eigenvectors in conserved variables, we have also developed a linearized (Roe-type) Riemann solver. Extensive testing against one- and two-dimensional standard numerical problems allows us to conclude that our solver is very robust. When compared with a family of simpler solvers that avoid the knowledge of the full characteristic structure of the equations in the computation of the numerical fluxes, our solver turns out to be less diffusive than HLL and HLLC, and comparable in accuracy to the HLLD solver. The amount of operations needed by the FWD solver makes it less efficient computationally than those of the HLL family in one-dimensional problems. However, its relative efficiency increases in multidimensional simulations.
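For orientation, one of the simpler solvers the abstract compares against can be sketched in its classical (non-relativistic) form. This is a textbook-style HLL flux for the 1D Euler equations with simple wave-speed estimates, not the authors' relativistic FWD solver:

```python
import numpy as np

GAMMA = 1.4  # ideal-gas adiabatic index

def euler_flux(U):
    """Physical flux of the 1D Euler equations, U = (rho, rho*u, E)."""
    rho, m, E = U
    u = m / rho
    p = (GAMMA - 1) * (E - 0.5 * rho * u**2)
    return np.array([m, m * u + p, (E + p) * u])

def hll_flux(UL, UR):
    """HLL approximate Riemann flux: a single intermediate state
    bracketed by crude left/right wave-speed estimates."""
    def u_and_c(U):
        rho, m, E = U
        u = m / rho
        p = (GAMMA - 1) * (E - 0.5 * rho * u**2)
        return u, np.sqrt(GAMMA * p / rho)
    uL, cL = u_and_c(UL)
    uR, cR = u_and_c(UR)
    sL = min(uL - cL, uR - cR)
    sR = max(uL + cL, uR + cR)
    if sL >= 0:
        return euler_flux(UL)
    if sR <= 0:
        return euler_flux(UR)
    FL, FR = euler_flux(UL), euler_flux(UR)
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)
```

Averaging the whole Riemann fan into one state is what makes HLL robust but diffusive; solvers that resolve more waves (HLLC, HLLD, or the full wave decomposition above) recover contact and Alfven structure at extra cost.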
Well-posedness of the Cauchy problem for models of large amplitude internal waves
NASA Astrophysics Data System (ADS)
Guyenne, Philippe; Lannes, David; Saut, Jean-Claude
2010-02-01
We consider in this paper the 'shallow-water/shallow-water' asymptotic model obtained in Choi and Camassa (1999 J. Fluid Mech. 396 1-36), Craig et al (2005 Commun. Pure. Appl. Math. 58 1587-641) (one-dimensional interface) and Bona et al (2008 J. Math. Pures Appl. 89 538-66) (two-dimensional interface) from the two-layer system with rigid lid, for the description of large amplitude internal waves at the interface of two layers of immiscible fluids of different densities. For one-dimensional interfaces, this system is of hyperbolic type and its local well-posedness does not raise serious difficulties, although other issues (blow-up, loss of hyperbolicity, etc) turn out to be delicate. For two-dimensional interfaces, the system is nonlocal. Nevertheless, we prove that it conserves some properties of 'hyperbolic type' and show that the associated Cauchy problem is locally well posed in suitable Sobolev classes provided some natural restrictions are imposed on the data. These results are illustrated by numerical simulations with emphasis on the formation of shock waves.
Pairing phase diagram of three holes in the generalized Hubbard model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Navarro, O.; Espinosa, J.E.
Investigations of high-Tc superconductors suggest that the electronic correlation may play a significant role in the formation of pairs. Although the main interest is in the physics of two-dimensional highly correlated electron systems, one-dimensional models related to high temperature superconductivity are very popular due to the conjecture that properties of the 1D and 2D variants of certain models have common aspects. Within the models for correlated electron systems that attempt to capture the essential physics of high-temperature superconductors and parent compounds, the Hubbard model is one of the simplest. Here, the pairing problem of a three-electron system has been studied by using a real-space method and the generalized Hubbard Hamiltonian. This method includes the correlated hopping interactions as an extension of the previously proposed mapping method, and is based on mapping the correlated many-body problem onto an equivalent site- and bond-impurity tight-binding one in a higher dimensional space, where the problem was solved in a non-perturbative way. In a linear chain, the authors analyzed the pairing phase diagram of three correlated holes for different values of the Hamiltonian parameters. For some values of the hopping parameters they obtain an analytical solution for all kinds of interactions.
Fractional Steps methods for transient problems on commodity computer architectures
NASA Astrophysics Data System (ADS)
Krotkiewski, M.; Dabrowski, M.; Podladchikov, Y. Y.
2008-12-01
Fractional Steps methods are suitable for modeling transient processes that are central to many geological applications. Low memory requirements and modest computational complexity facilitate calculations on high-resolution three-dimensional models. An efficient implementation of Alternating Direction Implicit/Locally One-Dimensional schemes for an Opteron-based shared memory system is presented. The memory bandwidth usage, the main bottleneck on modern computer architectures, is specially addressed. High efficiency of above 2 GFlops per CPU is sustained for problems of 1 billion degrees of freedom. The optimized sequential implementation of all 1D sweeps is comparable in execution time to copying the used data in the memory. Scalability of the parallel implementation on up to 8 CPUs is close to perfect. Performing one timestep of the Locally One-Dimensional scheme on a system of 1000^3 unknowns on 8 CPUs takes only 11 s. We validate the LOD scheme using a computational model of an isolated inclusion subject to a constant far-field flux. Next, we study numerically the evolution of a diffusion front and the effective thermal conductivity of composites consisting of multiple inclusions and compare the results with predictions based on the differential effective medium approach. Finally, application of the developed parabolic solver is suggested for a real-world problem of fluid transport and reactions inside a reservoir.
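A minimal Locally One-Dimensional step for the 2D heat equation can be sketched as follows: each timestep splits into independent implicit tridiagonal sweeps along x and then along y, each solved with the Thomas algorithm (function names and parameters are mine, not from the paper's code):

```python
import numpy as np

def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system; a = sub-, b = main,
    c = super-diagonal (a[0] and c[-1] unused), d = right-hand side."""
    n = len(d)
    bb = b.astype(float).copy()
    dd = d.astype(float).copy()
    for i in range(1, n):                     # forward elimination
        w = a[i] / bb[i - 1]
        bb[i] -= w * c[i - 1]
        dd[i] -= w * dd[i - 1]
    x = np.empty(n)
    x[-1] = dd[-1] / bb[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = (dd[i] - c[i] * x[i + 1]) / bb[i]
    return x

def lod_step(T, r):
    """One LOD timestep for dT/dt = alpha * laplacian(T) on a square grid
    with zero Dirichlet boundaries; r = alpha * dt / h^2. The 2D implicit
    solve splits into independent tridiagonal sweeps along x, then y."""
    m = T.shape[0] - 2                        # interior points per direction
    a = np.full(m, -r)
    b = np.full(m, 1 + 2 * r)
    c = np.full(m, -r)
    Tn = T.copy()
    for j in range(1, T.shape[0] - 1):        # implicit x-direction sweep
        Tn[j, 1:-1] = thomas(a, b, c, Tn[j, 1:-1])
    for i in range(1, T.shape[1] - 1):        # implicit y-direction sweep
        Tn[1:-1, i] = thomas(a, b, c, Tn[1:-1, i])
    return Tn
```

Each sweep touches memory along one axis only, which is why the 1D passes can run at near memory-copy speed, and the rows (or columns) are independent, which is what parallelizes so well.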
Space plasma contactor research, 1988
NASA Technical Reports Server (NTRS)
Williams, John D.; Wilbur, Paul J.
1989-01-01
Results of experiments conducted on hollow cathode-based plasma contactors are reported. Specific tests in which attempts were made to vary plasma conditions in the simulated ionospheric plasma are described. Experimental results showing the effects of contactor flowrate and ion collecting surface size on contactor performance and contactor plasma plume geometry are presented. In addition to this work, one-dimensional solutions to spherical and cylindrical space-charge limited double-sheath problems are developed. A technique is proposed that can be used to apply these solutions to the problem of current flow through elongated double-sheaths that separate two cold plasmas. Two conference papers which describe the essential features of the plasma contacting process and present data that should facilitate calibration of comprehensive numerical models of the plasma contacting process are also included.
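The planar analogue of the space-charge-limited flow treated in the report is the classical Child-Langmuir law. The sketch below states it for orientation only (the report's spherical and cylindrical double-sheath solutions are not reproduced here; constants and names are mine):

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
QE = 1.602176634e-19      # elementary charge, C
ME = 9.1093837015e-31     # electron mass, kg

def child_langmuir(V, d):
    """Planar space-charge-limited electron current density (A/m^2)
    for voltage V across a vacuum gap of width d:
    J = (4/9) * eps0 * sqrt(2 e / m_e) * V^(3/2) / d^2."""
    return (4.0 / 9.0) * EPS0 * math.sqrt(2.0 * QE / ME) * V**1.5 / d**2
```

The characteristic V^(3/2)/d^2 scaling carries over, with geometry-dependent correction factors, to the spherical and cylindrical sheath solutions the report develops.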
Excitation of ship waves by a submerged object: New solution to the classical problem
NASA Astrophysics Data System (ADS)
Arzhannikov, A. V.; Kotelnikov, I. A.
2016-08-01
We have proposed a new method for solving the problem of ship waves excited on the surface of a nonviscous liquid by a submerged object that moves at a variable speed. As a first application of this method, we have obtained a new solution to the classic problem of ship waves generated by a submerged ball that moves rectilinearly with constant velocity parallel to the equilibrium surface of the liquid. For this example, we have derived asymptotic expressions describing the vertical displacement of the liquid surface in the limit of small and large values of the Froude number. The exact solution is presented in the form of two terms, each of which reduces to one-dimensional integrals. One term describes the "Bernoulli hump" and the other the "Kelvin wedge." As a second example, we considered vertical oscillation of the submerged ball. In this case, the solution leads to the calculation of a one-dimensional integral and describes surface waves propagating from the epicenter above the ball.
NASA Astrophysics Data System (ADS)
Antonella Dino, Giovanna; Clemente, Paolo; De Luca, Domenico Antonio; Lasagna, Manuela
2013-04-01
Residual sludge from dimensional stone working plants (diamond framesaw and gangsaw with abrasive shots processes) represents a problem for the stone industry. The cost of landfilling it amounts to more than 3% of the operating costs of dimensional stone working plants. Furthermore, its strict classification as waste to be dumped (CER code 010413) conflicts with the EU principles of "resource preservation" and "waste recovery". The main problems related to its management are: size distribution (fine, potentially asphyxiant material), the presence of heavy metals (due to the working processes) and TPH content (due to machine oil losses). Residual sludge, considered according to Italian Legislative Decree n. 152/06, can be used, as waste, for environmental restoration of derelict land or in cement plants. It is also possible to consider its systematic treatment in consortium plants for the production of Secondary Raw Materials (SRM) or "New Products" (NP, e.g. artificial loam, waterproofing materials). The research shows that, on the basis of correct sludge management, treatment and characterization, economic and environmental benefits are possible (NP or SRM instead of waste to dump). To identify different applications of residual sludge in civil and environmental contexts, a geotechnical characterization (size distribution, permeability, Atterberg limits, cohesion and friction angle evaluation, Proctor soil test) was carried out. The geotechnical tests were conducted on sludge as such (a.s.) and on three different mixes: - Mix 1 - bentonite clay (5-10%) added to sludge a.s. (90-95%); - Mix 2 - sludge a.s. (90-80-70%) added to coarse materials coming from crushed dimensional stones (10-20-30%); - Mix 3 - sludge a.s. (50-70%) mixed with sand, compost and natural loam (50-30% mixture of sand, compost and natural loam).
The results obtained from the four sets of tests were fundamental to evaluate: - the characteristics of the original materials; - the possibility of obtaining new products for dump waterproofing (Mix 1), for which the permeability has to be at most 10^-9 m/s; - the opportunity to use them for land rehabilitation and reclamation (fine and coarse materials to fill quarry or civil works pits - Mix 2; artificial loam for quarry and civil works revegetation - Mix 3). For Mix 3, phytotoxicity tests were performed in cooperation with the Agricultural Dept. of the University of Turin. In this case the "cradle to grave" principle would be applied: "waste" coming from dimensional stone working plants could return to quarries. The results of the geotechnical tests are promising, but to exploit sludge mixtures in civil and environmental applications it is necessary to guarantee, by means of appropriate chemical analyses, that there are no problems connected to soil, water and air pollution (linked to heavy metal and TPH contents). Magnetic or hydrogravimetric separation can be performed to reduce the heavy metal content, while TPH reduction can be achieved by means of specific agronomic treatments (e.g. bioremediation). Several in situ tests will be performed to compare the laboratory results with the "pre-industrial" ones: the results obtained will be potentially useful for proposing some additions to the present Italian legislation.
On the dynamics of the Ising model of cooperative phenomena
Montroll, Elliott W.
1981-01-01
A two-dimensional (and to some degree three-dimensional) version of Glauber's one-dimensional spin relaxation model is described. The model is constructed to yield the Ising model of cooperative phenomena at equilibrium. A complete hierarchy of differential equations for multispin correlation functions is constructed. Some remarks are made concerning the solution of them for the initial value problem of determining the relaxation of an initial set of spin distributions. PMID:16592955
A New Direction of Cancer Classification: Positive Effect of Low-Ranking MicroRNAs.
Li, Feifei; Piao, Minghao; Piao, Yongjun; Li, Meijing; Ryu, Keun Ho
2014-10-01
Many studies based on microRNA (miRNA) expression profiles have shown a new aspect of cancer classification. Because one characteristic of miRNA expression data is its high dimensionality, feature selection methods have been used to facilitate dimensionality reduction. Existing feature selection methods have one shortcoming: they only consider cases where the feature-to-class relationship is 1:1 or n:1. However, because one miRNA may influence more than one type of cancer, such miRNAs tend to be ranked low by traditional feature selection methods and are removed most of the time. Given the limited number of miRNAs, low-ranking miRNAs are also important to cancer classification. We considered both high- and low-ranking features to cover all cases (1:1, n:1, 1:n, and m:n) in cancer classification. First, we used the correlation-based feature selection method to select the high-ranking miRNAs, and chose the support vector machine, Bayes network, decision tree, k-nearest-neighbor, and logistic classifiers to construct cancer classifiers. Then, we chose the chi-square test, information gain, gain ratio, and Pearson's correlation feature selection methods to build the m:n feature subset, and used the selected miRNAs for cancer classification. The feature subsets that included low-ranking miRNAs achieved higher classification accuracy than those using only the high-ranking miRNAs from traditional feature selection methods. Our results demonstrate the positive effect of low-ranking miRNAs, through the m:n feature subset, in cancer classification.
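The m:n idea of deliberately keeping some low-ranking features can be illustrated with a toy chi-square ranking over binary features. This is a simplified sketch (the paper's pipeline uses several scoring methods and full classifiers, none of which are reproduced here; all names are mine):

```python
from collections import Counter

def chi2_score(col, y):
    """Chi-square statistic of a binary feature column against binary labels."""
    n = len(y)
    joint = Counter(zip(col, y))
    fc = Counter(col)
    yc = Counter(y)
    stat = 0.0
    for f in (0, 1):
        for lab in (0, 1):
            exp = fc[f] * yc[lab] / n            # expected count if independent
            if exp > 0:
                stat += (joint[(f, lab)] - exp) ** 2 / exp
    return stat

def mn_subset(X, y, top_k, low_k):
    """Union of the top_k highest-ranking and low_k lowest-ranking features:
    a toy version of an m:n feature subset."""
    d = len(X[0])
    ranked = sorted(range(d),
                    key=lambda j: chi2_score([row[j] for row in X], y),
                    reverse=True)
    return ranked[:top_k] + ranked[-low_k:]
```

A conventional top-k selector would discard the tail entirely; the union keeps a slice of it so that miRNAs weakly associated with any single class, but possibly associated with several, stay available to the classifier.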
On solving three-dimensional open-dimension rectangular packing problems
NASA Astrophysics Data System (ADS)
Junqueira, Leonardo; Morabito, Reinaldo
2017-05-01
In this article, a recently proposed three-dimensional open-dimension rectangular packing problem is considered, in which the objective is to find a minimal volume rectangular container that packs a set of rectangular boxes. The literature has tackled small-sized instances of this problem by means of optimization solvers, position-free mixed-integer programming (MIP) formulations and piecewise linearization approaches. In this study, the problem is alternatively addressed by means of grid-based position MIP formulations, whereas still considering optimization solvers and the same piecewise linearization techniques. A comparison of the computational performance of both models is then presented, when tested with benchmark problem instances and with new instances, and it is shown that the grid-based position MIP formulation can be competitive, depending on the characteristics of the instances. The grid-based position MIP formulation is also embedded with real-world practical constraints, such as cargo stability, and results are additionally presented.
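A naive upper-bound heuristic conveys what "open dimensions" means: the container sizes are decision variables, and any feasible layout gives a candidate volume. The sketch below only stacks boxes along one axis (illustrative only; the article's MIP formulations search far richer layouts):

```python
from itertools import permutations, product

def min_stack_volume(boxes):
    """Naive upper bound for the open-dimension problem: stack all boxes
    along one axis, trying every axis-aligned orientation of every box;
    the container footprint is the max extent in the other two axes."""
    orients = [sorted(set(permutations(b))) for b in boxes]
    best = None
    for choice in product(*orients):
        w = max(b[0] for b in choice)       # footprint width
        h = max(b[1] for b in choice)       # footprint height
        depth = sum(b[2] for b in choice)   # stacked depth
        vol = w * h * depth
        if best is None or vol < best:
            best = vol
    return best
```

Because stacking is only one family of layouts, the value returned is an upper bound on the true minimal container volume; the MIP models close the gap by deciding box positions in all three dimensions simultaneously.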
Aerodynamic design optimization via reduced Hessian SQP with solution refining
NASA Technical Reports Server (NTRS)
Feng, Dan; Pulliam, Thomas H.
1995-01-01
An all-at-once reduced Hessian Successive Quadratic Programming (SQP) scheme has been shown to be efficient for solving aerodynamic design optimization problems with a moderate number of design variables. This paper extends this scheme to allow solution refining. In particular, we introduce a reduced Hessian refining technique that is critical for making a smooth transition of the Hessian information from coarse grids to fine grids. Test results on a nozzle design using quasi-one-dimensional Euler equations show that through solution refining the efficiency and the robustness of the all-at-once reduced Hessian SQP scheme are significantly improved.
The Scaling Group of the 1-D Inviscid Euler Equations
NASA Astrophysics Data System (ADS)
Schmidt, Emma; Ramsey, Scott; Boyd, Zachary; Baty, Roy
2017-11-01
The one-dimensional (1-D) compressible Euler equations in non-ideal media support scale-invariant solutions under a variety of initial conditions. Famous scale-invariant solutions include the Noh, Sedov, Guderley, and collapsing cavity hydrodynamic test problems. We unify many classical scale-invariant solutions under a single scaling group analysis. The scaling symmetry group generator provides a framework for determining all scale-invariant solutions admitted by the 1-D Euler equations for arbitrary geometry, initial conditions, and equation of state. We approach the Euler equations from a geometric standpoint, and conduct scaling analyses for a broad class of materials.
Equilibrium charge distribution on a finite straight one-dimensional wire
NASA Astrophysics Data System (ADS)
Batle, Josep; Ciftja, Orion; Abdalla, Soliman; Elhoseny, Mohamed; Alkhambashi, Majid; Farouk, Ahmed
2017-09-01
The electrostatic properties of uniformly charged regular bodies are prominently discussed in college-level electromagnetism courses. However, one of the most basic problems of electrostatics, that of how a continuous charge distribution reaches equilibrium, is rarely mentioned at this level. In this work we revisit the problem of equilibrium charge distribution on a straight one-dimensional (1D) wire of finite length. The majority of existing treatments in the literature deal with the 1D wire as a limiting case of a higher-dimensional structure that can be treated analytically for a Coulomb interaction potential between point charges. Surprisingly, different models (for instance, an ellipsoid or a cylinder model) may lead to different results, so there is even some ambiguity as to whether the problem is well-posed. In this work we adopt a different approach in which we do not start with any higher-dimensional body that reduces to a 1D wire in the appropriate limit. Instead, our starting point is the obvious one, a finite straight 1D wire that contains charge. However, the new tweak in the model is the assumption that point charges interact with each other via a non-Coulomb power-law interaction potential. This potential is well-behaved, allows exact analytical results and approaches the standard Coulomb interaction potential as a limit. The results originating from this approach suggest that the equilibrium charge distribution for a finite straight 1D wire is a uniform charge density when the power-law interaction potential approaches the Coulomb interaction potential as a suitable limit. We contrast this finding with results obtained using a different regularised logarithmic interaction potential which allows exact treatment in 1D. The present self-contained material may be of interest to instructors teaching electromagnetism as well as students, who will discover that simple-looking problems may sometimes pose important scientific challenges.
FPPAC94: A two-dimensional multispecies nonlinear Fokker-Planck package for UNIX systems
NASA Astrophysics Data System (ADS)
Mirin, A. A.; McCoy, M. G.; Tomaschke, G. P.; Killeen, J.
1994-07-01
FPPAC94 solves the complete nonlinear multispecies Fokker-Planck collision operator for a plasma in two-dimensional velocity space. The operator is expressed in terms of spherical coordinates (speed and pitch angle) under the assumption of azimuthal symmetry. Provision is made for additional physics contributions (e.g. rf heating, electric field acceleration). The charged species, referred to as general species, are assumed to be in the presence of an arbitrary number of fixed Maxwellian species. The electrons may be treated either as one of these Maxwellian species or as a general species. Coulomb interactions among all charged species are considered. This program is a new version of FPPAC, which was last published in Computer Physics Communications in 1988. The new version is identical in scope to the previous one; however, it is written in standard Fortran 77 and is able to execute on a variety of Unix systems. The code has been tested on the Cray-C90, HP-755 and Sun Sparc-1, and the answers agree on all platforms tested. The test problems are the same as those provided in 1988. This version also corrects a bug in the 1988 version.
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Mayes, Alexander; Jauriqui, Leanne; Biedermann, Eric; Heffernan, Julieanne; Livings, Richard; Goodlet, Brent; Mazdiyasni, Siamack
2018-04-01
A case study is presented evaluating uncertainty in Resonance Ultrasound Spectroscopy (RUS) inversion for single-crystal (SX) Ni-based superalloy Mar-M247 cylindrical dog-bone specimens. A number of surrogate models were developed from FEM model solutions, using different sampling schemes (regular grid, Monte Carlo sampling, Latin Hypercube sampling) and model approaches, namely N-dimensional cubic spline interpolation and Kriging. Repeated studies were used to quantify the well-posedness of the inversion problem, and the uncertainty in material property and crystallographic orientation estimates was assessed given typical geometric dimension variability in aerospace components. Surrogate model quality was found to be an important factor in inversion results when the model closely represents the test data. One important discovery was that, when the model matches the test data well, a Kriging surrogate model using un-sorted Latin Hypercube sampled data performed as well as the best results from an N-dimensional interpolation model using sorted data. However, both surrogate model quality and mode sorting were found to be less critical when inverting properties from either experimental data or simulated test cases with uncontrolled geometric variation.
Modeling axisymmetric flow and transport
Langevin, C.D.
2008-01-01
Unmodified versions of common computer programs such as MODFLOW, MT3DMS, and SEAWAT that use Cartesian geometry can accurately simulate axially symmetric ground water flow and solute transport. Axisymmetric flow and transport are simulated by adjusting several input parameters to account for the increase in flow area with radial distance from the injection or extraction well. Logarithmic weighting of interblock transmissivity, a standard option in MODFLOW, can be used for axisymmetric models to represent the linear change in hydraulic conductance within a single finite-difference cell. Results from three test problems (ground water extraction, an aquifer push-pull test, and upconing of saline water into an extraction well) show good agreement with analytical solutions or with results from other numerical models designed specifically to simulate the axisymmetric geometry. Axisymmetric models are not commonly used but can offer an efficient alternative to full three-dimensional models, provided the assumption of axial symmetry can be justified. For the upconing problem, the axisymmetric model was more than 1000 times faster than an equivalent three-dimensional model. Computational gains with the axisymmetric models may be useful for quickly determining appropriate levels of grid resolution for three-dimensional models and for estimating aquifer parameters from field tests.
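The parameter adjustment described above, accounting for flow area growing linearly with radial distance, can be sketched in a few lines; the grid spacing and conductivity value below are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of axisymmetric scaling for a Cartesian-geometry model:
# in a one-row radial grid, the flow area through each column grows as
# 2*pi*r, so cell hydraulic properties are multiplied by 2*pi*r evaluated
# at the cell center before being fed to a standard Cartesian code.
import numpy as np

delr = np.full(10, 5.0)              # radial cell widths (m), well at r = 0
r = np.cumsum(delr) - delr / 2.0     # radial distance to cell centers
K = 10.0                             # hydraulic conductivity (m/d)

K_axi = K * 2.0 * np.pi * r          # scaled conductivity per column
```

The scaled values increase monotonically with distance from the well, which is exactly the radially growing conductance the Cartesian solver then reproduces.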
Modal Ring Method for the Scattering of Electromagnetic Waves
NASA Technical Reports Server (NTRS)
Baumeister, Kenneth J.; Kreider, Kevin L.
1993-01-01
The modal ring method for electromagnetic scattering from perfectly electric conducting (PEC) symmetrical bodies is presented. The scattering body is represented by a line of finite elements (triangular) on its outer surface. The infinite computational region surrounding the body is represented analytically by an eigenfunction expansion. The modal ring method effectively reduces the two-dimensional scattering problem to a one-dimensional problem similar to the method of moments. The modal element method is capable of handling very high frequency scattering because it has a highly banded solution matrix.
An equivalent domain integral method in the two-dimensional analysis of mixed mode crack problems
NASA Technical Reports Server (NTRS)
Raju, I. S.; Shivakumar, K. N.
1990-01-01
An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies is presented. The details of the method and its implementation are presented for isoparametric elements. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented.
Bounded solutions in a T-shaped waveguide and the spectral properties of the Dirichlet ladder
NASA Astrophysics Data System (ADS)
Nazarov, S. A.
2014-08-01
The Dirichlet problem is considered on the junction of thin quantum waveguides (of thickness h ≪ 1) in the shape of an infinite two-dimensional ladder. Passage to the limit as h → +0 is discussed. It is shown that the asymptotically correct transmission conditions at the nodes of the corresponding one-dimensional quantum graph are Dirichlet conditions rather than the conventional Kirchhoff transmission conditions. The result is obtained by analyzing bounded solutions of a problem in the T-shaped waveguide that describes the boundary layer phenomenon.
Exploratory tests of two strut fuel injectors for supersonic combustion
NASA Technical Reports Server (NTRS)
Anderson, G. Y.; Gooderum, P. B.
1974-01-01
Results of supersonic mixing and combustion tests performed with two simple strut injector configurations, one with parallel injectors and one with perpendicular injectors, are presented and analyzed. Good agreement is obtained between static pressure measured on the duct wall downstream of the strut injectors and distributions obtained from one-dimensional calculations. Measured duct heat load agrees with results of the one-dimensional calculations for moderate amounts of reaction, but is underestimated when large separated regions occur near the injection location. For the parallel injection strut, good agreement is obtained between the shape of the injected fuel distribution inferred from gas sample measurements at the duct exit and the distribution calculated with a multiple-jet mixing theory. The overall fraction of injected fuel reacted in the multiple-jet calculation closely matches the amount of fuel reaction necessary to match static pressure with the one-dimensional calculation. Gas sample measurements with the perpendicular injection strut also give results consistent with the amount of fuel reaction in the one-dimensional calculation.
Optimal reservoir operation policies using novel nested algorithms
NASA Astrophysics Data System (ADS)
Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri
2015-04-01
Historically, the two most widely practiced methods for optimal reservoir operation have been dynamic programming (DP) and stochastic dynamic programming (SDP). These two methods suffer from the so-called "dual curse", which prevents them from being used in reasonably complex water systems. The first is the "curse of dimensionality", denoting an exponential growth of the computational complexity with the state-decision space dimension. The second is the "curse of modelling", which requires an explicit model of each component of the water system to anticipate the effect of each system transition. We address the problem of optimal reservoir operation concerning multiple objectives that are related to 1) reservoir releases to satisfy several downstream users competing for water with dynamically varying demands, 2) deviations from the target minimum and maximum reservoir water levels and 3) hydropower production that is a combination of the reservoir water level and the reservoir releases. Addressing such a problem with classical methods (DP and SDP) requires a reasonably high level of discretization of the reservoir storage volume, which in combination with the required releases discretization for meeting the demands of downstream users leads to computationally expensive formulations and causes the curse of dimensionality. We present a novel approach, named "nested", that is implemented in DP, SDP and reinforcement learning (RL); correspondingly, three new algorithms are developed, named nested DP (nDP), nested SDP (nSDP) and nested RL (nRL). The nested algorithms are composed of two algorithms: 1) DP, SDP or RL and 2) a nested optimization algorithm. Depending on how we formulate the objective function related to deficits in the allocation problem in the nested optimization, two methods are implemented: 1) Simplex for linear allocation problems, and 2) the quadratic Knapsack method in the case of nonlinear problems. 
The novel idea is to include the nested optimization algorithm in the state transition, which lowers the starting problem dimension and alleviates the curse of dimensionality. The algorithms can solve multi-objective optimization problems without significantly increasing the complexity and the computational expense. The algorithms can handle dense and irregular variable discretization, and are coded in Java as prototype applications. The three algorithms were tested at the multipurpose reservoir Knezevo of the Zletovica hydro-system located in the Republic of Macedonia, with eight objectives, including urban water supply, agriculture, ensuring ecological flow, and generation of hydropower. Because the Zletovica hydro-system is relatively complex, the novel algorithms were pushed to their limits, demonstrating their capabilities and limitations. The nSDP and nRL derived/learned the optimal reservoir policy using 45 years (1951-1995) of historical data. The nSDP and nRL optimal reservoir policies were tested on 10 years (1995-2005) of historical data, and compared with the nDP optimal reservoir operation over the same period. The nested algorithms and optimal reservoir operation results are analysed and explained.
An approximation theory for the identification of linear thermoelastic systems
NASA Technical Reports Server (NTRS)
Rosen, I. G.; Su, Chien-Hua Frank
1990-01-01
An abstract approximation framework and convergence theory for the identification of thermoelastic systems is developed. Starting from an abstract operator formulation consisting of a coupled second order hyperbolic equation of elasticity and first order parabolic equation for heat conduction, well-posedness is established using linear semigroup theory in Hilbert space, and a class of parameter estimation problems is then defined involving mild solutions. The approximation framework is based upon generic Galerkin approximation of the mild solutions, and convergence of solutions of the resulting sequence of approximating finite dimensional parameter identification problems to a solution of the original infinite dimensional inverse problem is established using approximation results for operator semigroups. An example involving the basic equations of one dimensional linear thermoelasticity and a linear spline based scheme are discussed. Numerical results indicate how the approach might be used in a study of damping mechanisms in flexible structures.
Pan, Rui; Wang, Hansheng; Li, Runze
2016-01-01
This paper is concerned with the problem of feature screening for multi-class linear discriminant analysis under an ultrahigh dimensional setting. We allow the number of classes to be relatively large. As a result, the total number of relevant features is larger than usual. This makes the related classification problem much more challenging than the conventional one, where the number of classes is small (very often two). To solve the problem, we propose a novel pairwise sure independence screening method for linear discriminant analysis with an ultrahigh dimensional predictor. The proposed procedure is directly applicable to the situation with many classes. We further prove that the proposed method is screening consistent. Simulation studies are conducted to assess the finite sample performance of the new procedure. We also demonstrate the proposed methodology via an empirical analysis of a real-life example on handwritten Chinese character recognition. PMID:28127109
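The pairwise idea can be illustrated with a toy screening statistic; this is a hedged sketch on synthetic data, with the max-over-class-pairs mean difference standing in for the paper's actual screening statistic, which may differ:

```python
# Minimal sketch of pairwise screening for multi-class data: for each
# feature, score it by the largest absolute difference in class means over
# all class pairs, then keep the top-ranked features. A feature that
# separates only one pair of classes still gets a high score.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
K, n, p = 4, 50, 30                      # classes, samples/class, features
y = np.repeat(np.arange(K), n)
X = rng.standard_normal((K * n, p))
X[y == 2, 5] += 2.0                      # feature 5 separates only class 2

def pairwise_score(xcol):
    means = [xcol[y == k].mean() for k in range(K)]
    return max(abs(a - b) for a, b in combinations(means, 2))

scores = np.array([pairwise_score(X[:, j]) for j in range(p)])
top = np.argsort(scores)[::-1][:5]       # screened feature set
```

Feature 5 survives screening even though it is uninformative for most class pairs, which is the situation the pairwise construction is designed to handle.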
Nakano, Takashi; Otsuka, Makoto; Yoshimoto, Junichiro; Doya, Kenji
2015-01-01
A theoretical framework of reinforcement learning plays an important role in understanding action selection in animals. Spiking neural networks provide a theoretically grounded means to test computational hypotheses on neurally plausible algorithms of reinforcement learning through numerical simulation. However, most of these models cannot handle observations which are noisy, or occurred in the past, even though these are inevitable and constraining features of learning in real environments. This class of problem is formally known as partially observable reinforcement learning (PORL) problems. It provides a generalization of reinforcement learning to partially observable domains. In addition, observations in the real world tend to be rich and high-dimensional. In this work, we use a spiking neural network model to approximate the free energy of a restricted Boltzmann machine and apply it to the solution of PORL problems with high-dimensional observations. Our spiking network model solves maze tasks with perceptually ambiguous high-dimensional observations without knowledge of the true environment. An extended model with working memory also solves history-dependent tasks. The way spiking neural networks handle PORL problems may provide a glimpse into the underlying laws of neural information processing which can only be discovered through such a top-down approach.
An assessment of the demographic and clinical correlates of the dimensions of alcohol use behaviour.
Smith, Gillian W; Shevlin, Mark; Murphy, Jamie; Houston, James E
2010-01-01
The aim of this study was to identify population-based clinical and demographic correlates of alcohol use dimensions. Using data from a population-based sample of Great Britain (n = 7849), structural equation modelling (SEM) was used to identify associations between demographic and clinical variables and two competing dimensional models of the Alcohol Use Disorders Identification Test (AUDIT). A two-factor SEM fit best. In this model, Factor 1, alcohol consumption, was associated with male sex, younger age, lower educational attainment, generalized anxiety disorder (GAD) and suicide attempts. Factor 2, alcohol-related problems, was associated with the demographic variables (to a lesser extent) and with a wider range of clinical variables, including depressive episode, GAD, mixed anxiety and depressive disorder, obsessive compulsive disorder, phobia, suicidal thoughts and suicide attempts. The one-factor SEM was associated with demographic and all assessed clinical correlates; however, this model did not fit the data well. Two main conclusions justify the two-factor approach to alcohol use classification. First, the model fit was considerably superior and, second, the dimensions of alcohol consumption and alcohol-related problems vary considerably in their associations with measures of demographic and clinical risk. A one-factor representation of alcohol use, for instance, would fail to recognize that measures of affective/anxiety disorders are more consistently related to alcohol-related problems than to alcohol consumption. It is suggested therefore that, to fully understand the complexity of alcohol use behaviour and its associated risk, future research should acknowledge the basic underlying dimensional structure of the construct.
Gebremedhin, Daniel H; Weatherford, Charles A
2015-02-01
This is a response to the comment we received on our recent paper "Calculations for the one-dimensional soft Coulomb problem and the hard Coulomb limit." In that paper, we introduced a computational algorithm that is appropriate for solving stiff initial value problems, and which we applied to the one-dimensional time-independent Schrödinger equation with a soft Coulomb potential. We solved for the eigenpairs using a shooting method and hence turned it into an initial value problem. In particular, we examined the behavior of the eigenpairs as the softening parameter approached zero (hard Coulomb limit). The commenters question the existence of the ground state of the hard Coulomb potential, which we inferred by extrapolation of the softening parameter to zero. A key distinction between the commenters' approach and ours is that they consider only the half-line while we considered the entire x axis. Based on mathematical considerations, the commenters consider only a vanishing solution function at the origin, and they question our conclusion that the ground state of the hard Coulomb potential exists. The ground state we inferred resembles a δ(x), and hence it cannot even be addressed based on their argument. For the excited states, there is agreement with the fact that the particle is always excluded from the origin. Our discussion with regard to the symmetry of the excited states is an extrapolation of the soft Coulomb case and is further explained herein.
A 16-bit Coherent Ising Machine for One-Dimensional Ring and Cubic Graph Problems
NASA Astrophysics Data System (ADS)
Takata, Kenta; Marandi, Alireza; Hamerly, Ryan; Haribara, Yoshitaka; Maruo, Daiki; Tamate, Shuhei; Sakaguchi, Hiromasa; Utsunomiya, Shoko; Yamamoto, Yoshihisa
2016-09-01
Many tasks in modern life, such as planning efficient travel, image processing and optimizing integrated circuit design, are modeled as complex combinatorial optimization problems with binary variables. Such problems can be mapped to finding a ground state of the Ising Hamiltonian, and thus various physical systems have been studied to emulate and solve this Ising problem. Recently, networks of mutually injected optical oscillators, called coherent Ising machines, have been developed as promising solvers for the problem, benefiting from programmability, scalability and room-temperature operation. Here, we report a 16-bit coherent Ising machine based on a network of time-division-multiplexed femtosecond degenerate optical parametric oscillators. The system experimentally achieves success rates of more than 99.6% for one-dimensional Ising ring and nondeterministic polynomial-time (NP)-hard instances. The experimental and numerical results indicate that gradual pumping of the network, combined with multiple spectral and temporal modes of the femtosecond pulses, can improve the computational performance of the Ising machine, offering a new path for tackling larger and more complex instances.
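At 16 bits, the one-dimensional Ising ring benchmark is small enough to verify exhaustively in software; a minimal sketch, assuming the standard ferromagnetic ring Hamiltonian H = -J Σ s_i s_{i+1} with periodic boundary (the coupling value is an illustrative choice):

```python
# Brute-force ground-state search for a 16-spin one-dimensional Ising ring,
# H = -J * sum_i s_i * s_{i+1} with periodic boundary conditions.
# 2**16 = 65536 configurations, so exhaustive enumeration is feasible.
import itertools

N, J = 16, 1.0

def ising_energy(spins):
    return -J * sum(spins[i] * spins[(i + 1) % N] for i in range(N))

best = min(itertools.product([-1, 1], repeat=N), key=ising_energy)
E0 = ising_energy(best)   # ferromagnetic ground state: all spins aligned
```

For J > 0 the minimum is the fully aligned configuration with E0 = -NJ, the known answer against which a hardware solver's success rate can be scored.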
Hughes, Joseph D.; Langevin, Christian D.; Chartier, Kevin L.; White, Jeremy T.
2012-01-01
A flexible Surface-Water Routing (SWR1) Process that solves the continuity equation for one-dimensional and two-dimensional surface-water flow routing has been developed for the U.S. Geological Survey three-dimensional groundwater model, MODFLOW-2005. Simple level- and tilted-pool reservoir routing and a diffusive-wave approximation of the Saint-Venant equations have been implemented. Both methods can be implemented in the same model and the solution method can be simplified to represent constant-stage elements that are functionally equivalent to the standard MODFLOW River or Drain Package boundary conditions. A generic approach has been used to represent surface-water features (reaches) and allows implementation of a variety of geometric forms. One-dimensional geometric forms include rectangular, trapezoidal, and irregular cross section reaches to simulate one-dimensional surface-water features, such as canals and streams. Two-dimensional geometric forms include reaches defined using specified stage-volume-area-perimeter (SVAP) tables and reaches covering entire finite-difference grid cells to simulate two-dimensional surface-water features, such as wetlands and lakes. Specified SVAP tables can be used to represent reaches that are smaller than the finite-difference grid cell (for example, isolated lakes), or reaches that cannot be represented accurately using the defined top of the model. Specified lateral flows (which can represent point and distributed flows) and stage-dependent rainfall and evaporation can be applied to each reach. The SWR1 Process can be used with the MODFLOW Unsaturated Zone Flow (UZF1) Package to permit dynamic simulation of runoff from the land surface to specified reaches. Surface-water/groundwater interactions in the SWR1 Process are mathematically defined to be a function of the difference between simulated stages and groundwater levels, and the specific form of the reach conductance equation used in each reach. 
Conductance can be specified directly or calculated as a function of the simulated wetted perimeter and defined reach bed hydraulic properties, or as a weighted combination of both reach bed hydraulic properties and horizontal hydraulic conductivity. Each reach can be explicitly coupled to a single specific groundwater-model layer or coupled to multiple groundwater-model layers based on the reach geometry and groundwater-model layer elevations in the row and column containing the reach. Surface-water flow between reservoirs is simulated using control structures. Surface-water flow between reaches, simulated by the diffusive-wave approximation, can also be simulated using control structures. A variety of control structures have been included in the SWR1 Process and include (1) excess-volume structures, (2) uncontrolled-discharge structures, (3) pumps, (4) defined stage-discharge relations, (5) culverts, (6) fixed- or movable-crest weirs, and (7) fixed or operable gated spillways. Multiple control structures can be implemented in individual reaches and are treated as composite flow structures. Solution of the continuity equation at the reach-group scale (a single reach or a user-defined collection of individual reaches) is achieved using exact Newton methods with direct solution methods or exact and inexact Newton methods with Krylov sub-space methods. Newton methods have been used in the SWR1 Process because of their ability to solve nonlinear problems. Multiple SWR1 time steps can be simulated for each MODFLOW time step, and a simple adaptive time-step algorithm, based on user-specified rainfall, stage, flow, or convergence constraints, has been implemented to better resolve surface-water response. A simple linear- or sigmoid-depth scaling approach also has been implemented to account for increased bed roughness at small surface-water depths and to increase numerical stability. 
A line-search algorithm also has been included to improve the quality of the Newton-step upgrade vector, if possible. The SWR1 Process has been benchmarked against one- and two-dimensional numerical solutions from existing one- and two-dimensional numerical codes that solve the dynamic-wave approximation of the Saint-Venant equations. Two-dimensional solutions test the ability of the SWR1 Process to simulate the response of a surface-water system to (1) steady flow conditions for an inclined surface (solution of Manning's equation), and (2) transient inflow and rainfall for an inclined surface. The one-dimensional solution tests the ability of the SWR1 Process to simulate a looped network with multiple upstream inflows and several control structures. The SWR1 Process also has been compared to a level-pool reservoir solution. A synthetic test problem was developed to evaluate a number of different SWR1 solution options and simulate surface-water/groundwater interaction. The solution approach used in the SWR1 Process may not be applicable for all surface-water/groundwater problems. The SWR1 Process is best suited for modeling long-term changes (days to years) in surface-water and groundwater flow. Use of the SWR1 Process is not recommended for modeling the transient exchange of water between streams and aquifers when local and convective acceleration and other secondary effects (for example, wind and Coriolis forces) are substantial. Dam break evaluations and two-dimensional evaluations of spatially extensive domains are examples where acceleration terms and secondary effects would be significant, respectively.
NASA Astrophysics Data System (ADS)
Krasnitckii, S. A.; Kolomoetc, D. R.; Smirnov, A. M.; Gutkin, M. Yu
2017-03-01
We present an analytical solution to the boundary-value problem in the classical theory of elasticity for a core-shell nanowire with an eccentric parallelepipedal core of an arbitrary rectangular cross section. The core is subjected to one-dimensional cross dilatation eigenstrain. The misfit stresses are found in a concise and transparent closed form which is convenient for practical use in theoretical modeling of misfit relaxation processes.
Two-Dimensional Grammars And Their Applications To Artificial Intelligence
NASA Astrophysics Data System (ADS)
Lee, Edward T.
1987-05-01
During the past several years, the concepts and techniques of two-dimensional grammars [1,2] have attracted growing attention as promising avenues of approach to problems in picture generation as well as in picture description [3], representation, recognition, transformation and manipulation. Two-dimensional grammar techniques serve the purpose of exploiting the structure or underlying relationships in a picture. This approach attempts to describe a complex picture in terms of its components and their relative positions. This resembles the way a sentence is described in terms of its words and phrases, and the terms structural picture recognition, linguistic picture recognition, or syntactic picture recognition are often used. By using this approach, the problem of picture recognition becomes similar to that of phrase recognition in a language. However, when describing pictures using a string grammar (one-dimensional grammar), the only relation between sub-pictures and/or primitives is concatenation; that is, each picture or primitive can be connected only at the left or right. This one-dimensional relation has not been very effective in describing two-dimensional pictures. A natural generalization is to use two-dimensional grammars. In this paper, two-dimensional grammars and their applications to artificial intelligence are presented. Picture grammars and two-dimensional grammars are introduced and illustrated by examples. In particular, two-dimensional grammars for generating all possible squares and all possible rhombuses are presented. The applications of two-dimensional grammars to solving region filling problems are discussed. An algorithm for region filling using two-dimensional grammars is presented together with illustrative examples. The advantages of using this algorithm in terms of computation time are also stated. A high-level description of a two-level picture generation system is proposed. 
The first level is picture primitive generation using two-dimensional grammars. The second level is picture generation using either string description or entity-relationship (ER) diagram description. Illustrative examples are also given. The advantages of ER diagram description, together with its comparison to string description, are also presented. The results obtained in this paper may have useful applications in artificial intelligence, robotics, expert systems, picture processing, pattern recognition, knowledge engineering, and pictorial database design. Furthermore, examples related to satellite surveillance and identification are also included.
Three-dimensional numerical simulations of turbulent cavitating flow in a rectangular channel
NASA Astrophysics Data System (ADS)
Iben, Uwe; Makhnov, Andrei; Schmidt, Alexander
2018-05-01
Cavitation is the formation of bubbles (cavities) in a liquid as a result of a pressure drop. Cavitation plays an important role in a wide range of applications; for example, it is one of the key problems in the design and manufacturing of pumps, hydraulic turbines, ship propellers, etc. Special attention is paid to cavitation erosion and to the performance degradation of hydraulic devices (noise, fluctuations of the mass flow rate, etc.) caused by the formation of a two-phase system with increased compressibility. Therefore, the development of a model to predict cavitation inception and the collapse of cavities in high-speed turbulent flows is an important fundamental and applied task. To test the algorithm, three-dimensional simulations of turbulent flow of a cavitating liquid in a rectangular channel have been conducted. The obtained results demonstrate the efficiency and robustness of the formulated model and the algorithm.
NASA Astrophysics Data System (ADS)
Gao, Xinya; Wang, Yonghong; Li, Junrui; Dan, Xizuo; Wu, Sijin; Yang, Lianxiang
2017-06-01
It is difficult to measure absolute three-dimensional deformation using traditional digital speckle pattern interferometry (DSPI) when the boundary condition of the object being tested is not exactly known. In practical applications, the boundary condition cannot always be specified, limiting the use of DSPI in real-world applications. To tackle this problem, a DSPI system combining the spatial carrier method with a color camera has been established. Four phase maps are obtained simultaneously by spatial-carrier color digital speckle pattern interferometry using four speckle interferometers with different illumination directions. One out-of-plane and two in-plane absolute deformations can be acquired simultaneously, without knowing the boundary conditions, using an absolute-deformation extraction algorithm based on the four phase maps. Finally, the system is validated experimentally by measuring the deformation of a flat aluminum plate with a groove.
SedFoam-2.0: a 3-D two-phase flow numerical model for sediment transport
NASA Astrophysics Data System (ADS)
Chauchat, Julien; Cheng, Zhen; Nagel, Tim; Bonamy, Cyrille; Hsu, Tian-Jian
2017-11-01
In this paper, a three-dimensional two-phase flow solver, SedFoam-2.0, is presented for sediment transport applications. The solver is extended from twoPhaseEulerFoam available in the 2.1.0 release of the open-source CFD (computational fluid dynamics) toolbox OpenFOAM. In this approach the sediment phase is modeled as a continuum, and constitutive laws have to be prescribed for the sediment stresses. In the proposed solver, two different intergranular stress models are implemented: the kinetic theory of granular flows and the dense granular flow rheology μ(I). For the fluid stress, laminar or turbulent flow regimes can be simulated and three different turbulence models are available for sediment transport: a simple mixing length model (one-dimensional configuration only), a k - ɛ, and a k - ω model. The numerical implementation is demonstrated on four test cases: sedimentation of suspended particles, laminar bed load, sheet flow, and scour at an apron. These test cases illustrate the capabilities of SedFoam-2.0 to deal with complex turbulent sediment transport problems with different combinations of intergranular stress and turbulence models.
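Of the two intergranular stress closures named above, the dense granular flow rheology μ(I) prescribes an effective friction coefficient depending on the inertial number. A minimal sketch of that constitutive law follows; it is not SedFoam-2.0's implementation, and the coefficient values are typical literature values for glass beads, assumed here for illustration:

```python
import math

def inertial_number(shear_rate, d, p, rho_s):
    """Inertial number I = |gamma_dot| * d / sqrt(p / rho_s), for grain
    diameter d, granular pressure p, and grain density rho_s."""
    return abs(shear_rate) * d / math.sqrt(p / rho_s)

def mu_of_I(I, mu_s=0.38, mu_2=0.64, I0=0.279):
    """Effective friction mu(I) = mu_s + (mu_2 - mu_s) / (1 + I0 / I).
    Default coefficients are typical glass-bead values (assumed)."""
    if I == 0.0:
        return mu_s                      # quasi-static limit
    return mu_s + (mu_2 - mu_s) / (1.0 + I0 / I)

def granular_shear_stress(shear_rate, d, p, rho_s):
    """Shear stress magnitude tau = mu(I) * p for a dense granular flow."""
    I = inertial_number(shear_rate, d, p, rho_s)
    return mu_of_I(I) * p
```

The friction coefficient interpolates monotonically between the static value mu_s at vanishing shear and the limiting value mu_2 at large inertial number.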
Experimental Detection and Characterization of Void using Time-Domain Reflection Wave
NASA Astrophysics Data System (ADS)
Zahari, M. N. H.; Madun, A.; Dahlan, S. H.; Joret, A.; Zainal Abidin, M. H.; Mohammad, A. H.; Omar, A. H.
2018-04-01
Recent engineering technologies have brought significant improvements in performance and precision. One of these improvements is in geophysical methods for underground detection. The reflection method has been demonstrated in previous studies to be able to detect and locate subsurface anomalies, including voids. Conventional methods involve field testing of limited areas only, which may leave void positions undiscovered. Problems arise when voids are not recognised at an early stage, causing hazards and increased costs, and potentially leading to serious accidents and structural damage. Therefore, to achieve better certainty in site investigation, a dynamic approach needs to be implemented. To better estimate and characterise the anomaly signal, an attempt has been made to model an air-filled void in experimental testing on site. Robust, inexpensive detection and characterisation of voids using the reflection method are proposed to improve void detectability and characterisation. The results show two-dimensional and three-dimensional analyses of the void based on reflection data with a P-wave velocity of 454.54 m/s.
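For context, locating a reflector from time-domain reflection data reduces to a two-way travel-time conversion, d = v t / 2. A minimal sketch using the P-wave velocity reported above (the function name and sample time are illustrative):

```python
def reflector_depth(two_way_time_s, velocity_m_s=454.54):
    """Depth of a reflector from two-way travel time t: d = v * t / 2.
    The wave travels down to the reflector and back, hence the factor 2."""
    return velocity_m_s * two_way_time_s / 2.0
```

For example, an echo arriving 20 ms after the source pulse places the void roughly 4.5 m below the surface at this velocity.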
Self-similar solutions to isothermal shock problems
NASA Astrophysics Data System (ADS)
Deschner, Stephan C.; Illenseer, Tobias F.; Duschl, Wolfgang J.
We investigate exact solutions for isothermal shock problems in different one-dimensional geometries. These solutions are given as analytical expressions where possible, or are computed using standard numerical methods for solving ordinary differential equations. We test the numerical solutions against the analytical expressions to verify the correctness of all numerical algorithms. We use similarity methods to derive a system of ordinary differential equations (ODEs) yielding exact solutions for power-law density distributions as initial conditions. Further, the system of ODEs accounts for implosion problems (IPs) as well as explosion problems (EPs) by changing the initial or boundary conditions, respectively. Taking genuinely isothermal approximations into account leads to additional insights into EPs in contrast to earlier models. We neglect a constant initial energy contribution but introduce a parameter to adjust the initial mass distribution of the system. Moreover, we show that, due to this parameter, a constant initial density is not allowed for isothermal EPs, and we give reasonable restrictions for this parameter. Both the genuinely isothermal implosion and explosion problems are solved for the first time.
Optical reflection from planetary surfaces as an operator-eigenvalue problem
Wildey, R.L.
1986-01-01
The understanding of quantum mechanical phenomena has come to rely heavily on theory framed in terms of operators and their eigenvalue equations. This paper investigates the utility of that technique as applied to the reciprocity principle in diffuse reflection. The reciprocity operator is shown to be unitary and Hermitian; hence, its eigenvectors form a complete orthonormal basis. The relevant eigenvalue is found to be infinitely degenerate. A superposition of the eigenfunctions found by solution through separation of variables is inadequate to form a general solution that can be fitted to a one-dimensional boundary condition, because the difficulty of resolving the reciprocity operator into a superposition of independent one-dimensional operators has yet to be overcome. A particular lunar application, in the form of a failed prediction of limb darkening of the full Moon from brightness versus phase, illustrates this problem. A general solution is derived which fully exploits the determinative powers of the reciprocity operator as an unresolved two-dimensional operator. However, a solution based on a sum of one-dimensional operators, if possible, would be much more powerful. A close association is found between the reciprocity operator and the particle-exchange operator of quantum mechanics, which may indicate the direction for further successful exploitation of the approach based on the operational calculus. © 1986 D. Reidel Publishing Company.
NASA Astrophysics Data System (ADS)
Prasad, S.; Bruce, L. M.
2007-04-01
There is a growing interest in using multiple sources for automatic target recognition (ATR) applications. One approach is to take multiple, independent observations of a phenomenon and perform a feature level or a decision level fusion for ATR. This paper proposes a method to utilize these types of multi-source fusion techniques to exploit hyperspectral data when only a small number of training pixels are available. Conventional hyperspectral image based ATR techniques project the high dimensional reflectance signature onto a lower dimensional subspace using techniques such as Principal Components Analysis (PCA), Fisher's linear discriminant analysis (LDA), subspace LDA and stepwise LDA. While some of these techniques attempt to solve the curse of dimensionality, or small sample size problem, these are not necessarily optimal projections. In this paper, we present a divide and conquer approach to address the small sample size problem. The hyperspectral space is partitioned into contiguous subspaces such that the discriminative information within each subspace is maximized, and the statistical dependence between subspaces is minimized. We then treat each subspace as a separate source in a multi-source multi-classifier setup and test various decision fusion schemes to determine their efficacy. Unlike previous approaches which use correlation between variables for band grouping, we study the efficacy of higher order statistical information (using average mutual information) for a bottom up band grouping. We also propose a confidence measure based decision fusion technique, where the weights associated with various classifiers are based on their confidence in recognizing the training data. To this end, training accuracies of all classifiers are used for weight assignment in the fusion process of test pixels. 
The proposed methods are tested using hyperspectral data with known ground truth, such that the efficacy can be quantitatively measured in terms of target recognition accuracies.
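The bottom-up band grouping described above can be sketched with a histogram estimate of mutual information between bands: adjacent bands join the current group while their average mutual information with it stays high, and a new group starts when it drops. This is a loose illustration, not the authors' exact procedure; the bin count, threshold, and function names are assumptions:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of the mutual information between two band vectors."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of x, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)      # marginal of y, shape (1, bins)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def group_bands(X, threshold=0.5):
    """Bottom-up grouping of contiguous bands: band b joins the current group
    if its average mutual information with the group's members stays above
    threshold, otherwise a new group starts. X has shape (n_pixels, n_bands)."""
    groups, current = [], [0]
    for b in range(1, X.shape[1]):
        avg_mi = np.mean([mutual_information(X[:, b], X[:, g]) for g in current])
        if avg_mi >= threshold:
            current.append(b)
        else:
            groups.append(current)
            current = [b]
    groups.append(current)
    return groups
```

Each resulting contiguous subspace would then feed a separate classifier, whose outputs are combined by the confidence-weighted decision fusion described in the abstract.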
Stiffener-skin interactions in pressure-loaded composite panels
NASA Technical Reports Server (NTRS)
Loup, D. C.; Hyer, M. W.; Starnes, J. H., Jr.
1986-01-01
The effects of flange thickness, web height, and skin stiffness on the strain distributions in the skin-stiffener interface region of pressure-loaded graphite-epoxy panels, stiffened by the type-T stiffener, were examined at pressure levels up to one atmosphere. The results indicate that at these pressures geometric nonlinearities are important, and that the overall stiffener stiffness has a significant effect on panel response, particularly on the out-of-plane deformation or pillowing of the skin. The strain gradients indicated that the interface between the skin and the stiffener experiences two components of shear stress, in addition to a normal (peel) stress. Thus, the skin-stiffener interface problem is a three-dimensional problem rather than a two-dimensional one, as is often assumed.
NASA Astrophysics Data System (ADS)
Caplan, R. M.
2013-04-01
We present a simple-to-use, yet powerful code package called NLSEmagic to numerically integrate the nonlinear Schrödinger equation in one, two, and three dimensions. NLSEmagic is a high-order finite-difference code package which utilizes graphics processing unit (GPU) parallel architectures. The codes running on the GPU are many times faster than their serial counterparts, and are much cheaper to run than on standard parallel clusters. The codes are developed with usability and portability in mind, and therefore are written to interface with MATLAB utilizing custom GPU-enabled C codes with the MEX-compiler interface. The packages are freely distributed, including user manuals and set-up files. Catalogue identifier: AEOJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOJ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 124453 No. of bytes in distributed program, including test data, etc.: 4728604 Distribution format: tar.gz Programming language: C, CUDA, MATLAB. Computer: PC, MAC. Operating system: Windows, MacOS, Linux. Has the code been vectorized or parallelized?: Yes. Number of processors used: Single CPU, number of GPU processors dependent on chosen GPU card (max is currently 3072 cores on GeForce GTX 690). Supplementary material: Setup guide, Installation guide. RAM: Highly dependent on dimensionality and grid size. For typical medium-large problem size in three dimensions, 4GB is sufficient. Keywords: Nonlinear Schrödinger Equation, GPU, high-order finite difference, Bose-Einstein condensates. Classification: 4.3, 7.7. Nature of problem: Integrate solutions of the time-dependent one-, two-, and three-dimensional cubic nonlinear Schrödinger equation.
Solution method: The integrators utilize a fully-explicit fourth-order Runge-Kutta scheme in time and both second- and fourth-order differencing in space. The integrators are written to run on NVIDIA GPUs and are interfaced with MATLAB including built-in visualization and analysis tools. Restrictions: The main restriction for the GPU integrators is the amount of RAM on the GPU as the code is currently only designed for running on a single GPU. Unusual features: Ability to visualize real-time simulations through the interaction of MATLAB and the compiled GPU integrators. Additional comments: Setup guide and Installation guide provided. Program has a dedicated web site at www.nlsemagic.com. Running time: A three-dimensional run with a grid dimension of 87×87×203 for 3360 time steps (100 non-dimensional time units) takes about one and a half minutes on a GeForce GTX 580 GPU card.
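The solution method described above (fully explicit classical fourth-order Runge-Kutta in time with second-order central differencing in space) can be sketched for the one-dimensional focusing cubic NLS. This is an illustrative NumPy sketch, not NLSEmagic's CUDA code; the bright-soliton test case, periodic boundaries, and step-size choice are assumptions:

```python
import numpy as np

def rhs(u, dx):
    """u_t = i (u_xx + 2|u|^2 u) for the focusing cubic NLS
    i u_t + u_xx + 2|u|^2 u = 0, with periodic second-order differences."""
    uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    return 1j * (uxx + 2.0 * np.abs(u)**2 * u)

def rk4_step(u, dt, dx):
    """One classical fourth-order Runge-Kutta step."""
    k1 = rhs(u, dx)
    k2 = rhs(u + 0.5 * dt * k1, dx)
    k3 = rhs(u + 0.5 * dt * k2, dx)
    k4 = rhs(u + dt * k3, dx)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# bright soliton u(x, t) = sech(x) e^{it}: stationary modulus, exact solution
x = np.linspace(-20.0, 20.0, 512, endpoint=False)
dx = x[1] - x[0]
u = 1.0 / np.cosh(x) + 0j
dt = 0.2 * dx**2              # explicit scheme: dt limited by dx^2
mass0 = np.sum(np.abs(u)**2) * dx
for _ in range(200):
    u = rk4_step(u, dt, dx)
mass = np.sum(np.abs(u)**2) * dx
```

Conservation of the discrete mass and of the soliton's peak amplitude over the run is a quick sanity check on the integrator.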
ERIC Educational Resources Information Center
Ruscio, John; Walters, Glenn D.
2009-01-01
Factor-analytic research is common in the study of constructs and measures in psychological assessment. Latent factors can represent traits as continuous underlying dimensions or as discrete categories. When examining the distributions of estimated scores on latent factors, one would expect unimodal distributions for dimensional data and bimodal…
A 3-D turbulent flow analysis using finite elements with k-ɛ model
NASA Astrophysics Data System (ADS)
Okuda, H.; Yagawa, G.; Eguchi, Y.
1989-03-01
This paper describes a finite element turbulent flow analysis suitable for three-dimensional large-scale problems. The k-ɛ turbulence model as well as the conservation equations of mass and momentum are discretized in space using rather low-order elements. The resulting coefficient matrices are evaluated by one-point quadrature in order to reduce the computational storage and the CPU cost. A time integration scheme based on the velocity correction method is employed to obtain steady-state solutions. For verification of this FEM program, two-dimensional plenum flow is simulated and compared with experiment. As an application to three-dimensional practical problems, the turbulent flows in the upper plenum of a fast breeder reactor are calculated for various boundary conditions.
Analysing the magnetopause internal structure: new possibilities offered by MMS
NASA Astrophysics Data System (ADS)
Belmont, G.; Rezeau, L.; Manuzzo, R.; Aunai, N.; Dargent, J.
2017-12-01
We explore the structure of the magnetopause using a crossing observed by the MMS spacecraft on October 16th, 2015. Several methods (MVA, BV, CVA) are first applied to compute the normal to the magnetopause considered as a whole. The different results obtained are not identical, and we show that the whole boundary is not stationary and not planar, so that the basic assumptions of these methods are not well satisfied. We then analyse the internal structure more finely to investigate the departures from planarity. Using the basic mathematical definition of a one-dimensional physical problem, we introduce a new method, called LNA (Local Normal Analysis), for determining the varying normal, and we compare the results so obtained with those coming from the MDD tool developed by Shi et al. [2005]. That method gives the dimensionality of the magnetic variations from multi-point measurements and allows estimation of the direction of the local normal using the magnetic field. LNA, on the other hand, is a single-spacecraft method which gives the local normal from the magnetic field and particle data. This study shows that the magnetopause does include approximately one-dimensional sub-structures, but also two- and three-dimensional intervals. It also shows that the dimensionality of the magnetic variations can differ from that of the other fields, so that, at some places, the magnetic field can have a 1D structure even though the plasma variations do not all verify the properties of a global one-dimensional problem. Finally, a generalisation and a systematic application of the MDD method to the physical quantities of interest is shown.
Well-balanced compressible cut-cell simulation of atmospheric flow.
Klein, R; Bates, K R; Nikiforakis, N
2009-11-28
Cut-cell meshes present an attractive alternative to terrain-following coordinates for the representation of topography within atmospheric flow simulations, particularly in regions of steep topographic gradients. In this paper, we present an explicit two-dimensional method for the numerical solution on such meshes of atmospheric flow equations including gravitational sources. This method is fully conservative and allows for time steps determined by the regular grid spacing, avoiding potential stability issues due to arbitrarily small boundary cells. We believe that the scheme is unique in that it is developed within a dimensionally split framework, in which each coordinate direction in the flow is solved independently at each time step. Other notable features of the scheme are: (i) its conceptual and practical simplicity, (ii) its flexibility with regard to the one-dimensional flux approximation scheme employed, and (iii) the well-balancing of the gravitational sources allowing for stable simulation of near-hydrostatic flows. The presented method is applied to a selection of test problems including buoyant bubble rise interacting with geometry and lee-wave generation due to topography.
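The dimensionally split framework described above, in which each coordinate direction is advanced independently with a one-dimensional flux scheme at each time step, can be illustrated on scalar advection. This sketch uses first-order upwind fluxes and periodic boundaries; the paper's flux approximation, gravity source balancing, and cut-cell treatment are not reproduced:

```python
import numpy as np

def sweep(q, v, dt, dh, axis):
    """One 1D conservative upwind sweep along the given axis
    (constant speed v > 0 assumed, periodic boundaries)."""
    flux = v * q                                   # upwind flux for v > 0
    return q - dt / dh * (flux - np.roll(flux, 1, axis=axis))

def split_step(q, vx, vy, dt, dx, dy):
    """Godunov dimensional splitting: solve the x-direction problem,
    then the y-direction problem, each as an independent 1D problem."""
    q = sweep(q, vx, dt, dx, axis=0)
    q = sweep(q, vy, dt, dy, axis=1)
    return q
```

Because each sweep is in conservation form, the splitting preserves the total of q exactly; at unit CFL the upwind sweep reduces to an exact shift, a convenient check.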
Solving groundwater flow problems by conjugate-gradient methods and the strongly implicit procedure
Hill, Mary C.
1990-01-01
The performance of the preconditioned conjugate-gradient method with three preconditioners is compared with the strongly implicit procedure (SIP) using a scalar computer. The preconditioners considered are the incomplete Cholesky (ICCG) and the modified incomplete Cholesky (MICCG), which require the same computer storage as SIP as programmed for a problem with a symmetric matrix, and a polynomial preconditioner (POLCG), which requires less computer storage than SIP. Although POLCG is usually used on vector computers, it is included here because of its small storage requirements. In this paper, published comparisons of the solvers are evaluated, all four solvers are compared for the first time, and new test cases are presented to provide a more complete basis by which the solvers can be judged for typical groundwater flow problems. Based on nine test cases, the following conclusions are reached: (1) SIP is actually as efficient as ICCG for some of the published, linear, two-dimensional test cases that were reportedly solved much more efficiently by ICCG; (2) SIP is more efficient than other published comparisons would indicate when common convergence criteria are used; and (3) for problems that are three-dimensional, nonlinear, or both, and for which common convergence criteria are used, SIP is often more efficient than ICCG, and is sometimes more efficient than MICCG.
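For reference, the preconditioned conjugate-gradient iteration underlying ICCG, MICCG, and POLCG is the same; the solvers differ only in how the preconditioner solve is performed. A sketch of the generic iteration follows, using a simple diagonal (Jacobi) preconditioner for illustration, which is not one of the paper's three preconditioners:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=200):
    """Preconditioned conjugate gradients for symmetric positive-definite A.
    M_inv(r) applies the preconditioner inverse to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)          # step length along search direction
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)                    # preconditioned residual
        rz_new = r @ z
        p = z + (rz_new / rz) * p       # new A-conjugate search direction
        rz = rz_new
    return x

# test system: 1D Laplacian (a simple stand-in for a groundwater flow matrix)
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)         # Jacobi preconditioner: M = diag(A)
```

Swapping in an incomplete-Cholesky or polynomial preconditioner only changes the `M_inv` callable; the iteration itself is unchanged.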
Moving boundary problems for a rarefied gas: Spatially one-dimensional case
NASA Astrophysics Data System (ADS)
Tsuji, Tetsuro; Aoki, Kazuo
2013-10-01
Unsteady flows of a rarefied gas in a full space caused by an oscillation of an infinitely wide plate in its normal direction are investigated numerically on the basis of the Bhatnagar-Gross-Krook (BGK) model of the Boltzmann equation. The paper aims at showing properties and difficulties inherent to moving boundary problems in kinetic theory of gases using a simple one-dimensional setting. More specifically, the following two problems are considered: (Problem I) the plate starts a forced harmonic oscillation (forced motion); (Problem II) the plate, which is subject to an external restoring force obeying Hooke’s law, is displaced from its equilibrium position and released (free motion). The physical interest in Problem I lies in the propagation of nonlinear acoustic waves in a rarefied gas, whereas that in Problem II in the decay rate of the oscillation of the plate. An accurate numerical method, which is capable of describing singularities caused by the oscillating plate, is developed on the basis of the method of characteristics and is applied to the two problems mentioned above. As a result, the unsteady behavior of the solution, such as the propagation of discontinuities and some weaker singularities in the molecular velocity distribution function, are clarified. Some results are also compared with those based on the existing method.
NASA Astrophysics Data System (ADS)
Kotake, Kei; Sumiyoshi, Kohsuke; Yamada, Shoichi; Takiwaki, Tomoya; Kuroda, Takami; Suwa, Yudai; Nagakura, Hiroki
2012-08-01
This is a status report on our endeavor to reveal the mechanism of core-collapse supernovae (CCSNe) by large-scale numerical simulations. Multi-dimensionality of the supernova engine, general relativistic magnetohydrodynamics, energy and lepton number transport by neutrinos emitted from the forming neutron star, as well as nuclear interactions there, are all believed to play crucial roles in repelling infalling matter and producing energetic explosions. These ingredients are non-linearly coupled with one another in the dynamics of core collapse, bounce, and shock expansion. Serious quantitative studies of CCSNe hence make extensive numerical computations mandatory. Since neutrinos are neither in thermal nor in chemical equilibrium in general, their distributions in the phase space should be computed. This is a six-dimensional (6D) neutrino transport problem and quite a challenge, even for those with access to the most advanced numerical resources such as the "K computer". To tackle this problem, we have embarked on efforts on multiple fronts. In particular, we report in this paper our recent progresses in the treatment of multidimensional (multi-D) radiation hydrodynamics. We are currently proceeding on two different paths to the ultimate goal. In one approach, we employ an approximate but highly efficient scheme for neutrino transport and treat 3D hydrodynamics and/or general relativity rigorously; some neutrino-driven explosions will be presented and quantitative comparisons will be made between 2D and 3D models. In the second approach, on the other hand, exact, but so far Newtonian, Boltzmann equations are solved in two and three spatial dimensions; we will show some example test simulations. We will also address the perspectives of exascale computations on the next generation supercomputers.
Mehl, Steffen W.; Hill, Mary C.
2006-01-01
This report documents the addition of shared node Local Grid Refinement (LGR) to MODFLOW-2005, the U.S. Geological Survey modular, transient, three-dimensional, finite-difference ground-water flow model. LGR provides the capability to simulate ground-water flow using one block-shaped higher-resolution local grid (a child model) within a coarser-grid parent model. LGR accomplishes this by iteratively coupling two separate MODFLOW-2005 models such that heads and fluxes are balanced across the shared interfacing boundary. LGR can be used in two-and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined ground-water systems. Traditional one-way coupled telescopic mesh refinement (TMR) methods can have large, often undetected, inconsistencies in heads and fluxes across the interface between two model grids. The iteratively coupled shared-node method of LGR provides a more rigorous coupling in which the solution accuracy is controlled by convergence criteria defined by the user. In realistic problems, this can result in substantially more accurate solutions and require an increase in computer processing time. The rigorous coupling enables sensitivity analysis, parameter estimation, and uncertainty analysis that reflects conditions in both model grids. This report describes the method used by LGR, evaluates LGR accuracy and performance for two- and three-dimensional test cases, provides input instructions, and lists selected input and output files for an example problem. It also presents the Boundary Flow and Head (BFH) Package, which allows the child and parent models to be simulated independently using the boundary conditions obtained through the iterative process of LGR.
NASTRAN analysis for the Airmass Sunburst model 'C' Ultralight Aircraft
NASA Technical Reports Server (NTRS)
Verbestel, John; Smith, Howard W.
1993-01-01
The purpose of this project was to create a three-dimensional NASTRAN model of the Airmass Sunburst Ultralight comparable to one made for finite element analysis. A two-dimensional sample problem will be calculated by hand and by NASTRAN to make sure that NASTRAN finds similar results. A three-dimensional model, similar to the one analyzed by the finite element program, will be run on NASTRAN. A comparison will be done between the NASTRAN results and the finite element program results. This study will deal mainly with the aerodynamic loads on the wing and surrounding support structure at an angle of attack of 10 degrees.
Numerical aerodynamic simulation facility. [for flows about three-dimensional configurations
NASA Technical Reports Server (NTRS)
Bailey, F. R.; Hathaway, A. W.
1978-01-01
Critical to the advancement of computational aerodynamics capability is the ability to simulate flows about three-dimensional configurations that contain both compressible and viscous effects, including turbulence and flow separation at high Reynolds numbers. Analyses were conducted of two solution techniques for solving the Reynolds averaged Navier-Stokes equations describing the mean motion of a turbulent flow with certain terms involving the transport of turbulent momentum and energy modeled by auxiliary equations. The first solution technique is an implicit approximate factorization finite-difference scheme applied to three-dimensional flows that avoids the restrictive stability conditions when small grid spacing is used. The approximate factorization reduces the solution process to a sequence of three one-dimensional problems with easily inverted matrices. The second technique is a hybrid explicit/implicit finite-difference scheme which is also factored and applied to three-dimensional flows. Both methods are applicable to problems with highly distorted grids and a variety of boundary conditions and turbulence models.
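The "easily inverted matrices" of the one-dimensional sub-problems produced by approximate factorization are (block-)tridiagonal. A sketch of the standard Thomas algorithm for a scalar tridiagonal system follows; it is illustrative of why the 1D sub-problems are cheap, not a reproduction of the facility's solver:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c, and right-hand side d (all length n; a[0] and
    c[-1] are unused). O(n) forward elimination + back substitution."""
    n = len(d)
    cp = np.empty(n); dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Each factored direction of the implicit scheme amounts to many such independent O(n) solves, which is what makes the approximate factorization so much cheaper than a full three-dimensional implicit inversion.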
Generalized continued fractions and ergodic theory
NASA Astrophysics Data System (ADS)
Pustyl'nikov, L. D.
2003-02-01
In this paper a new theory of generalized continued fractions is constructed and applied to numbers, multidimensional vectors belonging to a real space, and infinite-dimensional vectors with integral coordinates. The theory is based on a concept generalizing the procedure for constructing the classical continued fractions and substantially using ergodic theory. One of the versions of the theory is related to differential equations. In the finite-dimensional case the constructions thus introduced are used to solve problems posed by Weyl in analysis and number theory concerning estimates of trigonometric sums and of the remainder in the distribution law for the fractional parts of the values of a polynomial, and also the problem of characterizing algebraic and transcendental numbers with the use of generalized continued fractions. Infinite-dimensional generalized continued fractions are applied to estimate sums of Legendre symbols and to obtain new results in the classical problem of the distribution of quadratic residues and non-residues modulo a prime. In the course of constructing these continued fractions, an investigation is carried out of the ergodic properties of a class of infinite-dimensional dynamical systems which are also of independent interest.
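The classical procedure that this theory generalizes iterates the Gauss map x → 1/x − ⌊1/x⌋ to produce partial quotients, whose convergents approximate the number. A minimal sketch of that classical construction (the generalized, infinite-dimensional theory itself is not reproduced here):

```python
import math

def continued_fraction(x, n_terms=10):
    """Partial quotients of the classical continued fraction of x > 0,
    generated by iterating the Gauss map x -> 1/x - floor(1/x)."""
    a = [math.floor(x)]
    x -= a[0]
    for _ in range(n_terms - 1):
        if x < 1e-12:            # rational within tolerance: expansion ends
            break
        x = 1.0 / x
        q = math.floor(x)
        a.append(q)
        x -= q
    return a

def convergent(a):
    """Evaluate the rational convergent p/q of a list of partial quotients
    by folding the expansion from the innermost term outward."""
    p, q = a[-1], 1
    for ai in reversed(a[:-1]):
        p, q = ai * p + q, p
    return p, q
```

For the golden ratio every partial quotient is 1 and the convergents are ratios of consecutive Fibonacci numbers, the classical worst case for rational approximation.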
Modes of self-organization of diluted bubbly liquids in acoustic fields: One-dimensional theory.
Gumerov, Nail A; Akhatov, Iskander S
2017-02-01
The paper is dedicated to mathematical modeling of self-organization of bubbly liquids in acoustic fields. A continuum model describing the two-way interaction of diluted polydisperse bubbly liquids and acoustic fields in weakly-nonlinear approximation is studied analytically and numerically in the one-dimensional case. It is shown that the regimes of self-organization of monodisperse bubbly liquids can be controlled by only a few dimensionless parameters. Two basic modes, clustering and propagating shock waves of void fraction (acoustically induced transparency), are identified and criteria for their realization in the space of parameters are proposed. A numerical method for solving of one-dimensional self-organization problems is developed. Computational results for mono- and polydisperse systems are discussed.
Firefly Mating Algorithm for Continuous Optimization Problems
Ritthipakdee, Amarita; Thammano, Arit; Premasathian, Nol; Jitkongchuen, Duangjai
2017-01-01
This paper proposes a swarm intelligence algorithm, called firefly mating algorithm (FMA), for solving continuous optimization problems. FMA uses a genetic algorithm as the core of the algorithm. The main feature of the algorithm is a novel mating pair selection method which is inspired by the following two mating behaviors of fireflies in nature: (i) the mutual attraction between males and females causes them to mate and (ii) fireflies of both sexes are of the multiple-mating type, mating with multiple opposite-sex partners. A female continues mating until her spermatheca becomes full, and, in the same vein, a male can provide sperm for several females until his sperm reservoir is depleted. This new feature enhances the global convergence capability of the algorithm. The performance of FMA was tested with 20 benchmark functions (sixteen 30-dimensional functions and four 2-dimensional ones) against FA, ALC-PSO, COA, MCPSO, LWGSODE, MPSODDS, DFOA, SHPSOS, LSA, MPDPGA, DE, and GABC algorithms. The experimental results showed that the success rates of our proposed algorithm with these functions were higher than those of the other algorithms and the proposed algorithm also required fewer iterations to reach the global optimum. PMID:28808442
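The mating-pair idea described above — fitter individuals are more attractive mates, both sexes mate multiply, and a female stops once her capacity is filled — can be grafted onto an elitist GA loop. The following is a loose sketch of that idea only; all parameter values, operators, and the population split are illustrative assumptions, not the authors' exact FMA:

```python
import random

def fma_sketch(f, dim=2, pop=20, generations=100, capacity=3, bounds=(-5.0, 5.0)):
    """Loose sketch of a GA with firefly-inspired mating-pair selection:
    brighter (fitter) individuals are more attractive mates, each female
    mates with up to `capacity` males, and the best individuals survive."""
    lo, hi = bounds
    males = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop // 2)]
    females = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop // 2)]
    best = min(males + females, key=f)
    for _ in range(generations):
        offspring = []
        for female in females:
            # attraction ~ inverse fitness (minimization): fitter males preferred
            weights = [1.0 / (1.0 + f(m)) for m in males]
            mates = random.choices(males, weights=weights, k=capacity)
            for male in mates:
                # averaging crossover plus small Gaussian mutation, clamped to bounds
                child = [(a + b) / 2.0 + random.gauss(0.0, 0.1)
                         for a, b in zip(female, male)]
                offspring.append([min(max(v, lo), hi) for v in child])
        allpop = sorted(males + females + offspring, key=f)
        best = min(best, allpop[0], key=f)
        males, females = allpop[:pop // 2], allpop[pop // 2:pop]
    return best
```

On a simple sphere function the elitist survival step drives the best individual steadily toward the optimum.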
Validation of tsunami inundation model TUNA-RP using OAR-PMEL-135 benchmark problem set
NASA Astrophysics Data System (ADS)
Koh, H. L.; Teh, S. Y.; Tan, W. K.; Kh'ng, X. Y.
2017-05-01
A standard set of benchmark problems, known as OAR-PMEL-135, is developed by the US National Tsunami Hazard Mitigation Program for tsunami inundation model validation. Any tsunami inundation model must be tested for its accuracy and capability using this standard set of benchmark problems before it can be gainfully used for inundation simulation. The authors have previously developed an in-house tsunami inundation model known as TUNA-RP. This inundation model solves the two-dimensional nonlinear shallow water equations coupled with a wet-dry moving boundary algorithm. This paper presents the validation of TUNA-RP against the solutions provided in the OAR-PMEL-135 benchmark problem set. This benchmark validation testing shows that TUNA-RP can indeed perform inundation simulation with accuracy consistent with that in the tested benchmark problem set.
NASA Astrophysics Data System (ADS)
Popov, Nikolay S.
2017-11-01
Solvability of some initial-boundary value problems for linear hyperbolic equations of the fourth order is studied. A condition on the lateral boundary in these problems relates the values of a solution or the conormal derivative of a solution to the values of some integral operator applied to a solution. Nonlocal boundary-value problems for one-dimensional hyperbolic second-order equations with integral conditions on the lateral boundary were considered in the articles by A.I. Kozhanov. Higher-dimensional hyperbolic equations of higher order with integral conditions on the lateral boundary were not studied earlier. Existence and uniqueness theorems for regular solutions are proven. The method of regularization and the method of continuation in a parameter are employed to establish solvability.
Computer model of two-dimensional solute transport and dispersion in ground water
Konikow, Leonard F.; Bredehoeft, J.D.
1978-01-01
This report presents a model that simulates solute transport in flowing ground water. The model is both general and flexible in that it can be applied to a wide range of problem types. It is applicable to one- or two-dimensional problems involving steady-state or transient flow. The model computes changes in concentration over time caused by the processes of convective transport, hydrodynamic dispersion, and mixing (or dilution) from fluid sources. The model assumes that the solute is non-reactive and that gradients of fluid density, viscosity, and temperature do not affect the velocity distribution. However, the aquifer may be heterogeneous and (or) anisotropic. The model couples the ground-water flow equation with the solute-transport equation. The digital computer program uses an alternating-direction implicit procedure to solve a finite-difference approximation to the ground-water flow equation, and it uses the method of characteristics to solve the solute-transport equation. The latter uses a particle-tracking procedure to represent convective transport and a two-step explicit procedure to solve a finite-difference equation that describes the effects of hydrodynamic dispersion, fluid sources and sinks, and divergence of velocity. This explicit procedure has several stability criteria, but the consequent time-step limitations are automatically determined by the program. The report includes a listing of the computer program, which is written in FORTRAN IV and contains about 2,000 lines. The model is based on a rectangular, block-centered, finite difference grid. It allows the specification of any number of injection or withdrawal wells and of spatially varying diffuse recharge or discharge, saturated thickness, transmissivity, boundary conditions, and initial heads and concentrations.
The program also permits the designation of up to five nodes as observation points, for which a summary table of head and concentration versus time is printed at the end of the calculations. The data input formats for the model require three data cards and from seven to nine data sets to describe the aquifer properties, boundaries, and stresses. The accuracy of the model was evaluated for two idealized problems for which analytical solutions could be obtained. In the case of one-dimensional flow the agreement was nearly exact, but in the case of plane radial flow a small amount of numerical dispersion occurred. An analysis of several test problems indicates that the error in the mass balance will be generally less than 10 percent. The test problems demonstrated that the accuracy and precision of the numerical solution is sensitive to the initial number of particles placed in each cell and to the size of the time increment, as determined by the stability criteria. Mass balance errors are commonly the greatest during the first several time increments, but tend to decrease and stabilize with time.
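As a minimal illustration of the governing advection-dispersion equation (this is a simple explicit upwind sketch, not the report's method-of-characteristics scheme), one can evolve a solute pulse on a periodic grid while respecting both stability criteria:

```python
import numpy as np

def advect_disperse(c, v, D, dx, nsteps):
    """Explicit sketch of 1-D solute transport dc/dt = -v dc/dx + D d2c/dx2
    on a periodic grid: upwind advection plus central-difference dispersion.
    The USGS model itself uses particle tracking; this only illustrates the
    governing equation and its time-step limits."""
    dt = 0.4 * min(dx / v, dx**2 / (2.0 * D))   # respect both stability criteria
    for _ in range(nsteps):
        adv = -v * (c - np.roll(c, 1)) / dx                          # upwind, v > 0
        disp = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
        c = c + dt * (adv + disp)
    return c

x = np.linspace(0.0, 1.0, 100, endpoint=False)
c0 = np.exp(-((x - 0.3) / 0.05) ** 2)            # initial solute pulse
c1 = advect_disperse(c0.copy(), v=1.0, D=1e-3, dx=x[1] - x[0], nsteps=200)
```

On a periodic grid both terms telescope, so total mass is conserved to round-off, mirroring the mass-balance checks discussed in the report.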
Variational asymptotic modeling of composite dimensionally reducible structures
NASA Astrophysics Data System (ADS)
Yu, Wenbin
A general framework to construct accurate reduced models for composite dimensionally reducible structures (beams, plates and shells) was formulated based on two theoretical foundations: decomposition of the rotation tensor and the variational asymptotic method. Two engineering software systems, Variational Asymptotic Beam Sectional Analysis (VABS, new version) and Variational Asymptotic Plate and Shell Analysis (VAPAS), were developed. Several restrictions found in previous work on beam modeling were removed in the present effort. A general formulation of Timoshenko-like cross-sectional analysis was developed, through which the shear center coordinates and a consistent Vlasov model can be obtained. Recovery relations are given to recover the asymptotic approximations for the three-dimensional field variables. A new version of VABS has been developed, which is a much improved program in comparison to the old one. Numerous examples are given for validation. A Reissner-like model that is as asymptotically correct as possible was obtained for composite plates and shells. After formulating the three-dimensional elasticity problem in intrinsic form, the variational asymptotic method was used to systematically reduce the dimensionality of the problem by taking advantage of the smallness of the thickness. The through-the-thickness analysis is solved by a one-dimensional finite element method to provide the stiffnesses as input for the two-dimensional nonlinear plate or shell analysis as well as recovery relations to approximately express the three-dimensional results. The known fact that there exists more than one theory that is asymptotically correct to a given order is adopted to cast the refined energy into a Reissner-like form. A two-dimensional nonlinear shell theory consistent with the present modeling process was developed.
The engineering computer code VAPAS was developed and inserted into DYMORE to provide an efficient and accurate analysis of composite plates and shells. Numerical results are compared with the exact solutions, and the excellent agreement proves that one can use VAPAS to analyze composite plates and shells efficiently and accurately. In conclusion, rigorous modeling approaches were developed for composite beams, plates and shells within a general framework. No such consistent and general treatment is found in the literature. The associated computer programs VABS and VAPAS are envisioned to have many applications in industry.
The quantum-field renormalization group in the problem of a growing phase boundary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antonov, N.V.; Vasil'ev, A.N.
1995-09-01
Within the quantum-field renormalization-group approach we examine the stochastic equation discussed by S.I. Pavlik in describing a randomly growing phase boundary. We show that, in contrast to Pavlik's assertion, the model is not multiplicatively renormalizable and that its consistent renormalization-group analysis requires introducing an infinite number of counterterms and the respective coupling constants ("charges"). An explicit calculation in the one-loop approximation shows that a two-dimensional surface of renormalization-group fixed points exists in the infinite-dimensional charge space. If the surface contains an infrared stability region, the problem allows for scaling with the nonuniversal critical dimensionalities of the height of the phase boundary and time, Δ_h and Δ_t, which satisfy the exact relationship 2Δ_h = Δ_t + d, where d is the dimensionality of the phase boundary.
High-order scheme for the source-sink term in a one-dimensional water temperature model
Jing, Zheng; Kang, Ling
2017-01-01
The source-sink term in water temperature models represents the net heat absorbed or released by a water system. This term is very important because it accounts for solar radiation that can significantly affect water temperature, especially in lakes. However, existing numerical methods for discretizing the source-sink term are very simplistic, causing significant deviations between simulation results and measured data. To address this problem, we present a numerical method specific to the source-sink term. A vertical one-dimensional heat conduction equation was chosen to describe water temperature changes. A two-step operator-splitting method was adopted as the numerical solution. In the first step, using the undetermined coefficient method, a high-order scheme was adopted for discretizing the source-sink term. In the second step, the diffusion term was discretized using the Crank-Nicolson scheme. The effectiveness and capability of the numerical method were assessed by performing numerical tests. Then, the proposed numerical method was applied to a simulation of Guozheng Lake (located in central China). The modeling results were in excellent agreement with measured data. PMID:28264005
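The operator-splitting structure described above can be sketched as follows. This is a minimal illustration, assuming zero-flux boundaries and a plain explicit source update standing in for the paper's high-order source-sink scheme:

```python
import numpy as np

def step_temperature(T, kappa, source, dz, dt):
    """One operator-split step for dT/dt = kappa d2T/dz2 + S:
    step 1 applies the source-sink term (a plain explicit update here,
    standing in for the paper's high-order scheme); step 2 is a
    Crank-Nicolson diffusion solve with zero-flux boundaries."""
    n = T.size
    T = T + dt * source                              # step 1: source-sink term
    L = np.zeros((n, n))                             # 1-D Laplacian, Neumann BCs
    for i in range(n):
        if i > 0:
            L[i, i - 1] = 1.0
        if i < n - 1:
            L[i, i + 1] = 1.0
        L[i, i] = -((i > 0) + (i < n - 1))
    r = kappa * dt / (2.0 * dz**2)
    # step 2: (I - r L) T_new = (I + r L) T_old  (Crank-Nicolson)
    return np.linalg.solve(np.eye(n) - r * L, (np.eye(n) + r * L) @ T)

T0 = np.zeros(50)
T0[20:30] = 10.0                                     # initial temperature bump
T = T0.copy()
for _ in range(200):
    T = step_temperature(T, kappa=1.0, source=np.zeros(50), dz=0.1, dt=0.01)
```

With zero source and insulated boundaries the mean temperature is conserved exactly while the profile smooths, which is a quick sanity check on the diffusion step.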
ERIC Educational Resources Information Center
Moore, Nathan T.; Deming, John C.
2010-01-01
The garlic problem presented in this article develops several themes related to dimensional analysis and also introduces students to a few basic statistical ideas. This garlic problem was used in a university preparatory chemistry class, designed for students with no chemistry background. However, this course is unique because one of the primary…
Convergence of an hp-Adaptive Finite Element Strategy in Two and Three Space-Dimensions
NASA Astrophysics Data System (ADS)
Bürg, Markus; Dörfler, Willy
2010-09-01
We show convergence of an automatic hp-adaptive refinement strategy for the finite element method on elliptic boundary value problems. The strategy generalizes a refinement strategy proposed for one-dimensional situations to problems in two and three space-dimensions.
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.
1990-01-01
The current work is initiated in an effort to obtain an efficient, accurate, and robust algorithm for the numerical solution of the incompressible Navier-Stokes equations in two- and three-dimensional generalized curvilinear coordinates for both steady-state and time-dependent flow problems. This is accomplished with the use of the method of artificial compressibility and a high-order flux-difference splitting technique for the differencing of the convective terms. Time accuracy is obtained in the numerical solutions by subiterating the equations in pseudo-time for each physical time step. The system of equations is solved with a line-relaxation scheme which allows the use of very large pseudo-time steps leading to fast convergence for steady-state problems as well as for the subiterations of time-dependent problems. Numerous laminar test flow problems are computed and presented with a comparison against analytically known solutions or experimental results. These include the flow in a driven cavity, the flow over a backward-facing step, the steady and unsteady flow over a circular cylinder, flow over an oscillating plate, flow through a one-dimensional inviscid channel with oscillating back pressure, the steady-state flow through a square duct with a 90 degree bend, and the flow through an artificial heart configuration with moving boundaries. An adequate comparison with the analytical or experimental results is obtained in all cases. Numerical comparisons of the upwind differencing with central differencing plus artificial dissipation indicates that the upwind differencing provides a much more robust algorithm, which requires significantly less computing time. The time-dependent problems require on the order of 10 to 20 subiterations, indicating that the elliptical nature of the problem does require a substantial amount of computing effort.
Fourth-order convergence of a compact scheme for the one-dimensional biharmonic equation
NASA Astrophysics Data System (ADS)
Fishelov, D.; Ben-Artzi, M.; Croisille, J.-P.
2012-09-01
The convergence of a fourth-order compact scheme for the one-dimensional biharmonic problem is established in the case of general Dirichlet boundary conditions. The compact scheme invokes values of the unknown function as well as Padé approximations of its first-order derivative. Using the Padé approximation allows us to approximate the first-order derivative within fourth-order accuracy. However, although the truncation error of the discrete biharmonic scheme is of fourth order at interior points, it drops to first order at near-boundary points. Nonetheless, we prove that the scheme retains its fourth-order (optimal) accuracy. This is done by a careful inspection of the matrix elements of the discrete biharmonic operator. A number of numerical examples corroborate this effect. We also present a study of the eigenvalue problem uxxxx = νu. We compute and display the eigenvalues and the eigenfunctions related to the continuous and the discrete problems. By the positivity of the eigenvalues, one can deduce the stability of the related time-dependent problem ut = -uxxxx. In addition, we study the eigenvalue problem uxxxx = νuxx. This is related to the stability of the linear time-dependent equation uxxt = νuxxxx. Its continuous and discrete eigenvalues and eigenfunctions (or eigenvectors) are computed and displayed graphically.
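The positivity argument for the discrete spectrum can be illustrated numerically. The sketch below uses the standard second-order 5-point stencil for u_xxxx with clamped ends (u = u' = 0), not the paper's fourth-order compact scheme, simply to show that the discrete eigenvalues of the clamped biharmonic operator are positive:

```python
import numpy as np

n, h = 60, 1.0 / 61
# Standard second-order 5-point stencil for u_xxxx with clamped ends
# (u = u' = 0); NOT the paper's compact scheme, only an illustration
# that the discrete spectrum is positive.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 6.0
    if i >= 1:
        A[i, i - 1] = A[i - 1, i] = -4.0
    if i >= 2:
        A[i, i - 2] = A[i - 2, i] = 1.0
# ghost-point closure for u'=0 at the ends (u_{-1} = u_1) bumps the corners
A[0, 0] = A[-1, -1] = 7.0
A /= h**4
eigvals = np.linalg.eigvalsh(A)
```

Positivity of `eigvals` is exactly what lets one deduce stability of the time-dependent problem ut = -uxxxx, as the abstract notes.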
A Non Local Electron Heat Transport Model for Multi-Dimensional Fluid Codes
NASA Astrophysics Data System (ADS)
Schurtz, Guy
2000-10-01
Apparent inhibition of thermal heat flow is one of the most ancient problems in computational Inertial Fusion and flux-limited Spitzer-Harm conduction has been a mainstay in multi-dimensional hydrodynamic codes for more than 25 years. Theoretical investigation of the problem indicates that heat transport in laser produced plasmas has to be considered as a non local process. Various authors contributed to the non local theory and proposed convolution formulas designed for practical implementation in one-dimensional fluid codes. Though the theory, confirmed by kinetic calculations, actually predicts a reduced heat flux, it fails to explain the very small limiters required in two-dimensional simulations. Fokker-Planck simulations by Epperlein, Rickard and Bell [PRL 61, 2453 (1988)] demonstrated that non local effects could lead to a strong reduction of heat flow in two dimensions, even in situations where a one-dimensional analysis suggests that the heat flow is nearly classical. We developed at CEA/DAM a non local electron heat transport model suitable for implementation in our two-dimensional radiation hydrodynamic code FCI2. This model may be envisioned as the first step of an iterative solution of the Fokker-Planck equations; it takes the mathematical form of multigroup diffusion equations, the solution of which yields both the heat flux and the departure of the electron distribution function to the Maxwellian. Although direct implementation of the model is straightforward, formal solutions of it can be expressed in convolution form, exhibiting a three-dimensional tensor propagator. Reduction to one dimension retrieves the original formula of Luciani, Mora and Virmont [PRL 51, 1664 (1983)]. Intense magnetic fields may be generated by thermal effects in laser targets; these fields, as well as non local effects, will inhibit electron conduction. We present simulations where both effects are taken into account and briefly discuss the coupling strategy between them.
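The convolution form of such non-local models can be sketched in one dimension. The code below smooths a Spitzer-Harm flux with a normalized exponential kernel of fixed width, loosely in the spirit of the Luciani-Mora-Virmont formula; the actual formula weights by local plasma conditions at each source point, which this sketch does not attempt:

```python
import numpy as np

def nonlocal_flux(q_sh, x, lam):
    """Convolution sketch of a non-local heat-flux model: smooth the
    Spitzer-Harm flux q_SH with an exponential kernel of width lam
    (a stand-in for the delocalization length). Rows are normalized so
    a uniform flux is preserved exactly."""
    w = np.exp(-np.abs(x[:, None] - x[None, :]) / lam)
    w /= w.sum(axis=1, keepdims=True)
    return w @ q_sh

x = np.linspace(0.0, 1.0, 50)
q_sh = np.exp(-((x - 0.5) / 0.05) ** 2)   # sharply peaked classical flux
q_nl = nonlocal_flux(q_sh, x, lam=0.1)
```

The non-local flux is lower at the peak and spreads into the surrounding region, which is the qualitative flux-reduction effect the abstract describes.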
Eigenmode Analysis of Boundary Conditions for One-Dimensional Preconditioned Euler Equations
NASA Technical Reports Server (NTRS)
Darmofal, David L.
1998-01-01
An analysis of the effect of local preconditioning on boundary conditions for the subsonic, one-dimensional Euler equations is presented. Decay rates for the eigenmodes of the initial boundary value problem are determined for different boundary conditions. Riemann invariant boundary conditions based on the unpreconditioned Euler equations are shown to be reflective with preconditioning, and, at low Mach numbers, disturbances do not decay. Other boundary conditions are investigated which are non-reflective with preconditioning and numerical results are presented confirming the analysis.
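For the subsonic one-dimensional Euler equations, the Riemann invariants the boundary conditions are built on are R± = u ± 2c/(γ - 1). A small worked example (state values are illustrative only):

```python
# Riemann invariants of the 1-D Euler equations, R± = u ± 2c/(gamma - 1);
# boundary conditions based on them are the unpreconditioned ones that the
# analysis shows to become reflective once preconditioning is applied.
gamma = 1.4
u, c = 30.0, 340.0                 # illustrative low-Mach state, M ≈ 0.088
R_plus = u + 2.0 * c / (gamma - 1.0)
R_minus = u - 2.0 * c / (gamma - 1.0)
# the invariants recover the primitive quantities:
u_rec = 0.5 * (R_plus + R_minus)
c_rec = 0.25 * (gamma - 1.0) * (R_plus - R_minus)
```

Prescribing R- at inflow and R+ at outflow is the classical characteristic treatment whose reflectivity under preconditioning the paper analyzes.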
NASA Technical Reports Server (NTRS)
Tokars, Roger; Adamovsky, Grigory; Anderson, Robert; Hirt, Stefanie; Huang, John; Floyd, Bertram
2012-01-01
A 15- by 15-cm supersonic wind tunnel application of a one-dimensional laser beam scanning approach to shock sensing is presented. The measurement system design allowed easy switching between a focused beam and a laser sheet mode for comparison purposes. The scanning results were compared to images from the tunnel Schlieren imaging system. The tests revealed detectable changes in the laser beam in the presence of shocks. The results lend support to the use of the one-dimensional scanning beam approach for detecting and locating shocks in a flow, but some issues regarding noise and other limitations of the system must be addressed.
Phase unwrapping in three dimensions with application to InSAR time series.
Hooper, Andrew; Zebker, Howard A
2007-09-01
The problem of phase unwrapping in two dimensions has been studied extensively in the past two decades, but the three-dimensional (3D) problem has so far received relatively little attention. We develop here a theoretical framework for 3D phase unwrapping and also describe two algorithms for implementation, both of which can be applied to synthetic aperture radar interferometry (InSAR) time series. We test the algorithms on simulated data and find both give more accurate results than a two-dimensional algorithm. When applied to actual InSAR time series, we find good agreement both between the algorithms and with ground truth.
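The one-dimensional building block of phase unwrapping is adding multiples of 2π to restore continuity; NumPy's built-in routine does exactly this. The paper's contribution is extending the idea consistently to three dimensions, which this sketch does not attempt:

```python
import numpy as np

# One-dimensional phase unwrapping: wrap a smooth ramp into (-pi, pi],
# then restore it by adding 2*pi offsets wherever jumps exceed pi.
true_phase = np.linspace(0.0, 12.0, 200)        # smooth ramp spanning several cycles
wrapped = np.angle(np.exp(1j * true_phase))     # wrapped into (-pi, pi]
recovered = np.unwrap(wrapped)                  # restore continuity
```

Because the true phase here starts at zero and changes by much less than π per sample, the 1-D unwrap is exact; noise and undersampling are what make the 2-D and 3-D problems hard.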
Healy, R.W.; Russell, T.F.
1993-01-01
A new mass-conservative method for solution of the one-dimensional advection-dispersion equation is derived and discussed. Test results demonstrate that the finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) outperforms standard finite-difference methods, in terms of accuracy and efficiency, for solute transport problems that are dominated by advection. For dispersion-dominated problems, the performance of the method is similar to that of standard methods. Like previous ELLAM formulations, FVELLAM systematically conserves mass globally with all types of boundary conditions. FVELLAM differs from other ELLAM approaches in that integrated finite differences, instead of finite elements, are used to approximate the governing equation. This approach, in conjunction with a forward tracking scheme, greatly facilitates mass conservation. The mass storage integral is numerically evaluated at the current time level, and quadrature points are then tracked forward in time to the next level. Forward tracking permits straightforward treatment of inflow boundaries, thus avoiding the inherent problem in backtracking, as used by most characteristic methods, of characteristic lines intersecting inflow boundaries. FVELLAM extends previous ELLAM results by obtaining mass conservation locally on Lagrangian space-time elements. Details of the integration, tracking, and boundary algorithms are presented. Test results are given for problems in Cartesian and radial coordinates.
Escape rates over potential barriers: variational principles and the Hamilton-Jacobi equation
NASA Astrophysics Data System (ADS)
Cortés, Emilio; Espinosa, Francisco
We describe a rigorous formalism to study some extrema statistics problems, like maximum probability events or escape rate processes, by taking into account that the Hamilton-Jacobi equation completes, in a natural way, the required set of boundary conditions of the Euler-Lagrange equation, for this kind of variational problem. We apply this approach to a one-dimensional stochastic process, driven by colored noise, for a double-parabola potential, where we have one stable and one unstable steady states.
NASA Astrophysics Data System (ADS)
Gontis, V.; Kononovicius, A.
2017-10-01
We address the problem of long-range memory in the financial markets. There are two conceptually different ways to reproduce power-law decay of the auto-correlation function: using fractional Brownian motion as well as non-linear stochastic differential equations. In this contribution we address this problem by analyzing empirical return and trading activity time series from the Forex. From the empirical time series we obtain probability density functions of burst and inter-burst duration. Our analysis reveals that the power-law exponents of the obtained probability density functions are close to 3/2, which is a characteristic feature of one-dimensional stochastic processes. This is in good agreement with the earlier proposed model of absolute return based on non-linear stochastic differential equations derived from the agent-based herding model.
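The 3/2 exponent for burst durations is the classic first-passage exponent of one-dimensional diffusion: p(T) ~ T^(-3/2), equivalently a survival probability P(T > t) ~ t^(-1/2). A quick simulation with a simple symmetric random walk (a sketch, not the paper's model) shows the scaling:

```python
import numpy as np

rng = np.random.default_rng(1)
n_walkers, n_steps = 20000, 200
steps = rng.integers(0, 2, size=(n_walkers, n_steps)) * 2 - 1   # ±1 steps
pos = np.cumsum(steps, axis=1)

def survival(t):
    """Fraction of walks with no return to the origin in the first t steps."""
    return 1.0 - (pos[:, :t] == 0).any(axis=1).mean()

# P(T > t) ~ t^(-1/2): quadrupling t should roughly halve the survival
ratio = survival(24) / survival(96)
```

With the t^(-1/2) survival law the ratio should be close to sqrt(4) = 2, which the seeded simulation reproduces to within sampling noise.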
Ortega, Julio; Asensio-Cubero, Javier; Gan, John Q; Ortiz, Andrés
2016-07-15
Brain-computer interfacing (BCI) applications based on the classification of electroencephalographic (EEG) signals require solving high-dimensional pattern classification problems with a relatively small number of training patterns, so that curse-of-dimensionality problems usually arise. Multiresolution analysis (MRA) has useful properties for signal analysis in both temporal and spectral analysis, and has been broadly used in the BCI field. However, MRA usually increases the dimensionality of the input data. Therefore, some approaches to feature selection or feature dimensionality reduction should be considered for improving the performance of the MRA based BCI. This paper investigates feature selection in the MRA-based frameworks for BCI. Several wrapper approaches to evolutionary multiobjective feature selection are proposed with different structures of classifiers. They are evaluated by comparing with baseline methods using sparse representation of features or without feature selection. The statistical analysis, by applying the Kolmogorov-Smirnov and Kruskal-Wallis tests to the means of the Kappa values evaluated by using the test patterns in each approach, has demonstrated some advantages of the proposed approaches. In comparison with the baseline MRA approach used in previous studies, the proposed evolutionary multiobjective feature selection approaches provide similar or even better classification performances, with significant reduction in the number of features that need to be computed.
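The core of multiobjective feature selection is keeping only non-dominated trade-offs between classifier accuracy and feature count. A minimal Pareto-front sketch (the accuracies below are invented for illustration; the paper uses evolutionary wrappers, not exhaustive enumeration):

```python
def pareto_front(candidates):
    """Non-dominated feature subsets for wrapper-style multiobjective
    selection: maximize accuracy, minimize number of features.
    `candidates` maps a feature subset to its (hypothetical) accuracy."""
    items = [(len(feats), acc, feats) for feats, acc in candidates.items()]
    front = []
    for n, acc, feats in items:
        dominated = any(
            (n2 <= n and acc2 >= acc) and (n2 < n or acc2 > acc)
            for n2, acc2, _ in items
        )
        if not dominated:
            front.append((feats, acc))
    return front

# toy accuracies for illustration only (not from the paper)
scores = {
    frozenset({"a"}): 0.70,
    frozenset({"a", "b"}): 0.80,
    frozenset({"a", "b", "c"}): 0.80,   # same accuracy, more features: dominated
    frozenset({"b"}): 0.65,             # dominated by {a}
}
front = pareto_front(scores)
```

An evolutionary algorithm such as NSGA-II searches this trade-off space without enumerating all subsets; the dominance test above is the same.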
NASA Astrophysics Data System (ADS)
Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.
2016-12-01
Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low dimensional state space to reduce the computational cost. 
To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed using "snapshots" from the parameter-reduced posterior, and the discrete empirical interpolation method (DEIM) to approximate the nonlinearity in the forward problem. We show that using only a limited number of forward solves, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.
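The snapshot-POD step can be sketched in a few lines: collect states for several parameter values, take an SVD, and keep the leading left singular vectors as a reduced basis. The state family below is invented purely for illustration, and DEIM is not shown:

```python
import numpy as np

# Snapshot POD: columns of S are "states" for different parameter values.
x = np.linspace(0.0, 1.0, 200)
params = np.linspace(0.5, 2.0, 30)
S = np.column_stack([np.exp(-p * x) * np.sin(np.pi * x) for p in params])
U, svals, Vt = np.linalg.svd(S, full_matrices=False)
r = 5                                   # reduced dimension
basis = U[:, :r]                        # POD basis: leading left singular vectors
S_r = basis @ (basis.T @ S)             # project all snapshots onto the basis
rel_err = np.linalg.norm(S - S_r) / np.linalg.norm(S)
```

For a smooth parameter dependence the singular values decay rapidly, so a handful of modes reproduces every snapshot almost exactly; that rapid decay is what makes the reduced forward solves cheap.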
NASA Technical Reports Server (NTRS)
Handschuh, Robert F.
1987-01-01
An exponential finite difference algorithm, as first presented by Bhattacharya for one-dimensional steady-state heat conduction in Cartesian coordinates, has been extended. The finite difference algorithm developed was used to solve the diffusion equation in one-dimensional cylindrical coordinates and applied to two- and three-dimensional problems in Cartesian coordinates. The method was also used to solve nonlinear partial differential equations in one (Burger's equation) and two (Boundary Layer equations) dimensional Cartesian coordinates. Predicted results were compared to exact solutions where available, or to results obtained by other numerical methods. It was found that the exponential finite difference method produced results that were more accurate than those obtained by other numerical methods, especially during the initial transient portion of the solution. Other applications made using the exponential finite difference technique included unsteady one-dimensional heat transfer with temperature varying thermal conductivity and the development of the temperature field in a laminar Couette flow.
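One common statement of the exponential scheme for the 1-D diffusion equation replaces the usual additive update by a multiplicative one, u_new = u * exp(r * Δ²u / u) with r = αΔt/Δx²; for small perturbations it reduces to the standard explicit scheme. The sketch below (periodic grid, an assumption for simplicity) illustrates two of its properties, positivity preservation and the uniform fixed point:

```python
import numpy as np

def exp_fd_step(u, r):
    """One step of an exponential finite-difference scheme (in the form
    commonly attributed to Bhattacharya) for u_t = alpha * u_xx on a
    periodic grid, with r = alpha*dt/dx**2. Positive data stays positive
    because the update is multiplicative with a positive factor."""
    lap = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)
    return u * np.exp(r * lap / u)

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
u0 = 2.0 + 0.5 * np.sin(x)
u = u0.copy()
for _ in range(100):
    u = exp_fd_step(u, r=0.2)
```

A uniform state has zero discrete Laplacian, so the exponential factor is exactly 1 and the state is a fixed point, while a sinusoidal perturbation is progressively smoothed.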
NASA Astrophysics Data System (ADS)
Jung, Joon Hee; Jang, Gang-Won; Shin, Dongil; Kim, Yoon Young
2018-03-01
This paper presents a method to analyze thin-walled beams with quadrilateral cross sections reinforced with diaphragms using a one-dimensional higher-order beam theory. The effect of a diaphragm is reflected focusing on the increase of static stiffness. The deformations on the beam-interfacing boundary of a thin diaphragm are described by using deformation modes of the beam cross section while the deformations inside the diaphragm are approximated in the form of complete cubic polynomials. By using the principle of minimum potential energy, its stiffness that significantly affects distortional deformation of a thin-walled beam can be considered in the one-dimensional beam analysis. It is shown that the accuracy of the resulting one-dimensional analysis is comparable with that by a shell element based analysis. As a means to demonstrate the usefulness of the present approach for design, position optimization problems of diaphragms for stiffness reinforcement of an automotive side frame are solved.
exponential finite difference technique for solving partial differential equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Handschuh, R.F.
1987-01-01
An exponential finite difference algorithm, as first presented by Bhattacharya for one-dimensional steady-state heat conduction in Cartesian coordinates, has been extended. The finite difference algorithm developed was used to solve the diffusion equation in one-dimensional cylindrical coordinates and applied to two- and three-dimensional problems in Cartesian coordinates. The method was also used to solve nonlinear partial differential equations in one (Burger's equation) and two (Boundary Layer equations) dimensional Cartesian coordinates. Predicted results were compared to exact solutions where available, or to results obtained by other numerical methods. It was found that the exponential finite difference method produced results that were more accurate than those obtained by other numerical methods, especially during the initial transient portion of the solution. Other applications made using the exponential finite difference technique included unsteady one-dimensional heat transfer with temperature varying thermal conductivity and the development of the temperature field in a laminar Couette flow.
Evaluation of the mechanical properties of class-F fly ash.
Kim, Bumjoo; Prezzi, Monica
2008-01-01
Coal-burning power plants in the United States (US) generate more than 70 million tons of fly ash as a by-product annually. Recycling large volumes of fly ash in geotechnical applications may offer an attractive alternative to the disposal problem as most of it is currently dumped in ponds or landfills. Class-F fly ash, resulting from burning of bituminous or anthracite coals, is the most common type of fly ash in the US. In the present study, the mechanical characteristics (compaction response, compressibility, and shear strength) of class-F fly ash were investigated by performing various laboratory tests (compaction test, one-dimensional compression test, direct shear test and consolidated-drained triaxial compression test) on fly ash samples collected from three power plants in the state of Indiana (US). Test results have shown that despite some morphological differences, class-F fly ash exhibits mechanical properties that are, in general, comparable to those observed in natural sandy soils.
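Shear-strength parameters from direct shear tests like those described above are conventionally obtained by fitting the Mohr-Coulomb envelope τ = c + σ·tan(φ). A worked example with invented data (the numbers are not from the paper):

```python
import numpy as np

# Fitting the Mohr-Coulomb failure envelope tau = c + sigma*tan(phi) to
# direct-shear results; the data below are invented for illustration only.
sigma = np.array([50.0, 100.0, 200.0])     # normal stress, kPa
tau = np.array([36.0, 71.0, 141.0])        # peak shear stress, kPa
slope, c = np.polyfit(sigma, tau, 1)       # least-squares straight line
phi = np.degrees(np.arctan(slope))         # friction angle, degrees
```

The small fitted cohesion intercept is consistent with the abstract's observation that class-F fly ash behaves much like a natural sandy (essentially cohesionless) soil.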
Non-ideal magnetohydrodynamics on a moving mesh
NASA Astrophysics Data System (ADS)
Marinacci, Federico; Vogelsberger, Mark; Kannan, Rahul; Mocz, Philip; Pakmor, Rüdiger; Springel, Volker
2018-05-01
In certain astrophysical systems, the commonly employed ideal magnetohydrodynamics (MHD) approximation breaks down. Here, we introduce novel explicit and implicit numerical schemes of ohmic resistivity terms in the moving-mesh code AREPO. We include these non-ideal terms for two MHD techniques: the Powell 8-wave formalism and a constrained transport scheme, which evolves the cell-centred magnetic vector potential. We test our implementation against problems of increasing complexity, such as one- and two-dimensional diffusion problems, and the evolution of progressive and stationary Alfvén waves. On these test problems, our implementation recovers the analytic solutions to second-order accuracy. As first applications, we investigate the tearing instability in magnetized plasmas and the gravitational collapse of a rotating magnetized gas cloud. In both systems, resistivity plays a key role. In the former case, it allows for the development of the tearing instability through reconnection of the magnetic field lines. In the latter, the adopted (constant) value of ohmic resistivity has an impact on both the gas distribution around the emerging protostar and the mass loading of magnetically driven outflows. Our new non-ideal MHD implementation opens up the possibility to study magneto-hydrodynamical systems on a moving mesh beyond the ideal MHD approximation.
Comparison of Artificial Compressibility Methods
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Housman, Jeffrey; Kwak, Dochan
2004-01-01
Various artificial compressibility methods for calculating the three-dimensional incompressible Navier-Stokes equations are compared. Each method is described and numerical solutions to test problems are conducted. A comparison based on convergence behavior, accuracy, and robustness is given.
Posttest analysis of a 1:6-scale reinforced concrete reactor containment building
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weatherby, J.R.
In an experiment conducted at Sandia National Laboratories, a 1:6-scale model of a reinforced concrete light water reactor containment building was pressurized with nitrogen gas to more than three times its design pressure. The pressurization produced one large tear and several smaller tears in the steel liner plate that functioned as the primary pneumatic seal for the structure. The data collected from the overpressurization test have been used to evaluate and further refine methods of structural analysis that can be used to predict the performance of containment buildings under conditions produced by a severe accident. This report describes posttest finite element analyses of the 1:6-scale model tests and compares pretest predictions of the structural response to the experimental results. Strains and displacements calculated in axisymmetric finite element analyses of the 1:6-scale model are compared to strains and displacements measured in the experiment. Detailed analyses of the liner plate are also described in the report. The region of the liner surrounding the large tear was analyzed using two different two-dimensional finite element models. The results from these analyses indicate that the primary mechanisms that initiated the tear can be captured in a two-dimensional finite element model. Furthermore, the analyses show that the studs used to anchor the liner to the concrete wall played an important role in initiating the liner tear. Three-dimensional finite element analyses of liner plates loaded by studs are also presented. Results from the three-dimensional analyses are compared to results from two-dimensional analyses of the same problems. 12 refs., 56 figs., 1 tab.
Underwater Stirling engine design with modified one-dimensional model
NASA Astrophysics Data System (ADS)
Li, Daijin; Qin, Kan; Luo, Kai
2015-09-01
Stirling engines are regarded as an efficient and promising power system for underwater devices. Currently, one-dimensional models are widely used to evaluate the thermodynamic performance of Stirling engines, but some aspects, such as mechanical loss and auxiliary power, still lack proper mathematical models. In this paper, a four-cylinder double-acting Stirling engine for Unmanned Underwater Vehicles (UUVs) is discussed, and a one-dimensional model incorporating empirical equations for mechanical loss and auxiliary power obtained from experiments is derived with reference to the Stirling engine computer model of the National Aeronautics and Space Administration (NASA). The P-40 Stirling engine, for which NASA has published extensive test results, is used to validate the accuracy of this one-dimensional model. The maximum error of the predicted output power is less than 18% relative to the test results, and the maximum error of the input power is no more than 9%. Finally, a Stirling engine for UUVs is designed with the Schmidt analysis method and the modified one-dimensional model, and the results indicate that the designed engine delivers the desired output power.
The effect of dissipative inhomogeneous medium on the statistics of the wave intensity
NASA Technical Reports Server (NTRS)
Saatchi, Sasan S.
1993-01-01
One of the main theoretical points in the theory of wave propagation in random media is the derivation of closed-form equations to describe the statistics of the propagating waves. In particular, in one-dimensional problems, the closed-form representation of multiple scattering effects is important since it contributes to understanding problems such as wave localization, backscattering enhancement, and intensity fluctuations. In this work, the propagation of plane waves in a layer of one-dimensional dissipative random medium is considered. The medium is modeled by a complex permittivity whose imaginary part is a constant representing the absorption. The one-dimensional problem is mathematically equivalent to the analysis of a transmission line with randomly perturbed distributed parameters and of a single-mode lossy waveguide, and the results can be used to study the propagation of radio waves through the atmosphere and the remote sensing of geophysical media. It is assumed that the scattering medium consists of an ensemble of one-dimensional point scatterers randomly positioned in a layer of thickness L with diffuse boundaries. A Poisson impulse process with density lambda is used to model the positions of the scatterers in the medium. By employing the Markov properties of this process, an exact closed-form equation of Kolmogorov-Feller type was obtained for the probability density of the reflection coefficient. This equation was solved by combining two limiting cases: (1) when the density of scatterers is small; and (2) when the medium is weakly dissipative. A two-variable perturbation method for small lambda was used to obtain solutions valid for thick layers. These solutions are then asymptotically evaluated for small dissipation. To show the effect of dissipation, the mean and fluctuations of the reflected power are obtained.
The results were compared with a lossy homogeneous medium and with a lossless inhomogeneous medium and the regions where the effect of absorption is not essential were discussed.
Aerosol Polarimetry Sensor (APS): Design Summary, Performance and Potential Modifications
NASA Technical Reports Server (NTRS)
Cairns, Brian
2014-01-01
APS is a mature design that has already been built and has a TRL of 7. Algorithmic and retrieval capabilities continue to improve and to make better and more sophisticated use of the data. Adjoint solutions, in both one and three dimensions, are computationally efficient and should be the preferred implementation for the calculation of Jacobians (one-dimensional) or cost-function gradients (three-dimensional). Adjoint solutions necessarily provide resolution of internal fields and simplify the incorporation of active measurements in retrievals, which will be necessary for a future ACE mission. It is best to test these capabilities when the answer is known: OSSEs that are well constrained observationally provide the best place to test future multi-instrument platform capabilities and to ensure those capabilities will meet scientific needs.
A one-dimensional nonlinear problem of thermoelasticity in extended thermodynamics
NASA Astrophysics Data System (ADS)
Rawy, E. K.
2018-06-01
We solve a nonlinear, one-dimensional initial boundary-value problem of thermoelasticity in generalized thermodynamics. A Cattaneo-type evolution equation for the heat flux is used, which differs from the one used extensively in the literature. The hyperbolic nature of the associated linear system is clarified through a study of the characteristic curves. Progressive wave solutions with two finite speeds are noted. A numerical treatment is presented for the nonlinear system using a three-step, quasi-linearization, iterative finite-difference scheme for which the linear system of equations is the initial step in the iteration. The obtained results are discussed in detail. They clearly show the hyperbolic nature of the system, and may be of interest in investigating thermoelastic materials, not only at low temperatures, but also during high temperature processes involving rapid changes in temperature as in laser treatment of surfaces.
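The Cattaneo-type evolution equation referred to above can be sketched, in one dimension and with assumed notation (relaxation time τ, heat flux q, conductivity k); note the paper states that its equation differs from the standard literature form, so this is only the generic template:

```latex
\tau \frac{\partial q}{\partial t} + q = -k \frac{\partial T}{\partial x}
```

Combined with the energy balance $\rho c \,\partial T/\partial t = -\partial q/\partial x$ (thermoelastic coupling terms omitted here), this yields a telegraph-type equation for $T$ with the finite propagation speed $\sqrt{k/(\rho c \tau)}$, consistent with the hyperbolic behaviour and finite wave speeds noted in the abstract.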
Numerical solution of inverse scattering for near-field optics.
Bao, Gang; Li, Peijun
2007-06-01
A novel regularized recursive linearization method is developed for a two-dimensional inverse medium scattering problem that arises in near-field optics, which reconstructs the scatterer of an inhomogeneous medium located on a substrate from data accessible through photon scanning tunneling microscopy experiments. Based on multiple frequency scattering data, the method starts from the Born approximation corresponding to weak scattering at a low frequency, and each update is obtained by continuation on the wavenumber from solutions of one forward problem and one adjoint problem of the Helmholtz equation.
Optimal Padding for the Two-Dimensional Fast Fourier Transform
NASA Technical Reports Server (NTRS)
Dean, Bruce H.; Aronstein, David L.; Smith, Jeffrey S.
2011-01-01
One-dimensional Fast Fourier Transform (FFT) operations work fastest on grids whose size is a power of two. Because of this, padding grids that are not already sized to a power of two, so that their size becomes the next highest power of two, can speed up operations. While this works well for one-dimensional grids, it does not work well for two-dimensional grids: for a two-dimensional grid, certain pad sizes work better than others, so a generalized strategy for determining optimal pad sizes is needed. There are three steps in the FFT algorithm. The first is to perform a one-dimensional transform on each row in the grid. The second is to transpose the resulting matrix. The third is to perform a one-dimensional transform on each row in the resulting grid. Steps one and three both benefit from padding the row to the next highest power of two, but the second step needs a novel approach. An algorithm was developed that strikes a balance between optimizing the grid pad size with prime factors that are small (which is optimal for one-dimensional operations) and with prime factors that are large (which is optimal for two-dimensional operations). This algorithm optimizes based on average run times and is not fine-tuned for any specific application. It increases the number of times that processor-requested data is found in the set-associative processor cache; cache retrievals are 4-10 times faster than conventional memory retrievals. The tested implementation of the algorithm resulted in faster execution times on all platforms tested, but with varying grid sizes, because different computer architectures process commands differently. The test grid was 512×512. Using a 540×540 grid on a Pentium V processor, the code ran 30 percent faster. On a PowerPC, a 256×256 grid worked best. A Core2Duo computer preferred either a 1040×1040 (15 percent faster) or a 1008×1008 (30 percent faster) grid.
There are many industries that can benefit from this algorithm, including optics, image-processing, signal-processing, and engineering applications.
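The padding strategy described above can be sketched as a search for the smallest "smooth" grid size at or above the data size, i.e. one whose prime factors are all small. This is an illustrative sketch, not the tested NASA implementation; the factor bound and the function name are assumptions:

```python
def smallest_smooth_size(n, max_factor=7):
    """Smallest integer >= n whose prime factors are all <= max_factor.

    FFTs on such sizes avoid slow large-prime sub-transforms, which is
    the rationale behind padding to a 'nicer' grid size.
    """
    def is_smooth(m):
        for p in (2, 3, 5, 7):
            if p > max_factor:
                break
            while m % p == 0:
                m //= p
        return m == 1

    m = n
    while not is_smooth(m):
        m += 1
    return m

# e.g. a 513-row grid would be padded up to the next 7-smooth size, 525
```

A full two-dimensional strategy would additionally weigh cache behaviour of the transpose step, which is what the abstract's algorithm balances.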
DOE Office of Scientific and Technical Information (OSTI.GOV)
Childs, K.W.
1991-07-01
HEATING is a FORTRAN program designed to solve steady-state and/or transient heat conduction problems in one-, two-, or three-dimensional Cartesian, cylindrical, or spherical coordinates. A model may include multiple materials, and the thermal conductivity, density, and specific heat of each material may be both time- and temperature-dependent. The thermal conductivity may be anisotropic. Materials may undergo change of phase. Thermal properties of materials may be input or may be extracted from a material properties library. Heat generation rates may be dependent on time, temperature, and position, and boundary temperatures may be time- and position-dependent. The boundary conditions, which may be surface-to-boundary or surface-to-surface, may be specified temperatures or any combination of prescribed heat flux, forced convection, natural convection, and radiation. The boundary condition parameters may be time- and/or temperature-dependent. General graybody radiation problems may be modeled with user-defined factors for radiant exchange. The mesh spacing may be variable along each axis. HEATING is variably dimensioned and utilizes free-form input. Three steady-state solution techniques are available: point-successive-overrelaxation iterative method with extrapolation, direct solution (for one-dimensional or two-dimensional problems), and conjugate gradient. Transient problems may be solved using one of several finite-difference schemes: Crank-Nicolson implicit, Classical Implicit Procedure (CIP), Classical Explicit Procedure (CEP), or Levy explicit method (which for some circumstances allows a time step greater than the CEP stability criterion). The solution of the system of equations arising from the implicit techniques is accomplished by point-successive-overrelaxation iteration and includes procedures to estimate the optimum acceleration parameter.
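As a minimal illustration of one of the transient schemes listed (Crank-Nicolson), not of HEATING itself, the sketch below advances a one-dimensional conduction problem with fixed-temperature ends by one implicit step; the function name and the dense linear solve are illustrative simplifications of the tridiagonal systems a production code would use:

```python
import numpy as np

def crank_nicolson_step(u, alpha, dx, dt):
    """One Crank-Nicolson step for u_t = alpha * u_xx with
    fixed-temperature (Dirichlet) boundary nodes."""
    n = len(u)
    r = alpha * dt / (2.0 * dx * dx)
    A = np.zeros((n, n))
    b = u.astype(float).copy()
    A[0, 0] = A[-1, -1] = 1.0            # boundary temperatures held fixed
    for i in range(1, n - 1):
        A[i, i - 1] = -r                 # implicit half of the scheme
        A[i, i] = 1.0 + 2.0 * r
        A[i, i + 1] = -r
        # explicit half of the scheme goes into the right-hand side
        b[i] = r * u[i - 1] + (1.0 - 2.0 * r) * u[i] + r * u[i + 1]
    return np.linalg.solve(A, b)
```

A linear temperature profile is a steady state of this problem, so a step should leave it unchanged, which is a convenient sanity check on the scheme.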
A constrained robust least squares approach for contaminant release history identification
NASA Astrophysics Data System (ADS)
Sun, Alexander Y.; Painter, Scott L.; Wittmeyer, Gordon W.
2006-04-01
Contaminant source identification is an important type of inverse problem in groundwater modeling and is subject to both data and model uncertainty. Model uncertainty was rarely considered in the previous studies. In this work, a robust framework for solving contaminant source recovery problems is introduced. The contaminant source identification problem is first cast into one of solving uncertain linear equations, where the response matrix is constructed using a superposition technique. The formulation presented here is general and is applicable to any porous media flow and transport solvers. The robust least squares (RLS) estimator, which originated in the field of robust identification, directly accounts for errors arising from model uncertainty and has been shown to significantly reduce the sensitivity of the optimal solution to perturbations in model and data. In this work, a new variant of RLS, the constrained robust least squares (CRLS), is formulated for solving uncertain linear equations. CRLS allows for additional constraints, such as nonnegativity, to be imposed. The performance of CRLS is demonstrated through one- and two-dimensional test problems. When the system is ill-conditioned and uncertain, it is found that CRLS gave much better performance than its classical counterpart, the nonnegative least squares. The source identification framework developed in this work thus constitutes a reliable tool for recovering source release histories in real applications.
Filippov, Alexander E; Gorb, Stanislav N
2015-02-06
One of the important problems appearing in experimental realizations of artificial adhesives inspired by gecko foot hair is so-called clusterization. If an artificially produced structure is flexible enough to allow efficient contact with natural rough surfaces, then after a few attachment-detachment cycles the fibres of the structure tend to adhere to one another and form clusters. Normally, such clusters are much larger than the original fibres and, because they are less flexible, form much worse adhesive contacts, especially with rough surfaces. The main problem here is that the forces responsible for the clusterization are the same intermolecular forces that attract the fibres to the fractal surface of the substrate. However, arrays of real gecko setae are much less susceptible to this problem. One possible reason is that the ends of the setae have a more sophisticated, non-uniformly distributed three-dimensional structure than that of existing artificial systems. In this paper, we numerically simulated the three-dimensional spatial geometry of non-uniformly distributed branches of nanofibres of the setal tip, studied its attachment-detachment dynamics and discussed its advantages over a uniformly distributed geometry.
Two-Dimensional Finite Element Ablative Thermal Response Analysis of an Arcjet Stagnation Test
NASA Technical Reports Server (NTRS)
Dec, John A.; Laub, Bernard; Braun, Robert D.
2011-01-01
The finite element ablation and thermal response (FEAtR, henceforth called FEAR) design and analysis program simulates the one-, two-, or three-dimensional ablation, internal heat conduction, thermal decomposition, and pyrolysis gas flow of thermal protection system materials. As part of a code validation study, two-dimensional axisymmetric results from FEAR are compared in this paper to thermal response data obtained from an arc-jet stagnation test. The results from FEAR are also compared to the two-dimensional axisymmetric computations from the two-dimensional implicit thermal response and ablation program under the same arc-jet conditions. The ablating material used in this arc-jet test is phenolic impregnated carbon ablator with an LI-2200 insulator as backup material. The test was performed at the NASA Ames Research Center Interaction Heating Facility. Spatially distributed computational fluid dynamics solutions for the flow field around the test article are used for the surface boundary conditions.
Edge detection and localization with edge pattern analysis and inflection characterization
NASA Astrophysics Data System (ADS)
Jiang, Bo
2012-05-01
In general, edges are abrupt changes or discontinuities in a two-dimensional image's intensity distribution. The accuracy of front-end edge detection methods in image processing impacts the eventual success of higher-level pattern analysis downstream. To generalize edge detectors designed from a simple ideal step-function model to the real distortions found in natural images, this research proposes an edge detection algorithm based on one-dimensional edge pattern analysis, in which edges are classified into three basic patterns: ramp, impulse, and step (RIS). After mathematical analysis, general rules for edge representation based on this classification are developed to reduce detection and localization errors, in particular the "double edge" effect that is a major drawback of derivative methods. When one-dimensional edge patterns are applied to two-dimensional image processing, however, a new issue arises: the edge detector must correctly mark inflections or junctions of edges. Research on human visual perception of objects and on information theory has pointed out that a pattern lexicon of "inflection micro-patterns" carries more information than a straight line, and research on scene perception suggests that contours carrying more information are a more important factor in successful scene categorization. Inflections and junctions are therefore extremely useful features, whose accurate description and reconstruction are significant in solving correspondence problems in computer vision. Accordingly, in addition to edge pattern analysis, inflection and junction characterization is used to extend the traditional derivative edge detection algorithm. Experiments were conducted to test these propositions about edge detection and localization accuracy improvements. 
The results support the idea that these edge detection method improvements are effective in enhancing the accuracy of edge detection and localization.
Shaffer, Patrick; Valsson, Omar; Parrinello, Michele
2016-01-01
The capabilities of molecular simulations have been greatly extended by a number of widely used enhanced sampling methods that facilitate escaping from metastable states and crossing large barriers. Despite these developments there are still many problems which remain out of reach for these methods which has led to a vigorous effort in this area. One of the most important problems that remains unsolved is sampling high-dimensional free-energy landscapes and systems that are not easily described by a small number of collective variables. In this work we demonstrate a new way to compute free-energy landscapes of high dimensionality based on the previously introduced variationally enhanced sampling, and we apply it to the miniprotein chignolin. PMID:26787868
Design of an advanced flight planning system
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Goka, T.
1985-01-01
The demands for fuel conservation and four-dimensional traffic management require that the preflight planning process be designed to account for advances in airborne flight management and weather forecasting. The steps and issues in designing such an advanced flight planning system are presented. Focus is placed on the different optimization options for generating the three-dimensional reference path. For the cruise phase, one can use predefined jet routes, direct routes based on a network of evenly spaced grid points, or a network where the grid points are existing navaid locations. Each choice presents its own problem in determining an optimum solution. Finding the reference path is further complicated by the choice of cruise altitude levels, the use of a time-varying weather field, and the requirement of a fixed time of arrival (the four-dimensional problem).
Two-dimensional radiative transfer. I - Planar geometry. [in stellar atmospheres
NASA Technical Reports Server (NTRS)
Mihalas, D.; Auer, L. H.; Mihalas, B. R.
1978-01-01
Differential-equation methods for solving the transfer equation in two-dimensional planar geometries are developed. One method, which uses a Hermitian integration formula on ray segments through grid points, proves to be extremely well suited to velocity-dependent problems. An efficient elimination scheme is developed for which the computing time scales linearly with the number of angles and frequencies; problems with large velocity amplitudes can thus be treated accurately. A very accurate and efficient method for performing a formal solution is also presented. A discussion is given of several examples of periodic media and free-standing slabs, both in static cases and with velocity fields. For the free-standing slabs, two-dimensional transport effects are significant near boundaries, but no important effects were found in any of the periodic cases studied.
NASA Astrophysics Data System (ADS)
Paramestha, D. L.; Santosa, B.
2018-04-01
The Two-dimensional Loading Heterogeneous Fleet Vehicle Routing Problem (2L-HFVRP) is a combination of the Heterogeneous Fleet VRP and a packing problem well known as the Two-Dimensional Bin Packing Problem (BPP). 2L-HFVRP is a Heterogeneous Fleet VRP in which customer demands are formed by sets of two-dimensional rectangular weighted items. These demands must be served from the depot by a heterogeneous fleet of vehicles with fixed and variable costs. The objective of 2L-HFVRP is to minimize the total transportation cost. All routes formed must be consistent with the capacity and loading process of the vehicle. Sequential and unrestricted scenarios are considered in this paper. We propose a metaheuristic combining the Genetic Algorithm (GA) and Cross Entropy (CE), named the Cross Entropy Genetic Algorithm (CEGA), to solve the 2L-HFVRP. The mutation concept of GA is used to speed up the convergence of CE to the optimal solution. The mutation mechanism is based on local improvement (2-opt, 1-1 exchange, and 1-0 exchange). The probability transition matrix mechanism of CE is used to avoid getting stuck in local optima. The effectiveness of CEGA was tested on benchmark 2L-HFVRP instances. The experimental results are competitive with those of other algorithms.
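The local-improvement moves named above are standard routing neighbourhoods; as a sketch (list-based routes, hypothetical helper names, not the paper's implementation), 2-opt reverses a segment within one route while 1-1 exchange swaps customers between two routes:

```python
def two_opt(route, i, k):
    """2-opt move: reverse the segment route[i..k], reconnecting the tour
    with two new edges in place of the two removed ones."""
    return route[:i] + route[i:k + 1][::-1] + route[k + 1:]

def exchange_1_1(r1, r2, i, j):
    """1-1 exchange: swap customer i of route r1 with customer j of r2,
    returning new routes (the originals are left untouched)."""
    r1, r2 = r1[:], r2[:]
    r1[i], r2[j] = r2[j], r1[i]
    return r1, r2
```

In a CE/GA hybrid such moves would serve as mutation operators, accepted when they shorten the routes without violating capacity or loading feasibility.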
Algorithm and code development for unsteady three-dimensional Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Obayashi, Shigeru
1991-01-01
A streamwise upwind algorithm for solving the unsteady 3-D Navier-Stokes equations was extended to handle the moving grid system. It is noted that the finite volume concept is essential to extend the algorithm. The resulting algorithm is conservative for any motion of the coordinate system. Two extensions to an implicit method were considered and the implicit extension that makes the algorithm computationally efficient is implemented into Ames's aeroelasticity code, ENSAERO. The new flow solver has been validated through the solution of test problems. Test cases include three-dimensional problems with fixed and moving grids. The first test case shown is an unsteady viscous flow over an F-5 wing, while the second test considers the motion of the leading edge vortex as well as the motion of the shock wave for a clipped delta wing. The resulting algorithm has been implemented into ENSAERO. The upwind version leads to higher accuracy in both steady and unsteady computations than the previously used central-difference method does, while the increase in the computational time is small.
A Finite Difference Method for Modeling Migration of Impurities in Multilayer Systems
NASA Astrophysics Data System (ADS)
Tosa, V.; Kovacs, Katalin; Mercea, P.; Piringer, O.
2008-09-01
A finite difference method to solve the one-dimensional diffusion of impurities in a multilayer system was developed for the special case in which a partition coefficient K imposes a ratio of the concentrations at the interface between two adjacent layers. The fictitious point method was applied to derive the algebraic equations for the mesh points at the interface, while a combined method was used for the non-uniform mesh points within the layers. The method was tested and then applied to calculate the migration of impurities from multilayer systems into liquid or solid samples, in migration experiments performed for quality testing purposes. An application was developed in the field of impurity migration from multilayer plastic packaging into food, a problem of increasing importance in the food industry.
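The interface condition described above can be sketched as follows. This is a minimal explicit-scheme illustration of a partition coefficient K at a two-layer interface (concentration jump c_left = K · c_right plus one-sided flux matching), not the paper's fictitious-point/combined method; grid sizes, boundary conditions, and names are assumptions:

```python
import numpy as np

def diffuse_two_layers(c1, c2, D1, D2, K, dx, dt, steps):
    """Explicit FD diffusion in two layers sharing an interface.
    At the interface the concentrations obey c1[-1] = K * c2[0] and the
    one-sided diffusive fluxes are matched; outer boundaries are no-flux."""
    for _ in range(steps):
        # interior FTCS updates in each layer
        c1[1:-1] += D1 * dt / dx**2 * (c1[2:] - 2 * c1[1:-1] + c1[:-2])
        c2[1:-1] += D2 * dt / dx**2 * (c2[2:] - 2 * c2[1:-1] + c2[:-2])
        # interface: D1*(c_L - a)/dx = D2*(b - c_R)/dx with c_L = K*c_R
        a, b = c1[-2], c2[1]
        cR = (D1 * a + D2 * b) / (D1 * K + D2)
        c1[-1], c2[0] = K * cR, cR
        # outer boundaries: zero flux
        c1[0], c2[-1] = c1[1], c2[-2]
    return c1, c2
```

With K > 1 the impurity equilibrates at a higher concentration on the first-layer side of the interface, which is the behaviour the partition coefficient encodes in migration modelling.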
NASA Technical Reports Server (NTRS)
Iida, H. T.
1966-01-01
Computational procedure reduces the numerical effort whenever the method of finite differences is used to solve ablation problems for which the surface recession is large relative to the initial slab thickness. The number of numerical operations required for a given maximum space mesh size is reduced.
Dane, Markus; Gonis, Antonios
2016-07-05
Based on a computational procedure for determining the functional derivative with respect to the density of any antisymmetric N-particle wave function for a non-interacting system that leads to the density, we devise a test as to whether or not a wave function known to lead to a given density corresponds to a solution of a Schrödinger equation for some potential. We examine explicitly the case of non-interacting systems described by Slater determinants. Here, numerical examples for the cases of a one-dimensional square-well potential with infinite walls and the harmonic oscillator potential illustrate the formalism.
A reconceptualization of the somatoform disorders.
Noyes, Russell; Stuart, Scott P; Watson, David B
2008-01-01
Since its introduction in DSM-III, the Somatoform Disorders category has been a subject of controversy. Critics of the grouping have claimed that it promotes dualism, assumes psychogenesis, and that it contains heterogeneous disorders that lack validity. The history of these disorders is one of shifting conceptualizations and disputes. A number of changes in the classification have been proposed, but few address problems that arise with the current formulation. The authors propose a dimensional reconceptualization based on marked and persistent somatic distress and care-eliciting behavior. This formulation is based on the interpersonal model of somatization. The authors propose testing of this conceptualization and indicate how this might be done.
Algorithms for Maneuvering Spacecraft Around Small Bodies
NASA Technical Reports Server (NTRS)
Acikmese, A. Bechet; Bayard, David
2006-01-01
A document describes mathematical derivations and applications of autonomous guidance algorithms for maneuvering spacecraft in the vicinities of small astronomical bodies like comets or asteroids. These algorithms compute fuel- or energy-optimal trajectories for typical maneuvers by solving the associated optimal-control problems with relevant control and state constraints. In the derivations, these problems are converted from their original continuous (infinite-dimensional) forms to finite-dimensional forms through (1) discretization of the time axis and (2) spectral discretization of control inputs via a finite number of Chebyshev basis functions. In these doubly discretized problems, the Chebyshev coefficients are the variables. These problems are, variously, either convex programming problems or programming problems that can be convexified. The resulting discrete problems are convex parameter-optimization problems; this is desirable because one can take advantage of very efficient and robust algorithms that have been developed previously and are well established for solving such problems. These algorithms are fast, do not require initial guesses, and always converge to global optima. Following the derivations, the algorithms are demonstrated by applying them to numerical examples of flyby, descent-to-hover, and ascent-from-hover maneuvers.
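The spectral discretization step described above, representing a control input by a finite number of Chebyshev coefficients so that the infinite-dimensional variable u(·) becomes a small coefficient vector, can be sketched with NumPy's Chebyshev utilities; the node choice, sample profile, and basis count here are illustrative assumptions:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

N = 8                                          # number of basis functions (assumed)
t = np.cos(np.pi * np.arange(N) / (N - 1))     # Chebyshev-Gauss-Lobatto nodes on [-1, 1]
u = np.exp(-t) * np.sin(2 * t)                 # a sample control profile u(t)

coeffs = C.chebfit(t, u, N - 1)                # finite-dimensional representation
u_rec = C.chebval(t, coeffs)                   # reconstruct the control at the nodes
```

In the optimal-control setting the `coeffs` vector would be the decision variable handed to the convex-programming solver, with the dynamics and constraints enforced at the discretized time points.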
One-dimensional Vlasov-Maxwell equilibrium for the force-free Harris sheet.
Harrison, Michael G; Neukirch, Thomas
2009-04-03
In this Letter, the first nonlinear force-free Vlasov-Maxwell equilibrium is presented. One component of the equilibrium magnetic field has the same spatial structure as the Harris sheet, but whereas the Harris sheet is kept in force balance by pressure gradients, in the force-free solution presented here force balance is maintained by magnetic shear. Magnetic pressure, plasma pressure and plasma density are constant. The method used to find the equilibrium is based on the analogy of the one-dimensional Vlasov-Maxwell equilibrium problem to the motion of a pseudoparticle in a two-dimensional conservative potential. The force-free solution can be generalized to a complete family of equilibria that describe the transition between the purely pressure-balanced Harris sheet to the force-free Harris sheet.
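For reference, the force-free Harris sheet field obtained in this Letter can be written (with assumed symbols: field strength B_0 and sheet half-thickness L) as

```latex
\mathbf{B} = B_0 \left( \tanh\frac{z}{L},\ \operatorname{sech}\frac{z}{L},\ 0 \right)
```

so that $|\mathbf{B}| = B_0$ is constant across the sheet: the shear component $B_y$ supplies the force balance that pressure gradients provide in the classical Harris sheet, consistent with the constant magnetic pressure, plasma pressure, and density stated in the abstract.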
Benchmarking a Visual-Basic based multi-component one-dimensional reactive transport modeling tool
NASA Astrophysics Data System (ADS)
Torlapati, Jagadish; Prabhakar Clement, T.
2013-01-01
We present the details of a comprehensive numerical modeling tool, RT1D, which can be used for simulating biochemical and geochemical reactive transport problems. The code can be run within the standard Microsoft EXCEL Visual Basic platform, and it does not require any additional software tools. The code can be easily adapted by others for simulating different types of laboratory-scale reactive transport experiments. We illustrate the capabilities of the tool by solving five benchmark problems with varying levels of reaction complexity. These literature-derived benchmarks are used to highlight the versatility of the code for solving a variety of practical reactive transport problems. The benchmarks are described in detail to provide a comprehensive database, which can be used by model developers to test other numerical codes. The VBA code presented in the study is a practical tool that can be used by laboratory researchers for analyzing both batch and column datasets within an EXCEL platform.
Artificial neural network methods in quantum mechanics
NASA Astrophysics Data System (ADS)
Lagaris, I. E.; Likas, A.; Fotiadis, D. I.
1997-08-01
In a previous article we have shown how one can employ Artificial Neural Networks (ANNs) in order to solve non-homogeneous ordinary and partial differential equations. In the present work we consider the solution of eigenvalue problems for differential and integrodifferential operators, using ANNs. We start by considering the Schrödinger equation for the Morse potential that has an analytically known solution, to test the accuracy of the method. We then proceed with the Schrödinger and the Dirac equations for a muonic atom, as well as with a nonlocal Schrödinger integrodifferential equation that models the n + α system in the framework of the resonating group method. In two dimensions we consider the well-studied Henon-Heiles Hamiltonian and in three dimensions the model problem of three coupled anharmonic oscillators. The method in all of the treated cases proved to be highly accurate, robust and efficient. Hence it is a promising tool for tackling problems of higher complexity and dimensionality.
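The abstract reports accuracy against analytically known spectra. Not an ANN solver, but a compact finite-difference reference of the sort such eigenvalue results are checked against; the sketch below uses the harmonic oscillator instead of the Morse potential because its lowest eigenvalue of H = -d²/dx² + x² is exactly 1:

```python
import numpy as np

def ground_state_energy(V, x):
    """Lowest eigenvalue of H = -d^2/dx^2 + V(x) on a uniform grid,
    using the standard three-point finite-difference discretization
    with Dirichlet boundaries at the ends of the grid."""
    h = x[1] - x[0]
    n = len(x)
    H = (np.diag(np.full(n, 2.0 / h**2) + V(x))
         - np.diag(np.full(n - 1, 1.0 / h**2), 1)
         - np.diag(np.full(n - 1, 1.0 / h**2), -1))
    return np.linalg.eigvalsh(H)[0]

x = np.linspace(-8, 8, 800)       # box wide enough that the state has decayed
E0 = ground_state_energy(lambda x: x**2, x)   # exact ground-state energy is 1
```

An ANN trial function would be trained to satisfy the same equation, and its eigenvalue compared against a reference of this kind.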
NASA Astrophysics Data System (ADS)
Jorris, Timothy R.
2007-12-01
To support the Air Force's Global Reach concept, a Common Aero Vehicle is being designed to support the Global Strike mission. "Waypoints" are specified for reconnaissance or multiple payload deployments and "no-fly zones" are specified for geopolitical restrictions or threat avoidance. Due to time critical targets and multiple scenario analysis, an autonomous solution is preferred over a time-intensive, manually iterative one. Thus, a real-time or near real-time autonomous trajectory optimization technique is presented to minimize the flight time, satisfy terminal and intermediate constraints, and remain within the specified vehicle heating and control limitations. This research uses the Hypersonic Cruise Vehicle (HCV) as a simplified two-dimensional platform to compare multiple solution techniques. The solution techniques include a unique geometric approach developed herein, a derived analytical dynamic optimization technique, and a rapidly emerging collocation numerical approach. This up-and-coming numerical technique is a direct solution method involving discretization then dualization, with pseudospectral methods and nonlinear programming used to converge to the optimal solution. This numerical approach is applied to the Common Aero Vehicle (CAV) as the test platform for the full three-dimensional reentry trajectory optimization problem. The culmination of this research is the verification of the optimality of this proposed numerical technique, as shown for both the two-dimensional and three-dimensional models. Additionally, user implementation strategies are presented to improve accuracy and enhance solution convergence. 
Thus, the contributions of this research are the geometric approach, the user implementation strategies, and the determination and verification of a numerical solution technique for the optimal reentry trajectory problem that minimizes time to target while satisfying vehicle dynamics and control limitation, and heating, waypoint, and no-fly zone constraints.
Dimensional stability tests over time and temperature for several low-expansion glass ceramics.
Hall, D B
1996-04-01
The dimensional stabilities of five commercially available low-expansion glass ceramics have been measured between -40 °C and +90 °C. Materials tested include Zerodur, Zerodur M, Astrositall, Clearceram 55, and Clearceram 63. With the use of a standardized thermal testing procedure, the thermal expansion, isothermal shrinkage, and hysteresis behavior of the various materials are compared with one another. A detailed comparison of three separate melts of Astrositall, two separate melts of Zerodur, and one melt of Zerodur M indicates that between -40 °C and +90 °C the dimensional stability and uniformity characteristics of two of the melts of Astrositall are somewhat better than those of the other two materials. To my knowledge, this is the first published comparison of data from these glass ceramics taken with identical test procedures.
Mower, Timothy E.; Higgins, Jerry D.; Yang, In C.; Peters, Charles A.
1994-01-01
Study of the hydrologic system at Yucca Mountain, Nevada, requires the extraction of pore-water samples from welded and nonwelded, unsaturated tuffs. Two compression methods (triaxial compression and one-dimensional compression) were examined to develop a repeatable extraction technique and to investigate the effects of the extraction method on the original pore-fluid composition. A commercially available triaxial cell was modified to collect pore water expelled from tuff cores. The triaxial cell applied a maximum axial stress of 193 MPa and a maximum confining stress of 68 MPa. Results obtained from triaxial compression testing indicated that pore-water samples could be obtained from nonwelded tuff cores that had initial moisture contents as small as 13 percent (by weight of dry soil). Injection of nitrogen gas while the test core was held at the maximum axial stress caused expulsion of additional pore water and reduced the required initial moisture content from 13 to 11 percent. Experimental calculations, together with experience gained from testing moderately welded tuff cores, indicated that the triaxial cell used in this study could not apply adequate axial or confining stress to expel pore water from cores of densely welded tuffs. This concern led to the design, fabrication, and testing of a one-dimensional compression cell. The one-dimensional compression cell used in this study was constructed from hardened 4340-alloy and nickel-alloy steels and could apply a maximum axial stress of 552 MPa. The major components of the device include a corpus ring and sample sleeve to confine the sample, a piston and base platen to apply axial load, and drainage plates to transmit expelled water from the test core out of the cell. 
One-dimensional compression extracted pore water from nonwelded tuff cores that had initial moisture contents as small as 7.6 percent; pore water was expelled from densely welded tuff cores that had initial moisture contents as small as 7.7 percent. Injection of nitrogen gas at the maximum axial stress did not produce additional pore water from nonwelded tuff cores, but was critical to recovery of pore water from densely welded tuff cores. Gas injection reduced the required initial moisture content in welded tuff cores from 7.7 to 6.5 percent. Based on the mechanical ability of a pore-water extraction method to remove water from welded and nonwelded tuff cores, one-dimensional compression is a more effective extraction method than triaxial compression. However, because the effects that one-dimensional compression has on pore-water chemistry are not completely understood, additional testing will be needed to verify that this method is suitable for pore-water extraction from Yucca Mountain tuffs.
An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors
Luo, Liyan; Xu, Luping; Zhang, Hua
2015-01-01
In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified as the comparison of the two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness by the proposed algorithm are better than those by the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms. PMID:26198233
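The key property claimed for the one-dimensional vector pattern is rotation invariance. A simplified planar analogue of that idea (not the paper's exact one_DVP construction): the sorted vector of distances from a target star to all other stars is unchanged by any rotation of the image, so matching reduces to comparing two such vectors.

```python
import numpy as np

def radial_pattern(stars, target):
    """Sorted distances from the target star to every other star:
    a one-dimensional feature vector invariant under image rotation."""
    d = np.linalg.norm(stars - stars[target], axis=1)
    return np.sort(d[d > 0])          # drop the zero self-distance

rng = np.random.default_rng(0)
stars = rng.uniform(-1, 1, size=(20, 2))   # synthetic star field
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rotated = stars @ R.T                      # same field, rotated camera
# radial_pattern(stars, k) and radial_pattern(rotated, k) agree
```

Because rotations preserve distances, the two feature vectors match to machine precision, which is what turns star identification into a plain vector comparison.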
Caracciolo, Sergio; Sicuro, Gabriele
2014-10-01
We discuss the equivalence between the Euclidean bipartite matching problem on the line and on the circumference and the Brownian bridge process on the same domains. The equivalence allows us to compute the correlation function and the optimal cost of the original combinatorial problem in the thermodynamic limit; moreover, we also solve the minimax problem on the line and on the circumference. The properties of the average cost and correlation functions are discussed.
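A fact that makes the one-dimensional matching problem tractable (and underlies its link to the Brownian bridge) is that for convex costs such as |x-y|^p with p ≥ 1, the optimal bipartite matching on the line is the order-preserving one: sort both point sets and match in order. A small sketch verifying this against brute force:

```python
from itertools import permutations
import random

def sorted_matching_cost(xs, ys, p=2):
    """Optimal matching cost on the line for convex cost |x - y|^p:
    the order-preserving matching of the sorted points is optimal."""
    return sum(abs(x - y) ** p for x, y in zip(sorted(xs), sorted(ys)))

def brute_force_cost(xs, ys, p=2):
    """Exhaustive minimum over all n! bijections (small n only)."""
    return min(sum(abs(x - y) ** p for x, y in zip(xs, perm))
               for perm in permutations(ys))

random.seed(1)
xs = [random.random() for _ in range(6)]
ys = [random.random() for _ in range(6)]
```

The sorted matching reduces an n! search to an O(n log n) computation, which is what permits closed-form thermodynamic-limit results in one dimension.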
NASA Astrophysics Data System (ADS)
Qian, Ying-Jing; Yang, Xiao-Dong; Zhai, Guan-Qiao; Zhang, Wei
2017-08-01
Inspired by the nonlinear modes concept in vibrational dynamics, the vertical periodic orbits around the triangular libration points are revisited for the Circular Restricted Three-body Problem. The ζ-component motion is treated as the dominant motion, and the ξ and η-component motions are treated as the slave motions. The slave motions are in nature related to the dominant motion through approximate nonlinear polynomial expansions with respect to the ζ-position and ζ-velocity during one of the periodic orbital motions. By employing the relations among the three directions, the three-dimensional system can be reduced to a one-dimensional problem. The approximate three-dimensional vertical periodic solution can then be obtained analytically by solving the dominant motion on the ζ-direction only. To demonstrate the effectiveness of the proposed method, an accuracy study was carried out to validate the polynomial expansion (PE) method. As one of the applications, the invariant nonlinear relations in polynomial expansion form are used as constraints to obtain numerical solutions by differential correction. The nonlinear relations among the directions provide an alternative point of view from which to explore the overall dynamics of periodic orbits around libration points with general rules.
Pilla, Ajai; Pathipaka, Suman
2016-01-01
Introduction The dimensional stability of the impression material could have an influence on the accuracy of the final restoration. Vinyl Polysiloxane Impression materials (VPS) are most frequently used as the impression material in fixed prosthodontics. As VPS is hydrophobic when it is poured with gypsum products, manufacturers added intrinsic surfactants and marketed as hydrophilic VPS. These hydrophilic VPS have shown increased wettability with gypsum slurries. VPS are available in different viscosities ranging from very low to very high for usage under different impression techniques. Aim To compare the dimensional accuracy of hydrophilic VPS and hydrophobic VPS using monophase, one step and two step putty wash impression techniques. Materials and Methods To test the dimensional accuracy of the impression materials a stainless steel die was fabricated as prescribed by ADA specification no. 19 for elastomeric impression materials. A total of 60 impressions were made. The materials were divided into two groups, Group1 hydrophilic VPS (Aquasil) and Group 2 hydrophobic VPS (Variotime). These were further divided into three subgroups A, B, C for monophase, one-step and two-step putty wash technique with 10 samples in each subgroup. The dimensional accuracy of the impressions was evaluated after 24 hours using vertical profile projector with lens magnification range of 20X-125X illumination. The study was analyzed through one-way ANOVA, post-hoc Tukey HSD test and unpaired t-test for mean comparison between groups. Results Results showed that the three different impression techniques (monophase, 1-step, 2-step putty wash techniques) did cause significant change in dimensional accuracy between hydrophilic VPS and hydrophobic VPS impression materials. One-way ANOVA disclosed, mean dimensional change and SD for hydrophilic VPS varied between 0.56% and 0.16%, which were low, suggesting hydrophilic VPS was satisfactory with all three impression techniques. 
However, mean dimensional change and SD for hydrophobic VPS were much higher than the standard steel die for the monophase technique, with smaller increases for the 1-step and 2-step techniques (p<0.05). The unpaired t-test showed that hydrophilic VPS was judged satisfactory compared with hydrophobic VPS for the 1-step and 2-step impression techniques. Conclusion Within the limitations of this study, it can be concluded that hydrophilic Vinyl polysiloxane was more dimensionally accurate than hydrophobic Vinyl polysiloxane using monophase, one step and two step putty wash impression techniques under moist conditions. PMID:27042587
Boundary condition computational procedures for inviscid, supersonic steady flow field calculations
NASA Technical Reports Server (NTRS)
Abbett, M. J.
1971-01-01
Results are given of a comparative study of numerical procedures for computing solid wall boundary points in supersonic inviscid flow calculations. Twenty-five different calculation procedures were tested on two sample problems: a simple expansion wave and a simple compression (two-dimensional steady flow). A simple calculation procedure was developed. The merits and shortcomings of the various procedures are discussed, along with implications for three-dimensional and time-dependent flows.
Recent Advances in Agglomerated Multigrid
NASA Technical Reports Server (NTRS)
Nishikawa, Hiroaki; Diskin, Boris; Thomas, James L.; Hammond, Dana P.
2013-01-01
We report recent advancements of the agglomerated multigrid methodology for complex flow simulations on fully unstructured grids. An agglomerated multigrid solver is applied to a wide range of test problems from simple two-dimensional geometries to realistic three- dimensional configurations. The solver is evaluated against a single-grid solver and, in some cases, against a structured-grid multigrid solver. Grid and solver issues are identified and overcome, leading to significant improvements over single-grid solvers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polat, Orhan, E-mail: orhan.polat@deu.edu.tr; Özer, Çaglar, E-mail: caglar.ozer@deu.edu.tr; Dokuz Eylul University, The Graduate School of Natural and Applied Sciences, Department of Geophysical Engineering, Izmir-Turkey
In this study, we examined the one-dimensional crustal velocity structure of the Izmir gulf and its surroundings. We used nearly one thousand high-quality (A and B class) earthquake records obtained by the Disaster and Emergency Management Presidency (AFAD) [1], Bogazici University (BU-KOERI) [2], and the National Observatory of Athens (NOA) [3,4]. We ran several synthetic tests to assess the power of the new velocity structure, examining phase residuals, RMS values, and shifting tests. After evaluating these tests, we determined a one-dimensional velocity structure, with minimum 1-D P-wave velocities, hypocentral parameters, and earthquake locations obtained from the VELEST algorithm. The distribution of earthquakes was visibly improved by using the new minimum velocity structure.
A one-dimensional interactive soil-atmosphere model for testing formulations of surface hydrology
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Eagleson, Peter S.
1990-01-01
A model representing a soil-atmosphere column in a GCM is developed for off-line testing of GCM soil hydrology parameterizations. Repeating three representative GCM sensitivity experiments with this one-dimensional model demonstrates that, to first order, the model reproduces a GCM's sensitivity to imposed changes in parameterization and therefore captures the essential physics of the GCM. The experiments also show that by allowing feedback between the soil and atmosphere, the model improves on off-line tests that rely on prescribed precipitation, radiation, and other surface forcing.
Weatherill, D.; Simmons, C.T.; Voss, C.I.; Robinson, N.I.
2004-01-01
This study proposes the use of several problems of unstable steady state convection with variable fluid density in a porous layer of infinite horizontal extent as two-dimensional (2-D) test cases for density-dependent groundwater flow and solute transport simulators. Unlike existing density-dependent model benchmarks, these problems have well-defined stability criteria that are determined analytically. These analytical stability indicators can be compared with numerical model results to test the ability of a code to accurately simulate buoyancy driven flow and diffusion. The basic analytical solution is for a horizontally infinite fluid-filled porous layer in which fluid density decreases with depth. The proposed test problems include unstable convection in an infinite horizontal box, in a finite horizontal box, and in an infinite inclined box. A dimensionless Rayleigh number incorporating properties of the fluid and the porous media determines the stability of the layer in each case. Testing the ability of numerical codes to match both the critical Rayleigh number at which convection occurs and the wavelength of convection cells is an addition to the benchmark problems currently in use. The proposed test problems are modelled in 2-D using the SUTRA [SUTRA-A model for saturated-unsaturated variable-density ground-water flow with solute or energy transport. US Geological Survey Water-Resources Investigations Report, 02-4231, 2002. 250 p] density-dependent groundwater flow and solute transport code. For the case of an infinite horizontal box, SUTRA results show a distinct change from stable to unstable behaviour around the theoretical critical Rayleigh number of 4π² and the simulated wavelength of unstable convection agrees with that predicted by the analytical solution. 
The effects of finite layer aspect ratio and inclination on stability indicators are also tested, and numerical results are in excellent agreement with theoretical stability criteria and with numerical results previously reported in the traditional fluid mechanics literature. © 2004 Elsevier Ltd. All rights reserved.
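The critical value quoted in the abstract is the minimum of the classical neutral-stability curve for a fluid-saturated porous layer heated (or densified) unstably, the Horton-Rogers-Lapwood problem: Ra(a) = (a² + π²)²/a² as a function of horizontal wavenumber a. A short sketch recovering both the critical Rayleigh number 4π² and the critical wavenumber π numerically:

```python
import numpy as np

def neutral_rayleigh(a):
    """Neutral-stability curve for the Horton-Rogers-Lapwood problem:
    at horizontal wavenumber a, the layer becomes unstable once the
    Rayleigh number exceeds (a^2 + pi^2)^2 / a^2."""
    return (a**2 + np.pi**2) ** 2 / a**2

a = np.linspace(0.5, 10.0, 100000)
Ra = neutral_rayleigh(a)
Ra_c = Ra.min()              # critical Rayleigh number, analytically 4*pi^2
a_c = a[Ra.argmin()]         # critical wavenumber, analytically pi
```

A benchmark run such as the SUTRA tests described above checks that simulated convection switches on near Ra_c and that the cell wavelength (2π/a_c, i.e. twice the layer depth) matches the analytical prediction.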
NMR Analysis of Unknowns: An Introduction to 2D NMR Spectroscopy
ERIC Educational Resources Information Center
Alonso, David E.; Warren, Steven E.
2005-01-01
A study combined 1D (one-dimensional) and 2D (two-dimensional) NMR spectroscopy to solve structural organic problems of three unknowns, which include 2-, 3-, and 4-heptanone. Results showed ¹H NMR and ¹³C NMR signal assignments for 2- and 3-heptanone were more challenging than for 4-heptanone owing to the…
Efficient Inversion of Mult-frequency and Multi-Source Electromagnetic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gary D. Egbert
2007-03-22
The project covered by this report focused on the development of efficient but robust non-linear inversion algorithms for electromagnetic induction data, in particular for data collected with multiple receivers and multiple transmitters, a situation extremely common in geophysical EM subsurface imaging methods. A key observation is that for such multi-transmitter problems, each step in commonly used linearized iterative limited-memory search schemes such as conjugate gradients (CG) requires solution of forward and adjoint EM problems for each of the N frequencies or sources, essentially generating data sensitivities for an N-dimensional data-subspace. These multiple sensitivities allow a good approximation to the full Jacobian of the data mapping to be built up in many fewer search steps than would be required by application of textbook optimization methods, which take no account of the multiplicity of forward problems that must be solved for each search step. We have applied this idea to develop a hybrid inversion scheme that combines features of the iterative limited-memory type methods with a Newton-type approach using a partial calculation of the Jacobian. Initial tests on 2D problems show that the new approach produces results essentially identical to a Newton-type Occam minimum-structure inversion, while running more rapidly than an iterative (fixed regularization parameter) CG-style inversion. Memory requirements, while greater than for something like CG, are modest enough that the scheme should allow even 3D inverse problems to be solved on a common desktop PC, at least for modest (~ 100 sites, 15-20 frequencies) data sets. A secondary focus of the research has been development of a modular system for EM inversion, using an object-oriented approach. 
This system has proven useful for more rapid prototyping of inversion algorithms, in particular allowing initial development and testing to be conducted with two-dimensional example problems before approaching more computationally cumbersome three-dimensional problems.
A Bell-Curved Based Algorithm for Mixed Continuous and Discrete Structural Optimization
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.; Weber, Michael; Sobieszczanski-Sobieski, Jaroslaw
2001-01-01
An evolutionary based strategy utilizing two normal distributions to generate children is developed to solve mixed integer nonlinear programming problems. This Bell-Curve Based (BCB) evolutionary algorithm is similar in spirit to (μ + μ) evolutionary strategies and evolutionary programs but with fewer parameters to adjust and no mechanism for self adaptation. First, a new version of BCB to solve purely discrete optimization problems is described and its performance tested against a tabu search code for an actuator placement problem. Next, the performance of a combined version of discrete and continuous BCB is tested on 2-dimensional shape problems and on a minimum weight hub design problem. In the latter case the discrete portion is the choice of the underlying beam shape (I, triangular, circular, rectangular, or U).
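The abstract does not spell out the BCB sampling rule. One plausible reading, sketched below as an assumption rather than the published operator, generates each child near the segment joining two parents using two normal draws (one positioning the child along the segment, one perturbing it), inside an elitist (μ + μ)-style loop on a simple continuous test function:

```python
import random

def bcb_child(p1, p2, sigma=0.1):
    """Sample a child near the segment joining two parents using normal
    draws (a bell curve along and across the segment). Illustrative
    sketch only; parameters are assumptions, not the published BCB."""
    t = random.gauss(0.5, 0.25)            # position along the segment
    return [a + t * (b - a) + random.gauss(0.0, sigma)
            for a, b in zip(p1, p2)]

def sphere(x):                             # convex test objective
    return sum(v * v for v in x)

random.seed(0)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
best0 = min(sphere(x) for x in pop)
for _ in range(200):                       # elitist (mu + mu)-style loop
    parents = sorted(pop, key=sphere)[:10]
    kids = [bcb_child(random.choice(parents), random.choice(parents))
            for _ in range(20)]
    pop = sorted(pop + kids, key=sphere)[:20]
best = min(sphere(x) for x in pop)
```

Note how few knobs there are (one sigma, one segment spread), which matches the abstract's point about having fewer parameters to adjust than self-adaptive evolution strategies.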
Information Gain Based Dimensionality Selection for Classifying Text Documents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumidu Wijayasekara; Milos Manic; Miles McQueen
2013-06-01
Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel, genetic algorithm based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic algorithm based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
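The quantity computed a priori here is the information gain of each term dimension, IG(Y; X) = H(Y) - H(Y|X). A minimal sketch for a binary term-presence feature (how IG is then mapped to a mutation probability is not specified in the abstract, so that step is omitted):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) in bits of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(Y; X) = H(Y) - H(Y|X) for one candidate dimension."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        sub = [y for x, y in zip(feature, labels) if x == v]
        cond += len(sub) / n * entropy(sub)
    return entropy(labels) - cond

labels  = [1, 1, 1, 1, 0, 0, 0, 0]
perfect = [1, 1, 1, 1, 0, 0, 0, 0]   # term presence predicts the class exactly
useless = [1, 0, 1, 0, 1, 0, 1, 0]   # term presence independent of the class
```

A perfectly predictive dimension attains IG = H(Y) = 1 bit, an uninformative one IG = 0; biasing chromosome mutation by these precomputed scores adds no per-generation cost, as the abstract notes.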
Principles for problem aggregation and assignment in medium scale multiprocessors
NASA Technical Reports Server (NTRS)
Nicol, David M.; Saltz, Joel H.
1987-01-01
One of the most important issues in parallel processing is the mapping of workload to processors. This paper considers a large class of problems having a high degree of potential fine grained parallelism, and execution requirements that are either not predictable, or are too costly to predict. The main issues in mapping such a problem onto medium scale multiprocessors are those of aggregation and assignment. We study a method of parameterized aggregation that makes few assumptions about the workload. The mapping of aggregate units of work onto processors is uniform, and exploits locality of workload intensity to balance the unknown workload. In general, a finer aggregate granularity leads to a better balance at the price of increased communication/synchronization costs; the aggregation parameters can be adjusted to find a reasonable granularity. The effectiveness of this scheme is demonstrated on three model problems: an adaptive one-dimensional fluid dynamics problem with message passing, a sparse triangular linear system solver on both a shared memory and a message-passing machine, and a two-dimensional time-driven battlefield simulation employing message passing. Using the model problems, the tradeoffs are studied between balanced workload and the communication/synchronization costs. Finally, an analytical model is used to explain why the method balances workload and minimizes the variance in system behavior.
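One way to sketch the aggregation-and-assignment idea for an unknown, spatially varying workload: group unit tasks into blocks of a chosen granularity, then deal the blocks to processors cyclically so that locality of workload intensity spreads evenly. The function below is an illustrative toy, not the paper's scheme; it shows the stated tradeoff, since coarser blocks mean fewer communication boundaries but a worse balance.

```python
def cyclic_assignment(work, n_procs, grain):
    """Aggregate unit tasks into blocks of size `grain`, then assign
    the blocks to processors cyclically. Returns per-processor load."""
    blocks = [work[i:i + grain] for i in range(0, len(work), grain)]
    loads = [0.0] * n_procs
    for b, block in enumerate(blocks):
        loads[b % n_procs] += sum(block)
    return loads

# A monotonically increasing (unknown-in-advance) workload intensity:
work = [float(i) for i in range(100)]
fine = cyclic_assignment(work, 4, grain=1)      # fine granularity
coarse = cyclic_assignment(work, 4, grain=25)   # one big block per processor
```

With grain=1 the four loads stay within a few percent of each other, while grain=25 gives one processor several times the load of another; in practice the granularity parameter is tuned against the communication and synchronization cost that fine blocks incur.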
On the Performance Evaluation of 3D Reconstruction Techniques from a Sequence of Images
NASA Astrophysics Data System (ADS)
Eid, Ahmed; Farag, Aly
2005-12-01
The performance evaluation of 3D reconstruction techniques is not a simple problem to solve. This is not only due to the increased dimensionality of the problem but also due to the lack of standardized and widely accepted testing methodologies. This paper presents a unified framework for the performance evaluation of different 3D reconstruction techniques. This framework includes a general problem formalization, different measuring criteria, and a classification method as a first step in standardizing the evaluation process. Performance characterization of two standard 3D reconstruction techniques, stereo and space carving, is also presented. The evaluation is performed on the same data set using an image reprojection testing methodology to reduce the dimensionality of the evaluation domain. Also, different measuring strategies are presented and applied to the stereo and space carving techniques. These measuring strategies have shown consistent results in quantifying the performance of these techniques. Additional experiments are performed on the space carving technique to study the effect of the number of input images and the camera pose on its performance.
An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan, E-mail: weixuan.li@usc.edu; Lin, Guang, E-mail: guang.lin@pnnl.gov; Zhang, Dongxiao, E-mail: dxz@pku.edu.cn
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect—except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depends on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. 
The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
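Since PCKF is described as mirroring the EnKF except in how uncertainty is represented, a minimal sketch of the EnKF analysis step it resembles may help fix ideas. This is the textbook perturbed-observation update for a directly observed scalar state, not the authors' PCKF; with a Gaussian prior N(1, 4) and an observation 3.0 with unit error variance, the exact Kalman posterior is N(2.6, 0.8).

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err, rng):
    """One EnKF analysis step for a directly observed scalar state:
    Kalman gain from the ensemble variance, perturbed-observation form."""
    var = ensemble.var(ddof=1)
    gain = var / (var + obs_err**2)
    perturbed = obs + obs_err * rng.standard_normal(len(ensemble))
    return ensemble + gain * (perturbed - ensemble)

rng = np.random.default_rng(42)
prior = 1.0 + 2.0 * rng.standard_normal(5000)   # samples from the N(1, 4) prior
post = enkf_update(prior, obs=3.0, obs_err=1.0, rng=rng)
# post.mean() and post.var(ddof=1) approach 2.6 and 0.8 for large ensembles
```

PCKF replaces the sample ensemble with a truncated polynomial chaos expansion of the state, which is why the truncation (basis selection) discussed above controls both its accuracy and its cost.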
An Adaptive ANOVA-based PCKF for High-Dimensional Nonlinear Inverse Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
LI, Weixuan; Lin, Guang; Zhang, Dongxiao
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect—except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depends on an appropriate truncation of the PCE series. Having more polynomial chaos bases in the expansion helps to capture uncertainty more accurately but increases computational cost. Bases selection is particularly importantmore » for high-dimensional stochastic problems because the number of polynomial chaos bases required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE bases are pre-set based on users’ experience. Also, for sequential data assimilation problems, the bases kept in PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE bases for different problems and automatically adjusts the number of bases in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. 
The new algorithm is tested on several examples and demonstrates its effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
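The key cost-saving step, expanding low-dimensional ANOVA component functions instead of the full high-dimensional model, can be sketched as a minimal first-order anchored-ANOVA decomposition. The toy model, anchor point, and grids below are illustrative choices, not values from the paper:

```python
import numpy as np

# First-order anchored-ANOVA sketch: approximate a d-dimensional model f
# about an anchor point c by a constant term plus univariate components.
# Each component f_i varies only x_i, so it can be expanded in a cheap
# one-dimensional PCE instead of expanding f over all d dimensions at once.
def anova_first_order(f, c, grids):
    """Return f0 = f(c) and the univariate components f_i(x) = f(..x_i..) - f0."""
    f0 = f(c)                        # zeroth-order (anchor) term
    comps = []
    for i, g in enumerate(grids):
        vals = []
        for x in g:
            point = c.copy()
            point[i] = x             # vary one dimension, freeze the rest at c
            vals.append(f(point) - f0)
        comps.append(np.array(vals))
    return f0, comps

# Toy model: additive, so the first-order ANOVA decomposition is exact.
f = lambda x: 1.0 + x[0] ** 2 + 3.0 * x[1]
c = np.zeros(2)
grids = [np.linspace(-1.0, 1.0, 5)] * 2
f0, comps = anova_first_order(f, c, grids)
```

For non-additive models, higher-order (bivariate) components would be added adaptively, which is where the paper's selection criterion enters.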
Progress on a Taylor weak statement finite element algorithm for high-speed aerodynamic flows
NASA Technical Reports Server (NTRS)
Baker, A. J.; Freels, J. D.
1989-01-01
A new finite element numerical Computational Fluid Dynamics (CFD) algorithm has matured to the point of efficiently solving two-dimensional, high-speed, real-gas compressible flow problems in generalized coordinates on modern vector computer systems. The algorithm employs a Taylor Weak Statement classical Galerkin formulation, a variably implicit Newton iteration, and a tensor matrix product factorization of the linear algebra Jacobian under a generalized coordinate transformation. Allowing for a general two-dimensional conservation law system, the algorithm has been exercised on the Euler and laminar forms of the Navier-Stokes equations. Real-gas fluid properties are admitted, and numerical results verify solution accuracy, efficiency, and stability over a range of test problem parameters.
Ordering phase transition in the one-dimensional Axelrod model
NASA Astrophysics Data System (ADS)
Vilone, D.; Vespignani, A.; Castellano, C.
2002-12-01
We study the one-dimensional behavior of a cellular automaton aimed at the description of the formation and evolution of cultural domains. The model exhibits a non-equilibrium transition between a phase with all the system sharing the same culture and a disordered phase of coexisting regions with different cultural features. Depending on the initial distribution of the disorder the transition occurs at different values of the model parameters. This phenomenology is qualitatively captured by a mean-field approach, which maps the dynamics into a multi-species reaction-diffusion problem.
Interior radiances in optically deep absorbing media. I - Exact solutions for one-dimensional model.
NASA Technical Reports Server (NTRS)
Kattawar, G. W.; Plass, G. N.
1973-01-01
An exact analytic solution to the one-dimensional scattering problem with arbitrary single scattering albedo and arbitrary surface albedo is presented. Expressions are given for the emergent flux from a homogeneous layer, the internal flux within the layer, and the radiative heating. A comparison of these results with the values calculated from the matrix operator theory indicates an exceedingly high accuracy. A detailed study is made of the error in the matrix operator results and its dependence on the accuracy of the starting value.
NASA Technical Reports Server (NTRS)
Sulkanen, Martin E.; Borovsky, Joseph E.
1992-01-01
The study of relativistic plasma double layers is described through the solution of the one-dimensional, unmagnetized, steady-state Poisson-Vlasov equations and by means of one-dimensional, unmagnetized, particle-in-cell simulations. The thickness vs potential-drop scaling law is extended to relativistic potential drops and relativistic plasma temperatures. The transition in the scaling law for 'strong' double layers suggested by analytical two-beam models by Carlqvist (1982) is confirmed, and causality problems of standard double-layer simulation techniques applied to relativistic plasma systems are discussed.
NASA Astrophysics Data System (ADS)
Itai, K.
1987-02-01
Two models which describe one-dimensional hopping motion of a heavy particle interacting with phonons are discussed. Model A corresponds to hopping in 1D metals or to the polaron problem. In model B the momentum dependence of the particle-phonon coupling is proportional to k^(-1/2). The scaling equations show that only in model B does localization occur for a coupling larger than a critical value. In the localization region this model shows close analogy to the Caldeira-Leggett model for macroscopic quantum tunneling.
NASA Astrophysics Data System (ADS)
Talib, Imran; Belgacem, Fethi Bin Muhammad; Asif, Naseer Ahmad; Khalil, Hammad
2017-01-01
In this research article, we derive and analyze an efficient spectral method, based on operational matrices of three-dimensional orthogonal Jacobi polynomials, for the numerical solution of a generalized class of multi-term, high-dimensional fractional-order partial differential equations with mixed partial derivatives. With the aid of the operational matrices, the fractional-order problem is transformed into an easily solvable system of algebraic equations whose solution yields the solution of the original problem. Some test problems are considered to confirm the accuracy and validity of the proposed numerical method. Convergence is verified by comparing our Matlab simulation results with exact solutions from the literature, yielding negligible errors. Moreover, comparative results discussed in the literature are extended and improved in this study.
Numerical solution of special ultra-relativistic Euler equations using central upwind scheme
NASA Astrophysics Data System (ADS)
Ghaffar, Tayabia; Yousaf, Muhammad; Qamar, Shamsul
2018-06-01
This article is concerned with the numerical approximation of the one- and two-dimensional special ultra-relativistic Euler equations. The governing equations are coupled first-order nonlinear hyperbolic partial differential equations describing perfect fluid flow in terms of the particle density, the four-velocity, and the pressure. A high-resolution shock-capturing central upwind scheme is employed to solve the model equations. To avoid excessive numerical diffusion, the scheme exploits information about the local propagation speeds. Second-order accuracy is obtained by using a Runge-Kutta time-stepping method and MUSCL-type initial reconstruction. After discussing the model equations and the numerical technique, several 1D and 2D test problems are investigated. For all the numerical test cases, the proposed scheme shows very good agreement with results obtained by well-established algorithms, even for highly relativistic 2D test problems. For validation and comparison, the staggered central scheme and the kinetic flux-vector splitting (KFVS) method are also applied to the same model. The numerical results demonstrate the robustness and efficiency of the central upwind scheme.
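As a hedged illustration of the central upwind idea (one-sided local speeds bounding the Riemann fan at each interface), here is a first-order Kurganov-type flux applied to the scalar Burgers equation rather than the relativistic Euler system of the paper; the grid, time step, and initial data are illustrative:

```python
import numpy as np

# First-order central upwind scheme for u_t + (u^2/2)_x = 0.
# The one-sided speeds a+ >= 0 and a- <= 0 bound the local propagation
# speeds (f'(u) = u for Burgers), which limits numerical diffusion
# compared with a global Lax-Friedrichs speed.
def flux(u):
    return 0.5 * u ** 2

def central_upwind_step(u, dx, dt):
    uL, uR = u[:-1], u[1:]                              # interface states
    ap = np.maximum.reduce([uL, uR, np.zeros_like(uL)])  # a+ >= 0
    am = np.minimum.reduce([uL, uR, np.zeros_like(uL)])  # a- <= 0
    denom = np.where(ap - am > 1e-14, ap - am, 1.0)      # guard a+ = a- = 0
    F = (ap * flux(uL) - am * flux(uR)) / denom + ap * am / denom * (uR - uL)
    un = u.copy()
    un[1:-1] -= dt / dx * (F[1:] - F[:-1])               # conservative update
    return un

# Right-moving shock: u = 1 on the left, 0 on the right (shock speed 1/2).
x = np.linspace(0.0, 1.0, 101)
u = np.where(x < 0.5, 1.0, 0.0)
for _ in range(40):                                      # advance to t = 0.2
    u = central_upwind_step(u, dx=0.01, dt=0.005)
```

In the paper the same interface construction is applied componentwise to the relativistic Euler system, with the speeds taken from the system's characteristic wave speeds.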
Fast solver for large scale eddy current non-destructive evaluation problems
NASA Astrophysics Data System (ADS)
Lei, Naiguang
Eddy current testing plays a very important role in the non-destructive evaluation of conducting test samples. Based on Faraday's law, an alternating magnetic field source generates induced currents, called eddy currents, in an electrically conducting test specimen. The eddy currents generate induced magnetic fields that oppose the direction of the inducing magnetic field in accordance with Lenz's law. In the presence of discontinuities in material properties or defects in the test specimen, the induced eddy current paths are perturbed, and the associated magnetic fields can be detected by coils or magnetic field sensors, such as Hall elements or magneto-resistance sensors. Due to the complexity of test specimens and inspection environments, theoretical simulation models are extremely valuable for studying the basic field/flaw interactions in order to obtain a fuller understanding of non-destructive testing phenomena. Theoretical models of the forward problem are also useful for training and validation of automated defect detection systems, since they generate defect signatures that are expensive to replicate experimentally. In general, modelling methods can be classified into two categories: analytical and numerical. Although analytical approaches offer closed-form solutions, such solutions are generally unobtainable, largely due to the complex sample and defect geometries involved, especially in three-dimensional space. Numerical modelling has become popular with advances in computer technology and computational methods. However, due to the long computation times for large-scale problems, accelerations/fast solvers are needed to enhance numerical models. This dissertation describes a numerical simulation model for eddy current problems using finite element analysis. The accuracy of this model is validated via comparison with experimental measurements of steam generator tube wall defects.
These simulations, which generate two-dimensional raster-scan data, typically take one to two days on a dedicated eight-core PC. To reduce the computational time, a novel direct integral solver for eddy current problems and its GPU-based implementation are also investigated in this research.
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as a static system; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and a perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
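As a toy sketch of the sensitivity coefficients described above (not LSENS itself, which handles stiff multi-reaction systems), consider the single first-order reaction dy/dt = -k*y, whose sensitivity s = dy/dk obeys a companion ODE obtained by differentiating the rate equation with respect to k:

```python
import math

# Forward-sensitivity sketch for dy/dt = -k*y: the sensitivity s = dy/dk
# satisfies ds/dt = -y - k*s, integrated here alongside y with explicit
# Euler steps.  The rate constant and step count are arbitrary choices.
k, dt, nsteps = 2.0, 1.0e-4, 10000       # integrate to t = 1
y, s = 1.0, 0.0                          # y(0) = 1, s(0) = dy(0)/dk = 0
for _ in range(nsteps):
    y, s = y + dt * (-k * y), s + dt * (-y - k * s)

# Analytic solution at t = 1: y = exp(-k*t) and s = dy/dk = -t*exp(-k*t).
y_exact, s_exact = math.exp(-2.0), -math.exp(-2.0)
```

LSENS integrates the analogous (much larger) sensitivity system with implicit stiff solvers rather than explicit Euler; the principle of augmenting the state with dy/d(parameter) is the same.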
3D-PDR: Three-dimensional photodissociation region code
NASA Astrophysics Data System (ADS)
Bisbas, T. G.; Bell, T. A.; Viti, S.; Yates, J.; Barlow, M. J.
2018-03-01
3D-PDR is a three-dimensional photodissociation region code written in Fortran. It uses the Sundials package (written in C) to solve the set of ordinary differential equations and it is the successor of the one-dimensional PDR code UCL_PDR (ascl:1303.004). Using the HEALpix ray-tracing scheme (ascl:1107.018), 3D-PDR solves a three-dimensional escape probability routine and evaluates the attenuation of the far-ultraviolet radiation in the PDR and the propagation of FIR/submm emission lines out of the PDR. The code is parallelized (OpenMP) and can be applied to 1D and 3D problems.
[Development of a presenteeism questionnaire for skilled workers at high-technology enterprises].
Sheng, Li; Huang, Jian-Shi; He, Lian; Cui, Jian-Xia; Xie, Jing; Liu, Feng-Juan
2009-08-01
To develop a presenteeism questionnaire for Chinese high-technology skilled workers. Methods used included literature review, face-to-face in-depth interviews, and experts' consultation in developing the questionnaire. The presenteeism questionnaire includes two sections: one on the employee's general health condition, and a second surveying the influence of employees' health conditions on their productivity. The first section includes 55 items in 8 dimensions: ache, symptoms, sleeping problems, attention, bad emotion, pressure, fatigue, and social adaptation. The Cronbach's alpha values of these dimensions are 0.79, 0.83, 0.75, 0.69, 0.83, 0.86, 0.80 and 0.88, respectively, and their split-half Spearman-Brown coefficients are 0.78, 0.75, 0.61, 0.62, 0.82, 0.81, 0.77 and 0.88, respectively. Goodness-of-fit test model indices are as follows: chi-square/df = 3.68, normed fit index 0.95, non-normed fit index 0.96, comparative fit index 0.96, standardized root mean square residual 0.05, root mean square error of approximation 0.05. The correlation coefficient with SF-36 is 0.55. Of the employees surveyed, 42.77% claim that their health problems do not influence their productivity, 55.72% claim that their productivity is reduced to 50%-90% because of their health problems, and another 1.51% claim that their productivity is reduced by more than 50%. 84.5% of the interviewees claim that they have never been absent from work because of health problems, 15.3% claim that their total hours of absence are between 0 and 100, and only 0.2% claim more than 100 hours of absence. The developed presenteeism questionnaire shows good reliability and validity, and so can be used to measure the presenteeism of skilled workers at high-technology enterprises.
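The dimension reliabilities quoted above are Cronbach's alpha coefficients; the standard computation on an item-response matrix (rows = respondents, columns = items) can be sketched as follows, with synthetic data rather than the survey's:

```python
import numpy as np

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
# High alpha means the items covary strongly, i.e. measure one construct.
def cronbach_alpha(X):
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)      # variance of respondent totals
    return k / (k - 1) * (1.0 - item_var / total_var)

# Synthetic scale: 4 items sharing one latent factor plus noise.
rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 1))
X = latent + 0.5 * rng.normal(size=(100, 4))
alpha = cronbach_alpha(X)
```

The split-half Spearman-Brown coefficients reported alongside alpha in the abstract are a related reliability estimate computed from the correlation between two halves of each scale.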
NASA Technical Reports Server (NTRS)
Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.
1992-01-01
The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Version C is a three-dimensional numerical electromagnetic scattering code based on the Finite Difference Time Domain (FDTD) technique. The supplied version of the code is one version of our current three-dimensional FDTD code set. The manual given here provides a description of the code and corresponding results for several scattering problems. The manual is organized into 14 sections: introduction, description of the FDTD method, operation, resource requirements, Version C code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file (COMMONC.FOR), a section briefly discussing radar cross section computations, a section discussing some scattering results, a new problem checklist, references, and figure titles.
Parallel processing a three-dimensional free-lagrange code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandell, D.A.; Trease, H.E.
1989-01-01
A three-dimensional, time-dependent free-Lagrange hydrodynamics code has been multitasked and autotasked on a CRAY X-MP/416. The multitasking was done by using the Los Alamos Multitasking Control Library, which is a superset of the CRAY multitasking library. Autotasking is done by using constructs which are only comment cards if the source code is not run through a preprocessor. The three-dimensional algorithm has presented a number of problems that simpler algorithms, such as those for one-dimensional hydrodynamics, did not exhibit. Problems in converting the serial code, originally written for a CRAY-1, to a multitasking code are discussed. Autotasking of a rewritten version of the code is discussed. Timing results for subroutines and hot spots in the serial code are presented, and suggestions for additional tools and debugging aids are given. Theoretical speedup results obtained from Amdahl's law and actual speedup results obtained on a dedicated machine are presented. Suggestions for designing large parallel codes are given.
A Two-Dimensional Linear Bicharacteristic Scheme for Electromagnetics
NASA Technical Reports Server (NTRS)
Beggs, John H.
2002-01-01
The upwind leapfrog or Linear Bicharacteristic Scheme (LBS) has previously been implemented and demonstrated on one-dimensional electromagnetic wave propagation problems. This memorandum extends the Linear Bicharacteristic Scheme for computational electromagnetics to model lossy dielectric and magnetic materials and perfect electrical conductors in two dimensions. This is accomplished by proper implementation of the LBS for homogeneous lossy dielectric and magnetic media and for perfect electrical conductors. Both the Transverse Electric and Transverse Magnetic polarizations are considered. Computational requirements and a Fourier analysis are also discussed. Heterogeneous media are modeled through implementation of surface boundary conditions and no special extrapolations or interpolations at dielectric material boundaries are required. Results are presented for two-dimensional model problems on uniform grids, and the Finite Difference Time Domain (FDTD) algorithm is chosen as a convenient reference algorithm for comparison. The results demonstrate that the two-dimensional explicit LBS is a dissipation-free, second-order accurate algorithm which uses a smaller stencil than the FDTD algorithm, yet it has less phase velocity error.
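For reference, the FDTD baseline mentioned above can be sketched in one dimension; this minimal Yee update runs in normalized units at the "magic" time step c*dt = dx, where the 1-D scheme is dispersion-free. The grid size and Gaussian pulse are illustrative choices, not values from the memorandum:

```python
import numpy as np

# Minimal 1-D FDTD (Yee) sketch, normalized units, c*dt = dx.
# E lives on integer nodes, H on half-integer nodes staggered half a step
# in time; initializing Hy = -f(i+1) launches a purely right-moving wave,
# which the scheme then propagates exactly one cell per step.
nx, nt = 200, 100
i = np.arange(nx)
f = lambda s: np.exp(-((s - 50.0) / 8.0) ** 2)   # Gaussian pulse, center 50
Ez = f(i.astype(float))
Hy = -f(i[:-1] + 1.0)

for _ in range(nt):
    Hy += Ez[1:] - Ez[:-1]        # update H from the spatial difference of E
    Ez[1:-1] += Hy[1:] - Hy[:-1]  # update E from the spatial difference of H
# After nt steps the pulse peak sits exactly nt cells to the right.
```

On multidimensional grids FDTD cannot run at this dispersion-free point, which is where the LBS's smaller stencil and lower phase-velocity error become relevant.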
An incompressible two-dimensional multiphase particle-in-cell model for dense particle flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snider, D.M.; O`Rourke, P.J.; Andrews, M.J.
1997-06-01
A two-dimensional, incompressible, multiphase particle-in-cell (MP-PIC) method is presented for dense particle flows. The numerical technique solves the governing equations of the fluid phase using a continuum model and those of the particle phase using a Lagrangian model. Difficulties associated with calculating interparticle interactions for dense particle flows with volume fractions above 5% have been eliminated by mapping particle properties to an Eulerian grid and then mapping the computed stress tensors back to particle positions. This approach combines the best of Eulerian/Eulerian continuum models and Eulerian/Lagrangian discrete models. The solution scheme allows for distributions of particle types, sizes, and densities, with no numerical diffusion from the Lagrangian particle calculations. The computational method is implicit with respect to pressure, velocity, and volume fraction in the continuum solution, thus avoiding Courant limits on computational time advancement. MP-PIC simulations are compared with one-dimensional problems that have analytical solutions and with two-dimensional problems for which there are experimental data.
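The particle-to-grid mapping step described above can be sketched in one dimension with linear (cloud-in-cell) weights; the grid and particle values below are illustrative stand-ins, not data from the paper:

```python
import numpy as np

# Linear (cloud-in-cell) deposition: each particle shares its mass between
# the two nearest grid nodes in proportion to its distance from them.
# np.add.at is used because plain fancy-index += drops repeated indices.
def deposit(xp, mp, dx, nx):
    """Deposit particle masses mp at positions xp onto nx cells of width dx."""
    rho = np.zeros(nx)
    j = np.floor(xp / dx).astype(int)                 # left node index
    w = xp / dx - j                                   # fraction toward right node
    np.add.at(rho, j, (1.0 - w) * mp)                 # share to left node
    np.add.at(rho, np.minimum(j + 1, nx - 1), w * mp) # share to right node
    return rho / dx                                   # mass per unit length

xp = np.array([0.25, 1.5, 1.75])   # particle positions
mp = np.array([1.0, 1.0, 2.0])     # particle masses
rho = deposit(xp, mp, dx=1.0, nx=4)
```

Interpolating grid quantities (e.g. the interparticle stress gradient) back to particle positions with the same weights, as MP-PIC does, keeps the mapping pair momentum-consistent.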
NASA Astrophysics Data System (ADS)
Besse, Nicolas; Coulette, David
2016-08-01
Achieving plasmas with good stability and confinement properties is a key research goal for magnetic fusion devices. The underlying equations are the Vlasov-Poisson and Vlasov-Maxwell (VPM) equations in three space variables, three velocity variables, and one time variable. Even in those somewhat academic cases where global equilibrium solutions are known, studying their stability requires the analysis of the spectral properties of the linearized operator, a daunting task. We have identified a model for which not only can equilibrium solutions be constructed, but many of their stability properties are amenable to rigorous analysis. It uses a class of solutions to the VPM equations (or to their gyrokinetic approximations) known as waterbag solutions, which, in particular, are piecewise constant in phase space. It also uses not only the gyrokinetic approximation of fast cyclotronic motion around magnetic field lines, but also an asymptotic approximation regarding the magnetic-field-induced anisotropy: the spatial variation along the field lines is taken to be much slower than across them. Together, these assumptions result in a drastic reduction in the dimensionality of the linearized problem, which becomes a set of two nested one-dimensional problems: an integral equation in the poloidal variable, followed by a one-dimensional complex Schrödinger equation in the radial variable. We show here that the operator associated with the poloidal variable is meromorphic in the eigenparameter, the pulsation frequency. We also prove that, for all but a countable set of real pulsation frequencies, the operator is compact and thus behaves mostly as a finite-dimensional one. The numerical algorithms based on such ideas have been implemented in a companion paper [D. Coulette and N.
Besse, "Numerical resolution of the global eigenvalue problem for gyrokinetic-waterbag model in toroidal geometry" (submitted)] and were found to be surprisingly close to those for the original gyrokinetic-Vlasov equations. The purpose of the present paper is to make these new ideas accessible to two readerships: applied mathematicians and plasma physicists.
An interactive parallel programming environment applied in atmospheric science
NASA Technical Reports Server (NTRS)
vonLaszewski, G.
1996-01-01
This article introduces an interactive parallel programming environment (IPPE) that simplifies the generation and execution of parallel programs. One of the tasks of the environment is to generate message-passing parallel programs for homogeneous and heterogeneous computing platforms. The parallel programs are represented by using visual objects. This is accomplished with the help of a graphical programming editor that is implemented in Java and enables portability to a wide variety of computer platforms. In contrast to other graphical programming systems, reusable parts of the programs can be stored in a program library to support rapid prototyping. In addition, runtime performance data on different computing platforms is collected in a database. A selection process determines dynamically the software and the hardware platform to be used to solve the problem in minimal wall-clock time. The environment is currently being tested on a Grand Challenge problem, the NASA four-dimensional data assimilation system.
EFFECTS OF LASER RADIATION ON MATTER: Maximum depth of keyhole melting of metals by a laser beam
NASA Astrophysics Data System (ADS)
Pinsker, V. A.; Cherepanov, G. P.
1990-11-01
A calculation is reported of the maximum depth and diameter of a narrow crater formed in a stationary metal target exposed to high-power cw CO2 laser radiation. The energy needed for erosion of a unit volume is assumed to be constant and the energy losses experienced by the beam in the vapor-gas channel are ignored. The heat losses in the metal are allowed for by an analytic solution of the three-dimensional boundary-value heat-conduction problem of the temperature field in the vicinity of a thin but long crater with a constant temperature on its surface. An approximate solution of this problem by a method proposed earlier by one of the present authors was tested on a computer. The dimensions of the thin crater were found to be very different from those obtained earlier subject to a less rigorous allowance for the heat losses.
Approximation theory for LQG (Linear-Quadratic-Gaussian) optimal control of flexible structures
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Adamian, A.
1988-01-01
An approximation theory is presented for the LQG (Linear-Quadratic-Gaussian) optimal control problem for flexible structures whose distributed models have bounded input and output operators. The main purpose of the theory is to guide the design of finite dimensional compensators that approximate closely the optimal compensator. The optimal LQG problem separates into an optimal linear-quadratic regulator problem and an optimal state estimation problem. The solution of the former problem lies in the solution to an infinite dimensional Riccati operator equation. The approximation scheme approximates the infinite dimensional LQG problem with a sequence of finite dimensional LQG problems defined for a sequence of finite dimensional, usually finite element or modal, approximations of the distributed model of the structure. Two Riccati matrix equations determine the solution to each approximating problem. The finite dimensional equations for numerical approximation are developed, including formulas for converting matrix control and estimator gains to their functional representation to allow comparison of gains based on different orders of approximation. Convergence of the approximating control and estimator gains and of the corresponding finite dimensional compensators is studied. Also, convergence and stability of the closed-loop systems produced with the finite dimensional compensators are discussed. The convergence theory is based on the convergence of the solutions of the finite dimensional Riccati equations to the solutions of the infinite dimensional Riccati equations. A numerical example with a flexible beam, a rotating rigid body, and a lumped mass is given.
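The finite-dimensional approximating problems reduce to matrix Riccati equations; a minimal sketch for a single vibration mode is shown below, solving the algebraic Riccati equation of the regulator half of the LQG problem by the standard Hamiltonian eigenvector method. The mode frequency and the weighting matrices are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Stabilizing solution P of A'P + PA - P B R^{-1} B' P + Q = 0, obtained
# from the stable invariant subspace of the Hamiltonian matrix.
def solve_are(A, B, Q, R):
    n = A.shape[0]
    G = B @ np.linalg.solve(R, B.T)
    H = np.block([[A, -G], [-Q, -A.T]])    # 2n x 2n Hamiltonian matrix
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]              # the n eigenvectors with Re < 0
    X, Y = stable[:n, :], stable[n:, :]
    return (Y @ np.linalg.inv(X)).real     # P = Y X^{-1}

omega = 2.0
A = np.array([[0.0, 1.0], [-omega ** 2, 0.0]])  # one undamped flexible mode
B = np.array([[0.0], [1.0]])                    # force input
Q = np.eye(2)                                   # state weighting
R = np.array([[1.0]])                           # control weighting
P = solve_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)                 # optimal feedback gain
```

In the approximation theory above, a sequence of such matrix problems (one per finite element or modal truncation order) produces gains that converge to the infinite-dimensional optimal compensator.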
NASA Astrophysics Data System (ADS)
Qu, Aifang; Xiang, Wei
2018-05-01
In this paper, we study the stability of the three-dimensional jet created by a supersonic flow past a concave cornered wedge, with lower pressure downstream. The gas beyond the jet boundary is assumed to be static. The problem can be formulated as a nonlinear hyperbolic free boundary problem in a cornered domain with two characteristic free boundaries of different types: one is the rarefaction wave, while the other is the contact discontinuity, which can be either a vortex sheet or an entropy wave. A delicate argument is developed to establish the existence and stability of the square jet structure under perturbations of the supersonic incoming flow and of the downstream pressure. The methods and techniques developed here are also helpful for other problems involving similar difficulties.
Plane Poiseuille Flow of a Rarefied Gas in the Presence of a Strong Gravitation
NASA Astrophysics Data System (ADS)
Doi, Toshiyuki
2010-11-01
Poiseuille flow of a rarefied gas between two horizontal planes in the presence of a strong gravitation is considered, where the gravity is so strong that the path of a molecule is curved considerably as it ascends or descends over the distance between the planes. The gas behavior is studied on the basis of the Boltzmann equation. An asymptotic analysis for a slow variation in the longitudinal direction is carried out, and the problem is reduced to a spatially one-dimensional problem, as was done for the Poiseuille flow problem in the absence of gravitation. The mass flow rate, as well as the macroscopic variables, is obtained for a wide range of the mean free path of the gas and of the gravity. A numerical analysis of a two-dimensional problem is also carried out, and the result of the asymptotic analysis is verified.
High-Fidelity Real-Time Simulation on Deployed Platforms
2010-08-26
We illustrate our approach with three examples: a two-dimensional Helmholtz acoustics "horn" problem; a three-dimensional transient heat conduction "Swiss Cheese" problem (a transient linear heat conduction problem in a three-dimensional "Swiss Cheese" configuration Ω, illustrating treatment of many ...); and a three-dimensional unsteady incompressible Navier-Stokes low-Reynolds-number problem.
Solving quantum optimal control problems using Clebsch variables and Lin constraints
NASA Astrophysics Data System (ADS)
Delgado-Téllez, M.; Ibort, A.; Rodríguez de la Peña, T.
2018-01-01
Clebsch variables (and Lin constraints) are applied to the study of a class of optimal control problems for affine-controlled quantum systems. The optimal control problem is modelled with controls defined on an auxiliary space on which the dynamical group of the system acts freely. The reciprocity between the two theories, the classical theory defined by the objective functional and the quantum system, is established by using a suitable version of Lagrange's multiplier theorem and a geometrical interpretation of the constraints of the system as defining a subspace of horizontal curves in an associated bundle. It is shown how the solutions of the variational problem defined by the objective functional determine solutions of the quantum problem. This yields a new way of obtaining explicit solutions for a family of optimal control problems for affine-controlled quantum systems (finite or infinite dimensional). One of its main advantages is that the use of Clebsch variables allows such solutions to be computed from solutions of invariant problems that can often be computed explicitly. This procedure can be presented as an algorithm applicable to a large class of systems. Finally, some simple examples illustrating the main features of the theory are discussed: spin control, a simple quantum Hamiltonian with an 'Elroy beanie' type classical model, and a controlled one-dimensional quantum harmonic oscillator.
A three-dimensional Dirichlet-to-Neumann operator for water waves over topography
NASA Astrophysics Data System (ADS)
Andrade, D.; Nachbin, A.
2018-06-01
Surface water waves are considered propagating over highly variable, non-smooth topographies. For this three-dimensional problem a Dirichlet-to-Neumann (DtN) operator is constructed, reducing the numerical modeling and evolution to the two-dimensional free surface. The corresponding Fourier-type operator is defined through a matrix decomposition. The topographic component of the decomposition requires special care, and a Galerkin method is provided accordingly. One-dimensional numerical simulations along the free surface validate the DtN formulation in the presence of a large-amplitude, rapidly varying topography. An alternative method based on conformal mapping is used for benchmarking. A two-dimensional simulation in the presence of a Luneburg lens (a particular submerged mound) illustrates the accurate performance of the three-dimensional DtN operator.
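In the flat-bottom special case the water-wave DtN operator is diagonal in Fourier space, which provides a compact sanity check for any variable-topography implementation such as the one above; the domain, depth, and test mode below are illustrative:

```python
import numpy as np

# Flat-bottom DtN operator for depth h: given the surface potential phi,
# the normal surface velocity is F^{-1}[ |k| tanh(|k| h) F[phi] ].
def dtn_flat(phi, L, h):
    n = phi.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    symbol = np.abs(k) * np.tanh(np.abs(k) * h)    # DtN Fourier symbol
    return np.fft.ifft(symbol * np.fft.fft(phi)).real

L, h, n = 2.0 * np.pi, 1.0, 128
x = np.linspace(0.0, L, n, endpoint=False)
phi = np.cos(3.0 * x)               # single Fourier mode, k = 3
v = dtn_flat(phi, L, h)
# For a single mode the operator is just multiplication by 3*tanh(3*h).
```

The paper's contribution is the non-diagonal correction to this symbol produced by the variable topography, handled there through a matrix decomposition and a Galerkin treatment of the topographic part.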
NASA Astrophysics Data System (ADS)
Koloch, Grzegorz; Kaminski, Bogumil
2010-10-01
In the paper we examine a modification of the classical Vehicle Routing Problem (VRP) in which the shapes of the transported cargo are accounted for. This problem, known as the three-dimensional VRP with loading constraints (3D-VRP), is appropriate when transported commodities are not perfectly divisible but have fixed and heterogeneous dimensions. Restrictions on allowable cargo positionings are also considered. These restrictions are derived from business practice, and they extend the baseline 3D-VRP formulation considered by Koloch and Kaminski (2010). In particular, we investigate how the additional restrictions influence the relative performance of two proposed optimization algorithms: the nested and the joint one. The performance of both methods is compared on artificial problems and on a large-scale real-life case study.
An equivalent domain integral for analysis of two-dimensional mixed mode problems
NASA Technical Reports Server (NTRS)
Raju, I. S.; Shivakumar, K. N.
1989-01-01
An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies subjected to mixed mode loading is presented. The total and product integrals consist of the sum of an area or domain integral and line integrals on the crack faces. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented. The procedure that uses the symmetric and antisymmetric components of the stress and displacement fields to calculate the individual modes gave accurate values of the integrals for all the problems analyzed.
Decimated Input Ensembles for Improved Generalization
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Oza, Nikunj C.; Norvig, Peter (Technical Monitor)
1999-01-01
Recently, many researchers have demonstrated that using classifier ensembles (e.g., averaging the outputs of multiple classifiers before reaching a classification decision) leads to improved performance for many difficult generalization problems. However, in many domains there are serious impediments to such "turnkey" classification accuracy improvements. Most notable among these is the deleterious effect of highly correlated classifiers on ensemble performance. One particular solution to this problem is generating "new" training sets by sampling the original one. However, with a finite number of patterns, this reduces the number of training patterns each classifier sees, often resulting in considerably worsened generalization performance for each individual classifier (particularly in high-dimensional data domains). Generally, this drop in individual classifier accuracy more than offsets any potential gains from combining, unless diversity among classifiers is actively promoted. In this work, we introduce a method that: (1) reduces the correlation among the classifiers; (2) reduces the dimensionality of the data, thus lessening the impact of the 'curse of dimensionality'; and (3) improves the classification performance of the ensemble.
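A minimal sketch of the input-decimation idea follows, with a nearest-centroid base learner on synthetic data; both are illustrative stand-ins, not the paper's classifiers or datasets. Each ensemble member sees only a random subset of the features, which simultaneously decorrelates the members and reduces the dimensionality each one must handle:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: only the first 3 of 10 features are informative.
def make_data(n):
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 10))
    X[:, :3] += 2.0 * y[:, None]
    return X, y

def centroid_fit(X, y):
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def centroid_predict(C, X):
    d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

Xtr, ytr = make_data(400)
Xte, yte = make_data(200)

votes = np.zeros((len(yte), 2))
for _ in range(15):                           # 15 decimated ensemble members
    feats = rng.choice(10, size=4, replace=False)   # each sees 4 of 10 features
    C = centroid_fit(Xtr[:, feats], ytr)
    pred = centroid_predict(C, Xte[:, feats])
    votes[np.arange(len(yte)), pred] += 1.0
ensemble_pred = votes.argmax(axis=1)
accuracy = (ensemble_pred == yte).mean()
```

Random subsets are a crude stand-in for the informed feature selection the paper advocates, but they already exhibit the decorrelation effect.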
NASA Technical Reports Server (NTRS)
Rebstock, Rainer
1987-01-01
Numerical methods are developed for control of three dimensional adaptive test sections. The physical properties of the design problem occurring in the external field computation are analyzed, and a design procedure suited for solution of the problem is worked out. To do this, the desired wall shape is determined by stepwise modification of an initial contour. The necessary changes in geometry are determined with the aid of a panel procedure, or, with incident flow near the sonic range, with a transonic small perturbation (TSP) procedure. The designed wall shape, together with the wall deflections set during the tunnel run, are the input to a newly derived one-step formula which immediately yields the adapted wall contour. This is particularly important since the classical iterative adaptation scheme is shown to converge poorly for 3D flows. Experimental results obtained in the adaptive test section with eight flexible walls are presented to demonstrate the potential of the procedure. Finally, a method is described to minimize wall interference in 3D flows by adapting only the top and bottom wind tunnel walls.
Artificial viscosity in Godunov-type schemes to cure the carbuncle phenomenon
NASA Astrophysics Data System (ADS)
Rodionov, Alexander V.
2017-09-01
This work presents a new approach for curing the carbuncle instability. The idea underlying the approach is to introduce some dissipation in the form of right-hand sides of the Navier-Stokes equations into the basic method of solving Euler equations; in so doing, we replace the molecular viscosity coefficient by the artificial viscosity coefficient and calculate heat conductivity assuming that the Prandtl number is constant. For the artificial viscosity coefficient we have chosen a formula that is consistent with the von Neumann and Richtmyer artificial viscosity, but has its specific features (extension to multidimensional simulations, introduction of a threshold compression intensity that restricts additional dissipation to the shock layer only). The coefficients and the expression for the characteristic mesh size in this formula are chosen from a large number of Quirk-type problem computations. The new cure for the carbuncle flaw has been tested on first-order schemes (Godunov, Roe, HLLC and AUSM+ schemes) as applied to one- and two-dimensional simulations on smooth structured grids. Its efficiency has been demonstrated on several well-known test problems.
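The shape of such an artificial viscosity term, dissipation proportional to the squared velocity jump that is switched on only in compressing zones, can be illustrated with a minimal one-dimensional sketch; the coefficient and threshold are placeholders, not the calibrated values from the paper:

```python
import numpy as np

def artificial_viscosity(rho, u, c_q=2.0, threshold=0.0):
    """Von Neumann-Richtmyer-type artificial viscosity on a 1D grid.

    Adds q_i = c_q * rho_i * (du_i)**2 only in zones whose velocity jump
    du_i indicates compression stronger than `threshold`, so the extra
    dissipation stays confined to the shock layer.
    """
    du = np.diff(u)                    # velocity jump across each zone
    compressing = du < -threshold      # expansions get no artificial viscosity
    return np.where(compressing, c_q * rho * du**2, 0.0)

# Shock-like profile: compression in zone 1, expansion in zone 3.
u = np.array([1.0, 1.0, 0.2, 0.2, 0.5])
rho = np.ones(4)
q = artificial_viscosity(rho, u)
```

Only the compressive jump receives dissipation; uniform and expanding regions are untouched.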
NASA Technical Reports Server (NTRS)
Plumb, R. A.
1985-01-01
Two dimensional modeling has become an established technique for the simulation of the global structure of trace constituents. Such models are simpler to formulate and cheaper to operate than three dimensional general circulation models, while avoiding some of the gross simplifications of one dimensional models. Nevertheless, the parameterization of eddy fluxes required in a 2-D model is not a trivial problem. This fact has apparently led some to interpret the shortcomings of existing 2-D models as indicating that the parameterization procedure is wrong in principle. There are grounds to believe that these shortcomings result primarily from incorrect implementations of the predictions of eddy transport theory and that a properly based parameterization may provide a good basis for atmospheric modeling. The existence of these GCM-derived coefficients affords an unprecedented opportunity to test the validity of the flux-gradient parameterization. To this end, a zonally averaged (2-D) model was developed, using these coefficients in the transport parameterization. Results from this model for a number of contrived tracer experiments were compared with the parent GCM. The generally good agreement substantially validates the flus-gradient parameterization, and thus the basic principle of 2-D modeling.
Fast generation of Fresnel holograms based on multirate filtering.
Tsang, Peter; Liu, Jung-Ping; Cheung, Wai-Keung; Poon, Ting-Chung
2009-12-01
One of the major problems in computer-generated holography is the high computation cost involved for the calculation of fringe patterns. Recently, the problem has been addressed by imposing a horizontal parallax only constraint whereby the process can be simplified to the computation of one-dimensional sublines, each representing a scan plane of the object scene. Subsequently the sublines can be expanded to a two-dimensional hologram through multiplication with a reference signal. Furthermore, economical hardware is available with which sublines can be generated in a computationally free manner with high throughput of approximately 100 M pixels/second. Apart from decreasing the computation loading, the sublines can be treated as intermediate data that can be compressed by simply downsampling the number of sublines. Despite these favorable features, the method is suitable only for the generation of white light (rainbow) holograms, and the resolution of the reconstructed image is inferior to the classical Fresnel hologram. We propose to generate holograms from one-dimensional sublines so that the above-mentioned problems can be alleviated. However, such an approach also leads to a substantial increase in computation loading. To overcome this problem we encapsulated the conversion of sublines to holograms as a multirate filtering process and implemented the latter by use of a fast Fourier transform. Evaluation reveals that, for holograms of moderate size, our method is capable of operating 40,000 times faster than the calculation of Fresnel holograms based on the precomputed table lookup method. Although there is no relative vertical parallax between object points at different distance planes, a global vertical parallax is preserved for the object scene as a whole and the reconstructed image can be observed easily.
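The payoff of casting the subline-to-hologram conversion as a filtering process is that long convolutions can be evaluated with the fast Fourier transform. A generic sketch of FFT-based linear convolution (illustrative signals, not hologram data):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=256)   # stand-in for a one-dimensional subline
h = rng.normal(size=64)    # stand-in for the conversion filter

# Direct linear convolution costs O(N*M) operations.
direct = np.convolve(x, h)

# FFT-based convolution: zero-pad both sequences to the full linear
# convolution length, multiply the spectra, and transform back.
# This costs O(N log N), which is where the large speedups come from.
L = len(x) + len(h) - 1
fast = np.fft.irfft(np.fft.rfft(x, L) * np.fft.rfft(h, L), L)

assert np.allclose(direct, fast)
```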
Feynman Path Integral Approach to Electron Diffraction for One and Two Slits: Analytical Results
ERIC Educational Resources Information Center
Beau, Mathieu
2012-01-01
In this paper we present an analytic solution of the famous problem of diffraction and interference of electrons through one and two slits (for simplicity, only the one-dimensional case is considered). In addition to exact formulae, various approximations of the electron distribution are shown which facilitate the interpretation of the results.…
Restricted random search method based on taboo search in the multiple minima problem
NASA Astrophysics Data System (ADS)
Hong, Seung Do; Jhon, Mu Shik
1997-03-01
The restricted random search method is proposed as a simple Monte Carlo sampling method for rapidly locating minima in the multiple-minima problem. The method is based on taboo search, recently applied to continuous test functions. The concept of a taboo region, rather than a taboo list, is used, so that sampling in a region near an old configuration is restricted. The method is applied to two-dimensional test functions and to argon clusters, and is found to be a practical and efficient way to locate near-global configurations of both.
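A minimal sketch of the idea, rejecting candidates that fall inside the taboo region of recently visited configurations while tracking the best value found, on an illustrative 2-D test function; the function, radius, and sample budget are placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)

def f(p):
    """2D test function with many local minima (illustrative only)."""
    x, y = p
    return (x**2 + y**2) / 20.0 + np.sin(3 * x) * np.sin(3 * y)

taboo_radius = 0.3       # sampling near stored configurations is restricted
visited = []
best_p, best_f = None, np.inf

for _ in range(3000):
    p = rng.uniform(-5, 5, size=2)
    # Reject candidates inside the taboo region of recent configurations.
    if any(np.linalg.norm(p - q) < taboo_radius for q in visited[-50:]):
        continue
    visited.append(p)
    fp = f(p)
    if fp < best_f:
        best_p, best_f = p, fp
```

Restricting revisits spreads the samples over the domain, so deep near-global minima (values close to -1 here) are found with a modest budget.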
A Conference on Three-Dimensional Representation held in University of Minnesota on 24-26 May 1989
NASA Astrophysics Data System (ADS)
Biederman, Irving
1989-06-01
This is the final report for a conference grant entitled: A conference on Three-Dimensional Representation. The two and one-half day conference was held at the University of Minn. on May 24 to 26, 1989 to evaluate the current status of problem associated with three-dimensional representations from current computational, psychological, development, and neurophysiological perspectives. Nineteen presentations were made spanning these approaches. One hundred sixty-six individuals attended the conference. Of 44 evaluations received, 75 percent rated the conference as excellent, 20 percent as good, and 5 percent as fair. None rated it poor. The report consists of the original and revised program, conference abstracts evaluation summary and the rooster of attendees.
Theoretical Studies of Magnetic Systems. Final Report, August 1, 1994 - November 30, 1997
DOE R&D Accomplishments Database
Gor`kov, L. P.; Novotny, M. A.; Schrieffer, J. R.
1997-01-01
During the grant period the authors have studied five areas of research: (1) low dimensional ferrimagnets; (2) lattice effects in the mixed valence problem; (3) spin compensation in the one dimensional Kondo lattice; (4) the interaction of quasi particles in short coherence length superconductors; and (5) novel effects in angle resolved photoemission spectra from nearly antiferromagnetic materials. Progress in each area is summarized.
Model parameter learning using Kullback-Leibler divergence
NASA Astrophysics Data System (ADS)
Lin, Chungwei; Marks, Tim K.; Pajovic, Milutin; Watanabe, Shinji; Tung, Chih-kuan
2018-02-01
In this paper, we address the following problem: For a given set of spin configurations whose probability distribution is of the Boltzmann type, how do we determine the model coupling parameters? We demonstrate that directly minimizing the Kullback-Leibler divergence is an efficient method. We test this method against the Ising and XY models on the one-dimensional (1D) and two-dimensional (2D) lattices, and provide two estimators to quantify the model quality. We apply this method to two types of problems. First, we apply it to the real-space renormalization group (RG). We find that the obtained RG flow is sufficiently good for determining the phase boundary (within 1% of the exact result) and the critical point, but not accurate enough for critical exponents. The proposed method provides a simple way to numerically estimate amplitudes of the interactions typically truncated in the real-space RG procedure. Second, we apply this method to the dynamical system composed of self-propelled particles, where we extract the parameter of a statistical model (a generalized XY model) from a dynamical system described by the Vicsek model. We are able to obtain reasonable coupling values corresponding to different noise strengths of the Vicsek model. Our method is thus able to provide quantitative analysis of dynamical systems composed of self-propelled particles.
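For the Ising case, minimizing the KL divergence reduces to moment matching: the gradient of the divergence with respect to the coupling is the difference between the data's and the model's bond-energy expectations. A minimal sketch on a short 1D chain, small enough for exact enumeration; the chain length, learning rate, and noise-free "data" distribution are illustrative choices:

```python
import numpy as np
from itertools import product

N, J_TRUE = 6, 0.7   # short open Ising chain, exactly enumerable (2**6 states)

states = np.array(list(product([-1, 1], repeat=N)))
bonds = (states[:, :-1] * states[:, 1:]).sum(axis=1)   # sum_i s_i s_{i+1}

def boltzmann(J):
    w = np.exp(J * bonds)
    return w / w.sum()

# "Data" distribution: the exact Boltzmann weights at the true coupling
# (a noise-free stand-in for an empirical spin-configuration histogram).
p_data = boltzmann(J_TRUE)

# Gradient descent on KL(p_data || p_J): the update pushes the model's
# bond-energy expectation toward the data's expectation.
J = 0.0
for _ in range(200):
    J += 0.1 * ((p_data * bonds).sum() - (boltzmann(J) * bonds).sum())
```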
Asteroid mass estimation using Markov-Chain Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Siltala, Lauri; Granvik, Mikael
2016-10-01
Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to a 13-dimensional inverse problem where the aim is to derive the mass of the perturbing asteroid and six orbital elements for both the perturbing asteroid and the test asteroid using astrometric observations. We have developed and implemented three different mass estimation algorithms utilizing asteroid-asteroid perturbations into the OpenOrb asteroid-orbit-computation software: the very rough 'marching' approximation, in which the asteroid orbits are fixed at a given epoch, reducing the problem to a one-dimensional estimation of the mass, an implementation of the Nelder-Mead simplex method, and most significantly, a Markov-Chain Monte Carlo (MCMC) approach. We will introduce each of these algorithms with particular focus on the MCMC algorithm, and present example results for both synthetic and real data. Our results agree with the published mass estimates, but suggest that the published uncertainties may be misleading as a consequence of using linearized mass-estimation methods. Finally, we discuss remaining challenges with the algorithms as well as future plans, particularly in connection with ESA's Gaia mission.
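The MCMC machinery can be illustrated with a toy one-parameter analogue of the mass-estimation problem: random-walk Metropolis sampling of the posterior of a single parameter given noisy observations. Everything here, the data model included, is a stand-in for the actual 13-dimensional orbit problem:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy one-parameter analogue: infer a single "mass" from noisy observations.
true_m = 4.2
obs = true_m + rng.normal(0.0, 0.5, size=50)

def log_post(m):
    """Log posterior: flat prior, Gaussian likelihood with known sigma."""
    return -0.5 * np.sum((obs - m) ** 2) / 0.5**2

# Random-walk Metropolis: propose a nearby value, accept with probability
# min(1, posterior ratio); otherwise repeat the current state.
chain = [0.0]
for _ in range(5000):
    prop = chain[-1] + rng.normal(0.0, 0.2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(chain[-1]):
        chain.append(prop)
    else:
        chain.append(chain[-1])

samples = np.array(chain[1000:])   # discard burn-in
```

Unlike a point estimator, the chain's spread directly quantifies the (possibly non-Gaussian) posterior uncertainty.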
Random analysis of bearing capacity of square footing using the LAS procedure
NASA Astrophysics Data System (ADS)
Kawa, Marek; Puła, Wojciech; Suska, Michał
2016-09-01
In the present paper, a three-dimensional problem of the bearing capacity of a square footing on a random soil medium is analyzed. The random fields of the strength parameters c and φ are generated using the LAS procedure (Local Average Subdivision, Fenton and Vanmarcke 1990). The procedure has been re-implemented by the authors in the Mathematica environment in order to combine it with a commercial program. Since the procedure is still being tested, the random field has been assumed to be one-dimensional: the strength properties of the soil are random in the vertical direction only. Individual realizations of the bearing-capacity boundary problem, with the strength parameters of the medium defined by the above procedure, are solved using FLAC3D software. The analysis is performed for two qualitatively different cases, namely for purely cohesive and cohesive-frictional soils. For the latter case the friction angle and cohesion have been assumed to be independent random variables. For these two cases the random square-footing bearing-capacity results have been obtained for fluctuation scales ranging from 0.5 m to 10 m. Each time, 1000 Monte Carlo realizations have been performed. The obtained results allow not only the mean and variance but also the probability density function to be estimated. An example of the application of this function to reliability calculations is presented in the final part of the paper.
NASA Technical Reports Server (NTRS)
Farhat, C.; Park, K. C.; Dubois-Pelerin, Y.
1991-01-01
An unconditionally stable second order accurate implicit-implicit staggered procedure for the finite element solution of fully coupled thermoelasticity transient problems is proposed. The procedure is stabilized with a semi-algebraic augmentation technique. A comparative cost analysis reveals the superiority of the proposed computational strategy to other conventional staggered procedures. Numerical examples of one and two-dimensional thermomechanical coupled problems demonstrate the accuracy of the proposed numerical solution algorithm.
Genetic demixing and evolution in linear stepping stone models
NASA Astrophysics Data System (ADS)
Korolev, K. S.; Avlund, Mikkel; Hallatschek, Oskar; Nelson, David R.
2010-04-01
Results for mutation, selection, genetic drift, and migration in a one-dimensional continuous population are reviewed and extended. The population is described by a continuous limit of the stepping stone model, which leads to the stochastic Fisher-Kolmogorov-Petrovsky-Piscounov equation with additional terms describing mutations. Although the stepping stone model was first proposed for population genetics, it is closely related to “voter models” of interest in nonequilibrium statistical mechanics. The stepping stone model can also be regarded as an approximation to the dynamics of a thin layer of actively growing pioneers at the frontier of a colony of micro-organisms undergoing a range expansion on a Petri dish. The population tends to segregate into monoallelic domains. This segregation slows down genetic drift and selection because these two evolutionary forces can only act at the boundaries between the domains; the effects of mutation, however, are not significantly affected by the segregation. Although fixation in the neutral well-mixed (or “zero-dimensional”) model occurs exponentially in time, it occurs only algebraically fast in the one-dimensional model. An unusual sublinear increase is also found in the variance of the spatially averaged allele frequency with time. If selection is weak, selective sweeps occur exponentially fast in both well-mixed and one-dimensional populations, but the time constants are different. The relatively unexplored problem of evolutionary dynamics at the edge of an expanding circular colony is studied as well. Also reviewed is how the observed patterns of genetic diversity can be used for statistical inference, and the differences between the well-mixed and one-dimensional models are highlighted. Although the focus is on two alleles or variants, q-allele Potts-like models of gene segregation are considered as well.
Most of the analytical results are checked with simulations and could be tested against recent spatial experiments on range expansions of inoculations of Escherichia coli and Saccharomyces cerevisiae.
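The segregation into monoallelic domains is easy to reproduce with a minimal voter-model simulation of the stepping stone dynamics; lattice size and step count are illustrative, and mutation and selection are omitted:

```python
import numpy as np

rng = np.random.default_rng(8)

L = 200
alleles = rng.integers(0, 2, size=L)   # two alleles, initially well mixed

def n_boundaries(a):
    """Number of domain walls (neighboring sites carrying different alleles)."""
    return int(np.sum(a != np.roll(a, 1)))

b_start = n_boundaries(alleles)
for _ in range(50000):
    i = int(rng.integers(L))
    j = (i + rng.choice([-1, 1])) % L
    alleles[i] = alleles[j]            # copy a random neighbor: genetic drift
b_end = n_boundaries(alleles)
```

The domain walls perform annihilating random walks, so their number decays with time, which is the coarsening that slows drift and selection in the abstract.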
A Solution Adaptive Technique Using Tetrahedral Unstructured Grids
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2000-01-01
An adaptive unstructured grid refinement technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The method is based on a combination of surface mesh subdivision and local remeshing of the volume grid. Simple functions of flow quantities are employed to detect dominant features of the flowfield. The method is designed for modular coupling with various error/feature analyzers and flow solvers. Several steady-state, inviscid flow test cases are presented to demonstrate the applicability of the method for solving practical three-dimensional problems. In all cases, accurate solutions featuring complex, nonlinear flow phenomena such as shock waves and vortices have been generated automatically and efficiently.
NASA Astrophysics Data System (ADS)
Stritzel, J.; Melchert, O.; Wollweber, M.; Roth, B.
2017-09-01
The direct problem of optoacoustic signal generation in biological media consists of solving an inhomogeneous three-dimensional (3D) wave equation for an initial acoustic stress profile. In contrast, the more defiant inverse problem requires the reconstruction of the initial stress profile from a proper set of observed signals. In this article, we consider an effectively 1D approach, based on the assumption of a Gaussian transverse irradiation source profile and plane acoustic waves, in which the effects of acoustic diffraction are described in terms of a linear integral equation. The respective inverse problem along the beam axis can be cast into a Volterra integral equation of the second kind for which we explore here efficient numerical schemes in order to reconstruct initial stress profiles from observed signals, constituting a methodical progress of computational aspects of optoacoustics. In this regard, we explore the validity as well as the limits of the inversion scheme via numerical experiments, with parameters geared toward actual optoacoustic problem instances. The considered inversion input consists of synthetic data, obtained in terms of the effectively 1D approach, and, more generally, a solution of the 3D optoacoustic wave equation. Finally, we also analyze the effect of noise and different detector-to-sample distances on the optoacoustic signal and the reconstructed pressure profiles.
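A standard way to solve a Volterra integral equation of the second kind numerically is to march forward in time, approximating the history integral with a quadrature rule. A minimal sketch using the trapezoidal rule, on a test kernel whose exact solution is y = exp(t) (not the optoacoustic kernel of the paper):

```python
import numpy as np

# Solve y(t) = f(t) + int_0^t K(t, s) y(s) ds by forward marching with the
# trapezoidal rule. With f = 1 and K = 1 the exact solution is y = exp(t).
f = lambda t: 1.0
K = lambda t, s: 1.0

n, T = 200, 1.0
h = T / n
t = np.linspace(0.0, T, n + 1)
y = np.empty(n + 1)
y[0] = f(t[0])

for i in range(1, n + 1):
    # History term by trapezoidal quadrature; the unknown y[i] enters with
    # weight h/2, so it can be solved for explicitly (second-kind equation).
    hist = h * (0.5 * K(t[i], t[0]) * y[0]
                + sum(K(t[i], t[j]) * y[j] for j in range(1, i)))
    y[i] = (f(t[i]) + hist) / (1.0 - 0.5 * h * K(t[i], t[i]))

err = float(np.max(np.abs(y - np.exp(t))))
```

Because the unknown appears outside the integral with coefficient one, no iteration is needed at each step, which is what makes second-kind Volterra equations numerically benign.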
An efficient data structure for a three-dimensional vertex-based finite volume method
NASA Astrophysics Data System (ADS)
Akkurt, Semih; Sahin, Mehmet
2017-11-01
A vertex-based three-dimensional finite volume algorithm has been developed using an edge-based data structure. The mesh data structure of the given algorithm is similar to ones that exist in the literature. However, the data structures are redesigned and simplified in order to fit the requirements of the vertex-based finite volume method. In order to increase cache efficiency, the data access patterns of the vertex-based finite volume method are investigated, and the data are packed/allocated in such a way that they are close to each other in memory. The present data structure is not limited to tetrahedra; arbitrary polyhedra are also supported in the mesh without any additional effort. Furthermore, the present data structure supports adaptive refinement and coarsening. For the implicit and parallel implementation of the FVM algorithm, the PETSc and MPI libraries are employed. The performance and accuracy of the present algorithm are tested on classical benchmark problems by comparing CPU times with open-source algorithms.
The NATA code; theory and analysis. Volume 2: User's manual
NASA Technical Reports Server (NTRS)
Bade, W. L.; Yos, J. M.
1975-01-01
The NATA code is a computer program for calculating quasi-one-dimensional gas flow in axisymmetric nozzles and rectangular channels, primarily to describe conditions in electric arc-heated wind tunnels. The program provides solutions based on frozen chemistry, chemical equilibrium, and nonequilibrium flow with finite reaction rates. The shear and heat flux on the nozzle wall are calculated, and boundary layer displacement effects on the inviscid flow are taken into account. The program contains compiled-in thermochemical, chemical kinetic, and transport cross-section data for high-temperature air, CO2-N2-Ar mixtures, helium, and argon. It calculates stagnation conditions on axisymmetric or two-dimensional models and conditions on the flat surface of a blunt wedge. Included in the report are: definitions of the inputs and outputs; precoded data on gas models, reactions, thermodynamic and transport properties of species, and nozzle geometries; explanations of diagnostic outputs and code abort conditions; test problems; and a user's manual for an auxiliary program (NOZFIT) used to set up analytical curvefits to nozzle profiles.
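At the heart of any quasi-one-dimensional nozzle calculation is the isentropic area-Mach relation. A minimal sketch that inverts A/A* by bisection on either branch; this assumes a perfect gas and is illustrative only, since NATA's actual chemistry models are far richer:

```python
def area_ratio(M, g=1.4):
    """Isentropic quasi-1D area-Mach relation A/A* for a perfect gas."""
    return (1.0 / M) * ((2.0 / (g + 1)) * (1.0 + 0.5 * (g - 1) * M * M)) ** (
        (g + 1) / (2.0 * (g - 1)))

def mach_from_area(ar, g=1.4, supersonic=True):
    """Invert A/A* by bisection; A/A* grows with M only on the supersonic branch."""
    lo, hi = (1.0, 50.0) if supersonic else (1e-6, 1.0)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if (area_ratio(mid, g) < ar) == supersonic:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

M_sup = mach_from_area(2.0)                    # supersonic solution for A/A* = 2
M_sub = mach_from_area(2.0, supersonic=False)  # subsonic solution for A/A* = 2
```

The two roots for a given area ratio are why a branch must be chosen: the flow is subsonic upstream of the throat and supersonic downstream in a choked nozzle.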
NASA Technical Reports Server (NTRS)
Bittker, David A.; Radhakrishnan, Krishnan
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 3 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 3 explains the kinetics and kinetics-plus-sensitivity analysis problems supplied with LSENS and presents sample results. These problems illustrate the various capabilities of, and reaction models that can be solved by, the code and may provide a convenient starting point for the user to construct the problem data file required to execute LSENS. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
NASA Astrophysics Data System (ADS)
Fedorov, Sergey V.; Selivanov, Victor V.; Veldanov, Vladislav A.
2017-06-01
Accumulation of microdamages as a result of intensive plastic deformation leads to a decrease in the average density of the high-velocity elements that are formed at the explosive collapse of the special shape metal liners. For compaction of such elements in tests of their spacecraft meteoroid protection reliability, the use of magnetic-field action on the produced elements during their movement trajectory before interaction with a target is proposed. On the basis of numerical modeling within the one-dimensional axisymmetric problem of continuum mechanics and electrodynamics, the physical processes occurring in the porous conducting elastoplastic cylinder placed in a magnetic field are investigated. Using this model, the parameters of the magnetic-pulse action necessary for the compaction of the steel and aluminum elements are determined.
Deconvolution using a neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehman, S.K.
1990-11-15
Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works.
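The matrix-inversion view of one-dimensional deconvolution can be made concrete in a few lines: build the convolution matrix, then recover the signal with the pseudo-inverse. The signal and kernel are synthetic, and the neural-network and LMS variants compared in the report are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(4)

# Forward model: measurement = H @ signal, where H is the convolution
# matrix of a short blurring kernel (synthetic example).
signal = rng.normal(size=30)
kernel = np.array([0.25, 0.5, 0.25])
H = np.zeros((32, 30))
for i in range(30):
    H[i:i + 3, i] = kernel
measured = H @ signal

# Deconvolution as matrix inversion: the pseudo-inverse gives the
# least-squares solution (exact here, since H has full column rank).
recovered = np.linalg.pinv(H) @ measured
```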
Packing Boxes into Multiple Containers Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Menghani, Deepak; Guha, Anirban
2016-07-01
Container loading problems have been studied extensively in the literature and various analytical, heuristic and metaheuristic methods have been proposed. This paper presents two different variants of a genetic algorithm framework for the three-dimensional container loading problem for optimally loading boxes into multiple containers with constraints. The algorithms are designed so that it is easy to incorporate various constraints found in real life problems. The algorithms are tested on data of standard test cases from literature and are found to compare well with the benchmark algorithms in terms of utilization of containers. This, along with the ability to easily incorporate a wide range of practical constraints, makes them attractive for implementation in real life scenarios.
TPSLVM: a dimensionality reduction algorithm based on thin plate splines.
Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming
2014-10-01
Dimensionality reduction (DR) has been considered as one of the most significant tools for data analysis. One type of DR algorithms is based on latent variable models (LVM). LVM-based models can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), our proposed TPSLVM is more powerful especially when the dimensionality of the latent space is low. Also, TPSLVM is robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM) as well as their combination BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction compared to PCA, GPLVM, ISOMAP, etc.
Low frequency acoustic and electromagnetic scattering
NASA Technical Reports Server (NTRS)
Hariharan, S. I.; Maccamy, R. C.
1986-01-01
This paper deals with two classes of problems arising from acoustic and electromagnetic scattering in the low-frequency situation. The first class of problems involves solving the Helmholtz equation with Dirichlet boundary conditions on an arbitrary two-dimensional body, while the second is an interior-exterior interface problem with the Helmholtz equation in the exterior. Low-frequency analysis shows that there are two intermediate problems which solve the above problems accurately to O(k^2 log k), where k is the frequency. These solutions differ greatly from the zero-frequency approximations. For the Dirichlet problem, numerical examples are shown to verify the theoretical estimates.
Army Research Laboratory. 1999 Annual Review
1999-01-01
identification, and tracking of moving vehicles. Sound scattering in the air is caused by fluctuations in temperature, Cj, and winds, C*. Most Army models of...realistic inhomogeneous atmosphere. Hill Three-Dimensional Modeling and Simulation of Kinetic Energy Penetrators and Armor Materials During Ballistic...versions of these tools have been tested on a model muzzle brake fluid flow problem for ARDEC Benet Labs and on a helicopter rotor aerodynamics problem
Unsupervised universal steganalyzer for high-dimensional steganalytic features
NASA Astrophysics Data System (ADS)
Hou, Xiaodan; Zhang, Tao
2016-11-01
The research in developing steganalytic features has been highly successful. These features are extremely powerful when applied to supervised binary classification problems. However, they are incompatible with unsupervised universal steganalysis because the unsupervised method cannot distinguish embedding distortion from varying levels of noises caused by cover variation. This study attempts to alleviate the problem by introducing similarity retrieval of image statistical properties (SRISP), with the specific aim of mitigating the effect of cover variation on the existing steganalytic features. First, cover images with some statistical properties similar to those of a given test image are searched from a retrieval cover database to establish an aided sample set. Then, unsupervised outlier detection is performed on a test set composed of the given test image and its aided sample set to determine the type (cover or stego) of the given test image. Our proposed framework, called SRISP-aided unsupervised outlier detection, requires no training. Thus, it does not suffer from model mismatch mess. Compared with prior unsupervised outlier detectors that do not consider SRISP, the proposed framework not only retains the universality but also exhibits superior performance when applied to high-dimensional steganalytic features.
Jamming and condensation in one-dimensional driven flow
NASA Astrophysics Data System (ADS)
Soh, Hyungjoon; Ha, Meesoon; Jeong, Hawoong
2018-03-01
We revisit the slow-bond (SB) problem of the one-dimensional (1D) totally asymmetric simple exclusion process (TASEP) with modified hopping rates. In the original SB problem, it turns out that a local defect is always relevant to the system as jamming, so that phase separation occurs in the 1D TASEP. However, crossover scaling behaviors are also observed as finite-size effects. In order to check if the SB can be irrelevant to the system with particle interaction, we employ the condensation concept in the zero-range process. The hopping rate in the modified TASEP depends on the interaction parameter and the distance up to the nearest particle in the moving direction, besides the SB factor. In particular, we focus on the interplay of jamming and condensation in the current-density relation of 1D driven flow. Based on mean-field calculations, we present the fundamental diagram and the phase diagram of the modified SB problem, which are numerically checked. Finally, we discuss how the condensation of holes suppresses the jamming of particles and vice versa, where the partially condensed phase is the most interesting, compared to that in the original SB problem.
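The jamming effect of a slow bond can be reproduced with a few lines of simulation: a TASEP on a ring in which one bond hops at a reduced rate carries a visibly reduced current. System size, rates, and step counts here are illustrative, and this sketch omits the interaction-dependent hopping rates studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

def tasep_current(slow_rate, L=100, density=0.5, steps=100000):
    """Average current of a TASEP on a ring where bond 0 hops at `slow_rate`."""
    occ = np.zeros(L, dtype=bool)
    occ[rng.choice(L, size=int(density * L), replace=False)] = True
    hops = 0
    for _ in range(steps):
        i = int(rng.integers(L))        # attempt a hop across bond i -> i+1
        j = (i + 1) % L
        rate = slow_rate if i == 0 else 1.0
        if occ[i] and not occ[j] and rng.random() < rate:
            occ[i], occ[j] = False, True
            hops += 1
    return hops / steps

j_pure = tasep_current(slow_rate=1.0)   # homogeneous ring
j_slow = tasep_current(slow_rate=0.2)   # one slow bond causes jamming
```

The homogeneous current is close to the mean-field value rho(1 - rho) = 0.25 at half filling; the single defect lowers the global current and phase-separates the density profile.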
Stress Recovery and Error Estimation for Shell Structures
NASA Technical Reports Server (NTRS)
Yazdani, A. A.; Riggs, H. R.; Tessler, A.
2000-01-01
The Penalized Discrete Least-Squares (PDLS) stress recovery (smoothing) technique developed for two-dimensional linear elliptic problems is adapted here to three-dimensional shell structures. The surfaces are restricted to those which have a 2-D parametric representation, or which can be built up of such surfaces. The proposed strategy involves mapping the finite element results to the 2-D parametric space which describes the geometry, and smoothing is carried out in the parametric space using the PDLS-based Smoothing Element Analysis (SEA). Numerical results for two well-known shell problems are presented to illustrate the performance of SEA/PDLS for these problems. The recovered stresses are used in the Zienkiewicz-Zhu a posteriori error estimator. The estimated errors are used to demonstrate the performance of SEA-recovered stresses in automated adaptive mesh refinement of shell structures. The numerical results are encouraging. Further testing involving more complex, practical structures is necessary.
A technique for the reduction of banding in Landsat Thematic Mapper Images
Helder, Dennis L.; Quirk, Bruce K.; Hood, Joy J.
1992-01-01
The radiometric difference between forward and reverse scans in Landsat thematic mapper (TM) images, referred to as "banding," can create problems when enhancing the image for interpretation or when performing quantitative studies. Recent research has led to the development of a method that reduces the banding in Landsat TM data sets. It involves passing a one-dimensional spatial kernel over the data set. This kernel is developed from the statistics of the banding pattern and is based on the Wiener filter. It has been implemented on both a DOS-based microcomputer and several UNIX-based computer systems. The algorithm has successfully reduced the banding in several test data sets.
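As a rough illustration of the idea (not the authors' exact filter), the sketch below estimates the alternating forward/reverse-scan offset from row-mean statistics and attenuates it with a one-dimensional Wiener-style gain. The kernel width and noise variance are assumed values, not the banding statistics of the paper.

```python
import numpy as np

def destripe(image, noise_var=1.0, width=5):
    """Wiener-style 1-D destriping applied along the column direction.

    The banding estimate is the deviation of each row mean from a
    smoothed row-mean profile; a Wiener gain (signal variance over
    signal-plus-noise variance) scales the correction.
    """
    row_means = image.mean(axis=1)
    padded = np.pad(row_means, width // 2, mode="edge")
    smooth = np.convolve(padded, np.ones(width) / width, mode="valid")
    stripe = row_means - smooth                     # banding estimate
    gain = stripe.var() / (stripe.var() + noise_var)
    return image - gain * stripe[:, None]

# synthetic scene with alternating forward/reverse scan offsets
rng = np.random.default_rng(0)
scene = rng.normal(100.0, 5.0, (64, 64))
banded = scene + 8.0 * (np.arange(64) % 2 == 0)[:, None]
clean = destripe(banded)
```

The even/odd row-mean difference of the synthetic image drops substantially after filtering, mimicking the banding reduction described in the abstract.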
A global analysis of the ozone deficit in the upper stratosphere and lower mesosphere
NASA Technical Reports Server (NTRS)
Eluszkiewicz, Janusz; Allen, Mark
1993-01-01
The global measurements of temperature, ozone, water vapor, and nitrogen dioxide acquired by the Limb Infrared Monitor of the Stratosphere (LIMS), supplemented by a precomputed distribution of chlorine monoxide, are used to test the balance between odd oxygen production and loss in the upper stratosphere and lower mesosphere. An efficient photochemical equilibrium model, whose validity is ascertained by comparison with the results from a fully time-dependent one-dimensional model at selected latitudes, is used in the calculations. The computed ozone abundances are systematically lower than observations for May 1-7, 1979, which suggests, contrary to the conclusions of other recent studies, a real problem in model simulations of stratospheric ozone.
Modeling Electronic Quantum Transport with Machine Learning
Lopez Bezanilla, Alejandro; von Lilienfeld Toal, Otto A.
2014-06-11
We present a machine learning approach to solve electronic quantum transport equations of one-dimensional nanostructures. The transmission coefficients of disordered systems were computed to provide training and test data sets to the machine. The system’s representation encodes energetic as well as geometrical information to characterize similarities between disordered configurations, while the Euclidean norm is used as a measure of similarity. Errors for out-of-sample predictions systematically decrease with training set size, enabling the accurate and fast prediction of new transmission coefficients. The remarkable performance of our model to capture the complexity of interference phenomena lends further support to its viability in dealing with transport problems of an undulatory nature.
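The described setup, a Euclidean-norm similarity measure and out-of-sample errors that fall with training-set size, matches plain kernel ridge regression with a Gaussian kernel. The sketch below reproduces that learning-curve behavior on a synthetic stand-in for transmission data; the target function, kernel width, and regularization are illustrative assumptions, not the paper's descriptor or data.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Similarity built from the Euclidean norm between representations."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def krr_fit_predict(Xtr, ytr, Xte, lam=1e-6, sigma=1.0):
    """Kernel ridge regression: fit on (Xtr, ytr), predict at Xte."""
    K = gaussian_kernel(Xtr, Xtr, sigma)
    alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)
    return gaussian_kernel(Xte, Xtr, sigma) @ alpha

# synthetic stand-in: 3-feature "disorder descriptors" -> smooth target
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (400, 3))
y = np.cos(2.0 * X).prod(axis=1)

Xte, yte = X[300:], y[300:]               # held-out test set
errors = [np.abs(krr_fit_predict(X[:n], y[:n], Xte) - yte).mean()
          for n in (25, 100, 300)]
```

The mean out-of-sample error shrinks as the training set grows from 25 to 300 points, the qualitative behavior the abstract reports for transmission coefficients.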
Stability and chaos in Kustaanheimo-Stiefel space induced by the Hopf fibration
NASA Astrophysics Data System (ADS)
Roa, Javier; Urrutxua, Hodei; Peláez, Jesús
2016-07-01
The need for the extra dimension in Kustaanheimo-Stiefel (KS) regularization is explained by the topology of the Hopf fibration, which defines the geometry and structure of KS space. A trajectory in Cartesian space is represented by a four-dimensional manifold called the fundamental manifold. Based on geometric and topological aspects, classical concepts of stability are translated into KS language. The separation between manifolds of solutions generalizes the concept of Lyapunov stability. The dimension-raising nature of the fibration transforms fixed points, limit cycles, attractive sets, and Poincaré sections to higher dimensional subspaces. From these concepts chaotic systems are studied. In strongly perturbed problems, the numerical error can break the topological structure of KS space: points in a fibre are no longer transformed to the same point in Cartesian space. An observer in three dimensions will see orbits departing from the same initial conditions but diverging in time. This apparent randomness of the integration can only be understood in four dimensions. The concept of topological stability results in a simple method for estimating the time-scale in which numerical simulations can be trusted. Ideally, all trajectories departing from the same fibre should be KS transformed to a unique trajectory in three-dimensional space, because the fundamental manifold that they constitute is unique. By monitoring how trajectories departing from one fibre separate from the fundamental manifold, a critical time, equivalent to the Lyapunov time, is estimated. These concepts are tested on N-body examples: the Pythagorean problem, and an example of field stars interacting with a binary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.
2014-01-15
Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s^2 times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value.
We call this a convex monotonicity preserving property and show by examples that it is very useful in parabolic problems with variable diffusion coefficients. This includes variable coefficient parabolic equations that might give rise to skew symmetric terms. The RKC1 and RKC2 schemes do not share this convex monotonicity preserving property. One-dimensional and two-dimensional von Neumann stability analyses of RKC1, RKC2, RKL1 and RKL2 are also presented, showing that the latter two have some advantages. The paper includes several details to facilitate implementation. A detailed accuracy analysis is presented to show that the methods reach their design accuracies. A stringent set of test problems is also presented. To demonstrate the robustness and versatility of our methods, we show their successful operation on problems involving linear and non-linear heat conduction and viscosity, resistive magnetohydrodynamics, ambipolar diffusion dominated magnetohydrodynamics, level set methods and flux limited radiation diffusion. In a prior paper (Meyer, Balsara and Aslam 2012 [36]) we have also presented an extensive test-suite showing that the RKL2 method works robustly in the presence of shocks in an anisotropically conducting, magnetized plasma.
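A minimal sketch of the RKL1 recursion applied to 1D heat conduction is given below, using the Legendre-recursion coefficients mu_j = (2j-1)/j and nu_j = (1-j)/j with stage weight w1 = 2/(s^2+s), as published by Meyer, Balsara and Aslam; the grid size and the choice s = 8 are illustrative.

```python
import numpy as np

def rkl1_step(u, dt, rhs, s):
    """One s-stage RKL1 superstep for du/dt = rhs(u).

    The superstep advances the solution by (s^2+s)/2 explicit
    parabolic time-steps while remaining stable.
    """
    w1 = 2.0 / (s * s + s)
    y_prev, y = u, u + w1 * dt * rhs(u)            # Y_0 and Y_1
    for j in range(2, s + 1):
        mu, nu = (2.0 * j - 1.0) / j, (1.0 - j) / j
        y_prev, y = y, mu * y + nu * y_prev + mu * w1 * dt * rhs(y)
    return y

# 1-D periodic heat equation u_t = u_xx
n = 64
dx = 1.0 / n
def lap(u):
    return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx ** 2

dt_expl = 0.5 * dx ** 2                  # explicit parabolic limit
s = 8
dt_super = 0.5 * (s * s + s) * dt_expl   # one superstep = 36 explicit steps
x = np.arange(n) * dx
u0 = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)
u1 = rkl1_step(u0, dt_super, lap, s)
```

After the superstep the sine mode has decayed while the mean is preserved exactly, since mu_j + nu_j = 1 at every stage and the periodic Laplacian is conservative.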
Nonclassical models of the theory of plates and shells
NASA Astrophysics Data System (ADS)
Annin, Boris D.; Volchkov, Yuri M.
2017-11-01
Publications dealing with the study of methods of reducing a three-dimensional problem of the elasticity theory to a two-dimensional problem of the theory of plates and shells are reviewed. Two approaches are considered: the use of kinematic and force hypotheses and expansion of solutions of the three-dimensional elasticity theory in terms of the complete system of functions. Papers where a three-dimensional problem is reduced to a two-dimensional problem with the use of several approximations of each of the unknown functions (stresses and displacements) by segments of the Legendre polynomials are also reviewed.
Very high order discontinuous Galerkin method in elliptic problems
NASA Astrophysics Data System (ADS)
Jaśkowiec, Jan
2017-09-01
The paper deals with a high-order discontinuous Galerkin (DG) method whose approximation order exceeds 20 and reaches 100, and even 1000, in the one-dimensional case. To achieve such high-order solutions, the DG method has to be combined with the finite difference method. The basis functions of this method are high-order orthogonal Legendre or Chebyshev polynomials. These polynomials are defined in one-dimensional space (1D), but they can easily be adapted to two-dimensional space (2D) by cross products. There are no nodes in the elements, and the degrees of freedom are the coefficients of the linear combination of basis functions. This sort of analysis requires reference elements, so transformations of the reference element into the real one are needed, as well as the transformations connected with the mesh skeleton. Due to the orthogonality of the basis functions, the obtained matrices are sparse even for finite elements with more than a thousand degrees of freedom. In consequence, the truncation errors are limited and very high-order analysis can be performed. The paper is illustrated with a set of 1D and 2D benchmark examples for elliptic problems. The examples demonstrate the great effectiveness of the method, which can shorten calculation times by factors of hundreds.
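The sparsity claim rests on the orthogonality of the Legendre basis, which is easy to check numerically: under Gauss-Legendre quadrature the mass matrix of the first p+1 Legendre polynomials is exactly diagonal with entries 2/(2k+1). A minimal check (illustrative order p = 12, far below the orders used in the paper):

```python
import numpy as np
from numpy.polynomial import legendre as leg

p = 12                                    # modest illustrative order
x, w = leg.leggauss(p + 1)                # exact for degree <= 2p + 1
# V[k, q] = P_k evaluated at quadrature node q
V = np.stack([leg.legval(x, np.eye(p + 1)[k]) for k in range(p + 1)])
M = (V * w) @ V.T                         # M[k, l] = sum_q w_q P_k(x_q) P_l(x_q)
```

M comes out diagonal with the classical entries 2/(2k+1): with an orthogonal basis the Galerkin mass matrix is trivially sparse, which is what keeps very high-order systems tractable.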
Distributed Computation of the knn Graph for Large High-Dimensional Point Sets
Plaku, Erion; Kavraki, Lydia E.
2009-01-01
High-dimensional problems arising from robot motion planning, biology, data mining, and geographic information systems often require the computation of k nearest neighbor (knn) graphs. The knn graph of a data set is obtained by connecting each point to its k closest points. As the research in the above-mentioned fields progressively addresses problems of unprecedented complexity, the demand for computing knn graphs based on arbitrary distance metrics and large high-dimensional data sets increases, exceeding resources available to a single machine. In this work we efficiently distribute the computation of knn graphs for clusters of processors with message passing. Extensions to our distributed framework include the computation of graphs based on other proximity queries, such as approximate knn or range queries. Our experiments show nearly linear speedup with over one hundred processors and indicate that similar speedup can be obtained with several hundred processors. PMID:19847318
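The per-processor work in such a framework is a brute-force knn query of a chunk of points against the whole set; the assembled chunks reproduce the serial knn graph. The sketch below imitates that decomposition with a plain loop in place of message passing (the worker count, dimensionality, and brute-force metric code are illustrative, not the authors' framework):

```python
import numpy as np

def knn_chunk(chunk, points, k):
    """Brute-force knn of one chunk of queries against all points
    (the work one processor would do)."""
    d = np.linalg.norm(chunk[:, None, :] - points[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, 1:k + 1]   # drop self (distance 0)

def knn_graph(points, k, n_workers=4):
    """Assemble the knn graph from independently computed chunks."""
    chunks = np.array_split(np.arange(len(points)), n_workers)
    return np.vstack([knn_chunk(points[c], points, k) for c in chunks])

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 8))           # a small high-dimensional set
G = knn_graph(pts, k=5)
```

Because the chunks are independent, the loop could be replaced by ranks exchanging chunks over MPI without changing the result, which is what makes the near-linear speedup reported in the abstract plausible.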
NASA Astrophysics Data System (ADS)
Brdar, S.; Seifert, A.
2018-01-01
We present a novel Monte-Carlo ice microphysics model, McSnow, to simulate the evolution of ice particles due to deposition, aggregation, riming, and sedimentation. The model is an application and extension of the super-droplet method of Shima et al. (2009) to the more complex problem of rimed ice particles and aggregates. For each individual super-particle, the ice mass, rime mass, rime volume, and the number of monomers are predicted establishing a four-dimensional particle-size distribution. The sensitivity of the model to various assumptions is discussed based on box model and one-dimensional simulations. We show that the Monte-Carlo method provides a feasible approach to tackle this high-dimensional problem. The largest uncertainty seems to be related to the treatment of the riming processes. This calls for additional field and laboratory measurements of partially rimed snowflakes.
Verification of low-Mach number combustion codes using the method of manufactured solutions
NASA Astrophysics Data System (ADS)
Shunn, Lee; Ham, Frank; Knupp, Patrick; Moin, Parviz
2007-11-01
Many computational combustion models rely on tabulated constitutive relations to close the system of equations. As these reactive state-equations are typically multi-dimensional and highly non-linear, their implications on the convergence and accuracy of simulation codes are not well understood. In this presentation, the effects of tabulated state-relationships on the computational performance of low-Mach number combustion codes are explored using the method of manufactured solutions (MMS). Several MMS examples are developed and applied, progressing from simple one-dimensional configurations to problems involving higher dimensionality and solution-complexity. The manufactured solutions are implemented in two multi-physics hydrodynamics codes: CDP developed at Stanford University and FUEGO developed at Sandia National Laboratories. In addition to verifying the order-of-accuracy of the codes, the MMS problems help highlight certain robustness issues in existing variable-density flow-solvers. Strategies to overcome these issues are briefly discussed.
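The core MMS loop is independent of the combustion setting and can be shown on a toy solver: pick a manufactured solution, derive the forcing it implies, and confirm the observed order of accuracy under mesh refinement. The sketch below does this for a 1D Poisson problem with a second-order finite-difference solver (the solver and solution choice are illustrative, not CDP or FUEGO):

```python
import numpy as np

def solve_poisson(f, n):
    """Second-order central-difference solve of -u'' = f on (0, 1)
    with u(0) = u(1) = 0."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    main = 2.0 * np.ones(n - 1)
    off = -np.ones(n - 2)
    A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h ** 2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f(x[1:-1]))
    return x, u

u_exact = lambda x: np.sin(np.pi * x)               # manufactured solution
forcing = lambda x: np.pi ** 2 * np.sin(np.pi * x)  # source term it implies

errs = []
for n in (16, 32, 64):
    x, u = solve_poisson(forcing, n)
    errs.append(np.abs(u - u_exact(x)).max())
e = np.array(errs)
orders = np.log2(e[:-1] / e[1:])         # observed order of accuracy
```

Halving h cuts the max error by roughly a factor of four, so the observed order sits at the design value of 2; an implementation bug would show up as a degraded order, which is exactly how MMS exposes code defects.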
Ermakov's Superintegrable Toy and Nonlocal Symmetries
NASA Astrophysics Data System (ADS)
Leach, P. G. L.; Karasu Kalkanli, A.; Nucci, M. C.; Andriopoulos, K.
2005-11-01
We investigate the symmetry properties of a pair of Ermakov equations. The system is superintegrable and yet possesses only three Lie point symmetries with the algebra sl(2, R). The number of point symmetries is insufficient and the algebra unsuitable for the complete specification of the system. We use the method of reduction of order to reduce the nonlinear fourth-order system to a third-order system comprising a linear second-order equation and a conservation law. We obtain the representation of the complete symmetry group from this system. Four of the required symmetries are nonlocal and the algebra is the direct sum of a one-dimensional Abelian algebra with the semidirect sum of a two-dimensional solvable algebra with a two-dimensional Abelian algebra. The problem illustrates the difficulties which can arise in very elementary systems. Our treatment demonstrates the existence of possible routes to overcome these problems in a systematic fashion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McEneaney, William M.
2004-08-15
Stochastic games under imperfect information are typically computationally intractable, even in the discrete-time/discrete-state case considered here. We consider a problem where one player has perfect information. A function of a conditional probability distribution is proposed as an information state. In the problem formulation here, the payoff is only a function of the terminal state of the system, and the initial information state is either linear or a sum of max-plus delta functions. When the initial information state belongs to these classes, its propagation is finite-dimensional. The state feedback value function is also finite-dimensional, and is obtained via dynamic programming, but has a nonstandard form due to the necessity of an expanded state variable. Under a saddle-point assumption, Certainty Equivalence is obtained and the proposed function is indeed an information state.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ridolfi, E.; Napolitano, F., E-mail: francesco.napolitano@uniroma1.it; Alfonso, L.
2016-06-08
The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.
One-Dimensional Fokker-Planck Equation with Quadratically Nonlinear Quasilocal Drift
NASA Astrophysics Data System (ADS)
Shapovalov, A. V.
2018-04-01
The Fokker-Planck equation in one-dimensional spacetime with quadratically nonlinear nonlocal drift in the quasilocal approximation is reduced with the help of scaling of the coordinates and time to a partial differential equation with a third derivative in the spatial variable. Determining equations for the symmetries of the reduced equation are derived and the Lie symmetries are found. A group invariant solution having the form of a traveling wave is found. Within the framework of Adomian's iterative method, the first iterations of an approximate solution of the Cauchy problem are obtained. Two illustrative examples of exact solutions are found.
NASA Astrophysics Data System (ADS)
Belkin, A. E.; Semenov, V. K.
2016-05-01
We consider the problem of modeling the test where a solid-rubber tire runs on a chassis dynamometer for determining the tire rolling resistance characteristics. We state the problem of free steady-state rolling of the tire along the test drum with the energy scattering in the rubber in the course of cyclic deformation taken into account. The viscoelastic behavior of the rubber is described by the Bergström-Boyce model, whose numerical parameters are determined experimentally from the results of compression tests with specimens. The finite element method is used to obtain the solution of the three-dimensional viscoelasticity problem. To estimate the adequacy of the constructed model, we compare the numerical results with those obtained in the solid-rubber tire tests on the Hasbach stand, using the values of the rolling resistance forces for various loads on the tire.
Statistical Methods for Turbine Blade Dynamics
2008-09-30
…are investigated for two vibration problems regarding a one-dimensional beam and a three-dimensional plate structure. Fragmentary citations in the record include: Journal of Sound and Vibration 317, pp. 625-645; Calanni, G., Volovoi, V., Ruzzene, M., Vining, C., Cento, P. (2007), Application of Bayesian…; "…gaps," Reliability Engineering and System Safety, no. 85, pp. 249-266, 2004; Benfield, W. A. and Hruda, R. F., "Vibration analysis of structures…"
A quantitative study on magnesium alloy stent biodegradation.
Gao, Yuanming; Wang, Lizhen; Gu, Xuenan; Chu, Zhaowei; Guo, Meng; Fan, Yubo
2018-06-06
Insufficient scaffolding time during rapid corrosion is the main problem of the magnesium alloy stent (MAS). The finite element method has been used to investigate the corrosion of MAS. However, related studies have mostly described all elements as suffering corrosion from the viewpoint of one-dimensional corrosion. Multi-dimensional corrosion significantly influences the mechanical integrity of MAS structures such as edges and corners. In this study, the effects of multi-dimensional corrosion were first studied quantitatively by experiment, and a phenomenological corrosion model was then developed to account for these effects. We performed immersion tests with magnesium alloy (AZ31B) cubes having different numbers of exposed surfaces to analyze the differences among dimensions. The corrosion rates of the cubes were found to be almost proportional to their numbers of exposed surfaces, especially when pitting corrosion is not marked. The cubes also represented the hexahedron elements in the simulation. In conclusion, the corrosion rate of every element accelerates with an increasing number of corrosion surfaces in multi-dimensional corrosion. The damage ratios among elements of the same size are proportional to the ratios of their numbers of corrosion surfaces under uniform corrosion. A finite element simulation using the proposed model provided more details of the changes in morphology and mechanics over the scaffolding time by removing 25.7% of the elements of the MAS. The proposed corrosion model reflects the effects of multiple dimensions on corrosion and can be used to predict the degradation process of MAS quantitatively.
Sánchez Pérez, J F; Conesa, M; Alhama, I; Alhama, F; Cánovas, M
2017-01-01
Classical dimensional analysis and nondimensionalization are assumed to be two similar approaches in the search for dimensionless groups. Both techniques simplify the study of many problems. The first approach does not require knowledge of the mathematical model, a deep understanding of the physical phenomenon involved being sufficient, while the second one begins with the governing equations and reduces them to their dimensionless form by simple mathematical manipulations. In this work, a formal protocol is proposed for applying the nondimensionalization process to ordinary differential equations, linear or not, leading to dimensionless normalized equations from which the resulting dimensionless groups have two inherent properties: on the one hand, they can be physically interpreted as balances between counteracting quantities in the problem, and on the other hand, they are of the order of magnitude unity. The solutions provided by nondimensionalization are more precise in every case than those from dimensional analysis, as illustrated by the applications studied in this work.
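As a minimal illustration of the protocol (a textbook example chosen here, not one of the applications in the paper), consider the damped linear oscillator. Introducing normalized variables X = x/x_0 and tau = t/t_c and dividing through by k x_0 gives

```latex
m\ddot{x} + c\dot{x} + kx = 0
\;\longrightarrow\;
\frac{m}{k\,t_c^{2}}\,X'' + \frac{c}{k\,t_c}\,X' + X = 0
\;\xrightarrow{\;t_c=\sqrt{m/k}\;}\;
X'' + \pi_1\,X' + X = 0,
\qquad \pi_1 = \frac{c}{\sqrt{mk}} .
```

Choosing the characteristic time t_c = sqrt(m/k) normalizes the inertia term, and the single surviving group pi_1 is read directly as the balance of damping against the inertia-stiffness scale, of order-of-magnitude unity for moderately damped systems, in line with the two properties stated in the abstract.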
Programming a hillslope water movement model on the MPP
NASA Technical Reports Server (NTRS)
Devaney, J. E.; Irving, A. R.; Camillo, P. J.; Gurney, R. J.
1987-01-01
A physically based numerical model was developed of heat and moisture flow within a hillslope on a parallel architecture computer, as a precursor to a model of a complete catchment. Moisture flow within a catchment includes evaporation, overland flow, flow in unsaturated soil, and flow in saturated soil. Because of the empirical evidence that moisture flow in unsaturated soil is mainly in the vertical direction, flow in the unsaturated zone can be modeled as a series of one-dimensional columns. This initial version of the hillslope model includes evaporation and a single column of one-dimensional unsaturated zone flow. This case has already been solved on an IBM 3081 computer and is now being applied to the massively parallel processor architecture so as to make the extension to the one-dimensional case easier and to check the problems and benefits of using a parallel architecture machine.
Parallel Simulation of Three-Dimensional Free-Surface Fluid Flow Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
BAER,THOMAS A.; SUBIA,SAMUEL R.; SACKINGER,PHILIP A.
2000-01-18
We describe parallel simulations of viscous, incompressible, free surface, Newtonian fluid flow problems that include dynamic contact lines. The Galerkin finite element method was used to discretize the fully-coupled governing conservation equations, and a "pseudo-solid" mesh mapping approach was used to determine the shape of the free surface. In this approach, the finite element mesh is allowed to deform to satisfy quasi-static solid mechanics equations subject to geometric or kinematic constraints on the boundaries. As a result, nodal displacements must be included in the set of problem unknowns. Issues concerning the proper constraints along the solid-fluid dynamic contact line in three dimensions are discussed. Parallel computations are carried out for an example taken from the coating flow industry, flow in the vicinity of a slot coater edge. This is a three-dimensional free-surface problem possessing a contact line that advances at the web speed in one region but transitions to static behavior in another part of the flow domain. Discussion focuses on parallel speedups for fixed problem size, a class of problems of immediate practical importance.
A One-Dimensional Global-Scaling Erosive Burning Model Informed by Blowing Wall Turbulence
NASA Technical Reports Server (NTRS)
Kibbey, Timothy P.
2014-01-01
A derivation of turbulent flow parameters, combined with data from erosive burning test motors and blowing-wall tests, results in erosive burning model candidates useful in one-dimensional internal ballistics analysis and capable of scaling across wide ranges of motor size. The real-time burn rate data come from three test campaigns of subscale segmented solid rocket motors tested at two facilities. The flow theory admits the important effect of the blowing wall on the turbulent friction coefficient by using blowing-wall data to determine the blowing-wall friction coefficient. The erosive burning behavior of full-scale motors is now predicted more closely than with other recent models.
Nonlinear bending models for beams and plates
Antipov, Y. A.
2014-01-01
A new nonlinear model for large deflections of a beam is proposed. It comprises the Euler–Bernoulli boundary value problem for the deflection and a nonlinear integral condition. When bending does not alter the beam length, this condition guarantees that the deflected beam has the original length and fixes the horizontal displacement of the free end. The numerical results are in good agreement with the ones provided by the elastica model. Dynamic and two-dimensional generalizations of this nonlinear one-dimensional static model are also discussed. The model problem for an inextensible rectangular Kirchhoff plate, when one side is clamped, the opposite one is subjected to a shear force, and the others are free of moments and forces, is reduced to a singular integral equation with two fixed singularities. The singularities of the unknown function are examined, and a series-form solution is derived by the collocation method in terms of the associated Jacobi polynomials. The procedure requires solving an infinite system of linear algebraic equations for the expansion coefficients subject to the inextensibility condition. PMID:25294960
Li, Xue; Dong, Jiao
2018-01-01
The material considered in this study not only has a functionally graded characteristic but also exhibits different tensile and compressive moduli of elasticity. One-dimensional and two-dimensional mechanical models for a functionally graded beam with a bimodular effect were established first. By taking the grade function as an exponential expression, the analytical solutions of a bimodular functionally graded beam under pure bending and lateral-force bending were obtained. The regression from the two-dimensional solution to the one-dimensional solution is verified. The physical quantities in a bimodular functionally graded beam are compared with their counterparts in a classical problem and in a functionally graded beam without a bimodular effect. The validity of the plane-section assumption under pure bending and lateral-force bending is analyzed. Three typical cases, in which the tensile modulus is greater than, equal to, or less than the compressive modulus, are discussed. The result indicates that, due to the introduction of the bimodular functionally graded effect of the materials, the maximum tensile and compressive bending stresses may not occur at the bottom and top of the beam. The real location at which the maximum bending stress occurs is determined via the extreme condition for the analytical solution. PMID:29772835
A second-order accurate kinetic-theory-based method for inviscid compressible flows
NASA Technical Reports Server (NTRS)
Deshpande, Suresh M.
1986-01-01
An upwind method for the numerical solution of the Euler equations is presented. This method, called the kinetic numerical method (KNM), is based on the fact that the Euler equations are moments of the Boltzmann equation of the kinetic theory of gases when the distribution function is Maxwellian. The KNM consists of two phases, the convection phase and the collision phase. The method is unconditionally stable and explicit. It is highly vectorizable and can be easily made total variation diminishing for the distribution function by a suitable choice of the interpolation strategy. The method is applied to a one-dimensional shock-propagation problem and to a two-dimensional shock-reflection problem.
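The KNM rests on the fact that the Euler equations are velocity moments of the Boltzmann equation with a Maxwellian distribution. The following sketch (not the paper's code; the gas state is illustrative) verifies numerically that the zeroth, first, and second velocity moments of a 1D Maxwellian recover the density, momentum, and momentum flux rho*u^2 + p:

```python
import numpy as np

# Minimal numerical check of the moment property underlying the KNM:
# velocity moments of a Maxwellian reproduce the 1D Euler quantities.
rho, u, R, T = 1.2, 50.0, 287.0, 300.0   # illustrative gas state (SI units)
p = rho * R * T
c = np.sqrt(R * T)                       # thermal speed scale

v = np.linspace(u - 12 * c, u + 12 * c, 100001)
dv = v[1] - v[0]
f = rho / np.sqrt(2 * np.pi * R * T) * np.exp(-(v - u) ** 2 / (2 * R * T))

mass = np.sum(f) * dv                # zeroth moment  -> rho
momentum = np.sum(v * f) * dv        # first moment   -> rho * u
mom_flux = np.sum(v ** 2 * f) * dv   # second moment  -> rho * u**2 + p
```

The quadrature is a simple Riemann sum, which is ample here because the Maxwellian is smooth and decays rapidly within the chosen velocity window.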
NASA Astrophysics Data System (ADS)
Das, Sumanta; Elfving, Vincent E.; Reiter, Florentin; Sørensen, Anders S.
2018-04-01
In a preceding paper, we introduced a formalism to study the scattering of low-intensity fields from a system of multilevel emitters embedded in a three-dimensional (3D) dielectric medium. Here we show how this photon-scattering relation can be used to analyze the scattering of single photons and weak coherent states from any generic multilevel quantum emitter coupled to a one-dimensional (1D) waveguide. The reduction of the photon-scattering relation to 1D waveguides provides a direct solution of the scattering problem involving low-intensity fields in the waveguide QED regime. To show how our formalism works, we consider examples of multilevel emitters and evaluate the transmitted and reflected field amplitudes. Furthermore, we extend our study to include the dynamical response of the emitters for scattering of a weak coherent photon pulse. As our photon-scattering relation is based on the Heisenberg picture, it is quite useful for problems involving photodetection in the waveguide architecture. We show this by considering a specific problem of state generation by photodetection in a multilevel emitter, where our formalism exhibits its full potential. Since the considered emitters are generic, the 1D results apply to a plethora of physical systems, such as atoms, ions, quantum dots, superconducting qubits, and nitrogen-vacancy centers coupled to a 1D waveguide or transmission line.
A one-dimensional model of subsurface hillslope flow
Jason C. Fisher
1997-01-01
A one-dimensional, finite-difference model of saturated subsurface flow within a hillslope was developed. The model uses rainfall, elevation data, hydraulic conductivity, and a storage coefficient to predict the saturated thickness in time and space. The model was tested against piezometric data collected in a swale located in the headwaters of the North...
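The abstract names the model's ingredients (recharge, hydraulic conductivity K, storage coefficient S) but not its equations. As a hedged sketch only, assuming a Boussinesq-type balance S dh/dt = K d/dx(h dh/dx) + R on a horizontal base, an explicit finite-difference update for the saturated thickness could look like:

```python
import numpy as np

# Assumed-form sketch (not the paper's model): explicit step for saturated
# thickness h(x,t), with conductivity K, storage S, and recharge rate R.
def step(h, dx, dt, K, S, R):
    h_face = 0.5 * (h[1:] + h[:-1])              # thickness at cell faces
    flux = -K * h_face * (h[1:] - h[:-1]) / dx   # Darcy flux between nodes
    dhdt = np.zeros_like(h)
    dhdt[1:-1] = -(flux[1:] - flux[:-1]) / dx    # interior divergence of flux
    h_new = h + dt * (dhdt + R) / S
    return np.maximum(h_new, 0.0)                # thickness cannot go negative

h = np.full(50, 1.0)     # initial 1 m saturated thickness everywhere
for _ in range(100):
    h = step(h, dx=1.0, dt=10.0, K=1e-4, S=0.1, R=1e-6)
```

With uniform initial thickness and uniform recharge, the flux terms vanish and the water table simply rises by dt*R/S per step; the explicit scheme is stable here because dt*K*h/(S*dx**2) is well below 1/2.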
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Juan, E-mail: cheng_juan@iapcm.ac.cn; Shu, Chi-Wang, E-mail: shu@dam.brown.edu
In applications such as astrophysics and inertial confinement fusion, there are many three-dimensional cylindrical-symmetric multi-material problems which are usually simulated by Lagrangian schemes in two-dimensional cylindrical coordinates. For this type of simulation, a critical issue for the schemes is to keep spherical symmetry in the cylindrical coordinate system if the original physical problem has this symmetry. In the past decades, several Lagrangian schemes with such a symmetry property have been developed, but all of them are only first-order accurate. In this paper, we develop a second-order cell-centered Lagrangian scheme for solving the compressible Euler equations in cylindrical coordinates, based on control volume discretizations, which is designed to have uniformly second-order accuracy and the capability to preserve one-dimensional spherical symmetry in a two-dimensional cylindrical geometry when computed on an equal-angle-zoned initial grid. The scheme maintains several good properties, such as conservation of mass, momentum, and total energy, and the geometric conservation law. Several two-dimensional numerical examples in cylindrical coordinates are presented to demonstrate the good performance of the scheme in terms of accuracy, symmetry, non-oscillation, and robustness. The advantage of higher-order accuracy is demonstrated in these examples.
On the Ck-embedding of Lorentzian manifolds in Ricci-flat spaces
NASA Astrophysics Data System (ADS)
Avalos, R.; Dahia, F.; Romero, C.
2018-05-01
In this paper, we investigate the problem of non-analytic embeddings of Lorentzian manifolds in Ricci-flat semi-Riemannian spaces. In order to do this, we first review some relevant results in the area and then motivate both the mathematical and physical interest in this problem. We show that any n-dimensional compact Lorentzian manifold (M^n, g), with g in the Sobolev space H^(s+3), s > n/2, admits an isometric embedding in a (2n + 2)-dimensional Ricci-flat semi-Riemannian manifold. The sharpest result available for these types of embeddings, in the general setting, comes as a corollary of Greene's remarkable embedding theorems [R. Greene, Mem. Am. Math. Soc. 97, 1 (1970)], which guarantee the embedding of a compact n-dimensional semi-Riemannian manifold into an n(n + 5)-dimensional semi-Euclidean space, thereby guaranteeing the embedding into a Ricci-flat space of the same dimension. The theorem presented here improves this corollary by n^2 + 3n - 2 codimensions by replacing the Riemann-flat condition with the Ricci-flat one from the beginning. Finally, we present a corollary of this theorem, which shows that a compact strip in an n-dimensional globally hyperbolic space-time can be embedded in a (2n + 2)-dimensional Ricci-flat semi-Riemannian manifold.
Reactive transport in a partially molten system with binary solid solution
NASA Astrophysics Data System (ADS)
Jordan, J.; Hesse, M. A.
2017-12-01
Melt extraction from the Earth's mantle through high-porosity channels is required to explain the composition of the oceanic crust. Feedbacks from reactive melt transport are thought to localize melt into a network of high-porosity channels. Recent studies invoke lithological heterogeneities in the Earth's mantle to seed the localization of partial melts. Therefore, it is necessary to understand the reaction fronts that form as melt flows across the lithological interface between a heterogeneity and the background mantle. Simplified melting models of such systems aid in the interpretation and formulation of larger-scale mantle models. Motivated by these considerations, we present a chromatographic analysis of reactive melt transport across lithological boundaries, using the theory of hyperbolic conservation laws. This is an extension of well-known linear trace-element chromatography to the coupling of major-element and energy transport. Our analysis allows the prediction of the feedbacks that arise in reactive melt transport due to melting, freezing, dissolution, and precipitation for frontal reactions. This study considers the simplified case of a rigid, partially molten porous medium with binary solid solution. As melt traverses a lithological contact, modeled as a Riemann problem, a rich set of features arises, including a reacted zone between an advancing reaction front and partial chemical preservation of the initial contact. Reactive instabilities observed in this study originate at the lithological interface rather than along a chemical gradient, as in most studies of mantle dynamics. We present a regime diagram that predicts where reaction fronts become unstable, thereby allowing melt localization into high-porosity channels through reactive instabilities. After constructing the regime diagram, we test the one-dimensional hyperbolic theory against two-dimensional numerical experiments.
The one-dimensional hyperbolic theory is sufficient for predicting the qualitative behavior of reactive melt transport simulations conducted in two dimensions. The theoretical framework presented can be extended to more complex and realistic phase behavior, and is therefore a useful tool for understanding nonlinear feedbacks in reactive melt transport problems relevant to mantle dynamics.
2009-01-01
Background: The characterisation, or binning, of metagenome fragments is an important first step to further downstream analysis of microbial consortia. Here, we propose a one-dimensional signature, OFDEG, derived from the oligonucleotide frequency profile of a DNA sequence, and show that it is possible to obtain a meaningful phylogenetic signal for relatively short DNA sequences. The one-dimensional signal is essentially a compact representation of higher-dimensional feature spaces of greater complexity and is intended to improve on the tetranucleotide frequency feature space preferred by current compositional binning methods. Results: We compare the fidelity of OFDEG against tetranucleotide frequency in both an unsupervised and a semi-supervised setting on simulated metagenome benchmark data. Four tests were conducted using assembler output of Arachne and phrap, and for each, performance was evaluated on contigs which are greater than or equal to 8 kbp in length and contigs which are composed of at least 10 reads. Using G-C content in conjunction with OFDEG gave an average accuracy of 96.75% (semi-supervised) and 95.19% (unsupervised), versus 94.25% (semi-supervised) and 82.35% (unsupervised) for tetranucleotide frequency. Conclusion: We have presented an observation of an alternative characteristic of DNA sequences. The proposed feature representation has proven to be more beneficial than the existing tetranucleotide frequency space for the metagenome binning problem. We do note, however, that our observation of OFDEG deserves further analysis and investigation. Unsupervised clustering revealed that OFDEG-related features performed better than standard tetranucleotide frequency in representing a relevant organism-specific signal. Further improvement in binning accuracy is given by semi-supervised classification using OFDEG.
The emphasis on a feature-driven, bottom-up approach to the problem of binning reveals promising avenues for future development of techniques to characterise short environmental sequences without bias toward cultivable organisms. PMID:19958473
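For concreteness, the baseline feature the study compares against, the tetranucleotide frequency profile, can be computed as below. This is a sketch of the standard feature only; OFDEG itself is the authors' derived signature and is not reproduced here.

```python
from collections import Counter
from itertools import product

# Standard compositional feature: the 256-dimensional tetranucleotide
# (4-mer) frequency profile of a DNA fragment.
def tetra_freq(seq):
    seq = seq.upper()
    kmers = (seq[i:i + 4] for i in range(len(seq) - 3))
    counts = Counter(k for k in kmers if set(k) <= set("ACGT"))  # skip N etc.
    total = sum(counts.values()) or 1     # avoid division by zero
    # one entry per 4-mer, in lexicographic order AAAA..TTTT
    return [counts["".join(p)] / total for p in product("ACGT", repeat=4)]

profile = tetra_freq("ACGTACGTACGTACGT")
```

Binning methods then cluster or classify fragments in this 256-dimensional space; the paper's point is that a compact one-dimensional signal can carry a comparable phylogenetic signal.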
Application of holography to flow visualization
NASA Technical Reports Server (NTRS)
Lee, G.
1984-01-01
Laser holographic interferometry is being applied to many different types of aerodynamics problems, including two- and three-dimensional flows in wind tunnels, ballistic ranges, rotor test chambers, and turbine facilities. Density over a large field is measured, and velocity, pressure, and Mach number can be deduced.
Hanson, Jack; Paliwal, Kuldip; Litfin, Thomas; Yang, Yuedong; Zhou, Yaoqi
2018-06-19
Accurate prediction of a protein contact map depends greatly on capturing as much contextual information as possible from surrounding residues for a target residue pair. Recently, ultra-deep residual convolutional networks were found to be state-of-the-art in the latest Critical Assessment of Structure Prediction techniques (CASP12; Schaarschmidt et al., 2018) for protein contact map prediction by attempting to provide a protein-wide context at each residue pair. Recurrent neural networks, especially Long Short-Term Memory (LSTM) cells, have seen great success in recent protein residue classification problems due to their ability to propagate information through long protein sequences. Here we propose a novel protein contact map prediction method by stacking residual convolutional networks with two-dimensional residual bidirectional recurrent LSTM networks, using both one-dimensional sequence-based and two-dimensional evolutionary-coupling-based information. We show that the proposed method achieves robust performance over validation and independent test sets, with an Area Under the receiver operating characteristic Curve (AUC) > 0.95 in all tests. When compared to several state-of-the-art methods on an independent test set of 228 proteins, the method yields an AUC value of 0.958, whereas the next-best method obtains an AUC of 0.909. More importantly, the improvement holds for contacts at all sequence-position separations. Specifically, increases in precision of 8.95%, 5.65%, and 2.84% over the next-best method were observed for the top L/10 predictions for short-, medium-, and long-range contacts, respectively. This confirms the usefulness of the ResNets to congregate the short-range relations and of the 2D-BRLSTM to propagate the long-range dependencies throughout the entire protein contact map 'image'. SPOT-Contact server url: http://sparks-lab.org/jack/server/SPOT-Contact/. Supplementary data are available at Bioinformatics online.
A real-space stochastic density matrix approach for density functional electronic structure.
Beck, Thomas L
2015-12-21
The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful considering a future involving increasingly parallel computing architectures. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral as opposed to the extensive matrix operations in traditional approaches.
Quantum mechanics and hidden superconformal symmetry
NASA Astrophysics Data System (ADS)
Bonezzi, R.; Corradini, O.; Latini, E.; Waldron, A.
2017-12-01
Solvability of the ubiquitous quantum harmonic oscillator relies on a spectrum-generating osp(1|2) superconformal symmetry. We study the problem of constructing all quantum mechanical models with a hidden osp(1|2) symmetry on a given space of states. This problem stems from interacting higher-spin models coupled to gravity. In one dimension, we show that the solution to this problem is the Vasiliev-Plyushchay family of quantum mechanical models with hidden superconformal symmetry, obtained by viewing the harmonic oscillator as a one-dimensional Dirac system, so that Grassmann parity equals wave function parity. These models, both oscillator and particlelike, realize all possible unitary irreducible representations of osp(1|2).
A Dimensionally Reduced Clustering Methodology for Heterogeneous Occupational Medicine Data Mining.
Saâdaoui, Foued; Bertrand, Pierre R; Boudet, Gil; Rouffiac, Karine; Dutheil, Frédéric; Chamoux, Alain
2015-10-01
Clustering comprises a set of statistical learning techniques aimed at partitioning heterogeneous data into homogeneous groups called clusters. Clustering has been successfully applied in several fields, such as medicine, biology, finance, and economics. In this paper, we introduce the notion of clustering in multifactorial data analysis problems. A case study is conducted for an occupational medicine problem with the purpose of analyzing patterns in a population of 813 individuals. To reduce the dimensionality of the data set, we base our approach on Principal Component Analysis (PCA), the statistical tool most commonly used in factorial analysis. However, problems in nature, especially in medicine, often involve heterogeneous qualitative-quantitative measurements, whereas PCA only processes quantitative ones. Moreover, qualitative data are originally unobservable quantitative responses that are usually binary-coded. Hence, we propose a new set of strategies that allow quantitative and qualitative data to be handled simultaneously. The principle of this approach is to project the qualitative variables onto the subspaces spanned by the quantitative ones. Subsequently, an optimal model is allocated to the resulting PCA-regressed subspaces.
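The general idea of projecting qualitative variables onto a quantitative subspace can be sketched as follows. This is an assumed illustration of the principle, not the paper's exact procedure: PCA is run on the quantitative block, and a binary-coded qualitative variable is then regressed onto the leading components by least squares.

```python
import numpy as np

# Sketch of the principle only: PCA on quantitative variables, then a
# least-squares projection of a binary-coded qualitative variable onto
# the subspace spanned by the leading components. Data are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                                  # quantitative block
q = (X[:, 0] + 0.1 * rng.normal(size=100) > 0).astype(float)   # binary-coded qualitative

Xc = X - X.mean(axis=0)                            # centre before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA via SVD
scores = Xc @ Vt[:2].T                             # keep 2 leading components

A = np.column_stack([np.ones(len(q)), scores])     # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, q, rcond=None)
q_hat = A @ coef                                   # projection of q onto the subspace
```

The residual q - q_hat measures how much of the qualitative variable is not captured by the retained quantitative subspace, which is the kind of quantity a PCA-regression strategy would then model.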
A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis
Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano
2015-01-01
As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is, under some mild conditions, a piecewise continuous (m − 1)-dimensional manifold in the decision space. How to utilize this regularity to design multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space described by a probability distribution, whose centroid is an (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA are also identified and discussed in this paper. PMID:25874246
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lobel, R.
TRUMP is a general finite-difference computer program for the solution of transient and steady-state heat transfer problems. It is a very general program capable of solving heat transfer problems in one, two, or three dimensions for plane, cylindrical, or spherical geometry. Because of the variety of possible geometries, the effort required to describe the geometry can be large. GIFT was written to minimize this effort for one-dimensional heat flow problems. After describing the inner and outer boundaries of a region made of a single material, along with the modes of heat transfer which thermally connect different regions, GIFT will calculate all the geometric data (BLOCK 04) and thermal network data (BLOCK 05) required by TRUMP for one-dimensional problems. The heat transfer between layers (or shells) of a material may be by conduction or radiation; also, an interface resistance between layers can be specified. Convection between layers can be accounted for by use of an effective thermal conductivity in which the convection effect is included, or by a thermal conductance coefficient. GIFT was written for the Sigma 7 computer, a small digital computer with a versatile graphic display system. This system makes it possible to input the desired data in a question-and-answer mode and to see both the input and the output displayed on a screen in front of the user at all times.
NASA Astrophysics Data System (ADS)
Lau, Chun Sing
This thesis studies two types of problems in financial derivatives pricing. The first is the free boundary problem, which can be formulated as a partial differential equation (PDE) subject to a set of free boundary conditions. Although the functional form of the free boundary condition is given explicitly, the location of the free boundary is unknown and can only be determined implicitly by imposing continuity conditions on the solution. Two specific problems are studied in detail, namely the valuation of fixed-rate mortgages and of CEV American options. The second is the multi-dimensional problem, which involves multiple correlated stochastic variables and their governing PDE. One typical problem we focus on is the valuation of basket-spread options, whose underlying asset prices are driven by correlated geometric Brownian motions (GBMs). Analytic approximate solutions are derived for each of these three problems. For each of the two free boundary problems, we propose a parametric moving boundary to approximate the unknown free boundary, so that the original problem transforms into a moving boundary problem which can be solved analytically. The governing parameter of the moving boundary is determined by imposing the first-derivative continuity condition on the solution. The analytic form of the solution allows the price and the hedging parameters to be computed very efficiently. When compared against the benchmark finite-difference method, the computational time is significantly reduced without compromising accuracy. The multi-stage scheme further allows the approximate results to converge systematically to the benchmark results as one recasts the moving boundary into a piecewise smooth continuous function. For the multi-dimensional problem, we generalize the Kirk (1995) approximate two-asset spread option formula to the case of multi-asset basket-spread options.
Since the final formula is in closed form, all the hedging parameters can also be derived in closed form. Numerical examples demonstrate that the pricing and hedging errors are in general less than 1% relative to the benchmark prices obtained by numerical integration or Monte Carlo simulation. By exploiting an explicit relationship between the option price and the underlying probability distribution, we further derive an approximate distribution function for the general basket-spread variable. It can be used to approximate the transition probability distribution of any linear combination of correlated GBMs. Finally, an implicit perturbation is applied to reduce the pricing errors by factors of up to 100. When compared against the existing methods, the basket-spread option formula coupled with the implicit perturbation turns out to be one of the most robust and accurate approximation methods.
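The starting point the thesis generalizes, Kirk's (1995) two-asset spread-option approximation, can be sketched as below. The parameter values are illustrative only; this is the standard textbook form of Kirk's formula, not the thesis's generalized basket-spread result.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Kirk's approximation for a call on the spread S1 - S2 with strike K.
# F1, F2: forward prices; sig1, sig2: volatilities; rho: correlation;
# r: risk-free rate; T: maturity in years.
def kirk_spread_call(F1, F2, K, sig1, sig2, rho, r, T):
    a = F2 / (F2 + K)                 # weight shifting K into the second asset
    sig = sqrt(sig1 ** 2 - 2 * rho * sig1 * sig2 * a + (sig2 * a) ** 2)
    d1 = (log(F1 / (F2 + K)) + 0.5 * sig ** 2 * T) / (sig * sqrt(T))
    d2 = d1 - sig * sqrt(T)
    # Black-style formula on the ratio F1 / (F2 + K)
    return exp(-r * T) * (F1 * norm_cdf(d1) - (F2 + K) * norm_cdf(d2))

price = kirk_spread_call(F1=110.0, F2=100.0, K=5.0,
                         sig1=0.30, sig2=0.25, rho=0.6, r=0.02, T=1.0)
```

The approximation treats F2 + K as a single lognormal asset, which is exact for K = 0 (an exchange option) and degrades slowly as K grows; the thesis's multi-asset extension and implicit perturbation address exactly this kind of residual error.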
NASA Astrophysics Data System (ADS)
Chen, Gui-Qiang; Wang, Ya-Guang
2008-03-01
Compressible vortex sheets are fundamental waves, along with shocks and rarefaction waves, in entropy solutions to multidimensional hyperbolic systems of conservation laws. Understanding the behavior of compressible vortex sheets is an important step towards a full understanding of fluid motions and of the behavior of entropy solutions. For the Euler equations of two-dimensional gas dynamics, the classical linearized stability analysis of compressible vortex sheets predicts stability when the Mach number M > √2 and instability when M < √2; and Artola and Majda's analysis reveals that nonlinear instability may occur if planar vortex sheets are perturbed by highly oscillatory waves, even when M > √2. For the Euler equations in three dimensions, every compressible vortex sheet is violently unstable, and this instability is the analogue of the Kelvin-Helmholtz instability for incompressible fluids. The purpose of this paper is to understand whether compressible vortex sheets in three dimensions, which are unstable in the regime of pure gas dynamics, become stable under the magnetic effect in three-dimensional magnetohydrodynamics (MHD). One of the main features is that the stability problem is equivalent to a free-boundary problem whose free boundary is a characteristic surface, which is more delicate than noncharacteristic free-boundary problems. Another feature is that the linearized problem for current-vortex sheets in MHD does not meet the uniform Kreiss-Lopatinskii condition. These features cause additional analytical difficulties and, in particular, prevent a direct use of the standard Picard iteration for the nonlinear problem. In this paper, we develop a nonlinear approach to deal with these difficulties in three-dimensional MHD.
We first carefully formulate the linearized problem for the current-vortex sheets to show rigorously that the magnetic effect makes the problem weakly stable, and we establish energy estimates, especially high-order energy estimates, in terms of the nonhomogeneous terms and variable coefficients. Then we exploit these results to develop a suitable iteration scheme of Nash-Moser-Hörmander type to deal with the loss of derivatives at the nonlinear level and establish its convergence, which leads to the existence and stability of compressible current-vortex sheets, locally in time, in three-dimensional MHD.
Decomposition and model selection for large contingency tables.
Dahinden, Corinne; Kalisch, Markus; Bühlmann, Peter
2010-04-01
Large contingency tables summarizing categorical variables arise in many areas. One example is in biology, where large numbers of biomarkers are cross-tabulated according to their discrete expression level. Interactions of the variables are of great interest and are generally studied with log-linear models. The structure of a log-linear model can be visually represented by a graph from which the conditional independence structure can then be easily read off. However, since the number of parameters in a saturated model grows exponentially in the number of variables, this generally comes with a heavy computational burden. Even if we restrict ourselves to models of lower-order interactions or other sparse structures, we are faced with the problem of a large number of cells which play the role of sample size. This is in sharp contrast to high-dimensional regression or classification procedures because, in addition to a high-dimensional parameter, we also have to deal with the analogue of a huge sample size. Furthermore, high-dimensional tables naturally feature a large number of sampling zeros which often leads to the nonexistence of the maximum likelihood estimate. We therefore present a decomposition approach, where we first divide the problem into several lower-dimensional problems and then combine these to form a global solution. Our methodology is computationally feasible for log-linear interaction models with many categorical variables each or some of them having many levels. We demonstrate the proposed method on simulated data and apply it to a bio-medical problem in cancer research.
Rescorla, Leslie; Ivanova, Masha Y; Achenbach, Thomas M; Begovac, Ivan; Chahed, Myriam; Drugli, May Britt; Emerich, Deisy Ribas; Fung, Daniel S S; Haider, Mariam; Hansson, Kjell; Hewitt, Nohelia; Jaimes, Stefanny; Larsson, Bo; Maggiolini, Alfio; Marković, Jasminka; Mitrović, Dragan; Moreira, Paulo; Oliveira, João Tiago; Olsson, Martin; Ooi, Yoon Phaik; Petot, Djaouida; Pisa, Cecilia; Pomalima, Rolando; da Rocha, Marina Monzani; Rudan, Vlasta; Sekulić, Slobodan; Shahini, Mimoza; de Mattos Silvares, Edwiges Ferreira; Szirovicza, Lajos; Valverde, José; Vera, Luis Anderssen; Villa, Maria Clara; Viola, Laura; Woo, Bernardine S C; Zhang, Eugene Yuqing
2012-12-01
To build on Achenbach, Rescorla, and Ivanova (2012) by (a) reporting new international findings for parent, teacher, and self-ratings on the Child Behavior Checklist, Youth Self-Report, and Teacher's Report Form; (b) testing the fit of syndrome models to new data from 17 societies, including previously underrepresented regions; (c) testing effects of society, gender, and age in 44 societies by integrating new and previous data; (d) testing cross-society correlations between mean item ratings; (e) describing the construction of multisociety norms; (f) illustrating clinical applications. Confirmatory factor analyses (CFAs) of parent, teacher, and self-ratings, performed separately for each society; tests of societal, gender, and age effects on dimensional syndrome scales, DSM-oriented scales, Internalizing, Externalizing, and Total Problems scales; tests of agreement between low, medium, and high ratings of problem items across societies. CFAs supported the tested syndrome models in all societies according to the primary fit index (Root Mean Square Error of Approximation [RMSEA]), but less consistently according to other indices; effect sizes were small-to-medium for societal differences in scale scores, but very small for gender, age, and interactions with society; items received similarly low, medium, or high ratings in different societies; problem scores from 44 societies fit three sets of multisociety norms. Statistically derived syndrome models fit parent, teacher, and self-ratings when tested individually in all 44 societies according to RMSEAs (but less consistently according to other indices). Small to medium differences in scale scores among societies supported the use of low-, medium-, and high-scoring norms in clinical assessment of individual children. Copyright © 2012 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.
Li, Jinyan; Fong, Simon; Wong, Raymond K; Millham, Richard; Wong, Kelvin K L
2017-06-28
Due to the high-dimensional characteristics of the datasets, we propose a new method based on the Wolf Search Algorithm (WSA) for optimising the feature selection problem. The proposed approach uses the natural strategy articulated by Charles Darwin; that is, 'It is not the strongest of the species that survives, but the most adaptable.' This means that in the evolution of a swarm, the elitists are motivated to quickly obtain more and better resources. The memory function helps the proposed method avoid repeated searches of the worst positions in order to enhance the effectiveness of the search, while the binary strategy simplifies the feature selection problem into a similar problem of function optimisation. Furthermore, the wrapper strategy couples these strengthened wolves with an extreme learning machine classifier to find a sub-dataset with a reasonable number of features that offers the maximum correctness of global classification models. The experimental results on the six public high-dimensional bioinformatics datasets tested demonstrate that the proposed method can outperform some conventional feature selection methods by up to 29% in classification accuracy, and outperform previous WSAs by up to 99.81% in computational time.
Studies of Inviscid Flux Schemes for Acoustics and Turbulence Problems
NASA Technical Reports Server (NTRS)
Morris, Chris
2013-01-01
Five different central difference schemes, based on a conservative differencing form of the Kennedy and Gruber skew-symmetric scheme, were compared with six different upwind schemes based on primitive variable reconstruction and the Roe flux. These eleven schemes were tested on a one-dimensional acoustic standing wave problem, the Taylor-Green vortex problem, and a turbulent channel flow problem. The central schemes were generally very accurate and stable, provided the grid stretching rate was kept below 10%. At near-DNS grid resolutions, the results were comparable to reference DNS calculations. At coarser grid resolutions, the need for an LES SGS model became apparent. There was a noticeable improvement moving from CD-2 to CD-4, and higher-order schemes appear to yield clear benefits on coarser grids. The UB-7 and CU-5 upwind schemes also performed very well at near-DNS grid resolutions. The UB-5 upwind scheme does not do as well, but does appear to be suitable for well-resolved DNS. The UF-2 and UB-3 upwind schemes, which have significant dissipation over a wide spectral range, appear to be poorly suited for DNS or LES.
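The accuracy gap between CD-2 and CD-4 noted above can be illustrated on a model problem. This is an assumed illustration, not the paper's solver: the standard second- and fourth-order central stencils are applied to d/dx sin(x) on a periodic grid and their maximum errors compared.

```python
import numpy as np

# Truncation error of 2nd- vs 4th-order central differences on a
# periodic grid, for the test function u(x) = sin(x).
def central_diff(u, dx, order):
    if order == 2:
        return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    # 4th order: (-u[i+2] + 8 u[i+1] - 8 u[i-1] + u[i-2]) / (12 dx)
    return (-np.roll(u, -2) + 8 * np.roll(u, -1)
            - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * dx)

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)
err2 = np.max(np.abs(central_diff(u, dx, 2) - np.cos(x)))  # O(dx**2)
err4 = np.max(np.abs(central_diff(u, dx, 4) - np.cos(x)))  # O(dx**4)
```

On this grid the fourth-order stencil is roughly two to three orders of magnitude more accurate, consistent with the qualitative CD-2 to CD-4 improvement reported for the coarser-grid cases.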
NASA Astrophysics Data System (ADS)
Prete, Antonio Del; Franchi, Rodolfo; Antermite, Fabrizio; Donatiello, Iolanda
2018-05-01
Residual stresses appear in a component as a consequence of thermo-mechanical processes (e.g., the ring rolling process), casting and heat treatments. When machining these kinds of components, distortions arise from the redistribution of the residual stresses left in the material by the foregoing process history. If the distortions are excessive, they can lead to a large number of scrap parts. Since dimensional accuracy directly affects engine efficiency, dimensional control of aerospace components is a non-trivial issue. In this paper, the problem of distortions in large thin-walled aeroengine components made of nickel superalloys has been addressed. In order to estimate the distortions of the inner diameters after internal turning operations, a 3D Finite Element Method (FEM) analysis has been developed for a real industrial test case. The entire process history has been taken into account by developing FEM models of the ring rolling process and the heat treatments. Three different ring rolling strategies have been studied, and the combination of related parameters yielding the best dimensional accuracy has been identified. Furthermore, grain size evolution and recrystallization phenomena during the manufacturing process have been numerically investigated using a semi-empirical Johnson-Mehl-Avrami-Kolmogorov (JMAK) model. The volume subtractions have been simulated by Boolean trimming: both a one-step and a multi-step analysis have been performed. The multi-step procedure made it possible to choose the material removal sequence that best reduces machining distortions.
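The JMAK model mentioned above has the standard form X(t) = 1 - exp(-k t^n) for the recrystallised volume fraction; a minimal sketch with illustrative constants (not the paper's calibrated values):

```python
import math

def jmak_fraction(t, k, n):
    """Recrystallised volume fraction X(t) = 1 - exp(-k * t**n)."""
    return 1.0 - math.exp(-k * t ** n)

# Illustrative constants only; the paper fits k and n to the superalloy data.
k, n = 0.05, 2.0
X1 = jmak_fraction(1.0, k, n)    # early in the transformation
X10 = jmak_fraction(10.0, k, n)  # late in the transformation
```

The Avrami exponent n encodes the nucleation and growth mechanism, while k lumps together the temperature-dependent rate constants.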
NASA Technical Reports Server (NTRS)
Ukanwa, A. O.; Stermole, F. J.; Golden, J. O.
1972-01-01
Natural convection effects in phase change thermal control devices were studied. A mathematical model was developed to evaluate natural convection effects in a phase change test cell undergoing solidification. Although natural convection effects are minimized in flight spacecraft, all phase change devices are ground tested. The mathematical approach to the problem was to first develop a transient two-dimensional conduction heat transfer model for the solidification of a normal paraffin of finite geometry. Next, a transient two-dimensional model was developed for the solidification of the same paraffin by a combined conduction-natural-convection heat transfer model. Throughout the study, n-hexadecane (n-C16H34) was used as the phase-change material in both the theoretical and the experimental work. The models were based on the transient two-dimensional finite difference solutions of the energy, continuity, and momentum equations.
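The conduction part of such a model reduces to an explicit finite-difference update; a minimal one-dimensional FTCS sketch with made-up parameters (the paper's model is two-dimensional and also includes the phase change and natural convection terms):

```python
def ftcs_step(T, alpha, dx, dt):
    # Forward-time centred-space update for 1-D conduction,
    # with fixed (Dirichlet) temperatures at both ends.
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme stability limit violated"
    Tn = T[:]
    for i in range(1, len(T) - 1):
        Tn[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
    return Tn

# Hot interior cooling toward cold walls (illustrative numbers only)
T = [0.0] + [100.0] * 9 + [0.0]
for _ in range(200):
    T = ftcs_step(T, alpha=1.0e-3, dx=0.01, dt=0.04)
```

The stability constraint r = alpha*dt/dx^2 <= 1/2 is what typically pushes solidification models toward implicit or combined formulations.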
Scattering of charge and spin excitations and equilibration of a one-dimensional Wigner crystal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matveev, K. A.; Andreev, A. V.; Klironomos, A. D.
2014-07-01
We study scattering of charge and spin excitations in a system of interacting electrons in one dimension. At low densities, electrons form a one-dimensional Wigner crystal. To a first approximation, the charge excitations are the phonons in the Wigner crystal, and the spin excitations are described by the Heisenberg model with nearest-neighbor exchange coupling. This model is integrable and thus incapable of describing some important phenomena, such as scattering of excitations off each other and the resulting equilibration of the system. We obtain the leading corrections to this model, including charge-spin coupling and the next-nearest-neighbor exchange in the spin subsystem. We apply the results to the problem of equilibration of the one-dimensional Wigner crystal and find that the leading contribution to the equilibration rate arises from scattering of spin excitations off each other. We discuss the implications of our results for the conductance of quantum wires at low electron densities.
Bound states of dipolar bosons in one-dimensional systems
NASA Astrophysics Data System (ADS)
Volosniev, A. G.; Armstrong, J. R.; Fedorov, D. V.; Jensen, A. S.; Valiente, M.; Zinner, N. T.
2013-04-01
We consider one-dimensional tubes containing bosonic polar molecules. The long-range dipole-dipole interactions act both within a single tube and between different tubes. We consider arbitrary values of the externally aligned dipole moments with respect to the symmetry axis of the tubes. The few-body structures in this geometry are determined as a function of polarization angles and dipole strength by using both essentially exact stochastic variational methods and the harmonic approximation. The main focus is on the three-, four- and five-body problems in two or more tubes. Our results indicate that in the weakly coupled limit the intertube interaction is similar to a zero-range term with a suitable rescaled strength. This allows us to address the corresponding many-body physics of the system by constructing a model where bound chains with one molecule in each tube are the effective degrees of freedom. This model can be mapped onto one-dimensional Hamiltonians for which exact solutions are known.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.
2016-10-21
In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity of a fixed-source, pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.
Additional extensions to the NASCAP computer code, volume 1
NASA Technical Reports Server (NTRS)
Mandell, M. J.; Katz, I.; Stannard, P. R.
1981-01-01
Extensions and revisions to a computer code that comprehensively analyzes problems of spacecraft charging (NASCAP) are documented. Using a fully three-dimensional approach, it can accurately predict spacecraft potentials under a variety of conditions. Among the extensions are a multiple electron/ion gun test-tank capability and the ability to model anisotropic and time-dependent space environments. Also documented are a greatly extended MATCHG program and the preliminary version of NASCAP/LEO. The interactive MATCHG code was developed into an extremely powerful tool for the study of material-environment interactions. NASCAP/LEO, a three-dimensional code for studying current collection under conditions of high voltages and short Debye lengths, was distributed for preliminary testing.
Relations as Rules: The Role of Attention in the Dimensional Change Card Sort Task
ERIC Educational Resources Information Center
Honomichl, Ryan D.; Chen, Zhe
2011-01-01
Preschoolers are typically unable to switch sorting rules during the Dimensional Change Card Sort task. One explanation for this phenomenon is attentional inflexibility (Kirkham, Cruess, & Diamond, 2003). In 4 experiments with 3- to 4-year-olds, we tested this hypothesis by examining the influence of dimensional salience on switching performance.…
ERIC Educational Resources Information Center
Busch, Vincent; Laninga-Wijnen, Lydia; van Yperen, Tom Albert; Schrijvers, Augustinus Jacobus Petrus; De Leeuw, Johannes Rob Josephus
2015-01-01
Research on school bullying often focuses on the directional path of bullying and/or victimization leading to psychosocial problems, while such one-dimensional views have been shown to be too simplistic. Furthermore, recent research has shown that patterns of bullying at school differ for boys and girls, which makes gender a particularly relevant…
Discontinuous Galerkin Finite Element Method for Parabolic Problems
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.
2004-01-01
In this paper, we develop a time discretization scheme and its corresponding spatial discretization scheme for the discontinuous Galerkin finite element method for one-dimensional parabolic problems, based upon the assumption of a certain weak singularity of ‖u_t(t)‖_{L₂(Ω)} = ‖u_t‖₂. Optimal convergence rates in both the time and spatial variables are obtained. A discussion of an automatic time-step control method is also included.
DOT National Transportation Integrated Search
2016-09-01
We consider the problem of subspace clustering: given points that lie on or near the union of many low-dimensional linear subspaces, recover the subspaces. To this end, one first identifies sets of points close to the same subspace and uses the sets ...
Stability analysis of Eulerian-Lagrangian methods for the one-dimensional shallow-water equations
Casulli, V.; Cheng, R.T.
1990-01-01
In this paper, stability and error analyses are discussed for some finite difference methods when applied to the one-dimensional shallow-water equations. Two finite difference formulations, which are based on a combined Eulerian-Lagrangian approach, are discussed. In the first part of this paper the results of numerical analyses for an explicit Eulerian-Lagrangian method (ELM) have shown that the method is unconditionally stable. This method, which is a generalized fixed grid method of characteristics, covers the Courant-Isaacson-Rees method as a special case. Some artificial viscosity is introduced by this scheme. However, because the method is unconditionally stable, the artificial viscosity can be brought under control either by reducing the spatial increment or by increasing the size of the time step. The second part of the paper discusses a class of semi-implicit finite difference methods for the one-dimensional shallow-water equations. This method, when the Eulerian-Lagrangian approach is used for the convective terms, is also unconditionally stable and highly accurate for small space increments or large time steps. The semi-implicit methods seem to be more computationally efficient than the explicit ELM; at each time step a single tridiagonal system of linear equations is solved. The combined explicit and implicit ELM is best used in formulating a solution strategy for solving a network of interconnected channels. The explicit ELM is used at channel junctions for each time step. The semi-implicit method is then applied to the interior points in each channel segment. Following this solution strategy, the channel network problem can be reduced to a set of independent one-dimensional open-channel flow problems. Numerical results support properties given by the stability and error analyses. © 1990.
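The single tridiagonal system solved at each semi-implicit time step can be handled with the standard Thomas algorithm; a minimal generic sketch (not the authors' code):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (a[0] unused),
    b = main diagonal, c = super-diagonal (c[-1] unused), d = right-hand side.
    Returns the solution vector; O(n) work per solve."""
    n = len(b)
    cp = [0.0] * n          # modified super-diagonal
    dp = [0.0] * n          # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):   # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Diagonally dominant system of the kind a semi-implicit step produces;
# the exact solution of this particular system is x = (1, 1, 1, 1).
a = [0.0, -1.0, -1.0, -1.0]
b = [4.0, 4.0, 4.0, 4.0]
c = [-1.0, -1.0, -1.0, 0.0]
d = [3.0, 2.0, 2.0, 3.0]
x = thomas_solve(a, b, c, d)
```

Diagonal dominance, which discretized shallow-water operators typically provide, guarantees the elimination is stable without pivoting.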
The development of truncated inviscid turbulence and the FPU-problem
NASA Astrophysics Data System (ADS)
Ooms, G.; Boersma, B. J.
As is well known, Fermi, Pasta and Ulam [1] studied the energy redistribution between the linear modes of a one-dimensional chain of particles connected via weakly nonlinear springs. To their surprise, no apparent tendency toward equipartition of energy was observed in their numerical experiments. Much more knowledge is now available about this problem (see, for instance, the recent book by Gallavotti [2] or the review by Campbell et al. [3] in the focus issue on the FPU problem in the journal Chaos).
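A minimal sketch of an FPU-type chain (the alpha model with illustrative parameters, integrated with velocity Verlet and checked only for energy conservation, the prerequisite for any study of equipartition):

```python
import math

def fpu_alpha_forces(q, alpha):
    # Force on each particle from bond potential V(r) = r**2/2 + alpha*r**3/3,
    # with fixed ends (q[0] and q[-1] are immovable walls).
    n = len(q)
    f = [0.0] * n
    for i in range(1, n - 1):
        rl = q[i] - q[i - 1]
        rr = q[i + 1] - q[i]
        f[i] = (rr + alpha * rr ** 2) - (rl + alpha * rl ** 2)
    return f

def energy(q, p, alpha):
    e = sum(pi ** 2 / 2 for pi in p)
    for i in range(len(q) - 1):
        r = q[i + 1] - q[i]
        e += r ** 2 / 2 + alpha * r ** 3 / 3
    return e

n, alpha, dt = 16, 0.25, 0.01
# Excite only the lowest normal mode, as in the original experiment
q = [0.0] + [math.sin(math.pi * i / (n - 1)) for i in range(1, n - 1)] + [0.0]
p = [0.0] * n
e0 = energy(q, p, alpha)
f = fpu_alpha_forces(q, alpha)
for _ in range(2000):                       # velocity-Verlet integration
    p = [pi + 0.5 * dt * fi for pi, fi in zip(p, f)]
    q = [qi + dt * pi for qi, pi in zip(q, p)]
    q[0] = q[-1] = 0.0                      # keep the walls fixed
    f = fpu_alpha_forces(q, alpha)
    p = [pi + 0.5 * dt * fi for pi, fi in zip(p, f)]
e1 = energy(q, p, alpha)
```

Projecting q and p onto the chain's normal modes over much longer runs is what reveals the celebrated near-recurrences instead of equipartition.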
Thermoplasticity of coupled bodies in the case of stress-dependent heat transfer
NASA Technical Reports Server (NTRS)
Kilikovskaya, O. A.
1987-01-01
The problem of the thermal stresses in coupled deformable bodies is formulated for the case where the heat-transfer coefficient at the common boundary depends on the stress-strain state of the bodies (e.g., is a function of the normal pressure at the common boundary). Several one-dimensional problems are solved in this formulation. Among these problems is the determination of the thermal stresses in an n-layer plate and in a two-layer cylinder.
Whitham modulation theory for (2 + 1)-dimensional equations of Kadomtsev–Petviashvili type
NASA Astrophysics Data System (ADS)
Ablowitz, Mark J.; Biondini, Gino; Rumanov, Igor
2018-05-01
Whitham modulation theory for certain two-dimensional evolution equations of Kadomtsev–Petviashvili (KP) type is presented. Three specific examples are considered in detail: the KP equation, the two-dimensional Benjamin–Ono (2DBO) equation and a modified KP (m2KP) equation. A unified derivation is also provided. In the case of the m2KP equation, the corresponding Whitham modulation system exhibits features different from the other two. The approach presented here does not require integrability of the original evolution equation. Indeed, while the KP equation is known to be a completely integrable equation, the 2DBO equation and the m2KP equation are not known to be integrable. In each of the cases considered, the Whitham modulation system obtained consists of five first-order quasilinear partial differential equations. The Riemann problem (i.e. the analogue of the Gurevich–Pitaevskii problem) for the one-dimensional reduction of the m2KP equation is studied. For the m2KP equation, the system of modulation equations is used to analyze the linear stability of traveling wave solutions.
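For reference, the KP equation in one standard normalization (the paper's scaling may differ) reads:

```latex
\left( u_t + 6\,u\,u_x + u_{xxx} \right)_x + 3\sigma^2 u_{yy} = 0,
\qquad \sigma^2 = \pm 1,
```

with \sigma^2 = -1 giving the KP I equation and \sigma^2 = +1 the KP II equation; setting the y-dependence to zero recovers the one-dimensional KdV reduction.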