NASA Technical Reports Server (NTRS)
Chen, Xiaoqin; Tamma, Kumar K.; Sha, Desong
1993-01-01
The present paper describes a new explicit virtual-pulse time integral methodology for nonlinear structural dynamics problems. The purpose of the paper is to provide the theoretical basis of the methodology and to demonstrate the applicability of the proposed formulations to nonlinear dynamic structures. Unlike existing numerical methods such as direct time integration or mode superposition techniques, the proposed methodology offers new perspectives and a new line of development, and possesses several unique and attractive computational characteristics. The methodology is tested and compared with the implicit Newmark method (trapezoidal rule) on nonlinear softening and hardening spring dynamic models. The numerical results indicate that the proposed explicit virtual-pulse time integral methodology is an excellent alternative for solving general nonlinear dynamic problems.
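As a concrete anchor for the kind of test problem described above, the following minimal sketch integrates a hardening spring model with an explicit central-difference scheme; it is a generic stand-in under assumed parameter values, not the paper's virtual-pulse scheme.

```python
# Explicit central-difference integration of m*u'' + k*u + k3*u^3 = 0,
# a hardening-spring test problem of the type used in the paper.
# All parameter values are illustrative.
import numpy as np

m, k, k3 = 1.0, 1.0, 0.5            # mass, linear and cubic stiffness
dt, nsteps = 1.0e-3, 20000
u = np.zeros(nsteps)
u[0] = 1.0                          # initial displacement, zero velocity
a0 = -(k*u[0] + k3*u[0]**3)/m       # initial acceleration
u[1] = u[0] + 0.5*dt**2*a0          # Taylor start-up step

for n in range(1, nsteps - 1):
    f_int = k*u[n] + k3*u[n]**3     # nonlinear internal force
    u[n+1] = 2.0*u[n] - u[n-1] - dt**2*f_int/m
```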
Towards Perfectly Absorbing Boundary Conditions for Euler Equations
NASA Technical Reports Server (NTRS)
Hayder, M. Ehtesham; Hu, Fang Q.; Hussaini, M. Yousuff
1997-01-01
In this paper, we examine the effectiveness of absorbing layers as non-reflecting computational boundaries for the Euler equations. The absorbing-layer equations are simply obtained by splitting the governing equations in the coordinate directions and introducing absorption coefficients in each split equation. This methodology is similar to that used by Berenger for the numerical solutions of Maxwell's equations. Specifically, we apply this methodology to three physical problems (shock-vortex interactions, a plane free shear flow, and an axisymmetric jet), with emphasis on acoustic wave propagation. Our numerical results indicate that the use of absorbing layers effectively minimizes numerical reflection in all three problems considered.
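To make the absorption-coefficient idea concrete, here is a minimal one-dimensional sketch, using simple advection in place of the split Euler equations; the layer location, ramp profile and all constants are assumptions for illustration.

```python
# 1-D advection u_t + c*u_x = -sigma(x)*u with an absorbing layer near the
# outflow boundary: sigma is zero in the interior and ramps up quadratically
# inside the layer, damping the pulse before it can reflect.
import numpy as np

c, L, N = 1.0, 10.0, 1000
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
dt = 0.5*dx/c                                        # CFL-limited time step
sigma = np.where(x > 8.0, 50.0*((x - 8.0)/2.0)**2, 0.0)
u = np.exp(-((x - 2.0)/0.5)**2)                      # Gaussian pulse

for _ in range(1500):
    dudx = np.zeros_like(u)
    dudx[1:] = (u[1:] - u[:-1])/dx                   # first-order upwind (c > 0)
    u = u - dt*(c*dudx + sigma*u)                    # absorption acts in the layer only
```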
Numerical Determination of Critical Conditions for Thermal Ignition
NASA Technical Reports Server (NTRS)
Luo, W.; Wake, G. C.; Hawk, C. W.; Litchford, R. J.
2008-01-01
The determination of ignition or thermal explosion in an oxidizing porous body of material, as described by a dimensionless reaction-diffusion equation of the form $\partial_t u = \nabla^2 u + \lambda e^{-1/u}$ over the bounded region $\Omega$, is critically reexamined from a modern perspective using numerical methodologies. First, the classic stationary model is revisited to establish the proper reference frame for the steady-state solution space, and it is demonstrated how the resulting nonlinear two-point boundary value problem can be reexpressed as an initial value problem for a system of first-order differential equations, which may be readily solved using standard algorithms. Then, the numerical procedure is implemented and thoroughly validated against previous computational results based on sophisticated path-following techniques. Next, the transient nonstationary model is attacked, and the full nonlinear form of the reaction-diffusion equation, including a generalized convective boundary condition, is discretized and expressed as a system of linear algebraic equations. The numerical methodology is implemented as a computer algorithm, and validation computations are carried out as a prelude to a broad-ranging evaluation of the assembly problem and identification of the watershed critical initial temperature conditions for thermal ignition. This numerical methodology is then used as the basis for studying the relationship between the shape of the critical initial temperature distribution and the corresponding spatial moments of its energy content integral, and for attempting to forge a fundamental conjecture governing this relation. Finally, the effects of dynamic boundary conditions on the classic storage problem are investigated and the groundwork is laid for the development of an approximate solution methodology based on adaptation of the standard stationary model.
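The BVP-to-IVP reduction mentioned above can be illustrated with a shooting method; the sketch below uses the classical Frank-Kamenetskii exponent e^u as a stand-in for the e^{-1/u} kinetics, with an illustrative sub-critical heat-release parameter.

```python
# Shooting method for the steady slab problem u'' = -delta*exp(u),
# u'(0) = 0 (symmetry), u(1) = 0: integrate an IVP from the centre and
# iterate on the unknown centre value u(0) until the boundary residual
# vanishes. delta = 0.5 is below the slab critical value (~0.878).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

delta = 0.5

def shoot(u0):
    rhs = lambda t, y: [y[1], -delta*np.exp(y[0])]
    sol = solve_ivp(rhs, [0.0, 1.0], [u0, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]                 # boundary residual: want u(1) = 0

u0 = brentq(shoot, 0.0, 1.0)            # centre temperature on the low branch
print(f"centre temperature u(0) = {u0:.6f}")
```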
Biomagnetic fluid flow in an aneurysm using ferrohydrodynamics principles
NASA Astrophysics Data System (ADS)
Tzirtzilakis, E. E.
2015-06-01
In this study, the fundamental problem of biomagnetic fluid flow in an aneurysmal geometry under the influence of a steady localized magnetic field is numerically investigated. The mathematical model used to formulate the problem is consistent with the principles of ferrohydrodynamics. Blood is considered to be an electrically non-conducting, homogeneous, non-isothermal Newtonian magnetic fluid. For the numerical solution of the problem, which is described by a coupled, non-linear system of Partial Differential Equations (PDEs), with appropriate boundary conditions, the stream function-vorticity formulation is adopted. The solution is obtained by applying an efficient pseudotransient numerical methodology using finite differences. This methodology is based on the application of a semi-implicit numerical technique, transformations, stretching of the grid, and construction of the boundary conditions for the vorticity. The results regarding the velocity and temperature field, skin friction, and rate of heat transfer indicate that the presence of a magnetic field considerably influences the flow field, particularly in the region of the aneurysm.
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Railkar, Sudhir B.
1988-01-01
This paper describes new and recent advances in the development of a hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer problems. The transfinite element methodology, while retaining the modeling versatility of contemporary finite element formulations, is based on application of transform techniques in conjunction with classical Galerkin schemes and is a hybrid approach. The purpose of this paper is to provide a viable hybrid computational methodology for applicability to general transient thermal analysis. Highlights and features of the methodology are described and developed via generalized formulations and applications to several test problems. The proposed transfinite element methodology successfully provides a viable computational approach and numerical test problems validate the proposed developments for conduction/convection/radiation thermal analysis.
Brain Dynamics: Methodological Issues and Applications in Psychiatric and Neurologic Diseases
NASA Astrophysics Data System (ADS)
Pezard, Laurent
The human brain is a complex dynamical system generating the EEG signal. Numerical methods developed to study complex physical dynamics have been used to characterize EEG since the mid-eighties. This endeavor raised several issues related to the specificity of EEG. Firstly, theoretical and methodological studies should address the major differences between the dynamics of the human brain and physical systems. Secondly, this approach to the EEG signal should prove to be relevant for dealing with physiological or clinical problems. A set of studies performed in our group is presented here within the context of these two problematic aspects. After a discussion of methodological drawbacks, we review numerical simulations related to the high dimension and spatial extension of brain dynamics. Experimental studies in neurologic and psychiatric disease are then presented. We conclude that, while it is now clear that brain dynamics change in relation to clinical situations, methodological problems remain largely unsolved.
NASA Astrophysics Data System (ADS)
Gupta, Mahima; Mohanty, B. K.
2017-04-01
In this paper, we have developed a methodology to derive the level of compensation numerically in multiple criteria decision-making (MCDM) problems under a fuzzy environment. The degree of compensation depends on the tranquility and anxiety levels experienced by the decision-maker while taking the decision. Higher tranquility leads to a higher realisation of the compensation, whereas an increased level of anxiety reduces the amount of compensation in the decision process. This work determines the level of tranquility (or anxiety) using the concept of fuzzy sets and its various level sets. The concepts of indexing of fuzzy numbers, risk barriers, and the tranquility level of the decision-maker are used to derive the decision-maker's risk-prone or risk-averse attitude in each criterion. The aggregation of the risk levels in each criterion gives the amount of compensation in the entire MCDM problem. Inclusion of the compensation leads us to model the MCDM problem as a binary integer programming (BIP) problem. The solution to the BIP gives the compensatory decision to the MCDM problem. The proposed methodology is illustrated through a numerical example.
Fuzzy multi objective transportation problem – evolutionary algorithm approach
NASA Astrophysics Data System (ADS)
Karthy, T.; Ganesan, K.
2018-04-01
This paper deals with the fuzzy multi-objective transportation problem. A fuzzy optimal compromise solution is obtained by using a fuzzy genetic algorithm. A numerical example is provided to illustrate the methodology.
NASA Technical Reports Server (NTRS)
Nakajima, Yukio; Padovan, Joe
1987-01-01
In a three-part series of papers, a generalized finite element methodology is formulated to handle traveling load problems involving large deformation fields in structures composed of viscoelastic media. The main thrust of this paper is to develop an overall finite element methodology and associated solution algorithms to handle the transient aspects of moving problems involving contact-impact type loading fields. Based on the methodology and algorithms formulated, several numerical experiments are considered. These include the rolling/sliding impact of tires with road obstructions.
Level-Set Methodology on Adaptive Octree Grids
NASA Astrophysics Data System (ADS)
Gibou, Frederic; Guittet, Arthur; Mirzadeh, Mohammad; Theillard, Maxime
2017-11-01
Numerical simulations of interfacial problems in fluids require a methodology capable of tracking surfaces that can undergo changes in topology and capable of imposing jump boundary conditions in a sharp manner. In this talk, we will discuss recent advances in the level-set framework, in particular one that is based on adaptive grids.
NASA Technical Reports Server (NTRS)
Padovan, J.; Adams, M.; Lam, P.; Fertis, D.; Zeid, I.
1982-01-01
Second-year efforts within a three-year study to develop and extend finite element (FE) methodology to efficiently handle the transient/steady state response of rotor-bearing-stator structure associated with gas turbine engines are outlined. The two main areas aim at (1) implanting the squeeze film damper element into a general purpose FE code for testing and evaluation; and (2) determining the numerical characteristics of the FE-generated rotor-bearing-stator simulation scheme. The governing FE field equations are set out and the solution methodology is presented. The choice of ADINA as the general-purpose FE code is explained, and the numerical operational characteristics of the direct integration approach to FE-generated rotor-bearing-stator simulations are determined, including benchmarking, comparison of explicit vs. implicit methodologies of direct integration, and demonstration problems.
Methodology of Numerical Optimization for Orbital Parameters of Binary Systems
NASA Astrophysics Data System (ADS)
Araya, I.; Curé, M.
2010-02-01
The use of a numerical maximization (or minimization) method in optimization processes allows us to obtain a great number of candidate solutions. We can therefore find a global maximum or minimum of the problem, but this is only possible if a suitable methodology is used. To obtain the global optimum values, we use the genetic algorithm PIKAIA (P. Charbonneau) and four other algorithms implemented in Mathematica. We demonstrate that orbital parameters of binary systems published in some papers, derived from radial velocity measurements, are local minima instead of global ones.
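A toy illustration of the local-versus-global issue raised here (with an assumed multimodal objective, not an actual radial-velocity fit): a local gradient search stalls in a secondary basin, while a global evolutionary search in the spirit of PIKAIA finds the deeper minimum.

```python
# Local vs. global minimisation of a multimodal objective.
import numpy as np
from scipy.optimize import minimize, differential_evolution

f = lambda x: np.sin(5.0*x[0])**2 + 0.1*(x[0] - 2.0)**2  # many local minima

local = minimize(f, x0=[0.0])                       # stalls near the start point
globl = differential_evolution(f, bounds=[(-5.0, 5.0)], seed=1)
print("local :", local.x, local.fun)
print("global:", globl.x, globl.fun)                # near x = 3*pi/5 ~ 1.88
```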
Object oriented development of engineering software using CLIPS
NASA Technical Reports Server (NTRS)
Yoon, C. John
1991-01-01
Engineering applications involve numeric complexity and manipulation of large amounts of data. Traditionally, numeric computation has been the main concern in developing engineering software. As engineering application software became larger and more complex, management of resources such as data, rather than numeric complexity, became the major software design problem. Object oriented design and implementation methodologies can improve the reliability, flexibility, and maintainability of the resulting software; however, some tasks are better solved with the traditional procedural paradigm. The C Language Integrated Production System (CLIPS), with deffunction and defgeneric constructs, supports the procedural paradigm. The natural blending of object oriented and procedural paradigms has been cited as the reason for the popularity of the C++ language. The CLIPS Object Oriented Language's (COOL) object oriented features are more versatile than C++'s. A software design methodology, based on object oriented and procedural approaches appropriate for engineering software and to be implemented in CLIPS, is outlined. A method for sensor placement for Space Station Freedom is being implemented in COOL as a sample problem.
An approach to solve replacement problems under intuitionistic fuzzy nature
NASA Astrophysics Data System (ADS)
Balaganesan, M.; Ganesan, K.
2018-04-01
Owing to the impreciseness inherent in day-to-day problems, researchers use fuzzy sets in their discussions of replacement problems. The aim of this paper is to solve replacement theory problems with triangular intuitionistic fuzzy numbers. An effective methodology based on a fuzziness index and a location index is proposed to determine the optimal solution of the replacement problem. A numerical example is presented to validate the proposed method.
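A minimal sketch of ranking by location and fuzziness indices, with assumed definitions (centroid for location, half the support width for fuzziness); the paper's exact intuitionistic formulation may differ.

```python
# Ranking triangular fuzzy costs by a location index, breaking ties with a
# fuzziness index. The index formulas below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TriangularFuzzyNumber:
    a: float  # left end of support
    b: float  # peak (membership one)
    c: float  # right end of support

    def location_index(self) -> float:
        return (self.a + self.b + self.c)/3.0   # centroid of the triangle

    def fuzziness_index(self) -> float:
        return (self.c - self.a)/2.0            # half the support width

cost_keep = TriangularFuzzyNumber(8.0, 10.0, 13.0)    # keep the old machine
cost_replace = TriangularFuzzyNumber(9.0, 9.5, 10.5)  # replace it
best = min([cost_keep, cost_replace],
           key=lambda t: (t.location_index(), t.fuzziness_index()))
print(best)
```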
NASA Astrophysics Data System (ADS)
Gimenez, Juan M.; González, Leo M.
2015-03-01
In this paper, a new generation of the particle method known as the Particle Finite Element Method (PFEM), which combines convective particle movement and a fixed mesh resolution, is applied to free surface flows. This interesting variant, previously described in the literature as PFEM-2, is able to use larger time steps when compared to other similar numerical tools, which implies shorter computational times while maintaining the accuracy of the computation. PFEM-2 has already been extended to free surface problems; the main topic of this paper is a thorough validation of this methodology for a wider range of flows. To accomplish this task, different improved versions of discontinuous and continuous enriched basis functions for the pressure field have been developed to capture the free surface dynamics without artificial diffusion or undesired numerical effects when different density ratios are involved. A collection of problems has been carefully selected such that a wide variety of Froude numbers, density ratios and dominant dissipative cases are reported, with the intention of presenting a general methodology, not restricted to a particular range of parameters, and capable of using large time-steps. The results of the different free-surface problems solved, which include the Rayleigh-Taylor instability, sloshing problems, viscous standing waves and the dam break problem, are compared to well validated numerical alternatives or experimental measurements, obtaining accurate approximations for such complex flows.
Large scale nonlinear programming for the optimization of spacecraft trajectories
NASA Astrophysics Data System (ADS)
Arrieta-Camacho, Juan Jose
Despite the availability of high fidelity mathematical models, the computation of accurate optimal spacecraft trajectories has never been an easy task. While simplified models of spacecraft motion can provide useful estimates on energy requirements, sizing, and cost, the actual launch window and maneuver scheduling must rely on more accurate representations. We propose an alternative for the computation of optimal transfers that uses an accurate representation of the spacecraft dynamics. Like other methodologies for trajectory optimization, this alternative is able to consider all major disturbances. In contrast, it can handle equality and inequality constraints explicitly throughout the trajectory; it requires neither the derivation of costate equations nor the identification of the constrained arcs. The alternative consists of two steps: (1) discretizing the dynamic model using high-order collocation at Radau points, which displays numerical advantages, and (2) solving the resulting Nonlinear Programming (NLP) problem using an interior point method, which does not suffer from the performance bottleneck associated with identifying the active set, as required by sequential quadratic programming methods; in this way the methodology exploits the availability of sound numerical methods and next-generation NLP solvers. In practice the methodology is versatile; it can be applied to a variety of aerospace problems like homing, guidance, and aircraft collision avoidance; the methodology is particularly well suited for low-thrust spacecraft trajectory optimization. Examples are presented which consider the optimization of a low-thrust orbit transfer subject to the main disturbances due to Earth's gravity field together with Lunar and Solar attraction. Another example considers the optimization of a multiple asteroid rendezvous problem. In both cases, the ability of our proposed methodology to consider non-standard objective functions and constraints is illustrated. Future research directions are identified, involving the automatic scheduling and optimization of trajectory correction maneuvers. The sensitivity information provided by the methodology is expected to be invaluable in such research pursuits. The collocation scheme and nonlinear programming algorithm presented in this work complement other existing methodologies by providing reliable and efficient numerical methods able to handle large scale, nonlinear dynamic models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ju, E-mail: jliu@ices.utexas.edu; Gomez, Hector; Evans, John A.
2013-09-01
We propose a new methodology for the numerical solution of the isothermal Navier–Stokes–Korteweg equations. Our methodology is based on a semi-discrete Galerkin method invoking functional entropy variables, a generalization of classical entropy variables, and a new time integration scheme. We show that the resulting fully discrete scheme is unconditionally stable-in-energy, second-order time-accurate, and mass-conservative. We utilize isogeometric analysis for spatial discretization and verify the aforementioned properties by adopting the method of manufactured solutions and comparing coarse mesh solutions with overkill solutions. Various problems are simulated to show the capability of the method. Our methodology provides a means of constructing unconditionally stable numerical schemes for nonlinear non-convex hyperbolic systems of conservation laws.
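The method of manufactured solutions mentioned above is easy to demonstrate on a far simpler model than Navier-Stokes-Korteweg; the sketch below manufactures an exact solution for a 1-D heat equation, derives the forcing it implies, and checks second-order spatial convergence.

```python
# Method of manufactured solutions for u_t = u_xx + f on (0,1):
# choose u = exp(-t)*sin(pi*x), so f = (pi**2 - 1)*exp(-t)*sin(pi*x),
# then confirm the observed convergence order on two grids.
import numpy as np

def solve_heat(N, T=0.1):
    x = np.linspace(0.0, 1.0, N + 1)
    dx = x[1] - x[0]
    dt = 0.25*dx**2                       # explicit-Euler stability bound
    u, t = np.sin(np.pi*x), 0.0
    while t < T:
        dt_ = min(dt, T - t)
        f = (np.pi**2 - 1.0)*np.exp(-t)*np.sin(np.pi*x)   # manufactured source
        u[1:-1] += dt_*((u[2:] - 2.0*u[1:-1] + u[:-2])/dx**2 + f[1:-1])
        t += dt_
    return np.max(np.abs(u - np.exp(-T)*np.sin(np.pi*x))) # error vs. exact

e1, e2 = solve_heat(32), solve_heat(64)
print(f"observed order of accuracy ~ {np.log2(e1/e2):.2f}")  # expect ~2
```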
On the generalized VIP time integral methodology for transient thermal problems
NASA Technical Reports Server (NTRS)
Mei, Youping; Chen, Xiaoqin; Tamma, Kumar K.; Sha, Desong
1993-01-01
The paper describes the development and applicability of a generalized VIrtual-Pulse (VIP) time integral method of computation for thermal problems. Unlike past approaches for general heat transfer computations, and motivated by the advent of high-speed computing technology and the importance of parallel computation for the efficient use of computing environments, the developments described in this paper address the need for explicit computational procedures with improved accuracy and stability characteristics. As a consequence, a new and effective VIP methodology is described which inherits these improved characteristics. Numerical illustrative examples are provided to demonstrate the developments and validate the results obtained for thermal problems.
A robust optimization methodology for preliminary aircraft design
NASA Astrophysics Data System (ADS)
Prigent, S.; Maréchal, P.; Rondepierre, A.; Druot, T.; Belleville, M.
2016-05-01
This article focuses on a robust optimization of an aircraft preliminary design under operational constraints. According to engineers' know-how, the aircraft preliminary design problem can be modelled as an uncertain optimization problem whose objective (the cost or the fuel consumption) is almost affine, and whose constraints are convex. It is shown that this uncertain optimization problem can be approximated in a conservative manner by an uncertain linear optimization program, which enables the use of the techniques of robust linear programming of Ben-Tal, El Ghaoui, and Nemirovski [Robust Optimization, Princeton University Press, 2009]. This methodology is then applied to two real cases of aircraft design and numerical results are presented.
Bayesian design of decision rules for failure detection
NASA Technical Reports Server (NTRS)
Chow, E. Y.; Willsky, A. S.
1984-01-01
The formulation of the decision making process of a failure detection algorithm as a Bayes sequential decision problem provides a simple conceptualization of the decision rule design problem. As the optimal Bayes rule is not computable, a methodology that is based on the Bayesian approach and aimed at a reduced computational requirement is developed for designing suboptimal rules. A numerical algorithm is constructed to facilitate the design and performance evaluation of these suboptimal rules. The result of applying this design methodology to an example shows that this approach is potentially a useful one.
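One classical suboptimal sequential rule of the kind alluded to is Wald's sequential probability ratio test; the sketch below applies it to Gaussian residuals with illustrative thresholds and distributions, not the paper's actual design.

```python
# Sequential probability ratio test on the mean of Gaussian residuals:
# accumulate the log-likelihood ratio and declare failure / no-failure when
# it crosses an upper / lower threshold. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
mu0, mu1, sigma = 0.0, 1.0, 1.0          # residual mean: healthy vs. failed
A, B = np.log(99.0), np.log(1.0/99.0)    # thresholds from target error rates

llr = 0.0
for k in range(1000):
    r = rng.normal(mu1, sigma)           # simulated residual (failure present)
    llr += (mu1 - mu0)*(r - 0.5*(mu0 + mu1))/sigma**2   # Gaussian LLR increment
    if llr >= A:
        print(f"failure declared after {k + 1} samples"); break
    if llr <= B:
        print(f"no failure declared after {k + 1} samples"); break
```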
Efficient Computation of Info-Gap Robustness for Finite Element Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.
2012-07-05
A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems, using the adjoint methodology, are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.
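The adjoint idea for Ax = b models can be sketched in a few lines: the sensitivity of a scalar output to a parameter entering A costs one extra linear solve with the transpose, independently of the number of parameters. The matrices below are toys, not the report's finite element systems.

```python
# Adjoint sensitivity for J = g'x with A(p) x = b:
# dJ/dp = -lambda' (dA/dp) x, where A' lambda = g is the adjoint solve.
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = np.eye(n) + 0.1*rng.standard_normal((n, n))
b = rng.standard_normal(n)
g = rng.standard_normal(n)
dA_dp = 0.1*np.eye(n)                    # assumed parameter dependence of A

x = np.linalg.solve(A, b)                # forward solve
lam = np.linalg.solve(A.T, g)            # single adjoint solve
dJ_dp = -lam @ (dA_dp @ x)

eps = 1e-6                               # finite-difference check
x_eps = np.linalg.solve(A + eps*dA_dp, b)
print(dJ_dp, (g @ x_eps - g @ x)/eps)    # the two numbers should agree
```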
Aircraft optimization by a system approach: Achievements and trends
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1992-01-01
Recently emerging methodology for optimal design of aircraft treated as a system of interacting physical phenomena and parts is examined. The methodology is found to coalesce into methods for hierarchic, non-hierarchic, and hybrid systems all dependent on sensitivity analysis. A separate category of methods has also evolved independent of sensitivity analysis, hence suitable for discrete problems. References and numerical applications are cited. Massively parallel computer processing is seen as enabling technology for practical implementation of the methodology.
Reliability based design optimization: Formulations and methodologies
NASA Astrophysics Data System (ADS)
Agarwal, Harish
Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.
Global Artificial Boundary Conditions for Computation of External Flow Problems with Propulsive Jets
NASA Technical Reports Server (NTRS)
Tsynkov, Semyon; Abarbanel, Saul; Nordstrom, Jan; Ryabenkii, Viktor; Vatsa, Veer
1998-01-01
We propose new global artificial boundary conditions (ABC's) for computation of flows with propulsive jets. The algorithm is based on application of the difference potentials method (DPM). Previously, similar boundary conditions have been implemented for calculation of external compressible viscous flows around finite bodies. The proposed modification substantially extends the applicability range of the DPM-based algorithm. In the paper, we present the general formulation of the problem, describe our numerical methodology, and discuss the corresponding computational results. The particular configuration that we analyze is a slender three-dimensional body with boat-tail geometry and supersonic jet exhaust in a subsonic external flow under zero angle of attack. Similarly to the results obtained earlier for the flows around airfoils and wings, current results for the jet flow case corroborate the superiority of the DPM-based ABC's over standard local methodologies from the standpoints of accuracy, overall numerical performance, and robustness.
Cooperative vehicle routing problem: an opportunity for cost saving
NASA Astrophysics Data System (ADS)
Zibaei, Sedighe; Hafezalkotob, Ashkan; Ghashami, Seyed Sajad
2016-09-01
In this paper, a novel methodology is proposed to solve a cooperative multi-depot vehicle routing problem (VRP). We establish a mathematical model for the multi-owner VRP in which each owner (i.e. player) manages single or multiple depots. The basic idea consists of offering an option whereby owners cooperatively manage the VRP to save costs. We present cooperative game theory techniques for allocating the cost savings obtained from various coalitions of owners. The methodology is illustrated with a numerical example in which different coalitions of the players are evaluated along with the results of cooperation and cost saving allocation methods.
Solution methods for one-dimensional viscoelastic problems
NASA Technical Reports Server (NTRS)
Stubstad, John M.; Simitses, George J.
1987-01-01
A recently developed differential methodology for the solution of one-dimensional nonlinear viscoelastic problems is presented. Using the example of an eccentrically loaded cantilever beam-column, the results from the differential formulation are compared to results generated using a previously published integral solution technique. It is shown that the results obtained from these distinct methodologies exhibit a surprisingly high degree of correlation with one another. A discussion of the various factors affecting the numerical accuracy and rate of convergence of these two procedures is also included. Finally, the influence of some 'higher order' effects, such as straining along the centroidal axis, is discussed.
Using soft systems methodology to develop a simulation of out-patient services.
Lehaney, B; Paul, R J
1994-10-01
Discrete event simulation is an approach to modelling a system in the form of a set of mathematical equations and logical relationships, usually used for complex problems that are difficult to address with analytical or numerical methods. Managing out-patient services is such a problem. However, simulation is not in itself a systemic approach, in that it provides no methodology by which system boundaries and system activities may be identified. The investigation considers the use of soft systems methodology as an aid to drawing system boundaries and identifying system activities, for the purpose of simulating the outpatients' department at a local hospital. The long term aims are to examine the effects that the participative nature of soft systems methodology has on the acceptability of the simulation model, and to provide analysts and managers with a process that may assist in planning strategies for health care.
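A minimal discrete-event sketch of an out-patient clinic is shown below, with assumed exponential arrival and consultation times; in the study itself, the activities and boundaries would come from the soft systems analysis rather than from these toy distributions.

```python
# Single-doctor out-patient clinic as a discrete-event simulation:
# an event heap holds (time, kind) pairs; queued patients wait for the
# doctor to become free. Distributions and rates are illustrative.
import heapq, random

random.seed(1)
t, queue, busy = 0.0, [], False
events = [(random.expovariate(1/10.0), "arrival")]   # mean 10 min between arrivals
waits = []

while events and t < 8*60:                           # one clinic day, minutes
    t, kind = heapq.heappop(events)
    if kind == "arrival":
        heapq.heappush(events, (t + random.expovariate(1/10.0), "arrival"))
        if busy:
            queue.append(t)                          # patient joins the queue
        else:
            busy = True
            heapq.heappush(events, (t + random.expovariate(1/8.0), "departure"))
    else:                                            # consultation finished
        if queue:
            waits.append(t - queue.pop(0))           # next patient's waiting time
            heapq.heappush(events, (t + random.expovariate(1/8.0), "departure"))
        else:
            busy = False

if waits:
    print(f"{len(waits)} patients queued, mean wait {sum(waits)/len(waits):.1f} min")
```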
Force-controlled absorption in a fully-nonlinear numerical wave tank
NASA Astrophysics Data System (ADS)
Spinneken, Johannes; Christou, Marios; Swan, Chris
2014-09-01
An active control methodology for the absorption of water waves in a numerical wave tank is introduced. This methodology is based upon a force-feedback technique which has previously been shown to be very effective in physical wave tanks. Unlike other methods, a priori knowledge of the wave conditions in the tank is not required; the absorption controller is designed to automatically respond to a wide range of wave conditions. In comparison to numerical sponge layers, effective wave absorption is achieved on the boundary, thereby minimising the spatial extent of the numerical wave tank. In contrast to the imposition of radiation conditions, the scheme is inherently capable of absorbing irregular waves. Most importantly, simultaneous generation and absorption can be achieved. This is an important advance when considering inclusion of reflective bodies within the numerical wave tank. In designing the absorption controller, an infinite impulse response filter is adopted, thereby eliminating the problem of non-causality in the controller optimisation. Two alternative controllers are considered, both implemented in a fully-nonlinear wave tank based on a multiple-flux boundary element scheme. To simplify the problem under consideration, the present analysis is limited to water waves propagating in a two-dimensional domain. The paper presents an extensive numerical validation which demonstrates the success of the method for a wide range of wave conditions including regular, focused and random waves. The numerical investigation also highlights some of the limitations of the method, particularly in simultaneously generating and absorbing large amplitude or highly-nonlinear waves. The findings of the present numerical study are directly applicable to related fields where optimum absorption is sought; these include physical wavemaking, wave power absorption and a wide range of numerical wave tank schemes.
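The controller structure described above can be indicated schematically: a low-order causal IIR filter maps the measured force record to a paddle-velocity command. The filter below is a generic second-order design with placeholder coefficients, not the paper's optimised controller.

```python
# Causal IIR filtering of a simulated wave-force signal into a velocity
# command; lfilter uses past samples only, matching the causality
# requirement discussed in the abstract. Coefficients are placeholders.
import numpy as np
from scipy.signal import butter, lfilter

fs = 100.0                            # sampling rate in Hz (assumed)
b, a = butter(2, 5.0/(fs/2.0))        # 2nd-order low-pass as the IIR stand-in

t = np.arange(0.0, 10.0, 1.0/fs)
force = np.sin(2*np.pi*0.8*t)         # simulated hydrodynamic force on the paddle
velocity_cmd = lfilter(b, a, force)   # no future samples are used
```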
Vilas, Carlos; Balsa-Canto, Eva; García, Maria-Sonia G; Banga, Julio R; Alonso, Antonio A
2012-07-02
Systems biology allows the analysis of biological systems behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples would include the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern formation systems, etc. Such design problems can be mathematically formulated as dynamic optimization problems, which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usual nonlinear and large scale nature of the mathematical models related to this class of systems and the presence of constraints on the optimization problems impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced order model methodology is proposed. The capabilities of this strategy are illustrated by considering the solution of two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. In the process of chemotaxis the objective was to efficiently compute the time-varying optimal concentration of chemoattractant in one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved and it illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the efficient dynamic optimization of generic distributed biological systems.
NASA Technical Reports Server (NTRS)
Anderson, W. J.
1980-01-01
The considered investigations deal with some of the more important present day and future bearing requirements, and design methodologies available for coping with them. Solutions to many forthcoming bearing problems lie in the utilization of the most advanced materials, design methods, and lubrication techniques. Attention is given to materials for rolling element bearings, numerical analysis techniques and design methodology for rolling element bearing load support systems, lubrication of rolling element bearings, journal bearing design for high speed turbomachinery, design and energy losses in the case of turbulent flow bearings, and fluid film bearing response to dynamic loading.
Normalization of hydrocarbon emissions in Germany
NASA Astrophysics Data System (ADS)
Levitin, R. E.
2018-05-01
In connection with the integration of the Russian Federation into the European space, many technical regulations and methodologies are being revised. This work deals with German legislation in the field of determining hydrocarbon emissions and with the methodology for determining emissions of oil products from vertical steel tanks. In German law, the Emission Protection Act establishes only basic requirements. Technical details of practical importance are mainly regulated in numerous Orders on the Procedure for the Implementation of the Law (German abbreviation: BImSchV). Documents referred to by the Technical Manual on the Maintenance of Clean Air stand one step below on the hierarchical ladder of legislative and regulatory documentation. This set of documents is represented by numerous DIN standards and VDI guidelines. The article considers the methodology from the guidance document VDI 3479. The shortcomings and problems of applying the given method in Russia are shown.
Fractional-order TV-L2 model for image denoising
NASA Astrophysics Data System (ADS)
Chen, Dali; Sun, Shenshen; Zhang, Congrong; Chen, YangQuan; Xue, Dingyu
2013-10-01
This paper proposes a new fractional order total variation (TV) denoising method, which provides a much more elegant and effective way of treating problems of algorithm implementation, the ill-posed inverse problem, regularization parameter selection, and the blocky effect. Two fractional order TV-L2 models are constructed for image denoising. The majorization-minimization (MM) algorithm is used to decompose these two complex fractional TV optimization problems into a set of linear optimization problems, which can be solved by the conjugate gradient algorithm. The final adaptive numerical procedure is given. Finally, we report experimental results which show that the proposed methodology avoids the blocky effect and achieves state-of-the-art performance. In addition, two medical image processing experiments are presented to demonstrate the validity of the proposed methodology.
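The MM decomposition described above can be sketched in one dimension, with a first-order difference operator standing in for the fractional-order one: each outer iteration majorizes the TV term by a quadratic, and the resulting linear system is solved by conjugate gradients.

```python
# MM iteration for min_x 0.5*||x - y||^2 + lam*||D x||_1 (1-D TV-L2):
# the weights w = 1/|Dx| define the quadratic majorizer, and each inner
# problem (I + lam*D'WD) x = y is solved with conjugate gradients.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n, lam = 200, 2.0
clean = np.repeat([0.0, 1.0, 0.3], [70, 60, 70])     # piecewise-constant signal
y = clean + 0.1*rng.standard_normal(n)

D = sparse.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
x = y.copy()
for _ in range(30):                                  # MM outer iterations
    w = 1.0/np.maximum(np.abs(D @ x), 1e-8)          # majorizer weights
    A = sparse.eye(n) + lam*(D.T @ sparse.diags(w) @ D)
    x, _ = cg(A, y, x0=x)                            # inner CG solve
```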
Recent advances in computational-analytical integral transforms for convection-diffusion problems
NASA Astrophysics Data System (ADS)
Cotta, R. M.; Naveira-Cotta, C. P.; Knupp, D. C.; Zotin, J. L. Z.; Pontes, P. C.; Almeida, A. P.
2017-10-01
A unifying overview of the Generalized Integral Transform Technique (GITT) as a computational-analytical approach for solving convection-diffusion problems is presented. This work is aimed at bringing together some of the most recent developments on both accuracy and convergence improvements of this well-established hybrid numerical-analytical methodology for partial differential equations. Special emphasis is given to novel algorithm implementations, all directly connected to enhancing the eigenfunction expansion basis, such as a single domain reformulation strategy for handling complex geometries, an integral balance scheme for dealing with multiscale problems, the adoption of convective eigenvalue problems in formulations with significant convection effects, and the direct integral transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Then, selected examples are presented that illustrate the improvement achieved in each class of extension, in terms of convergence acceleration and accuracy gain, which are related to conjugated heat transfer in complex or multiscale microchannel-substrate geometries, the multidimensional Burgers equation model, and diffusive metal extraction through polymeric hollow fiber membranes. Numerical results are reported for each application and, where appropriate, critically compared against the traditional GITT scheme without convergence enhancement schemes and against commercial or dedicated purely numerical approaches.
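In its simplest classical setting, the integral-transform idea reduces to an eigenfunction expansion; the sketch below applies it to a 1-D heat equation with homogeneous Dirichlet ends, which is the textbook special case of the GITT machinery.

```python
# Classical integral transform for u_t = u_xx on (0, pi), u(0) = u(pi) = 0:
# project the initial field onto sine eigenfunctions, evolve each transformed
# coefficient analytically, and invert the transform by summing the series.
import numpy as np

L, nmodes = np.pi, 40
x = np.linspace(0.0, L, 401)
dx = x[1] - x[0]
u0 = x*(L - x)                                   # initial temperature field

k = np.arange(1, nmodes + 1)
phi = np.sqrt(2.0/L)*np.sin(np.outer(k, x))      # normalised eigenfunctions
ubar0 = (phi*u0).sum(axis=1)*dx                  # transform (simple quadrature)

def u(t):
    # inverse transform: mode k decays like exp(-k^2 t)
    return (ubar0*np.exp(-(k**2)*t)) @ phi

profile = u(0.1)                                 # temperature at t = 0.1
```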
Interpretation methodology and analysis of in-flight lightning data
NASA Technical Reports Server (NTRS)
Rudolph, T.; Perala, R. A.
1982-01-01
A methodology is presented whereby electromagnetic measurements of inflight lightning stroke data can be understood and extended to other aircraft. Recent measurements made on the NASA F106B aircraft indicate that sophisticated numerical techniques and new developments in corona modeling are required to fully understand the data. Thus the problem is nontrivial and successful interpretation can lead to a significant understanding of the lightning/aircraft interaction event. This is of particular importance because of the problem of lightning induced transient upset of new technology low level microcircuitry which is being used in increasing quantities in modern and future avionics. Inflight lightning data is analyzed and lightning environments incident upon the F106B are determined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilmanov, Anvar, E-mail: agilmano@umn.edu; Le, Trung Bao, E-mail: lebao002@umn.edu; Sotiropoulos, Fotis, E-mail: fotis@umn.edu
We present a new numerical methodology for simulating fluid–structure interaction (FSI) problems involving thin flexible bodies in an incompressible fluid. The FSI algorithm uses the Dirichlet–Neumann partitioning technique. The curvilinear immersed boundary method (CURVIB) is coupled with a rotation-free finite element (FE) model for thin shells, enabling the efficient simulation of FSI problems with arbitrarily large deformation. Turbulent flow problems are handled using large-eddy simulation with the dynamic Smagorinsky model in conjunction with a wall model to reconstruct boundary conditions near immersed boundaries. The CURVIB and FE solvers are coupled together on the flexible solid–fluid interfaces, where the structural nodal positions, displacements, velocities and loads are calculated and exchanged between the two solvers. Loose and strong coupling FSI schemes are employed, enhanced by the Aitken acceleration technique to ensure robust coupling and fast convergence, especially for low mass ratio problems. The coupled CURVIB-FE-FSI method is validated by applying it to simulate two FSI problems involving thin flexible structures: 1) vortex-induced vibrations of a cantilever mounted in the wake of a square cylinder at different mass ratios and at low Reynolds number; and 2) the more challenging high Reynolds number problem involving the oscillation of an inverted elastic flag. For both cases the computed results are in excellent agreement with previous numerical simulations and/or experimental measurements. Grid convergence studies are carried out for both the cantilever and inverted flag problems and demonstrate the convergence of the method. Finally, the capability of the new methodology in simulations of complex cardiovascular flows is demonstrated by applying it to simulate the FSI of a tri-leaflet prosthetic heart valve in an anatomic aorta under physiologic pulsatile conditions.
ERIC Educational Resources Information Center
Barnaud, Cecile; Promburom, Tanya; Trebuil, Guy; Bousquet, Francois
2007-01-01
The decentralization of natural resource management provides an opportunity for communities to increase their participation in related decision making. Research should propose adapted methodologies enabling the numerous stakeholders of these complex socioecological settings to define their problems and identify agreed-on solutions. This article…
Academic Ranking of World Universities by Broad Subject Fields
ERIC Educational Resources Information Center
Cheng, Ying; Liu, Nian Cai
2007-01-01
Upon numerous requests to provide ranking of world universities by broad subject fields/schools/colleges and by subject fields/programs/departments, the authors present the ranking methodologies and problems that arose from the research by the Institute of Higher Education, Shanghai Jiao Tong University on the Academic Ranking of World…
Numerical solution of the general coupled nonlinear Schrödinger equations on unbounded domains.
Li, Hongwei; Guo, Yue
2017-12-01
The numerical solution of the general coupled nonlinear Schrödinger equations on unbounded domains is considered by applying the artificial boundary method in this paper. In order to design the local absorbing boundary conditions for the coupled nonlinear Schrödinger equations, we generalize the unified approach previously proposed [J. Zhang et al., Phys. Rev. E 78, 026709 (2008)]. Based on the methodology underlying the unified approach, the original problem is split into two parts, linear and nonlinear terms; we then derive a one-way operator to approximate the linear term so as to make the wave out-going, and finally we combine the one-way operator with the nonlinear term to derive the local absorbing boundary conditions. We then reduce the original problem to an initial boundary value problem on the bounded domain, which can be solved by the finite difference method. The stability of the reduced problem is also analyzed by introducing some auxiliary variables. Ample numerical examples are presented to verify the accuracy and effectiveness of our proposed method.
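The linear/nonlinear splitting that underlies the unified approach is the same idea used in split-step integrators; the sketch below applies Strang splitting to a single cubic Schrödinger equation on a periodic box, which sidesteps the boundary question entirely and is only meant to illustrate the splitting step.

```python
# Strang split-step for i*u_t + u_xx + |u|^2 u = 0 on a periodic domain:
# the nonlinear factor is integrated exactly (|u| is conserved by it),
# the linear factor is integrated exactly in Fourier space.
import numpy as np

N, L, dt = 512, 40.0, 1.0e-3
x = np.linspace(-L/2, L/2, N, endpoint=False)
kx = 2.0*np.pi*np.fft.fftfreq(N, d=L/N)
u = np.exp(-x**2)*np.exp(2j*x)                   # moving Gaussian packet

for _ in range(2000):
    u = u*np.exp(1j*np.abs(u)**2*dt/2)           # half nonlinear step
    u = np.fft.ifft(np.exp(-1j*kx**2*dt)*np.fft.fft(u))  # full linear step
    u = u*np.exp(1j*np.abs(u)**2*dt/2)           # half nonlinear step
```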
Counterflow diffusion flames: effects of thermal expansion and non-unity Lewis numbers
NASA Astrophysics Data System (ADS)
Koundinyan, Sushilkumar P.; Matalon, Moshe; Stewart, D. Scott
2018-05-01
In this work we re-examine the counterflow diffusion flame problem focusing in particular on the flame-flow interactions due to thermal expansion and its influence on various flame properties such as flame location, flame temperature, reactant leakage and extinction conditions. The analysis follows two different procedures: an asymptotic approximation for large activation energy chemical reactions, and a direct numerical approach. The asymptotic treatment follows the general theory of Cheatham and Matalon, which consists of a free-boundary problem with jump conditions across the surface representing the reaction sheet, and is well suited for variable-density flows and for mixtures with non-unity and distinct Lewis numbers for the fuel and oxidiser. Due to density variations, the species and energy transport equations are coupled to the Navier-Stokes equations and the problem does not possess an analytical solution. We thus propose and implement a methodology for solving the free-boundary problem numerically. Results based on the asymptotic approximation are then verified against those obtained from the 'exact' numerical integration of the governing equations, comparing predictions of the various flame properties.
Regularization Reconstruction Method for Imaging Problems in Electrical Capacitance Tomography
NASA Astrophysics Data System (ADS)
Chu, Pan; Lei, Jing
2017-11-01
Electrical capacitance tomography (ECT) is deemed to be a powerful visualization measurement technique for parametric measurement in multiphase flow systems. The inversion task in ECT is an ill-posed inverse problem, and seeking an efficient numerical method to improve the precision of the reconstructed images is important for practical measurements. By the introduction of the Tikhonov regularization (TR) methodology, in this paper a loss function that emphasizes the robustness of the estimation and the low rank property of the imaging targets is put forward to convert the solution of the inverse problem in the ECT reconstruction task into a minimization problem. Inspired by the split Bregman (SB) algorithm, an iteration scheme is developed for solving the proposed loss function. Numerical experiment results validate that the proposed inversion method not only reconstructs the fine structures of the imaging targets, but also improves robustness.
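The role of the Tikhonov term is easy to demonstrate on a generic ill-conditioned linear inverse problem (the model below is a toy, not an actual ECT sensitivity matrix): the regularised normal equations trade data fit against solution norm.

```python
# Tikhonov regularisation for y = A x + noise with an ill-conditioned A:
# x_tik = (A'A + lam*I)^{-1} A'y. The matrix and noise level are toys.
import numpy as np

rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 40), 12, increasing=True)
x_true = rng.standard_normal(12)
y = A @ x_true + 1e-3*rng.standard_normal(40)

lam = 1e-6                                        # regularisation parameter
x_naive = np.linalg.lstsq(A, y, rcond=None)[0]
x_tik = np.linalg.solve(A.T @ A + lam*np.eye(12), A.T @ y)
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_tik - x_true))
```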
Extension of the firefly algorithm and preference rules for solving MINLP problems
NASA Astrophysics Data System (ADS)
Costa, M. Fernanda P.; Francisco, Rogério B.; Rocha, Ana Maria A. C.; Fernandes, Edite M. G. P.
2017-07-01
An extension of the firefly algorithm (FA) for solving mixed-integer nonlinear programming (MINLP) problems is presented. Although penalty functions are nowadays frequently used to handle integrality conditions and inequality and equality constraints, this paper proposes the implementation within the FA of a simple rounding-based heuristic and four preference rules to find and converge to MINLP feasible solutions. Preliminary numerical experiments are carried out to validate the proposed methodology.
Towards lexicographic multi-objective linear programming using grossone methodology
NASA Astrophysics Data System (ADS)
Cococcioni, Marco; Pappalardo, Massimo; Sergeyev, Yaroslav D.
2016-10-01
Lexicographic Multi-Objective Linear Programming (LMOLP) problems can be solved in two ways: preemptive and nonpreemptive. The preemptive approach requires the solution of a series of LP problems, with changing constraints (each time the next objective is added, a new constraint appears). The nonpreemptive approach is based on a scalarization of the multiple objectives into a single-objective linear function by a weighted combination of the given objectives. It requires the specification of a set of weights, which is not straightforward and can be time consuming. In this work we present both mathematical and software ingredients necessary to solve LMOLP problems using a recently introduced computational methodology (allowing one to work numerically with infinities and infinitesimals) based on the concept of grossone. The ultimate goal of such an attempt is an implementation of a simplex-like algorithm, able to solve the original LMOLP problem by solving only one single-objective problem and without the need to specify finite weights. The expected advantages are therefore obvious.
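For contrast with the grossone-based route, the preemptive baseline it aims to replace can be sketched directly: solve a sequence of LPs, freezing each optimised objective as an equality constraint before passing to the next priority level. Data below are toys.

```python
# Preemptive lexicographic LP: maximise x1 first, then x2, subject to
# x1 + x2 <= 10 and nonnegativity. Each stage pins the previous optimum.
import numpy as np
from scipy.optimize import linprog

A_ub, b_ub = np.array([[1.0, 1.0]]), np.array([10.0])
bounds = [(0, None), (0, None)]
objectives = [np.array([-1.0, 0.0]),      # priority 1: maximise x1
              np.array([0.0, -1.0])]      # priority 2: maximise x2

A_eq, b_eq = None, None
for c in objectives:
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    row = c.reshape(1, -1)                # freeze this objective's optimum
    A_eq = row if A_eq is None else np.vstack([A_eq, row])
    b_eq = np.array([res.fun]) if b_eq is None else np.append(b_eq, res.fun)

print(res.x)                              # lexicographic optimum: [10, 0]
```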
An efficient direct solver for rarefied gas flows with arbitrary statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diaz, Manuel A., E-mail: f99543083@ntu.edu.tw; Yang, Jaw-Yen, E-mail: yangjy@iam.ntu.edu.tw; Center of Advanced Study in Theoretical Science, National Taiwan University, Taipei 10167, Taiwan
2016-01-15
A new numerical methodology associated with a unified treatment is presented to solve the Boltzmann–BGK equation of gas dynamics for the classical and quantum gases described by the Bose–Einstein and Fermi–Dirac statistics. Utilizing a class of globally-stiffly-accurate implicit–explicit Runge–Kutta scheme for the temporal evolution, associated with the discrete ordinate method for the quadratures in the momentum space and the weighted essentially non-oscillatory method for the spatial discretization, the proposed scheme is asymptotic-preserving and imposes no non-linear solver or requires the knowledge of fugacity and temperature to capture the flow structures in the hydrodynamic (Euler) limit. The proposed treatment overcomes the limitations found in the work by Yang and Muljadi (2011) [33] due to the non-linear nature of quantum relations, and can be applied in studying the dynamics of a gas with internal degrees of freedom with correct values of the ratio of specific heat for the flow regimes for all Knudsen numbers and energy wave lengths. The present methodology is numerically validated with the unified treatment by the one-dimensional shock tube problem and the two-dimensional Riemann problems for gases of arbitrary statistics. Descriptions of ideal quantum gases including rotational degrees of freedom have been successfully achieved under the proposed methodology.
A multi-resolution approach for optimal mass transport
NASA Astrophysics Data System (ADS)
Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen
2007-09-01
Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.
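In one dimension the optimal L2 transport map has a closed form (the monotone rearrangement), which makes a useful sanity check for any descent scheme; the sketch below pairs sorted samples of two toy distributions.

```python
# 1-D optimal mass transport between empirical samples: the optimal map
# pairs the i-th order statistic of the source with the i-th order
# statistic of the target (monotone rearrangement).
import numpy as np

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, 1000)
target = rng.exponential(1.0, 1000)

s_idx = np.argsort(source)
mapped = np.empty_like(source)
mapped[s_idx] = np.sort(target)          # monotone pairing of order statistics
cost = np.mean((source - mapped)**2)     # estimated L2 transport cost
```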
A numerical solution of Duffing's equations including the prediction of jump phenomena
NASA Technical Reports Server (NTRS)
Moyer, E. T., Jr.; Ghasghai-Abdi, E.
1987-01-01
Numerical methodology for the solution of Duffing's differential equation is presented. Algorithms for the prediction of multiple equilibrium solutions and jump phenomena are developed. In addition, a filtering algorithm for producing steady state solutions is presented. The problem of a rigidly clamped circular plate subjected to cosinusoidal pressure loading is solved using the developed algorithms (the plate is assumed to be in the geometrically nonlinear range). The results accurately predict regions of solution multiplicity and jump phenomena.
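The jump phenomenon can be reproduced with a simple frequency sweep of the forced Duffing equation; the sketch below carries the state along an upward sweep so that the response stays on one branch until it jumps (all parameter values are illustrative).

```python
# Upward frequency sweep of u'' + 2*zeta*u' + u + beta*u^3 = F*cos(w*t):
# the steady-state amplitude follows the upper branch until it drops
# abruptly, the classic Duffing jump.
import numpy as np
from scipy.integrate import solve_ivp

zeta, beta, F = 0.05, 1.0, 0.3

def steady_amplitude(w, y0):
    rhs = lambda t, y: [y[1],
                        -2*zeta*y[1] - y[0] - beta*y[0]**3 + F*np.cos(w*t)]
    T = 2*np.pi/w
    sol = solve_ivp(rhs, [0.0, 100*T], y0, max_step=T/40)
    tail = sol.y[0, sol.t > 80*T]          # keep the steady-state portion
    return np.max(np.abs(tail)), sol.y[:, -1]

y = [0.0, 0.0]
for w in np.arange(0.8, 1.7, 0.05):        # sweep up, carrying the state along
    amp, y = steady_amplitude(w, y)
    print(f"w = {w:.2f}  amplitude = {amp:.3f}")
```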
Deb, Kalyanmoy; Sinha, Ankur
2010-01-01
Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem solving tasks including optimal control, process optimization, game-playing strategy developments, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable and hybrid evolutionary-cum-local-search based algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming and hopefully this study will motivate EMO and other researchers to pay more attention to this important and difficult problem solving activity.
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2016-12-01
The portfolio optimization problem in which the variances of the return rates of assets are not identical is analyzed in this paper using the methodology of statistical mechanical informatics, specifically, replica analysis. We defined two characteristic quantities of an optimal portfolio, namely, minimal investment risk and investment concentration, in order to solve the portfolio optimization problem and analytically determined their asymptotical behaviors using replica analysis. Numerical experiments were also performed, and a comparison between the results of our simulation and those obtained via replica analysis validated our proposed method.
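For the simplest version of the problem analysed here, the minimal-risk portfolio has a closed form; the sketch below assumes the budget normalisation sum(w) = N often used in this line of work, and the risk and concentration definitions are likewise assumptions for illustration.

```python
# Minimal-variance portfolio with independent assets of unequal variance:
# minimise sum_i sigma_i^2 w_i^2 subject to sum_i w_i = N. The KKT
# conditions give w_i proportional to 1/sigma_i^2.
import numpy as np

rng = np.random.default_rng(0)
N = 5
sig2 = rng.uniform(0.5, 2.0, N)            # non-identical return variances

inv = 1.0/sig2
w = N*inv/inv.sum()                        # closed-form optimal weights
risk = np.sum(sig2*w**2)/(2*N)             # risk per asset (assumed definition)
q_w = np.mean(w**2)                        # investment concentration (assumed)
print(w, risk, q_w)
```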
Hybrid near-optimal aeroassisted orbit transfer plane change trajectories
NASA Technical Reports Server (NTRS)
Calise, Anthony J.; Duckeman, Gregory A.
1994-01-01
In this paper, a hybrid methodology is used to determine optimal open loop controls for the atmospheric portion of the aeroassisted plane change problem. The method is hybrid in the sense that it combines the features of numerical collocation with the analytically tractable portions of the problem which result when the two-point boundary value problem is cast in the form of a regular perturbation problem. Various levels of approximation are introduced by eliminating particular collocation parameters and their effect upon problem complexity and required number of nodes is discussed. The results include plane changes of 10, 20, and 30 degrees for a given vehicle.
Numerical Modeling in Geodynamics: Success, Failure and Perspective
NASA Astrophysics Data System (ADS)
Ismail-Zadeh, A.
2005-12-01
A real success in numerical modeling of the dynamics of the Earth can be achieved only by multidisciplinary research teams of experts in geodynamics, applied and pure mathematics, and computer science. Success in numerical modeling rests on the following basic, but simple, rules. (i) People need simplicity most, but they understand intricacies best (B. Pasternak, writer). Start from a simple numerical model, which describes basic physical laws by a set of mathematical equations, and only then move to a complex model. Never start from a complex model, because you cannot understand the contribution of each term of the equations to the modeled geophysical phenomenon. (ii) Study the numerical methods behind your computer code. Otherwise it becomes difficult to distinguish true from erroneous solutions to the geodynamic problem, especially when the problem is complex. (iii) Test your model against analytical and asymptotic solutions and simple 2D and 3D model examples. Develop benchmark analyses of different numerical codes and compare numerical results with laboratory experiments. Remember that the numerical tool you employ is not perfect, and there are small bugs in every computer code. Therefore testing is the most important part of your numerical modeling. (iv) Prove (if possible) or learn the relevant statements concerning the existence, uniqueness and stability of the solution to the mathematical and discrete problems. Otherwise you may solve an improperly posed problem, and the results of the modeling will be far from the true solution of your model problem. (v) Try to analyze numerical models of a geological phenomenon using as few tuning model variables as possible. Even two tuning variables give enough freedom to constrain your model well with respect to observations. Data fitting is sometimes quite attractive and can take you far from the principal aim of your numerical modeling: to understand geophysical phenomena. (vi) If the number of tuning model variables is greater than two, test carefully the effect of each variable on the modeled phenomenon. Remember: With four exponents I can fit an elephant (E. Fermi, physicist). (vii) Make your numerical model as accurate as possible, but never make extreme accuracy an aim in itself: Undue precision of computations is the first symptom of mathematical illiteracy (N. Krylov, mathematician). How complex should a numerical model be? A model which images every detail of reality is as useful as a map of scale 1:1 (J. Robinson, economist). This message is quite important for geoscientists who study numerical models of complex geodynamical processes. I believe that geoscientists will never create a model of the real dynamics of the Earth, but we should try to model the dynamics in such a way as to simulate the basic geophysical processes and phenomena. Does a particular model have predictive power? Each numerical model has predictive power, otherwise the model is useless. The predictability of the model varies with its complexity. Remember that a solution to the numerical model is an approximate solution to the equations, which have been chosen in the belief that they describe the dynamic processes of the Earth. Hence a numerical model predicts the dynamics of the Earth only as well as the mathematical equations describe this dynamics. What methodological advances are still needed for testable geodynamic modeling? Inverse (time-reverse) numerical modeling and data assimilation are new methodologies in geodynamics. Inverse modeling allows geodynamic models to be tested forward in time using initial conditions restored from present-day observations instead of unknown initial conditions.
Constructing space difference schemes which satisfy a cell entropy inequality
NASA Technical Reports Server (NTRS)
Merriam, Marshal L.
1989-01-01
A numerical methodology for solving convection problems is presented, using finite difference schemes which satisfy the second law of thermodynamics on a cell-by-cell basis in addition to the usual conservation laws. It is shown that satisfaction of a cell entropy inequality is sufficient, in some cases, to guarantee nonlinear stability. Some details are given for several one-dimensional problems, including the quasi-one-dimensional Euler equations applied to flow in a nozzle.
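A minimal sketch of the idea for 1D Burgers flow, using Tadmor's entropy-conservative two-point flux plus a dissipation term so that a discrete cell entropy inequality holds; this illustrates the flux construction, not Merriam's exact scheme.

```python
import numpy as np

def ec_flux(ul, ur):
    # Tadmor's entropy-conservative flux for Burgers (entropy U = u^2/2)
    return (ul * ul + ul * ur + ur * ur) / 6.0

nx = 200
dx, dt = 1.0 / nx, 0.002               # CFL ~ 0.4 for |u| <= 1
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.sin(2 * np.pi * x)

for _ in range(100):
    ul, ur = u, np.roll(u, -1)                       # periodic interface states
    f = ec_flux(ul, ur)                              # flux at i + 1/2
    f -= 0.5 * np.maximum(np.abs(ul), np.abs(ur)) * (ur - ul)  # entropy dissipation
    u = u - dt / dx * (f - np.roll(f, 1))            # conservative update
```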
The New Method of Tsunami Source Reconstruction With r-Solution Inversion Method
NASA Astrophysics Data System (ADS)
Voronina, T. A.; Romanenko, A. A.
2016-12-01
Application of the r-solution method to reconstructing the initial tsunami waveform is discussed. This methodology is based on the inversion of remote measurements of water-level data. The wave propagation is considered within the scope of linear shallow-water theory. The ill-posed inverse problem in question is regularized by means of a least-squares inversion using the truncated Singular Value Decomposition method. As a result of the numerical process, an r-solution is obtained. The proposed method allows one to control the instability of the numerical solution and to obtain an acceptable result in spite of the ill-posedness of the problem. Application of this methodology to reconstructing the initial waveform of the 2013 Solomon Islands tsunami validates the theoretical conclusions obtained for synthetic data and a model tsunami source: the inversion result depends strongly on the noisiness of the data and on the azimuthal and temporal coverage of the recording stations with respect to the source area. Furthermore, it is possible to make a preliminary selection of the most informative set of the available recording stations used in the inversion process.
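A compact illustration of the r-solution idea on a generic ill-posed linear system, truncating the SVD at the noise level; the operator here is a synthetic ill-conditioned matrix, not a shallow-water propagation matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
U0, _ = np.linalg.qr(rng.normal(size=(n, n)))
V0, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = 10.0 ** np.linspace(0, -12, n)           # rapidly decaying singular values
A = U0 @ np.diag(s) @ V0.T                   # ill-conditioned forward operator

q_true = np.exp(-np.linspace(-1, 1, n) ** 2 / 0.1)   # "source" to recover
d = A @ q_true + 1e-6 * rng.normal(size=n)           # noisy data

U, sv, Vt = np.linalg.svd(A)
r = np.sum(sv > 1e-5)                        # truncation level set by the noise
q_r = Vt[:r].T @ ((U[:, :r].T @ d) / sv[:r]) # r-solution: keep r leading modes
print(r, np.linalg.norm(q_r - q_true) / np.linalg.norm(q_true))
```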
Philip, Bobby; Berrill, Mark A.; Allu, Srikanth; ...
2015-01-26
We describe an efficient and nonlinearly consistent parallel solution methodology for solving coupled nonlinear thermal transport problems that occur in nuclear reactor applications over hundreds of individual 3D physical subdomains. Efficiency is obtained by leveraging knowledge of the physical domains, the physics on individual domains, and the couplings between them for preconditioning within a Jacobian-Free Newton Krylov method. Details of the computational infrastructure that enabled this work, namely the open source Advanced Multi-Physics (AMP) package developed by the authors, are described. The details of verification and validation experiments, and parallel performance analysis in weak and strong scaling studies demonstrating the achieved efficiency of the algorithm, are presented. Moreover, numerical experiments demonstrate that the preconditioner developed is independent of the number of fuel subdomains in a fuel rod, which is particularly important when simulating different types of fuel rods. Finally, we demonstrate the power of the coupling methodology by considering problems with couplings between surface and volume physics and coupling of nonlinear thermal transport in fuel rods to an external radiation transport code.
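A toy sketch of the Jacobian-free Newton-Krylov building block on a 1D nonlinear conduction problem, with a linear-diffusion preconditioner supplied to the inner Krylov iteration; the physics and the preconditioner are simple stand-ins for the multi-domain reactor setting described above.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from scipy.optimize import newton_krylov

n = 100
h = 1.0 / (n + 1)
off = np.ones(n - 1)
L = sp.diags([off, -2.0 * np.ones(n), off], [-1, 0, 1]) / h**2  # linear Laplacian

def residual(T):
    # Nonlinear conduction d/dx( k(T) dT/dx ) + q = 0 with k = 1 + T^2,
    # zero Dirichlet boundary values, uniform source q = 1.
    Tp = np.pad(T, 1)                  # boundary values T = 0
    k = 1.0 + Tp**2
    kf = 0.5 * (k[:-1] + k[1:])        # conductivity at cell faces
    flux = kf * np.diff(Tp) / h
    return np.diff(flux) / h + 1.0

# Preconditioner for the inner Krylov solver: exact inverse of the *linear*
# diffusion operator, a cheap proxy for physics-based preconditioning.
M = spla.LinearOperator((n, n), matvec=spla.factorized(sp.csc_matrix(L)))
T = newton_krylov(residual, np.zeros(n), method='lgmres', inner_M=M)
print(T.max())
```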
An adaptive gridless methodology in one dimension
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, N.T.; Hailey, C.E.
1996-09-01
Gridless numerical analysis offers great potential for accurately solving for flow about complex geometries or moving boundary problems. Because gridless methods do not require point connection, the mesh cannot twist or distort. The gridless method utilizes a Taylor series about each point to obtain the unknown derivative terms from the current field variable estimates. The governing equation is then numerically integrated to determine the field variables for the next iteration. Effects of point spacing and Taylor series order on accuracy are studied, and they follow trends similar to those of traditional numerical techniques. Introducing adaption by point movement using a spring analogy allows the solution method to track a moving boundary. The adaptive gridless method models linear, nonlinear, steady, and transient problems. Comparison with known analytic solutions is given for these examples. Although point movement adaption does not provide a significant increase in accuracy, it helps capture important features and provides an improved solution.
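A 1D sketch of the gridless derivative evaluation: a truncated Taylor series is fitted to scattered neighbour values in the least-squares sense, so no point connectivity is required; the point locations below are arbitrary.

```python
import math
import numpy as np

def taylor_fit(x0, xs, us, order=2):
    # Fit u(x) ~ sum_k u^(k)(x0) (x - x0)^k / k! to neighbour values by
    # least squares; returns [u, u', u'', ...] evaluated at x0.
    dx = xs - x0
    A = np.column_stack([dx**k / math.factorial(k) for k in range(order + 1)])
    coef, *_ = np.linalg.lstsq(A, us, rcond=None)
    return coef

xs = np.array([0.03, -0.02, 0.05, -0.045, 0.01])   # irregular point cloud
us = np.sin(xs)
print(taylor_fit(0.0, xs, us))                      # approximately [0, 1, 0]
```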
A Fast Fourier transform stochastic analysis of the contaminant transport problem
Deng, F.W.; Cushman, J.H.; Delleur, J.W.
1993-01-01
A three-dimensional stochastic analysis of the contaminant transport problem is developed in the spirit of Naff (1990). The new derivation is more general and simpler than previous analyses. The fast Fourier transform is used extensively to obtain numerical estimates of the mean concentration and various spatial moments. Data from both the Borden and Cape Cod experiments are used to test the methodology. Results are comparable to those obtained by other methods, and to the experiments themselves.
Computation of Nonlinear Backscattering Using a High-Order Numerical Method
NASA Technical Reports Server (NTRS)
Fibich, G.; Ilan, B.; Tsynkov, S.
2001-01-01
The nonlinear Schrodinger equation (NLS) is the standard model for propagation of intense laser beams in Kerr media. The NLS is derived from the nonlinear Helmholtz equation (NLH) by employing the paraxial approximation and neglecting the backscattered waves. In this study we use a fourth-order finite-difference method supplemented by special two-way artificial boundary conditions (ABCs) to solve the NLH as a boundary value problem. Our numerical methodology allows for a direct comparison of the NLH and NLS models and for an accurate quantitative assessment of the backscattered signal.
Pigache, Francois; Messine, Frédéric; Nogarede, Bertrand
2007-07-01
This paper deals with a deterministic and rational way to design piezoelectric transformers in radial mode. The proposed approach is based on the study of the inverse problem of design and on its reformulation as a mixed constrained global optimization problem. The methodology relies on the association of the analytical models for describing the corresponding optimization problem and on an exact global optimization software, named IBBA and developed by the second author to solve it. Numerical experiments are presented and compared in order to validate the proposed approach.
ERIC Educational Resources Information Center
Theriot, Matthew T.
2008-01-01
Although research has highlighted that dating violence is a serious and pervasive problem in many adolescent relationships, the prevalence and characteristics of such violence at schools is not fully understood. Yet, adolescents spend a great deal of time at school, and schools facilitate their relationships by providing numerous opportunities for…
NASA Astrophysics Data System (ADS)
Chanthawara, Krittidej; Kaennakham, Sayan; Toutip, Wattana
2016-02-01
The methodology of the Dual Reciprocity Boundary Element Method (DRBEM) is applied to convection-diffusion problems, and investigating its performance is the first objective of this work. Seven types of Radial Basis Functions (RBF) were closely investigated: Linear, Thin-plate Spline, Cubic, Compactly Supported, Inverse Multiquadric, Quadratic, and that proposed by [12]. Numerically comparing their effectiveness and drawbacks is taken as the second objective. A sufficient number of simulations were performed, covering as many aspects as possible. Validated against both exact solutions and other numerical works, the final results strongly imply that the Thin-plate Spline and Linear types of RBF are superior to the others in terms of both solution quality and CPU time spent, while the Inverse Multiquadric yields comparatively poor results. It is also found that DRBEM can perform relatively well at moderate levels of convective force and, as anticipated, becomes unstable when the problem becomes more convection-dominated, as is normally found in all classical mesh-dependent methods.
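A minimal sketch of the RBF interpolation step underlying the dual reciprocity approach, using the Linear basis phi(r) = r that the study found effective; the scattered centres and test function are made up, and the boundary-element machinery itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 1.0, size=(40, 2))            # scattered RBF centres
f = np.sin(np.pi * pts[:, 0]) * np.cos(np.pi * pts[:, 1])

# Linear RBF phi(r) = r: the distance matrix of distinct points is nonsingular.
R = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
alpha = np.linalg.solve(R, f)                        # interpolation weights

xq = np.array([[0.3, 0.7]])                          # evaluation point
rq = np.linalg.norm(xq[:, None, :] - pts[None, :, :], axis=-1)
print((rq @ alpha)[0], np.sin(np.pi * 0.3) * np.cos(np.pi * 0.7))
```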
Ovtchinnikov, Evgueni E.; Xanthis, Leonidas S.
2000-01-01
We present a methodology for the efficient numerical solution of eigenvalue problems of full three-dimensional elasticity for thin elastic structures, such as shells, plates and rods of arbitrary geometry, discretized by the finite element method. Such problems are solved by iterative methods, which, however, are known to suffer from slow convergence or even convergence failure when the thickness is small. In this paper we show an effective way of resolving this difficulty by invoking a special preconditioning technique associated with the effective dimensional reduction algorithm (EDRA). As an example, we present an algorithm for computing the minimal eigenvalue of a thin elastic plate, and we show both theoretically and numerically that it is robust with respect to both the thickness and discretization parameters, i.e. the convergence does not deteriorate with diminishing thickness or mesh refinement. This robustness is a sine qua non for the efficient computation of large-scale eigenvalue problems for thin elastic structures. PMID:10655469
On the effect of model parameters on forecast objects
NASA Astrophysics Data System (ADS)
Marzban, Caren; Jones, Corinne; Li, Ning; Sandgathe, Scott
2018-04-01
Many physics-based numerical models produce a gridded, spatial field of forecasts, e.g., a temperature map. The field for some quantities generally consists of spatially coherent and disconnected objects. Such objects arise in many problems, including precipitation forecasts in atmospheric models, eddy currents in ocean models, and models of forest fires. Certain features of these objects (e.g., location, size, intensity, and shape) are generally of interest. Here, a methodology is developed for assessing the impact of model parameters on the features of forecast objects. The main ingredients of the methodology include the use of (1) Latin hypercube sampling for varying the values of the model parameters, (2) statistical clustering algorithms for identifying objects, (3) multivariate multiple regression for assessing the impact of multiple model parameters on the distribution (across the forecast domain) of object features, and (4) methods for reducing the number of hypothesis tests and controlling the resulting errors. The final output of the methodology is a series of box plots and confidence intervals that visually display the sensitivities. The methodology is demonstrated on precipitation forecasts from a mesoscale numerical weather prediction model.
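Ingredient (1) above, sketched with SciPy's quasi-Monte Carlo module; the parameter count, names, and ranges are hypothetical.

```python
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)      # three model parameters
unit = sampler.random(n=20)                    # 20 model runs in [0, 1)^3
lo = [0.1, 1e-4, 0.5]                          # hypothetical lower bounds
hi = [0.9, 1e-2, 2.0]                          # hypothetical upper bounds
params = qmc.scale(unit, lo, hi)               # one row per forecast-model run
print(params[:3])
```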
NASA Astrophysics Data System (ADS)
Igumnov, Leonid; Ipatov, Aleksandr; Belov, Aleksandr; Petrov, Andrey
2015-09-01
The report presents the development of the time-boundary-element methodology and a description of the related software, based on a stepped method of numerical inversion of the integral Laplace transform in combination with a family of Runge-Kutta methods, for analyzing 3-D mixed initial boundary-value problems of the dynamics of inhomogeneous elastic and poroelastic bodies. The results of the numerical investigation are presented. The investigation methodology is based on direct-approach boundary integral equations of 3-D isotropic linear theories of elasticity and poroelasticity in Laplace transforms. Poroelastic media are described using Biot models with four and five base functions. With the help of the boundary-element method, solutions in time are obtained using the stepped method of numerically inverting the Laplace transform on the nodes of Runge-Kutta methods. The boundary-element method is used in combination with the collocation method and local element-by-element approximation based on the matched interpolation model. The results of analyzing wave problems of the effect of a non-stationary force on elastic and poroelastic finite bodies, a poroelastic half-space (also with a fictitious boundary), a layered half-space weakened by a cavity, and a half-space with a trench are presented. Excitation of a slow wave in a poroelastic medium is studied using the stepped BEM scheme on the nodes of Runge-Kutta methods.
Cwikel, Julie; Hoban, Elizabeth
2005-11-01
The trafficking of women and children for work in the globalized sex industry is a global social problem. Quality data are needed to provide a basis for legislation, policy, and programs, but first, numerous research design, ethical, and methodological problems must be addressed. Research design issues in studying women trafficked for sex work (WTSW) include how to (a) develop coalitions to fund and support research, (b) maintain a critical stance on prostitution, and therefore WTSW, (c) use multiple paradigms and methods to accurately reflect WTSW's reality, (d) present the purpose of the study, and (e) protect respondents' identities. Ethical issues include (a) complications with informed consent procedures, (b) problematic access to WTSW, (c) loss of WTSW to follow-up, (d) inability to intervene in illegal acts or human rights violations, and (e) the need to maintain trustworthiness as researchers. Methodological issues include (a) constructing representative samples, (b) managing media interest, and (c) handling incriminating materials about law enforcement and immigration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gatsonis, Nikolaos A.; Spirkin, Anton
2009-06-01
The mathematical formulation and computational implementation of a three-dimensional particle-in-cell methodology on unstructured Delaunay-Voronoi tetrahedral grids is presented. The method allows simulation of plasmas in complex domains and incorporates the duality of the Delaunay-Voronoi in all aspects of the particle-in-cell cycle. Charge assignment and field interpolation weighting schemes of zero- and first-order are formulated based on the theory of long-range constraints. Electric potential and fields are derived from a finite-volume formulation of Gauss' law using the Voronoi-Delaunay dual. Boundary conditions and the algorithms for injection, particle loading, particle motion, and particle tracking are implemented for unstructured Delaunay grids. Error and sensitivity analysis examines the effects of particles/cell, grid scaling, and timestep on the numerical heating, the slowing-down time, and the deflection times. The problem of current collection by cylindrical Langmuir probes in collisionless plasmas is used for validation. Numerical results compare favorably with previous numerical and analytical solutions for a wide range of probe radius to Debye length ratios, probe potentials, and electron to ion temperature ratios. The versatility of the methodology is demonstrated with the simulation of a complex plasma microsensor, a directional micro-retarding potential analyzer that includes a low transparency micro-grid.
White Dwarf Mergers On Adaptive Meshes. I. Methodology And Code Verification
Katz, Max P.; Zingale, Michael; Calder, Alan C.; ...
2016-03-02
The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first study in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress, and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Finally, future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.
Benchmark Problems Used to Assess Computational Aeroacoustics Codes
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Envia, Edmane
2005-01-01
The field of computational aeroacoustics (CAA) encompasses numerical techniques for calculating all aspects of sound generation and propagation in air directly from fundamental governing equations. Aeroacoustic problems typically involve flow-generated noise, with and without the presence of a solid surface, and the propagation of the sound to a receiver far away from the noise source. It is a challenge to obtain accurate numerical solutions to these problems. The NASA Glenn Research Center has been at the forefront in developing and promoting the development of CAA techniques and methodologies for computing the noise generated by aircraft propulsion systems. To assess the technological advancement of CAA, Glenn, in cooperation with the Ohio Aerospace Institute and the AeroAcoustics Research Consortium, organized and hosted the Fourth CAA Workshop on Benchmark Problems. Participants from industry and academia from both the United States and abroad joined to present and discuss solutions to benchmark problems. These demonstrated technical progress ranging from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The results are documented in the proceedings of the workshop. Problems were solved in five categories. In three of the five categories, exact solutions were available for comparison with CAA results. A fourth category of problems representing sound generation from either a single airfoil or a blade row interacting with a gust (i.e., problems relevant to fan noise) had approximate analytical or completely numerical solutions. The fifth category of problems involved sound generation in a viscous flow. In this case, the CAA results were compared with experimental data.
Singular perturbation techniques for real time aircraft trajectory optimization and control
NASA Technical Reports Server (NTRS)
Calise, A. J.; Moerder, D. D.
1982-01-01
The usefulness of singular perturbation methods for developing real time computer algorithms to control and optimize aircraft flight trajectories is examined. A minimum time intercept problem using F-8 aerodynamic and propulsion data is used as a baseline. This provides a framework within which issues relating to problem formulation, solution methodology and real time implementation are examined. Theoretical questions relating to separability of dynamics are addressed. With respect to implementation, situations leading to numerical singularities are identified, and procedures for dealing with them are outlined. Also, particular attention is given to identifying quantities that can be precomputed and stored, thus greatly reducing the on-board computational load. Numerical results are given to illustrate the minimum time algorithm, and the resulting flight paths. An estimate is given for execution time and storage requirements.
Accurate Projection Methods for the Incompressible Navier–Stokes Equations
Brown, David L.; Cortez, Ricardo; Minion, Michael L.
2001-04-10
This paper considers the accuracy of projection method approximations to the initial–boundary-value problem for the incompressible Navier–Stokes equations. The issue of how to correctly specify numerical boundary conditions for these methods has been outstanding since the birth of the second-order methodology a decade and a half ago. It has been observed that while the velocity can be reliably computed to second-order accuracy in time and space, the pressure is typically only first-order accurate in the L∞-norm. Here, we identify the source of this problem in the interplay of the global pressure-update formula with the numerical boundary conditions and present an improved projection algorithm which is fully second-order accurate, as demonstrated by a normal mode analysis and numerical experiments. In addition, a numerical method based on a gauge variable formulation of the incompressible Navier–Stokes equations, which provides another option for obtaining fully second-order convergence in both velocity and pressure, is discussed. The connection between the boundary conditions for projection methods and the gauge method is explained in detail.
Flamm, Christoph; Graef, Andreas; Pirker, Susanne; Baumgartner, Christoph; Deistler, Manfred
2013-01-01
Granger causality is a useful concept for studying causal relations in networks. However, numerical problems occur when applying the corresponding methodology to high-dimensional time series showing co-movement, e.g. EEG recordings or economic data. In order to deal with these shortcomings, we propose a novel method for the causal analysis of such multivariate time series based on Granger causality and factor models. We present the theoretical background, successfully assess our methodology with the help of simulated data and show a potential application in EEG analysis of epileptic seizures. PMID:23354014
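A baseline sketch of the pairwise Granger test on simulated data where one series drives the other at lag 1; statsmodels provides the test, while the paper's factor-model extension for high-dimensional, co-moving series is not shown here.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.2 * rng.normal()  # x drives y at lag 1

# H0: the second column does not Granger-cause the first; expect tiny p-values.
results = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
```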
Methodology and Results of Mathematical Modelling of Complex Technological Processes
NASA Astrophysics Data System (ADS)
Mokrova, Nataliya V.
2018-03-01
The methodology of system analysis allows us to derive a mathematical model of a complex technological process. A mathematical description of the plasma-chemical process is proposed. The importance of the quenching rate and of the initial temperature decrease time is confirmed for producing the maximum amount of the target product. The results of numerical integration of the system of differential equations can be used to describe reagent concentrations, plasma jet rate and temperature in order to achieve the optimal mode of hardening. Such models are applicable both for solving control problems and for predicting future states of sophisticated technological systems.
NASA Astrophysics Data System (ADS)
Dib, Alain; Kavvas, M. Levent
2018-03-01
The characteristic form of the Saint-Venant equations is solved in a stochastic setting by using a newly proposed Fokker-Planck Equation (FPE) methodology. This methodology computes the ensemble behavior and variability of the unsteady flow in open channels by directly solving for the flow variables' time-space evolutionary probability distribution. The new methodology is tested on a stochastic unsteady open-channel flow problem, with an uncertainty arising from the channel's roughness coefficient. The computed statistical descriptions of the flow variables are compared to the results obtained through Monte Carlo (MC) simulations in order to evaluate the performance of the FPE methodology. The comparisons show that the proposed methodology can adequately predict the results of the considered stochastic flow problem, including the ensemble averages, variances, and probability density functions in time and space. Unlike the large number of simulations performed by the MC approach, only one simulation is required by the FPE methodology. Moreover, the total computational time of the FPE methodology is smaller than that of the MC approach, which could prove to be a particularly crucial advantage in systems with a large number of uncertain parameters. As such, the results obtained in this study indicate that the proposed FPE methodology is a powerful and time-efficient approach for predicting the ensemble average and variance behavior, in both space and time, for an open-channel flow process under an uncertain roughness coefficient.
Data Collection Procedures for School-Based Surveys among Adolescents: The Youth in Europe Study
ERIC Educational Resources Information Center
Kristjansson, Alfgeir Logi; Sigfusson, Jon; Sigfusdottir, Inga Dora; Allegrante, John P.
2013-01-01
Background: Collection of valid and reliable surveillance data as a basis for school health promotion and education policy and practice for children and adolescence is of great importance. However, numerous methodological and practical problems arise in the planning and collection of such survey data that need to be resolved in order to ensure the…
Spline approximation, Part 1: Basic methodology
NASA Astrophysics Data System (ADS)
Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar
2018-04-01
In engineering geodesy point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
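A short sketch of the Part 1 construction: a least-squares fit with a basis of ordinary polynomials plus truncated polynomials anchored at fixed knots. The data and knots are synthetic, and the numerical-stability questions deferred to Part 3 are ignored here.

```python
import numpy as np

def design_matrix(x, knots, p=3):
    cols = [x**k for k in range(p + 1)]                  # ordinary polynomials
    cols += [np.maximum(x - t, 0.0)**p for t in knots]   # truncated polynomials
    return np.column_stack(cols)

x = np.linspace(0.0, 1.0, 200)
y = np.sin(6 * x) + 0.05 * np.random.default_rng(4).normal(size=x.size)
knots = np.linspace(0.1, 0.9, 9)
A = design_matrix(x, knots)
c, *_ = np.linalg.lstsq(A, y, rcond=None)                # spline coefficients
y_fit = A @ c
print(np.max(np.abs(y_fit - np.sin(6 * x))))             # approximation error
```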
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.
2016-07-26
It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
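A 1D stand-in for both the issue and the fix: a symmetric positive definite system whose operator is not an M-matrix, so the plain Galerkin-style solve violates non-negativity, repaired by a bound-constrained least-squares solve. SciPy replaces PETSc/TAO here purely for illustration.

```python
import numpy as np
from scipy.optimize import lsq_linear

n = 30
# SPD tridiagonal system with positive off-diagonals: not an M-matrix, so the
# inverse has entries of both signs and the plain solve can go negative.
K = (np.diag(2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
f = np.zeros(n)
f[n // 2] = 1.0                                  # point source

c_plain = np.linalg.solve(K, f)                  # oscillates below zero
c_nonneg = lsq_linear(K, f, bounds=(0.0, np.inf)).x
print(c_plain.min(), c_nonneg.min())
```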
A hybrid nonlinear programming method for design optimization
NASA Technical Reports Server (NTRS)
Rajan, S. D.
1986-01-01
Solutions to engineering design problems formulated as nonlinear programming (NLP) problems usually require the use of more than one optimization technique. Moreover, the interaction between the user (analysis/synthesis) program and the NLP system can lead to interface, scaling, or convergence problems. An NLP solution system is presented that seeks to solve these problems by providing a programming system to ease the user-system interface. A simple set of rules is used to select an optimization technique or to switch from one technique to another in an attempt to detect, diagnose, and solve some potential problems. Numerical examples involving finite element based optimal design of space trusses and rotor bearing systems are used to illustrate the applicability of the proposed methodology.
NASA Astrophysics Data System (ADS)
Coutris, Pierre; Leroy, Delphine; Fontaine, Emmanuel; Schwarzenboeck, Alfons; Strapp, J. Walter
2016-04-01
A new method to retrieve cloud water content from in-situ measured 2D particle images from optical array probes (OAP) is presented. With the overall objective of building a statistical model of crystal mass as a function of size, environmental temperature and crystal microphysical history, this study presents a methodology to retrieve the mass of crystals sorted by size from 2D images using a numerical optimization approach. The methodology is validated using two datasets of in-situ measurements gathered during two airborne field campaigns held in Darwin, Australia (2014), and Cayenne, French Guiana (2015), in the framework of the High Altitude Ice Crystals (HAIC) / High Ice Water Content (HIWC) projects. During these campaigns, a Falcon F-20 research aircraft equipped with state-of-the-art microphysical instrumentation sampled numerous mesoscale convective systems (MCS) in order to study the dynamical and microphysical properties and processes of high ice water content areas. Experimentally, an isokinetic evaporator probe, referred to as IKP-2, provides a reference measurement of the total water content (TWC), which equals the ice water content (IWC) when (supercooled) liquid water is absent. Two optical array probes, namely the 2D-S and PIP, produce 2D images of individual crystals ranging from 50 μm to 12840 μm, from which particle size distributions (PSD) are derived. Mathematically, the problem is formulated as an inverse problem in which the crystal mass is assumed constant over a size class and is computed for each size class from IWC and PSD data:

PSD · m = IWC

This problem is solved using a numerical optimization technique in which an objective function is minimized. The objective function is defined as follows:

J(m) = ‖PSD · m − IWC‖² + λ · R(m)

where the regularization parameter λ and the regularization function R(m) are tuned based on data characteristics. The method is implemented in two steps. First, the method is developed on synthetic crystal populations in order to evaluate the behavior of the iterative algorithm and the influence of data noise on the quality of the results, and to set up a regularization strategy. To this end, 3D synthetic crystals have been generated and numerically processed to recreate the noise caused by 2D projections of randomly oriented 3D crystals and by the discretization of the PSD into size classes of predefined width. Subsequently, the method is applied to the experimental datasets, and the comparison between the retrieved TWC (this methodology) and the measured one (IKP-2 data) enables the evaluation of the consistency and accuracy of the mass solution retrieved by the numerical optimization approach, as well as a preliminary assessment of the influence of temperature and dynamical parameters on crystal masses.
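A sketch of the inversion step with a Tikhonov-style smoothness regularizer R(m) = ‖Dm‖², solved as an augmented least-squares problem; the sizes, the mass-size law, and the noise model below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(6)
n_bins, n_obs = 30, 200
sizes = np.linspace(0.1, 10.0, n_bins)                # class mid-points (mm)
m_true = 0.005 * sizes**2.1                           # hypothetical mass-size law
PSD = rng.lognormal(0.0, 1.0, size=(n_obs, n_bins)) / sizes   # synthetic spectra
IWC = PSD @ m_true * (1.0 + 0.05 * rng.normal(size=n_obs))    # noisy reference

lam = 1.0
D = np.diff(np.eye(n_bins), axis=0)                   # R(m) = ||D m||^2 smoothness
A = np.vstack([PSD, np.sqrt(lam) * D])
b = np.concatenate([IWC, np.zeros(n_bins - 1)])
m_hat, *_ = np.linalg.lstsq(A, b, rcond=None)         # minimizer of J(m)
print(np.max(np.abs(m_hat - m_true) / m_true))
```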
Problem based learning - A brief review
NASA Astrophysics Data System (ADS)
Nunes, Sandra; Oliveira, Teresa A.; Oliveira, Amílcar
2017-07-01
Teaching is a complex mission that requires not only the transmission of theoretical knowledge, but also providing students with the necessary skills for solving real problems in their respective professional activities, where complex issues must frequently be faced. For more than twenty years we have been experiencing an increase in academic failure in the scientific area of mathematics, which means that teaching mathematics and related areas can be an even more complex and hard task. Academic failure is a complex phenomenon that depends on various factors, such as social, scholastic or biophysical factors. After numerous attempts made in order to reduce academic failure, our goal in this paper is to understand the role of "Problem Based Learning" and how this methodology can contribute to the solution of both problems: increasing success in mathematics courses and increasing the skills of near-future professionals in Portugal. Before designing a proposal for applying this technique in our institutions, we decided to conduct a survey to provide us with the necessary information about this methodology and its respective advantages and disadvantages; that is the aim of this brief review.
Mimetic finite difference method
NASA Astrophysics Data System (ADS)
Lipnikov, Konstantin; Manzini, Gianmarco; Shashkov, Mikhail
2014-01-01
The mimetic finite difference (MFD) method mimics fundamental properties of mathematical and physical systems including conservation laws, symmetry and positivity of solutions, duality and self-adjointness of differential operators, and exact mathematical identities of the vector and tensor calculus. This article is the first comprehensive review of the 50-year long history of the mimetic methodology and describes in a systematic way the major mimetic ideas and their relevance to academic and real-life problems. The supporting applications include diffusion, electromagnetics, fluid flow, and Lagrangian hydrodynamics problems. The article provides enough details to build various discrete operators on unstructured polygonal and polyhedral meshes and summarizes the major convergence results for the mimetic approximations. Most of these theoretical results, which are presented here as lemmas, propositions and theorems, are either original or an extension of existing results to a more general formulation using polyhedral meshes. Finally, flexibility and extensibility of the mimetic methodology are shown by deriving higher-order approximations, enforcing discrete maximum principles for diffusion problems, and ensuring the numerical stability for saddle-point systems.
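A minimal 1D example of the duality the review emphasizes: build a gradient G from cells to faces and define the divergence as D = -G^T, so the discrete integration-by-parts identity holds exactly (homogeneous boundary values assumed; operators and fields are illustrative).

```python
import numpy as np

n, h = 10, 0.1
# Gradient maps n cell values to n+1 face values (zero boundary values implied).
G = (np.eye(n + 1, n, k=0) - np.eye(n + 1, n, k=-1)) / h
D = -G.T                                   # divergence defined by duality

rng = np.random.default_rng(7)
u = rng.normal(size=n + 1)                 # arbitrary face field
v = rng.normal(size=n)                     # arbitrary cell field
lhs = h * (D @ u) @ v
rhs = -h * u @ (G @ v)
print(np.isclose(lhs, rhs))                # discrete integration by parts, exact
```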
Common methodological flaws in economic evaluations.
Drummond, Michael; Sculpher, Mark
2005-07-01
Economic evaluations are increasingly being used by those bodies such as government agencies and managed care groups that make decisions about the reimbursement of health technologies. However, several reviews of economic evaluations point to numerous deficiencies in the methodology of studies or the failure to follow published methodological guidelines. This article, written for healthcare decision-makers and other users of economic evaluations, outlines the common methodological flaws in studies, focussing on those issues that are likely to be most important when deciding on the reimbursement, or guidance for use, of health technologies. The main flaws discussed are: (i) omission of important costs or benefits; (ii) inappropriate selection of alternatives for comparison; (iii) problems in making indirect comparisons; (iv) inadequate representation of the effectiveness data; (v) inappropriate extrapolation beyond the period observed in clinical studies; (vi) excessive use of assumptions rather than data; (vii) inadequate characterization of uncertainty; (viii) problems in aggregation of results; (ix) reporting of average cost-effectiveness ratios; (x) lack of consideration of generalizability issues; and (xi) selective reporting of findings. In each case examples are given from the literature and guidance is offered on how to detect flaws in economic evaluations.
ERIC Educational Resources Information Center
Cafri, Guy; van den Berg, Patricia; Brannick, Michael T.
2010-01-01
Difference scores are often used as a means of assessing body image satisfaction using silhouette scales. Unfortunately, difference scores suffer from numerous potential methodological problems, including reduced reliability, ambiguity, confounded effects, untested constraints, and dimensional reduction. In this article, the methodological…
ERIC Educational Resources Information Center
Scanland, Worth; Pepper, Dorothy
Numerous attempts by the U.S. Navy to identify leadership potential within its ranks and to train for such leadership skills have been mostly unsuccessful since the efforts were based upon very subjective perceptions of good leadership qualities. To address this problem, the Navy, with the assistance of Dr. David McClelland, has described a set of…
NASA Astrophysics Data System (ADS)
Pawar, Sumedh; Sharma, Atul
2018-01-01
This work presents a mathematical model and solution methodology for a multiphysics engineering problem of arc formation during welding and inside a nozzle. A general-purpose commercial CFD solver, ANSYS FLUENT 13.0.0, is used in this work. Arc formation involves strongly coupled gas dynamics and electrodynamics, simulated by solution of the coupled Navier-Stokes equations, Maxwell's equations and the radiation heat-transfer equation. Validation of the present numerical methodology is demonstrated with an excellent agreement with published results. The developed mathematical model and the user-defined functions (UDFs) are independent of the geometry and are applicable to any system that involves arc formation in a 2D axisymmetric coordinate system. The high-pressure flow of SF6 gas in the nozzle-arc system resembles the arc chamber of an SF6 gas circuit breaker; thus, this methodology can be extended to simulate the arcing phenomenon during current interruption.
Failure detection system design methodology. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chow, E. Y.
1980-01-01
The design of a failure detection and identification (FDI) system consists of designing a robust residual generation process and a high-performance decision making process. The designs of these two processes are examined separately. Residual generation is based on analytical redundancy. Redundancy relations that are insensitive to modelling errors and noise effects are important for designing robust residual generation processes. The characterization of the concept of analytical redundancy in terms of a generalized parity space provides a framework in which a systematic approach to the determination of robust redundancy relations is developed. The Bayesian approach is adopted for the design of high-performance decision processes. The FDI decision problem is formulated as a Bayes sequential decision problem. Since the optimal decision rule is incomputable, a methodology for designing suboptimal rules is proposed. A numerical algorithm is developed to facilitate the design and performance evaluation of suboptimal rules.
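A toy illustration of residual generation in a generalized parity space: parity vectors are taken from the left null space of the stacked observability-type matrix, so the residual vanishes for the fault-free model and reacts to a sensor fault. The system matrices are made up, and no noise model or decision rule is included.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.eye(2)
O = np.vstack([C, C @ A, C @ A @ A])       # stacked observability-type matrix
V = null_space(O.T)                        # parity vectors: V' O = 0

x = np.array([1.0, -0.5])
ys = []
for k in range(3):
    y = C @ x
    if k == 2:
        y = y + np.array([0.3, 0.0])       # additive sensor fault at k = 2
    ys.append(y)
    x = A @ x

r = V.T @ np.concatenate(ys)               # residual: zero if fault-free
print(np.linalg.norm(r))
```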
Palesh, Oxana; Peppone, Luke; Innominato, Pasquale F; Janelsins, Michelle; Jeong, Monica; Sprod, Lisa; Savard, Josee; Rotatori, Max; Kesler, Shelli; Telli, Melinda; Mustian, Karen
2012-01-01
Sleep problems are highly prevalent in cancer patients undergoing chemotherapy. This article reviews existing evidence on the etiology, associated symptoms, and management of sleep problems associated with chemotherapy treatment during cancer. It also discusses the limitations and methodological issues of current research. The existing literature suggests that subjectively and objectively measured sleep problems are at their highest during the chemotherapy phase of cancer treatment. One possibly involved mechanism reviewed here is the rise in circulating proinflammatory cytokines and the associated disruption of circadian rhythm in the development and maintenance of sleep dysregulation in cancer patients during chemotherapy. Various approaches to the management of sleep problems during chemotherapy are discussed, with behavioral intervention showing promise. Exercise, including yoga, also appears to be effective and safe, at least for subclinical levels of sleep problems in cancer patients. Numerous challenges are associated with conducting research on sleep in cancer patients during chemotherapy treatments, and they are discussed in this review. Dedicated intervention trials, methodologically sound and sufficiently powered, are needed to test current and novel treatments of sleep problems in cancer patients receiving chemotherapy. Optimal management of sleep problems in patients with cancer receiving treatment may improve not only the well-being of patients, but also their prognosis, given the emerging experimental and clinical evidence suggesting that sleep disruption might adversely impact treatment and recovery from cancer. PMID:23486503
CFD methodology and validation for turbomachinery flows
NASA Astrophysics Data System (ADS)
Hirsch, Ch.
1994-05-01
The essential problem today, in the application of 3D Navier-Stokes simulations to the design and analysis of turbomachinery components, is the validation of the numerical approximation and of the physical models, in particular the turbulence modelling. Although most of the complex 3D flow phenomena occurring in turbomachinery bladings can be captured with relatively coarse meshes, many detailed flow features are dependent on mesh size and on the turbulence and transition models. A brief review of the present state of the art of CFD methodology is given, with emphasis on the quality and accuracy of numerical approximations related to viscous flow computations. Considerations related to the influence of the mesh on solution accuracy are stressed. The basic problems of turbulence and transition modelling are discussed next, with a short summary of the main turbulence models and their applications to representative turbomachinery flows. Validations of present turbulence models indicate that none of the available turbulence models is able to predict all the detailed flow behavior in complex flow interactions. In order to identify the phenomena that can be captured on coarser meshes, a detailed understanding of the complex 3D flow in compressors and turbines is necessary. Examples of global validations for different flow configurations, representative of compressor and turbine aerodynamics, are presented, including secondary and tip clearance flows.
NASA Astrophysics Data System (ADS)
Kotlan, Václav; Hamar, Roman; Pánek, David; Doležel, Ivo
2017-12-01
A model of hybrid cladding on a cylindrical surface is built and numerically solved. Heating of both substrate and the powder material to be deposited on its surface is realized by laser beam and preheating inductor. The task represents a hard-coupled electromagnetic-thermal problem with time-varying geometry. Two specific algorithms are developed to incorporate this effect into the model, driven by local distribution of temperature and its gradients. The algorithms are implemented into the COMSOL Multiphysics 5.2 code that is used for numerical computations of the task. The methodology is illustrated with a typical example whose results are discussed.
Modeling flow at the nozzle of a solid rocket motor
NASA Technical Reports Server (NTRS)
Chow, Alan S.; Jin, Kang-Ren
1991-01-01
The internal flow field of a rocket motor is governed by a system of nonlinear partial differential equations which can be solved numerically. The accuracy and the convergence of the solution of the system of equations depend largely on how precisely the sharp gradients can be resolved. An adaptive grid generation scheme is incorporated into the computer algorithm to enhance the capability of numerical modeling. With this scheme, the grid is refined as the solution evolves. This scheme significantly improves the methodology for solving flow problems in rocket nozzles by putting the refinement part of grid generation into the computer algorithm.
Reliability-Based Control Design for Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.
2005-01-01
This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.
Numerical Modeling of Ablation Heat Transfer
NASA Technical Reports Server (NTRS)
Ewing, Mark E.; Laker, Travis S.; Walker, David T.
2013-01-01
A unique numerical method has been developed for solving one-dimensional ablation heat transfer problems. This paper provides a comprehensive description of the method, along with detailed derivations of the governing equations. This methodology supports solutions for traditional ablation modeling including such effects as heat transfer, material decomposition, pyrolysis gas permeation and heat exchange, and thermochemical surface erosion. The numerical scheme utilizes a control-volume approach with a variable grid to account for surface movement. This method directly supports implementation of nontraditional models such as material swelling and mechanical erosion, extending capabilities for modeling complex ablation phenomena. Verifications of the numerical implementation are provided using analytical solutions, code comparisons, and the method of manufactured solutions. These verifications are used to demonstrate solution accuracy and proper error convergence rates. A simple demonstration of a mechanical erosion (spallation) model is also provided to illustrate the unique capabilities of the method.
Simulation of Ejecta Production and Mixing Process of Sn Sample under shock loading
NASA Astrophysics Data System (ADS)
Wang, Pei; Chen, Dawei; Sun, Haiquan; Ma, Dongjun
2017-06-01
Ejection may occur when a strong shock wave releases at the free surface of a metal material, forming an ejecta of high-speed particulate matter that further mixes with the surrounding gas. Ejecta production and the associated mixing process remain among the most difficult unresolved problems in shock physics and have many important engineering applications in imploding compression science. The present paper introduces a methodology for the theoretical modeling and numerical simulation of the complex ejection and mixing process. Ejecta production is decoupled from the particle mixing process, and the ejecta state can be obtained by direct numerical simulation of the evolution of initial defects on the metal surface. The particle mixing process can then be simulated and resolved by a two-phase gas-particle model that uses the aforementioned ejecta state as the initial condition. A preliminary ejecta experiment on a planar Sn metal sample has validated the feasibility of the proposed methodology.
Statistical theory and methodology for remote sensing data analysis
NASA Technical Reports Server (NTRS)
Odell, P. L.
1974-01-01
A model is developed for the evaluation of acreages (proportions) of different crop types over a geographical area using a classification approach, and methods for estimating the crop acreages are given. In estimating the acreage of a specific crop type such as wheat, it is suggested to treat the problem as a two-crop problem: wheat vs. non-wheat, since this simplifies the estimation problem considerably. The error analysis and the sample size problem are investigated for the two-crop approach. Certain numerical results for sample sizes are given for a JSC-ERTS-1 data example on wheat identification performance in Hill County, Montana and Burke County, North Dakota. Lastly, for a large-area crop acreage inventory, a sampling scheme is suggested for acquiring sample data, and the problems of crop acreage estimation and error analysis are discussed.
Numerical approach to constructing the lunar physical libration: results of the initial stage
NASA Astrophysics Data System (ADS)
Zagidullin, A.; Petrova, N.; Nefediev, Yu.; Usanin, V.; Glushkov, M.
2015-10-01
So called "main problem" it is taken as a model to develop the numerical approach in the theory of lunar physical libration. For the chosen model, there are both a good methodological basis and results obtained at the Kazan University as an outcome of the analytic theory construction. Results of the first stage in numerical approach are presented in this report. Three main limitation are taken to describe the main problem: -independent consideration of orbital and rotational motion of the Moon; - a rigid body model for the lunar body is taken and its dynamical figure is described by inertia ellipsoid, which gives us the mass distribution inside the Moon. - only gravitational interaction with the Earth and the Sun is considered. Development of selenopotential is limited on this stage by the second harmonic only. Inclusion of the 3-rd and 4-th order harmonics is the nearest task for the next stage.The full solution of libration problem consists of removing the below specified limitations: consideration of the fine effects, caused by planet perturbations, by visco-elastic properties of the lunar body, by the presence of a two-layer lunar core, by the Earth obliquity, by ecliptic rotation, if it is taken as a reference plane.
The exclusion problem in seasonally forced epidemiological systems.
Greenman, J V; Adams, B
2015-02-21
The pathogen exclusion problem is the problem of finding control measures that will exclude a pathogen from an ecological system or, if the system is already disease-free, maintain it in that state. To solve this problem we work within a holistic control theory framework which is consistent with conventional theory for simple systems (where there is no external forcing and constant controls) and seamlessly generalises to complex systems that are subject to multiple component seasonal forcing and targeted variable controls. We develop, customise and integrate a range of numerical and algebraic procedures that provide a coherent methodology powerful enough to solve the exclusion problem in the general case. An important aspect of our solution procedure is its two-stage structure which reveals the epidemiological consequences of the controls used for exclusion. This information augments technical and economic considerations in the design of an acceptable exclusion strategy. Our methodology is used in two examples to show how time-varying controls can exploit the interference and reinforcement created by the external and internal lag structure and encourage the system to 'take over' some of the exclusion effort. On-off control switching, resonant amplification, optimality and controllability are important issues that emerge in the discussion.
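A generic sketch of the exclusion test for a seasonally forced SIR model linearized about the disease-free state: integrate the infected equation over one period and check whether the Floquet multiplier is below one. The parameter values and the control waveform are hypothetical, and this is not the authors' two-stage procedure.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta0, eps, gamma, S0 = 1.5, 0.4, 1.2, 1.0
beta = lambda t: beta0 * (1.0 + eps * np.cos(2 * np.pi * t))
c = lambda t: 0.3 * (1.0 + np.cos(2 * np.pi * t))    # hypothetical seasonal control

# Linearized infected dynamics about the disease-free state (S ~ S0).
rhs = lambda t, I: (beta(t) * (1.0 - c(t)) * S0 - gamma) * I
sol = solve_ivp(rhs, [0.0, 1.0], [1.0], rtol=1e-8)
multiplier = sol.y[0, -1]                            # I(T) / I(0) over one period
print(multiplier, "excluded" if multiplier < 1.0 else "not excluded")
```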
NASA Technical Reports Server (NTRS)
Manhardt, P. D.
1982-01-01
The CMC fluid mechanics program system was developed to translate the theoretical finite element numerical solution methodology, applied to nonlinear field problems, into a versatile computer code for comprehensive flow field analysis. Data procedures for the CMC three-dimensional Parabolic Navier-Stokes (PNS) algorithm are presented. General data procedures are described, along with a standard juncture corner flow test case data deck. A listing of the data deck and an explanation of the grid generation methodology are presented. Tabulations of all commands and variables available to the user are given, in alphabetical order, with cross-reference numbers that refer to storage addresses.
Probabilistic analysis of structures involving random stress-strain behavior
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Thacker, B. H.; Harren, S. V.
1991-01-01
The present methodology for analysis of structures with random stress-strain behavior characterizes the uniaxial stress-strain curve in terms of (1) elastic modulus, (2) engineering stress at initial yield, (3) initial plastic-hardening slope, (4) engineering stress at the point of ultimate load, and (5) engineering strain at the point of ultimate load. The methodology is incorporated into the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) code for probabilistic structural analysis. The illustrative problem of a thick cylinder under internal pressure, where both the internal pressure and the stress-strain curve are random, is addressed by means of the code. The response quantity is the cumulative distribution function of the equivalent plastic strain at the inner radius.
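A minimal Monte Carlo sketch of this kind of probabilistic analysis follows; the closed-form response used here is a hypothetical placeholder standing in for the NESSUS thick-cylinder solution, and the distributions and parameter values are illustrative only.

```python
# Minimal Monte Carlo sketch: random load and stress-strain parameters are
# propagated through a placeholder closed-form response to an empirical CDF.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
sy = rng.normal(350e6, 25e6, n)     # engineering stress at initial yield [Pa]
H  = rng.normal(2.0e9, 0.2e9, n)    # initial plastic-hardening slope [Pa]
p  = rng.normal(300e6, 30e6, n)     # random internal pressure [Pa]

# Placeholder response: equivalent plastic strain from a bilinear stress-strain
# curve once an effective stress proportional to p exceeds yield.
sigma_eff = 1.5 * p                 # hypothetical stress-concentration factor
eps_p = np.maximum(sigma_eff - sy, 0.0) / H

x = np.sort(eps_p)                  # sorted samples give the empirical CDF
print("P(eps_p <= 0.02) ~", float(np.mean(eps_p <= 0.02)))
print("90th-percentile plastic strain:", x[int(0.9 * n) - 1])
```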
Managing cognitive impairment in the elderly: conceptual, intervention and methodological issues.
Buckwalter, K C; Stolley, J M; Farran, C J
1999-11-11
With the aging of society, the incidence of dementia in the elderly is also increasing, resulting in increasing numbers of individuals with cognitive impairment. Nurses and other researchers have investigated issues concerning the management of cognitive impairment. This article highlights conceptual, intervention and methodological issues associated with this phenomenon. Cognitive change is a multivariate construct that includes alterations in a variety of information processing mechanisms such as problem solving ability, memory, perception, attention and learning, and judgement. Although there is a large body of research, conceptual, intervention and methodological issues remain. Much of the clinical research on cognitive impairment is atheoretical, with this issue only recently being addressed. While many clinical interventions have been proposed, few have been adequately tested. There are also various methodological concerns, such as small sample sizes and limited statistical power, study design issues (experimental vs. non-experimental), and internal and external validity problems. Clearly, additional research designed to intervene with these difficult behaviors is needed. A variety of psychosocial, environmental and physical parameters must be considered in the nursing care of persons with cognitive impairment. Special attention has been given to interventions associated with disruptive behaviors. Interventions are complex and knowledge must be integrated from both the biomedical and behavioral sciences in order to deal effectively with the numerous problems that can arise over a long and changing clinical course. Some researchers and clinicians have suggested that a new culture regarding dementia care is needed, one that focuses on changing attitudes and beliefs about persons with dementia and one that changes how organizations deliver that care. This review identifies key conceptual, intervention and methodological issues and recommends how these issues might be addressed in the future.
A Proposed Methodology for the Control of a Semi-Robotic Convoy
1991-01-01
...verifies that the convoy is controlled within the specifications of the system. ... From successive position samples of the lead vehicle, the velocity information is obtained. With this information, the trailing vehicles can repeat the learned...
Pouria Bahmani; John W. van de Lindt; Mikhail Gershfeld; Gary L. Mochizuki; Steven E. Pryor; Douglas Rammer
2016-01-01
Soft-story wood-frame buildings have been recognized as a disaster preparedness problem for decades. There are tens of thousands of these multifamily three- and four-story structures throughout California and other parts of the United States. The majority were constructed between 1920 and 1970 and are prevalent in regions such as the San Francisco Bay Area in...
Prediction of discretization error using the error transport equation
NASA Astrophysics Data System (ADS)
Celik, Ismail B.; Parsons, Don Roscoe
2017-06-01
This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
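The core of the approach can be illustrated on a simple model problem. The sketch below applies the same idea to u'' = f in one dimension: the numerical solution is reconstructed with a cubic spline (in place of the paper's weighted-spline blending), the PDE residual of the reconstruction serves as the error source term, and the error equation is solved with the same discretization.

```python
# Minimal 1D sketch of the error-transport idea on u'' = f (assumed model
# problem; the paper treats the Navier-Stokes equations).
import numpy as np
from scipy.interpolate import CubicSpline

n = 41
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.sin(np.pi * x)                      # manufactured source term
u_exact = -np.sin(np.pi * x) / np.pi**2

# Second-order finite-difference solve of u'' = f with u(0) = u(1) = 0.
A = (np.diag(-2.0 * np.ones(n - 2)) + np.diag(np.ones(n - 3), 1)
     + np.diag(np.ones(n - 3), -1)) / h**2
u = np.zeros(n)
u[1:-1] = np.linalg.solve(A, f[1:-1])

# Error source term: PDE residual of a smooth reconstruction of the solution.
S = f - CubicSpline(x, u)(x, 2)            # f minus d2(spline)/dx2

# Error transport equation e'' = S, solved with the same discretization.
e = np.zeros(n)
e[1:-1] = np.linalg.solve(A, S[1:-1])

print("max predicted error:", np.abs(e).max())
print("max true error     :", np.abs(u - u_exact).max())
```

The prediction recovers the magnitude of the true discretization error from a single grid, which is the practical appeal of the ETE route over Richardson extrapolation.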
Advancing MODFLOW Applying the Derived Vector Space Method
NASA Astrophysics Data System (ADS)
Herrera, G. S.; Herrera, I.; Lemus-García, M.; Hernandez-Garcia, G. D.
2015-12-01
The most effective domain decomposition methods (DDM) are non-overlapping DDMs. Recently a new approach, the DVS framework, based on an innovative discretization method that uses a non-overlapping system of nodes (the derived nodes), was introduced and developed by I. Herrera et al. [1, 2]. Using the DVS approach, a group of four algorithms, referred to as the 'DVS algorithms', which fulfill the DDM paradigm (i.e., the solution of global problems is obtained by the resolution of local problems exclusively), has been derived. Such procedures are applicable to any boundary-value problem, or system of such equations, for which a standard discretization method is available, and software with a high degree of parallelization can then be constructed. In a parallel talk at this AGU Fall Meeting, Ismael Herrera will introduce the general DVS methodology. The application of the DVS algorithms has been demonstrated in the solution of several boundary-value problems of interest in geophysics. Numerical examples for a single equation, for symmetric, non-symmetric and indefinite problems, were demonstrated before [1, 2]; for these problems the DVS algorithms exhibited significantly improved numerical performance with respect to standard versions of DDM algorithms. In view of these results, our research group is applying the DVS method to a widely used simulator for the first time; here we present the advances in applying this method to the parallelization of MODFLOW. Efficiency results for a group of tests will be presented. References: [1] I. Herrera, L.M. de la Cruz and A. Rosas-Medina, Non-overlapping discretization methods for partial differential equations, Numer. Meth. Part. D. E. (2013). [2] I. Herrera and I. Contreras, "An Innovative Tool for Effectively Applying Highly Parallelized Software to Problems of Elasticity", Geofísica Internacional, 2015 (in press).
Fully-coupled analysis of jet mixing problems. Three-dimensional PNS model, SCIP3D
NASA Technical Reports Server (NTRS)
Wolf, D. E.; Sinha, N.; Dash, S. M.
1988-01-01
Numerical procedures formulated for the analysis of 3D jet mixing problems, as incorporated in the computer model SCIP3D, are described. The overall methodology closely parallels that developed in the earlier 2D axisymmetric jet mixing model, SCIPVIS. SCIP3D integrates the 3D parabolized Navier-Stokes (PNS) jet mixing equations, cast in mapped Cartesian or cylindrical coordinates, employing the explicit MacCormack algorithm. A pressure-split variant of this algorithm is employed in subsonic regions, with a sublayer approximation utilized for treating the streamwise pressure component. SCIP3D contains both the ks and kW turbulence models, and employs a two-component mixture approach to treat jet exhausts of arbitrary composition. Specialized grid procedures are used to adjust the grid growth in accordance with the growth of the jet, including a hybrid Cartesian/cylindrical grid procedure for rectangular jets which moves the hybrid coordinate origin towards the flow origin as the jet transitions from a rectangular to a circular shape. Numerous calculations are presented for rectangular mixing problems, as well as for a variety of basic unit problems exhibiting the overall capabilities of SCIP3D.
Numerical Simulation of Hysteretic Live Load Effect in a Soil-Steel Bridge
NASA Astrophysics Data System (ADS)
Sobótka, Maciej
2014-03-01
The paper presents a numerical simulation of the hysteretic live load effect in a soil-steel bridge. The effect was originally identified experimentally by Machelski [1], [2]. In the full-scale test performed, a truck crossed the bridge in one direction and then in the other, while displacements and stresses in the shell were measured. The major conclusion from that research was that the measured quantities formed hysteretic loops. A numerical simulation of the effect is addressed in the present work. The analysis was performed using the Flac finite difference code. The solution methodology implemented in Flac makes it possible to account for the load sequence and the non-linear mechanical behaviour of the structure. The numerical model incorporates linear elastic constitutive relations for the soil backfill, the steel shell and the sheet piles, the latter forming a flexible substructure for the shell. The contact zone between the shell and the soil backfill is described by an elastic-plastic constitutive model, with the maximum shear stress in the contact zone limited by the Coulomb condition and the plastic flow rule defined by a dilation angle ψ = 0. The obtained results of the numerical analysis are in fair agreement with the experimental evidence. The primary finding from the simulation is that slip in the interface can be considered an explanation for the hysteresis observed in the charts of displacement and stress in the shell.
First stirrings: cultural notes on orgasm, ejaculation, and wet dreams.
Janssen, Diederik F
2007-05-01
Both the findings and the limitations of numeric milestone research in sexology have a bearing on the pedagogical status of pleasure, as well as the cultural underpinnings of the notion of a psychosexual milestone. An overview is offered of international data pertaining to the chronology of three "milestones" in sexual autobiography: first orgasm (orgasmarche), first ejaculation (oigarche), and first wet dream (nocturnal emission). Methodological problems associated with the measurement of these variables are discussed. These problems are then situated in a culturalist perspective. It is concluded that orgasms are cultural artifacts in terms of their chronological occurrence as well as perceived salience, necessity, and "age appropriateness".
Development of an Aerothermoelastic-Acoustics Simulation Capability for Flight Vehicles
NASA Technical Reports Server (NTRS)
Gupta, K. K.; Choi, S. B.; Ibrahim, A.
2010-01-01
A novel finite-element-based numerical analysis methodology, suitable for accurate and efficient simulation of practical, complex flight vehicles, is presented in this paper, together with an associated computer code developed in this connection. Thermal effects of high-speed flow, obtained from a heat conduction analysis, are incorporated in the modal analysis, which in turn affects the unsteady flow arising from the interaction of the elastic structure with the air. Numerical examples pertaining to representative problems are given in detail, testifying to the efficacy of the advocated techniques. This is a unique implementation of temperature effects in a finite element CFD-based multidisciplinary simulation capability involving large-scale computations.
NASA Astrophysics Data System (ADS)
Rana, Sachin; Ertekin, Turgay; King, Gregory R.
2018-05-01
Reservoir history matching is frequently viewed as an optimization problem which involves minimizing the misfit between simulated and observed data. Many gradient and evolutionary-strategy based optimization algorithms have been proposed to solve this problem, and these typically require a large number of numerical simulations to find feasible solutions. Therefore, a new methodology referred to as GP-VARS is proposed in this study, which uses forward and inverse Gaussian process (GP) based proxy models combined with a novel application of variogram analysis of response surface (VARS) based sensitivity analysis to efficiently solve high dimensional history matching problems. An empirical Bayes approach is proposed to optimally train GP proxy models for any given data. The history matching solutions are found via Bayesian optimization (BO) on the forward GP models and via predictions of the inverse GP model in an iterative manner. An uncertainty quantification method using MCMC sampling in conjunction with the GP model is also presented to obtain a probabilistic estimate of reservoir properties and estimated ultimate recovery (EUR). An application of the proposed GP-VARS methodology to the PUNQ-S3 reservoir is presented, in which it is shown that GP-VARS provides history match solutions in approximately four times fewer numerical simulations than the differential evolution (DE) algorithm. Furthermore, a comparison of uncertainty quantification results obtained by GP-VARS, EnKF and other previously published methods shows that the P50 estimate of oil EUR obtained by GP-VARS is in close agreement with the true values for the PUNQ-S3 reservoir.
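A minimal sketch of the forward-GP/Bayesian-optimization loop follows; the inverse-GP and VARS sensitivity steps are omitted, and a cheap analytic misfit stands in for the reservoir simulator. All names and parameter values are hypothetical.

```python
# Minimal sketch of a GP-proxy Bayesian-optimization loop for history matching.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

def simulator_misfit(theta):          # stand-in for simulated-vs-observed misfit
    return float(np.sum((theta - 0.3) ** 2))

dim, n_init, n_iter = 4, 8, 30
X = rng.uniform(0.0, 1.0, (n_init, dim))          # initial design in [0,1]^dim
y = np.array([simulator_misfit(t) for t in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(n_iter):
    gp.fit(X, y)                                  # refit proxy to all data so far
    cand = rng.uniform(0.0, 1.0, (2000, dim))     # random candidate pool
    mu, sd = gp.predict(cand, return_std=True)
    acq = mu - 2.0 * sd                           # lower confidence bound (minimize)
    theta = cand[np.argmin(acq)]
    X = np.vstack([X, theta])                     # one expensive evaluation per loop
    y = np.append(y, simulator_misfit(theta))

print("best misfit:", y.min(), "at", X[np.argmin(y)])
```

Each outer iteration costs a single "simulator" call, which is where the reported factor-of-four savings over population-based methods such as DE comes from.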
Recommendations for benefit-risk assessment methodologies and visual representations.
Hughes, Diana; Waddingham, Ed; Mt-Isa, Shahrul; Goginsky, Alesia; Chan, Edmond; Downey, Gerald F; Hallgreen, Christine E; Hockley, Kimberley S; Juhaeri, Juhaeri; Lieftucht, Alfons; Metcalf, Marilyn A; Noel, Rebecca A; Phillips, Lawrence D; Ashby, Deborah; Micaleff, Alain
2016-03-01
The purpose of this study is to draw on the practical experience from the PROTECT BR case studies and make recommendations regarding the application of a number of methodologies and visual representations for benefit-risk assessment. Eight case studies based on the benefit-risk balance of real medicines were used to test various methodologies that had been identified from the literature as having potential applications in benefit-risk assessment. Recommendations were drawn up based on the results of the case studies. A general pathway through the case studies was evident, with various classes of methodologies having roles to play at different stages. Descriptive and quantitative frameworks were widely used throughout to structure problems, with other methods such as metrics, estimation techniques and elicitation techniques providing ways to incorporate technical or numerical data from various sources. Similarly, tree diagrams and effects tables were universally adopted, with other visualisations available to suit specific methodologies or tasks as required. Every assessment was found to follow five broad stages: (i) Planning, (ii) Evidence gathering and data preparation, (iii) Analysis, (iv) Exploration and (v) Conclusion and dissemination. Adopting formal, structured approaches to benefit-risk assessment was feasible in real-world problems and facilitated clear, transparent decision-making. Prior to this work, no extensive practical application and appraisal of methodologies had been conducted using real-world case examples, leaving users with limited knowledge of their usefulness in the real world. The practical guidance provided here takes us one step closer to a harmonised approach to benefit-risk assessment from multiple perspectives. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Hardin, Jay C.; Pope, D. Stuart
1989-01-01
An engineering estimate of the spectrum of atmospheric microburst noise radiation in the range 2-20 Hz is developed. This prediction is obtained via a marriage of standard aeroacoustic theory with a numerical computation of the relevant fluid dynamics. The 'computational aeroacoustics' technique applied here to the interpretation of atmospheric noise measurements is illustrative of a methodology that can now be employed in a wide class of problems.
An Inverse Problem for a Class of Conditional Probability Measure-Dependent Evolution Equations
Mirzaev, Inom; Byrne, Erin C.; Bortz, David M.
2016-01-01
We investigate the inverse problem of identifying a conditional probability measure in measure-dependent evolution equations arising in size-structured population modeling. We formulate the inverse problem as a least squares problem for the probability measure estimation. Using the Prohorov metric framework, we prove existence and consistency of the least squares estimates and outline a discretization scheme for approximating a conditional probability measure. For this scheme, we prove general method stability. The work is motivated by Partial Differential Equation (PDE) models of flocculation for which the shape of the post-fragmentation conditional probability measure greatly impacts the solution dynamics. To illustrate our methodology, we apply the theory to a particular PDE model that arises in the study of population dynamics for flocculating bacterial aggregates in suspension, and provide numerical evidence for the utility of the approach. PMID:28316360
NASA Astrophysics Data System (ADS)
Chatterjee, K.; Schunk, R. W.
2017-12-01
The refilling of the plasmasphere following a geomagnetic storm remains one of the longstanding problems in the area of ionosphere-magnetosphere coupling. Both diffusion and hydrodynamic approximations have been adopted for the modeling and solution of this problem. The diffusion approximation neglects the nonlinear inertial term in the momentum equation, so this approximation is not rigorously valid immediately after the storm. Over the last few years, we have developed a hydrodynamic refilling model using the flux-corrected transport method, a numerical method that is extremely well suited to handling nonlinear problems with shocks and discontinuities. The plasma transport equations are solved along 1D closed magnetic field lines that connect conjugate ionospheres, and the model currently includes three ion (H+, O+, He+) and two neutral (O, H) species. In this work, each ion species under consideration has been modeled as two separate streams emanating from the conjugate hemispheres, and the model correctly predicts supersonic ion speeds and the presence of high levels of helium during the early hours of refilling. The ultimate objective of this research is the development of a 3D model for the plasmasphere refilling problem; with additional development, the same methodology can potentially be applied to the study of other complex space plasma coupling problems in closed flux tube geometries. Index Terms: 2447 Modeling and forecasting [IONOSPHERE]; 2753 Numerical modeling [MAGNETOSPHERIC PHYSICS]; 7959 Models [SPACE WEATHER]
Implementation of Preconditioned Dual-Time Procedures in OVERFLOW
NASA Technical Reports Server (NTRS)
Pandya, Shishir A.; Venkateswaran, Sankaran; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2003-01-01
Preconditioning methods have become the method of choice for the solution of flowfields involving the simultaneous presence of low Mach and transonic regions. It is well known that these methods are important for ensuring accurate numerical discretization as well as convergence efficiency over various operating conditions such as low Mach number, low Reynolds number and high Strouhal number. For unsteady problems, the preconditioning is introduced within a dual-time framework, wherein the physical time derivatives are used to march the unsteady equations and the preconditioned time derivatives are used for purposes of numerical discretization and iterative solution. In this paper, we describe the implementation of the preconditioned dual-time methodology in the OVERFLOW code. To demonstrate the performance of the method, we employ both simple and practical unsteady flowfields, including vortex propagation in a low Mach number flow, the flowfield of an impulsively started plate (Stokes' first problem) and a cylindrical jet in a low Mach number crossflow with ground effect. All the results demonstrate that the preconditioning algorithm is responsible for improvements to both numerical accuracy and convergence efficiency and thereby enables low Mach number unsteady computations to be performed at a fraction of the cost of traditional time-marching methods.
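Stripped to a scalar model equation, the dual-time idea looks as follows; the low-Mach preconditioner itself is omitted (for the model ODE du/dt = -u it reduces to a scalar pseudo-time relaxation), and a BDF2 physical time derivative is assumed.

```python
# Minimal scalar sketch of dual-time stepping: an inner pseudo-time iteration
# drives the unsteady residual (BDF2 physical derivative plus the "spatial"
# term) to zero at each physical time step.
import numpy as np

dt, dtau, nsteps, ninner = 0.1, 0.05, 50, 200
u_nm1 = u_n = 1.0                         # model ODE du/dt = -u, exact: exp(-t)

for step in range(nsteps):
    u = u_n                               # initial guess for the new time level
    for _ in range(ninner):               # pseudo-time (inner) iterations
        R = (3.0 * u - 4.0 * u_n + u_nm1) / (2.0 * dt) + u   # unsteady residual
        u -= dtau * R                     # pseudo-time relaxation step
    u_nm1, u_n = u_n, u

print("u(T) =", u_n, " exact =", np.exp(-nsteps * dt))
```

In a flow solver the inner update is replaced by the preconditioned iterative scheme, so the convergence of the inner loop, not the physical time step, sets the cost per step.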
Windowed Green function method for the Helmholtz equation in the presence of multiply layered media
NASA Astrophysics Data System (ADS)
Bruno, O. P.; Pérez-Arancibia, C.
2017-06-01
This paper presents a new methodology for the solution of problems of two- and three-dimensional acoustic scattering (and, in particular, two-dimensional electromagnetic scattering) by obstacles and defects in the presence of an arbitrary number of penetrable layers. Relying on the use of certain slow-rise windowing functions, the proposed windowed Green function approach efficiently evaluates oscillatory integrals over unbounded domains, with high accuracy, without recourse to the highly expensive Sommerfeld integrals that have typically been used to account for the effect of underlying planar multilayer structures. The proposed methodology, whose theoretical basis was presented in the recent contribution (Bruno et al. 2016 SIAM J. Appl. Math. 76, 1871-1898. (doi:10.1137/15M1033782)), is fast, accurate, flexible and easy to implement. Our numerical experiments demonstrate that the numerical errors resulting from the proposed approach decrease faster than any negative power of the window size. In a number of examples considered in this paper, the proposed method is up to thousands of times faster, for a given accuracy, than corresponding methods based on the use of Sommerfeld integrals.
Outcomes of planetary close encounters - A systematic comparison of methodologies
NASA Technical Reports Server (NTRS)
Greenberg, Richard; Carusi, Andrea; Valsecchi, G. B.
1988-01-01
Several methods for estimating the outcomes of close planetary encounters are compared on the basis of the numerical integration of a range of encounter types. An attempt is made to lay the foundation for the development of predictive rules concerning the encounter outcomes applicable to the refinement of the statistical mechanics that apply to planet-formation and similar problems concerning planetary swarms. Attention is given to Oepik's (1976) formulation of the two-body approximation, whose predicted motion differs from the correct three-body behavior.
Fully Numerical Methods for Continuing Families of Quasi-Periodic Invariant Tori in Astrodynamics
NASA Astrophysics Data System (ADS)
Baresi, Nicola; Olikara, Zubin P.; Scheeres, Daniel J.
2018-06-01
Quasi-periodic invariant tori are of great interest in astrodynamics because of their capability to further expand the design space of satellite missions. However, there is no general consent on what is the best methodology for computing these dynamical structures. This paper compares the performance of four different approaches available in the literature. The first two methods compute invariant tori of flows by solving a system of partial differential equations via either central differences or Fourier techniques. In contrast, the other two strategies calculate invariant curves of maps via shooting algorithms: one using surfaces of section, and one using a stroboscopic map. All of the numerical procedures are tested in the co-rotating frame of the Earth as well as in the planar circular restricted three-body problem. The results of our numerical simulations show which of the reviewed procedures should be preferred for future studies of astrodynamics systems.
Well-balanced high-order solver for blood flow in networks of vessels with variable properties.
Müller, Lucas O; Toro, Eleuterio F
2013-12-01
We present a well-balanced, high-order non-linear numerical scheme for solving a hyperbolic system that models one-dimensional flow in blood vessels with variable mechanical and geometrical properties along their length. Using a suitable set of test problems with exact solution, we rigorously assess the performance of the scheme. In particular, we assess the well-balanced property and the effective order of accuracy through an empirical convergence rate study. Schemes of up to fifth order of accuracy in both space and time are implemented and assessed. The numerical methodology is then extended to realistic networks of elastic vessels and is validated against published state-of-the-art numerical solutions and experimental measurements. It is envisaged that the present scheme will constitute the building block for a closed, global model for the human circulation system involving arteries, veins, capillaries and cerebrospinal fluid. Copyright © 2013 John Wiley & Sons, Ltd.
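The empirical convergence-rate study mentioned above follows a standard pattern, sketched below with a placeholder second-order "solver": the observed order is recovered from errors on successively refined grids as p = log(e_h / e_{h/2}) / log 2. The solve routine here is a hypothetical stand-in, not the blood-flow scheme.

```python
# Minimal sketch of an empirical convergence-rate (observed order) study.
import numpy as np

def solve(n):
    # Placeholder: midpoint-rule quadrature of a smooth integrand, which is
    # second-order accurate, standing in for the actual solver.
    x = (np.arange(n) + 0.5) / n
    return np.sum(np.sin(np.pi * x)) / n

exact = 2.0 / np.pi                        # exact value of the test problem
errors = {n: abs(solve(n) - exact) for n in (20, 40, 80, 160)}
ns = sorted(errors)
for n0, n1 in zip(ns, ns[1:]):
    p = np.log(errors[n0] / errors[n1]) / np.log(2.0)
    print(f"n={n1:4d}  observed order ~ {p:.2f}")   # expect ~2 for this stand-in
```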
NASA Technical Reports Server (NTRS)
Sreekanta Murthy, T.
1992-01-01
Results of an investigation of formal nonlinear-programming-based numerical optimization techniques for helicopter airframe vibration reduction are summarized. The objective and constraint functions and the sensitivity expressions used in the formulation of airframe vibration optimization problems are presented and discussed. Implementation of a new computational procedure based on MSC/NASTRAN and CONMIN in a computer program system called DYNOPT, for optimizing airframes subject to strength, frequency, dynamic response, and dynamic stress constraints, is described. An optimization methodology is proposed which is thought to provide a new way of applying formal optimization techniques during the various phases of the airframe design process. Numerical results obtained from the application of the DYNOPT optimization code to a helicopter airframe are discussed.
A new approach for minimum phase output definition
NASA Astrophysics Data System (ADS)
Jahangiri, Fatemeh; Talebi, Heidar Ali; Menhaj, Mohammad Bagher; Ebenbauer, Christian
2017-01-01
This paper presents a novel method for output redefinition for linear systems. The approach also determines the possible relative degrees of the system corresponding to any new output vector. To guarantee the minimum phase property with a prescribed relative degree, a set of new conditions is introduced. A key feature of these conditions is that no transformation of any form is needed, which makes the scheme suitable for optimisation problems in control that must ensure the minimum phase property. Moreover, the results are useful for sensor placement problems and for obtaining minimum phase approximations of non-minimum phase systems. Numerical examples, including an example of an unmanned aerial vehicle system, are given to demonstrate the effectiveness of the methodology.
NASA Astrophysics Data System (ADS)
Hobiny, Aatef D.; Abbas, Ibrahim A.
2018-01-01
The dual phase lag (DPL) heat transfer model is applied to study the photo-thermal interaction in an infinite semiconductor medium containing a spherical hole. The inner surface of the cavity is traction free and thermally loaded by a pulsed heat flux. Using the eigenvalue approach and the Laplace transform, solutions for the physical variables are obtained analytically. Numerical computations are performed for a silicon-like semiconductor material. A comparison among the theories, i.e., dual phase lag (DPL), Lord and Shulman's (LS) and the classically coupled thermoelastic (CT) theory, is presented graphically. The results further show that the analytical scheme overcomes the mathematical difficulties of analyzing such problems.
Integrated Controls-Structures Design Methodology for Flexible Spacecraft
NASA Technical Reports Server (NTRS)
Maghami, P. G.; Joshi, S. M.; Price, D. B.
1995-01-01
This paper proposes an approach for the design of flexible spacecraft, wherein the structural design and the control system design are performed simultaneously. The integrated design problem is posed as an optimization problem in which both the structural parameters and the control system parameters constitute the design variables, which are used to optimize a common objective function, thereby resulting in an optimal overall design. The approach is demonstrated by application to the integrated design of a geostationary platform, and to a ground-based flexible structure experiment. The numerical results obtained indicate that the integrated design approach generally yields spacecraft designs that are substantially superior to the conventional approach, wherein the structural design and control design are performed sequentially.
Evolutionary fuzzy modeling human diagnostic decisions.
Peña-Reyes, Carlos Andrés
2004-05-01
Fuzzy CoCo is a methodology, combining fuzzy logic and evolutionary computation, for constructing systems able to accurately predict the outcome of a human decision-making process while providing an understandable explanation of the underlying reasoning. Fuzzy logic provides a formal framework for constructing systems exhibiting both good numeric performance (accuracy) and linguistic representation (interpretability). However, fuzzy modeling, meaning the construction of fuzzy systems, is an arduous task, demanding the identification of many parameters. To solve it, we use evolutionary computation techniques (specifically cooperative coevolution), which are widely used to search for adequate solutions in complex spaces. We have successfully applied the algorithm to model the decision processes involved in two breast cancer diagnostic problems, the WBCD problem and the Catalonia mammography interpretation problem, obtaining systems of both high performance and high interpretability. For the Catalonia problem, an evolved system was embedded within a Web-based tool, called COBRA, for aiding radiologists in mammography interpretation.
NASA Astrophysics Data System (ADS)
Vera, N. C.; GMMC
2013-05-01
In this paper we present results for macrohybrid mixed Darcian flow in porous media in a general three-dimensional domain. The global problem is solved as a set of local subproblems posed via a domain decomposition method. The unknown fields of the local problems, velocity and pressure, are approximated using mixed finite elements. The general three-dimensional domain is discretized using tetrahedra; the discrete domain is decomposed into subdomains and the original problem is reformulated as a set of subproblems communicating through their interfaces. To solve this set of subproblems we use mixed finite elements and parallel computing. Parallelizing a problem with this methodology can, in principle, fully exploit the computing equipment and deliver results in less time, two very important elements in modeling. References: G. Alduncin and N. Vera-Guzmán, Parallel proximal-point algorithms for mixed finite element models of flow in the subsurface, Commun. Numer. Meth. Engng 2004; 20:83-104 (DOI: 10.1002/cnm.647). Z. Chen, G. Huan and Y. Ma, Computational Methods for Multiphase Flows in Porous Media, SIAM, Society for Industrial and Applied Mathematics, Philadelphia, 2006. A. Quarteroni and A. Valli, Numerical Approximation of Partial Differential Equations, Springer-Verlag, Berlin, 1994. F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods, Springer, New York, 1991.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ustinov, E. A., E-mail: eustinov@mail.wplus.net
This paper presents a refined technique to describe two-dimensional phase transitions in dense fluids adsorbed on a crystalline surface. Prediction of the parameters of 2D liquid–solid equilibrium is known to be an extremely challenging problem, mainly because of the small difference in the thermodynamic functions of the coexisting phases and the limited accuracy of numerical experiments at high density; this seriously limits the various attempts to circumvent the problem. To improve the situation, a new methodology based on the kinetic Monte Carlo method was applied. The methodology involves analysis of equilibrium gas–liquid and gas–solid systems undergoing an external potential, which allows gradual shifting of the phase coexistence parameters. The interrelation of the chemical potential and tangential pressure for each system is then treated with the Gibbs–Duhem equation to obtain the point of intersection corresponding to the liquid/solid–solid equilibrium coexistence. The methodology is demonstrated on the krypton–graphite system below and above the 2D critical temperature. Using experimental data on the liquid–solid and commensurate–incommensurate transitions in the krypton monolayer derived from adsorption isotherms, the Kr–graphite Lennard-Jones parameters have been corrected, resulting in a higher periodic potential modulation.
Reliability analysis of composite structures
NASA Technical Reports Server (NTRS)
Kan, Han-Pin
1992-01-01
A probabilistic static stress analysis methodology has been developed to estimate the reliability of a composite structure. Closed form stress analysis methods are the primary analytical tools used in this methodology. These structural mechanics methods are used to identify independent variables whose variations significantly affect the performance of the structure. Once these variables are identified, scatter in their values is evaluated and statistically characterized. The scatter in applied loads and the structural parameters are then fitted to appropriate probabilistic distribution functions. Numerical integration techniques are applied to compute the structural reliability. The predicted reliability accounts for scatter due to variability in material strength, applied load, fabrication and assembly processes. The influence of structural geometry and mode of failure are also considerations in the evaluation. Example problems are given to illustrate various levels of analytical complexity.
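A minimal sketch of the final reliability computation follows; the distribution families and parameters are hypothetical, and a simple load-strength interference integral stands in for the full set of scatter sources (material strength, applied load, fabrication and assembly) described above.

```python
# Minimal sketch of a load-strength reliability integral: scatter in strength R
# and applied stress S is fitted to distributions and P(failure) = P(R < S) is
# computed by numerical integration. All values are hypothetical.
import numpy as np
from scipy import stats
from scipy.integrate import quad

R = stats.lognorm(s=0.08, scale=600.0)    # composite strength [MPa] (assumed)
S = stats.norm(loc=420.0, scale=45.0)     # applied stress [MPa] (assumed)

# P(R < S) = integral of F_R(x) * f_S(x) dx over the load range.
pf, _ = quad(lambda x: R.cdf(x) * S.pdf(x), 0.0, 1000.0)
print("probability of failure ~", pf, " reliability ~", 1.0 - pf)
```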
A Joint Replenishment Inventory Model with Lost Sales
NASA Astrophysics Data System (ADS)
Devy, N. L.; Ai, T. J.; Astanti, R. D.
2018-04-01
This paper deals with a two-item joint replenishment inventory problem in which the demand for each item is constant and deterministic. Inventory replenishment is conducted periodically every T time intervals, and joint replenishment of both items is possible; item i is replenished every ZiT time intervals. Replenishments are instantaneous. All shortages are treated as lost sales, with a maximum allowance Si for item i. A mathematical model is formulated to determine the basic time cycle T, the replenishment multipliers Zi, and the maximum lost sales Si that minimize the total cost per unit time. A solution methodology is proposed to solve the model, and a numerical example is provided to demonstrate the effectiveness of the proposed methodology; see also the sketch below.
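For orientation, the sketch below solves the classic deterministic core of such a model, omitting the lost-sales terms Si for brevity: for a fixed multiplier vector the optimal basic cycle has a closed form, T* = sqrt(2(K + sum k_i/Z_i) / sum h_i d_i Z_i), and the integer multipliers are found by enumeration. All cost and demand figures are hypothetical.

```python
# Minimal sketch of the classic two-item joint replenishment cost (lost-sales
# terms omitted); enumerate integer multipliers Z, use the closed-form T*.
import itertools
import math

K = 100.0                      # major (joint) ordering cost per base cycle
k = [15.0, 25.0]               # minor ordering costs per item
d = [400.0, 150.0]             # demand rates
h = [2.0, 3.0]                 # holding costs per unit per unit time

best = None
for Z in itertools.product(range(1, 6), repeat=2):
    A = K + sum(ki / zi for ki, zi in zip(k, Z))        # ordering cost per T
    den = sum(hi * di * zi for hi, di, zi in zip(h, d, Z))
    T = math.sqrt(2.0 * A / den)                        # closed-form optimum
    cost = A / T + 0.5 * T * den                        # total cost per unit time
    if best is None or cost < best[0]:
        best = (cost, T, Z)

print("min cost/time = %.2f at T = %.3f, Z = %s" % best)
```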
A response surface methodology based damage identification technique
NASA Astrophysics Data System (ADS)
Fang, S. E.; Perera, R.
2009-06-01
Response surface methodology (RSM) is a combination of statistical and mathematical techniques to represent the relationship between the inputs and outputs of a physical system by explicit functions. This methodology has been widely employed in many applications such as design optimization, response prediction and model validation, but so far the literature related to its application in structural damage identification (SDI) is scarce. Therefore this study attempts to present a systematic SDI procedure comprising four sequential steps of feature selection, parameter screening, primary response surface (RS) modeling and updating, and reference-state RS modeling with SDI realization, using the factorial design (FD) and the central composite design (CCD). The last two steps imply the implementation of inverse problems by model updating, in which the RS models substitute for the FE models. The proposed method was verified against a numerical beam, a tested reinforced concrete (RC) frame and an experimental full-scale bridge, with modal frequencies as the output responses. It was found that the proposed RSM-based method performs well in predicting the damage of both numerical and experimental structures having single and multiple damage scenarios. The screening capacity of the FD can provide quantitative estimation of the significance levels of updating parameters. Meanwhile, the second-order polynomial model established by the CCD provides adequate accuracy in expressing the dynamic behavior of a physical system.
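The workflow can be condensed as in the sketch below: a stand-in "FE model" is evaluated at design points, a second-order polynomial response surface is fitted by least squares, and the surface replaces the FE model inside the inverse (model-updating) search. The two-mode frequency function is hypothetical, and random design points are used in place of a formal CCD.

```python
# Minimal sketch of RSM-based damage identification with a hypothetical
# two-mode "FE model" and a second-order polynomial response surface.
import numpy as np

rng = np.random.default_rng(2)

def fe_freqs(k):                         # stand-in for an FE modal analysis
    return np.array([10.0 * np.sqrt(k[0]) + 4.0 * np.sqrt(k[1]),
                      6.0 * np.sqrt(k[0]) + 9.0 * np.sqrt(k[1])])

def features(K):                         # second-order polynomial basis, 2 factors
    k1, k2 = K[:, 0], K[:, 1]
    return np.column_stack([np.ones_like(k1), k1, k2, k1 * k2, k1**2, k2**2])

K = rng.uniform(0.5, 1.0, (30, 2))       # design points (element stiffness ratios)
F = np.array([fe_freqs(k) for k in K])
beta, *_ = np.linalg.lstsq(features(K), F, rcond=None)   # one RS per frequency

f_meas = fe_freqs(np.array([0.7, 0.9]))  # "measured" damaged-state frequencies
grid = rng.uniform(0.5, 1.0, (200_000, 2))               # crude inverse search
misfit = np.sum((features(grid) @ beta - f_meas) ** 2, axis=1)
print("identified stiffness ratios ~", grid[np.argmin(misfit)])  # true: [0.7, 0.9]
```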
Transport in Dynamical Astronomy and Multibody Problems
NASA Astrophysics Data System (ADS)
Dellnitz, Michael; Junge, Oliver; Koon, Wang Sang; Lekien, Francois; Lo, Martin W.; Marsden, Jerrold E.; Padberg, Kathrin; Preis, Robert; Ross, Shane D.; Thiere, Bianca
We combine the techniques of almost invariant sets (using tree structured box elimination and graph partitioning algorithms) with invariant manifold and lobe dynamics techniques. The result is a new computational technique for computing key dynamical features, including almost invariant sets, resonance regions as well as transport rates and bottlenecks between regions in dynamical systems. This methodology can be applied to a variety of multibody problems, including those in molecular modeling, chemical reaction rates and dynamical astronomy. In this paper we focus on problems in dynamical astronomy to illustrate the power of the combination of these different numerical tools and their applicability. In particular, we compute transport rates between two resonance regions for the three-body system consisting of the Sun, Jupiter and a third body (such as an asteroid). These resonance regions are appropriate for certain comets and asteroids.
NASA Astrophysics Data System (ADS)
Aseev, Nikita; Agoshkov, Valery
2015-04-01
The report is devoted to an approach to the problem of oil spill risk control for protected areas in the Baltic Sea (Aseev et al., 2014). The problem of risk control is understood as determining the optimal quantity of resources necessary to decrease the risk to some acceptable value. Only the moment of the accident is assumed to be a random variable, and the mass of the oil slick is chosen as the control function. For each realization of the random variable a quadratic 'cost functional' is introduced, comprising the cleaning costs and the deviation of the oil pollution damage from its acceptable value. The minimization problem for this functional is solved using methods of optimal control and the theory of adjoint equations (Agoshkov, 2003; Agoshkov et al., 2012), and its solution is found explicitly. To solve the realistic problem of oil spill risk control in the Baltic Sea, a 2D model of oil spill propagation on the sea surface, based on the Seatrack Web model (Liungman and Mattson, 2011), is developed. The model accounts for oil transport by sea currents and wind, turbulent diffusion, spreading, evaporation from the sea surface, dispersion and the formation of water-in-oil emulsion, and computes the basic oil slick parameters: location, mass, volume, thickness, oil density, water content and emulsion viscosity. Results of several numerical experiments in the Baltic Sea using the model and the risk control methodology are presented. Besides the moment of the accident, other parameters of the oil spill and the environment could be treated as random variables; the solution methodology would remain the same, although the computational complexity would increase. The control function should then be converted to a quantity of resources, taking into account the available methods of pollution removal. As a result, the developed 2D propagation model combined with the risk control methodology could provide the basis for oil spill simulation systems, systems for evaluating and controlling oil spill risk and damage at sea, and decision support systems. References: V.I. Agoshkov, The Methods of Optimal Control and Adjoint Equations in Problems of Mathematical Physics, Moscow: INM RAS, 2003, 256 p. (in Russian). V.I. Agoshkov, N.A. Aseev and I.S. Novikov, The Methods of Investigation and Solution of the Problems of Local Sources and Local or Integral Observations, Moscow: INM RAS, 2012, 151 p. (in Russian). N.A. Aseev, V.I. Agoshkov, V.B. Zalesny, R. Aps, P. Kujala and J. Rytkonen, The problem of control of oil pollution risk in the Baltic Sea, Russ. J. Numer. Analysis and Math. Modelling, 2014, V. 29, No. 2, 93-105. O. Liungman and J. Mattson, Scientific documentation of Seatrack Web; physical processes, algorithms and references, 2011, https://stw-helcom.smhi.se/
High-Order Moving Overlapping Grid Methodology in a Spectral Element Method
NASA Astrophysics Data System (ADS)
Merrill, Brandon E.
A moving overlapping mesh methodology that achieves spectral accuracy in space and up to second-order accuracy in time is developed for the solution of unsteady incompressible flow equations in three-dimensional domains. The targeted applications are in the aerospace and mechanical engineering domains and involve problems in turbomachinery, rotary aircraft, wind turbines and others. The methodology is built within the dual-session communication framework initially developed for stationary overlapping meshes. It employs semi-implicit spectral element discretization of the equations in each subdomain and explicit treatment of the subdomain interfaces with spectrally-accurate spatial interpolation and high-order accurate temporal extrapolation, and requires few, if any, iterations, yet maintains the global accuracy and stability of the underlying flow solver. Mesh movement is enabled through the Arbitrary Lagrangian-Eulerian formulation of the governing equations, which allows for prescription of arbitrary velocity values at discrete mesh points. The stationary and moving overlapping mesh methodologies are thoroughly validated using two- and three-dimensional benchmark problems in laminar and turbulent flows. The spatial and temporal global convergence, for both methods, is documented and is in agreement with the nominal order of accuracy of the underlying solver. The stationary overlapping mesh methodology is further validated to assess the influence of long integration times and inflow-outflow global boundary conditions on the performance. In a benchmark of fully-developed turbulent pipe flow, the turbulence statistics are validated against the available data. Moving overlapping mesh simulations are validated on the problems of a two-dimensional oscillating cylinder and a three-dimensional rotating sphere. The aerodynamic forces acting on these moving rigid bodies are determined, and all results are compared with published data. Scaling tests, with both methodologies, show near-linear strong scaling, even for moderately large processor counts. The moving overlapping mesh methodology is utilized to investigate the effect of an upstream turbulent wake on a three-dimensional oscillating NACA0012 extruded airfoil. A direct numerical simulation (DNS) at Reynolds number 44,000 is performed for steady inflow incident upon the airfoil oscillating between angles of attack of 5.6° and 25° with reduced frequency k=0.16. Results are contrasted with a subsequent DNS of the same oscillating airfoil in a turbulent wake generated by a stationary upstream cylinder.
NASA Astrophysics Data System (ADS)
Chen, Jung-Chieh
This paper presents a low-complexity algorithmic framework for finding a broadcasting schedule in a low-altitude satellite system, i.e., the satellite broadcast scheduling (SBS) problem, based on the recent modeling and computational methodology of factor graphs. Inspired by the huge success of low-density parity-check (LDPC) codes in the field of error control coding, we transform the SBS problem into an LDPC-like problem through a factor graph, instead of using conventional neural network approaches. Within this factor graph framework, the soft information, describing the probability that each satellite will broadcast to a terminal at a specific time slot, is exchanged among the local processing nodes via the sum-product algorithm to iteratively optimize the satellite broadcasting schedule. Numerical results show that the proposed approach not only obtains optimal solutions but also enjoys a low complexity suitable for integrated-circuit implementation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gernhofer, S.; Oliver, T.J.; Vasquez, R.
1994-12-31
A macro environmental risk assessment (ERA) methodology was developed for the Philippine Department of Environment and Natural Resources (DENR) as part of the US Agency for International Development Industrial Environmental Management Project. The DENR allocates its limited resources to mitigate those environmental problems that pose the greatest threat to human health and the environment. The National Regional Industry Prioritization Strategy (NRIPS) methodology was developed as a risk assessment tool to establish a national ranking of industrial facilities. The ranking establishes regional and national priorities, based on risk factors, that DENR can use to determine the most effective allocation of its limited resources. NRIPS is a systematic framework that examines the potential risk to human health and the environment from hazardous substances released from a facility and, in doing so, generates a relative numerical score that represents that risk. More than 3,300 facilities throughout the Philippines were evaluated successfully with the NRIPS.
Arrieta-Camacho, Juan José; Biegler, Lorenz T
2005-12-01
Real time optimal guidance is considered for a class of low thrust spacecraft. In particular, nonlinear model predictive control (NMPC) is utilized for computing the optimal control actions required to transfer a spacecraft from a low Earth orbit to a mission orbit. The NMPC methodology presented is able to cope with unmodeled disturbances. The dynamics of the transfer are modeled using a set of modified equinoctial elements, because these do not exhibit singularities for zero inclination and zero eccentricity. The idea behind NMPC is the repeated solution of optimal control problems; at each time step, a new control action is computed. The optimal control problem is solved using a direct method, fully discretizing the equations of motion. The large-scale nonlinear program resulting from the discretization procedure is solved using IPOPT, a primal-dual interior point algorithm. Stability and robustness characteristics of the NMPC algorithm are reviewed. A numerical example is presented that encourages further development of the proposed methodology: the transfer from low Earth orbit to a Molniya orbit.
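A minimal receding-horizon sketch of the NMPC loop follows, using a toy double integrator in place of the equinoctial-element dynamics and scipy's SLSQP in place of IPOPT; all parameter values are hypothetical.

```python
# Minimal receding-horizon NMPC sketch: solve a finite-horizon OCP at every
# step, apply the first control, repeat under an unmodeled disturbance.
import numpy as np
from scipy.optimize import minimize

dt, N = 0.2, 10                          # time step and horizon length

def step(x, u):                          # discretized double-integrator dynamics
    return np.array([x[0] + dt * x[1], x[1] + dt * u])

def ocp_cost(U, x0):                     # direct single-shooting objective
    x, J = x0.copy(), 0.0
    for u in U:
        x = step(x, u)
        J += x @ x + 0.1 * u * u         # quadratic state and control costs
    return J

x = np.array([2.0, 0.0])
U = np.zeros(N)                          # warm start for the optimizer
for t in range(40):                      # closed loop
    res = minimize(ocp_cost, U, args=(x,), method="SLSQP",
                   bounds=[(-1.0, 1.0)] * N)
    U = res.x
    x = step(x, U[0]) + np.array([0.0, 0.01])   # apply u0; add disturbance
print("final state:", x)
```

The disturbance term is what the repeated re-optimization compensates for; an open-loop optimal trajectory computed once would drift instead.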
Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model
NASA Astrophysics Data System (ADS)
Mejer Hansen, Thomas
2017-04-01
Probabilistically formulated inverse problems can be solved using Monte Carlo based sampling methods. In principle, both advanced prior information, such as that based on geostatistics, and complex non-linear forward physical models can be considered. In practice, however, these methods can carry huge computational costs that limit their application, not least because of the computational requirements of solving the forward problem, where the physical response of an earth model has to be evaluated. Here, it is suggested to replace a computationally complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error, which is quantified probabilistically so that it can be accounted for during inversion. The result is a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival travel time inversion of cross hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic travel time picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.
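The surrogate idea can be sketched as follows; a cheap analytic travel-time function stands in for the full-waveform forward solver, and the probabilistic modeling-error correction described above is omitted. All values are hypothetical.

```python
# Minimal sketch: train a neural-network surrogate of a forward model, then
# run Metropolis sampling against the fast surrogate.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

def forward(v):                      # stand-in for full-waveform modeling + picking
    return 100.0 / v[0] + 80.0 / v[1] + 60.0 / v[2]   # fixed ray lengths / velocities

V = rng.uniform(50.0, 150.0, (5000, 3))   # training velocity models (arb. units)
t = np.array([forward(v) for v in V])
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(V, t)

v_true = np.array([80.0, 120.0, 100.0])
t_obs = forward(v_true)
sigma = 0.02                              # assumed data noise level

def loglik(m):                            # fast surrogate-based likelihood
    r = net.predict(m[None])[0] - t_obs
    return -0.5 * (r / sigma) ** 2

v = np.full(3, 100.0)                     # Metropolis random walk
lv = loglik(v)
for _ in range(5000):
    v_new = np.clip(v + rng.normal(0.0, 2.0, 3), 50.0, 150.0)
    l_new = loglik(v_new)
    if np.log(rng.uniform()) < l_new - lv:
        v, lv = v_new, l_new
print("posterior sample:", v, " true:", v_true)
```

The speedup comes from the surrogate's near-instant evaluation inside the sampling loop; the price is the surrogate's approximation error, which the paper's modeling-error term is designed to absorb.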
Conjugate-gradient optimization method for orbital-free density functional calculations.
Jiang, Hong; Yang, Weitao
2004-08-01
Orbital-free density functional theory, as an extension of traditional Thomas-Fermi theory, has attracted a lot of interest in the past decade because of developments in both more accurate kinetic energy functionals and highly efficient numerical methodology. In this paper, we developed a conjugate-gradient method for the numerical solution of the spin-dependent extended Thomas-Fermi equation by incorporating techniques previously used in Kohn-Sham calculations. The key ingredients of the method are an approximate line-search scheme and a collective treatment of the two spin densities in the spin-dependent extended Thomas-Fermi problem. Test calculations for a quartic two-dimensional quantum dot system and a three-dimensional sodium cluster Na216 with a local pseudopotential demonstrate that the method is accurate and efficient. (c) 2004 American Institute of Physics.
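The two key ingredients named above, a conjugate-gradient update and an approximate line search, can be sketched generically as below; a simple convex test function replaces the orbital-free energy functional, and the spin-density treatment is omitted.

```python
# Minimal sketch of nonlinear conjugate gradients (Fletcher-Reeves) with an
# approximate (Armijo backtracking) line search on a convex test function.
import numpy as np

def f(x):    return 0.5 * x @ x + 0.25 * np.sum(x ** 4)
def grad(x): return x + x ** 3

x = np.full(10, 2.0)
g = grad(x)
d = -g
for it in range(200):
    if g @ d >= 0.0:                       # safeguard: ensure a descent direction
        d = -g
    alpha = 1.0
    while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):   # Armijo condition
        alpha *= 0.5
    x = x + alpha * d
    g_new = grad(x)
    d = -g_new + (g_new @ g_new) / (g @ g) * d   # Fletcher-Reeves update
    g = g_new
    if np.linalg.norm(g) < 1e-8:
        break
print("iterations:", it, " |grad| =", float(np.linalg.norm(g)))
```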
Nicholls, David P
2018-04-01
The faithful modelling of the propagation of linear waves in a layered, periodic structure is of paramount importance in many branches of the applied sciences. In this paper, we present a novel numerical algorithm for the simulation of such problems which is free of the artificial singularities present in related approaches. We advocate for a surface integral formulation which is phrased in terms of impedance-impedance operators that are immune to the Dirichlet eigenvalues which plague the Dirichlet-Neumann operators that appear in classical formulations. We demonstrate a high-order spectral algorithm to simulate these latter operators based upon a high-order perturbation of surfaces methodology which is rapid, robust and highly accurate. We demonstrate the validity and utility of our approach with a sequence of numerical simulations.
Suarez, V; Hernández Wong, J; Nogal, U; Calderón, A; Rojas-Trigos, J B; Juárez, A G; Marín, E
2014-01-01
We report a study of heat transfer through a homogeneous and isotropic solid excited by a square-wave periodic light beam on its front surface. For this, we use infrared photothermal radiometry to obtain the evolution of the temperature difference on the rear surface of three samples, silicon, copper and wood, as a function of the exposure time. We also solved the heat transport equation for this problem, with boundary conditions congruent with the physical situation, by means of numerical simulation based on finite element analysis. Our results show good agreement between the experimental and numerically simulated results, which demonstrates the utility of this methodology for studying the thermal response of solids. Copyright © 2013 Elsevier Ltd. All rights reserved.
Dupree, Jean A.; Crowfoot, Richard M.
2012-01-01
The drainage basin is a fundamental hydrologic entity used for studies of surface-water resources and during planning of water-related projects. Numeric drainage areas published by the U.S. Geological Survey water science centers in Annual Water Data Reports and on the National Water Information Systems (NWIS) Web site are still primarily derived from hard-copy sources and by manual delineation of polygonal basin areas on paper topographic map sheets. To expedite numeric drainage area determinations, the Colorado Water Science Center developed a digital database structure and a delineation methodology based on the hydrologic unit boundaries in the National Watershed Boundary Dataset. This report describes the digital database architecture and delineation methodology and also presents the results of a comparison of the numeric drainage areas derived using this digital methodology with those derived using traditional, non-digital methods. (Please see report for full Abstract)
NASA Astrophysics Data System (ADS)
Riasi, S.; Huang, G.; Montemagno, C.; Yeghiazarian, L.
2013-12-01
Micro-scale modeling of multiphase flow in porous media is critical to characterize porous materials. Several modeling techniques have been implemented to date, but none can be used as a general strategy for all porous media applications due to challenges presented by non-smooth high-curvature solid surfaces, and by a wide range of pore sizes and porosities. Finite approaches like the finite volume method require a high quality, problem-dependent mesh, while particle-based approaches like the lattice Boltzmann require too many particles to achieve a stable meaningful solution. Both come at a large computational cost. Other methods such as pore network modeling (PNM) have been developed to accelerate the solution process by simplifying the solution domain, but so far a unique and straightforward methodology to implement PNM is lacking. We have developed a general, stable and fast methodology to model multi-phase fluid flow in porous materials, irrespective of their porosity and solid phase topology. We have applied this methodology to highly porous fibrous materials in which void spaces are not distinctly separated, and where simplifying the geometry into a network of pore bodies and throats, as in PNM, does not result in a topology-consistent network. To this end, we have reduced the complexity of the 3-D void space geometry by working with its medial surface. We have used a non-iterative fast medial surface finder algorithm to determine a voxel-wide medial surface of the void space, and then solved the quasi-static drainage and imbibition on the resulting domain. The medial surface accurately represents the topology of the porous structure including corners, irregular cross sections, etc. This methodology is capable of capturing corner menisci and the snap-off mechanism numerically. It also allows for calculation of pore size distribution, permeability and capillary pressure-saturation-specific interfacial area surface of the porous structure. To show the capability of this method to numerically estimate the capillary pressure in irregular cross sections, we compared our results with analytical solutions available for capillary tubes with non-circular cross sections. We also validated this approach by implementing it on well-known benchmark problems such as a bundle of cylinders and packed spheres.
NASA Astrophysics Data System (ADS)
Liu, Xiaomei; Li, Shengtao; Zhang, Kanjian
2017-08-01
In this paper, we solve an optimal control problem for a class of time-invariant switched stochastic systems with multiple switching times, where the objective is to minimise a cost functional with different costs defined on the states. In particular, we focus on problems in which a pre-specified sequence of active subsystems is given and the switching times are the only control variables. Based on the calculus of variations, we derive the gradient of the cost functional with respect to the switching times in an especially simple form, which can be used directly in gradient descent algorithms to locate the optimal switching instants. Finally, a numerical example is given, highlighting the validity of the proposed methodology.
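To make the idea concrete, here is a minimal sketch (not the paper's variational derivation): for a toy scalar system with a fixed two-mode sequence, the single switching time is located by gradient descent, with the gradient approximated by central finite differences instead of the closed-form expression derived in the paper. The dynamics, tracking cost and step sizes below are all illustrative assumptions.

```python
import numpy as np

def cost(t1, a1=1.0, a2=-1.0, x0=0.5, T=3.0, n=3001):
    """J(t1) = integral of (x(t) - 1)^2 over [0, T] for the fixed mode sequence (a1, then a2)."""
    t = np.linspace(0.0, T, n)
    x = np.where(t < t1,
                 x0 * np.exp(a1 * t),
                 x0 * np.exp(a1 * t1) * np.exp(a2 * (t - t1)))
    dt = t[1] - t[0]
    return float(np.sum((x - 1.0) ** 2) * dt)   # simple quadrature of the running cost

t1, step, h = 1.5, 0.2, 1e-2
for _ in range(200):
    grad = (cost(t1 + h) - cost(t1 - h)) / (2.0 * h)  # finite-difference gradient surrogate
    t1 = float(np.clip(t1 - step * grad, 0.0, 3.0))   # project back into [0, T]
print(f"estimated optimal switching instant: t1 = {t1:.3f}")
```

In the paper's setting the finite-difference surrogate would be replaced by the analytically derived gradient, which avoids the two extra cost evaluations per iteration.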
Heat transfer in aeropropulsion systems
NASA Astrophysics Data System (ADS)
Simoneau, R. J.
1985-07-01
Aeropropulsion heat transfer is reviewed. A research methodology based on a growing synergism between computations and experiments is examined. The aeropropulsion heat transfer arena is identified as high Reynolds number forced convection in a highly disturbed environment subject to strong gradients, body forces, abrupt geometry changes and high three dimensionality - all in an unsteady flow field. Numerous examples based on heat transfer to the aircraft gas turbine blade are presented to illustrate the types of heat transfer problems which are generic to aeropropulsion systems. The research focus of the near future in aeropropulsion heat transfer is projected.
NASA Astrophysics Data System (ADS)
Hansen, T. M.; Cordua, K. S.
2017-12-01
Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a computationally complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
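A minimal sketch of the surrogate-plus-modeling-error idea, under strong simplifying assumptions: the "expensive" forward model and its "trained network" stand-in are toy one-parameter functions, the modeling error is summarized by a Gaussian fitted to training residuals, and that error variance is added to the data-noise variance inside a Metropolis sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def g_true(m):        # stand-in for an expensive forward model
    return np.sin(m) + 0.1 * m**2

def g_surrogate(m):   # stand-in for a trained network: a crude approximation
    return m - m**3 / 6.0 + 0.1 * m**2

# 1) Quantify the modeling error probabilistically from training samples.
m_train = rng.uniform(-1.5, 1.5, 2000)
err = g_true(m_train) - g_surrogate(m_train)
mu_T, var_T = err.mean(), err.var()

# 2) Metropolis sampling with the surrogate; total variance = data noise + model error.
sigma_d2 = 0.05**2
d_obs = g_true(0.8) + rng.normal(0.0, np.sqrt(sigma_d2))

def log_post(m):
    resid = d_obs - (g_surrogate(m) + mu_T)
    return -0.5 * resid**2 / (sigma_d2 + var_T) - 0.5 * m**2  # N(0, 1) prior on m

m, lp = 0.0, log_post(0.0)
chain = []
for _ in range(20000):
    m_prop = m + 0.3 * rng.normal()
    lp_prop = log_post(m_prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        m, lp = m_prop, lp_prop
    chain.append(m)
print(f"posterior mean ~ {np.mean(chain[2000:]):.3f}")
```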
NASA Astrophysics Data System (ADS)
Maire, Pierre-Henri; Abgrall, Rémi; Breil, Jérôme; Loubère, Raphaël; Rebourcet, Bernard
2013-02-01
In this paper, we describe a cell-centered Lagrangian scheme devoted to the numerical simulation of solid dynamics on two-dimensional unstructured grids in planar geometry. This numerical method, utilizes the classical elastic-perfectly plastic material model initially proposed by Wilkins [M.L. Wilkins, Calculation of elastic-plastic flow, Meth. Comput. Phys. (1964)]. In this model, the Cauchy stress tensor is decomposed into the sum of its deviatoric part and the thermodynamic pressure which is defined by means of an equation of state. Regarding the deviatoric stress, its time evolution is governed by a classical constitutive law for isotropic material. The plasticity model employs the von Mises yield criterion and is implemented by means of the radial return algorithm. The numerical scheme relies on a finite volume cell-centered method wherein numerical fluxes are expressed in terms of sub-cell force. The generic form of the sub-cell force is obtained by requiring the scheme to satisfy a semi-discrete dissipation inequality. Sub-cell force and nodal velocity to move the grid are computed consistently with cell volume variation by means of a node-centered solver, which results from total energy conservation. The nominally second-order extension is achieved by developing a two-dimensional extension in the Lagrangian framework of the Generalized Riemann Problem methodology, introduced by Ben-Artzi and Falcovitz [M. Ben-Artzi, J. Falcovitz, Generalized Riemann Problems in Computational Fluid Dynamics, Cambridge Monogr. Appl. Comput. Math. (2003)]. Finally, the robustness and the accuracy of the numerical scheme are assessed through the computation of several test cases.
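The radial return step mentioned above is standard enough to sketch; the following is an illustrative small-strain version of the elastic-predictor/plastic-corrector update for von Mises perfect plasticity (the material constants are assumed, not taken from the paper).

```python
import numpy as np

def radial_return(stress, dstrain, mu=80e9, kappa=160e9, sigma_y=250e6):
    """One elastic-predictor / plastic-corrector update of the Cauchy stress."""
    I = np.eye(3)
    # Elastic trial stress from the strain increment.
    dev_de = dstrain - np.trace(dstrain) / 3.0 * I
    trial = stress + kappa * np.trace(dstrain) * I + 2.0 * mu * dev_de
    p = np.trace(trial) / 3.0                 # thermodynamic-pressure part
    s = trial - p * I                         # deviatoric part
    s_eq = np.sqrt(1.5 * np.tensordot(s, s))  # von Mises equivalent stress
    if s_eq <= sigma_y:                       # elastic: trial state is admissible
        return trial
    return p * I + s * (sigma_y / s_eq)       # plastic: scale s back onto the yield surface

dstrain = np.diag([2e-3, -1e-3, -1e-3])
print(radial_return(np.zeros((3, 3)), dstrain))
```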
A nonlinear dynamic finite element approach for simulating muscular hydrostats.
Vavourakis, V; Kazakidi, A; Tsakiris, D P; Ekaterinaris, J A
2014-01-01
An implicit nonlinear finite element model for simulating biological muscle mechanics is developed. The numerical method is suitable for dynamic simulations of three-dimensional, nonlinear, nearly incompressible, hyperelastic materials that undergo large deformations. These features characterise biological muscles, which consist of fibres and connective tissues. It can be assumed that the stress distribution inside the muscles is the superposition of stresses along the fibres and the connective tissues. The mechanical behaviour of the surrounding tissues is determined by adopting a Mooney-Rivlin constitutive model, while the mechanical description of fibres is considered to be the sum of active and passive stresses. Due to the nonlinear nature of the problem, evaluation of the Jacobian matrix is carried out in order to subsequently utilise the standard Newton-Raphson iterative procedure and to carry out time integration with an implicit scheme. The proposed methodology is implemented into our in-house, open source, finite element software, which is validated by comparing numerical results with experimental measurements and other numerical results. Finally, the numerical procedure is utilised to simulate primitive octopus arm manoeuvres, such as bending and reaching.
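As a minimal illustration of the implicit Newton-Raphson pattern described above (not the authors' finite element code), the sketch below applies a Newton iteration with a numerically evaluated Jacobian to the residual of one implicit time step of a one-degree-of-freedom toy system; the nonlinear force law is an illustrative assumption.

```python
import numpy as np

def residual(u_new, u_old, v_old, dt, m=1.0):
    f_int = 100.0 * u_new + 500.0 * u_new**3      # toy nonlinear internal force
    a_new = (u_new - u_old - dt * v_old) / dt**2  # implicit (backward) acceleration
    return m * a_new + f_int - 1.0                # external load = 1.0

u_old, v_old, dt = 0.0, 0.0, 0.01
u = u_old
for it in range(20):                              # Newton-Raphson loop
    r = residual(u, u_old, v_old, dt)
    if abs(r) < 1e-10:
        break
    h = 1e-7
    J = (residual(u + h, u_old, v_old, dt) - r) / h  # numerical Jacobian
    u -= r / J
print(f"converged displacement after one step: {u:.6e} (in {it} iterations)")
```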
A Comparative Study of Three Methodologies for Modeling Dynamic Stall
NASA Technical Reports Server (NTRS)
Sankar, L.; Rhee, M.; Tung, C.; ZibiBailly, J.; LeBalleur, J. C.; Blaise, D.; Rouzaud, O.
2002-01-01
During the past two decades, there has been an increased reliance on computational fluid dynamics methods for modeling rotors in high-speed forward flight. Computational methods are being developed for modeling the shock-induced loads on the advancing side, for first-principles-based modeling of the trailing-wake evolution, and for retreating-blade stall. The retreating-blade dynamic stall problem has received particular attention, because the large variations in lift and pitching moment encountered in dynamic stall can lead to blade vibrations and pitch-link fatigue. Even restricted to aerodynamics, the numerical prediction of dynamic stall remains a complex and challenging CFD problem: even in two dimensions at low speed, it gathers the major difficulties of aerodynamics, such as the grid-resolution requirements for the viscous phenomena at leading-edge bubbles or in mixing layers and the bias of numerical viscosity, together with the major difficulties of physical modeling, such as the turbulence and transition models, whose determinant influences, already present in static maximum-lift or stall computations, are emphasized by the dynamic nature of the phenomena.
Distribution-dependent robust linear optimization with applications to inventory control
Kang, Seong-Cheol; Brisimi, Theodora S.
2014-01-01
This paper tackles linear programming problems with data uncertainty and applies the approach to an important inventory control problem. Each element of the constraint matrix is subject to uncertainty and is modeled as a random variable with a bounded support. The classical robust optimization approach to this problem yields a solution with guaranteed feasibility. As this approach tends to be too conservative when applications can tolerate a small chance of infeasibility, one would be interested in obtaining a less conservative solution with a certain probabilistic guarantee of feasibility. A robust formulation in the literature produces such a solution, but it does not use any distributional information on the uncertain data. In this work, we show that the use of distributional information leads to an equally robust solution (i.e., under the same probabilistic guarantee of feasibility) but with a better objective value. In particular, by exploiting distributional information, we establish stronger upper bounds on the constraint violation probability of a solution. These bounds enable us to "inject" less conservatism into the formulation, which in turn yields a more cost-effective solution (by 50% or more in some numerical instances). To illustrate the effectiveness of our methodology, we consider a discrete-time stochastic inventory control problem with certain quality-of-service constraints. Numerical tests demonstrate that the use of distributional information in the robust optimization of the inventory control problem results in 36%-54% cost savings, compared to the case where such information is not used.
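The flavor of the argument can be sketched numerically. For a constraint with independent, bounded uncertain coefficients, a Hoeffding-type bound (used here as an illustrative stand-in for the paper's sharper bounds) certifies a violation probability of at most eps with a much smaller safety margin than the worst-case margin of classical robust optimization; the supports and decision vector below are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
lb, ub = rng.uniform(0.5, 1.0, n), rng.uniform(1.0, 1.5, n)
mean = 0.5 * (lb + ub)            # assume symmetric distributions on [lb, ub]
x = np.ones(n)                    # some fixed feasible decision
eps = 0.05                        # tolerated chance of infeasibility

worst_case = np.sum((ub - mean) * x)                 # margin for guaranteed feasibility
hoeffding = np.sqrt(0.5 * np.log(1.0 / eps)
                    * np.sum(((ub - lb) * x) ** 2))  # margin for P(violation) <= eps
print(f"worst-case margin: {worst_case:.2f}, Hoeffding margin: {hoeffding:.2f}")
```

The distribution-aware margin grows like the square root of the number of uncertain coefficients rather than linearly, which is the source of the reduced conservatism.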
A novel approach based on preference-based index for interval bilevel linear programming problem.
Ren, Aihong; Wang, Yuping; Xue, Xingsi
2017-01-01
This paper proposes a new methodology for solving the interval bilevel linear programming problem, in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in both objective functions only, through normal variation of interval numbers and chance-constrained programming. With consideration of the different preferences of different decision makers, the concept of the preference level at which the interval objective function is preferred to a target interval is defined based on the preference-based index. A preference-based deterministic bilevel programming problem is then constructed in terms of the preference level and the order relation [Formula: see text]. Furthermore, the concept of a preference δ-optimal solution is given. Subsequently, the constructed deterministic nonlinear bilevel problem is solved with the help of an estimation of distribution algorithm. Finally, several numerical examples are provided to demonstrate the effectiveness of the proposed approach.
Optimization as a Tool for Consistency Maintenance in Multi-Resolution Simulation
NASA Technical Reports Server (NTRS)
Drewry, Darren T; Reynolds, Jr , Paul F; Emanuel, William R
2006-01-01
The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great. While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult. We present a multi-resolution simulation methodology that uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution. Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation. As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions. Experimental results are presented and future directions are addressed.
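A minimal sketch of the consistency-maintenance idea, with toy models in place of the forest-canopy models used in the study: a coarse (daily) model's parameter is calibrated by numerical optimization so that its predictions match the fine (hourly) model's aggregated output. The models, time scales and calibration target below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

hours = np.arange(24 * 30)
fine = np.maximum(np.sin(2 * np.pi * hours / 24), 0.0) * 5.0   # toy hourly flux model
fine_daily = fine.reshape(30, 24).sum(axis=1)                  # aggregate to daily totals

def coarse(k):                      # toy coarse model: one rate constant per day
    return k * np.ones(30)

# External consistency as an optimization problem: match aggregated predictions.
res = minimize_scalar(lambda k: np.sum((coarse(k) - fine_daily) ** 2))
print(f"calibrated coarse-model rate: {res.x:.3f} per day")
```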
First-Order System Least-Squares for Second-Order Elliptic Problems with Discontinuous Coefficients
NASA Technical Reports Server (NTRS)
Manteuffel, Thomas A.; McCormick, Stephen F.; Starke, Gerhard
1996-01-01
The first-order system least-squares methodology represents an alternative to standard mixed finite element methods. Among its advantages is the fact that the finite element spaces approximating the pressure and flux variables are not restricted by the inf-sup condition, and that the least-squares functional itself serves as an appropriate error measure. This paper studies the first-order system least-squares approach for scalar second-order elliptic boundary value problems with discontinuous coefficients. Ellipticity of an appropriately scaled least-squares bilinear form is shown to hold independently of the size of the jumps in the coefficients, leading to adequate finite element approximation results. The occurrence of singularities at interface corners and cross-points is discussed, and a weighted least-squares functional is introduced to handle such cases. Numerical experiments are presented for two test problems to illustrate the performance of this approach.
On Multifunctional Collaborative Methods in Engineering Science
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.
2001-01-01
Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations, including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented, with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as most robust. The multiple-method approach is advantageous when interfacing diverse disciplines in which each method's strengths are utilized.
Traction patterns of tumor cells.
Ambrosi, D; Duperray, A; Peschetola, V; Verdier, C
2009-01-01
The traction exerted by a cell on a planar deformable substrate can be indirectly obtained from the displacement field of the underlying layer. The usual methodology for addressing this inverse problem is based on the exploitation of the Green tensor of the linear elasticity problem in a half space (Boussinesq problem), coupled with a minimization algorithm under force penalization. A possible alternative strategy is to exploit an adjoint equation, obtained on the basis of a suitable minimization requirement. The resulting system of coupled elliptic partial differential equations is applied here to determine the force field per unit surface generated by T24 tumor cells on a polyacrylamide substrate. The shear stress obtained by numerical integration provides quantitative insight into the traction field and is a promising tool for investigating the spatial pattern of force per unit surface generated in cell motion, particularly in the case of such cancer cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cox, James V.; Wellman, Gerald William; Emery, John M.
2011-09-01
Fracture or tearing of ductile metals is a pervasive engineering concern, yet accurate prediction of the critical conditions of fracture remains elusive. Sandia National Laboratories has been developing and implementing several new modeling methodologies to address problems in fracture, including both new physical models and new numerical schemes. The present study provides a double-blind quantitative assessment of several computational capabilities including tearing parameters embedded in a conventional finite element code, localization elements, extended finite elements (XFEM), and peridynamics. For this assessment, each of four teams reported blind predictions for three challenge problems spanning crack initiation and crack propagation. After predictions had been reported, the predictions were compared to experimentally observed behavior. The metal alloys for these three problems were aluminum alloy 2024-T3 and precipitation hardened stainless steel PH13-8Mo H950. The predictive accuracies of the various methods are demonstrated, and the potential sources of error are discussed.
NASA Astrophysics Data System (ADS)
Bonacker, Esther; Gibali, Aviv; Küfer, Karl-Heinz; Süss, Philipp
2017-04-01
Multicriteria optimization problems occur in many real-life applications, for example in cancer radiotherapy treatment and in particular in intensity modulated radiation therapy (IMRT). In this work we focus on optimization problems with multiple objectives that are ranked according to their importance. We solve these problems numerically by combining lexicographic optimization with our recently proposed level set scheme, which yields a sequence of auxiliary convex feasibility problems, solved here via projection methods. The projection enables us to combine the newly introduced superiorization methodology with multicriteria optimization methods to speed up computation while guaranteeing convergence of the optimization. We demonstrate our scheme with a simple 2D academic example (used in the literature) and also present results from calculations on four real head-and-neck cases in IMRT (Radiation Oncology of the Ludwig-Maximilians University, Munich, Germany) for two different choices of superiorization parameter sets, suited to yield fast convergence for each case individually or robust behavior for all four cases.
NASA Astrophysics Data System (ADS)
Navas, Pedro; Sanavia, Lorenzo; López-Querol, Susana; Yu, Rena C.
2017-12-01
Solving dynamic problems for fluid-saturated porous media in the large deformation regime is an interesting but complex issue. An implicit time integration scheme is herein developed within the framework of the u-w (solid displacement-relative fluid displacement) formulation of Biot's equations. In particular, water-saturated porous media are considered, and the linearization of the linear momentum equations, taking into account all inertia terms for both the solid and fluid phases, is presented for the first time. The spatial discretization is carried out through a meshfree method in which the shape functions are based on the principle of local maximum entropy (LME). The methodology is first validated with the dynamic consolidation of a soil column and the plastic shear band formation in a square domain loaded by a rigid footing. The feasibility of this new numerical approach for solving large deformation dynamic problems is finally demonstrated through its application to an embankment problem subjected to an earthquake.
NASA Astrophysics Data System (ADS)
Satyaramesh, P. V.
2014-01-01
This paper presents an application of finite n-person non-cooperative game theory for analyzing the bidding strategies of generators seeking to maximize their net profits in a deregulated energy marketplace with pool-bilateral contracts. A new methodology for building bidding strategies for generators participating in an oligopoly electricity market is proposed. Each generator is assumed to bid a supply function, and the methodology finds the coefficients in the generators' supply functions that maximize benefits in an environment of competing rival bidders. A natural choice for developing strategies is a Nash equilibrium (NE) model incorporating mixed strategies for solving the bidding problem of the electricity market. Optimal profits are evaluated for combinations of the generators' pure bidding strategies, and a payoff matrix is constructed; the optimal payoff is then calculated using the NE. An attempt is also made to minimize the gap between the optimal payoff and the payoff obtained by a feasible mixed-strategy combination. The algorithm is coded in MATLAB, and a numerical example is used to illustrate the essential features of the approach and verify the optimality of the results.
Common Methodological Problems in Research on the Addictions.
ERIC Educational Resources Information Center
Nathan, Peter E.; Lansky, David
1978-01-01
Identifies common problems in research on the addictions and offers suggestions for remediating these methodological problems. The addictions considered include alcoholism and drug dependencies. Problems considered are those arising from inadequate, incomplete, or biased reviews of relevant literatures and methodological shortcomings of subject…
Apostolopoulos, Yorghos; Lemke, Michael K; Barry, Adam E; Lich, Kristen Hassmiller
2018-02-01
Given the complexity of factors contributing to alcohol misuse, appropriate epistemologies and methodologies are needed to understand and intervene meaningfully. We aimed to (1) provide an overview of computational modeling methodologies, with an emphasis on system dynamics modeling; (2) explain how community-based system dynamics modeling can forge new directions in alcohol prevention research; and (3) present a primer on how to build alcohol misuse simulation models using system dynamics modeling, with an emphasis on stakeholder involvement, data sources and model validation. Throughout, we use alcohol misuse among college students in the United States as a heuristic example for demonstrating these methodologies. System dynamics modeling employs a top-down aggregate approach to understanding dynamically complex problems. Its three foundational properties (stocks, flows and feedbacks) capture non-linearity, time-delayed effects and other system characteristics. As a methodological choice, system dynamics modeling is amenable to participatory approaches; in particular, community-based system dynamics modeling has been used to build impactful models for addressing dynamically complex problems. The process of community-based system dynamics modeling consists of numerous stages: (1) creating model boundary charts, behavior-over-time-graphs and preliminary system dynamics models using group model-building techniques; (2) model formulation; (3) model calibration; (4) model testing and validation; and (5) model simulation using learning-laboratory techniques. Community-based system dynamics modeling can provide powerful tools for policy and intervention decisions that can result ultimately in sustainable changes in research and action in alcohol misuse prevention.
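A toy stock-and-flow sketch of the foundational properties mentioned above (a stock, two flows and a reinforcing feedback); its structure and parameter values are illustrative assumptions, not a validated alcohol-misuse model.

```python
T, dt = 200.0, 0.25
misusers, population = 50.0, 1000.0           # stock and fixed total population
for _ in range(int(T / dt)):
    # Inflow: social-influence feedback (more misusers -> more recruitment).
    inflow = 0.10 * misusers * (population - misusers) / population
    outflow = 0.05 * misusers                 # cessation / recovery flow
    misusers += dt * (inflow - outflow)       # Euler integration of the stock
print(f"equilibrium stock of misusers ~ {misusers:.0f}")
```

Even this tiny model exhibits the non-linearity and equilibrium-seeking behavior that system dynamics modeling is designed to expose; community-based modeling would elicit the stocks, flows and feedbacks from stakeholders instead of assuming them.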
Numerical and experimental investigation of a beveled trailing-edge flow field and noise emission
NASA Astrophysics Data System (ADS)
van der Velden, W. C. P.; Pröbsting, S.; van Zuijlen, A. H.; de Jong, A. T.; Guan, Y.; Morris, S. C.
2016-12-01
Efficient tools and methodology for the prediction of trailing-edge noise experience substantial interest within the wind turbine industry. In recent years, the Lattice Boltzmann Method has received increased attention for providing such an efficient alternative for the numerical solution of complex flow problems. Based on the fully explicit, transient, compressible solution of the Lattice Boltzmann Equation in combination with a Ffowcs-Williams and Hawking aeroacoustic analogy, an estimation of the acoustic radiation in the far field is obtained. To validate this methodology for the prediction of trailing-edge noise, the flow around a flat plate with an asymmetric 25° beveled trailing edge and obtuse corner in a low Mach number flow is analyzed. Flow field dynamics are compared to data obtained experimentally from Particle Image Velocimetry and Hot Wire Anemometry, and compare favorably in terms of mean velocity field and turbulent fluctuations. Moreover, the characteristics of the unsteady surface pressure, which are closely related to the acoustic emission, show good agreement between simulation and experiment. Finally, the prediction of the radiated sound is compared to the results obtained from acoustic phased array measurements in combination with a beamforming methodology. Vortex shedding results in a strong narrowband component centered at a constant Strouhal number in the acoustic spectrum. At higher frequency, a good agreement between simulation and experiment for the broadband noise component is obtained and a typical cardioid-like directivity is recovered.
Finite Element Method-Based Kinematics and Closed-Loop Control of Soft, Continuum Manipulators.
Bieze, Thor Morales; Largilliere, Frederick; Kruszewski, Alexandre; Zhang, Zhongkai; Merzouki, Rochdi; Duriez, Christian
2018-06-01
This article presents a modeling methodology and experimental validation for soft manipulators to obtain forward kinematic model (FKM) and inverse kinematic model (IKM) under quasi-static conditions (in the literature, these manipulators are usually classified as continuum robots. However, their main characteristic of interest in this article is that they create motion by deformation, as opposed to the classical use of articulations). It offers a way to obtain the kinematic characteristics of this type of soft robots that is suitable for offline path planning and position control. The modeling methodology presented relies on continuum mechanics, which does not provide analytic solutions in the general case. Our approach proposes a real-time numerical integration strategy based on finite element method with a numerical optimization based on Lagrange multipliers to obtain FKM and IKM. To reduce the dimension of the problem, at each step, a projection of the model to the constraint space (gathering actuators, sensors, and end-effector) is performed to obtain the smallest number possible of mathematical equations to be solved. This methodology is applied to obtain the kinematics of two different manipulators with complex structural geometry. An experimental comparison is also performed in one of the robots, between two other geometric approaches and the approach that is showcased in this article. A closed-loop controller based on a state estimator is proposed. The controller is experimentally validated and its robustness is evaluated using the Lyapunov stability method.
Parameterizing the Spatial Markov Model From Breakthrough Curve Data Alone
NASA Astrophysics Data System (ADS)
Sherman, Thomas; Fakhari, Abbas; Miller, Savannah; Singha, Kamini; Bolster, Diogo
2017-12-01
The spatial Markov model (SMM) is an upscaled Lagrangian model that effectively captures anomalous transport across a diverse range of hydrologic systems. The distinct feature of the SMM relative to other random walk models is that successive steps are correlated. To date, with some notable exceptions, the model has primarily been applied to data from high-resolution numerical simulations and correlation effects have been measured from simulated particle trajectories. In real systems such knowledge is practically unattainable and the best one might hope for is breakthrough curves (BTCs) at successive downstream locations. We introduce a novel methodology to quantify velocity correlation from BTC data alone. By discretizing two measured BTCs into a set of arrival times and developing an inverse model, we estimate velocity correlation, thereby enabling parameterization of the SMM in studies where detailed Lagrangian velocity statistics are unavailable. The proposed methodology is applied to two synthetic numerical problems, where we measure all details and thus test the veracity of the approach by comparison of estimated parameters with known simulated values. Our results suggest that our estimated transition probabilities agree with simulated values and using the SMM with this estimated parameterization accurately predicts BTCs downstream. Our methodology naturally allows for estimates of uncertainty by calculating lower and upper bounds of velocity correlation, enabling prediction of a range of BTCs. The measured BTCs fall within the range of predicted BTCs. This novel method to parameterize the SMM from BTC data alone is quite parsimonious, thereby widening the SMM's practical applicability.
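The SMM's distinguishing feature, correlated successive steps, can be sketched directly; the transition matrix and velocity classes below are illustrative assumptions, standing in for the values the proposed inverse method would estimate from two measured BTCs.

```python
import numpy as np

rng = np.random.default_rng(2)
v_class = np.array([0.1, 1.0, 10.0])          # representative velocity classes
P = np.array([[0.7, 0.2, 0.1],                # P[i, j]: probability that a step
              [0.2, 0.6, 0.2],                # in class i is followed by a step
              [0.1, 0.2, 0.7]])               # in class j
dx, n_steps, n_particles = 1.0, 50, 5000

arrival = np.zeros(n_particles)
for p in range(n_particles):
    state, t = rng.integers(3), 0.0
    for _ in range(n_steps):
        t += dx / v_class[state]              # travel time for this spatial step
        state = rng.choice(3, p=P[state])     # correlated next velocity class
    arrival[p] = t
print(f"mean arrival: {arrival.mean():.1f}, 95th percentile: {np.quantile(arrival, 0.95):.1f}")
```

The heavy late-time tail of the simulated arrival-time distribution is the anomalous-transport signature the SMM is designed to capture; an uncorrelated walk (identical rows in P) would underestimate it.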
Discrete element weld model, phase 2
NASA Technical Reports Server (NTRS)
Prakash, C.; Samonds, M.; Singhal, A. K.
1987-01-01
A numerical method was developed for analyzing the tungsten inert gas (TIG) welding process. The phenomena being modeled include melting under the arc and the flow in the melt under the action of buoyancy, surface tension, and electromagnetic forces. The latter entails the calculation of the electric potential and the computation of electric current and magnetic field therefrom. Melting may occur at a single temperature or over a temperature range, and the electrical and thermal conductivities can be a function of temperature. Results of sample calculations are presented and discussed at length. A major research contribution has been the development of numerical methodology for the calculation of phase change problems in a fixed grid framework. The model has been implemented on CHAM's general purpose computer code PHOENICS. The inputs to the computer model include: geometric parameters, material properties, and weld process parameters.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.
Optimal placement of actuators and sensors in control augmented structural optimization
NASA Technical Reports Server (NTRS)
Sepulveda, A. E.; Schmit, L. A., Jr.
1990-01-01
A control-augmented structural synthesis methodology is presented in which actuator and sensor placement is treated in terms of (0,1) variables. Structural member sizes and control variables are treated simultaneously as design variables. A multiobjective utopian approach is used to obtain a compromise solution for inherently conflicting objective functions such as structural mass, control effort and number of actuators. Constraints are imposed on transient displacements, natural frequencies, actuator forces and dynamic stability, as well as controllability and observability of the system. The combinatorial aspects of the mixed (0,1)-continuous variable design optimization problem are made tractable by combining approximation concepts with branch and bound techniques. Some numerical results for example problems are presented to illustrate the efficacy of the design procedure set forth.
Meshless Method for Simulation of Compressible Flow
NASA Astrophysics Data System (ADS)
Nabizadeh Shahrebabak, Ebrahim
In the present age, rapid development in computing technology and high-speed supercomputers has made numerical analysis and computational simulation more practical than ever before for large and complex cases. Numerical simulations have also become an essential means for analyzing engineering problems and cases where experimental analysis is impractical. Many sophisticated and accurate numerical schemes exist to perform these simulations. The finite difference method (FDM) has been used to solve differential equation systems for decades, and additional numerical methods based on finite volume and finite element techniques are widely used in solving problems with complex geometry. All of these are mesh-based techniques, for which mesh generation is an essential preprocessing step to discretize the computational domain. However, when dealing with complex geometries these conventional mesh-based techniques can become troublesome, difficult to implement, and prone to inaccuracies. In this study, a more robust yet simple numerical approach is used to simulate even complex problems in an easier manner. The meshless, or meshfree, method is one such development that has become the focus of much research in recent years. The biggest advantage of meshfree methods is that they circumvent mesh generation. Many algorithms have been developed to make this method more accessible, and they have been employed over a wide range of problems in computational analysis with various levels of success. Since there is no connectivity between the nodes in this method, the challenge is considerable. The most fundamental issue is lack of conservation, which can be a source of unpredictable errors in the solution process. This problem is particularly evident in the presence of steep-gradient regions and discontinuities, such as the shocks that frequently occur in high-speed compressible flow problems. To address this discontinuity problem, this research deals with the implementation of a conservative meshless method and its applications in computational fluid dynamics (CFD). One of the most common types of collocating meshless methods, the RBF-DQ, is used to approximate the spatial derivatives. The issue with meshless methods in highly convective cases is that they cannot distinguish the influence of fluid flow from upstream or downstream, so some methodology is needed to make the scheme stable. Therefore, an upwinding scheme similar to that used in the finite volume method is added to capture steep gradients and shocks. This scheme creates a flexible algorithm within which a wide range of numerical flux schemes, such as those commonly used in the finite volume method, can be employed. In addition, a blended RBF is used to decrease the dissipation ensuing from the use of a low shape parameter. All of these steps are formulated for the Euler equations, and a series of test problems is used to confirm convergence of the algorithm. The present scheme was first employed on several incompressible benchmarks to validate the framework, and its application is illustrated by solving a set of incompressible Navier-Stokes problems. Results for the compressible problem are compared with the exact solution for flow over a ramp, as well as with solutions of finite volume discretization and the discontinuous Galerkin method, both of which require a mesh. The applicability and robustness of the algorithm for complex problems are thus demonstrated.
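A minimal 1-D sketch of the RBF-DQ building block used here: derivative weights at a node are obtained by requiring the weighted sum of nodal values to be exact for multiquadric RBFs centered at each node. The node layout and shape parameter are illustrative assumptions.

```python
import numpy as np

nodes = np.sort(np.random.default_rng(3).uniform(0.0, 1.0, 11))
xi = nodes[5]                       # node where df/dx is approximated
c = 0.2                             # multiquadric shape parameter (assumed)

phi  = lambda x, ck: np.sqrt((x - ck) ** 2 + c ** 2)            # RBF centered at ck
dphi = lambda x, ck: (x - ck) / np.sqrt((x - ck) ** 2 + c ** 2)  # its x-derivative

# Exactness conditions: sum_j w_j * phi_k(x_j) = phi_k'(x_i) for every center k.
A = phi(nodes[None, :], nodes[:, None])   # A[k, j] = phi_k(x_j)
b = dphi(xi, nodes)                       # b[k] = d/dx phi_k evaluated at x_i
w = np.linalg.solve(A, b)                 # derivative weights at x_i

f = np.sin(nodes)
print(f"RBF-DQ df/dx: {w @ f:.6f}   exact: {np.cos(xi):.6f}")
```

The same construction applies dimension by dimension on scattered 2-D or 3-D nodes, which is what frees the solver from mesh generation; the upwinding and blended-RBF ingredients described above then address stability and dissipation.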
NASA Astrophysics Data System (ADS)
Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.
2017-08-01
While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computational cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques for performing probabilistic stability analyses by considering the associated uncertainties in the analysis parameters. However, it is not possible to use FORM directly in numerical slope stability evaluations, as it requires the definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on the response surface method, with FORM used to develop an explicit performance function from the results of numerical simulations. The implementation of the proposed methodology is demonstrated on a large potential rock wedge in the Sumela Monastery, Turkey. The accuracy of the developed performance function in truly representing the limit state surface is evaluated by monitoring the slope behavior, and the calculated probability of failure is compared with the Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, with an accompanying error of 24%.
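The FORM step at the core of the methodology can be sketched once an explicit performance function is available; the quadratic limit state below is an illustrative stand-in for a fitted response surface (not the Sumela Monastery model), and the iteration is the standard HL-RF fixed point in standard normal space.

```python
import numpy as np
from scipy.stats import norm

def g(u):            # limit state: failure when g(u) < 0
    return 3.0 - u[0] - 0.5 * u[1] + 0.1 * u[0] * u[1]

def grad_g(u, h=1e-6):
    return np.array([(g(u + h * e) - g(u - h * e)) / (2 * h)
                     for e in np.eye(2)])

u = np.zeros(2)
for _ in range(50):                       # HL-RF fixed-point iteration
    gr = grad_g(u)
    u_new = (gr @ u - g(u)) / (gr @ gr) * gr
    if np.linalg.norm(u_new - u) < 1e-10:
        break
    u = u_new

beta = np.linalg.norm(u)                  # reliability index at the design point
print(f"beta = {beta:.3f}, Pf = {norm.cdf(-beta):.2e}")
```

Each FORM evaluation is nearly free once the response surface replaces the numerical slope model, which is the source of the reported efficiency gain over Monte Carlo simulation.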
NASA Astrophysics Data System (ADS)
Butler, T.; Graham, L.; Estep, D.; Dawson, C.; Westerink, J. J.
2015-04-01
The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-05
... Numerical Simulations Risk Management Methodology, November 1, 2010. I. Introduction. On August 25, 2010, The ... Analysis and Numerical Simulations ("STANS") risk management methodology. The rule change alters ... collateral within the STANS Monte Carlo simulations. OCC believes the approach currently used to ...
Ge, Liang; Sotiropoulos, Fotis
2007-08-01
A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g., the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [1]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus.
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Wang, Xiao-Yen; Chow, Chuen-Yen
1994-01-01
A new numerical discretization method for solving conservation laws is being developed. This new approach differs substantially in both concept and methodology from the well-established methods, i.e., finite difference, finite volume, finite element, and spectral methods. It is motivated by several important physical/numerical considerations and designed to avoid several key limitations of the above traditional methods. As a result of the above considerations, a set of key principles for the design of numerical schemes was put forth in a previous report. These principles were used to construct several numerical schemes that model a 1-D time-dependent convection-diffusion equation. These schemes were then extended to solve the time-dependent Euler and Navier-Stokes equations of a perfect gas. It was shown that the above schemes compared favorably with the traditional schemes in simplicity, generality, and accuracy. In this report, the 2-D versions of the above schemes, except the Navier-Stokes solver, are constructed using the same set of design principles. Their constructions are simplified greatly by the use of a nontraditional space-time mesh. Its use results in the simplest stencil possible, i.e., a tetrahedron in a 3-D space-time with one vertex at the upper time level and the other three at the lower time level. Because of the similarity in their design, each of the present 2-D solvers virtually shares with its 1-D counterpart the same fundamental characteristics. Moreover, it is shown that the present Euler solver is capable of generating highly accurate solutions for a famous 2-D shock reflection problem. Specifically, both the incident and the reflected shocks can be resolved by a single data point without the presence of numerical oscillations near the discontinuity.
A Series of MATLAB Learning Modules to Enhance Numerical Competency in Applied Marine Sciences
NASA Astrophysics Data System (ADS)
Fischer, A. M.; Lucieer, V.; Burke, C.
2016-12-01
Enhanced numerical competency in navigating massive data landscapes is a critical skill students need to effectively explore, analyse and visualize complex patterns in high-dimensional data when addressing many of the world's problems. This is especially the case for interdisciplinary undergraduate applied marine science programs, where students are required to demonstrate competency in methods and ideas across multiple disciplines. In response to this challenge, we have developed a series of repository-based data exploration, analysis and visualization modules in MATLAB for integration across various attending and online classes within the University of Tasmania. The primary focus of these modules is to teach students to collect, aggregate and interpret data from large online marine scientific data repositories in order to (1) gain technical skills in discovering, accessing, managing and visualising large, numerous data sources; (2) interpret, analyse and design approaches to visualise these data; and (3) address, through numerical approaches, complex real-world problems that traditional scientific methods cannot. All modules, implemented through MATLAB live scripts, include a short recorded lecture to introduce the topic, a handout that gives an overview of the activities, an instructor's manual with a detailed methodology and discussion points, a student assessment (quiz and level-specific challenge task), and a survey. The marine science themes addressed through these modules include biodiversity, habitat mapping, algal blooms and sea surface temperature change, and utilize a series of marine science and oceanographic data portals. Through these modules, students with minimal experience in MATLAB or numerical methods are introduced to array indexing, concatenation, sorting and reshaping, principal component analysis, spectral analysis, and unsupervised classification within the context of oceanographic processes, marine geology and marine community ecology.
Local Analysis of Shock Capturing Using Discontinuous Galerkin Methodology
NASA Technical Reports Server (NTRS)
Atkins, H. L.
1997-01-01
The compact form of the discontinuous Galerkin method allows for a detailed local analysis of the method in the neighborhood of the shock for a non-linear model problem. Insight gained from the analysis leads to new flux formulas that are stable and that preserve the compactness of the method. Although developed for a model equation, the flux formulas are applicable to systems such as the Euler equations. This article presents the analysis for methods of degree up to 5. The analysis is accompanied by supporting numerical experiments using Burgers' equation and the Euler equations.
Low-thrust trajectory analysis for the geosynchronous mission
NASA Technical Reports Server (NTRS)
Jasper, T. P.
1973-01-01
Methodology employed in development of a computer program designed to analyze optimal low-thrust trajectories is described, and application of the program to a Solar Electric Propulsion Stage (SEPS) geosynchronous mission is discussed. To avoid the zero inclination and eccentricity singularities which plague many small-force perturbation techniques, a special set of state variables (equinoctial) is used. Adjoint equations are derived for the minimum time problem and are also free from the singularities. Solutions to the state and adjoint equations are obtained by both orbit averaging and precision numerical integration; an evaluation of these approaches is made.
Continuation of probability density functions using a generalized Lyapunov approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baars, S., E-mail: s.baars@rug.nl; Viebahn, J.P., E-mail: viebahn@cwi.nl; Mulder, T.E., E-mail: t.e.mulder@uu.nl
Techniques from numerical bifurcation theory are very useful for studying transitions between steady fluid flow patterns and the instabilities involved. Here, we provide a computational methodology that uses parameter continuation to determine the probability density functions of systems of stochastic partial differential equations near fixed points, under a small-noise approximation. The key innovation is the efficient solution of a generalized Lyapunov equation using an iterative method involving low-rank approximations. We apply and illustrate the capabilities of the method using a problem in physical oceanography, i.e. the occurrence of multiple steady states of the Atlantic Ocean circulation.
An approach to solving large reliability models
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Veeraraghavan, Malathi; Dugan, Joanne Bechta; Trivedi, Kishor S.
1988-01-01
This paper describes a unified approach to the problem of solving large realistic reliability models. The methodology integrates behavioral decomposition, state truncation, and efficient sparse matrix-based numerical methods. The use of fault trees, together with ancillary information regarding dependencies, to automatically generate the underlying Markov model state space is proposed. The effectiveness of this approach is illustrated by modeling a state-of-the-art flight control system and a multiprocessor system. Nonexponential distributions for times to failure of components are assumed in the latter example. The modeling tool used for most of this analysis is HARP (the Hybrid Automated Reliability Predictor).
NASA Astrophysics Data System (ADS)
Savelyev, Andrey; Anisimov, Kirill; Kazhan, Egor; Kursakov, Innocentiy; Lysenkov, Alexandr
2016-10-01
The paper is devoted to the development of a methodology for optimizing the external aerodynamics of an engine. The optimization procedure is based on the numerical solution of the Reynolds-averaged Navier-Stokes equations, and a surrogate-based method is used as the optimizer. As a test problem, the optimal shape design of a turbofan nacelle is considered. The results of the first stage, which investigates a classic airplane configuration with the engine located under the wing, are presented. The described optimization procedure forms part of the third-generation multidisciplinary optimization being developed in the AGILE project.
Markov Chain Model with Catastrophe to Determine Mean Time to Default of Credit Risky Assets
NASA Astrophysics Data System (ADS)
Dharmaraja, Selvamuthu; Pasricha, Puneet; Tardelli, Paola
2017-11-01
This article deals with the problem of probabilistic prediction of the time distance to default for a firm. To model the credit risk, the dynamics of an asset is described as a function of a homogeneous discrete-time Markov chain subject to a catastrophe, the default. The behaviour of the Markov chain is investigated and the mean time to default is expressed in closed form. The methodology to estimate the parameters is given. Numerical results are provided to illustrate the applicability of the proposed model to real data, and their analysis is discussed.
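The closed-form mean time to default has a familiar counterpart for any absorbing Markov chain, sketched below with a toy transition matrix; the numbers are illustrative, and the paper's catastrophe model yields its own specific closed form.

    import numpy as np

    # Q is the transition matrix restricted to the transient (non-default)
    # states; each row's probability deficit flows to the absorbing default
    # state. t = (I - Q)^{-1} 1 is the expected number of steps to default.
    Q = np.array([[0.90, 0.08, 0.01],
                  [0.05, 0.85, 0.05],
                  [0.02, 0.10, 0.80]])
    t = np.linalg.solve(np.eye(3) - Q, np.ones(3))
    print(t)  # mean time to default from each starting state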
Programming Probabilistic Structural Analysis for Parallel Processing Computer
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Chamis, Christos C.; Murthy, Pappu L. N.
1991-01-01
The ultimate goal of this research program is to make Probabilistic Structural Analysis (PSA) computationally efficient and hence practical for the design environment by achieving large scale parallelism. The paper identifies the multiple levels of parallelism in PSA, identifies methodologies for exploiting this parallelism, describes the development of a parallel stochastic finite element code, and presents results of two example applications. It is demonstrated that speeds within five percent of those theoretically possible can be achieved. A special-purpose numerical technique, the stochastic preconditioned conjugate gradient method, is also presented and demonstrated to be extremely efficient for certain classes of PSA problems.
Superiorization-based multi-energy CT image reconstruction
Yang, Q; Cong, W; Wang, G
2017-01-01
The recently-developed superiorization approach is efficient and robust for solving various constrained optimization problems. This methodology can be applied to multi-energy CT image reconstruction with the regularization in terms of the prior rank, intensity and sparsity model (PRISM). In this paper, we propose a superiorized version of the simultaneous algebraic reconstruction technique (SART) based on the PRISM model. Then, we compare the proposed superiorized algorithm with the Split-Bregman algorithm in numerical experiments. The results show that both the Superiorized-SART and the Split-Bregman algorithms generate good results with weak noise and reduced artefacts. PMID:28983142
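A minimal sketch of the superiorization pattern follows: feasibility-seeking SART steps interleaved with small, summable perturbation steps that reduce a secondary objective. A simple quadratic smoothness term stands in for the PRISM regularizer here, so this is only the skeleton of the algorithm compared in the paper.

    import numpy as np

    def sart_step(x, A, b, lam=1.0):
        # One SART sweep: x += lam * V^{-1} A^T W (b - A x), with
        # W = diag(1/row sums of A) and V = diag(column sums of A).
        row, col = A.sum(axis=1), A.sum(axis=0)
        r = (b - A @ x) / np.where(row > 0, row, 1.0)
        return x + lam * (A.T @ r) / np.where(col > 0, col, 1.0)

    def superiorized_sart(x, A, b, n_iter=50, beta=1.0):
        for _ in range(n_iter):
            # Perturbation: descent step for 0.5*||grad x||^2 (stand-in prior);
            # for this objective the negative gradient is the discrete Laplacian.
            lap = np.roll(x, 1) - 2 * x + np.roll(x, -1)
            x = x + beta * lap / (np.linalg.norm(lap) + 1e-12)
            beta *= 0.9                      # summable step sizes
            x = sart_step(x, A, b)           # feasibility-seeking step
        return x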
Approximate dynamic programming for optimal stationary control with control-dependent noise.
Jiang, Yu; Jiang, Zhong-Ping
2011-12-01
This brief studies the stochastic optimal control problem via reinforcement learning and approximate/adaptive dynamic programming (ADP). A policy iteration algorithm is derived in the presence of both additive and multiplicative noise using Itô calculus. The expectation of the approximated cost matrix is guaranteed to converge to the solution of some algebraic Riccati equation that gives rise to the optimal cost value. Moreover, the covariance of the approximated cost matrix can be reduced by increasing the length of time interval between two consecutive iterations. Finally, a numerical example is given to illustrate the efficiency of the proposed ADP methodology.
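The deterministic baseline for the Riccati equation mentioned above can be computed directly; the sketch below solves the standard noise-free continuous-time ARE, whereas the brief's equation carries additional terms generated by the multiplicative noise.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Noise-free LQR baseline: the optimal cost is x'Px, with P solving
    # A'P + PA - P B R^{-1} B' P + Q = 0, and feedback u = -Kx, K = R^{-1} B' P.
    A = np.array([[0.0, 1.0], [-1.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    Q, R = np.eye(2), np.array([[1.0]])
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    print(P, K)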
Spectral Correlation of Thermal and Magnetotelluric Responses in a 2D Geothermal System
NASA Astrophysics Data System (ADS)
Pacheco, M. A.
2008-05-01
A methodology of thermal response observations at regional scale in geothermal systems was implemented using magnetotelluric (MT) data analyzed by spectral correlation of EM anomalies. Local favorability indices were obtained, enhancing the anomalies of thermal flow and their corresponding magnetotelluric responses related to a common source. A C++ code was developed to compute magnetotelluric and thermal responses using finite differences on a geothermal field model. The thermal convection problem was solved numerically using the Boussinesq approximation to obtain temperature and thermal flow profiles, and the 2D electromagnetic induction equations governing the wave equation for the H-polarization case were solved in a two-dimensional model of the system. This methodology is useful for finding thermal anomalies in conductive or resistive structures of a geothermal system, which are directly associated with the lithology of the model, such as the magmatic chamber, basement and hydrothermal reservoir.
A combinatorial framework to quantify peak/pit asymmetries in complex dynamics.
Hasson, Uri; Iacovacci, Jacopo; Davis, Ben; Flanagan, Ryan; Tagliazucchi, Enzo; Laufs, Helmut; Lacasa, Lucas
2018-02-23
We explore a combinatorial framework which efficiently quantifies the asymmetries between minima and maxima in local fluctuations of time series. We first showcase its performance by applying it to a battery of synthetic cases. We find rigorous results on some canonical dynamical models (stochastic processes with and without correlations, chaotic processes), complemented by extensive numerical simulations for a range of processes which indicate that the methodology correctly distinguishes different complex dynamics and outperforms state-of-the-art metrics in several cases. Subsequently, we apply this methodology to real-world problems emerging across several disciplines, including cases in neurobiology, finance and climate science. We conclude that differences between the statistics of local maxima and local minima in time series are highly informative of the complex underlying dynamics, and a graph-theoretic extraction procedure allows these features to be used for statistical learning purposes.
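At its most elementary, the asymmetry in question starts from counts of local maxima and minima; the sketch below computes such raw counts, while the paper's combinatorial framework builds much richer graph-theoretic statistics on top of these local fluctuations.

    import numpy as np

    def peak_pit_counts(x):
        # Strict local maxima (peaks) and minima (pits) of a 1D series.
        left, mid, right = x[:-2], x[1:-1], x[2:]
        peaks = int(np.sum((mid > left) & (mid > right)))
        pits = int(np.sum((mid < left) & (mid < right)))
        return peaks, pits

    rng = np.random.default_rng(1)
    x = np.cumsum(rng.standard_normal(10_000))  # correlated (random-walk) series
    print(peak_pit_counts(x))                   # symmetric in expectation here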
Transition Characteristic Analysis of Traffic Evolution Process for Urban Traffic Network
Chen, Hong; Li, Yang
2014-01-01
The characterization of the dynamics of traffic states remains fundamental to seeking solutions to diverse traffic problems. To gain more insights into traffic dynamics in the temporal domain, this paper explores temporal characteristics and distinct regularities in the traffic evolution process of an urban traffic network. We defined traffic state patterns by clustering multidimensional traffic time series using self-organizing maps and constructed a pattern transition network model that is appropriate for representing and analyzing the evolution process. The methodology is illustrated by an application to flow-rate data from multiple road sections of the network of Shenzhen's Nanshan District, China. Analysis and numerical results demonstrated that the methodology permits extracting many useful traffic transition characteristics, including stability, preference, activity, and attractiveness. In addition, more information about the relationships between these characteristics was extracted, which should be helpful in understanding the complex behavior of the temporal evolution features of traffic patterns. PMID:24982969
NASA Astrophysics Data System (ADS)
Klepikova, Maria V.; Le Borgne, Tanguy; Bour, Olivier; Davy, Philippe
2011-09-01
Temperature profiles in the subsurface are known to be sensitive to groundwater flow. Here we show that they are also strongly related to vertical flow in the boreholes themselves. Based on a numerical model of flow and heat transfer at the borehole scale, we propose a method to invert temperature measurements to derive borehole flow velocities. This method is applied to an experimental site in fractured crystalline rocks. Vertical flow velocities deduced from the inversion of temperature measurements are compared with direct heat-pulse flowmeter measurements, showing good agreement over two orders of magnitude. Applying this methodology under ambient, single- and cross-borehole pumping conditions allows us to estimate fracture hydraulic head and local transmissivity, as well as inter-borehole fracture connectivity. Thus, these results provide new insights into how to include temperature profiles in inverse problems for estimating hydraulic fracture properties.
Recovery Discontinuous Galerkin Jacobian-free Newton-Krylov Method for all-speed flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
HyeongKae Park; Robert Nourgaliev; Vincent Mousseau
2008-07-01
There is increasing interest in developing the next generation of simulation tools for advanced nuclear energy systems. These tools will utilize state-of-the-art numerical algorithms and computer science technology in order to maximize predictive capability, support advanced reactor designs, reduce uncertainty and increase safety margins. In analyzing nuclear energy systems, we are interested in compressible low-Mach-number, high-heat-flux flows with a wide range of Re, Ra, and Pr numbers. Under these conditions, the focus is placed on turbulent heat transfer, in contrast to other industries whose main interest is in capturing turbulent mixing. Our objective is to develop single-point turbulence closure models for large-scale engineering CFD codes, using Direct Numerical Simulation (DNS) or Large Eddy Simulation (LES) tools, which requires very accurate and efficient numerical algorithms. The focus of this work is placed on a fully implicit, high-order spatiotemporal discretization based on the discontinuous Galerkin method solving the conservative form of the compressible Navier-Stokes equations. The method utilizes a local reconstruction procedure derived from the weak formulation of the problem, inspired by the recovery diffusion flux algorithm of van Leer and Nomura and by the piecewise parabolic reconstruction in the finite volume method. The developed methodology is integrated into the Jacobian-free Newton-Krylov framework to allow a fully implicit solution of the problem.
NASA Astrophysics Data System (ADS)
Liu, Changying; Wu, Xinyuan
2017-07-01
In this paper we explore arbitrarily high-order Lagrange collocation-type time-stepping schemes for effectively solving high-dimensional nonlinear Klein-Gordon equations with different boundary conditions. We begin with one-dimensional periodic boundary problems and first formulate an abstract ordinary differential equation (ODE) on a suitable infinite-dimensional function space based on operator spectrum theory. We then introduce an operator-variation-of-constants formula which is essential for the derivation of our arbitrarily high-order Lagrange collocation-type time-stepping schemes for the nonlinear abstract ODE. The nonlinear stability and convergence are rigorously analysed once the spatial differential operator is approximated by an appropriate positive semi-definite matrix, under suitable smoothness assumptions. With regard to two-dimensional Dirichlet or Neumann boundary problems, our new time-stepping schemes coupled with the discrete Fast Sine/Cosine Transformation can be applied to simulate the two-dimensional nonlinear Klein-Gordon equations effectively. All essential features of the methodology are present in the one-dimensional and two-dimensional cases, although the schemes lend themselves with equal ease to the higher-dimensional case. The numerical simulation is implemented and the numerical results clearly demonstrate the advantage and effectiveness of our new schemes in comparison with the existing numerical methods for solving nonlinear Klein-Gordon equations in the literature.
Behavioral interventions for agitation in older adults with dementia: an evaluative review.
Spira, Adam P; Edelstein, Barry A
2006-06-01
Older adults with dementia commonly exhibit agitated behavior that puts them at risk of injury and institutionalization and is associated with caregiver stress. A range of theoretical approaches has produced numerous interventions to manage these behavior problems. This paper critically reviews the empirical literature on behavioral interventions to reduce agitation in older adults with dementia. A literature search yielded 23 articles that met inclusion criteria. These articles described interventions that targeted wandering, disruptive vocalization, physical aggression, other agitated behaviors and a combination of these behaviors. Studies are summarized individually and then evaluated. Behavioral interventions targeting agitated behavior exhibited by older adults with dementia show considerable promise. A number of methodological issues must be addressed to advance this research area. Problem areas include inconsistent use of functional assessment techniques, failure to report quantitative findings and inadequate demonstrations of experimental control. The reviewed studies collectively provide evidence that warrants optimism regarding the application of behavioral principles to the management of agitation among older adults with dementia. Although the results of some studies were mixed and several studies revealed methodological shortcomings, many of them offered innovations that can be used in future, more rigorously designed, intervention studies.
Integrated design optimization research and development in an industrial environment
NASA Astrophysics Data System (ADS)
Kumar, V.; German, Marjorie D.; Lee, S.-J.
1989-04-01
An overview is given of a design optimization project that has been in progress at the GE Research and Development Center for the past few years. The objective of this project is to develop a methodology and a software system for design automation and optimization of structural/mechanical components and systems. The effort focuses on research and development issues and also on optimization applications that can be related to real-life industrial design problems. The overall technical approach is based on integration of numerical optimization techniques, finite element methods, CAE and software engineering, and artificial intelligence/expert systems (AI/ES) concepts. The role of each of these engineering technologies in the development of a unified design methodology is illustrated. A software system DESIGN-OPT has been developed for both size and shape optimization of structural components subjected to static as well as dynamic loadings. By integrating this software with an automatic mesh generator, a geometric modeler and an attribute specification computer code, a software module SHAPE-OPT has been developed for shape optimization. Details of these software packages together with their applications to some 2- and 3-dimensional design problems are described.
Parallel methodology to capture cyclic variability in motored engines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ameen, Muhsin M.; Yang, Xiaofeng; Kuo, Tang-Wei
2016-07-28
Numerical prediction of cycle-to-cycle variability (CCV) in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flowfield, and (ii) CCV is experienced over long timescales and hence the simulations need to be performed for hundreds of consecutive cycles. In this study, a new methodology is proposed to dissociate this long-time-scale problem into several shorter time-scale problems, which can considerably reduce the computational time without sacrificing the fidelity of the simulations. The strategy is to perform multiple single-cycle simulations in parallel by effectively perturbing the simulation parameters such as the initial and boundary conditions. It is shown that by perturbing the initial velocity field effectively based on the intensity of the in-cylinder turbulence, the mean and variance of the in-cylinder flowfield are captured reasonably well. Adding perturbations in the initial pressure field and the boundary pressure improves the predictions. It is shown that this new approach is able to give accurate predictions of the flowfield statistics in less than one-tenth of the time required for the conventional approach of simulating consecutive engine cycles.
Pakenham, K I; Cox, S
2013-01-01
Few studies have examined the effects of parental MS on children, and those that have done so suffered from numerous methodological weaknesses, some of which are addressed in this study. This study investigated the effects of parental MS on children by comparing youth of a parent with MS to youth who have no family member with a serious health condition on adjustment outcomes, caregiving, attachment and family functioning. A questionnaire survey methodology was used. Measures included youth somatisation, health, pro-social behaviour, behavioural-social difficulties, caregiving, attachment and family functioning. A total of 126 youth of a parent with MS were recruited from MS Societies in Australia and were matched one-to-one with youth who had no family member with a health condition, drawn from a large community sample. Comparisons showed that youth of a parent with MS did not differ on any of the outcomes except for peer relationship problems: adolescent youth of a parent with MS reported lower peer relationship problems than control adolescents. Overall, the results did not support prior research findings suggesting adverse impacts of parental MS on youth.
On uncertainty quantification in hydrogeology and hydrogeophysics
NASA Astrophysics Data System (ADS)
Linde, Niklas; Ginsbourger, David; Irving, James; Nobile, Fabio; Doucet, Arnaud
2017-12-01
Recent advances in sensor technologies, field methodologies, numerical modeling, and inversion approaches have contributed to unprecedented imaging of hydrogeological properties and detailed predictions at multiple temporal and spatial scales. Nevertheless, imaging results and predictions will always remain imprecise, which calls for appropriate uncertainty quantification (UQ). In this paper, we outline selected methodological developments together with pioneering UQ applications in hydrogeology and hydrogeophysics. The applied mathematics and statistics literature is not easy to penetrate, and this review aims at helping hydrogeologists and hydrogeophysicists identify suitable approaches for UQ that can be applied and further developed to meet their specific needs. To bypass the tremendous computational costs associated with forward UQ based on full-physics simulations, we discuss proxy-modeling strategies and multi-resolution (Multi-level Monte Carlo) methods. We consider Bayesian inversion for non-linear and non-Gaussian state-space problems and discuss how Sequential Monte Carlo may become a practical alternative. We also describe strategies to account for forward modeling errors in Bayesian inversion. Finally, we consider hydrogeophysical inversion, where petrophysical uncertainty is often ignored, leading to overconfident parameter estimation. The high parameter and data dimensions encountered in hydrogeological and geophysical problems make UQ a complicated and important challenge that has only been partially addressed to date.
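As one example of the multi-resolution ideas surveyed above, a bare-bones multilevel Monte Carlo estimator is sketched below; the sample_level interface is hypothetical and simply encodes the telescoping-sum structure that lets cheap coarse levels absorb most of the sampling variance.

    import numpy as np

    def mlmc(sample_level, n_levels, n_samples):
        # sample_level(l, n) must return n paired samples (P_l, P_{l-1}) of the
        # quantity of interest at resolutions l and l-1 (with P_{-1} := 0), so
        # that E[P_L] = sum_l E[P_l - P_{l-1}] telescopes across levels.
        est = 0.0
        for l in range(n_levels):
            fine, coarse = sample_level(l, n_samples[l])
            est += np.mean(fine - coarse)
        return est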
Arolt, V; Rothermundt, M; Peters, M; Leonard, B
2002-01-01
There is convincing evidence that cytokines are involved in the physiology and pathophysiology of brain function and interact with different neurotransmitter and neuroendocrine pathways. The possible involvement of the immune system in the neurobiological mechanisms that underlie psychiatric disorders has attracted increasing attention in recent years. Thus in the last decade, numerous clinical studies have demonstrated dysregulated immune functions in patients with psychiatric disorders. Such findings formed the basis of the 7th Expert Meeting on Psychiatry and Immunology in Muenster, Germany, where a consensus symposium was held to consider the strengths and weaknesses of current research in psychoneuroimmunology. Following a general overview of the field, the following topics were discussed: (1) methodological problems in laboratory procedures and recruitment of clinical samples; (2) the importance of pre-clinical research and animal models in psychiatric research; (3) the problem of statistical vs biological relevance. It was concluded that, despite a fruitful proliferation of research activities throughout the last decade, the continuous elaboration of methodological standards including the implementation of hypothesis-driven research represents a task that is likely to prove crucial for the future development of immunology research in clinical psychiatry.
Flap-lag-torsional dynamics of helicopter rotor blades in forward flight
NASA Technical Reports Server (NTRS)
Crespodasilva, M. R. M.
1986-01-01
A perturbation/numerical methodology to analyze the flap-lead/lag motion of a centrally hinged, spring-restrained rotor blade that is valid both for hover and for forward flight was developed. The derivation of the nonlinear differential equations of motion and the analysis of the stability of the steady-state response of the blade were conducted entirely on a Symbolics 3670 machine using MACSYMA to perform all the lengthy symbolic manipulations, including generation of the Fortran codes and plots of the results. Floquet theory was also applied to the differential equations of motion in order to compare results with those obtained from the perturbation analysis. The results obtained from the perturbation methodology and from Floquet theory were found to be very close to each other, which demonstrates the usefulness of the perturbation methodology. Another problem under study consisted of the analysis of the influence of higher-order terms on the response and stability of a flexible rotor blade in forward flight, using computerized symbolic manipulation and a perturbation technique to bypass Floquet theory. The derivation of the partial differential equations of motion is presented.
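For a linear system with periodic coefficients, the Floquet computation referred to above reduces to integrating the fundamental matrix over one period; the sketch below uses a damped Mathieu equation as a stand-in for the periodically forced blade equations, with purely illustrative coefficients.

    import numpy as np
    from scipy.integrate import solve_ivp

    # x'' + c x' + (d + 2*eps*cos(2t)) x = 0, period T = pi. The eigenvalues of
    # the monodromy matrix are the Floquet multipliers; the motion is stable if
    # all of them lie inside the unit circle.
    c, d, eps, T = 0.1, 1.0, 0.3, np.pi

    def rhs(t, y):
        x, v = y
        return [v, -c * v - (d + 2 * eps * np.cos(2 * t)) * x]

    cols = []
    for e in np.eye(2):  # propagate each unit initial condition over one period
        sol = solve_ivp(rhs, (0.0, T), e, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    M = np.column_stack(cols)                 # monodromy matrix
    mult = np.linalg.eigvals(M)
    print(mult, np.all(np.abs(mult) < 1.0))   # multipliers, stability flag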
VARIABLE SELECTION FOR REGRESSION MODELS WITH MISSING DATA
Garcia, Ramon I.; Ibrahim, Joseph G.; Zhu, Hongtu
2009-01-01
We consider the variable selection problem for a class of statistical models with missing data, including missing covariate and/or response data. We investigate the smoothly clipped absolute deviation (SCAD) penalty and the adaptive LASSO and propose a unified model selection and estimation procedure for use in the presence of missing data. We develop a computationally attractive algorithm for simultaneously optimizing the penalized likelihood function and estimating the penalty parameters. In particular, we propose to use a model selection criterion, called the ICQ statistic, for selecting the penalty parameters. We show that the variable selection procedure based on ICQ automatically and consistently selects the important covariates and leads to efficient estimates with oracle properties. The methodology is very general and can be applied to numerous situations involving missing data, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Simulations are given to demonstrate the methodology and examine the finite sample performance of the variable selection procedures. Melanoma data from a cancer clinical trial are presented to illustrate the proposed methodology. PMID:20336190
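For orientation, a penalized-regression selection step on complete data can be sketched as follows; SCAD and the ICQ criterion are not available in scikit-learn, so a cross-validated plain LASSO stands in, and the paper's missing-data machinery is omitted entirely.

    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(2)
    n, p = 200, 20
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:3] = [2.0, -1.5, 1.0]        # only three active covariates
    y = X @ beta + rng.standard_normal(n)
    fit = LassoCV(cv=5).fit(X, y)      # penalty chosen by cross-validation
    print(np.nonzero(fit.coef_)[0])    # indices of selected covariates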
Space-Time Dependent Transport, Activation, and Dose Rates for Radioactivated Fluids.
NASA Astrophysics Data System (ADS)
Gavazza, Sergio
Two methods are developed to calculate the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates generated from the radioactivated fluids flowing through pipes. The work couples space- and time-dependent phenomena, treated as only space- or time-dependent in the open literature. The transport and activation methodology (TAM) is used to numerically calculate space- and time-dependent transport and activation of radionuclides in fluids flowing through pipes exposed to radiation fields, and volumetric radioactive sources created by radionuclide motions. The computer program Radionuclide Activation and Transport in Pipe (RNATPA1) performs the numerical calculations required in TAM. The gamma ray dose methodology (GAM) is used to numerically calculate space- and time-dependent gamma ray dose equivalent rates from the volumetric radioactive sources determined by TAM. The computer program Gamma Ray Dose Equivalent Rate (GRDOSER) performs the numerical calculations required in GAM. The scope of conditions considered by TAM and GAM herein includes (a) laminar flow in a straight pipe, (b) recirculating flow schemes, (c) time-independent fluid velocity distributions, (d) space-dependent monoenergetic neutron flux distribution, (e) space- and time-dependent activation process of a single parent nuclide and transport and decay of a single daughter radionuclide, and (f) assessment of space- and time-dependent gamma ray dose rates, outside the pipe, generated by the space- and time-dependent source term distributions inside of it. The methodologies, however, can be easily extended to include all the situations of interest for solving the phenomena addressed in this dissertation. A comparison is made between results obtained by the described calculational procedures and analytical expressions. The physics of the problems addressed by the new technique and the increased accuracy versus non-space- and time-dependent methods are presented. The value of the methods is also discussed. It has been demonstrated that TAM and GAM can be used to enhance the understanding of the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates related to radioactivated fluids flowing through pipes.
Cihan, Abdullah; Birkholzer, Jens; Bianchi, Marco
2014-12-31
Large-scale pressure increases resulting from carbon dioxide (CO2) injection in the subsurface can potentially impact caprock integrity, induce reactivation of critically stressed faults, and drive CO2 or brine through conductive features into shallow groundwater. Pressure management involving the extraction of native fluids from storage formations can be used to minimize pressure increases while maximizing CO2 storage. However, brine extraction requires pumping, transportation, possibly treatment, and disposal of substantial volumes of extracted brackish or saline water, all of which can be technically challenging and expensive. This paper describes a constrained differential evolution (CDE) algorithm for optimal well placement and injection/extraction control with the goal of minimizing brine extraction while achieving predefined pressure constraints. The CDE methodology was tested on a simple optimization problem whose solution can be partially obtained with a gradient-based optimization methodology. The CDE successfully estimated the true global optimum, for both extraction well location and extraction rate, needed for the test problem. A more complex example application of the developed strategy is also presented for a hypothetical CO2 storage scenario in a heterogeneous reservoir containing a critically stressed fault near an injection zone. Through the CDE optimization algorithm coupled to a numerical vertically-averaged reservoir model, we successfully estimated optimal rates and locations for CO2 injection and brine extraction wells while simultaneously satisfying multiple pressure buildup constraints to avoid fault activation and caprock fracturing. The study shows that the CDE methodology is also a very promising tool for other optimization problems related to GCS, such as reducing the 'Area of Review', monitoring design, reducing the risk of leakage, and increasing storage capacity and trapping.
NASA Astrophysics Data System (ADS)
Bacchi, Vito; Duluc, Claire-Marie; Bertrand, Nathalie; Bardet, Lise
2017-04-01
In recent years, in the context of hydraulic risk assessment, much effort has been put into the development of sophisticated numerical model systems able to reproduce the surface flow field. These numerical models are based on a deterministic approach and the results are presented in terms of measurable quantities (water depths, flow velocities, etc.). However, the modelling of surface flows involves numerous uncertainties, associated with the numerical structure of the model, with the knowledge of the physical parameters which force the system, and with the randomness inherent to natural phenomena. As a consequence, dealing with uncertainties can be a difficult task for both modelers and decision-makers [Ioss, 2011]. In the context of nuclear safety, IRSN assesses studies conducted by operators for different reference flood situations (local rain, small or large watershed flooding, sea levels, etc.) that are defined in the guide ASN N°13 [ASN, 2013]. The guide provides some recommendations for dealing with uncertainties, proposing a specific conservative approach to cover hydraulic modelling uncertainties. Depending on the situation, the influencing parameter might be the Strickler coefficient, levee behavior, simplified topographic assumptions, etc. Obviously, identifying the most influencing parameter and giving it a penalizing value is challenging and usually questionable. In this context, IRSN has conducted cooperative research activities since 2011 (with Compagnie Nationale du Rhone, the I-CiTy laboratory of Polytech'Nice, the Atomic Energy Commission, and the Bureau de Recherches Géologiques et Minières) in order to investigate the feasibility and benefits of Uncertainty Analysis (UA) and Global Sensitivity Analysis (GSA) applied to hydraulic modelling. A specific methodology was tested using the computational environment Promethee, developed by IRSN, which allows uncertainty propagation studies to be carried out. This methodology was applied with various numerical models and in different contexts: river flooding on the Rhône River (Nguyen et al., 2015) and on the Garonne River, the study of local rainfall (Abily et al., 2016), and tsunami generation in the framework of the ANR research project TANDEM. The feedback from these previous studies is analyzed (technical problems, limitations, interesting results, etc.), and perspectives, together with a discussion of how a probabilistic treatment of uncertainties could improve the current deterministic methodology for risk assessment (and other engineering applications), are finally given.
Reactor Dosimetry Applications Using RAPTOR-M3G:. a New Parallel 3-D Radiation Transport Code
NASA Astrophysics Data System (ADS)
Longoni, Gianluca; Anderson, Stanwood L.
2009-08-01
The numerical solution of the Linearized Boltzmann Equation (LBE) via the Discrete Ordinates method (SN) requires extensive computational resources for large 3-D neutron and gamma transport applications due to the concurrent discretization of the angular, spatial, and energy domains. This paper will discuss the development of RAPTOR-M3G (RApid Parallel Transport Of Radiation - Multiple 3D Geometries), a new 3-D parallel radiation transport code, and its application to the calculation of ex-vessel neutron dosimetry responses in the cavity of a commercial 2-loop Pressurized Water Reactor (PWR). RAPTOR-M3G is based on domain decomposition algorithms, where the spatial and angular domains are allocated and processed on multi-processor computer architectures. As compared to traditional single-processor applications, this approach reduces the computational load as well as the memory requirement per processor, yielding an efficient solution methodology for large 3-D problems. Measured neutron dosimetry responses in the reactor cavity air gap will be compared to the RAPTOR-M3G predictions. This paper is organized as follows: Section 1 discusses the RAPTOR-M3G methodology; Section 2 describes the 2-loop PWR model and the numerical results obtained; Section 3 addresses the parallel performance of the code; and Section 4 concludes this paper with final remarks and future work.
Efficient numerical method of freeform lens design for arbitrary irradiance shaping
NASA Astrophysics Data System (ADS)
Wojtanowski, Jacek
2018-05-01
A computational method to design a lens with a flat entrance surface and a freeform exit surface that can transform a collimated, generally non-uniform input beam into a beam with a desired irradiance distribution of arbitrary shape is presented. The methodology is based on non-linear elliptic partial differential equations, known as Monge-Ampère PDEs. This paper describes an original numerical algorithm to solve this problem by applying the Gauss-Seidel method with simplified boundary conditions. A joint MATLAB-ZEMAX environment is used to implement and verify the method. To prove the efficiency of the proposed approach, an exemplary study in which the designed lens is faced with a challenging illumination task is shown. An analysis of solution stability, iteration-to-iteration ray mapping evolution (attached in video format), depth of focus and non-zero étendue efficiency is performed.
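The Gauss-Seidel iteration at the core of such algorithms is easiest to see on a linear model problem: the sketch below sweeps a Poisson equation with Dirichlet data, whereas the lens design lags the nonlinear Monge-Ampère terms so that each sweep has essentially this structure.

    import numpy as np

    def gauss_seidel_poisson(f, u, n_sweeps=500):
        # Solve Laplacian(u) = f on a unit-square grid; boundary values of u
        # are held fixed (Dirichlet) and interior values are updated in place.
        n = u.shape[0]
        h2 = (1.0 / (n - 1)) ** 2
        for _ in range(n_sweeps):
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] +
                                      u[i, j-1] + u[i, j+1] - h2 * f[i, j])
        return u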
Simulation of wind turbine wakes using the actuator line technique
Sørensen, Jens N.; Mikkelsen, Robert F.; Henningson, Dan S.; Ivanell, Stefan; Sarmast, Sasan; Andersen, Søren J.
2015-01-01
The actuator line technique was introduced as a numerical tool to be employed in combination with large eddy simulations to enable the study of wakes and wake interaction in wind farms. The technique is today largely used for studying basic features of wakes as well as for making performance predictions of wind farms. In this paper, we give a short introduction to the wake problem and the actuator line methodology and present a study in which the technique is employed to determine the near-wake properties of wind turbines. The presented results include a comparison with experimental results on the wake characteristics of the flow around a three-bladed model wind turbine, the development of a simple analytical formula for determining the near-wake length behind a wind turbine, and a detailed investigation of wake structures based on proper orthogonal decomposition analysis of numerically generated snapshots of the wake. PMID:25583862
Modeling and control of flexible space platforms with articulated payloads
NASA Technical Reports Server (NTRS)
Graves, Philip C.; Joshi, Suresh M.
1989-01-01
The first steps in developing a methodology for spacecraft control-structure interaction (CSI) optimization are identification and classification of anticipated missions, and the development of tractable mathematical models in each mission class. A mathematical model of a generic large flexible space platform (LFSP) with multiple independently pointed rigid payloads is considered. The objective is not to develop a general purpose numerical simulation, but rather to develop an analytically tractable mathematical model of such composite systems. The equations of motion for a single payload case are derived, and are linearized about zero steady-state. The resulting model is then extended to include multiple rigid payloads, yielding the desired analytical form. The mathematical models developed clearly show the internal inertial/elastic couplings, and are therefore suitable for analytical and numerical studies. A simple decentralized control law is proposed for fine pointing the payloads and LFSP attitude control, and simulation results are presented for an example problem. The decentralized controller is shown to be adequate for the example problem chosen, but does not, in general, guarantee stability. A centralized dissipative controller is then proposed, requiring a symmetric form of the composite system equations. Such a controller guarantees robust closed loop stability despite unmodeled elastic dynamics and parameter uncertainties.
Construction of high-rise buildings in the Far East of Russia
NASA Astrophysics Data System (ADS)
Kudryavtsev, Sergey; Bugunov, Semen; Pogulyaeva, Evgeniya; Peters, Anastasiya; Kotenko, Zhanna; Grigor'yev, Danil
2018-03-01
The construction of high-rise buildings on plate foundations in the geotechnical conditions of the Russian Far East is a complicated problem. In this respect foundation engineering becomes rather essential. In order to set a firm foundation it is necessary to take into account the pressure distribution at the structure base and the inhomogeneity of building deformation, which calls for coupled geotechnical calculations complicated by a number of factors: the actual stratification of the soils, the complex geometry of the building under construction, the spatial work of the foundation ground with consideration for physical nonlinearity, the influence of the stiffness of the superstructure (reinforced concrete framing) upon the development of foundation deformations, foundation performance (the performance of the bed plate under the building and stairwells), and the origination of internal forces in the superstructure under differential settlement. The solution of spatial problems regarding the mutual interaction between buildings and foundations with account of the factors mentioned above is fully achievable via the application of numerical modeling methodology. The work reviews the results of numerical modeling of high-rise buildings on plate foundations in the geotechnical conditions of the Russian Far East, by way of the example of the city of Khabarovsk.
Stochastic approach for radionuclides quantification
NASA Astrophysics Data System (ADS)
Clement, A.; Saurel, N.; Perrin, G.
2018-01-01
Gamma spectrometry is a passive non-destructive assay used to quantify radionuclides present in more or less complex objects. Basic methods using empirical calibration with a standard in order to quantify the activity of nuclear materials by determining the calibration coefficient are useless on non-reproducible, complex and single nuclear objects such as waste packages. Package specifications such as composition or geometry change from one package to another and involve a high variability of objects. The current quantification process uses numerical modelling of the measured scene with the few available data, such as geometry or composition. These data are density, material, screen, geometric shape, matrix composition, and matrix and source distribution. Some of them are strongly dependent on package data knowledge and operator background. The French Commissariat à l'Energie Atomique (CEA) is developing a new methodology to quantify nuclear materials in waste packages and waste drums without operator adjustment or knowledge of the internal package configuration. This method combines a global stochastic approach which uses, among others, surrogate models to simulate the gamma attenuation behaviour, a Bayesian approach which considers conditional probability densities of the problem inputs, and Markov Chain Monte Carlo (MCMC) algorithms which solve inverse problems, with the gamma-ray emission radionuclide spectrum and the outside dimensions of the objects of interest. The methodology is being tested by quantifying actinide activity in standard sources of different matrix, composition, and configuration, with known actinide masses, locations and distributions. Activity uncertainties are taken into account by this adjustment methodology.
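A generic random-walk Metropolis sampler of the kind underlying such MCMC-based inversion is sketched below on a toy one-parameter attenuation problem; all names and numbers are illustrative, and the actual methodology couples the chain to gamma-attenuation surrogate models.

    import numpy as np

    def metropolis(log_post, x0, n_steps=20_000, step=0.1, seed=0):
        rng = np.random.default_rng(seed)
        x = np.atleast_1d(np.asarray(x0, dtype=float))
        lp = log_post(x)
        chain = np.empty((n_steps, x.size))
        for k in range(n_steps):
            prop = x + step * rng.standard_normal(x.size)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
                x, lp = prop, lp_prop
            chain[k] = x
        return chain

    # Toy inverse problem: infer an activity m from one noisy count-rate datum
    # y = m * exp(-mu) + noise, with the attenuation mu assumed known.
    y, mu, sigma = 4.2, 0.7, 0.3
    log_post = lambda m: (-0.5 * ((y - m[0] * np.exp(-mu)) / sigma) ** 2
                          if m[0] > 0 else -np.inf)
    samples = metropolis(log_post, x0=[1.0])
    print(samples[5000:].mean(), samples[5000:].std())  # posterior summary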
Dewey, Colin N
2012-01-01
Whole-genome alignment (WGA) is the prediction of evolutionary relationships at the nucleotide level between two or more genomes. It combines aspects of both colinear sequence alignment and gene orthology prediction, and is typically more challenging to address than either of these tasks due to the size and complexity of whole genomes. Despite the difficulty of this problem, numerous methods have been developed for its solution because WGAs are valuable for genome-wide analyses, such as phylogenetic inference, genome annotation, and function prediction. In this chapter, we discuss the meaning and significance of WGA and present an overview of the methods that address it. We also examine the problem of evaluating whole-genome aligners and offer a set of methodological challenges that need to be tackled in order to make the most effective use of our rapidly growing databases of whole genomes.
Unsteady numerical simulation of a round jet with impinging microjets for noise suppression
Lew, Phoi-Tack; Najafi-Yazdi, Alireza; Mongeau, Luc
2013-01-01
The objective of this study was to determine the feasibility of a lattice-Boltzmann method (LBM)-Large Eddy Simulation methodology for the prediction of sound radiation from a round jet-microjet combination. The distinct advantage of LBM over traditional computational fluid dynamics methods is its ease of handling problems with complex geometries. Numerical simulations of an isothermal Mach 0.5, Re_D = 1 × 10^5 circular jet (D_j = 0.0508 m) with and without the presence of 18 microjets (D_mj = 1 mm) were performed. The presence of microjets resulted in a decrease in the axial turbulence intensity and turbulent kinetic energy. The associated decrease in radiated sound pressure level was around 1 dB. The far-field sound was computed using the porous Ffowcs Williams-Hawkings surface integral acoustic method. The trend obtained is in qualitative agreement with experimental observations. The results of this study support the accuracy of LBM-based numerical simulations for predictions of the effects of noise suppression devices on the radiated sound power. PMID:23967931
Numerical 3+1 General Relativistic Magnetohydrodynamics: A Local Characteristic Approach
NASA Astrophysics Data System (ADS)
Antón, Luis; Zanotti, Olindo; Miralles, Juan A.; Martí, José M.; Ibáñez, José M.; Font, José A.; Pons, José A.
2006-01-01
We present a general procedure to solve numerically the general relativistic magnetohydrodynamics (GRMHD) equations within the framework of the 3+1 formalism. The work reported here extends our previous investigation in general relativistic hydrodynamics (Banyuls et al. 1997) where magnetic fields were not considered. The GRMHD equations are written in conservative form to exploit their hyperbolic character in the solution procedure. All theoretical ingredients necessary to build up high-resolution shock-capturing schemes based on the solution of local Riemann problems (i.e., Godunov-type schemes) are described. In particular, we use a renormalized set of regular eigenvectors of the flux Jacobians of the relativistic MHD equations. In addition, the paper describes a procedure based on the equivalence principle of general relativity that allows the use of Riemann solvers designed for special relativistic MHD in GRMHD. Our formulation and numerical methodology are assessed by performing various test simulations recently considered by different authors. These include magnetized shock tubes, spherical accretion onto a Schwarzschild black hole, equatorial accretion onto a Kerr black hole, and magnetized thick disks accreting onto a black hole and subject to the magnetorotational instability.
Multidisciplinary optimization of an HSCT wing using a response surface methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giunta, A.A.; Grossman, B.; Mason, W.H.
1994-12-31
Aerospace vehicle design is traditionally divided into three phases: conceptual, preliminary, and detailed. Each of these design phases entails a particular level of accuracy and computational expense. While there are several computer programs which perform inexpensive conceptual-level aircraft multidisciplinary design optimization (MDO), aircraft MDO remains prohibitively expensive using preliminary- and detailed-level analysis tools. This occurs due to the expense of computational analyses and because gradient-based optimization requires the analysis of hundreds or thousands of aircraft configurations to estimate design sensitivity information. A further hindrance to aircraft MDO is the problem of numerical noise which occurs frequently in engineering computations. Computer models produce numerical noise as a result of the incomplete convergence of iterative processes, round-off errors, and modeling errors. Such numerical noise is typically manifested as a high frequency, low amplitude variation in the results obtained from the computer models. Optimization attempted using noisy computer models may result in the erroneous calculation of design sensitivities and may slow or prevent convergence to an optimal design.
A mathematical solution for the parameters of three interfering resonances
NASA Astrophysics Data System (ADS)
Han, X.; Shen, C. P.
2018-04-01
The multiple-solution problem in determining the parameters of three interfering resonances from a fit to an experimentally measured distribution is considered from a mathematical viewpoint. It is shown that there are four numerical solutions for a fit with three coherent Breit-Wigner functions. Although explicit analytical formulae cannot be derived in this case, we provide some constraint equations between the four solutions. For the cases of nonrelativistic and relativistic Breit-Wigner forms of amplitude functions, a numerical method is provided to derive the other solutions from that already obtained, based on the obtained constraint equations. In real experimental measurements with more complicated amplitude forms similar to Breit-Wigner functions, the same method can be deduced and performed to get numerical solutions. The good agreement between the solutions found using this mathematical method and those directly from the fit verifies the correctness of the constraint equations and mathematical methodology used. Supported by National Natural Science Foundation of China (NSFC) (11575017, 11761141009), the Ministry of Science and Technology of China (2015CB856701) and the CAS Center for Excellence in Particle Physics (CCEPP)
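The fit model in question is the squared modulus of a coherent sum of Breit-Wigner amplitudes; a sketch with the nonrelativistic form and illustrative parameters follows, and the fact that distinct parameter sets can reproduce the same curve is precisely the multiple-solution problem analyzed above.

    import numpy as np

    def intensity(m, params):
        # |sum of coherent nonrelativistic Breit-Wigner amplitudes|^2;
        # params is a list of (magnitude a, phase phi, mass M, width G).
        amp = sum(a * np.exp(1j * phi) / (m - M + 0.5j * G)
                  for a, phi, M, G in params)
        return np.abs(amp) ** 2

    m = np.linspace(3.0, 4.0, 400)
    y = intensity(m, [(1.0, 0.0, 3.2, 0.05),
                      (0.8, 1.0, 3.5, 0.08),
                      (0.5, -0.7, 3.8, 0.04)])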
A hybrid neural networks-fuzzy logic-genetic algorithm for grade estimation
NASA Astrophysics Data System (ADS)
Tahmasebi, Pejman; Hezarkhani, Ardeshir
2012-05-01
Grade estimation is a quite important and money/time-consuming stage in a mine project, which is considered as a challenge for the geologists and mining engineers due to the structural complexities in mineral ore deposits. To overcome this problem, several artificial intelligence techniques such as Artificial Neural Networks (ANN) and Fuzzy Logic (FL) have recently been employed with various architectures and properties. However, due to the constraints of both methods, they yield the desired results only under specific circumstances. As an example, one major problem in FL is the difficulty of constructing the membership functions (MFs). Other problems, such as architecture and local minima, can also arise in ANN design. Therefore, a new methodology is presented in this paper for grade estimation. This method, which is based on ANN and FL, is called "Coactive Neuro-Fuzzy Inference System" (CANFIS), which combines two approaches, ANN and FL. The combination of these two artificial intelligence approaches is achieved via the verbal and numerical power of intelligent systems. To improve the performance of this system, a Genetic Algorithm (GA) - as a well-known technique to solve complex optimization problems - is also employed to optimize the network parameters, including the learning rate, the momentum of the network and the number of MFs for each input. A comparison of these techniques (ANN, Adaptive Neuro-Fuzzy Inference System or ANFIS) with this new method (CANFIS-GA) is also carried out through a case study in the Sungun copper deposit, located in East-Azerbaijan, Iran. The results show that CANFIS-GA could be a faster and more accurate alternative to the existing time-consuming methodologies for ore grade estimation and is, therefore, suggested for application to grade estimation in similar problems.
Problem solving using soft systems methodology.
Land, L
This article outlines a method of problem solving which considers holistic solutions to complex problems. Soft systems methodology allows people involved in the problem situation to have control over the decision-making process.
NASA Astrophysics Data System (ADS)
Liao, Haitao; Wu, Wenwang; Fang, Daining
2018-07-01
A coupled approach combining the reduced-space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null-space decomposition technique, the original optimization formulation in the full space is mathematically simplified and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is accomplished, resulting in a simple optimization problem subject to bound constraints. Moreover, a second-order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction of the computational cost. Finally, the effectiveness and applicability of the proposed methodology are demonstrated by two numerical examples.
Optimizing Multi-Product Multi-Constraint Inventory Control Systems with Stochastic Replenishments
NASA Astrophysics Data System (ADS)
Allah Taleizadeh, Ata; Aryanezhad, Mir-Bahador; Niaki, Seyed Taghi Akhavan
Multi-periodic inventory control problems are mainly studied under two assumptions. The first is continuous review, where, depending on the inventory level, orders can happen at any time, and the other is periodic review, where orders can only happen at the beginning of each period. In this study, we relax these assumptions and assume that the periodic replenishments are stochastic in nature. Furthermore, we assume that the periods between two replenishments are independent and identically distributed random variables. For the problem at hand, the decision variables are of integer type and there are two kinds of constraints, on space and service level, for each product. We develop a model of the problem in which a combination of back-orders and lost sales is considered for the shortages. Then, we show that the model is of the integer-nonlinear-programming type and, in order to solve it, a search algorithm can be utilized. We employ a simulated annealing approach and provide a numerical example to demonstrate the applicability of the proposed methodology.
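A skeleton of the simulated annealing search over the integer decision variables is sketched below; cost and feasible are hypothetical callbacks that would encode the back-order/lost-sales objective and the space and service-level constraints of the model.

    import numpy as np

    def simulated_annealing(cost, feasible, x0, n_iter=5000, T0=10.0, seed=0):
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=int)
        fx = cost(x)
        best, fbest = x.copy(), fx
        for k in range(n_iter):
            T = T0 / (1 + k)                                   # cooling schedule
            cand = x.copy()
            cand[rng.integers(len(x))] += rng.choice([-1, 1])  # integer move
            if cand.min() < 0 or not feasible(cand):
                continue
            fc = cost(cand)
            if fc < fx or rng.uniform() < np.exp(-(fc - fx) / T):
                x, fx = cand, fc               # accept (possibly uphill) move
                if fx < fbest:
                    best, fbest = x.copy(), fx
        return best, fbest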
Development of an adaptive hp-version finite element method for computational optimal control
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Warner, Michael S.
1994-01-01
In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while still keeping computational effort reasonable enough to calculate time histories in a short enough time for on-board applications.
Standards and Guidelines for Numerical Models for Tsunami Hazard Mitigation
NASA Astrophysics Data System (ADS)
Titov, V.; Gonzalez, F.; Kanoglu, U.; Yalciner, A.; Synolakis, C. E.
2006-12-01
An increasing number of nations around the world need to develop tsunami mitigation plans, which invariably involve inundation maps for warning guidance and evacuation planning. There is the risk that inundation maps may be produced with older or untested methodology, as there are currently no standards for modeling tools. In the aftermath of the 2004 megatsunami, some models were used to model inundation for Cascadia events with results much larger than sediment records and existing state-of-the-art studies suggest, leading to confusion among emergency managers. Incorrectly assessing tsunami impact is hazardous, as recent events in 2006 in Tonga, Kythira, Greece and Central Java have suggested (Synolakis and Bernard, 2006). To calculate tsunami currents, forces and runup on coastal structures, and inundation of coastlines, one must calculate the evolution of the tsunami wave from the deep ocean to its target site numerically. No matter what the numerical model, validation (the process of ensuring that the model solves the parent equations of motion accurately) and verification (the process of ensuring that the model used represents geophysical reality appropriately) are both essential. Validation ensures that the model performs well in a wide range of circumstances and is accomplished through comparison with analytical solutions. Verification ensures that the computational code performs well over a range of geophysical problems. A few analytic solutions have themselves been validated with laboratory data. Even fewer existing numerical models have been both validated with the analytical solutions and verified with both laboratory and field measurements, thus establishing a gold standard for numerical codes for inundation mapping. While there is in principle no absolute certainty that a numerical code that has performed well in all the benchmark tests will also produce correct inundation predictions with any given source motions, validated codes reduce the level of uncertainty in their results to the uncertainty in the geophysical initial conditions. Further, when coupled with real-time free-field tsunami measurements from tsunameters, validated codes are the only choice for realistic forecasting of inundation; the consequences of failure are too ghastly to take chances with numerical procedures that have not been validated. We discuss a ten-step process of benchmark tests for models used for inundation mapping. The associated methodology and algorithms have first to be validated with analytical solutions, then verified with laboratory measurements and field data. The models need to be published in the scientific literature in peer-reviewed journals indexed by ISI. While this process may appear onerous, it reflects our state of knowledge, and is the only defensible methodology when human lives are at stake. Synolakis, C.E., and Bernard, E.N., Tsunami science before and beyond Boxing Day 2004, Phil. Trans. R. Soc. A 364(1845), 2231-2263, 2005.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yi; Jakeman, John; Gittelson, Claude
2015-01-08
In this paper we present a localized polynomial chaos expansion for partial differential equations (PDE) with random inputs. In particular, we focus on time-independent linear stochastic problems with high-dimensional random inputs, where the traditional polynomial chaos methods, and most existing methods, incur prohibitively high simulation cost. The local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problem are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low-dimensional local problems and can be highly efficient. We present the general mathematical framework of the methodology and use numerical examples to demonstrate the properties of the method.
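As a minimal illustration of the building block used in each subdomain, the sketch below fits a one-dimensional polynomial chaos surrogate by least squares with probabilists' Hermite polynomials; the `model` function is a hypothetical stand-in for a subdomain PDE solve.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# Minimal sketch of a polynomial chaos surrogate in one random dimension,
# fitted by least squares; the localized method would build one such
# low-dimensional expansion per subdomain. model() is a made-up solver output.
def model(xi):
    return np.exp(-0.5 * xi) * np.sin(1.0 + xi)

rng = np.random.default_rng(0)
xi = rng.standard_normal(200)              # samples of the random input
V = hermevander(xi, deg=6)                 # probabilists' Hermite basis
coef, *_ = np.linalg.lstsq(V, model(xi), rcond=None)

# Evaluate the surrogate at new samples and check the error.
xi_test = rng.standard_normal(1000)
u_pce = hermevander(xi_test, deg=6) @ coef
print("max abs error:", np.abs(u_pce - model(xi_test)).max())
```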
Wang, Jiqiang
2016-03-01
Restricted sensing and actuation control represents an important area of research that has been overlooked in most design methodologies. In many practical control engineering problems, the design must be implemented through a single sensor and a single actuator for multivariate performance variables. In this paper, a novel approach is proposed for the solution of the single-sensor, single-actuator control problem in which performance over any prescribed frequency band can also be tailored. The results are obtained for the broad-band control design based on the formulation for discrete frequency control. It is shown that the single-sensor, single-actuator control problem over a frequency band can be cast as a Nevanlinna-Pick interpolation problem. An optimal controller can then be obtained via convex optimization over LMIs. Remarkably, robustness issues can also be tackled in this framework. A numerical example is provided for the broad-band attenuation of rotor blade vibration to illustrate the proposed design procedures.
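The feasibility side of the Nevanlinna-Pick formulation can be illustrated directly: interpolation data inside the unit disc admit a bounded interpolant exactly when the associated Pick matrix is positive semidefinite. The sketch below checks this condition for made-up data; it illustrates the classical test, not the paper's controller synthesis.

```python
import numpy as np

# Minimal sketch of the Nevanlinna-Pick feasibility test: data (z_i, w_i)
# with |z|, |w| < 1 admit an interpolant bounded by 1 on the disc iff the
# Pick matrix is positive semidefinite. Data are illustrative.
z = np.array([0.1 + 0.2j, -0.3j, 0.5])        # interpolation points
w = np.array([0.2, 0.1 + 0.1j, -0.4])         # prescribed values

P = (1 - np.outer(w, w.conj())) / (1 - np.outer(z, z.conj()))
eigs = np.linalg.eigvalsh((P + P.conj().T) / 2)  # symmetrize for safety
print("Pick matrix eigenvalues:", eigs)
print("interpolation feasible:", bool(eigs.min() >= -1e-12))
```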
Ensemble-based data assimilation and optimal sensor placement for scalar source reconstruction
NASA Astrophysics Data System (ADS)
Mons, Vincent; Wang, Qi; Zaki, Tamer
2017-11-01
Reconstructing the characteristics of a scalar source from limited remote measurements in a turbulent flow is a problem of great interest for environmental monitoring, and is challenging in several respects. Firstly, the numerical estimation of scalar dispersion in a turbulent flow requires significant computational resources. Secondly, in practice, only a limited number of observations are available, which generally makes the corresponding inverse problem ill-posed. Ensemble-based variational data assimilation techniques are adopted to solve the problem of scalar source localization in a turbulent channel flow at Reτ = 180. This approach combines the components of variational data assimilation and ensemble Kalman filtering, and inherits the robustness of the former and the ease of implementation of the latter. An ensemble-based methodology for optimal sensor placement is also proposed in order to improve the conditioning of the inverse problem, which enhances the performance of the data assimilation scheme. This work has been partially funded by the Office of Naval Research (Grant N00014-16-1-2542) and by the National Science Foundation (Grant 1461870).
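For readers unfamiliar with the ensemble ingredient, the sketch below implements a stochastic (perturbed-observation) ensemble Kalman analysis step on a generic state vector; the observation operator, error covariance, and synthetic data are all illustrative assumptions, not the paper's channel-flow setup.

```python
import numpy as np

# Minimal sketch of a stochastic ensemble Kalman analysis step, the ensemble
# ingredient the hybrid scheme shares with the EnKF. The state holds the
# source parameters; H, R, and the synthetic data are made up.
rng = np.random.default_rng(1)
n, m, N = 50, 5, 40                          # state, obs, ensemble sizes
X = rng.standard_normal((n, N))              # prior ensemble (columns = members)
H = rng.standard_normal((m, n)) / np.sqrt(n) # linear observation operator
R = 0.1 * np.eye(m)                          # observation error covariance
y = H @ rng.standard_normal(n)               # synthetic measurements

Xm = X.mean(axis=1, keepdims=True)
A = X - Xm                                   # ensemble anomalies
S = H @ A                                    # anomalies in observation space
K = A @ S.T @ np.linalg.inv(S @ S.T / (N - 1) + R) / (N - 1)  # Kalman gain

# Perturbed-observation update of each member.
Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T
Xa = X + K @ (Y - H @ X)
print("prior spread:", A.std(),
      "posterior spread:", (Xa - Xa.mean(axis=1, keepdims=True)).std())
```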
A methodology for constraining power in finite element modeling of radiofrequency ablation.
Jiang, Yansheng; Possebon, Ricardo; Mulier, Stefaan; Wang, Chong; Chen, Feng; Feng, Yuanbo; Xia, Qian; Liu, Yewei; Yin, Ting; Oyen, Raymond; Ni, Yicheng
2017-07-01
Radiofrequency ablation (RFA) is a minimally invasive thermal therapy for the treatment of cancer, hyperopia, and cardiac tachyarrhythmia. In RFA, the power delivered to the tissue is a key parameter. The objective of this study was to establish a methodology for the finite element modeling of RFA with constant power. Because the electric conductivity of tissue changes with temperature, a nonconventional boundary value problem arises in the mathematical modeling of RFA: neither the voltage (Dirichlet condition) nor the current (Neumann condition), but the power, that is, the product of voltage and current, is prescribed on part of the boundary. We solved the problem using a Lagrange multiplier: the product of the voltage and current on the electrode surface is constrained to equal the Joule heating. We theoretically proved the equality between the product of the voltage and current on the surface of the electrode and the Joule heating in the domain. We also proved the well-posedness of the problem of solving the Laplace equation for the electric potential under a constant power constraint prescribed on the electrode surface. The Pennes bioheat transfer equation and the Laplace equation for the electric potential, augmented with the constraint of constant power, were solved simultaneously using the Newton-Raphson algorithm. Three validation problems were solved. Numerical results were compared either with an analytical solution deduced in this study or with results obtained by ANSYS or experiments. This work provides finite element modeling of constant-power RFA with a firm mathematical basis and opens a pathway toward achieving the optimal RFA power.
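A drastically simplified, lumped-parameter analogue of the constant-power constraint is sketched below: a scalar electric balance with temperature-dependent conductance and a scalar heat balance are solved together by Newton-Raphson so that the delivered Joule power matches a prescribed value. All parameter values are illustrative, not tissue data.

```python
import numpy as np

# Lumped sketch of the constant-power idea: solve the electric balance and a
# (drastically simplified) heat balance together with Newton-Raphson,
# constraining the Joule power to a prescribed value P0. g(T) mimics the
# temperature-dependent conductivity; all numbers are made up.
P0, h, T0 = 10.0, 2.0, 37.0              # target power, heat loss coeff, baseline temp
g0, a = 0.5, 0.015                       # conductance at T0 and its temp coefficient
g  = lambda T: g0 * (1.0 + a * (T - T0))
dg = g0 * a                              # dg/dT

x = np.array([5.0, 45.0])                # initial guess for (V, T)
for _ in range(20):
    V, T = x
    F = np.array([g(T) * V**2 - P0,                  # power constraint
                  h * (T - T0) - g(T) * V**2])       # steady heat balance
    J = np.array([[ 2 * g(T) * V,  dg * V**2],
                  [-2 * g(T) * V,  h - dg * V**2]])
    dx = np.linalg.solve(J, -F)
    x += dx
    if np.linalg.norm(dx) < 1e-12:
        break

V, T = x
print(f"V = {V:.4f}, T = {T:.4f}, delivered power = {g(T) * V**2:.6f}")
```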
Analysis and control of high-speed wheeled vehicles
NASA Astrophysics Data System (ADS)
Velenis, Efstathios
In this work we reproduce driving techniques to mimic expert race drivers and obtain the open-loop control signals that may be used by auto-pilot agents driving autonomous ground wheeled vehicles. Race drivers operate their vehicles at the limits of the acceleration envelope. An accurate characterization of the acceleration capacity of the vehicle is required. Understanding and reproduction of such complex maneuvers also require a physics-based mathematical description of the vehicle dynamics. While most of the modeling issues of ground-vehicles/automobiles are already well established in the literature, lack of understanding of the physics associated with friction generation results in ad-hoc approaches to tire friction modeling. In this work we revisit this aspect of the overall vehicle modeling and develop a tire friction model that provides physical interpretation of the tire forces. The new model is free of those singularities at low vehicle speed and wheel angular rate that are inherent in the widely used empirical static models. In addition, the dynamic nature of the tire model proposed herein allows the study of dynamic effects such as transients and hysteresis. The trajectory-planning problem for an autonomous ground wheeled vehicle is formulated in an optimal control framework aiming to minimize the time of travel and maximize the use of the available acceleration capacity. The first approach to solve the optimal control problem is using numerical techniques. Numerical optimization allows incorporation of a vehicle model of high fidelity and generates realistic solutions. Such an optimization scheme provides an ideal platform to study the limit operation of the vehicle, which would not be possible via straightforward simulation. In this work we emphasize the importance of online applicability of the proposed methodologies. This underlines the need for optimal solutions that require little computational cost and are able to incorporate real, unpredictable environments. A semi-analytic methodology is developed to generate the optimal velocity profile for minimum time travel along a prescribed path. The semi-analytic nature ensures minimal computational cost while a receding horizon implementation allows application of the methodology in uncertain environments. Extensions to increase fidelity of the vehicle model are finally provided.
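One standard semi-analytic construction of a minimum-time velocity profile, in the spirit of the approach described above though not necessarily identical to it, is the forward-backward pass under a friction-circle limit sketched below; the path curvature and limits are made up.

```python
import numpy as np

# Minimal sketch of a forward-backward pass that generates a minimum-time
# velocity profile along a fixed path under a friction-circle limit.
mu, grav, amax = 0.9, 9.81, 0.9 * 9.81        # friction and accel limits
s = np.linspace(0.0, 200.0, 401)              # arclength grid [m]
ds = s[1] - s[0]
kappa = 0.01 * np.sin(2 * np.pi * s / 100.0)  # path curvature [1/m]

# Pointwise speed limit from lateral grip: mu*g >= v^2 * |kappa|.
v = np.where(np.abs(kappa) > 1e-9, np.sqrt(mu * grav / np.abs(kappa)), 1e3)

v[0] = 0.0
for i in range(len(s) - 1):                   # forward pass: acceleration limit
    a_lat = v[i]**2 * np.abs(kappa[i])
    a_lon = np.sqrt(max(amax**2 - a_lat**2, 0.0))   # friction circle
    v[i + 1] = min(v[i + 1], np.sqrt(v[i]**2 + 2 * a_lon * ds))

for i in range(len(s) - 1, 0, -1):            # backward pass: braking limit
    a_lat = v[i]**2 * np.abs(kappa[i])
    a_lon = np.sqrt(max(amax**2 - a_lat**2, 0.0))
    v[i - 1] = min(v[i - 1], np.sqrt(v[i]**2 + 2 * a_lon * ds))

dt = ds / np.maximum(v[1:], 1e-6)
print(f"segment time: {dt.sum():.2f} s")
```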
Nano-metrology and terrain modelling - convergent practice in surface characterisation
Pike, R.J.
2000-01-01
The quantification of magnetic-tape and disk topography has a macro-scale counterpart in the Earth sciences - terrain modelling, the numerical representation of relief and pattern of the ground surface. The two practices arose independently and continue to function separately. This methodological paper introduces terrain modelling, discusses its similarities to and differences from industrial surface metrology, and raises the possibility of a unified discipline of quantitative surface characterisation. A brief discussion of an Earth-science problem, subdividing a heterogeneous terrain surface from a set of sample measurements, exemplifies a multivariate statistical procedure that may transfer to tribological applications of 3-D metrological height data.
Multi-criteria analysis of potential recovery facilities in a reverse supply chain
NASA Astrophysics Data System (ADS)
Nukala, Satish; Gupta, Surendra M.
2005-11-01
Analytic Hierarchy Process (AHP) has been employed by researchers for solving multi-criteria analysis problems. However, AHP is often criticized for its unbalanced scale of judgments and its failure to precisely handle the inherent uncertainty and vagueness of pair-wise comparisons. With the objective of addressing these drawbacks, in this paper we employ a fuzzy approach to selecting potential recovery facilities in the strategic planning of a reverse supply chain network, one that accounts for the decision maker's level of confidence in the fuzzy assessments and his or her attitude towards risk. A numerical example is considered to illustrate the methodology.
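The crisp-AHP core that the fuzzy variant extends is easy to state in code: priority weights come from the principal eigenvector of the pairwise comparison matrix, checked with Saaty's consistency ratio. The judgments below are hypothetical.

```python
import numpy as np

# Minimal sketch of crisp AHP: priority weights from the principal
# eigenvector of a pairwise comparison matrix, plus the consistency ratio.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])          # pairwise judgments on 3 facilities

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # priority weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)      # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # Saaty's random index
print("weights:", w.round(3), " consistency ratio:", round(ci / ri, 3))
```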
"Epidemiological criminology": coming full circle.
Akers, Timothy A; Lanier, Mark M
2009-03-01
Members of the public health and criminal justice disciplines often work with marginalized populations: people at high risk of drug use, health problems, incarceration, and other difficulties. As these fields increasingly overlap, distinctions between them are blurred, as numerous research reports and funding trends document. However, explicit theoretical and methodological linkages between the 2 disciplines remain rare. A new paradigm that links methods and statistical models of public health with those of their criminal justice counterparts is needed, as are increased linkages between epidemiological analogies, theories, and models and the corresponding tools of criminology. We outline disciplinary commonalities and distinctions, present policy examples that integrate similarities, and propose "epidemiological criminology" as a bridging framework.
NASA Astrophysics Data System (ADS)
Aoki, Sinya
2013-07-01
We review the potential method in lattice QCD, which has recently been proposed to extract nucleon-nucleon interactions via numerical simulations. We focus on the methodology of this approach, emphasizing the strategy of the potential method, the theoretical foundation behind it, and special numerical techniques. We compare the potential method with the standard finite volume method in lattice QCD to make the pros and cons of the approach clear. We also present several numerical results for nucleon-nucleon potentials.
Butler, Troy; Graham, L.; Estep, D.; ...
2015-02-03
The uncertainty in spatially heterogeneous Manning’s n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure, and the physics-based model considered here is the state-of-the-art ADCIRC model, although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented in this paper. Technical details that arise in practice by applying the framework to determine the Manning’s n parameter field in a shallow water equation model used for coastal hydrodynamics are presented, and an efficient computational algorithm and open source software package are developed. A new notion of “condition” for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. Finally, this notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning’s n parameter, and the effect on model predictions is analyzed.
Eigenvectors phase correction in inverse modal problem
NASA Astrophysics Data System (ADS)
Qiao, Guandong; Rahmatalla, Salam
2017-12-01
The solution of the inverse modal problem for the spatial parameters of mechanical and structural systems depends heavily on the quality of the modal parameters obtained from experiments. Because experimental and environmental noise always exists during modal testing, the resulting modal parameters are corrupted with different levels of noise. A novel methodology is presented in this work to mitigate the errors in the eigenvectors when solving the inverse modal problem for the spatial parameters. The phases of the eigenvector components were utilized as design variables within an optimization problem that minimizes the difference between the calculated and experimental transfer functions. The equation of motion in terms of the modal and spatial parameters was used as a constraint in the optimization problem. Constraints that preserve the positive definiteness or positive semi-definiteness and the inter-connectivity of the spatial matrices were implemented using semi-definite programming. Numerical examples utilizing eigenvectors corrupted with additive Gaussian white noise of 1%, 5%, and 10% were used to demonstrate the efficacy of the proposed method. The results showed that the proposed method is superior when compared with a known method in the literature.
NASA Astrophysics Data System (ADS)
Cvetkovic, V.; Molin, S.
2012-02-01
We present a methodology that combines numerical simulations of groundwater flow and advective transport in heterogeneous porous media with analytical retention models for computing the infection risk probability from pathogens in aquifers. The methodology is based on the analytical results presented in [1,2] for utilising colloid filtration theory in a time-domain random walk (TDRW) framework. It is shown that, in uniform flow, the numerical simulations of advection yield results comparable to those of the analytical TDRW model for generating advection segments. It is shown that spatial variability of the attachment rate may be significant; however, it appears to affect risk differently depending on whether the flow is uniform or radially converging. Although numerous issues remain open regarding pathogen transport in aquifers on the field scale, the methodology presented here may be useful for screening purposes, and may also serve as a basis for future studies that would include greater complexity.
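The risk chain described above can be sketched in a few lines: sample advective travel times (the TDRW ingredient), attenuate by colloid-filtration attachment, and map the surviving dose to infection risk with an exponential dose-response model. All parameters are illustrative.

```python
import numpy as np

# Minimal sketch of the risk chain: travel times -> attachment survival ->
# dose -> infection probability. All parameter values are made up.
rng = np.random.default_rng(2)
N = 100_000
tau = rng.lognormal(mean=3.0, sigma=0.6, size=N)   # travel times [days]
k_att = 0.05                                       # attachment rate [1/day]
survival = np.exp(-k_att * tau)                    # fraction reaching the well

c0, r = 1e3, 1e-4                                  # source conc., dose-response
dose = c0 * survival.mean()                        # expected delivered dose
risk = 1.0 - np.exp(-r * dose)                     # exponential model
print(f"mean survival: {survival.mean():.3e}, infection risk: {risk:.3e}")
```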
Conceptual design and multidisciplinary optimization of in-plane morphing wing structures
NASA Astrophysics Data System (ADS)
Inoyama, Daisaku; Sanders, Brian P.; Joo, James J.
2006-03-01
In this paper, the topology optimization methodology for the synthesis of a distributed actuation system, with specific application to morphing air vehicles, is discussed. The main emphasis is placed on the topology optimization problem formulations and the development of computational modeling concepts. For demonstration purposes, the in-plane morphing wing model is presented. The analysis model is developed to meet several important criteria: it must allow large rigid-body displacements, as well as variation in planform area, with minimum strain on structural members, while retaining acceptable numerical stability for finite element analysis. Preliminary work has indicated that the proposed modeling concept meets these criteria and may be suitable for the purpose. Topology optimization is performed on the ground structure based on this modeling concept, with design variables that control the system configuration. In other words, the states of each element in the model are design variables, and they are to be determined through the optimization process. In effect, the optimization process assigns morphing members as 'soft' elements, non-morphing load-bearing members as 'stiff' elements, and non-existent members as 'voids.' In addition, the optimization process determines the location and relative force intensities of distributed actuators, which are represented computationally as equal and opposite nodal forces with soft axial stiffness. Several different optimization problem formulations are investigated to understand their potential benefits in solution quality, as well as the meaningfulness of the formulations themselves. Sample in-plane morphing problems are solved to demonstrate the potential capability of the methodology introduced in this paper.
A Verification-Driven Approach to Control Analysis and Tuning
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2008-01-01
This paper proposes a methodology for the analysis and tuning of controllers using control verification metrics. These metrics, which are introduced in a companion paper, measure the size of the largest uncertainty set of a given class for which the closed-loop specifications are satisfied. This framework integrates deterministic and probabilistic uncertainty models into a setting that enables the deformation of sets in the parameter space, the control design space, and in the union of these two spaces. In regard to control analysis, we propose strategies that enable bounding regions of the design space where the specifications are satisfied by all the closed-loop systems associated with a prescribed uncertainty set. When this is infeasible, we bound regions where the probability of satisfying the requirements exceeds a prescribed value. In regard to control tuning, we propose strategies for the improvement of the robust characteristics of a baseline controller. Some of these strategies use multi-point approximations to the control verification metrics in order to alleviate the numerical burden of solving a min-max problem. Since this methodology targets non-linear systems having an arbitrary, possibly implicit, functional dependency on the uncertain parameters and for which high-fidelity simulations are available, it is applicable to realistic engineering problems.
Dynamically consistent hydrography and absolute velocity in the eastern North Atlantic Ocean
NASA Technical Reports Server (NTRS)
Wunsch, Carl
1994-01-01
The problem of mapping a dynamically consistent hydrographic field and associated absolute geostrophic flow in the eastern North Atlantic between 24 deg and 36 deg N is related directly to the solution of the so-called thermocline equations. A nonlinear optimization problem involving Needler's P equation is solved to find the hydrography and resulting flow that minimizes the vertical mixing above about 1500 m in the ocean and is simultaneously consistent with the observations. A sharp minimum (at least in some dimensions) is found, apparently corresponding to a solution nearly conserving potential vorticity and with vertical eddy coefficient less than about 10^-5 m^2/s. Estimates of 'residual' quantities such as eddy coefficients are extremely sensitive to slight modifications of the observed fields. Boundary conditions, vertical velocities, etc., are a product of the optimization and produce estimates differing quantitatively from prior ones relying directly upon observed hydrography. The results are generally insensitive to particular elements of the solution methodology, but many questions remain concerning the extent to which different synoptic sections can be asserted to represent the same ocean. The method can be regarded as a practical generalization of the beta spiral and geostrophic balance inverses for the estimation of absolute geostrophic flows. Numerous improvements to the methodology used in this preliminary attempt are possible.
NASA Astrophysics Data System (ADS)
Crevillén-García, D.; Power, H.
2017-08-01
In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
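A minimal sketch of the multilevel Monte Carlo estimator used above is given below; the level-dependent function `Q` is a synthetic stand-in for a solver whose discretization error decays with level, which is the property the telescoping sum exploits.

```python
import numpy as np

# Minimal sketch of a multilevel Monte Carlo estimator for the mean of a
# travel-time-like quantity. Q(l, xi) stands in for the solver output on
# grid level l; its discretization error decays with level by construction.
rng = np.random.default_rng(3)

def Q(level, xi):
    return np.exp(-xi**2) + 2.0**(-2 * level) * np.cos(xi)

L, N = 5, [4000, 2000, 1000, 500, 250, 125]   # levels and samples per level
est = 0.0
for l in range(L + 1):
    xi = rng.standard_normal(N[l])
    if l == 0:
        est += Q(0, xi).mean()                   # coarse-level mean
    else:
        est += (Q(l, xi) - Q(l - 1, xi)).mean()  # correction, same samples
print("MLMC estimate:", est)
```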
NASA Astrophysics Data System (ADS)
Potters, M. G.; Bombois, X.; Mansoori, M.; Hof, Paul M. J. Van den
2016-08-01
Estimation of physical parameters in dynamical systems driven by linear partial differential equations is an important problem. In this paper, we introduce the least costly experiment design framework for these systems. It enables parameter estimation with an accuracy that is specified by the experimenter prior to the identification experiment, while at the same time minimising the cost of the experiment. We show how to adapt the classical framework for these systems and take into account scaling and stability issues. We also introduce a progressive subdivision algorithm that further generalises the experiment design framework in the sense that it returns the lowest cost by finding the optimal input signal, and optimal sensor and actuator locations. Our methodology is then applied to a relevant problem in heat transfer studies: estimation of conductivity and diffusivity parameters in front-face experiments. We find good correspondence between numerical and theoretical results.
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Binienda, W. K.; Tan, H. Q.; Xu, M. H.
1992-01-01
Analytical derivations of stress intensity factors (SIFs) for a multicracked plate can be complex and tedious. Recent advances in the intelligent application of symbolic computation, however, can overcome these difficulties and provide the means to rigorously and efficiently analyze this class of problems. Here, the symbolic algorithm required to implement the methodology described in Part 1 is presented. The special problem-oriented symbolic functions used to derive the fundamental kernels are described, and the associated automatically generated FORTRAN subroutines are given. As a result, a symbolic/FORTRAN package named SYMFRAC, capable of providing accurate SIFs at each crack tip, was developed and validated. Simple illustrative examples using SYMFRAC show the potential of the present approach for predicting the macrocrack propagation path due to existing microcracks in the vicinity of a macrocrack tip, when the influence of the microcracks' location, orientation, size, and interaction is taken into account.
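The symbolic-to-FORTRAN workflow can be illustrated with a toy kernel: derive a closed-form expression symbolically and emit Fortran source for it. The example below uses the textbook mode-I SIF for a Griffith crack, purely to show the mechanics; it is not a SYMFRAC kernel.

```python
import sympy as sp

# Minimal sketch of the symbolic/FORTRAN workflow: derive an expression
# symbolically, then emit Fortran source. The kernel is the textbook mode-I
# SIF for a center crack under remote tension, not a SYMFRAC kernel.
sigma, a = sp.symbols('sigma a', positive=True)

K_I = sigma * sp.sqrt(sp.pi * a)              # SIF for a Griffith crack
dKda = sp.simplify(sp.diff(K_I, a))           # sensitivity w.r.t. crack length

print(sp.fcode(K_I, assign_to='KI', source_format='free', standard=95))
print(sp.fcode(dKda, assign_to='DKDA', source_format='free', standard=95))
```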
High-resolution coupled physics solvers for analysing fine-scale nuclear reactor design problems.
Mahadevan, Vijay S; Merzari, Elia; Tautges, Timothy; Jain, Rajeev; Obabko, Aleksandr; Smith, Michael; Fischer, Paul
2014-08-06
An integrated multi-physics simulation capability for the design and analysis of current and future nuclear reactor models is being investigated, to tightly couple neutron transport and thermal-hydraulics physics under the SHARP framework. Over several years, high-fidelity, validated mono-physics solvers with proven scalability on petascale architectures have been developed independently. Based on a unified component-based architecture, these existing codes can be coupled with a mesh-data backplane and a flexible coupling-strategy-based driver suite to produce a viable tool for analysts. The goal of the SHARP framework is to perform fully resolved coupled physics analysis of a reactor on heterogeneous geometry, in order to reduce the overall numerical uncertainty while leveraging available computational resources. The coupling methodology and software interfaces of the framework are presented, along with verification studies on two representative fast sodium-cooled reactor demonstration problems to prove the usability of the SHARP framework.
NASA Astrophysics Data System (ADS)
Hsiao, Feng-Hsiag
2017-10-01
In order to obtain double encryption via elliptic curve cryptography (ECC) and chaotic synchronisation, this study presents a design methodology for neural-network (NN)-based secure communications in multiple time-delay chaotic systems. ECC is an asymmetric encryption scheme whose strength is based on the difficulty of solving the elliptic curve discrete logarithm problem, a much harder problem than factoring integers. Because it is much harder, fewer bits suffice to provide the same level of security. To enhance the strength of the cryptosystem, we conduct double encryption that combines chaotic synchronisation with ECC. Using an improved genetic algorithm, a fuzzy controller is synthesised to realise exponential synchronisation and achieve optimal H∞ performance by minimising the disturbance attenuation level. Finally, a numerical example with simulations is given to demonstrate the effectiveness of the proposed approach.
Dynamics and Stability of Rolling Viscoelastic Tires
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potter, Trevor
2013-04-30
Current steady state rolling tire calculations often do not include treads because treads destroy the rotational symmetry of the tire. We describe two methodologies to compute time-periodic solutions of a two-dimensional viscoelastic tire with treads: solving a minimization problem and solving a system of equations. We also expand on work by Oden and Lin on free-spinning rolling elastic tires, in which they discovered a hierarchy of N-peak steady state standing wave solutions. In addition to discovering a two-dimensional hierarchy of standing wave solutions that includes their N-peak hierarchy, we consider the effects of viscoelasticity on the standing wave solutions. Finally, a commonplace model of viscoelasticity used in our numerical experiments led to non-physical elastic energy growth for large tire speeds. We show that a viscoelastic model of Govindjee and Reese remedies the problem.
Evaluation of the HARDMAN comparability methodology for manpower, personnel and training
NASA Technical Reports Server (NTRS)
Zimmerman, W.; Butler, R.; Gray, V.; Rosenberg, L.
1984-01-01
The methodology evaluation and recommendation are part of an effort to improve the Hardware versus Manpower (HARDMAN) methodology for projecting manpower, personnel, and training (MPT) to support new acquisitions. Several different validity tests are employed to evaluate the methodology. The methodology conforms fairly well with both the MPT user needs and other accepted manpower modeling techniques. Audits of three completed HARDMAN applications reveal only a small number of potential problem areas compared with the total number of issues investigated. The reliability study results conform well with the problem areas uncovered through the audits. The results of the accuracy studies suggest that the manpower life-cycle cost component is only marginally sensitive to changes in other related cost variables. Even with some minor problems, the methodology seems sound and has good near-term utility to the Army. Recommendations are provided to firm up the problem areas revealed through the evaluation.
NASA Astrophysics Data System (ADS)
Voronina, Tatyana; Romanenko, Alexey; Loskutov, Artem
2017-04-01
The key point in the state of the art in tsunami forecasting is constructing a reliable tsunami source. In this study, we present an application of an original numerical inversion technique to modeling the tsunami source of the 16 September 2015 Chile tsunami. The problem of recovering a tsunami source from remote measurements of the incoming wave at deep-water tsunameters is considered as an inverse problem of mathematical physics in the class of ill-posed problems. The approach is based on least squares and truncated singular value decomposition techniques. The tsunami wave propagation is considered within the scope of linear shallow-water theory. As in the inverse seismic problem, the numerical solutions obtained by mathematical methods become unstable due to the presence of noise in real data. The method of r-solutions makes it possible to avoid instability in the solution of the ill-posed problem under study. This method is attractive from the computational point of view, since the main effort is required only once, for calculating the matrix whose columns consist of computed waveforms for each harmonic as a source (the unknown tsunami source is represented as a truncated series of spatial harmonics over the source area). Furthermore, by analyzing the singular spectrum of the matrix obtained in the course of the numerical calculations, one can estimate the future inversion by a given observational system, which allows a more effective disposition of the tsunameters to be proposed with the help of precomputations. In other words, the results obtained allow finding a way to improve the inversion by selecting the most informative set of available recording stations. The case study of the 6 February 2013 Solomon Islands tsunami highlights the critical role of the arrangement of deep-water tsunameters in the inversion results. Implementation of the proposed methodology for the 16 September 2015 Chile tsunami has successfully produced a tsunami source model. The function recovered by the proposed method can find practical application both as an initial condition for various optimization approaches and for computer calculation of tsunami wave propagation.
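The truncated-SVD least-squares inversion underlying the r-solution approach reduces to a few lines of linear algebra; in the sketch below, the matrix columns stand in for computed waveforms per source harmonic and the data for tsunameter records, both synthetic.

```python
import numpy as np

# Minimal sketch of truncated-SVD least-squares inversion: keep only the r
# largest singular values to stabilize an ill-posed fit. G and d are synthetic.
rng = np.random.default_rng(4)
m, n, r = 400, 30, 12                      # data length, harmonics, truncation
G = rng.standard_normal((m, n)) @ np.diag(1.0 / np.arange(1, n + 1))
c_true = rng.standard_normal(n)
d = G @ c_true + 0.05 * rng.standard_normal(m)   # noisy records

U, s, Vt = np.linalg.svd(G, full_matrices=False)
c_r = Vt[:r].T @ ((U[:, :r].T @ d) / s[:r])      # the r-solution

err = lambda c: np.linalg.norm(c - c_true) / np.linalg.norm(c_true)
print("relative error, full rank:", err(np.linalg.lstsq(G, d, rcond=None)[0]))
print("relative error, rank", r, ":", err(c_r))
```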
High-fidelity large eddy simulation for supersonic jet noise prediction
NASA Astrophysics Data System (ADS)
Aikens, Kurt M.
The problem of intense sound radiation from supersonic jets is a concern for both civil and military applications. As a result, many experimental and computational efforts are focused on evaluating possible noise suppression techniques. Large-eddy simulation (LES) is utilized in many computational studies to simulate the turbulent jet flowfield. Integral methods such as the Ffowcs Williams-Hawkings (FWH) method are then used for propagation of the sound waves to the farfield. Improving the accuracy of this two-step methodology and evaluating beveled converging-diverging nozzles for noise suppression are the main tasks of this work. First, a series of numerical experiments is undertaken to ensure adequate numerical accuracy of the FWH methodology. This includes an analysis of different treatments for the downstream integration surface: with or without including an end-cap, averaging over multiple end-caps, and including an approximate surface integral correction term. Second, shock-capturing methods based on characteristic filtering and adaptive spatial filtering are used to extend a highly parallelizable multiblock subsonic LES code to enable simulations of supersonic jets. The code is based on high-order numerical methods for accurate prediction of the acoustic sources and propagation of the sound waves. Furthermore, this new code is more efficient than the legacy version, allows cylindrical multiblock topologies, and is capable of simulating nozzles with resolved turbulent boundary layers when coupled with an approximate turbulent inflow boundary condition. Even though such wall-resolved simulations are more physically accurate, their expense is often prohibitive. To make simulations more economical, a wall model is developed and implemented. The wall modeling methodology is validated for turbulent quasi-incompressible and compressible zero-pressure-gradient flat plate boundary layers, and for subsonic and supersonic jets. The supersonic code additions and the wall model treatment are then utilized to simulate military-style nozzles with and without beveling of the nozzle exit plane. Experiments on beveled converging-diverging nozzles have found reduced noise levels at some observer locations. Predicting the noise for these geometries provides a good initial test of the overall methodology for a more complex nozzle. The jet flowfield and acoustic data are analyzed and compared with similar experiments, and excellent agreement is found. Potential areas of improvement are discussed for future research.
Optimal maintenance of a multi-unit system under dependencies
NASA Astrophysics Data System (ADS)
Sung, Ho-Joon
The availability, or reliability, of an engineering component greatly influences the operational cost and safety characteristics of a modern system over its life-cycle. Until recently, reliance on past empirical data has been the industry-standard practice for developing maintenance policies that provide the minimum level of system reliability. Because such empirically derived policies are vulnerable to unforeseen or fast-changing external factors, recent advances in the study of maintenance, known as the optimal maintenance problem, have gained considerable interest as a legitimate area of research. An extensive body of applicable work is available, ranging from studies concerned with identifying maintenance policies aimed at providing required system availability at minimum possible cost, to topics on imperfect maintenance of multi-unit systems under dependencies. Nonetheless, these existing mathematical approaches to solving for optimal maintenance policies must be treated with caution when considered for broader applications, as they are accompanied by specialized treatments to ease the mathematical derivation of unknown functions in both the objective function and the constraints of a given optimal maintenance problem. These unknown functions are defined as reliability measures in this thesis, and these measures (e.g., expected number of failures, system renewal cycle, expected system up time, etc.) often do not possess closed-form formulas. It is thus quite common to impose simplifying assumptions on the input probability distributions of components' lifetimes or repair policies. Simplifying the complex structure of a multi-unit system to a k-out-of-n system by neglecting any sources of dependency is another commonly practiced technique intended to increase the mathematical tractability of a particular model. This dissertation proposes an alternative methodology for solving optimal maintenance problems that aims to achieve the same end-goals as Reliability Centered Maintenance (RCM). RCM was first introduced in the aircraft industry in an attempt to bridge the gap between the empirically driven and theory-driven approaches to establishing optimal maintenance policies. Under RCM, qualitative processes that enable the prioritization of functions based on criticality and influence are combined with mathematical modeling to obtain the optimal maintenance policies. Where this thesis deviates from RCM is in its proposal to directly apply quantitative processes to model the reliability measures in the optimal maintenance problem. First, Monte Carlo (MC) simulation, in conjunction with a pre-determined Design of Experiments (DOE) table, can be used as a numerical means of obtaining the corresponding discrete simulated outcomes of the reliability measures based on combinations of the decision variables (e.g., periodic preventive maintenance interval, trigger age for opportunistic maintenance, etc.). These discrete simulation results can then be regressed as Response Surface Equations (RSEs) with respect to the decision variables. Such an approach to representing the reliability measures with continuous surrogate functions (i.e., the RSEs) not only enables the application of numerical optimization techniques to solve for optimal maintenance policies, but also obviates the need to make mathematical assumptions or impose over-simplifications on the structure of a multi-unit system for the sake of mathematical tractability.
The applicability of the proposed methodology to a real-world optimal maintenance problem is showcased through its application to Time Limited Dispatch (TLD) of a Full Authority Digital Engine Control (FADEC) system. In broader terms, this proof-of-concept exercise can be described as a constrained optimization problem whose objective is to identify the optimal system inspection interval that guarantees a certain level of availability for a multi-unit system. A variety of reputable numerical techniques were used to model the problem as accurately as possible, including algorithms for the MC simulation, an imperfect maintenance model from quasi-renewal processes, repair time simulation, and state transition rules. Variance Reduction Techniques (VRTs) were also used in an effort to enhance MC simulation efficiency. After accurate MC simulation results were obtained, the RSEs were generated, based on goodness-of-fit measures, to yield as parsimonious a model as possible for constructing the optimization problem. Under the assumption of a constant failure rate for the lifetime distributions, the inspection interval from the proposed methodology was found to be consistent with the one from the common industry approach that leverages a Continuous Time Markov Chain (CTMC). While the latter does not consider maintenance cost settings, the proposed methodology enables an operator to consider different types of maintenance cost settings, e.g., inspection cost, system corrective maintenance cost, etc., resulting in more flexible maintenance policies. When the proposed methodology was applied to the same TLD of FADEC example, but under the more general assumption of a strictly Increasing Failure Rate (IFR) for the lifetime distribution, it was shown to successfully capture component wear-out, as well as the economic dependencies among the system components.
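The DOE-to-RSE-to-optimizer chain described above can be sketched compactly: a stand-in Monte Carlo availability function is sampled at DOE points, regressed as a quadratic response surface, and used as a constraint in a cost minimization. All model forms and numbers are made up.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of the DOE -> MC -> RSE -> optimizer chain. The "MC
# simulation" below is a hypothetical availability model of the inspection
# interval tau; a quadratic RSE is regressed on DOE points and then used as
# an availability constraint in a cost minimization.
rng = np.random.default_rng(5)

def mc_availability(tau, n=2000):
    failures = rng.exponential(scale=500.0, size=n)
    return np.mean(failures > tau) * 0.05 + 0.95 - 1e-4 * tau

taus = np.linspace(50.0, 400.0, 8)              # DOE over the interval
A = np.array([mc_availability(t) for t in taus])
rse = np.poly1d(np.polyfit(taus, A, deg=2))     # quadratic response surface

cost = lambda t: 1000.0 / t[0] + 0.5 * t[0]     # inspection vs wear-out cost
res = minimize(cost, x0=[150.0], bounds=[(50.0, 400.0)],
               constraints=[{"type": "ineq", "fun": lambda t: rse(t[0]) - 0.97}])
print("optimal inspection interval:", round(res.x[0], 1),
      " availability:", round(float(rse(res.x[0])), 4))
```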
Evaluation of deconvolution modelling applied to numerical combustion
NASA Astrophysics Data System (ADS)
Mehl, Cédric; Idier, Jérôme; Fiorina, Benoît
2018-01-01
A possible modelling approach in the large eddy simulation (LES) of reactive flows is to deconvolve resolved scalars. Indeed, by inverting the LES filter, scalars such as mass fractions are reconstructed. This information can be used to close budget terms of filtered species balance equations, such as the filtered reaction rate. Being ill-posed in the mathematical sense, the problem is very sensitive to any numerical perturbation. The objective of the present study is to assess the ability of this kind of methodology to capture the chemical structure of premixed flames. For that purpose, three deconvolution methods are tested on a one-dimensional filtered laminar premixed flame configuration: the approximate deconvolution method based on Van Cittert iterative deconvolution, a Taylor decomposition-based method, and the regularised deconvolution method based on the minimisation of a quadratic criterion. These methods are then extended to the reconstruction of subgrid scale profiles. Two methodologies are proposed: the first relies on subgrid scale interpolation of deconvolved profiles, and the second uses parametric functions to describe small scales. The tests conducted analyse the ability of the methods to capture the filtered flame chemical structure and front propagation speed. Results show that the deconvolution model should include information about small scales in order to regularise the filter inversion. A priori and a posteriori tests showed that the filtered flame propagation speed and structure cannot be captured if the filter size is too large.
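Of the methods tested, Van Cittert iteration is the simplest to demonstrate; the sketch below filters a sharp one-dimensional front with a Gaussian kernel and partially recovers it with the fixed-point iteration u <- u + beta (d - G*u). Filter width, beta, and iteration count are illustrative.

```python
import numpy as np

# Minimal sketch of Van Cittert iterative deconvolution: a sharp 1-D
# "flame" profile is filtered with a Gaussian kernel G, then reconstructed
# by the fixed-point iteration u <- u + beta * (d - G*u), with beta = 1.
x = np.linspace(-1.0, 1.0, 400)
u_true = 0.5 * (1.0 + np.tanh(40.0 * x))            # sharp flame-like front

dx = x[1] - x[0]
xg = np.arange(-50, 51) * dx
G = np.exp(-xg**2 / (2 * 0.03**2)); G /= G.sum()    # normalized Gaussian filter
filt = lambda f: np.convolve(f, G, mode='same')

d = filt(u_true)                                    # the "resolved" field
u = d.copy()
for _ in range(20):                                 # Van Cittert iterations
    u = u + (d - filt(u))

sl = slice(60, -60)                                 # compare away from edges
print("filtered error:", np.abs(d - u_true)[sl].max(),
      "deconvolved error:", np.abs(u - u_true)[sl].max())
```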
Lattice Boltzmann methods for global linear instability analysis
NASA Astrophysics Data System (ADS)
Pérez, José Miguel; Aguilar, Alfonso; Theofilis, Vassilis
2017-12-01
Modal global linear instability analysis is performed using, for the first time, the lattice Boltzmann method (LBM) to analyze incompressible flows with two and three inhomogeneous spatial directions. Four linearization models have been implemented in order to recover the linearized Navier-Stokes equations in the incompressible limit. Two of those models employ the single relaxation time (SRT) and have been proposed previously in the literature as linearizations of the collision operator of the lattice Boltzmann equation. Two additional models are derived herein for the first time by linearizing the local equilibrium probability distribution function. Instability analysis results are obtained in three benchmark problems, two in closed geometries and one in open flow, namely the square and cubic lid-driven cavity flows and flow in the wake of the circular cylinder. Comparisons with results delivered by classic spectral element methods verify the accuracy of the proposed new methodologies and point out potential limitations particular to the LBM approach. The known issue of numerical instabilities appearing when the SRT model is used in direct numerical simulations employing the LBM is shown to be reflected in a spurious global eigenmode in the instability analysis. Although this mode is absent in the multiple relaxation times model, other spurious instabilities can also arise and are documented herein. Areas of potential improvement that would make the proposed methodology competitive with established approaches for global instability analysis are discussed.
NASA Astrophysics Data System (ADS)
Claeys, M.; Sinou, J.-J.; Lambelin, J.-P.; Todeschini, R.
2016-08-01
The nonlinear vibration response of an assembly with friction joints - named "Harmony" - is studied both experimentally and numerically. The experimental results exhibit a softening effect and an increase of dissipation with excitation level. Modal interactions due to friction are also evidenced. The proposed numerical methodology groups together well-known structural dynamics methods, including finite elements, substructuring, harmonic balance, and continuation methods. On the one hand, the application of this methodology proves its capacity to treat a complex system in which several friction movements occur at the same time. On the other hand, the main contribution of this paper is the experimental and numerical evidence of modal interactions due to friction. The simulation methodology succeeds in reproducing complex forms of dynamic behavior such as these modal interactions.
Methodological pitfalls in the analysis of contraceptive failure.
Trussell, J
1991-02-01
Although the literature on contraceptive failure is vast and is expanding rapidly, our understanding of the relative efficacy of methods is quite limited because of defects in the research design and in the analytical tools used by investigators. Errors in the literature range from simple arithmetical mistakes to outright fraud. In many studies the proportion of the original sample lost to follow-up is so large that the published results have little meaning. Investigators do not routinely use life table techniques to control for duration of exposure; many employ the Pearl index, which suffers from the same problem as does the crude death rate as a measure of mortality. Investigators routinely calculate 'method' failure rates by eliminating 'user' failures from the numerator (pregnancies) but fail to eliminate 'imperfect' use from the denominator (exposure); as a consequence, these 'method' rates are biased downward. This paper explores these and other common biases that snare investigators and establishes methodological guidelines for future research.
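The duration bias of the Pearl index is easy to demonstrate numerically: with a hazard that declines over time (because risk-prone users fail early), the same cohort yields different Pearl indices for different follow-up lengths, whereas the life-table rate is duration-specific by construction. The figures below are hypothetical.

```python
# Minimal illustration of duration bias in the Pearl index. A declining
# monthly failure probability stands in for user heterogeneity; all figures
# are hypothetical.
def cohort(months):
    surv, exposure, pregs = 1.0, 0.0, 0.0
    for t in range(months):
        p_t = 0.05 / (1 + 0.5 * t)       # declining monthly failure probability
        pregs += surv * p_t              # expected pregnancies this month
        exposure += surv                 # woman-months contributed
        surv *= 1.0 - p_t
    return pregs, exposure, 1.0 - surv

for months in (6, 12, 24):
    pregs, exposure, cum = cohort(months)
    pearl = 1200.0 * pregs / exposure    # per 100 woman-years
    print(f"{months:2d} months: Pearl = {pearl:5.1f}, "
          f"life-table cumulative failure = {cum:.3f}")
```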
Imprecise (fuzzy) information in geostatistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bardossy, A.; Bogardi, I.; Kelly, W.E.
1988-05-01
A methodology based on fuzzy set theory for the utilization of imprecise data in geostatistics is presented. A common problem preventing broader use of geostatistics has been an insufficient amount of accurate measurement data. In certain cases, additional but uncertain (soft) information is available and can be encoded as subjective probabilities, and then the soft kriging method can be applied (Journel, 1986). In other cases, a fuzzy encoding of soft information may be more realistic and simplify the numerical calculations. Imprecise (fuzzy) spatial information on the possible variogram is integrated into a single variogram, which is used in a fuzzy kriging procedure. The overall uncertainty of prediction is represented by the estimation variance and the calculated membership function for each kriged point. The methodology is applied to the permeability prediction of a soil liner for hazardous waste containment. The available number of hard measurement data (20) was not enough for a classical geostatistical analysis. An additional 20 soft data made it possible to prepare kriged contour maps using the fuzzy geostatistical procedure.
Computation of three-dimensional nozzle-exhaust flow fields with the GIM code
NASA Technical Reports Server (NTRS)
Spradley, L. W.; Anderson, P. G.
1978-01-01
A methodology is introduced for constructing numerical analogs of the partial differential equations of continuum mechanics. A general formulation is provided which permits classical finite element and many of the finite difference methods to be derived directly. The approach, termed the General Interpolants Method (GIM), combines the best features of finite element and finite difference methods. A quasi-variational procedure is used to formulate the element equations, to introduce boundary conditions into the method, and to provide a natural assembly sequence. A derivation is given in terms of general interpolation functions from this procedure. Example computations for transonic and supersonic flows in two and three dimensions are given to illustrate the utility of GIM. A three-dimensional nozzle-exhaust flow field is solved, including interaction with the freestream and a coupled treatment of the shear layer. Potential applications of the GIM code to a variety of computational fluid dynamics problems are then discussed in terms of existing capability or by extension of the methodology.
Wave energy focusing to subsurface poroelastic formations to promote oil mobilization
NASA Astrophysics Data System (ADS)
Karve, Pranav M.; Kallivokas, Loukas F.
2015-07-01
We discuss an inverse source formulation aimed at focusing wave energy produced by ground surface sources on target subsurface poroelastic formations. The intent of the focusing is to facilitate or enhance the mobility of oil entrapped within the target formation. The underlying forward wave propagation problem is cast in two spatial dimensions for a heterogeneous poroelastic target embedded within a heterogeneous elastic semi-infinite host. The semi-infiniteness of the elastic host is simulated by augmenting the (finite) computational domain with a buffer of perfectly matched layers. The inverse source algorithm is based on a systematic framework of partial-differential-equation-constrained optimization. It is demonstrated, via numerical experiments, that the algorithm is capable of converging to the spatial and temporal characteristics of surface loads that maximize energy delivery to the target formation. Consequently, the methodology is well-suited for designing field implementations that could meet a desired oil mobility threshold. Even though the methodology and the results presented herein are in two dimensions, extensions to three dimensions are straightforward.
Comparison of a 3-D CFD-DSMC Solution Methodology With a Wind Tunnel Experiment
NASA Technical Reports Server (NTRS)
Glass, Christopher E.; Horvath, Thomas J.
2002-01-01
A solution method for problems that contain both continuum and rarefied flow regions is presented. The methodology is applied to flow about the 3-D Mars Sample Return Orbiter (MSRO) that has a highly compressed forebody flow, a shear layer where the flow separates from a forebody lip, and a low density wake. Because blunt body flow fields contain such disparate regions, employing a single numerical technique to solve the entire 3-D flow field is often impractical, or the technique does not apply. Direct simulation Monte Carlo (DSMC) could be employed to solve the entire flow field; however, the technique requires inordinate computational resources for continuum and near-continuum regions, and is best suited for the wake region. Computational fluid dynamics (CFD) will solve the high-density forebody flow, but continuum assumptions do not apply in the rarefied wake region. The CFD-DSMC approach presented herein may be a suitable way to obtain a higher fidelity solution.
Influences of system uncertainties on the numerical transfer path analysis of engine systems
NASA Astrophysics Data System (ADS)
Acri, A.; Nijman, E.; Acri, A.; Offner, G.
2017-10-01
Practical mechanical systems operate with some degree of uncertainty. In numerical models, uncertainties can result from poorly known or variable parameters, from geometrical approximation, from discretization or numerical errors, from uncertain inputs, or from rapidly changing forcing that is best described in a stochastic framework. Recently, random matrix theory was introduced to take parameter uncertainties into account in numerical modeling problems. In particular, in this paper, Wishart random matrix theory is applied to a multi-body dynamic system to generate random variations of the properties of system components. Multi-body dynamics is a powerful numerical tool largely employed in the design of new engines. In this paper the influence of model parameter variability on the results obtained from multi-body simulation of engine dynamics is investigated. The aim is to define a methodology to properly assess and rank system sources when dealing with uncertainties. Particular attention is paid to the influence of these uncertainties on the analysis and assessment of the different engine vibration sources. The effects of different levels of uncertainty are illustrated by means of examples using a representative numerical powertrain model. A numerical transfer path analysis, based on system dynamic substructuring, is used to derive and assess the internal engine vibration sources. The results obtained from this analysis are used to derive correlations between parameter uncertainties and the statistical distribution of results. The derived statistical information can be used to advance the knowledge of the multi-body analysis and the assessment of system sources when uncertainties in model parameters are considered.
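The Wishart construction mentioned above can be sketched on a toy structure: stiffness matrices are sampled so that their mean equals the nominal matrix, and the scatter is propagated to the natural frequencies. The 3-DOF chain and dispersion parameter are illustrative.

```python
import numpy as np
from scipy.stats import wishart

# Minimal sketch of Wishart random matrix sampling for parameter
# uncertainty: E[K] = K0 by choosing scale = K0/nu, where the degrees of
# freedom nu control the dispersion. Nominal 3-DOF chain is illustrative.
K0 = np.array([[ 2., -1.,  0.],
               [-1.,  2., -1.],
               [ 0., -1.,  1.]]) * 1e4          # nominal stiffness
nu = 50                                         # dispersion parameter

rng = np.random.default_rng(6)
freqs = []
for _ in range(500):
    K = wishart.rvs(df=nu, scale=K0 / nu, random_state=rng)
    lam = np.linalg.eigvalsh(K)                 # unit masses, so eigvals of K
    freqs.append(np.sqrt(lam) / (2 * np.pi))
freqs = np.array(freqs)
print("mean frequencies [Hz]:", freqs.mean(axis=0).round(2))
print("std  frequencies [Hz]:", freqs.std(axis=0).round(2))
```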
Enhanced Verification Test Suite for Physics Simulation Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamm, J R; Brock, J S; Brandon, S T
2008-10-10
This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with the mathematical correctness of the numerical algorithms in a code, while validation deals with the physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) Hydrodynamics; (b) Transport processes; and (c) Dynamic strength-of-materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code is evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary, but not sufficient, step that builds confidence in physics and engineering simulation codes. More complicated test cases, including physics models of greater sophistication or other physics regimes (e.g., energetic material response, magneto-hydrodynamics), would represent a scientifically desirable complement to the fundamental test cases discussed in this report. The authors believe that this document can be used to enhance the verification analyses undertaken at the DOE WP Laboratories and, thus, to improve the quality, credibility, and usefulness of the simulation codes that are analyzed with these problems.
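The core verification calculation behind such benchmarks is the observed order of accuracy, computed from error norms on successively refined grids and compared with the scheme's formal order; the error values below are hypothetical stand-ins for code-minus-exact norms.

```python
import math

# Minimal sketch of the observed order of accuracy from errors on three
# grids with refinement ratio r. Error values are hypothetical.
e = [4.0e-2, 1.1e-2, 2.9e-3]      # error norms on coarse/medium/fine grids
r = 2.0                            # grid refinement ratio

p_cm = math.log(e[0] / e[1]) / math.log(r)   # coarse/medium estimate
p_mf = math.log(e[1] / e[2]) / math.log(r)   # medium/fine estimate
print(f"observed order: {p_cm:.2f} (coarse/medium), {p_mf:.2f} (medium/fine)")
```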
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reckinger, Scott James; Livescu, Daniel; Vasilyev, Oleg V.
A comprehensive numerical methodology has been developed that handles the challenges introduced by considering the compressible nature of Rayleigh-Taylor instability (RTI) systems, which include sharp interfacial density gradients on strongly stratified background states, acoustic wave generation and removal at computational boundaries, and stratification-dependent vorticity production. The computational framework is used to simulate two-dimensional single-mode RTI to extreme late-times for a wide range of flow compressibility and variable density effects. The results show that flow compressibility acts to reduce the growth of RTI for low Atwood numbers, as predicted from linear stability analysis.
NASA Astrophysics Data System (ADS)
Janardhanan, S.; Datta, B.
2011-12-01
Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial-intelligence-based models are most often used for this purpose, trained with predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain, which limits the applicability of such approximation surrogates. In our study we develop a surrogate-model-based coupled simulation-optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple-realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem considering two conflicting objectives. Hydraulic conductivity and aquifer recharge are treated as uncertain values. The three-dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters, generating input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. Two conflicting objectives are considered: maximizing total pumping from beneficial wells and minimizing total pumping from barrier wells used for hydraulic control of saltwater intrusion. The salinity levels resulting at strategic locations due to this pumping are predicted using the ensemble surrogates and are constrained to be within pre-specified levels. Different realizations of the concentration values are obtained from the ensemble predictions corresponding to each candidate pumping solution. Reliability is incorporated as the percentage of surrogate models that satisfy the imposed constraints. The methodology was applied to a realistic coastal aquifer system in the Burdekin delta area in Australia. It was found that all optimal solutions corresponding to a reliability level of 0.99 satisfy all the constraints, and that constraint violations increase as the reliability level is reduced. Thus ensemble-surrogate-based simulation-optimization was found to be useful in deriving multi-objective optimal pumping strategies for coastal aquifers under parameter uncertainty.
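A minimal sketch of the reliability measure described above, the fraction of ensemble surrogates whose predicted salinity satisfies the constraint; the predictions below are random stand-ins rather than outputs of trained genetic-programming surrogates:

```python
# Minimal sketch of the ensemble-reliability idea: a candidate pumping
# solution is reliable at level r if at least a fraction r of the
# surrogate models predict constraint-satisfying salinity. All numbers
# here are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_surrogates = 50
salinity_limit = 250.0  # pre-specified salinity cap (illustrative units)

# One salinity prediction per ensemble member for a candidate solution.
predictions = rng.normal(loc=240.0, scale=10.0, size=n_surrogates)

reliability = np.mean(predictions <= salinity_limit)
print(f"reliability = {reliability:.2f}")

feasible_at_99 = reliability >= 0.99  # as in the study's 0.99 level
```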
Applications of decision analysis and related techniques to industrial engineering problems at KSC
NASA Technical Reports Server (NTRS)
Evans, Gerald W.
1995-01-01
This report provides: (1) a discussion of the origination of decision analysis problems (well-structured problems) from ill-structured problems; (2) a review of the various methodologies and software packages for decision analysis and related problem areas; (3) a discussion of how the characteristics of a decision analysis problem affect the choice of modeling methodologies, thus providing a guide as to when to choose a particular methodology; and (4) examples of applications of decision analysis to particular problems encountered by the IE Group at KSC. With respect to the specific applications at KSC, particular emphasis is placed on the use of the Demos software package (Lumina Decision Systems, 1993).
Parallelized modelling and solution scheme for hierarchically scaled simulations
NASA Technical Reports Server (NTRS)
Padovan, Joe
1995-01-01
This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV) or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to the large reductions in memory, communications, and computational effort found in a parallel computing environment, substantial reductions are generated in the sequential mode of application. Such improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPT, several techniques for expanding the problem size that the current generation of computers is capable of solving are presented and discussed. Among these techniques are several interpolative reduction methods. It was found that, by combining several of these techniques, a relatively small interpolative reduction resulted in substantial performance gains. Several other unique features/benefits are discussed in this paper. Along with Part 1's theoretical development, Part 2 presents a numerical approach to the HPT along with four prototype CFD applications. These demonstrate the potential of the HPT strategy.
NASA Astrophysics Data System (ADS)
Ge, Liang; Sotiropoulos, Fotis
2007-08-01
A novel numerical method is developed that integrates boundary-conforming grids with a sharp-interface, immersed-boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [A. Gilmanov, F. Sotiropoulos, A hybrid cartesian/immersed boundary method for simulating flows with 3d, geometrically complex, moving bodies, Journal of Computational Physics 207 (2005) 457-492.]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus.
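A minimal sketch of the Jacobian-free Newton-Krylov idea used above for the momentum equations, applied here to a toy nonlinear system with SciPy's newton_krylov rather than the authors' flow solver:

```python
# Minimal sketch of a Jacobian-free Newton-Krylov solve, the class of
# solver the abstract uses for the discretized momentum equations. The
# tiny system below is a toy stand-in for the actual discretized
# Navier-Stokes residual.
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    # Toy residual F(u) = 0; Newton-Krylov never forms the Jacobian,
    # only Jacobian-vector products approximated by finite differences.
    return np.array([u[0] + 0.5 * u[1]**2 - 1.0,
                     u[1] - np.cos(u[0])])

u0 = np.zeros(2)
sol = newton_krylov(residual, u0, f_tol=1e-10)
print(sol, residual(sol))
```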
Berteletti, Ilaria; Prado, Jérôme; Booth, James R
2014-08-01
Greater skill in solving single-digit multiplication problems requires a progressive shift from a reliance on numerical to verbal mechanisms over development. Children with mathematical learning disability (MD), however, are thought to suffer from a specific impairment in numerical mechanisms. Here we tested the hypothesis that this impairment might prevent MD children from transitioning toward verbal mechanisms when solving single-digit multiplication problems. Brain activations during multiplication problems were compared in MD and typically developing (TD) children (3rd to 7th graders) in numerical and verbal regions which were individuated by independent localizer tasks. We used small (e.g., 2 × 3) and large (e.g., 7 × 9) problems as these problems likely differ in their reliance on verbal versus numerical mechanisms. Results indicate that MD children have reduced activations in both the verbal (i.e., left inferior frontal gyrus and left middle temporal to superior temporal gyri) and the numerical (i.e., right superior parietal lobule including intra-parietal sulcus) regions suggesting that both mechanisms are impaired. Moreover, the only reliable activation observed for MD children was in the numerical region when solving small problems. This suggests that MD children could effectively engage numerical mechanisms only for the easier problems. Conversely, TD children showed a modulation of activation with problem size in the verbal regions. This suggests that TD children were effectively engaging verbal mechanisms for the easier problems. Moreover, TD children with better language skills were more effective at engaging verbal mechanisms. In conclusion, results suggest that the numerical- and language-related processes involved in solving multiplication problems are impaired in MD children. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Chiu, Y.; Nishikawa, T.
2013-12-01
With the increasing complexity of parameter-structure identification (PSI) in groundwater modeling, there is a need for robust, fast, and accurate optimizers in the groundwater-hydrology field. For this work, PSI is defined as identifying parameter dimension, structure, and value. In this study, Voronoi tessellation and differential evolution (DE) are used to solve the optimal PSI problem. Voronoi tessellation is used for automatic parameterization, whereby stepwise regression and the error covariance matrix are used to determine the optimal parameter dimension. DE is a novel global optimizer that can be used to solve nonlinear, nondifferentiable, and multimodal optimization problems. It can be viewed as an improved version of genetic algorithms and employs a simple cycle of mutation, crossover, and selection operations. DE is used to estimate the optimal parameter structure and its associated values. A synthetic numerical experiment with a continuous hydraulic conductivity distribution was conducted to demonstrate the proposed methodology. The results indicate that DE can identify the global optimum effectively and efficiently. A sensitivity analysis of the control parameters (i.e., the population size, mutation scaling factor, crossover rate, and mutation schemes) was performed to examine their influence on the objective function. The proposed DE approach was then applied to solve a complex parameter-estimation problem for a small desert groundwater basin in Southern California. Hydraulic conductivity, specific yield, specific storage, fault conductance, and recharge components were estimated simultaneously. Comparison of DE and a traditional gradient-based approach (PEST) shows DE to be more robust and efficient. The results of this work not only provide an alternative for PSI in groundwater models, but also extend DE applications towards solving complex, regional-scale water management optimization problems.
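A minimal sketch of the differential evolution cycle (mutation, crossover, selection) using SciPy's built-in implementation; the multimodal Rastrigin function and the control-parameter values are illustrative stand-ins for the PSI objective:

```python
# Minimal sketch of differential evolution on a multimodal test
# function. The Rastrigin function stands in for the PSI objective;
# population size, mutation factor, and crossover (recombination) rate
# mirror the control parameters examined in the study's sensitivity
# analysis but the values here are illustrative.
import numpy as np
from scipy.optimize import differential_evolution

def rastrigin(x):
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 4
result = differential_evolution(rastrigin, bounds,
                                popsize=20, mutation=0.7,
                                recombination=0.9, seed=1)
print(result.x, result.fun)  # global optimum is at the origin
```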
General Methodology for Designing Spacecraft Trajectories
NASA Technical Reports Server (NTRS)
Condon, Gerald; Ocampo, Cesar; Mathur, Ravishankar; Morcos, Fady; Senent, Juan; Williams, Jacob; Davis, Elizabeth C.
2012-01-01
A methodology for designing spacecraft trajectories in any gravitational environment within the solar system has been developed. The methodology facilitates modeling and optimization for problems ranging from that of a single spacecraft orbiting a single celestial body to that of a mission involving multiple spacecraft and multiple propulsion systems operating in gravitational fields of multiple celestial bodies. The methodology consolidates almost all spacecraft trajectory design and optimization problems into a single conceptual framework requiring solution of either a system of nonlinear equations or a parameter-optimization problem with equality and/or inequality constraints.
Numerical methods in heat transfer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, R.W.
1985-01-01
This third volume in the Numerical Methods in Engineering series presents expanded versions of selected papers given at the Conference on Numerical Methods in Thermal Problems held in Venice in July 1981. In this reference work, contributors offer the current state of knowledge on the numerical solution of convective and conduction heat transfer problems.
Decision-problem state analysis methodology
NASA Technical Reports Server (NTRS)
Dieterly, D. L.
1980-01-01
A methodology for analyzing a decision-problem state is presented. The methodology is based on the analysis of an incident in terms of the set of decision-problem conditions encountered. By decomposing the events that preceded an unwanted outcome, such as an accident, into the set of decision-problem conditions that were resolved, a more comprehensive understanding is possible. Not all human-error accidents are caused by faulty decision-problem resolutions, but this appears to be one of the major areas of accidents cited in the literature. A three-phase methodology is presented which accommodates a wide spectrum of events. It allows for a systems content analysis of the available data to establish: (1) the resolutions made, (2) alternatives not considered, (3) resolutions missed, and (4) possible conditions not considered. The product is a map of the decision-problem conditions that were encountered as well as a projected, assumed set of conditions that should have been considered. The application of this methodology introduces a systematic approach to decomposing the events that transpired prior to the accident. The initial emphasis is on decision and problem resolution. The technique allows for a standardized method of decomposing an accident into a scenario which may be used for review or for the development of a training simulation.
NASA Astrophysics Data System (ADS)
Zheng, Mingfang; He, Cunfu; Lu, Yan; Wu, Bin
2018-01-01
We present a numerical method to solve the phase dispersion curves in general anisotropic plates. The approach involves an exact solution to the problem in the form of Legendre polynomials of multiple integrals, which we substitute into the state-vector formalism. To improve the efficiency of the proposed method, we take special care in laying out the analytical methodology, and we analyze the algebraic symmetries of the matrices in the state-vector formalism for anisotropic plates. The basic feature of the proposed method is the expansion of field quantities in Legendre polynomials. The Legendre polynomial method avoids solving the transcendental dispersion equation, which can only be solved numerically. The state-vector formalism combined with Legendre polynomial expansion distinguishes adjacent dispersion modes clearly, even when the modes are very close. We then illustrate the theoretical dispersion curves obtained by this method for isotropic and anisotropic plates. Finally, we compare the proposed method with the global matrix method (GMM) and find excellent agreement.
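A minimal sketch of the method's basic ingredient, the expansion of a field quantity in Legendre polynomials, using NumPy's Legendre module; the through-thickness profile is an illustrative stand-in:

```python
# Minimal sketch: expanding a field quantity across the plate thickness
# in Legendre polynomials. A sample displacement-like profile u(x) on
# [-1, 1] is projected onto the first few Legendre modes and then
# reconstructed; the profile itself is illustrative.
import numpy as np
from numpy.polynomial import legendre as L

x = np.linspace(-1.0, 1.0, 201)
u = np.cosh(2 * x) * np.sin(3 * x)       # stand-in field profile

coeffs = L.legfit(x, u, deg=10)          # least-squares Legendre fit
u_rec = L.legval(x, coeffs)              # reconstruction from the modes

print("max reconstruction error:", np.max(np.abs(u - u_rec)))
```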
Electro-thermo-optical simulation of vertical-cavity surface-emitting lasers
NASA Astrophysics Data System (ADS)
Smagley, Vladimir Anatolievich
A three-dimensional electro-thermal simulator based on the double-layer approximation for the active region was coupled to optical-gain and optical-field numerical simulators to provide a self-consistent steady-state solution of VCSEL current-voltage and current-output power characteristics. A methodology for VCSEL modeling was established and applied to model a standard 850-nm VCSEL based on a GaAs active region and a novel intracavity-contacted 400-nm GaN-based VCSEL. Results of the GaAs VCSEL simulation were in good agreement with experiment. Correlations between current injection and radiative mode profiles were observed. Physical sub-models of transport, optical gain and cavity optical field were developed. Carrier transport through DBRs was studied. The problem of optical fields in the VCSEL cavity was treated numerically by the effective frequency method. All the sub-models were connected through a spatially inhomogeneous rate equation system. It was shown that a conventional uncoupled analysis of each separate physical phenomenon would be insufficient to describe VCSEL operation.
Simulation of wind turbine wakes using the actuator line technique.
Sørensen, Jens N; Mikkelsen, Robert F; Henningson, Dan S; Ivanell, Stefan; Sarmast, Sasan; Andersen, Søren J
2015-02-28
The actuator line technique was introduced as a numerical tool to be employed in combination with large eddy simulations to enable the study of wakes and wake interaction in wind farms. The technique is now widely used for studying basic features of wakes as well as for making performance predictions of wind farms. In this paper, we give a short introduction to the wake problem and the actuator line methodology and present a study in which the technique is employed to determine the near-wake properties of wind turbines. The presented results include a comparison with experimental results on the wake characteristics of the flow around a three-bladed model wind turbine, the development of a simple analytical formula for determining the near-wake length behind a wind turbine and a detailed investigation of wake structures based on proper orthogonal decomposition analysis of numerically generated snapshots of the wake. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Compile-time estimation of communication costs in multicomputers
NASA Technical Reports Server (NTRS)
Gupta, Manish; Banerjee, Prithviraj
1991-01-01
An important problem facing numerous research projects on parallelizing compilers for distributed-memory machines is that of automatically determining a suitable data partitioning scheme for a program. Any strategy for automatic data partitioning needs a mechanism for estimating the performance of a program under a given partitioning scheme, the most crucial part of which involves determining the communication costs incurred by the program. A methodology is described for estimating the communication costs at compile time as functions of the numbers of processors over which various arrays are distributed. A strategy is described, along with its theoretical basis, for making program transformations that expose opportunities for combining messages, leading to considerable savings in communication costs. For certain loops with regular dependences, the compiler can detect the possibility of pipelining, and thus estimate communication costs more accurately than it could otherwise. These results are of great significance to any parallelization system supporting numeric applications on multicomputers. In particular, they lay down a framework for effective synthesis of communication on multicomputers from sequential program references.
NASA Astrophysics Data System (ADS)
Macías-Díaz, J. E.
2017-12-01
In this manuscript, we consider an initial-boundary-value problem governed by a (1 + 1)-dimensional hyperbolic partial differential equation with constant damping that generalizes many nonlinear wave equations from mathematical physics. The model considers the presence of a spatial Laplacian of fractional order which is defined in terms of Riesz fractional derivatives, as well as the inclusion of a generic continuously differentiable potential. It is known that the undamped regime has an associated positive energy functional, and we show here that it is preserved throughout time under suitable boundary conditions. To approximate the solutions of this model, we propose a finite-difference discretization based on fractional centered differences. Some discrete quantities are proposed in this work to estimate the energy functional, and we show that the numerical method is capable of conserving the discrete energy under the same boundary conditions for which the continuous model is conservative. Moreover, we establish suitable computational constraints under which the discrete energy of the system is positive. The method is second-order consistent, and is both stable and convergent. The numerical simulations shown here illustrate the most important features of our numerical methodology.
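A minimal sketch of the fractional centered differences underlying such a discretization, assuming the standard coefficient definition g_k = (-1)^k Γ(α+1) / (Γ(α/2-k+1) Γ(α/2+k+1)); the grid and test function are illustrative:

```python
# Minimal sketch of fractional centered differences for the Riesz
# fractional Laplacian -(-Lap)^(alpha/2) on a uniform grid, with the
# solution taken as zero outside the domain. Coefficients follow the
# standard definition; they are built by a stable recurrence to avoid
# evaluating Gamma near its poles.
import numpy as np
from scipy.special import gamma

def centered_coeffs(alpha, m):
    """First m+1 fractional centered difference coefficients g_0..g_m."""
    g = np.empty(m + 1)
    g[0] = gamma(alpha + 1.0) / gamma(alpha / 2.0 + 1.0) ** 2
    for k in range(m):
        g[k + 1] = g[k] * (k - alpha / 2.0) / (k + 1.0 + alpha / 2.0)
    return g

def riesz_laplacian(u, h, alpha):
    """Approximate -(-Lap)^(alpha/2) u at every grid node."""
    n = u.size
    g = centered_coeffs(alpha, n - 1)
    out = np.empty(n)
    for i in range(n):
        out[i] = -np.dot(g[np.abs(np.arange(n) - i)], u) / h**alpha
    return out

x = np.linspace(-1.0, 1.0, 101)
u = np.exp(-10 * x**2)
d = riesz_laplacian(u, x[1] - x[0], alpha=1.5)
```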
NASA Technical Reports Server (NTRS)
Ryabenkii, V. S.; Turchaninov, V. I.; Tsynkov, S. V.
1999-01-01
We propose a family of algorithms for solving numerically a Cauchy problem for the three-dimensional wave equation. The sources that drive the equation (i.e., the right-hand side) are compactly supported in space for any given time; they may, however, move in space at a subsonic speed. The solution is calculated inside a finite domain (e.g., a sphere) that also moves at a subsonic speed and always contains the support of the right-hand side. The algorithms employ a standard consistent and stable explicit finite-difference scheme for the wave equation. They allow one to calculate the solution for arbitrarily long time intervals without error accumulation and with a fixed, non-growing amount of CPU time and memory required for advancing one time step. The algorithms are inherently three-dimensional; they rely on the presence of lacunae in the solutions of the wave equation in odd-dimensional spaces. The methodology presented in the paper is, in fact, a building block for constructing nonlocal, highly accurate, unsteady artificial boundary conditions for the numerical simulation of waves propagating with finite speed over unbounded domains.
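For reference, a minimal sketch of the building block named above, a standard consistent and stable explicit finite-difference scheme for the wave equation, shown in 1D for brevity; the lacunae-based long-time truncation itself requires odd space dimensions (e.g. 3D) and is not reproduced here:

```python
# Minimal sketch of an explicit (leapfrog) finite-difference scheme for
# the wave equation u_tt = c^2 u_xx, the kind of scheme the algorithms
# build on. Grid, pulse, and CFL number are illustrative.
import numpy as np

c, L, n, steps = 1.0, 1.0, 201, 400
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
dt = 0.5 * dx / c                        # CFL number 0.5 for stability
r2 = (c * dt / dx) ** 2

u_prev = np.exp(-200 * (x - 0.5) ** 2)   # compact initial pulse
u = u_prev.copy()                        # zero initial velocity

for _ in range(steps):
    u_next = np.zeros_like(u)            # homogeneous Dirichlet ends
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next
```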
A boundary element method for Stokes flows with interfaces
NASA Astrophysics Data System (ADS)
Alinovi, Edoardo; Bottaro, Alessandro
2018-03-01
The boundary element method is a widely used and powerful technique to numerically describe multiphase flows with interfaces satisfying Stokes' approximation. However, low viscosity ratios between immiscible fluids in contact at an interface and large surface tensions may lead to consistency issues as far as mass conservation is concerned. A simple and effective approach is described to ensure mass conservation at all viscosity ratios and capillary numbers within a standard boundary element framework. Benchmark cases are initially considered to demonstrate the efficacy of the proposed technique in satisfying mass conservation, comparing with other approaches and solutions available in the literature. The methodology developed is finally applied to the problem of slippage over superhydrophobic surfaces.
“Epidemiological Criminology”: Coming Full Circle
Lanier, Mark M.
2009-01-01
Members of the public health and criminal justice disciplines often work with marginalized populations: people at high risk of drug use, health problems, incarceration, and other difficulties. As these fields increasingly overlap, distinctions between them are blurred, as numerous research reports and funding trends document. However, explicit theoretical and methodological linkages between the 2 disciplines remain rare. A new paradigm that links methods and statistical models of public health with those of their criminal justice counterparts is needed, as are increased linkages between epidemiological analogies, theories, and models and the corresponding tools of criminology. We outline disciplinary commonalities and distinctions, present policy examples that integrate similarities, and propose “epidemiological criminology” as a bridging framework. PMID:19150901
Progress on bioinspired, biomimetic, and bioreplication routes to harvest solar energy
NASA Astrophysics Data System (ADS)
Martín-Palma, Raúl J.; Lakhtakia, Akhlesh
2017-06-01
Although humans have long been imitating biological structures to serve their particular purposes, only a few decades ago did engineered biomimicry begin to be considered a technoscientific discipline with great problem-solving potential. The three methodologies of engineered biomimicry, viz., bioinspiration, biomimicry, and bioreplication, employ and impact numerous technoscientific fields. For producing fuels and electricity by artificial photosynthesis, both processes and porous surfaces inspired by plants and certain marine animals are under active investigation. Biomimetically textured surfaces on the subwavelength scale have been shown to reduce the reflectance of photovoltaic solar cells over the visible and near-infrared regimes. Lenticular compound lenses bioreplicated from insect eyes by an industrially scalable technique offer a similar promise.
Topology synthesis and size optimization of morphing wing structures
NASA Astrophysics Data System (ADS)
Inoyama, Daisaku
This research demonstrates a novel topology and size optimization methodology for synthesis of distributed actuation systems with specific applications to morphing air vehicle structures. The main emphasis is placed on the topology and size optimization problem formulations and the development of computational modeling concepts. The analysis model is developed to meet several important criteria: It must allow a rigid-body displacement, as well as a variation in planform area, with minimum strain on structural members while retaining acceptable numerical stability for finite element analysis. Topology optimization is performed on a semi-ground structure with design variables that control the system configuration. In effect, the optimization process assigns morphing members as "soft" elements, non-morphing load-bearing members as "stiff" elements, and non-existent members as "voids." The optimization process also determines the optimum actuator placement, where each actuator is represented computationally by equal and opposite nodal forces with soft axial stiffness. In addition, the configuration of attachments that connect the morphing structure to a non-morphing structure is determined simultaneously. Several different optimization problem formulations are investigated to understand their potential benefits in solution quality, as well as meaningfulness of the formulations. Extensions and enhancements to the initial concept and problem formulations are made to accommodate multiple-configuration definitions. In addition, the principal issues on the external-load dependency and the reversibility of a design, as well as the appropriate selection of a reference configuration, are addressed in the research. The methodology to control actuator distributions and concentrations is also discussed. Finally, the strategy to transfer the topology solution to the sizing optimization is developed and cross-sectional areas of existent structural members are optimized under applied aerodynamic loads. That is, the optimization process is implemented in sequential order: The actuation system layout is first determined through a multi-disciplinary topology optimization process, and then the thickness or cross-sectional area of each existent member is optimized under given constraints and boundary conditions. Sample problems are solved to demonstrate the potential capabilities of the presented methodology. The research demonstrates an innovative structural design procedure from a computational perspective and opens new insights into the potential design requirements and characteristics of morphing structures.
Design of bearings for rotor systems based on stability
NASA Technical Reports Server (NTRS)
Dhar, D.; Barrett, L. E.; Knospe, C. R.
1992-01-01
Design of rotor systems incorporating stable behavior is of great importance to manufacturers of high-speed centrifugal machinery, since destabilizing mechanisms (from bearings, seals, aerodynamic cross coupling, noncolocation effects from magnetic bearings, etc.) increase with machine efficiency and power density. A new method of designing bearing parameters (stiffness and damping coefficients, or coefficients of the controller transfer function) is proposed, based on a numerical search in the parameter space. The feedback control law is based on a decentralized low-order controller structure, and the various design requirements are specified as constraints in the specification and parameter spaces. An algorithm is proposed for solving the problem as a sequence of constrained 'minimax' problems, moving more and more eigenvalues into an acceptable region in the complex plane. The algorithm uses the method of feasible directions to solve the nonlinear constrained minimization problem at each stage. This methodology emphasizes the designer's interaction with the algorithm to generate acceptable designs by relaxing various constraints and changing initial guesses interactively. A design-oriented user interface is proposed to facilitate the interaction.
Luo, Haoxiang; Mittal, Rajat; Zheng, Xudong; Bielamowicz, Steven A.; Walsh, Raymond J.; Hahn, James K.
2008-01-01
A new numerical approach for modeling a class of flow–structure interaction problems typically encountered in biological systems is presented. In this approach, a previously developed, sharp-interface, immersed-boundary method for incompressible flows is used to model the fluid flow and a new, sharp-interface Cartesian grid, immersed boundary method is devised to solve the equations of linear viscoelasticity that governs the solid. The two solvers are coupled to model flow–structure interaction. This coupled solver has the advantage of simple grid generation and efficient computation on simple, single-block structured grids. The accuracy of the solid-mechanics solver is examined by applying it to a canonical problem. The solution methodology is then applied to the problem of laryngeal aerodynamics and vocal fold vibration during human phonation. This includes a three-dimensional eigen analysis for a multi-layered vocal fold prototype as well as two-dimensional, flow-induced vocal fold vibration in a modeled larynx. Several salient features of the aerodynamics as well as vocal-fold dynamics are presented. PMID:19936017
Global Optimal Trajectory in Chaos and NP-Hardness
NASA Astrophysics Data System (ADS)
Latorre, Vittorio; Gao, David Yang
This paper presents an unconventional theory and method for solving general nonlinear dynamical systems. Instead of direct iterative methods, the discretized nonlinear system is first formulated as a global optimization problem via the least squares method. A newly developed canonical duality theory shows that this nonconvex minimization problem can be solved deterministically in polynomial time if a global optimality condition is satisfied. The so-called pseudo-chaos produced by linear iterative methods is mainly due to intrinsic numerical error accumulation. Otherwise, the global optimization problem could be NP-hard and the nonlinear system can be truly chaotic. A conjecture is proposed which reveals the connection between chaos in nonlinear dynamics and NP-hardness in computer science. The methodology and the conjecture are verified by applications to the well-known logistic equation, a forced memristive circuit and the Lorenz system. Computational results show that the canonical duality theory can be used to identify chaotic systems and to obtain realistic global optimal solutions in nonlinear dynamical systems. The method and results presented in this paper should bring some new insights into nonlinear dynamical systems and NP-hardness in computational complexity theory.
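A minimal sketch of the reformulation step, with a logistic-equation trajectory treated as the unknown of a least-squares problem; a local SciPy solver stands in for the paper's canonical duality approach, which targets the global minimum:

```python
# Minimal sketch of the paper's first step: instead of iterating the
# logistic map directly, the whole discretized trajectory is treated as
# the unknown of a least-squares problem. A local solver is used here,
# so it may stop at a local minimum; the canonical duality theory
# described in the abstract is what targets the global one.
import numpy as np
from scipy.optimize import least_squares

r, N, x0 = 3.8, 50, 0.4

def residuals(x):
    # x holds x_1..x_N; enforce x_{k+1} = r * x_k * (1 - x_k)
    xs = np.concatenate(([x0], x))
    return xs[1:] - r * xs[:-1] * (1.0 - xs[:-1])

guess = np.full(N, 0.5)
sol = least_squares(residuals, guess)
print("final residual norm:", np.linalg.norm(sol.fun))
```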
SPH for impact force and ricochet behavior of water-entry bodies
NASA Astrophysics Data System (ADS)
Omidvar, Pourya; Farghadani, Omid; Nikeghbali, Pooyan
The numerical modeling of fluid interaction with a bouncing body has many scientific and engineering applications. In this paper, the problem of water impact of a body on the free surface is investigated, where a fixed ghost boundary condition is added to the open-source code SPHysics2D to rectify the oscillations in pressure distributions obtained with the repulsive boundary condition. First, after introducing the SPH methodology and the boundary-condition options, the still-water problem is simulated using the two types of boundary conditions; it is shown that the fixed ghost boundary condition gives a better result for the hydrostatic pressure. Then the dam-break problem, a benchmark test case in SPH, is simulated and compared with available data. To show the behavior of the hydrostatic forces on bodies, a fixed/floating cylinder is placed on the free surface, looking carefully at the force and heaving profiles. Finally, the impact of a body on the free surface is successfully simulated for different impact angles and velocities.
NASA Astrophysics Data System (ADS)
Moraes Rêgo, Patrícia Helena; Viana da Fonseca Neto, João; Ferreira, Ernesto M.
2015-08-01
The main focus of this article is to present a proposal to solve, via UDU^T factorisation, the convergence and numerical stability problems that are related to the covariance matrix ill-conditioning of the recursive least squares (RLS) approach for online approximations of the algebraic Riccati equation (ARE) solution associated with the discrete linear quadratic regulator (DLQR) problem formulated in the actor-critic reinforcement learning and approximate dynamic programming context. The parameterisations of the Bellman equation, utility function and dynamic system as well as the algebra of Kronecker product assemble a framework for the solution of the DLQR problem. The condition number and the positivity parameter of the covariance matrix are associated with statistical metrics for evaluating the approximation performance of the ARE solution via RLS-based estimators. The performance of RLS approximators is also evaluated in terms of consistence and polarisation when associated with reinforcement learning methods. The used methodology contemplates realisations of online designs for DLQR controllers that is evaluated in a multivariable dynamic system model.
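For context, a minimal sketch of a plain recursive least-squares update with forgetting factor; the directly propagated covariance matrix P below is exactly the form prone to the ill-conditioning the article addresses, and the article's UDU^T-factorised propagation (not shown) is the proposed remedy. Data are synthetic:

```python
# Minimal sketch of plain RLS with forgetting factor. Propagating P
# directly, as here, can become ill-conditioned; the article's remedy
# is to propagate P in UDU^T-factorised form instead.
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0, 0.5])
n = theta_true.size

theta = np.zeros(n)
P = 1e3 * np.eye(n)       # large initial covariance
lam = 0.99                # forgetting factor

for _ in range(500):
    x = rng.normal(size=n)                 # regressor
    y = x @ theta_true + 0.01 * rng.normal()
    e = y - x @ theta                      # a-priori error
    K = P @ x / (lam + x @ P @ x)          # gain
    theta = theta + K * e
    P = (P - np.outer(K, x @ P)) / lam

print(theta)  # close to theta_true
```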
ORACLS: A system for linear-quadratic-Gaussian control law design
NASA Technical Reports Server (NTRS)
Armstrong, E. S.
1978-01-01
A modern control theory design package (ORACLS) for constructing controllers and optimal filters for systems modeled by linear time-invariant differential or difference equations is described. Numerical linear-algebra procedures are used to implement the linear-quadratic-Gaussian (LQG) methodology of modern control theory. Algorithms are included for computing eigensystems of real matrices, the relative stability of a matrix, factored forms for nonnegative definite matrices, the solutions and least squares approximations to the solutions of certain linear matrix algebraic equations, the controllability properties of a linear time-invariant system, and the steady state covariance matrix of an open-loop stable system forced by white noise. Subroutines are provided for solving both the continuous and discrete optimal linear regulator problems with noise free measurements and the sampled-data optimal linear regulator problem. For measurement noise, duality theory and the optimal regulator algorithms are used to solve the continuous and discrete Kalman-Bucy filter problems. Subroutines are also included which give control laws causing the output of a system to track the output of a prescribed model.
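A minimal sketch of the core regulator computation such a package automates, solving the discrete algebraic Riccati equation and forming the state-feedback gain; SciPy stands in here for the ORACLS Fortran subroutines, and the toy double-integrator system and weights are illustrative assumptions:

```python
# Minimal sketch of a discrete optimal linear regulator computation:
# solve the algebraic Riccati equation, then form the feedback gain.
# System matrices and weights are illustrative.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])     # discretized double integrator, dt = 0.1
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)                  # state weighting
R = np.array([[1.0]])          # control weighting

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u = -K x

print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```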
Use of paired simple and complex models to reduce predictive bias and quantify uncertainty
NASA Astrophysics Data System (ADS)
Doherty, John; Christensen, Steen
2011-12-01
Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, this reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promote good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights into the costs of model simplification, and into how some of these costs may be reduced. It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology is demonstrated using a synthetic example based on groundwater modeling environments commonly encountered in northern Europe and North America.
Global dynamic optimization approach to predict activation in metabolic pathways.
de Hijas-Liste, Gundián M; Klipp, Edda; Balsa-Canto, Eva; Banga, Julio R
2014-01-06
During the last decade, a number of authors have shown that the genetic regulation of metabolic networks may follow optimality principles. Optimal control theory has been successfully used to compute optimal enzyme profiles considering simple metabolic pathways. However, applying this optimal control framework to more general networks (e.g. branched networks, or networks incorporating enzyme production dynamics) yields problems that are analytically intractable and/or numerically very challenging. Further, these previous studies have only considered a single-objective framework. In this work we consider a more general multi-objective formulation and we present solutions based on recent developments in global dynamic optimization techniques. We illustrate the performance and capabilities of these techniques considering two sets of problems. First, we consider a set of single-objective examples of increasing complexity taken from the recent literature. We analyze the multimodal character of the associated nonlinear optimization problems, and we also evaluate different global optimization approaches in terms of numerical robustness, efficiency and scalability. Second, we consider generalized multi-objective formulations for several examples, and we show how this framework results in more biologically meaningful results. The proposed strategy was used to solve a set of single-objective case studies related to unbranched and branched metabolic networks of different levels of complexity. All problems were successfully solved in reasonable computation times with our global dynamic optimization approach, reaching solutions which were comparable to or better than those reported in the previous literature. Further, we considered, for the first time, multi-objective formulations, illustrating how activation in metabolic pathways can be explained in terms of the best trade-offs between conflicting objectives. This new methodology can be applied to metabolic networks with arbitrary topologies, non-linear dynamics and constraints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolly, S; Chen, H; Mutic, S
Purpose: A persistent challenge for the quality assessment of radiation therapy treatments (e.g. contouring accuracy) is the absence of the known, ground truth for patient data. Moreover, assessment results are often patient-dependent. Computer simulation studies utilizing numerical phantoms can be performed for quality assessment with a known ground truth. However, previously reported numerical phantoms do not include the statistical properties of inter-patient variations, as their models are based on only one patient. In addition, these models do not incorporate tumor data. In this study, a methodology was developed for generating numerical phantoms which encapsulate the statistical variations of patients within radiation therapy, including tumors. Methods: Based on previous work in contouring assessment, geometric attribute distribution (GAD) models were employed to model both the deterministic and stochastic properties of individual organs via principal component analysis. Using pre-existing radiation therapy contour data, the GAD models are trained to model the shape and centroid distributions of each organ. Then, organs with different shapes and positions can be generated by assigning statistically sound weights to the GAD model parameters. Organ contour data from 20 retrospective prostate patient cases were manually extracted and utilized to train the GAD models. As a demonstration, computer-simulated CT images of generated numerical phantoms were calculated and assessed subjectively and objectively for realism. Results: A cohort of numerical phantoms of the male human pelvis was generated. CT images were deemed realistic both subjectively and objectively in terms of image noise power spectrum. Conclusion: A methodology has been developed to generate realistic numerical anthropomorphic phantoms using pre-existing radiation therapy data. The GAD models guarantee that generated organs span the statistical distribution of observed radiation therapy patients, according to the training dataset. The methodology enables radiation therapy treatment assessment with multi-modality imaging and a known ground truth, and without patient-dependent bias.
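A minimal sketch of the GAD-model idea, principal component analysis of training shape vectors followed by statistically sound sampling of mode weights; the training matrix below is random stand-in data, not patient contours:

```python
# Minimal sketch: PCA on training shape vectors (e.g. stacked contour
# coordinates), then synthesis of new organ shapes by drawing weights
# for the leading modes. All data here are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_coords = 20, 60         # 20 training shapes, 30 (x, y) points
X = rng.normal(size=(n_patients, n_coords))

mean_shape = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
std_modes = s / np.sqrt(n_patients - 1)   # per-mode standard deviations

k = 5                                  # keep the leading k modes
w = rng.normal(size=k)                 # statistically sound mode weights
new_shape = mean_shape + (w * std_modes[:k]) @ Vt[:k]
```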
ERIC Educational Resources Information Center
Gauthier, Benoit; And Others
1997-01-01
Identifies the more representative problem-solving models in environmental education. Suggests the addition of a strategy for defining a problem situation using Soft Systems Methodology to environmental education activities explicitly designed for the development of critical thinking. Contains 45 references. (JRH)
Development of numerical techniques for simulation of magnetogasdynamics and hypersonic chemistry
NASA Astrophysics Data System (ADS)
Damevin, Henri-Marie
Magnetogasdynamics, the science concerned with the mutual interaction between electromagnetic fields and flows of electrically conducting gas, offers promising advances in flow control and propulsion of future hypersonic vehicles. Numerical simulations are essential for understanding phenomena, and for research and development. The current dissertation is devoted to the development and validation of numerical algorithms for the solution of multidimensional magnetogasdynamic equations and the simulation of hypersonic high-temperature effects. Governing equations are derived, based on classical magnetogasdynamic assumptions. Two sets of equations are considered, namely the full equations and equations in the low magnetic Reynolds number approximation. Equations are expressed in a suitable formulation for discretization by finite differences in a computational space. For the full equations, Gauss law for magnetism is enforced using Powell's methodology. The time integration method is a four-stage modified Runge-Kutta scheme, amended with a Total Variation Diminishing model in a postprocessing stage. The eigensystem, required for the Total Variation Diminishing scheme, is derived in a generalized three-dimensional coordinate system. For the simulation of hypersonic high-temperature effects, two chemical models are utilized, namely a nonequilibrium model and an equilibrium model. A loosely coupled approach is implemented to communicate between the magnetogasdynamic equations and the chemical models. The nonequilibrium model is a one-temperature, five-species, seventeen-reaction model solved by an implicit flux-vector splitting scheme. The chemical equilibrium model computes thermodynamic properties using curve fit procedures. Selected results are provided, which explore the different features of the numerical algorithms. The shock-capturing properties are validated for shock-tube simulations using numerical solutions reported in the literature. The computations of superfast flows over corners and in convergent channels demonstrate the performance of the algorithm in multiple dimensions. The implementation of diffusion terms is validated by solving the magnetic Rayleigh problem and Hartmann problem, for which analytical solutions are available. Predictions of blunt-body type flows are investigated and compared with numerical solutions reported in the literature. The effectiveness of the chemical models for hypersonic flow over blunt bodies is examined in various flow conditions. It is shown that the proposed schemes perform well in a variety of test cases, though some limitations have been identified.
Practical global oceanic state estimation
NASA Astrophysics Data System (ADS)
Wunsch, Carl; Heimbach, Patrick
2007-06-01
The problem of oceanographic state estimation, by means of an ocean general circulation model (GCM) and a multitude of observations, is described and contrasted with the meteorological process of data assimilation. In practice, all such methods reduce, on the computer, to forms of least-squares. The global oceanographic problem is at the present time focussed primarily on smoothing, rather than forecasting, and the data types are unlike meteorological ones. As formulated in the consortium Estimating the Circulation and Climate of the Ocean (ECCO), an automatic differentiation tool is used to calculate the so-called adjoint code of the GCM, and the method of Lagrange multipliers is used to render the problem one of unconstrained least-squares minimization. Major problems today lie less with the numerical algorithms (least-squares problems can be solved by many means) than with the issues of data and model error. Results of ongoing calculations covering the period of the World Ocean Circulation Experiment, and including among other data, satellite altimetry from TOPEX/POSEIDON, Jason-1, ERS-1/2, ENVISAT, and GFO, a global array of profiling floats from the Argo program, and satellite gravity data from the GRACE mission, suggest that the solutions are now useful for scientific purposes. Both methodology and applications are developing in a number of different directions.
Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A
2017-12-01
The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
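A minimal sketch of one sum-of-outer-products block coordinate descent pass as described above, with hard thresholding for the sparse coefficients and a normalized residual product for each atom; dimensions, the threshold, and the random data are illustrative assumptions:

```python
# Minimal sketch of one pass of sum-of-outer-products (SOUP) block
# coordinate descent: the data Y is modeled as a sum of sparse
# rank-one matrices d_j c_j^T, and each atom/coefficient pair is
# updated in closed form. All sizes and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, N, J = 16, 200, 8          # signal size, signal count, atom count
Y = rng.normal(size=(n, N))

D = rng.normal(size=(n, J))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
C = np.zeros((N, J))                     # sparse coefficients
lam = 1.0                                # hard-threshold level

for j in range(J):
    # Residual with atom j's own contribution added back in.
    E = Y - D @ C.T + np.outer(D[:, j], C[:, j])
    cj = E.T @ D[:, j]
    cj[np.abs(cj) < lam] = 0.0           # closed-form sparse update
    if np.any(cj):
        dj = E @ cj                      # atom update: normalized
        D[:, j] = dj / np.linalg.norm(dj)
    C[:, j] = cj
```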
A Computational Methodology for Simulating Thermal Loss Testing of the Advanced Stirling Convertor
NASA Technical Reports Server (NTRS)
Reid, Terry V.; Wilson, Scott D.; Schifer, Nicholas A.; Briggs, Maxwell H.
2012-01-01
The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed which provided a more accurate value for net heat input into the ASCs, including the use of multidimensional numerical models. Validation test hardware has also been used to provide a direct comparison of numerical results and validate the multi-dimensional numerical models used to predict convertor net heat input and efficiency. These validation tests were designed to simulate the temperature profile of an operating Stirling convertor and resulted in a measured net heat input of 244.4 W. The methodology was applied to the multi-dimensional numerical model which resulted in a net heat input of 240.3 W. The computational methodology resulted in a value of net heat input that was 1.7 percent less than that measured during laboratory testing. The resulting computational methodology and results are discussed.
Khoram, Nafiseh; Zayane, Chadia; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem
2016-03-15
The calibration of the hemodynamic model that describes changes in blood flow and blood oxygenation during brain activation is a crucial step for successfully monitoring and possibly predicting brain activity. This in turn has the potential to enable diagnosis and treatment of brain diseases in early stages. We propose an efficient numerical procedure for calibrating the hemodynamic model using fMRI measurements. The proposed solution methodology is a regularized iterative method equipped with a Kalman filtering-type procedure. The Newton component of the proposed method addresses the nonlinear aspect of the problem. The regularization feature is used to ensure the stability of the algorithm. The Kalman filter procedure is incorporated here to address the noise in the data. Numerical results obtained with synthetic data as well as with real fMRI measurements are presented to illustrate the accuracy, robustness to noise, and cost-effectiveness of the proposed method. We present numerical results that clearly demonstrate that the proposed method outperforms the Cubature Kalman Filter (CKF), one of the most prominent existing numerical methods. We have designed an iterative numerical technique, called the TNM-CKF algorithm, for calibrating the mathematical model that describes the single-event-related brain response when fMRI measurements are given. The method appears to be highly accurate and effective in reconstructing the BOLD signal even when the measurements are tainted with a high noise level (as high as 30%). Published by Elsevier B.V.
Autonomous interplanetary constellation design
NASA Astrophysics Data System (ADS)
Chow, Cornelius Channing, II
According to NASA's integrated space technology roadmaps, space-based infrastructures are envisioned as necessary ingredients to a sustained effort in continuing space exploration. Whether it be for extra-terrestrial habitats, roving/cargo vehicles, or space tourism, autonomous space networks will provide a vital communications lifeline for both future robotic and human missions alike. Projecting that the Moon will be a bustling hub of activity within a few decades, a near-term opportunity for in-situ infrastructure development is within reach. This dissertation addresses the anticipated need for in-space infrastructure by investigating a general design methodology for autonomous interplanetary constellations; to illustrate the theory, this manuscript presents results from an application to the Earth-Moon neighborhood. The constellation design methodology is formulated as an optimization problem, involving a trajectory design step followed by a spacecraft placement sequence. Modeling the dynamics as a restricted 3-body problem, the investigated design space consists of families of periodic orbits which play host to the constellations, punctuated by arrangements of spacecraft autonomously guided by a navigation strategy called LiAISON (Linked Autonomous Interplanetary Satellite Orbit Navigation). Instead of more traditional exhaustive search methods, a numerical continuation approach is implemented to map the admissible configuration space. In particular, Keller's pseudo-arclength technique is used to follow folding/bifurcating solution manifolds, which are otherwise inaccessible with other parameter continuation schemes. A succinct characterization of the underlying structure of the local, as well as global, extrema is thus achievable with little a priori intuition of the solution space. Furthermore, the proposed design methodology offers benefits in computation speed plus the ability to handle mildly stochastic systems. An application of the constellation design methodology to the restricted Earth-Moon system reveals optimal pairwise configurations for various L1, L2, and L5 (halo, axial, and vertical) periodic orbit families. Navigation accuracies ranging from O(10^±1) meters in position space are obtained for the optimal Earth-Moon constellations, given measurement noise on the order of 1 meter.
NASA Astrophysics Data System (ADS)
Fekete, Tamás
2018-05-01
Structural integrity calculations play a crucial role in designing large-scale pressure vessels. Used in the electric power generation industry, these kinds of vessels undergo extensive safety analyses and certification procedures before being deemed feasible for future long-term operation. The calculations are nowadays directed and supported by international standards and guides based on state-of-the-art results of applied research and technical development. However, their ability to predict a vessel's behavior under accidental circumstances after long-term operation is largely limited by the strong dependence of the analysis methodology on empirical models that are correlated to the behavior of structural materials and their changes during material aging. Recently, a new scientific-engineering paradigm, structural integrity, has been developing that is essentially a synergistic collaboration between a number of scientific and engineering disciplines, modeling, experiments and numerics. Although the application of the structural integrity paradigm has contributed greatly to improving the accuracy of safety evaluations of large-scale pressure vessels, the predictive power of the analysis methodology has not yet improved significantly. This is because existing structural integrity calculation methodologies are based on the widespread and commonly accepted 'traditional' engineering thermal stress approach, which rests on a weakly coupled model of thermomechanics and fracture mechanics. Recently, research has been initiated at MTA EK with the aim of reviewing and evaluating current methodologies and models applied in structural integrity calculations, including their scope of validity. The research intends to come to a better understanding of the physical problems that are inherently present in the pool of structural integrity problems of reactor pressure vessels, and to ultimately find a theoretical framework that could serve as a well-grounded foundation for a new modeling framework of structural integrity. This paper presents the first findings of the research project.
A Novel Numerical Method for Fuzzy Boundary Value Problems
NASA Astrophysics Data System (ADS)
Can, E.; Bayrak, M. A.; Hicdurmaz
2016-05-01
In the present paper, a new numerical method is proposed for solving fuzzy differential equations, which are utilized for modeling problems in science and engineering. The fuzzy approach is selected for its ability to process uncertainty or subjective information in mathematical models of physical problems. A second-order fuzzy linear boundary value problem is considered in particular because of its importance in physics. Moreover, numerical experiments are presented to show the effectiveness of the proposed numerical method on specific physical problems such as heat conduction in an infinite plate and a fin.
Robust approximate optimal guidance strategies for aeroassisted orbital transfer missions
NASA Astrophysics Data System (ADS)
Ilgen, Marc R.
This thesis presents the application of game-theoretic and regular perturbation methods to the problem of determining robust approximate optimal guidance laws for aeroassisted orbital transfer missions with atmospheric density and navigated state uncertainties. The optimal guidance problem is reformulated as a differential game with the guidance law designer and Nature as opposing players. The resulting equations comprise the necessary conditions for the optimal closed-loop guidance strategy in the presence of worst-case parameter variations. While these equations are nonlinear and cannot be solved analytically, the presence of a small parameter in the equations of motion allows the method of regular perturbations to be used to solve them approximately. This thesis is divided into five parts. The first part introduces the class of problems to be considered and presents results of previous research. The second part presents explicit semianalytical guidance law techniques for the aerodynamically dominated region of flight. These guidance techniques are applied to unconstrained and control-constrained aeroassisted plane change missions and Mars aerocapture missions, all subject to significant atmospheric density variations. The third part presents a guidance technique for aeroassisted orbital transfer problems in the gravitationally dominated region of flight. Regular perturbations are used to design an implicit guidance technique similar to the second-variation technique but without the need to numerically compute an optimal trajectory prior to flight. This methodology is then applied to a set of aeroassisted inclination change missions. In the fourth part, the explicit regular perturbation solution technique is extended to the class of guidance laws with partial state information. This methodology is then applied to an aeroassisted plane change mission using inertial measurements and subject to uncertainties in the initial value of the flight path angle. A summary of performance results for all these guidance laws is presented in the fifth part of the thesis, along with recommendations for further research.
NASA Technical Reports Server (NTRS)
Gossard, Myron L
1952-01-01
An iterative transformation procedure suggested by H. Wielandt for numerical solution of flutter and similar characteristic-value problems is presented. Application of this procedure to ordinary natural-vibration problems and to flutter problems is shown by numerical examples. Comparisons of computed results with experimental values and with results obtained by other methods of analysis are made.
Applications of numerical methods to simulate the movement of contaminants in groundwater.
Sun, N Z
1989-01-01
This paper reviews mathematical models and numerical methods that have been extensively used to simulate the movement of contaminants through the subsurface. The major emphasis is placed on numerical methods for advection-dominated transport problems and inverse problems. Several mathematical models that are commonly used in field problems are listed. A variety of numerical solutions for three-dimensional models are introduced, including the multiple cell balance method, which can be considered a variation of the finite element method; it is easy to understand and convenient for solving field problems. When advective transport dominates dispersive transport, two kinds of numerical difficulty, overshoot and numerical dispersion, arise in standard finite difference and finite element methods. To overcome these difficulties, various numerical techniques have been developed, such as upstream weighting methods and moving point methods. A complete review of these methods is given, and the problems of parameter identification, reliability analysis, and optimal experiment design, which are essential for constructing a practical model, are also discussed.
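As a concrete illustration of the upstream-weighting idea mentioned above, the sketch below advances a 1-D advection-dispersion profile with an explicit upwind difference for the advective term, which suppresses the overshoot a centered difference would produce in the advection-dominated regime. All grid and transport parameters are invented for the example.

```python
import numpy as np

def advect_disperse_upwind(c, v, D, dx, dt, steps):
    """Explicit 1-D advection-dispersion step with upstream (upwind) weighting.

    Assumes v > 0 and fixed boundary values. Stability (sketch):
    dt <= dx/v (Courant) and dt <= dx**2 / (2*D).
    """
    c = c.copy()
    for _ in range(steps):
        adv = -v * (c[1:-1] - c[:-2]) / dx               # upstream difference
        disp = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        c[1:-1] += dt * (adv + disp)
    return c

# Sharp concentration front transported with mild dispersion (high Peclet number).
x = np.linspace(0.0, 1.0, 201)
c0 = np.where(x < 0.1, 1.0, 0.0)
c = advect_disperse_upwind(c0, v=1.0, D=1e-4, dx=x[1] - x[0], dt=2e-3, steps=200)
```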
A homogenization-based quasi-discrete method for the fracture of heterogeneous materials
NASA Astrophysics Data System (ADS)
Berke, P. Z.; Peerlings, R. H. J.; Massart, T. J.; Geers, M. G. D.
2014-05-01
The understanding and prediction of the failure behaviour of materials with pronounced microstructural effects are of crucial importance. This paper presents a novel computational methodology for handling fracture on the basis of the microscale behaviour. The basic principles presented here allow the incorporation of an adaptive discretization scheme of the structure as a function of the evolution of strain localization in the underlying microstructure. The proposed quasi-discrete methodology bridges two scales: the scale of the material microstructure, modelled with a continuum-type description, and the structural scale, where a discrete description of the material is adopted. The damaging material at the structural scale is divided into unit volumes, called cells, which are represented as a discrete network of points. The scale transition is inspired by computational homogenization techniques; however, it does not rely on classical averaging theorems. The structural discrete equilibrium problem is formulated in terms of the underlying fine-scale computations. Particular boundary conditions are developed on the scale of the material microstructure to address damage localization problems. The performance of this quasi-discrete method with the enhanced boundary conditions is assessed using different computational test cases. The predictions of the quasi-discrete scheme agree well with reference solutions obtained through direct numerical simulations, both in terms of crack patterns and load versus displacement responses.
NASA Technical Reports Server (NTRS)
Shkarayev, S.; Krashantisa, R.; Tessler, A.
2004-01-01
An important and challenging technology aimed at the next generation of aerospace vehicles is that of structural health monitoring. The key problem is to determine accurately, reliably, and in real time the applied loads, stresses, and displacements experienced in flight, with such data establishing an information database for structural health monitoring. The present effort is aimed at developing a finite element-based methodology involving an inverse formulation that employs measured surface strains to recover the applied loads, stresses, and displacements in an aerospace vehicle in real time. The computational procedure uses a standard finite element model (i.e., "direct analysis") of a given airframe, with the subsequent application of the inverse interpolation approach. The inverse interpolation formulation is based on a parametric approximation of the loading and is further constructed through a least-squares minimization of calculated and measured strains. This procedure results in the governing system of linear algebraic equations, providing the unknown coefficients that accurately define the load approximation. Numerical simulations are carried out for problems involving various levels of structural approximation. These include plate-loading examples and an aircraft wing box. Accuracy and computational efficiency of the proposed method are discussed in detail. The experimental validation of the methodology by way of structural testing of an aircraft wing is also discussed.
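The inverse interpolation step described above reduces, once the loading is parametrized, to a linear least-squares problem for the load-approximation coefficients. A minimal sketch follows; the gauge-sensitivity matrix here is purely hypothetical, whereas in the actual methodology it would come from direct finite-element analyses of the airframe.

```python
import numpy as np

# Hypothetical sensitivity matrix S: S[i, j] = strain at gauge i per unit
# coefficient of load basis function j, obtained from "direct" FE runs.
rng = np.random.default_rng(1)
S = rng.standard_normal((50, 6))          # 50 strain gauges, 6 load basis functions
c_true = np.array([2.0, -1.0, 0.5, 0.0, 1.5, -0.3])
eps_measured = S @ c_true + 0.02 * rng.standard_normal(50)   # noisy surface strains

# Least-squares minimization of ||S c - eps_measured|| recovers the load
# coefficients; loads, stresses, and displacements then follow from the
# direct model evaluated with the reconstructed load.
c_hat, *_ = np.linalg.lstsq(S, eps_measured, rcond=None)
print(np.round(c_hat, 2))
```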
NASA Astrophysics Data System (ADS)
Leu, Jun-Der; Lee, Larry Jung-Hsing
2017-09-01
Enterprise resource planning (ERP) is a software solution that integrates the operational processes of the business functions of an enterprise. However, implementing ERP systems is a complex process. In addition to the technical issues, companies must address problems associated with business process re-engineering, time and budget control, and organisational change. Numerous industrial studies have shown that the failure rate of ERP implementation is high, even for well-designed systems. Thus, ERP projects typically require a clear methodology to support the project execution and effectiveness. In this study, we propose a theoretical model for ERP implementation. The value engineering (VE) method forms the basis of the proposed framework, which integrates Six Sigma tools. The proposed framework encompasses five phases: knowledge generation, analysis, creation, development and execution. In the VE method, potential ERP problems related to software, hardware, consultation and organisation are analysed in a group-decision manner and in relation to value, and Six Sigma tools are applied to avoid any project defects. We validate the feasibility of the proposed model by applying it to an international manufacturing enterprise in Taiwan. The results show improvements in customer response time and operational efficiency in terms of work-in-process and turnover of materials. Based on the evidence from the case study, the theoretical framework is discussed together with the study's limitations and suggestions for future research.
Concurrent airline fleet allocation and aircraft design with profit modeling for multiple airlines
NASA Astrophysics Data System (ADS)
Govindaraju, Parithi
A "System of Systems" (SoS) approach is particularly beneficial in analyzing complex large scale systems comprised of numerous independent systems -- each capable of independent operations in their own right -- that when brought in conjunction offer capabilities and performance beyond the constituents of the individual systems. The variable resource allocation problem is a type of SoS problem, which includes the allocation of "yet-to-be-designed" systems in addition to existing resources and systems. The methodology presented here expands upon earlier work that demonstrated a decomposition approach that sought to simultaneously design a new aircraft and allocate this new aircraft along with existing aircraft in an effort to meet passenger demand at minimum fleet level operating cost for a single airline. The result of this describes important characteristics of the new aircraft. The ticket price model developed and implemented here enables analysis of the system using profit maximization studies instead of cost minimization. A multiobjective problem formulation has been implemented to determine characteristics of a new aircraft that maximizes the profit of multiple airlines to recognize the fact that aircraft manufacturers sell their aircraft to multiple customers and seldom design aircraft customized to a single airline's operations. The route network characteristics of two simple airlines serve as the example problem for the initial studies. The resulting problem formulation is a mixed-integer nonlinear programming problem, which is typically difficult to solve. A sequential decomposition strategy is applied as a solution methodology by segregating the allocation (integer programming) and aircraft design (non-linear programming) subspaces. After solving a simple problem considering two airlines, the decomposition approach is then applied to two larger airline route networks representing actual airline operations in the year 2005. The decomposition strategy serves as a promising technique for future detailed analyses. Results from the profit maximization studies favor a smaller aircraft in terms of passenger capacity due to its higher yield generation capability on shorter routes while results from the cost minimization studies favor a larger aircraft due to its lower direct operating cost per seat mile.
Multidisciplinary System Reliability Analysis
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)
2001-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. To this end, the mechanical equivalence between system behavior models in different disciplines is investigated. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer and fluid flow disciplines.
Multi-Disciplinary System Reliability Analysis
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Han, Song
1997-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. To this end, the mechanical equivalence between system behavior models in different disciplines is investigated. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer and fluid flow disciplines.
Numerical characteristics of quantum computer simulation
NASA Astrophysics Data System (ADS)
Chernyavskiy, A.; Khamitov, K.; Teplov, A.; Voevodin, V.; Voevodin, Vl.
2016-12-01
The simulation of quantum circuits is critically important for the implementation of quantum information technologies. The main difficulty of such modeling is the exponential growth of dimensionality; thus the use of modern high-performance parallel computing is essential. As is well known, an arbitrary quantum computation in the circuit model can be performed using only single- and two-qubit gates, and we analyze the computational structure and properties of the simulation of such gates. The unique properties of quantum systems translate directly into the computational properties of the considered algorithms: quantum parallelism makes the simulation of quantum gates highly parallel, while quantum entanglement leads to a problem of computational locality during simulation. We use the methodology of the AlgoWiki project (algowiki-project.org) to analyze the algorithm. This methodology consists of theoretical (sequential and parallel complexity, macro structure, and visual informational graph) and experimental (locality and memory access, scalability, and more specific dynamic characteristics) parts. The experimental part was carried out on the petascale Lomonosov supercomputer (Moscow State University, Russia). We show that the simulation of quantum gates is a good basis for researching and testing development methods for data-intensive parallel software, and that the considered analysis methodology can be successfully used to improve algorithms in quantum information science.
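To make the "highly parallel but nonlocal" observation concrete: applying a single-qubit gate updates 2^(n-1) independent amplitude pairs, but the two members of each pair sit 2^target entries apart in memory. A minimal state-vector sketch (assuming little-endian qubit ordering) follows; it is illustrative only and not the paper's actual simulator.

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target):
    """Apply a 2x2 unitary to qubit `target` of a state vector (little-endian).

    Each of the 2**(n-1) amplitude pairs is updated independently (quantum
    parallelism), but the pair members are separated by a stride of
    2**target, which is the memory-locality issue discussed above.
    """
    stride = 1 << target
    psi = state.reshape(-1, 2 * stride)       # view; writes update `state`
    a = psi[:, :stride].copy()                # amplitudes with target bit 0
    b = psi[:, stride:].copy()                # amplitudes with target bit 1
    psi[:, :stride] = gate[0, 0] * a + gate[0, 1] * b
    psi[:, stride:] = gate[1, 0] * a + gate[1, 1] * b
    return state

# Hadamard on qubit 2 of a 3-qubit register initialized to |000>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = np.zeros(8, dtype=complex)
psi[0] = 1.0
psi = apply_single_qubit_gate(psi, H, target=2)
print(np.round(psi, 3))   # equal weight on |000> and |100> (indices 0 and 4)
```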
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Yidong; Andrs, David; Martineau, Richard Charles
This document presents the theoretical background for a hybrid finite-element / finite-volume fluid flow solver, namely BIGHORN, based on the Multiphysics Object Oriented Simulation Environment (MOOSE) computational framework developed at the Idaho National Laboratory (INL). An overview of the numerical methods used in BIGHORN is given, followed by a presentation of the formulation details. The document begins with the governing equations for compressible fluid flow, with an outline of the requisite constitutive relations. A second-order finite volume method used for solving compressible fluid flow problems is presented next. A Pressure-Corrected Implicit Continuous-fluid Eulerian (PCICE) formulation for time integration is also presented. The multi-fluid formulation is still under development, but BIGHORN has been designed to handle multi-fluid problems. Due to the flexibility of the underlying MOOSE framework, BIGHORN is quite extensible and can accommodate both multi-species and multi-phase formulations. This document also presents a suite of verification and validation benchmark test problems for BIGHORN. The intent of this suite is to provide baseline comparison data demonstrating the performance of the BIGHORN solution methods on problems that vary in complexity from laminar to turbulent flows. Wherever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and to suggest best practices when using BIGHORN.
Computational fluid dynamics combustion analysis evaluation
NASA Technical Reports Server (NTRS)
Kim, Y. M.; Shang, H. M.; Chen, C. P.; Ziebarth, J. P.
1992-01-01
This study involves the development of numerical modelling for spray combustion. The modelling efforts are mainly motivated by the need to improve the computational efficiency of the stochastic particle tracking method and to incorporate physical submodels of turbulence, combustion, vaporization, and dense spray effects. The present mathematical formulation and numerical methodologies can be cast in any time-marching pressure correction methodology (PCM), such as the FDNS code and the MAST code. A sequence of validation cases involving steady burning sprays and transient evaporating sprays is included.
Hybrid perturbation methods based on statistical time series models
NASA Astrophysics Data System (ADS)
San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario
2016-04-01
In this work we present a new methodology for orbit propagation, hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies, because, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, and because mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the missing dynamics in the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators, formed by combining three different orders of approximation of an analytical theory with a statistical time series model, and analyse their capability to capture the effect produced by the flattening of the Earth. The three analytical components are the integration of the Kepler problem, a first-order analytical theory and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
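For illustration, a hand-rolled additive Holt-Winters smoother of the kind named above is sketched below, applied to a made-up residual series standing in for the difference between a precise reference trajectory and a cheap analytical theory. The smoothing constants and the residual signal are arbitrary; an operational hybrid propagator would fit them to real ephemeris data.

```python
import numpy as np

def holt_winters_additive(x, period, alpha=0.3, beta=0.05, gamma=0.2, horizon=1):
    """Additive Holt-Winters forecast of a (quasi-)periodic residual series.

    Hybrid-propagator idea (sketch): x holds the differences between a
    reference orbit and the analytical theory; the forecast of x is added
    back to the analytical prediction to recover the missing dynamics.
    """
    level = x[:period].mean()
    trend = 0.0
    season = list(x[:period] - level)
    for t in range(period, len(x)):
        s = season[t - period]
        new_level = alpha * (x[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
        season.append(gamma * (x[t] - level) + (1 - gamma) * s)
    # h-step-ahead forecasts reuse the most recent seasonal estimates.
    return [level + h * trend + season[len(x) - period + (h - 1) % period]
            for h in range(1, horizon + 1)]

# Toy residual: slow drift plus a once-per-revolution oscillation (J2-like signature).
t = np.arange(300)
resid = 1e-3 * t + 5e-2 * np.sin(2 * np.pi * t / 50)
forecast = holt_winters_additive(resid, period=50, horizon=10)
```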
Implicitly solving phase appearance and disappearance problems using two-fluid six-equation model
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-01-25
The phase appearance and disappearance issue presents serious numerical challenges in two-phase flow simulations using the two-fluid six-equation model. Numerical challenges arise from the singular equation system when one phase is absent, as well as from the discontinuity in the solution space when one phase appears or disappears. In this work, a high-resolution spatial discretization scheme on staggered grids and fully implicit methods were applied for the simulation of two-phase flow problems using the two-fluid six-equation model. A Jacobian-free Newton-Krylov (JFNK) method was used to solve the discretized nonlinear problem. An improved numerical treatment was proposed and proved to be effective in handling the numerical challenges. The treatment scheme is conceptually simple, easy to implement, and does not require explicit truncations on solutions, which is essential to conserve mass and energy. Various types of phase appearance and disappearance problems relevant to thermal-hydraulics analysis have been investigated, including a sedimentation problem, an oscillating manometer problem, a non-condensable gas injection problem, a single-phase flow with heat addition problem and a subcooled flow boiling problem. Successful simulations of these problems demonstrate the capability and robustness of the proposed numerical methods and treatments. As a result, the volume fraction of the absent phase can be calculated effectively as zero.
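The JFNK idea named above can be sketched compactly: Newton's method in which each Krylov iteration approximates the Jacobian-vector product by a finite difference of the residual, so no Jacobian matrix is ever assembled. The two-equation toy residual below merely stands in for the discretized two-fluid system.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_solve(F, u0, newton_tol=1e-8, max_newton=20, eps=1e-7):
    """Jacobian-free Newton-Krylov sketch.

    GMRES sees the Jacobian only through the matrix-free product
    J v ~ (F(u + eps*v) - F(u)) / eps, so the (possibly singular or
    nearly singular) Jacobian is never formed explicitly.
    """
    u = np.asarray(u0, dtype=float)
    for _ in range(max_newton):
        Fu = F(u)
        if np.linalg.norm(Fu) < newton_tol:
            break
        def jv(v):
            return (F(u + eps * v) - Fu) / eps
        J = LinearOperator((u.size, u.size), matvec=jv)
        du, _ = gmres(J, -Fu, atol=1e-10)
        u = u + du
    return u

# Toy nonlinear system standing in for the discretized two-fluid residual;
# the exact solution is (1, 2).
def F(u):
    return np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])

print(jfnk_solve(F, np.array([1.0, 1.0])))
```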
Eco-analytical Methodology in Environmental Problems Monitoring
NASA Astrophysics Data System (ADS)
Agienko, M. I.; Bondareva, E. P.; Chistyakova, G. V.; Zhironkina, O. V.; Kalinina, O. I.
2017-01-01
Among the problems common to all mankind, whose solutions influence the prospects of civilization, the monitoring of the ecological situation occupies a very important place. Solving this problem requires a specific methodology based on eco-analytical comprehension of global issues. Eco-analytical methodology should help in searching for the optimum balance between environmental problems and accelerating scientific and technical progress. The focus of governments, corporations, scientists and nations on the production and consumption of material goods causes great damage to the environment. As a result, the activity of environmentalists develops quite spontaneously, as a complement to productive activities. Therefore, the challenge posed by environmental problems to science is the formation of geo-analytical reasoning and the monitoring of global problems common to the whole of humanity. The aim is thus to find an optimal trajectory of industrial development that prevents irreversible damage to the biosphere that could halt the progress of civilization.
Discussion of DNS: Past, Present, and Future
NASA Technical Reports Server (NTRS)
Joslin, Ronald D.
1997-01-01
This paper covers the review, status, and projected future of direct numerical simulation (DNS) methodology relative to the state-of-the-art in computer technology, numerical methods, and the trends in fundamental research programs.
Białk-Bielińska, Anna; Kumirska, Jolanta; Borecka, Marta; Caban, Magda; Paszkiewicz, Monika; Pazdro, Ksenia; Stepnowski, Piotr
2016-03-20
Recent developments and improvements in advanced instruments and analytical methodologies have made the detection of pharmaceuticals at low concentration levels in different environmental matrices possible. As a result of these advances, over the last 15 years residues of these compounds and their metabolites have been detected in different environmental compartments, and pharmaceuticals have become recognized as so-called 'emerging' contaminants. To date, many papers have been published presenting the development of analytical methodologies for the determination of pharmaceuticals in aqueous and solid environmental samples. Many papers have also been published on the application of the new methodologies, mainly to the assessment of the environmental fate of pharmaceuticals. Although impressive improvements have undoubtedly been made, numerous methodological challenges must still be overcome in order to fully understand the behavior of these chemicals in the environment. The aim of this paper, therefore, is to present a review of selected recent improvements and challenges in the determination of pharmaceuticals in environmental samples. Special attention is paid to the strategies used and the current challenges (also in terms of Green Analytical Chemistry) in the analysis of these chemicals in soils, marine environments and drinking waters. There is a particular focus on the applicability of modern sorbents such as carbon nanotubes (CNTs) in sample preparation techniques, to overcome some of the problems that exist in the analysis of pharmaceuticals in different environmental samples.
Variational data assimilation system "INM RAS - Black Sea"
NASA Astrophysics Data System (ADS)
Parmuzin, Eugene; Agoshkov, Valery; Assovskiy, Maksim; Giniatulin, Sergey; Zakharova, Natalia; Kuimov, Grigory; Fomin, Vladimir
2013-04-01
Development of Informational-Computational Systems (ICS) for data assimilation procedures is a multidisciplinary problem. To study and solve such problems one needs to apply modern results and recent developments from different disciplines: mathematical modeling; the theory of adjoint equations and optimal control; inverse problems; numerical methods theory; numerical algebra and scientific computing. These problems are studied at the Institute of Numerical Mathematics of the Russian Academy of Sciences (INM RAS) in ICS for personal computers (PC). Special problems and questions arise while effective ICS versions for PC are being developed; they can be addressed by applying modern methods of numerical mathematics and by solving the parallelization problem using OpenMP technology and special linear algebra packages. This work presents results on the development of the PC-based ICS "INM RAS - Black Sea". The following problems and questions are discussed: practical problems that can be studied with the ICS; parallelization problems and their solutions using OpenMP technology and the linear algebra packages used in "INM RAS - Black Sea"; and the interface of the ICS. The results of testing the ICS "INM RAS - Black Sea" are presented, and the efficiency of the technologies and methods applied is discussed. The work was supported by RFBR, grants No. 13-01-00753 and 13-05-00715, and by the Ministry of Education and Science of the Russian Federation, projects 8291 and 11.519.11.1005.
Dynamic Decision Making under Uncertainty and Partial Information
2017-01-30
In order to address these problems, we investigated efficient computational methodologies for dynamic decision making under uncertainty and partial information. In the course of this research, we (i) developed and studied efficient simulation-based methodologies for dynamic decision making under uncertainty and partial information; and (ii) studied the application of these decision-making models and methodologies to practical problems, such as those
SAMSAN- MODERN NUMERICAL METHODS FOR CLASSICAL SAMPLED SYSTEM ANALYSIS
NASA Technical Reports Server (NTRS)
Frisch, H. P.
1994-01-01
SAMSAN was developed to aid the control system analyst by providing a self-consistent set of computer algorithms that support large-order control system design and evaluation studies, with an emphasis on sampled system analysis. Control system analysts have access to a vast array of published algorithms to solve an equally large spectrum of controls-related computational problems, but the analyst usually spends considerable time and effort bringing these published algorithms to an integrated operational status and often finds them less general than desired. SAMSAN reduces this burden by providing a set of algorithms that have been well tested and documented, and that can be readily integrated for solving control system problems. Algorithm selection for SAMSAN has been biased toward numerical accuracy for large-order systems, with computational speed and portability considered important but not paramount. In addition to containing relevant subroutines from EISPACK for eigen-analysis and from LINPACK for the solution of linear systems and related problems, SAMSAN contains the following not so generally available capabilities: 1) reduction of a real non-symmetric matrix to block diagonal form via a real similarity transformation matrix which is well conditioned with respect to inversion, 2) solution of the generalized eigenvalue problem with balancing and grading, 3) computation of all zeros of the determinant of a matrix of polynomials, 4) matrix exponentiation and the evaluation of integrals involving the matrix exponential, with the option to first block diagonalize, 5) root locus and frequency response for single-variable transfer functions in the S, Z, and W domains, 6) several methods of computing zeros for linear systems, and 7) the ability to generate documentation "on demand". All matrix operations in the SAMSAN algorithms assume non-symmetric matrices with real double-precision elements. There is no fixed size limit on any matrix in any SAMSAN algorithm; however, it is generally agreed by experienced users, and in the numerical error analysis literature, that computation with non-symmetric matrices of order greater than about 200 should be avoided or treated with extreme care. SAMSAN supports the needs of application-oriented analysis by providing: 1) a methodology with unlimited growth potential, 2) a methodology to ensure that associated documentation is current and available "on demand", 3) a foundation of basic computational algorithms that most controls analysis procedures are based upon, 4) a set of check-out and evaluation programs which demonstrate usage of the algorithms on a series of problems structured to expose the limits of each algorithm's applicability, and 5) capabilities which support both a priori and a posteriori error analysis for the computational algorithms provided. The SAMSAN algorithms are coded in FORTRAN 77 for batch or interactive execution and have been implemented on a DEC VAX computer under VMS 4.7. An effort was made to assure that the FORTRAN source code was portable, so SAMSAN may be adaptable to other machine environments. The documentation is included on the distribution tape or can be purchased separately. SAMSAN version 2.0 was developed in 1982 and updated to version 3.0 in 1988.
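Capability 4 in the list above (matrix exponentiation together with integrals of the matrix exponential) can be realized with the augmented-matrix construction often attributed to Van Loan. The sketch below uses SciPy rather than SAMSAN's FORTRAN internals, and the sampled-system discretization at the end uses made-up matrices; SAMSAN's own algorithm may differ (it can first block-diagonalize the matrix).

```python
import numpy as np
from scipy.linalg import expm

def expm_and_integral(A, t):
    """Compute e^{A t} and int_0^t e^{A s} ds in one call via the
    augmented-matrix (Van Loan) construction:

        expm([[A, I], [0, 0]] * t) = [[e^{A t}, int_0^t e^{A s} ds],
                                      [0,       I                 ]]
    """
    n = A.shape[0]
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n] = A
    M[:n, n:] = np.eye(n)
    E = expm(M * t)
    return E[:n, :n], E[:n, n:]

# Zero-order-hold discretization of a sampled-data system x' = A x + B u,
# a typical task in the sampled-system analyses SAMSAN targets.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
T = 0.1                                    # sample period
Phi, I_int = expm_and_integral(A, T)
Gamma = I_int @ B                          # discrete-time input matrix
```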
Numerical Computation of Sensitivities and the Adjoint Approach
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael
1997-01-01
We discuss the numerical computation of sensitivities via the adjoint approach in optimization problems governed by differential equations. We focus on the adjoint problem in its weak form and show how one can avoid some of the difficulties of the adjoint approach, such as deriving suitable boundary conditions for the adjoint equation. We discuss the convergence of numerical approximations of the costate computed via the weak form of the adjoint problem and show its significance for the discrete adjoint problem.
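For a reader new to the adjoint approach, the discrete mechanics can be shown in a few lines: for a state equation A(x)u = b and objective J = g^T u, a single adjoint solve A^T lambda = g yields the sensitivity dJ/dx = -lambda^T (dA/dx) u for any number of design parameters. The matrices below are invented purely for the demonstration and are not tied to the paper's PDE setting.

```python
import numpy as np

n = 4
x = 2.0                                      # single design parameter
def assemble(xv):
    # Hypothetical parametrized system matrix A(x).
    return np.diag(np.full(n, 2.0 + xv)) + np.diag(-np.ones(n - 1), 1)

A = assemble(x)
dA_dx = np.eye(n)                            # dA/dx for this parametrization
b = np.ones(n)
g = np.ones(n)                               # objective J(u) = g @ u

u = np.linalg.solve(A, b)                    # state solve
lam = np.linalg.solve(A.T, g)                # adjoint (costate) solve
dJ_adjoint = -lam @ (dA_dx @ u)              # adjoint sensitivity

h = 1e-6                                     # finite-difference verification
dJ_fd = (g @ np.linalg.solve(assemble(x + h), b) - g @ u) / h
print(dJ_adjoint, dJ_fd)                     # the two values should agree closely
```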
NASA Technical Reports Server (NTRS)
Oden, J. T.; Becker, E. B.; Lin, T. L.; Hsieh, K. T.
1984-01-01
The formulation and numerical analysis of several problems related to the behavior of pneumatic tires are considered. These problems include the general rolling contact problem of a rubber-like viscoelastic cylinder undergoing finite deformations and the finite deformation of cord-reinforced rubber composites. New finite element models are developed for these problems. Numerical results obtained for several representative cases are presented.
Tracking initially unresolved thrusting objects in 3D using a single stationary optical sensor
NASA Astrophysics Data System (ADS)
Lu, Qin; Bar-Shalom, Yaakov; Willett, Peter; Granström, Karl; Ben-Dov, R.; Milgrom, B.
2017-05-01
This paper considers the problem of estimating the 3D states of a salvo of thrusting/ballistic endo-atmospheric objects using 2D Cartesian measurements from the focal plane array (FPA) of a single fixed optical sensor. Since the initial separations in the FPA are smaller than the resolution of the sensor, this results in merged measurements in the FPA, compounding the usual false-alarm and missed-detection uncertainty. We present a two-step methodology. First, we assume a Wiener process acceleration (WPA) model for the motion of the images of the projectiles in the optical sensor's FPA. We model the merged measurements with increased variance, and thence employ a multi-Bernoulli (MB) filter using the 2D measurements in the FPA. Second, using the set of associated measurements for each confirmed MB track, we formulate a parameter estimation problem, whose maximum likelihood estimate can be obtained via numerical search and can be used for impact point prediction. Simulation results illustrate the performance of the proposed method.
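The Wiener process acceleration model referred to above has a standard per-axis discrete-time form (see, e.g., Bar-Shalom et al., Estimation with Applications to Tracking and Navigation). A sketch follows; the merged-measurement effect is represented only by an illustrative inflated measurement variance, not by the paper's actual model.

```python
import numpy as np

def wpa_model(T, q):
    """Per-axis Wiener-process-acceleration (nearly-constant-acceleration) model.

    State is [position, velocity, acceleration]; F is the state transition
    matrix over sample period T and Q the process-noise covariance with
    power spectral density q.
    """
    F = np.array([[1.0, T,   0.5 * T**2],
                  [0.0, 1.0, T],
                  [0.0, 0.0, 1.0]])
    Q = q * np.array([[T**5 / 20, T**4 / 8, T**3 / 6],
                      [T**4 / 8,  T**3 / 3, T**2 / 2],
                      [T**3 / 6,  T**2 / 2, T]])
    return F, Q

F, Q = wpa_model(T=0.02, q=1.0)            # hypothetical frame rate and PSD
H = np.array([[1.0, 0.0, 0.0]])            # FPA measures position only (per axis)
R_single = np.array([[1.0]])               # nominal pixel noise variance
R_merged = 4.0 * R_single                  # inflated variance for merged returns (illustrative)
```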
NASA Astrophysics Data System (ADS)
Seo, Jongmin; Schiavazzi, Daniele; Marsden, Alison
2017-11-01
Cardiovascular simulations are increasingly used in clinical decision making, surgical planning, and disease diagnostics. Patient-specific modeling and simulation typically proceeds through a pipeline from anatomic model construction using medical image data to blood flow simulation and analysis. To provide confidence intervals on simulation predictions, we use an uncertainty quantification (UQ) framework to analyze the effects of numerous uncertainties that stem from clinical data acquisition, modeling, material properties, and boundary condition selection. However, UQ poses a computational challenge requiring multiple evaluations of the Navier-Stokes equations in complex 3-D models. To achieve efficiency in UQ problems with many function evaluations, we implement and compare a range of iterative linear solver and preconditioning techniques in our flow solver. We then discuss applications to patient-specific cardiovascular simulation and how the problem/boundary condition formulation in the solver affects the selection of the most efficient linear solver. Finally, we discuss performance improvements in the context of uncertainty propagation. Support from the National Institutes of Health (R01 EB018302) is greatly appreciated.
NASA Astrophysics Data System (ADS)
Barkanov, E.; Eglītis, E.; Almeida, F.; Bowering, M. C.; Watson, G.
2013-07-01
The present investigation is devoted to the development of new optimal design concepts that exploit the full potential of advanced composite materials in the upper covers of aircraft lateral wings. A finite-element simulation of three-rib-bay laminated composite panels with T-stiffeners and a stiffener pitch of 200 mm is carried out using ANSYS to investigate the effect of rib attachment to stiffener webs on the performance of stiffened panels in terms of their buckling behavior and in relation to skin and stiffener lay-ups, stiffener height, and root width. Due to the large dimension of the numerical problems to be solved, an optimization methodology is developed employing the method of experimental design and the response surface technique. Minimum-weight optimization problems were solved for four load levels, taking into account manufacturing, repairability, and damage tolerance requirements. The optimal results were successfully verified using the ANSYS and ABAQUS shared-node models.
High-resolution coupled physics solvers for analysing fine-scale nuclear reactor design problems
Mahadevan, Vijay S.; Merzari, Elia; Tautges, Timothy; ...
2014-06-30
An integrated multi-physics simulation capability for the design and analysis of current and future nuclear reactor models is being investigated, to tightly couple neutron transport and thermal-hydraulics physics under the SHARP framework. Over several years, high-fidelity, validated mono-physics solvers with proven scalability on petascale architectures have been developed independently. Based on a unified component-based architecture, these existing codes can be coupled with a mesh-data backplane and a flexible coupling-strategy-based driver suite to produce a viable tool for analysts. The goal of the SHARP framework is to perform fully resolved coupled physics analysis of a reactor on heterogeneous geometry, in order to reduce the overall numerical uncertainty while leveraging available computational resources. Finally, the coupling methodology and software interfaces of the framework are presented, along with verification studies on two representative fast sodium-cooled reactor demonstration problems to prove the usability of the SHARP framework.
Manuel Stein's Five Decades of Structural Mechanics Contributions (1944-1988)
NASA Technical Reports Server (NTRS)
Mikulas, Martin M.; Card, Michael F.; Peterson, Jim P.; Starnes, James H., Jr.
1998-01-01
Manuel Stein went to work for NACA (National Advisory Committee for Aeronautics) in 1944 and left in 1988. His research contributions spanned five decades of extremely defining times for the aerospace industry. Problems arising from the analysis and design of efficient thin plate and shell aerospace structures have stimulated research over the past half century. The primary structural technology drivers during Dr. Stein's career included 1940's aluminum aircraft, 1950's jet aircraft, 1960's launch vehicles and advanced spacecraft, 1970's reusable launch vehicles and commercial aircraft, and 1980's composite aircraft. Dr. Stein's research was driven by these areas and he made lasting contributions for each. Dr. Stein's research can be characterized by a judicious mixture of physical insight into the problem, understanding of the basic mechanisms, mathematical modeling of the observed phenomena, and extraordinary analytical and numerical solution methodologies of the resulting mathematical models. This paper summarizes Dr. Stein's life and his contributions to the technical community.
Inverse problems in heterogeneous and fractured media using peridynamics
Turner, Daniel Z.; van Bloemen Waanders, Bart G.; Parks, Michael L.
2015-12-10
The following work presents an adjoint-based methodology for solving inverse problems in heterogeneous and fractured media using state-based peridynamics. We show that the inner product involving the peridynamic operators is self-adjoint. The proposed method is illustrated for several numerical examples with constant and spatially varying material parameters as well as in the context of fractures. We also present a framework for obtaining material parameters by integrating digital image correlation (DIC) with inverse analysis. This framework is demonstrated by evaluating the bulk and shear moduli for a sample of nuclear graphite using digital photographs taken during the experiment. The resulting measured values correspond well with other results reported in the literature. Lastly, we show that this framework can be used to determine the load state given observed measurements of a crack opening. Furthermore, this type of analysis has many applications in characterizing subsurface stress-state conditions given fracture patterns in cores of geologic material.
Situating the Debate on "Geometrical Algebra" within the Framework of Premodern Algebra.
Sialaros, Michalis; Christianidis, Jean
2016-06-01
Argument The aim of this paper is to employ the newly contextualized historiographical category of "premodern algebra" in order to revisit the arguably most controversial topic of the last decades in the field of Greek mathematics, namely the debate on "geometrical algebra." Within this framework, we shift focus from the discrepancy among the views expressed in the debate to some of the historiographical assumptions and methodological approaches that the opposing sides shared. Moreover, by using a series of propositions related to Elem. II.5 as a case study, we discuss Euclid's geometrical proofs, the so-called "semi-algebraic" alternative demonstrations attributed to Heron of Alexandria, as well as the solutions given by Diophantus, al-Sulamī, and al-Khwārizmī to the corresponding numerical problem. This comparative analysis offers a new reading of Heron's practice, highlights the significance of contextualizing "premodern algebra," and indicates that the origins of algebraic reasoning should be sought in the problem-solving practice, rather than in the theorem-proving tradition.
Adjoint-Based Methodology for Time-Dependent Optimization
NASA Technical Reports Server (NTRS)
Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.
2008-01-01
This paper presents a discrete adjoint method for a broad class of time-dependent optimization problems. The time-dependent adjoint equations are derived in terms of the discrete residual of an arbitrary finite volume scheme which approximates unsteady conservation law equations. Although only the 2-D unsteady Euler equations are considered in the present analysis, this time-dependent adjoint method is applicable to the 3-D unsteady Reynolds-averaged Navier-Stokes equations with minor modifications. The discrete adjoint operators involving the derivatives of the discrete residual and the cost functional with respect to the flow variables are computed using a complex-variable approach, which provides discrete consistency and drastically reduces the implementation and debugging cycle. The implementation of the time-dependent adjoint method is validated by comparing the sensitivity derivative with that obtained by forward mode differentiation. Our numerical results show that O(10) optimization iterations of the steepest descent method are needed to reduce the objective functional by 3-6 orders of magnitude for test problems considered.
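The complex-variable approach mentioned above is commonly realized as the complex-step derivative, which has no subtractive cancellation and so remains accurate for extremely small steps. A minimal sketch on a classic test function follows; it illustrates the general technique, not the paper's specific residual differentiation.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """Complex-step derivative: df/dx ~ Im(f(x + i h)) / h.

    Because no difference of nearby function values is taken, h can be made
    tiny and the result is accurate to machine precision -- the property
    that makes the complex-variable approach attractive for building
    discretely consistent adjoint operators.
    """
    return np.imag(f(x + 1j * h)) / h

# Classic smooth test function (Squire & Trapp); compare with an analytic check.
f = lambda x: np.exp(x) / np.sqrt(np.sin(x)**3 + np.cos(x)**3)
print(complex_step_derivative(f, 1.5))
```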
NASA Astrophysics Data System (ADS)
Lappa, Marcello; Drikakis, Dimitris; Kokkinakis, Ioannis
2017-03-01
This paper concerns the propagation of shock waves in an enclosure filled with dusty gas. The main motivation for this problem is to probe the effect on such dynamics of solid particles dispersed in the fluid medium. This subject, which has attracted so much attention over recent years given its important implications in the study of the structural stability of systems exposed to high-energy internal detonations, is approached here in the framework of a hybrid numerical two-way coupled Eulerian-Lagrangian methodology. In particular, insights are sought by considering a relatively simple archetypal setting corresponding to a shock wave originating from a small spherical region initialized on the basis of available analytic solutions. The response of the system is explored numerically with respect to several parameters, including the blast intensity (via the related value of the initial shock Mach number), the solid mass fraction (mass load), and the particle size (Stokes number). Results are presented in terms of pressure-load diagrams. Beyond practical applications, it is shown that a kaleidoscope of fascinating patterns is produced by the "triadic" relationships among multiple shock reflection events and particle-fluid and particle-wall interaction dynamics. These would be of great interest to researchers and scientists interested in fundamental problems relating to the general theory of pattern formation in complex nonlinear multiphase systems.
Analytical-numerical solution of a nonlinear integrodifferential equation in econometrics
NASA Astrophysics Data System (ADS)
Kakhktsyan, V. M.; Khachatryan, A. Kh.
2013-07-01
A mixed problem for a nonlinear integrodifferential equation arising in econometrics is considered. An analytical-numerical method is proposed for solving the problem. Some numerical results are presented.
Enviroplan—a summary methodology for comprehensive environmental planning and design
Robert Allen Jr.; George Nez; Fred Nicholson; Larry Sutphin
1979-01-01
This paper will discuss a comprehensive environmental assessment methodology that includes a numerical method for visual management and analysis. This methodology employs resource and human activity units as a means to produce a visual form unit which is the fundamental unit of the perceptual environment. The resource unit is based on the ecosystem as the fundamental...
Moving research beyond the spanking debate.
MacMillan, Harriet L; Mikton, Christopher R
2017-09-01
Despite numerous studies identifying a broad range of harms associated with the use of spanking and other types of physical punishment, debate continues about its use as a form of discipline. In this commentary, we recommend four strategies to move the field forward and beyond the spanking debate: 1) use of methodological approaches that allow for stronger causal inference; 2) consideration of human rights issues; 3) a focus on understanding the causes of spanking and the reasons for its decline in certain countries; and 4) more emphasis on evidence-based approaches to changing social norms to reject spanking as a form of discipline. Physical punishment needs to be recognized as an important public health problem.
NASA Astrophysics Data System (ADS)
Li, Yi-Ling; Liu, Zhen-Bo; Ma, Qing-Yu; Guo, Xia-Sheng; Zhang, Dong
2010-08-01
Magnetoacoustic tomography with magnetic induction (MAT-MI) has shown potential for imaging the electrical impedance of biological tissues. We present a novel methodology for solving the inverse problem of 2-D Lorentz force distribution reconstruction, based on straight-line acoustic propagation theory. The magnetic induction and acoustic generation, as well as the acoustic detection, are described theoretically by explicit formulae and validated by numerical simulations for a multilayered cylindrical phantom model. The reconstructed 2-D Lorentz force distribution reveals not only the conductivity configuration in terms of shape and size but also the amplitude of the Lorentz force in the examined layer. This study provides a basis for further work on conductivity distribution reconstruction with MAT-MI in medical imaging.
Split Node and Stress Glut Methods for Dynamic Rupture Simulations in Finite Elements.
NASA Astrophysics Data System (ADS)
Ramirez-Guzman, L.; Bielak, J.
2008-12-01
I present two numerical techniques for solving the dynamic rupture problem: a revisited and modified split-node approach, and a new stress-glut-type method. Both algorithms are implemented using an iso/subparametric FEM solver. For the first, I discuss the formulation and perform a convergence analysis for different orders of approximation in the acoustic case. For the second, I describe the algorithm as well as the assumptions made. The key to the new technique is an accurate representation of the traction; I therefore devote part of the discussion to analyzing the tractions for a simple example. The sensitivity of the method is tested by comparison against split-node solutions.
Adaptive PID formation control of nonholonomic robots without leader's velocity information.
Shen, Dongbin; Sun, Weijie; Sun, Zhendong
2014-03-01
This paper proposes an adaptive proportional-integral-derivative (PID) algorithm to solve a formation control problem in the leader-follower framework, where the leader robot's velocities are unknown to the follower robots. The main idea is first to design a proper ideal control law for the formation system to obtain the required performance, and then to use the adaptive PID methodology to approach the ideal controller. As a result, formation is achieved with much-enhanced robustness of the formation performance. The stability of the closed-loop system is theoretically proved by the Lyapunov method. Both numerical simulations and physical vehicle experiments are presented to verify the effectiveness of the proposed adaptive PID algorithm.
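The paper's adaptive law itself is not reproduced in the abstract. As background, a plain discrete PID loop of the kind being adapted is sketched below on a double-integrator follower; all gains and dynamics are invented for the example (gains chosen to satisfy the Routh stability condition kd*kp > ki for this plant).

```python
class PID:
    """Discrete PID controller. The adaptive scheme described above tunes
    such gains online to approach an ideal formation controller; fixed
    gains are used here purely to show the structure."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Follower regulating its separation toward a leader-relative setpoint.
pid = PID(kp=2.0, ki=0.1, kd=1.0, dt=0.01)
sep, vel, target = 1.5, 0.0, 1.0
for _ in range(2000):
    u = pid.update(target - sep)     # commanded acceleration
    vel += u * pid.dt
    sep += vel * pid.dt
print(round(sep, 3))                 # settles near the 1.0 m setpoint
```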
Markowitz portfolio optimization model employing fuzzy measure
NASA Astrophysics Data System (ADS)
Ramli, Suhailywati; Jaaman, Saiful Hafizah
2017-04-01
Markowitz in 1952 introduced the mean-variance methodology for portfolio selection problems. His pioneering research has shaped the portfolio risk-return model and become one of the most important research fields in modern finance. This paper extends the classical Markowitz mean-variance portfolio selection model by applying fuzzy measures to determine risk and return. We apply the original mean-variance model as a benchmark and compare it against fuzzy mean-variance models in which the returns are modeled by specific types of fuzzy numbers. The models with the fuzzy approach give better performance than the classical mean-variance approach. Numerical examples employing Malaysian share market data are included to illustrate the models.
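For reference, the crisp benchmark model named above has a closed-form solution via its KKT system when only the budget and target-return equality constraints are imposed (no short-sale restrictions). The sketch below uses invented return and covariance data; in the paper's fuzzy variants, the expected returns and risk would instead be derived from fuzzy-number representations of the share data.

```python
import numpy as np

# Classical Markowitz sketch: minimize w' Sigma w subject to w'1 = 1 and
# w'mu = r_target, solved via the KKT linear system.
mu = np.array([0.08, 0.12, 0.10])           # hypothetical expected returns
Sigma = np.array([[0.10, 0.02, 0.04],
                  [0.02, 0.08, 0.03],
                  [0.04, 0.03, 0.09]])       # hypothetical covariance matrix
r_target = 0.10

n = mu.size
ones = np.ones(n)
K = np.zeros((n + 2, n + 2))
K[:n, :n] = 2 * Sigma                        # gradient of the quadratic objective
K[:n, n], K[n, :n] = mu, mu                  # target-return constraint
K[:n, n + 1], K[n + 1, :n] = ones, ones      # budget constraint
rhs = np.concatenate([np.zeros(n), [r_target, 1.0]])

w = np.linalg.solve(K, rhs)[:n]              # optimal portfolio weights
risk = np.sqrt(w @ Sigma @ w)                # portfolio standard deviation
print(np.round(w, 3), round(risk, 4))
```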
NASA Astrophysics Data System (ADS)
Acri, Antonio; Offner, Guenter; Nijman, Eugene; Rejlek, Jan
2016-10-01
Noise legislation and increasing customer demands drive the Noise, Vibration and Harshness (NVH) development of modern commercial vehicles. In order to meet the stringent legislative requirements for vehicle noise emission, exact knowledge of all vehicle noise sources and their acoustic behavior is required. Transfer path analysis (TPA) is a fairly well-established technique for estimating and ranking individual low-frequency noise or vibration contributions via the different transmission paths. Transmission paths from different sources to target points of interest, and their contributions, can be analyzed by applying TPA. This technique is applied to test measurements, which are only available on prototypes at the end of the design process. In order to overcome this limitation of TPA, a numerical transfer path analysis methodology based on the substructuring of a multibody system is proposed in this paper. Being based on numerical simulation, the methodology can be performed from the first steps of the design process. Its main target is to obtain information on the noise-source contributions of a dynamic system when multiple forces act on the system simultaneously; the contributions of these forces are investigated with particular focus on distributed or moving forces. In this paper, the mathematical basics of the proposed methodology and its advantages in comparison with TPA are discussed. A dynamic system is then investigated with a combination of two methods: being based on the dynamic substructuring (DS) of the investigated model, the proposed methodology requires the contact forces at the interfaces, which are computed with a flexible multi-body dynamics (FMBD) simulation, and the structure-borne noise paths are then computed with the wave based method (WBM). As an example application, a 4-cylinder engine is investigated and the proposed methodology is applied to the engine block. The aim is to obtain accurate and clear relationships between the excitations and responses of the simulated dynamic system, analyzing the noise and vibration sources inside a car engine and showing the main advantages of a numerical methodology.
Modeling for free surface flow with phase change and its application to fusion technology
NASA Astrophysics Data System (ADS)
Luo, Xiaoyong
The development of predictive capabilities for free surface flow with phase change is essential to evaluate liquid wall protection schemes for various fusion chambers. With inertial fusion energy (IFE) concepts such as HYLIFE-II, rapid condensation onto cold liquid surfaces is required when liquid curtains are used to protect reactor walls from blasts and intense neutron radiation. With magnetic fusion energy (MFE) concepts, droplets are injected onto the free surface of the liquid to minimize evaporation by minimizing the surface temperature. This dissertation presents a numerical methodology for free surface flow with phase change to help resolve feasibility issues encountered in these fusion engineering fields, especially spray droplet condensation efficiency in IFE and droplet heat transfer enhancement on free-surface liquid divertors in MFE. The numerical methodology is developed within the framework of incompressible flow with a phase change model. A new second-order projection method is presented in conjunction with approximate-factorization (AF) techniques for the incompressible Navier-Stokes equations. A sub-cell concept is introduced, and the Ghost Fluid Method is extended in a modified mass transfer model to accurately calculate the mass transfer across the interface. The Crank-Nicolson method is used for the diffusion term to eliminate the viscous stability restriction, and the third-order ENO scheme is used for the convective term to guarantee the accuracy of the method. The level set method is used to accurately capture the free surface of the flow and the deformation of the droplets. This numerical investigation identifies the physics characterizing transient heat and mass transfer of droplets and the free surface flow. The results show that the numerical methodology is quite successful in modeling free surfaces with phase change even when severe deformations such as breaking and merging occur, and its versatility shows that it can easily handle the complex physical conditions that occur in fusion science and engineering.
Prediction of invasion from the early stage of an epidemic
Pérez-Reche, Francisco J.; Neri, Franco M.; Taraskin, Sergei N.; Gilligan, Christopher A.
2012-01-01
Predictability of undesired events is a question of great interest in many scientific disciplines, including seismology, economics and epidemiology. Here, we focus on the predictability of invasion of a broad class of epidemics caused by diseases that lead to permanent immunity of infected hosts after recovery or death. We approach the problem from the perspective of the science of complexity by proposing and testing several strategies for the estimation of important characteristics of epidemics, such as the probability of invasion. Our results suggest that parsimonious approximate methodologies may lead to the most reliable and robust predictions. The proposed methodologies are first applied to the analysis of experimentally observed epidemics: invasion of the fungal plant pathogen Rhizoctonia solani in replicated host microcosms. We then consider numerical experiments of the susceptible–infected–removed model to investigate the performance of the proposed methods in further detail. The suggested framework can be used as a valuable tool for quick assessment of epidemic threat at the stage when epidemics only start developing. Moreover, our work amplifies the significance of small-scale and finite-time microcosm realizations of epidemics, revealing their predictive power. PMID:22513723
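A minimal Monte Carlo sketch of the kind of susceptible-infected-removed (SIR) numerical experiment mentioned above, estimating an invasion probability with a simple discrete-time binomial-chain approximation; the population size, rates, and invasion threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def invasion_probability(beta, gamma, n_hosts=1000, i0=1,
                         threshold=0.1, n_runs=2000, dt=0.1):
    """Estimate the probability that an SIR epidemic 'invades', i.e.
    ultimately infects more than threshold * n_hosts of the population."""
    invasions = 0
    for _ in range(n_runs):
        s, i, r = n_hosts - i0, i0, 0
        while i > 0:
            p_inf = 1.0 - np.exp(-beta * i / n_hosts * dt)
            new_i = rng.binomial(s, p_inf)          # new infections
            new_r = rng.binomial(i, 1.0 - np.exp(-gamma * dt))
            s, i, r = s - new_i, i + new_i - new_r, r + new_r
        if r > threshold * n_hosts:
            invasions += 1
    return invasions / n_runs

# Above the epidemic threshold (R0 = beta/gamma = 2), invasion is likely.
print(invasion_probability(beta=0.4, gamma=0.2))
```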
Coupled variational formulations of linear elasticity and the DPG methodology
NASA Astrophysics Data System (ADS)
Fuentes, Federico; Keith, Brendan; Demkowicz, Leszek; Le Tallec, Patrick
2017-11-01
This article presents a general approach akin to domain-decomposition methods to solve a single linear PDE, but where each subdomain of a partitioned domain is associated to a distinct variational formulation coming from a mutually well-posed family of broken variational formulations of the original PDE. It can be exploited to solve challenging problems in a variety of physical scenarios where stability or a particular mode of convergence is desired in a part of the domain. The linear elasticity equations are solved in this work, but the approach can be applied to other equations as well. The broken variational formulations, which are essentially extensions of more standard formulations, are characterized by the presence of mesh-dependent broken test spaces and interface trial variables at the boundaries of the elements of the mesh. This allows necessary information to be naturally transmitted between adjacent subdomains, resulting in coupled variational formulations which are then proved to be globally well-posed. They are solved numerically using the DPG methodology, which is especially crafted to produce stable discretizations of broken formulations. Finally, expected convergence rates are verified in two different and illustrative examples.
NASA Astrophysics Data System (ADS)
Guerrero Prado, Patricio; Nguyen, Mai K.; Dumas, Laurent; Cohen, Serge X.
2017-01-01
Characterization and interpretation of flat ancient material objects, such as those found in archaeology, paleoenvironments, paleontology, and cultural heritage, have remained a challenging task to perform by means of conventional x-ray tomography methods due to their anisotropic morphology and flattened geometry. To overcome the limitations of the mentioned methodologies for such samples, an imaging modality based on Compton scattering is proposed in this work. Classical x-ray tomography treats Compton scattering data as noise in the image formation process, while in Compton scattering tomography the conditions are set such that Compton data become the principal image contrasting agent. Under these conditions, we are able, first, to avoid relative rotations between the sample and the imaging setup, and second, to obtain three-dimensional data even when the object is supported by a dense material by exploiting backscattered photons. Mathematically this problem is addressed by means of a conical Radon transform and its inversion. The image formation process and object reconstruction model are presented. The feasibility of this methodology is supported by numerical simulations.
Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy
NASA Astrophysics Data System (ADS)
Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.
2011-08-01
The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.
A study of different modeling choices for simulating platelets within the immersed boundary method
Shankar, Varun; Wright, Grady B.; Fogelson, Aaron L.; Kirby, Robert M.
2012-01-01
The Immersed Boundary (IB) method is a widely-used numerical methodology for the simulation of fluid–structure interaction problems. The IB method utilizes an Eulerian discretization for the fluid equations of motion while maintaining a Lagrangian representation of structural objects. Operators are defined for transmitting information (forces and velocities) between these two representations. Most IB simulations represent their structures with piecewise linear approximations and utilize Hookean spring models to approximate structural forces. Our specific motivation is the modeling of platelets in hemodynamic flows. In this paper, we study two alternative representations – radial basis functions (RBFs) and Fourier-based (trigonometric polynomials and spherical harmonics) representations – for the modeling of platelets in two and three dimensions within the IB framework, and compare our results with the traditional piecewise linear approximation methodology. For different representative shapes, we examine the geometric modeling errors (position and normal vectors), force computation errors, and computational cost and provide an engineering trade-off strategy for when and why one might select to employ these different representations. PMID:23585704
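To illustrate the Fourier-based representation discussed above, here is a minimal sketch of a closed 2D "platelet" curve built from truncated trigonometric series, with positions and outward normals computed analytically from the derivatives; the coefficients are illustrative, and the IB force computation itself is omitted.

```python
import numpy as np

def fourier_shape(coeffs_x, coeffs_y, n_pts=128):
    """Closed 2D curve from truncated Fourier series; returns points and
    outward unit normals obtained analytically from the derivatives."""
    t = np.linspace(0.0, 2*np.pi, n_pts, endpoint=False)
    x = np.zeros_like(t); y = np.zeros_like(t)
    dx = np.zeros_like(t); dy = np.zeros_like(t)
    for k, (a, b) in enumerate(coeffs_x):
        x += a*np.cos(k*t) + b*np.sin(k*t)
        dx += -k*a*np.sin(k*t) + k*b*np.cos(k*t)
    for k, (a, b) in enumerate(coeffs_y):
        y += a*np.cos(k*t) + b*np.sin(k*t)
        dy += -k*a*np.sin(k*t) + k*b*np.cos(k*t)
    normals = np.column_stack((dy, -dx)) / np.hypot(dx, dy)[:, None]
    return np.column_stack((x, y)), normals

# Ellipse-like platelet cross-section: x = 2 cos t, y = 0.5 sin t.
pts, nrm = fourier_shape([(0, 0), (2.0, 0.0)], [(0, 0), (0.0, 0.5)])
```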
Numerical Boundary Conditions for Computational Aeroacoustics Benchmark Problems
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Kurbatskii, Konstantin A.; Fang, Jun
1997-01-01
Category 1, Problems 1 and 2, Category 2, Problem 2, and Category 3, Problem 2 are solved computationally using the Dispersion-Relation-Preserving (DRP) scheme. All these problems are governed by the linearized Euler equations. The resolution requirements of the DRP scheme for maintaining low numerical dispersion and dissipation, as well as accurate wave speeds, in solving the linearized Euler equations are now well understood. As long as 8 or more mesh points per wavelength are employed in the numerical computation, high quality results are assured. For the first three categories of benchmark problems, therefore, the real challenge is to develop high quality numerical boundary conditions. For Category 1, Problems 1 and 2, these are the curved-wall boundary conditions. For Category 2, Problem 2, it is the internal radiation boundary condition inside the duct. For Category 3, Problem 2, they are the inflow and outflow boundary conditions upstream and downstream of the blade row. These are the foci of the present investigation. Special nonhomogeneous radiation boundary conditions that generate the incoming disturbances and, at the same time, allow the outgoing reflected or scattered acoustic disturbances to leave the computation domain without significant reflection are developed. Numerical results based on these boundary conditions are provided.
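To make the points-per-wavelength rule concrete, the sketch below evaluates the modified wavenumber of the standard 7-point, 6th-order central difference and its relative phase error at 8 points per wavelength. Note the DRP scheme instead optimizes its stencil coefficients in wavenumber space; this is an illustration of the diagnostic, not of the DRP coefficients themselves.

```python
import numpy as np

# Modified wavenumber of the 7-point, 6th-order central first derivative:
# kbar*h = 2*(a1 sin(kh) + a2 sin(2kh) + a3 sin(3kh)), to compare with kh.
a1, a2, a3 = 3/4, -3/20, 1/60

def kbar_h(kh):
    return 2.0 * (a1*np.sin(kh) + a2*np.sin(2*kh) + a3*np.sin(3*kh))

# Points per wavelength PPW relates to kh via kh = 2*pi / PPW.
kh8 = 2.0 * np.pi / 8.0
err = abs(kbar_h(kh8) - kh8) / kh8
print(f"relative phase-speed error at 8 points/wavelength: {err:.2e}")
```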
Setting numerical population objectives for priority landbird species
Kenneth V. Rosenberg; Peter J. Blancher
2005-01-01
Following the example of the North American Waterfowl Management Plan, deriving numerical population estimates and conservation targets for priority landbird species is considered a desirable, if not necessary, element of the Partners in Flight planning process. Methodology for deriving such estimates remains in its infancy, however, and the use of numerical population...
Structure-preserving spectral element method in attenuating seismic wave modeling
NASA Astrophysics Data System (ADS)
Cai, Wenjun; Zhang, Huai
2016-04-01
This work describes the extension of the conformal symplectic method to solve the damped acoustic wave equation and the elastic wave equations in the framework of the spectral element method. The conformal symplectic method is a variation of conventional symplectic methods for treating non-conservative time evolution problems, with superior behavior in long-time stability and dissipation preservation. To construct the conformal symplectic method, we first reformulate the damped acoustic wave equation and the elastic wave equations in their equivalent conformal multi-symplectic structures, which naturally reveal the intrinsic properties of the original systems, especially the dissipation laws. We thereafter separate each structure into a conservative Hamiltonian system and a purely dissipative ordinary differential equation system. Based on this splitting methodology, we solve the two subsystems separately. The dissipative one is solved cheaply by its analytic solution, while for the conservative system we combine a fourth-order symplectic Nyström method in time with the spectral element method in space to cover the circumstances in realistic geological structures involving complex free-surface topography. The Strang composition method is then adopted to concatenate the corresponding two parts of the solution and generate the complete numerical scheme, which is conformal symplectic and can therefore guarantee numerical stability and dissipation preservation over long modeling times. Additionally, a relatively larger Courant number than that of the traditional Newmark scheme is found in the numerical experiments, in conjunction with a spatial sampling of approximately 5 points per wavelength. A benchmark test for the damped acoustic wave equation validates the effectiveness of the proposed method in precisely capturing the dissipation rate. The classical Lamb problem is used to demonstrate the ability to model Rayleigh-wave propagation. More comprehensive numerical experiments are presented to investigate the long-time simulation, low dispersion and energy conservation properties of the conformal symplectic method in both attenuating homogeneous and heterogeneous media.
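The splitting idea is easiest to see on a damped harmonic oscillator, a minimal stand-in for the damped wave equations above: the dissipative part is solved exactly and wrapped as Strang half-steps around a symplectic Verlet step. All parameter values are illustrative.

```python
import numpy as np

def conformal_symplectic_step(q, p, dt, omega, gamma):
    """One Strang-split step for q' = p, p' = -omega**2 * q - 2*gamma*p:
    exact half-steps of the dissipative part p' = -2*gamma*p wrapped
    around a symplectic Verlet (kick-drift-kick) step of the oscillator."""
    p *= np.exp(-gamma * dt)          # dissipative half-step (exact)
    p -= 0.5 * dt * omega**2 * q      # conservative kick
    q += dt * p                       # drift
    p -= 0.5 * dt * omega**2 * q      # conservative kick
    p *= np.exp(-gamma * dt)          # dissipative half-step (exact)
    return q, p

# The discrete energy decays at essentially the exact rate exp(-2*gamma*t).
q, p, dt, omega, gamma = 1.0, 0.0, 0.01, 2.0 * np.pi, 0.05
for _ in range(10_000):
    q, p = conformal_symplectic_step(q, p, dt, omega, gamma)
print(0.5 * p**2 + 0.5 * (omega * q)**2)   # ~ E0 * exp(-2*gamma*100)
```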
NASA Astrophysics Data System (ADS)
Zamzamir, Zamzana; Murid, Ali H. M.; Ismail, Munira
2014-06-01
The numerical solution of a uniquely solvable exterior Riemann-Hilbert problem on a region with corners has previously been explored by discretizing the related integral equation at off-corner points using the Picard iteration method, without any modification to the left-hand side (LHS) or right-hand side (RHS) of the integral equation. The numerical errors for all iterations converge to the required solution; however, for certain problems, this gives lower accuracy. Hence, this paper presents a new numerical approach for the problem by treating the generalized Neumann kernel on the LHS and the function on the RHS of the integral equation. Due to the existence of the corner points, a Gaussian quadrature is employed which avoids the corner points during numerical integration. A numerical example on a test region is presented to demonstrate the effectiveness of this formulation.
Methodological issues in the study of violence against women
Ruiz‐Pérez, Isabel; Plazaola‐Castaño, Juncal; Vives‐Cases, Carmen
2007-01-01
The objective of this paper is to review the methodological issues that arise when studying violence against women as a public health problem, focusing on intimate partner violence (IPV), since this is the form of violence that has the greatest consequences at a social and political level. The paper focuses first on the problems of defining what is meant by IPV. Secondly, the paper describes the difficulties in assessing the magnitude of the problem. Obtaining reliable data on this type of violence is a complex task, because of the methodological issues derived from the very nature of the phenomenon, such as the private, intimate context in which this violence often takes place, which means the problem cannot be directly observed. Finally, the paper examines the limitations and bias in research on violence, including the lack of consensus with regard to measuring events that may or may not represent a risk factor for violence against women or the methodological problem related to the type of sampling used in both aetiological and prevalence studies. PMID:18000113
Classical problems in computational aero-acoustics
NASA Technical Reports Server (NTRS)
Hardin, Jay C.
1996-01-01
Before tackling the problems expected in the development of computational aeroacoustics (CAA), preliminary applications were made to classical problems where known analytical solutions could be used to validate the numerical results. Such comparisons were used to overcome the numerical problems inherent in these calculations. Comparisons were made between the various numerical approaches to the problems, such as direct simulations, acoustic analogies and acoustic/viscous splitting techniques. The aim was to demonstrate the applicability of CAA as a tool in the same class as computational fluid dynamics. The scattering problems that occur are considered and simple sources are discussed.
NASA Technical Reports Server (NTRS)
Raymond, William H.; Olson, William S.; Callan, Geary
1990-01-01
The focus of this part of the investigation is to find one or more general modeling techniques that will help reduce the time taken by numerical forecast models to initiate or spin up precipitation processes and enhance storm intensity. If the conventional database could explain the atmospheric mesoscale flow in detail, then much of our problem would be eliminated. But the database is primarily synoptic scale, requiring that a solution be sought either in nonconventional data, in methods to initialize mesoscale circulations, or in ways of retaining between forecasts the model-generated mesoscale dynamics and precipitation fields. All three methods are investigated. The initialization and assimilation of explicit cloud and rainwater quantities computed from conservation equations in a mesoscale regional model are examined. The physical processes include condensation, evaporation, autoconversion, accretion, and the removal of rainwater by fallout. The questions of how to initialize the explicit liquid water calculations in numerical models and how to retain information about precipitation processes during the 4-D assimilation cycle are important issues that are addressed. The explicit cloud calculations were purposely kept simple so that different initialization techniques could be easily and economically tested. Precipitation spin-up processes associated with three different types of weather phenomena are examined. Our findings show that diabatic initialization, or diabatic initialization in combination with a new diabatic forcing procedure, works effectively to enhance the spin-up of precipitation in a mesoscale numerical weather prediction forecast. Also, the retention of cloud and rain water during the analysis phase of the 4-D data assimilation procedure is shown to be valuable. Without detailed observations, the vertical placement of the diabatic heating remains a critical problem.
Fuzzy logic and neural networks in artificial intelligence and pattern recognition
NASA Astrophysics Data System (ADS)
Sanchez, Elie
1991-10-01
With the use of fuzzy logic techniques, neural computing can be integrated into symbolic reasoning to solve complex real world problems. In fact, artificial neural networks, expert systems, and fuzzy logic systems, in the context of approximate reasoning, share common features and techniques. A model of a Fuzzy Connectionist Expert System is introduced, in which an artificial neural network is designed to construct the knowledge base of an expert system from training examples (this model can also be used for specification of rules in fuzzy logic control). Two types of weights are associated with the synaptic connections in an AND-OR structure: primary linguistic weights, interpreted as labels of fuzzy sets, and secondary numerical weights. Cell activation is computed through min-max fuzzy equations of the weights. Learning consists of finding the (numerical) weights and the network topology. This feedforward network is described and first illustrated in a biomedical application (medical diagnosis assistance from inflammatory-syndromes/proteins profiles). Then, it is shown how this methodology can be utilized for handwritten pattern recognition (characters play the role of diagnoses): in a fuzzy neuron describing a number, for example, the linguistic weights represent fuzzy sets on cross-detecting lines and the numerical weights reflect the importance (or weakness) of connections between cross-detecting lines and characters.
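A minimal sketch of the min-max activation described above; the full AND-OR structure and the linguistic (fuzzy-set) weights are omitted, and the membership values are illustrative.

```python
def fuzzy_cell_activation(inputs, weights):
    """Max-min composition: the cell fires to the degree that some input
    is simultaneously high (membership) and strongly connected (weight):
    take min per connection, then the best supporting connection (max)."""
    return max(min(x, w) for x, w in zip(inputs, weights))

# Membership degrees of three detected features and their numerical weights.
print(fuzzy_cell_activation([0.8, 0.3, 0.6], [0.9, 0.5, 0.4]))  # -> 0.8
```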
Numerical Modeling of Propellant Boil-Off in a Cryogenic Storage Tank
NASA Technical Reports Server (NTRS)
Majumdar, A. K.; Steadman, T. E.; Maroney, J. L.; Sass, J. P.; Fesmire, J. E.
2007-01-01
A numerical model to predict boil-off of stored propellant in large spherical cryogenic tanks has been developed. Accurate prediction of tank boil-off rates for different thermal insulation systems was the goal of this collaborative effort. The Generalized Fluid System Simulation Program, integrating flow analysis and conjugate heat transfer for solving complex fluid system problems, was used to create the model. Calculation of the tank boil-off rate requires simultaneous simulation of heat transfer processes among the liquid propellant, the vapor ullage space, and the tank structure. The reference tank for the boil-off model was the 850,000 gallon liquid hydrogen tank at Launch Complex 39B (LC-39B) at Kennedy Space Center, which is under study for future infrastructure improvements to support the Constellation program. The methodology employed in the numerical model was validated using a sub-scale model and tank. Experimental test data from a 1/15th scale version of the LC-39B tank, using both liquid hydrogen and liquid nitrogen, were used to anchor the analytical predictions of the sub-scale model. Favorable correlations between the sub-scale model and experimental test data have provided confidence in full-scale tank boil-off predictions. These methods are now being used in the preliminary design for other cases, including future launch vehicles.
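For orientation, the leading-order quantity in such a model is simple to sketch: at steady state the boil-off mass rate is the net heat leak divided by the latent heat of vaporization. The numbers below are rough textbook values and assumptions, not LC-39B results.

```python
# Back-of-envelope boil-off estimate. All inputs are illustrative.
Q_leak = 75e3        # net heat leak into the tank [W], assumed
h_fg = 446e3         # latent heat of LH2 near 1 atm [J/kg], approximate
rho_lh2 = 70.8       # liquid hydrogen density [kg/m^3], approximate

m_dot = Q_leak / h_fg                     # boil-off mass rate [kg/s]
vol_per_day = m_dot / rho_lh2 * 86400.0   # liquid volume lost [m^3/day]
print(f"{m_dot * 86400:.0f} kg/day, {vol_per_day:.1f} m^3/day")
```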
Novel approach for dam break flow modeling using computational intelligence
NASA Astrophysics Data System (ADS)
Seyedashraf, Omid; Mehrabi, Mohammad; Akhtari, Ali Akbar
2018-04-01
A new methodology based on a computational intelligence (CI) system is proposed and tested for modeling the classic 1D dam-break flow problem. The reason to seek a new solution lies in the shortcomings of the existing analytical and numerical models, including the difficulty of using the exact solutions and the unwanted fluctuations which arise in the numerical results. In this research, the application of radial-basis-function (RBF) and multi-layer-perceptron (MLP) systems is detailed for the solution of twenty-nine dam-break scenarios. The models are developed using seven variables, i.e. the length of the channel, the depths of the up- and downstream sections, time, and distance, as the inputs. Moreover, the depths and velocities of each computational node in the flow domain are considered as the model outputs. The models are validated against the analytical solution and the Lax-Wendroff and MacCormack FDM schemes. The findings indicate that the employed CI models are able to replicate the overall shape of the shock and rarefaction waves. Furthermore, the MLP system outperforms RBF and the tested numerical schemes. A new monolithic equation is proposed based on the best-fitting model, which can be used as an efficient alternative to the existing piecewise analytic equations.
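A minimal sketch of the MLP surrogate idea, assuming scikit-learn; the feature and target arrays below are random stand-ins for the solved dam-break scenarios, and the network size is an arbitrary choice rather than the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# X rows hold the input variables (e.g. channel length, up/downstream
# depths, time, distance); y holds (depth, velocity) at each node.
# Random placeholders here; real data would come from solved scenarios.
rng = np.random.default_rng(1)
X = rng.uniform(size=(5000, 5))   # placeholder feature matrix
y = rng.uniform(size=(5000, 2))   # placeholder (depth, velocity) targets

mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500)
mlp.fit(X, y)                     # may warn on random data; harmless here
depth_vel = mlp.predict(X[:1])    # query one space-time point
```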
Vibrations of a Mindlin plate subjected to a pair of inertial loads moving in opposite directions
NASA Astrophysics Data System (ADS)
Dyniewicz, Bartłomiej; Pisarski, Dominik; Bajer, Czesław I.
2017-01-01
A Mindlin plate subjected to a pair of inertial loads traveling at a constant high speed in opposite directions along an arbitrary trajectory, straight or curved, is presented. The masses represent vehicles passing over a bridge or track plates. A numerical solution is obtained using the space-time finite element method, since it allows a clear and simple derivation of the characteristic matrices of the time-stepping procedure. The transition from one spatial finite element to another must be energetically consistent. In the case of a moving inertial load, classical time-integration schemes are methodologically difficult, since we must treat a Dirac delta term with a moving argument. The proposed numerical approach provides the correct definition of force equilibrium in the time interval. The given approach closes the problem of the numerical analysis of vibration of a structure subjected to inertial loads moving arbitrarily with acceleration. The results obtained for a massless and an inertial load traveling over a Mindlin plate at various speeds are compared with benchmark results obtained for a Kirchhoff plate. The pair of inertial forces traveling in opposite directions causes displacements and stresses more than twice as large as the corresponding quantities observed for the passage of a single mass.
Training effectiveness assessment: Methodological problems and issues
NASA Technical Reports Server (NTRS)
Cross, Kenneth D.
1992-01-01
The U.S. military uses a large number of simulators to train and sustain the flying skills of helicopter pilots. Despite the enormous resources required to purchase, maintain, and use these simulators, little effort has been expended in assessing their training effectiveness. One reason for this is the lack of an evaluation methodology that yields comprehensive and valid data at a practical cost. Some of the methodological problems and issues that arise in assessing simulator training effectiveness are discussed, as well as problems with the classical transfer-of-learning paradigm.
Applications of fuzzy theories to multi-objective system optimization
NASA Technical Reports Server (NTRS)
Rao, S. S.; Dhingra, A. K.
1991-01-01
Most of the computer aided design techniques developed so far deal with the optimization of a single objective function over the feasible design space. However, there often exist several engineering design problems which require the simultaneous consideration of several objective functions. This work presents several techniques of multiobjective optimization. In addition, a new formulation, based on fuzzy theories, is introduced for the solution of multiobjective system optimization problems. The fuzzy formulation is useful in dealing with systems which are described imprecisely using fuzzy terms such as 'sufficiently large', 'very strong', or 'satisfactory'. The proposed theory translates the imprecise linguistic statements and multiple objectives into equivalent crisp mathematical statements using fuzzy logic. The effectiveness of all the methodologies and theories presented is illustrated by formulating and solving two different engineering design problems. The first involves the flight trajectory optimization and the main rotor design of helicopters. The second is concerned with the integrated kinematic-dynamic synthesis of planar mechanisms. The use and effectiveness of nonlinear membership functions in the fuzzy formulation is also demonstrated. The numerical results indicate that the fuzzy formulation can yield results which are qualitatively different from those provided by the crisp formulation. It is felt that the fuzzy formulation will handle real-life design problems on a more rational basis.
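One standard way to translate linguistic objectives into a crisp problem, in the spirit described above, is the Bellman-Zadeh max-min formulation: maximize the smallest membership degree. The sketch below is illustrative; the objectives, membership bounds, and optimizer are placeholders, not those of the helicopter or mechanism studies.

```python
import numpy as np
from scipy.optimize import minimize

def membership(f, f_best, f_worst):
    """Linear membership: 1 at the aspiration level, 0 at the worst value."""
    return np.clip((f_worst - f) / (f_worst - f_best), 0.0, 1.0)

def neg_min_membership(x):
    f1 = (x[0] - 1.0)**2 + x[1]**2        # e.g. weight-like objective
    f2 = x[0]**2 + (x[1] - 2.0)**2        # e.g. cost-like objective
    mu = min(membership(f1, 0.0, 4.0), membership(f2, 0.0, 4.0))
    return -mu                             # maximize min -> minimize -min

res = minimize(neg_min_membership, x0=[0.5, 1.0], method='Nelder-Mead')
print(res.x, -res.fun)                     # compromise design, its degree
```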
A Numerical, Literal, and Converged Perturbation Algorithm
NASA Astrophysics Data System (ADS)
Wiesel, William E.
2017-09-01
The KAM theorem and von Zeipel's method are applied to a perturbed harmonic oscillator, and it is noted that the KAM methodology does not allow for necessary frequency or angle corrections, while von Zeipel's method does. The KAM methodology can be carried out with purely numerical methods, since its generating function does not contain momentum dependence. The KAM iteration is extended to allow for frequency and angle changes, and in the process it can apparently be applied successfully to degenerate systems normally ruled out by the classical KAM theorem. Convergence is observed to be geometric, not exponential, but it proceeds smoothly to machine precision. The algorithm produces a converged perturbation solution by numerical methods, while still retaining literal variable dependence, at least in the vicinity of a given trajectory.
Ethical and Legal Implications of the Methodological Crisis in Neuroimaging.
Kellmeyer, Philipp
2017-10-01
Currently, many scientific fields such as psychology or biomedicine face a methodological crisis concerning the reproducibility, replicability, and validity of their research. In neuroimaging, similar methodological concerns have taken hold of the field, and researchers are working frantically toward finding solutions for the methodological problems specific to neuroimaging. This article examines some ethical and legal implications of this methodological crisis in neuroimaging. With respect to ethical challenges, the article discusses the impact of flawed methods in neuroimaging research in cognitive and clinical neuroscience, particularly with respect to faulty brain-based models of human cognition, behavior, and personality. Specifically examined is whether such faulty models, when they are applied to neurological or psychiatric diseases, could put patients at risk, and whether this places special obligations on researchers using neuroimaging. In the legal domain, the actual use of neuroimaging as evidence in United States courtrooms is surveyed, followed by an examination of ways that the methodological problems may create challenges for the criminal justice system. Finally, the article reviews and promotes some promising ideas and initiatives from within the neuroimaging community for addressing the methodological problems.
Zambri, Brian; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem
2015-08-01
Our aim is to propose a numerical strategy for retrieving accurately and efficiently the biophysiological parameters as well as the external stimulus characteristics corresponding to the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed method employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology.
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Namburu, Raju R.
1989-01-01
Numerical simulations are presented for hyperbolic heat-conduction problems that involve non-Fourier effects, using explicit, Lax-Wendroff/Taylor-Galerkin FEM formulations as the principal computational tool. Also employed are smoothing techniques which stabilize the numerical noise and accurately predict the propagating thermal disturbances. The accurate capture of propagating thermal disturbances at characteristic time-step values is achieved; numerical test cases are presented which validate the proposed hyperbolic heat-conduction problem concepts.
An assessment of the potential of PFEM-2 for solving long real-time industrial applications
NASA Astrophysics Data System (ADS)
Gimenez, Juan M.; Ramajo, Damián E.; Márquez Damián, Santiago; Nigro, Norberto M.; Idelsohn, Sergio R.
2017-07-01
The latest generation of the particle finite element method (PFEM-2) is a numerical method based on the Lagrangian formulation of the equations, which presents advantages in terms of robustness and efficiency over classical Eulerian methodologies when certain kinds of flows are simulated, especially those where convection plays an important role. These situations are often encountered in real engineering problems, where very complex geometries and operating conditions require very large and long computations. The advantages of parallelism in computational fluid dynamics, which makes computations with very fine spatial discretizations affordable, are well known. However, it is not possible to parallelize time, despite the effort being dedicated to space-time formulations. In this sense, PFEM-2 adds a valuable feature in that its strong stability, with little loss of accuracy, provides an interesting way of satisfying real-life computation needs. Having already demonstrated in previous publications its ability to achieve academic solutions with a good compromise between accuracy and efficiency, in this work the method is revisited and employed to solve several nonacademic problems of technological interest which fall into that category. Simulations concerning oil-water separation, waste-water treatment, metallurgical foundries, and safety assessment are presented. These cases are selected due to their particular requirements of long simulation times and/or intensive interface treatment. Large time-steps may thus be employed with PFEM-2 without compromising the accuracy and robustness of the simulation, as occurs with Eulerian alternatives, showing the potential of the methodology for solving not only academic tests but also real engineering problems.
A new approach to enforce element-wise mass/species balance using the augmented Lagrangian method
NASA Astrophysics Data System (ADS)
Chang, J.; Nakshatrala, K.
2015-12-01
The least-squares finite element method (LSFEM) is one of many ways in which one can discretize and express a set of first-order partial differential equations as a mixed formulation. However, the standard LSFEM is not locally conservative by design. The absence of this physical property can have serious implications in the numerical simulation of subsurface flow and transport. Two commonly employed ways to circumvent this issue are the Lagrange multiplier method, which explicitly satisfies the element-wise divergence by introducing new unknowns, and appending a penalty factor to the continuity constraint, which reduces the violation of the mass balance. However, these methodologies have some well-known drawbacks. Herein, we propose a new approach to improve the local species/mass balance. The approach augments the least-squares functional with constraints built from a novel mathematical construction of the local species/mass balance, which differs from the conventional ones. The resulting constrained optimization problem is solved using the augmented Lagrangian method, which corrects the balance errors in an iterative fashion. The advantages of this methodology are that the problem size is not increased (thus preserving symmetry and positive definiteness) and that one need not provide an accurate guess for the initial penalty to reach a prescribed mass balance tolerance. We derive the least-squares weighting needed to ensure accurate solutions. We also demonstrate the robustness of the weighted LSFEM coupled with the augmented Lagrangian by solving large-scale heterogeneous and variably saturated flow through porous media problems. The performance of the iterative solvers with respect to various user-defined augmented Lagrangian parameters is documented.
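A minimal linear-algebra sketch of the augmented Lagrangian iteration described above, for minimizing (1/2)||Au - f||^2 subject to constraints Bu = g standing in for the local balance conditions; the matrix names and sizes are illustrative.

```python
import numpy as np

def augmented_lagrangian_lsq(A, f, B, g, rho=10.0, n_iter=20):
    """Minimize (1/2)||A u - f||^2 subject to B u = g by augmented
    Lagrangian iterations: solve the penalized normal equations, then
    update the multipliers with the constraint residual."""
    lam = np.zeros(B.shape[0])
    H = A.T @ A + rho * (B.T @ B)          # penalized Hessian (fixed)
    for _ in range(n_iter):
        rhs = A.T @ f + B.T @ (rho * g - lam)
        u = np.linalg.solve(H, rhs)
        lam = lam + rho * (B @ u - g)      # multiplier update
    return u, lam

rng = np.random.default_rng(0)
A, f = rng.normal(size=(30, 10)), rng.normal(size=30)
B, g = rng.normal(size=(3, 10)), rng.normal(size=3)
u, _ = augmented_lagrangian_lsq(A, f, B, g)
print(np.linalg.norm(B @ u - g))           # constraint residual -> ~0
```

Note that the problem size (the number of unknowns in u) stays fixed across iterations, which is the advantage the abstract emphasizes over introducing explicit Lagrange multiplier unknowns.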
Constrained orbital intercept-evasion
NASA Astrophysics Data System (ADS)
Zatezalo, Aleksandar; Stipanovic, Dusan M.; Mehra, Raman K.; Pham, Khanh
2014-06-01
An effective characterization of intercept-evasion confrontations in various space environments and a derivation of corresponding solutions considering a variety of real-world constraints are daunting theoretical and practical challenges. Current and future space-based platforms have to simultaneously operate as components of satellite formations and/or systems and at the same time, have a capability to evade potential collisions with other maneuver constrained space objects. In this article, we formulate and numerically approximate solutions of a Low Earth Orbit (LEO) intercept-maneuver problem in terms of game-theoretic capture-evasion guaranteed strategies. The space intercept-evasion approach is based on Liapunov methodology that has been successfully implemented in a number of air and ground based multi-player multi-goal game/control applications. The corresponding numerical algorithms are derived using computationally efficient and orbital propagator independent methods that are previously developed for Space Situational Awareness (SSA). This game theoretical but at the same time robust and practical approach is demonstrated on a realistic LEO scenario using existing Two Line Element (TLE) sets and Simplified General Perturbation-4 (SGP-4) propagator.
NASA Astrophysics Data System (ADS)
Redonnet, S.; Ben Khelil, S.; Bulté, J.; Cunha, G.
2017-09-01
With the objective of aircraft noise mitigation, we address the numerical characterization of the aeroacoustics of a simplified nose landing gear (NLG) through the use of advanced simulation and signal processing techniques. To this end, the NLG noise physics is first simulated through an advanced hybrid approach which relies on Computational Fluid Dynamics (CFD) and Computational AeroAcoustics (CAA) calculations. Compared to more traditional hybrid methods (e.g. those relying on an Acoustic Analogy), and although it is used here with some approximations (e.g. in the design of the CFD-CAA interface), the present approach does not rely on restrictive assumptions (e.g. equivalent noise source, homogeneous propagation medium), which allows more realism to be incorporated into the prediction. In a second step, the outputs of such CFD-CAA hybrid calculations are processed through both traditional and advanced post-processing techniques, offering further insight into the NLG's noise source mechanisms. Among other things, this work highlights how advanced computational methodologies are now mature enough not only to simulate realistic problems of airframe noise emission, but also to investigate their underlying physics.
NASA Astrophysics Data System (ADS)
Crowther, Ashley R.; Singh, Rajendra; Zhang, Nong; Chapman, Chris
2007-10-01
Impulsive responses in geared systems with multiple clearances are studied when the mean torque excitation and system load change abruptly, with application to a vehicle driveline with an automatic transmission. First, torsional lumped-mass models of the planetary and differential gear sets are formulated using matrix elements. The model is then reduced to address tractable nonlinear problems while successfully retaining the main modes of interest. Second, numerical simulations of the nonlinear model are performed for transient conditions, and a typical driving situation that induces impulsive behaviour is simulated. However, initial conditions and excitation and load profiles have to be carefully defined before the model can be numerically solved. It is shown that impacts within the planetary or differential gears may occur under combinations of engine, braking and vehicle load transients. Our analysis shows that the shaping of the engine transient by the torque converter, before it reaches the clearance locations, is more critical. Third, a free vibration experiment is developed for an analogous driveline with multiple clearances, and three experiments that excite different response regimes have been carried out. Good correlations validate the proposed methodology.
Solving ordinary differential equations by electrical analogy: a multidisciplinary teaching tool
NASA Astrophysics Data System (ADS)
Sanchez Perez, J. F.; Conesa, M.; Alhama, I.
2016-11-01
Ordinary differential equations are the mathematical formulation for a great variety of problems in science and engineering, and frequently two different problems are equivalent from a mathematical point of view when they are formulated by the same equations. Students acquire the knowledge of how to solve these equations (at least some types of them) using protocols and strict algorithms of mathematical calculation, without thinking about the meaning of the equation. The aim of this work is for students to learn to design network models or circuits; with simple knowledge of them, students can establish the association between electric circuits and differential equations and their equivalences, from a formal point of view, which allows them to connect knowledge from two disciplines and promotes the use of this interdisciplinary approach to address complex problems. Students thus learn to use a multidisciplinary tool that allows them to solve these kinds of equations, even in the first year of an engineering degree, whatever the order, grade or type of non-linearity. This methodology has been implemented in numerous final degree projects in engineering and science, e.g., chemical engineering, building engineering, industrial engineering, mechanical engineering, architecture, etc. Applications are presented to illustrate the subject of this manuscript.
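A minimal sketch of the analogy: the damped mechanical oscillator and the series RLC circuit obey the same second-order ODE under the mapping m↔L, c↔R, k↔1/C, F↔V, so one numerical solver serves both disciplines. Parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# m x'' + c x' + k x = F(t)  and  L q'' + R q' + q/C = V(t) are the same
# equation under m<->L, c<->R, k<->1/C, F<->V.
def second_order(t, y, m, c, k, forcing):
    x, v = y
    return [v, (forcing(t) - c * v - k * x) / m]

forcing = lambda t: np.sin(2.0 * t)
sol = solve_ivp(second_order, (0.0, 10.0), [0.0, 0.0],
                args=(2.0, 0.5, 8.0, forcing), dense_output=True)
# Read sol.y[0] as displacement x(t) for m=2, c=0.5, k=8, or equally as
# charge q(t) on a circuit with L=2 H, R=0.5 ohm, C=1/8 F.
```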
A new approach to the convective parameterization of the regional atmospheric model BRAMS
NASA Astrophysics Data System (ADS)
Dos Santos, A. F.; Freitas, S. R.; de Campos Velho, H. F.; Luz, E. F.; Gan, M. A.; de Mattos, J. Z.; Grell, G. A.
2013-05-01
A simulation of the summer characteristics of January 2010 was performed using the atmospheric model Brazilian developments on the Regional Atmospheric Modeling System (BRAMS). The convective parameterization scheme of Grell and Dévényi was used to represent clouds and their interaction with the large-scale environment. As a result, the precipitation forecasts can be combined in several ways, generating a numerical representation of precipitation and of atmospheric heating and moistening rates. The purpose of this study was to generate a set of weights to compute the best combination of the hypotheses of the convective scheme. This is an inverse problem of parameter estimation, and it is solved as an optimization problem. To minimize the difference between observed data and forecasted precipitation, the objective function was computed as the quadratic difference between five simulated precipitation fields and the observation. The precipitation field estimated by the Tropical Rainfall Measuring Mission satellite was used as the observed data. Weights were obtained using the firefly algorithm, and the mass fluxes of each closure of the convective scheme were weighted, generating a new set of mass fluxes. The results indicate the better skill of the model with the new methodology compared with the old ensemble-mean calculation.
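The inverse problem reduces to fitting the weights of a convex combination. The sketch below sets up the same quadratic misfit with random stand-in fields and, for illustration, solves it with a standard constrained optimizer instead of the firefly algorithm used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Five simulated precipitation fields (flattened) and an observed field.
# Random stand-ins here; in the paper the members come from the closures
# of the convective scheme and the observation from the TRMM satellite.
rng = np.random.default_rng(2)
members = rng.random((5, 400))
obs = np.array([0.1, 0.3, 0.2, 0.25, 0.15]) @ members   # synthetic "truth"

def misfit(w):
    """Quadratic difference between the weighted combination and obs."""
    return np.sum((w @ members - obs) ** 2)

res = minimize(misfit, x0=np.full(5, 0.2), bounds=[(0.0, 1.0)] * 5,
               constraints=({'type': 'eq',
                             'fun': lambda w: np.sum(w) - 1.0},))
print(res.x)    # recovered weights, close to those used to build obs
```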
NASA Astrophysics Data System (ADS)
Allah Taleizadeh, Ata; Niaki, Seyed Taghi Akhavan; Aryanezhad, Mir-Bahador
2010-10-01
While the usual assumptions in multi-periodic inventory control problems are that orders are placed at the beginning of each period (periodic review) or, depending on the inventory level, can happen at any time (continuous review), in this article we relax these assumptions and assume that the periods between two replenishments of the products are independent and identically distributed random variables. Furthermore, assuming that the purchasing prices are triangular fuzzy variables, that the order quantities are of integer type and that there are space and service-level constraints, total discounts are considered for purchasing products, and a combination of back-orders and lost sales is taken into account for the shortages. We show that the model of this problem is of the fuzzy mixed-integer nonlinear programming type, and in order to solve it, a hybrid meta-heuristic intelligent algorithm is proposed. At the end, a numerical example is given to demonstrate the applicability of the proposed methodology and to compare its performance with one of the existing algorithms on real-world inventory control problems.
A methodology for the rigorous verification of plasma simulation codes
NASA Astrophysics Data System (ADS)
Riva, Fabio
2016-10-01
The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: verification, which is a mathematical issue aimed at assessing that the physical model is correctly solved, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on verification, which in turn is composed of code verification, aimed at assessing that a physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
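A minimal sketch of code verification by the method of manufactured solutions, on a problem far simpler than GBS: pick u_m = sin(pi x) for u_xx = s, derive the source term, and confirm the observed order of accuracy on two grids.

```python
import numpy as np

# Manufactured solution u_m = sin(pi x) on [0, 1] implies the source
# s = -pi^2 sin(pi x); a second-order scheme should show observed order ~2.
def solve_poisson(n):
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    s = -np.pi**2 * np.sin(np.pi * x)      # manufactured source term
    A = (np.eye(n, k=1) - 2.0 * np.eye(n) + np.eye(n, k=-1)) / h**2
    b = s.copy()
    A[0, :], A[-1, :] = 0.0, 0.0           # Dirichlet rows
    A[0, 0] = A[-1, -1] = 1.0
    b[0] = b[-1] = 0.0                     # u_m(0) = u_m(1) = 0
    u = np.linalg.solve(A, b)
    return np.max(np.abs(u - np.sin(np.pi * x)))   # discretization error

e1, e2 = solve_poisson(41), solve_poisson(81)      # grid halving
print("observed order:", np.log(e1 / e2) / np.log(2.0))   # ~2 expected
```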
Conceptual, Methodological, and Ethical Problems in Communicating Uncertainty in Clinical Evidence
Han, Paul K. J.
2014-01-01
The communication of uncertainty in clinical evidence is an important endeavor that poses difficult conceptual, methodological, and ethical problems. Conceptual problems include logical paradoxes in the meaning of probability and “ambiguity”: second-order uncertainty arising from the lack of reliability, credibility, or adequacy of probability information. Methodological problems include questions about optimal methods for representing fundamental uncertainties and for communicating these uncertainties in clinical practice. Ethical problems include questions about whether communicating uncertainty enhances or diminishes patient autonomy and produces net benefits or harms. This article reviews the limited but growing literature on these problems and efforts to address them, and identifies key areas of focus for future research. It is argued that the critical need moving forward is for greater conceptual clarity and consistent representational methods that make the meaning of various uncertainties understandable, and for clinical interventions to support patients in coping with uncertainty in decision making. PMID:23132891
Equivalent Viscous Damping Methodologies Applied on VEGA Launch Vehicle Numerical Model
NASA Astrophysics Data System (ADS)
Bartoccini, D.; Di Trapani, C.; Fransen, S.
2014-06-01
Part of the mission analysis of a spacecraft is the so-called launcher-satellite coupled loads analysis, which aims at computing the dynamic environment of the satellite and of the launch vehicle for the most severe load cases in flight. Evidently, the damping of the coupled system shall be defined with care so as not to overestimate or underestimate the loads derived for the spacecraft. In this paper the application of several EqVD (Equivalent Viscous Damping) methodologies for Craig and Bampton (CB) systems is investigated. Based on the structural damping defined for the various materials in the parent FE models of the CB components, EqVD matrices can be computed according to different methodologies. The effect of these methodologies on the numerical reconstruction of the VEGA launch vehicle dynamic environment is presented.
NASA Technical Reports Server (NTRS)
Newman, James C., III
1995-01-01
The limiting factor in simulating flows past realistic configurations of interest has been the discretization of the physical domain on which the governing equations of fluid flow may be solved. In an attempt to circumvent this problem, many Computational Fluid Dynamics (CFD) methodologies based on different grid generation and domain decomposition techniques have been developed. However, due to the costs involved and the expertise required, very few comparative studies between these methods have been performed. In the present work, the two CFD methodologies which show the most promise for treating complex three-dimensional configurations as well as unsteady moving boundary problems are evaluated: namely, the structured-overlapped and the unstructured grid schemes. Both methods use a cell-centered, finite volume, upwind approach. The structured-overlapped algorithm uses an approximately factored, alternating direction implicit scheme to perform the time integration, whereas the unstructured algorithm uses an explicit Runge-Kutta method. To examine the accuracy, efficiency, and limitations of each scheme, they are applied to the same steady complex multicomponent configurations and unsteady moving boundary problems. The steady complex cases consist of computing the subsonic flow about a two-dimensional high-lift multielement airfoil and the transonic flow about a three-dimensional wing/pylon/finned store assembly. The unsteady moving boundary problems are a forced pitching oscillation of an airfoil in a transonic freestream and a two-dimensional, subsonic airfoil/store separation sequence. Accuracy was assessed through the comparison of computed and experimentally measured pressure coefficient data on several of the wing/pylon/finned store assembly's components and at numerous angles of attack for the pitching airfoil. From this study, it was found that both the structured-overlapped and the unstructured grid schemes yielded flow solutions of comparable accuracy for these simulations. This study also indicated that, overall, the structured-overlapped scheme was slightly more CPU-efficient than the unstructured approach.
Numerical Problems and Agent-Based Models for a Mass Transfer Course
ERIC Educational Resources Information Center
Murthi, Manohar; Shea, Lonnie D.; Snurr, Randall Q.
2009-01-01
Problems requiring numerical solutions of differential equations or the use of agent-based modeling are presented for use in a course on mass transfer. These problems were solved using the popular technical computing language MATLAB™. Students were introduced to MATLAB via a problem with an analytical solution. A more complex problem to which no…
A suite of benchmark and challenge problems for enhanced geothermal systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Mark; Fu, Pengcheng; McClure, Mark
A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research, stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems were designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners. We present the suite of benchmark and challenge problems developed for the GTO-CCS, providing problem descriptions and sample solutions.
NASA Astrophysics Data System (ADS)
Boz, Utku; Basdogan, Ipek
2015-12-01
Structural vibration is a major cause of noise problems, discomfort and mechanical failures in aerospace, automotive and marine systems, which are mainly composed of plate-like structures. In order to reduce structural vibrations in these structures, active vibration control (AVC) is an effective approach. Adaptive filtering methodologies are preferred in AVC due to their ability to adjust themselves to the varying dynamics of the structure during operation. The filtered-X LMS (FXLMS) algorithm is a simple adaptive filtering algorithm widely implemented in active control applications. Proper implementation of FXLMS requires the availability of a reference signal to mimic the disturbance and a model of the dynamics between the control actuator and the error sensor, namely the secondary path. However, the controller output can interfere with the reference signal, and the secondary path dynamics may change during operation. The interference problem can be resolved by using an infinite impulse response (IIR) filter, which feeds one or more previous control signals back to the controller output, and the changing secondary path dynamics can be tracked using an online modeling technique. In this paper, an IIR-filtering-based filtered-U LMS (FULMS) controller is combined with an online secondary path modeling algorithm to suppress the vibrations of a plate-like structure. The results are validated through numerical and experimental studies, and show that the FULMS approach with online secondary path modeling has better vibration rejection capability and a higher convergence rate than its FXLMS counterpart.
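A minimal sketch of the baseline FXLMS update discussed above, with an FIR control filter and a perfectly identified secondary path; the signals and path coefficients are illustrative, and the paper's FULMS controller adds the IIR feedback and online secondary path modeling on top of this.

```python
import numpy as np

def fxlms(x, d, s, n_taps=16, mu=0.01):
    """Baseline FXLMS: adapt FIR weights w so that the control output,
    after passing through the secondary path s, cancels the disturbance d.
    Here the true secondary path and its model coincide (ideal case)."""
    w = np.zeros(n_taps)
    y_hist = np.zeros(len(s))               # recent control outputs
    xf = np.convolve(x, s)[:len(x)]         # reference filtered through s
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        xb = x[n - n_taps + 1:n + 1][::-1]  # reference buffer
        y_hist = np.roll(y_hist, 1)
        y_hist[0] = w @ xb                  # control (anti-vibration) output
        e[n] = d[n] + s @ y_hist            # residual at the error sensor
        w -= mu * e[n] * xf[n - n_taps + 1:n + 1][::-1]  # FXLMS update
    return e

rng = np.random.default_rng(3)
t = np.arange(5000)
x = np.sin(2 * np.pi * 0.05 * t) + 0.1 * rng.standard_normal(len(t))
d = np.convolve(x, [0.0, 0.5, 0.3, -0.1])[:len(t)]   # assumed primary path
e = fxlms(x, d, np.array([0.0, 0.8, 0.2]))           # assumed secondary path
print(np.mean(e[:500]**2), np.mean(e[-500:]**2))     # residual power drops
```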
Multiscale Multilevel Approach to Solution of Nanotechnology Problems
NASA Astrophysics Data System (ADS)
Polyakov, Sergey; Podryga, Viktoriia
2018-02-01
The paper is devoted to a multiscale multilevel approach for the solution of nanotechnology problems on supercomputer systems. The approach uses a combination of continuum mechanics models and Newtonian dynamics for individual particles. This combination includes three scale levels: macroscopic, mesoscopic and microscopic. For gas-metal technical systems the following models are used. The quasihydrodynamic system of equations is used as the mathematical model at the macrolevel for the gas and solid states. The system of Newton equations is used as the mathematical model at the meso- and microlevels; it is written for nanoparticles of the medium and larger particles moving in the medium. The numerical implementation of the approach is based on the method of splitting into physical processes. The quasihydrodynamic equations are solved by the finite volume method on grids of different types. The Newton equations of motion are solved by Verlet integration in each cell of the grid, independently or in groups of connected cells. In the framework of the general methodology, four classes of algorithms and methods for their parallelization are provided. The parallelization uses the principles of geometric parallelism and efficient partitioning of the computational domain. A special dynamic algorithm is used for load balancing of the solvers. The developed approach was tested on the example of nitrogen outflow from a high-pressure balloon into a vacuum chamber through a micronozzle and a microchannel. The obtained results confirm the high efficiency of the developed methodology.
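A minimal sketch of the Verlet integration used at the particle levels, in its velocity form; the force law and parameters are illustrative.

```python
import numpy as np

def velocity_verlet(pos, vel, force, mass, dt, n_steps):
    """Velocity Verlet integration of Newton's equations; force(pos)
    returns the per-particle forces for the current configuration."""
    f = force(pos)
    for _ in range(n_steps):
        vel += 0.5 * dt * f / mass         # first half-kick
        pos += dt * vel                    # drift
        f = force(pos)                     # forces at the new positions
        vel += 0.5 * dt * f / mass         # second half-kick
    return pos, vel

# Example: harmonic tethers pulling two particles back toward the origin.
pos = np.array([[1.0, 0.0], [0.0, 1.5]])
vel = np.zeros_like(pos)
pos, vel = velocity_verlet(pos, vel, lambda r: -r, mass=1.0, dt=0.01,
                           n_steps=1000)
```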
[Methodological problems in the use of information technologies in physical education].
Martirosov, E G; Zaĭtseva, G A
2000-01-01
The paper considers methodological problems in the use of computer technologies in physical education by applying diagnostic and consulting systems, educational and educational-and-training process automation systems, and control and self-control programmes for athletes and others.
Numerical methods for stiff systems of two-point boundary value problems
NASA Technical Reports Server (NTRS)
Flaherty, J. E.; Omalley, R. E., Jr.
1983-01-01
Numerical procedures are developed for constructing asymptotic solutions of certain nonlinear singularly perturbed vector two-point boundary value problems having boundary layers at one or both endpoints. The asymptotic approximations are generated numerically and can either be used as is or to furnish a general-purpose two-point boundary value code with an initial approximation and the nonuniform computational mesh needed for such problems. The procedures are applied to a model problem that has multiple solutions and to problems describing the deformation of a thin nonlinear elastic beam resting on an elastic foundation.
Reconstruction of local perturbations in periodic surfaces
NASA Astrophysics Data System (ADS)
Lechleiter, Armin; Zhang, Ruming
2018-03-01
This paper concerns the inverse scattering problem of reconstructing a local perturbation in a periodic structure. Unlike in purely periodic problems, the periodicity of the scattered field no longer holds, so classical methods, which reduce quasi-periodic fields to one periodic cell, are no longer available. Based on the Floquet-Bloch transform, a numerical method has been developed to solve the direct problem, which leads to the possibility of designing an algorithm for the inverse problem. The numerical method introduced in this paper contains two steps. The first step is initialization, that is, locating the support of the perturbation by a simple method. This step reduces the inverse problem from an infinite domain to one periodic cell. The second step is to apply the Newton-CG method to solve the associated optimization problem. The perturbation is then approximated by a finite spline basis. Numerical examples are given at the end of this paper, showing the efficiency of the numerical method.
Evaluating time-lapse ERT for monitoring DNAPL remediation via numerical simulation
NASA Astrophysics Data System (ADS)
Power, C.; Karaoulis, M.; Gerhard, J.; Tsourlos, P.; Giannopoulos, A.
2012-12-01
Dense non-aqueous phase liquids (DNAPLs) remain a challenging geoenvironmental problem in the near subsurface. Numerous thermal, chemical, and biological treatment methods are being applied at sites but without a non-destructive, rapid technique to map the evolution of DNAPL mass in space and time, the degree of remedial success is difficult to quantify. Electrical resistivity tomography (ERT) has long been presented as highly promising in this context but has not yet become a practitioner's tool due to challenges in interpreting the survey results at real sites where the initial condition (DNAPL mass, DNAPL distribution, subsurface heterogeneity) is typically unknown. Recently, a new numerical model was presented that couples DNAPL and ERT simulation at the field scale, providing a tool for optimizing ERT application and interpretation at DNAPL sites (Power et al., 2011, Fall AGU, H31D-1191). The objective of this study is to employ this tool to evaluate the effectiveness of time-lapse ERT to monitor DNAPL source zone remediation, taking advantage of new inversion methodologies that exploit the differences in the target over time. Several three-dimensional releases of chlorinated solvent DNAPLs into heterogeneous clayey sand at the field scale were generated, varying in the depth and complexity of the source zone (target). Over time, dissolution of the DNAPL in groundwater was simulated with simultaneous mapping via periodic ERT surveys. Both surface and borehole ERT surveys were conducted for comparison purposes. The latest four-dimensional ERT inversion algorithms were employed to generate time-lapse isosurfaces of the DNAPL source zone for all cases. This methodology provided a qualitative assessment of the ability of ERT to track DNAPL mass removal for complex source zones in realistically heterogeneous environments. In addition, it provided a quantitative comparison between the actual DNAPL mass removed and that interpreted by ERT as a function of depth below the water table, as well as an estimate of the minimum DNAPL saturation changes necessary for an observable response from ERT.
The Carbon Aerosol / Particles Nucleation with a Lidar: Numerical Simulations and Field Studies
NASA Astrophysics Data System (ADS)
Miffre, Alain; Anselmo, Christophe; Francis, Mirvatte; David, Gregory; Rairoux, Patrick
2016-06-01
In this contribution, we present the results of two recent papers [1,2] published in Optics Express, dedicated to the development of two new lidar methodologies. In [1], motivated by the fact that the carbon aerosol (for example, soot particles) is recognized as a major uncertainty for climate and public health, we couple lidar remote sensing with Laser-Induced Incandescence (LII) to retrieve the vertical profile of the very low thermal radiation emitted by the carbon aerosol, in agreement with Planck's law, in an urban atmosphere over several hundred meters of altitude. In paper [2], selected as a June 2014 OSA Spotlight, we identify the optical requirements that make an elastic lidar sensitive to new particle formation events (NPF events) in the atmosphere, while, in the literature, the ingredients that initiate nucleation are still not fully revealed [3]. Both papers proceed with the same methodology, first identifying the optical requirements from numerical simulation (Planck's and Kirchhoff's laws in [1], Mie and T-matrix numerical codes in [2]), then presenting lidar field application case studies. We believe these new lidar methodologies may be useful for climate and geophysical studies, as well as for fundamental purposes.
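The LII retrieval in [1] rests on Planck's law for the thermal radiation of incandescent soot. A minimal numerical illustration (Python; the 700 nm wavelength and the roughly 4000 K soot temperature are assumed values for illustration, not the papers' retrieval parameters):

    import numpy as np

    h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23   # SI constants

    def planck_radiance(wavelength, T):
        """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1 (Planck's law)."""
        return 2.0 * h * c**2 / wavelength**5 / np.expm1(h * c / (wavelength * kB * T))

    # ratio of thermal emission at 700 nm from laser-heated soot (~4000 K, assumed)
    # to an ambient 300 K background: the incandescence signal sits far above background
    print(planck_radiance(700e-9, 4000.0) / planck_radiance(700e-9, 300.0))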
Straus, Murray A
2012-01-01
More than 200 studies have found "gender symmetry" in perpetration of violence against a marital or dating partner in the sense that about the same percent of women as men physically assault a marital or dating partner. Most of these studies obtained the data using the Conflict Tactics Scales (CTS). However, these results have been challenged by numerous articles in the past 25 years that have asserted that the CTS is invalid. This article identifies and responds to 11 purported methodological problems of the CTS, and two other bases for the belief that the CTS is not valid. The discussion argues that the repeated assertion over the past 25 years that the CTS is invalid is not primarily about methodology. Rather it is primarily about theories and values concerning the results of research showing gender symmetry in perpetration. According to the prevailing "patriarchal dominance" theory, these results cannot be true and therefore the CTS must be invalid. The conclusion suggests that an essential part of the effort to prevent and treat violence against women and by women requires taking into account the dyadic nature of partner violence through use of instruments such as the CTS that measure violence by both partners. Copyright © 2012 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Bazilevs, Y.; Kamran, K.; Moutsanidis, G.; Benson, D. J.; Oñate, E.
2017-07-01
In this two-part paper we begin the development of a new class of methods for modeling fluid-structure interaction (FSI) phenomena for air blast. We aim to develop an accurate, robust, and practical computational methodology which is capable of modeling the dynamics of air blast coupled with the structure response, where the latter involves large, inelastic deformations and disintegration into fragments. An immersed approach is adopted, which leads to an a-priori monolithic FSI formulation with intrinsic contact detection between solid objects, and without formal restrictions on the solid motions. In Part I of this paper, the core air-blast FSI methodology suitable for a variety of discretizations is presented and tested using standard finite elements. Part II of this paper focuses on a particular instantiation of the proposed framework, which couples isogeometric analysis (IGA) based on non-uniform rational B-splines and a reproducing-kernel particle method (RKPM), which is a meshfree technique. The combination of IGA and RKPM is felt to be particularly attractive for the problem class of interest due to the higher-order accuracy and smoothness of both discretizations, and the relative simplicity of RKPM in handling fragmentation scenarios. A collection of mostly 2D numerical examples is presented in each of the parts to illustrate the good performance of the proposed air-blast FSI framework.
Assessment of capillary suction time (CST) test methodologies.
Sawalha, O; Scholz, M
2007-12-01
The capillary suction time (CST) test is a commonly used method to measure the filterability and the ease of removing moisture from slurry and sludge in numerous environmental and industrial applications. This study assessed several novel alterations of both the test methodology and the current standard capillary suction time (CST) apparatus. Twelve different papers including the standard Whatman No. 17 chromatographic paper were tested. The tests were run using four different types of sludge including a synthetic sludge, which was specifically developed for benchmarking purposes. The standard apparatus was altered by the introduction of a novel rectangular funnel instead of a standard circular one. A stirrer was also introduced to solve the problem of test inconsistency (e.g. high CST variability) particularly for heavy types of sludge. Results showed that several alternative papers, which are cheaper than the standard paper, can be used to estimate CST values accurately, and that the test repeatability can be improved in many cases and for different types of sludge. The introduction of the rectangular funnel demonstrated an obvious enhancement of test repeatability. The use of a stirrer to avoid sedimentation of heavy sludge did not have a statistically significant impact on the CST values or the corresponding data variability. The application of synthetic sludge can support the testing of experimental methodologies and should be used for subsequent benchmarking purposes.
Numerical simulation of the hydrodynamic instabilities of Richtmyer-Meshkov and Rayleigh-Taylor
NASA Astrophysics Data System (ADS)
Fortova, S. V.; Shepelev, V. V.; Troshkin, O. V.; Kozlov, S. A.
2017-09-01
The paper presents the results of numerical simulation of the development of the Richtmyer-Meshkov and Rayleigh-Taylor hydrodynamic instabilities encountered in experiments [1-3]. For the numerical solution, the TPS (Turbulence Problem Solver) software package was used, which implements a generalized approach to constructing computer programs for a wide range of hydrodynamics problems described by systems of equations of hyperbolic type. The numerical methods used are the method of large particles and a second-order ENO scheme with a Roe solver for the approximate solution of the Riemann problem.
Air quality: from observation to applied studies
NASA Astrophysics Data System (ADS)
Weber, Christiane H.; Wania, Annett; Hirsch, Jacky; Bruse, Michael
2004-10-01
Air quality studies in urban areas embrace several directions that are strongly associated with urban complexity. Over the last centuries, the evolution of cities has implied changes in urbanization trends: urban sprawl has modified the relationship between cities and surrounding settlements. The existence and protection of urban green and open areas is promoted as a means to improve citizens' quality of life and to increase inhabitants' satisfaction by mitigating the adverse effects of pollution and noise. This paper outlines the methods and approaches used in the EU research project Benefits of Urban Green Space (BUGS). The main target of BUGS is to assess the role of urban green spaces in alleviating the adverse effects of urbanization trends by developing an integrative methodology, ranging from participatory planning tools to numerical simulation models. The influence of urban structures on the distribution of atmospheric pollutants is investigated as a multi-scale problem ranging from the micro to the macro/regional scale. Traditionally, air quality models are applied on a single scale, seldom considering the joint effects of the traffic network and urban development together. In BUGS, several numerical models are applied to cope with urban complexity and to provide quantitative and qualitative results. The differing input data requirements of the various models demanded a methodology which ensures a coherent data extraction and application procedure. In this paper, the stepwise procedure used for BUGS is presented after a general presentation of the research project and the models involved. A discussion section highlights the implications of the choices made, and a concluding section offers some insights for future investigations.
Use of Green's functions in the numerical solution of two-point boundary value problems
NASA Technical Reports Server (NTRS)
Gallaher, L. J.; Perlin, I. E.
1974-01-01
This study investigates the use of Green's functions in the numerical solution of the two-point boundary value problem. The first part deals with the role of the Green's function in solving both linear and nonlinear second order ordinary differential equations with boundary conditions and systems of such equations. The second part describes procedures for numerical construction of Green's functions and considers briefly the conditions for their existence. Finally, there is a description of some numerical experiments using nonlinear problems for which the known existence, uniqueness or convergence theorems do not apply. Examples here include some problems in finding rendezvous orbits of the restricted three body system.
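For the model problem -u'' = f on (0,1) with homogeneous Dirichlet conditions, the Green's function is known in closed form and the solution is a quadrature of G against f. A minimal sketch of this first part (Python with NumPy; the right-hand side is chosen so the exact solution is known):

    import numpy as np

    def green(x, s):
        """Green's function of -u'' = f on (0, 1) with u(0) = u(1) = 0."""
        return np.where(s <= x, s * (1.0 - x), x * (1.0 - s))

    f = lambda s: np.pi**2 * np.sin(np.pi * s)   # exact solution is then sin(pi*x)

    s = np.linspace(0.0, 1.0, 2001)
    h = s[1] - s[0]
    x = s[::200]                                  # evaluation points lie on the s-grid
    # trapezoidal rule; endpoint terms vanish because G(x, 0) = G(x, 1) = 0
    u = np.array([h * np.sum(green(xi, s) * f(s)) for xi in x])
    print(np.max(np.abs(u - np.sin(np.pi * x))))  # small O(h^2) quadrature error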
ERIC Educational Resources Information Center
Garske, Steven Ray
2010-01-01
Backsourcing is the act of an organization changing an outsourcing relationship through insourcing, vendor change, or elimination of the outsourced service. This study discovered numerous problematic outsourcing manipulations conducted by suppliers, and identified backsourcing methodologies to correct these manipulations across multiple supplier…
Conjugate gradient based projection - A new explicit methodology for frictional contact
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Li, Maocheng; Sha, Desong
1993-01-01
With special attention towards applicability to parallel computation and vectorization, a new and effective explicit approach for linear complementarity formulations involving a conjugate gradient based projection methodology is proposed in this study for contact problems with Coulomb friction. The overall objectives are focused towards providing an explicit methodology of computation for the complete contact problem with friction. In this regard, the primary idea for solving the linear complementarity formulations stems from an established search direction which is projected onto the feasible region determined by the non-negative constraint condition; this direction is then applied within the Fletcher-Reeves conjugate gradient method, resulting in a powerful explicit methodology which possesses high accuracy, excellent convergence characteristics and fast computational speed, and is relatively simple to implement for contact problems involving Coulomb friction.
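A compact way to picture the core idea, a Fletcher-Reeves search direction combined with projection onto the non-negative constraint set, is the bound-constrained quadratic program min ½xᵀAx − bᵀx, x ≥ 0, which has the structure of a linear complementarity problem. The sketch below is Python; the restart-on-projection rule and the small test matrix are our own illustrative choices, not the authors' algorithm:

    import numpy as np

    def projected_fr_cg(A, b, x0, iters=500, tol=1e-12):
        """Fletcher-Reeves CG with projection onto {x >= 0} for min 0.5 x'Ax - b'x."""
        x = np.maximum(x0, 0.0)
        g = A @ x - b                            # gradient of the quadratic objective
        d = -g
        for _ in range(iters):
            denom = d @ (A @ d)
            if denom <= 0.0:
                break
            alpha = -(g @ d) / denom             # exact line search for the quadratic
            x_new = np.maximum(x + alpha * d, 0.0)   # project onto the feasible region
            g_new = A @ x_new - b
            if np.linalg.norm(x_new - x) < tol:
                return x_new
            if np.any(x_new != x + alpha * d):   # projection was active:
                d = -g_new                       # restart with steepest descent
            else:
                beta = (g_new @ g_new) / (g @ g) # Fletcher-Reeves coefficient
                d = -g_new + beta * d
            x, g = x_new, g_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])       # small SPD test problem (illustrative)
    b = np.array([1.0, -2.0])
    print(projected_fr_cg(A, b, np.zeros(2)))    # -> approx [0.25, 0.0]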
Spinal Cord Injury-Induced Dysautonomia via Plasticity in Paravertebral Sympathetic Postganglionic
2017-10-01
their near anatomical inaccessibility. We have solved the accessibility problem with a strategic methodological advance. We will determine the extent to which paravertebral
Human Prenatal Effects: Methodological Problems and Some Suggested Solutions
ERIC Educational Resources Information Center
Copans, Stuart A.
1974-01-01
Briefly reviews the relevant literature on human prenatal effects, describes some of the possible designs for such studies; and discusses some of the methodological problem areas: sample choice, measurement of prenatal variables, monitoring of labor and delivery, and neonatal assessment. (CS)
Zhang, Yong-Feng; Chiang, Hsiao-Dong
2017-09-01
A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.
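Of the three stages, the middle one is the easiest to sketch. Below is a plain global-best PSO in Python applied to a smooth benchmark; the consensus mechanism and the Trust-Tech stage of the paper are not reproduced, and all parameter values (inertia, acceleration coefficients, population size) are generic textbook choices:

    import numpy as np

    def pso(f, lb, ub, pop=40, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal global-best particle swarm optimizer (minimization)."""
        rng = np.random.default_rng(seed)
        dim = len(lb)
        X = rng.uniform(lb, ub, (pop, dim))      # particle positions
        V = np.zeros((pop, dim))                 # particle velocities
        P, Pf = X.copy(), np.array([f(x) for x in X])   # personal bests
        g = P[Pf.argmin()].copy()                # global best
        for _ in range(iters):
            r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
            V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
            X = np.clip(X + V, lb, ub)
            F = np.array([f(x) for x in X])
            better = F < Pf
            P[better], Pf[better] = X[better], F[better]
            g = P[Pf.argmin()].copy()
        return g, Pf.min()

    print(pso(lambda x: float(np.sum(x**2)), np.full(10, -5.0), np.full(10, 5.0)))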
Comments on "Failures in detecting volcanic ash from a satellite-based technique"
Prata, F.; Bluth, G.; Rose, B.; Schneider, D.; Tupper, A.
2001-01-01
The recent paper by Simpson et al. [Remote Sens. Environ. 72 (2000) 191.] on failures to detect volcanic ash using the 'reverse' absorption technique provides a timely reminder of the danger that volcanic ash presents to aviation and the urgent need for some form of effective remote detection. The paper unfortunately suffers from a fundamental flaw in its methodology and numerous errors of fact and interpretation. For the moment, the 'reverse' absorption technique provides the best means for discriminating volcanic ash clouds from meteorological clouds. The purpose of our comment is not to defend any particular algorithm; rather, we point out some problems with Simpson et al.'s analysis and re-state the conditions under which the 'reverse' absorption algorithm is likely to succeed. © 2001 Elsevier Science Inc. All rights reserved.
Measurement of the translation and rotation of a sphere in fluid flow
NASA Astrophysics Data System (ADS)
Barros, Diogo; Hiltbrand, Ben; Longmire, Ellen K.
2018-06-01
The problem of determining the translation and rotation of a spherical particle moving in fluid flow is considered. Lagrangian tracking of markers printed over the surface of a sphere is employed to compute the center motion and the angular velocity of the solid body. The method initially calculates the sphere center from the 3D coordinates of the reconstructed markers, then finds the optimal rotation matrix that aligns a set of markers tracked at sequential time steps. The parameters involved in the experimental implementation of this procedure are discussed, and the associated uncertainty is estimated from numerical analysis. Finally, the proposed methodology is applied to characterize the motion of a large spherical particle released in a turbulent boundary layer developing in a water channel.
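Finding the optimal rotation matrix that aligns matched marker sets at two time steps is an orthogonal Procrustes (Wahba) problem with a standard SVD solution, often called the Kabsch algorithm. A sketch under that assumption (Python; synthetic markers and a known test rotation, not the experimental data):

    import numpy as np

    def sphere_motion(P, Q):
        """Best-fit rotation R and center shift mapping markers P onto Q (Kabsch/SVD).

        P, Q: (n, 3) arrays of matched marker positions at consecutive time steps."""
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)   # sphere-center estimates
        U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # exclude reflections
        return Vt.T @ D @ U.T, cQ - cP

    # synthetic check with a known rotation about z and a known center shift
    rng = np.random.default_rng(1)
    P = rng.standard_normal((20, 3))
    P -= P.mean(axis=0)                           # markers relative to the sphere center
    th = 0.1
    Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
    Q = P @ Rz.T + np.array([0.5, 0.0, 0.0])
    R, t = sphere_motion(P, Q)
    print(np.allclose(R, Rz), t)                  # -> True, approx [0.5, 0.0, 0.0]

The angular velocity then follows from the rotation angle of R divided by the time step between frames.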
NASA Technical Reports Server (NTRS)
Syed, S. A.; Chiappetta, L. M.
1985-01-01
A methodological evaluation of two finite-differencing schemes for computer-aided gas turbine design is presented. The two computational schemes are a Bounded Skewed Upwind Differencing Scheme (BSUDS) and a Quadratic Upwind Differencing Scheme (QUDS). In the evaluation, the derivations of the schemes were incorporated into two-dimensional and three-dimensional versions of the Teaching Axisymmetric Characteristics Heuristically (TEACH) computer code. Assessments were made according to performance criteria for the solution of problems of turbulent, laminar, and coannular turbulent flow. The specific performance criteria used in the evaluation were simplicity, accuracy, and computational economy. It was found that the BSUDS scheme performed better with respect to the criteria than QUDS. Some of the reasons for the more successful performance of BSUDS are discussed.
NASA Technical Reports Server (NTRS)
Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)
1998-01-01
Recent advances in computational power enable computational fluid dynamic modeling of increasingly complex configurations. A review of grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 is presented. In strategizing topological constructs and blocking structures, the factors considered are the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, the physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of the grid is demonstrated through a comparison of computational and experimental results for the aeroheating environment experienced by the X-38 vehicle. Special topics in grid generation strategies, such as modeling control surface deflections and material mapping, are also addressed.
S&T converging trends in dealing with disaster: A review on AI tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasan, Abu Bakar, E-mail: abakarh@usim.edu.my; Isa, Mohd Hafez Mohd.
Science and Technology (S&T) has been able to help mankind to solve or minimize problems when they arise. Different methodologies, techniques and tools were developed or used for specific cases by researchers, engineers, and scientists throughout the world, and numerous papers and articles have been written by them. Nine selected cases such as flash flood, earthquakes, workplace accident, fault in aircraft industry, seismic vulnerability, disaster mitigation and management, and early fault detection in nuclear industry have been studied. This paper looked at those cases, and their results showed nearly 60% uses artificial intelligence (AI) as a tool. This paper also did some review that will help young researchers in deciding the types of AI tools to be selected, thus pointing to future trends in S&T.
Kowalski, Karol
2009-05-21
In this article we discuss the problem of proper balancing of the noniterative corrections to the ground- and excited-state energies obtained with approximate coupled cluster (CC) and equation-of-motion CC (EOMCC) approaches. It is demonstrated that, for a class of excited states dominated by single excitations and for states with a medium doubly excited component, the newly introduced nested variant of the method of moments of CC equations provides a mathematically rigorous way of balancing the ground- and excited-state correlation effects. The resulting noniterative methodology accounting for the effect of triples is tested using its parallel implementation on systems for which iterative CC/EOMCC calculations with full inclusion of triply excited configurations, or their most important subset, are numerically feasible.
Petroleum refinery operational planning using robust optimization
NASA Astrophysics Data System (ADS)
Leiras, A.; Hamacher, S.; Elkamel, A.
2010-12-01
In this article, the robust optimization methodology is applied to deal with uncertainties in the prices of saleable products, operating costs, product demand, and product yield in the context of refinery operational planning. A numerical study demonstrates the effectiveness of the proposed robust approach. The benefits of incorporating uncertainty in the different model parameters were evaluated in terms of the cost of ignoring uncertainty in the problem. The calculations suggest that this benefit is equivalent to 7.47% of the deterministic solution value, which indicates that the robust model may offer advantages to those involved with refinery operational planning. In addition, the probability bounds of constraint violation are calculated to help the decision-maker adopt a more appropriate parameter to control robustness and judge the tradeoff between conservatism and total profit.
Integration of PGD-virtual charts into an engineering design process
NASA Astrophysics Data System (ADS)
Courard, Amaury; Néron, David; Ladevèze, Pierre; Ballere, Ludovic
2016-04-01
This article deals with the efficient construction of approximations of fields and quantities of interest used in the geometric optimisation of complex shapes that can be encountered in engineering structures. The strategy developed herein is based on the construction of virtual charts that, once computed offline, allow the structure to be optimised at negligible online CPU cost. These virtual charts can be used as a powerful numerical decision support tool during the design of industrial structures. They are built using the proper generalized decomposition (PGD), which offers a very convenient framework for solving parametrised problems. In this paper, particular attention has been paid to the integration of the procedure into a genuine engineering design process. In particular, a dedicated methodology is proposed to interface the PGD approach with commercial software.
Generation of linear dynamic models from a digital nonlinear simulation
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Krosel, S. M.
1979-01-01
The results and methodology used to derive linear models from a nonlinear simulation are presented. It is shown that averaged positive and negative perturbations in the state variables can reduce numerical errors in finite-difference partial-derivative approximations and, in the control inputs, can better approximate the system response in both directions about the operating point. Both explicit and implicit formulations are addressed. Linear models are derived for the F100 engine, and comparisons of transients are made with the nonlinear simulation. The problem posed by startup transients in the nonlinear simulation when making these comparisons is addressed. Also, reduction of the linear models is investigated using modal and normal techniques. Reduced-order models of the F100 are derived and compared with the full-state models.
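The averaged positive and negative perturbation described above is a central difference. A minimal sketch of extracting the state and control Jacobians A = ∂f/∂x and B = ∂f/∂u from a black-box model (Python; the toy two-state model and the step sizes are assumptions, not the F100 simulation):

    import numpy as np

    def linearize(f, x0, u0, dx=1e-4, du=1e-4):
        """Averaged +/- perturbations give central-difference A = df/dx, B = df/du."""
        n, m = len(x0), len(u0)
        A, B = np.zeros((n, n)), np.zeros((n, m))
        for j in range(n):
            e = np.zeros(n); e[j] = dx
            A[:, j] = (f(x0 + e, u0) - f(x0 - e, u0)) / (2.0 * dx)
        for j in range(m):
            e = np.zeros(m); e[j] = du
            B[:, j] = (f(x0, u0 + e) - f(x0, u0 - e)) / (2.0 * du)
        return A, B

    # hypothetical nonlinear "plant", not an engine model
    f = lambda x, u: np.array([-x[0]**2 + u[0], x[0] - 3.0 * x[1]])
    A, B = linearize(f, np.array([1.0, 0.5]), np.array([1.0]))
    print(A)   # approx [[-2, 0], [1, -3]]
    print(B)   # approx [[1], [0]]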
Application of neural networks and sensitivity analysis to improved prediction of trauma survival.
Hunter, A; Kennedy, L; Henry, J; Ferguson, I
2000-05-01
The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
NASA Technical Reports Server (NTRS)
Chhikara, R. S.; Perry, C. R., Jr. (Principal Investigator)
1980-01-01
The problem of determining the stratum variances required for an optimum sample allocation in remotely sensed crop surveys is investigated, with emphasis on an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using existing and easily available historical statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results obtained. It is shown that the proposed technique is viable and performs satisfactorily with the use of a conservative value (smaller than the expected value) for the field size and with the use of crop statistics from the small political division level.
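Once stratum variances are estimated, the optimum sample allocation follows the classical Neyman rule, with n_h proportional to N_h S_h. A small sketch (Python; the stratum sizes and standard deviations are hypothetical numbers, not the Great Plains statistics):

    import numpy as np

    def neyman_allocation(N_h, S_h, n_total):
        """Optimum (Neyman) allocation: n_h proportional to N_h * S_h."""
        w = N_h * S_h
        # rounding may make the sum differ from n_total by a unit or two
        return np.rint(n_total * w / w.sum()).astype(int)

    N_h = np.array([1200, 800, 400])      # hypothetical stratum sizes (sampling units)
    S_h = np.array([0.12, 0.25, 0.40])    # hypothetical stratum standard deviations
    print(neyman_allocation(N_h, S_h, 100))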
Human computers: the first pioneers of the information age.
Grier, D A
2001-03-01
Before computers were machines, they were people. They were men and women, young and old, well educated and common. They were the workers who convinced scientists that large-scale calculation had value. Long before Presper Eckert and John Mauchly built the ENIAC at the Moore School of Electrical Engineering, Philadelphia, or Maurice Wilkes designed the EDSAC at Cambridge University, human computers had created the discipline of computation. They developed numerical methodologies and proved them on practical problems. These human computers were not savants or calculating geniuses. Some knew little more than basic arithmetic. A few were near equals of the scientists they served and, in a different time or place, might have become practicing scientists had they not been barred from a scientific career by their class, education, gender or ethnicity.
Heat transfer monitoring by means of the hot wire technique and finite element analysis software.
Hernández Wong, J; Suarez, V; Guarachi, J; Calderón, A; Rojas-Trigos, J B; Juárez, A G; Marín, E
2014-01-01
The study of radial heat transfer in a homogeneous and isotropic substance with a linear heat source along its axial axis is reported. For this purpose, the hot wire characterization technique has been used in order to obtain the temperature distribution as a function of radial distance from the axial axis and of exposure time. Also, the solution of the transient heat transport equation for this problem was obtained under appropriate boundary conditions by means of the finite element technique. A comparison between experimental results, the conventional theoretical model, and numerically simulated results is made to demonstrate the utility of the finite element analysis simulation methodology in the investigation of the thermal response of substances. Copyright © 2013 Elsevier Ltd. All rights reserved.
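The conventional theoretical model referred to above is the ideal line-source solution, in which the temperature rise around the wire is an exponential integral. A minimal sketch (Python with SciPy; the heating power and the water-like property values are assumed for illustration):

    import numpy as np
    from scipy.special import exp1

    def line_source_dT(r, t, q=1.0, k=0.6, alpha=1.4e-7):
        """Ideal hot-wire model: temperature rise around an infinite line source.

        dT = (q / 4 pi k) * E1(r^2 / (4 alpha t)); q in W/m, k in W/(m K),
        alpha in m^2/s. Property values are rough water-like numbers (assumed)."""
        return q / (4.0 * np.pi * k) * exp1(r**2 / (4.0 * alpha * t))

    # temperature rise 1 mm from the wire at three exposure times
    print(line_source_dT(r=1e-3, t=np.array([1.0, 10.0, 100.0])))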
Detecting opportunities for parallel observations on the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Lucks, Michael
1992-01-01
The presence of multiple scientific instruments aboard the Hubble Space Telescope provides opportunities for parallel science, i.e., the simultaneous use of different instruments for different observations. Determining whether candidate observations are suitable for parallel execution depends on numerous criteria (some involving quantitative tradeoffs) that may change frequently. A knowledge-based approach is presented for constructing a scoring function to rank candidate pairs of observations for parallel science. In the Parallel Observation Matching System (POMS), spacecraft knowledge and schedulers' preferences are represented using a uniform set of mappings, or knowledge functions. Assessment of parallel science opportunities is achieved via composition of the knowledge functions in a prescribed manner. The knowledge acquisition and explanation facilities of the system are presented. The methodology is applicable to many other multiple-criteria assessment problems.
Discrete size optimization of steel trusses using a refined big bang-big crunch algorithm
NASA Astrophysics Data System (ADS)
Hasançebi, O.; Kazemzadeh Azad, S.
2014-01-01
This article presents a methodology for the design optimization of steel truss structures based on a refined big bang-big crunch (BB-BC) algorithm. It is shown that a standard formulation of the BB-BC algorithm occasionally falls short of producing acceptable solutions to problems from discrete size optimum design of steel trusses. A reformulation of the algorithm is proposed and implemented for design optimization of various discrete truss structures according to American Institute of Steel Construction Allowable Stress Design (AISC-ASD) specifications. Furthermore, the performance of the proposed BB-BC algorithm is compared to its standard version as well as other well-known metaheuristic techniques. The numerical results confirm the efficiency of the proposed algorithm in practical design optimization of truss structures.
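For orientation, the big bang-big crunch cycle alternates between scattering candidates around a center of mass and contracting toward it with a shrinking radius. A continuous-variable sketch on a smooth benchmark (Python; the paper's refinement and the discrete AISC-ASD truss sizing are not reproduced here, and the weighting and decay schedule are common textbook choices):

    import numpy as np

    def big_bang_big_crunch(f, lb, ub, pop=50, iters=200, seed=0):
        """Minimal continuous BB-BC optimizer (minimization)."""
        rng = np.random.default_rng(seed)
        dim = len(lb)
        X = rng.uniform(lb, ub, size=(pop, dim))        # initial "big bang"
        best, fbest = None, np.inf
        for k in range(1, iters + 1):
            F = np.array([f(x) for x in X])
            i = F.argmin()
            if F[i] < fbest:
                best, fbest = X[i].copy(), F[i]
            w = 1.0 / (F - F.min() + 1e-12)             # fitness-based weights
            center = (w[:, None] * X).sum(axis=0) / w.sum()   # "big crunch"
            sigma = (ub - lb) / k                       # shrinking search radius
            X = np.clip(center + sigma * rng.standard_normal((pop, dim)), lb, ub)
        return best, fbest

    f = lambda x: float(np.sum(x**2))                   # sphere test function
    print(big_bang_big_crunch(f, lb=np.full(5, -10.0), ub=np.full(5, 10.0)))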
Potential of neuro-fuzzy methodology to estimate noise level of wind turbines
NASA Astrophysics Data System (ADS)
Nikolić, Vlastimir; Petković, Dalibor; Por, Lip Yee; Shamshirband, Shahaboddin; Zamani, Mazdak; Ćojbašić, Žarko; Motamedi, Shervin
2016-01-01
Wind turbine noise has become a major problem because of the increasing number of wind farms as renewable energy becomes one of the most influential energy sources. However, wind turbine noise generation and propagation are not understood in all aspects. The mechanical noise of wind turbines can be neglected, since the aerodynamic noise of the wind turbine blades is the main source of noise generation. Numerical simulation of the noise effects of a wind turbine can be a very challenging task. Therefore, in this article a soft computing method is used to evaluate the noise level of wind turbines. The main goal of the study is to estimate wind turbine noise with regard to wind speed at different heights and for different sound frequencies. An adaptive neuro-fuzzy inference system (ANFIS) is used to estimate the wind turbine noise levels.
Overcoming an obstacle in expanding a UMLS semantic type extent.
Chen, Yan; Gu, Huanying; Perl, Yehoshua; Geller, James
2012-02-01
This paper strives to overcome a major problem encountered by a previous expansion methodology for discovering concepts highly likely to be missing a specific semantic type assignment in the UMLS. This methodology is the basis for an algorithm that presents the discovered concepts to a human auditor for review and possible correction. We analyzed the problem of the previous expansion methodology and discovered that it was due to an obstacle constituted by one or more concepts assigned the UMLS Semantic Network semantic type Classification. A new methodology was designed that bypasses such an obstacle without a combinatorial explosion in the number of concepts presented to the human auditor for review. The new expansion methodology with obstacle avoidance was tested with the semantic type Experimental Model of Disease and found over 500 concepts missed by the previous methodology that are in need of this semantic type assignment. Furthermore, other semantic types suffering from the same major problem were discovered, indicating that the methodology is of more general applicability. The algorithmic discovery of concepts that are likely missing a semantic type assignment is possible even in the face of obstacles, without an explosion in the number of processed concepts. Copyright © 2011 Elsevier Inc. All rights reserved.
Numerical Leak Detection in a Pipeline Network of Complex Structure with Unsteady Flow
NASA Astrophysics Data System (ADS)
Aida-zade, K. R.; Ashrafova, E. R.
2017-12-01
An inverse problem for a pipeline network of complex loopback structure is solved numerically. The problem is to determine the locations and amounts of leaks from unsteady flow characteristics measured at some pipeline points. The features of the problem include impulse functions involved in a system of hyperbolic differential equations, the absence of classical initial conditions, and boundary conditions specified as nonseparated relations between the states at the endpoints of adjacent pipeline segments. The problem is reduced to a parametric optimal control problem without initial conditions, but with nonseparated boundary conditions. The latter problem is solved by applying first-order optimization methods. Results of numerical experiments are presented.
Summary of research in applied mathematics, numerical analysis, and computer sciences
NASA Technical Reports Server (NTRS)
1986-01-01
The major categories of current ICASE research programs addressed include: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effective numerical methods; computational problems in engineering and physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and computer systems and software, especially vector and parallel computers.
Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media
NASA Astrophysics Data System (ADS)
Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo
2016-04-01
The proposed methodology was originally developed by our scientific team in Split, who designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multiresolution approach are: 1) the computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales; 2) multiresolution representation of heterogeneity as well as of all other input and output variables; 3) an accurate, adaptive and efficient strategy; and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is analyzed separately, and the adaptive and multiscale nature of the methodology enables not only computational efficiency and accuracy, but also a description of subsurface processes closely related to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements within the proposed methodology. Since state-of-the-art multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only along lines where solution changes are intensive. The application of Fup basis functions enables continuous time approximation, simple interpolation calculations across different temporal lines, and local time-stepping control. A critical aspect of time-integration accuracy is the construction of the spatial stencil needed for accurate calculation of spatial derivatives. Whereas the common approach for wavelets and splines uses a finite-difference operator, we develop here a collocation operator that includes solution values and the differential operator. In this way, the new improved algorithm is adaptive in space and time, enabling accurate solutions of groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between the collocation and finite volume approaches are discussed. Finally, the results show the application of the methodology to groundwater flow problems in highly heterogeneous confined and unconfined aquifers.
Fuchs, Lynn S; Geary, David C; Compton, Donald L; Fuchs, Douglas; Hamlett, Carol L; Seethaler, Pamela M; Bryant, Joan D; Schatschneider, Christopher
2010-11-01
The purpose of this study was to examine the interplay between basic numerical cognition and domain-general abilities (such as working memory) in explaining school mathematics learning. First graders (N = 280; mean age = 5.77 years) were assessed on 2 types of basic numerical cognition, 8 domain-general abilities, procedural calculations, and word problems in fall and then reassessed on procedural calculations and word problems in spring. Development was indexed by latent change scores, and the interplay between numerical and domain-general abilities was analyzed by multiple regression. Results suggest that the development of different types of formal school mathematics depends on different constellations of numerical versus general cognitive abilities. When controlling for 8 domain-general abilities, both aspects of basic numerical cognition were uniquely predictive of procedural calculations and word problems development. Yet, for procedural calculations development, the additional amount of variance explained by the set of domain-general abilities was not significant, and only counting span was uniquely predictive. By contrast, for word problems development, the set of domain-general abilities did provide additional explanatory value, accounting for about the same amount of variance as the basic numerical cognition variables. Language, attentive behavior, nonverbal problem solving, and listening span were uniquely predictive.
E-therapy for mental health problems: a systematic review.
Postel, Marloes G; de Haan, Hein A; De Jong, Cor A J
2008-09-01
The widespread availability of the Internet offers opportunities for improving access to therapy for people with mental health problems. There is a seemingly infinite supply of Internet-based interventions available on the World Wide Web. The aim of the present study is to systematically assess the methodological quality of randomized controlled trials (RCTs) concerning e-therapy for mental health problems. Two reviewers independently assessed the methodological quality of the RCTs, based on a list of criteria for the methodological quality assessment as recommended by the Cochrane Back Review Group. The search yielded 14 papers that reported RCTs concerning e-therapy for mental-health problems. The methodological quality of studies included in this review was generally low. It is concluded that e-therapy may turn out to be an appropriate therapeutic entity, but the evidence needs to be more convincing. Recommendations are made concerning the method of reporting RCTs and the need to add some content items to an e-therapy study.
Navy Community of Practice for Programmers and Developers
2016-12-01
execute cyber missions. The methodology employed in this research is human-centered design via a social interaction prototype, which allows us to learn...for Navy programmers and developers. Chapter V details the methodology used to design the proposed CoP. This chapter summarizes the results from...thirty years the term has evolved to incorporate ideas from numerous design methodologies and movements [57]. In the 1980s, revealed design began to
Manfredi, Simone; Cristobal, Jorge
2016-09-01
Trying to respond to the latest policy needs, the work presented in this article aims at developing a life-cycle based framework methodology to quantitatively evaluate the environmental and economic sustainability of European food waste management options. The methodology is structured into six steps aimed at defining boundaries and scope of the evaluation, evaluating environmental and economic impacts and identifying best performing options. The methodology is able to accommodate additional assessment criteria, for example the social dimension of sustainability, thus moving towards a comprehensive sustainability assessment framework. A numerical case study is also developed to provide an example of application of the proposed methodology to an average European context. Different options for food waste treatment are compared, including landfilling, composting, anaerobic digestion and incineration. The environmental dimension is evaluated with the software EASETECH, while the economic assessment is conducted based on different indicators expressing the costs associated with food waste management. Results show that the proposed methodology allows for a straightforward identification of the most sustainable options for food waste, thus can provide factual support to decision/policy making. However, it was also observed that results markedly depend on a number of user-defined assumptions, for example on the choice of the indicators to express the environmental and economic performance. © The Author(s) 2016.
A general higher-order remap algorithm for ALE calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiravalle, Vincent P
2011-01-05
A numerical technique for solving the equations of fluid dynamics with arbitrary mesh motion is presented. The three phases of the Arbitrary Lagrangian Eulerian (ALE) methodology are outlined: the Lagrangian phase, grid relaxation phase and remap phase. The Lagrangian phase follows a well known approach from the HEMP code; in addition the strain rate and flow divergence are calculated in a consistent manner according to Margolin. A donor cell method from the SALE code forms the basis of the remap step, but unlike SALE a higher order correction based on monotone gradients is also added to the remap. Four test problems were explored to evaluate the fidelity of these numerical techniques, as implemented in a simple test code, written in the C programming language, called Cercion. Novel cell-centered data structures are used in Cercion to reduce the complexity of the programming and maximize the efficiency of memory usage. The locations of the shock and contact discontinuity in the Riemann shock tube problem are well captured. Cercion demonstrates a high degree of symmetry when calculating the Sedov blast wave solution, with a peak density at the shock front that is similar to the value determined by the RAGE code. For a flyer plate test problem both Cercion and FLAG give virtually the same velocity temporal profile at the target-vacuum interface. When calculating a cylindrical implosion of a steel shell, Cercion and FLAG agree well and the Cercion results are insensitive to the use of ALE.
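The remap phase can be illustrated in one dimension: the mass carried across each moving face is the swept volume times the value in the donor (upwind) cell, which is conservative by construction; the paper adds a monotone-gradient correction on top of this first-order transfer. A sketch (Python; assumes each face moves less than one local cell width, and the grids and step profile are made up for the test):

    import numpy as np

    def donor_cell_remap(x_old, x_new, q):
        """First-order donor-cell remap of cell averages q from x_old to x_new."""
        mass = q * np.diff(x_old)                 # conserved quantity per cell
        for i in range(1, len(x_old) - 1):        # interior faces; endpoints fixed
            swept = x_new[i] - x_old[i]           # signed volume swept by face i
            donor = i if swept > 0.0 else i - 1   # cell the swept material comes from
            mass[i - 1] += swept * q[donor]
            mass[i] -= swept * q[donor]
        return mass / np.diff(x_new)

    x_old = np.linspace(0.0, 1.0, 11)
    x_new = x_old + 0.02 * np.sin(2.0 * np.pi * x_old)   # relaxed grid, ends fixed
    q = np.where(x_old[:-1] < 0.5, 1.0, 0.0)             # step profile
    q_new = donor_cell_remap(x_old, x_new, q)
    print(q_new, (q_new * np.diff(x_new)).sum())         # total mass is preserved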
A numerical solution of a singular boundary value problem arising in boundary layer theory.
Hu, Jiancheng
2016-01-01
In this paper, a second-order nonlinear singular boundary value problem is presented, which is equivalent to the well-known Falkner-Skan equation. The one-dimensional third-order boundary value problem on the interval [Formula: see text] is equivalently transformed into a second-order boundary value problem on the finite interval [Formula: see text]. The finite difference method is utilized to solve the singular boundary value problem, and the amount of computational effort is significantly less than with other numerical methods. The numerical solutions obtained by the finite difference method are in agreement with those obtained by previous authors.
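For context, the Falkner-Skan problem in its usual form is f''' + f f'' + β(1 − f'²) = 0 with f(0) = f'(0) = 0 and f'(∞) = 1. The sketch below (Python with SciPy) simply truncates the semi-infinite interval rather than using the paper's finite-interval transformation; the wedge parameter and truncation length are assumed values:

    import numpy as np
    from scipy.integrate import solve_bvp

    beta = 0.5                  # Hartree wedge parameter (assumed; beta = 0 is Blasius)
    L = 8.0                     # truncation of the semi-infinite interval (assumption)

    def rhs(eta, y):            # y = (f, f', f'')
        return np.vstack([y[1], y[2], -y[0] * y[2] - beta * (1.0 - y[1]**2)])

    def bc(ya, yb):             # f(0) = 0, f'(0) = 0, f'(L) = 1 approximates f'(inf) = 1
        return np.array([ya[0], ya[1], yb[1] - 1.0])

    eta = np.linspace(0.0, L, 81)
    y0 = np.vstack([np.log(np.cosh(eta)), np.tanh(eta), 1.0 / np.cosh(eta)**2])
    sol = solve_bvp(rhs, bc, eta, y0)
    print(sol.y[2, 0])          # wall shear f''(0); approx 0.93 for beta = 0.5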
Fitting methods to paradigms: are ergonomics methods fit for systems thinking?
Salmon, Paul M; Walker, Guy H; M Read, Gemma J; Goode, Natassia; Stanton, Neville A
2017-02-01
The issues being tackled within ergonomics problem spaces are shifting. Although existing paradigms appear relevant for modern day systems, it is worth questioning whether our methods are. This paper asks whether the complexities of systems thinking, a currently ubiquitous ergonomics paradigm, are outpacing the capabilities of our methodological toolkit. This is achieved through examining the contemporary ergonomics problem space and the extent to which ergonomics methods can meet the challenges posed. Specifically, five key areas within the ergonomics paradigm of systems thinking are focused on: normal performance as a cause of accidents, accident prediction, system migration, systems concepts and ergonomics in design. The methods available for pursuing each line of inquiry are discussed, along with their ability to respond to key requirements. In doing so, a series of new methodological requirements and capabilities are identified. It is argued that further methodological development is required to provide researchers and practitioners with appropriate tools to explore both contemporary and future problems. Practitioner Summary: Ergonomics methods are the cornerstone of our discipline. This paper examines whether our current methodological toolkit is fit for purpose given the changing nature of ergonomics problems. The findings provide key research and practice requirements for methodological development.
Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity
NASA Technical Reports Server (NTRS)
Baker, John G.; Van Meter, James R.
2005-01-01
A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.
Benchmark Problems of the Geothermal Technologies Office Code Comparison Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Mark D.; Podgorney, Robert; Kelkar, Sharad M.
A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office has sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. Study participants submitted solutions to problems for which their simulation tools were deemed capable or nearly capable. Some participating codes were originally developed for EGS applications whereas some others were designed for different applications but can simulate processes similar to those in EGS. Solution submissions from both were encouraged. In some cases, participants made small incremental changes to their numerical simulation codes to address specific elements of the problem, and in other cases participants submitted solutions with existing simulation tools, acknowledging the limitations of the code. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research, stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems were designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners.
Increasing accuracy in the assessment of motion sickness: A construct methodology
NASA Technical Reports Server (NTRS)
Stout, Cynthia S.; Cowings, Patricia S.
1993-01-01
The purpose is to introduce a new methodology that should improve the accuracy of the assessment of motion sickness. This construct methodology utilizes both subjective reports of motion sickness and objective measures of physiological correlates to assess motion sickness. Current techniques and methods used in the framework of a construct methodology are inadequate. Current assessment techniques for diagnosing motion sickness and space motion sickness are reviewed, and attention is called to the problems with the current methods. Further, principles of psychophysiology that when applied will probably resolve some of these problems are described in detail.
NASA Astrophysics Data System (ADS)
Lukyanenko, D. V.; Shishlenin, M. A.; Volkov, V. T.
2018-01-01
We propose a numerical method for solving the coefficient inverse problem for a nonlinear singularly perturbed reaction-diffusion-advection equation with final-time observation data, based on asymptotic analysis and the gradient method. Asymptotic analysis allows us to extract a priori information about the interior layer (moving front) that appears in the direct problem and the boundary layers that appear in the conjugate problem. We describe and implement a method of constructing a dynamically adapted mesh based on this a priori information. The dynamically adapted mesh significantly reduces the complexity of the numerical calculations and improves the numerical stability in comparison with the usual approaches. A numerical example shows the effectiveness of the proposed method.
Rating of Dynamic Coefficient for Simple Beam Bridge Design on High-Speed Railways
NASA Astrophysics Data System (ADS)
Diachenko, Leonid; Benin, Andrey; Smirnov, Vladimir; Diachenko, Anastasia
2018-06-01
The aim of this work is to improve the methodology for the dynamic computation of simple beam spans under the action of high-speed trains. Mathematical simulation utilizing numerical and analytical methods of structural mechanics is used in the research. The article analyses the parameters of the effect of high-speed trains on simple beam bridge spans and suggests a technique for determining the dynamic index for the live load. The reliability of the proposed methodology is confirmed by the results of numerical simulation of high-speed train passage over spans at different speeds. The proposed algorithm of dynamic computation is based on a connection between the maximum acceleration of the span in the resonance mode of vibrations and the main factors of the stress-strain state. The methodology allows determining the maximum as well as the minimum values of the main internal forces in the structure, which makes it possible to perform endurance tests. It is noted that the dynamic additions for the components of the stress-strain state (bending moments, transverse force and vertical deflections) are different. This condition determines the need for a differentiated approach to the evaluation of dynamic coefficients when performing design verification for limit states of groups I and II. The practical importance: the methodology for determining the dynamic coefficients allows performing the dynamic calculation and determining the main internal forces in simple beam spans without numerical simulation and direct dynamic analysis, which significantly reduces design labour costs.
Problem Solving in Biology: A Methodology
ERIC Educational Resources Information Center
Wisehart, Gary; Mandell, Mark
2008-01-01
A methodology is described that teaches science process by combining informal logic and a heuristic for rating factual reliability. This system facilitates student hypothesis formation, testing, and evaluation of results. After problem solving with this scheme, students are asked to examine and evaluate arguments for the underlying principles of…
SOME POSSIBLE APPLICATIONS OF PROJECT OUTCOMES RESEARCH METHODOLOGY
Section I refers to the possibility of using the theory and methodology of Project Outcomes to problems of strategic information. It is felt that...purposes of assessing present and future organizational effectiveness. Section IV refers to the applications that our study may have for problems of
[Problem-based learning in cardiopulmonary resuscitation: basic life support].
Sardo, Pedro Miguel Garcez; Dal Sasso, Grace Terezinha Marcon
2008-12-01
This descriptive and exploratory study aimed to develop an educational practice of Problem-Based Learning in CPR/BLS with 24 students in the third stage of the Undergraduate Nursing Course at a university in the Southern region of Brazil. The study used the PBL methodology, focused on problem situations of cardiopulmonary arrest, and was approved by CONEP. The methodological strategies for data collection, such as participant observation and questionnaires to evaluate the learning, the educational practices and their methodology, allowed the results to be grouped into: students' expectations; group activities; individual activities; practical activities; and evaluation of the meetings and their methodology. The study showed that PBL allows the educator to evaluate the academic learning process in several dimensions, functioning as a motivating factor for both the educator and the student, because it allows theoretical-practical integration in an integrated learning process.
Wightman, Jade; Julio, Flávia; Virués-Ortega, Javier
2014-05-01
Experimental functional analysis is an assessment methodology to identify the environmental factors that maintain problem behavior in individuals with developmental disabilities and in other populations. Functional analysis provides the basis for the development of reinforcement-based approaches to treatment. This article reviews the procedures, validity, and clinical implementation of the methodological variations of functional analysis and function-based interventions. We present six variations of functional analysis methodology in addition to the typical functional analysis: brief functional analysis, single-function tests, latency-based functional analysis, functional analysis of precursors, and trial-based functional analysis. We also present the three general categories of function-based interventions: extinction, antecedent manipulation, and differential reinforcement. Functional analysis methodology is a valid and efficient approach to the assessment of problem behavior and the selection of treatment strategies.
Allometric scaling theory applied to FIA biomass estimation
David C. Chojnacky
2002-01-01
Tree biomass estimates in the Forest Inventory and Analysis (FIA) database are derived from numerous methodologies whose abundance and complexity raise questions about consistent results throughout the U.S. A new model based on allometric scaling theory ("WBE") offers simplified methodology and a theoretically sound basis for improving the reliability and...
Methodological Problems on the Way to Integrative Human Neuroscience
Kotchoubey, Boris; Tretter, Felix; Braun, Hans A.; Buchheim, Thomas; Draguhn, Andreas; Fuchs, Thomas; Hasler, Felix; Hastedt, Heiner; Hinterberger, Thilo; Northoff, Georg; Rentschler, Ingo; Schleim, Stephan; Sellmaier, Stephan; Tebartz Van Elst, Ludger; Tschacher, Wolfgang
2016-01-01
Neuroscience is a multidisciplinary effort to understand the structures and functions of the brain and brain-mind relations. This effort results in an increasing amount of data, generated by sophisticated technologies. However, these data enhance our descriptive knowledge, rather than improve our understanding of brain functions. This is caused by methodological gaps both within and between subdisciplines constituting neuroscience, and the atomistic approach that limits the study of macro- and mesoscopic issues. Whole-brain measurement technologies do not resolve these issues, but rather aggravate them by the complexity problem. The present article is devoted to methodological and epistemic problems that obstruct the development of human neuroscience. We neither discuss ontological questions (e.g., the nature of the mind) nor review data, except when it is necessary to demonstrate a methodological issue. As regards intradisciplinary methodological problems, we concentrate on those within neurobiology (e.g., the gap between electrical and chemical approaches to neurophysiological processes) and psychology (missing theoretical concepts). As regards interdisciplinary problems, we suggest that core disciplines of neuroscience can be integrated using systemic concepts that also entail human-environment relations. We emphasize the necessity of a meta-discussion that should entail a closer cooperation with philosophy as a discipline of systematic reflection. The atomistic reduction should be complemented by the explicit consideration of the embodiedness of the brain and the embeddedness of humans. The discussion is aimed at the development of an explicit methodology of integrative human neuroscience, which will not only link different fields and levels, but also help in understanding clinical phenomena. PMID:27965548
Case study of a problem-based learning course of physics in a telecommunications engineering degree
NASA Astrophysics Data System (ADS)
Macho-Stadler, Erica; Elejalde-García, María Jesús
2013-08-01
Active learning methods can be appropriate in engineering, as their methodology promotes meta-cognition, independent learning and problem-solving skills. Problem-based learning is the educational process by which problem-solving activities and instructor's guidance facilitate learning. Its key characteristic involves posing a 'concrete problem' to initiate the learning process, generally implemented by small groups of students. Many universities have developed and used active methodologies successfully in the teaching-learning process. During the past few years, the University of the Basque Country has promoted the use of active methodologies through several teacher training programmes. In this paper, we describe and analyse the results of the educational experience using the problem-based learning (PBL) method in a physics course for undergraduates enrolled in the technical telecommunications engineering degree programme. From the instructors' perspective, PBL strengths include better student attitude in class and increased instructor-student and student-student interactions. The students emphasised developing teamwork and communication skills in a good learning atmosphere as positive aspects.
NASA Astrophysics Data System (ADS)
Lezina, Natalya; Agoshkov, Valery
2017-04-01
The domain decomposition method (DDM) allows one to represent a domain with complex geometry as a set of essentially simpler subdomains. The method is particularly suited to the hydrodynamics of oceans and seas. In each subdomain the system of thermo-hydrodynamic equations is solved in the Boussinesq and hydrostatic approximations. The difficulty in obtaining a solution in the whole domain is that the subdomain solutions must be combined. For this purpose an iterative algorithm is constructed, and numerical experiments are conducted to investigate the effectiveness of the developed algorithm using DDM. For symmetric operators in DDM, Poincare-Steklov operators [1] are used, but this approach is not suitable for hydrodynamics problems; in that case the adjoint equation method [2] and inverse problem theory are used instead. In addition, DDM makes it possible to create algorithms for parallel calculations on multiprocessor computer systems. DDM is studied numerically for a model of the Baltic Sea dynamics. The results of numerical experiments using DDM are compared with the solution of the system of hydrodynamic equations in the whole domain. The work was supported by the Russian Science Foundation (project 14-11-00609: the formulation of the iterative process and numerical experiments). [1] V.I. Agoshkov, Domain Decomposition Methods in Mathematical Physics Problems // Numerical processes and systems, No 8, Moscow, 1991 (in Russian). [2] V.I. Agoshkov, Optimal Control Approaches and Adjoint Equations in Mathematical Physics Problems, Institute of Numerical Mathematics, RAS, Moscow, 2003 (in Russian).
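The following Python sketch illustrates only the combine-subdomain-solutions-iteratively idea behind DDM, on a deliberately simple problem: an alternating Schwarz iteration for -u'' = f on (0, 1) with two overlapping subdomains. The grid, overlap, and manufactured forcing are assumptions for demonstration and have nothing to do with the Baltic Sea model itself.

```python
import numpy as np

def solve_dirichlet(x, f, ua, ub):
    """Solve -u'' = f on the grid x with Dirichlet end values ua, ub."""
    n = len(x)
    h = x[1] - x[0]
    A = np.zeros((n - 2, n - 2))
    np.fill_diagonal(A, 2.0)
    np.fill_diagonal(A[1:], -1.0)      # subdiagonal
    np.fill_diagonal(A[:, 1:], -1.0)   # superdiagonal
    b = f(x[1:-1]) * h**2
    b[0] += ua
    b[-1] += ub
    u = np.empty(n)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, b)
    return u

f = lambda s: np.pi**2 * np.sin(np.pi * s)   # manufactured: exact solution sin(pi x)
x = np.linspace(0.0, 1.0, 101)
i_lo, i_hi = 40, 60                          # subdomains overlap on x[40:61]
u = np.zeros_like(x)

for _ in range(30):
    # Subdomain 1 = x[0:61]; its right boundary value comes from the current iterate.
    u[:i_hi + 1] = solve_dirichlet(x[:i_hi + 1], f, 0.0, u[i_hi])
    # Subdomain 2 = x[40:]; its left boundary value comes from the updated iterate.
    u[i_lo:] = solve_dirichlet(x[i_lo:], f, u[i_lo], 0.0)

print("max error vs exact:", np.abs(u - np.sin(np.pi * x)).max())
```

The error decays geometrically with the iteration count, at a rate set by the overlap width, which is the basic behavior any such iterative subdomain coupling must reproduce.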
ERIC Educational Resources Information Center
Scott, Fraser J.
2016-01-01
The "mathematics problem" is a well-known source of difficulty for students attempting numerical problem solving questions in the context of science education. This paper illuminates this problem from a biology education perspective by invoking Hogan's numeracy framework. In doing so, this study has revealed that the contextualisation of…
Layer Stripping Solutions of Inverse Seismic Problems.
1985-03-21
problems--more so than has generally been recognized. The subject of this thesis is the theoretical development of the layer-stripping methodology, and ... medium varies sharply at each interface, which would be expected to cause difficulties for the algorithm, since it was designed for a smoothly varying ... methodology was applied in a novel way. The inverse problem considered in this chapter was that of reconstructing a layered medium from measurement of its
Numerical simulation of transient, incongruent vaporization induced by high power laser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, C.H.
1981-01-01
A mathematical model and numerical calculations were developed to solve the heat and mass transfer problems specifically for uranium oxide subject to laser irradiation. It can easily be modified for other heat sources and/or other materials. In the uranium-oxygen system, oxygen is the preferentially vaporizing component, and as a result of the finite mobility of oxygen in the solid, an oxygen deficiency is set up near the surface. Because of the bivariant behavior of uranium oxide, the heat transfer problem and the oxygen diffusion problem are coupled, and a numerical method of simultaneously solving the two boundary value problems is studied. The temperature dependence of the thermal properties and oxygen diffusivity, as well as the highly ablative effect on the surface, leads to considerable nonlinearities in both the governing differential equations and the boundary conditions. Based on earlier work done in this laboratory by Olstad and Olander on iron and on zirconium hydride, the generality of the problem is expanded and the efficiency of the numerical scheme is improved. The finite difference method, along with some advanced numerical techniques, is found to be an efficient way to solve this problem.
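The coupling structure described here (surface heating drives temperature-dependent oxygen transport, which in turn sets the surface composition) can be sketched with an explicit finite-difference scheme. In the Python sketch below every material property, the laser flux, and the surface-loss law are invented placeholders, not the report's data; only the overall structure of the two coupled boundary value problems is meant to be illustrative.

```python
import numpy as np

nx, depth = 200, 1e-3              # grid points, slab depth (m)
dx = depth / (nx - 1)
alpha, k = 1e-6, 5.0               # thermal diffusivity (m^2/s), conductivity (W/m/K), assumed
D0, Ea = 1e-7, 1.5                 # oxygen diffusivity prefactor (m^2/s), activation (eV), assumed
h0, Ev, c_eq = 1.0, 1.0, 1.90      # assumed surface-loss law and surface equilibrium O/M
kB = 8.617e-5                      # Boltzmann constant (eV/K)
q_laser = 2e7                      # absorbed laser flux (W/m^2), assumed
dt = 0.4 * dx**2 / alpha           # explicit stability limit (thermal)

T = np.full(nx, 300.0)             # temperature (K)
c = np.full(nx, 2.00)              # oxygen-to-metal ratio

for _ in range(4000):
    D = D0 * np.exp(-Ea / (kB * T))            # T-dependent oxygen diffusivity
    Tn, cn = T.copy(), c.copy()
    Tn[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    Dm = 0.5 * (D[1:] + D[:-1])                # face-centred diffusivities
    cn[1:-1] += dt / dx**2 * (Dm[1:] * (c[2:] - c[1:-1]) - Dm[:-1] * (c[1:-1] - c[:-2]))

    Tn[0] = Tn[1] + q_laser * dx / k           # absorbed flux at the surface
    h = h0 * np.exp(-Ev / (kB * T[0]))         # evaporation coefficient (assumed law)
    cn[0] = (D[0] / dx * cn[1] + h * c_eq) / (D[0] / dx + h)   # Robin condition
    Tn[-1], cn[-1] = 300.0, 2.00               # far boundary held at initial state
    T, c = Tn, cn

print(f"surface T = {T[0]:.0f} K, surface O/M = {c[0]:.4f}")
```

As the surface heats, the evaporation coefficient switches on and the surface oxygen-to-metal ratio relaxes toward the assumed equilibrium value, reproducing qualitatively the near-surface oxygen deficiency the abstract describes.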
NASA Astrophysics Data System (ADS)
Kreiss, Gunilla; Holmgren, Hanna; Kronbichler, Martin; Ge, Anthony; Brant, Luca
2017-11-01
The conventional no-slip boundary condition leads to a non-integrable stress singularity at a moving contact line. This makes numerical simulations of two-phase flow challenging, especially when capillarity of the contact point is essential for the dynamics of the flow. We will describe a modeling methodology, which is suitable for numerical simulations, and present results from numerical computations. The methodology is based on combining a relation between the apparent contact angle and the contact line velocity, with the similarity solution for Stokes flow at a planar interface. The relation between angle and velocity can be determined by theoretical arguments, or from simulations using a more detailed model. In our approach we have used results from phase field simulations in a small domain, but using a molecular dynamics model should also be possible. In both cases more physics is included and the stress singularity is removed.
A quasi-spectral method for Cauchy problem of 2/D Laplace equation on an annulus
NASA Astrophysics Data System (ADS)
Saito, Katsuyoshi; Nakada, Manabu; Iijima, Kentaro; Onishi, Kazuei
2005-01-01
Real numbers are usually represented in the computer as hexadecimal floating-point numbers with a finite number of digits. Accordingly, numerical analysis often suffers from rounding errors, which particularly deteriorate the precision of numerical solutions of inverse and ill-posed problems. We attempt to use multi-precision arithmetic to reduce the effects of rounding error. The multi-precision arithmetic system is used by courtesy of Dr Fujiwara of Kyoto University. In this paper we show the effectiveness of multi-precision arithmetic on two typical examples: the Cauchy problem of the Laplace equation in two dimensions and the shape identification problem by inverse scattering in three dimensions. It is concluded from a few numerical examples that multi-precision arithmetic works well in resolving those numerical solutions when combined with a high-order finite difference method for the Cauchy problem and with the eigenfunction expansion method for the inverse scattering problem.
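mpmath is one freely available way to experiment with this idea; the sketch below solves a classically ill-conditioned Hilbert system at 16 and then 50 significant digits, showing how extra precision rescues the solution. This does not reproduce Dr Fujiwara's system or the paper's Cauchy problem; it only demonstrates the multi-precision effect.

```python
from mpmath import mp, matrix, lu_solve, nstr

def hilbert_solve(n, digits):
    """Solve an n x n Hilbert system at the given decimal precision."""
    mp.dps = digits                          # working precision in decimal digits
    A = matrix([[mp.mpf(1) / (i + j + 1) for j in range(n)] for i in range(n)])
    x_true = matrix([1] * n)                 # manufactured solution of ones
    b = A * x_true
    x = lu_solve(A, b)
    return max(abs(x[i] - 1) for i in range(n))

for digits in (16, 50):
    err = hilbert_solve(12, digits)
    print(f"{digits} digits: max error = {nstr(err, 3)}")
```

With roughly double precision (16 digits) the 12 x 12 Hilbert system, whose condition number is near 1e16, loses essentially all accuracy, while at 50 digits the error drops by dozens of orders of magnitude, which is the behavior the paper exploits for ill-posed problems.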
Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems
NASA Technical Reports Server (NTRS)
Cerro, J. A.; Scotti, S. J.
1991-01-01
Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the need for stepping in time as required by conventional time integration schemes. The elimination of time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.
An Initial Multi-Domain Modeling of an Actively Cooled Structure
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur
1997-01-01
A methodology for the simulation of turbine cooling flows is being developed. The methodology seeks to combine numerical techniques that optimize both accuracy and computational efficiency. Key components of the methodology include the use of multiblock grid systems for modeling complex geometries, and multigrid convergence acceleration for enhancing computational efficiency in highly resolved fluid flow simulations. The use of the methodology has been demonstrated in several turbomachinery flow and heat transfer studies. Ongoing and future work involves implementing additional turbulence models, improving computational efficiency, and adding adaptive mesh refinement (AMR).
Turovets, Sergei; Volkov, Vasily; Zherdetsky, Aleksej; Prakonina, Alena; Malony, Allen D
2014-01-01
The electrical impedance tomography (EIT) and electroencephalography (EEG) forward problems in anisotropic inhomogeneous media such as the human head belong to the class of three-dimensional boundary value problems for elliptic equations with mixed derivatives. We introduce and explore the performance of several new promising numerical techniques, which seem to be more suitable for solving these problems. The proposed numerical schemes combine the fictitious domain approach with the finite-difference method and an optimally preconditioned conjugate gradient (CG)-type iterative method for treatment of the discrete model. The numerical scheme includes the standard operations of summation and multiplication of sparse matrices and vectors, as well as the FFT, making it easy to implement and suitable for effective parallel implementation. Some typical use cases for the EIT/EEG problems are considered, demonstrating the high efficiency of the proposed numerical technique.
Numerical aerodynamic simulation facility preliminary study: Executive study
NASA Technical Reports Server (NTRS)
1977-01-01
A computing system was designed with the capability of providing an effective throughput of one billion floating point operations per second for three dimensional Navier-Stokes codes. The methodology used in defining the baseline design, and the major elements of the numerical aerodynamic simulation facility are described.
Researching Street Children: Methodological and Ethical Issues.
ERIC Educational Resources Information Center
Hutz, Claudio S.; And Others
This paper describes the ethical and methodological problems associated with studying prosocial moral reasoning of street children and children of low and high SES living with their families, and problems associated with studying sexual attitudes and behavior of street children and their knowledge of sexually transmitted diseases, especially AIDS.…
Problem-Based Learning: Lessons for Administrators, Educators and Learners
ERIC Educational Resources Information Center
Yeo, Roland
2005-01-01
Purpose: The paper aims to explore the challenges of problem-based learning (PBL) as an unconventional teaching methodology experienced by a higher learning institute in Singapore. Design/methodology/approach: The exploratory study was conducted using focus group discussions and semi-structured interviews. Four groups of people were invited to…
RT DDA: A hybrid method for predicting the scattering properties by densely packed media
NASA Astrophysics Data System (ADS)
Ramezan Pour, B.; Mackowski, D.
2017-12-01
The most accurate approaches to predicting the scattering properties of particulate media are based on exact solutions of Maxwell's equations (MEs), such as the T-matrix and discrete dipole methods. Applying these techniques to optically thick targets is a challenging problem due to the large-scale computations involved, and they are usually replaced by phenomenological radiative transfer (RT) methods. On the other hand, the RT technique is of questionable validity in media with large particle packing densities. In recent works, we used numerically exact ME solvers to examine the effects of particle concentration on the polarized reflection properties of plane-parallel random media. The simulations were performed for plane-parallel layers of wavelength-sized spherical particles, and the results were compared with RT predictions. We have shown that RT results monotonically converge to the exact solution as the particle volume fraction becomes smaller, and one can observe a nearly perfect fit for packing densities of 2%-5%. This study describes a hybrid technique composed of exact and numerical scalar RT methods. The exact methodology in this work is the plane-parallel discrete dipole approximation, whereas the numerical method is based on the adding and doubling method. This approach not only decreases the computational time owing to the RT method but also includes the interference and multiple scattering effects, so it may be applicable to large particle density conditions.
Lagrangian predictability characteristics of an Ocean Model
NASA Astrophysics Data System (ADS)
Lacorata, Guglielmo; Palatella, Luigi; Santoleri, Rosalia
2014-11-01
The Mediterranean Forecasting System (MFS) ocean model, provided by INGV, has been chosen as a case study to analyze Lagrangian trajectory predictability by means of a dynamical systems approach. To this end, numerical trajectories are tested against a large amount of Mediterranean drifter data, used as a sample of the actual tracer dynamics across the sea. The separation rate of a trajectory pair is measured by computing the Finite-Scale Lyapunov Exponent (FSLE) of first and second kind. An additional kinematic Lagrangian model (KLM), suitably treated to avoid "sweeping"-related problems, has been nested into the MFS in order to recover, in a statistical sense, the velocity field contributions to particle pair dispersion, at the mesoscale level, smoothed out by finite resolution effects. Some of the results emerging from this work are: (a) drifter pair dispersion displays Richardson's turbulent diffusion inside the [10-100] km range, while numerical simulations of the MFS alone (i.e., without the subgrid model) indicate exponential separation; (b) adding the subgrid model, modeled pair dispersion gets very close to the observed data, indicating that the KLM is effective in filling the energy "mesoscale gap" present in MFS velocity fields; (c) there exists a threshold size beyond which pair dispersion becomes weakly sensitive to the difference between model and "real" dynamics; (d) the whole methodology presented here can be used to quantify model errors and validate numerical current fields, as far as forecasts of Lagrangian dispersion are concerned.
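A minimal FSLE estimator of the first kind can be written in a few lines: fix a ladder of separation scales delta_n = delta_0 * r^n, measure for each pair the time tau it takes the separation to grow from delta_n to r*delta_n, and set lambda(delta_n) = ln(r) / <tau>. The Python sketch below applies this to synthetic exponentially separating pairs standing in for drifter or model trajectories; scale ladder and growth rate are arbitrary choices for illustration.

```python
import numpy as np

def fsle(pairs_sep, dt, delta0, r=np.sqrt(2), n_scales=12):
    """pairs_sep: array (n_pairs, n_times) of pair separations."""
    deltas = delta0 * r ** np.arange(n_scales)
    lam = np.full(n_scales - 1, np.nan)
    for n in range(n_scales - 1):
        taus = []
        for sep in pairs_sep:
            i0 = np.argmax(sep >= deltas[n])        # first crossing of delta_n
            if sep[i0] < deltas[n]:
                continue                             # this scale is never reached
            above = np.nonzero(sep[i0:] >= deltas[n + 1])[0]
            if above.size:
                taus.append(above[0] * dt)           # growth time to the next scale
        if taus:
            lam[n] = np.log(r) / np.mean(taus)
    return deltas[:-1], lam

# Synthetic exponential-growth pairs (chaotic regime) for demonstration.
rng = np.random.default_rng(0)
t = np.arange(0, 200, 0.5)
seps = np.array([1e-3 * np.exp(0.05 * t) * (1 + 0.1 * rng.standard_normal(t.size))
                 for _ in range(50)])
d, lam = fsle(seps, dt=0.5, delta0=1e-3)
print(np.c_[d, lam])   # lambda roughly flat near 0.05 in the exponential regime
```

A scale-independent plateau of lambda(delta) signals exponential separation, while a lambda proportional to delta^(-2/3) decay would signal the Richardson regime the drifters exhibit at 10-100 km.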
Designing Adaptive Low Dissipative High Order Schemes
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, B.; Parks, John W. (Technical Monitor)
2002-01-01
Proper control of the numerical dissipation/filter to accurately resolve all relevant multiscales of complex flow problems while still maintaining nonlinear stability and efficiency for long-time numerical integrations poses a great challenge to the design of numerical methods. The required type and amount of numerical dissipation/filter are not only physical problem dependent, but also vary from one flow region to another. This is particularly true for unsteady high-speed shock/shear/boundary-layer/turbulence/acoustics interactions and/or combustion problems since the dynamics of the nonlinear effect of these flows are not well-understood. Even with extensive grid refinement, it is of paramount importance to have proper control on the type and amount of numerical dissipation/filter in regions where it is needed.
NASA Astrophysics Data System (ADS)
An, Li-sha; Liu, Chun-jiao; Liu, Ying-wen
2018-05-01
In a polysilicon chemical vapor deposition (CVD) reactor, the operating parameters interact in complex ways to affect the polysilicon output. It is therefore very important to address the coupling of multiple parameters and solve the optimization problem in a computationally efficient manner. Here, we adopted response surface methodology (RSM) to analyze the complex coupling effects of different operating parameters on the silicon deposition rate (R) and thereby achieve effective optimization of the silicon CVD system. Based on finite numerical experiments, an accurate RSM regression model is obtained and applied to predict R for different operating parameters, including temperature (T), pressure (P), inlet velocity (V), and inlet mole fraction of H2 (M). An analysis of variance is conducted to assess the adequacy of the regression model and examine the statistical significance of each factor. The optimum combination of operating parameters for the silicon CVD reactor is: T = 1400 K, P = 3.82 atm, V = 3.41 m/s, M = 0.91. The validation tests and optimum solution show that the results are in good agreement with those from the CFD model, with deviations of the predicted values of less than 4.19%. This work provides theoretical guidance for operating the polysilicon CVD process.
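The RSM step itself amounts to fitting a full quadratic polynomial in (T, P, V, M) to a table of numerical experiments and then optimizing the fitted surface. The Python sketch below shows that workflow with random placeholder data in place of the paper's CFD results; the design ranges and grid resolution are likewise assumptions.

```python
import numpy as np
from itertools import combinations_with_replacement

def quad_features(X):
    """Constant, linear, and all second-order terms of the columns of X."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
# Design points for (T, P, V, M); ranges are illustrative.
X = rng.uniform([1200, 1.0, 1.0, 0.7], [1500, 5.0, 5.0, 0.95], size=(40, 4))
R = rng.normal(size=40)                      # placeholder deposition rates

beta, *_ = np.linalg.lstsq(quad_features(X), R, rcond=None)

# Grid search for the fitted-surface optimum inside the design ranges.
grids = [np.linspace(lo, hi, 15) for lo, hi in
         [(1200, 1500), (1.0, 5.0), (1.0, 5.0), (0.7, 0.95)]]
G = np.array(np.meshgrid(*grids)).reshape(4, -1).T
pred = quad_features(G) @ beta
print("predicted optimum (T, P, V, M):", G[np.argmax(pred)])
```

With four factors the full quadratic model has 15 coefficients, so a few dozen well-spread numerical experiments suffice to fit it, which is what makes RSM so much cheaper than optimizing the CFD model directly.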
A systematic examination of a random sampling strategy for source apportionment calculations.
Andersson, August
2011-12-15
Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in fields as diverse as atmospheric, environmental, and earth sciences. Perhaps the most common strategy for tackling such problems is to set up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution is possible for the common situation with N+1 sources and N source markers, such a methodology introduces a bias, since it implicitly assumes that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. The method is also compared to a numerical integration solution for a two-source situation where source variability is also included. A general observation from this examination is that the variability of the source profiles affects not only the calculated precision but also the mean/median source contributions.
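A minimal version of such an RS scheme for the N+1 sources / N markers case can be sketched as follows: each draw samples a source-signature matrix from its assumed variability, solves the linear system consisting of the marker balances plus the mass balance, and accumulates the resulting fractions. All signatures and variabilities below are synthetic placeholders, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10000

# Mean marker signatures: rows = sources (N+1 = 3), columns = markers (N = 2).
S_mean = np.array([[0.9, 0.1],
                   [0.4, 0.5],
                   [0.1, 0.9]])
S_sd = 0.05 * np.ones_like(S_mean)        # assumed source variability
x_mix = np.array([0.45, 0.50])            # observed mixture markers

samples = []
for _ in range(n_draws):
    S = rng.normal(S_mean, S_sd)          # one realization of the source profiles
    # Equations: S.T @ f = x_mix (marker balance) and sum(f) = 1 (mass balance).
    A = np.vstack([S.T, np.ones(3)])
    b = np.append(x_mix, 1.0)
    f = np.linalg.solve(A, b)
    if np.all(f >= 0):                    # keep physically meaningful draws
        samples.append(f)

samples = np.array(samples)
print("mean fractions:", samples.mean(axis=0).round(3))
print("2.5/97.5 percentiles:\n", np.percentile(samples, [2.5, 97.5], axis=0).round(3))
```

Comparing the sampled mean fractions with the single algebraic solution at the mean signatures makes the bias discussed in the abstract directly visible: source variability shifts the central estimates, not just their spread.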
The application of fuzzy Delphi and fuzzy inference system in supplier ranking and selection
NASA Astrophysics Data System (ADS)
Tahriri, Farzad; Mousavi, Maryam; Hozhabri Haghighi, Siamak; Zawiah Md Dawal, Siti
2014-06-01
In today's highly competitive market, an effective supplier selection process is vital to the success of any manufacturing system. Selecting the appropriate supplier is always a difficult task because suppliers possess varied strengths and weaknesses that necessitate careful evaluation prior to ranking. This is a complex process with many subjective and objective factors to consider before the benefits of supplier selection are achieved. This paper identifies six extremely critical criteria and thirteen sub-criteria based on the literature. A new methodology employing those criteria and sub-criteria is proposed for the assessment and ranking of a given set of suppliers. To handle the subjectivity of the decision maker's assessment, an integration of fuzzy Delphi with a fuzzy inference system is applied, and a new ranking method is proposed for the supplier selection problem. This supplier selection model enables decision makers to rank suppliers into three classes: "extremely preferred", "moderately preferred", and "weakly preferred". In addition, within each class, suppliers are ordered from highest final score to lowest. Finally, the methodology is verified and validated through a numerical test bed example.
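Only the final classification step is sketched below: triangular membership functions (with assumed breakpoints, not the paper's calibrated ones) assign each supplier's aggregated score to one of the three preference classes, and suppliers are then ordered by score.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

# Assumed breakpoints on a 0-10 aggregated-score scale.
classes = {"weakly preferred":     (0.0, 0.0, 5.0),
           "moderately preferred": (2.5, 5.0, 7.5),
           "extremely preferred":  (5.0, 10.0, 10.0)}

suppliers = {"S1": 8.2, "S2": 4.9, "S3": 6.1, "S4": 9.0}   # hypothetical final scores

for name, score in sorted(suppliers.items(), key=lambda kv: -kv[1]):
    degrees = {c: float(tri(score, *p)) for c, p in classes.items()}
    label = max(degrees, key=degrees.get)
    print(f"{name}: score {score:.1f} -> {label} (membership {degrees[label]:.2f})")
```

In the full methodology these scores would themselves come out of the fuzzy Delphi weighting and the fuzzy inference rules over the six criteria; the sketch only shows how membership degrees turn a crisp score into the three-class ranking.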
Fisicaro, G; Pelaz, L; Lopez, P; La Magna, A
2012-09-01
Pulsed laser irradiation of damaged solids promotes ultrafast nonequilibrium kinetics, on the submicrosecond scale, leading to microscopic modifications of the material state. Reliable theoretical predictions of this evolution can be achieved only by simulating particle interactions in the presence of large and transient gradients of the thermal field. We propose a kinetic Monte Carlo (KMC) method for the simulation of damaged systems in the extremely far-from-equilibrium conditions caused by laser irradiation. The reference systems are nonideal crystals containing point defect excesses, an order of magnitude larger than the equilibrium density, due to a preirradiation ion implantation process. The thermal problem, including eventual melting, is solved within the phase-field methodology, and the numerical solutions for the space- and time-dependent thermal field are then dynamically coupled to the KMC code. The formalism, implementation, and related tests of our computational code are discussed in detail. As an application example we analyze the evolution of the defect system caused by P ion implantation in Si under nanosecond pulsed irradiation. The simulation results suggest a significant annihilation of the implantation damage, which can be well controlled by the laser fluence.
Numerical Optimization Using Computer Experiments
NASA Technical Reports Server (NTRS)
Trosset, Michael W.; Torczon, Virginia
1997-01-01
Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
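The loop structure is compact enough to sketch. In the Python code below a radial basis function interpolant stands in for the paper's kriging model, the objective is a toy function, and the grid and evaluation budget are arbitrary; only the fit-surrogate, search-grid, evaluate, refit cycle is the point.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive(X):                      # toy stand-in for an expensive objective
    return (1 - X[:, 0])**2 + 100 * (X[:, 1] - X[:, 0]**2)**2

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(10, 2))   # initial design sites
y = expensive(X)

gx = np.linspace(-2, 2, 41)
grid = np.array(np.meshgrid(gx, gx)).reshape(2, -1).T

for _ in range(20):
    # Surrogate model of the evaluated points (RBF in place of kriging).
    surrogate = RBFInterpolator(X, y, kernel='thin_plate_spline')
    cand = grid[np.argmin(surrogate(grid))]          # grid search on the surrogate
    if np.any(np.all(np.isclose(X, cand), axis=1)):  # avoid duplicate sites
        cand = rng.uniform(-2, 2, size=2)
    X = np.vstack([X, cand])
    y = np.append(y, expensive(cand[None, :]))

best = X[np.argmin(y)]
print(f"best point after {len(y)} evaluations: {best.round(3)}, f = {y.min():.4f}")
```

The key property, shared with the paper's approach, is that the expensive objective is evaluated only a few dozen times; all other work is done on the cheap surrogate.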
NASA Astrophysics Data System (ADS)
Guha, Anirban
2017-11-01
Theoretical studies on linear shear instabilities as well as different kinds of wave interactions often use simple velocity and/or density profiles (e.g., constant, piecewise) to obtain good qualitative and quantitative predictions of the initial disturbances. Moreover, such simple profiles provide a minimal model for a mechanistic understanding of shear instabilities. Here we have extended this minimal paradigm into the nonlinear domain using the vortex method. Making use of the unsteady Bernoulli equation in the presence of linear shear, and extending the Birkhoff-Rott equation to multiple interfaces, we have numerically simulated the interaction between multiple fully nonlinear waves. This methodology is quite general and has allowed us to simulate diverse problems that can be reduced to the minimal system of interacting waves, e.g., spilling and plunging breakers, stratified shear instabilities (Holmboe, Taylor-Caulfield, stratified Rayleigh), jet flows, and even wave-topography interaction problems like Bragg resonance. We found that the minimal models capture key nonlinear features (e.g., wave-breaking features like cusp formation and roll-ups) observed in experiments and/or extensive simulations with smooth, realistic profiles.
Prevalence of DSM-IV major depression among U.S. military personnel: Meta-analysis and simulation
Gadermann, Anne M.; Engel, COL Charles C.; Naifeh, James A.; Nock, Matthew K.; Petukhova, Maria; Santiago, LCDR Patcho N.; Wu, Benjamin; Zaslavsky, Alan M.; Kessler, Ronald C.
2014-01-01
A meta-analysis of 25 epidemiological studies estimated the prevalence of recent DSM-IV major depression among U.S. military personnel. Best estimates of recent prevalence (standard error) were 12.0 percent (1.2) among currently deployed, 13.1 percent (1.8) among previously deployed and 5.7 percent (1.2) among never deployed. Consistent correlates of prevalence were being female, enlisted, young (ages 17 to 25), unmarried and having less than a college education. Simulation of data from a national general population survey was used to estimate expected lifetime prevalence of major depression among respondents with the socio-demographic profile and none of the enlistment exclusions of Army personnel. In this simulated sample, 16.2 percent (3.1) of respondents had lifetime major depression and 69.7 percent (8.5) of first onsets occurred before expected age of enlistment. Numerous methodological problems limit the results of the meta-analysis and simulation. The paper closes with a discussion of recommendations for correcting these problems in future surveillance and operational stress studies. PMID:22953441
Expanded DEMATEL for Determining Cause and Effect Group in Bidirectional Relations
Falatoonitoosi, Elham; Ahmed, Shamsuddin; Sorooshian, Shahryar
2014-01-01
Decision-Making Trial and Evaluation Laboratory (DEMATEL) methodology has been proposed to solve complex and intertwined problem groups in many situations, such as developing capabilities, complex group decision making, security problems, marketing approaches, global management, and control systems. DEMATEL is able to uncover causal relationships by dividing important issues into cause and effect groups, and it makes it possible to visualize the causal relationships of subcriteria and systems by means of a causal diagram, which may represent a communication network or control relationships between individuals. Despite its ability to visualize cause and effect inside a network, the original DEMATEL has not been able to find the cause and effect groups between different networks. Therefore, the aim of this study is to propose an expanded DEMATEL that covers this deficiency, with new formulations to determine cause and effect factors between separate networks that have bidirectional direct impacts on each other. Finally, the feasibility of the new formulations is validated by case study in three numerical examples of green supply chain networks for an automotive company. PMID:24693224
Prevalence of DSM-IV major depression among U.S. military personnel: meta-analysis and simulation.
Gadermann, Anne M; Engel, Charles C; Naifeh, James A; Nock, Matthew K; Petukhova, Maria; Santiago, Patcho N; Wu, Benjamin; Zaslavsky, Alan M; Kessler, Ronald C
2012-08-01
A meta-analysis of 25 epidemiological studies estimated the prevalence of recent Diagnostic and Statistical Manual of Mental Disorders-IV (DSM-IV) major depression (MD) among U.S. military personnel. Best estimates of recent prevalence (standard error) were 12.0% (1.2) among currently deployed, 13.1% (1.8) among previously deployed, and 5.7% (1.2) among never deployed. Consistent correlates of prevalence were being female, enlisted, young (ages 17-25), unmarried, and having less than a college education. Simulation of data from a national general population survey was used to estimate expected lifetime prevalence of MD among respondents with the sociodemographic profile and none of the enlistment exclusions of Army personnel. In this simulated sample, 16.2% (3.1) of respondents had lifetime MD and 69.7% (8.5) of first onsets occurred before expected age of enlistment. Numerous methodological problems limit the results of the meta-analysis and simulation. The article closes with a discussion of recommendations for correcting these problems in future surveillance and operational stress studies.
NASA Astrophysics Data System (ADS)
Blajer, W.; Dziewiecki, K.; Kołodziejczyk, K.; Mazur, Z.
2011-05-01
Underactuated systems are characterized by fewer control inputs than degrees of freedom, m < n. The determination of an input control strategy that forces such a system to complete a set of m specified motion tasks is a challenging task, and the existence of an explicit solution is conditioned on differential flatness of the problem. The flatness-based solution means that all 2n states and m control inputs can be algebraically expressed in terms of the m specified outputs and their time derivatives up to a certain order, which in practice is attainable only for simple systems. In this contribution the problem is posed in a more practical way as a set of index-three differential-algebraic equations, and the solution is obtained numerically. The formulation is illustrated by a two-degree-of-freedom underactuated system composed of two rotating discs connected by a torsional spring, in which the pre-specified motion of one of the discs is actuated by the torque applied to the other disc, n = 2 and m = 1. Experimental verification of the inverse simulation control methodology is reported.
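For this particular two-disc example the flatness-based algebra mentioned above is in fact available, which makes it a convenient cross-check for the numerical DAE solution: from J1*q1'' = k*(q2 - q1) the actuated angle follows as q2 = q1 + (J1/k)*q1'', and the input torque as tau = J2*q2'' + k*(q2 - q1). The sympy sketch below evaluates this feedforward torque for an assumed prescribed motion and assumed parameter values (none of them taken from the paper).

```python
import sympy as sp

t = sp.symbols('t')
J1, J2, k = 0.02, 0.05, 4.0                    # inertias (kg m^2), spring (N m/rad), assumed
q1 = sp.sin(2 * sp.pi * t) * (1 - sp.exp(-t))  # assumed prescribed motion of the free disc

q2 = q1 + J1 * sp.diff(q1, t, 2) / k           # follows from J1*q1'' = k*(q2 - q1)
tau = J2 * sp.diff(q2, t, 2) + k * (q2 - q1)   # required input torque on the other disc

tau_fn = sp.lambdify(t, sp.simplify(tau))
for ti in (0.5, 1.0, 2.0):
    print(f"t = {ti:.1f} s: tau = {tau_fn(ti):+.4f} N m")
```

Note that tau depends on derivatives of q1 up to fourth order; all states and the single input are algebraic in the specified output, which is exactly the flatness property the abstract refers to.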
The Speaker Respoken: Material Rhetoric as Feminist Methodology.
ERIC Educational Resources Information Center
Collins, Vicki Tolar
1999-01-01
Presents a methodology based on the concept of "material rhetoric" that can help scholars avoid problems as they reclaim women's historical texts. Defines material rhetoric and positions it theoretically in relation to other methodologies, including bibliographical studies, reception theory, and established feminist methodologies. Illustrates…
NASA Astrophysics Data System (ADS)
Malekan, Mohammad; Barros, Felício B.
2017-12-01
The generalized or extended finite element method (G/XFEM) models a crack by enriching partition-of-unity functions with discontinuous functions that represent well the physical behavior of the problem. However, such enrichment functions are not available for all problem types. In that case, one can use numerically built (global-local) enrichment functions to obtain a better approximation. This paper investigates the effects of micro-defects/inhomogeneities on the behavior of a main crack by modeling the micro-defects/inhomogeneities in the local problem using a two-scale G/XFEM. The global-local enrichment functions are influenced by the micro-defects/inhomogeneities from the local problem and thus change the approximate solution of the global problem with the main crack. This approach is presented in detail by solving three different linear elastic fracture mechanics problems: two plane stress problems and a Reissner-Mindlin plate problem. The numerical results obtained with the two-scale G/XFEM are compared with reference solutions obtained analytically, numerically with the standard G/XFEM method and with ABAQUS, and from the literature.
Approximation and Numerical Analysis of Nonlinear Equations of Evolution.
1980-01-31
dominant convective terms, or Stefan-type problems such as the flow of fluids through porous media or the melting and freezing of ice. Such problems... means of formulating time-dependent Stefan problems was initiated. Classes of problems considered here include the one-phase and two-phase Stefan ... some new numerical methods were developed for two-dimensional, two-phase Stefan problems with time-dependent boundary conditions. A variety of example
Identification of subsurface structures using electromagnetic data and shape priors
NASA Astrophysics Data System (ADS)
Tveit, Svenn; Bakr, Shaaban A.; Lien, Martha; Mannseth, Trond
2015-03-01
We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to bias the solution towards preserving structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a Gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of kernel function, which is application dependent. We argue for using the conditionally positive definite kernel, which is shown to have computational advantages over the commonly applied Gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.
Two-Body Approximations in the Design of Low-Energy Transfers Between Galilean Moons
NASA Astrophysics Data System (ADS)
Fantino, Elena; Castelli, Roberto
Over the past two decades, the robotic exploration of the Solar System has reached the moons of the giant planets. In the case of Jupiter, strong scientific interest in its icy moons has motivated important space missions (e.g., ESA's JUICE and NASA's Europa mission). A major issue in this context is the design of efficient trajectories enabling satellite tours, i.e., visiting the several moons in succession. Concepts like the Petit Grand Tour and the Multi-Moon Orbiter have been developed for this purpose, and the literature on the subject is quite rich. The models adopted are the two-body problem (with the patched conics approximation and gravity assists) and the three-body problem (giving rise to the so-called low-energy transfers, LETs). In this contribution, we deal with the connection between two moons, Europa and Ganymede, and we investigate a two-body approximation of trajectories originating from the stable/unstable invariant manifolds of the two circular restricted three-body problems, i.e., Jupiter-Ganymede and Jupiter-Europa. We develop ad hoc algorithms to determine the intersections of the resulting elliptical arcs and the magnitude of the maneuver at the intersections. We provide a means to perform very fast and accurate evaluations of the minimum-cost trajectories between the two moons. Finally, we validate the methodology by comparison with numerical integrations in the three-body problem.
Computing Evans functions numerically via boundary-value problems
NASA Astrophysics Data System (ADS)
Barker, Blake; Nguyen, Rose; Sandstede, Björn; Ventura, Nathaniel; Wahl, Colin
2018-03-01
The Evans function has been used extensively to study spectral stability of travelling-wave solutions in spatially extended partial differential equations. To compute Evans functions numerically, several shooting methods have been developed. In this paper, an alternative scheme for the numerical computation of Evans functions is presented that relies on an appropriate boundary-value problem formulation. Convergence of the algorithm is proved, and several examples, including the computation of eigenvalues for a multi-dimensional problem, are given. The main advantage of the scheme proposed here compared with earlier methods is that the scheme is linear and scalable to large problems.
Adaptive Grid Generation for Numerical Solution of Partial Differential Equations.
1983-12-01
numerical solution of fluid dynamics problems is presented. However, the method is applicable to the numerical evaluation of any partial differential... emphasis is being placed on numerical solution of the governing differential equations by finite difference methods. In the past two decades, considerable... original equations presented in that paper. The solution of the second problem is more difficult. The method of Thompson et al. provides control for
A methodology for the assessment of manned flight simulator fidelity
NASA Technical Reports Server (NTRS)
Hess, Ronald A.; Malsbury, Terry N.
1989-01-01
A relatively simple analytical methodology for assessing the fidelity of manned flight simulators for specific vehicles and tasks is offered. The methodology is based upon an application of a structural model of the human pilot, including motion cue effects. In particular, predicted pilot/vehicle dynamic characteristics are obtained with and without simulator limitations. A procedure for selecting model parameters can be implemented, given a probable pilot control strategy. In analyzing a pair of piloting tasks for which flight and simulation data are available, the methodology correctly predicted the existence of simulator fidelity problems. The methodology permitted the analytical evaluation of a change in simulator characteristics and indicated that a major source of the fidelity problems was a visual time delay in the simulation.
Numerical optimization methods for controlled systems with parameters
NASA Astrophysics Data System (ADS)
Tyatyushkin, A. I.
2017-10-01
First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In problems with unconstrained parameters, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method based on a second-order functional increment formula. Next, a general optimal control problem with state constraints and parameters appearing on the right-hand sides of the controlled system and in the initial conditions is considered. This complicated problem is reduced to a mathematical programming problem, followed by a search for the optimal parameter values and control functions by applying a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.
Solving traveling salesman problems with DNA molecules encoding numerical values.
Lee, Ji Youn; Shin, Soo-Yong; Park, Tai Hyun; Zhang, Byoung-Tak
2004-12-01
We introduce a DNA encoding method to represent numerical values and a biased molecular algorithm based on the thermodynamic properties of DNA. DNA strands are designed to encode real values by variation of their melting temperatures. The thermodynamic properties of DNA are used for effective local search of optimal solutions using biochemical techniques, such as denaturation temperature gradient polymerase chain reaction and temperature gradient gel electrophoresis. The proposed method was successfully applied to the traveling salesman problem, an instance of optimization problems on weighted graphs. This work extends the capability of DNA computing to solving numerical optimization problems, which is contrasted with other DNA computing methods focusing on logical problem solving.
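The encoding idea can be caricatured in a few lines: represent each edge weight by the GC content of a fixed-length strand, so that a simple melting-temperature estimate (the Wallace rule, Tm = 2(A+T) + 4(G+C), used here purely for illustration) orders candidate tours by total weight, mimicking selection along a denaturation gradient. The sequence length, the toy graph, and the selection step below are all assumptions, not the paper's laboratory protocol.

```python
import random

LEN = 20                                     # strand length per edge (assumed)

def encode(weight, wmin, wmax):
    """Map a weight into [0, LEN] G/C bases; the rest A/T. Lower weight -> lower Tm."""
    gc = round((weight - wmin) / (wmax - wmin) * LEN)
    seq = ['G' if i < gc else 'A' for i in range(LEN)]
    random.shuffle(seq)
    return ''.join(seq)

def wallace_tm(seq):
    gc = sum(b in 'GC' for b in seq)
    return 2 * (len(seq) - gc) + 4 * gc

# Toy tour costs: the total route Tm orders candidate tours by total weight,
# mimicking denaturation-gradient selection of the lowest-melting product.
edges = {('A', 'B'): 3, ('B', 'C'): 9, ('C', 'A'): 5, ('A', 'C'): 2,
         ('C', 'B'): 9, ('B', 'A'): 3}
wmin, wmax = min(edges.values()), max(edges.values())
tours = [('A', 'B', 'C', 'A'), ('A', 'C', 'B', 'A')]
for tour in tours:
    strands = [encode(edges[e], wmin, wmax) for e in zip(tour, tour[1:])]
    total_tm = sum(wallace_tm(s) for s in strands)
    print(tour, 'total Tm =', total_tm)
```

The cheaper tour assembles into the lower-melting product, which is what lets a temperature-gradient separation act as the fitness function of the molecular algorithm.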
Intellectual Abilities That Discriminate Good and Poor Problem Solvers.
ERIC Educational Resources Information Center
Meyer, Ruth Ann
1981-01-01
This study compared good and poor fourth-grade problem solvers on a battery of 19 "reference" tests for verbal, induction, numerical, word fluency, memory, perceptual speed, and simple visualization abilities. Results suggest verbal, numerical, and especially induction abilities are important to successful mathematical problem solving.…
A new shock-capturing numerical scheme for ideal hydrodynamics
NASA Astrophysics Data System (ADS)
Fecková, Z.; Tomášik, B.
2015-05-01
We present a new algorithm for solving ideal relativistic hydrodynamics based on the Godunov method with an exact solution of the Riemann problem for an arbitrary equation of state. Standard numerical tests are executed, such as sound wave propagation and the shock tube problem. Low numerical viscosity and high precision are attained with proper discretization.
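The structure of such a Godunov step is easy to sketch. The Python code below runs the classic Sod shock tube with a first-order finite-volume scheme; note that, for brevity, it substitutes non-relativistic Euler equations, an ideal-gas equation of state, and the HLL approximate Riemann flux for the exact arbitrary-EOS relativistic solver the abstract describes.

```python
import numpy as np

g = 1.4                                           # ideal-gas gamma

def cons(rho, u, p):
    return np.array([rho, rho * u, p / (g - 1) + 0.5 * rho * u**2])

def prim(U):
    rho, m, E = U
    u = m / rho
    p = (g - 1) * (E - 0.5 * rho * u**2)
    return rho, u, p

def flux(U):
    rho, u, p = prim(U)
    return np.array([rho * u, rho * u**2 + p, u * (U[2] + p)])

def hll(UL, UR):
    rL, uL, pL = prim(UL); rR, uR, pR = prim(UR)
    cL, cR = np.sqrt(g * pL / rL), np.sqrt(g * pR / rR)
    sL, sR = min(uL - cL, uR - cR), max(uL + cL, uR + cR)
    if sL >= 0: return flux(UL)
    if sR <= 0: return flux(UR)
    return (sR * flux(UL) - sL * flux(UR) + sL * sR * (UR - UL)) / (sR - sL)

nx, cfl = 400, 0.8
x = np.linspace(0, 1, nx)
U = np.where(x < 0.5, cons(1.0, 0.0, 1.0)[:, None], cons(0.125, 0.0, 0.1)[:, None])
dx, t = 1.0 / nx, 0.0
while t < 0.2:                                    # standard Sod end time
    rho, u, p = prim(U)
    dt = cfl * dx / np.max(np.abs(u) + np.sqrt(g * p / rho))
    F = np.array([hll(U[:, i], U[:, i + 1]) for i in range(nx - 1)]).T
    U[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])
    t += dt

rho = prim(U)[0]
print("density behind the shock ~", rho[int(0.8 * nx)].round(3))
```

Replacing the hll function with an exact Riemann solver (and the ideal-gas closure with a tabulated EOS) turns this skeleton into the class of scheme the paper develops, at the cost of an iterative solve per cell interface.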
Multiaxis Computer Numerical Control Internship Report
ERIC Educational Resources Information Center
Rouse, Sharon M.
2012-01-01
(Purpose) The purpose of this paper was to examine the issues associated with bringing new technology into the classroom, in particular, the vocational/technical classroom. (Methodology) A new Haas 5 axis vertical Computer Numerical Control machining center was purchased to update the CNC machining curriculum at a community college and the process…
Parallel Algorithm Solves Coupled Differential Equations
NASA Technical Reports Server (NTRS)
Hayashi, A.
1987-01-01
Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.
Galois groups of Schubert problems via homotopy computation
NASA Astrophysics Data System (ADS)
Leykin, Anton; Sottile, Frank
2009-09-01
Numerical homotopy continuation of solutions to polynomial equations is the foundation for numerical algebraic geometry, whose development has been driven by applications of mathematics. We use numerical homotopy continuation to investigate the problem in pure mathematics of determining Galois groups in the Schubert calculus. For example, we show by direct computation that the Galois group of the Schubert problem of 3-planes in C^8 meeting 15 fixed 5-planes non-trivially is the full symmetric group S_6006.
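The path-tracking core of numerical homotopy continuation is small enough to sketch in the univariate case: start at the roots of g(x) = x^d - 1, deform along H(x, t) = gamma*(1-t)*g(x) + t*f(x) with a generic complex gamma (the standard "gamma trick" that keeps paths nonsingular), and follow each root with an Euler predictor and a Newton corrector. Determining Galois/monodromy groups as in the paper additionally requires tracking loops in parameter space; the sketch below shows only the tracking step, for an assumed target polynomial.

```python
import numpy as np

f = np.poly1d([1, 0, -5, 0, 4])             # f(x) = x^4 - 5x^2 + 4, roots +-1, +-2
d = f.order
g = np.poly1d([1] + [0] * (d - 1) + [-1])   # start system g(x) = x^d - 1
gamma = np.exp(1j * 0.7)                    # generic complex constant (gamma trick)

H  = lambda x, t: gamma * (1 - t) * g(x) + t * f(x)
Hx = lambda x, t: gamma * (1 - t) * g.deriv()(x) + t * f.deriv()(x)

x = np.exp(2j * np.pi * np.arange(d) / d)   # start at the roots of g
dt = 1e-2
for t in np.arange(0.0, 1.0, dt):
    Ht = f(x) - gamma * g(x)                # dH/dt along the path
    x = x - dt * Ht / Hx(x, t)              # Euler predictor
    for _ in range(3):                      # Newton corrector at t + dt
        x = x - H(x, t + dt) / Hx(x, t + dt)

print(np.sort_complex(np.round(x, 6)))      # converges to -2, -1, 1, 2
```

Recording how the endpoints permute as the target system is dragged around loops in its parameters is what yields monodromy permutations, and hence elements of the Galois group, in the paper's computation.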
A methodology to enhance electromagnetic compatibility in joint military operations
NASA Astrophysics Data System (ADS)
Buckellew, William R.
The development and validation of an improved methodology to identify, characterize, and prioritize potential joint EMI (electromagnetic interference) interactions and identify and develop solutions to reduce the effects of the interference are discussed. The methodology identifies potential EMI problems using results from field operations, historical data bases, and analytical modeling. Operational expertise, engineering analysis, and testing are used to characterize and prioritize the potential EMI problems. Results can be used to resolve potential EMI during the development and acquisition of new systems and to develop engineering fixes and operational workarounds for systems already employed. The analytic modeling portion of the methodology is a predictive process that uses progressive refinement of the analysis and the operational electronic environment to eliminate noninterfering equipment pairs, defer further analysis on pairs lacking operational significance, and resolve the remaining EMI problems. Tests are conducted on equipment pairs to ensure that the analytical models provide a realistic description of the predicted interference.
Brush seal numerical simulation: Concepts and advances
NASA Technical Reports Server (NTRS)
Braun, M. J.; Kudriavtsev, V. V.
1994-01-01
The brush seal is considered the most promising of the advanced seal types presently in use in high-speed turbomachinery. The brush is usually mounted on the stationary portions of the engine and has direct contact with the rotating element, limiting the unwanted leakage flows between stages or various engine cavities. This sealing technology provides high pressure drops in comparison with conventional seals, due mainly to the high packing density (around 100 bristles/sq mm) and the brush's compliance with rotor motions. In the design of modern aerospace turbomachinery, leakage flows between stages must be minimal, contributing to higher engine efficiency. Use of a brush seal instead of a labyrinth seal reduces the leakage flow by one order of magnitude. Brush seals have also been found to enhance dynamic performance, cost less, and weigh less than labyrinth seals. Even though industrial brush seals have been successfully developed through extensive experimentation, there is no comprehensive numerical methodology for the design or prediction of their performance. The existing analytical/numerical approaches are based on bulk flow models and do not allow investigation of the effects of brush morphology (bristle arrangement) or brush arrangement (number of brushes, spacing between them) on the pressure drops and flow leakage. Increasing brush seal efficiency is clearly a complex problem that is closely related to the brush geometry and arrangement, and it can most likely be solved only by means of a numerically distributed model.
Evaluation of Tsunami Run-Up on Coastal Areas at Regional Scale
NASA Astrophysics Data System (ADS)
González, M.; Aniel-Quiroga, Í.; Gutiérrez, O.
2017-12-01
Tsunami hazard assessment is tackled by means of numerical simulations, giving as a result the areas flooded by the tsunami wave inland. This requires input data such as high-resolution topobathymetry of the study area and the earthquake focal mechanism parameters, and the computational cost of such simulations is still excessive. An important restriction on the elaboration of large-scale maps at national or regional scale is the reconstruction of high-resolution topobathymetry in the coastal zone. An alternative, traditional method consists of applying empirical-analytical formulations to calculate run-up on coastal profiles (e.g., Synolakis, 1987), combined with offshore numerical simulations that do not include coastal inundation. In this case the numerical simulations are faster, but limitations are introduced because the coastal bathymetric profiles are very simply idealized. In this work, we present a complementary methodology based on a hybrid numerical model formed by two models coupled ad hoc for this work: a non-linear shallow water equations (NLSWE) model for the offshore part of the propagation and a volume-of-fluid (VOF) model for the areas near the coast and inland, applying each numerical scheme where it better reproduces the tsunami wave. The run-up of a tsunami scenario is obtained by applying the coupled model to an ad hoc numerical flume. To design this methodology, hundreds of worldwide topobathymetric profiles have been parameterized using five parameters (two depths and three slopes), and tsunami waves have been parameterized by their height and period. The parameterized coastal profiles and tsunami waves have been combined, by means of numerical simulations in the numerical flume, to build a populated database of run-up calculations. The result is a tsunami run-up database that considers real profile shapes, realistic tsunami waves, and optimized numerical simulations. This database allows the run-up of any new tsunami wave to be calculated in a short time by interpolation on the database, based on the tsunami wave characteristics provided as output of the NLSWE model along the coast in a large-scale (regional or national) domain.
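Once such a database exists, evaluating a new case reduces to interpolation in the parameter space. The Python sketch below illustrates that final step with a randomly filled placeholder database and a simplified four-parameter profile/wave description; the real database uses the five profile parameters and the hybrid NLSWE/VOF results described above.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(7)
n = 500
# Simplified parameter vector: (beach slope, shelf depth, wave height, period).
params = rng.uniform([0.01, 5.0, 0.5, 300.0], [0.2, 50.0, 10.0, 3600.0], (n, 4))
runup = rng.uniform(0.5, 25.0, n)          # placeholder run-up values (m)

query = np.array([[0.05, 20.0, 4.0, 1200.0]])   # new tsunami case at the coast
estimate = griddata(params, runup, query, method='linear')
if np.isnan(estimate[0]):                       # query outside the convex hull
    estimate = griddata(params, runup, query, method='nearest')
print(f"interpolated run-up: {float(estimate[0]):.2f} m")
```

Because the interpolation is essentially free compared with a single coupled NLSWE/VOF run, this is what makes regional- or national-scale run-up mapping tractable once the database has been populated.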
Advances in Numerical Boundary Conditions for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.
1997-01-01
Advances in Computational Aeroacoustics (CAA) depend critically on the availability of accurate, nondispersive, least dissipative computation algorithm as well as high quality numerical boundary treatments. This paper focuses on the recent developments of numerical boundary conditions. In a typical CAA problem, one often encounters two types of boundaries. Because a finite computation domain is used, there are external boundaries. On the external boundaries, boundary conditions simulating the solution outside the computation domain are to be imposed. Inside the computation domain, there may be internal boundaries. On these internal boundaries, boundary conditions simulating the presence of an object or surface with specific acoustic characteristics are to be applied. Numerical boundary conditions, both external or internal, developed for simple model problems are reviewed and examined. Numerical boundary conditions for real aeroacoustic problems are also discussed through specific examples. The paper concludes with a description of some much needed research in numerical boundary conditions for CAA.
ERIC Educational Resources Information Center
Knewstubb, Bernadette; Nicholas, Howard
2017-01-01
Numerous higher education researchers have studied the ways in which students' or academics' beliefs and conceptions affect their educational experiences and outcomes. However, studying the learning-teaching relationship has proved challenging, requiring researchers to simultaneously address both the invisible internal world(s) of the student and…
The Use of Online Methodologies in Data Collection for Gambling and Gaming Addictions
ERIC Educational Resources Information Center
Griffiths, Mark D.
2010-01-01
The paper outlines the advantages, disadvantages, and other implications of using the Internet to collect data from gaming addicts. Drawing from experience of numerous addiction studies carried out online by the author, and by reviewing the methodological literature examining online data collection among both gambling addicts and video game…
ERIC Educational Resources Information Center
Hare, Kathleen A.; Dubé, Anik; Marshall, Zack; Gahagan, Jacqueline; Harris, Gregory E.; Tucker, Maryanne; Dykeman, Margaret; MacDonald, Jo-Ann
2016-01-01
Policy scoping reviews are an effective method for generating evidence-informed policies. However, when applying guiding methodological frameworks to complex policy evidence, numerous, unexpected challenges can emerge. This paper details five challenges experienced and addressed by a policy trainee-led, multi-disciplinary research team, while…
Studying Urban History through Oral History and Q Methodology: A Comparative Analysis.
ERIC Educational Resources Information Center
Jimenez, Rebecca S.
Oral history and Q methodology (a social science technique designed to document objectively and numerically the reactions of individuals to selected issues) were used to investigate urban renewal in Waco, Texas. Nineteen persons directly involved in the city's relocation and rehabilitation projects granted interviews. From these oral histories, 70…
How number line estimation skills relate to neural activations in single digit subtraction problems
Berteletti, I.; Man, G.; Booth, J.R.
2014-01-01
The Number Line (NL) task requires judging the relative numerical magnitude of a number and estimating its value spatially on a continuous line. Children's skill on this task has been shown to correlate with and predict future mathematical competence. Neurofunctionally, this task has been shown to rely on brain regions involved in numerical processing. However, there is no direct evidence that performance on the NL task is related to brain areas recruited during arithmetical processing or that these areas are domain-specific to numerical processing. In this study, we test whether 8- to 14-year-olds' behavioral performance on the NL task is related to fMRI activation during small and large single-digit subtraction problems. Domain-specific areas for numerical processing were independently localized through a numerosity judgment task. Results show a direct relation between NL estimation performance and the amount of activation in key areas for arithmetical processing. Better NL estimators showed a larger problem-size effect than poorer NL estimators in numerical magnitude (i.e., intraparietal sulcus) and visuospatial areas (i.e., posterior superior parietal lobules), marked by less activation for small problems. In addition, the direction of the activation with problem size within the IPS was associated with differences in accuracy for small subtraction problems. This study is the first to show that performance on the NL task, i.e., estimating the spatial position of a number on an interval, correlates with brain activity observed during single-digit subtraction problems in regions thought to be involved in numerical magnitude and spatial processes. PMID:25497398
Methodological Issues and Practical Problems in Conducting Research on Abused Children.
ERIC Educational Resources Information Center
Kinard, E. Milling
In order to inform policy and programs, research on child abuse must be not only methodologically rigorous, but also practically feasible. However, practical problems make child abuse research difficult to conduct. Definitions of abuse must be explicit and different types of abuse must be assessed separately. Study samples should be as…
ERIC Educational Resources Information Center
Soh, Kaycheng
2013-01-01
Recent research into university ranking methodologies has uncovered several methodological problems among the systems currently in vogue. One of these is the discrepancy between nominal and attained weights, which arises from summing unstandardized indicators to form the total scores used in ranking. It is demonstrated that weight discrepancy…
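A toy computation makes the discrepancy concrete. In the sketch below (all numbers invented), two indicators carry equal nominal weights, yet the attained influence of each, proxied here by its correlation with the total score, is driven by its spread.

```python
import numpy as np

# Two indicators nominally weighted 50/50 are summed without
# standardization; the indicator with the larger spread dominates the
# total score and hence the ranking.

rng = np.random.default_rng(5)
teaching = rng.normal(50.0, 2.0, 100)    # nominal weight 0.5, small spread
research = rng.normal(50.0, 20.0, 100)   # nominal weight 0.5, large spread
total = 0.5 * teaching + 0.5 * research

# attained influence, proxied by each indicator's correlation with total
print(f"teaching: r = {np.corrcoef(teaching, total)[0, 1]:.2f}")
print(f"research: r = {np.corrcoef(research, total)[0, 1]:.2f}")
```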
A Methodological Critique of "Interventions for Boys with Conduct Problems"
ERIC Educational Resources Information Center
Kent, Ronald; And Others
1976-01-01
Kent criticizes Patterson's study on treating the behavior problems of boys on several methodological grounds, concluding that more rigorous research is required in this field. Patterson answers Kent's criticisms, arguing that they are not well founded, and offers further evidence to support the efficacy of his treatment procedures…
NASA Astrophysics Data System (ADS)
D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice
2018-05-01
In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient than traditional schemes known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics, which must be numerically estimated. Here we propose a strategy for a cheap and accurate estimation of such parameters, which consists essentially in minimizing the leading term of the local truncation error, whose expression is provided in a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme that combines an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.
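The estimation strategy can be illustrated schematically. The sketch below assumes a leading truncation-error term of the form h²(u'''' + ω²u'')/12, typical of exponentially fitted central differences, and recovers the fitting parameter by minimizing it over the grid; it is an illustration of the idea, not the authors' scheme.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# For an exponentially fitted central difference tuned to
# exp(+/- i*omega*x), the leading local-truncation-error term is
# proportional to (h**2 / 12) * (u'''' + omega**2 * u''), so a cheap
# estimate of omega minimizes that term over the grid, with the
# derivatives approximated from the available discrete solution.

n = 512
h = 2.0 * np.pi / n
x = np.arange(n) * h
u = np.sin(3.0 * x)   # stand-in periodic solution, true frequency 3

d2 = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / h**2      # u''
d4 = (np.roll(d2, -1) - 2.0 * d2 + np.roll(d2, 1)) / h**2   # u''''

def lte_norm(omega):
    # discrete 2-norm of the leading truncation-error term
    return np.linalg.norm(d4 + omega**2 * d2)

res = minimize_scalar(lte_norm, bounds=(0.1, 10.0), method="bounded")
print(f"estimated frequency parameter: {res.x:.4f} (true value 3)")
```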
Research Methodology in Second Language Studies: Trends, Concerns, and New Directions
ERIC Educational Resources Information Center
King, Kendall A.; Mackey, Alison
2016-01-01
The field of second language studies is using increasingly sophisticated methodological approaches to address a growing number of urgent, real-world problems. These methodological developments bring both new challenges and opportunities. This article briefly reviews recent ontological and methodological debates in the field, then builds on these…
QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION.
Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy
We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method, named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization), for analyzing high-dimensional data. Unlike in the linear setting, where Rayleigh quotient optimization coincides with classification, these two problems are very different in nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interest. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of the predictors, which are infeasible to compute in high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tailed distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only fourth moments are assumed. Methodologically, QUADRO is based on elliptical models, which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of the Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.
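The core objective can be illustrated in isolation. The sketch below (synthetic data) maximizes a Rayleigh quotient by solving a generalized eigenproblem; QUADRO itself replaces this with a sparse convex formulation over quadratic features and robust moment estimates.

```python
import numpy as np
from scipy.linalg import eigh

# The Rayleigh quotient R(w) = (w' A w) / (w' B w), with A a
# between-class scatter and B a pooled covariance, is maximized by the
# leading generalized eigenvector.

rng = np.random.default_rng(0)
p = 10
X0 = rng.normal(0.0, 1.0, (500, p))   # class 0
X1 = rng.normal(0.5, 1.0, (400, p))   # class 1, shifted mean

d = X0.mean(axis=0) - X1.mean(axis=0)
A = np.outer(d, d)                        # between-class scatter
B = 0.5 * (np.cov(X0.T) + np.cov(X1.T))   # pooled within-class covariance

vals, vecs = eigh(A, B)                   # generalized eigenproblem
w = vecs[:, -1]                           # eigenvector of largest eigenvalue
print(f"maximized Rayleigh quotient: {(w @ A @ w) / (w @ B @ w):.3f}")
```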
NASA Astrophysics Data System (ADS)
Pontes, P. C.; Naveira-Cotta, C. P.
2016-09-01
The theoretical analysis for the design of microreactors in biodiesel production is a complicated task due to the complex liquid-liquid flow and mass transfer processes, and the transesterification reaction that takes place within these microsystems. Thus, computational simulation is an important tool that aids in understanding the physical-chemical phenomena and, consequently, in determining the suitable conditions that maximize the conversion of triglycerides during biodiesel synthesis. A diffusive-convective-reactive coupled nonlinear mathematical model, which governs the mass transfer process during the transesterification reaction in parallel-plate microreactors under isothermal conditions, is described here. A hybrid numerical-analytical solution via the Generalized Integral Transform Technique (GITT) for this partial differential system is developed, and the convergence rates of the eigenfunction expansions are extensively analyzed and illustrated. The heuristic method of Particle Swarm Optimization (PSO) is applied in the inverse analysis of the proposed direct problem to estimate the reaction kinetics constants, a critical step in the design of such microsystems. The results show good agreement with the limited experimental data in the literature and indicate that the GITT methodology combined with the PSO approach provides a reliable computational algorithm for direct-inverse analysis in such reactive mass transfer problems.
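The role PSO plays in the inverse analysis can be sketched with a toy model. The example below estimates a single first-order rate constant by minimizing a squared misfit; the swarm parameters and the kinetic model are assumptions, not the paper's transesterification system.

```python
import numpy as np

# Minimal particle swarm: estimate a rate constant k from noisy
# concentration data c(t) = exp(-k t) by minimizing the squared misfit,
# the same role PSO plays in the paper's inverse analysis.

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 20)
k_true = 0.8
data = np.exp(-k_true * t) + 0.01 * rng.normal(size=t.size)

def misfit(k):
    return np.sum((np.exp(-k * t) - data) ** 2)

# standard PSO update: v <- w v + c1 r1 (pbest - x) + c2 r2 (gbest - x)
n_particles, iters = 20, 60
x = rng.uniform(0.0, 3.0, n_particles)   # particle positions (k values)
v = np.zeros(n_particles)
pbest = x.copy()
pbest_f = np.array([misfit(xi) for xi in x])
gbest = pbest[pbest_f.argmin()]

for _ in range(iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    f = np.array([misfit(xi) for xi in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()]

print(f"estimated k: {gbest:.3f} (true {k_true})")
```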
Reflection Patterns Generated by Condensed-Phase Oblique Detonation Interaction with a Rigid Wall
NASA Astrophysics Data System (ADS)
Short, Mark; Chiquete, Carlos; Bdzil, John; Meyer, Chad
2017-11-01
We examine numerically the wave reflection patterns generated by a detonation in a condensed phase explosive inclined obliquely but traveling parallel to a rigid wall as a function of incident angle. The problem is motivated by the characterization of detonation-material confiner interactions. We compare the reflection patterns for two detonation models, one where the reaction zone is spatially distributed, and the other where the reaction is instantaneous (a Chapman-Jouguet detonation). For the Chapman-Jouguet model, we compare the results of the computations with an asymptotic study recently conducted by Bdzil and Short for small detonation incident angles. We show that the ability of a spatially distributed reaction energy release to turn flow streamlines has a significant impact on the nature of the observed reflection patterns. The computational approach uses a shock-fit methodology.
Stratum variance estimation for sample allocation in crop surveys. [Great Plains Corridor
NASA Technical Reports Server (NTRS)
Perry, C. R., Jr.; Chhikara, R. S. (Principal Investigator)
1980-01-01
The problem of determining the stratum variances needed to achieve an optimum sample allocation for crop surveys by remote sensing is investigated, using an approach based on the concept of stratum variance as a function of sampling unit size. A methodology using existing and easily available historical crop statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to estimate stratum variances for wheat in the U.S. Great Plains and is evaluated on the basis of the numerical results obtained. The proposed technique is shown to be viable and to perform satisfactorily, when a conservative value for the field size and crop statistics at the small political subdivision level are used, as judged by comparing the estimated stratum variances with those obtained using LANDSAT data.
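The allocation step these variances feed is the classical Neyman optimum. A minimal sketch with invented stratum sizes and standard deviations:

```python
import numpy as np

# Neyman optimum allocation assigns
#     n_h = n * N_h * S_h / sum_h(N_h * S_h),
# so strata that are larger or more variable receive more samples.

N = np.array([1200, 800, 1500, 500])   # sampling units per stratum
S = np.array([4.0, 9.5, 6.2, 12.1])    # estimated stratum std. deviations
n_total = 120                          # total sample size available

w = N * S
n_h = n_total * w / w.sum()            # optimum allocation per stratum
print(np.round(n_h).astype(int))
```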
Uses of ecologic analysis in epidemiologic research.
Morgenstern, H
1982-01-01
Despite the widespread use of ecologic analysis in epidemiologic research and health planning, little attention has been given by health scientists and practitioners to the methodological aspects of this approach. This paper reviews the major types of ecologic study designs, the analytic methods appropriate for each, the limitations of ecologic data for making causal inferences and what can be done to minimize these problems, and the relative advantages of ecologic analysis. Numerous examples are provided to illustrate the important principles and methods. A careful distinction is made between ecologic studies that generate or test etiologic hypotheses and those that evaluate the impact of intervention programs or policies (given adequate knowledge of disease etiology). Failure to recognize this difference in the conduct of ecologic studies can lead to results that are not very informative or that are misinterpreted by others. PMID:7137430
A programing system for research and applications in structural optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Rogers, J. L., Jr.
1981-01-01
The paper describes a computer programming system designed to be used for methodology research as well as for applications in structural optimization. The flexibility necessary for such diverse uses is achieved by combining, in a modular manner, a state-of-the-art optimization program, a production-level structural analysis program, and user-supplied, problem-dependent interface programs. Standard utility capabilities existing in modern computer operating systems are used to integrate these programs. This approach results in flexibility in the organization of the optimization procedure and versatility in the formulation of constraints and design variables. Features shown in numerical examples include: (1) variability of structural layout and overall shape geometry, (2) static strength and stiffness constraints, (3) local buckling failure, and (4) vibration constraints. The paper concludes with a review of further development trends for this programing system.
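The modular organization can be sketched as an optimizer driving a black-box analysis through an interface routine. The example below uses stand-in functions and a hypothetical two-bar sizing problem, not the production programs the paper integrates.

```python
import numpy as np
from scipy.optimize import minimize

# The optimizer treats the structural analysis as a black box reached
# through an interface routine; here, a toy two-bar sizing problem
# with a stress constraint.

LOAD, ALLOW = 100.0, 250.0    # applied load and allowable stress

def analyze(areas):
    # stand-in for the production structural analysis program:
    # returns member stresses for the given cross-sectional areas
    return LOAD / areas

def weight(areas):
    # objective supplied to the optimization program
    return float(np.sum(areas))

cons = {"type": "ineq", "fun": lambda a: ALLOW - analyze(a)}  # stress limit
res = minimize(weight, x0=np.array([1.0, 1.0]), constraints=cons,
               bounds=[(0.1, 10.0)] * 2)
print(f"optimal areas: {res.x}, weight: {res.fun:.3f}")
```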
The Application of High Energy Resolution Green's Functions to Threat Scenario Simulation
NASA Astrophysics Data System (ADS)
Thoreson, Gregory G.; Schneider, Erich A.
2012-04-01
Radiation detectors installed at key interdiction points provide defense against nuclear smuggling attempts by scanning vehicles and traffic for illicit nuclear material. These hypothetical threat scenarios may be modeled using radiation transport simulations. However, high-fidelity models are computationally intensive. Furthermore, the range of smuggler attributes and detector technologies creates a large problem space not easily overcome by brute-force methods. Previous research has demonstrated that decomposing the scenario into independently simulated components using Green's functions can simulate photon detector signals with coarse energy resolution. This paper extends that methodology by presenting physics enhancements and numerical treatments which allow an arbitrary level of energy resolution for photon transport. As a result, spectroscopic detector signals produced by full forward transport simulations can be replicated while requiring multiple orders of magnitude less computation time.
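The decomposition can be sketched as a chain of energy-resolved response matrices. The example below uses invented matrices and bin counts; the paper's actual response functions are generated by transport simulation.

```python
import numpy as np

# Each scenario component is reduced to a response matrix G, where
# G[j, i] is the probability that a photon emitted in energy bin i
# leaves the component in bin j. Bins are ordered from highest to
# lowest energy, so the lower-triangular structure encodes that
# scattered photons can only lose energy. Chaining the matrices and
# folding in the detector response yields the spectroscopic signal
# without a full forward transport run.

rng = np.random.default_rng(2)
n_bins = 4
source = np.array([0.6, 0.3, 0.1, 0.0])   # source emission spectrum

def toy_response(transmission):
    G = np.tril(rng.random((n_bins, n_bins)))
    return transmission * G / G.sum(axis=0, keepdims=True)

G_cargo = toy_response(0.4)   # transport through cargo/shielding
G_air = toy_response(0.9)     # transport from vehicle to detector
R_det = toy_response(0.8)     # detector response function

signal = R_det @ G_air @ G_cargo @ source
print(signal)
```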
Study of compressible turbulent flows in supersonic environment by large-eddy simulation
NASA Astrophysics Data System (ADS)
Genin, Franklin
The numerical resolution of turbulent flows in a high-speed environment is of fundamental importance but remains a very challenging problem. First, the capture of strong discontinuities, typical of high-speed flows, requires the use of shock-capturing schemes, which are not suited to resolving turbulent structures because of their intrinsic dissipation. On the other hand, low-dissipation schemes are unable to resolve shock fronts and other sharp gradients without creating high-amplitude numerical oscillations. Second, the nature of turbulence in high-speed flows differs from its incompressible behavior, and, in the context of Large-Eddy Simulation, the subgrid closure must be adapted to model the effects of compressibility and shock waves on turbulent flows. The developments described in this thesis are two-fold. First, a state-of-the-art closure approach for LES is extended to model subgrid turbulence in compressible flows. The energy transfers due to compressible turbulence and the diffusion of turbulent kinetic energy by pressure fluctuations are assessed and integrated into the Localized Dynamic ksgs model. Second, a hybrid numerical scheme is developed for the resolution of the LES equations and of the model transport equation, combining a central scheme for turbulence resolution with a shock-capturing method. A smoothness parameter is defined and used to switch from the base smooth solver to the upwind scheme in regions of discontinuities. It is shown that the developed hybrid methodology captures shock/turbulence interactions in direct simulations in good agreement with reference simulations, and that the LES methodology effectively reproduces the turbulence evolution and physical phenomena involved in the interaction. This numerical approach is then employed to study a problem of practical importance in high-speed mixing: the interaction of two shock waves with a high-speed turbulent shear layer as a mixing augmentation technique. It is shown that the levels of turbulence are increased through the interaction and that the mixing is significantly improved in this flow configuration. However, the increased mixing is localized to a region close to the impact of the shocks, and the statistical levels of turbulence relax to their undisturbed values a short distance downstream of the interaction. The present developments are finally applied to a practical configuration relevant to scramjet injection. The normal injection of a sonic jet into a supersonic crossflow is considered numerically and compared with the results of an experimental study. Fair agreement in the statistics of the mean and fluctuating velocity fields is obtained, and some of the instantaneous flow structures observed in experimental visualizations are identified in the present simulation. The dynamics of the interaction are examined for the reference case, based on the experimental study, as well as for a case of higher freestream Mach number and a case of higher momentum ratio. The classical instantaneous vortical structures are identified, and their generation mechanisms, specific to supersonic flow, are highlighted. Furthermore, two vortical structures, recently revealed in low-speed jets in crossflow but never documented for high-speed flows, are identified during the flow evolution.
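The switching mechanism at the heart of such hybrid schemes can be sketched in one dimension. The example below applies a normalized second-difference sensor to the inviscid Burgers equation with an assumed threshold and flux pair; the thesis couples a high-order central LES solver with a shock-capturing scheme in the same spirit.

```python
import numpy as np

# A Jameson-type smoothness sensor selects, face by face, a dissipative
# Rusanov flux near discontinuities and a low-dissipation
# Lax-Wendroff-type flux in smooth regions (periodic 1-D Burgers).

n = 400
dx, dt = 1.0 / n, 0.0005
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.where(x < 0.5, 1.0, 0.0) + 0.1 * np.sin(8 * np.pi * x)

def flux(v):
    return 0.5 * v * v

for _ in range(400):
    up, um = np.roll(u, -1), np.roll(u, 1)          # u_{i+1}, u_{i-1}
    # smoothness sensor: normalized second difference
    s = np.abs(up - 2 * u + um) / (np.abs(up) + 2 * np.abs(u)
                                   + np.abs(um) + 1e-12)
    shocked = np.maximum(s, np.roll(s, -1)) > 0.02  # flag face i+1/2
    a = np.maximum(np.abs(u), np.abs(up))           # local wave speed
    f_avg = 0.5 * (flux(u) + flux(up))
    f_smooth = f_avg - 0.5 * (dt / dx) * a**2 * (up - u)  # low dissipation
    f_shock = f_avg - 0.5 * a * (up - u)                  # shock-capturing
    f_face = np.where(shocked, f_shock, f_smooth)         # hybrid switch
    u = u - dt / dx * (f_face - np.roll(f_face, 1))       # conservative step

print(f"solution bounds: [{u.min():.3f}, {u.max():.3f}]")
```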
Reusable design: A proposed approach to Public Health Informatics system design
2011-01-01
Background Since it was first defined in 1995, Public Health Informatics (PHI) has become a recognized discipline, with a research agenda, defined domain-specific competencies and a specialized corpus of technical knowledge. Information systems form a cornerstone of PHI research and implementation, representing significant progress for the nascent field. However, PHI does not advocate or incorporate standard, domain-appropriate design methods for implementing public health information systems. Reusable design is generalized design advice that can be reused in a range of similar contexts. We propose that PHI create and reuse information design knowledge by taking a systems approach that incorporates design methods from the disciplines of Human-Computer Interaction, Interaction Design and other related disciplines. Discussion Although PHI operates in a domain with unique characteristics, many design problems in public health correspond to classic design problems, suggesting that existing design methods and solution approaches are applicable to the design of public health information systems. Among the numerous methodological frameworks used in other disciplines, we identify scenario-based design and participatory design as two widely-employed methodologies that are appropriate for adoption as PHI standards. We make the case that these methods show promise to create reusable design knowledge in PHI. Summary We propose the formalization of a set of standard design methods within PHI that can be used to pursue a strategy of design knowledge creation and reuse for cost-effective, interoperable public health information systems. We suggest that all public health informaticians should be able to use these design methods and the methods should be incorporated into PHI training. PMID:21333000
Investigation of Error Patterns in Geographical Databases
NASA Technical Reports Server (NTRS)
Dryer, David; Jacobs, Derya A.; Karayaz, Gamze; Gronbech, Chris; Jones, Denise R. (Technical Monitor)
2002-01-01
The objective of the research conducted in this project is to develop a methodology for investigating the accuracy of Airport Safety Modeling Data (ASMD) using statistical, visualization, and Artificial Neural Network (ANN) techniques. Such a methodology can contribute to answering the following research questions: Over a representative sampling of ASMD databases, can statistical error analysis techniques be accurately learned and replicated by ANN modeling techniques? This representative ASMD sample should include numerous airports and a variety of terrain characterizations. Is it possible to identify and automate the recognition of patterns of error related to geographical features? Do such patterns of error relate to specific geographical features, such as elevation or terrain slope? Is it possible to combine the errors in small regions into an error prediction for a larger region? What are the data density reduction implications of this work? ASMD may be used as the source of terrain data for a synthetic visual system to be used in the cockpit of aircraft when visual reference to ground features is not possible during conditions of marginal weather or reduced visibility. In this research, United States Geological Survey (USGS) digital elevation model (DEM) data has been selected as the benchmark. Artificial Neural Networks (ANNs) have been used and tested as alternatives to statistical methods in similar problems. They often perform better in pattern recognition, prediction, classification, and categorization problems. Many studies show that when the data are complex and noisy, the accuracy of ANN models is generally higher than that of comparable traditional methods.
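The ANN modeling step can be sketched with synthetic data. The example below assumes a hypothetical error pattern over two terrain features; it illustrates the approach only and uses neither ASMD nor USGS DEM values.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Train an ANN to learn the mapping from terrain features to elevation
# error, the relationship a statistical error analysis would otherwise
# have to fit.

rng = np.random.default_rng(3)
n = 2000
elevation = rng.uniform(0.0, 3000.0, n)   # metres
slope = rng.uniform(0.0, 30.0, n)         # degrees
# assumed pattern: error grows with elevation and terrain slope
error = 0.5 + 0.002 * elevation + 0.3 * slope + rng.normal(0.0, 1.0, n)

X = np.column_stack([elevation / 3000.0, slope / 30.0])  # scaled inputs
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                     random_state=0).fit(X, error)
print(f"R^2 on training data: {model.score(X, error):.3f}")
```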
Klaczynski, Paul A.
2014-01-01
In Stanovich's (2009a, 2011) dual-process theory, analytic processing occurs in the algorithmic and reflective minds. Thinking dispositions, indexes of reflective mind functioning, are believed to regulate operations at the algorithmic level, indexed by general cognitive ability. General limitations at the algorithmic level impose constraints on, and affect the adequacy of, specific strategies and abilities (e.g., numeracy). In a study of 216 undergraduates, the hypothesis that thinking dispositions and general ability moderate the relationship between numeracy (understanding of mathematical concepts and attention to numerical information) and normative responses on probabilistic heuristics and biases (HB) problems was tested. Although all three individual difference measures predicted normative responses, the numeracy-normative response association depended on thinking dispositions and general ability. Specifically, numeracy directly affected normative responding only at relatively high levels of thinking dispositions and general ability. At low levels of thinking dispositions, neither general ability nor numeric skills related to normative responses. Discussion focuses on the consistency of these findings with the hypothesis that the implementation of specific skills is constrained by limitations at both the reflective level and the algorithmic level, methodological limitations that prohibit definitive conclusions, and alternative explanations. PMID:25071639
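The moderation hypothesis corresponds to a product-term regression. The sketch below uses synthetic data with invented effect sizes to show the form of the test, not the study's dataset.

```python
import numpy as np

# Regress normative responding on numeracy, thinking dispositions,
# general ability, and their products; a reliable three-way product
# term is the statistical signature of the reported moderation.

rng = np.random.default_rng(4)
n = 216
num = rng.normal(size=n)    # numeracy (standardized)
disp = rng.normal(size=n)   # thinking dispositions
abil = rng.normal(size=n)   # general cognitive ability
y = (0.2 * num + 0.3 * disp + 0.25 * abil
     + 0.3 * num * disp * abil + rng.normal(size=n))

X = np.column_stack([np.ones(n), num, disp, abil,
                     num * disp, num * abil, disp * abil,
                     num * disp * abil])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"three-way interaction coefficient: {beta[-1]:.3f}")
```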